Numerical Solution of Generalized Lyapunov Equations

Thilo Penzl *

March 13, 1996

* Technische Universität Chemnitz–Zwickau, Fakultät für Mathematik, D-09107 Chemnitz, FRG, E-mail: tpenzl@mathematik.tu-chemnitz.de. Supported by Deutsche Forschungsgemeinschaft, research grant Me 790/7-1 "Singuläre Steuerungsprobleme".


Abstract

Two efficient methods for solving generalized Lyapunov equations and their implementations in FORTRAN 77 are presented. The first one is a generalization of the Bartels–Stewart method and the second is an extension of Hammarling's method to generalized Lyapunov equations. Our LAPACK based subroutines are implemented in a quite flexible way. They can handle the transposed equations and provide scaling to avoid overflow in the solution. Moreover, the Bartels–Stewart subroutine offers the optional estimation of the separation and the reciprocal condition number. A brief description of both algorithms is given. The performance of the software is demonstrated by numerical experiments.

Key Words: mathematical software, generalized Lyapunov equation, generalized Stein equation, condition estimation.

AMS(MOS) subject classifications: 65F05, 65F15, 93B40, 93B51

1 Basic Properties of Generalized Lyapunov Equations

Lyapunov equations play an essential role in control theory. In the past few years their generalizations, the generalized continuous-time Lyapunov equation (GCLE)

    A^T X E + E^T X A = -Y                                             (1)

and the generalized discrete-time Lyapunov equation (GDLE) or generalized Stein equation

    A^T X A - E^T X E = -Y,                                            (2)

have received a lot of interest. Here A, E, and Y are given real n × n matrices. The right hand side Y is symmetric, and so is the solution matrix X if the equation has a unique solution.

We can consider the GCLE and the GDLE as special cases of the generalized Sylvester equation

    R^T X S + U^T X V = Y,                                             (3)

where, in general, X and Y are n × m matrices. Since equation (3) is linear in the entries of X, it can be written as a system of linear equations. To this end, the entries of X are usually arranged in a vector by stacking the columns of X = (x_ij). This is done by the mapping vec, which is defined as

    vec(X) = (x_11, ..., x_n1, x_12, ..., x_n2, ..., x_1m, ..., x_nm)^T.

It can easily be shown that (3) is equivalent to

    ( S^T ⊗ R^T + V^T ⊗ U^T ) vec(X) = vec(Y),                         (4)

where ⊗ denotes the Kronecker product of two matrices, see, e.g., [13]. The order of this system is nm. Therefore, it is not practicable to find X by solving the corresponding system of linear equations unless n and m are very small.
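For illustration, the Kronecker formulation (4) can be realized directly in a few lines. The following Python/NumPy sketch (all names are illustrative; it is not part of the FORTRAN 77 package described here) solves the GCLE (1) in this way; with R = A, S = E, U = E, V = A in (3), the system matrix becomes E^T ⊗ A^T + A^T ⊗ E^T:

    import numpy as np

    def gclyap_kron(A, E, Y):
        """Solve A^T X E + E^T X A = -Y via the n^2 x n^2 linear system (4)."""
        n = A.shape[0]
        K = np.kron(E.T, A.T) + np.kron(A.T, E.T)
        # vec() stacks columns, i.e. Fortran ("F") ordering in NumPy
        x = np.linalg.solve(K, -Y.flatten(order="F"))
        return x.reshape((n, n), order="F")

    rng = np.random.default_rng(1)
    n = 4
    E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    A = E @ np.diag([-1.0, -2.0, -3.0, -4.0])  # pencil eigenvalues: -1, ..., -4
    Y = np.eye(n)
    X = gclyap_kron(A, E, Y)
    print(np.linalg.norm(A.T @ X @ E + E.T @ X @ A + Y))  # residual ~ 1e-15

As the text notes, the O(n^6) cost of the dense solve restricts this approach to very small n; it serves here only as a reference implementation.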

The solvability of (3) depends on the generalized eigenstructure of the matrix pairs (R, U) and (V, S). A matrix pencil αR − βU is called regular iff there exists a pair of complex numbers (α, β) such that αR − βU is nonsingular. If βRx = αUx holds for a vector x ≠ 0, the pair (α, β) ≠ (0, 0) is a generalized eigenvalue. Two generalized eigenvalues (α, β) and (γ, δ) are considered to be equal iff αδ = βγ. The set of all generalized eigenvalues of the pencil αR − βU is designated by λ(R, U). The following theorem, see, e.g., [3], gives necessary and sufficient conditions for the existence and uniqueness of the solution of (3).

Theorem 1. The matrix equation (3) has a unique solution if and only if

1. αR − βU and αV − βS are regular pencils and
2. λ(R, U) ∩ λ(−V, S) = ∅.

Let (α_i, β_i) denote the generalized eigenvalues of the matrix pencil A − λE. For simplicity, we switch to the more conventional form of a matrix pencil A − λE, whose eigenvalues are given by λ_i = α_i / β_i with λ_i = ∞ when β_i = 0. Applying the above theorem to (1) and (2) gives the following corollary.

Corollary 1. Let A − λE be a regular pencil. Then

1. the GCLE (1) has a unique solution if and only if all eigenvalues of A − λE are finite and λ_i + λ_j ≠ 0 for any two eigenvalues λ_i and λ_j of A − λE;
2. the GDLE (2) has a unique solution if and only if λ_i λ_j ≠ 1 for any two eigenvalues λ_i and λ_j of A − λE (under the convention 0 · ∞ = 1).

As a consequence, singularity of one of the matrices A and E implies singularity of the GCLE (1). If both A and E are singular, the GDLE (2) is singular too. Thus, for an investigation of the unique solution of a generalized Lyapunov equation we may assume one of the matrices A and E to be nonsingular. Since A and E play a symmetric role (up to a sign in the discrete-time case), we expect E to be invertible. Under this assumption equations (1) and (2) are equivalent to the Lyapunov equations

    (A E^{-1})^T X + X (A E^{-1}) = -E^{-T} Y E^{-1}                   (5)

and

    (A E^{-1})^T X (A E^{-1}) - X = -E^{-T} Y E^{-1},                  (6)

respectively, and the classical result about the positive definite solution of the stable Lyapunov equation [14] remains valid for the generalized equations.

Theorem 2. Let E be nonsingular and Y be positive definite (semidefinite).

1. If Re(λ_i) < 0 for all eigenvalues λ_i of A − λE, then the solution matrix X of the GCLE (1) is positive definite (semidefinite).
2. If |λ_i| < 1 for all eigenvalues λ_i of A − λE, then the solution matrix X of the GDLE (2) is positive definite (semidefinite).

There may be a definite solution in the discrete-time case even if E is singular. If |μ_i| < 1 for all eigenvalues μ_i of the pencil E − μA, then the solution matrix X is negative definite (semidefinite). But this, of course, means nothing but reversing the roles of A and E.

Finally, it should be stressed that the reduction of a generalized equation (1) or (2) to a standard Lyapunov equation (5) or (6), respectively, is mainly of theoretical interest. If E is ill-conditioned, this is not a numerically feasible way to solve the generalized equation. In the sequel we present two methods which solve generalized Lyapunov equations without inverting E.
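The conditions of Corollary 1 can be tested numerically from the generalized eigenvalues of A − λE. A small SciPy sketch (hypothetical helper names, assuming a regular pencil; not part of the package):

    import numpy as np
    from scipy.linalg import eig

    def gcle_has_unique_solution(A, E, tol=1e-12):
        lam = eig(A, E, right=False)       # generalized eigenvalues; may contain inf
        if np.any(np.isinf(lam)):          # the GCLE requires all eigenvalues finite
            return False
        return bool(np.all(np.abs(lam[:, None] + lam[None, :]) > tol))

    def gdle_has_unique_solution(A, E, tol=1e-12):
        lam = eig(A, E, right=False)
        prod = lam[:, None] * lam[None, :]
        # a pair (0, inf) yields nan in floating point; under the convention
        # 0 * inf = 1 such a pair violates lambda_i * lambda_j != 1, so nan
        # counts as singular here
        return bool(np.all(np.abs(prod - 1.0) > tol)) and not np.any(np.isnan(prod))

In floating point these tests can, of course, only detect eigenvalue pairs that are numerically close to violating the conditions; the tolerance tol is an illustrative choice.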

2 Algorithms for Generalized Lyapunov Equations

2.1 A Generalization of the Bartels–Stewart Method

The algorithm described in this section is an extension of the Bartels–Stewart method [2] to generalized Lyapunov equations (see also [3], [6], [12]). We restrict ourselves to the GCLE; the derivation of the algorithm for the GDLE is straightforward. First the pencil A − λE is reduced to real generalized Schur form A_s − λE_s by means of orthogonal matrices Q and Z (QZ algorithm [8]):

    A_s = Q^T A Z,   E_s = Q^T E Z,                                    (7)

such that A_s is upper quasitriangular and E_s is upper triangular. Defining

    Y_s = Z^T Y Z,                                                     (8)
    X_s = Q^T X Q                                                      (9)

leads to the reduced Lyapunov equation

    A_s^T X_s E_s + E_s^T X_s A_s = -Y_s,                              (10)

which is equivalent to (1). Let A_s, E_s, Y_s, and X_s be partitioned into blocks, with p block rows and columns, with respect to the subdiagonal structure of the upper quasitriangular matrix A_s; that is, A_s = (A_kl) and E_s = (E_kl) are block upper triangular (A_kl = E_kl = O for k > l), while Y_s = (Y_kl) and X_s = (X_kl) are full, symmetric block matrices, k, l = 1, ..., p. In particular, the main diagonal blocks are 1 × 1 or 2 × 2 matrices according to real eigenvalues or conjugate complex eigenvalue pairs of the pencil A_s − λE_s, respectively.

Now (10) can be solved by a block forward substitution, which is more complicated to derive than the one utilized in the Bartels–Stewart method for the standard Lyapunov equation. Since X_s is symmetric, only p(p + 1)/2 Sylvester equations

    A_kk^T X_kl E_ll + E_kk^T X_kl A_ll = Ŷ_kl                         (11)

of order at most 2 × 2 need to be solved, with right hand side matrices

    Ŷ_kl = -( Y_kl + Σ_{i ≤ k, j ≤ l, (i,j) ≠ (k,l)} ( A_ik^T X_ij E_jl + E_ik^T X_ij A_jl ) ).   (12)

According to (4), the solution X_kl is found by solving the corresponding system of linear equations

    ( E_ll^T ⊗ A_kk^T + A_ll^T ⊗ E_kk^T ) vec(X_kl) = vec(Ŷ_kl).

We determine the blocks in the upper triangle of X_s in a row-wise order, i.e., we compute successively X_11, ..., X_1p, X_22, ..., X_2p, ..., X_pp. Computing the matrices Ŷ_kl directly by (12) would result in an overall complexity of O(n^4). This can be avoided by a somewhat complicated updating technique based upon the following expansion of equation (12):

    Ŷ_kl = -Y_kl - Σ_{i=1}^{k} ( A_ik^T ( Σ_{j=1}^{l-1} X_ij E_jl ) + E_ik^T ( Σ_{j=1}^{l-1} X_ij A_jl ) )
                 - Σ_{i=1}^{k-1} ( A_ik^T X_il E_ll + E_ik^T X_il A_ll ).

We determine Ŷ_kl in 2k - 1 sweeps:

    Y_kl^(0)    = Y_kl,
    Y_kl^(2i-1) = Y_kl^(2i-2) + A_ik^T ( Σ_{j=1}^{l-1} X_ij E_jl ) + E_ik^T ( Σ_{j=1}^{l-1} X_ij A_jl ),   i = 1, ..., k,
    Y_kl^(2i)   = Y_kl^(2i-1) + A_ik^T X_il E_ll + E_ik^T X_il A_ll,   i = 1, ..., k - 1,
    Ŷ_kl        = -Y_kl^(2k-1).

Our implementation of the algorithm computes Y_kl^(2i-1) (k ≥ i) right before solving (11) for X_il. After X_il is known, Y_kl^(2i) (k > i) can be computed. What decreases the complexity to O(n^3) is that Y_il^(2i-1), ..., Y_ll^(2i-1) (the blocks below the main diagonal are not of interest) can be updated simultaneously, where we have to compute the terms Σ_{j=1}^{l-1} X_ij E_jl and Σ_{j=1}^{l-1} X_ij A_jl only once. Note that the blocks of the strict lower triangle of X_s appearing in both of these sums are given by symmetry. Moreover, the updating technique described above can be realized by operating on the array Y without the need for further workspace. Finally, the solution matrix X is obtained from X_s via (9). Lyapunov equations whose coefficient matrices A and E are replaced by their transposes can be solved in a similar fashion by a block backward substitution.
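The structure of the forward substitution is easiest to see when all eigenvalues of the pencil are real, so that the generalized Schur form is truly triangular and every diagonal block is 1 × 1. The following NumPy/SciPy sketch (illustrative names; not the FORTRAN 77 implementation) covers this special case and, for clarity, keeps the naive O(n^4) accumulation of the right hand sides instead of the O(n^3) updating technique described above:

    import numpy as np
    from scipy.linalg import qz

    def gclyap_bs_real(A, E, Y):
        """Solve A^T X E + E^T X A = -Y, assuming the pencil A - lambda*E
        has real eigenvalues only (no 2 x 2 diagonal blocks in As)."""
        As, Es, Q, Z = qz(A, E, output="real")  # A = Q As Z^T, E = Q Es Z^T, cf. (7)
        Ys = Z.T @ Y @ Z                        # cf. (8)
        n = A.shape[0]
        Xs = np.zeros((n, n))
        for k in range(n):                      # row-wise forward substitution
            for l in range(k, n):
                s = 0.0                         # accumulate the sum in (12)
                for i in range(k + 1):
                    for j in range(l + 1):
                        if (i, j) != (k, l):
                            s += (As[i, k] * Xs[i, j] * Es[j, l]
                                  + Es[i, k] * Xs[i, j] * As[j, l])
                Xs[k, l] = (-Ys[k, l] - s) / (As[k, k] * Es[l, l]
                                              + Es[k, k] * As[l, l])
                Xs[l, k] = Xs[k, l]             # strict lower triangle by symmetry
        return Q @ Xs @ Q.T                     # back transformation, cf. (9)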

2.2 Estimation of the Separation and the Condition Number

The software for the Bartels–Stewart method provides optional estimates for both the separation and the condition number of the Lyapunov operator. Especially the latter is important for estimating the accuracy of the computed solution. In the continuous-time case the separation is defined as

    sep_c = sep_c(A, E) = min_{‖X‖_F = 1} ‖ A^T X E + E^T X A ‖_F.

Due to the orthogonal invariance of the Frobenius norm, this quantity is not altered by the transformations (7) and (8). Therefore, the estimate can be obtained from the reduced equation (10), which lowers the computational cost significantly. According to (4),

    sep_c = ‖ K_c^{-1} ‖_2^{-1}

if K_c = E_s^T ⊗ A_s^T + A_s^T ⊗ E_s^T is invertible. The quantity ‖K_c^{-1}‖ is estimated by an algorithm due to Higham [11] based on Hager's method [9]. Actually, this method yields an estimate for the 1-norm ‖K_c^{-1}‖_1. This is a quite good approximation of ‖K_c^{-1}‖_2, since it deviates from it by no more than a factor n. The algorithm, which is available as routine DLACON in LAPACK [1], requires the solution of a few (say 4 or 5) generalized Lyapunov equations A_s^T X_s E_s + E_s^T X_s A_s = Y_s or A_s X_s E_s^T + E_s X_s A_s^T = Y_s. Finally, an estimate for the condition number of the generalized Lyapunov operator, which is represented by the corresponding matrix K_c, is provided by

    cond(K_c) ≤ 2 ‖A_s‖_F ‖E_s‖_F / sep_c.

Again, note that ‖A‖_F = ‖A_s‖_F and ‖E‖_F = ‖E_s‖_F. The separation of the discrete-time Lyapunov operator is

    sep_d = sep_d(A, E) = min_{‖X‖_F = 1} ‖ A^T X A - E^T X E ‖_F = ‖ K_d^{-1} ‖_2^{-1}

with K_d = A_s^T ⊗ A_s^T − E_s^T ⊗ E_s^T. After approximating sep_d by the reciprocal of the estimate for ‖K_d^{-1}‖_1, an estimate for the condition number is gained from

    cond(K_d) ≤ ( ‖A_s‖_F^2 + ‖E_s‖_F^2 ) / sep_d.
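For small n, the estimated quantities can be computed exactly from the Kronecker matrices themselves, which is also how the "true" values in Section 4 are obtained. A minimal NumPy sketch (illustrative names; the SVD replaces the Hager/Higham 1-norm estimator used by the software):

    import numpy as np

    def sep_cond_continuous(As, Es):
        Kc = np.kron(Es.T, As.T) + np.kron(As.T, Es.T)
        sep_c = np.linalg.svd(Kc, compute_uv=False)[-1]  # = 1/||Kc^{-1}||_2
        # the upper bound from the text, used as the condition number estimate
        cond = 2.0 * np.linalg.norm(As, "fro") * np.linalg.norm(Es, "fro") / sep_c
        return sep_c, cond

    def sep_cond_discrete(As, Es):
        Kd = np.kron(As.T, As.T) - np.kron(Es.T, Es.T)
        sep_d = np.linalg.svd(Kd, compute_uv=False)[-1]
        cond = (np.linalg.norm(As, "fro")**2 + np.linalg.norm(Es, "fro")**2) / sep_d
        return sep_d, cond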

2.3 A Generalization of Hammarling's Method

The method due to Hammarling is an alternative to the Bartels–Stewart method in case the Lyapunov equation to be solved is stable and its right hand side is semidefinite. In [10] Hammarling suggested that his method can be extended to generalized Lyapunov equations. We present such a generalization in the sequel.

We assume the pencil A − λE to be stable, i.e., its eigenvalues must lie in the open left half plane in the continuous-time case or inside the unit circle in the discrete-time case. If these conditions are met, the solutions X of the GCLE

    A^T X E + E^T X A = -B^T B                                         (13)

and the GDLE

    A^T X A - E^T X E = -B^T B                                         (14)

are positive semidefinite (see Theorem 2). In general, B is a real m × n matrix, while A, E, and X are real square matrices of order n. The Cholesky factor U of the solution X = U^T U can be found without first computing X and without forming the matrix product B^T B. We focus on the continuous-time equation (13) and give a note about the differences in the discrete-time case.

Similar to the Bartels–Stewart method, the algorithm consists of three parts. First the equation is transformed to a reduced form by means of the orthogonal matrices resulting from the generalized Schur factorization. After solving the reduced equation, the solution is retrieved by back transformation. Applying the QZ algorithm to the pencil A − λE yields orthogonal matrices Q and Z such that A_s = Q^T A Z is upper quasitriangular and E_s = Q^T E Z is upper triangular. This leads to the reduced equation

    A_s^T U_s^T U_s E_s + E_s^T U_s^T U_s A_s = -B_s^T B_s,            (15)

where the n × n upper triangular matrix B_s on the right hand side is formed as follows. If m ≥ n, the matrix B_s is obtained from the rectangular QR factorization

    B Z = Q_B ( B_s )
              (  O  ),

where Q_B is an orthogonal matrix of order m. Otherwise, we partition BZ as BZ = ( B̂_1  B̂_2 ), where B̂_1 is an m × m matrix with the QR factorization B̂_1 = Q_B B̂_3. Thus, B_s is given by

    B_s = ( B̂_3   Q_B^T B̂_2 )
          (  O        O     ).

To solve the reduced equation (15), we partition the involved matrices as

    A_s = ( A_11  A_12 ),   E_s = ( E_11  E_12 ),
          (  O    A_22 )          (  O    E_22 )

    B_s = ( B_11  B_12 ),   U_s = ( U_11  U_12 ).
          (  O    B_22 )          (  O    U_22 )

The upper left blocks are p × p matrices (p = 1, 2), where p = 2 iff the leading diagonal block of the pencil A_s − λE_s corresponds to a pair of complex conjugate eigenvalues. Provided that U_11 is nonsingular, the above partitioning leads to the following formulas:

    A_11^T U_11^T U_11 E_11 + E_11^T U_11^T U_11 A_11 = -B_11^T B_11,                      (16)
    A_22^T U_12^T + E_22^T U_12^T M_1 = -B_12^T M_2 - A_12^T U_11^T - E_12^T U_11^T M_1,   (17)
    A_22^T U_22^T U_22 E_22 + E_22^T U_22^T U_22 A_22 = -B_22^T B_22 - y y^T               (18)

with the auxiliary matrices

    M_1 = U_11 A_11 E_11^{-1} U_11^{-1},                               (19)
    M_2 = B_11 E_11^{-1} U_11^{-1},                                    (20)
    y = B_12^T - ( E_12^T U_11^T + E_22^T U_12^T ) M_2^T.

As mentioned in Section 1, the matrix E is nonsingular, so that the inverse of E_11 exists. Moreover, it can be proved that U_11 is nonsingular as well if B_11 ≠ 0. The above equations enable the entries of U_s to be determined recursively. The block U_11 is gained from the stable Lyapunov equation (16); solving (16) for the case p = 2 is described in an extra paragraph below. After computing U_11, the matrices M_1 and M_2 are determined. Then the generalized Sylvester equation (17) is solved for U_12^T. If U_11 is nonsingular, the existence of a unique solution U_12^T is guaranteed by Theorem 1. It remains to find the Cholesky factor B̃_22 of the right hand side matrix of equation (18),

    B_22^T B_22 + y y^T = B̃_22^T B̃_22.

The upper triangular matrix B̃_22 is cheaply obtained from the QR factorization

    ( B_22 ) = Q_B̃ ( B̃_22 )
    (  y^T )        (  O   ),

where Q_B̃ is an orthogonal matrix and O is the zero matrix consisting of p rows. The resulting equation

    A_22^T U_22^T U_22 E_22 + E_22^T U_22^T U_22 A_22 = -B̃_22^T B̃_22

has the same structure as (15), but its size is lowered by p. Hence, all entries of U_s can be determined recursively. After the reduced equation (15) is solved, we find the upper triangular solution matrix U of equation (13) from the QR factorization

    U_s Q^T = Q_U U

with the orthogonal matrix Q_U. Of course, the latter need not be accumulated.
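The setting of this section can be verified numerically for small n. The NumPy sketch below (illustrative names) solves (13) via the Kronecker form (4) rather than by the recursion above, and confirms that the solution of a c-stable problem admits an upper triangular Cholesky factor:

    import numpy as np
    from scipy.linalg import cholesky

    rng = np.random.default_rng(2)
    n, m = 5, 3
    E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    A = E @ np.diag(-np.arange(1.0, n + 1))   # pencil eigenvalues -1, ..., -n: c-stable
    B = rng.standard_normal((m, n))
    K = np.kron(E.T, A.T) + np.kron(A.T, E.T)
    X = np.linalg.solve(K, -(B.T @ B).flatten(order="F")).reshape((n, n), order="F")
    X = 0.5 * (X + X.T)                       # restore the symmetry lost to roundoff
    U = cholesky(X + 1e-12 * np.eye(n))       # upper triangular; the tiny shift is an
                                              # illustrative guard against exact
                                              # semidefiniteness of X
    print(np.linalg.norm(U.T @ U - X))        # small: X = U^T U up to the shift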

The next part of this section addresses the problems caused by non-real eigenvalues of the pencil A − λE when p = 2. The procedure for solving the 2 × 2 Lyapunov equation (16) is similar to that described above for the equation of order n. For the explanation of the algorithm we make use of complex computation; nevertheless, in our implementation complex operations are emulated by real ones. First the pencil A_11 − λE_11 is reduced to Schur form by means of unitary matrices Q̂ and Ẑ such that

    Q̂^H A_11 Ẑ = Â = ( a_11  a_12 ),   Q̂^H E_11 Ẑ = Ê = ( e_11  e_12 ).
                      (  0    a_22 )                     (  0    e_22 )

Let B_11 Ẑ = Q̂_B B̂ be the QR factorization of B_11 Ẑ with a unitary matrix Q̂_B, for which the entries on the main diagonal of the upper triangular matrix B̂ are real and non-negative. Now the reduced form of (16) can be written as

    Â^H Û^H Û Ê + Ê^H Û^H Û Â = -B̂^H B̂.

After partitioning B̂ and Û as

    B̂ = ( b_11  b_12 ),   Û = ( u_11  u_12 ),
         (  0    b_22 )        (  0    u_22 )

we find the entries of Û from (a bar denotes the complex conjugate)

    β_1 = ( -a_11 ē_11 - e_11 ā_11 )^{1/2},   u_11 = b_11 / β_1,
    u_12 = -( β_1 b_12 + ( ā_11 e_12 + a_12 ē_11 ) u_11 ) / ( ā_11 e_22 + ē_11 a_22 ),
    β_2 = ( -a_22 ē_22 - e_22 ā_22 )^{1/2},
    y = b_12 - β_1 e_11^{-1} ( e_12 u_11 + e_22 u_12 ),
    u_22 = β_2^{-1} ( b_22^2 + |y|^2 )^{1/2}.

Finally, the upper triangular solution U_11 of (16) is obtained from the QR factorization

    Û Q̂^H = Q_U U_11,

where Q_U is a unitary matrix. It should be stressed that if p = 2 care is needed when computing the auxiliary matrices M_1 and M_2, since U_11 may be ill conditioned. This is the case when the pair (E_11^{-T} A_11^T, E_11^{-T} B_11^T) is close to an uncontrollable one. For a further discussion of this issue we refer to Section 6 in [10]; in our implementation we followed the procedure proposed there.

For the discrete-time equation (14) most of the algorithm is similar to that for the continuous-time equation. An essential difference is the solution of the reduced equation

    A_s^T U_s^T U_s A_s - E_s^T U_s^T U_s E_s = -B_s^T B_s.            (21)

Here the recursion is based on the formulas

    A_11^T U_11^T U_11 A_11 - E_11^T U_11^T U_11 E_11 = -B_11^T B_11,                        (22)
    A_22^T U_12^T M_1 - E_22^T U_12^T = -B_12^T M_2 + E_12^T U_11^T - A_12^T U_11^T M_1,
    A_22^T U_22^T U_22 A_22 - E_22^T U_22^T U_22 E_22 = -B_22^T B_22 - y y^T,

where M_1 and M_2 are defined as in (19) and (20), respectively. The matrix y is given by

    y = ( B_12^T   A_12^T U_11^T + A_22^T U_12^T ) C,

where C is a matrix which fulfills

    M_3 = I_{2p} - ( M_2 ) ( M_2^T  M_1^T ) = C C^T.
                   ( M_1 )

From (19), (20), and (22) we obtain M_2^T M_2 + M_1^T M_1 = I_p, which leads to M_3^2 = M_3. Hence, the symmetric matrix M_3 is the orthogonal projector onto span( (M_2^T, M_1^T)^T )^⊥. Thus, the 2p × 2p matrix M_3 is indeed positive semidefinite and has rank p. Consequently, C can be chosen as a 2p × p matrix, and the matrix y consists of only p columns.
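The discrete-time case can be checked in the same way; for a d-stable pencil the Kronecker-form solution of (14) is positive semidefinite (again a NumPy sketch with illustrative names):

    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 5, 3
    E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    A = E @ np.diag([0.1, -0.3, 0.5, -0.7, 0.9])  # |lambda_i| < 1: d-stable
    B = rng.standard_normal((m, n))
    Kd = np.kron(A.T, A.T) - np.kron(E.T, E.T)    # matrix of the GDLE, cf. (4)
    X = np.linalg.solve(Kd, -(B.T @ B).flatten(order="F")).reshape((n, n), order="F")
    print(np.linalg.eigvalsh(0.5 * (X + X.T)).min() >= -1e-10)  # PSD up to roundoff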

3 Software Implementation

Our routines for solving generalized Lyapunov equations possess some new features that should be mentioned here. Both Hammarling's method and the Bartels–Stewart method are implemented in a way that enables the transposed equations to be solved without transposing the involved matrices explicitly. Furthermore, our routines can benefit from a Schur factorization of the pencil A − λE that has been computed prior to calling the routine. To keep storage requirements low, the input right hand side and the output solution share the same array; this, of course, results in the loss of the right hand side matrix. An output parameter for scaling of the solution is provided to guard against overflow. For the Bartels–Stewart method the symmetric right hand side matrix Y may be supplied as upper or lower triangle; in any case the full solution matrix is returned. Moreover, an optional estimation of the separation and the reciprocal condition number is provided. Of course, when solving the generalized Lyapunov equation via Hammarling's algorithm, the condition estimator in the Bartels–Stewart routine can be utilized to detect ill-conditioning in the equation.

The number of flops required by the routines is given by the following table (a small helper that evaluates these estimates follows at the end of this section). It strongly depends on whether the generalized Schur factorization of the pencil A − λE is supplied (FACT = .TRUE.) or not (FACT = .FALSE.) when calling one of the two main routines. The flop estimate 66n^3 for the QZ algorithm, which delivers this factorization, is taken from [8]. We split up the flop count for the Bartels–Stewart method into the three possible cases, where the solution (JOB = 'X'), the separation (JOB = 'S'), or both quantities (JOB = 'B') are to be computed. Note that we count a single floating point arithmetic operation as one flop. The quantity c is an integer of modest size (say 4 or 5).

    Method                        FACT = .TRUE.                        FACT = .FALSE.
    Bartels–Stewart, JOB = 'B'    (26 + 8c)/3 n^3                      (224 + 8c)/3 n^3
    Bartels–Stewart, JOB = 'S'    8c/3 n^3                             (198 + 8c)/3 n^3
    Bartels–Stewart, JOB = 'X'    26/3 n^3                             224/3 n^3
    Hammarling, m ≤ n             (3n^3 + 6mn^2 + 6m^2 n - 2m^3)/3     (201n^3 + 6mn^2 + 6m^2 n - 2m^3)/3
    Hammarling, m > n             (11n^3 + 12mn^2)/3                   (209n^3 + 12mn^2)/3

    Number of flops required by the routines.

Our implementation of the Bartels–Stewart algorithm is backward stable if the eigenvalues of the pencil A − λE are real. Otherwise, linear systems of order at most 4 are involved in the computation. These systems are solved by Gaussian elimination with complete pivoting. The loss of stability in such eliminations is rarely encountered in practice. To our knowledge, there are no backward stability results for Hammarling's method even for the standard Lyapunov equation.

The source code is implemented in FORTRAN 77 and meets the programming standards of the Working Group on Software [15]. It makes use of BLAS routines and of routines available in LAPACK 2.0 [1]. Although BLAS routines of level 3 are used, the algorithms are basically of level 1 and 2. The type COMPLEX is not used; complex computation is avoided or emulated by real operations. The remainder of this section gives a brief survey of the subroutine organization. An interface specification of both main routines is enclosed in the appendix.

Bartels–Stewart method:

    DGLP     Main routine. To be called by the user.
    DGLPRC   Solves the reduced continuous-time equation (10).
    DGLPRD   Solves the reduced discrete-time equation.
    DGELUF   LU factorization of a square matrix with complete pivoting.
    DGELUS   Back substitution for linear systems whose coefficient matrix has been
             factorized by DGELUF. Provides scaling.
    DMTRA    Transposes a matrix.
    DZTAZ    Computes Z^T A Z or Z A Z^T for a symmetric matrix A.

Hammarling's method:

    DGLPHM   Main routine. To be called by the user.
    DGHRC    Computes the Cholesky factor of the solution of the reduced
             continuous-time equation (15).
    DGHRD    Computes the Cholesky factor of the solution of the reduced
             discrete-time equation (21).
    DGHNX2   Solves the matrix equation A^T X C + E^T X D = Y or
             A X C^T + E X D^T = Y for the n × m matrix X (m = 1, 2).
             Provides scaling.
    DGH2X2   Hammarling's method for the 2 × 2 Lyapunov equation in case the
             matrix pencil has complex conjugate eigenvalues. Provides scaling.
    DCXGIV   Computes the parameters for a complex Givens rotation.
    DGELUF   LU factorization of a square matrix with complete pivoting.
    DGELUS   Back substitution for linear systems whose coefficient matrix has been
             factorized by DGELUF. Provides scaling.

Note that the subroutines DGELUF and DGELUS are contained in LAPACK 2.0 under the names DGETC2 and DGESC2, respectively.
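The flop estimates of the table translate into a tiny helper like the following (an illustrative Python transcription of the table, not part of the package); c is the number of Lyapunov solves spent by the condition estimator:

    def flops(method, n, m=None, job="X", fact=True, c=5):
        """Flop estimates from the table above; 66 n^3 is the QZ cost [8]."""
        if method == "bartels-stewart":
            base = {"B": 26 + 8 * c, "S": 8 * c, "X": 26}[job] * n**3 / 3.0
        elif method == "hammarling" and m <= n:
            base = (3 * n**3 + 6 * m * n**2 + 6 * m**2 * n - 2 * m**3) / 3.0
        elif method == "hammarling":
            base = (11 * n**3 + 12 * m * n**2) / 3.0
        else:
            raise ValueError(method)
        return base if fact else base + 66.0 * n**3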

4 Numerical Experiments

In this section we demonstrate the performance of our software applied to two sample problems. Both depend on a scale parameter t which affects the condition number of the equation. We compare the results obtained by the Bartels–Stewart subroutine DGLP and the Hammarling subroutine DGLPHM with those of the established subroutines SYLGC and SYLGD [7]. The tests were carried out on an HP Apollo series 700 workstation under IEEE double precision arithmetic with machine precision 2.22 × 10^{-16}.

Example 1 [7]: The matrices A and E are defined as

    A = (2^{-t} - 1) I_n + diag(1, 2, 3, ..., n) + U_n^T,   E = I_n + 2^{-t} U_n

in the continuous-time case and

    A = 2^{-t} I_n + diag(1, 2, 3, ..., n) + U_n^T,   E = I_n + 2^{-t} U_n

in the discrete-time case, where I_n is the n × n identity matrix and U_n is the n × n matrix with unit entries below the diagonal and all other entries zero. These systems become increasingly ill-conditioned as the parameter t increases. In all cases Y is defined as the n × n matrix that gives a true solution matrix X with all unit entries.

We generated the above problems for a medium size (n = 100) and various values of the parameter t. The first table below shows the relative errors ‖X̂ − X‖_F / ‖X‖_F, where X is the known true solution and X̂ is the computed solution affected by roundoff. Since the pencil A − λE is in general not stable, the Hammarling subroutine is not applied to this example.

    [Example 1. n = 100. Relative error for t = 0, 1, 2, 3, 4. Columns: DGLP, SYLGC (continuous-time equation); DGLP, SYLGD (discrete-time equation).]

For t = 4 the subroutine SYLGD returned an error flag.

For problems of a smaller size (n = 10) we compare the estimates for the separation SEP and the reciprocal condition number RCOND with the "true" values computed by applying the singular value decomposition (SVD) to the corresponding Kronecker product matrix. Note that SYLGC and SYLGD do not return estimates for the separation.

    [Example 1. n = 10. Estimates for the separation. Columns: DGLP and true value, for the continuous-time and the discrete-time equations.]

    [Example 1. n = 10. Estimates for the reciprocal condition number. Columns: DGLP, SYLGC, true value (continuous-time equation); DGLP, SYLGD, true value (discrete-time equation).]
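For reference, Example 1 can be generated in a few lines (NumPy sketch with illustrative names; the scalar shift 2^{-t} − 1 follows the formula as reconstructed above, and Y is produced from the prescribed all-ones solution):

    import numpy as np

    def example1(n, t, discrete=False):
        Un = np.tril(np.ones((n, n)), -1)            # unit entries below the diagonal
        shift = 2.0**-t if discrete else 2.0**-t - 1.0
        A = shift * np.eye(n) + np.diag(np.arange(1.0, n + 1)) + Un.T
        E = np.eye(n) + 2.0**-t * Un
        X = np.ones((n, n))                          # prescribed true solution
        if discrete:
            Y = -(A.T @ X @ A - E.T @ X @ E)         # so that (2) holds exactly
        else:
            Y = -(A.T @ X @ E + E.T @ X @ A)         # so that (1) holds exactly
        return A, E, Y, X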

Example 2: This example is generated in a way that enables setting the eigenvalues of the pencil A − λE, which is advantageous for two reasons. In contrast to the previous example, it is guaranteed that the pencil has both real and conjugate complex eigenvalues. On the other hand, stable pencils can easily be constructed, so that the Hammarling subroutine can be included in the comparison. If n = 3q, we choose the n × n matrices

    A = V_n diag(A_1, ..., A_q) W_n,   E = V_n W_n,

where each A_i is a 3 × 3 block, built from the parameters s_i and t_i, whose spectrum consists of a complex conjugate pair with real part s_i and imaginary parts ±t_i together with one real eigenvalue, and where V_n and W_n are n × n matrices containing only zeroes and ones. The unit entries of V_n are on and below the anti-diagonal, those of W_n are on and below the diagonal. The parameters s_i and t_i determining the eigenvalues of the pencil are chosen as functions of the scale parameter t such that the pencil A − λE is c-stable in the continuous-time case and d-stable in the discrete-time case. The semidefinite right hand side is formed as

    Y = B^T B,   B = (1, 2, ..., n).

The drawback of this example is that the true solution is not known. Therefore, the accuracy of the computed solution X̂ is measured by the relative residual, defined as ‖A^T X̂ E + E^T X̂ A + Y‖_F / ‖Y‖_F in the continuous-time case and ‖A^T X̂ A − E^T X̂ E + Y‖_F / ‖Y‖_F in the discrete-time case. This, of course, is a weaker criterion than the relative error in case the equation is ill-conditioned. The following results were obtained for problems of size n = 99.

    [Example 2. n = 99. Relative residual for several values of t. Columns: DGLP, DGLPHM, SYLGC (continuous-time equation); DGLP, DGLPHM, SYLGD (discrete-time equation).]

For t = 0.8 in the discrete-time case each of the subroutines returned an error flag.

Finally, the CPU times obtained by means of the LAPACK subroutine SECOND are compared for the parameters n = 99 and t = 0.2. They are split up into the times required for the three main stages of the computation: the transformation of the equation to the reduced form (T), the solution of the reduced equation (RE), and the back transformation of the solution (BT).

    [Example 2. n = 99, t = 0.2. CPU times in seconds. Rows: T, RE, BT, Total. Columns: DGLP, DGLPHM, SYLGC (continuous-time equation); DGLP, DGLPHM, SYLGD (discrete-time equation).]
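The relative residuals used above translate directly into code (NumPy sketch, illustrative names):

    import numpy as np

    def rel_res_continuous(A, E, X, Y):
        return (np.linalg.norm(A.T @ X @ E + E.T @ X @ A + Y, "fro")
                / np.linalg.norm(Y, "fro"))

    def rel_res_discrete(A, E, X, Y):
        return (np.linalg.norm(A.T @ X @ A - E.T @ X @ E + Y, "fro")
                / np.linalg.norm(Y, "fro"))

Keep in mind the caveat above: for ill-conditioned problems a small residual does not imply a small relative error.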

Concerning the accuracy of the computed solutions for both examples, the involved subroutines do not differ significantly. The estimates of the condition number obtained by DGLP and SYLGC/SYLGD are of comparable size as well. A distinct merit of our subroutines is the shorter CPU time. This is mainly caused by the fact that SYLGC/SYLGD make use of several older LINPACK [4] and EISPACK [5] subroutines. In particular, the QZ subroutine DGEGS from LAPACK invoked by our subroutines is often faster than the modified EISPACK subroutines QZHESG, QZITG, and QZVALG utilized by SYLGC/SYLGD by a factor of 6.

5 Acknowledgements

I wish to thank Andras Varga for providing his subroutines DMTRA and DZTAZ, and Bo Kågström and Peter Poromaa for the subroutines DGELUF and DGELUS.

A Subroutine Description

A.1 Bartels–Stewart Method

A.1.1 Purpose

To solve for X either the generalized continuous-time Lyapunov equation

    op(A)^T X op(E) + op(E)^T X op(A) = scale * Y                      (23)

or the generalized discrete-time Lyapunov equation

    op(A)^T X op(A) - op(E)^T X op(E) = scale * Y                      (24)

where op(M) is either M or M^T for M = A, E, and the right hand side Y is symmetric. A, E, Y, and the solution X are N by N matrices. scale is an output scale factor, set to avoid overflow in X. Optionally, estimates of the separation and the reciprocal condition number of the Lyapunov operator are provided.

A.1.2 Specification

      SUBROUTINE DGLP( JOB, DISCR, FACT, TRANS, N, A, LDA, E, LDE,
     *                 UPPER, X, LDX, SCALE, Q, LDQ, Z, LDZ, IWORK,
     *                 RWORK, LRWORK, SEP, RCOND, IERR )

A.1.3 Argument List

Arguments In

N - INTEGER.
    The order of the matrix A.  N ≥ 0.

A - DOUBLE PRECISION array of DIMENSION (LDA,N).
    If FACT = .TRUE., then the leading N by N upper Hessenberg part of this array must contain the generalized Schur factor A_s of the matrix A. A_s must be an upper quasitriangular matrix. The elements below the upper Hessenberg part of the array A are not referenced.
    If FACT = .FALSE., then the leading N by N part of this array must contain the matrix A.
    Note: this array is overwritten if FACT = .FALSE..

LDA - INTEGER.
    The leading dimension of the array A as declared in the calling program.  LDA ≥ N.

E - DOUBLE PRECISION array of DIMENSION (LDE,N).
    If FACT = .TRUE., then the leading N by N upper triangular part of this array must contain the generalized Schur factor E_s of the matrix E. The elements below the upper triangular part of the array E are not referenced.
    If FACT = .FALSE., then the leading N by N part of this array must contain the coefficient matrix E of the equation.
    Note: this array is overwritten if FACT = .FALSE..

LDE - INTEGER.
    The leading dimension of the array E as declared in the calling program.  LDE ≥ N.

X - DOUBLE PRECISION array of DIMENSION (LDX,N).
    If JOB = 'B' or 'X', then the leading N by N part of this array must contain the right hand side matrix Y of the equation. On entry, either the lower or the upper triangular part of this array is referenced (see the mode parameter UPPER).

    If JOB = 'S', X is not referenced.
    Note: this array is overwritten if JOB = 'B' or 'X'.

LDX - INTEGER.
    The leading dimension of the array X as declared in the calling program.  LDX ≥ N.

Q - DOUBLE PRECISION array of DIMENSION (LDQ,N).
    If FACT = .TRUE., then the leading N by N part of this array must contain the orthogonal matrix Q from the generalized Schur factorization.
    If FACT = .FALSE., Q need not be set on entry.

LDQ - INTEGER.
    The leading dimension of the array Q as declared in the calling program.  LDQ ≥ N.

Z - DOUBLE PRECISION array of DIMENSION (LDZ,N).
    If FACT = .TRUE., then the leading N by N part of this array must contain the orthogonal matrix Z from the generalized Schur factorization.
    If FACT = .FALSE., Z need not be set on entry.

LDZ - INTEGER.
    The leading dimension of the array Z as declared in the calling program.  LDZ ≥ N.

Arguments Out

A - DOUBLE PRECISION array of DIMENSION (LDA,N).
    The leading N by N part of this array contains the generalized Schur factor A_s of the matrix A. (A_s is an upper quasitriangular matrix.)

E - DOUBLE PRECISION array of DIMENSION (LDE,N).
    The leading N by N part of this array contains the generalized Schur factor E_s of the matrix E. (E_s is an upper triangular matrix.)

X - DOUBLE PRECISION array of DIMENSION (LDX,N).
    If JOB = 'B' or 'X', then the leading N by N part of this array contains the solution matrix X of the equation.
    If JOB = 'S', X has not been referenced.

SCALE - DOUBLE PRECISION.
    The scale factor set to avoid overflow in X (0 < SCALE ≤ 1).

Q - DOUBLE PRECISION array of DIMENSION (LDQ,N).
    The leading N by N part of this array contains the orthogonal matrix Q from the generalized Schur factorization.

Z - DOUBLE PRECISION array of DIMENSION (LDZ,N).
    The leading N by N part of this array contains the orthogonal matrix Z from the generalized Schur factorization.

SEP - DOUBLE PRECISION.
    An estimate for the separation of the Lyapunov operator.

RCOND - DOUBLE PRECISION.
    An estimate for the reciprocal condition number of the Lyapunov operator.

Work Space

IWORK - INTEGER array at least of DIMENSION (N*N).
    If JOB = 'X', this array is not referenced.

RWORK - DOUBLE PRECISION array at least of DIMENSION (LRWORK).
    On exit, if IERR = 0, RWORK(1) contains the optimal workspace.

LRWORK - INTEGER.
    The dimension of the array RWORK. The following table contains the minimal work space requirements depending on the choice of JOB and FACT.

        JOB         FACT       LRWORK
        'X'         .TRUE.     N
        'X'         .FALSE.    7N
        'B', 'S'    .TRUE.     2N^2
        'B', 'S'    .FALSE.    MAX(2N^2, 7N)

    Note: For good performance, LRWORK must generally be larger.

Tolerances

None.

Mode Parameters

JOB - CHARACTER*1.
    Specifies whether the solution is to be computed and whether the separation (along with the reciprocal condition number) is to be estimated:
    JOB = 'X'  (compute the solution only);
    JOB = 'S'  (estimate the separation only);
    JOB = 'B'  (compute the solution and estimate the separation).

DISCR - LOGICAL.
    Specifies which type of equation is to be solved:
    DISCR = .FALSE.  (continuous-time equation (23));
    DISCR = .TRUE.   (discrete-time equation (24)).

FACT - LOGICAL.
    Specifies whether the generalized real Schur factorization of the pencil A − λE is supplied on entry:
    FACT = .FALSE.  (the generalized real Schur factorization is not supplied);
    FACT = .TRUE.   (the generalized real Schur factorization is supplied).

TRANS - LOGICAL.
    Specifies whether the transposed equation is to be solved:
    TRANS = .FALSE.  (op(A) = A, op(E) = E);
    TRANS = .TRUE.   (op(A) = A^T, op(E) = E^T).

UPPER - LOGICAL.
    Specifies whether the lower or the upper triangle of the array X is referenced:
    UPPER = .FALSE.  (only the lower triangle is referenced);
    UPPER = .TRUE.   (only the upper triangle is referenced).

Warning Indicator

None.

Error Indicator

IERR - INTEGER.
    Unless the routine detects an error (see next section), IERR contains 0 on exit.

A.1.4 Warnings and Errors detected by the Routine

IERR = 1: On entry, N < 0, or LDA < N, or LDE < N, or LDX < N, or LDQ < N, or LDZ < N, or JOB is none of 'B', 'b', 'S', 's', 'X', 'x'.
IERR = 2: LRWORK is too small.
IERR = 3: FACT = .TRUE. and the matrix contained in the upper Hessenberg part of the array A is not in upper quasitriangular form.
IERR = 4: FACT = .FALSE. and the pencil A − λE cannot be reduced to generalized Schur form; the LAPACK routine DGEGS has failed to converge.
IERR = 5: DISCR = .TRUE. and the pencil A − λE has a pair of reciprocal eigenvalues; that is, λ_i = 1/λ_j for some eigenvalues λ_i and λ_j of A − λE. Hence, equation (24) is singular.
IERR = 6: DISCR = .FALSE. and the pencil A − λE has a degenerate pair of eigenvalues; that is, λ_i = −λ_j for some eigenvalues λ_i and λ_j of A − λE. Hence, equation (23) is singular.

A.2 Hammarling's Method

A.2.1 Purpose

To compute the Cholesky factor U of the matrix X = op(U)^T op(U), which is the solution of either the generalized c-stable continuous-time Lyapunov equation

    op(A)^T X op(E) + op(E)^T X op(A) = -scale^2 * op(B)^T op(B)       (25)

or the generalized d-stable discrete-time Lyapunov equation

    op(A)^T X op(A) - op(E)^T X op(E) = -scale^2 * op(B)^T op(B)       (26)

without first finding X and without the need to form the matrix op(B)^T op(B). op(K) is either K or K^T for K = A, B, E, U. A and E are N by N matrices, op(B) is an M by N matrix. The resulting matrix U is an N by N upper triangular matrix with non-negative entries on its main diagonal. scale is an output scale factor set to avoid overflow in U.

A.2.2 Specification

      SUBROUTINE DGLPHM( DISCR, FACT, TRANS, N, M, A, LDA, E, LDE, B,
     *                   LDB, SCALE, Q, LDQ, Z, LDZ, RWORK, LRWORK,
     *                   IERR )

A.2.3 Argument List

Arguments In

N - INTEGER.
    The order of the matrix A.  N ≥ 0.

M - INTEGER.
    The number of rows in the matrix op(B).  M ≥ 0.

A - DOUBLE PRECISION array of DIMENSION (LDA,N).
    If FACT = .TRUE., then the leading N by N upper Hessenberg part of this array must contain the generalized Schur factor A_s of the matrix A. A_s must be an upper quasitriangular matrix. The elements below the upper Hessenberg part of the array A are not referenced.
    If FACT = .FALSE., then the leading N by N part of this array must contain the matrix A.
    Note: this array is overwritten if FACT = .FALSE..

LDA - INTEGER.
    The leading dimension of the array A as declared in the calling program.  LDA ≥ N.

E - DOUBLE PRECISION array of DIMENSION (LDE,N).
    If FACT = .TRUE., then the leading N by N upper triangular part of this array must contain the generalized Schur factor E_s of the matrix E. The elements below the upper triangular part of the array E are not referenced.
    If FACT = .FALSE., then the leading N by N part of this array must contain the coefficient matrix E of the equation.
    Note: this array is overwritten if FACT = .FALSE..

LDE - INTEGER.
    The leading dimension of the array E as declared in the calling program.  LDE ≥ N.

B - DOUBLE PRECISION array of DIMENSION (LDB,NB).
    If TRANS = .TRUE., the leading N by M part of this array must contain the matrix B and NB ≥ MAX(M,N).
    If TRANS = .FALSE., the leading M by N part of this array must contain the matrix B and NB ≥ N.
    Note: this array is overwritten.

LDB - INTEGER.
    The leading dimension of the array B as declared in the calling program.
    If TRANS = .TRUE., then LDB ≥ N.
    If TRANS = .FALSE., then LDB ≥ MAX(M,N).

Q - DOUBLE PRECISION array of DIMENSION (LDQ,N).
    If FACT = .TRUE., then the leading N by N part of this array must contain the orthogonal matrix Q from the generalized Schur factorization.
    If FACT = .FALSE., Q need not be set on entry.

LDQ - INTEGER.
    The leading dimension of the array Q as declared in the calling program.  LDQ ≥ N.

Z - DOUBLE PRECISION array of DIMENSION (LDZ,N).
    If FACT = .TRUE., then the leading N by N part of this array must contain the orthogonal matrix Z from the generalized Schur factorization.
    If FACT = .FALSE., Z need not be set on entry.

LDZ - INTEGER.
    The leading dimension of the array Z as declared in the calling program.  LDZ ≥ N.

Arguments Out

A - DOUBLE PRECISION array of DIMENSION (LDA,N).
    The leading N by N part of this array contains the generalized Schur factor A_s of the matrix A. (A_s is an upper quasitriangular matrix.)

E - DOUBLE PRECISION array of DIMENSION (LDE,N).
    The leading N by N part of this array contains the generalized Schur factor E_s of the matrix E. (E_s is an upper triangular matrix.)

B - DOUBLE PRECISION array of DIMENSION (LDB,NB).
    The leading N by N part of this array contains the Cholesky factor U of the solution matrix X of the problem.

SCALE - DOUBLE PRECISION.
    The scale factor set to avoid overflow in U (0 < SCALE ≤ 1).

Q - DOUBLE PRECISION array of DIMENSION (LDQ,N).
    The leading N by N part of this array contains the orthogonal matrix Q from the generalized Schur factorization.

Z - DOUBLE PRECISION array of DIMENSION (LDZ,N).
    The leading N by N part of this array contains the orthogonal matrix Z from the generalized Schur factorization.

Work Space

RWORK - DOUBLE PRECISION array at least of DIMENSION (LRWORK).
    On exit, if IERR = 0, RWORK(1) contains the optimal workspace.

LRWORK - INTEGER.
    The dimension of the array RWORK.
    If FACT = .TRUE., then LRWORK ≥ MAX(6N-6, 1).
    If FACT = .FALSE., then LRWORK ≥ MAX(7N, 1).
    Note: For good performance, LRWORK must generally be larger.

Tolerances

None.

Mode Parameters

DISCR - LOGICAL.
    Specifies which type of equation is to be solved:
    DISCR = .FALSE.  (continuous-time equation (25));
    DISCR = .TRUE.   (discrete-time equation (26)).

FACT - LOGICAL.
    Specifies whether the generalized real Schur factorization of the pencil A − λE is supplied on entry:
    FACT = .FALSE.  (the generalized real Schur factorization is not supplied);
    FACT = .TRUE.   (the generalized real Schur factorization is supplied).

TRANS - LOGICAL.
    Specifies whether the transposed equation is to be solved:
    TRANS = .FALSE.  (op(K) = K,   K = A, B, E, U);
    TRANS = .TRUE.   (op(K) = K^T, K = A, B, E, U).

Warning Indicator

None.

Error Indicator

IERR - INTEGER.
    Unless the routine detects an error (see next section), IERR contains 0 on exit.

A.2.4 Warnings and Errors detected by the Routine

IERR = 1: On entry, N < 0, or M < 0, or LDA < N, or LDE < N, or LDB < N, or LDQ < N, or LDZ < N, or (TRANS = .FALSE. and LDB < M).
IERR = 2: LRWORK is too small.
IERR = 3: FACT = .TRUE. and the matrix contained in the upper Hessenberg part of the array A is not in upper quasitriangular form.
IERR = 4: FACT = .FALSE. and the pencil A − λE cannot be reduced to generalized Schur form; the LAPACK routine DGEGS has failed to converge.
IERR = 5: FACT = .TRUE. and there is a 2 by 2 block on the main diagonal of the pencil A_s − λE_s with real eigenvalues.
IERR = 6: DISCR = .FALSE. and the pencil A − λE is not c-stable.
IERR = 7: DISCR = .TRUE. and the pencil A − λE is not d-stable.
IERR = 8: DISCR = .TRUE. and the LAPACK routine DSYEVX has failed to converge during the solution of the reduced equation. This error is unlikely to occur.

B Example Programs

Two sample programs are enclosed to demonstrate the usage of the routines DGLP and DGLPHM.

B.1 Bartels–Stewart Method Example

To find the solution matrix X, the separation, and the reciprocal condition number of the equation

    A^T X E + E^T X A = Y,

where A, E, and Y are given 3 × 3 matrices (Y symmetric), as listed in the program data.

Program Text

*     DGLP EXAMPLE PROGRAM TEXT
*
*     .. Parameters ..
      INTEGER          NIN, NOUT
      PARAMETER        (NIN=5, NOUT=6)
      INTEGER          NMAX
      PARAMETER        (NMAX=20)
      INTEGER          LDA, LDE, LDQ, LDX, LDZ
      PARAMETER        (LDA=NMAX, LDE=NMAX, LDQ=NMAX, LDX=NMAX,
     +                 LDZ=NMAX)
      INTEGER          LIWORK, LRWORK
      PARAMETER        (LIWORK=NMAX**2, LRWORK=MAX(2*NMAX**2,7*NMAX))
*     .. Local Scalars ..
      CHARACTER        JOB
      DOUBLE PRECISION RCOND, SCALE, SEP
      INTEGER          I, IERR, J, N
      LOGICAL          DISCR, FACT, TRANS, UPPER
*     .. Local Arrays ..
      INTEGER          IWORK(LIWORK)
      DOUBLE PRECISION A(LDA,NMAX), E(LDE,NMAX), Q(LDQ,NMAX),
     +                 RWORK(LRWORK), X(LDX,NMAX), Z(LDZ,NMAX)
*     .. External Subroutines ..
      EXTERNAL         DGLP
*     .. Executable Statements ..
*
      WRITE (NOUT,FMT=99999)
*     Skip the heading in the data file and read the data.
      READ (NIN,FMT='()')
      READ (NIN,FMT=*) N, JOB, DISCR, FACT, TRANS, UPPER
      IF (N.LE.0 .OR. N.GT.NMAX) THEN
         WRITE (NOUT,FMT=99993) N
      ELSE
         READ (NIN,FMT=*) ((A(I,J),J=1,N),I=1,N)
         READ (NIN,FMT=*) ((E(I,J),J=1,N),I=1,N)
         IF (FACT) THEN
            READ (NIN,FMT=*) ((Q(I,J),J=1,N),I=1,N)
            READ (NIN,FMT=*) ((Z(I,J),J=1,N),I=1,N)
         END IF
         READ (NIN,FMT=*) ((X(I,J),J=1,N),I=1,N)
*        Find the solution matrix X and the scalars RCOND and SEP.

         CALL DGLP(JOB,DISCR,FACT,TRANS,N,A,LDA,E,LDE,UPPER,X,LDX,
     +             SCALE,Q,LDQ,Z,LDZ,IWORK,RWORK,LRWORK,SEP,RCOND,
     +             IERR)
*
         IF (IERR.NE.0) THEN
            WRITE (NOUT,FMT=99998) IERR
         ELSE
            IF (JOB.EQ.'B' .OR. JOB.EQ.'S') THEN
               WRITE (NOUT,FMT=99997) SEP
               WRITE (NOUT,FMT=99996) RCOND
            END IF
            IF (JOB.EQ.'B' .OR. JOB.EQ.'X') THEN
               WRITE (NOUT,FMT=99995) SCALE
               DO 20 I = 1, N
                  WRITE (NOUT,FMT=99994) (X(I,J),J=1,N)
   20          CONTINUE
            END IF
         END IF
      END IF
      STOP
*
99999 FORMAT (' DGLP EXAMPLE PROGRAM RESULTS',/1X)
99998 FORMAT (' IERR on exit from DGLP = ',I2)
99997 FORMAT (' SEP   = ',F8.4)
99996 FORMAT (' RCOND = ',F8.4)
99995 FORMAT (' SCALE = ',F8.4,//' The solution matrix X is ')
99994 FORMAT (20(1X,F8.4))
99993 FORMAT (/' N is out of range.',/' N = ',I5)
      END

Program Data

 DGLP EXAMPLE PROGRAM DATA
 3 B F F F T
followed by the entries of A, E, and Y, row by row.

Program Results

 DGLP EXAMPLE PROGRAM RESULTS
 SEP   = .2867
 RCOND = .55
 SCALE = 1.0000

 The solution matrix X is
followed by the three rows of X.

B.2 Hammarling's Method Example

To find the Cholesky factor U of the solution X = U^T U of the equation

    A^T X E + E^T X A = -B^T B,

where A and E are given 3 × 3 matrices and B is a 1 × 3 matrix, as listed in the program data.

Program Text

*     DGLPHM EXAMPLE PROGRAM TEXT
*
*     .. Parameters ..
      INTEGER          NIN, NOUT
      PARAMETER        (NIN=5, NOUT=6)
      INTEGER          NMAX
      PARAMETER        (NMAX=20)
      INTEGER          LDA, LDB, LDE, LDQ, LDZ
      PARAMETER        (LDA=NMAX, LDB=NMAX, LDE=NMAX, LDQ=NMAX,
     +                 LDZ=NMAX)
      INTEGER          LRWORK
      PARAMETER        (LRWORK=MAX(7*NMAX,6*NMAX-6,1))
*     .. Local Scalars ..
      DOUBLE PRECISION SCALE
      INTEGER          I, IERR, J, M, N
      LOGICAL          DISCR, FACT, TRANS
*     .. Local Arrays ..
      DOUBLE PRECISION A(LDA,NMAX), B(LDB,NMAX), E(LDE,NMAX),
     +                 Q(LDQ,NMAX), RWORK(LRWORK), Z(LDZ,NMAX)
*     .. External Subroutines ..
      EXTERNAL         DGLPHM
*     .. Executable Statements ..
*
      WRITE (NOUT,FMT=99999)
*     Skip the heading in the data file and read the data.
      READ (NIN,FMT='()')
      READ (NIN,FMT=*) N, M, DISCR, FACT, TRANS
      IF (N.LT.0 .OR. N.GT.NMAX) THEN
         WRITE (NOUT,FMT=99995) N
      ELSE IF (M.LT.0 .OR. M.GT.NMAX) THEN
         WRITE (NOUT,FMT=99994) M
      ELSE
         READ (NIN,FMT=*) ((A(I,J),J=1,N),I=1,N)
         READ (NIN,FMT=*) ((E(I,J),J=1,N),I=1,N)
         IF (FACT) THEN
            READ (NIN,FMT=*) ((Q(I,J),J=1,N),I=1,N)
            READ (NIN,FMT=*) ((Z(I,J),J=1,N),I=1,N)
         END IF
         IF (TRANS) THEN
            READ (NIN,FMT=*) ((B(I,J),J=1,M),I=1,N)
         ELSE
            READ (NIN,FMT=*) ((B(I,J),J=1,N),I=1,M)
         END IF
*        Find the Cholesky factor U of the solution matrix.
         CALL DGLPHM(DISCR,FACT,TRANS,N,M,A,LDA,E,LDE,B,LDB,SCALE,

     +               Q,LDQ,Z,LDZ,RWORK,LRWORK,IERR)
*
         IF (IERR.NE.0) THEN
            WRITE (NOUT,FMT=99998) IERR
         ELSE
            WRITE (NOUT,FMT=99997) SCALE
            DO 20 I = 1, N
               WRITE (NOUT,FMT=99996) (B(I,J),J=1,N)
   20       CONTINUE
         END IF
      END IF
      STOP
*
99999 FORMAT (' DGLPHM EXAMPLE PROGRAM RESULTS',/1X)
99998 FORMAT (' IERR on exit from DGLPHM = ',I2)
99997 FORMAT (' SCALE = ',F8.4,//' The Cholesky factor U of the',
     +        ' solution matrix is')
99996 FORMAT (20(1X,F8.4))
99995 FORMAT (/' N is out of range.',/' N = ',I5)
99994 FORMAT (/' M is out of range.',/' M = ',I5)
      END

Program Data

 DGLPHM EXAMPLE PROGRAM DATA
 3 1 F F F
followed by the entries of A, E, and B, row by row.

Program Results

 DGLPHM EXAMPLE PROGRAM RESULTS
 SCALE = 1.0000

 The Cholesky factor U of the solution matrix is
followed by the three rows of U.

References

[1] E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov, and D. Sorensen, LAPACK Users' Guide, SIAM, Philadelphia, PA, 1992.
[2] R.H. Bartels, G.W. Stewart, Solution of the matrix equation AX + XB = C, Comm. ACM, 15:820–826, 1972.
[3] K.-W.E. Chu, The solution of the matrix equations AXB − CXD = E and (YA − DZ, YC − BZ) = (E, F), Linear Algebra Appl., 93:93–105, 1987.
[4] J.J. Dongarra, J.R. Bunch, C.B. Moler, G.W. Stewart, LINPACK Users' Guide, SIAM, Philadelphia, PA, 1979.
[5] B.S. Garbow, J.M. Boyle, J.J. Dongarra, C.B. Moler, Matrix Eigensystem Routines – EISPACK Guide Extension, Lecture Notes in Computer Science, Vol. 51, Springer-Verlag, New York, 1977.
[6] J.D. Gardiner, A.J. Laub, J.J. Amato, C.B. Moler, Solution of the Sylvester matrix equation AXB^T + CXD^T = E, ACM Trans. Math. Soft., 18:223–231, 1992.
[7] J.D. Gardiner, M.R. Wette, A.J. Laub, J.J. Amato, C.B. Moler, A FORTRAN-77 software package for solving the Sylvester matrix equation AXB^T + CXD^T = E, ACM Trans. Math. Soft., 18:232–238, 1992.
[8] G.H. Golub, C.F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, 2nd edition, 1989.
[9] W.W. Hager, Condition estimates, SIAM J. Sci. Stat. Comput., 5:311–316, 1984.
[10] S.J. Hammarling, Numerical solution of the stable, non-negative definite Lyapunov equation, IMA J. Numer. Anal., 2:303–323, 1982.
[11] N.J. Higham, FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation, ACM Trans. Math. Soft., 14:381–396, 1988.
[12] B. Kågström, L. Westin, Generalized Schur methods with condition estimators for solving the generalized Sylvester equation, IEEE Trans. Autom. Control, 34:745–751, 1989.
[13] P. Lancaster, M. Tismenetsky, The Theory of Matrices, Academic Press, Orlando, 2nd edition, 1985.
[14] J. Snyders, M. Zakai, On non-negative solutions of the equation AD + DA' = −C, SIAM J. Appl. Math., 18:704–714, 1970.
[15] Working Group on Software (WGS), Implementation and Documentation Standards, WGS-Report 90-1, Eindhoven/Oxford, 1990.


which is an important tool in measuring invariant subspace sensitivity [10, sec. 7..5], [7, 8]. Here, we are using the Frobenius norm, kak F = ( P i;j PERTURBATION THEORY AND BACKWARD ERROR FOR AX? XB = C NICHOLAS J. HIGHAM y Abstract. Because of the special structure of the equations AX?XB = C the usual relation for linear equations \backward error

More information

Stability and Inertia Theorems for Generalized Lyapunov Equations

Stability and Inertia Theorems for Generalized Lyapunov Equations Published in Linear Algebra and its Applications, 355(1-3, 2002, pp. 297-314. Stability and Inertia Theorems for Generalized Lyapunov Equations Tatjana Stykel Abstract We study generalized Lyapunov equations

More information

The restarted QR-algorithm for eigenvalue computation of structured matrices

The restarted QR-algorithm for eigenvalue computation of structured matrices Journal of Computational and Applied Mathematics 149 (2002) 415 422 www.elsevier.com/locate/cam The restarted QR-algorithm for eigenvalue computation of structured matrices Daniela Calvetti a; 1, Sun-Mi

More information

Two Results About The Matrix Exponential

Two Results About The Matrix Exponential Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about

More information

Solving projected generalized Lyapunov equations using SLICOT

Solving projected generalized Lyapunov equations using SLICOT Solving projected generalized Lyapunov equations using SLICOT Tatjana Styel Abstract We discuss the numerical solution of projected generalized Lyapunov equations. Such equations arise in many control

More information

LAPACK-Style Codes for Level 2 and 3 Pivoted Cholesky Factorizations. C. Lucas. Numerical Analysis Report No February 2004

LAPACK-Style Codes for Level 2 and 3 Pivoted Cholesky Factorizations. C. Lucas. Numerical Analysis Report No February 2004 ISSN 1360-1725 UMIST LAPACK-Style Codes for Level 2 and 3 Pivoted Cholesky Factorizations C. Lucas Numerical Analysis Report No. 442 February 2004 Manchester Centre for Computational Mathematics Numerical

More information

S N. hochdimensionaler Lyapunov- und Sylvestergleichungen. Peter Benner. Mathematik in Industrie und Technik Fakultät für Mathematik TU Chemnitz

S N. hochdimensionaler Lyapunov- und Sylvestergleichungen. Peter Benner. Mathematik in Industrie und Technik Fakultät für Mathematik TU Chemnitz Ansätze zur numerischen Lösung hochdimensionaler Lyapunov- und Sylvestergleichungen Peter Benner Mathematik in Industrie und Technik Fakultät für Mathematik TU Chemnitz S N SIMULATION www.tu-chemnitz.de/~benner

More information

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

Algebraic Equations. 2.0 Introduction. Nonsingular versus Singular Sets of Equations. A set of linear algebraic equations looks like this:

Algebraic Equations. 2.0 Introduction. Nonsingular versus Singular Sets of Equations. A set of linear algebraic equations looks like this: Chapter 2. 2.0 Introduction Solution of Linear Algebraic Equations A set of linear algebraic equations looks like this: a 11 x 1 + a 12 x 2 + a 13 x 3 + +a 1N x N =b 1 a 21 x 1 + a 22 x 2 + a 23 x 3 +

More information

On a quadratic matrix equation associated with an M-matrix

On a quadratic matrix equation associated with an M-matrix Article Submitted to IMA Journal of Numerical Analysis On a quadratic matrix equation associated with an M-matrix Chun-Hua Guo Department of Mathematics and Statistics, University of Regina, Regina, SK

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University February 6, 2018 Linear Algebra (MTH

More information

problem Au = u by constructing an orthonormal basis V k = [v 1 ; : : : ; v k ], at each k th iteration step, and then nding an approximation for the e

problem Au = u by constructing an orthonormal basis V k = [v 1 ; : : : ; v k ], at each k th iteration step, and then nding an approximation for the e A Parallel Solver for Extreme Eigenpairs 1 Leonardo Borges and Suely Oliveira 2 Computer Science Department, Texas A&M University, College Station, TX 77843-3112, USA. Abstract. In this paper a parallel

More information

Computation of a canonical form for linear differential-algebraic equations

Computation of a canonical form for linear differential-algebraic equations Computation of a canonical form for linear differential-algebraic equations Markus Gerdin Division of Automatic Control Department of Electrical Engineering Linköpings universitet, SE-581 83 Linköping,

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

OUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact

OUTLINE 1. Introduction 1.1 Notation 1.2 Special matrices 2. Gaussian Elimination 2.1 Vector and matrix norms 2.2 Finite precision arithmetic 2.3 Fact Computational Linear Algebra Course: (MATH: 6800, CSCI: 6800) Semester: Fall 1998 Instructors: { Joseph E. Flaherty, aherje@cs.rpi.edu { Franklin T. Luk, luk@cs.rpi.edu { Wesley Turner, turnerw@cs.rpi.edu

More information

Matrix Eigensystem Tutorial For Parallel Computation

Matrix Eigensystem Tutorial For Parallel Computation Matrix Eigensystem Tutorial For Parallel Computation High Performance Computing Center (HPC) http://www.hpc.unm.edu 5/21/2003 1 Topic Outline Slide Main purpose of this tutorial 5 The assumptions made

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

Sonderforschungsbereich 393. P. Hr. Petkov M. M. Konstantinov V. Mehrmann. DGRSVX and DMSRIC: Fortran 77. matrix algebraic Riccati equations with

Sonderforschungsbereich 393. P. Hr. Petkov M. M. Konstantinov V. Mehrmann. DGRSVX and DMSRIC: Fortran 77. matrix algebraic Riccati equations with Sonderforschungsbereich 393 Numerische Simulation auf massiv parallelen Rechnern P. Hr. Petkov M. M. Konstantinov V. Mehrmann DGRSVX and DMSRIC: Fortran 77 subroutines for solving continuous{time matrix

More information

On Orthogonal Block Elimination. Christian Bischof and Xiaobai Sun. Argonne, IL Argonne Preprint MCS-P

On Orthogonal Block Elimination. Christian Bischof and Xiaobai Sun. Argonne, IL Argonne Preprint MCS-P On Orthogonal Block Elimination Christian Bischof and iaobai Sun Mathematics and Computer Science Division Argonne National Laboratory Argonne, IL 6439 fbischof,xiaobaig@mcs.anl.gov Argonne Preprint MCS-P45-794

More information

Generalized interval arithmetic on compact matrix Lie groups

Generalized interval arithmetic on compact matrix Lie groups myjournal manuscript No. (will be inserted by the editor) Generalized interval arithmetic on compact matrix Lie groups Hermann Schichl, Mihály Csaba Markót, Arnold Neumaier Faculty of Mathematics, University

More information

ETNA Kent State University

ETNA Kent State University C 8 Electronic Transactions on Numerical Analysis. Volume 17, pp. 76-2, 2004. Copyright 2004,. ISSN 1068-613. etnamcs.kent.edu STRONG RANK REVEALING CHOLESKY FACTORIZATION M. GU AND L. MIRANIAN Abstract.

More information

An Exact Line Search Method for Solving Generalized Continuous-Time Algebraic Riccati Equations

An Exact Line Search Method for Solving Generalized Continuous-Time Algebraic Riccati Equations IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO., JANUARY 998 0 An Exact Line Search Method for Solving Generalized Continuous-Time Algebraic Riccati Equations Peter Benner and Ralph Byers Abstract

More information

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex

More information

A note on eigenvalue computation for a tridiagonal matrix with real eigenvalues Akiko Fukuda

A note on eigenvalue computation for a tridiagonal matrix with real eigenvalues Akiko Fukuda Journal of Math-for-Industry Vol 3 (20A-4) pp 47 52 A note on eigenvalue computation for a tridiagonal matrix with real eigenvalues Aio Fuuda Received on October 6 200 / Revised on February 7 20 Abstract

More information

Key words. modified Cholesky factorization, optimization, Newton s method, symmetric indefinite

Key words. modified Cholesky factorization, optimization, Newton s method, symmetric indefinite SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 4, pp. 197 111, October 1998 16 A MODIFIED CHOLESKY ALGORITHM BASED ON A SYMMETRIC INDEFINITE FACTORIZATION

More information

Lecture 13 Stability of LU Factorization; Cholesky Factorization. Songting Luo. Department of Mathematics Iowa State University

Lecture 13 Stability of LU Factorization; Cholesky Factorization. Songting Luo. Department of Mathematics Iowa State University Lecture 13 Stability of LU Factorization; Cholesky Factorization Songting Luo Department of Mathematics Iowa State University MATH 562 Numerical Analysis II ongting Luo ( Department of Mathematics Iowa

More information

DIRECT METHODS FOR MATRIX SYLVESTER AND LYAPUNOV EQUATIONS

DIRECT METHODS FOR MATRIX SYLVESTER AND LYAPUNOV EQUATIONS DIRECT METHODS FOR MATRIX SYLVESTER AND LYAPUNOV EQUATIONS DANNY C. SORENSEN AND YUNKAI ZHOU Received 12 December 2002 and in revised form 31 January 2003 We revisit the two standard dense methods for

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Maurcio C. de Oliveira J. William Helton Abstract We study the solvability of generalized linear matrix equations of the Lyapunov type

More information

G03ACF NAG Fortran Library Routine Document

G03ACF NAG Fortran Library Routine Document G03 Multivariate Methods G03ACF NAG Fortran Library Routine Document Note. Before using this routine, please read the Users Note for your implementation to check the interpretation of bold italicised terms

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

A Divide-and-Conquer Algorithm for Functions of Triangular Matrices

A Divide-and-Conquer Algorithm for Functions of Triangular Matrices A Divide-and-Conquer Algorithm for Functions of Triangular Matrices Ç. K. Koç Electrical & Computer Engineering Oregon State University Corvallis, Oregon 97331 Technical Report, June 1996 Abstract We propose

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

THE APPLICATION OF EIGENPAIR STABILITY TO BLOCK DIAGONALIZATION

THE APPLICATION OF EIGENPAIR STABILITY TO BLOCK DIAGONALIZATION SIAM J. NUMER. ANAL. c 1997 Society for Industrial and Applied Mathematics Vol. 34, No. 3, pp. 1255 1268, June 1997 020 THE APPLICATION OF EIGENPAIR STABILITY TO BLOCK DIAGONALIZATION NILOTPAL GHOSH, WILLIAM

More information

Linear Analysis Lecture 16

Linear Analysis Lecture 16 Linear Analysis Lecture 16 The QR Factorization Recall the Gram-Schmidt orthogonalization process. Let V be an inner product space, and suppose a 1,..., a n V are linearly independent. Define q 1,...,

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Lecture 10 - Eigenvalues problem

Lecture 10 - Eigenvalues problem Lecture 10 - Eigenvalues problem Department of Computer Science University of Houston February 28, 2008 1 Lecture 10 - Eigenvalues problem Introduction Eigenvalue problems form an important class of problems

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one

More information

Reduction of two-loop Feynman integrals. Rob Verheyen

Reduction of two-loop Feynman integrals. Rob Verheyen Reduction of two-loop Feynman integrals Rob Verheyen July 3, 2012 Contents 1 The Fundamentals at One Loop 2 1.1 Introduction.............................. 2 1.2 Reducing the One-loop Case.....................

More information

1. Introduction. Applying the QR algorithm to a real square matrix A yields a decomposition of the form

1. Introduction. Applying the QR algorithm to a real square matrix A yields a decomposition of the form BLOCK ALGORITHMS FOR REORDERING STANDARD AND GENERALIZED SCHUR FORMS LAPACK WORKING NOTE 171 DANIEL KRESSNER Abstract. Block algorithms for reordering a selected set of eigenvalues in a standard or generalized

More information

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column

More information

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice 3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is

More information

Matrix & Linear Algebra

Matrix & Linear Algebra Matrix & Linear Algebra Jamie Monogan University of Georgia For more information: http://monogan.myweb.uga.edu/teaching/mm/ Jamie Monogan (UGA) Matrix & Linear Algebra 1 / 84 Vectors Vectors Vector: A

More information

Intel Math Kernel Library (Intel MKL) LAPACK

Intel Math Kernel Library (Intel MKL) LAPACK Intel Math Kernel Library (Intel MKL) LAPACK Linear equations Victor Kostin Intel MKL Dense Solvers team manager LAPACK http://www.netlib.org/lapack Systems of Linear Equations Linear Least Squares Eigenvalue

More information

On the -Sylvester Equation AX ± X B = C

On the -Sylvester Equation AX ± X B = C Department of Applied Mathematics National Chiao Tung University, Taiwan A joint work with E. K.-W. Chu and W.-W. Lin Oct., 20, 2008 Outline 1 Introduction 2 -Sylvester Equation 3 Numerical Examples 4

More information

Lecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation

Lecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation Lecture Cholesky method QR decomposition Terminology Linear systems: Eigensystems: Jacobi transformations QR transformation Cholesky method: For a symmetric positive definite matrix, one can do an LU decomposition

More information

Automatica, 33(9): , September 1997.

Automatica, 33(9): , September 1997. A Parallel Algorithm for Principal nth Roots of Matrices C. K. Koc and M. _ Inceoglu Abstract An iterative algorithm for computing the principal nth root of a positive denite matrix is presented. The algorithm

More information

Lecture 2: Computing functions of dense matrices

Lecture 2: Computing functions of dense matrices Lecture 2: Computing functions of dense matrices Paola Boito and Federico Poloni Università di Pisa Pisa - Hokkaido - Roma2 Summer School Pisa, August 27 - September 8, 2018 Introduction In this lecture

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 3 Review of Matrix Algebra Vectors and matrices are essential for modern analysis of systems of equations algebrai, differential, functional, etc In this

More information

Canonical lossless state-space systems: staircase forms and the Schur algorithm

Canonical lossless state-space systems: staircase forms and the Schur algorithm Canonical lossless state-space systems: staircase forms and the Schur algorithm Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics School of Mathematical Sciences Projet APICS Universiteit

More information

Factorized Solution of Sylvester Equations with Applications in Control

Factorized Solution of Sylvester Equations with Applications in Control Factorized Solution of Sylvester Equations with Applications in Control Peter Benner Abstract Sylvester equations play a central role in many areas of applied mathematics and in particular in systems and

More information

Linear System of Equations

Linear System of Equations Linear System of Equations Linear systems are perhaps the most widely applied numerical procedures when real-world situation are to be simulated. Example: computing the forces in a TRUSS. F F 5. 77F F.

More information

Numerical Linear Algebra

Numerical Linear Algebra Chapter 3 Numerical Linear Algebra We review some techniques used to solve Ax = b where A is an n n matrix, and x and b are n 1 vectors (column vectors). We then review eigenvalues and eigenvectors and

More information

Centro de Processamento de Dados, Universidade Federal do Rio Grande do Sul,

Centro de Processamento de Dados, Universidade Federal do Rio Grande do Sul, A COMPARISON OF ACCELERATION TECHNIQUES APPLIED TO THE METHOD RUDNEI DIAS DA CUNHA Computing Laboratory, University of Kent at Canterbury, U.K. Centro de Processamento de Dados, Universidade Federal do

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

Block-Partitioned Algorithms for. Solving the Linear Least Squares. Problem. Gregorio Quintana-Orti, Enrique S. Quintana-Orti, and Antoine Petitet

Block-Partitioned Algorithms for. Solving the Linear Least Squares. Problem. Gregorio Quintana-Orti, Enrique S. Quintana-Orti, and Antoine Petitet Block-Partitioned Algorithms for Solving the Linear Least Squares Problem Gregorio Quintana-Orti, Enrique S. Quintana-Orti, and Antoine Petitet CRPC-TR9674-S January 1996 Center for Research on Parallel

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Consistency and efficient solution of the Sylvester equation for *-congruence

Consistency and efficient solution of the Sylvester equation for *-congruence Electronic Journal of Linear Algebra Volume 22 Volume 22 (2011) Article 57 2011 Consistency and efficient solution of the Sylvester equation for *-congruence Fernando de Teran Froilan M. Dopico Follow

More information

An SVD-Like Matrix Decomposition and Its Applications

An SVD-Like Matrix Decomposition and Its Applications An SVD-Like Matrix Decomposition and Its Applications Hongguo Xu Abstract [ A matrix S C m m is symplectic if SJS = J, where J = I m I m. Symplectic matrices play an important role in the analysis and

More information