STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS
Electronic Transactions on Numerical Analysis, Volume 44, 2015. ETNA. Copyright 2015.

STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS

VOLKER MEHRMANN AND HONGGUO XU

Abstract. We discuss the long-standing problem of how to deflate the part associated with the eigenvalue infinity in a structured matrix pencil using structure-preserving unitary transformations. We derive such a deflation procedure and apply this new technique to symmetric, Hermitian, or alternating pencils and, in a modified form, to (anti-)palindromic pencils. We present a detailed error and perturbation analysis of this and other deflation procedures and demonstrate the properties of the new algorithm with several numerical examples.

Key words. structured staircase form, structured Kronecker canonical form, symmetric pencil, Hermitian pencil, alternating pencil, palindromic pencil, linear quadratic control, H∞ control

AMS subject classifications. 65F15, 15A21, 93B40

1. Introduction. In this paper we develop numerical methods to compute a structure-preserving deflation, under unitary (or, in the real case, real orthogonal) equivalence transformations, of the part associated with the eigenvalue infinity in eigenvalue problems for structured matrix pencils

(1.1)    (λN − M)x = 0,    N = σ_N N^⋆,  M = σ_M M^⋆ ∈ K^{n,n},  σ_N, σ_M ∈ {±1},

where ⋆ is either T, the transpose, or *, the conjugate transpose. Here K stands for either R or C. Eigenvalue problems of this form arise in many applications, in particular in the context of linear quadratic optimal control problems, H∞ control problems, and other applications; see, e.g., [1, 13]. In most applications one needs information about the Kronecker structure (eigenvalues, eigenvectors, multiplicities, Jordan chains, sign characteristic) of these pencils; see [18]. Numerically, in order to obtain such information, structure-preserving methods (using congruence rather than equivalence) are preferred, because only these preserve all possible characteristics, and they reveal the symmetries in the Kronecker
structure of a structured pencil of the form (1.1). A structure-preserving method may also save computational time and work, since it can exploit the symmetries in the matrix structures to simplify the computation. In order to compute the Kronecker structure of (1.1) in a numerically backward stable way, a structured staircase form under unitary structure-preserving transformations has been derived in [5] and implemented as production software in [4]. In the real case the unitary transformations can be chosen to be real orthogonal, but to simplify the presentation, when speaking of unitary transformations in the following we implicitly mean real orthogonal transformations in the case of real matrices. The following result summarizes the staircase form.

(Received May; accepted October; published online in January. Recommended by D. Szyld. Supported by Deutsche Forschungsgemeinschaft through the DFG Research Center MATHEON "Mathematics for Key Technologies" in Berlin and by the University of Kansas, Dept. of Mathematics, during a research visit in 2013. The second author is partially supported by the Alexander von Humboldt Foundation and the Deutsche Forschungsgemeinschaft through the DFG Research Center MATHEON "Mathematics for Key Technologies" in Berlin. Institut für Mathematik, MA 4-5, TU Berlin, Str. des 17. Juni 136, D-10623 Berlin, Germany (mehrmann@math.tu-berlin.de). Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA (xu@math.ku.edu).)
THEOREM 1.1 (Structured staircase form). For a pencil λN − M with N = σ_N N^⋆, M = σ_M M^⋆ ∈ K^{n,n}, there exists a real orthogonal or unitary matrix U ∈ K^{n,n}, respectively, such that

(1.2)
U^⋆NU =
[ N_11            ...  N_1m              N_1,m+1            N_1,m+2              ...  N_1,2m   0 ;
  ...                                                                                          ;
  σ_N N_1m^⋆      ...  N_mm              N_m,m+1            0                    ...  0        0 ;
  σ_N N_1,m+1^⋆   ...  σ_N N_m,m+1^⋆     N_m+1,m+1          0                    ...  0        0 ;
  σ_N N_1,m+2^⋆   ...  σ_N N_{m-1,m+2}^⋆ 0                  0                    ...  0        0 ;
  ...                                                                                          ;
  σ_N N_1,2m^⋆    0    ...                                                            0        0 ;
  0               0    ...                                                            0        0 ],

with block row and column sizes n_1, ..., n_m, l, q_m, ..., q_2, q_1, and

U^⋆MU =
[ M_11            ...  M_1m              M_1,m+1            M_1,m+2              ...  M_1,2m   M_1,2m+1 ;
  ...                                                                                                  ;
  σ_M M_1m^⋆      ...  M_mm              M_m,m+1            M_m,m+2              0    ...      0        ;
  σ_M M_1,m+1^⋆   ...  σ_M M_m,m+1^⋆     M_m+1,m+1          0                    ...           0        ;
  σ_M M_1,m+2^⋆   ...  σ_M M_m,m+2^⋆     0                  0                    ...           0        ;
  ...                                                                                                  ;
  σ_M M_1,2m+1^⋆  0    ...                                                                     0        ],

where q_1 ≥ n_1, q_2 ≥ n_2, ..., q_m ≥ n_m,

N_{j,2m+1−j} ∈ K^{n_j,q_{j+1}}, 1 ≤ j ≤ m−1,    N_m+1,m+1 = [ Δ 0 ; 0 0 ],  Δ = σ_N Δ^⋆ ∈ K^{2p,2p},

M_{j,2m+2−j} = [ Γ_j 0 ] ∈ K^{n_j,q_j},  Γ_j ∈ R^{n_j,n_j},  1 ≤ j ≤ m,

M_m+1,m+1 = [ Σ_11 Σ_12 ; σ_M Σ_12^⋆ Σ_22 ],  Σ_11 = σ_M Σ_11^⋆ ∈ K^{2p,2p},  Σ_22 = σ_M Σ_22^⋆ ∈ K^{l−2p,l−2p},

and the blocks Δ, Σ_22, and Γ_j, j = 1, ..., m, are nonsingular.

Proof. The proof for the real case with σ_N = −1 and σ_M = 1 has been given in [5]; the other cases follow analogously.

It is clear that the transformation introduced in Theorem 1.1 preserves the structure of the pencil, but note that in the real case, or in the complex case with ⋆ being the conjugate transpose, this transformation is a congruence transformation, while in the complex case with ⋆ being the transpose it is just a structure-preserving equivalence transformation, not a congruence transformation. This staircase form allows one to deflate the singular part and some of the infinite eigenvalue parts of the pencil in a structure-preserving way, so that for eigenvalue, eigenvector, and
invariant subspace computations only the central block λN_m+1,m+1 − M_m+1,m+1 of the form (1.2) has to be considered, which is regular and of index at most one, i.e., it has only finite eigenvalues plus infinite eigenvalues with Kronecker blocks of size one, due to the fact that Δ and Σ_22 are nonsingular. The staircase form has recently been extended to parameter-dependent matrix pencils arising in the control of differential-algebraic systems with variable coefficients [9], where, however, the transformation is more complex.

From the staircase form it is clear how to deflate all the parts associated with higher (than one) indices, and the singular parts, using unitary transformations (despite the usual difficulties with rank decisions). However, for a long time it remained an open problem how to deflate the remaining index-one part associated with the eigenvalue infinity in a structure-preserving way using unitary structure-preserving transformations, i.e., how to reduce the central subpencil to a subpencil λN_f − M_f of the same structure (with a nonsingular N_f) that contains all the information about the finite eigenvalues. In this paper we solve this problem and develop a new technique for the structure-preserving deflation of infinite eigenvalues via unitary transformations. We present an error and perturbation analysis and also several numerical examples that demonstrate the properties of the new method and compare it to non-unitary structure-preserving transformations.

2. Deflation of the index one part. A simple way to achieve a structure-preserving but non-unitary deflation is to use the Schur complement. Compute a factorization

(2.1)    U^⋆N = [ N_1 ; 0 ]

of N with N_1 of full row rank. This can be done via a rank-revealing QR decomposition or the singular value decomposition; see [8]. The critical part in this factorization is the decision about the rank of N, which is done in the usual way by setting those parts to zero that are associated with the
singular values which are smaller than the machine precision times the norm of N; see [6] for a detailed discussion of this topic. Using the structure of the pencil, we have

(2.2)    Ñ = U^⋆NU = [ Ñ_11 0 ; 0 0 ],    M̃ = U^⋆MU = [ M̃_11 M̃_12 ; σ_M M̃_12^⋆ M̃_22 ],

where Ñ_11 = σ_N Ñ_11^⋆ and M̃_ii = σ_M M̃_ii^⋆, i = 1, 2. By construction Ñ_11 is invertible, and since λN − M is regular and of index at most one, M̃_22 is also invertible. Then with

(2.3)    L = [ I 0 ; −σ_M M̃_22^{−1} M̃_12^⋆ I ]

one has

(2.4)    (UL)^⋆ N (UL) = L^⋆ Ñ L = [ Ñ_11 0 ; 0 0 ],    (UL)^⋆ M (UL) = L^⋆ M̃ L = [ S 0 ; 0 M̃_22 ],

with the Schur complement

(2.5)    S = M̃_11 − σ_M M̃_12 M̃_22^{−1} M̃_12^⋆.
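As a small illustration of (2.1)-(2.5), the Schur complement deflation for a real skew-symmetric/symmetric pencil (σ_N = −1, σ_M = 1) can be sketched as follows. This is our own minimal sketch, not the authors' code; the routine name schur_deflate, the SVD-based rank decision, and the tolerance are illustrative choices.

```python
import numpy as np

def schur_deflate(N, M, tol=1e-12):
    """Deflate the index-one infinite part of the regular pencil
    lambda*N - M with N = -N^T (skew-symmetric) and M = M^T.
    Returns (N11, S) so that lambda*N11 - S carries the finite spectrum."""
    # Factorization (2.1)/(2.2): U^T N U = diag(Ntil11, 0) via an SVD of N.
    U, s, _ = np.linalg.svd(N)
    r = int(np.sum(s > tol * s[0]))          # numerical rank decision for N
    Ntil = U.T @ N @ U                       # skew-symmetric; trailing block ~ 0
    Mtil = U.T @ M @ U                       # symmetric
    N11 = Ntil[:r, :r]                       # invertible skew-symmetric block
    M11, M12, M22 = Mtil[:r, :r], Mtil[:r, r:], Mtil[r:, r:]
    # Schur complement (2.5) with sigma_M = +1:
    S = M11 - M12 @ np.linalg.solve(M22, M12.T)
    return N11, S

# Example: a pencil with known finite spectrum, hidden by a congruence.
rng = np.random.default_rng(0)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
N0 = np.zeros((4, 4)); N0[:2, :2] = J          # rank-2 skew-symmetric N
A = rng.standard_normal((4, 4)); M0 = A + A.T  # generic symmetric M
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
N, M = Q.T @ N0 @ Q, Q.T @ M0 @ Q              # congruence keeps the structure
N11, S = schur_deflate(N, M)
finite = np.linalg.eigvals(np.linalg.solve(N11, S))
```

Since the congruence with Q preserves both the structure and the eigenvalues, the deflated pencil λN11 − S has the same finite eigenvalues as the block-diagonal pencil one obtains from the Schur complement of M0 directly.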
Hence λN − M is equivalent to the decoupled block diagonal pencil (λÑ_11 − S) ⊕ (λ·0 − M̃_22). This deflation procedure via a non-unitary transformation preserves the structure, but if M̃_22 is ill-conditioned with respect to inversion, then the eigenvalues of the pencil λÑ_11 − S may be corrupted due to the ill-conditioning in the computation of the Schur complement S. We analyze the properties of the Schur complement approach in Section 4 and present some numerical examples in Section 7.

Another possibility to deflate the part associated with the infinite eigenvalues is to use equivalence transformations that are unitary but not structure-preserving. Starting again with the factorization (2.1), we form U^⋆M = [ M_1 ; M_2 ], partitioned analogously, and let V be a unitary matrix such that

(2.6)    M_2 V = [ 0 M̂_22 ],

with M̂_22 square and nonsingular. Setting

(2.7)    N̂ = U^⋆NV = [ N̂_11 N̂_12 ; 0 0 ],    M̂ = U^⋆MV = [ M̂_11 M̂_12 ; 0 M̂_22 ],

then, since N̂_11 and M̂_22 in (2.7) are nonsingular, we can eliminate N̂_12 and M̂_12 simultaneously by block Gaussian eliminations. It follows that λN − M is equivalent to the decoupled block diagonal pencil (λN̂_11 − M̂_11) ⊕ (λ·0 − M̂_22). The computation of the subpencil λN̂_11 − M̂_11 can be performed in a backward stable way, but it has lost its symmetry structure, and as a consequence it is hard to retain the symmetric structure of the finite spectrum of λN − M in further computations. Thus this method is inadequate for many of the tasks where the exact eigenvalue symmetry is needed. We present some numerical examples that demonstrate this deficiency in Section 7. Furthermore, in the resulting pencil the sign characteristics, which are invariants of the pencil under structure-preserving transformations [18], are not preserved.

3. Deflation under unitary structure preserving transformations. To derive a structure-preserving deflation under unitary structure-preserving transformations, consider the unitary matrices U, V obtained in (2.1) and (2.6). Form

(3.1)    V^⋆NV =: [ N_11 N_12 ; σ_N N_12^⋆ N_22 ],    V^⋆MV =: [ M_11 M_12 ; σ_M M_12^⋆ M_22 ].
Then we have the following surprising result.

THEOREM 3.1. Consider a structured pencil of the form (1.1) which is regular and of index at most one. Let Q = U^⋆V = [ Q_11 Q_12 ; Q_21 Q_22 ] be partitioned conformably with the block structure in (3.1), where U, V are obtained in (2.1) and (2.6). Then Q_11 is invertible and

λN_11 − M_11 = Q_11^⋆ (λÑ_11 − S) Q_11 = Q_11^⋆ (λN̂_11 − M̂_11),
where the subpencils λN_11 − M_11, λÑ_11 − S, and λN̂_11 − M̂_11 are given in (3.1), (2.4), and (2.7), respectively. Consequently, the finite eigenvalues of λN − M are exactly the finite eigenvalues of the structured subpencil λN_11 − M_11.

Proof. By definition of Q one has

(3.2)    λV^⋆NV − V^⋆MV = Q^⋆ (λN̂ − M̂)

and

(3.3)    λV^⋆NV − V^⋆MV = Q^⋆ (λÑ − M̃) Q.

Based on (2.2) one has

[ N_1 ; 0 ] = U^⋆N = [ Ñ_11 0 ; 0 0 ] U^⋆,    [ M_1 ; M_2 ] = U^⋆M = [ M̃_11 M̃_12 ; σ_M M̃_12^⋆ M̃_22 ] U^⋆,

and then (2.6) becomes

[ σ_M M̃_12^⋆ M̃_22 ] Q = [ 0 M̂_22 ].

From this relation we obtain that

(3.4)    σ_M M̃_12^⋆ Q_11 + M̃_22 Q_21 = 0

and

(3.5)    σ_M M̃_12^⋆ = M̂_22 Q_12^⋆,    M̃_22 = M̂_22 Q_22^⋆.

Suppose that Q_11 is singular; then there exists a nonzero vector x such that Q_11 x = 0. Post-multiplying (3.4) by x and using the fact that M̃_22 is invertible, one has Q_21 x = 0. Then [ Q_11 ; Q_21 ] x = 0, contradicting the fact that Q is unitary. Therefore Q_11 must be invertible.

Comparing the (1,1) blocks on both sides of (3.2) leads to the first equivalence relation λN_11 − M_11 = Q_11^⋆ (λN̂_11 − M̂_11). Since M̃_22 is invertible, from the second relation in (3.5) so are M̂_22 and Q_22. From (3.4) it follows that

(3.6)    Q_21 Q_11^{−1} = −σ_M M̃_22^{−1} M̃_12^⋆.

Then the matrix L defined in (2.3) becomes

L = [ I 0 ; Q_21 Q_11^{−1} I ],

and defining

R = [ Q_11 Q_12 ; 0 Q̂_22 ],    Q̂_22 := Q_22 − Q_21 Q_11^{−1} Q_12,
and using the fact that Q is unitary, it follows that Q = L R. Hence (3.3) can be written as

(3.7)    λV^⋆NV − V^⋆MV = R^⋆ (λ L^⋆ Ñ L − L^⋆ M̃ L) R,

and comparing the (1,1) blocks on both sides of (3.7) leads to the second equivalence relation λN_11 − M_11 = Q_11^⋆ (λÑ_11 − S) Q_11.

One can also obtain another perception of how the subpencils λÑ_11 − S and λN_11 − M_11 are related to the pencils λV^⋆NV − V^⋆MV and λÑ − M̃. The relation (3.7) implies that

R^{−⋆} (λV^⋆NV − V^⋆MV) R^{−1} = L^⋆ (λÑ − M̃) L = (λÑ_11 − S) ⊕ (λ·0 − M̃_22).

Setting

L̂ = L [ Q_11 0 ; 0 I ] = [ Q_11 0 ; Q_21 I ],

we have Q = L̂ R̂ with

R̂ = [ I Q_11^{−1} Q_12 ; 0 Q̂_22 ],

and

(3.8)    R̂^{−⋆} (λV^⋆NV − V^⋆MV) R̂^{−1} = L̂^⋆ (λÑ − M̃) L̂ = (λN_11 − M_11) ⊕ (λ·0 − M̃_22),

which has the pencil λN_11 − M_11 in the (1,1) position. The pencil λV^⋆NV − V^⋆MV, however, is unitarily congruent (or, in the complex transpose case, equivalent) to the original pencil, and the finite eigenvalue part can simply be extracted from its (1,1) block.

It follows from (3.8) that the columns of the matrices

[ Q_11 ; Q_21 ]  and  [ 0 ; I ]

form orthonormal bases for the right deflating subspaces corresponding to the finite and infinite eigenvalues of λÑ − M̃, respectively. In contrast to this, (2.4) shows that the Schur complement method simply uses the non-orthonormal basis given by the columns of [ I ; Q_21 Q_11^{−1} ], i.e., the first block column of L, for a structure-preserving transformation to block diagonalize λÑ − M̃. For the original pencil λN − M, with V = [ V_1 V_2 ] and U = [ U_1 U_2 ], the columns of

V_1 = [ V_11 ; V_21 ] = U [ Q_11 ; Q_21 ]  and  U_2 = [ U_12 ; U_22 ]

span the right deflating subspaces corresponding to the finite and infinite eigenvalues, respectively. The minimal angle between the two right deflating subspaces is

Θ_min = min_{x ∈ range V_1, y ∈ range U_2} cos^{−1} ( |x^* y| / (‖x‖ ‖y‖) ) = cos^{−1} ‖V_1^* U_2‖ = cos^{−1} ‖Q_21‖,
where ‖·‖ denotes the Euclidean norm for vectors and the spectral norm for matrices. The minimal angle measures the closeness of the two deflating subspaces. For Q there exists a CS decomposition; see [8]. If the size n_1 of Q_11 is larger than or equal to the size n_2 of Q_22, then this has the form

[ W_1 0 ; 0 W_2 ]^* [ Q_11 Q_12 ; Q_21 Q_22 ] [ Z_1 0 ; 0 Z_2 ] = [ Φ 0 −Ψ ; 0 I_{n_1−n_2} 0 ; Ψ 0 Φ ],

where W_1, W_2, Z_1, Z_2 are unitary matrices and the diagonal matrices Φ = diag(φ_1, ..., φ_{n_2}) and Ψ = diag(ψ_1, ..., ψ_{n_2}) have nonnegative diagonal entries satisfying Φ² + Ψ² = I_{n_2}. If n_1 ≤ n_2, then one has

[ W_1 0 ; 0 W_2 ]^* [ Q_11 Q_12 ; Q_21 Q_22 ] [ Z_1 0 ; 0 Z_2 ] = [ Φ −Ψ 0 ; Ψ Φ 0 ; 0 0 I_{n_2−n_1} ].

Suppose that the singular values of Q_21 are cos θ_1 ≥ ... ≥ cos θ_{min{n_1,n_2}}. Then, except for possibly some extra singular values at 1, the singular values of Q_11 are sin θ_1, ..., sin θ_{min{n_1,n_2}}, where 0 ≤ θ_1 ≤ ... ≤ θ_{min{n_1,n_2}} ≤ π/2. Thus we have Θ_min = min_j θ_j. Introducing

(3.9)    ρ := ‖M̃_22^{−1} M̃_12^⋆‖ = ‖M̃_12 M̃_22^{−1}‖

and using (3.6), one has

cot Θ_min = max_j cot θ_j = ρ,  and hence  Θ_min = cot^{−1} ρ.

Thus the smallest singular values of Q_11 and Q_22 are given by

(3.10)    σ_min(Q_11) = σ_min(Q_22) = sin Θ_min = 1/√(1+ρ²),    ‖Q_12‖ = ‖Q_21‖ = ρ/√(1+ρ²).

If ρ is large, which happens commonly when M̃_22 is nearly singular, then the numerical computation of the Schur complement S in (2.5) may be subject to large errors. On the other hand, in this case the block N_11 = Q_11^⋆ Ñ_11 Q_11 may be close to a singular matrix as well, which may also lead to big numerical problems, in particular when deciding about the rank of N.

Let us summarize the methods for computing the subpencil λN_11 − M_11 defined in (3.1) in the following algorithms. The first algorithm, Algorithm 1, uses the factorizations (2.1) and (2.6) to compute U and V and eventually the subpencil λN_11 − M_11. The second algorithm, Algorithm 2, is a different version, which computes the factorization (2.2) explicitly; the factorization (3.1) is then computed using Q = U^⋆V instead of V.
ALGORITHM 1. Consider a regular structured pencil λN − M of index at most one with N = σ_N N^⋆ and M = σ_M M^⋆.
1. Use a rank-revealing QR or singular value decomposition to compute a unitary matrix U such that

U^⋆N = [ N_1 ; 0 ],

where N_1 is of full row rank. Partition U = [ U_1 U_2 ] accordingly and set M_2 = U_2^⋆ M.
2. Use a rank-revealing QR or singular value decomposition to determine a unitary matrix V such that

M_2 V = [ 0 M̂_22 ],    det M̂_22 ≠ 0.

3. Compute

V^⋆NV = [ N_11 N_12 ; σ_N N_12^⋆ N_22 ],    V^⋆MV = [ M_11 M_12 ; σ_M M_12^⋆ M_22 ],

and return λN_11 − M_11 as the pencil associated with the finite spectrum of λN − M.

Algorithm 1 requires two singular value decompositions or rank-revealing QR factorizations as well as the associated matrix-matrix multiplications. The most difficult part is the rank decision for the matrix N in step 1, which affects the number of finite eigenvalues and the size, and therefore also the conditioning, of M_2. An alternative version of Algorithm 1 is given in the following algorithm.

ALGORITHM 2. Consider a regular structured pencil λN − M of index at most one with N = σ_N N^⋆ and M = σ_M M^⋆.
1. Use a Schur-like decomposition of N to compute a unitary matrix U such that

U^⋆NU = [ Ñ_11 0 ; 0 0 ] =: Ñ,    det Ñ_11 ≠ 0,

and compute

M̃ = U^⋆MU = [ M̃_11 M̃_12 ; σ_M M̃_12^⋆ M̃_22 ].

2. Use a rank-revealing QR or singular value decomposition to compute a unitary matrix Q = [ Q_11 Q_12 ; Q_21 Q_22 ], partitioned accordingly, such that

[ σ_M M̃_12^⋆ M̃_22 ] Q = [ 0 M̂_22 ].

3. Compute

Q^⋆ÑQ = [ N_11 N_12 ; σ_N N_12^⋆ N_22 ],    Q^⋆M̃Q = [ M_11 M_12 ; σ_M M_12^⋆ M_22 ],

or, if only λN_11 − M_11 is needed, then

N_11 = Q_11^⋆ Ñ_11 Q_11,    M_11 = Q_11^⋆ (M̃_11 Q_11 + M̃_12 Q_21).
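A minimal floating-point sketch of Algorithm 1 for the real skew-symmetric/symmetric case (σ_N = −1, σ_M = 1) follows. This is our own illustration, not the authors' implementation: SVDs play the role of both rank-revealing factorizations, and the routine name deflate_unitary and the tolerance are hypothetical choices. The usage part also checks the angle relation (3.10) numerically via Q = U^⋆V.

```python
import numpy as np

def deflate_unitary(N, M, tol=1e-12):
    """Algorithm 1 (sketch) for lambda*N - M with N skew-symmetric, M symmetric:
    structure-preserving deflation of the index-one infinite part."""
    n = N.shape[0]
    # Step 1: U with U^T N = [N1; 0], N1 of full row rank (via an SVD of N).
    U, s, _ = np.linalg.svd(N)
    r = int(np.sum(s > tol * s[0]))
    M2 = U[:, r:].T @ M                           # M2 = U2^T M
    # Step 2: unitary V with M2 V = [0, M22hat], M22hat nonsingular.
    _, _, Zt = np.linalg.svd(M2)
    V = np.hstack([Zt[n - r:].T, Zt[:n - r].T])   # null-space columns first
    # Step 3: congruence with V; the leading r x r blocks carry the
    # finite spectrum and inherit the skew-symmetric/symmetric structure.
    NV, MV = V.T @ N @ V, V.T @ M @ V
    return NV[:r, :r], MV[:r, :r], U, V, r

rng = np.random.default_rng(1)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
N0 = np.zeros((4, 4)); N0[:2, :2] = J
A = rng.standard_normal((4, 4)); M0 = A + A.T
Q0, _ = np.linalg.qr(rng.standard_normal((4, 4)))
N, M = Q0.T @ N0 @ Q0, Q0.T @ M0 @ Q0
N11, M11, U, V, r = deflate_unitary(N, M)
```

In exact arithmetic, σ_min of the (1,1) block of Q = U^⋆V equals 1/√(1+ρ²) with ρ from (3.9), which can be verified on this example to rounding accuracy.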
Algorithm 2 uses, in contrast to Algorithm 1, a structured Schur-like decomposition in the first step and performs two unitary transformations on the pencil. The other costs are comparable to those of Algorithm 1. We will perform the error and perturbation analysis for these algorithms, as well as for the Schur complement approach, in the next section.

4. Error and perturbation analysis. In this section we present a detailed error and perturbation analysis for the different deflation procedures presented in the previous sections. Our error analysis for the structure-preserving method under unitary transformations will be based on Algorithm 2; the error analysis for Algorithm 1 is essentially the same. Let u be the unit roundoff, and denote for each of the matrices defined in the previous section the computed counterpart by the corresponding calligraphic letter, e.g., let 𝒰, 𝒩̃_11 be the computed U and Ñ_11 in step 1 of Algorithm 2. Following standard backward error analysis [8], there exists a unitary matrix Ũ such that

Ũ^⋆ (N + δN_u) Ũ = [ 𝒩̃_11 0 ; 0 0 ] = 𝒩̃,

where ‖Ũ − 𝒰‖ = O(u), δN_u = σ_N δN_u^⋆, and ‖δN_u‖ = O(‖N‖ u). For the computed ℳ̃ we have

ℳ̃ = Ũ^⋆ (M + δM_u) Ũ = [ ℳ̃_11 ℳ̃_12 ; σ_M ℳ̃_12^⋆ ℳ̃_22 ],

with δM_u = σ_M δM_u^⋆ and ‖δM_u‖ = O(‖M‖ u). Setting X = U^⋆Ũ, we obtain from the exact factorization (2.2) that

(4.1)    𝒩̃ = [ 𝒩̃_11 0 ; 0 0 ] = X^⋆ (Ñ + U^⋆ δN_u U) X = X^⋆ [ Ñ_11 + δÑ_11 δÑ_12 ; σ_N δÑ_12^⋆ δÑ_22 ] X.

Assume that ‖δÑ_11‖ < σ_min(Ñ_11). Then Ñ_11 + δÑ_11 is nonsingular. One can show that for

K = [ I −G ; 0 I ],    G = (Ñ_11 + δÑ_11)^{−1} δÑ_12,

one has

K^⋆ [ Ñ_11 + δÑ_11 δÑ_12 ; σ_N δÑ_12^⋆ δÑ_22 ] K = [ Ñ_11 + δÑ_11 0 ; 0 δÑ_22 − σ_N δÑ_12^⋆ G ].

Introduce the unitary matrix

X̂ = [ I −G ; G^⋆ I ] [ (I + GG^⋆)^{−1/2} 0 ; 0 (I + G^⋆G)^{−1/2} ],

where (I + GG^⋆)^{1/2} and (I + G^⋆G)^{1/2} are the positive definite square roots of I + GG^⋆ and I + G^⋆G, respectively. Then one has

K = X̂ [ (I + GG^⋆)^{−1/2} 0 ; −(I + G^⋆G)^{−1/2} G^⋆ (I + G^⋆G)^{1/2} ],
and then, up to terms of order O(‖G‖² u), the (1,1) block of X̂^⋆ [ Ñ_11 + δÑ_11 δÑ_12 ; σ_N δÑ_12^⋆ δÑ_22 ] X̂ equals (I + GG^⋆)^{1/2} (Ñ_11 + δÑ_11) (I + GG^⋆)^{1/2}. Comparing with (4.1), one can show that X = X̂ diag(X_1, X_2), where X_1, X_2 are unitary. Without loss of generality we set

X = X̂ = [ I + δX_11 δX_12 ; δX_21 I + δX_22 ] =: I + δX,

and introduce the condition number κ_N = ‖N‖ ‖Ñ_11^{−1}‖. Since ‖G‖ = O(κ_N u) and

δX_12 = −G (I + G^⋆G)^{−1/2},  δX_21 = G^⋆ (I + GG^⋆)^{−1/2},
δX_11 = (I + GG^⋆)^{−1/2} − I,  δX_22 = (I + G^⋆G)^{−1/2} − I,

one has

‖δX_12‖, ‖δX_21‖ = O(‖G‖) = O(κ_N u),    ‖δX_11‖, ‖δX_22‖ = O(‖G‖²) = O(κ_N² u²).

Clearly, here κ_N measures the sensitivity of the null space of N. Hence

𝒩̃_11 = (I + δX_11)^⋆ Ñ_11 (I + δX_11) + O(‖N‖ u) = Ñ_11 + δÑ_11,    ‖δÑ_11‖ = O((κ_N² u + 1) ‖N‖ u),

and it follows from ℳ̃ = X^⋆ (M̃ + U^⋆ δM_u U) X that

(4.2)    ℳ̃ = M̃ + δM̃,    ‖δM̃‖ = O(κ_N ‖M‖ u).

Partition ℳ̃ conformably with M̃, assume that ‖δM̃_22‖ < σ_min(M̃_22) so that ℳ̃_22 is invertible, and let 𝒮 be the computed Schur complement based on ℳ̃. Then, under the assumption that a backward stable linear system solver is used for computing ℳ̃_22^{−1} ℳ̃_12^⋆ columnwise, one has

𝒮 = ℳ̃_11 − σ_M ℳ̃_12 ℳ̃_22^{−1} ℳ̃_12^⋆ + E_s,

with

‖E_s‖ = O((‖M̃_11‖ + ρ ‖M̃_12‖ + ρ² ‖M̃_22‖) u) = O((1 + ρ)² ‖M‖ u),

where ρ is defined in (3.9). Some elementary calculations and (4.2) lead to

𝒮 = S + E_s + F_s =: S + δS,
with

‖F_s‖ = O((1 + ρ)² κ_N ‖M‖ u),

where F_s results from the errors introduced in M̃ by performing the transformation with Ũ. Note that due to the structure of the error we could absorb E_s into F_s. Altogether we have

λ𝒩̃_11 − 𝒮 = λ(Ñ_11 + δÑ_11) − (S + δS),

with ‖δÑ_11‖ = O((κ_N² u + 1) ‖N‖ u) and ‖δS‖ = O((1 + ρ)² κ_N ‖M‖ u). It then follows that the columns of

[ I ; −σ_M ℳ̃_22^{−1} ℳ̃_12^⋆ ]  and  [ 0 ; I ]

span the right deflating subspaces corresponding to the finite and infinite eigenvalues, respectively, of the pencil

λ𝒩̃ − (ℳ̃ + [ E_s 0 ; 0 0 ]),

and λ𝒩̃_11 − 𝒮 and λ·0 − ℳ̃_22 are the corresponding computed subpencils associated with these bases. Let 𝒬 be unitary such that

[ σ_M ℳ̃_12^⋆ ℳ̃_22 ] 𝒬 = [ 0 ℳ̂_22 ].

Then −σ_M ℳ̃_22^{−1} ℳ̃_12^⋆ = 𝒬_21 𝒬_11^{−1}, and therefore the right deflating subspace associated with the finite eigenvalues is spanned by the orthonormal columns of the matrix

[ 𝒬_11 ; 𝒬_21 ].

Introducing 𝒴 = Q^⋆ 𝒬, it follows from (4.2) that

(4.3)    [ 0 M̂_22 ] 𝒴 = [ σ_M M̃_12^⋆ M̃_22 ] 𝒬 = [ 0 ℳ̂_22 ] + E,    E = [ E_1 E_2 ] = −[ σ_M δM̃_12^⋆ δM̃_22 ] 𝒬.

Similar to the matrix X, one may express

𝒴 = [ I + δY_11 δY_12 ; δY_21 I + δY_22 ] = [ I −G_Y ; G_Y^⋆ I ] [ (I + G_Y G_Y^⋆)^{−1/2} 0 ; 0 (I + G_Y^⋆ G_Y)^{−1/2} ],

where G_Y = (ℳ̂_22 + E_2)^{−1} E_1. One then has

‖δY_12‖, ‖δY_21‖ = O(κ_N κ_M u),    ‖δY_11‖, ‖δY_22‖ = O(κ_N² κ_M² u²),

where κ_M = ‖M̃_22^{−1}‖ ‖M‖.
Since Ũ^⋆ (λ(N + δN_u) − (M + δM_u)) Ũ𝒬 is block upper triangular, and because of the structure of the pencil, the columns of

Ũ [ 𝒬_11 ; 𝒬_21 ]  and  Ũ [ 0 ; I ]

span the right deflating subspaces of λ(N + δN_u) − (M + δM_u) corresponding to the finite and infinite eigenvalues, respectively. Hence the minimal angle between these two subspaces is given by Θ̃_min = cos^{−1} ‖𝒬_21‖, and since

𝒬_21 = Q_21 (I + δY_11) + Q_22 δY_21,

it follows that ‖𝒬_21‖ = ‖Q_21‖ + O(κ_N κ_M u), and hence

Θ̃_min = Θ_min + O(csc Θ_min κ_N κ_M u).

The perturbation in the deflating subspace associated with the finite eigenvalues can then be measured by δ_f := sin^{−1} ‖∆_f‖ with

∆_f = [ Q_12 ; Q_22 ]^⋆ U^⋆ Ũ [ 𝒬_11 ; 𝒬_21 ] = [ Q_12 ; Q_22 ]^⋆ X [ 𝒬_11 ; 𝒬_21 ],

and the perturbation in the deflating subspace of the eigenvalue infinity can be measured by δ_∞ := sin^{−1} ‖∆_∞‖ with

∆_∞ = [ I 0 ] U^⋆ Ũ [ 0 ; I ].

Inserting the obtained norm bounds, we have

δ_f = O(‖δY_21‖ + ‖δX‖) = O(κ_N κ_M u),    δ_∞ = O(‖δX_12‖) = O(κ_N u).

It should be noted that in the case that N is already in block diagonal form as Ñ, which is, for example, the case when N arises from the middle block of the staircase form (1.2), then U = Ũ = I, and the computed subpencil corresponding to the finite eigenvalues is given by λÑ_11 − 𝒮 with

𝒮 = S + E_s,    ‖E_s‖ = O((1 + ρ)² ‖M‖ u),

where E_s arises just from the computation of S. In step 2 of Algorithm 2, for the computed version 𝒬 of Q there exists a unitary matrix Q̃ such that ‖Q̃ − 𝒬‖ = O(u) and

(4.4)    ([ σ_M ℳ̃_12^⋆ ℳ̃_22 ] + E) Q̃ = [ 0 ℳ̂_22 ],

with ‖E‖ = O(‖ℳ̂_22‖ u) = O(‖M̂_22‖ u).
Introducing 𝒴 = Q^⋆ Q̃, then, similar to (4.3), we obtain

[ σ_M M̃_12^⋆ M̃_22 ] Q̃ = [ σ_M ℳ̃_12^⋆ ℳ̃_22 ] Q̃ − [ σ_M δM̃_12^⋆ δM̃_22 ] Q̃ =: [ σ_M ℳ̃_12^⋆ ℳ̃_22 ] Q̃ + Ẽ,

with ‖Ẽ‖ = O(κ_N ‖M‖ u). Then one has

[ 0 M̂_22 ] 𝒴 = [ 0 ℳ̂_22 ] + E Q̃ + Ẽ.

Partitioning E Q̃ + Ẽ = [ Ẽ_11 Ẽ_12 ], one has ‖Ẽ_11‖ = O((‖ℳ̂_22‖ + κ_N ‖M‖) u). Similar to 𝒴 above, we may express

𝒴 = [ I + δY_11 δY_12 ; δY_21 I + δY_22 ] = [ I −G_Y ; G_Y^⋆ I ] [ (I + G_Y G_Y^⋆)^{−1/2} 0 ; 0 (I + G_Y^⋆ G_Y)^{−1/2} ],

where G_Y = (M̂_22 + Ẽ_12)^{−1} Ẽ_11. We obtain the estimates

‖δY_21‖, ‖δY_12‖ = O(‖Ẽ_11‖ ‖M̂_22^{−1}‖) = O((κ_{M̂_22} + κ_N κ_M) u)

and

‖δY_11‖, ‖δY_22‖ = O((κ_{M̂_22} + κ_N κ_M)² u²),

where κ_{M̂_22} := ‖M̂_22‖ ‖M̂_22^{−1}‖. Using these estimates, the computed 𝒩_11 and ℳ_11 can be expressed as

𝒩_11 = 𝒬_11^⋆ 𝒩̃_11 𝒬_11 + δN_11,    ℳ_11 = 𝒬_11^⋆ (ℳ̃_11 𝒬_11 + ℳ̃_12 𝒬_21) + δM_11,

with ‖δN_11‖ = O(‖𝒬_11‖² ‖N‖ u) and ‖δM_11‖ = O(‖ℳ̃_11 𝒬_11 + ℳ̃_12 𝒬_21‖ ‖𝒬_11‖ u). Using the relation between λ𝒩̃ − ℳ̃ and λÑ − M̃, one has

𝒩_11 = 𝒬_11^⋆ Ñ_11 𝒬_11 + ∆_N,    ‖∆_N‖ = O((κ_N² u + 1) ‖N‖ ‖𝒬_11‖² u),

and

ℳ_11 = 𝒬_11^⋆ (M̃_11 𝒬_11 + M̃_12 𝒬_21) + ∆_{M,1},    ‖∆_{M,1}‖ = O((κ_N ‖M‖ + ‖M̃_11 𝒬_11 + M̃_12 𝒬_21‖ ‖𝒬_11‖) u).

Since

𝒬_21 𝒬_11^{−1} = (Q_21 (I + δY_11) + Q_22 δY_21) (Q_11 (I + δY_11) + Q_12 δY_21)^{−1}
= (Q_21 Q_11^{−1} + Q_22 δY_21 (I + δY_11)^{−1} Q_11^{−1}) (I + Q_12 δY_21 (I + δY_11)^{−1} Q_11^{−1})^{−1},
using (3.6) one obtains

ℳ_11 = 𝒬_11^⋆ S 𝒬_11 + ∆_{M,1} + ∆_{M,2},

where, by using (3.5),

∆_{M,2} = −𝒬_11^⋆ (Q_21 Q_11^{−1})^⋆ M̂_22 δY_21 (I + δY_11)^{−1} Q_11^{−1} (I + Q_12 δY_21 (I + δY_11)^{−1} Q_11^{−1})^{−1} 𝒬_11
= −Q_21^⋆ M̂_22 δY_21 (I + δY_11)^{−1} (I + Q_11^{−1} Q_12 δY_21 (I + δY_11)^{−1})^{−1} Q_11^{−1} 𝒬_11.

Using δY_21 (I + δY_11)^{−1} = G_Y = (M̂_22 + Ẽ_12)^{−1} Ẽ_11, it follows that

∆_{M,2} = −Q_21^⋆ Ẽ_11 + o(u),

and hence

‖∆_{M,2}‖ = O(‖Q_21‖ (‖M̂_22‖ + κ_N ‖M‖) u),

and

ℳ_11 = 𝒬_11^⋆ S 𝒬_11 + ∆_M,    ‖∆_M‖ = O((κ_N ‖M‖ + ‖M̂_22‖ ‖Q_21‖ + ‖M̃_11‖ ‖Q_11‖²) u).

Here we have used the fact that, by (3.5) and (3.6), ‖M̃_12‖ ≤ ‖Q_21 Q_11^{−1}‖ ‖M̃_22‖.

In the situation that only the finite eigenvalues are considered, one can reduce the effort and compute λ𝒩_11 − ℳ_11 only. So if we set

σ_N 𝒩_12^⋆ = 𝒬_12^⋆ Ñ_11 𝒬_11,    σ_M ℳ_12^⋆ = 𝒬_12^⋆ (ℳ̃_11 𝒬_11 + ℳ̃_12 𝒬_21),

then the columns of [ 𝒬_11 ; 𝒬_21 ] form an orthonormal basis for the right deflating subspace of λ𝒩̃ − ℳ̃_1 corresponding to the finite eigenvalues, where ℳ̃_1 is a perturbation of ℳ̃ of order O(‖M̃_11‖ u) because of the inexact QR factorization (4.4). The resulting subpencil is

λ 𝒬_11^⋆ Ñ_11 𝒬_11 − 𝒬_11^⋆ (ℳ̃_11 𝒬_11 + ℳ̃_12 𝒬_21),

and the pencil λ𝒩_11 − ℳ_11 is obtained by adding the extra error pencil λ∆_N − ∆_M introduced by the numerical computation of 𝒩_11 and ℳ_11. Similarly, the columns of [ 0 ; I ] span the right deflating subspace of λ𝒩̃ − ℳ̃_1 corresponding to the eigenvalue infinity, and the columns of the matrices

Ũ [ 𝒬_11 ; 𝒬_21 ]  and  Ũ [ 0 ; I ]

span the corresponding deflating subspaces of a pencil slightly perturbed from λN − M. Setting

∆_f = [ Q_12 ; Q_22 ]^⋆ U^⋆ Ũ [ 𝒬_11 ; 𝒬_21 ] = [ Q_12 ; Q_22 ]^⋆ X [ 𝒬_11 ; 𝒬_21 ],    ε_f := sin^{−1} ‖∆_f‖,
and

∆_∞ = [ I 0 ] U^⋆ Ũ [ 0 ; I ],    ε_∞ := sin^{−1} ‖∆_∞‖,

we have ε_∞ = δ_∞ = O(κ_N u), and similarly

ε_f = O(‖δY_21‖ + ‖δX‖) = O((κ_{M̂_22} + κ_N κ_M) u),

and the minimal angle between the two perturbed subspaces is given by

Θ̂_min = Θ_min + O(csc Θ_min (κ_{M̂_22} + κ_N κ_M) u).

If λN − M is already in the form λÑ − M̃, then U = Ũ = I, and in this case

‖∆_N‖ = O(‖N‖ ‖Q_11‖² u),    ‖∆_M‖ = O((‖M̂_22‖ ‖Q_21‖ + ‖M̃_11‖ ‖Q_11‖²) u),

and it follows that

ε_f = ε_∞ = O(κ_{M̂_22} u),    Θ̂_min = Θ_min + O(csc Θ_min κ_{M̂_22} u).

We may express 𝒩_11 and ℳ_11 as

𝒩_11 = 𝒬_11^⋆ (Ñ_11 + ∆̂_N) 𝒬_11,    ℳ_11 = 𝒬_11^⋆ (S + ∆̂_M) 𝒬_11,

with ∆̂_N = 𝒬_11^{−⋆} ∆_N 𝒬_11^{−1} and ∆̂_M = 𝒬_11^{−⋆} ∆_M 𝒬_11^{−1}. By using (3.10) we have

‖∆̂_N‖ = O((κ_N² u + 1) κ²_{Q_11} ‖N‖ u),    ‖∆̂_M‖ = O((κ_N κ²_{Q_11} ‖M‖ + κ²_{Q_11} ‖M̃_11‖ + ρ ‖Q_11^{−1}‖ ‖M̂_22‖) u),

where κ_{Q_11} = ‖Q_11‖ ‖Q_11^{−1}‖. Recalling from (3.10) that ‖Q_11^{−1}‖ = √(1 + ρ²), and comparing the errors in the subpencils λÑ_11 − S and λN_11 − M_11, it follows that ∆̂_M and δS have the same order, while ∆̂_N can be larger than δÑ_11 by a factor κ²_{Q_11}; the two are of equal order only when κ²_{Q_11} is not too large. On the other hand, if we transform λ𝒩̃_11 − 𝒮 to

𝒬_11^⋆ (λ𝒩̃_11 − 𝒮) 𝒬_11 = 𝒬_11^⋆ (λÑ_11 − S) 𝒬_11 + 𝒬_11^⋆ (λ δÑ_11 − δS) 𝒬_11,

then we have

‖𝒬_11^⋆ δÑ_11 𝒬_11‖ = O((κ_N² u + 1) ‖Q_11‖² ‖N‖ u),    ‖𝒬_11^⋆ δS 𝒬_11‖ = O((1 + ρ)² ‖Q_11‖² κ_N ‖M‖ u).

The first quantity is of the same order as ∆_N, while the second one can be larger than ∆_M by a factor κ²_{Q_11}, unless κ_{Q_11} is not too large. So, in terms of numerical stability, the error analysis does not show that either method has an advantage over the other. This is an unusual circumstance in numerical analysis, where usually the methods based on unitary transformations have, in the worst-case situation, smaller errors than the ones based on non-unitary transformations. We will demonstrate this effect in the numerical examples in Section 7.
5. An alternative point of view. Another way to understand the new unitary structure-preserving method of deflating the infinite eigenvalue part is to consider the extension trick introduced in [9]. Partition the matrix U in (2.1) as U = [ U_1 U_2 ], with U_1 of the same size as N_1^⋆. For any eigenvector x of λN − M corresponding to a finite eigenvalue λ, it then follows from U^⋆ (λN − M) x = 0 that U_2^⋆ M x = 0. So the original eigenvalue problem is equivalent to the non-square, non-symmetric eigenvalue problem

(5.1)    (λ [ N ; 0 ] − [ M ; U_2^⋆ M ]) x = 0,

and we are only looking for eigenvectors in the nullspace of U_2^⋆ M. Let z := V^⋆ x =: [ z_1 ; z_2 ] be partitioned according to the block structure in (3.1). Then the eigenvalue problem (5.1) is equivalent to the extended eigenvalue problem

[ V^⋆ 0 ; 0 I ] (λ [ N ; 0 ] − [ M ; U_2^⋆ M ]) V z = λ [ N_11 N_12 ; σ_N N_12^⋆ N_22 ; 0 0 ] z − [ M_11 M_12 ; σ_M M_12^⋆ M_22 ; 0 M̂_22 ] z = 0.

Clearly it then follows that z_2 = 0 and

(λN_11 − M_11) z_1 = 0.

Hence an eigenvector of λN − M corresponding to a finite eigenvalue has the form x = V [ z_1 ; 0 ], where z_1 is an eigenvector of the subpencil λN_11 − M_11. More generally, the columns of V [ I_{n_1} ; 0 ] form an orthonormal basis for the deflating subspace of λN − M corresponding to the finite eigenvalues.

6. Palindromic and anti-palindromic pencils. Structured pencils closely related to those discussed before are the palindromic/anti-palindromic pencils of the form λA − σA^⋆, where A is an arbitrary real or complex square matrix and σ = ±1; see [1, 16] for a detailed discussion of the relationship. Assume further that the eigenvalue σ (if it occurs) is semisimple, which corresponds to the property of being regular and of index at most one for the structured pencils discussed before. Applying a Cayley transformation (see [1, 11]), the pencil λN − M with

N := A − σA^⋆ = −σN^⋆,    M := A + σA^⋆ = σM^⋆,

is a structured pencil of the form (1.1) with σ_N = −σ and σ_M = σ; the semisimple eigenvalue σ becomes the eigenvalue ∞, and the pencil has index at most one. To this pencil we can apply the discussed transformations V, U as defined
in (2.1) and (2.6). Then

(6.1)    U^⋆ (A − σA^⋆) = [ N_1 ; 0 ],    U^⋆ (A + σA^⋆) = [ M_1 ; M_2 ].
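The Cayley step above can be checked numerically. In this sketch (our own, for the real case with σ = 1 and a generic A of even size, so that no eigenvalue equals σ and N is invertible), an eigenvalue μ ≠ σ of λA − σA^⋆ is mapped to the eigenvalue λ = (1 + μ)/(1 − μ) of λN − M, while μ = σ would be mapped to the eigenvalue infinity:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 4, 1.0
A = rng.standard_normal((n, n))

# Cayley transformation of the palindromic pencil lambda*A - sigma*A^T:
N = A - sigma * A.T          # N = -sigma*N^T (skew-symmetric for sigma = 1)
M = A + sigma * A.T          # M =  sigma*M^T (symmetric for sigma = 1)

# Eigenvalues mu of lambda*A - sigma*A^T, i.e. mu*A*x = sigma*A^T*x.
mu = np.linalg.eigvals(np.linalg.solve(A, sigma * A.T))
# Eigenvalues of the transformed pencil lambda*N - M.
lam = np.linalg.eigvals(np.linalg.solve(N, M))
```

The mapping follows directly from μAx = σA^⋆x: then Nx = (1 − μ)Ax and Mx = (1 + μ)Ax, so λ(1 − μ) = 1 + μ.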
Partitioning U^⋆A =: [ A_1 ; A_2 ] and U^⋆A^⋆ =: [ B_1 ; B_2 ] conformably, one has

[ N_1 ; 0 ] = [ A_1 − σB_1 ; A_2 − σB_2 ],    [ M_1 ; M_2 ] = [ A_1 + σB_1 ; A_2 + σB_2 ],

and hence A_2 = σB_2 and M_2 = 2σB_2. Furthermore, applying V to M_2, we obtain M_2 V = [ 0 M̂_22 ], and hence B_2 V = [ 0 B̂_22 ] with B̂_22 = σM̂_22/2. Setting

V^⋆AV = [ A_11 A_12 ; A_21 A_22 ],

then

V^⋆ (λN − M) V = λ [ A_11 − σA_11^⋆ A_12 − σA_21^⋆ ; A_21 − σA_12^⋆ A_22 − σA_22^⋆ ] − [ A_11 + σA_11^⋆ A_12 + σA_21^⋆ ; A_21 + σA_12^⋆ A_22 + σA_22^⋆ ],

and thus

λN_11 − M_11 = λ(A_11 − σA_11^⋆) − (A_11 + σA_11^⋆)

is the subpencil of λN − M with the finite eigenvalues only. By taking the inverse Cayley transformation, this is equivalent to the property that the palindromic subpencil λA_11 − σA_11^⋆ contains all the eigenvalues of λA − σA^⋆ except the eigenvalue σ, which has been deflated. It should be noted, though, that the Cayley transformation or its inverse may lead to cancellation errors if there are eigenvalues close to, but not equal to, σ. Moreover, since A − σA^⋆ has to be computed explicitly, when some eigenvalues are close to σ but not equal to σ, the rank decision in computing the first factorization (6.1) is more difficult to make.

7. Numerical examples. In this section we present several numerical tests to compare the computed finite eigenvalues of the subpencils generated by the three methods: the structured unitary equivalence transformation (Algorithms 1 and 2), the Schur complement transformation (using (2.4)), and the non-structured equivalence transformation (using (2.7)). All tests were performed in MATLAB (Version 8.2) in double precision. All tested pencils were real, and we use relative errors to measure the accuracy of the computed finite eigenvalues, where the "exact" eigenvalues are computed with the MATLAB code eig in extended precision (vpa). The tested pencils are all skew-symmetric/symmetric. The finite eigenvalues of the subpencil extracted with the two structure-preserving methods are computed with the MEX code skewhamileig [2, 3]. If the pencil associated with the finite eigenvalues is extracted
via non-structured equivalence transformations, we use the MATLAB code qz. We also display the quantities Θ_min, δ_f, ε_f, ε_∞, as well as ρ defined in (3.9). Since Θ̃_min and Θ̂_min are always very close to Θ_min, we do not display them. We do not show δ_∞ either, since it is the same as ε_∞. The following quantities were obtained from the numerical tests:
Eu_max, Es_max, Eh_max: the maximum relative error of the finite eigenvalues for a given pencil with the structured unitary equivalence transformation method, the Schur complement method, and the non-structured equivalence transformation method, respectively.

Eu_min, Es_min, Eh_min: the minimum relative error of the finite eigenvalues for a given pencil with the same three methods, respectively.

Eu_GM, Es_GM, Eh_GM: the geometric mean of the relative errors of the finite eigenvalues for a given pencil with the same three methods, respectively.

Re_hat: the maximum real part of the eigenvalues computed by the non-structured equivalence transformation method, in the case when all the exact finite eigenvalues are purely imaginary.

EXAMPLE 1. We tested a set of skew-symmetric/symmetric pencils X^T (λN − M) X, where N and M are fixed matrices depending on the parameters β and α, and X is a randomly generated nonsingular matrix. The finite eigenvalues of such a pencil are always ±6i and ±(6/β)i, independent of X and α. If β is small in modulus, then N is close to a singular matrix, and when α is small in modulus, then it can be expected that the eigenvalue infinity part will affect the finite eigenvalues. Note that when β = 1, the pencil has two pairs of double eigenvalues. We have performed the computations 10 times for each pair (α, β), where each time a different random matrix X is generated. The tables display the results for each pair (α, β). The three methods determine the finite eigenvalues with about the same order of relative errors. The non-structured method produces slightly less accurate eigenvalues. Because it forms non-structured subpencils, the computed eigenvalues are no longer purely imaginary. The real parts of the computed finite eigenvalues grow as β is getting
smaller. For all three methods in this example the errors seem insensitive to the value of α, while they increase when β decreases. However, α does affect the deflating subspace associated with the finite eigenvalues.
EXAMPLE 2. We tested a set of skew-symmetric/symmetric pencils X^T(λN − M)X, where N is a fixed skew-symmetric matrix, M is a fixed symmetric matrix depending on the parameters α and β, and X is a randomly generated matrix. The finite eigenvalues of such a pencil are always ±β, with both algebraic and geometric multiplicities equal to 2. When β is tiny, then all the finite eigenvalues are close to zero and the eigenvalue condition number is expected to increase. Tables 7.5–7.8 display the results for 4 different sets of (α, β) (with 10 pencils for each set, as in Example 1), illustrating that the three methods have essentially the same accuracy. The results also show that the relative errors do increase
STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES

TABLE 7.1. (α, β) = (10^{-3}, 1): eigenvalue errors, errors in deflating subspaces, Θ, and ρ.
Eu_max 4e-15 9e-15 2e-14 4e-14 6e-14 6e-14 7e-14 8e-14 2e-13 4e-13
Es_max 7e-15 4e-14 1e-14 6e-14 6e-14 6e-14 7e-14 1e-13 6e-14 5e-13
Eh_max 1e-14 8e-14 2e-14 1e-13 6e-14 8e-14 6e-14 2e-13 1e-12 5e-13
Eu_min 2e-15 9e-15 3e-15 1e-14 5e-15 6e-15 2e-14 1e-14 9e-16 3e-15
Es_min 1e-14 5e-15 9e-15 4e-15 4e-15 2e-14 6e-15 4e-15 4e-15
Eh_min 1e-15 4e-14 1e-14 1e-14 6e-15 3e-15 5e-15 9e-14 6e-15 2e-15
Eu_GM 2e-15 9e-15 8e-15 2e-14 2e-14 2e-14 3e-14 3e-14 1e-14 3e-14
Es_GM 2e-14 8e-15 2e-14 2e-14 2e-14 4e-14 3e-14 2e-14 4e-14
Eh_GM 4e-15 6e-14 2e-14 4e-14 2e-14 2e-14 2e-14 1e-13 8e-14 4e-14
Re_hat 2e-14 2e-13 2e-14 2e-13 1e-14 4e-14 9e-15 2e-13 4e-13 1e-13
ɛ_f 3e-11 3e-11 2e-12 2e-11 4e-11 4e-11 1e-11 2e-11 1e-1 7e-11
δ_f 3e-11 2e-11 2e-12 7e-12 2e-11 2e-11 1e-11 2e-11 1e-1 6e-11
ɛ 6e-16 6e-16 5e-16 7e-16 5e-16 5e-16 3e-16 3e-16 2e-16 5e-16
Θ 6e-1 2e-1 2e-1 2e-1 8e-2 3e-1 2e-1 8e-2 8e-2 1e-1
ρ

TABLE 7.2. (α, β) = ( ): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 8e-13 3e-12 6e-12 7e-12 9e-12 2e-11 8e-11 8e-11 3e-1 2e-9
Es_max 3e-12 3e-12 7e-12 9e-12 4e-11 3e-11 4e-11 4e-11 6e-1 2e-9
Eh_max 8e-12 7e-11 3e-1 8e-12 1e-1 3e-11 1e-1 4e-11 1e-9 3e-9
Eu_min 6e-14 4e-16 8e-14 4e-13 2e-14 9e-15 9e-13 3e-15 5e-16 1e-11
Es_min 6e-14 7e-16 3e-14 2e-13 2e-15 9e-15 1e-12 6e-15 7e-16 1e-11
Eh_min 5e-14 4e-14 2e-12 4e-13 4e-14 8e-15 9e-13 9e-15 2e-15 1e-11
Eu_GM 2e-13 3e-14 7e-13 2e-12 4e-13 4e-13 9e-12 5e-13 4e-13 2e-1
Es_GM 4e-13 5e-14 4e-13 1e-12 3e-13 5e-13 8e-12 5e-13 7e-13 1e-1
Eh_GM 6e-13 2e-12 2e-11 2e-12 2e-12 5e-13 1e-11 6e-13 2e-12 2e-1
Re_hat 2e-6 3e-6 7e-5 2e-6 2e-5 7e-6 1e-5 1e-5 3e-4 6e-4
ɛ_f 1e-8 8e-8 5e-8 8e-1 2e-7 1e-7 1e-7 1e-7 4e-7 6e-6
δ_f 1e-8 8e-8 5e-8 8e-1 2e-7 1e-7 1e-7 1e-7 4e-7 6e-6
ɛ 2e-12 2e-11 6e-12 6e-12 4e-11 1e-11 7e-12 6e-12 1e-1 3e-11
Θ 2e-1 3e-1 2e-1 1e-1 1e-1 1e-1 5e-2 1e-1 8e-2 6e-3
ρ

when β is getting
tiny, and that again the choice of α does not affect the accuracy very much. For this example no method yields finite eigenvalues that are exactly real.
EXAMPLE 3. For skew-symmetric/symmetric pencils of the form X^T(λN − M)X, where N is a fixed skew-symmetric matrix, M is a fixed symmetric matrix depending on the parameters α and β, and X is a randomly generated nonsingular matrix, the finite eigenvalues are again always ±β, with both algebraic and geometric multiplicities equal to 2. The numerical results are similar to those of Example 2 and are not presented here.
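All the examples share the same construction: a fixed structured pencil λN − M is made dense by a congruence with a random nonsingular X, which preserves the skew-symmetric/symmetric structure and the eigenvalues. A minimal sketch of this construction (the particular N and M below are illustrative placeholders, not the paper's test matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 skew-symmetric/symmetric pencil lambda*N - M,
# with N^T = -N and M^T = M (not the matrices used in the paper).
N = np.array([[0.0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 2], [0, 0, -2, 0]])
M = np.diag([2.0, 3, 5, 7])
X = rng.standard_normal((4, 4))          # random (almost surely nonsingular) factor

Nt, Mt = X.T @ N @ X, X.T @ M @ X        # the congruence X^T (lambda*N - M) X

# Congruence preserves the structure ...
assert np.allclose(Nt, -Nt.T) and np.allclose(Mt, Mt.T)

# ... and the eigenvalues, here +-i*sqrt(6) and +-i*sqrt(35)/2
# (eigenvalues of the pencil are the eigenvalues of N^{-1} M).
lam = np.sort(np.linalg.eigvals(np.linalg.solve(N, M)).imag)
lam_t = np.sort(np.linalg.eigvals(np.linalg.solve(Nt, Mt)).imag)
assert np.allclose(lam, lam_t, atol=1e-8)
```

The congruence keeps the purely imaginary spectrum intact in exact arithmetic; the tables measure how well each deflation method preserves it in floating point.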
TABLE 7.3. (α, β) = (10^{-7}, 1): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 3e-15 4e-15 4e-15 6e-15 7e-15 9e-15 9e-15 9e-15 4e-14 6e-14
Es_max 1e-15 4e-15 8e-15 5e-15 4e-15 8e-15 4e-15 2e-14 2e-14 4e-14
Eh_max 3e-14 8e-15 3e-14 4e-15 5e-15 2e-14 3e-14 3e-14 4e-14 3e-14
Eu_min 7e-16 9e-16 2e-16 7e-16 2e-15 5e-15 4e-15 5e-15 2e-15
Es_min 2e-16 4e-16 1e-15 9e-16 1e-15 1e-15 2e-15 3e-15 2e-14 3e-15
Eh_min 4e-16 4e-15 3e-15 2e-15 4e-15 4e-15 4e-15 1e-15 8e-15 1e-14
Eu_GM 2e-15 2e-15 9e-16 2e-15 5e-15 6e-15 6e-15 1e-14 1e-14
Es_GM 5e-16 1e-15 3e-15 2e-15 2e-15 3e-15 3e-15 8e-15 2e-14 1e-14
Eh_GM 3e-15 5e-15 1e-14 3e-15 4e-15 9e-15 1e-14 7e-15 2e-14 2e-14
Re_hat 3e-14 2e-14 3e-14 3e-15 1e-14 1e-14 3e-14 7e-15 3e-14 3e-14
ɛ_f 5e-8 4e-8 1e-7 2e-7 2e-8 2e-7 4e-7 1e-7 1e-6 4e-7
δ_f 4e-8 3e-8 1e-7 9e-8 2e-8 1e-7 3e-7 1e-7 6e-7 3e-7
ɛ 2e-16 5e-16 3e-16 3e-16 3e-16 7e-16 3e-16 2e-16 2e-15 5e-16
Θ 2e-1 7e-1 3e-1 2e-1 4e-1 3e-1 2e-1 7e-2 2e-1 2e-1
ρ

TABLE 7.4. (α, β) = ( ): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 1e-12 2e-12 9e-12 3e-11 3e-11 3e-11 52e-11 9e-11 1e-1 2e-1
Es_max 2e-11 2e-12 7e-12 5e-12 2e-11 3e-11 7e-11 1e-1 8e-11 1e-1
Eh_max 2e-11 3e-11 3e-11 7e-11 3e-1 1e-1 6e-11 1e-1 9e-11 1e-1
Eu_min 5e-16 1e-15 2e-15 1e-13 4e-15 2e-15 9e-14 1e-14 3e-13 1e-15
Es_min 2e-15 1e-14 9e-16 2e-13 5e-15 3e-15 4e-14 4e-15 2e-13 6e-15
Eh_min 6e-15 9e-14 7e-15 3e-13 3e-14 1e-14 7e-14 5e-15 7e-13 1e-14
Eu_GM 3e-14 4e-14 1e-13 2e-12 4e-13 3e-13 2e-12 1e-12 5e-12 4e-13
Es_GM 2e-13 1e-13 8e-14 9e-13 3e-13 3e-13 2e-12 7e-13 4e-12 8e-13
Eh_GM 3e-13 2e-12 8e-13 5e-12 2e-12 1e-12 2e-12 8e-13 8e-12 1e-12
Re_hat 4e-6 6e-6 2e-5 1e-5 6e-5 2e-5 4e-6 2e-5 2e-5 2e-5
ɛ_f 2e-4 4e-4 1e-3 4e-4 2e-3 5e-4 2e-3 2e-4 1e-4 2e-4
δ_f 2e-4 4e-4 1e-3 4e-4 2e-3 5e-4 2e-3 2e-4 1e-4 2e-4
ɛ 6e-12 2e-12 2e-11 9e-12 3e-11 2e-11 1e-11 3e-11 4e-11 3e-11
Θ 3e-1 2e-1 5e-1 5e-2 1e-1 4e-1 7e-2 5e-2 2e-1 7e-2
ρ

EXAMPLE 4. For skew-symmetric/symmetric pencils of the form X^T(λN − M)X, where N is a fixed skew-symmetric matrix, M is a fixed symmetric matrix depending on the parameters α and β, and X is a randomly generated nonsingular matrix, there exist two 2×2 Jordan blocks, one for the eigenvalue β and another for −β. In this case it is expected that the computed finite eigenvalues have large errors due to the Jordan blocks, and when β becomes smaller, the errors will get larger. Our numerical results, presented in Tables 7.9 and 7.10, confirm these expectations for two sets of (α, β). All the numerical tests confirm our expectations obtained from the error analysis and illustrate that, surprisingly, the Schur complement approach and the approach via orthogonal
TABLE 7.5. (α, β) = (10^{-3}, 1): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 2e-15 2e-14 2e-14 2e-14 2e-14 2e-14 3e-14 5e-14 6e-14 7e-12
Es_max 2e-15 3e-14 3e-14 9e-16 2e-14 2e-14 4e-14 9e-14 7e-14 3e-12
Eh_max 2e-14 6e-14 6e-14 5e-14 2e-13 2e-14 9e-14 6e-14 2e-13 6e-12
Eu_min 2e-15 2e-14 2e-14 2e-14 2e-14 2e-15 9e-15 2e-14 2e-14 2e-14
Es_min 9e-16 3e-14 6e-15 9e-16 2e-14 7e-15 2e-14 2e-14 2e-15 5e-14
Eh_min 2e-15 8e-15 2e-14 5e-16 1e-14 5e-16 7e-15 6e-14 1e-14
Eu_GM 2e-15 2e-14 2e-14 2e-14 2e-14 6e-15 2e-14 3e-14 3e-14 4e-13
Es_GM 2e-15 3e-14 2e-14 9e-16 2e-14 9e-15 3e-14 4e-14 41e-14 4e-13
Eh_GM 5e-15 3e-14 3e-14 8e-15 4e-14 5e-15 3e-14 6e-14 4e-14
ɛ_f 8e-12 9e-11 9e-12 2e-11 4e-11 4e-12 5e-12 3e-11 9e-12 2e-1
δ_f 4e-12 4e-11 9e-12 1e-11 2e-11 4e-12 4e-12 2e-11 6e-12 2e-1
ɛ 4e-16 5e-16 6e-16 3e-16 4e-16 4e-16 2e-16 5e-16 3e-16 5e-16
Θ 3e-1 3e-1 5e-1 2e-1 3e-1 3e-1 3e-1 5e-1 2e-1 3e-2
ρ

TABLE 7.6. (α, β) = ( ): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 9e-5 2e-4 2e-4 3e-4 4e-4 4e-4 5e-4 7e-4 3e-3 2e-2
Es_max 5e-5 5e-4 5e-5 2e-4 3e-4 3e-4 6e-4 2e-4 2e-3 6e-3
Eh_max 2e-4 3e-4 2e-4 6e-3 4e-4 2e-3 2e-3 2e-3 4e-3 3e-2
Eu_min 9e-5 2e-4 5e-6 7e-5 4e-4 5e-5 5e-4 5e-5 2e-5 2e-4
Es_min 5e-5 4e-5 5e-5 2e-4 3e-4 5e-5 6e-4 2e-4 5e-5 2e-4
Eh_min 2e-4 3e-4 9e-6 2e-4 4e-4 2e-4 3e-5 4e-5 7e-4 9e-4
Eu_GM 9e-5 2e-4 3e-5 2e-4 4e-4 2e-4 5e-4 2e-4 2e-4 2e-3
Es_GM 5e-5 2e-4 5e-5 2e-4 3e-4 2e-4 6e-4 2e-4 3e-4 8e-4
Eh_GM 2e-4 3e-4 5e-5 9e-4 4e-4 4e-4 3e-4 2e-4 2e-3 5e-3
ɛ_f 2e-12 3e-11 5e-12 4e-11 2e-11 2e-11 5e-11 6e-12 2e-11 2e-1
δ_f 2e-13 2e-11 4e-12 2e-11 2e-11 2e-11 4e-11 5e-12 1e-11 7e-11
ɛ 3e-16 4e-16 3e-16 4e-16 4e-16 3e-16 3e-16 5e-16 4e-16 7e-16
Θ 5e-1 5e-1 3e-1 4e-1 2e-1 3e-1 7e-2 3e-1 2e-1 2e-2
ρ

TABLE 7.7. (α, β) = (10^{-7}, 1): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 9e-15 1e-14 1e-14 2e-14 3e-14 3e-14 5e-14 5e-14 2e-13 4e-13
Es_max 2e-14 9e-15 3e-14 2e-14 3e-14 5e-14 3e-14 2e-13 2e-14 4e-13
Eh_max 2e-14 2e-14 3e-13 3e-13 2e-13 1e-14 6e-14 2e-13 3e-13 4e-13
Eu_min 6e-15 1e-14 7e-15 2e-14 3e-14 3e-14 4e-15 5e-14 4e-15 8e-15
Es_min 2e-15 3e-15 6e-15 2e-14 3e-14 3e-15 2e-15 4e-15 2e-14 7e-15
Eh_min 9e-15 7e-15 3e-14 6e-15 3e-14 7e-15 3e-15 2e-13 2e-15 7e-15
Eu_GM 7e-15 1e-14 9e-15 2e-14 3e-14 3e-14 2e-14 5e-14 3e-14 6e-14
Es_GM 5e-15 5e-15 2e-14 2e-14 3e-14 2e-14 8e-15 3e-14 2e-14 5e-14
Eh_GM 2e-14 2e-14 6e-14 5e-14 7e-14 8e-15 2e-14 2e-13 3e-14 5e-14
ɛ_f 9e-8 5e-7 9e-8 2e-7 9e-7 2e-7 5e-7 2e-8 2e-7 7e-7
δ_f 7e-8 2e-7 9e-8 9e-8 3e-7 1e-7 5e-7 8e-9 5e-8 7e-7
ɛ 5e-16 3e-16 4e-16 7e-16 4e-16 3e-16 3e-16 4e-16 3e-16 6e-16
Θ 3e-1 5e-1 2e-1 7e-1 3e-1 7e-2 3e-1 7e-2 5e-1 2e-1
ρ 5e
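The per-pencil statistics reported in these tables (maximum, minimum, and geometric mean of the relative eigenvalue errors) can be sketched as follows; the function name and the nearest-eigenvalue matching strategy are our assumptions, not taken from the paper:

```python
import numpy as np

def eig_error_stats(computed, exact):
    # For each computed finite eigenvalue, take the relative distance to the
    # nearest exact eigenvalue; report the maximum, minimum, and geometric
    # mean, in the spirit of the tabulated E*_max, E*_min, E*_GM.
    # (Assumes all relative errors are nonzero, so the log is finite.)
    computed = np.asarray(computed, dtype=complex)
    exact = np.asarray(exact, dtype=complex)
    rel = np.array([np.min(np.abs(mu - exact) / np.abs(exact)) for mu in computed])
    return rel.max(), rel.min(), np.exp(np.log(rel).mean())
```

For instance, computed eigenvalues 2(1 + 10^{-6}) and −2(1 − 4·10^{-6}) against the exact pair ±2 give maximum 4e-6, minimum 1e-6, and geometric mean 2e-6.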
TABLE 7.8. (α, β) = ( ): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 3e-5 3e-5 5e-5 2e-4 3e-4 4e-4 7e-4 3e-3 1e-1 2e-1
Es_max 8e-6 3e-5 6e-5 2e-4 6e-4 4e-4 8e-4 2e-3 3e-2 3e-1
Eh_max 5e-4 4e-5 4e-3 8e-4 2e-3 4e-4 3e-3 2e-3 8e-2 2e+
Eu_min 2e-6 3e-5 5e-5 2e-4 1e-4 4e-4 7e-4 3e-5 5e-5 9e-5
Es_min 8e-6 3e-5 6e-5 2e-4 7e-5 4e-4 8e-4 6e-5 4e-5 2e-3
Eh_min 2e-5 4e-5 2e-4 2e-4 3e-4 4e-4 7e-4 4e-4 9e-5 2e-4
Eu_GM 6e-6 3e-5 5e-5 2e-4 2e-4 4e-4 7e-4 3e-4 3e-3 4e-3
Es_GM 8e-6 3e-5 6e-5 2e-4 2e-4 4e-4 8e-4 3e-4 2e-3 2e-2
Eh_GM 9e-5 4e-5 8e-4 4e-4 7e-4 4e-4 2e-3 9e-4 3e-3 2e-2
ɛ_f 4e-8 2e-8 5e-7 5e-7 2e-7 7e-7 2e-7 6e-7 9e-7 2e-6
δ_f 3e-8 3e-8 4e-7 5e-7 3e-7 8e-7 2e-7 3e-7 4e-7 2e-6
ɛ 2e-16 1e-15 3e-16 7e-16 4e-16 7e-16 3e-16 5e-16 6e-16 3e-16
Θ 2e-1 5e-1 6e-1 2e-1 1e-1 8e-2 7e-2 6e-2 2e-2 5e-3
ρ

TABLE 7.9. (α, β) = (10^{-5}, 1): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 8e-8 8e-8 9e-8 1e-7 2e-7 2e-7 2e-7 2e-7 2e-7 1e-6
Es_max 2e-7 4e-8 7e-8 6e-8 1e-7 2e-7 5e-8 2e-7 2e-7 8e-7
Eh_max 2e-7 8e-8 2e-7 2e-7 2e-7 4e-7 3e-7 4e-7 6e-7 2e-6
Eu_min 8e-8 8e-8 9e-8 1e-7 2e-7 2e-7 2e-7 2e-7 2e-7 1e-6
Es_min 2e-7 4e-8 7e-8 6e-8 1e-7 2e-7 5e-8 2e-7 2e-7 8e-7
Eh_min 8e-8 7e-8 2e-7 4e-8 2e-7 3e-7 3e-7 2e-7 5e-7 1e-6
Eu_GM 8e-8 8e-8 9e-8 1e-7 2e-7 2e-7 2e-7 2e-7 2e-7 1e-6
Es_GM 2e-7 4e-8 7e-8 6e-8 1e-7 2e-7 5e-8 2e-7 2e-7 8e-7
Eh_GM 2e-7 7e-8 2e-7 7e-8 2e-7 3e-7 3e-7 3e-7 5e-7 2e-6
ɛ_f 2e-9 2e-9 3e-9 8e-1 3e-1 8e-11 7e-9 3e-1 4e-9 3e-9
δ_f 1e-9 3e-9 3e-9 9e-1 4e-1 2e-1 2e-9 3e-1 2e-9 3e-9
ɛ 5e-16 4e-16 4e-16 2e-15 1e-15 3e-16 5e-16 6e-16 3e-16 8e-16
Θ 3e-1 4e-1 4e-1 6e-1 2e-1 3e-1 2e-1 5e-1 1e-1 2e-1
ρ

TABLE 7.10. (α, β) = ( ): eigenvalue errors, errors of deflating subspaces, Θ, and ρ.
Eu_max 5e-2 6e-2 7e-2 8e-2 1e-1 1e-1 2e-1 3e-1 4e-1 5e-1
Es_max 5e-2 5e-2 8e-2 8e-2 8e-2 2e-1 7e-2 3e-1 2e-1 6e-1
Eh_max 5e-2 6e-2 3e-1 3e-1 2e-1 9e-2 2e-1 6e-1 3e-1 1e+
Eu_min 5e-2 6e-2 7e-2 8e-2 1e-1 1e-1 2e-1 3e-1 4e-1 5e-1
Es_min 5e-2 5e-2 8e-2 8e-2 8e-2 2e-1 7e-2 3e-1 2e-1 5e-1
Eh_min 2e-2 3e-2 2e-1 9e-2 5e-2 8e-2 2e-1 3e-1 3e-2 2e-1
Eu_GM 5e-2 6e-2 7e-2 8e-2 1e-1 1e-1 2e-1 3e-1 4e-1 5e-1
Es_GM 5e-2 5e-2 8e-2 8e-2 8e-2 2e-1 7e-2 3e-1 2e-1 6e-1
Eh_GM 3e-2 5e-2 3e-1 2e-1 8e-2 9e-2 2e-1 5e-1 9e-2 4e-1
ɛ_f 3e-1 9e-1 1e-9 2e-9 3e-9 2e-1 1e-9 3e-9 5e-9 3e-8
δ_f 3e-1 8e-1 4e-1 2e-9 2e-9 3e-1 6e-1 3e-9 2e-9 3e-8
ɛ 4e-16 7e-16 5e-16 6e-16 3e-16 9e-16 2e-16 4e-16 7e-16 5e-16
Θ 4e-1 5e-1 2e-1 2e-1 4e-1 3e-1 6e-1 1e-1 6e-2 3e-2
ρ
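The error levels in Tables 7.9 and 7.10 are consistent with the square-root sensitivity of a double eigenvalue in a 2×2 Jordan block; a minimal illustration with a generic matrix (not the paper's pencil):

```python
import numpy as np

# A 2x2 Jordan block for the eigenvalue beta: an eps-sized perturbation in
# the (2,1) entry moves the double eigenvalue by about sqrt(eps), which is
# why perturbations near machine precision produce eigenvalue errors of
# order 1e-8 to 1e-7 rather than 1e-16.
beta, eps = 1.0, 1e-12
J = np.array([[beta, 1.0], [0.0, beta]])
Jp = J.copy()
Jp[1, 0] = eps                          # tiny perturbation in the (2,1) entry
shift = np.max(np.abs(np.linalg.eigvals(Jp) - beta))
# shift is approximately sqrt(eps) = 1e-6, not eps itself
```

The characteristic polynomial of the perturbed block is (β − λ)² − eps, so the eigenvalues move to β ± sqrt(eps).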
structure preserving transformations deliver the same accuracy, despite possible ill-conditioning. The unstructured orthogonal transformation, however, shows the expected problems with purely imaginary eigenvalues.

8. Conclusions. We have presented and analyzed several methods to deflate the part associated with the eigenvalue infinity in a structured regular pencil of index at most one. We have shown via a careful error analysis that this deflation can be performed with a structure preserving real orthogonal or unitary transformation as well as with a Schur complement approach. Both methods yield similar results in the perturbation and error analysis, and this is confirmed in the numerical tests. This is surprising and counterintuitive to the general wisdom that unitary transformations typically perform better than potentially unbounded transformations. Nonetheless, we suggest using the unitary structure preserving transformations, since they are very much in line with the staircase algorithm. Both methods are definitely preferable to the non-structured unitary transformation.

Acknowledgments. This work was started while the first author visited the second and completed while the second author was visiting the first. The authors thank both host institutions for their hospitality and generosity. The authors also thank the referees for their valuable comments and suggestions.

REFERENCES

[1] P. Benner, R. Byers, V. Mehrmann, and H. Xu, A robust numerical method for the γ-iteration in H∞ control, Linear Algebra Appl., 425 (2007).
[2] P. Benner, V. Sima, and M. Voigt, FORTRAN 77 subroutines for the solution of skew-Hamiltonian/Hamiltonian eigenproblems, part I: algorithms and applications, Preprint MPIMD/13-11, Max Planck Institute Magdeburg, 2013.
[3] P. Benner, V. Sima, and M. Voigt, FORTRAN 77 subroutines for the solution of skew-Hamiltonian/Hamiltonian eigenproblems, part II: implementation and numerical results, Preprint MPIMD/13-12, Max Planck Institute Magdeburg, 2013.
[4] T. Brüll and V. Mehrmann, STCSSP: a FORTRAN 77 routine to compute a structured staircase form for a (skew-)symmetric/(skew-)symmetric matrix pencil, Preprint, Institut für Mathematik, Technical University Berlin, Berlin, 2007.
[5] R. Byers, V. Mehrmann, and H. Xu, A structured staircase algorithm for skew-symmetric/symmetric pencils, Electron. Trans. Numer. Anal., 26 (2007), pp. 1–33.
[6] J. Demmel and B. Kågström, Stably computing the Kronecker structure and reducing subspaces of singular pencils A − λB for uncertain data, in Large Scale Eigenvalue Problems (Oberlech, 1985), J. Cullum and R. A. Willoughby, eds., vol. 127 of North-Holland Math. Stud., North-Holland, Amsterdam, 1986.
[7] P. Gahinet and A. J. Laub, Numerically reliable computation of optimal performance in singular H∞ control, SIAM J. Control Optim., 35 (1997).
[8] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore, 1996.
[9] P. Kunkel, V. Mehrmann, and L. Scholz, Self-adjoint differential-algebraic equations, Math. Control Signals Systems, 26 (2014).
[10] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Structured polynomial eigenvalue problems: good vibrations from good linearizations, SIAM J. Matrix Anal. Appl., 28 (2006).
[11] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Möbius transformations of matrix polynomials, Linear Algebra Appl., in press.
[12] V. Mehrmann, The Autonomous Linear Quadratic Control Problem: Theory and Numerical Solution, Springer, Heidelberg, 1991.
[13] V. Mehrmann and D. Watkins, Structure-preserving methods for computing eigenpairs of large sparse skew-Hamiltonian/Hamiltonian pencils, SIAM J. Sci. Comput., 22 (2000).
[14] V. Mehrmann and H. Xu, Numerical methods in control, J. Comput. Appl. Math., 123 (2000).
[15] P. H. Petkov, N. D. Christov, and M. M. Konstantinov, Computational Methods for Linear Control Systems, Prentice-Hall International, Hertfordshire, 1991.
[16] C. Schröder, Palindromic and Even Eigenvalue Problems: Analysis and Numerical Methods, PhD Thesis, Department of Mathematics, Technical University Berlin, Berlin, 2008.
[17] V. Sima, Algorithms for Linear-Quadratic Optimization, Marcel Dekker, New York, 1996.
[18] R. C. Thompson, Pencils of complex and real symmetric and skew matrices, Linear Algebra Appl., 147 (1991).
[19] P. Van Dooren, A generalized eigenvalue approach for solving Riccati equations, SIAM J. Sci. Statist. Comput., 2 (1981).
[20] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control, Prentice-Hall, Englewood Cliffs, 1996.
More informationContents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2
Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition
More informationBASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x
BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,
More informationA Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem
A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationMATRICES ARE SIMILAR TO TRIANGULAR MATRICES
MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationThe Lanczos and conjugate gradient algorithms
The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization
More informationNumerical Methods for Solving Large Scale Eigenvalue Problems
Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems
More informationEigenvalue perturbation theory of structured real matrices and their sign characteristics under generic structured rank-one perturbations
Eigenvalue perturbation theory of structured real matrices and their sign characteristics under generic structured rank-one perturbations Christian Mehl Volker Mehrmann André C. M. Ran Leiba Rodman April
More informationPort-Hamiltonian Realizations of Linear Time Invariant Systems
Port-Hamiltonian Realizations of Linear Time Invariant Systems Christopher Beattie and Volker Mehrmann and Hongguo Xu December 16 215 Abstract The question when a general linear time invariant control
More informationA fast randomized algorithm for overdetermined linear least-squares regression
A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationCyclic reduction and index reduction/shifting for a second-order probabilistic problem
Cyclic reduction and index reduction/shifting for a second-order probabilistic problem Federico Poloni 1 Joint work with Giang T. Nguyen 2 1 U Pisa, Computer Science Dept. 2 U Adelaide, School of Math.
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue
More informationCayley-Hamilton Theorem
Cayley-Hamilton Theorem Massoud Malek In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n Let A be an n n matrix Although det (λ I n A
More informationArnoldi Methods in SLEPc
Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,
More informationTheorem Let X be a symmetric solution of DR(X) = 0. Let X b be a symmetric approximation to Xand set V := X? X. b If R b = R + B T XB b and I + V B R
Initializing Newton's method for discrete-time algebraic Riccati equations using the buttery SZ algorithm Heike Fabender Zentrum fur Technomathematik Fachbereich - Mathematik und Informatik Universitat
More informationEfficient and Accurate Rectangular Window Subspace Tracking
Efficient and Accurate Rectangular Window Subspace Tracking Timothy M. Toolan and Donald W. Tufts Dept. of Electrical Engineering, University of Rhode Island, Kingston, RI 88 USA toolan@ele.uri.edu, tufts@ele.uri.edu
More informationMATRIX AND LINEAR ALGEBR A Aided with MATLAB
Second Edition (Revised) MATRIX AND LINEAR ALGEBR A Aided with MATLAB Kanti Bhushan Datta Matrix and Linear Algebra Aided with MATLAB Second Edition KANTI BHUSHAN DATTA Former Professor Department of Electrical
More informationMath 4153 Exam 3 Review. The syllabus for Exam 3 is Chapter 6 (pages ), Chapter 7 through page 137, and Chapter 8 through page 182 in Axler.
Math 453 Exam 3 Review The syllabus for Exam 3 is Chapter 6 (pages -2), Chapter 7 through page 37, and Chapter 8 through page 82 in Axler.. You should be sure to know precise definition of the terms we
More informationFORTRAN 77 Subroutines for the Solution of Skew-Hamiltonian/Hamiltonian Eigenproblems Part I: Algorithms and Applications 1
SLICOT Working Note 2013-1 FORTRAN 77 Subroutines for the Solution of Skew-Hamiltonian/Hamiltonian Eigenproblems Part I: Algorithms and Applications 1 Peter Benner 2 Vasile Sima 3 Matthias Voigt 4 August
More informationLinear Algebra. Workbook
Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx
More informationExponentials of Symmetric Matrices through Tridiagonal Reductions
Exponentials of Symmetric Matrices through Tridiagonal Reductions Ya Yan Lu Department of Mathematics City University of Hong Kong Kowloon, Hong Kong Abstract A simple and efficient numerical algorithm
More informationISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION
ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION Harald K. Wimmer 1 The set of all negative-semidefinite solutions of the CARE A X + XA + XBB X C C = 0 is homeomorphic
More informationLINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS
LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,
More informationMATH 581D FINAL EXAM Autumn December 12, 2016
MATH 58D FINAL EXAM Autumn 206 December 2, 206 NAME: SIGNATURE: Instructions: there are 6 problems on the final. Aim for solving 4 problems, but do as much as you can. Partial credit will be given on all
More informationETNA Kent State University
Electronic Transactions on Numerical Analysis. Volume 1, pp. 1-11, 8. Copyright 8,. ISSN 168-961. MAJORIZATION BOUNDS FOR RITZ VALUES OF HERMITIAN MATRICES CHRISTOPHER C. PAIGE AND IVO PANAYOTOV Abstract.
More informationQR-decomposition. The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which A = QR
QR-decomposition The QR-decomposition of an n k matrix A, k n, is an n n unitary matrix Q and an n k upper triangular matrix R for which In Matlab A = QR [Q,R]=qr(A); Note. The QR-decomposition is unique
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of
More informationDiagonalizing Matrices
Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B
More informationSolving large scale eigenvalue problems
arge scale eigenvalue problems, Lecture 4, March 14, 2018 1/41 Lecture 4, March 14, 2018: The QR algorithm http://people.inf.ethz.ch/arbenz/ewp/ Peter Arbenz Computer Science Department, ETH Zürich E-mail:
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationEIGENVALUE PROBLEMS (EVP)
EIGENVALUE PROBLEMS (EVP) (Golub & Van Loan: Chaps 7-8; Watkins: Chaps 5-7) X.-W Chang and C. C. Paige PART I. EVP THEORY EIGENVALUES AND EIGENVECTORS Let A C n n. Suppose Ax = λx with x 0, then x is a
More information