Singular-value-like decomposition for complex matrix triples

Christian Mehl        Volker Mehrmann        Hongguo Xu

December

Dedicated to William B. Gragg on the occasion of his 70th birthday

Abstract

The classical singular value decomposition for a matrix A ∈ C^{m×n} is a canonical form for A that also displays the eigenvalues of the Hermitian matrices AA* and A*A. In this paper we develop a corresponding decomposition for A that provides the Jordan canonical forms for the complex symmetric matrices AA^T and A^T A. More generally, we consider the matrix triple (A, G, Ĝ), where G ∈ C^{m×m}, Ĝ ∈ C^{n×n} are invertible and either complex symmetric or complex skew-symmetric, and we provide a canonical form under transformations of the form (A, G, Ĝ) ↦ (X^T A Y, X^T G X, Y^T Ĝ Y), where X, Y are nonsingular.

Keywords: singular value decomposition, canonical form, complex bilinear form, complex symmetric matrix, complex skew-symmetric matrix, Hamiltonian matrix, Takagi factorization

AMS subject classification: 65F15 65L8 65L5 15A21 34A3 93B4

Affiliations: School of Mathematics, University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom, mehl@maths.bham.ac.uk. Technische Universität Berlin, Institut für Mathematik, MA 4-5, Straße des 17. Juni, Berlin, Germany, mehrmann@math.tu-berlin.de. Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA, xu@math.ku.edu; partially supported by the Senior Visiting Scholar Fund of Fudan University Key Laboratory and the University of Kansas General Research Fund allocation #. Part of the work was done while this author was visiting Fudan University and TU Berlin, whose hospitality is gratefully acknowledged. Partially supported by the Deutsche Forschungsgemeinschaft through the DFG Research Center Matheon "Mathematics for key technologies" in Berlin.

1 Introduction

In [3], Bunse-Gerstner and Gragg derived an algorithm for computing the Takagi factorization

    A = U^T Σ U,    U unitary,

of a complex symmetric matrix A^T = A ∈ C^{n×n}. The Takagi factorization is just a special case of the singular value decomposition and combines two important aspects: computation of singular values (i.e., eigenvalues of A*A and AA*) and exploitation of structure with respect to complex bilinear forms (here the symmetry of A is exploited by choosing U and U^T as unitary factors for the singular value decomposition).

These two aspects can be combined in a completely different way. Instead of computing the singular values of a general matrix A ∈ C^{m×n}, and thus revealing the eigenvalues of AA* and A*A, we may ask for a canonical form for A that reveals the eigenvalues of the complex

symmetric matrices AA^T and A^T A. In this paper we compute such a form by solving a more general problem: instead of restricting ourselves to the matrix A, we consider a triple of matrices (A, G, Ĝ) with A ∈ C^{m×n}, G ∈ C^{m×m}, and Ĝ ∈ C^{n×n}, where G and Ĝ are nonsingular and either complex symmetric or complex skew-symmetric. Then we derive canonical forms under transformations of the form

    (A, G, Ĝ) ↦ (A_CF, G_CF, Ĝ_CF) := (X^T A Y, X^T G X, Y^T Ĝ Y)                                (1.1)

with nonsingular matrices X ∈ C^{m×m} and Y ∈ C^{n×n}. This canonical form will allow the determination of the eigenstructure of the pair of structured matrices

    Ĥ = Ĝ^{-1} A^T G^{-1} A,    H = G^{-1} A Ĝ^{-1} A^T,

because we find that

    Y^{-1} Ĥ Y = (Y^{-1} Ĝ^{-1} Y^{-T})(Y^T A^T X)(X^{-1} G^{-1} X^{-T})(X^T A Y) = Ĝ_CF^{-1} A_CF^T G_CF^{-1} A_CF,      (1.2)
    X^{-1} H X = (X^{-1} G^{-1} X^{-T})(X^T A Y)(Y^{-1} Ĝ^{-1} Y^{-T})(Y^T A^T X) = G_CF^{-1} A_CF Ĝ_CF^{-1} A_CF^T.      (1.3)

For the special case G = I_m and Ĝ = I_n we obtain Ĥ = A^T A and H = A A^T, and thus an appropriate canonical form (1.1) will display the eigenvalues of A^T A and A A^T via the identities (1.2) and (1.3). In the general case, if G^T = (−1)^s G and Ĝ^T = (−1)^t Ĝ with s, t ∈ {0, 1}, then the matrices Ĥ and H satisfy

    Ĥ^T Ĝ = (−1)^{s+t} A^T G^{-1} A = (−1)^{s+t} Ĝ Ĥ,    H^T G = (−1)^{s+t} A Ĝ^{-1} A^T = (−1)^{s+t} G H,      (1.4)

i.e., Ĥ and H are either selfadjoint or skew-adjoint with respect to the complex bilinear forms induced by Ĝ and G, respectively. Indeed, setting

    [x, y]_G := y^T G x  for x, y ∈ C^m,    [x, y]_Ĝ := y^T Ĝ x  for x, y ∈ C^n,                  (1.5)

the identities (1.4) can be rewritten as

    [Ĥ x, y]_Ĝ = (−1)^{s+t} [x, Ĥ y]_Ĝ    and    [H x, y]_G = (−1)^{s+t} [x, H y]_G

for all vectors x, y of the appropriate dimensions.

Indefinite inner products and related structured matrices have been intensively studied in the last few decades, with the main focus on real bilinear or complex sesquilinear forms; see in particular [6] and the references therein. In recent years there has also been interest in matrices that are structured with respect to complex bilinear forms, because such matrices do appear in applications such as the frequency analysis of high speed trains [8, 13].

Besides revealing the eigenstructure of the matrices Ĥ and H, the canonical form (1.1) also allows us to determine the eigenstructure of the double-sized structured matrix pencil

    λ [[G, 0],[0, Ĝ]] − [[0, A],[A^T, 0]],

because we have that

    [[X, 0],[0, Y]]^T ( λ [[G, 0],[0, Ĝ]] − [[0, A],[A^T, 0]] ) [[X, 0],[0, Y]] = λ [[G_CF, 0],[0, Ĝ_CF]] − [[0, A_CF],[A_CF^T, 0]].
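For illustration, the identities (1.2) and (1.3) can be checked numerically for a randomly generated triple and transformation. The following NumPy sketch is only a sanity check and not part of the theoretical development; the names A, G, Gh, X, Y mirror the notation above (Gh stands for Ĝ), and the random data is arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 5, 4

    # a random triple (A, G, Ghat) with G, Ghat complex symmetric and (generically) nonsingular
    A  = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    G  = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)); G  = G + G.T
    Gh = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)); Gh = Gh + Gh.T

    # a random transformation as in (1.1)
    X = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A_cf, G_cf, Gh_cf = X.T @ A @ Y, X.T @ G @ X, Y.T @ Gh @ Y

    inv = np.linalg.inv
    Hh = inv(Gh) @ A.T @ inv(G) @ A      # Hhat = Ghat^{-1} A^T G^{-1} A
    H  = inv(G) @ A @ inv(Gh) @ A.T      # H    = G^{-1} A Ghat^{-1} A^T

    # identities (1.2) and (1.3)
    print(np.allclose(inv(Y) @ Hh @ Y, inv(Gh_cf) @ A_cf.T @ inv(G_cf) @ A_cf))
    print(np.allclose(inv(X) @ H  @ X, inv(G_cf) @ A_cf @ inv(Gh_cf) @ A_cf.T))

With G = np.eye(m) and Gh = np.eye(n) the same script reduces to the statement that the transformation displays the eigenvalues of A^T A and A A^T.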

The idea of generalizing the concept of the singular value decomposition to indefinite inner products by considering transformations of the form (1.1) is not new and has been considered in [2] for the case of complex Hermitian forms. The canonical forms presented here are the analogue in the case of complex bilinear forms. This case is more involved, because one has to make a clear distinction between symmetric and skew-symmetric bilinear forms, in contrast to the sesquilinear case, where Hermitian and skew-Hermitian forms are closely related. Indeed, a Hermitian matrix can easily be transformed into a skew-Hermitian matrix by scalar multiplication with the imaginary unit i, but this is not true for complex symmetric matrices. Therefore we have to treat three cases separately: G and Ĝ are both symmetric, both skew-symmetric, or one of the matrices is symmetric and the other skew-symmetric.

A canonical form closely related to the form obtained under the transformation (1.1) has been developed in [11], where transformations of the form

    (B, C) ↦ (X^{-1} B Y, Y^{-1} C X),    B ∈ C^{m×n}, C ∈ C^{n×m},

have been considered. There, a canonical form is constructed that reveals the Jordan structures of the products BC and CB. In our framework this corresponds to a canonical form for the pair of matrices (G^{-1} A, Ĝ^{-1} A^T) rather than for the triple (A, G, Ĝ). In this respect our approach is more general, because the canonical form for the pair (G^{-1} A, Ĝ^{-1} A^T) can be easily read off the canonical form for (A, G, Ĝ), but not vice versa. The approach in [11], on the other hand, focusses on different aspects and allows one to consider pairs (B, C) where the ranks of B and C are distinct. This situation is not covered by the canonical forms obtained in this paper.

The remainder of the paper is organized as follows. In Section 2 we recall the definition of several structured matrices and review their canonical forms. In Section 3 we develop structured factorizations that are needed for the proofs of the results in the following sections. In Sections 4-6 we present the canonical forms for matrix triples (A, G, Ĝ): in Section 4 we consider the case that both G and Ĝ are complex symmetric, in Section 5 we assume that G is complex symmetric and Ĝ is complex skew-symmetric, and Section 6 is devoted to the case that both G and Ĝ are complex skew-symmetric.

Throughout the paper we use the following notation. I_n and 0_n denote the n×n identity and n×n zero matrices, respectively. The m×n zero matrix is denoted by 0_{m×n}, and e_j is the jth column of the identity matrix I_n, or equivalently the jth standard basis vector of C^n. Moreover, R_n denotes the n×n matrix with ones on the anti-diagonal and zeros elsewhere, J_n(λ) denotes the n×n upper triangular Jordan block with eigenvalue λ (ones on the superdiagonal), and

    Σ_{m,n} := [[I_m, 0],[0, −I_n]],    J_n := [[0, I_n],[−I_n, 0]].

The transpose and conjugate transpose of a matrix A are denoted by A^T and A*, respectively. We use A_1 ⊕ ⋯ ⊕ A_k to denote a block diagonal matrix with diagonal blocks A_1, ..., A_k. If A = [a_ij] ∈ C^{n×m} and B ∈ C^{l×k}, then A ⊗ B = [a_ij B] ∈ C^{nl×mk} denotes the Kronecker product of A and B.

2 Matrices structured with respect to complex bilinear forms

Our general theory will cover and generalize results for the following classes of matrices.
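For later experiments, the building blocks R_n, J_n(λ), and J_n just introduced can be generated with the following small NumPy helpers; they are included only for convenience and are not part of the development of the paper.

    import numpy as np

    def R(n):
        """n-by-n reverse identity: ones on the anti-diagonal."""
        return np.fliplr(np.eye(n))

    def jordan(n, lam=0.0):
        """n-by-n upper triangular Jordan block J_n(lambda)."""
        return lam * np.eye(n) + np.diag(np.ones(n - 1), 1)

    def Jskew(n):
        """2n-by-2n skew-symmetric matrix J_n = [[0, I_n], [-I_n, 0]]."""
        I, Z = np.eye(n), np.zeros((n, n))
        return np.block([[Z, I], [-I, Z]])

    # R(3) @ jordan(3, 2.0) is symmetric; this is why R_xi pairs with J_xi(lambda)
    # in the canonical form of Theorem 2.6 below.
    print(np.allclose((R(3) @ jordan(3, 2.0)).T, R(3) @ jordan(3, 2.0)))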

Definition 2.1 Let G ∈ C^{n×n} be invertible and let H, K ∈ C^{n×n} be such that (GH)^T = GH and (GK)^T = −GK.

1. If G is symmetric, then H is called G-symmetric and K is called G-skew-symmetric.

2. If G is skew-symmetric, then H is called G-Hamiltonian and K is called G-skew-Hamiltonian.

Thus, G-symmetric and G-skew-Hamiltonian matrices are selfadjoint in the inner product induced by G, while G-skew-symmetric and G-Hamiltonian matrices are skew-adjoint. Observe that transformations of the form

    (M, G) ↦ (P^{-1} M P, P^T G P),    P ∈ C^{n×n} invertible,

preserve the structure of M with respect to G, i.e., if, for example, M = H is G-Hamiltonian, then P^{-1} H P is P^T G P-Hamiltonian as well. Thus, instead of working with G directly, one may first transform G to a simple form using the Takagi factorization for complex symmetric and complex skew-symmetric matrices, see [9]. This factorization is a special case of the well-known singular value decomposition.

Theorem 2.2 (Takagi's factorization) Let G ∈ C^{n×n} be complex symmetric. Then there exists a unitary matrix U ∈ C^{n×n} such that

    G = U diag(σ_1, ..., σ_n) U^T,

where σ_1 ≥ ⋯ ≥ σ_n ≥ 0.

There is a variant for complex skew-symmetric matrices (see [9]). This result is just a special case of the Youla form [18] for general complex matrices.

Theorem 2.3 (Skew-symmetric analogue of Takagi's factorization) Let K ∈ C^{n×n} be complex skew-symmetric. Then there exists a unitary matrix U ∈ C^{n×n} such that

    K = U ( [[0, r_1],[−r_1, 0]] ⊕ ⋯ ⊕ [[0, r_k],[−r_k, 0]] ⊕ 0_{n−2k} ) U^T,

where r_1, ..., r_k ∈ R \ {0}.

As immediate corollaries we obtain the following well-known results.

Corollary 2.4 Let G ∈ C^{n×n} be complex symmetric and let rank G = r. Then there exists a nonsingular matrix X ∈ C^{n×n} such that

    X^T G X = [[I_r, 0],[0, 0]].

Corollary 2.5 Let G ∈ C^{m×m} be complex skew-symmetric and let rank G = r. Then r is even and there exists a nonsingular matrix X ∈ C^{m×m} such that

    X^T G X = [[J_{r/2}, 0],[0, 0]].
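For illustration, the factorization of Theorem 2.2 can be computed numerically in several ways. The following NumPy sketch uses the real symmetric eigenvalue problem attached to the embedding [[Re G, Im G],[Im G, −Re G]]; it assumes that the matrix is nonsingular (the rank-deficient case needs extra care) and it is not the algorithm of [3], only a sanity check.

    import numpy as np

    def takagi(A):
        """Takagi factorization A = U @ diag(s) @ U.T of a complex symmetric A.
        Sketch only: assumes A is nonsingular; the singular case needs extra care."""
        A = np.asarray(A, dtype=complex)
        n = A.shape[0]
        Ar, Ai = A.real, A.imag
        # A * conj(u) = sigma * u with u = x + i y is equivalent to the real
        # symmetric eigenproblem [[Ar, Ai], [Ai, -Ar]] [x; y] = sigma [x; y].
        M = np.block([[Ar, Ai], [Ai, -Ar]])
        w, V = np.linalg.eigh(M)             # eigenvalues of M come in +/- sigma pairs
        idx = np.argsort(w)[::-1][:n]        # keep the n positive ones
        s = w[idx]
        U = V[:n, idx] + 1j * V[n:, idx]     # automatically unitary for nonsingular A
        return U, s

    # quick check on a random complex symmetric matrix
    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    G = B + B.T
    U, s = takagi(G)
    print(np.allclose(G, U @ np.diag(s) @ U.T), np.allclose(U.conj().T @ U, np.eye(4)))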

Next we review canonical forms for the classes of matrices defined in Definition 2.1. These canonical forms are closely related to the well-known canonical forms for pairs of matrices that are complex symmetric or complex skew-symmetric; see [17] for an overview on this topic. Proofs of the following results can be found, e.g., in [14].

Theorem 2.6 (Canonical form for G-symmetric matrices) Let G ∈ C^{n×n} be symmetric and invertible and let H ∈ C^{n×n} be G-symmetric. Then there exists an invertible matrix X ∈ C^{n×n} such that

    X^{-1} H X = J_{ξ_1}(λ_1) ⊕ ⋯ ⊕ J_{ξ_m}(λ_m),    X^T G X = R_{ξ_1} ⊕ ⋯ ⊕ R_{ξ_m},

where λ_1, ..., λ_m ∈ C are the (not necessarily pairwise distinct) eigenvalues of H.

For the next two results we need additional notation. By Γ_η we denote the η×η matrix with alternating signs on the anti-diagonal, i.e., the matrix whose (i, η+1−i) entry is (−1)^{i−1} for i = 1, ..., η and whose remaining entries are zero:

    Γ_η = antidiag( 1, −1, 1, ..., (−1)^{η−1} ).

Theorem 2.7 (Canonical form for G-skew-symmetric matrices) Let G ∈ C^{n×n} be symmetric and invertible and let K ∈ C^{n×n} be G-skew-symmetric. Then there exists an invertible matrix X ∈ C^{n×n} such that

    X^{-1} K X = K_c ⊕ K_z,    X^T G X = G_c ⊕ G_z,

where

    K_c = K_{c1} ⊕ ⋯ ⊕ K_{c,m_c},    G_c = G_{c1} ⊕ ⋯ ⊕ G_{c,m_c},
    K_z = K_{z1} ⊕ ⋯ ⊕ K_{z,m_o+m_e},    G_z = G_{z1} ⊕ ⋯ ⊕ G_{z,m_o+m_e},

and where the diagonal blocks are given as follows:

1) blocks associated with pairs (λ_j, −λ_j) of nonzero eigenvalues of K:

    K_{cj} = [[J_{ξ_j}(λ_j), 0],[0, −J_{ξ_j}(λ_j)]],    G_{cj} = [[0, R_{ξ_j}],[R_{ξ_j}, 0]],

where λ_j ∈ C \ {0} and ξ_j ∈ N for j = 1, ..., m_c, when m_c > 0;

2) blocks associated with the eigenvalue λ = 0 of K:

    K_{zj} = J_{η_j}(0),    G_{zj} = Γ_{η_j},

where η_j ∈ N is odd, for j = 1, ..., m_o, when m_o > 0, and

    K_{zj} = [[J_{η_j}(0), 0],[0, −J_{η_j}(0)]],    G_{zj} = [[0, R_{η_j}],[R_{η_j}, 0]],

where η_j ∈ N is even, for j = m_o + 1, ..., m_o + m_e, when m_e > 0.

The matrix K has the (not necessarily pairwise distinct) nonzero eigenvalues λ_1, ..., λ_{m_c}, −λ_1, ..., −λ_{m_c}, and the additional eigenvalue 0 if m_o + m_e > 0.

Theorem 2.8 (Canonical form for G-Hamiltonian matrices) Let G ∈ C^{2n×2n} be complex skew-symmetric and invertible and let H ∈ C^{2n×2n} be G-Hamiltonian. Then there exists an invertible matrix X ∈ C^{2n×2n} such that

    X^{-1} H X = H_c ⊕ H_z,    X^T G X = G_c ⊕ G_z,

where

    H_c = H_{c1} ⊕ ⋯ ⊕ H_{c,m_c},    G_c = G_{c1} ⊕ ⋯ ⊕ G_{c,m_c},
    H_z = H_{z1} ⊕ ⋯ ⊕ H_{z,m_o+m_e},    G_z = G_{z1} ⊕ ⋯ ⊕ G_{z,m_o+m_e},

and where the diagonal blocks are given as follows:

1) blocks associated with pairs (λ_j, −λ_j) of nonzero eigenvalues of H:

    H_{cj} = [[J_{ξ_j}(λ_j), 0],[0, −J_{ξ_j}(λ_j)]],    G_{cj} = [[0, R_{ξ_j}],[−R_{ξ_j}, 0]],

where λ_j ∈ C \ {0} with arg(λ_j) ∈ [0, π) and ξ_j ∈ N for j = 1, ..., m_c, when m_c > 0;

2) blocks associated with the eigenvalue λ = 0 of H:

    H_{zj} = [[J_{η_j}(0), 0],[0, −J_{η_j}(0)]],    G_{zj} = [[0, R_{η_j}],[−R_{η_j}, 0]],

where η_j ∈ N is odd, for j = 1, ..., m_o, when m_o > 0, and

    H_{zj} = J_{η_j}(0),    G_{zj} = Γ_{η_j},

where η_j ∈ N is even, for j = m_o + 1, ..., m_o + m_e, when m_e > 0.

The matrix H has the (not necessarily pairwise distinct) nonzero eigenvalues λ_1, ..., λ_{m_c}, −λ_1, ..., −λ_{m_c}, and the additional eigenvalue 0 if m_o + m_e > 0.

Theorem 2.9 (Canonical form for G-skew-Hamiltonian matrices) Let G ∈ C^{2n×2n} be complex skew-symmetric and invertible and let K ∈ C^{2n×2n} be G-skew-Hamiltonian. Then there exists an invertible matrix X ∈ C^{2n×2n} such that

    X^{-1} K X = K_1 ⊕ ⋯ ⊕ K_m,    X^T G X = G_1 ⊕ ⋯ ⊕ G_m,

where

    K_j = [[J_{ξ_j}(λ_j), 0],[0, J_{ξ_j}(λ_j)]],    G_j = [[0, R_{ξ_j}],[−R_{ξ_j}, 0]].

The matrix K has the (not necessarily pairwise distinct) eigenvalues λ_1, ..., λ_m.

The following lemma on the existence and uniqueness of structured square roots of structured matrices will frequently be used.

Lemma 2.10 Let G ∈ C^{n×n} be invertible and let H ∈ C^{n×n} be invertible and such that H^T G = GH.

1. If G ∈ C^{n×n} is complex symmetric (i.e., H ∈ C^{n×n} is G-symmetric), then there exists a square root S ∈ C^{n×n} of H that is a polynomial in H and that satisfies σ(S) ⊆ {z ∈ C : arg(z) ∈ [0, π)}. The square root is uniquely determined by these properties. In particular, S is G-symmetric.

2. If G ∈ C^{n×n} is complex skew-symmetric (i.e., H ∈ C^{n×n} is G-skew-Hamiltonian), then there exists a square root S ∈ C^{n×n} of H that is a polynomial in H and that satisfies σ(S) ⊆ {z ∈ C : arg(z) ∈ [0, π)}. The square root is uniquely determined by these properties. In particular, S is G-skew-Hamiltonian.

Proof. By the discussion in Chapter 6.4 in [10], we obtain for both cases that a square root S of H with σ(S) ⊆ {z ∈ C : arg(z) ∈ [0, π)} exists, is unique, and can be expressed as a polynomial in H. It is straightforward to check that a matrix that is a polynomial in H is again G-symmetric or G-skew-Hamiltonian, respectively.

3 Structured factorizations

In this section we develop basic factorizations that will be needed for computing the canonical forms in Sections 4-6. We start with factorizations for matrices B ∈ C^{m×n} satisfying B^T B = I or B^T B = 0.

Lemma 3.1 If B ∈ C^{m×n} satisfies B^T B = I_n, then m ≥ n and there exists a nonsingular matrix X ∈ C^{m×m} such that

    X^T B = [[I_n],[0]],    X^T X = I_m.

Proof. By assumption, B has full column rank, so there exists B̃ ∈ C^{m×(m−n)} such that X̃ = [B, B̃] ∈ C^{m×m} is invertible. Then

    X̃^T X̃ = [[I_n, B^T B̃],[B̃^T B, B̃^T B̃]],

and with

    X_1 = [[I_n, −B^T B̃],[0, I_{m−n}]]

we have

    (X̃ X_1)^T (X̃ X_1) = [[I_n, 0],[0, B̃^T (I − B B^T) B̃]].

Since X̃ X_1 is nonsingular, so is the complex symmetric matrix B̃^T (I − B B^T) B̃. By Corollary 2.4 there exists a nonsingular matrix X_2 such that

    X_2^T ( B̃^T (I − B B^T) B̃ ) X_2 = I_{m−n}.

With X = X̃ X_1 (I_n ⊕ X_2) we then obtain X^T X = I_m. Note that

    X = [B, B̃] [[I_n, −B^T B̃],[0, I_{m−n}]] (I_n ⊕ X_2) = [B, (I − B B^T) B̃ X_2],

and hence X^T X = I_m implies that X^T B = [[I_n],[0]].
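For illustration, the following NumPy sketch produces a matrix X as in Lemma 3.1 for a given B with B^T B = I_n. It normalizes a basis of the nullspace of B^T rather than following the construction in the proof above, and it reuses the takagi helper sketched in Section 2; the nondegeneracy of N^T N used below is exactly the fact established in the proof.

    import numpy as np

    def extend_to_lemma31(B):
        """Given B in C^{m x n} with B^T B = I_n, return X with X^T X = I_m and
        X^T B = [I_n; 0] (Lemma 3.1). Sketch only; relies on takagi() above."""
        m, n = B.shape
        # N spans the nullspace of B^T, so that B^T N = 0
        _, _, vh = np.linalg.svd(B.T)
        N = vh[n:].conj().T
        # the restricted bilinear form W = N^T N is complex symmetric and
        # nonsingular; factor it as W = F F^T and normalize the complement
        W = N.T @ N
        U, s = takagi(W)
        F = U @ np.diag(np.sqrt(s))
        B0 = N @ np.linalg.inv(F).T          # B0^T B0 = I_{m-n} and B^T B0 = 0
        return np.hstack([B, B0])

    # build a test B with B^T B = I_3 from a random tall matrix C
    rng = np.random.default_rng(2)
    C = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
    Uc, sc = takagi(C.T @ C)
    B = C @ np.linalg.inv((Uc @ np.diag(np.sqrt(sc))).T)
    X = extend_to_lemma31(B)
    print(np.allclose(X.T @ X, np.eye(6)),
          np.allclose(X.T @ B, np.vstack([np.eye(3), np.zeros((3, 3))])))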

8 Lemma 32 If B C m n satisfies rank B = n and B T B = then m 2n and there exists a unitary matrix X C m m such that B I n X T B = n X T X = I n I m 2n where B C n n is upper triangular and invertible Proof We present a constructive proof which allows to determine the matrix X numerically We may assume that m 2 otherwise the result holds trivially Let Be 1 = u 1 + iv 1 u 1 v 1 R m Then (using eg a Householder transformation see 7) there exists an orthogonal matrix Q 1 R m m such that Q T 1 u 1 = α 1 e 1 and α 1 R Let ṽ 1 be the vector formed by the trailing m 1 components of Q T 1 v 1 Then (using eg a QR-decomposition see 7) there exists an orthogonal matrix Q 2 R (m 1) (m 1) such that Q T 2 ṽ1 = β 1 and β 1 R With U 1 = Q 1 (1 Q 2 ) then α 1 + iv 11 b 1 U1 T B = iβ 1 b 2 B 1 where B 1 C (m 2) (n 1) b 1 b 2 C 1 (n 1) and v 11 R Since U 1 is real orthogonal we have (U T 1 B) T (U T 1 B) = B T B = and hence (α 1 + iv 11 ) 2 β 2 1 = (α 1 + iv 11 )b 1 + iβ 1 b 2 = B T 1 B 1 + b T 1 b 1 + b T 2 b 2 = n 2 (31) From the first identity in (31) it follows that v 11 = and α 1 = β 1 Since α 1 β 1 we have that α 1 = β 1 > because otherwise we would have that rank B n 1 which is a contradiction With this the last two identities in (31) imply that b 1 = ib 2 B1 T B 1 = and thus α 1 ib 2 U1 T B = iα 1 b 2 B 1 C (m 2) (n 1) B 1 One can easily verify that rank B 1 = n 1 Applying the same procedure inductively to B 1 we obtain the existence of a real orthogonal matrix U 2 such that U2 T B 1 = α 2 ib 3 iα 2 b 3 B 2 B 2 C (m 4) (n 2) Similarly as above we can show that α 2 > and rank B 2 = n 2 8

9 Continuing the procedure we finally obtain a real orthogonal matrix U such that α 1 ib 12 ib 1n iα 1 b 12 b 1n α 2 ib 2n iα 2 b 2n UB = α n iα n and from this we obtain that m 2n Moreover we see that every other row of UB is a multiple by i of the preceding row Thus setting 2 1 i Z 1 = Z = Z 2 1 i 1 Z }{{} 1 I m 2n n letting P be a permutation matrix for which premultiplication has the effect of re-arranging the first 2n rows of a matrix in the order of 1 3 2n n and introducing the unitary matrix X = (P ZU) T we then have and we obtain furthermore that ZZ T 1 = 1 X T B = } {{ } n α 1 ib 12 ib 1n α 2 ib 2n α n I m 2n and X T X = using the fact that U is real orthogonal ie U T U = I I n I n I m 2n Proposition 33 Let B C m n and suppose that rank B = n rank B T B = n n and that δ = n n is the dimension of the null space of B T B Then there exists a nonsingular X C m m such that X T B = B m n n X T X = I n1 where B C n n is nonsingular and n 1 = m n δ I δ I n I δ 9

10 Proof Since B T B is complex symmetric by the assumption and by Corollary 24 there exists a nonsingular matrix Y C n n such that Y T B T In BY = δ Let B C m n be the matrix formed by the leading n columns of BY By Lemma 31 there exists X 1 C m m such that X1 T B In = X1 T X 1 = I m and we obtain that Since we have that X1 T In B BY = 12 B 1 (X1 T BY ) T (X1 T BY ) = Y T B T In BY = δ B 12 = B T 1 B 1 = δ By assumption B has full column rank so this also holds for B 1 C (m n ) δ By Lemma 32 there exists a nonsingular matrix X 2 C (m n ) (m n ) such that T I δ X2 T B 1 = δ X2 T X 2 = I δ I n1 where T C δ δ is nonsingular and n 1 = m n 2δ = m n δ With X 3 = X 1 (I n X 2 ) we then have I n X3 T BY = T δ Iδ XT 3 X 3 = I n I I δ n1 Let P be the permutation that rearranges the block rows of X3 T BY in the order and let X = X 3 P T Then X T BY = XT X = I n1 δ I n T I δ I n I δ Post-multiplying Y 1 to the first of these two equations and setting In B = Y 1 T we have the asserted form In the previous results we have obtained factorizations for matrices B such that B T B is the identity or zero We get similar results if B T J m B = J n or B T J m B = 1

11 Lemma 34 If B C 2m 2n satisfies B T J m B = J n then m n and there exists a nonsingular matrix X C 2m 2m such that I n X T B 1 = I n XT J m X = J m Proof The proof is similar to that for Lemma 31 and is hence omitted Lemma 35 Let b C 2m Then there is a unitary matrix X C 2m 2m such that X T b = αe 1 X T J m X = J m Proof We again present a constructive proof that can be implemented into a numerical algorithm Let b = b T 1 bt 2 T with b 1 b 2 C m and let H 2 C m m be a unitary matrix ( eg a Householder matrix) such that H2 T b 2 = βe 1 With H2 1 b 1 = b 11 b m1 T one then can determine (eg via a QR factorization) a unitary matrix G = 1 b11 b11 β b11 = b β b β 2 such that G T b11 11 β = Note that G T J 2 G = J 2 Next determine a unitary matrix H 1 C m m such that Finally let X = H T 1 b 11 b 21 b m1 T = αe 1 H T 2 H 2 H1 Ĝ H1 T b11 where Ĝ C2m 2m is the unitary matrix obtained by replacing the (1 1) (1 m+1) (m+1 1) and (m + 1 m + 1) elements of the identity matrix I 2m with the corresponding elements of G respectively It is easily verified that X is unitary and satisfies X T b = αe 1 and X T J m X = J m Lemma 36 If B C 2m n satisfies rank B = n and B T J m B = then m n and there exists a unitary matrix X C 2m 2m such that X T B B = X T J m X = J m where B C n n is upper triangular invertible Proof By Lemma 35 there exists a unitary matrix X 1 such that b 11 b T 1 X1 T B = B 22 b T XT 1 J m X 1 = J m 3 B 24 11

12 where b 1 b 3 C n 1 Since rank B = n we have b 11 and from it follows that (X 1 B) T J m (X 1 B) = B T J m B = b 3 = B22 B 24 Applying the same procedure inductively to X T B = B n T J m 1 B22 B 24 where B C n n is upper triangular and invertible B22 B 24 = we obtain a unitary matrix X such that 2m n XT J m X = J m Proposition 37 Let B C 2m n Suppose that rank B = n rank B T J m B = 2n n ie δ = n 2n is the dimension of the null space of B T J m B Then there exists an invertible matrix X C 2m 2m such that X T B = B 2m n n X T J m X = J n1 where B C n n is nonsingular and n 1 = m n δ I δ J n I δ Proof Since B T J m B is complex skew-symmetric by the assumption and Corollary 25 there exists a nonsingular matrix Y C n n such that Y T B T Jn J m BY = δ Let B 1 C 2m 2n be the matrix formed by the leading 2n columns of BY By Lemma 34 there exists a nonsingular X 1 C 2m 2m such that X T 1 B 1 = I n I n XT 1 J m X 1 = J m We have X T 1 BY = I n B 13 B 23 I n B 33 B 43 Since X1 T J mx 1 = J m also implies X 1 J m X1 T = J m from (X1 T BY ) T J m (X1 T BY ) = Y T B T Jn J m BY = δ 12

13 we obtain that B 13 = B 33 = Since B has full column rank so does X 2 C (2m 2n ) (2m 2n ) such that B23 = X T 2 B 43 B B23 B 43 B23 B 43 T J m n B23 B 43 = δ By Lemma 36 there exists an invertible X T 2 J m n X 2 = J m n where B C δ δ is invertible Let P 1 be a permutation that interchanges the second and third block rows of X1 T BY and set X 3 = X 1 P1 T (I 2n X 2 ) Then I 2n 2n X3 T B δ BY = n 1 X3 T J m X 3 = J n J m n δ n 1 where n 1 = m n δ (For convenience we have split the zero block row of X3 T BY into three block rows) Let P be a permutation that changes the block rows of X3 T BY to the order by pre-multiplication and let X = X 3 P T (I 2n1 ( I δ ) I 2n +δ ) Then 2n 1 X T δ BY = T J m X = J n1 I 2n B X 2n δ I δ J n I δ Post-multiplying Y 1 to the first equation and setting B = (I 2n B )Y 1 we have the asserted form In this section we have presented preliminary factorizations that will form the basis in determining the canonical forms in the following sections 4 Canonical form for G Ĝ complex symmetric We start with the case that the matrix A under consideration is square and nonsingular If Σ = U AV is the standard singular decomposition of A then U AA U = V A AV = Σ 2 ie the canonical forms for both AA and A A are just the square of the canonical form for A This fact has a generalization in the case of a matrix triple (A G Ĝ) where G and Ĝ are complex symmetric To start from a square root of the Ĝ-symmetric matrix Ĥ = Ĝ 1 A T G 1 A will be the key strategy in the derivation of the canonical form in the following result Theorem 41 Let A C n n be nonsingular and let G Ĝ Cn n be complex symmetric and nonsingular Then there exist nonsingular matrices X Y C n n such that X T AY = J ξ1 (µ 1 ) J ξm (µ m ) X T GX = R ξ1 R ξm Y T ĜY = R ξ1 R ξm (41) 13

14 where µ j C \ {} arg µ j π) and ξ j N for j = 1 m Moreover for the Ĝ- symmetric matrix Ĥ = Ĝ 1 A T G 1 A and for the G-symmetric matrix H = G 1AĜ 1 A T we have that Y 1 ĤY = Jξ 2 1 (µ 1 ) Jξ 2 m (µ m ) (42) X 1 HX = Jξ 2 1 (µ 1 ) T Jξ 2 m (µ m ) T Moreover the form (41) is unique up to the simultaneous permutation of blocks in the right hand side of (41) Proof By Lemma 21 Ĥ has a unique Ĝ-symmetric square root S Cn n satisfying σ(s) {µ C \ {} : arg(µ) π)} Then by Theorem 26 there exists a nonsingular matrix Ỹ Cn n such that S CF := Ỹ 1 SỸ = J ξ 1 (µ 1 ) J ξm (µ m ) G CF := Ỹ T ĜỸ = R ξ 1 R ξm H CF := Ỹ 1 ĤỸ = J ξ 2 1 (µ 1 ) Jξ 2 m (µ m ) where µ j C \ {} arg µ j π) and ξ j N for j = 1 m (Here the third line immediately follows from Ĥ = S2 ) Using G 1 AĤ = HG 1 A and the fact that G 1 A is nonsingular we find that Ĥ and H are similar Since the canonical form of G-symmetric matrices in Theorem 26 is uniquely determined by the Jordan canonical form we obtain from Theorem 26 that the canonical forms of the pairs (Ĥ Ĝ) and (H G) coincide In particular this implies the existence of a nonsingular matrix X C n n such that H CF = X 1 H X = J 2 ξ 1 (µ 1 ) J 2 ξ m (µ m ) G CF = X T G X = R ξ1 R ξm Finally setting X = G 1 X T and Y = A 1 G XS CF we obtain X T AY = X 1 G 1 AA 1 G XS CF = S CF X T GX = X 1 G 1 GG 1 X T = ( X T G X) 1 = G 1 1CF CF Y T ĜY = S T CF X T GA T ĜA 1 G XS CF = S T CF X T G X X 1 H 1 XSCF = S T CFG CF (H CF ) 1 S CF = G CF S CF (H CF ) 1 S CF = G CF as desired where we used that S CF is G CF -symmetric and that S 2 CF = H CF It is now easy to check that Y 1 ĤY and X 1 HX have the claimed forms Concerning uniqueness we note that the form (41) is already uniquely determined by the Jordan structure of Ĥ and by the restriction µ j C \ {} arg µ j π) The canonical form for the case that A is singular or rectangular is more involved because then the matrices Ĥ and H may be singular as well The key idea in the proof of Theorem 41 was the construction of a Ĝ-symmetric square root of Ĥ but if Ĥ is singular then such a square root need not exist (For example the R n -symmetric nilpotent matrix J n () does not have any square root let alone a R n -symmetric one) A second difficulty comes from the fact that the Jordan structures of Ĥ and H may be different For example if 1 1 A = 1 1 G = R 2 R 2 = 1 1 Ĝ = R 1 R 3 =

15 then we obtain that Ĥ = Ĝ 1 A T G 1 A = 1 1 H = G 1 AĜ 1 A T = 1 1 Here Ĥ has a 1 1 and a 3 3 Jordan block associated with the eigenvalue zero while H has two 2 2 Jordan blocks associated with zero In general we obtain the following result Theorem 42 Let A C m n and let G C m m Ĝ Cn n be complex symmetric and nonsingular Then there exist nonsingular matrices X C m m and Y C n n such that X T AY = A c A z1 A z2 A z3 A z4 X T GX = G c G z1 G z2 G z3 G z4 (43) Y T ĜY = Ĝc Ĝz1 Ĝz2 Ĝz3 Ĝz4 Moreover for the Ĝ-symmetric matrix Ĥ = Ĝ 1 A T G 1 A C n n and for the G-symmetric matrix H = G 1 AĜ 1 A T C m m we have that Y 1 ĤY = Ĥc Ĥz1 Ĥz2 Ĥz3 Ĥz4 X 1 HX = H c H z1 H z2 H z3 H z4 The diagonal blocks in these decompositions have the following forms: ) blocks associated with nonzero eigenvalues of Ĥ and H: A c G c Ĝc have the forms as in (41) and Ĥc H c have the forms as in (42); 1) one block corresponding to n Jordan blocks of size 1 1 of Ĥ and m Jordan blocks of size 1 1 of H associated with the eigenvalue zero: A z1 = m n G z1 = I m Ĝ z1 = I n Ĥ z1 = n H z1 = m where m n N {}; 2) blocks corresponding to a pair of j j Jordan blocks of Ĥ and H associated with the eigenvalue zero: A z2 = G z2 = Ĝ z2 = Ĥ z2 = H z2 = γ 1 J 2 () γ 1 R 2 γ 1 R 2 γ 1 γ 2 J 4 () γ 2 R 4 γ 2 R 4 γ l γ l J 2l () R 2l γ l R 2l J2 2 γ () 2 J4 2 γ () l J2l 2 () γ 1 γ 2 γ l J2 2 ()T J4 2 ()T J2l 2 ()T where γ 1 γ l N {}; thus Ĥ z2 and H z2 both have each 2γ j Jordan blocks of size j j for j = 1 l; 15

16 3) blocks corresponding to a j j Jordan block of Ĥ and a (j + 1) (j + 1) Jordan block of H associated with the eigenvalue zero: A z3 = m 1 I1 m 2 m I2 l 1 Il l (l 1) m 1 m 2 m l 1 G z3 = R l Ĝ z3 = R 2 m 1 R 1 R 3 m 2 R 2 Ĥ z3 = m 1 J 1 () m 2 J 2 () H z3 = m 1 J 2 () T m 2 J 3 () T m l 1 m l 1 m l 1 R l 1 J l 1 () J l () T where m 1 m l 1 N {}; thus Ĥ z3 has m j Jordan blocks of size j j and H z3 has m j Jordan blocks of size (j + 1) (j + 1) for j = 1 l 1; 4) blocks corresponding to a (j + 1) (j + 1) Jordan block of Ĥ and a j j Jordan block of H associated with the eigenvalue zero: A z4 = n 1 G z4 = Ĝ z4 = Ĥ z4 = I1 1 2 n 2 n 1 R 1 n 1 R 2 n 1 n I2 2 3 l 1 n 2 R 2 n 2 R 3 J 2 () n 2 J 3 () H z4 = n 1 J 1 () T n 2 J 2 () T Il 1 n l 1 n l 1 n l 1 n l 1 R l 1 R l J l () (l 1) l J l 1 () T where n 1 n l 1 N {}; thus Ĥ z4 has n j Jordan blocks of size (j + 1) (j + 1) and H z4 has n j Jordan blocks of size j j for j = 1 l 1; For the eigenvalue zero the matrices Ĥ and H have 2γ j+m j +n j 1 respectively 2γ j +m j 1 +n j Jordan blocks of size j j for j = 1 l where m l = n l = and where l is the maximum of the indices of Ĥ and H (Here index refers to the size of the largest Jordan block associated with the eigenvalue zero) Moreover the form (43) is unique up to simultaneous block permutation of the blocks in the diagonal blocks of the right hand side of (43) Proof The proof is very long and technical and is therefore postponed to the Appendix We highlight that the numbers m and n in 1) of Theorem 42 are allowed to be zero This has the effect that there may occur rectangular matrices with a total number of zero rows or columns in the canonical form We illustrate this phenomenon with the following example 16

17 Example 43 Consider the two non-equivalent triples A 1 = 1 G 1 = 1 1 Ĝ 1 = 1 and A 2 = 1 G 2 = 1 1 Ĝ 2 = 1 The first example is just one block of type 4) in Theorem 42 Indeed forming the products Ĥ 1 = Ĝ 1 1 AT 1 G A = H 1 = G 1 1 AĜ 1 1 AT 1 = we see that as predicted by Theorem 42 Ĥ 1 has only one Jordan block of size 2 associated with the eigenvalue λ = whereas H 1 has one Jordan block of size 1 associated with λ = The situation is different in the second case Here we obtain Ĥ 2 = Ĝ 1 2 AT 2 G 1 2 A = 1 H 2 = G 1 2 AĜ 1 2 AT 2 = 1 ie Ĥ 2 has two Jordan blocks of size 1 one associated with λ = and a second one associated with λ = 1 while H 2 has one Jordan block of size 1 associated with λ = 1 Here the triple (A 2 G 2 Ĝ2) is in canonical form consisting of one block of type 1) and size 1 and of one block of type ): A 2 = 1 G 2 = 1 Ĝ 2 = 1 1 Remark 44 Theorem 42 in particular covers the special case G = I m and Ĝ = I n ie the case that Ĥ = AT A and H = AA T In comparison to the standard singular values of a matrix A C m n which are σ 1 σ min(mn) and which are the square roots of the eigenvalues of AA and A A we now obtain the transpose singular values of A according to Theorem 42 as J ξ1 (µ 1 ) m n J 2p1 () Iq1 I r1 where µ j arg(µ j ) π) and ξ j p j q j r j N Theorem 42 displays how thee blocks are related to the eigenvalues and Jordan structures of AA T and A T A The canonical form of A in Theorem 42 together with the canonical forms for AA T and A T A in the special case G = I m Ĝ = I n can also be deduced from Theorem 5 in 11 where the canonical form for a pair (B C) B C m n C C n m under the transformation (B C) (X 1 BY Y 1 CX) X Y nonsingular is given Setting then B = A and C = A T then yields the desired form The result of Theorem 42 however gives additional information on the transformation matrices X and Y because we also have a canonical form for X T X = X T GX and Y T Y = Y T ĜY A well known result by Flanders 4 completely describes the Jordan structures of the products BC and CB where B C m n and C C n m Recall that the partial multiplicities of an eigenvalue λ of a matrix M C n n are just the sizes of the Jordan blocks associated with λ in the Jordan canonical form for M 17

18 Theorem 45 (4) For M C m m and N C n n the following conditions are equivalent: 1) There exist matrices B C m n and C C n m such that M = BC and N = CB 2) M and N satisfy the Flanders condition ie i) M and N have the same nonzero eigenvalues and their algebraic geometric and partial multiplicities coincide ii) If (τ i ) i N is the monotonically decreasing sequence of partial multiplicities of M associated with the eigenvalue zero made infinite by adjunction of zeros and if (ζ i ) i N is the corresponding sequence of N then τ i ζ i 1 for all i N With the canonical form of Theorem 42 we are now able to prove a specialization of Theorem 45 for the case of complex symmetric matrices Theorem 46 For M C m m and N C n n the following conditions are equivalent: 1) There exists a matrix A C m n such that M = AA T and N = A T A 2) M and N are symmetric and satisfy the Flanders condition ie i) and ii) in Theorem 45 as well as iii) Let φ k be the number of indices j for which τ j = ζ j = k where (τ i ) i N and (ζ i ) i N are the sequences as in Theorem 45 and let k 1 > > k ν be the numbers k N for which φ k is odd If ν is even then for j = 1 ν 2 we have that φ k for all k with k 2j 1 k k 2j (Here κ denotes the smallest integer larger or equal to κ and we set k ν+1 := 1 in the case that ν is odd) Proof 1) 2) : Let H = M = AA T and Ĥ = N = AT A and let ω j and ˆω j denote the number of Jordan blocks of size j j associated with the eigenvalue zero of H and Ĥ respectively Using the same notation as in Theorem 42 we obtain that ω j = 2γ j + n j + m j 1 and ˆω j = 2γ j + m j + n j 1 j = 1 l Assume without loss of generality that m l 1 n l 1 Since m l = n l = we find that the first 2γ l + n l 1 entries in the sequences (τ i ) i N and (ζ i ) i N are given by l which implies φ l = 2γ l + n l 1 The sequence (τ i ) has m l 1 n l 1 more entries equal to l that are paired to m l 1 n l 1 entries l 1 in (ζ i ) Since then there are 2γ l 1 + n l 1 + n l 2 more entries l 1 in (ζ i ) and 2γ l 1 + n l 1 + m l 2 entries l 1 in (τ i ) we obtain that φ l 1 = 2γ l 1 + n l 1 + min(m l 2 n l 2 ) Continuing the counting in the way just described finally yields φ j = 2γ j + min(m j n j ) + min(m j 1 n j 1 ) j = 1 l (44) If ν = then there is nothing to prove so assume ν 1 Since = min(m l n l ) is even as well as φ l φ k1 +1 we obtain from (44) that min(m j 1 n j 1 ) is even for j > k 1 and min(m k1 1 n k1 1) is odd Clearly we must then have that min(m k 1 n k 1 ) is odd for all k satisfying k 1 > k > k 2 In particular this implies φ k for all such k as well as φ k1 and φ k2 If ν 2 we are done Otherwise min(m k2 1 n k2 1) is even and we can repeat the argument for k 2j 1 k k 2j for j = 2 ν 2 18

19 2) 1) : Let l be the largest entry that appears in one of the sequences (τ i ) i N and (ζ i ) i N First let us assume that ν = or k 1 = 1 ie φ j is even for j = 2 l Then we build up a matrix triple (à G Ĝ) as a direct sum of blocks as follows: for the φ k indices j with τ j = ζ j = k k 1 we take φ k /2 blocks as in 2) of Theorem 42 and for each index j with τ j ζ j = 1 τ j ζ j we take a block as in 3) respectively 4) of Theorem 42 Finally if there are say m indices in (τ i ) left with τ i = 1 and n indices in (ζ i ) left with ζ i = 1 then we take a block of size m n as in 1) of Theorem 42 Then by construction and Theorem 42 the matrices H = G 1AĜ 1 A T and Ĥ = Ĝ 1 A T G 1 A have the same Jordan canonical form as M and N respectively Let Z Ẑ be such that ZT GZ = I m and Ẑ T ĜẐ = I n Then setting  = ZT ÃẐ we find that ÂÂT = Z 1 HZ and ÂT  = Ẑ 1 ĤẐ are symmetric Thus there exist orthogonal matrices S and T such that SÂÂT S 1 = M and T ÂT ÂT 1 = N (This well-known fact is a direct consequence of Theorem 26) Then A = SÂT 1 satisfies M = AA T and N = A T A Next assume that k 1 > 1 Then 2) guarantees that for each k with k 2j 1 > k > k 2j j = 1 ν 2 we have that φ k 2 This allows us to modify the sequences (τ i ) and (ζ i ) to (not necessarily monotonically decreasing) sequences ( τ i ) and ( ζ i ) such that the number of indices j with τ j = ζ j = k is even for all k > 1 In order to avoid too complicated notation we explain the modification only for the case ν 2 The general case is analogous Thus if (τ i ) = k 1 k }{{} 1 k 1 1 k 1 1 }{{} φ k1 φ k1 1 (ζ i ) = k 1 k }{{} 1 k 1 1 k 1 1 }{{} φ k1 φ k1 1 k k }{{} φ k2 +1 k k } {{ } φ k2 +1 then the corresponding parts in the sequences ( τ i ) and ( ζ i ) take the forms k 2 k }{{} 2 φ k2 k 2 k }{{} 2 φ k2 ( τ i ) = k 1 k }{{} 1 k 1 1 k 1 1 k }{{} k k }{{} 2 k 2 Ξ }{{} τ φ k1 1 φ k1 1 2 φ k φ k2 1 ( ζ i ) = k 1 k }{{} 1 k 1 1 k 1 1 k }{{} k k }{{} 2 k 2 Ξ }{{} ζ φ k1 1 φ k1 1 2 φ k φ k2 1 where Ξ τ = k 1 k 1 1 k 1 1 k 1 2 k k 2 ; Ξ ζ = k 1 1 k 1 k 1 2 k 1 1 k 2 k When the sequences ( τ i ) and ( ζ i ) have been constructed we can apply the strategy of the previous paragraph to construct A such that M = AA T and N = A T A Example 47 Let 1 i M 1 = i 1 1 i N 1 = i 1 M 2 = 1 i i 1 N 2 = 1 i i 1 ie M 1 and N 1 are similar to a Jordan block of size 2 2 associated with zero Then (τ (1) i ) i N = (ζ (1) i ) i N = (2 ) and (τ (2) i ) i N = (ζ (2) i ) i N = (2 1 ) are the sequences as in Theorem 45 associated to M 1 N 1 and M 2 N 2 respectively In both cases we have φ 2 = 1 which is odd The sequences associated to M 1 and N 1 do not satisfy condition iii) 19

20 in Theorem 46 while the sequences associated with M 2 and N 2 do Indeed there does not exist a matrix A 1 such that M 1 = A 1 A T 1 and N 1 = A T 1 A 1 because setting a b A 1 = c d gives a 2 + b 2 ac + bd ac + bd c 2 + d 2 1 i = i 1 and a 2 + c 2 ab + cd ab + cd b 2 + d 2 = 1 i i 1 which implies d = ±a If d = a then i = ac ba = ab ca a contradiction But d = a implies a 2 = bc because det A 1 = det M 1 = Moreover we then have bc + b 2 = 1 = c 2 bc which implies (b + c) 2 = ie c = b contradicting a 2 + b 2 = 1 a 2 + c 2 On the other hand we have M 2 = AA T and N 2 = A T A where A = Here the canonical form for the triple (A I 3 I 3 ) is given by i i 1 5 Condensed forms for G complex symmetric Ĝ complex skew-symmetric In this section we study the canonical forms for the case that G is complex symmetric and Ĝ complex skew-symmetric Again we start with the canonical form for the case that A is quadratic and nonsingular We cannot directly use our key strategy from the proof of Theorem 42 and construct a square root of Ĥ because now Ĥ is Ĝ-Hamiltonian A Ĝ- Hamiltonian matrix can neither have a Ĝ-Hamiltonian nor a Ĝ-skew-Hamiltonian square root because the squares of matrices of such type are always Ĝ-skew-Hamiltonian Therefore we will start from the fourth root of the Ĝ-skew-Hamiltonian matrix Ĥ2 instead Theorem 51 Let A G Ĝ C2n 2n be nonsingular and let G be complex symmetric and Ĝ be complex skew-symmetric Then there exists nonsingular matrices X Y C 2n 2n such that X T Jξ1 (µ AY = 1 ) Jξm (µ m ) J ξ1 (µ 1 ) J ξm (µ m ) X T Rξ1 R GX = ξm (51) R ξ1 R ξm Y T Rξ1 R ĜY = ξm R ξ1 R ξm where µ j C \ {} arg µ j π/2) and ξ j N for j = 1 m Moreover for the Ĝ- Hamiltonian matrix Ĥ = Ĝ 1 A T G 1 A and the G-skew-symmetric matrix H = G 1 AĜ 1 A T 2

21 we have that Y 1 ĤY = X 1 HX = J 2 ξ1 (µ 1 ) Jξ 2 1 (µ 1 ) J 2 ξ1 (µ 1 ) Jξ 2 1 (µ 1 ) J 2 ξm (µ m ) J 2 ξ m (µ m ) T T J 2 ξm (µ m ) J 2 ξ m (µ m ) (52) Proof By Theorem 28 there exists a nonsingular matrix Y C n n such that Y 1 Jξ1 (λ ĤY = 1 ) Jξm (λ m ) J ξ1 (λ 1 ) J ξm (λ m ) Y T Rξ1 R ĜY = ξm R ξ1 R ξm where λ j C \ {} arg(λ j ) π) and ξ j N for j = 1 m Next construct the matrix S such that Y 1 SY = Jξ1 (λ 1 ) J ξ1 (λ 1 ) Jξm (λ m ) J ξm (λ m ) It is easily verified that S is Ĝ-skew-Hamiltonian that it satisfies S 2 = Ĥ2 and that we have σ( S) {z C \ {} : arg(z) π)} Thus by the uniqueness property of Lemma 21 we obtain that S is a polynomial in Ĥ2 Moreover applying Lemma 21 once more we obtain that S has a unique square root S C n n being a polynomial in S and satisfying σ(s) {z C \ {} : arg(z) π)} namely Y 1 J SY = ξ1 (λ 1 ) 1 2 J ξm (λ m ) 1 2 J ξ1 (λ 1 ) 1 2 J ξm (λ m ) 1 2 In fact we must have σ(s) {z C \ {} : arg(z) π/2)} because otherwise S would have eigenvalues λ j with arg(λ j ) π 2π) Let µ 2 j = λ j and arg(µ j ) π/2) By Theorem 29 we then obtain that there exists a nonsingular matrix Ỹ C n n such that S CF := Ỹ 1 SỸ = Jξ1 (µ 1 ) Jξm (µ m ) J ξ1 (µ 1 ) J ξm (µ m ) Ĝ CF := Ỹ T ĜỸ = Rξ1 R ξm R ξ1 R ξm Moreover using G 1 AĤ = HG 1 A and the fact that G 1 A is nonsingular we find that Ĥ and H are similar Thus by Theorem 27 there exists a nonsingular matrix X C n n such that H CF = X 1 H X = G CF = X T G X = J 2 ξ1 (µ 1 ) J 2 Rξ1 R ξ1 ξ 1 (µ 1 ) J 2 ξm (µ m ) Jξ 2 m (µ m ) R ξm R ξm 21

22 Indeed since H is similar to Ĥ it has the eigenvalues λ j = µ 2 j with partial multiplicities ξ j j = 1 m Since the canonical form of G-skew-symmetric matrices in Theorem 27 is uniquely determined by the Jordan canonical form we find that the pairs (H G) and (H CF G CF ) must have the same canonical form Observe that S CF is G CF -symmetric but not a square root of H CF Instead it is easy to check that S CF (H CF ) 1 Iξ1 S CF = I ξ1 Iξm I ξm Using this identity and setting X = G 1 X T and Y = A 1 G XS CF we obtain that X T AY = X 1 G 1 AA 1 G XS CF = S CF X T GX = X 1 G 1 GG 1 X T = ( X T G X) 1 = (G CF ) 1 = G CF Y T ĜY = S T CF X T GA T ĜA 1 G XS CF = S T CF X T G X X 1 H 1 XSCF = S T CFG CF (H CF ) 1 S CF = G CF S CF (H CF ) 1 S CF = ĜCF It is now straightforward to check that Y 1 ĤY and X 1 HX have the claimed forms Concerning uniqueness we note that the form (51) is already uniquely determined by the Jordan structure of Ĥ and by the restriction µ j C \ {} arg µ j π/2) Theorem 52 Let A C m 2n let G C m m be complex symmetric and nonsingular and let Ĝ C2n 2n be complex skew-symmetric and nonsingular Then there exists nonsingular matrices X C m m and Y C 2n 2n such that X T AY = A c A z1 A z2 A z3 A z4 A z5 A z6 X T GX = G c G z1 G z2 G z3 G z4 G z5 G z6 (53) Y T ĜY = Ĝc Ĝz1 Ĝz2 Ĝz3 Ĝz4 Ĝz5 Ĝz6 Moreover for the Ĝ-Hamiltonian matrix Ĥ = Ĝ 1 A T G 1 A C 2n 2n and for the G-skewsymmetric matrix H = G 1 AĜ 1 A T C m m we have that Y 1 ĤY = Ĥc Ĥz1 Ĥz2 Ĥz3 Ĥz4 Ĥz5 Ĥz6 X 1 HX = H c H z1 H z2 H z3 H z4 H z5 H z6 The diagonal blocks in these decompositions have the following forms: ) blocks associated with nonzero eigenvalues of Ĥ and H: A c G c Ĝc have the forms as in (51) and Ĥc H c have the forms as in (52); 1) one block corresponding to 2n Jordan blocks of size 1 1 of Ĥ and m Jordan blocks of size 1 1 of H associated with the eigenvalue zero: A z1 = m 2n G z1 = I m Ĝ z1 = J n Ĥ z1 = 2n H z1 = m where m o n o N {}; 22

23 2) blocks corresponding to a pair of j j Jordan blocks of Ĥ and H associated with the eigenvalue zero: A z2 = G z2 = Ĝ z2 = Ĥ z2 = H z2 = γ 1 J 2 () γ 1 R 2 γ 1 γ 2 γ 2 J 4 () γ 2l+1 γ 2l+1 J 4l+2 () R 4 R 4l+2 R1 γ 2 γ R2 2l+1 R 2l+1 R 1 R 2 R 2l+1 γ 1 γ 2 2 ( Σ 22 )J4 2 γ () 2l+1 ( Σ 2l+12l+1 )J4l+2 2 () γ 1 2 γ 2 Σ 31 J4 2()T γ 2l+1 Σ 2l+22l J 2 4l+2 ()T where γ 1 γ l N {}; thus Ĥ z2 and H z2 both have each 2γ j Jordan blocks of size j j for j = 1 2l + 1; 3) blocks corresponding to a 2j 2j Jordan block of Ĥ and a (2j + 1) (2j + 1) Jordan block of H associated with the eigenvalue zero: A z3 = m 2 I2 m 4 I4 m 2l I2l G z3 = Ĝ z3 = m m 2 R 3 R1 R 1 m m 4 R 5 R2 R 2 (2l+1) 2l m 2l R 2l+1 m 2l Rl R l Ĥ z3 = m 2 ( Σ 11 )J 2 () m 4 ( Σ 22 )J 4 () m 2l ( Σ ll )J 2l () H z3 = m 2 Σ 21 J 3 () T m 4 Σ 32 J 5 () T m 2l Σ l+1l J 2l+1 () T where m 2 m 4 m 2l N {}; thus Ĥ z3 has m 2j Jordan blocks of size 2j 2j and H z3 has m 2j Jordan blocks of size (2j + 1) (2j + 1) for j = 1 l; 4) blocks corresponding to two (2j 1) (2j 1) Jordan blocks of Ĥ and two 2j 2j Jordan 23

24 blocks of H associated with the eigenvalue zero: I 1 I 3 A z4 = m 1 I 1 m 3 I m 1 m 3 G z4 = Ĝ z4 = Ĥ z4 = m 1 H z4 = m R 4 R 8 R1 m 3 R3 R 1 R 3 m 1 2 m 3 J3 () J 3 () T J2 () m 3 T J4 () J 2 () J 4 () I 2l 1 I 2l 1 4l (4l 2) m 2l 1 R 4l m 2l 1 R 2l 1 R 2l 1 J2l 1 () J 2l 1 () m 2l 1 T J2l () J 2l () m 2l 1 m 2l 1 where m 1 m 3 m 2l 1 N {}; thus Ĥ z4 has 2m 2j 1 Jordan blocks of the size (2j 1) (2j 1) and H z4 has 2m 2j 1 Jordan blocks of size 2j 2j for j = 1 l; 5) blocks corresponding to a 2j 2j Jordan block of Ĥ and a (2j 1) (2j 1) Jordan block of H associated with the eigenvalue zero: A z5 = n 1 G z5 = Ĝ z5 = n 1 I1 1 2 n 3 n 1 R 1 R1 R 1 n 3 n I l 1 n 3 R 3 R2 R 2 Ĥ z5 = n 1 ( Σ 11 )J 2 () n 3 ( Σ 22 )J 4 () H z5 = n 1 1 n 3 Σ 21 J 3 () T I2l 1 n 2l 1 n 2l 1 n 2l 1 n 2l 1 Rl R l (2l 1) 2l R 2l 1 ( Σ ll )J 2l () Σ ll 1 J 2l 1 () T where n 1 n 3 n 2l 1 N {}; thus Ĥ z5 has n 2j 1 Jordan blocks of size 2j 2j and H z5 has n 2j 1 Jordan blocks of size (2j 1) (2j 1) for j = 1 l; 6) blocks corresponding to two (2j +1) (2j +1) Jordan blocks of Ĥ and two 2j 2j Jordan blocks of H associated with the eigenvalue zero: A z6 = n 2 I2 n 4 I4 n 2l I2l I I I 2l 4l (4l+2) n 2 n 4 n 2l G z6 = R 4 R 8 R 4l n 2 R3 Ĝ z6 = n 4 R5 n 2l R 2l+1 R 3 R 5 R 2l+1 Ĥ z6 = n 2 J3 () n 4 J5 () n 2l J2l+1 () J 3 () J 5 () J 2l+1 () H z6 = n 2 T J2 () n 4 T J4 () n 2l T J2l () J 2 () J 4 () J 2l () 24

25 where n 2 n 4 n 2l N {}; thus Ĥz6 has 2n 2j Jordan blocks of size (2j+1) (2j+1) and H z6 has 2n 2j Jordan blocks of size 2j 2j for j = 1 l; For the eigenvalue zero the matrices Ĥ and H have 2γ 2j + m 2j + n 2j 1 respectively 2γ 2j + 2m 2j 1 + 2n 2j Jordan blocks of size 2j 2j for j = 1 l and 2γ 2j+1 + 2m 2j+1 + 2n 2j respectively 2γ 2j+1 + m 2j + n 2j+1 Jordan blocks of size (2j + 1) (2j + 1) for j = l Here m 2l+1 = n 2l+1 = and 2l + 1 is the smallest odd number that is larger or equal to the maximum of the indices of Ĥ and H (Here index refers to the maximal size of a Jordan block associated with zero) Moreover the form (43) is unique up to simultaneous block permutation of the blocks in the diagonal blocks of the right hand side of (43) Proof The proof is presented in the Appendix 6 Canonical forms for G Ĝ complex skew-symmetric In this section we finally treat that case that both G and Ĝ are complex skew-symmetric Theorem 61 Let A C 2n 2n be nonsingular and let G Ĝ C2n 2n be nonsingular and complex skew-symmetric Then there exists nonsingular matrices X Y C 2n 2n such that X T Jξ1 (µ AY = 1 ) Jξm (µ m ) J ξ1 (µ 1 ) J ξm (µ m ) X T Rξ1 R GX = ξm (61) R ξ1 R ξm Y T Rξ1 R ĜY = ξm R ξ1 R ξm where µ j C \ {} arg µ j π) and ξ j N for j = 1 m Furthermore for the Ĝ-skew-Hamiltonian matrix Ĥ = Ĝ 1 A T G 1 A and for the G-skew-Hamiltonian matrix H = G 1AĜ 1 A T we have that Y 1 ĤY = X 1 HX = J 2 ξ1 (µ 1 ) J 2 ξ 1 (µ 1 ) J 2 ξ1 (µ 1 ) Jξ 2 1 (µ 1 ) J 2 ξm (µ m ) J 2 ξ m (µ m ) T T J 2 ξm (µ m ) J 2 ξ m (µ m ) (62) Proof The proof proceeds completely analogous to the proof of Theorem 41 Starting with a skew-hamiltonian square root S of Ĥ that is a polynomial in Ĥ (such a square root exists by Lemma 21) and reducing the pair (S; Ĝ) to the canonical form (S CF G CF ) = (Ỹ 1 SỸ Ỹ T ĜỸ ) of Theorem 29 we obtain the existence of a transformation matrix X such that ( X 1 H X X T G X) = (S 2 CF G CF ) Here it is used that by Theorem 29 the canonical form of all three pairs (Ĥ Ĝ) (H G) and (H G) is the same because H and Ĥ are similar Then setting X = G 1 X T and Y = A 1 G XS CF yields the desired result 25

26 We mention that the choice of the transformation matrices X Y in Theorem 61 so that X T GX = Y T ĜY rather than X T GX = Y T ĜY is just a matter of taste A canonical form (with modified values instead of µ 1 µ m in X T AY ) with X T GX = Y T ĜY can be constructed as well but this would lead to the occurrence of distracting minus signs in the forms for H and Ĥ Therefore we prefer to represent the canonical form as we did in Theorem 61 Theorem 62 Let A C 2m 2n and let G C 2m 2m Ĝ C2n 2n be complex skew-symmetric and nonsingular Then there exists nonsingular matrices X C 2m 2m and Y C 2n 2n such that X T AY = A c A z1 A z2 A z3 A z4 X T GX = G c G z1 G z2 G z3 G z4 (63) Y T ĜY = Ĝc Ĝz1 Ĝz2 Ĝz3 Ĝz4 Moreover for the Ĝ-skew-Hamiltonian matrix Ĥ = Ĝ 1 A T G 1 A C 2n 2n and for the G- skew-hamiltonian matrix H = G 1 AĜ 1 A T C 2m 2m we have that Y 1 ĤY = Ĥc Ĥz1 Ĥz2 Ĥz3 Ĥz4 X 1 HX = H c H z1 H z2 H z3 H z4 The diagonal blocks in these decompositions have the following forms: ) blocks associated with nonzero eigenvalues of Ĥ and H: A c G c Ĝc have the forms as in (61) and Ĥc H c have the forms as in (62); 1) one block corresponding to 2n Jordan blocks of size 1 1 of Ĥ and 2m Jordan blocks of size 1 1 of H associated with the eigenvalue zero: A z1 = 2m 2n G z1 = J m Ĝ z1 = J n Ĥ z1 = 2n H z1 = 2m ; 2) blocks corresponding to a pair of j j Jordan blocks of Ĥ and H associated with the eigenvalue zero: γ 1 γ 2 γ l A z2 = J 2 () J 4 () J 2l () γ 1 R1 γ 2 R2 γ l Rl G z2 = R 1 R 2 R l γ 1 R1 γ 2 R2 γ l Rl Ĝ z2 = R 1 R 2 R l γ 1 γ 2 Ĥ z2 = 2 ˆΓ 4 J4 2 γ () l ˆΓ 2l J2l 2 () H z2 = γ 1 γ 2 γ l 2 Γ 4 J4 2 ()T Γ 2l J2l 2 ()T where γ 1 γ l N {} ˆΓ 2j = ( I j 1 ) I 1 ( I j ) and Γ 2j = ( I j ) I 1 ( I j 1 ) for j = 2 l; thus Ĥ z2 and H z2 both have each 2γ j Jordan blocks of size j j for j = 1 l; 26

27 3) blocks corresponding to two j j Jordan blocks of Ĥ and two (j + 1) (j + 1) Jordan blocks of H associated with the eigenvalue zero: I 1 I 2 A z3 = m 1 I 1 m 2 I G z3 = m 1 R2 m 2 R3 R 2 R 3 Ĝ z3 = m 1 R1 m 2 R2 R 1 R 2 m 1 Ĥ z3 = 2 m 2 J2 () J 2 () H z3 = m 1 T J2 () m 2 T J3 () J 2 () J 3 () m l I l 1 I l 1 2l (2l 2) m l 1 Rl R l m l 1 R l 1 R l 1 Jl 1 () J l 1 () m l 1 T Jl () J l () where m 1 m l 1 N {}; thus Ĥ z3 has 2m j Jordan blocks of size j j and H z3 has 2m j Jordan blocks of size (j + 1) (j + 1) for j = 1 l 1; 4) blocks corresponding to two (j + 1) (j + 1) Jordan blocks of Ĥ and two j j Jordan blocks of H associated with the eigenvalue zero: A z4 = n 1 I1 n 2 n I2 l 1 Il 1 I I I l 1 (2l 2) 2l n 1 R1 G z4 = n 2 n R2 l 1 R l 1 R 1 R 2 R l 1 n 1 R2 Ĝ z4 = n 2 n R3 l 1 Rl R 2 R 3 R l Ĥ z4 = n 1 J2 () n 2 n J3 () l 1 Jl () J 2 () J 3 () J l () H z4 = n 1 T J1 () n 2 T n J2 () l 1 T Jl 1 () J 1 () J 2 () J l 1 () where n 1 n l 1 N {}; thus Ĥ z4 has 2n j Jordan blocks of size (j + 1) (j + 1) and H z4 has 2n j Jordan blocks of size j j for j = 1 l 1; Then for the eigenvalue zero the matrices Ĥ and H have 2γ j + 2m j + 2n j 1 respectively 2γ j + 2m j 1 + 2n j Jordan blocks of size j j for j = 1 l Here l is the maximum of the indices of Ĥ and H (Here index refers to the maximal size of a Jordan block associated with the eigenvalue zero) Moreover the form (63) is unique up to simultaneous block permutation of the blocks in the diagonal blocks of the right hand side of (63) Proof The proof is presented in the Appendix m l 1 27

7 Conclusion

We have presented canonical forms for matrix triples (A, G, Ĝ), where G, Ĝ are complex symmetric or complex skew-symmetric and nonsingular. The canonical form for A can be interpreted as a variant of the singular value decomposition, because the form also displays the Jordan canonical forms of the structured matrices Ĥ = Ĝ^{-1} A^T G^{-1} A and H = G^{-1} A Ĝ^{-1} A^T.

Acknowledgement

We thank Leiba Rodman for some valuable comments and in particular for pointing us into the direction of Theorem 4.6.

References

[1] G. Ammar, C. Mehl, and V. Mehrmann. Schur-like forms for matrix Lie groups, Lie algebras and Jordan algebras. Linear Algebra Appl., 287.

[2] Y. Bolschakov and B. Reichstein. Unitary equivalence in an indefinite scalar product: an analogue of singular-value decomposition. Linear Algebra Appl., 222.

[3] A. Bunse-Gerstner and W. B. Gragg. Singular value decompositions of complex symmetric matrices. J. Comput. Appl. Math., 21.

[4] H. Flanders. Elementary divisors of AB and BA. Proc. Amer. Math. Soc., 2.

[5] I. Gohberg, P. Lancaster, and L. Rodman. Matrices and Indefinite Scalar Products. Birkhäuser, Basel.

[6] I. Gohberg, P. Lancaster, and L. Rodman. Indefinite Linear Algebra and Applications. Birkhäuser, Basel, 2005.

[7] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition.

[8] A. Hilliges, C. Mehl, and V. Mehrmann. On the solution of palindromic eigenvalue problems. In Proceedings of the 4th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS), Jyväskylä, Finland, 2004. CD-ROM.

[9] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge.

[10] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge.

[11] R. A. Horn and D. Merino. Contragredient equivalence: a canonical form and some applications. Linear Algebra Appl., 214.

[12] P. Lancaster and L. Rodman. The Algebraic Riccati Equation. Oxford University Press, Oxford.


More information

Trimmed linearizations for structured matrix polynomials

Trimmed linearizations for structured matrix polynomials Trimmed linearizations for structured matrix polynomials Ralph Byers Volker Mehrmann Hongguo Xu January 5 28 Dedicated to Richard S Varga on the occasion of his 8th birthday Abstract We discuss the eigenvalue

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form. Jay Daigle Advised by Stephan Garcia

Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form. Jay Daigle Advised by Stephan Garcia Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form Jay Daigle Advised by Stephan Garcia April 4, 2008 2 Contents Introduction 5 2 Technical Background 9 3

More information

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Nonlinear palindromic eigenvalue problems and their numerical solution

Nonlinear palindromic eigenvalue problems and their numerical solution Nonlinear palindromic eigenvalue problems and their numerical solution TU Berlin DFG Research Center Institut für Mathematik MATHEON IN MEMORIAM RALPH BYERS Polynomial eigenvalue problems k P(λ) x = (

More information

Extending Results from Orthogonal Matrices to the Class of P -orthogonal Matrices

Extending Results from Orthogonal Matrices to the Class of P -orthogonal Matrices Extending Results from Orthogonal Matrices to the Class of P -orthogonal Matrices João R. Cardoso Instituto Superior de Engenharia de Coimbra Quinta da Nora 3040-228 Coimbra Portugal jocar@isec.pt F. Silva

More information

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

More information

Maths for Signals and Systems Linear Algebra in Engineering

Maths for Signals and Systems Linear Algebra in Engineering Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

arxiv: v1 [math.co] 3 Nov 2014

arxiv: v1 [math.co] 3 Nov 2014 SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed

More information

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra

Massachusetts Institute of Technology Department of Economics Statistics. Lecture Notes on Matrix Algebra Massachusetts Institute of Technology Department of Economics 14.381 Statistics Guido Kuersteiner Lecture Notes on Matrix Algebra These lecture notes summarize some basic results on matrix algebra used

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

THE RELATION BETWEEN THE QR AND LR ALGORITHMS

THE RELATION BETWEEN THE QR AND LR ALGORITHMS SIAM J. MATRIX ANAL. APPL. c 1998 Society for Industrial and Applied Mathematics Vol. 19, No. 2, pp. 551 555, April 1998 017 THE RELATION BETWEEN THE QR AND LR ALGORITHMS HONGGUO XU Abstract. For an Hermitian

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

Matrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...

Matrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1... Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J Hefferon E-mail: kapitula@mathunmedu Prof Kapitula, Spring

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Frame Diagonalization of Matrices

Frame Diagonalization of Matrices Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)

More information

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

Definitions, Theorems and Exercises. Abstract Algebra Math 332. Ethan D. Bloch

Definitions, Theorems and Exercises. Abstract Algebra Math 332. Ethan D. Bloch Definitions, Theorems and Exercises Abstract Algebra Math 332 Ethan D. Bloch December 26, 2013 ii Contents 1 Binary Operations 3 1.1 Binary Operations............................... 4 1.2 Isomorphic Binary

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 28, No. 4, pp. 029 05 c 2006 Society for Industrial and Applied Mathematics STRUCTURED POLYNOMIAL EIGENVALUE PROBLEMS: GOOD VIBRATIONS FROM GOOD LINEARIZATIONS D. STEVEN

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

Appendix C Vector and matrix algebra

Appendix C Vector and matrix algebra Appendix C Vector and matrix algebra Concepts Scalars Vectors, rows and columns, matrices Adding and subtracting vectors and matrices Multiplying them by scalars Products of vectors and matrices, scalar

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

Linear Algebra (Review) Volker Tresp 2017

Linear Algebra (Review) Volker Tresp 2017 Linear Algebra (Review) Volker Tresp 2017 1 Vectors k is a scalar (a number) c is a column vector. Thus in two dimensions, c = ( c1 c 2 ) (Advanced: More precisely, a vector is defined in a vector space.

More information

Math 489AB Exercises for Chapter 2 Fall Section 2.3

Math 489AB Exercises for Chapter 2 Fall Section 2.3 Math 489AB Exercises for Chapter 2 Fall 2008 Section 2.3 2.3.3. Let A M n (R). Then the eigenvalues of A are the roots of the characteristic polynomial p A (t). Since A is real, p A (t) is a polynomial

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Chapter 4. Matrices and Matrix Rings

Chapter 4. Matrices and Matrix Rings Chapter 4 Matrices and Matrix Rings We first consider matrices in full generality, i.e., over an arbitrary ring R. However, after the first few pages, it will be assumed that R is commutative. The topics,

More information

ON THE MATRIX EQUATION XA AX = X P

ON THE MATRIX EQUATION XA AX = X P ON THE MATRIX EQUATION XA AX = X P DIETRICH BURDE Abstract We study the matrix equation XA AX = X p in M n (K) for 1 < p < n It is shown that every matrix solution X is nilpotent and that the generalized

More information

A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3].

A = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3]. Appendix : A Very Brief Linear ALgebra Review Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics Very often in this course we study the shapes

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

On the Determinant of Symplectic Matrices

On the Determinant of Symplectic Matrices On the Determinant of Symplectic Matrices D. Steven Mackey Niloufer Mackey February 22, 2003 Abstract A collection of new and old proofs showing that the determinant of any symplectic matrix is +1 is presented.

More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

Means of unitaries, conjugations, and the Friedrichs operator

Means of unitaries, conjugations, and the Friedrichs operator J. Math. Anal. Appl. 335 (2007) 941 947 www.elsevier.com/locate/jmaa Means of unitaries, conjugations, and the Friedrichs operator Stephan Ramon Garcia Department of Mathematics, Pomona College, Claremont,

More information

Notes on Mathematics

Notes on Mathematics Notes on Mathematics - 12 1 Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam 1 Supported by a grant from MHRD 2 Contents I Linear Algebra 7 1 Matrices 9 1.1 Definition of a Matrix......................................

More information

Jordan Structures of Alternating Matrix Polynomials

Jordan Structures of Alternating Matrix Polynomials Jordan Structures of Alternating Matrix Polynomials D. Steven Mackey Niloufer Mackey Christian Mehl Volker Mehrmann August 17, 2009 Abstract Alternating matrix polynomials, that is, polynomials whose coefficients

More information

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,

More information

Lipschitz stability of canonical Jordan bases of H-selfadjoint matrices under structure-preserving perturbations

Lipschitz stability of canonical Jordan bases of H-selfadjoint matrices under structure-preserving perturbations Lipschitz stability of canonical Jordan bases of H-selfadjoint matrices under structure-preserving perturbations T Bella, V Olshevsky and U Prasad Abstract In this paper we study Jordan-structure-preserving

More information

Schur-Like Forms for Matrix Lie Groups, Lie Algebras and Jordan Algebras

Schur-Like Forms for Matrix Lie Groups, Lie Algebras and Jordan Algebras Schur-Like Forms for Matrix Lie Groups, Lie Algebras and Jordan Algebras Gregory Ammar Department of Mathematical Sciences Northern Illinois University DeKalb, IL 65 USA Christian Mehl Fakultät für Mathematik

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

Final Exam Practice Problems Answers Math 24 Winter 2012

Final Exam Practice Problems Answers Math 24 Winter 2012 Final Exam Practice Problems Answers Math 4 Winter 0 () The Jordan product of two n n matrices is defined as A B = (AB + BA), where the products inside the parentheses are standard matrix product. Is the

More information

PRODUCT THEOREMS IN SL 2 AND SL 3

PRODUCT THEOREMS IN SL 2 AND SL 3 PRODUCT THEOREMS IN SL 2 AND SL 3 1 Mei-Chu Chang Abstract We study product theorems for matrix spaces. In particular, we prove the following theorems. Theorem 1. For all ε >, there is δ > such that if

More information

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Fakultät für Mathematik TU Chemnitz, Germany Peter Benner benner@mathematik.tu-chemnitz.de joint work with Heike Faßbender

More information

Contents 1 Introduction 1 Preliminaries Singly structured matrices Doubly structured matrices 9.1 Matrices that are H-selfadjoint and G-selfadjoint...

Contents 1 Introduction 1 Preliminaries Singly structured matrices Doubly structured matrices 9.1 Matrices that are H-selfadjoint and G-selfadjoint... Technische Universitat Chemnitz Sonderforschungsbereich 9 Numerische Simulation auf massiv parallelen Rechnern Christian Mehl, Volker Mehrmann, Hongguo Xu Canonical forms for doubly structured matrices

More information

Low Rank Perturbations of Quaternion Matrices

Low Rank Perturbations of Quaternion Matrices Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 38 2017 Low Rank Perturbations of Quaternion Matrices Christian Mehl TU Berlin, mehl@mathtu-berlinde Andre CM Ran Vrije Universiteit

More information

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Maurcio C. de Oliveira J. William Helton Abstract We study the solvability of generalized linear matrix equations of the Lyapunov type

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Digital Workbook for GRA 6035 Mathematics

Digital Workbook for GRA 6035 Mathematics Eivind Eriksen Digital Workbook for GRA 6035 Mathematics November 10, 2014 BI Norwegian Business School Contents Part I Lectures in GRA6035 Mathematics 1 Linear Systems and Gaussian Elimination........................

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Fundamentals of Unconstrained Optimization

Fundamentals of Unconstrained Optimization dalmau@cimat.mx Centro de Investigación en Matemáticas CIMAT A.C. Mexico Enero 2016 Outline Introduction 1 Introduction 2 3 4 Optimization Problem min f (x) x Ω where f (x) is a real-valued function The

More information

On families of anticommuting matrices

On families of anticommuting matrices On families of anticommuting matrices Pavel Hrubeš December 18, 214 Abstract Let e 1,..., e k be complex n n matrices such that e ie j = e je i whenever i j. We conjecture that rk(e 2 1) + rk(e 2 2) +

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Characterization of half-radial matrices

Characterization of half-radial matrices Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Spectral inequalities and equalities involving products of matrices

Spectral inequalities and equalities involving products of matrices Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department

More information