Singular-value-like decomposition for complex matrix triples


Christian Mehl    Volker Mehrmann    Hongguo Xu

December 17, 2007

Dedicated to William B. Gragg on the occasion of his 70th birthday

Abstract. The classical singular value decomposition for a matrix A in C^{m x n} is a canonical form for A that also displays the eigenvalues of the Hermitian matrices AA* and A*A. In this paper we develop a corresponding decomposition for A that provides the Jordan canonical forms for the complex symmetric matrices AA^T and A^T A. More generally, we consider the matrix triple (A, G, Ĝ), where G in C^{m x m} and Ĝ in C^{n x n} are invertible and either complex symmetric or complex skew-symmetric, and we provide a canonical form under transformations of the form (A, G, Ĝ) -> (X^T A Y, X^T G X, Y^T Ĝ Y), where X, Y are nonsingular.

Keywords: singular value decomposition, canonical form, complex bilinear form, complex symmetric matrix, complex skew-symmetric matrix, Hamiltonian matrix, Takagi factorization

AMS subject classification: 65F15, 65L80, 65L05, 15A21, 34A30, 93B40

Author addresses: Christian Mehl, School of Mathematics, University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom; email: mehl@maths.bham.ac.uk. Volker Mehrmann, Technische Universität Berlin, Institut für Mathematik, MA 4-5, Straße des 17. Juni 136, 10623 Berlin, Germany; email: mehrmann@math.tu-berlin.de. Hongguo Xu, Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA; email: xu@math.ku.edu. Hongguo Xu was partially supported by the Senior Visiting Scholar Fund of Fudan University Key Laboratory and by University of Kansas General Research Fund allocation # 231717; part of the work was done while this author was visiting Fudan University and TU Berlin, whose hospitality is gratefully acknowledged. The work was partially supported by the Deutsche Forschungsgemeinschaft through the DFG Research Center Matheon "Mathematics for key technologies" in Berlin.

1 Introduction

In [3], Bunse-Gerstner and Gragg derived an algorithm for computing the Takagi factorization

    A = U Σ U^T,    U unitary,

of a complex symmetric matrix A = A^T in C^{n x n}. The Takagi factorization is just a special case of the singular value decomposition, and it combines two important aspects: the computation of singular values (i.e., eigenvalues of A*A and AA*) and the exploitation of structure with respect to complex bilinear forms (here the symmetry of A is exploited by choosing U and U^T as the unitary factors of the singular value decomposition).

These two aspects can be combined in a completely different way. Instead of computing the singular values of a general matrix A in C^{m x n}, and thus revealing the eigenvalues of AA* and A*A, we may ask for a canonical form for A that reveals the eigenvalues of the complex

symmetric matrices AA^T and A^T A. In this paper we compute such a form by solving a more general problem: instead of restricting ourselves to the matrix A, we consider a triple of matrices (A, G, Ĝ) with A in C^{m x n}, G in C^{m x m}, and Ĝ in C^{n x n}, where G and Ĝ are nonsingular and either complex symmetric or complex skew-symmetric. We then derive canonical forms under transformations of the form

    (A, G, Ĝ) -> (A_CF, G_CF, Ĝ_CF) := (X^T A Y, X^T G X, Y^T Ĝ Y)    (1.1)

with nonsingular matrices X in C^{m x m} and Y in C^{n x n}. This canonical form allows the determination of the eigenstructure of the pair of structured matrices

    Ĥ = Ĝ^{-1} A^T G^{-1} A,    H = G^{-1} A Ĝ^{-1} A^T,

because we find that

    Y^{-1} Ĥ Y = (Y^{-1} Ĝ^{-1} Y^{-T})(Y^T A^T X)(X^{-1} G^{-1} X^{-T})(X^T A Y) = Ĝ_CF^{-1} A_CF^T G_CF^{-1} A_CF,    (1.2)
    X^{-1} H X = (X^{-1} G^{-1} X^{-T})(X^T A Y)(Y^{-1} Ĝ^{-1} Y^{-T})(Y^T A^T X) = G_CF^{-1} A_CF Ĝ_CF^{-1} A_CF^T.    (1.3)

For the special case G = I_m and Ĝ = I_n we obtain Ĥ = A^T A and H = A A^T, and thus an appropriate canonical form (1.1) will display the eigenvalues of A^T A and A A^T via the identities (1.2) and (1.3). In the general case, if G^T = (-1)^s G and Ĝ^T = (-1)^t Ĝ with s, t in {0, 1}, then the matrices Ĥ and H satisfy

    Ĥ^T Ĝ = (-1)^{s+t} A^T G^{-1} A = (-1)^{s+t} Ĝ Ĥ,    H^T G = (-1)^{s+t} A Ĝ^{-1} A^T = (-1)^{s+t} G H,    (1.4)

i.e., Ĥ and H are either selfadjoint or skew-adjoint with respect to the complex bilinear forms induced by Ĝ and G, respectively. Indeed, setting

    <x, y>_G = y^T G x,    <x, y>_Ĝ = y^T Ĝ x    (1.5)

for vectors x, y of the appropriate dimensions, the identities (1.4) can be rewritten as

    <Ĥx, y>_Ĝ = (-1)^{s+t} <x, Ĥy>_Ĝ    and    <Hx, y>_G = (-1)^{s+t} <x, Hy>_G.

Indefinite inner products and related structured matrices have been intensively studied in the last few decades, with the main focus on real bilinear or complex sesquilinear forms; see [1, 5, 12, 15] and the references therein, and in particular [6]. In recent years there has also been interest in matrices that are structured with respect to complex bilinear forms, because such matrices appear in applications such as the frequency analysis of high speed trains [8, 13]. Besides revealing the eigenstructure of the matrices Ĥ and H, the canonical form (1.1) also allows us to determine the eigenstructure of the double-sized structured matrix pencil

    λG - A = λ [ G  0 ; 0  Ĝ ] - [ 0  A ; A^T  0 ],

because we have that

    [ X  0 ; 0  Y ]^T ( λ [ G  0 ; 0  Ĝ ] - [ 0  A ; A^T  0 ] ) [ X  0 ; 0  Y ] = λ [ G_CF  0 ; 0  Ĝ_CF ] - [ 0  A_CF ; A_CF^T  0 ].
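The identities (1.2) and (1.3) are easy to check numerically. The following sketch is our own illustration, not part of the paper; it uses random matrices, writes Ĝ as `Gh`, and verifies both identities for a generic square triple:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
rand = lambda: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

A = rand()
G = rand(); G = G + G.T        # complex symmetric (not Hermitian)
Gh = rand(); Gh = Gh + Gh.T    # plays the role of G-hat
X, Y = rand(), rand()          # generic (a.s. nonsingular) transformations

# the transformed triple (1.1)
Acf, Gcf, Ghcf = X.T @ A @ Y, X.T @ G @ X, Y.T @ Gh @ Y
inv = np.linalg.inv

Hhat = inv(Gh) @ A.T @ inv(G) @ A   # Ghat^{-1} A^T G^{-1} A
H = inv(G) @ A @ inv(Gh) @ A.T      # G^{-1} A Ghat^{-1} A^T

# identities (1.2) and (1.3): the transformed triple reproduces the
# similarity transforms of Hhat and H
assert np.allclose(inv(Y) @ Hhat @ Y, inv(Ghcf) @ Acf.T @ inv(Gcf) @ Acf)
assert np.allclose(inv(X) @ H @ X, inv(Gcf) @ Acf @ inv(Ghcf) @ Acf.T)
print("identities (1.2) and (1.3) verified")
```

Replacing `G` and `Gh` by identity matrices reduces `Hhat` and `H` to A^T A and A A^T, the special case discussed above.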

The idea of generalizing the concept of the singular value decomposition to indefinite inner products by considering transformations of the form (1.1) is not new; it was considered in [2] for the case of complex Hermitian forms. The canonical forms presented here are the analogue for the case of complex bilinear forms. This case is more involved, because one has to make a clear distinction between symmetric and skew-symmetric bilinear forms, in contrast to the sesquilinear case, where Hermitian and skew-Hermitian forms are closely related. Indeed, a Hermitian matrix can easily be transformed into a skew-Hermitian matrix by scalar multiplication with the imaginary unit i, but this is not true for complex symmetric matrices. Therefore, we have to treat three cases separately: that G and Ĝ are both symmetric, both skew-symmetric, or that one of the matrices is symmetric and the other skew-symmetric.

A canonical form closely related to the form obtained under the transformation (1.1) has been developed in [11], where transformations of the form

    (B, C) -> (X^{-1} B Y, Y^{-1} C X),    B in C^{m x n}, C in C^{n x m},

are considered and a canonical form is constructed that reveals the Jordan structures of the products BC and CB. In our framework this corresponds to a canonical form for the pair of matrices (G^{-1}A, Ĝ^{-1}A^T) rather than for the triple (A, G, Ĝ). In this sense our approach is more general, because the canonical form for the pair (G^{-1}A, Ĝ^{-1}A^T) can easily be read off the canonical form for (A, G, Ĝ), but not vice versa. The approach in [11], on the other hand, focuses on different aspects and makes it possible to consider pairs (B, C) in which the ranks of B and C are distinct; this situation is not covered by the canonical forms obtained in this paper.

The remainder of the paper is organized as follows. In Section 2 we recall the definitions of several classes of structured matrices and review their canonical forms. In Section 3 we develop structured factorizations that are needed for the proofs of the results in the following sections.
In Sections 4-6 we present the canonical forms for matrix triples (A, G, Ĝ): in Section 4 we consider the case that both G and Ĝ are complex symmetric, in Section 5 we assume that G is complex symmetric and Ĝ is complex skew-symmetric, and Section 6 is devoted to the case that both G and Ĝ are complex skew-symmetric.

Throughout the paper we use the following notation. I_n and 0_n denote the n x n identity and zero matrices, respectively; the m x n zero matrix is denoted by 0_{m x n}; and e_j is the j-th column of the identity matrix I_n or, equivalently, the j-th standard basis vector of C^n. Moreover, we denote by R_n the n x n matrix with ones on the anti-diagonal and zeros elsewhere, by Σ_{m,n} the m x n matrix with ones on its main diagonal and zeros elsewhere, and we set

    J_n = [ 0  I_n ; -I_n  0 ],

while J_n(λ) denotes the n x n upper triangular Jordan block with eigenvalue λ, i.e., with λ on the diagonal and ones on the superdiagonal. The transpose and conjugate transpose of a matrix A are denoted by A^T and A*, respectively. We use A_1 ⊕ ... ⊕ A_k to denote the block diagonal matrix with diagonal blocks A_1, ..., A_k. If A = [a_ij] in C^{n x m} and B in C^{l x k}, then A ⊗ B = [a_ij B] in C^{nl x mk} denotes the Kronecker product of A and B.

2 Matrices structured with respect to complex bilinear forms

Our general theory will cover and generalize results for the following classes of matrices.

Definition 2.1. Let G in C^{n x n} be invertible, and let H, K in C^{n x n} be such that (GH)^T = GH and (GK)^T = -GK.

1. If G is symmetric, then H is called G-symmetric and K is called G-skew-symmetric.
2. If G is skew-symmetric, then H is called G-Hamiltonian and K is called G-skew-Hamiltonian.

Thus, G-symmetric and G-skew-Hamiltonian matrices are selfadjoint in the inner product induced by G, while G-skew-symmetric and G-Hamiltonian matrices are skew-adjoint. Observe that transformations of the form

    (M, G) -> (P^{-1} M P, P^T G P),    P in C^{n x n} invertible,

preserve the structure of M with respect to G; if, for example, M = H is G-Hamiltonian, then P^{-1}HP is P^TGP-Hamiltonian as well. Thus, instead of working with G directly, one may first transform G to a simple form using the Takagi factorization for complex symmetric matrices and its analogue for complex skew-symmetric matrices; see [3, 9, 16]. This factorization is a special case of the well-known singular value decomposition.

Theorem 2.2 (Takagi factorization). Let G in C^{n x n} be complex symmetric. Then there exists a unitary matrix U in C^{n x n} such that

    G = U diag(σ_1, ..., σ_n) U^T,

where σ_1 >= ... >= σ_n >= 0.

There is a variant for complex skew-symmetric matrices (see [9]). This result is just a special case of the Youla form [18] for general complex matrices.

Theorem 2.3 (Skew-symmetric analogue of the Takagi factorization). Let K in C^{n x n} be complex skew-symmetric. Then there exists a unitary matrix U in C^{n x n} such that

    K = U ( [ 0  r_1 ; -r_1  0 ] ⊕ ... ⊕ [ 0  r_k ; -r_k  0 ] ⊕ 0_{n-2k} ) U^T,

where r_1, ..., r_k in R \ {0}.

As immediate corollaries we obtain the following well-known results.

Corollary 2.4. Let G in C^{n x n} be complex symmetric with rank G = r. Then there exists a nonsingular matrix X in C^{n x n} such that

    X^T G X = [ I_r  0 ; 0  0 ].

Corollary 2.5. Let G in C^{n x n} be complex skew-symmetric with rank G = r. Then r is even and there exists a nonsingular matrix X in C^{n x n} such that

    X^T G X = [ J_{r/2}  0 ; 0  0 ].
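Theorem 2.2 also suggests a simple numerical procedure. The following sketch is our own illustration (not the algorithm of [3]); it extracts a Takagi factorization from a standard SVD under the assumption that the singular values of A are distinct and nonzero, in which case A = W S V^H and A = A^T force conj(V) = W D for a diagonal unitary D, and U = W D^{1/2} works:

```python
import numpy as np

def takagi(A):
    """Takagi factorization A = U @ np.diag(s) @ U.T for complex symmetric A.

    Minimal sketch assuming distinct nonzero singular values; repeated
    singular values would require a blockwise variant.
    """
    W, s, Vh = np.linalg.svd(A)
    # D = W^H conj(V) is (numerically) a diagonal unitary matrix
    D = np.diag(W.conj().T @ Vh.T)
    U = W * np.sqrt(D)          # scale column j of W by sqrt(D[j])
    return U, s

# usage: a random complex symmetric matrix
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = A + A.T
U, s = takagi(A)
print(np.allclose(U @ np.diag(s) @ U.T, A))
```

Either branch of the square root of D[j] may be taken, since the two choices differ only by a sign that cancels in U Σ U^T.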

Next we review canonical forms for the classes of matrices defined in Definition 2.1. These canonical forms are closely related to the well-known canonical forms for pairs of matrices that are complex symmetric or complex skew-symmetric; see [17] for an overview of this topic. Proofs of the following results can be found, e.g., in [14].

Theorem 2.6 (Canonical form for G-symmetric matrices). Let G in C^{n x n} be symmetric and invertible, and let H in C^{n x n} be G-symmetric. Then there exists an invertible matrix X in C^{n x n} such that

    X^{-1} H X = J_{ξ_1}(λ_1) ⊕ ... ⊕ J_{ξ_m}(λ_m),    X^T G X = R_{ξ_1} ⊕ ... ⊕ R_{ξ_m},

where λ_1, ..., λ_m in C are the (not necessarily pairwise distinct) eigenvalues of H.

For the next two results we need additional notation. By Γ_η we denote the η x η matrix with alternating signs on the anti-diagonal,

    Γ_η = anti-diag( (-1)^0, (-1)^1, ..., (-1)^{η-1} ),

i.e., the (i, η+1-i) entry of Γ_η is (-1)^{i-1}.

Theorem 2.7 (Canonical form for G-skew-symmetric matrices). Let G in C^{n x n} be symmetric and invertible, and let K in C^{n x n} be G-skew-symmetric. Then there exists an invertible matrix X in C^{n x n} such that

    X^{-1} K X = K_c ⊕ K_z,    X^T G X = G_c ⊕ G_z,

where

    K_c = K_{c1} ⊕ ... ⊕ K_{c m_c},    G_c = G_{c1} ⊕ ... ⊕ G_{c m_c},
    K_z = K_{z1} ⊕ ... ⊕ K_{z (m_o+m_e)},    G_z = G_{z1} ⊕ ... ⊕ G_{z (m_o+m_e)},

and where the diagonal blocks are given as follows:

1) blocks associated with pairs (λ_j, -λ_j) of nonzero eigenvalues of K:

    K_{cj} = [ J_{ξ_j}(λ_j)  0 ; 0  -J_{ξ_j}(λ_j) ],    G_{cj} = [ 0  R_{ξ_j} ; R_{ξ_j}  0 ],

where λ_j in C \ {0} and ξ_j in N, for j = 1, ..., m_c when m_c > 0;

2) blocks associated with the eigenvalue λ = 0 of K:

    K_{zj} = J_{η_j}(0),    G_{zj} = Γ_{η_j},

where η_j in N is odd, for j = 1, ..., m_o when m_o > 0, and

    K_{zj} = [ J_{η_j}(0)  0 ; 0  -J_{η_j}(0) ],    G_{zj} = [ 0  R_{η_j} ; R_{η_j}  0 ],

where η_j in N is even, for j = m_o+1, ..., m_o+m_e when m_e > 0.

The matrix K has the (not necessarily pairwise distinct) nonzero eigenvalues λ_1, ..., λ_{m_c}, -λ_1, ..., -λ_{m_c} and, if m_o + m_e > 0, the additional eigenvalue 0.
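The structural claims behind these canonical blocks can be checked directly. A small numeric illustration of our own (the helper names `J`, `R`, `Gamma` are ours): a Jordan block J_n(λ) is R_n-symmetric, and for odd η the matrix Γ_η is symmetric while J_η(0) is Γ_η-skew-symmetric, exactly as needed in Theorems 2.6 and 2.7.

```python
import numpy as np

def J(n, lam):
    """Upper triangular n x n Jordan block with eigenvalue lam."""
    return lam * np.eye(n) + np.eye(n, k=1)

def R(n):
    """Anti-identity ('flip') matrix R_n."""
    return np.fliplr(np.eye(n))

def Gamma(n):
    """Anti-diagonal matrix with entries (-1)^0, (-1)^1, ..., (-1)^(n-1)."""
    return np.fliplr(np.diag([(-1.0) ** i for i in range(n)]))

# Theorem 2.6: (R_n J_n(lam))^T = R_n J_n(lam), i.e. J_n(lam) is R_n-symmetric
M = R(4) @ J(4, 2.0 + 1.0j)
assert np.allclose(M.T, M)

# Theorem 2.7, zero blocks of odd size: Gamma_eta is symmetric, and
# (Gamma_eta J_eta(0))^T = -(Gamma_eta J_eta(0))
eta = 5
assert np.allclose(Gamma(eta).T, Gamma(eta))
N = Gamma(eta) @ J(eta, 0.0)
assert np.allclose(N.T, -N)
print("block structure checks passed")
```

For even η the signs flip: Γ_η becomes skew-symmetric, which is why the even-sized single blocks appear in the G-Hamiltonian case below rather than here.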

Theorem 2.8 (Canonical form for G-Hamiltonian matrices). Let G in C^{2n x 2n} be complex skew-symmetric and invertible, and let H in C^{2n x 2n} be G-Hamiltonian. Then there exists an invertible matrix X in C^{2n x 2n} such that

    X^{-1} H X = H_c ⊕ H_z,    X^T G X = G_c ⊕ G_z,

where

    H_c = H_{c1} ⊕ ... ⊕ H_{c m_c},    G_c = G_{c1} ⊕ ... ⊕ G_{c m_c},
    H_z = H_{z1} ⊕ ... ⊕ H_{z (m_o+m_e)},    G_z = G_{z1} ⊕ ... ⊕ G_{z (m_o+m_e)},

and where the diagonal blocks are given as follows:

1) blocks associated with pairs (λ_j, -λ_j) of nonzero eigenvalues of H:

    H_{cj} = [ J_{ξ_j}(λ_j)  0 ; 0  -J_{ξ_j}(λ_j) ],    G_{cj} = [ 0  R_{ξ_j} ; -R_{ξ_j}  0 ],

where λ_j in C \ {0} with arg(λ_j) in [0, π), and ξ_j in N, for j = 1, ..., m_c when m_c > 0;

2) blocks associated with the eigenvalue λ = 0 of H:

    H_{zj} = [ J_{η_j}(0)  0 ; 0  -J_{η_j}(0) ],    G_{zj} = [ 0  R_{η_j} ; -R_{η_j}  0 ],

where η_j in N is odd, for j = 1, ..., m_o when m_o > 0, and

    H_{zj} = J_{η_j}(0),    G_{zj} = Γ_{η_j},

where η_j in N is even, for j = m_o+1, ..., m_o+m_e when m_e > 0.

The matrix H has the (not necessarily pairwise distinct) nonzero eigenvalues λ_1, ..., λ_{m_c}, -λ_1, ..., -λ_{m_c} and, if m_o + m_e > 0, the additional eigenvalue 0.

Theorem 2.9 (Canonical form for G-skew-Hamiltonian matrices). Let G in C^{2n x 2n} be complex skew-symmetric and invertible, and let K in C^{2n x 2n} be G-skew-Hamiltonian. Then there exists an invertible matrix X in C^{2n x 2n} such that

    X^{-1} K X = K_1 ⊕ ... ⊕ K_m,    X^T G X = G_1 ⊕ ... ⊕ G_m,

where

    K_j = [ J_{ξ_j}(λ_j)  0 ; 0  J_{ξ_j}(λ_j) ],    G_j = [ 0  R_{ξ_j} ; -R_{ξ_j}  0 ].

The matrix K has the (not necessarily pairwise distinct) eigenvalues λ_1, ..., λ_m.

The following lemma on the existence and uniqueness of structured square roots of structured matrices will be used frequently.

Lemma 2.10. Let G in C^{n x n} be invertible, and let H in C^{n x n} be invertible and such that H^T G = G H.

1. If G is complex symmetric (i.e., H is G-symmetric), then there exists a square root S in C^{n x n} of H that is a polynomial in H and that satisfies σ(S) ⊆ {z in C : arg(z) in [0, π)}. The square root is uniquely determined by these properties; in particular, S is G-symmetric.

2. If G is complex skew-symmetric (i.e., H is G-skew-Hamiltonian), then there exists a square root S in C^{n x n} of H that is a polynomial in H and that satisfies σ(S) ⊆ {z in C : arg(z) in [0, π)}. The square root is uniquely determined by these properties; in particular, S is G-skew-Hamiltonian.

Proof. By the discussion in Chapter 6.4 of [10], in both cases a square root S of H with σ(S) ⊆ {z in C : arg(z) in [0, π)} exists, is unique, and can be expressed as a polynomial in H. It is straightforward to check that a matrix that is a polynomial in H is again G-symmetric or G-skew-Hamiltonian, respectively.

3 Structured factorizations

In this section we develop basic factorizations that will be needed for computing the canonical forms in Sections 4-6. We start with factorizations for matrices B in C^{m x n} satisfying B^T B = I or B^T B = 0.

Lemma 3.1. If B in C^{m x n} satisfies B^T B = I_n, then m >= n and there exists a nonsingular matrix X in C^{m x m} such that

    X^T B = [ I_n ; 0 ],    X^T X = I_m.

Proof. By assumption, B has full column rank, so there exists B' in C^{m x (m-n)} such that X_0 = [B, B'] in C^{m x m} is invertible. Then

    X_0^T X_0 = [ I_n  B^T B' ; B'^T B  B'^T B' ],

and with

    X_1 = [ I_n  -B^T B' ; 0  I_{m-n} ]

we have

    (X_0 X_1)^T (X_0 X_1) = [ I_n  0 ; 0  B'^T (I - B B^T) B' ].

Since X_0 X_1 is nonsingular, so is the complex symmetric matrix B'^T (I - B B^T) B'. By Corollary 2.4 there exists a nonsingular matrix X_2 such that

    X_2^T ( B'^T (I - B B^T) B' ) X_2 = I_{m-n}.

Setting

    X = [ B, (I - B B^T) B' X_2 ],

we obtain X^T X = I_m and X^T B = [ I_n ; 0 ], as desired.
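Lemma 3.1 treats columns that behave like an orthonormal set for the bilinear form x^T y. The situation of Lemma 3.2 below, B^T B = 0, is genuinely different from the Hermitian setting: for the bilinear form, nonzero columns can be isotropic. A minimal numeric illustration of our own (the example matrix is ours):

```python
import numpy as np

# Columns that are orthonormal for the Hermitian inner product can still be
# isotropic for the bilinear form x^T y; this is the setting of Lemma 3.2.
s = 1 / np.sqrt(2)
B = np.array([[s, 0],
              [1j * s, 0],
              [0, s],
              [0, 1j * s]])                     # 4 x 2, so m = 2n

assert np.linalg.matrix_rank(B) == 2            # full column rank ...
assert np.allclose(B.conj().T @ B, np.eye(2))   # ... B^H B = I (Hermitian sense)
assert np.allclose(B.T @ B, np.zeros((2, 2)))   # ... yet B^T B = 0 (bilinear sense)
print("isotropic columns verified")
```

Note that the example already attains the bound m = 2n of Lemma 3.2: each isotropic column needs two real directions, which is where the factor 2 comes from.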

Lemma 3.2. If B in C^{m x n} satisfies rank B = n and B^T B = 0, then m >= 2n and there exists a unitary matrix X in C^{m x m} such that

    X^T B = [ B_0 ; 0 ],    X^T X = [ 0  I_n  0 ; I_n  0  0 ; 0  0  I_{m-2n} ],

where B_0 in C^{n x n} is upper triangular and invertible.

Proof. We present a constructive proof which allows one to determine the matrix X numerically. We may assume m >= 2, as otherwise the result holds trivially. Let Be_1 = u_1 + i v_1 with u_1, v_1 in R^m. Then (using, e.g., a Householder transformation, see [7]) there exists a real orthogonal matrix Q_1 in R^{m x m} such that Q_1^T u_1 = α_1 e_1 with α_1 >= 0. Let ṽ_1 be the vector formed by the trailing m-1 components of Q_1^T v_1. Then (using, e.g., a QR decomposition, see [7]) there exists a real orthogonal matrix Q_2 in R^{(m-1) x (m-1)} such that Q_2^T ṽ_1 = β_1 e_1 with β_1 >= 0. With U_1 = Q_1 (1 ⊕ Q_2) we then have

    U_1^T B = [ α_1 + i v_11  b_1 ; i β_1  b_2 ; 0  B_1 ],

where B_1 in C^{(m-2) x (n-1)}, b_1, b_2 in C^{1 x (n-1)}, and v_11 in R. Since U_1 is real orthogonal, we have (U_1^T B)^T (U_1^T B) = B^T B = 0, and hence

    (α_1 + i v_11)^2 - β_1^2 = 0,
    (α_1 + i v_11) b_1 + i β_1 b_2 = 0,    (3.1)
    B_1^T B_1 + b_1^T b_1 + b_2^T b_2 = 0.

From the first identity in (3.1) it follows that v_11 = 0 and α_1 = ±β_1; since α_1, β_1 >= 0, this gives α_1 = β_1. Moreover, α_1 = β_1 > 0, because otherwise we would have rank B <= n - 1, a contradiction. With this, the last two identities in (3.1) imply that

    b_1 = -i b_2,    B_1^T B_1 = 0,

and thus

    U_1^T B = [ α_1  -i b_2 ; i α_1  b_2 ; 0  B_1 ],    B_1 in C^{(m-2) x (n-1)}.

One easily verifies that rank B_1 = n - 1. Applying the same procedure inductively to B_1, we obtain a real orthogonal matrix U_2 such that

    U_2^T B_1 = [ α_2  -i b_3 ; i α_2  b_3 ; 0  B_2 ],    B_2 in C^{(m-4) x (n-2)},

and, as above, α_2 > 0 and rank B_2 = n - 2.

Continuing the procedure, we finally obtain a real orthogonal matrix U such that in U B the (2j-1)-st row has the form

    (0, ..., 0, α_j, -i b_{j,j+1}, ..., -i b_{j,n}),

the (2j)-th row is i times the (2j-1)-st row, for j = 1, ..., n, and all remaining rows are zero; in particular, m >= 2n. Thus, setting

    Z_1 = (1/√2) [ 1  -i ; 1  i ],    Z = Z_1 ⊕ ... ⊕ Z_1 ⊕ I_{m-2n}    (n copies of Z_1),

letting P be a permutation matrix for which premultiplication rearranges the first 2n rows of a matrix in the order 1, 3, ..., 2n-1, 2, 4, ..., 2n, and introducing the unitary matrix X = (P Z U)^T, we have

    X^T B = [ B_0 ; 0 ],    B_0 = √2 [ α_1  -i b_12  ...  -i b_1n ; 0  α_2  ...  -i b_2n ; ... ; 0  ...  0  α_n ],

and, using the facts that U is real orthogonal (U^T U = I) and that Z_1 Z_1^T = [ 0  1 ; 1  0 ], we obtain furthermore that

    X^T X = P Z U U^T Z^T P^T = P Z Z^T P^T = [ 0  I_n  0 ; I_n  0  0 ; 0  0  I_{m-2n} ].

Proposition 3.3. Let B in C^{m x n} and suppose that rank B = n, rank B^T B = n_0 <= n, and that δ = n - n_0 is the dimension of the null space of B^T B. Then there exists a nonsingular X in C^{m x m} such that

    X^T B = [ 0_{(m-n) x n} ; B_0 ],    X^T X = [ I_{n_1}  0  0  0 ; 0  0  0  I_δ ; 0  0  I_{n_0}  0 ; 0  I_δ  0  0 ],

where B_0 in C^{n x n} is nonsingular and n_1 = m - n - δ.

Proof. Since B^T B is complex symmetric, by the assumption and by Corollary 2.4 there exists a nonsingular matrix Y in C^{n x n} such that

    Y^T B^T B Y = [ I_{n_0}  0 ; 0  0_δ ].

Let B' in C^{m x n_0} be the matrix formed by the leading n_0 columns of BY. By Lemma 3.1 there exists X_1 in C^{m x m} such that

    X_1^T B' = [ I_{n_0} ; 0 ],    X_1^T X_1 = I_m,

and we obtain that

    X_1^T B Y = [ I_{n_0}  B_12 ; 0  B_1 ].

Since

    (X_1^T B Y)^T (X_1^T B Y) = Y^T B^T B Y = [ I_{n_0}  0 ; 0  0_δ ],

we have that B_12 = 0 and B_1^T B_1 = 0_δ. By assumption, B has full column rank, so this also holds for B_1 in C^{(m-n_0) x δ}. By Lemma 3.2 there exists a unitary matrix X_2 in C^{(m-n_0) x (m-n_0)} such that

    X_2^T B_1 = [ T ; 0 ],    X_2^T X_2 = [ 0  I_δ  0 ; I_δ  0  0 ; 0  0  I_{n_1} ],

where T in C^{δ x δ} is nonsingular and n_1 = m - n_0 - 2δ = m - n - δ. With X_3 = X_1 (I_{n_0} ⊕ X_2) we then have

    X_3^T B Y = [ I_{n_0}  0 ; 0  T ; 0  0 ; 0  0 ],    X_3^T X_3 = [ I_{n_0}  0  0  0 ; 0  0  I_δ  0 ; 0  I_δ  0  0 ; 0  0  0  I_{n_1} ],

with block rows of sizes n_0, δ, δ, n_1. Let P be the permutation that rearranges these block rows in the order 4, 3, 1, 2, and let X = X_3 P^T. Then

    X^T B Y = [ 0 ; 0 ; I_{n_0}  0 ; 0  T ],    X^T X = [ I_{n_1}  0  0  0 ; 0  0  0  I_δ ; 0  0  I_{n_0}  0 ; 0  I_δ  0  0 ].

Post-multiplying the first of these two equations by Y^{-1} and setting

    B_0 = [ I_{n_0}  0 ; 0  T ] Y^{-1},

we have the asserted form.

In the previous results we obtained factorizations for matrices B such that B^T B is the identity or zero. We get similar results if B^T J_m B = J_n or B^T J_m B = 0.

Lemma 3.4. If B in C^{2m x 2n} satisfies B^T J_m B = J_n, then m >= n and there exists a nonsingular matrix X in C^{2m x 2m} such that

    X^T B = [ I_n  0 ; 0  0 ; 0  I_n ; 0  0 ],    X^T J_m X = J_m.

Proof. The proof is similar to that of Lemma 3.1 and is hence omitted.

Lemma 3.5. Let b in C^{2m}. Then there is a unitary matrix X in C^{2m x 2m} such that

    X^T b = α e_1,    X^T J_m X = J_m.

Proof. We again present a constructive proof that can be implemented in a numerical algorithm. Let b = [b_1^T, b_2^T]^T with b_1, b_2 in C^m, and let H_2 in C^{m x m} be a unitary matrix (e.g., a Householder matrix) such that H_2^T b_2 = β e_1. With H_2^{-1} b_1 = (b_11, ..., b_m1)^T, one can then determine (e.g., via a QR factorization) a matrix

    G = (1 / √(b_11^2 + β^2)) [ b_11  -β ; β  b_11 ],

such that

    G^T [ b_11 ; β ] = [ √(b_11^2 + β^2) ; 0 ];

note that G^T J_1 G = J_1. Next, determine a unitary matrix H_1 in C^{m x m} such that

    H_1^T ( √(b_11^2 + β^2), b_21, ..., b_m1 )^T = α e_1.

Finally, let

    X = [ H̄_2  0 ; 0  H_2 ] Ĝ [ H_1  0 ; 0  H̄_1 ],

where Ĝ in C^{2m x 2m} is the matrix obtained by replacing the (1,1), (1,m+1), (m+1,1), and (m+1,m+1) elements of the identity matrix I_{2m} with the corresponding elements of G. It is easily verified that X satisfies X^T b = α e_1 and X^T J_m X = J_m.

Lemma 3.6. If B in C^{2m x n} satisfies rank B = n and B^T J_m B = 0, then m >= n and there exists a unitary matrix X in C^{2m x 2m} such that

    X^T B = [ B_0 ; 0 ],    X^T J_m X = J_m,

where B_0 in C^{n x n} is upper triangular and invertible.

Proof. By Lemma 3.5 there exists a unitary matrix X_1 such that

    X_1^T B = [ b_11  b_1^T ; 0  B_22 ; 0  b_3^T ; 0  B_24 ],    X_1^T J_m X_1 = J_m,

where b_1, b_3 in C^{n-1}. Since rank B = n, we have b_11 != 0, and from

    (X_1^T B)^T J_m (X_1^T B) = B^T J_m B = 0

it follows that

    b_3 = 0,    [ B_22 ; B_24 ]^T J_{m-1} [ B_22 ; B_24 ] = 0.

Applying the same procedure inductively to [ B_22 ; B_24 ], we obtain a unitary matrix X such that

    X^T B = [ B_0 ; 0_{(2m-n) x n} ],    X^T J_m X = J_m,

where B_0 in C^{n x n} is upper triangular and invertible.

Proposition 3.7. Let B in C^{2m x n}. Suppose that rank B = n and rank B^T J_m B = 2n_0 <= n, i.e., δ = n - 2n_0 is the dimension of the null space of B^T J_m B. Then there exists an invertible matrix X in C^{2m x 2m} such that

    X^T B = [ 0_{(2m-n) x n} ; B_0 ],    X^T J_m X = [ J_{n_1}  0  0  0 ; 0  0  0  I_δ ; 0  0  J_{n_0}  0 ; 0  -I_δ  0  0 ],

where B_0 in C^{n x n} is nonsingular and n_1 = m - n_0 - δ.

Proof. Since B^T J_m B is complex skew-symmetric, by the assumption and Corollary 2.5 there exists a nonsingular matrix Y in C^{n x n} such that

    Y^T B^T J_m B Y = [ J_{n_0}  0 ; 0  0_δ ].

Let B_1 in C^{2m x 2n_0} be the matrix formed by the leading 2n_0 columns of BY. By Lemma 3.4 there exists a nonsingular X_1 in C^{2m x 2m} such that

    X_1^T B_1 = [ I_{n_0}  0 ; 0  0 ; 0  I_{n_0} ; 0  0 ],    X_1^T J_m X_1 = J_m.

We have

    X_1^T B Y = [ I_{n_0}  0  B_13 ; 0  0  B_23 ; 0  I_{n_0}  B_33 ; 0  0  B_43 ].

Since X_1^T J_m X_1 = J_m also implies X_1 J_m X_1^T = J_m, from

    (X_1^T B Y)^T J_m (X_1^T B Y) = Y^T B^T J_m B Y = [ J_{n_0}  0 ; 0  0_δ ]

we obtain that B_13 = 0 and B_33 = 0. Since B has full column rank, so does [ B_23 ; B_43 ], and

    [ B_23 ; B_43 ]^T J_{m-n_0} [ B_23 ; B_43 ] = 0_δ.

By Lemma 3.6 there exists an invertible X_2 in C^{(2m-2n_0) x (2m-2n_0)} such that

    X_2^T [ B_23 ; B_43 ] = [ B'' ; 0 ],    X_2^T J_{m-n_0} X_2 = J_{m-n_0},

where B'' in C^{δ x δ} is invertible. Let P_1 be a permutation that interchanges the second and third block rows of X_1^T B Y, and set X_3 = X_1 P_1^T (I_{2n_0} ⊕ X_2). Then

    X_3^T B Y = [ I_{2n_0}  0 ; 0  B'' ; 0  0 ],    X_3^T J_m X_3 = J_{n_0} ⊕ J_{m-n_0},

where, for convenience, the zero block row of X_3^T B Y may be split further into block rows of sizes δ, 2n_1, and δ, with n_1 = m - n_0 - δ. Let P be a permutation that, by premultiplication, rearranges the block rows suitably, and let

    X = X_3 P^T ( I_{2n_1} ⊕ (-I_δ) ⊕ I_{2n_0+δ} ).

Then

    X^T B Y = [ 0_{(2m-n) x n} ; I_{2n_0}  0 ; 0  B'' ],    X^T J_m X = [ J_{n_1}  0  0  0 ; 0  0  0  I_δ ; 0  0  J_{n_0}  0 ; 0  -I_δ  0  0 ].

Post-multiplying the first equation by Y^{-1} and setting B_0 = (I_{2n_0} ⊕ B'') Y^{-1}, we have the asserted form.

In this section we have presented preliminary factorizations that will form the basis for determining the canonical forms in the following sections.

4 Canonical form for G, Ĝ complex symmetric

We start with the case that the matrix A under consideration is square and nonsingular. If Σ = U* A V is the standard singular value decomposition of A, then U* A A* U = V* A* A V = Σ^2, i.e., the canonical forms of both AA* and A*A are just the square of the canonical form of A. This fact has a generalization in the case of a matrix triple (A, G, Ĝ), where G and Ĝ are complex symmetric. To start from a square root of the Ĝ-symmetric matrix Ĥ = Ĝ^{-1} A^T G^{-1} A will be the key strategy in the derivation of the canonical form in the following result.

Theorem 4.1. Let A in C^{n x n} be nonsingular and let G, Ĝ in C^{n x n} be complex symmetric and nonsingular. Then there exist nonsingular matrices X, Y in C^{n x n} such that

    X^T A Y = J_{ξ_1}(μ_1) ⊕ ... ⊕ J_{ξ_m}(μ_m),
    X^T G X = R_{ξ_1} ⊕ ... ⊕ R_{ξ_m},    (4.1)
    Y^T Ĝ Y = R_{ξ_1} ⊕ ... ⊕ R_{ξ_m},

where μ_j in C \ {0}, arg μ_j in [0, π), and ξ_j in N for j = 1, ..., m. Moreover, for the Ĝ-symmetric matrix Ĥ = Ĝ^{-1} A^T G^{-1} A and for the G-symmetric matrix H = G^{-1} A Ĝ^{-1} A^T we have that

    Y^{-1} Ĥ Y = J_{ξ_1}^2(μ_1) ⊕ ... ⊕ J_{ξ_m}^2(μ_m),    (4.2)
    X^{-1} H X = (J_{ξ_1}^2(μ_1))^T ⊕ ... ⊕ (J_{ξ_m}^2(μ_m))^T.

Moreover, the form (4.1) is unique up to a simultaneous permutation of the blocks on the right hand side of (4.1).

Proof. By Lemma 2.10, Ĥ has a unique Ĝ-symmetric square root S in C^{n x n} satisfying σ(S) ⊆ {μ in C \ {0} : arg(μ) in [0, π)}. Then by Theorem 2.6 there exists a nonsingular matrix Ỹ in C^{n x n} such that

    S_CF := Ỹ^{-1} S Ỹ = J_{ξ_1}(μ_1) ⊕ ... ⊕ J_{ξ_m}(μ_m),
    G_CF := Ỹ^T Ĝ Ỹ = R_{ξ_1} ⊕ ... ⊕ R_{ξ_m},
    H_CF := Ỹ^{-1} Ĥ Ỹ = J_{ξ_1}^2(μ_1) ⊕ ... ⊕ J_{ξ_m}^2(μ_m),

where μ_j in C \ {0}, arg μ_j in [0, π), and ξ_j in N for j = 1, ..., m. (Here the third identity follows immediately from Ĥ = S^2.) Using G^{-1}A Ĥ = H G^{-1}A and the fact that G^{-1}A is nonsingular, we find that Ĥ and H are similar. Since the canonical form of a G-symmetric matrix in Theorem 2.6 is uniquely determined by its Jordan canonical form, we obtain from Theorem 2.6 that the canonical forms of the pairs (Ĥ, Ĝ) and (H, G) coincide. In particular, this implies the existence of a nonsingular matrix X̃ in C^{n x n} such that

    X̃^{-1} H X̃ = J_{ξ_1}^2(μ_1) ⊕ ... ⊕ J_{ξ_m}^2(μ_m) = H_CF,    X̃^T G X̃ = R_{ξ_1} ⊕ ... ⊕ R_{ξ_m} = G_CF.

Finally, setting X = G^{-1} X̃^{-T} and Y = A^{-1} G X̃ S_CF, we obtain

    X^T A Y = X̃^{-1} G^{-1} A A^{-1} G X̃ S_CF = S_CF,
    X^T G X = X̃^{-1} G^{-1} G G^{-1} X̃^{-T} = (X̃^T G X̃)^{-1} = G_CF^{-1} = G_CF,
    Y^T Ĝ Y = S_CF^T X̃^T G A^{-T} Ĝ A^{-1} G X̃ S_CF = S_CF^T (X̃^T G X̃)(X̃^{-1} H^{-1} X̃) S_CF
            = S_CF^T G_CF H_CF^{-1} S_CF = G_CF S_CF H_CF^{-1} S_CF = G_CF,

as desired, where we used that S_CF is G_CF-symmetric and that S_CF^2 = H_CF. It is now easy to check that Y^{-1} Ĥ Y and X^{-1} H X have the claimed forms. Concerning uniqueness, we note that the form (4.1) is already uniquely determined by the Jordan structure of Ĥ and by the restriction μ_j in C \ {0}, arg μ_j in [0, π).

The canonical form for the case that A is singular or rectangular is more involved, because then the matrices Ĥ and H may be singular as well. The key idea in the proof of Theorem 4.1 was the construction of a Ĝ-symmetric square root of Ĥ, but if Ĥ is singular, then such a square root need not exist. (For example, the R_n-symmetric nilpotent matrix J_n(0) does not have any square root, let alone an R_n-symmetric one.) A second difficulty comes from the fact that the Jordan structures of Ĥ and H may be different. For example, if

    A = [ 1  0  0  0 ; 0  0  0  0 ; 0  1  0  0 ; 0  0  1  0 ],    G = R_2 ⊕ R_2,    Ĝ = R_1 ⊕ R_3,

then we obtain that

    Ĥ = Ĝ^{-1} A^T G^{-1} A = 0_1 ⊕ J_3(0)^T,    H = G^{-1} A Ĝ^{-1} A^T = J_2(0)^T ⊕ J_2(0).

Here Ĥ has a 1 x 1 and a 3 x 3 Jordan block associated with the eigenvalue zero, while H has two 2 x 2 Jordan blocks associated with zero. In general, we obtain the following result.

Theorem 4.2. Let A in C^{m x n} and let G in C^{m x m}, Ĝ in C^{n x n} be complex symmetric and nonsingular. Then there exist nonsingular matrices X in C^{m x m} and Y in C^{n x n} such that

    X^T A Y = A_c ⊕ A_{z1} ⊕ A_{z2} ⊕ A_{z3} ⊕ A_{z4},
    X^T G X = G_c ⊕ G_{z1} ⊕ G_{z2} ⊕ G_{z3} ⊕ G_{z4},    (4.3)
    Y^T Ĝ Y = Ĝ_c ⊕ Ĝ_{z1} ⊕ Ĝ_{z2} ⊕ Ĝ_{z3} ⊕ Ĝ_{z4}.

Moreover, for the Ĝ-symmetric matrix Ĥ = Ĝ^{-1} A^T G^{-1} A in C^{n x n} and for the G-symmetric matrix H = G^{-1} A Ĝ^{-1} A^T in C^{m x m} we have that

    Y^{-1} Ĥ Y = Ĥ_c ⊕ Ĥ_{z1} ⊕ Ĥ_{z2} ⊕ Ĥ_{z3} ⊕ Ĥ_{z4},
    X^{-1} H X = H_c ⊕ H_{z1} ⊕ H_{z2} ⊕ H_{z3} ⊕ H_{z4}.

The diagonal blocks in these decompositions have the following forms:

0) blocks associated with the nonzero eigenvalues of Ĥ and H: A_c, G_c, Ĝ_c have the forms as in (4.1), and Ĥ_c, H_c have the forms as in (4.2);

1) one block corresponding to n_0 Jordan blocks of size 1 x 1 of Ĥ and m_0 Jordan blocks of size 1 x 1 of H, associated with the eigenvalue zero:

    A_{z1} = 0_{m_0 x n_0},    G_{z1} = I_{m_0},    Ĝ_{z1} = I_{n_0},    Ĥ_{z1} = 0_{n_0},    H_{z1} = 0_{m_0},

where m_0, n_0 in N ∪ {0};

2) blocks corresponding to pairs of j x j Jordan blocks of Ĥ and H associated with the eigenvalue zero:

    A_{z2} = (I_{γ_1} ⊗ J_2(0)) ⊕ (I_{γ_2} ⊗ J_4(0)) ⊕ ... ⊕ (I_{γ_l} ⊗ J_{2l}(0)),
    G_{z2} = Ĝ_{z2} = (I_{γ_1} ⊗ R_2) ⊕ (I_{γ_2} ⊗ R_4) ⊕ ... ⊕ (I_{γ_l} ⊗ R_{2l}),
    Ĥ_{z2} = (I_{γ_1} ⊗ J_2^2(0)) ⊕ ... ⊕ (I_{γ_l} ⊗ J_{2l}^2(0)),
    H_{z2} = (I_{γ_1} ⊗ (J_2^2(0))^T) ⊕ ... ⊕ (I_{γ_l} ⊗ (J_{2l}^2(0))^T),

where γ_1, ..., γ_l in N ∪ {0}; thus Ĥ_{z2} and H_{z2} each have 2γ_j Jordan blocks of size j x j, for j = 1, ..., l;

3) blocks corresponding to a j j Jordan block of Ĥ and a (j + 1) (j + 1) Jordan block of H associated with the eigenvalue zero: A z3 = m 1 I1 m 2 m I2 l 1 Il 1 2 1 3 2 l (l 1) m 1 m 2 m l 1 G z3 = R l Ĝ z3 = R 2 m 1 R 1 R 3 m 2 R 2 Ĥ z3 = m 1 J 1 () m 2 J 2 () H z3 = m 1 J 2 () T m 2 J 3 () T m l 1 m l 1 m l 1 R l 1 J l 1 () J l () T where m 1 m l 1 N {}; thus Ĥ z3 has m j Jordan blocks of size j j and H z3 has m j Jordan blocks of size (j + 1) (j + 1) for j = 1 l 1; 4) blocks corresponding to a (j + 1) (j + 1) Jordan block of Ĥ and a j j Jordan block of H associated with the eigenvalue zero: A z4 = n 1 G z4 = Ĝ z4 = Ĥ z4 = I1 1 2 n 2 n 1 R 1 n 1 R 2 n 1 n I2 2 3 l 1 n 2 R 2 n 2 R 3 J 2 () n 2 J 3 () H z4 = n 1 J 1 () T n 2 J 2 () T Il 1 n l 1 n l 1 n l 1 n l 1 R l 1 R l J l () (l 1) l J l 1 () T where n 1 n l 1 N {}; thus Ĥ z4 has n j Jordan blocks of size (j + 1) (j + 1) and H z4 has n j Jordan blocks of size j j for j = 1 l 1; For the eigenvalue zero the matrices Ĥ and H have 2γ j+m j +n j 1 respectively 2γ j +m j 1 +n j Jordan blocks of size j j for j = 1 l where m l = n l = and where l is the maximum of the indices of Ĥ and H (Here index refers to the size of the largest Jordan block associated with the eigenvalue zero) Moreover the form (43) is unique up to simultaneous block permutation of the blocks in the diagonal blocks of the right hand side of (43) Proof The proof is very long and technical and is therefore postponed to the Appendix We highlight that the numbers m and n in 1) of Theorem 42 are allowed to be zero This has the effect that there may occur rectangular matrices with a total number of zero rows or columns in the canonical form We illustrate this phenomenon with the following example 16

Example 4.3. Consider the two non-equivalent triples

    A_1 = [ 1  0 ],  G_1 = [ 1 ],  Ĝ_1 = R_2 = [ 0  1 ; 1  0 ]

and

    A_2 = [ 1  0 ],  G_2 = [ 1 ],  Ĝ_2 = I_2.

The first example is just one block of type 4) in Theorem 4.2. Indeed, forming the products

    Ĥ_1 = Ĝ_1^{-1}A_1^T G_1^{-1}A_1 = [ 0  0 ; 1  0 ],   H_1 = G_1^{-1}A_1Ĝ_1^{-1}A_1^T = [ 0 ],

we see that, as predicted by Theorem 4.2, Ĥ_1 has only one Jordan block of size 2 associated with the eigenvalue λ = 0, whereas H_1 has one Jordan block of size 1 associated with λ = 0. The situation is different in the second case. Here we obtain

    Ĥ_2 = Ĝ_2^{-1}A_2^T G_2^{-1}A_2 = [ 1  0 ; 0  0 ],   H_2 = G_2^{-1}A_2Ĝ_2^{-1}A_2^T = [ 1 ],

i.e., Ĥ_2 has two Jordan blocks of size 1, one associated with λ = 0 and a second one associated with λ = 1, while H_2 has one Jordan block of size 1 associated with λ = 1. Here, the triple (A_2, G_2, Ĝ_2) is in canonical form, consisting of one block of type 1) of size 0 × 1 and of one block of type 0):

    A_2 = [ 1 ] ⊕ 0_{0×1},  G_2 = [ 1 ],  Ĝ_2 = [ 1 ] ⊕ [ 1 ].

Remark 4.4. Theorem 4.2 in particular covers the special case G = I_m and Ĝ = I_n, i.e., the case that Ĥ = A^T A and H = AA^T. In comparison to the standard singular values of a matrix A ∈ C^{m×n}, which are σ_1 ≥ ... ≥ σ_{min(m,n)} ≥ 0 and which are the square roots of the eigenvalues of AA* and A*A, we now obtain the transpose singular values of A according to Theorem 4.2 from a direct sum of blocks of the forms

    J_{ξ_1}(µ_1), ..., 0_{m_0×n_0}, J_{2p_1}(0), ..., [ I_{q_1}  0 ]^T, ..., [ I_{r_1}  0 ], ...,

where µ_j ≠ 0, arg(µ_j) ∈ [0, π), and ξ_j, p_j, q_j, r_j ∈ N. Theorem 4.2 displays how these blocks are related to the eigenvalues and Jordan structures of AA^T and A^T A. The canonical form of A in Theorem 4.2, together with the canonical forms for AA^T and A^T A in the special case G = I_m, Ĝ = I_n, can also be deduced from Theorem 5 in [11], where a canonical form for a pair (B, C), B ∈ C^{m×n}, C ∈ C^{n×m}, under the transformation (B, C) → (X^{-1}BY, Y^{-1}CX), with X, Y nonsingular, is given. Setting B = A and C = A^T then yields the desired form. The result of Theorem 4.2, however, gives additional information on the transformation matrices X and Y, because we also have a canonical form for X^T X = X^T GX and Y^T Y = Y^T ĜY. A well-known result by Flanders [4] completely describes the Jordan structures of the products BC and CB, where B ∈ C^{m×n} and C ∈ C^{n×m}. Recall that the partial multiplicities of an eigenvalue λ of a matrix M ∈ C^{n×n} are just the sizes of the Jordan blocks associated with λ in the Jordan canonical form of M.
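Flanders' constraint (the partial multiplicities of BC and CB at the eigenvalue zero differ entrywise by at most one, while the nonzero spectra agree) can already be observed on the smallest rectangular products. The following numpy sketch uses toy matrices of our own choosing, not an example from the text:

```python
import numpy as np

# Toy illustration of Flanders' result for B (m x n) and C (n x m):
# BC and CB share all nonzero eigenvalues, while their Jordan block
# sizes at the eigenvalue zero may differ -- but at most by one.
B = np.array([[0.0], [1.0]])   # 2 x 1
C = np.array([[1.0, 0.0]])     # 1 x 2

M = B @ C                      # [[0, 0], [1, 0]]: one 2 x 2 Jordan block at 0
N = C @ B                      # [[0]]:            one 1 x 1 Jordan block at 0

assert np.count_nonzero(M) > 0 and np.allclose(M @ M, 0)  # nilpotency index 2
assert np.allclose(N, 0)                                  # nilpotency index 1
# Partial multiplicities at zero: (tau_i) = (2, 0, ...) for BC and
# (zeta_i) = (1, 0, ...) for CB, so |tau_1 - zeta_1| = 1, as Flanders allows.
```

Here the sequences (τ_i) and (ζ_i) differ in exactly one entry, and by exactly one, which is the extreme case permitted by condition ii) below.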

Theorem 4.5 ([4]). For M ∈ C^{m×m} and N ∈ C^{n×n} the following conditions are equivalent:

1) There exist matrices B ∈ C^{m×n} and C ∈ C^{n×m} such that M = BC and N = CB.

2) M and N satisfy the Flanders condition, i.e.,

   i) M and N have the same nonzero eigenvalues, and their algebraic, geometric, and partial multiplicities coincide;

   ii) if (τ_i)_{i∈N} is the monotonically decreasing sequence of partial multiplicities of M associated with the eigenvalue zero, made infinite by adjunction of zeros, and if (ζ_i)_{i∈N} is the corresponding sequence of N, then |τ_i − ζ_i| ≤ 1 for all i ∈ N.

With the canonical form of Theorem 4.2 we are now able to prove a specialization of Theorem 4.5 for the case of complex symmetric matrices.

Theorem 4.6. For M ∈ C^{m×m} and N ∈ C^{n×n} the following conditions are equivalent:

1) There exists a matrix A ∈ C^{m×n} such that M = AA^T and N = A^T A.

2) M and N are symmetric and satisfy the Flanders condition, i.e., i) and ii) in Theorem 4.5, as well as

   iii) let φ_k be the number of indices j for which τ_j = ζ_j = k, where (τ_i)_{i∈N} and (ζ_i)_{i∈N} are the sequences as in Theorem 4.5, and let k_1 > ... > k_ν be the numbers k ∈ N for which φ_k is odd. Then for j = 1, ..., ⌈ν/2⌉ we have that φ_k ≠ 0 for all k with k_{2j−1} ≥ k ≥ k_{2j}. (Here, ⌈κ⌉ denotes the smallest integer larger than or equal to κ, and we set k_{ν+1} := 1 in the case that ν is odd.)

Proof. 1) ⇒ 2): Let H = M = AA^T and Ĥ = N = A^T A, and let ω_j and ω̂_j denote the number of Jordan blocks of size j × j associated with the eigenvalue zero of H and Ĥ, respectively. Using the same notation as in Theorem 4.2, we obtain that

    ω_j = 2γ_j + n_j + m_{j−1}  and  ω̂_j = 2γ_j + m_j + n_{j−1},  j = 1, ..., l.

Assume without loss of generality that m_{l−1} ≥ n_{l−1}. Since m_l = n_l = 0, we find that the first 2γ_l + n_{l−1} entries in the sequences (τ_i)_{i∈N} and (ζ_i)_{i∈N} are given by l, which implies φ_l = 2γ_l + n_{l−1}. The sequence (τ_i) has m_{l−1} − n_{l−1} more entries equal to l that are paired with m_{l−1} − n_{l−1} entries l − 1 in (ζ_i). Since then there are 2γ_{l−1} + n_{l−1} + n_{l−2} more entries l − 1 in (ζ_i) and 2γ_{l−1} + n_{l−1} + m_{l−2} entries l − 1 in (τ_i), we obtain that φ_{l−1} = 2γ_{l−1} + n_{l−1} + min(m_{l−2}, n_{l−2}). Continuing the counting in the way just described finally yields

    φ_j = 2γ_j + min(m_j, n_j) + min(m_{j−1}, n_{j−1}),  j = 1, ..., l.    (4.4)

If ν = 0, then there is nothing to prove, so assume ν ≥ 1. Since 0 = min(m_l, n_l) is even, as well as φ_l, ..., φ_{k_1+1}, we obtain from (4.4) that min(m_{j−1}, n_{j−1}) is even for j > k_1 and that min(m_{k_1−1}, n_{k_1−1}) is odd. Clearly, we must then have that min(m_{k−1}, n_{k−1}) is odd for all k satisfying k_1 > k > k_2. In particular, this implies φ_k ≠ 0 for all such k, as well as φ_{k_1} ≠ 0 and φ_{k_2} ≠ 0. If ν ≤ 2 we are done. Otherwise, min(m_{k_2−1}, n_{k_2−1}) is even and we can repeat the argument for k_{2j−1} ≥ k ≥ k_{2j} for j = 2, ..., ⌈ν/2⌉.
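The sequences (τ_i) and (ζ_i) driving this counting argument can be computed mechanically from ranks of matrix powers, since the number of Jordan blocks at zero of size at least j equals rank(M^{j−1}) − rank(M^j). A small numpy sketch (the helper name and the test matrix are our own; the matrix is a complex symmetric nilpotent matrix with a single 2 × 2 Jordan block at zero):

```python
import numpy as np

def zero_partial_multiplicities(M):
    """Sizes of the Jordan blocks of M at the eigenvalue 0, in decreasing
    order, read off from the rank sequence r_k = rank(M^k): the number of
    blocks of size exactly j is r_{j-1} - 2*r_j + r_{j+1}."""
    n = M.shape[0]
    r = [n]                                 # r[0] = rank(M^0) = n
    P = np.eye(n, dtype=complex)
    for _ in range(n + 1):
        P = P @ M
        r.append(np.linalg.matrix_rank(P))
    sizes = []
    for j in range(1, n + 1):
        sizes += [j] * (r[j - 1] - 2 * r[j] + r[j + 1])
    return sorted(sizes, reverse=True)

# Complex symmetric, nilpotent of index 2 (similar to J_2(0)):
M1 = np.array([[1, 1j], [1j, -1]])
print(zero_partial_multiplicities(M1))      # [2]
```

Applied to M = AA^T and N = A^T A, the two returned lists are exactly the nonzero parts of (τ_i) and (ζ_i) on which condition iii) is tested.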

2) 1) : Let l be the largest entry that appears in one of the sequences (τ i ) i N and (ζ i ) i N First let us assume that ν = or k 1 = 1 ie φ j is even for j = 2 l Then we build up a matrix triple (à G Ĝ) as a direct sum of blocks as follows: for the φ k indices j with τ j = ζ j = k k 1 we take φ k /2 blocks as in 2) of Theorem 42 and for each index j with τ j ζ j = 1 τ j ζ j we take a block as in 3) respectively 4) of Theorem 42 Finally if there are say m indices in (τ i ) left with τ i = 1 and n indices in (ζ i ) left with ζ i = 1 then we take a block of size m n as in 1) of Theorem 42 Then by construction and Theorem 42 the matrices H = G 1AĜ 1 A T and Ĥ = Ĝ 1 A T G 1 A have the same Jordan canonical form as M and N respectively Let Z Ẑ be such that ZT GZ = I m and Ẑ T ĜẐ = I n Then setting  = ZT ÃẐ we find that ÂÂT = Z 1 HZ and ÂT  = Ẑ 1 ĤẐ are symmetric Thus there exist orthogonal matrices S and T such that SÂÂT S 1 = M and T ÂT ÂT 1 = N (This well-known fact is a direct consequence of Theorem 26) Then A = SÂT 1 satisfies M = AA T and N = A T A Next assume that k 1 > 1 Then 2) guarantees that for each k with k 2j 1 > k > k 2j j = 1 ν 2 we have that φ k 2 This allows us to modify the sequences (τ i ) and (ζ i ) to (not necessarily monotonically decreasing) sequences ( τ i ) and ( ζ i ) such that the number of indices j with τ j = ζ j = k is even for all k > 1 In order to avoid too complicated notation we explain the modification only for the case ν 2 The general case is analogous Thus if (τ i ) = k 1 k }{{} 1 k 1 1 k 1 1 }{{} φ k1 φ k1 1 (ζ i ) = k 1 k }{{} 1 k 1 1 k 1 1 }{{} φ k1 φ k1 1 k 2 + 1 k 2 + 1 }{{} φ k2 +1 k 2 + 1 k 2 + 1 } {{ } φ k2 +1 then the corresponding parts in the sequences ( τ i ) and ( ζ i ) take the forms k 2 k }{{} 2 φ k2 k 2 k }{{} 2 φ k2 ( τ i ) = k 1 k }{{} 1 k 1 1 k 1 1 k }{{} 2 + 1 k 2 + 1 k }{{} 2 k 2 Ξ }{{} τ φ k1 1 φ k1 1 2 φ k2 +1 2 φ k2 1 ( ζ i ) = k 1 k }{{} 1 k 1 1 k 1 1 k }{{} 2 + 1 k 2 + 1 k }{{} 2 k 2 Ξ }{{} ζ φ k1 1 φ k1 1 
2 φ k2 +1 2 φ k2 1 where Ξ τ = k 1 k 1 1 k 1 1 k 1 2 k 2 + 1 k 2 ; Ξ ζ = k 1 1 k 1 k 1 2 k 1 1 k 2 k 2 + 1 When the sequences ( τ i ) and ( ζ i ) have been constructed we can apply the strategy of the previous paragraph to construct A such that M = AA T and N = A T A Example 47 Let 1 i M 1 = i 1 1 i N 1 = i 1 M 2 = 1 i i 1 N 2 = 1 i i 1 ie M 1 and N 1 are similar to a Jordan block of size 2 2 associated with zero Then (τ (1) i ) i N = (ζ (1) i ) i N = (2 ) and (τ (2) i ) i N = (ζ (2) i ) i N = (2 1 ) are the sequences as in Theorem 45 associated to M 1 N 1 and M 2 N 2 respectively In both cases we have φ 2 = 1 which is odd The sequences associated to M 1 and N 1 do not satisfy condition iii) 19

in Theorem 46 while the sequences associated with M 2 and N 2 do Indeed there does not exist a matrix A 1 such that M 1 = A 1 A T 1 and N 1 = A T 1 A 1 because setting a b A 1 = c d gives a 2 + b 2 ac + bd ac + bd c 2 + d 2 1 i = i 1 and a 2 + c 2 ab + cd ab + cd b 2 + d 2 = 1 i i 1 which implies d = ±a If d = a then i = ac ba = ab ca a contradiction But d = a implies a 2 = bc because det A 1 = det M 1 = Moreover we then have bc + b 2 = 1 = c 2 bc which implies (b + c) 2 = ie c = b contradicting a 2 + b 2 = 1 a 2 + c 2 On the other hand we have M 2 = AA T and N 2 = A T A where A = Here the canonical form for the triple (A I 3 I 3 ) is given by 1 1 1 1 1 1 1 1 1 i i 1 5 Condensed forms for G complex symmetric Ĝ complex skew-symmetric In this section we study the canonical forms for the case that G is complex symmetric and Ĝ complex skew-symmetric Again we start with the canonical form for the case that A is quadratic and nonsingular We cannot directly use our key strategy from the proof of Theorem 42 and construct a square root of Ĥ because now Ĥ is Ĝ-Hamiltonian A Ĝ- Hamiltonian matrix can neither have a Ĝ-Hamiltonian nor a Ĝ-skew-Hamiltonian square root because the squares of matrices of such type are always Ĝ-skew-Hamiltonian Therefore we will start from the fourth root of the Ĝ-skew-Hamiltonian matrix Ĥ2 instead Theorem 51 Let A G Ĝ C2n 2n be nonsingular and let G be complex symmetric and Ĝ be complex skew-symmetric Then there exists nonsingular matrices X Y C 2n 2n such that X T Jξ1 (µ AY = 1 ) Jξm (µ m ) J ξ1 (µ 1 ) J ξm (µ m ) X T Rξ1 R GX = ξm (51) R ξ1 R ξm Y T Rξ1 R ĜY = ξm R ξ1 R ξm where µ j C \ {} arg µ j π/2) and ξ j N for j = 1 m Moreover for the Ĝ- Hamiltonian matrix Ĥ = Ĝ 1 A T G 1 A and the G-skew-symmetric matrix H = G 1 AĜ 1 A T 2

we have that Y 1 ĤY = X 1 HX = J 2 ξ1 (µ 1 ) Jξ 2 1 (µ 1 ) J 2 ξ1 (µ 1 ) Jξ 2 1 (µ 1 ) J 2 ξm (µ m ) J 2 ξ m (µ m ) T T J 2 ξm (µ m ) J 2 ξ m (µ m ) (52) Proof By Theorem 28 there exists a nonsingular matrix Y C n n such that Y 1 Jξ1 (λ ĤY = 1 ) Jξm (λ m ) J ξ1 (λ 1 ) J ξm (λ m ) Y T Rξ1 R ĜY = ξm R ξ1 R ξm where λ j C \ {} arg(λ j ) π) and ξ j N for j = 1 m Next construct the matrix S such that Y 1 SY = Jξ1 (λ 1 ) J ξ1 (λ 1 ) Jξm (λ m ) J ξm (λ m ) It is easily verified that S is Ĝ-skew-Hamiltonian that it satisfies S 2 = Ĥ2 and that we have σ( S) {z C \ {} : arg(z) π)} Thus by the uniqueness property of Lemma 21 we obtain that S is a polynomial in Ĥ2 Moreover applying Lemma 21 once more we obtain that S has a unique square root S C n n being a polynomial in S and satisfying σ(s) {z C \ {} : arg(z) π)} namely Y 1 J SY = ξ1 (λ 1 ) 1 2 J ξm (λ m ) 1 2 J ξ1 (λ 1 ) 1 2 J ξm (λ m ) 1 2 In fact we must have σ(s) {z C \ {} : arg(z) π/2)} because otherwise S would have eigenvalues λ j with arg(λ j ) π 2π) Let µ 2 j = λ j and arg(µ j ) π/2) By Theorem 29 we then obtain that there exists a nonsingular matrix Ỹ C n n such that S CF := Ỹ 1 SỸ = Jξ1 (µ 1 ) Jξm (µ m ) J ξ1 (µ 1 ) J ξm (µ m ) Ĝ CF := Ỹ T ĜỸ = Rξ1 R ξm R ξ1 R ξm Moreover using G 1 AĤ = HG 1 A and the fact that G 1 A is nonsingular we find that Ĥ and H are similar Thus by Theorem 27 there exists a nonsingular matrix X C n n such that H CF = X 1 H X = G CF = X T G X = J 2 ξ1 (µ 1 ) J 2 Rξ1 R ξ1 ξ 1 (µ 1 ) J 2 ξm (µ m ) Jξ 2 m (µ m ) R ξm R ξm 21

Indeed since H is similar to Ĥ it has the eigenvalues λ j = µ 2 j with partial multiplicities ξ j j = 1 m Since the canonical form of G-skew-symmetric matrices in Theorem 27 is uniquely determined by the Jordan canonical form we find that the pairs (H G) and (H CF G CF ) must have the same canonical form Observe that S CF is G CF -symmetric but not a square root of H CF Instead it is easy to check that S CF (H CF ) 1 Iξ1 S CF = I ξ1 Iξm I ξm Using this identity and setting X = G 1 X T and Y = A 1 G XS CF we obtain that X T AY = X 1 G 1 AA 1 G XS CF = S CF X T GX = X 1 G 1 GG 1 X T = ( X T G X) 1 = (G CF ) 1 = G CF Y T ĜY = S T CF X T GA T ĜA 1 G XS CF = S T CF X T G X X 1 H 1 XSCF = S T CFG CF (H CF ) 1 S CF = G CF S CF (H CF ) 1 S CF = ĜCF It is now straightforward to check that Y 1 ĤY and X 1 HX have the claimed forms Concerning uniqueness we note that the form (51) is already uniquely determined by the Jordan structure of Ĥ and by the restriction µ j C \ {} arg µ j π/2) Theorem 52 Let A C m 2n let G C m m be complex symmetric and nonsingular and let Ĝ C2n 2n be complex skew-symmetric and nonsingular Then there exists nonsingular matrices X C m m and Y C 2n 2n such that X T AY = A c A z1 A z2 A z3 A z4 A z5 A z6 X T GX = G c G z1 G z2 G z3 G z4 G z5 G z6 (53) Y T ĜY = Ĝc Ĝz1 Ĝz2 Ĝz3 Ĝz4 Ĝz5 Ĝz6 Moreover for the Ĝ-Hamiltonian matrix Ĥ = Ĝ 1 A T G 1 A C 2n 2n and for the G-skewsymmetric matrix H = G 1 AĜ 1 A T C m m we have that Y 1 ĤY = Ĥc Ĥz1 Ĥz2 Ĥz3 Ĥz4 Ĥz5 Ĥz6 X 1 HX = H c H z1 H z2 H z3 H z4 H z5 H z6 The diagonal blocks in these decompositions have the following forms: ) blocks associated with nonzero eigenvalues of Ĥ and H: A c G c Ĝc have the forms as in (51) and Ĥc H c have the forms as in (52); 1) one block corresponding to 2n Jordan blocks of size 1 1 of Ĥ and m Jordan blocks of size 1 1 of H associated with the eigenvalue zero: A z1 = m 2n G z1 = I m Ĝ z1 = J n Ĥ z1 = 2n H z1 = m where m o n o N {}; 22

2) blocks corresponding to a pair of j j Jordan blocks of Ĥ and H associated with the eigenvalue zero: A z2 = G z2 = Ĝ z2 = Ĥ z2 = H z2 = γ 1 J 2 () γ 1 R 2 γ 1 γ 2 γ 2 J 4 () γ 2l+1 γ 2l+1 J 4l+2 () R 4 R 4l+2 R1 γ 2 γ R2 2l+1 R 2l+1 R 1 R 2 R 2l+1 γ 1 γ 2 2 ( Σ 22 )J4 2 γ () 2l+1 ( Σ 2l+12l+1 )J4l+2 2 () γ 1 2 γ 2 Σ 31 J4 2()T γ 2l+1 Σ 2l+22l J 2 4l+2 ()T where γ 1 γ l N {}; thus Ĥ z2 and H z2 both have each 2γ j Jordan blocks of size j j for j = 1 2l + 1; 3) blocks corresponding to a 2j 2j Jordan block of Ĥ and a (2j + 1) (2j + 1) Jordan block of H associated with the eigenvalue zero: A z3 = m 2 I2 m 4 I4 m 2l I2l G z3 = Ĝ z3 = m 2 3 2 m 2 R 3 R1 R 1 m 4 5 4 m 4 R 5 R2 R 2 (2l+1) 2l m 2l R 2l+1 m 2l Rl R l Ĥ z3 = m 2 ( Σ 11 )J 2 () m 4 ( Σ 22 )J 4 () m 2l ( Σ ll )J 2l () H z3 = m 2 Σ 21 J 3 () T m 4 Σ 32 J 5 () T m 2l Σ l+1l J 2l+1 () T where m 2 m 4 m 2l N {}; thus Ĥ z3 has m 2j Jordan blocks of size 2j 2j and H z3 has m 2j Jordan blocks of size (2j + 1) (2j + 1) for j = 1 l; 4) blocks corresponding to two (2j 1) (2j 1) Jordan blocks of Ĥ and two 2j 2j Jordan 23

blocks of H associated with the eigenvalue zero: I 1 I 3 A z4 = m 1 I 1 m 3 I 3 4 2 m 1 m 3 G z4 = Ĝ z4 = Ĥ z4 = m 1 H z4 = m 1 8 6 R 4 R 8 R1 m 3 R3 R 1 R 3 m 1 2 m 3 J3 () J 3 () T J2 () m 3 T J4 () J 2 () J 4 () I 2l 1 I 2l 1 4l (4l 2) m 2l 1 R 4l m 2l 1 R 2l 1 R 2l 1 J2l 1 () J 2l 1 () m 2l 1 T J2l () J 2l () m 2l 1 m 2l 1 where m 1 m 3 m 2l 1 N {}; thus Ĥ z4 has 2m 2j 1 Jordan blocks of the size (2j 1) (2j 1) and H z4 has 2m 2j 1 Jordan blocks of size 2j 2j for j = 1 l; 5) blocks corresponding to a 2j 2j Jordan block of Ĥ and a (2j 1) (2j 1) Jordan block of H associated with the eigenvalue zero: A z5 = n 1 G z5 = Ĝ z5 = n 1 I1 1 2 n 3 n 1 R 1 R1 R 1 n 3 n I3 3 4 2l 1 n 3 R 3 R2 R 2 Ĥ z5 = n 1 ( Σ 11 )J 2 () n 3 ( Σ 22 )J 4 () H z5 = n 1 1 n 3 Σ 21 J 3 () T I2l 1 n 2l 1 n 2l 1 n 2l 1 n 2l 1 Rl R l (2l 1) 2l R 2l 1 ( Σ ll )J 2l () Σ ll 1 J 2l 1 () T where n 1 n 3 n 2l 1 N {}; thus Ĥ z5 has n 2j 1 Jordan blocks of size 2j 2j and H z5 has n 2j 1 Jordan blocks of size (2j 1) (2j 1) for j = 1 l; 6) blocks corresponding to two (2j +1) (2j +1) Jordan blocks of Ĥ and two 2j 2j Jordan blocks of H associated with the eigenvalue zero: A z6 = n 2 I2 n 4 I4 n 2l I2l I 2 4 6 I 4 8 1 I 2l 4l (4l+2) n 2 n 4 n 2l G z6 = R 4 R 8 R 4l n 2 R3 Ĝ z6 = n 4 R5 n 2l R 2l+1 R 3 R 5 R 2l+1 Ĥ z6 = n 2 J3 () n 4 J5 () n 2l J2l+1 () J 3 () J 5 () J 2l+1 () H z6 = n 2 T J2 () n 4 T J4 () n 2l T J2l () J 2 () J 4 () J 2l () 24

where n 2 n 4 n 2l N {}; thus Ĥz6 has 2n 2j Jordan blocks of size (2j+1) (2j+1) and H z6 has 2n 2j Jordan blocks of size 2j 2j for j = 1 l; For the eigenvalue zero the matrices Ĥ and H have 2γ 2j + m 2j + n 2j 1 respectively 2γ 2j + 2m 2j 1 + 2n 2j Jordan blocks of size 2j 2j for j = 1 l and 2γ 2j+1 + 2m 2j+1 + 2n 2j respectively 2γ 2j+1 + m 2j + n 2j+1 Jordan blocks of size (2j + 1) (2j + 1) for j = l Here m 2l+1 = n 2l+1 = and 2l + 1 is the smallest odd number that is larger or equal to the maximum of the indices of Ĥ and H (Here index refers to the maximal size of a Jordan block associated with zero) Moreover the form (43) is unique up to simultaneous block permutation of the blocks in the diagonal blocks of the right hand side of (43) Proof The proof is presented in the Appendix 6 Canonical forms for G Ĝ complex skew-symmetric In this section we finally treat that case that both G and Ĝ are complex skew-symmetric Theorem 61 Let A C 2n 2n be nonsingular and let G Ĝ C2n 2n be nonsingular and complex skew-symmetric Then there exists nonsingular matrices X Y C 2n 2n such that X T Jξ1 (µ AY = 1 ) Jξm (µ m ) J ξ1 (µ 1 ) J ξm (µ m ) X T Rξ1 R GX = ξm (61) R ξ1 R ξm Y T Rξ1 R ĜY = ξm R ξ1 R ξm where µ j C \ {} arg µ j π) and ξ j N for j = 1 m Furthermore for the Ĝ-skew-Hamiltonian matrix Ĥ = Ĝ 1 A T G 1 A and for the G-skew-Hamiltonian matrix H = G 1AĜ 1 A T we have that Y 1 ĤY = X 1 HX = J 2 ξ1 (µ 1 ) J 2 ξ 1 (µ 1 ) J 2 ξ1 (µ 1 ) Jξ 2 1 (µ 1 ) J 2 ξm (µ m ) J 2 ξ m (µ m ) T T J 2 ξm (µ m ) J 2 ξ m (µ m ) (62) Proof The proof proceeds completely analogous to the proof of Theorem 41 Starting with a skew-hamiltonian square root S of Ĥ that is a polynomial in Ĥ (such a square root exists by Lemma 21) and reducing the pair (S; Ĝ) to the canonical form (S CF G CF ) = (Ỹ 1 SỸ Ỹ T ĜỸ ) of Theorem 29 we obtain the existence of a transformation matrix X such that ( X 1 H X X T G X) = (S 2 CF G CF ) Here it is used that by Theorem 29 the canonical form of all three pairs (Ĥ Ĝ) 
(H G) and (H G) is the same because H and Ĥ are similar Then setting X = G 1 X T and Y = A 1 G XS CF yields the desired result 25
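The two structural facts used in this proof, that Ĥ is skew-Hamiltonian with respect to the skew-symmetric Ĝ and that Ĥ and H are similar via G^{-1}A, are easy to confirm numerically. The following numpy sketch uses our own choice G = Ĝ = J (the standard skew-symmetric form) and a random nonsingular A; it is an illustration, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])   # skew-symmetric, J^{-1} = -J
G, Ghat = J, J                                    # both skew-symmetric, nonsingular
A = rng.standard_normal((2*n, 2*n)) + 1j * rng.standard_normal((2*n, 2*n))

Hhat = np.linalg.inv(Ghat) @ A.T @ np.linalg.inv(G) @ A   # note: A.T, not A.conj().T
H = np.linalg.inv(G) @ A @ np.linalg.inv(Ghat) @ A.T

# Hhat is Ghat-skew-Hamiltonian: Ghat @ Hhat is skew-symmetric.
S = Ghat @ Hhat
assert np.allclose(S, -S.T)

# As in the proof: Hhat and H are similar via T = G^{-1} A.
T = np.linalg.inv(G) @ A
assert np.allclose(H, T @ Hhat @ np.linalg.inv(T))
```

Both assertions are algebraic identities (for G symmetric they would instead yield a Hamiltonian Ĥ, which is the case treated in Section 5).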

We mention that the choice of the transformation matrices X Y in Theorem 61 so that X T GX = Y T ĜY rather than X T GX = Y T ĜY is just a matter of taste A canonical form (with modified values instead of µ 1 µ m in X T AY ) with X T GX = Y T ĜY can be constructed as well but this would lead to the occurrence of distracting minus signs in the forms for H and Ĥ Therefore we prefer to represent the canonical form as we did in Theorem 61 Theorem 62 Let A C 2m 2n and let G C 2m 2m Ĝ C2n 2n be complex skew-symmetric and nonsingular Then there exists nonsingular matrices X C 2m 2m and Y C 2n 2n such that X T AY = A c A z1 A z2 A z3 A z4 X T GX = G c G z1 G z2 G z3 G z4 (63) Y T ĜY = Ĝc Ĝz1 Ĝz2 Ĝz3 Ĝz4 Moreover for the Ĝ-skew-Hamiltonian matrix Ĥ = Ĝ 1 A T G 1 A C 2n 2n and for the G- skew-hamiltonian matrix H = G 1 AĜ 1 A T C 2m 2m we have that Y 1 ĤY = Ĥc Ĥz1 Ĥz2 Ĥz3 Ĥz4 X 1 HX = H c H z1 H z2 H z3 H z4 The diagonal blocks in these decompositions have the following forms: ) blocks associated with nonzero eigenvalues of Ĥ and H: A c G c Ĝc have the forms as in (61) and Ĥc H c have the forms as in (62); 1) one block corresponding to 2n Jordan blocks of size 1 1 of Ĥ and 2m Jordan blocks of size 1 1 of H associated with the eigenvalue zero: A z1 = 2m 2n G z1 = J m Ĝ z1 = J n Ĥ z1 = 2n H z1 = 2m ; 2) blocks corresponding to a pair of j j Jordan blocks of Ĥ and H associated with the eigenvalue zero: γ 1 γ 2 γ l A z2 = J 2 () J 4 () J 2l () γ 1 R1 γ 2 R2 γ l Rl G z2 = R 1 R 2 R l γ 1 R1 γ 2 R2 γ l Rl Ĝ z2 = R 1 R 2 R l γ 1 γ 2 Ĥ z2 = 2 ˆΓ 4 J4 2 γ () l ˆΓ 2l J2l 2 () H z2 = γ 1 γ 2 γ l 2 Γ 4 J4 2 ()T Γ 2l J2l 2 ()T where γ 1 γ l N {} ˆΓ 2j = ( I j 1 ) I 1 ( I j ) and Γ 2j = ( I j ) I 1 ( I j 1 ) for j = 2 l; thus Ĥ z2 and H z2 both have each 2γ j Jordan blocks of size j j for j = 1 l; 26

3) blocks corresponding to two j j Jordan blocks of Ĥ and two (j + 1) (j + 1) Jordan blocks of H associated with the eigenvalue zero: I 1 I 2 A z3 = m 1 I 1 m 2 I 2 4 2 6 4 G z3 = m 1 R2 m 2 R3 R 2 R 3 Ĝ z3 = m 1 R1 m 2 R2 R 1 R 2 m 1 Ĥ z3 = 2 m 2 J2 () J 2 () H z3 = m 1 T J2 () m 2 T J3 () J 2 () J 3 () m l I l 1 I l 1 2l (2l 2) m l 1 Rl R l m l 1 R l 1 R l 1 Jl 1 () J l 1 () m l 1 T Jl () J l () where m 1 m l 1 N {}; thus Ĥ z3 has 2m j Jordan blocks of size j j and H z3 has 2m j Jordan blocks of size (j + 1) (j + 1) for j = 1 l 1; 4) blocks corresponding to two (j + 1) (j + 1) Jordan blocks of Ĥ and two j j Jordan blocks of H associated with the eigenvalue zero: A z4 = n 1 I1 n 2 n I2 l 1 Il 1 I 1 2 4 I 2 4 6 I l 1 (2l 2) 2l n 1 R1 G z4 = n 2 n R2 l 1 R l 1 R 1 R 2 R l 1 n 1 R2 Ĝ z4 = n 2 n R3 l 1 Rl R 2 R 3 R l Ĥ z4 = n 1 J2 () n 2 n J3 () l 1 Jl () J 2 () J 3 () J l () H z4 = n 1 T J1 () n 2 T n J2 () l 1 T Jl 1 () J 1 () J 2 () J l 1 () where n 1 n l 1 N {}; thus Ĥ z4 has 2n j Jordan blocks of size (j + 1) (j + 1) and H z4 has 2n j Jordan blocks of size j j for j = 1 l 1; Then for the eigenvalue zero the matrices Ĥ and H have 2γ j + 2m j + 2n j 1 respectively 2γ j + 2m j 1 + 2n j Jordan blocks of size j j for j = 1 l Here l is the maximum of the indices of Ĥ and H (Here index refers to the maximal size of a Jordan block associated with the eigenvalue zero) Moreover the form (63) is unique up to simultaneous block permutation of the blocks in the diagonal blocks of the right hand side of (63) Proof The proof is presented in the Appendix m l 1 27

7 Conclusion

We have presented canonical forms for matrix triples (A, G, Ĝ), where G and Ĝ are complex symmetric or complex skew-symmetric and nonsingular. The canonical form for A can be interpreted as a variant of the singular value decomposition, because the form also displays the Jordan canonical forms of the structured matrices Ĥ = Ĝ^{-1}A^T G^{-1}A and H = G^{-1}AĜ^{-1}A^T.

Acknowledgement

We thank Leiba Rodman for some valuable comments and in particular for pointing us in the direction of Theorem 4.6.

References

[1] G. Ammar, C. Mehl, and V. Mehrmann. Schur-like forms for matrix Lie groups, Lie algebras and Jordan algebras. Linear Algebra Appl., 287:11-39, 1999.
[2] Y. Bolschakov and B. Reichstein. Unitary equivalence in an indefinite scalar product: an analogue of singular-value decomposition. Linear Algebra Appl., 222:155-226, 1995.
[3] A. Bunse-Gerstner and W. B. Gragg. Singular value decompositions of complex symmetric matrices. J. Comput. Appl. Math., 21:41-54, 1988.
[4] H. Flanders. Elementary divisors of AB and BA. Proc. Amer. Math. Soc., 2:871-874, 1951.
[5] I. Gohberg, P. Lancaster, and L. Rodman. Matrices and Indefinite Scalar Products. Birkhäuser, Basel, 1983.
[6] I. Gohberg, P. Lancaster, and L. Rodman. Indefinite Linear Algebra and Applications. Birkhäuser, Basel, 2005.
[7] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition, 1996.
[8] A. Hilliges, C. Mehl, and V. Mehrmann. On the solution of palindromic eigenvalue problems. In Proceedings of the 4th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS), Jyväskylä, Finland, 2004. CD-ROM.
[9] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
[10] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991.
[11] R. A. Horn and D. Merino. Contragredient equivalence: a canonical form and some applications. Linear Algebra Appl., 214:43-92, 1995.
[12] P. Lancaster and L. Rodman. The Algebraic Riccati Equation. Oxford University Press, Oxford, 1995.