Structured decompositions for matrix triples: SVD-like concepts for structured matrices


Christian Mehl    Volker Mehrmann    Hongguo Xu

July 2008

In memory of Ralph Byers (1955–2007)

Abstract. Canonical forms for matrix triples (A, G, Ĝ), where A is arbitrary rectangular and G, Ĝ are either real symmetric or skew-symmetric, or complex Hermitian or skew-Hermitian, are derived. These forms generalize classical product Schur forms as well as singular value decompositions. A new proof for the complex case is given, in which there is no need to distinguish whether G and Ĝ are Hermitian or skew-Hermitian. This proof is independent of the results in [1], where a similar canonical form has been obtained for the complex case, and it allows a generalization to the real case. There the three cases, i.e., that G and Ĝ are both symmetric, both skew-symmetric, or one each, are treated separately.

Keywords: matrix triples, indefinite inner product, structured SVD, canonical form, Hamiltonian matrix, skew-Hamiltonian matrix

AMS subject classification: 15A21, 65F15, 65L80, 65L05

1 Introduction

Let F denote either the complex field C or the real field R. Consider a triple of matrices (A, G, Ĝ) with A ∈ F^{m×n}, G ∈ F^{m×m}, and Ĝ ∈ F^{n×n}, where G and Ĝ are nonsingular and either Hermitian or skew-Hermitian (in the complex case) or symmetric or skew-symmetric (in the real case). In this paper we derive canonical forms (A_CF, G_CF, Ĝ_CF) under the transformation

    (A_CF, G_CF, Ĝ_CF) := (X* A Y, X* G X, Y* Ĝ Y)    (1.1)

with nonsingular matrices X ∈ F^{m×m} and Y ∈ F^{n×n}. (Here A* denotes the conjugate transpose of a matrix A if F = C, or the transpose if F = R.)

School of Mathematics, University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom; mehl@maths.bham.ac.uk.
Technische Universität Berlin, Institut für Mathematik, MA 4-5, Straße des 17. Juni, Berlin, Germany; mehrmann@math.tu-berlin.de.
Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA; xu@math.ku.edu. Partially supported by the University of Kansas General Research Fund allocation #. Part of the work was done while this author was visiting TU Berlin, whose hospitality is gratefully acknowledged.
Partially supported by the Deutsche Forschungsgemeinschaft through the DFG Research Center Matheon "Mathematics for key technologies" in Berlin.

The canonical form for the complex case is already known and has appeared in [1], although uniqueness of the canonical form had not been considered there. The real case, however, has so far only been investigated in [5] for the special case that G = I_{m1} ⊕ (−I_{m2}) and Ĝ = I_n, and in [6], where a numerical method was derived for this case. In this paper, based on staircase-like decompositions, we give a new and independent proof of the canonical form in the complex case. These staircase decompositions have analogues in the real case, which allow a generalization of the results for the complex case to cover the real case as well. The difficulties encountered in the treatment of the real case in full generality stem from the fact that one has to distinguish the cases that G and Ĝ are either both symmetric, both skew-symmetric, or that one is symmetric and the other skew-symmetric. In the complex case, in contrast, there is no need to distinguish between Hermitian and skew-Hermitian matrices G and Ĝ, because multiplication with the imaginary unit ı easily converts a Hermitian matrix into a skew-Hermitian matrix and vice versa. A corresponding transformation can be performed on the canonical form, so that all cases are covered by presenting the canonical form for the case that G and Ĝ are both Hermitian. A third case, besides the real case and the complex case with Hermitian and skew-Hermitian G and Ĝ, is obtained if one assumes that G and Ĝ are complex symmetric or complex skew-symmetric and if one replaces the conjugate transpose in (1.1) by the transpose. This case has been investigated in [18]. So, together with this paper, the complete set of canonical forms for real and complex matrix triples under the transformation (1.1) is available.

The study of the described canonical forms is motivated by the goal of unifying the solution procedures for eigenvalue problems associated with structured matrices from Lie and Jordan algebras related to indefinite inner products. Consider, for example, a signature matrix

    Σ_{π1,ν1} = [ I_{π1}  0 ;  0  −I_{ν1} ],    π1 + ν1 = m.

A matrix H ∈ C^{m×m} is called Σ_{π1,ν1}-Hermitian if (Σ_{π1,ν1} H)* = Σ_{π1,ν1} H, i.e., if Σ_{π1,ν1} H is Hermitian. The matrix Σ_{π1,ν1} H then possesses a factorization Σ_{π1,ν1} H = A Σ_{π2,ν2} A*, where Σ_{π2,ν2} is another signature matrix and A ∈ C^{m×n} with n = π2 + ν2. This means that H has a factorization H = Σ_{π1,ν1} A Σ_{π2,ν2} A*. If for the triple (A, Σ_{π1,ν1}, Σ_{π2,ν2}) we can determine a suitable canonical form (A_CF, G_CF, Ĝ_CF) = (X* A Y, X* Σ_{π1,ν1} X, Y* Σ_{π2,ν2} Y), then this will allow us to determine the eigenstructure of H, because

    X^{-1} H X = (X* Σ_{π1,ν1} X)^{-1} (X* A Y)(Y* Σ_{π2,ν2} Y)^{-1} (Y* A* X) = G_CF^{-1} A_CF Ĝ_CF^{-1} A_CF*.

Simultaneously, the eigenstructure of the Σ_{π2,ν2}-Hermitian matrix Ĥ = Σ_{π2,ν2} A* Σ_{π1,ν1} A is obtained, because

    Y^{-1} Σ_{π2,ν2} A* Σ_{π1,ν1} A Y = Ĝ_CF^{-1} A_CF* G_CF^{-1} A_CF.
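The factorization H = Σ_{π1,ν1} A Σ_{π2,ν2} A* and its companion Ĥ can be checked numerically. The NumPy sketch below is purely illustrative (sizes, seed, and variable names are not from the paper); it verifies that Σ_{π1,ν1} H is Hermitian and that H and Ĥ share their nonzero eigenvalues (the characteristic polynomial of H is that of Ĥ times an extra zero root).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative signature matrices: Sigma_{2,1} (m = 3) and Sigma_{1,1} (n = 2).
Sigma1 = np.diag([1.0, 1.0, -1.0])
Sigma2 = np.diag([1.0, -1.0])

A = rng.standard_normal((3, 2))

# H = Sigma1 A Sigma2 A^* is Sigma1-Hermitian: Sigma1 @ H is Hermitian.
H = Sigma1 @ A @ Sigma2 @ A.T
assert np.allclose((Sigma1 @ H).T, Sigma1 @ H)

# Companion matrix Hhat = Sigma2 A^* Sigma1 A shares the nonzero eigenvalues
# of H; here H is 3x3 of rank at most 2, so it has one extra zero eigenvalue.
Hhat = Sigma2 @ A.T @ Sigma1 @ A
assert np.allclose(np.poly(H), np.append(np.poly(Hhat), 0.0))
```

This mirrors the standard fact that MN and NM have the same nonzero spectrum, applied with M = Σ_{π1,ν1} A Σ_{π2,ν2} and N = A*.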

In general, the canonical form (1.1) of the matrix triple (A, G, Ĝ) will allow us to simultaneously determine the eigenstructures of the two structured matrices

    H = G^{-1} A Ĝ^{-1} A*,    Ĥ = Ĝ^{-1} A* G^{-1} A.    (1.2)

Structured matrices with such product representations cover all the structured matrices from the Lie and Jordan algebras (see [3]) associated with the sesquilinear forms

    <x, y>_G = x* G y,    <x, y>_Ĝ = x* Ĝ y.    (1.3)

Furthermore, the form (1.1) can be interpreted as a generalization of the singular value decomposition [8] of a matrix A ∈ C^{m×n}, i.e., of the decomposition

    A_CF := U* A V = diag(σ_1, ..., σ_r, 0, ..., 0),    σ_1 ≥ ... ≥ σ_r > 0,

with unitary matrices U, V. Indeed, the SVD can be considered as a canonical form for the matrix triple (A, I_m, I_n) under the transformation

    (A, I_m, I_n) → (A_CF, I_m, I_n) = (X* A Y, X* I_m X, Y* I_n Y).    (1.4)

Here the equation for the first components of the two matrix triples in (1.4) is the actual singular value decomposition, while the equations for the second and third components just force the transformation matrices to be unitary. The canonical form then displays the eigenstructure of the I_n-selfadjoint matrix A* A and the I_m-selfadjoint matrix A A*, because the nonzero singular values σ_1, ..., σ_r are just the square roots of the nonzero eigenvalues of A* A and A A*.

When generalizing the concept of the singular value decomposition to analogous factorizations for linear maps L : C^n → C^m, where the spaces C^n and C^m are equipped with indefinite inner products given by invertible Hermitian matrices G ∈ C^{m×m} and Ĝ ∈ C^{n×n}, one may consider applying a transformation A → X* A Y to a matrix representation of L, where X and Y are matrices that are unitary with respect to the sesquilinear forms (1.3), i.e., where X* G X = G and Y* Ĝ Y = Ĝ. However, if one allows general changes of bases in the spaces C^n and C^m, i.e., changes that affect the indefinite inner products as well, then this corresponds exactly to the transformation (1.1), and the canonical forms turn out to be less complicated. Generalizations of the singular value decomposition in the sense of this paper have been studied earlier in the literature, probably starting with [9, 1]. The generalized singular value decomposition defined there corresponds to (1.1) for the case that for all three matrices A_CF, H_CF := G_CF^{-1} A_CF Ĝ_CF^{-1} A_CF*, and Ĥ_CF := Ĝ_CF^{-1} A_CF* G_CF^{-1} A_CF a diagonal representation can be chosen. The general complex case (allowing also non-diagonal representations) was then discussed in [1].

In [14] the canonical forms of the matrices X^[*] X and X X^[*] were investigated, where X^[*] = H^{-1} X* H denotes the adjoint of a matrix X ∈ C^{n×n} with respect to the indefinite inner product induced by the nonsingular Hermitian matrix H ∈ C^{n×n}. This question is motivated by the theory of polar decompositions in indefinite inner product spaces. It is

said that a matrix X ∈ C^{n×n} allows an H-polar decomposition if there exist an H-selfadjoint matrix B, i.e., a matrix satisfying B* H = H B, and an H-unitary matrix U, i.e., a matrix satisfying U* H U = H, such that X = U B. It was shown in [19] that X allows an H-polar decomposition if and only if the two matrices X^[*] X and X X^[*] have the same canonical forms as H-selfadjoint matrices. Setting A = X, G = H^{-1}, and Ĝ = H, we find that

    X^[*] X = Ĝ^{-1} A* G^{-1} A = Ĥ    and    X X^[*] = A Ĝ^{-1} A* G^{-1} = G H G^{-1},

and thus the canonical forms of X^[*] X and X X^[*] can be read off from the canonical form of the matrix triple (A, G, Ĝ) = (X, H^{-1}, H). Consequently, many of the results from [14] can be recovered from the results in this paper. Recently, the relation of the spectra of X^[*] X and X X^[*] has also been investigated in the setting of infinite-dimensional indefinite inner product spaces (also known as Krein spaces).

A canonical form closely related to the form obtained under the transformation (1.1) has been developed in [1], where transformations of the form

    (B, C) → (X^{-1} B Y, Y^{-1} C X),    B ∈ C^{m×n}, C ∈ C^{n×m},

have been considered. There a canonical form is constructed that reveals the Jordan structures of the products BC and CB. In our framework this corresponds to a canonical form of the pair of matrices (G^{-1} A, Ĝ^{-1} A*) rather than of the triple (A, G, Ĝ). When focussing on matrix triples, our approach is more general, because the canonical form for the pair (G^{-1} A, Ĝ^{-1} A*) can easily be read off from the canonical form for (A, G, Ĝ), but not vice versa. The approach in [1], on the other hand, focusses on different aspects and allows the consideration of pairs (B, C) where the ranks of B and C are distinct. This case is not covered by the canonical forms obtained in this paper or in [18], as the pairs of matrices considered here always have the same rank.

The paper is organized as follows. In Section 2 we review the definitions of matrices having structures with respect to indefinite inner products and provide some auxiliary results. In Section 3 we present preliminary factorizations that are essential tools for the derivation of the canonical forms. In Section 4 we then derive the canonical forms for the complex case and for the real case when G and Ĝ are both symmetric. In Section 5 we study the case that one of G, Ĝ is real symmetric and the other is real skew-symmetric. In Section 6 we present the canonical forms for the case that both G, Ĝ are real skew-symmetric.

Throughout the paper we use F to denote the field of real or complex numbers, i.e., F = R or F = C. R_− (R_+) is the set of real negative (positive) numbers, and C_− (C_+) is the open left (right) half complex plane. The n×n identity and n×n zero matrices are denoted by I_n and O_n, respectively. The m×n zero matrix is denoted by O_{m×n}, and e_j is the jth column of the identity matrix or, equivalently, the jth standard basis vector of F^n. Moreover, we introduce

    Σ_{π,ν} = I_π ⊕ (−I_ν),    Σ_{π,ν,δ} = I_π ⊕ (−I_ν) ⊕ O_δ,    J_n = [ 0  I_n ;  −I_n  0 ].

The transpose and conjugate transpose of a matrix A are denoted by A^T and A*, respectively. We use A_1 ⊕ ... ⊕ A_k to denote a block diagonal matrix with diagonal blocks A_1, ..., A_k. If A = [a_ij] ∈ F^{n×m} and B ∈ F^{l×k}, then A ⊗ B = [a_ij B] ∈ F^{nl×mk} denotes the Kronecker

product of A and B. For a real symmetric or complex Hermitian matrix A, we call (π, ν, δ) the Sylvester inertia index of A, with π, ν, and δ being the numbers of positive, negative, and zero eigenvalues of A, respectively. For a square matrix A, σ(A) denotes the spectrum of A. We use R_n to denote the n×n reverse identity (ones on the antidiagonal, zeros elsewhere) and J_n(λ) to denote the n×n upper triangular Jordan block associated with the eigenvalue λ. Moreover,

    J_n(a, b) = I_n ⊗ [ a  b ;  −b  a ] + J_n(0) ⊗ I_2

denotes the 2n×2n block associated with a pair of complex conjugate eigenvalues a ± ıb in the real Jordan form of a real matrix.

2 Matrices structured with respect to sesquilinear forms

Our general theory will cover and generalize results for the following classes of matrices.

Definition 2.1. Let G ∈ F^{n×n} be invertible, and let H, K ∈ F^{n×n} be such that (GH)* = GH and (GK)* = −GK.
1) If F = C and G is Hermitian or skew-Hermitian, then H is called G-Hermitian and K is called G-skew-Hermitian.
2) If F = R and G is symmetric, then H is called G-symmetric and K is called G-skew-symmetric.
3) If F = R and G is skew-symmetric, then H is called G-Hamiltonian and K is called G-skew-Hamiltonian.

G-Hermitian and G-symmetric matrices are often called G-selfadjoint matrices, as they are selfadjoint with respect to the indefinite inner product induced by G. In this paper we prefer the notions G-Hermitian and G-symmetric in order to clearly distinguish between the real and the complex case. Observe that transformations of the form

    (H, G) → (P^{-1} H P, P* G P),    P ∈ F^{n×n} invertible,

preserve the structure of H with respect to G, i.e., if, for example, H is G-Hermitian, then P^{-1} H P is P*GP-Hermitian as well. Clearly, each complex Hermitian or real symmetric

invertible matrix G is congruent to Σ_{π,ν} for some π, ν, and each real skew-symmetric invertible matrix G is congruent to J_n for some n. Thus, we may always restrict ourselves to the case that either G = Σ_{π,ν} or G = J_n. In the latter case, we refer to J_n-Hamiltonian or J_n-skew-Hamiltonian matrices simply as Hamiltonian or skew-Hamiltonian matrices, respectively.

G-(skew-)Hermitian, G-(skew-)symmetric, and G-(skew-)Hamiltonian matrices have been intensively studied in the literature. In particular, canonical forms for such matrices have been derived in many places. We review these well-known canonical forms in the following.

Theorem 2.2 (Canonical form for G-Hermitian matrices). Let G ∈ C^{n×n} be Hermitian and invertible, and let H ∈ C^{n×n} be G-Hermitian. Then there exists an invertible matrix X ∈ C^{n×n} such that

    X^{-1} H X = H_c ⊕ H_r,    X* G X = G_c ⊕ G_r,

where

    H_c = H_{c1} ⊕ ... ⊕ H_{c m_c},  G_c = G_{c1} ⊕ ... ⊕ G_{c m_c},  H_r = H_{r1} ⊕ ... ⊕ H_{r m_r},  G_r = G_{r1} ⊕ ... ⊕ G_{r m_r},

and where the diagonal blocks have the following forms:

1) blocks associated with pairs (λ_j, λ̄_j) of nonreal eigenvalues of H:

    H_{cj} = J_{ξj}(λ_j) ⊕ J_{ξj}(λ̄_j),    G_{cj} = [ 0  R_{ξj} ;  R_{ξj}  0 ] = R_{2ξj},

where Im λ_j > 0 and ξ_j ∈ N for j = 1, ..., m_c;

2) blocks associated with real eigenvalues:

    H_{rj} = J_{ηj}(α_j),    G_{rj} = s_j R_{ηj},

where α_j ∈ R, s_j ∈ {−1, 1}, and η_j ∈ N for j = 1, ..., m_r.

H has the (not necessarily pairwise distinct) nonreal eigenvalues λ_1, ..., λ_{m_c}, λ̄_1, ..., λ̄_{m_c} and the (not necessarily pairwise distinct) real eigenvalues α_1, ..., α_{m_r}.

Remark 2.3. Besides the eigenvalues, the signs s_1, ..., s_{m_r} associated with the real eigenvalues are additional invariants of G-Hermitian matrices. The collection of these signs is called the sign characteristic of H, sometimes also called the Krein signature [13]. For details on the sign characteristic we refer to [7] and the references therein.

The real version of Theorem 2.2 is as follows:

Theorem 2.4 (Canonical form for real G-symmetric matrices). Let G ∈ R^{n×n} be symmetric and invertible, and let H ∈ R^{n×n} be G-symmetric. Then there exists an invertible matrix X ∈ R^{n×n} such that

    X^{-1} H X = H_c ⊕ H_r,    X^T G X = G_c ⊕ G_r,

where

    H_c = H_{c1} ⊕ ... ⊕ H_{c m_c},  G_c = G_{c1} ⊕ ... ⊕ G_{c m_c},  H_r = H_{r1} ⊕ ... ⊕ H_{r m_r},  G_r = G_{r1} ⊕ ... ⊕ G_{r m_r},

and where the diagonal blocks have the following forms:

1) blocks associated with pairs (λ_j, λ̄_j) of nonreal eigenvalues of H:

    H_{cj} = J_{ξj}(a_j, b_j),    G_{cj} = R_{2ξj},

where b_j = Im λ_j > 0, a_j = Re λ_j, and ξ_j ∈ N for j = 1, ..., m_c;

2) blocks associated with real eigenvalues:

    H_{rj} = J_{ηj}(α_j),    G_{rj} = s_j R_{ηj},

where α_j ∈ R, s_j ∈ {−1, 1}, and η_j ∈ N for j = 1, ..., m_r.

H has the (not necessarily pairwise distinct) nonreal eigenvalues λ_1, ..., λ_{m_c}, λ̄_1, ..., λ̄_{m_c} and the (not necessarily pairwise distinct) real eigenvalues α_1, ..., α_{m_r}.

The corresponding canonical form for G-skew-Hermitian matrices immediately follows from Theorem 2.2, because a matrix K is G-skew-Hermitian if and only if H = ıK is G-Hermitian. In the real case, however, the trick of multiplying by the imaginary unit ı is not an option, and a canonical form has to be derived separately. We also need additional notation. We denote

    Ξ_n = diag(1, −1, 1, ..., (−1)^{n−1}),    Γ_n = Ξ_n R_n,

i.e., Γ_n is the n×n reverse identity with alternating signs along the antidiagonal.

Theorem 2.5 (Canonical form for G-skew-symmetric matrices). Let G ∈ R^{n×n} be symmetric and invertible, and let K ∈ R^{n×n} be G-skew-symmetric. Then there exists an invertible matrix X ∈ R^{n×n} such that

    X^{-1} K X = K_c ⊕ K_r ⊕ K_ı ⊕ K_z,    X^T G X = G_c ⊕ G_r ⊕ G_ı ⊕ G_z,

where

    K_c = K_{c1} ⊕ ... ⊕ K_{c m_c},  G_c = G_{c1} ⊕ ... ⊕ G_{c m_c},
    K_r = K_{r1} ⊕ ... ⊕ K_{r m_r},  G_r = G_{r1} ⊕ ... ⊕ G_{r m_r},
    K_ı = K_{ı1} ⊕ ... ⊕ K_{ı m_ı},  G_ı = G_{ı1} ⊕ ... ⊕ G_{ı m_ı},
    K_z = K_{z1} ⊕ ... ⊕ K_{z,m_o+m_e},  G_z = G_{z1} ⊕ ... ⊕ G_{z,m_o+m_e},

and where the diagonal blocks have the following forms:

1) blocks associated with quadruples (λ_j, λ̄_j, −λ_j, −λ̄_j) of nonreal, non purely imaginary eigenvalues of K:

    K_{cj} = J_{ξj}(a_j, b_j) ⊕ (−J_{ξj}(a_j, b_j)),    G_{cj} = [ 0  R_{2ξj} ;  R_{2ξj}  0 ] = R_{4ξj},

where a_j = Re λ_j > 0, b_j = Im λ_j > 0, and ξ_j ∈ N for j = 1, ..., m_c;

2) blocks associated with pairs (α_j, −α_j) of real nonzero eigenvalues of K:

    K_{rj} = J_{ηj}(α_j) ⊕ (−J_{ηj}(α_j)),    G_{rj} = [ 0  R_{ηj} ;  R_{ηj}  0 ] = R_{2ηj},

where α_j > 0 and η_j ∈ N for j = 1, ..., m_r;

3) blocks associated with pairs (ıβ_j, −ıβ_j) of purely imaginary nonzero eigenvalues of K:

    K_{ıj} = [ 0  J_{ρj}(β_j) ;  −J_{ρj}(β_j)  0 ],    G_{ıj} = s_j (R_{ρj} ⊕ R_{ρj}),

where β_j > 0, s_j ∈ {−1, 1}, and ρ_j ∈ N for j = 1, ..., m_ı;

4) blocks associated with the eigenvalue λ = 0 of K:

    K_{zj} = J_{ζj}(0),    G_{zj} = t_j Γ_{ζj},

where ζ_j ∈ N is odd and t_j ∈ {−1, 1} for j = 1, ..., m_o, and

    K_{zj} = J_{ζj}(0) ⊕ (−J_{ζj}(0)),    G_{zj} = [ 0  R_{ζj} ;  R_{ζj}  0 ] = R_{2ζj},

where ζ_j ∈ N is even for j = m_o + 1, ..., m_o + m_e.

K has the (not necessarily pairwise distinct) eigenvalues ±λ_1, ..., ±λ_{m_c}, ±λ̄_1, ..., ±λ̄_{m_c}, ±α_1, ..., ±α_{m_r}, ±ıβ_1, ..., ±ıβ_{m_ı}, and the additional eigenvalue 0, provided that m_o + m_e > 0.

If G is skew-Hermitian, then the canonical form for G-Hermitian (G-skew-Hermitian) matrices follows directly from Theorem 2.2, because ıG is Hermitian, and a matrix H is G-Hermitian (G-skew-Hermitian) if and only if H is ıG-Hermitian (ıG-skew-Hermitian). The real case once again has to be treated separately.

Theorem 2.6 (Canonical form for G-Hamiltonian matrices). Let G ∈ R^{n×n} be skew-symmetric and invertible, and let H ∈ R^{n×n} be G-Hamiltonian. Then there exists an invertible matrix X ∈ R^{n×n} such that

    X^{-1} H X = H_c ⊕ H_r ⊕ H_ı ⊕ H_z,    X^T G X = G_c ⊕ G_r ⊕ G_ı ⊕ G_z,

where

    H_c = H_{c1} ⊕ ... ⊕ H_{c m_c},  G_c = G_{c1} ⊕ ... ⊕ G_{c m_c},
    H_r = H_{r1} ⊕ ... ⊕ H_{r m_r},  G_r = G_{r1} ⊕ ... ⊕ G_{r m_r},
    H_ı = H_{ı1} ⊕ ... ⊕ H_{ı m_ı},  G_ı = G_{ı1} ⊕ ... ⊕ G_{ı m_ı},
    H_z = H_{z1} ⊕ ... ⊕ H_{z,m_o+m_e},  G_z = G_{z1} ⊕ ... ⊕ G_{z,m_o+m_e},

and where the diagonal blocks have the following forms:

1) blocks associated with quadruples (λ_j, λ̄_j, −λ_j, −λ̄_j) of nonreal, non purely imaginary eigenvalues of H:

    H_{cj} = J_{ξj}(a_j, b_j) ⊕ (−J_{ξj}(a_j, b_j)),    G_{cj} = [ 0  R_{2ξj} ;  −R_{2ξj}  0 ],

where a_j = Re λ_j > 0, b_j = Im λ_j > 0, and ξ_j ∈ N for j = 1, ..., m_c;

2) blocks associated with pairs (α_j, −α_j) of real nonzero eigenvalues of H:

    H_{rj} = J_{ηj}(α_j) ⊕ (−J_{ηj}(α_j)),    G_{rj} = [ 0  R_{ηj} ;  −R_{ηj}  0 ],

where α_j > 0 and η_j ∈ N for j = 1, ..., m_r;

3) blocks associated with pairs (ıβ_j, −ıβ_j) of purely imaginary nonzero eigenvalues of H:

    H_{ıj} = [ 0  J_{ρj}(β_j) ;  −J_{ρj}(β_j)  0 ],    G_{ıj} = s_j [ 0  R_{ρj} ;  −R_{ρj}  0 ],

where β_j > 0, s_j ∈ {−1, 1}, and ρ_j ∈ N for j = 1, ..., m_ı;

4) blocks associated with the eigenvalue λ = 0 of H:

    H_{zj} = J_{ζj}(0) ⊕ (−J_{ζj}(0)),    G_{zj} = [ 0  R_{ζj} ;  −R_{ζj}  0 ],

where ζ_j ∈ N is odd for j = 1, ..., m_o, and

    H_{zj} = Ξ_{ζj} J_{ζj}(0),    G_{zj} = t_j Γ_{ζj},

where ζ_j ∈ N is even and t_j ∈ {−1, 1} for j = m_o + 1, ..., m_o + m_e.

H has the (not necessarily pairwise distinct) eigenvalues ±λ_1, ..., ±λ_{m_c}, ±λ̄_1, ..., ±λ̄_{m_c}, ±α_1, ..., ±α_{m_r}, ±ıβ_1, ..., ±ıβ_{m_ı}, and the additional eigenvalue 0, provided that m_o + m_e > 0.

Theorem 2.7 (Canonical form for G-skew-Hamiltonian matrices). Let G ∈ R^{n×n} be skew-symmetric and invertible, and let K ∈ R^{n×n} be G-skew-Hamiltonian. Then there exists an invertible matrix X ∈ R^{n×n} such that

    X^{-1} K X = K_c ⊕ K_r,    X^T G X = G_c ⊕ G_r,

where

    K_c = K_{c1} ⊕ ... ⊕ K_{c m_c},  G_c = G_{c1} ⊕ ... ⊕ G_{c m_c},  K_r = K_{r1} ⊕ ... ⊕ K_{r m_r},  G_r = G_{r1} ⊕ ... ⊕ G_{r m_r},

and where the diagonal blocks have the following forms:

1) blocks associated with pairs (λ_j, λ̄_j) of nonreal eigenvalues of K:

    K_{cj} = J_{ξj}(a_j, b_j) ⊕ J_{ξj}(a_j, b_j),    G_{cj} = [ 0  R_{2ξj} ;  −R_{2ξj}  0 ],

where a_j = Re λ_j ∈ R, b_j = Im λ_j > 0, and ξ_j ∈ N for j = 1, ..., m_c;

2) blocks associated with real eigenvalues α_j of K:

    K_{rj} = J_{ηj}(α_j) ⊕ J_{ηj}(α_j),    G_{rj} = [ 0  R_{ηj} ;  −R_{ηj}  0 ],

where α_j ∈ R and η_j ∈ N for j = 1, ..., m_r.

K has the (not necessarily pairwise distinct) nonreal eigenvalues a_1 ± ıb_1, ..., a_{m_c} ± ıb_{m_c} and the (not necessarily pairwise distinct) real eigenvalues α_1, ..., α_{m_r} (possibly including zero).

In the following we need some results concerning the existence of structured square roots of structured matrices. This question has been deeply investigated in the literature, mostly in the context of polar decompositions, and necessary and sufficient conditions for the existence of square roots have been developed; see [1, 4]. We do not quote the results in full generality, but only consider the following special cases.
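As a quick sanity check of the Hamiltonian conventions of Definition 2.1(3), the NumPy sketch below (purely illustrative; sizes and seed are not from the paper) verifies that J_n H is symmetric for a J_n-Hamiltonian H, and that the square of a Hamiltonian matrix is skew-Hamiltonian, a fact used again after Theorem 2.9.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])   # J_n: skew-symmetric, J^{-1} = -J

# Build a J-Hamiltonian H as H = J^{-1} S with S symmetric, so (J H)^T = J H.
S = rng.standard_normal((2 * n, 2 * n))
S = S + S.T
H = -J @ S                                        # J^{-1} = -J since J^2 = -I
assert np.allclose((J @ H).T, J @ H)

# The square of a Hamiltonian matrix is skew-Hamiltonian: (J H^2)^T = -J H^2.
K = H @ H
assert np.allclose((J @ K).T, -(J @ K))
```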

Theorem 2.8. Let G ∈ F^{n×n} be Hermitian and nonsingular, and let H ∈ F^{n×n} be G-Hermitian, nonsingular, and such that σ(H) ∩ R_− = ∅. Then there exists a square root S ∈ F^{n×n} of H that satisfies σ(S) ⊂ C_+. This square root is unique and is a real polynomial in H (i.e., a polynomial in H whose coefficients are real). In particular, S is G-Hermitian.

Proof. Comparing the canonical forms of G-Hermitian and G-symmetric matrices, it is easily seen that any pair (G, H) ∈ C^{n×n} × C^{n×n}, where G is a nonsingular Hermitian matrix and H is G-Hermitian, can be transformed into a real pair (G_r, H_r) = (P* G P, P^{-1} H P) by some complex nonsingular transformation matrix P ∈ C^{n×n}. (This corresponds to the well-known fact that a matrix H is G-Hermitian for some Hermitian G if and only if H is similar to a real matrix; see [7].) Therefore, it is sufficient to consider the real case. Then, by the discussion in Chapter 6.4 of [11], we obtain that a square root S of H with σ(S) ⊂ C_+ exists, is unique, and can be expressed as a polynomial (with real coefficients) in H. Clearly, this polynomial stays invariant under the transformation with the transformation matrix P in the complex case. It is then straightforward to check that a real polynomial in H is again G-Hermitian.

In the case of a skew-symmetric real bilinear form we have a similar result. The proof follows exactly the same lines as the proof of the preceding theorem.

Theorem 2.9. Let G ∈ R^{n×n} be skew-symmetric and nonsingular, and let K ∈ R^{n×n} be G-skew-Hamiltonian, nonsingular, and such that σ(K) ∩ R_− = ∅. Then there exists a square root S ∈ R^{n×n} of K that satisfies σ(S) ⊂ C_+. This square root is unique and is a real polynomial in K. In particular, S is G-skew-Hamiltonian.

One might ask whether G-Hamiltonian matrices have G-Hamiltonian or G-skew-Hamiltonian square roots, but this is never the case, because squares of such matrices must always be G-skew-Hamiltonian. On the other hand, each real G-skew-Hamiltonian matrix K will have a G-Hamiltonian square root [4], but this square root cannot be a polynomial in K, because such a polynomial would be G-skew-Hamiltonian again.

After reviewing some basic canonical forms in this section, we introduce some basic factorizations in the next section.

3 Preliminary factorizations

In the following sections we aim to compute canonical forms via some type of staircase algorithm. A key factorization needed in the steps of this algorithm is presented in the following proposition.

Proposition 3.1. Let B ∈ F^{m×n}, m ≥ n, and let π, ν be nonnegative integers such that π + ν = m. Suppose that rank B = n and that the inertia index of the Hermitian matrix B* Σ_{π,ν} B is (π′, ν′, δ′). Then π′ + ν′ + δ′ = n, and there exists an invertible matrix X ∈ F^{m×m} such that

    X* B = [ B_0 ; O ]

with row blocks of sizes n and m − n, and

    X* Σ_{π,ν} X = [ Σ_{π′,ν′}  0  0  0 ;  0  0  0  I_{δ′} ;  0  0  Σ_{π1,ν1}  0 ;  0  I_{δ′}  0  0 ]

with block sizes π′ + ν′, δ′, π1 + ν1, δ′, where B_0 ∈ F^{n×n} is nonsingular, π1 = π − π′ − δ′, and ν1 = ν − ν′ − δ′.

Proof. By assumption there exists a nonsingular matrix Y ∈ F^{n×n} such that Y* B* Σ_{π,ν} B Y = Σ_{π′,ν′,δ′}. Let B_1 ∈ F^{m×π′} be the matrix formed by the leading π′ columns of BY, and partition it as

    B_1 = [ B_{11} ; B_{21} ],    B_{11} ∈ F^{π×π′},  B_{21} ∈ F^{ν×π′}.

Then from B_1* Σ_{π,ν} B_1 = I_{π′} we have that

    B_{11}* B_{11} − B_{21}* B_{21} = I_{π′}.    (3.1)

Since B_{11}* B_{11} and B_{21}* B_{21} are positive semidefinite, it follows that rank B_{11} = rank(I_{π′} + B_{21}* B_{21}) = π′, and therefore π ≥ π′ by Sylvester's Law of Inertia. Hence there exists a unitary matrix U_1 ∈ F^{π×π} such that

    U_1* B_{11} = [ T_1 ; 0 ],

where T_1 ∈ F^{π′×π′} is invertible. Since B_{11}* B_{11} = T_1* T_1, we obtain that (3.1) is equivalent to

    I_{π′} − (B_{21} T_1^{-1})* (B_{21} T_1^{-1}) = (T_1 T_1*)^{-1}.

Then the matrix I_ν − (B_{21} T_1^{-1})(B_{21} T_1^{-1})* is positive definite, because it easily follows from [5] that it has the same eigenvalues as I_{π′} − (B_{21} T_1^{-1})* (B_{21} T_1^{-1}), with a possible exception of the eigenvalue λ = 1. Thus we have the factorization

    I_ν − (B_{21} T_1^{-1})(B_{21} T_1^{-1})* = T_2 T_2*    (3.2)

for some invertible T_2 ∈ F^{ν×ν}. Let

    X_1 = (U_1 ⊕ I_ν) [ T_1  0  (B_{21} T_1^{-1})* T_2^{-*} ;  0  I_{π−π′}  0 ;  B_{21}  0  T_2^{-*} ].

With (3.1) and (3.2) it is easily verified that

    X_1* B_1 = [ I_{π′} ; 0 ; 0 ]

(row blocks of sizes π′, π − π′, ν) and X_1* Σ_{π,ν} X_1 = Σ_{π,ν}. Since Σ_{π,ν}² = I_m, the latter relation yields Σ_{π,ν} X_1* Σ_{π,ν} X_1 = I_m, i.e., X_1^{-1} = Σ_{π,ν} X_1* Σ_{π,ν}, and therefore also X_1 Σ_{π,ν} X_1* = Σ_{π,ν}. Also recall that B_1 consists of the first π′ columns of BY. Thus, partitioning

    X_1* B Y = [ I_{π′}  B_{12} ;  0  B_{22} ],

where B_{22} is (m − π′) × (n − π′), we obtain from

    Σ_{π′,ν′,δ′} = Y* B* Σ_{π,ν} B Y = (X_1* B Y)* Σ_{π,ν} (X_1* B Y) = [ I_{π′}  B_{12} ;  B_{12}*  B_{12}* B_{12} + B_{22}* Σ_{π−π′,ν} B_{22} ]

that B_{12} = 0 and

    B_{22}* Σ_{π−π′,ν} B_{22} = (−I_{ν′}) ⊕ O_{δ′}.

Letting B_2 ∈ F^{(m−π′)×ν′} be the matrix consisting of the leading ν′ columns of B_{22}, we obtain that B_2* Σ_{π−π′,ν} B_2 = −I_{ν′}. By a procedure analogous to the one used for B_1 above, we can determine a nonsingular matrix X_2 ∈ F^{(m−π′)×(m−π′)} such that

    X_2* B_2 = [ 0 ; I_{ν′} ; 0 ]

(row blocks of sizes π − π′, ν′, ν − ν′) and X_2* Σ_{π−π′,ν} X_2 = Σ_{π−π′,ν}, which also shows that ν ≥ ν′. With X_3 = X_1 (I_{π′} ⊕ X_2) we then have

    X_3* B Y = [ I_{π′}  0  0 ;  0  0  B_{13} ;  0  I_{ν′}  0 ;  0  0  B_{33} ],    X_3* Σ_{π,ν} X_3 = Σ_{π,ν},

which also implies X_3 Σ_{π,ν} X_3* = Σ_{π,ν}, and thus (X_3* B Y)* Σ_{π,ν} (X_3* B Y) = Σ_{π′,ν′,δ′}. Then it easily follows that

    B_{13}* B_{13} − B_{33}* B_{33} = O_{δ′},  i.e.,  B_{13}* B_{13} = B_{33}* B_{33}.    (3.3)

Let P_1 be the permutation matrix that interchanges the middle two block rows of X_3* B Y by pre-multiplication, and set X_4 = X_3 P_1*. Then

    X_4* B Y = [ I_{π′}  0  0 ;  0  I_{ν′}  0 ;  0  0  B_{13} ;  0  0  B_{33} ],    X_4* Σ_{π,ν} X_4 = Σ_{π′,ν′} ⊕ Σ_{π−π′,ν−ν′}.

Now both B_{13} and B_{33} have full column rank, because otherwise, by (3.3), B_{13} and B_{33} would have a common null space; but this is not possible, because then X_4* B Y, and hence B, would have rank less than n, contradicting the assumption. Since B_{13} ∈ F^{(π−π′)×δ′} and B_{33} ∈ F^{(ν−ν′)×δ′}, we have π ≥ π′ + δ′ and ν ≥ ν′ + δ′. Observe that (3.3) implies that the positive definite factors in the polar decompositions of B_{13} and B_{33} coincide, i.e., we have B_{13} = Ũ_3 W and B_{33} = Ũ_4 W for some Ũ_3 ∈ F^{(π−π′)×δ′} and Ũ_4 ∈ F^{(ν−ν′)×δ′} with orthonormal columns, where W = (B_{13}* B_{13})^{1/2} = (B_{33}* B_{33})^{1/2} ∈ F^{δ′×δ′} is nonsingular. Extending Ũ_3 and Ũ_4 to unitary matrices U_3 ∈ F^{(π−π′)×(π−π′)} and U_4 ∈ F^{(ν−ν′)×(ν−ν′)}, we obtain that

    U_3* B_{13} = [ W ; 0 ],    U_4* B_{33} = [ W ; 0 ].

Setting X_5 = X_4 (I_{π′+ν′} ⊕ U_3 ⊕ U_4), we obtain that

    X_5* B Y = [ I_{π′+ν′}  0 ;  0  W ;  0  0 ;  0  W ;  0  0 ],    X_5* Σ_{π,ν} X_5 = Σ_{π′,ν′} ⊕ Σ_{π−π′,ν−ν′},

with row blocks of sizes π′+ν′, δ′, π−π′−δ′, δ′, ν−ν′−δ′. Let P_2 be the permutation matrix that interchanges the 3rd and 4th block rows of X_5* B Y by pre-multiplication, and let X_6 = X_5 P_2*. Then

    X_6* B Y = [ I_{π′+ν′}  0 ;  0  W ;  0  W ;  0  0 ;  0  0 ],    X_6* Σ_{π,ν} X_6 = Σ_{π′,ν′} ⊕ Σ_{δ′,δ′} ⊕ Σ_{π1,ν1},

where π1 = π − π′ − δ′ and ν1 = ν − ν′ − δ′. Then setting

    Z = (1/√2) [ I_{δ′}  I_{δ′} ;  I_{δ′}  −I_{δ′} ]

and X_7 = X_6 (I_{π′+ν′} ⊕ Z ⊕ I_{π1+ν1}), it is easily verified that

    Z* [ W ; W ] = [ √2 W ; 0 ],    Z* Σ_{δ′,δ′} Z = [ 0  I_{δ′} ;  I_{δ′}  0 ],

and thus we have

    X_7* B Y = [ I_{π′+ν′}  0 ;  0  √2 W ;  0  0 ;  0  0 ],    X_7* Σ_{π,ν} X_7 = Σ_{π′,ν′} ⊕ [ 0  I_{δ′} ;  I_{δ′}  0 ] ⊕ Σ_{π1,ν1}.

Let P_3 be the permutation matrix that, by pre-multiplication, moves the zero block row of size δ′ (the third block row) of X_7* B Y below the block rows of sizes π1 and ν1, and set X = X_7 P_3*. Then

    X* B Y = [ I_{π′+ν′} ⊕ √2 W ; O ],    X* Σ_{π,ν} X = [ Σ_{π′,ν′}  0  0  0 ;  0  0  0  I_{δ′} ;  0  0  Σ_{π1,ν1}  0 ;  0  I_{δ′}  0  0 ].

The desired factorization then follows by multiplying with Y^{-1} from the right and setting B_0 = (I_{π′+ν′} ⊕ √2 W) Y^{-1}.

Proposition 3.2. Let B ∈ R^{2m×n} and suppose that rank B = n and rank B^T J_m B = 2n_2 (note that the rank of a real skew-symmetric matrix is even), and let δ′ = n − 2n_2 denote the dimension of the null space of B^T J_m B. Then there exists an invertible matrix X ∈ R^{2m×2m} such that

    X^T B = [ B_0 ; O ],    X^T J_m X = [ J_{n_2}  0  0  0 ;  0  0  0  I_{δ′} ;  0  0  J_{n_1}  0 ;  0  −I_{δ′}  0  0 ],

where B_0 ∈ R^{n×n} is nonsingular and n_1 = m − n_2 − δ′.

Proof. The proof follows the same lines as in the complex case (more precisely, as in the case of a complex skew-symmetric bilinear form induced by J_m); see [18] for details.
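The canonical forms of the next section pair Jordan blocks J_η(β) with signed reverse identities s R_η. The two elementary facts behind this pairing, that R_η J_η(β) is symmetric (so J_η(β) is G-symmetric for G = s R_η) and that R_η conjugates a Jordan block into its transpose, can be checked directly. The sketch below is illustrative (sizes and values are arbitrary):

```python
import numpy as np

def jordan(k, lam):
    """Upper-triangular k-by-k Jordan block J_k(lam)."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

eta, beta, s = 5, 2.0, -1.0
J = jordan(eta, beta)
R = np.eye(eta)[::-1]            # reverse identity R_eta

# R_eta J_eta(beta) is symmetric, hence (G J)^T = G J for G = s R_eta.
G = s * R
assert np.allclose((G @ J).T, G @ J)

# The reverse identity conjugates a Jordan block to its transpose: R J R = J^T.
assert np.allclose(R @ J @ R, J.T)
```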

4 Canonical form for G, Ĝ Hermitian

In this section we investigate the matrix triple (A, G, Ĝ) for the case that both G and Ĝ are Hermitian and nonsingular. We first consider the simpler case that A is square and nonsingular.

Theorem 4.1. Let A ∈ C^{n×n} be nonsingular, and let G, Ĝ ∈ C^{n×n} be Hermitian and nonsingular. Then there exist nonsingular matrices X, Y ∈ C^{n×n} such that

    X* A Y = A_c ⊕ A_r,    X* G X = G_c ⊕ G_r,    Y* Ĝ Y = Ĝ_c ⊕ Ĝ_r,    (4.1)

and for the Ĝ-Hermitian matrix Ĥ = Ĝ^{-1} A* G^{-1} A ∈ C^{n×n} and for the G-Hermitian matrix H = G^{-1} A Ĝ^{-1} A* ∈ C^{n×n} we have that

    Y^{-1} Ĥ Y = Ĥ_c ⊕ Ĥ_r,    X^{-1} H X = H_c ⊕ H_r.    (4.2)

The diagonal blocks in these decompositions have the following forms:

1) blocks associated with pairs (μ_j², μ̄_j²) of nonreal eigenvalues of Ĥ and H:

    A_c = (J_{ξ1}(μ_1) ⊕ J_{ξ1}(μ̄_1)) ⊕ ... ⊕ (J_{ξ_{m_c}}(μ_{m_c}) ⊕ J_{ξ_{m_c}}(μ̄_{m_c})),
    G_c = Ĝ_c = [ 0  R_{ξ1} ;  R_{ξ1}  0 ] ⊕ ... ⊕ [ 0  R_{ξ_{m_c}} ;  R_{ξ_{m_c}}  0 ],
    Ĥ_c = H_c = ((J_{ξ1}(μ_1))² ⊕ (J_{ξ1}(μ̄_1))²) ⊕ ... ⊕ ((J_{ξ_{m_c}}(μ_{m_c}))² ⊕ (J_{ξ_{m_c}}(μ̄_{m_c}))²),

where μ_j ∈ C with arg μ_j ∈ (0, π/2), and ξ_j ∈ N for j = 1, ..., m_c;

2) blocks associated with real eigenvalues α_j of H and Ĥ:

    A_r = J_{η1}(β_1) ⊕ ... ⊕ J_{η_{m_r}}(β_{m_r}),
    G_r = s_1 R_{η1} ⊕ ... ⊕ s_{m_r} R_{η_{m_r}},
    Ĝ_r = ŝ_1 R_{η1} ⊕ ... ⊕ ŝ_{m_r} R_{η_{m_r}},
    Ĥ_r = s_1 ŝ_1 (J_{η1}(β_1))² ⊕ ... ⊕ s_{m_r} ŝ_{m_r} (J_{η_{m_r}}(β_{m_r}))²,
    H_r = s_1 ŝ_1 ((J_{η1}(β_1))^T)² ⊕ ... ⊕ s_{m_r} ŝ_{m_r} ((J_{η_{m_r}}(β_{m_r}))^T)²,

where β_j > 0, s_j, ŝ_j ∈ {+1, −1}, and η_j ∈ N for j = 1, ..., m_r. Thus α_j = β_j² > 0 if s_j = ŝ_j, and α_j = −β_j² < 0 if s_j ≠ ŝ_j.

Moreover, the form (4.1) is unique up to a simultaneous permutation of the blocks in the right-hand sides of (4.1).

Proof. The proof will be performed in two steps.

Step 1) We first show that we may assume without loss of generality that Ĥ either has only one pair of conjugate complex nonreal eigenvalues (λ, λ̄) or only one real eigenvalue α.
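Step 1 rests on the similarity of H and Ĥ, obtained from the intertwining identity G^{-1}A · Ĥ = H · G^{-1}A with A nonsingular. A quick NumPy confirmation (random illustrative data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # generically nonsingular
G = np.diag([1.0, 1.0, -1.0, -1.0])               # Hermitian, nonsingular
Ghat = np.diag([1.0, -1.0, 1.0, -1.0])

H = np.linalg.inv(G) @ A @ np.linalg.inv(Ghat) @ A.T
Hhat = np.linalg.inv(Ghat) @ A.T @ np.linalg.inv(G) @ A

# T = G^{-1} A intertwines Hhat and H, so the two matrices are similar ...
T = np.linalg.inv(G) @ A
assert np.allclose(T @ Hhat, H @ T)

# ... and hence have the same characteristic polynomial.
assert np.allclose(np.poly(H), np.poly(Hhat))
```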

Indeed, in view of Theorem 2.2 there exists a nonsingular matrix Y ∈ C^{n×n} such that

    Y^{-1} Ĥ Y = Ĥ_1 ⊕ Ĥ_2,    Y* Ĝ Y = Ĝ_1 ⊕ Ĝ_2,

where Ĥ_1, Ĝ_1 ∈ C^{p×p}, Ĥ_2, Ĝ_2 ∈ C^{(n−p)×(n−p)}, σ(Ĥ_1) ∩ σ(Ĥ_2) = ∅, and Ĥ_1 either has only one eigenvalue that is real or only two eigenvalues that are conjugate complex. Using G^{-1} A Ĥ = H G^{-1} A and the fact that G^{-1} A is nonsingular, we find that H and Ĥ are similar. Thus there exists a nonsingular matrix X ∈ C^{n×n} such that

    X^{-1} H X = Ĥ_1 ⊕ Ĥ_2,    X* G X = [ G_1  G_{12} ;  G_{12}*  G_2 ].

Here G has been partitioned conformably with H. By assumption, σ(Ĥ_1) = σ(Ĥ_1*), and thus σ(Ĥ_1*) ∩ σ(Ĥ_2) = ∅. Then, using that H is G-Hermitian, i.e.,

    (Ĥ_1* ⊕ Ĥ_2*) [ G_1  G_{12} ;  G_{12}*  G_2 ] = X* H* G X = X* G H X = [ G_1  G_{12} ;  G_{12}*  G_2 ] (Ĥ_1 ⊕ Ĥ_2),

we obtain G_{12} = 0, because the Sylvester equation Ĥ_1* G_{12} − G_{12} Ĥ_2 = 0 only has the trivial solution, given that the spectra of the coefficient matrices Ĥ_1* and Ĥ_2 do not intersect.

Next we show that X* A Y decomposes in the same way as H, Ĥ, G, and Ĝ. To this end, we partition

    (X* A Y)^{-1} = [ A_{11}  A_{12} ;  A_{21}  A_{22} ]

conformably with G and Ĝ. Then, writing Ã := X* A Y and denoting by G, Ĝ, H, Ĥ the transformed (block diagonal) matrices, the identity

    Ĥ Ã^{-1} = Ĝ^{-1} Ã* G^{-1} = (G^{-1} Ã Ĝ^{-1})* = (H Ã^{-*})* = Ã^{-1} H*

implies

    (Ĥ_1 ⊕ Ĥ_2) [ A_{11}  A_{12} ;  A_{21}  A_{22} ] = [ A_{11}  A_{12} ;  A_{21}  A_{22} ] (Ĥ_1* ⊕ Ĥ_2*).

Using once again the fact that a Sylvester equation only has the trivial solution if the spectra of the coefficient matrices do not intersect, we finally obtain that

    (X* A Y)^{-1} = A_{11} ⊕ A_{22},

and thus X* A Y is block diagonal as well. Repeating this argument several times, we see that it remains to study triples (A, G, Ĝ) for which Ĥ has the restricted spectrum as initially stated.

Step 2) By Step 1) we may assume without loss of generality that Ĥ either has only one pair of conjugate complex nonreal eigenvalues (λ, λ̄) or only one real eigenvalue α. We discuss these two cases separately.
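Both cases below rest on the structured principal square root of Theorem 2.8. SciPy's `sqrtm` computes exactly this square root with spectrum in the open right half plane, and, being (for a single Jordan block) a polynomial in the matrix, it inherits the G-structure. An illustrative sketch on one Jordan block (the sizes and values are arbitrary):

```python
import numpy as np
from scipy.linalg import sqrtm

n, alpha = 4, 9.0
J = alpha * np.eye(n) + np.diag(np.ones(n - 1), 1)   # J_n(9), spectrum {9}
R = np.eye(n)[::-1]                                  # G = R_n; R J is symmetric

S = sqrtm(J)                       # principal square root, spectrum {3} in C_+
assert np.allclose(S @ S, J)
assert np.all(np.linalg.eigvals(S).real > 0)

# S is a polynomial in J, hence again G-Hermitian: R S is symmetric.
assert np.allclose((R @ S).T, R @ S)
```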

16 Case 1: σ(ĥ) = {λ λ} for some λ C Im λ > By Theorem 8 Ĥ has a unique Ĝ-Hermitian square root S Cn n satisfying σ(s) C + Then by Theorem there exists a nonsingular matrix Ỹ Cn n such that S CF := Ỹ 1 SỸ = Jξ1 (µ) Jξm (µ) J ξ1 ( µ) G CF := Ỹ ĜỸ = Rξ1 R ξ1 H CF := Ỹ 1 ĤỸ = J ξ1 (µ) J ξ 1 ( µ) J ξm ( µ) R ξm R ξm J ξm (µ) Jξ m ( µ) where µ = λ C arg µ ( π ) and ξ j N for j = 1 m Here the third identity immediately follows from Ĥ = S Since H and Ĥ are similar and since Ĥ has only a pair of conjugate complex nonreal eigenvalues we obtain from Theorem that the canonical forms of the pairs (H G) and (Ĥ Ĝ) coincide In particular this implies the existence of a nonsingular matrix X C n n such that H CF = X 1 H X = G CF = X G X = J ξ1 (µ) Jξ 1 ( µ) Rξ1 R ξ1 J Finally setting X = G 1 X and Y = A 1 G XS CF we obtain ξm (µ) J ξ m ( µ) R ξm R ξm X AY = X 1 G 1 AA 1 G XS CF = S CF X GX = X 1 G 1 GG 1 X = ( X G X) 1 = G 1 CF = G CF Y ĜY = S X CF GA ĜA 1 G XS CF = S CF X G X X 1 H 1 XSCF = S CFG CF (H CF ) 1 S CF = G CF S CF (H CF ) 1 S CF = G CF as desired where we have used that S CF is G CF -Hermitian and that S CF = H CF It is now easy to check that X 1 HX and Y 1 ĤY have the claimed forms Case : σ(ĥ) = {α} for some α R \ {} Observe that sign(α)ĥ has the only positive eigenvalue α Thus we can apply Theorems 8 and which yield the existence of a square root S C n n of sign(α)ĥ and a nonsingular matrix Ỹ Cn n such that S CF := Ỹ 1 SỸ = J η 1 (β) J ηm (β) Ỹ ĜỸ = ŝ 1R η1 ŝ m R ηm H CF := Ỹ 1 ĤỸ = sign(α)j η 1 (β) sign(α)jη m (β) where β = α η j N and ŝ j {+1 1} for j = 1 m Again using that H and Ĥ are similar we obtain from Theorem the existence of a nonsingular matrix X C n n such that H CF = X 1 H X = sign(α)j η 1 (β) sign(α)j η m (β) G CF := X G X = s 1 R η1 s m R ηm 16

for some s_1,…,s_m ∈ {+1, −1}. Setting X = G^{-1}X̃^{-*} and Y = A^{-1}GX̃S_CF, we obtain as in Case 1 that

    X*AY = S_CF,   X*GX = G_CF,   Y*ĜY = G_CF S_CF(H_CF)^{-1}S_CF = sign(α)G_CF.

We mention in passing that it is possible to link the sign characteristic (ŝ_1,…,ŝ_m) to the sign characteristic (s_1,…,s_m), but we refrain from doing so, because the explicit knowledge of the parameters ŝ_1,…,ŝ_m is irrelevant for the development of the canonical form for the triple (A, G, Ĝ). It is now straightforward to check that X^{-1}HX and Y^{-1}ĤY have the claimed forms.

Concerning uniqueness, we note that the form (4.1) is uniquely determined by the canonical form of Ĥ as a Ĝ-Hermitian matrix and the restrictions arg μ_j ∈ (0, π/2) and β > 0.

In the general situation that A is non-square, we have the following result.

Theorem 4.2 Let A ∈ C^{m×n} and let G ∈ C^{m×m} and Ĝ ∈ C^{n×n} be Hermitian and nonsingular. Then there exist nonsingular matrices X ∈ C^{m×m} and Y ∈ C^{n×n} such that

    X*AY = A_nz ⊕ A_z1 ⊕ A_z2 ⊕ A_z3 ⊕ A_z4,
    X*GX = G_nz ⊕ G_z1 ⊕ G_z2 ⊕ G_z3 ⊕ G_z4,
    Y*ĜY = Ĝ_nz ⊕ Ĝ_z1 ⊕ Ĝ_z2 ⊕ Ĝ_z3 ⊕ Ĝ_z4.   (4.3)

Moreover, for the Ĝ-Hermitian matrix Ĥ = Ĝ^{-1}A*G^{-1}A ∈ C^{n×n} and for the G-Hermitian matrix H = G^{-1}AĜ^{-1}A* ∈ C^{m×m} we have that

    Y^{-1}ĤY = Ĥ_nz ⊕ Ĥ_z1 ⊕ Ĥ_z2 ⊕ Ĥ_z3 ⊕ Ĥ_z4,
    X^{-1}HX = H_nz ⊕ H_z1 ⊕ H_z2 ⊕ H_z3 ⊕ H_z4.

The diagonal blocks in these decompositions have the following forms:

0) blocks associated with nonzero eigenvalues of Ĥ and H: A_nz, G_nz, Ĝ_nz have the forms as in (4.1), and Ĥ_nz, H_nz have the forms as in (4.2);

1) one block corresponding to n_0 Jordan blocks of size 1×1 of Ĥ and m_0 Jordan blocks of size 1×1 of H associated with the eigenvalue zero:

    A_z1 = O_{m_0×n_0},   G_z1 = Σ_{π_0,ν_0},   Ĝ_z1 = Σ_{π̂_0,ν̂_0},   Ĥ_z1 = O_{n_0},   H_z1 = O_{m_0},

where m_0, n_0, π_0, ν_0, π̂_0, ν̂_0 ∈ N ∪ {0} and π_0 + ν_0 = m_0, π̂_0 + ν̂_0 = n_0;

2) blocks corresponding to pairs of j×j Jordan blocks of Ĥ and H associated with the eigenvalue zero:

    A_z2 = [γ_1 copies of J_2(0)] ⊕ [γ_2 copies of J_4(0)] ⊕ ⋯ ⊕ [γ_l copies of J_{2l}(0)],
    G_z2 = [γ_1 copies of R_2] ⊕ [γ_2 copies of R_4] ⊕ ⋯ ⊕ [γ_l copies of R_{2l}],
    Ĝ_z2 = [γ_1 copies of R_2] ⊕ [γ_2 copies of R_4] ⊕ ⋯ ⊕ [γ_l copies of R_{2l}],
    Ĥ_z2 = [γ_1 copies of J_2(0)^2] ⊕ [γ_2 copies of J_4(0)^2] ⊕ ⋯ ⊕ [γ_l copies of J_{2l}(0)^2],
    H_z2 = [γ_1 copies of (J_2(0)^2)^T] ⊕ [γ_2 copies of (J_4(0)^2)^T] ⊕ ⋯ ⊕ [γ_l copies of (J_{2l}(0)^2)^T],

where γ_1,…,γ_l ∈ N ∪ {0}; thus Ĥ_z2 and H_z2 each have 2γ_j Jordan blocks of size j×j, where exactly γ_j blocks have sign +1 and γ_j blocks have sign −1, for j = 1,…,l;

3) blocks corresponding to a j×j Jordan block of Ĥ and a (j+1)×(j+1) Jordan block of H associated with the eigenvalue zero:

    A_z3 = [m_1 copies of [I_1; 0] ∈ C^{2×1}] ⊕ ⋯ ⊕ [m_{l-1} copies of [I_{l-1}; 0] ∈ C^{l×(l-1)}],
    G_z3 = ⊕_{j=1}^{l-1} ⊕_{i=1}^{m_j} s_j^{(i)}R_{j+1},
    Ĝ_z3 = ⊕_{j=1}^{l-1} ⊕_{i=1}^{m_j} ŝ_j^{(i)}R_j,
    Ĥ_z3 = ⊕_{j=1}^{l-1} ⊕_{i=1}^{m_j} s_j^{(i)}ŝ_j^{(i)}J_j(0),
    H_z3 = ⊕_{j=1}^{l-1} ⊕_{i=1}^{m_j} s_j^{(i)}ŝ_j^{(i)}J_{j+1}(0)^T,

where m_1,…,m_{l-1} ∈ N ∪ {0}, and for j = 1,…,l−1 we have s_j^{(i)}, ŝ_j^{(i)} ∈ {+1, −1} with s_j^{(i)} = 1 if j is odd and ŝ_j^{(i)} = 1 if j is even; thus Ĥ_z3 has m_j Jordan blocks of size j×j with signs ŝ_j^{(i)} if j is odd and signs s_j^{(i)} if j is even, and H_z3 has m_j Jordan blocks of size (j+1)×(j+1) with signs ŝ_j^{(i)} if j is odd and signs s_j^{(i)} if j is even, for i = 1,…,m_j and j = 1,…,l−1;

4) blocks corresponding to a (j+1)×(j+1) Jordan block of Ĥ and a j×j Jordan block of H associated with the eigenvalue zero:

    A_z4 = [n_1 copies of [I_1 0] ∈ C^{1×2}] ⊕ ⋯ ⊕ [n_{l-1} copies of [I_{l-1} 0] ∈ C^{(l-1)×l}],
    G_z4 = ⊕_{j=1}^{l-1} ⊕_{i=1}^{n_j} s_j^{(i)}R_j,
    Ĝ_z4 = ⊕_{j=1}^{l-1} ⊕_{i=1}^{n_j} ŝ_j^{(i)}R_{j+1},
    Ĥ_z4 = ⊕_{j=1}^{l-1} ⊕_{i=1}^{n_j} s_j^{(i)}ŝ_j^{(i)}J_{j+1}(0),
    H_z4 = ⊕_{j=1}^{l-1} ⊕_{i=1}^{n_j} s_j^{(i)}ŝ_j^{(i)}J_j(0)^T,

where n_1,…,n_{l-1} ∈ N ∪ {0}, and for j = 1,…,l−1 we have s_j^{(i)}, ŝ_j^{(i)} ∈ {+1, −1} with ŝ_j^{(i)} = 1 if j is odd and s_j^{(i)} = 1 if j is even; thus Ĥ_z4 has n_j Jordan blocks of size (j+1)×(j+1) with signs s_j^{(i)} if j is odd and signs ŝ_j^{(i)} if j is even, and H_z4 has n_j Jordan blocks of size j×j with signs s_j^{(i)} if j is odd and signs ŝ_j^{(i)} if j is even, for i = 1,…,n_j and j = 1,…,l−1.

For the eigenvalue zero, the matrices Ĥ and H have 2γ_j + m_j + n_{j-1}, respectively 2γ_j + m_{j-1} + n_j, Jordan blocks of size j×j for j = 1,…,l, where m_l = n_l = 0 and where l is the maximum of the index of Ĥ and the index of H. (Here index refers to the maximal size of a Jordan block associated with the eigenvalue zero.)

Furthermore, the form (4.3) is unique up to simultaneous block permutation of the blocks in the block diagonal of the right hand side of (4.3).

Proof. Due to its very technical nature, the proof is omitted here and presented in the Appendix.

Since the canonical form of Theorem 4.2 is quite complicated, we present some examples to illustrate this form.

Example 4.3 Let the triple (A, G, Ĝ) be given by

    A = ,   G = ,   Ĝ = .

Then the canonical form consists of one block of type 2) with j = 2, one block of type 3) with j = 1 and sign ŝ_1^{(1)} = −1, and two blocks of type 4), one with j = 1 and sign s_1^{(1)} = +1 and one with j = 2 and sign ŝ_2^{(1)} = −1. Observe that the signs only occur in the blocks of G or Ĝ, respectively, that have odd size. The signs attached to the corresponding even-sized blocks are always +1. Thus, for example, the signs corresponding to blocks of type 4) will always be found in G if j is odd, and they can be read off Ĝ if j is even.

Example 4.4 It is important to note that rectangular matrices with a total number of zero rows or columns are allowed in the canonical form. For example, consider the two nonequivalent triples

    A_1 = [1 0],   G_1 = [−1],   Ĝ_1 = [0 1; 1 0]

and

    A_2 = [1 0],   G_2 = [1],   Ĝ_2 = [1 0; 0 −1].

The first example is just one block of type 4) with sign s_1^{(1)} = −1. Indeed, forming the products

    Ĥ_1 = Ĝ_1^{-1}A_1*G_1^{-1}A_1 = [0 0; −1 0],   H_1 = G_1^{-1}A_1Ĝ_1^{-1}A_1* = [0],

we see that, as predicted, Ĥ_1 has only one Jordan block of size 2 associated with the eigenvalue λ = 0 and the sign s = −1, while H_1 has one Jordan block of size 1 associated with λ = 0 and the sign s = −1. The situation is different in the second case. Here we obtain

    Ĥ_2 = Ĝ_2^{-1}A_2*G_2^{-1}A_2 = [1 0; 0 0],   H_2 = G_2^{-1}A_2Ĝ_2^{-1}A_2* = [1],

i.e., Ĥ_2 has two Jordan blocks of size 1, one associated with λ = 0 and sign s_1 = −1 and a second one associated with λ = 1 and sign s_2 = +1, while H_2 has one Jordan block of size

1 associated with λ = 1 and sign s = +1. Here the triple (A_2, G_2, Ĝ_2) is in canonical form, consisting of one block of type 1) and size 1 and of one block of type 0):

    A_2 = [1 0],   G_2 = [1],   Ĝ_2 = [1 0; 0 −1].

We have the following real versions of Theorem 4.1 and Theorem 4.2.

Theorem 4.5 Let A ∈ R^{n×n} be nonsingular and let G, Ĝ ∈ R^{n×n} be symmetric and nonsingular. Then there exist nonsingular matrices X, Y ∈ R^{n×n} such that

    X^TAY = A_c ⊕ A_r,   X^TGX = G_c ⊕ G_r,   Y^TĜY = Ĝ_c ⊕ Ĝ_r.   (4.4)

Moreover, for the Ĝ-symmetric matrix Ĥ = Ĝ^{-1}A^TG^{-1}A and for the G-symmetric matrix H = G^{-1}AĜ^{-1}A^T we have that

    Y^{-1}ĤY = Ĥ_c ⊕ Ĥ_r,   X^{-1}HX = H_c ⊕ H_r.   (4.5)

The diagonal blocks in these decompositions have the following forms:

1) blocks associated with pairs (μ_j, μ̄_j) of nonreal eigenvalues of Ĥ and H:

    A_c = J_{ξ_1}(a_1, b_1) ⊕ ⋯ ⊕ J_{ξ_{m_c}}(a_{m_c}, b_{m_c}),
    G_c = R_{2ξ_1} ⊕ ⋯ ⊕ R_{2ξ_{m_c}},
    Ĝ_c = R_{2ξ_1} ⊕ ⋯ ⊕ R_{2ξ_{m_c}},
    Ĥ_c = J_{ξ_1}(a_1, b_1)^2 ⊕ ⋯ ⊕ J_{ξ_{m_c}}(a_{m_c}, b_{m_c})^2,
    H_c = (J_{ξ_1}(a_1, b_1)^2)^T ⊕ ⋯ ⊕ (J_{ξ_{m_c}}(a_{m_c}, b_{m_c})^2)^T,

where a_j, b_j > 0, μ_j = (a_j + ıb_j)^2, and ξ_j ∈ N for j = 1,…,m_c;

2) blocks associated with real eigenvalues α_j of H and Ĥ:

    A_r = J_{η_1}(β_1) ⊕ ⋯ ⊕ J_{η_{m_r}}(β_{m_r}),
    G_r = s_1R_{η_1} ⊕ ⋯ ⊕ s_{m_r}R_{η_{m_r}},
    Ĝ_r = ŝ_1R_{η_1} ⊕ ⋯ ⊕ ŝ_{m_r}R_{η_{m_r}},
    Ĥ_r = s_1ŝ_1J_{η_1}(β_1)^2 ⊕ ⋯ ⊕ s_{m_r}ŝ_{m_r}J_{η_{m_r}}(β_{m_r})^2,
    H_r = s_1ŝ_1(J_{η_1}(β_1)^2)^T ⊕ ⋯ ⊕ s_{m_r}ŝ_{m_r}(J_{η_{m_r}}(β_{m_r})^2)^T,

where β_j > 0, s_j, ŝ_j ∈ {+1, −1}, and η_j ∈ N for j = 1,…,m_r. Thus α_j = β_j^2 > 0 if s_j = ŝ_j and α_j = −β_j^2 < 0 if s_j ≠ ŝ_j.

Furthermore, the form (4.4) is unique up to the simultaneous permutation of blocks in the right hand side of (4.4).

Proof. The proof follows exactly the same lines as the proof of Theorem 4.1. (The key point here is that the square roots that are constructed analogously to the proof of Theorem 4.1 are real, see Theorem 8.)
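The zero-eigenvalue bookkeeping in these canonical forms reflects a general fact: for a rectangular A, products of the kind appearing here share their nonzero spectrum, while the eigenvalue zero can occur with different multiplicities in the two products — the phenomenon behind Example 4.4. A toy check with G and Ĝ taken as identities (an illustration, not the paper's example):

```python
import numpy as np

# With G = I_1 and Ĝ = I_2 the two structured products reduce to
# H = A A^T (1x1) and Ĥ = A^T A (2x2) for a 1x2 matrix A.
A = np.array([[1.0, 0.0]])
H = A @ A.T        # spectrum {1}
H_hat = A.T @ A    # spectrum {1, 0}: an extra eigenvalue 0 appears

print(np.linalg.eigvals(H))               # [1.]
print(np.sort(np.linalg.eigvals(H_hat)))  # [0. 1.]
```

The nonzero eigenvalue 1 is common to both products; only the multiplicity of zero differs, exactly as the type-1) and type-4) blocks allow.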

Theorem 4.6 Let A ∈ R^{m×n} and let G ∈ R^{m×m} and Ĝ ∈ R^{n×n} be symmetric and nonsingular. Then there exist nonsingular matrices X ∈ R^{m×m} and Y ∈ R^{n×n} such that

    X^TAY = A_nz ⊕ A_z1 ⊕ A_z2 ⊕ A_z3 ⊕ A_z4,
    X^TGX = G_nz ⊕ G_z1 ⊕ G_z2 ⊕ G_z3 ⊕ G_z4,
    Y^TĜY = Ĝ_nz ⊕ Ĝ_z1 ⊕ Ĝ_z2 ⊕ Ĝ_z3 ⊕ Ĝ_z4.   (4.6)

Moreover, for the Ĝ-symmetric matrix Ĥ = Ĝ^{-1}A^TG^{-1}A ∈ R^{n×n} and for the G-symmetric matrix H = G^{-1}AĜ^{-1}A^T ∈ R^{m×m} we have that

    Y^{-1}ĤY = Ĥ_nz ⊕ Ĥ_z1 ⊕ Ĥ_z2 ⊕ Ĥ_z3 ⊕ Ĥ_z4,
    X^{-1}HX = H_nz ⊕ H_z1 ⊕ H_z2 ⊕ H_z3 ⊕ H_z4.

Here the blocks A_nz, G_nz, Ĝ_nz, Ĥ_nz, and H_nz have the forms as in (4.4) and (4.5), while A_zk, G_zk, Ĝ_zk, Ĥ_zk, and H_zk have the forms as in Theorem 4.2 for k = 1,…,4. Moreover, the form (4.6) is unique up to the simultaneous permutation of blocks in the right hand side of (4.6).

Proof. The proof follows exactly the same lines as the proof of Theorem 4.2. Indeed, the proof (as presented in the Appendix) makes use of the decompositions as presented in Proposition 3.1, which has a real version as well.

In the particular case that one of the Hermitian matrices is positive definite (say G), we obtain the following special case of Theorem 4.2 and Theorem 4.6, which can be interpreted as a generalization of both the Schur form for a Hermitian matrix and the standard singular value decomposition.

Corollary 4.7 Let A ∈ F^{m×n}, let G ∈ F^{m×m} be Hermitian and positive definite, and let Ĝ ∈ F^{n×n} be Hermitian and nonsingular. Then there exist nonsingular matrices X ∈ F^{m×m} and Y ∈ F^{n×n} such that

    X*AY = diag(β_1,…,β_{m_r}) ⊕ O_{m_0×n_0} ⊕ [O_{n_1} I_{n_1}],
    X*GX = I_m,
    Y*ĜY = diag(ŝ_1,…,ŝ_{m_r}) ⊕ Σ_{π̂_0,ν̂_0} ⊕ [0 I_{n_1}; I_{n_1} 0],

where n_0 = π̂_0 + ν̂_0 and β_j > 0, ŝ_j ∈ {−1, 1} for j = 1,…,m_r. Moreover,

    Y^{-1}Ĝ^{-1}A*G^{-1}AY = diag(ŝ_1β_1^2,…,ŝ_{m_r}β_{m_r}^2) ⊕ O_{n_0} ⊕ [O_{n_1} I_{n_1}; O_{n_1} O_{n_1}],
    X^{-1}G^{-1}AĜ^{-1}A*X = diag(ŝ_1β_1^2,…,ŝ_{m_r}β_{m_r}^2) ⊕ O_{m_0+n_1}.
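When both G and Ĝ are identities, Corollary 4.7 collapses to the ordinary SVD: the β_j are the singular values of A, all signs ŝ_j are +1, and the eigenvalues of Ĝ^{-1}A*G^{-1}A = A*A are the squared singular values. A quick numerical sanity check (an illustration, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# G = I_4, Ĝ = I_3: the structured product is simply A^T A, and its
# eigenvalues must be the squares of the singular values of A.
sv = np.linalg.svd(A, compute_uv=False)          # beta_1 >= beta_2 >= ...
ev = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # descending order
print(np.allclose(ev, sv**2))  # True
```

For an indefinite Ĝ the same diagonal acquires the signs ŝ_j, which is exactly what distinguishes the corollary from the classical SVD.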

Proof. Because G is positive definite, due to the inertia index relation in the canonical form of Theorem 4.2, G_c as well as A_c, Ĝ_c must be void. Furthermore, η_1 = ⋯ = η_{m_r} = 1 and s_1 = ⋯ = s_{m_r} = 1. Concerning the blocks A_zk, G_zk, Ĝ_zk, the blocks for k = 1 may exist, but G_z1 has to be the identity matrix I_{m_0}; the blocks for k = 2 and k = 3 must be void, and the blocks for k = 4 may only exist when j = 1. In this case G_z4 has to be I_{n_1}, and applying an appropriate permutation we can achieve the forms

    A_z4 = [O_{n_1} I_{n_1}],   G_z4 = I_{n_1},   Ĝ_z4 = [0 I_{n_1}; I_{n_1} 0].

The proof for the real case is analogous.

Remark 4.8 It should be noted that when G = I_m, then X is unitary and Corollary 4.7 gives the Schur form of the Hermitian matrix product AĜ^{-1}A*. Also, it simultaneously displays the Jordan form of Ĝ^{-1}A*A. One should observe here the difference in the eigenstructures of AĜ^{-1}A* and Ĝ^{-1}A*A corresponding to the eigenvalue λ = 0. (Indeed, it is well known that two matrix products AB and BA have identical nonzero eigenvalues, including identical algebraic, geometric, and partial multiplicities, but the Jordan structure for the eigenvalue λ = 0 may be different for the two matrices, see [5].) If G = I_m and Ĝ = I_n, then also Y is unitary and Corollary 4.7 becomes the standard singular value decomposition.

5 Canonical form for G symmetric and Ĝ skew-symmetric

In this section we determine the canonical form for the case that G is symmetric and Ĝ is skew-symmetric. We only consider the real case, because the corresponding complex case (i.e., G being Hermitian and Ĝ being skew-Hermitian) can easily be derived from the canonical form in Theorem 4.2 by simply multiplying Ĝ with ı. For the real case the situation is different, and the canonical form becomes more complicated. Again we start with the result for the case that A is square and nonsingular.

Theorem 5.1 Let A ∈ R^{n×n} be nonsingular, let G ∈ R^{n×n} be symmetric and nonsingular, and let Ĝ ∈ R^{n×n} be skew-symmetric and nonsingular. Then there exist nonsingular matrices X, Y ∈ R^{n×n} such that

    X^TAY = A_c ⊕ A_r ⊕ A_ı,   X^TGX = G_c ⊕ G_r ⊕ G_ı,   Y^TĜY = Ĝ_c ⊕ Ĝ_r ⊕ Ĝ_ı.   (5.1)

Moreover, for the Ĝ-Hamiltonian matrix Ĥ = Ĝ^{-1}A^TG^{-1}A and for the G-skew-symmetric matrix H = G^{-1}AĜ^{-1}A^T we have that

    Y^{-1}ĤY = Ĥ_c ⊕ Ĥ_r ⊕ Ĥ_ı,   X^{-1}HX = H_c ⊕ H_r ⊕ H_ı.   (5.2)

The diagonal blocks in these decompositions have the following forms:

1) blocks associated with quadruples ((a_j ± ıb_j)^2, −(a_j ± ıb_j)^2) of nonreal and non-purely

imaginary eigenvalues of Ĥ and H:

    A_c = ⊕_{j=1}^{m_c} [J_{ξ_j}(a_j,b_j) 0; 0 J_{ξ_j}(a_j,b_j)],
    G_c = ⊕_{j=1}^{m_c} [0 R_{2ξ_j}; R_{2ξ_j} 0],
    Ĝ_c = ⊕_{j=1}^{m_c} [0 −R_{2ξ_j}; R_{2ξ_j} 0],
    Ĥ_c = ⊕_{j=1}^{m_c} [J_{ξ_j}(a_j,b_j)^2 0; 0 −J_{ξ_j}(a_j,b_j)^2],
    H_c = ⊕_{j=1}^{m_c} [−(J_{ξ_j}(a_j,b_j)^2)^T 0; 0 (J_{ξ_j}(a_j,b_j)^2)^T],

where a_j > b_j > 0 and ξ_j ∈ N for j = 1,…,m_c;

2) blocks associated with pairs (α_j^2, −α_j^2) of real eigenvalues of H and Ĥ:

    A_r = ⊕_{j=1}^{m_r} [J_{η_j}(α_j) 0; 0 J_{η_j}(α_j)],
    G_r = ⊕_{j=1}^{m_r} [0 R_{η_j}; R_{η_j} 0],
    Ĝ_r = ⊕_{j=1}^{m_r} [0 −R_{η_j}; R_{η_j} 0],
    Ĥ_r = ⊕_{j=1}^{m_r} [J_{η_j}(α_j)^2 0; 0 −J_{η_j}(α_j)^2],
    H_r = ⊕_{j=1}^{m_r} [−(J_{η_j}(α_j)^2)^T 0; 0 (J_{η_j}(α_j)^2)^T],

where α_j > 0 and η_j ∈ N for j = 1,…,m_r;

3) blocks associated with pairs (ıβ_j^2, −ıβ_j^2) of purely imaginary eigenvalues of H and Ĥ:

    A_ı = ⊕_{j=1}^{m_ı} [J_{ρ_j}(β_j) 0; 0 J_{ρ_j}(β_j)],
    G_ı = ⊕_{j=1}^{m_ı} s_j [R_{ρ_j} 0; 0 R_{ρ_j}],
    Ĝ_ı = ⊕_{j=1}^{m_ı} s_j [0 −R_{ρ_j}; R_{ρ_j} 0],
    Ĥ_ı = ⊕_{j=1}^{m_ı} [0 J_{ρ_j}(β_j)^2; −J_{ρ_j}(β_j)^2 0],
    H_ı = ⊕_{j=1}^{m_ı} [0 (J_{ρ_j}(β_j)^2)^T; −(J_{ρ_j}(β_j)^2)^T 0],

where β_j > 0, s_j ∈ {+1, −1}, and ρ_j ∈ N for j = 1,…,m_ı.

Furthermore, the form (5.1) is unique up to the simultaneous permutation of blocks in the right hand side of (5.1).
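The pairing of blocks in Theorem 5.1 mirrors the spectral symmetry of a real Ĝ-Hamiltonian matrix: its spectrum satisfies σ(Ĥ) = −σ(Ĥ) and, being real, is also closed under conjugation, so eigenvalues occur in real pairs, purely imaginary pairs, or quadruples. This can be observed numerically for a random triple (an illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
G = M @ M.T + np.eye(n)  # symmetric and nonsingular (here even positive definite)
G_hat = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.eye(2), np.zeros((2, 2))]])  # skew-symmetric, nonsingular

# Ĥ = Ĝ^{-1} A^T G^{-1} A is Ĝ-Hamiltonian, so for every eigenvalue lam
# of Ĥ, -lam is an eigenvalue as well (up to rounding).
H_hat = np.linalg.solve(G_hat, A.T) @ np.linalg.solve(G, A)
ev = np.linalg.eigvals(H_hat)
paired = all(np.min(np.abs(ev + lam)) < 1e-8 for lam in ev)
print(paired)  # True
```

The check matches each computed eigenvalue λ against the spectrum of −Ĥ rather than comparing sorted arrays, which is more robust for complex eigenvalues.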

Proof. Analogous to the proof of Theorem 4.1, it can be shown that without loss of generality we may assume that σ(Ĥ) = {λ, λ̄, −λ, −λ̄}, where λ ∈ C \ {0}. We then distinguish the three different cases λ^2 > 0, λ^2 < 0, and λ^2 ∉ R. The proof then proceeds similarly to the proof of Theorem 4.1, but instead of constructing a square root of Ĥ, a square root of a related Ĝ-skew-Hamiltonian matrix S̃ will be considered. The proof for the cases λ^2 > 0 and λ^2 ∉ R follows exactly the same lines as the proof of Theorem 5.1 in [18] and will not be reproduced here. The proof for the remaining case differs slightly and will therefore be presented here in full detail.

Thus, assume without loss of generality that σ(Ĥ) = {ıλ, −ıλ}, where λ > 0. By Theorem 6 there exists a nonsingular matrix W ∈ R^{n×n} such that

    W^{-1}ĤW = [0 N_λ; −N_λ 0],   W^TĜW = [0 D; −D 0],

where N_λ = J_{ρ_1}(λ) ⊕ ⋯ ⊕ J_{ρ_m}(λ), D = ŝ_1R_{ρ_1} ⊕ ⋯ ⊕ ŝ_mR_{ρ_m}, and ρ_j ∈ N, ŝ_j ∈ {+1, −1} for j = 1,…,m. Next we define the matrix S̃ to be such that

    W^{-1}S̃W = [N_λ 0; 0 N_λ].

Then S̃ is Ĝ-skew-Hamiltonian and satisfies σ(S̃) ⊂ R_+. Thus, applying Theorem 9, we obtain that S̃ has a unique square root S that satisfies σ(S) ⊂ R_+ and that is a polynomial in S̃. Consequently, with S̃ also its square root S is Ĝ-skew-Hamiltonian. Let β = √λ. Then Theorem 7 implies the existence of a nonsingular matrix Ỹ ∈ R^{n×n} such that

    S_CF := Ỹ^{-1}SỸ = [N_β 0; 0 N_β],   Ỹ^TĜỸ = [0 R̃; −R̃ 0],

where N_β = J_{ρ_1}(β) ⊕ ⋯ ⊕ J_{ρ_m}(β) and R̃ = R_{ρ_1} ⊕ ⋯ ⊕ R_{ρ_m}. (Note that the Jordan structure of S follows from the fact that S is a polynomial in S̃.) Moreover, using G^{-1}AĤ = HG^{-1}A and the fact that G^{-1}A is nonsingular, we find that Ĥ and H are similar. Thus, by Theorem 5, we find that there is a nonsingular matrix X̃_1 such that

    X̃_1^{-1}HX̃_1 = [0 N_λ; −N_λ 0],   X̃_1^TGX̃_1 = [C 0; 0 C],   C = s_1R_{ρ_1} ⊕ ⋯ ⊕ s_mR_{ρ_m},

where s_1,…,s_m ∈ {+1, −1}. On the other hand, since β^2 = λ, the matrix

    [0 N_β^2; −N_β^2 0]

is similar to X̃_1^{-1}HX̃_1. It is also obvious that this matrix is X̃_1^TGX̃_1-skew-symmetric. Again by Theorem 5, there exists a nonsingular matrix X̃_2 such that, setting X̃ = X̃_1X̃_2, we have

that

    H_CF := X̃^{-1}HX̃ = [0 N_β^2; −N_β^2 0],   G_CF := X̃^TGX̃ = [C' 0; 0 C'],   C' = s'_1R_{ρ_1} ⊕ ⋯ ⊕ s'_mR_{ρ_m},

for some s'_1,…,s'_m ∈ {+1, −1}. (It is actually possible to show that s_j = s'_j for j = 1,…,m, but we refrain from doing so, as it is not necessary for the proof.) Observe that S_CF is G_CF-symmetric and satisfies

    S_CF(H_CF)^{-1}S_CF = [0 −I; I 0],

where I denotes the identity matrix of size ρ_1 + ⋯ + ρ_m. Using this identity and setting X = G^{-1}X̃^{-T} and Y = A^{-1}GX̃S_CF, we obtain

    X^TAY = X̃^{-1}G^{-1}A A^{-1}G X̃S_CF = S_CF,
    X^TGX = X̃^{-1}G^{-1}GG^{-1}X̃^{-T} = (X̃^TGX̃)^{-1} = (G_CF)^{-1} = G_CF,
    Y^TĜY = S_CF^T X̃^TGA^{-T}ĜA^{-1}GX̃ S_CF = S_CF^T X̃^TGX̃ X̃^{-1}H^{-1}X̃ S_CF = S_CF^T G_CF(H_CF)^{-1}S_CF
          = G_CF S_CF(H_CF)^{-1}S_CF = [0 −C'; C' 0].

It is now straightforward to check that Y^{-1}ĤY and X^{-1}HX have the claimed forms. Concerning uniqueness, we note that the form (5.1) is already uniquely determined by the Jordan structure, the sign characteristic of H, and the restrictions on the parameters.

For the general non-square case we have the following result.

Theorem 5.2 Let A ∈ R^{m×n}, let G ∈ R^{m×m} be symmetric and nonsingular, and let Ĝ ∈ R^{n×n} be skew-symmetric and nonsingular. Then there exist nonsingular matrices X ∈ R^{m×m} and Y ∈ R^{n×n} such that

    X^TAY = A_nz ⊕ A_z1 ⊕ A_z2 ⊕ A_z3 ⊕ A_z4 ⊕ A_z5 ⊕ A_z6,
    X^TGX = G_nz ⊕ G_z1 ⊕ G_z2 ⊕ G_z3 ⊕ G_z4 ⊕ G_z5 ⊕ G_z6,
    Y^TĜY = Ĝ_nz ⊕ Ĝ_z1 ⊕ Ĝ_z2 ⊕ Ĝ_z3 ⊕ Ĝ_z4 ⊕ Ĝ_z5 ⊕ Ĝ_z6.   (5.3)

Moreover, for the Ĝ-Hamiltonian matrix Ĥ = Ĝ^{-1}A^TG^{-1}A ∈ R^{n×n} and for the G-skew-symmetric matrix H = G^{-1}AĜ^{-1}A^T ∈ R^{m×m} we have that

    Y^{-1}ĤY = Ĥ_nz ⊕ Ĥ_z1 ⊕ Ĥ_z2 ⊕ Ĥ_z3 ⊕ Ĥ_z4 ⊕ Ĥ_z5 ⊕ Ĥ_z6,
    X^{-1}HX = H_nz ⊕ H_z1 ⊕ H_z2 ⊕ H_z3 ⊕ H_z4 ⊕ H_z5 ⊕ H_z6.

The diagonal blocks in these decompositions have the following forms:

0) blocks associated with nonzero eigenvalues of Ĥ and H: A_nz, G_nz, Ĝ_nz have the forms as in (5.1) and Ĥ_nz, H_nz have the forms as in (5.2);

1) one block corresponding to n_0 Jordan blocks of size 1×1 of Ĥ and m_0 Jordan blocks of size 1×1 of H associated with the eigenvalue zero:

    A_z1 = O_{m_0×n_0},   G_z1 = Σ_{π_0,ν_0},   Ĝ_z1 = J_{n_0},   Ĥ_z1 = O_{n_0},   H_z1 = O_{m_0},

where m_0, n_0, π_0, ν_0 ∈ N ∪ {0} and m_0 = π_0 + ν_0;

2) blocks corresponding to pairs of j×j Jordan blocks of H and Ĥ associated with the eigenvalue zero:

    A_z2 = [γ_1 copies of J_2(0)] ⊕ [γ_2 copies of J_4(0)] ⊕ ⋯ ⊕ [γ_{2l+1} copies of J_{4l+2}(0)],
    G_z2 = [γ_1 copies of R_2] ⊕ [γ_2 copies of R_4] ⊕ ⋯ ⊕ [γ_{2l+1} copies of R_{4l+2}],
    Ĝ_z2 = [γ_1 copies of [0 R_1; −R_1 0]] ⊕ [γ_2 copies of [0 R_2; −R_2 0]] ⊕ ⋯ ⊕ [γ_{2l+1} copies of [0 R_{2l+1}; −R_{2l+1} 0]],
    Ĥ_z2 = [γ_1 copies of (−Σ_{1,1})J_2(0)^2] ⊕ [γ_2 copies of (−Σ_{2,2})J_4(0)^2] ⊕ ⋯ ⊕ [γ_{2l+1} copies of (−Σ_{2l+1,2l+1})J_{4l+2}(0)^2],
    H_z2 = [γ_1 copies of Σ_{2,0}(J_2(0)^2)^T] ⊕ [γ_2 copies of Σ_{3,1}(J_4(0)^2)^T] ⊕ ⋯ ⊕ [γ_{2l+1} copies of Σ_{2l+2,2l}(J_{4l+2}(0)^2)^T],

where γ_1,…,γ_{2l+1} ∈ N ∪ {0}; thus Ĥ_z2 and H_z2 each have 2γ_j Jordan blocks of size j×j, for j = 1,…,2l+1; moreover, if j is odd then exactly γ_j Jordan blocks of H_z2 of size j×j have sign +1 and exactly γ_j blocks have sign −1 (even-sized Jordan blocks associated with zero of G-skew-symmetric matrices do not have signs), and if j is even then exactly γ_j Jordan blocks of Ĥ_z2 of size j×j have sign +1 and exactly γ_j blocks have sign −1 (odd-sized Jordan blocks associated with zero of Ĝ-Hamiltonian matrices do not have signs);

3) blocks corresponding to a j×j Jordan block of Ĥ and a (j+1)×(j+1) Jordan block of H associated with the eigenvalue zero:

    A_z3 = [m_2 copies of [I_2; 0] ∈ R^{3×2}] ⊕ ⋯ ⊕ [m_{2l} copies of [I_{2l}; 0] ∈ R^{(2l+1)×2l}],
    G_z3 = ⊕_{j=2,4,…,2l} ⊕_{i=1}^{m_j} s_j^{(i)}R_{j+1},
    Ĝ_z3 = ⊕_{j=2,4,…,2l} [m_j copies of [0 R_{j/2}; −R_{j/2} 0]],
    Ĥ_z3 = ⊕_{j=2,4,…,2l} ⊕_{i=1}^{m_j} (−s_j^{(i)}Σ_{j/2,j/2})J_j(0),
    H_z3 = ⊕_{j=2,4,…,2l} ⊕_{i=1}^{m_j} s_j^{(i)}Σ_{j/2+1,j/2}J_{j+1}(0)^T,

where m_2, m_4, …, m_{2l} ∈ N ∪ {0}; thus Ĥ_z3 has m_j Jordan blocks of size j×j with signs s_j^{(i)}, and H_z3 has m_j Jordan blocks of size (j+1)×(j+1) with signs s_j^{(i)}, for i = 1,…,m_j and j = 2, 4,…,2l;


More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

Review of Matrices and Block Structures

Review of Matrices and Block Structures CHAPTER 2 Review of Matrices and Block Structures Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Algebra Exam Syllabus

Algebra Exam Syllabus Algebra Exam Syllabus The Algebra comprehensive exam covers four broad areas of algebra: (1) Groups; (2) Rings; (3) Modules; and (4) Linear Algebra. These topics are all covered in the first semester graduate

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Port-Hamiltonian Realizations of Linear Time Invariant Systems

Port-Hamiltonian Realizations of Linear Time Invariant Systems Port-Hamiltonian Realizations of Linear Time Invariant Systems Christopher Beattie and Volker Mehrmann and Hongguo Xu December 16 215 Abstract The question when a general linear time invariant control

More information

MATH36001 Generalized Inverses and the SVD 2015

MATH36001 Generalized Inverses and the SVD 2015 MATH36001 Generalized Inverses and the SVD 201 1 Generalized Inverses of Matrices A matrix has an inverse only if it is square and nonsingular. However there are theoretical and practical applications

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

More information

ARTICLE IN PRESS. Linear Algebra and its Applications xx (2002) xxx xxx

ARTICLE IN PRESS. Linear Algebra and its Applications xx (2002) xxx xxx Linear Algebra and its Applications xx (2002) xxx xxx www.elsevier.com/locate/laa On doubly structured matrices and pencils that arise in linear response theory Christian Mehl a Volker Mehrmann a 1 Hongguo

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Review of Vectors and Matrices

Review of Vectors and Matrices A P P E N D I X D Review of Vectors and Matrices D. VECTORS D.. Definition of a Vector Let p, p, Á, p n be any n real numbers and P an ordered set of these real numbers that is, P = p, p, Á, p n Then P

More information

IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents

IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS NEEL PATEL Abstract. The goal of this paper is to study the irreducible representations of semisimple Lie algebras. We will begin by considering two

More information

Chapter One. Introduction

Chapter One. Introduction Chapter One Introduction Besides the introduction, front matter, back matter, and Appendix (Chapter 15), the book consists of two parts. The first part comprises Chapters 2 7. Here, fundamental properties

More information

A Numerical Algorithm for Block-Diagonal Decomposition of Matrix -Algebras, Part II: General Algorithm

A Numerical Algorithm for Block-Diagonal Decomposition of Matrix -Algebras, Part II: General Algorithm A Numerical Algorithm for Block-Diagonal Decomposition of Matrix -Algebras, Part II: General Algorithm Takanori Maehara and Kazuo Murota May 2008 / May 2009 Abstract An algorithm is proposed for finding

More information

Lecture notes: Applied linear algebra Part 2. Version 1

Lecture notes: Applied linear algebra Part 2. Version 1 Lecture notes: Applied linear algebra Part 2. Version 1 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 First, some exercises: xercise 0.1 (2 Points) Another least

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Theorem A.1. If A is any nonzero m x n matrix, then A is equivalent to a partitioned matrix of the form. k k n-k. m-k k m-k n-k

Theorem A.1. If A is any nonzero m x n matrix, then A is equivalent to a partitioned matrix of the form. k k n-k. m-k k m-k n-k I. REVIEW OF LINEAR ALGEBRA A. Equivalence Definition A1. If A and B are two m x n matrices, then A is equivalent to B if we can obtain B from A by a finite sequence of elementary row or elementary column

More information

MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006

MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006 MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006 Sherod Eubanks HOMEWORK 2 2.1 : 2, 5, 9, 12 2.3 : 3, 6 2.4 : 2, 4, 5, 9, 11 Section 2.1: Unitary Matrices Problem 2 If λ σ(u) and U M n is unitary, show that

More information

Appendix C Vector and matrix algebra

Appendix C Vector and matrix algebra Appendix C Vector and matrix algebra Concepts Scalars Vectors, rows and columns, matrices Adding and subtracting vectors and matrices Multiplying them by scalars Products of vectors and matrices, scalar

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J Hefferon E-mail: kapitula@mathunmedu Prof Kapitula, Spring

More information

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein

Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,

More information

L(C G (x) 0 ) c g (x). Proof. Recall C G (x) = {g G xgx 1 = g} and c g (x) = {X g Ad xx = X}. In general, it is obvious that

L(C G (x) 0 ) c g (x). Proof. Recall C G (x) = {g G xgx 1 = g} and c g (x) = {X g Ad xx = X}. In general, it is obvious that ALGEBRAIC GROUPS 61 5. Root systems and semisimple Lie algebras 5.1. Characteristic 0 theory. Assume in this subsection that chark = 0. Let me recall a couple of definitions made earlier: G is called reductive

More information

Optimization of Quadratic Functions

Optimization of Quadratic Functions 36 CHAPER 4 Optimization of Quadratic Functions In this chapter we study the problem (4) minimize x R n x Hx + g x + β, where H R n n is symmetric, g R n, and β R. It has already been observed that we

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form. Jay Daigle Advised by Stephan Garcia

Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form. Jay Daigle Advised by Stephan Garcia Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form Jay Daigle Advised by Stephan Garcia April 4, 2008 2 Contents Introduction 5 2 Technical Background 9 3

More information

Extending Results from Orthogonal Matrices to the Class of P -orthogonal Matrices

Extending Results from Orthogonal Matrices to the Class of P -orthogonal Matrices Extending Results from Orthogonal Matrices to the Class of P -orthogonal Matrices João R. Cardoso Instituto Superior de Engenharia de Coimbra Quinta da Nora 3040-228 Coimbra Portugal jocar@isec.pt F. Silva

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form Chapter 5. Linear Algebra A linear (algebraic) equation in n unknowns, x 1, x 2,..., x n, is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b where a 1, a 2,..., a n and b are real numbers. 1

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue

More information

Low Rank Perturbations of Quaternion Matrices

Low Rank Perturbations of Quaternion Matrices Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 38 2017 Low Rank Perturbations of Quaternion Matrices Christian Mehl TU Berlin, mehl@mathtu-berlinde Andre CM Ran Vrije Universiteit

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Trimmed linearizations for structured matrix polynomials

Trimmed linearizations for structured matrix polynomials Trimmed linearizations for structured matrix polynomials Ralph Byers Volker Mehrmann Hongguo Xu January 5 28 Dedicated to Richard S Varga on the occasion of his 8th birthday Abstract We discuss the eigenvalue

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS

STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS Electronic Transactions on Numerical Analysis Volume 44 pp 1 24 215 Copyright c 215 ISSN 168 9613 ETNA STRUCTURE PRESERVING DEFLATION OF INFINITE EIGENVALUES IN STRUCTURED PENCILS VOLKER MEHRMANN AND HONGGUO

More information

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

Linear Algebra: Lecture notes from Kolman and Hill 9th edition.

Linear Algebra: Lecture notes from Kolman and Hill 9th edition. Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices

More information