Extending Results from Orthogonal Matrices to the Class of P-orthogonal Matrices

João R. Cardoso
Instituto Superior de Engenharia de Coimbra, Quinta da Nora, 3040-228 Coimbra, Portugal
jocar@isec.pt

F. Silva Leite
Departamento de Matemática, Universidade de Coimbra, 3000 Coimbra, Portugal
fleite@mat.uc.pt, fax: (351) 239-832568

April, 2002

Abstract. We extend results concerning orthogonal matrices to a more general class of matrices that will be called P-orthogonal. This is a large class of matrices that includes, for instance, orthogonal and symplectic matrices as particular cases. We study the elementary properties of P-orthogonal matrices and give some exponential representations. The role of these matrices in matrix decompositions, with particular emphasis on generalized polar decompositions, is analysed. An application to matrix equations is presented.

Key-words: P-orthogonal, P-symmetric, P-skew-symmetric, generalized polar decompositions, primary matrix functions

Work supported in part by ISR and a PRODEP grant, under Concurso n. 4/5.3/PRODEP/2000. Work supported in part by ISR and research network contract ERB FMRXCT-970137.

1 Introduction

If $\mathbb{K}$ is the set of real numbers $\mathbb{R}$ or the set of complex numbers $\mathbb{C}$, we denote by $GL(n,\mathbb{K})$ the Lie group of all $n \times n$ nonsingular matrices with entries in $\mathbb{K}$ and by $gl(n,\mathbb{K})$ the Lie algebra of all $n \times n$ matrices with entries in $\mathbb{K}$. Throughout the paper $P \in GL(n,\mathbb{R})$ is a fixed matrix. Let $A \in gl(n,\mathbb{C})$. The P-transpose of $A$ is defined by $A^{T_P} := P^{-1} A^T P$ and the P-adjoint of $A$ by $A^{*_P} := P^{-1} A^* P$, where $A^* = \bar{A}^T$.

Definition 1.1 Let $A \in gl(n,\mathbb{C})$.

1. The matrix $A$ is called P-orthogonal if $A^{T_P} A = I$, i.e., $A^T P A = P$;
2. The matrix $A$ is called P-symmetric if $A^{T_P} = A$, i.e., $A^T P = P A$;
3. The matrix $A$ is P-skew-symmetric if $A^{T_P} = -A$, i.e., $A^T P = -P A$;
4. The matrix $A$ is P-unitary if $A^{*_P} A = I$, i.e., $A^* P A = P$;
5. The matrix $A$ is P-Hermitian if $A^{*_P} = A$, i.e., $A^* P = P A$;
6. The matrix $A$ is P-skew-Hermitian if $A^{*_P} = -A$, i.e., $A^* P = -P A$.

The set of complex P-orthogonal matrices

$$G_P = \{A \in GL(n,\mathbb{C}) : A^T P A = P\}$$

and the set of P-unitary matrices

$$G_P^* = \{A \in GL(n,\mathbb{C}) : A^* P A = P\}$$

are Lie groups that include important particular cases, such as: the orthogonal group $G_I = O(n,\mathbb{C})$; the unitary group $G_I^* = U(n)$; the symplectic group $G_J = Sp(2m,\mathbb{C})$, with

$$J = \begin{bmatrix} 0 & I_m \\ -I_m & 0 \end{bmatrix}, \qquad 2m = n,$$

where $I_m$ denotes the $m \times m$ identity matrix; and the group $G_D = O(p,q)$, with $D = \mathrm{diag}(I_p, -I_q)$, $p + q = n$, which is the well known Lorentz group for $p = 3$ and $q = 1$.

The set of P-skew-symmetric matrices

$$L_P = \{A \in gl(n,\mathbb{C}) : A^T P = -P A\}$$

is the Lie algebra of $G_P$, with respect to the commutator $[A,B] = AB - BA$, and the set of P-symmetric matrices

$$J_P = \{A \in gl(n,\mathbb{C}) : A^T P = P A\},$$

equipped with the Jordan product $\{A,B\} = AB + BA$, forms a Jordan algebra. The set of P-skew-Hermitian matrices, denoted by $L_P^*$, is a Lie algebra over $\mathbb{R}$ (not over $\mathbb{C}$) and the set of P-Hermitian matrices $J_P^*$ is a Jordan algebra over $\mathbb{R}$. Moreover, $L_P^* = i J_P^*$.

The sets of real P-orthogonal, P-skew-symmetric and P-symmetric matrices will be denoted, respectively, by $G_P(\mathbb{R})$, $L_P(\mathbb{R})$ and $J_P(\mathbb{R})$. Thus, for example, with the terminology of Lie theory, we have $G_I(\mathbb{R}) = O(n)$ and $L_I(\mathbb{R}) = o(n)$.

Some results about the orthogonal group $O(p,q)$ and the symplectic group $Sp(2m)$ have been stated and proved separately. However, if we think in terms of the groups $G_P$, we observe that there are important features shared by both groups. One example is the generalized polar decomposition of a nonsingular matrix $A$: $A = QG$, where $Q \in G_P$ and $G \in J_P$. This decomposition also holds for many other choices of $P$.

The idea of working in this general setting has been used in some recent papers such as [2], [5], [12] and [21]. Cardoso and Silva Leite [5] have shown, through unifying proofs, that the diagonal Padé approximants method for computing matrix logarithms and exponentials is structure preserving for P-orthogonal matrices. Silva Leite and Crouch [21] have shown that important features related to the spectrum of a matrix in $L_P$ are independent of $P$. In the same spirit, Iserles [12] has studied the discretization of equations evolving in the Lie groups $G_P$ (which he called quadratic Lie groups).

Our main goal is to extend some well known results involving orthogonal, symmetric and skew-symmetric matrices to P-orthogonal, P-symmetric and P-skew-symmetric matrices. We also aim to show how the theory associated with these latter matrices may be used to identify solutions of two particular algebraic matrix equations.

The organization of this paper is as follows. In Section 2 we state some elementary properties of P-orthogonal and P-unitary matrices and of matrices in the corresponding algebras. In most cases the condition that $P$ is orthogonal and $P^2 = \pm I$ is needed. When this condition does not hold but $P$ is symmetric or skew-symmetric, we show that there exists an isomorphism between $G_P$ and a certain group $G_{P_1}$, where $P_1$ is orthogonal and $P_1^2 = \pm I$. In Section 3 we generalize the exponential representations given in [7], ch. XI, to P-orthogonal and P-unitary matrices. In Section 4, well known matrix decompositions are extended to P-orthogonal and P-unitary matrices. Special attention is paid to the generalized polar decomposition. Finally, in Section 5, we solve two algebraic matrix equations using the theory stated previously.
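
For readers who wish to experiment, the objects of Definition 1.1 are easy to probe numerically. The following minimal Python sketch is ours, not part of the paper, and the helper names are hypothetical; it encodes the P-transpose and P-adjoint and verifies, for $P = J$, the classical fact that the exponential of a Hamiltonian matrix $JS$ ($S$ symmetric) is symplectic, i.e., J-orthogonal.

```python
# Minimal numerical companion to Definition 1.1 (a sketch; helper names are ours).
import numpy as np
from scipy.linalg import expm

def p_transpose(A, P):
    """P-transpose: A^{T_P} = P^{-1} A^T P."""
    return np.linalg.solve(P, A.T @ P)

def p_adjoint(A, P):
    """P-adjoint: A^{*_P} = P^{-1} A^* P."""
    return np.linalg.solve(P, A.conj().T @ P)

def is_p_orthogonal(A, P, tol=1e-8):
    """A is P-orthogonal iff A^T P A = P."""
    return np.allclose(A.T @ P @ A, P, atol=tol)

# With P = J, the P-orthogonal group is the symplectic group: the exponential
# of any Hamiltonian matrix J S (S symmetric) is symplectic.
m = 2
J = np.block([[np.zeros((m, m)), np.eye(m)],
              [-np.eye(m), np.zeros((m, m))]])
rng = np.random.default_rng(0)
S = rng.standard_normal((2 * m, 2 * m))
S = 0.3 * (S + S.T)                          # symmetric, kept small for round-off
A = expm(J @ S)                              # J S is J-skew-symmetric (Hamiltonian)
print(is_p_orthogonal(A, J))                 # True, up to round-off
```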

2 Properties of P-orthogonal matrices

In this section we give elementary properties of the group $G_P$ and the corresponding algebras $L_P$ and $J_P$. Since most of the results have immediate proofs, these will be omitted. Similar results may be adapted for P-unitary, P-Hermitian and P-skew-Hermitian matrices.

Theorem 2.1 Let $P \in GL(n,\mathbb{R})$ and $A \in gl(n,\mathbb{C})$. Then:

(i) If $A$ is P-symmetric (resp. P-skew-symmetric) and nonsingular, then $A^{-1}$ is P-symmetric (resp. P-skew-symmetric);
(ii) $G_P$, $L_P$ and $J_P$ are closed under P-orthogonal similarities, that is, if $S \in G_P$ and $A \in G_P$ (resp. $L_P$, $J_P$) then $S^{-1} A S \in G_P$ (resp. $L_P$, $J_P$);
(iii) $(AB)^{T_P} = B^{T_P} A^{T_P}$.

Moreover, if $P$ is orthogonal and $P^2 = \pm I$, then:

(iv) If $A$ is P-orthogonal (resp. P-skew-symmetric, P-symmetric), then $A^T$ is P-orthogonal (resp. P-skew-symmetric, P-symmetric);
(v) $(A^{T_P})^{T_P} = A$;
(vi) $A^{T_P} A$ and $A + A^{T_P}$ are P-symmetric;
(vii) $A - A^{T_P}$ is P-skew-symmetric.

Properties (vi) and (vii) generalize well known results about symmetric and skew-symmetric matrices. In particular, it follows that, if $P$ is orthogonal and $P^2 = \pm I$, every matrix can be written as the sum of a P-symmetric and a P-skew-symmetric matrix. This result, which has already appeared in [21], is an immediate consequence of (vi) and (vii), since

$$A = \tfrac{1}{2}(A + A^{T_P}) + \tfrac{1}{2}(A - A^{T_P}).$$

The next theorem relates P-symmetric and P-skew-symmetric matrices to symmetric and skew-symmetric matrices.

Theorem 2.2 Let $P \in GL(n,\mathbb{R})$ be orthogonal.

(i) If $P^2 = I$, then every P-symmetric (resp. P-skew-symmetric) matrix $A$ can be written in the form $A = PS$, where $S$ is symmetric (resp. skew-symmetric);
(ii) If $P^2 = -I$, then every P-symmetric (resp. P-skew-symmetric) matrix $A$ can be written in the form $A = PS$, where $S$ is skew-symmetric (resp. symmetric).
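
The splitting above is straightforward to check numerically. In the sketch below (ours; the hypothetical helper `p_transpose` is as before), $P$ is the orthogonal involution $\mathrm{diag}(1,1,-1,-1)$, so $P^2 = I$.

```python
# Sketch of the splitting A = (A + A^{T_P})/2 + (A - A^{T_P})/2 from (vi)-(vii);
# the helper name is ours. Here P is orthogonal with P^2 = I.
import numpy as np

def p_transpose(A, P):
    """P-transpose: A^{T_P} = P^{-1} A^T P."""
    return np.linalg.solve(P, A.T @ P)

P = np.diag([1.0, 1.0, -1.0, -1.0])          # orthogonal, P^2 = I
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

S = 0.5 * (A + p_transpose(A, P))            # P-symmetric part
K = 0.5 * (A - p_transpose(A, P))            # P-skew-symmetric part
assert np.allclose(p_transpose(S, P), S)
assert np.allclose(p_transpose(K, P), -K)
assert np.allclose(S + K, A)
```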

Some properties of the spectrum of orthogonal and skew-symmetric matrices are shared by P-orthogonal and P-skew-symmetric matrices. For instance, the spectrum of a P-orthogonal matrix $A$, which will be denoted by $\sigma(A)$, contains the inverse of each of its eigenvalues. Indeed, since $A^T P A = P$ and every matrix is similar to its transpose, it follows that $A$ is similar to its inverse. Thus the equivalence

$$\lambda \in \sigma(A) \iff \lambda^{-1} \in \sigma(A)$$

holds, and also $\det(A) = \pm 1$. Note, however, that $\sigma(A)$ does not necessarily lie on the unit circle.

Given a P-skew-symmetric matrix $A$, its spectrum $\sigma(A)$ is symmetric with respect to the origin, i.e., $\lambda \in \sigma(A) \iff -\lambda \in \sigma(A)$. In fact, $A$ is similar to $-A$ and, as a consequence, $\mathrm{trace}(A) = 0$, which means that $L_P \subseteq sl(n,\mathbb{C})$, where $sl(n,\mathbb{C})$ denotes the special linear Lie algebra, i.e., the Lie algebra consisting of all matrices with trace equal to zero.

We have seen in the previous theorem that some results require the restriction

$$P^T = P^{-1} \quad \text{and} \quad P^2 = \pm I. \tag{1}$$

Examples of groups $G_P$, with $P$ satisfying (1), are the orthogonal group $O(p,q)$ and the symplectic group $Sp(2m)$. In the next theorem, we generalize some ideas of [16] and show that, if $P$ is symmetric or skew-symmetric, then $G_P$ is isomorphic to $O(p,q)$ or $Sp(2m)$, respectively.

Theorem 2.3 Let $P \in GL(n,\mathbb{R})$.

(a) If $P = P^T$, $p$ and $q$ are, respectively, the number of positive and negative eigenvalues of $P$ ($p + q = n$), and $D = \mathrm{diag}(I_p, -I_q)$, then:

(i) For complex P-orthogonal, P-skew-symmetric and P-symmetric matrices, the following isomorphisms hold:
$$G_P \cong G_I, \quad L_P \cong L_I, \quad J_P \cong J_I;$$
(ii) For P-unitary, P-skew-Hermitian and P-Hermitian matrices, we have:
$$G_P^* \cong G_D^*, \quad L_P^* \cong L_D^*, \quad J_P^* \cong J_D^*;$$
(iii) For real P-orthogonal, P-skew-symmetric and P-symmetric matrices, it holds that:
$$G_P(\mathbb{R}) \cong G_D(\mathbb{R}), \quad L_P(\mathbb{R}) \cong L_D(\mathbb{R}), \quad J_P(\mathbb{R}) \cong J_D(\mathbb{R}).$$

(b) Let $P$ be a $2m \times 2m$ matrix such that $P^T = -P$, and let $J$ be the matrix that defines the symplectic group. Then:

(i) $G_P \cong G_J$, $L_P \cong L_J$, $J_P \cong J_J$;
(ii) $G_P^* \cong G_J^*$, $L_P^* \cong L_J^*$, $J_P^* \cong J_J^*$;
(iii) $G_P(\mathbb{R}) \cong G_J(\mathbb{R})$, $L_P(\mathbb{R}) \cong L_J(\mathbb{R})$, $J_P(\mathbb{R}) \cong J_J(\mathbb{R})$.

Proof. (a)(i) Since $P$ is real, symmetric and invertible, we may write $P = C_1^T C_1$ for some nonsingular complex $C_1$. A possible choice for $C_1$ is a symmetric square root of $P$, whose existence is always guaranteed. Define the mapping

$$\Phi_1 : gl(n,\mathbb{C}) \to gl(n,\mathbb{C}), \qquad A \mapsto \Phi_1(A) = C_1 A C_1^{-1}.$$

It is easy to check that $\Phi_1$ establishes an algebraic isomorphism between the groups $G_P$ and $G_I$, a Lie algebra isomorphism between $L_P$ and $L_I$, and a Jordan algebra isomorphism between $J_P$ and $J_I$.

(ii), (iii) Since $P \in GL(n,\mathbb{R})$ is symmetric, it is congruent with $D$ (see, for instance, [13]), that is, $P = C_2^T D C_2$ for some real nonsingular $C_2$. Then the mapping

$$\Phi_2 : gl(n,\mathbb{R}) \to gl(n,\mathbb{R}), \qquad A \mapsto \Phi_2(A) = C_2 A C_2^{-1}$$

establishes the required isomorphisms.

(b) Since $P$ is real, skew-symmetric and nonsingular, with even size $n = 2m$, $P$ admits a Cholesky-like factorization $P = C_4^T J C_4$ (see [4]). Therefore, the required isomorphisms may be defined by $\Phi_4(A) = C_4 A C_4^{-1}$.

Remark 2.4 When $P$ is symmetric positive definite, we may rewrite statements (a)(ii) and (iii) of the previous theorem in the following way:

(ii) $G_P^* \cong G_I^*$, $L_P^* \cong L_I^*$, $J_P^* \cong J_I^*$;
(iii) $G_P(\mathbb{R}) \cong G_I(\mathbb{R})$, $L_P(\mathbb{R}) \cong L_I(\mathbb{R})$, $J_P(\mathbb{R}) \cong J_I(\mathbb{R})$.

Indeed, $P$ admits the Cholesky factorization $P = C_3^T C_3$, with $C_3$ real (see [11]), and then the mapping

$$\Phi_3 : gl(n,\mathbb{R}) \to gl(n,\mathbb{R}), \qquad A \mapsto \Phi_3(A) = C_3 A C_3^{-1}$$

establishes the desired isomorphisms.
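
The congruence argument in the proof is easy to reproduce numerically. The sketch below (ours) computes one admissible $C_2$ from an eigendecomposition of a symmetric indefinite $P$ and checks that $\Phi_2$ carries a P-orthogonal matrix into $G_D = O(1,1)$.

```python
# Sketch of Theorem 2.3(a): for symmetric P, write P = C^T D C and map A -> C A C^{-1};
# then A^T P A = P iff (C A C^{-1})^T D (C A C^{-1}) = D. The choice of C is ours.
import numpy as np
from scipy.linalg import expm

P = np.array([[2.0, 1.0], [1.0, -1.0]])          # symmetric, indefinite (p = q = 1)
w, V = np.linalg.eigh(P)
order = np.argsort(-w)                            # positive eigenvalue first
w, V = w[order], V[:, order]
C = np.diag(np.sqrt(np.abs(w))) @ V.T             # P = C^T D C
D = np.diag([1.0, -1.0])
assert np.allclose(C.T @ D @ C, P)

K = np.linalg.solve(P, np.array([[0.0, 0.5],
                                 [-0.5, 0.0]]))   # P-skew-symmetric: K^T P = -P K
A = expm(K)                                       # hence A is P-orthogonal
B = C @ A @ np.linalg.inv(C)
assert np.allclose(A.T @ P @ A, P)
assert np.allclose(B.T @ D @ B, D)                # the image lies in G_D
```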

Since condition (1) implies that $P^T = \pm P$, it follows from the previous theorem that any group of the form $G_{P_1}$, with $P_1$ satisfying (1), is isomorphic to one of the groups $G_I = O(n)$, $G_D = O(p,q)$ or $G_J = Sp(2m)$. Thus, these three groups are the basis for studying the groups of the form $G_P$, with $P$ symmetric or skew-symmetric.

We finish this section by giving some references where one can find information about the Lie groups corresponding to $P = I$, $P = J$ and $P = D$. For real orthogonal and unitary matrices we refer to matrix theory books such as [11] and [13]; for complex orthogonal matrices see, for instance, [7] and [9] and references therein; for symplectic matrices see, for instance, [1], [6], [14] and references therein; and for $O(p,q)$ see, for instance, [1], [8] and [19]. This last group often appears associated with indefinite inner products.

3 Exponential representations

We refer to [10], ch. 6, for details concerning primary matrix functions. From now on, $X^{1/2}$ will stand for the principal square root of $X$ and $\sqrt{X}$ for a generic square root; $\mathrm{Log}\, X$ denotes the principal matrix logarithm of $X$.

Given a P-unitary matrix $U$, there exists a complex P-Hermitian matrix $V$ such that

$$U = e^{iV}. \tag{2}$$

Indeed, since $U$ is invertible, properties of matrix logarithms (see (6.4.20) in [10]) allow us to choose a P-skew-Hermitian matrix $W = \log U$ such that $U = e^W$. Since $W = iV$ for some P-Hermitian $V$, the representation (2) follows.

In this section, we generalize the exponential representations given in [7], ch. XI, for orthogonal and unitary matrices to P-orthogonal and P-unitary matrices.

Lemma 3.1 Let $P \in GL(n,\mathbb{R})$. If $A$ is complex P-orthogonal and P-Hermitian, then:

(i) There exists a real matrix $K$ such that $A = e^{iK}$;
(ii) If $\sigma(A) \cap \mathbb{R}^- = \emptyset$, then there exists a real P-skew-symmetric matrix $K$ such that $A = e^{iK}$;
(iii) If there exists a real P-skew-symmetric matrix $K$ such that $A = e^{iK}$, then the Jordan blocks of $A$ with negative eigenvalues occur in pairs.

Proof. (i) Since $A$ is P-orthogonal and P-Hermitian, it is easy to conclude that $A\bar{A} = I$. Therefore (6.4.22) in [10] ensures that $A = e^{iK}$ for some real $K$.

(ii) The restriction $\sigma(A) \cap \mathbb{R}^- = \emptyset$ means that the principal logarithm of $A$ is defined. Let $K := -i\,\mathrm{Log}\, A$. We must show that $K$ is real and P-skew-symmetric.

Since $A$ is P-Hermitian, we have $A^* = PAP^{-1}$ and hence $\mathrm{Log}(A^*) = P(\mathrm{Log}\, A)P^{-1}$; since also $\mathrm{Log}(A^*) = (\mathrm{Log}\, A)^*$, it follows that $\mathrm{Log}\, A$ is P-Hermitian. On the other hand, since $A$ is P-orthogonal, $\mathrm{Log}\, A$ is P-skew-symmetric. Hence $\mathrm{Log}\, A$ is both P-Hermitian and P-skew-symmetric. Therefore $\overline{\mathrm{Log}\, A} = -\mathrm{Log}\, A$, and so $K$ is real. Showing that $K$ is P-skew-symmetric is a simple calculation.

(iii) Let $K$ be real and P-skew-symmetric and let $\lambda \in \sigma(K)$. Then $\bar{\lambda}, -\lambda, -\bar{\lambda} \in \sigma(K)$. The eigenvalues of $A = e^{iK}$ are of the form $e^{i\lambda}$, where $\lambda$ is an eigenvalue of $K$; hence $e^{i\bar{\lambda}}$, $e^{-i\lambda}$ and $e^{-i\bar{\lambda}}$ are also eigenvalues of $A$. If $\mu$ is a negative eigenvalue of $A$, then $\mu = e^{i\lambda}$ for some $\lambda = (2k+1)\pi + bi$, $k \in \mathbb{Z}$, $b \in \mathbb{R}$, and then $-\bar{\lambda} = -(2k+1)\pi + bi$ is a distinct eigenvalue of $K$ with $e^{-i\bar{\lambda}} = e^{i\lambda} = \mu$. Since the matrix exponential preserves the sizes of Jordan blocks, one concludes that the Jordan blocks associated with negative eigenvalues of $A$ occur in pairs.

Remark 3.2 The converse of (iii) may not be true. In fact, suppose that $P = I_{2m}$ and consider the orthogonal matrix $A = -I_{2m}$. If $K$ is a P-skew-symmetric (i.e., skew-symmetric) matrix, then $e^{iK}$ always has positive eigenvalues, since $\sigma(K) \subseteq i\mathbb{R}$. Therefore we cannot have $e^{iK} = -I_{2m}$ for any real skew-symmetric matrix $K$. However, if $P = J$, then $K = \mathrm{diag}(\pi, -\pi)$ is Hamiltonian and $e^{iK} = -I_{2m}$.

Theorem 3.3 Let $P^T = P^{-1}$, $P^2 = \pm I$, let $A$ be a complex P-orthogonal matrix and $X := A^{*_P} A$. If $\sigma(X) \cap \mathbb{R}^- = \emptyset$, then there exist a real P-orthogonal matrix $R$ and a real P-skew-symmetric matrix $K$ such that $A = R e^{iK}$.

Proof. First we observe that $X = A^{*_P} A$ is P-Hermitian and P-orthogonal. Since $\sigma(X) \cap \mathbb{R}^- = \emptyset$, by the previous lemma there exists a real P-skew-symmetric $K$ such that $X = e^{2iK}$. Hence $e^{iK}$ is a square root of $X$ which is P-orthogonal. A simple calculation then shows that $R := A e^{-iK}$ is both P-orthogonal and P-unitary, which, in turn, implies that $R$ is real.

Lemma 3.4 Let $P \in GL(n,\mathbb{R})$. If $A$ is a complex P-symmetric and P-unitary matrix, then there exists a real matrix $K$ such that $A = e^{iK}$. If $\sigma(A) \cap \mathbb{R}^- = \emptyset$, then $K$ may be chosen to be P-symmetric.

Proof. Analogous to the proof of Lemma 3.1.

Theorem 3.5 Let $P^T = P^{-1}$, $P^2 = \pm I$, let $A$ be a complex P-unitary matrix and $X := A^{T_P} A$. If $\sigma(X) \cap \mathbb{R}^- = \emptyset$, then there exist a real P-orthogonal matrix $R$ and a real P-symmetric matrix $K$ such that $A = R e^{iK}$.

Proof. Similar to the proof of Theorem 3.3.
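
The representation (2) can also be probed numerically when the principal logarithm applies. In the sketch below (ours; the helper name is hypothetical, and $W$ is chosen small and P-skew-Hermitian so that $\sigma(U)$ stays away from the negative real axis), a P-Hermitian $V$ with $U = e^{iV}$ is recovered.

```python
# Numerical probe of representation (2) (a sketch; helper name ours).
import numpy as np
from scipy.linalg import expm, logm

def p_adjoint(A, P):
    """P-adjoint: A^{*_P} = P^{-1} A^* P."""
    return np.linalg.solve(P, A.conj().T @ P)

P = np.diag([1.0, -1.0])
W = np.array([[0.2j, 0.4], [0.4, -0.3j]])        # P-skew-Hermitian: W^{*_P} = -W
assert np.allclose(p_adjoint(W, P), -W)
U = expm(W)                                       # hence U is P-unitary
assert np.allclose(p_adjoint(U, P) @ U, np.eye(2))
V = -1j * logm(U)                                 # V = -i Log U
assert np.allclose(p_adjoint(V, P), V)            # V is P-Hermitian
assert np.allclose(expm(1j * V), U)               # U = e^{iV}
```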

Remark 3.6 In this section we have worked with a general $P \in GL(n,\mathbb{R})$ and, in some cases, with $P$ satisfying (1). The sufficient conditions stated are very restrictive on the spectrum of some of the matrices involved. However, if these exponential representations are studied for particular choices of $P$, say $P = J$ or $P = D$, we believe that the range of applications of our results may be enlarged. This study still remains to be done.

4 Generalized matrix decompositions

4.1 Generalized polar decomposition

The standard polar decomposition states that every nonsingular $A \in GL(n,\mathbb{C})$ may be written in the form $A = UH$, where $U$ is unitary and $H$ is Hermitian positive definite. The matrix $H$ is always uniquely determined as $H := (A^* A)^{1/2}$, and if $A$ is real then $U$ and $H$ may be taken to be real.

Several extensions of this decomposition have been made. An example is the complex orthogonal/symmetric version (see [7], ch. XI), which allows us to write $A = QG$, where $Q$ is complex orthogonal and $G$ is complex symmetric; when $A$ is real, $Q$ and $G$ may also be taken to be real. Another important example is the more general case of Lie groups. Lawson [15] proved the existence of the polar decomposition in arbitrary Lie groups equipped with an involutive automorphism; however, such a decomposition is only possible for elements near the identity. For other particular Lie groups, which include some important matrix groups, polar decompositions may be obtained for a wide range of elements of the group. Bar-On and Gray [2] showed that the elements of certain Lie groups, which they called polar groups (the pair $(G, P)$ is called a polar group if for all $A \in G$ there exists a square root $\sqrt{A^{T_P} A} \in J_P$), may be written as a product of an element of the group $G_P$ by an element of $J_P$. However, it is not easy to identify polar Lie groups. In addition, working with these groups is very restrictive, since there are matrices which admit polar decompositions independently of whether they belong to a polar group, as will become clear later.

Here we propose a different approach to generalizing the polar decomposition, based on the theory of matrix square roots and on recent results about the groups $Sp(2m)$ and $O(p,q)$. In the generalized polar decomposition studied here we suppose that $P$ satisfies (1). Due to the isomorphisms stated in Theorem 2.3, the most significant groups associated with such $P$ are $G_J$ and $G_D$; all the other groups $G_{P_1}$, with $P_1$ satisfying (1), are isomorphic to $G_J$ or $G_D$. Natural extensions of our results may easily be made if $P$ is either symmetric or skew-symmetric. We establish sufficient conditions for a generic $P$ satisfying (1).

Since the polar decompositions for $P = D$ have recently received some attention in the context of indefinite scalar product spaces (see, for instance, [3] and [18]), we refine these conditions only for $P = J$, i.e., the symplectic/Hamiltonian case.

We start with the complex P-orthogonal/P-symmetric version of the polar decomposition.

Theorem 4.1 Let $P^T = P^{-1}$, $P^2 = \pm I$ and $A \in GL(n,\mathbb{C})$. Then:

(i) There exist a P-orthogonal matrix $Q$ and a P-symmetric matrix $G$ such that $A = QG$.
(ii) If $\sigma(A^{*_P} A) \cap \mathbb{R}^- = \emptyset$, then there exist a P-unitary matrix $U$ and a P-Hermitian matrix $H$ with eigenvalues in the open right half plane such that $A = UH$. Moreover, $H$ is uniquely determined as $H := (A^{*_P} A)^{1/2}$, and if $A$ is real then $U$ and $H$ may be taken to be real.

Proof. (i) The matrix $X = A^{T_P} A$ is P-symmetric and, since it is nonsingular, it has at least one square root which is a polynomial in $A^{T_P} A$ (see (6.4.12) in [10]). Let $G := \sqrt{A^{T_P} A}$ be one of those square roots. Since every polynomial preserves P-symmetry, $G$ is also P-symmetric, and hence $Q := AG^{-1}$ is P-orthogonal.

(ii) If $\sigma(A^{*_P} A) \cap \mathbb{R}^- = \emptyset$, then $H = (A^{*_P} A)^{1/2}$ is P-Hermitian and is the only square root of $A^{*_P} A$ with eigenvalues in the open right half plane. Moreover, if $A$ is real this square root is also real.

Remark 4.2 The order in which the matrices $Q$ and $G$ appear in the decomposition of (i) of the previous theorem may be reversed, i.e., there exist a P-symmetric matrix $G_1$ and a P-orthogonal matrix $Q_1$ such that $A = G_1 Q_1$. In fact, $A^T$ can be represented in the form $A^T = QG$ for some P-orthogonal $Q$ and some P-symmetric $G$. This implies that $A = G^T Q^T$, where $G_1 = G^T$ is P-symmetric and $Q_1 = Q^T$ is P-orthogonal. A similar argument holds for the P-unitary/P-Hermitian case.

It is well known that the similarity relationship between two complex orthogonal (resp. symmetric, skew-symmetric) matrices may be established by means of an orthogonal matrix ([7], ch. XI); that is, if $B = SAS^{-1}$ for some nonsingular $S$, then $B = QAQ^T$ for some orthogonal $Q$. Using the generalized polar decomposition given in the previous theorem, we show in the next corollary that an analogous result holds for two complex P-orthogonal (resp. P-symmetric, P-skew-symmetric) matrices.

Corollary 4.3 Let $A, B \in gl(n,\mathbb{C})$ and suppose that $P$ satisfies (1).

(i) If the matrices $A$ and $B$ are both P-orthogonal (resp. P-symmetric, P-skew-symmetric) and are similar, then they are P-orthogonally similar, i.e., there exists a P-orthogonal matrix $Q$ such that $B = QAQ^{T_P}$.

(ii) Suppose now that $A$ and $B$ are both P-unitary (resp. P-Hermitian, P-skew-Hermitian) and are similar, i.e., there exists a nonsingular complex matrix $S$ such that $B = SAS^{-1}$. If $\sigma(S^{*_P} S) \cap \mathbb{R}^- = \emptyset$, then $A$ and $B$ are P-unitarily similar, i.e., there exists a P-unitary matrix $U$ such that $B = UAU^{*_P}$. If $A$ and $B$ are real, then $U$ may be taken to be real.

Proof. (i) Without loss of generality, we suppose that $A$ and $B$ are P-orthogonal. If $S$ is a nonsingular matrix such that

$$B = SAS^{-1}, \tag{3}$$

then

$$B^{-1} = B^{T_P} = (S^{-1})^{T_P} A^{T_P} S^{T_P} = (S^{T_P})^{-1} A^{-1} S^{T_P},$$

which implies that $B = (S^{T_P})^{-1} A S^{T_P}$. Therefore, using (3), we have $A(S^{T_P} S) = (S^{T_P} S)A$, that is, $A$ and $S^{T_P} S$ commute. Since $S$ is nonsingular, by the previous theorem we may write $S = QG$, where $Q$ is P-orthogonal and $G := \sqrt{S^{T_P} S}$ is P-symmetric. As we have already seen in the proof of that theorem, $G$ is a polynomial in $S^{T_P} S$ and therefore it commutes with $A$. Hence

$$B = SAS^{-1} = (QG)A(QG)^{-1} = QAQ^{-1} = QAQ^{T_P}.$$

(ii) Similar to (i).

We show in the next theorem that, for a nonsingular P-normal matrix $A$ (i.e., $A^{*_P} A = A A^{*_P}$), the restriction $\sigma(A^{*_P} A) \cap \mathbb{R}^- = \emptyset$ is not needed to guarantee that $A$ admits a P-unitary/P-Hermitian polar decomposition.

Theorem 4.4 If $A \in GL(n,\mathbb{C})$ is P-normal, then there exist a P-unitary matrix $U$ and a P-Hermitian matrix $H$ such that $A = UH$.

Proof. Analogous to the proof of Theorem 5.1 in [3].

Remark 4.5 The previous theorem may not hold in the real case. For a counterexample, let

$$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \quad \text{and} \quad A = \begin{bmatrix} 1 & 4 \\ 4 & -1 \end{bmatrix}.$$

The matrix $A$ is Hamiltonian and consequently J-normal. If there were a real symplectic $U$ and a real skew-Hamiltonian (i.e., J-symmetric) $H$ such that $A = UH$, then $A^{T_J} A = (UH)^{T_J}(UH) = H^{T_J} H = H^2$. However, since $A^{T_J} A = \mathrm{diag}(-17, -17)$, the equation $H^2 = A^{T_J} A$ cannot have real skew-Hamiltonian solutions.

Let us analyse the generalized polar decomposition for the particular case

$$P = J = \begin{bmatrix} 0 & I_m \\ -I_m & 0 \end{bmatrix}.$$

We recall that the case $P = D$ has been extensively analysed ([3] and [18]).
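
The counterexample of Remark 4.5 can be verified directly; the following small sketch (ours) does so.

```python
# Numerical companion to Remark 4.5 (a sketch). A is Hamiltonian, hence J-normal,
# yet A^{T_J} A = diag(-17, -17), whose negative spectrum rules out a real
# skew-Hamiltonian H with H^2 = A^{T_J} A.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = np.array([[1.0, 4.0], [4.0, -1.0]])

assert np.allclose(A.T @ J + J @ A, 0)        # Hamiltonian: A^{T_J} = -A
X = np.linalg.solve(J, A.T @ J) @ A           # A^{T_J} A
assert np.allclose(X, -17.0 * np.eye(2))
```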

Let $A$ be complex nonsingular. Since the complex J-orthogonal/J-symmetric polar decomposition is a particular case of the decomposition in Theorem 4.1, we only discuss the J-unitary/J-Hermitian case and its real version whenever $A$ is real.

Before proceeding further, let us recall some basic facts about J-Hermitian (also called skew-Hamiltonian) square roots of J-Hermitian matrices; we refer to [6] for more details. While any nonsingular matrix $H \in J_J$ has at least one square root in $J_J$, the analogous result may fail in $J_J^*$. The matrix

$$H = \begin{bmatrix} 1 & 2i \\ -2i & 1 \end{bmatrix},$$

whose eigenvalues are $-1$ and $3$, is an example of a matrix in $J_J^*$ which does not have square roots in $J_J^*$. Indeed, if there were $K \in J_J^*$ such that $K^2 = H$, then the negative eigenvalues of $H$ would have to occur in pairs, which is a contradiction.

Nevertheless, if a nonsingular matrix $H \in J_J^*$ admits the skew-Hamiltonian Jordan decomposition

$$H = S \begin{bmatrix} H_1 & 0 \\ 0 & H_1^* \end{bmatrix} S^{-1}, \tag{4}$$

for some $S \in G_J$ (i.e., $S$ symplectic) and some $H_1 \in GL(m,\mathbb{C})$, $2m = n$, then

$$K = S \begin{bmatrix} H_1^{1/2} & 0 \\ 0 & (H_1^{1/2})^* \end{bmatrix} S^{-1} \in J_J^*$$

satisfies $K^2 = H$.

Let us suppose now that $H \in J_J(\mathbb{R})$ is nonsingular. By Theorem 1 in [6] there exist $S \in G_J(\mathbb{R})$ and $H_1 \in GL(m,\mathbb{R})$ such that

$$H = S \begin{bmatrix} H_1 & 0 \\ 0 & H_1^T \end{bmatrix} S^{-1}.$$

Using the necessary and sufficient condition for a nonsingular real matrix to have a real square root (see (6.4.14) in [10]), it follows that $H$ has a real skew-Hamiltonian square root if and only if the Jordan blocks of $H_1$ associated with real negative eigenvalues occur in pairs.

To summarize the discussion above, the following theorem lists some sufficient conditions under which a given nonsingular matrix $A$ admits a symplectic/skew-Hamiltonian polar decomposition.

Theorem 4.6 (a) Let $A \in GL(2m,\mathbb{C})$. If one of the following three conditions holds:

(i) $\sigma(A^{*_J} A) \cap \mathbb{R}^- = \emptyset$,
(ii) $A^{*_J} A$ admits the skew-Hamiltonian Jordan decomposition (4), or
(iii) $A$ is real,

then there exist $U \in G_J^*$ and $H \in J_J^*$ such that $A = UH$.

(b) Let $A \in GL(2m,\mathbb{R})$. There exist $U \in G_J(\mathbb{R})$ and $H \in J_J(\mathbb{R})$ such that $A = UH$ if and only if for each real negative eigenvalue of $A^{T_J} A$ there are four equal Jordan blocks.
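
The computational content of Theorem 4.1 and of Theorem 4.6(a)(i) can be sketched with the principal square root. The example below is ours: $A$ is kept close to the identity so that $\sigma(A^{T_J} A)$ is guaranteed to avoid the negative real axis.

```python
# Sketch of a symplectic/skew-Hamiltonian polar decomposition via the principal
# square root (our construction; valid under the spectral assumption in the comment).
import numpy as np
from scipy.linalg import sqrtm

m = 2
J = np.block([[np.zeros((m, m)), np.eye(m)],
              [-np.eye(m), np.zeros((m, m))]])

def j_transpose(A):
    """A^{T_J} = J^{-1} A^T J."""
    return np.linalg.solve(J, A.T @ J)

rng = np.random.default_rng(1)
A = np.eye(2 * m) + 0.05 * rng.standard_normal((2 * m, 2 * m))
# the small perturbation keeps sigma(A^{T_J} A) clustered around 1

X = j_transpose(A) @ A                 # J-symmetric (skew-Hamiltonian)
H = sqrtm(X)                           # principal square root: eigenvalues in the right half plane
Q = A @ np.linalg.inv(H)
assert np.allclose(Q @ H, A)
assert np.allclose(j_transpose(H), H)  # H is J-symmetric
assert np.allclose(Q.T @ J @ Q, J)     # Q is symplectic (J-orthogonal)
```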

4.2 Other matrix decompositions

In this subsection we consider some decompositions involving symmetric and skew-symmetric matrices that may easily be generalized to P-symmetric and P-skew-symmetric matrices. An example is the following well known result ([20] and [22]): given a matrix $A \in gl(n,\mathbb{C})$, there exist symmetric matrices $F$ and $G$ such that $A = FG$. The next theorem shows that a similar result holds for P-symmetric and P-skew-symmetric matrices, provided that $P$ satisfies (1).

Theorem 4.7 Let $A \in gl(n,\mathbb{C})$. If $P$ is orthogonal and $P^2 = I$ (resp. $P^2 = -I$), then there exist complex P-symmetric (resp. P-skew-symmetric) matrices $F$ and $G$ such that $A = FG$, with $\mathrm{rank}(F) = k$ and $\mathrm{rank}(G) = n - k + \mathrm{rank}(A)$, where $k$ is any integer with $\mathrm{rank}(A) \le k \le n$. If $A$ is real, then $F$ and $G$ may be taken to be real.

Proof. We assume that $P^2 = I$. Since $A = F_1 G_1$ for some symmetric $F_1$ and $G_1$, it follows that $A = (F_1 P)(P G_1) = FG$, where $F := F_1 P$ and $G := P G_1$ are P-symmetric. The remainder of the proof is an immediate consequence of Prop. 1.1 in [20].

A necessary and sufficient condition for a matrix $A$ to be the product of a symmetric and a skew-symmetric matrix is that $A$ is similar to $-A$ (see Theorem 2.1 in [20]). The next theorem states an analogous result.

Theorem 4.8 Let $P^2 = \pm I$ and $P$ be orthogonal. A matrix $A \in gl(n,\mathbb{C})$ can be represented as $A = FG$, where $F$ is complex P-symmetric and $G$ is complex P-skew-symmetric, if and only if $A$ is similar to $-A$. If $A$ is real, then $F$ and $G$ may be taken to be real.

Proof. Immediate consequence of Theorem 2.1 in [20] and Theorem 2.2 of this paper.

5 Application to matrix equations

In this section we illustrate how the theory developed previously may be used to solve two particular matrix equations, which arise in the characterization of $h$-selfdual and $\sigma$-selfdual Euclidean norms in $\mathbb{C}^n$ (see [17]). Our goal is to solve the following problems:

Problem 1. Given $H$ complex Hermitian nonsingular, find Hermitian positive definite solutions $X$ of the matrix equation

$$H^{-1} X = X^{-1} H; \tag{5}$$

Problem 2. Given $S$ complex symmetric nonsingular, find Hermitian positive definite solutions $X$ such that

$$S^{-1} X = X^{-1} S. \tag{6}$$

Although these problems were completely solved in [17], here we propose an alternative method based on the previous developments.

We start with the Hermitian solutions of (5). Since $H$ is Hermitian and invertible, it is congruent with $D = \mathrm{diag}(I_p, -I_q)$, where $p$ and $q$ are, respectively, the number of positive and negative eigenvalues of $H$ (see p. 184 in [13]); that is, there exists $T$ such that

$$H = T D T^*. \tag{7}$$

Let $Y$ be a matrix such that $X = T Y T^*$. Then $Y$ is Hermitian if and only if $X$ is Hermitian, and therefore solving (5) is equivalent to finding $Y$ Hermitian such that

$$D^{-1} Y = Y^{-1} D. \tag{8}$$

Setting $W := D^{-1} Y$, it turns out that solving (8) is equivalent to finding $W$ such that $W^2 = I$ and $W^* D = D W$, i.e., $W$ is a D-Hermitian square root of $I$. Thus, to solve Problem 1, we may alternatively solve the following problem:

Problem 1'. Given $H$ complex Hermitian nonsingular, having $p$ positive eigenvalues and $q$ negative eigenvalues, find the D-Hermitian square roots of the identity, where $D = \mathrm{diag}(I_p, -I_q)$.

Theorem 5.1 Let $D$ be as before. A matrix $W$ is a D-Hermitian square root of $I$ if and only if it can be written in the form

$$W = R \Delta R^{-1}, \tag{9}$$

where $\Delta = \mathrm{diag}(\pm 1, \ldots, \pm 1)$ and $R$ is a nonsingular matrix such that $R^* D R$ commutes with $\Delta$.

Proof. ($\Rightarrow$) Let $W$ be a D-Hermitian square root of the identity matrix $I$. Since $W$ is a square root of $I$, $\pm 1$ are the only eigenvalues that $W$ can have. Also, $W$ is diagonalizable, because the square root operation preserves the size and the number of the blocks in the Jordan canonical form. Hence there exist a nonsingular matrix $R$ and $\Delta = \mathrm{diag}(\pm 1, \ldots, \pm 1)$ such that $W = R \Delta R^{-1}$. On the other hand, since $W$ is D-Hermitian, $R^* D R$ and $\Delta$ commute.

($\Leftarrow$) Immediate.

Remark 5.2 If $R$ in (9) is D-unitary, then $R^* D R$ commutes with $\Delta$. Thus all matrices of the form $W = R \Delta R^{*_D}$, where $R$ is D-unitary, are D-Hermitian square roots of $I$.
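
Remark 5.2 gives a concrete recipe that is easy to test. In the sketch below (ours), a D-unitary $R$ is built as the exponential of a D-skew-Hermitian matrix, a construction of our choosing, and $W = R\Delta R^{*_D}$ is verified to be a D-Hermitian involution.

```python
# Sketch of Remark 5.2 (our construction): R = exp(M) with M D-skew-Hermitian is
# D-unitary, and W = R Delta R^{*_D} is then a D-Hermitian square root of I.
import numpy as np
from scipy.linalg import expm

def d_adjoint(A, D):
    """D-adjoint: A^{*_D} = D^{-1} A^* D."""
    return np.linalg.solve(D, A.conj().T @ D)

D = np.diag([1.0, -1.0])
M = np.array([[0.2j, 0.5], [0.5, -0.1j]])    # D-skew-Hermitian: M^{*_D} = -M
R = expm(M)                                   # D-unitary, so R^{*_D} = R^{-1}
Delta = np.diag([1.0, -1.0])
W = R @ Delta @ d_adjoint(R, D)
assert np.allclose(W @ W, np.eye(2))          # an involution
assert np.allclose(d_adjoint(W, D), W)        # D-Hermitian
```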

Corollary 5.3 A matrix $X$ is a Hermitian solution of (5) if and only if

$$X = T D R \Delta R^{-1} T^*,$$

where $T$ and $D$ are as in (7), $\Delta = \mathrm{diag}(\pm 1, \ldots, \pm 1)$ and $R$ is a nonsingular matrix such that $R^* D R$ commutes with $\Delta$.

Proof. Immediate consequence of Theorem 5.1.

Having characterized the Hermitian solutions of (5), we are now able to characterize the Hermitian positive definite solutions.

Corollary 5.4 Given a nonsingular Hermitian matrix $H$, the matrix $X$ is a Hermitian positive definite solution of the equation $H^{-1} X = X^{-1} H$ if and only if

$$X = T \exp\begin{bmatrix} 0 & K \\ K^* & 0 \end{bmatrix} T^*,$$

where $\exp(\cdot)$ stands for the matrix exponential, $T$ is the matrix in (7) and $K$ is an arbitrary $p \times q$ complex matrix, with $p$, $q$ being, respectively, the number of positive and negative eigenvalues of the given Hermitian matrix $H$.

Proof. ($\Rightarrow$) Let $X$ be a Hermitian positive definite solution of (5). By Corollary 5.3 there exist a nonsingular matrix $R$ and $\Delta = \mathrm{diag}(\pm 1, \ldots, \pm 1)$ such that $X = T Y T^*$, with $Y := D R \Delta R^{-1}$. Since $X$ and $Y$ are congruent, $X$ is Hermitian positive definite if and only if $Y$ is. Therefore we may proceed with $Y$ instead of $X$. Since $Y$ is Hermitian positive definite, its principal logarithm $L := \mathrm{Log}\, Y$ is Hermitian, and $Y$ can be represented as $Y = e^L$. On the other hand, $Y$ is D-unitary, because both $D$ and $W$ are D-unitary; hence $\mathrm{Log}\, Y$ is D-skew-Hermitian. Since $L$ is simultaneously Hermitian and D-skew-Hermitian, there exists a complex $p \times q$ matrix $K$ such that

$$L = \begin{bmatrix} 0 & K \\ K^* & 0 \end{bmatrix}.$$

($\Leftarrow$) Immediate.

Let us now analyse equation (6), starting with its Hermitian solutions. Since $S$ in (6) is complex nonsingular, using the Takagi factorization (see (4.4.4) in [11]) we may write

$$S = U \Gamma U^T, \tag{10}$$

where $U$ is unitary and $\Gamma$ is diagonal with positive entries. If $Y$ is a matrix such that $X = U Y U^*$, then $X$ is Hermitian if and only if $Y$ is. Therefore, equation (6) reduces to

$$\Gamma^{-1} Y = \bar{Y}^{-1} \Gamma. \tag{11}$$
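
Before completing the analysis of (11), we pause for a numerical sketch of Corollary 5.4 (ours): the congruence $H = TDT^*$ is computed from an eigendecomposition, which is one admissible choice of $T$, the block $K$ is arbitrary, and the resulting $X$ is checked against equation (5).

```python
# Sketch of Corollary 5.4 (our choices of T and K), verified against equation (5).
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 2.0], [2.0, 1.0]])       # Hermitian, eigenvalues 3 and -1 (p = q = 1)
w, V = np.linalg.eigh(H)
order = np.argsort(-w)                        # positive eigenvalues first
w, V = w[order], V[:, order]
T = V @ np.diag(np.sqrt(np.abs(w)))           # H = T D T* with D = diag(I_p, -I_q)
D = np.diag([1.0, -1.0])
assert np.allclose(T @ D @ T.conj().T, H)

p = q = 1
K = np.array([[0.7]])                         # arbitrary p x q complex block
L = np.block([[np.zeros((p, p)), K],
              [K.conj().T, np.zeros((q, q))]])
X = T @ expm(L) @ T.conj().T

assert np.allclose(np.linalg.inv(H) @ X, np.linalg.inv(X) @ H)   # X solves (5)
assert np.all(np.linalg.eigvalsh(X) > 0)                          # positive definite
```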

If we let $G := \Gamma^{-1} Y$, then finding Hermitian solutions of (11) is equivalent to finding $G$ which is simultaneously Γ-Hermitian and Γ-orthogonal. Since $\Gamma$ is Hermitian positive definite, by Theorem 2.3 and Lemma 1 in [7], ch. XI, we may write $G = C^{-1} E e^{iK} C$, where $C$ is such that $\Gamma = C^T C$, $E$ is a real involution and $K$ is a real skew-symmetric matrix which commutes with $E$. If we choose $C = \Gamma^{1/2}$, then $G = \Gamma^{-1/2} E e^{iK} \Gamma^{1/2}$, and the Hermitian solution $Y$ is given by

$$Y = \Gamma^{1/2} E e^{iK} \Gamma^{1/2}. \tag{12}$$

If one wants $Y$ to be Hermitian positive definite then, according to the proof of Lemma 1 in [7], it is enough to take $E = I$ in (12). We now summarize the previous discussion in the following theorem.

Theorem 5.5 A matrix $X$ is a Hermitian solution of (6) if and only if it can be represented as

$$X = U \Gamma^{1/2} E e^{iK} \Gamma^{1/2} U^*, \tag{13}$$

where $U$ and $\Gamma$ are as in (10), $E$ is a real involution and $K$ is a real skew-symmetric matrix which commutes with $E$. Moreover, $X$ is Hermitian positive definite if and only if $X$ is given as in (13) with $E = I$.

References

[1] G. Ammar, C. Mehl, V. Mehrmann, Schur-like forms for matrix Lie groups, Lie algebras and Jordan algebras, Linear Algebra and its Applications, 287 (1999), pp. 11-39.

[2] J. R. Bar-On and C. W. Gray, A generalized polar decomposition, Linear Algebra and its Applications, 170 (1992), pp. 75-80.

[3] Y. Bolshakov, C. V. M. van der Mee, A. C. M. Ran, B. Reichstein, L. Rodman, Polar decompositions in finite dimensional indefinite scalar product spaces: general theory, Linear Algebra and its Applications, 261 (1997), pp. 91-141.

[4] P. Benner, R. Byers, H. Fassbender, V. Mehrmann, D. Watkins, Cholesky-like factorizations of skew-symmetric matrices, Electronic Transactions on Numerical Analysis, 11 (2000), pp. 85-93.

[5] J. R. Cardoso and F. Silva Leite, Theoretical and numerical considerations about Padé approximants for the matrix logarithm, Linear Algebra and its Applications, 330 (2001), pp. 31-42.

[6] H. Faßbender, D. Mackey, N. Mackey, H. Xu, Real and complex Hamiltonian square roots of skew-Hamiltonian matrices, Technical Report 92, Department of Mathematics and Statistics, Western Michigan University, 1999.

[7] F. R. Gantmacher, Theory of Matrices, Vol. II, Chelsea, New York, 1989.

[8] I. Gohberg, P. Lancaster, L. Rodman, Matrices and Indefinite Scalar Products, Operator Theory: Advances and Applications, Vol. 8, Birkhäuser Verlag, 1983.

[9] R. A. Horn and D. I. Merino, The Jordan canonical forms of complex orthogonal and skew-symmetric matrices, Linear Algebra and its Applications, 302/303 (1999), pp. 411-421.

[10] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1994.

[11] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1985.

[12] A. Iserles, On Cayley-transform methods for the discretization of Lie-group equations, Technical Report 1999/NA4, DAMTP, University of Cambridge.

[13] P. Lancaster, M. Tismenetsky, The Theory of Matrices, Academic Press, 1985.

[14] A. J. Laub and K. Meyer, Canonical forms for symplectic and Hamiltonian matrices, Celestial Mechanics, 9 (1974), pp. 213-238.

[15] J. D. Lawson, Polar and Ol'shanskii decompositions, J. Reine Angew. Math., 448 (1994), pp. 191-219.

[16] Anna Lee, Secondary symmetric, skew-symmetric and orthogonal matrices, Periodica Mathematica Hungarica, 7 (1976), pp. 63-70.

[17] E. Marques de Sá, M. C. Santos, Notes on selfdual norms and products of two involutions, preprint, 1992.

[18] C. V. M. van der Mee, A. C. M. Ran, L. Rodman, Stability of selfadjoint square roots and polar decompositions in indefinite scalar product spaces, Linear Algebra and its Applications, 302/303 (1999), pp. 77-104.

[19] V. Mehrmann, H. Xu, Structured Jordan canonical forms for structured matrices that are Hermitian, skew-Hermitian or unitary with respect to indefinite inner products, Electronic Journal of Linear Algebra, 5 (1999), pp. 67-103.

[20] L. Rodman, Products of symmetric and skew-symmetric matrices, Linear and Multilinear Algebra, 43 (1997), pp. 19-34.

[21] F. Silva Leite, P. Crouch, Closed forms for the exponential mapping on matrix Lie groups based on Putzer's method, Journal of Mathematical Physics, 40 (1999), pp. 3561-3568.

[22] O. Taussky, The role of symmetric matrices in the study of general matrices, Linear Algebra and its Applications, 5 (1972), pp. 147-154.