Extending Results from Orthogonal Matrices to the Class of P-orthogonal Matrices


João R. Cardoso
Instituto Superior de Engenharia de Coimbra, Quinta da Nora, Coimbra, Portugal
jocar@isec.pt

F. Silva Leite
Departamento de Matemática, Universidade de Coimbra, 3000 Coimbra, Portugal
fleite@mat.uc.pt, fax: (351)

April, 2002

Abstract. We extend results concerning orthogonal matrices to a more general class of matrices that will be called P-orthogonal. This is a large class of matrices that includes, for instance, orthogonal and symplectic matrices as particular cases. We study the elementary properties of P-orthogonal matrices and give some exponential representations. The role of these matrices in matrix decompositions, with particular emphasis on generalized polar decompositions, is analysed. An application to matrix equations is presented.

Key-words: P-orthogonal, P-symmetric, P-skew-symmetric, generalized polar decompositions, primary matrix functions

Work supported in part by ISR and a PRODEP grant, under Concurso n. 4/5.3/PRODEP/2000. Work supported in part by ISR and research network contract ERB FMRXCT

1 Introduction

If IK is the set of real numbers IR or the set of complex numbers C, we denote by GL(n, IK) the Lie group of all n×n nonsingular matrices with entries in IK and by gl(n, IK) the Lie algebra of all n×n matrices with entries in IK. Throughout the paper P ∈ GL(n, IR) is a fixed matrix. Let A ∈ gl(n, C). The P-transpose of A is defined by A^P := P^{-1} A^T P and the P-adjoint of A by A^{*P} := P^{-1} A^* P, where A^* = Ā^T.

Definition 1.1 Let A ∈ gl(n, C).
1. The matrix A is called P-orthogonal if A^P A = I, i.e., A^T P A = P;
2. The matrix A is called P-symmetric if A^P = A, i.e., A^T P = P A;
3. The matrix A is P-skew-symmetric if A^P = −A, i.e., A^T P = −P A;
4. The matrix A is P-unitary if A^{*P} A = I, i.e., A^* P A = P;
5. The matrix A is P-Hermitian if A^{*P} = A, i.e., A^* P = P A;
6. The matrix A is P-skew-Hermitian if A^{*P} = −A, i.e., A^* P = −P A.

The set of complex P-orthogonal matrices

G_P = {A ∈ GL(n, C) : A^T P A = P}

and the set of P-unitary matrices

G_P^* = {A ∈ GL(n, C) : A^* P A = P}

are Lie groups that include important particular cases, such as: the orthogonal group G_I = O(n, C); the unitary group G_I^* = U(n); the symplectic group G_J = SP(2m, C), with J = [[0, I_m], [−I_m, 0]], 2m = n, where I_m denotes the m×m identity matrix; and the group G_D = O(p, q), with D = diag(I_p, −I_q), p + q = n, which is the well-known Lorentz group for p = 3 and q = 1.

The set of P-skew-symmetric matrices

L_P = {A ∈ gl(n, C) : A^T P = −P A}

is the Lie algebra of G_P, with respect to the commutator [A, B] = AB − BA, and the set of P-symmetric matrices

J_P = {A ∈ gl(n, C) : A^T P = P A},

equipped with the Jordan product {A, B} = AB + BA, forms a Jordan algebra. The set of P-skew-Hermitian matrices, denoted by L_P^*, is a Lie algebra over IR (not over C) and the set of P-Hermitian matrices J_P^* is a Jordan algebra over IR. Moreover, L_P^* = i J_P^*.

The sets of real P-orthogonal, P-skew-symmetric and P-symmetric matrices will be denoted, respectively, by G_P(IR), L_P(IR) and J_P(IR). Thus, for example, with the terminology of Lie theory, we have G_I(IR) = O(n) and L_I(IR) = o(n).

Some results about the orthogonal group O(p, q) or the symplectic group SP(2m) have been stated and proved separately. However, if we think in terms of the groups G_P, we observe that there are important features shared by both groups. One example is the generalized polar decomposition of a nonsingular matrix A: A = QG, where Q ∈ G_P and G ∈ J_P. This decomposition also holds for many other choices of P.

The idea of working in a general setting has been used in some recent papers such as [2], [5], [12] and [21]. Cardoso and Silva Leite [5] have shown, through unifying proofs, that the diagonal Padé approximant method for computing matrix logarithms and exponentials is structure preserving for P-orthogonal matrices. Silva Leite and Crouch [21] have shown that important features related to the spectrum of a matrix in L_P are independent of P. In the same spirit, Iserles [12] has studied the discretization of equations in the Lie groups G_P (which were called quadratic Lie groups).

Our main goal is to extend some well known results involving orthogonal, symmetric and skew-symmetric matrices to P-orthogonal, P-symmetric and P-skew-symmetric matrices. We also aim to show how the theory associated to these latter matrices may be used to identify solutions of two particular algebraic matrix equations.

The organization of this paper is as follows.
In Section 2 we state some elementary properties of P-orthogonal and P-unitary matrices and of matrices in the corresponding algebras. In most cases the condition that P is orthogonal and P^2 = ±I is needed. When this condition does not hold but P is symmetric or skew-symmetric, we will show that there exists an isomorphism between G_P and a certain group G_{P_1}, where P_1 is orthogonal and P_1^2 = ±I. In Section 3 we generalize the exponential representations given in [7], ch. XI, to P-orthogonal and P-unitary matrices. In Section 4, well known matrix decompositions are extended to P-orthogonal and P-unitary matrices. Special attention will be paid to the generalized polar decomposition. Finally, in Section 5, we solve two algebraic matrix equations using the theory stated previously.
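Before proceeding, the basic definitions above are easy to exercise numerically. The sketch below (my illustration, not part of the paper; numpy assumed, helper names hypothetical) implements the P-transpose and checks that a 2×2 shear matrix is J-orthogonal, i.e., symplectic:

```python
import numpy as np

def p_transpose(A, P):
    # P-transpose: A^P = P^{-1} A^T P
    return np.linalg.solve(P, A.T @ P)

def is_p_orthogonal(A, P, tol=1e-12):
    # A is P-orthogonal iff A^P A = I, equivalently A^T P A = P
    return np.allclose(A.T @ P @ A, P, atol=tol)

# J defines the symplectic group for n = 2 (m = 1)
J = np.array([[0., 1.], [-1., 0.]])
A = np.array([[1., 1.], [0., 1.]])   # det A = 1, hence symplectic for n = 2

print(is_p_orthogonal(A, J))                               # True
print(np.allclose(p_transpose(A, J) @ A, np.eye(2)))       # True
```

For this 2×2 example, A^T J A = J reduces to det(A) = 1, which is why any unit-determinant 2×2 matrix is symplectic.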

2 Properties of P-orthogonal matrices

In this section we give elementary properties of the group G_P and the corresponding algebras L_P and J_P. Since most of the results have immediate proofs, they will be omitted. Similar results may be adapted for P-unitary, P-Hermitian and P-skew-Hermitian matrices.

Theorem 2.1 Let P ∈ GL(n, IR) and A ∈ gl(n, C). Then:
(i) If A is P-symmetric (resp. P-skew-symmetric) and nonsingular then A^{-1} is P-symmetric (resp. P-skew-symmetric);
(ii) G_P, L_P and J_P are closed under P-orthogonal similarities, that is, if S ∈ G_P and A ∈ G_P (resp. L_P, J_P) then S^{-1} A S ∈ G_P (resp. L_P, J_P);
(iii) (AB)^P = B^P A^P.
Moreover, if P is orthogonal and P^2 = ±I, then:
(iv) If A is P-orthogonal (resp. P-skew-symmetric, P-symmetric) then A^T is P-orthogonal (resp. P-skew-symmetric, P-symmetric);
(v) (A^P)^P = A;
(vi) A^P A and A + A^P are P-symmetric;
(vii) A − A^P is P-skew-symmetric.

Properties (vi) and (vii) generalize well known results about symmetric and skew-symmetric matrices. In particular, it follows that, if P is orthogonal and P^2 = ±I, every matrix can be written as the sum of a P-symmetric matrix with a P-skew-symmetric matrix. This result, which has already appeared in [21], is an immediate consequence of (vi) and (vii), since

A = (1/2)(A + A^P) + (1/2)(A − A^P).

The next theorem relates P-symmetric and P-skew-symmetric matrices with symmetric and skew-symmetric matrices.

Theorem 2.2 Let P ∈ GL(n, IR) be orthogonal.
(i) If P^2 = I, then every P-symmetric (resp. P-skew-symmetric) matrix A can be written in the form A = PS, where S is symmetric (resp. skew-symmetric);
(ii) If P^2 = −I, then every P-symmetric (resp. P-skew-symmetric) matrix A can be written in the form A = PS, where S is skew-symmetric (resp. symmetric).
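As a quick numerical illustration of Theorem 2.1(vi)–(vii) and the resulting splitting (my sketch, numpy assumed), take P = J, which is orthogonal with J^2 = −I, and split an arbitrary matrix into its P-symmetric and P-skew-symmetric parts:

```python
import numpy as np

J = np.array([[0., 1.], [-1., 0.]])   # orthogonal, J^2 = -I

def p_transpose(A, P):
    # A^P = P^{-1} A^T P
    return np.linalg.solve(P, A.T @ P)

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))

S = 0.5 * (A + p_transpose(A, J))   # P-symmetric part
N = 0.5 * (A - p_transpose(A, J))   # P-skew-symmetric part

assert np.allclose(S + N, A)                  # the splitting is exact
assert np.allclose(S.T @ J, J @ S)            # S^T P =  P S
assert np.allclose(N.T @ J, -(J @ N))         # N^T P = -P N
```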

Some properties of the spectrum of orthogonal and skew-symmetric matrices are shared by P-orthogonal and P-skew-symmetric matrices. For instance, the spectrum of a P-orthogonal matrix A, which will be denoted by σ(A), contains the inverse of each of its eigenvalues. Indeed, since A^T P A = P and every matrix is similar to its transpose, it follows that A is similar to its inverse. Thus, the equivalence λ ∈ σ(A) ⟺ λ^{-1} ∈ σ(A) holds, and also det(A) = ±1. We note, however, that σ(A) does not necessarily lie on the unit circle.

Given a P-skew-symmetric matrix A, its spectrum σ(A) is symmetric with respect to the origin, i.e., λ ∈ σ(A) ⟺ −λ ∈ σ(A). In fact, A is similar to −A and, as a consequence, trace(A) = 0, which means that L_P ⊆ sl(n, C), where sl(n, C) denotes the special linear Lie algebra, i.e., the Lie algebra consisting of all matrices with trace equal to zero.

We have seen in the previous theorem that some results require the restriction

P^T = P^{-1} and P^2 = ±I. (1)

Examples of groups G_P, with P satisfying (1), are the orthogonal group O(p, q) and the symplectic group SP(2m). In the next theorem, we generalize some ideas of [16] and show that, if P is either symmetric or skew-symmetric, then G_P is isomorphic to O(p, q) or SP(2m), respectively.

Theorem 2.3 Let P ∈ GL(n, IR).
(a) If P = P^T, p and q are, respectively, the number of positive and negative eigenvalues of P (p + q = n) and D = diag(I_p, −I_q), then:
(i) for complex P-orthogonal, P-skew-symmetric and P-symmetric matrices, the following isomorphisms hold: G_P ≅ G_I, L_P ≅ L_I, J_P ≅ J_I;
(ii) for P-unitary, P-skew-Hermitian and P-Hermitian matrices, we have: G_P^* ≅ G_D^*, L_P^* ≅ L_D^*, J_P^* ≅ J_D^*;
(iii) for real P-orthogonal, P-skew-symmetric and P-symmetric matrices, it holds: G_P(IR) ≅ G_D(IR), L_P(IR) ≅ L_D(IR), J_P(IR) ≅ J_D(IR).

(b) Let P be a 2m×2m matrix such that P^T = −P and suppose that J is the matrix that defines the symplectic group. Then:
(i) G_P ≅ G_J, L_P ≅ L_J, J_P ≅ J_J;
(ii) G_P^* ≅ G_J^*, L_P^* ≅ L_J^*, J_P^* ≅ J_J^*;
(iii) G_P(IR) ≅ G_J(IR), L_P(IR) ≅ L_J(IR), J_P(IR) ≅ J_J(IR).

Proof. (a)(i) Since P is real, symmetric and invertible, we may write P = C_1^T C_1, for some nonsingular complex C_1. A possible choice for C_1 is a symmetric square root of P, whose existence is always guaranteed. Define the mapping

Φ_1 : gl(n, C) → gl(n, C), A ↦ Φ_1(A) = C_1 A C_1^{-1}.

It is easy to check that Φ_1 establishes an algebraic isomorphism between the groups G_P and G_I, a Lie algebra isomorphism between L_P and L_I, and a Jordan algebra isomorphism between J_P and J_I.

(ii), (iii) Since P ∈ GL(n, IR) is symmetric, it is congruent with D (see, for instance, [13]), that is, P = C_2^T D C_2, for some real nonsingular C_2. Then the mapping

Φ_2 : gl(n, IR) → gl(n, IR), A ↦ Φ_2(A) = C_2 A C_2^{-1}

establishes the required isomorphisms.

(b) Since P is real, skew-symmetric and nonsingular, with even size n = 2m, P admits a Cholesky-like factorization P = C_4^T J C_4 (see [4]). Therefore, the required isomorphisms may be defined by Φ_4(A) = C_4 A C_4^{-1}.

Remark 2.4 When P is symmetric positive definite, we may rewrite statements (a)(ii) and (iii) of the previous theorem in the following way:
(ii) G_P^* ≅ G_I^*, L_P^* ≅ L_I^*, J_P^* ≅ J_I^*;
(iii) G_P(IR) ≅ G_I(IR), L_P(IR) ≅ L_I(IR), J_P(IR) ≅ J_I(IR).
Indeed, P admits the Cholesky factorization P = C_3^T C_3, with C_3 real (see [11]), and then the mapping

Φ_3 : gl(n, IR) → gl(n, IR), A ↦ Φ_3(A) = C_3 A C_3^{-1}

establishes the desired isomorphisms.
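For P symmetric positive definite, the isomorphism of Remark 2.4 can be sketched numerically as follows (my illustration, numpy assumed): with a Cholesky-type factor C such that P = C^T C, the map Φ(A) = C A C^{-1} sends P-orthogonal matrices to ordinary orthogonal ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# a symmetric positive definite P and a factor C with P = C^T C
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)
C = np.linalg.cholesky(P).T        # numpy returns L with P = L L^T; take C = L^T

# build a P-orthogonal A by pulling an orthogonal Q back through Phi^{-1}
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = np.linalg.solve(C, Q @ C)      # A = C^{-1} Q C

assert np.allclose(A.T @ P @ A, P)                  # A is P-orthogonal
Phi_A = C @ A @ np.linalg.inv(C)                    # Phi(A) = C A C^{-1} = Q
assert np.allclose(Phi_A.T @ Phi_A, np.eye(n))      # Phi(A) is orthogonal
```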

Since condition (1) implies that P^T = ±P, it follows from the previous theorem that any group of the form G_{P_1}, with P_1 satisfying (1), is isomorphic to one of the groups G_I = O(n), G_D = O(p, q) or G_J = SP(2m). Thus, these three groups are the basis for the study of the groups of the form G_P, with P symmetric or skew-symmetric.

We finish this section by giving some references where one can find information about the Lie groups corresponding to P = I, P = J and P = D. For real orthogonal and unitary matrices we refer to some matrix theory books, such as [11] and [13]; about complex orthogonal matrices see, for instance, [7] and [9] and references therein; for symplectic matrices see, for instance, [1], [6], [14] and references therein; and for O(p, q) see, for instance, [1], [8] and [19]. Sometimes this last group appears associated to indefinite inner products.

3 Exponential representations

We refer to [10], ch. 6, for details concerning primary matrix functions. From now on, X^{1/2} will stand for the principal square root of X and √X for a generic square root; Log X denotes the principal matrix logarithm of X.

Given a P-unitary matrix U, there exists a complex P-Hermitian matrix V such that

U = e^{iV}. (2)

Indeed, since U is invertible, some properties of logarithms of matrices (see (6.4.20) in [10]) allow us to choose a P-skew-Hermitian matrix W = log U such that U = e^W. Since W = iV, for some P-Hermitian V, the representation (2) follows.

In this section, we generalize the exponential representations given in [7], ch. XI, for orthogonal and unitary matrices to P-orthogonal and P-unitary matrices.

Lemma 3.1 Let P ∈ GL(n, IR).
If A is complex P-orthogonal and P-Hermitian, then:
(i) there exists a real matrix K such that A = e^{iK};
(ii) if σ(A) ∩ IR^− = ∅ then there exists a real P-skew-symmetric matrix K such that A = e^{iK};
(iii) if there exists a real P-skew-symmetric matrix K such that A = e^{iK}, then the Jordan blocks of A with negative eigenvalues occur in pairs.

Proof. (i) Since A is P-orthogonal and P-Hermitian, it is easy to conclude that AĀ = I. Therefore (6.4.22) in [10] ensures that A = e^{iK}, for some real K.

(ii) The restriction σ(A) ∩ IR^− = ∅ means that the principal logarithm of A is defined. Let K = −i Log A. We have to show that K is real and P-skew-symmetric. Since A is P-Hermitian, it follows that A^* = P A P^{-1} and then Log A^* = P (Log A) P^{-1}. Since

Log(A^*) = (Log A)^*, it follows that Log A is P-Hermitian. On the other hand, since A is P-orthogonal, Log A is P-skew-symmetric. Hence Log A is both P-Hermitian and P-skew-symmetric. Therefore the conjugate of Log A equals −Log A and so K is real. To show that K is P-skew-symmetric is just a simple calculation.

(iii) Let K be real and P-skew-symmetric and let λ ∈ σ(K). Then −λ, λ̄ and −λ̄ also belong to σ(K). The eigenvalues of A = e^{iK} are of the form e^{iλ}, where λ is an eigenvalue of K. Hence e^{−iλ}, e^{iλ̄} and e^{−iλ̄} are also eigenvalues of A. If µ is a negative eigenvalue of A then µ = e^{iλ}, for some λ = (2k + 1)π + bi, k ∈ Z, b ∈ IR, and e^{−iλ̄} = e^{iλ}, e^{−iλ} = e^{iλ̄}. Since the matrix exponential preserves the size of the Jordan blocks, one concludes that the Jordan blocks associated to negative eigenvalues of A occur in pairs.

Remark 3.2 The converse of (iii) may not be true. In fact, suppose that P = I_{2m} and consider the orthogonal matrix A = −I_{2m}. If K is a P-skew-symmetric (i.e., skew-symmetric) matrix, then e^{iK} always has positive eigenvalues. Therefore, we cannot have e^{iK} = −I_{2m}, for any given skew-symmetric matrix K. However, if P = J and m = 1, then K = diag(π, −π) is Hamiltonian and e^{iK} = −I_2.

Theorem 3.3 Let P^T = P^{-1}, P^2 = ±I, let A be a complex P-orthogonal matrix and X := A^{*P} A. If σ(X) ∩ IR^− = ∅ then there exist a real P-orthogonal matrix R and a real P-skew-symmetric matrix K such that A = R e^{iK}.

Proof. First we observe that X = A^{*P} A is P-Hermitian and P-orthogonal. Since σ(X) ∩ IR^− = ∅, by the previous lemma there exists a real P-skew-symmetric K such that X = e^{2iK}. Hence e^{iK} is a square root of X which is P-orthogonal. Therefore a simple calculation shows that R := A e^{−iK} is P-orthogonal and P-unitary which, in turn, implies that R is real.

Lemma 3.4 Let P ∈ GL(n, IR). If A is a complex P-symmetric and P-unitary matrix then there exists a real matrix K such that A = e^{iK}. If σ(A) ∩ IR^− = ∅ then K may be chosen to be P-symmetric.

Proof. Analogous to the proof of Lemma 3.1.
Theorem 3.5 Let P^T = P^{-1}, P^2 = ±I, let A be a complex P-unitary matrix and X := A^P A. If σ(X) ∩ IR^− = ∅ then there exist a real P-orthogonal matrix R and a real P-symmetric matrix K such that A = R e^{iK}.

Proof. Similar to the proof of Theorem 3.3.

Remark 3.6 In this section we have worked with a general P ∈ GL(n, IR) and, in some cases, with P satisfying (1). The sufficient conditions stated are very restrictive on the spectrum of some of the matrices involved. However, if we study these exponential representations for particular choices of P, say P = J or P = D, we believe that the range of applications of our results may be enlarged. This study still remains to be done.

4 Generalized matrix decompositions

4.1 Generalized polar decomposition

The standard polar decomposition of a nonsingular matrix states that every A ∈ GL(n, C) may be written in the form A = UH, where U is unitary and H is Hermitian positive definite. The matrix H is always uniquely determined as H := (A^* A)^{1/2} and, if A is real, then U and H may be taken to be real.

Several extensions of this decomposition have been made. An example is the complex orthogonal/symmetric version (see [7], ch. XI) that allows us to write A = QG, where Q is complex orthogonal and G is complex symmetric. When A is real, Q and G may also be taken to be real. Another important example is the more general case concerning Lie groups. Lawson [15] proved the existence of the polar decomposition in arbitrary Lie groups equipped with an involutive automorphism. However, such a decomposition is only possible for elements near the identity. For other particular Lie groups, which include some important matrix groups, polar decompositions may be obtained for a wide range of elements of the group. Bar-On and Gray [2] showed that the elements of certain Lie groups, which they called polar groups (the pair (G, P) is called a polar group if for all A ∈ G there exists a square root √(A^P A) ∈ J_P), may be written as the product of an element in the group G_P by an element in J_P. However, it is not easy to identify polar Lie groups.
Moreover, working with these groups is very restrictive, since there are matrices which admit polar decompositions whether or not they belong to a polar group, as will become clear later. Here, we propose a different approach to generalize the polar decomposition, based on the theory of matrix square roots and on recent results concerning the groups SP(2m) and O(p, q).

In the generalized polar decomposition that will be studied here we suppose that P satisfies (1). Due to the isomorphisms stated in Theorem 2.3, the most significant groups associated to these P's are G_J and G_D. All the other groups G_{P_1}, with P_1 satisfying (1), are isomorphic to G_J or G_D. Natural extensions of our results may be easily obtained if P is either symmetric or skew-symmetric. We establish sufficient conditions for a generic P satisfying (1) and, since the polar decompositions for P = D have recently received some attention in the context of indefinite scalar

product spaces (see, for instance, [3] and [18]), we refine these conditions only for P = J, i.e., the symplectic/Hamiltonian case.

We start with the complex P-orthogonal/P-symmetric version of the polar decomposition.

Theorem 4.1 Let P^T = P^{-1}, P^2 = ±I and A ∈ GL(n, C). Then:
(i) There exist a P-orthogonal matrix Q and a P-symmetric matrix G such that A = QG.
(ii) If σ(A^{*P} A) ∩ IR^− = ∅, then there exist a P-unitary matrix U and a P-Hermitian matrix H with eigenvalues in the open right half plane such that A = UH. Moreover, H is uniquely determined as H := (A^{*P} A)^{1/2} and, if A is real, then U and H may be taken to be real.

Proof. (i) The matrix X = A^P A is P-symmetric and, since it is nonsingular, it has at least one square root which is a polynomial in A^P A (see (6.4.12) in [10]). Let G := √(A^P A) be one of those square roots. Since every polynomial preserves P-symmetry, G is also P-symmetric and hence Q := A G^{-1} is P-orthogonal.
(ii) If σ(A^{*P} A) ∩ IR^− = ∅ then H = (A^{*P} A)^{1/2} is P-Hermitian and is the only square root of A^{*P} A with eigenvalues in the open right half plane. Moreover, if A is real this square root is also real.

Remark 4.2 The order in which the matrices Q and G appear in the decomposition of (i) of the previous theorem may be changed, i.e., there exist a P-symmetric matrix G_1 and a P-orthogonal matrix Q_1 such that A = G_1 Q_1. In fact, A^T can be represented in the form A^T = QG, for some P-orthogonal Q and some P-symmetric G. This implies that A = G^T Q^T, where G_1 = G^T is P-symmetric and Q_1 = Q^T is P-orthogonal. A similar argument holds for the P-unitary/P-Hermitian case.

It is well known that the similarity relationship between two complex orthogonal (resp. symmetric, skew-symmetric) matrices may be established by means of an orthogonal matrix ([7], ch. XI); that is, if B = SAS^{-1}, for some nonsingular S, then B = QAQ^T, for some orthogonal Q.
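A numerical sketch of part (i) of Theorem 4.1 (my illustration, not the paper's code; numpy assumed): compute a square root G of A^P A through an eigendecomposition, which for a diagonalizable argument is a primary square root and hence a polynomial in A^P A, so it inherits P-symmetry; then set Q = A G^{-1}.

```python
import numpy as np

def p_transpose(A, P):
    # A^P = P^{-1} A^T P
    return np.linalg.solve(P, A.T @ P)

def primary_sqrt(X):
    # square root via eigendecomposition; valid for diagonalizable X
    w, V = np.linalg.eig(X)
    return (V * np.sqrt(w.astype(complex))) @ np.linalg.inv(V)

J = np.array([[0., 1.], [-1., 0.]])
A = np.array([[2., 1.], [0., 1.]])

X = p_transpose(A, J) @ A          # X = A^P A is P-symmetric
G = primary_sqrt(X)                # P-symmetric square root of X
Q = A @ np.linalg.inv(G)           # P-orthogonal factor

assert np.allclose(Q @ G, A)               # A = Q G
assert np.allclose(G.T @ J, J @ G)         # G is P-symmetric
assert np.allclose(Q.T @ J @ Q, J)         # Q is P-orthogonal
```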
Using the generalized polar decomposition given in the previous theorem, we show, in the next corollary, that an analogous result holds for two complex P-orthogonal (resp. P-symmetric, P-skew-symmetric) matrices.

Corollary 4.3 Let A, B ∈ gl(n, C) and suppose that P satisfies (1).
(i) If both matrices A and B are P-orthogonal (resp. P-symmetric, P-skew-symmetric) and are similar, then they are P-orthogonally similar, i.e., there exists a P-orthogonal matrix Q such that B = QAQ^P.

(ii) Suppose now that both A and B are P-unitary (resp. P-Hermitian, P-skew-Hermitian) and are similar, i.e., there exists a nonsingular complex matrix S such that B = SAS^{-1}. If σ(S^{*P} S) ∩ IR^− = ∅, then A and B are P-unitarily similar, i.e., there exists a P-unitary matrix U such that B = UAU^{*P}. If A and B are real then U may be taken to be real.

Proof. (i) Without loss of generality, we suppose that A and B are P-orthogonal. If S is a nonsingular matrix such that

B = SAS^{-1}, (3)

then B^{-1} = B^P = (S^{-1})^P A^P S^P = (S^P)^{-1} A^{-1} S^P, which implies that B = (S^P)^{-1} A S^P. Therefore, using (3), we have A(S^P S) = (S^P S)A, that is, A and S^P S commute. Since S is nonsingular, by the previous theorem we may write S = QG, where Q is P-orthogonal and G := √(S^P S) is P-symmetric. As we have already seen in the proof of that theorem, G is a polynomial in S^P S and therefore it commutes with A. Hence,

B = SAS^{-1} = (QG)A(QG)^{-1} = QAQ^{-1} = QAQ^P.

(ii) Similar to (i).

We will show in the next theorem that, for a given nonsingular P-normal matrix A (i.e., A^{*P} A = A A^{*P}), the restriction σ(A^{*P} A) ∩ IR^− = ∅ is not needed to guarantee that A admits a P-unitary/P-Hermitian polar decomposition.

Theorem 4.4 If A ∈ GL(n, C) is P-normal then there exist a P-unitary matrix U and a P-Hermitian matrix H such that A = UH.

Proof. Analogous to the proof of Theorem 5.1 in [3].

Remark 4.5 The previous theorem may not hold in the real case. For a counterexample, let

J = [[0, 1], [−1, 0]] and A = [[1, 4], [4, −1]].

The matrix A is Hamiltonian and, as a consequence, it is J-normal. If there were a real symplectic U and a real skew-Hamiltonian (i.e., J-symmetric) H such that A = UH, then A^J A = (UH)^J (UH) = H^J H = H^2. However, since A^J A = diag(−17, −17), the equation H^2 = A^J A cannot have real skew-Hamiltonian solutions.

Let us analyse the generalized polar decomposition for the particular case P = J = [[0, I_m], [−I_m, 0]]. We recall that the case P = D has been extensively analysed ([3] and [18]).

Let A be complex nonsingular. Since the complex J-orthogonal/J-symmetric polar decomposition is a particular case of the decomposition in Theorem 4.1, we only discuss the J-unitary/J-Hermitian case and its real version whenever A is real.

Before proceeding further, let us recall some basic facts about J-Hermitian (also called skew-Hamiltonian) square roots of J-Hermitian matrices. We refer to [6] for more details. While any nonsingular matrix H ∈ J_J has at least one square root √H ∈ J_J, an analogous result may not hold in J_J^*. The matrix

H = [[1, 2i], [−2i, 1]],

whose eigenvalues are −1 and 3, is an example of a matrix in J_J^* which does not have square roots in J_J^*. Indeed, if there were K ∈ J_J^* such that K^2 = H then the negative eigenvalues of H would have to occur in pairs, which is a contradiction. Nevertheless, if a nonsingular matrix H ∈ J_J^* admits the skew-Hamiltonian Jordan decomposition

H = S [[H_1, 0], [0, H_1^*]] S^{-1}, (4)

for some S ∈ G_J (i.e., S symplectic) and some H_1 ∈ GL(m, C), 2m = n, then

K = S [[H_1^{1/2}, 0], [0, (H_1^{1/2})^*]] S^{-1} ∈ J_J^*

satisfies K^2 = H.

Let us suppose now that H ∈ J_J(IR) is nonsingular. By Theorem 1 in [6] there exist S ∈ G_J(IR) and H_1 ∈ GL(m, IR) such that H = S [[H_1, 0], [0, H_1^T]] S^{-1}. Using the necessary and sufficient condition for a nonsingular real matrix to have a real square root (see (6.4.14) in [10]), it follows that H has a real skew-Hamiltonian square root if and only if the Jordan blocks of H_1 associated to real negative eigenvalues occur in pairs.

To summarize the discussion above, in the following theorem we list some sufficient conditions under which a given nonsingular matrix A admits a symplectic/skew-Hamiltonian polar decomposition.

Theorem 4.6 (a) Let A ∈ GL(2m, C). If one of the following three conditions holds:
(i) σ(A^{*J} A) ∩ IR^− = ∅,
(ii) A^{*J} A admits the skew-Hamiltonian Jordan decomposition (4), or
(iii) A is real,
then there exist U ∈ G_J^* and H ∈ J_J^* such that A = UH.

(b) Let A ∈ GL(2m, IR).
There exist U ∈ G_J(IR) and H ∈ J_J(IR) such that A = UH if and only if, for each real negative eigenvalue of A^J A, there are four equal Jordan blocks.

4.2 Other matrix decompositions

In this subsection we consider some decompositions involving symmetric and skew-symmetric matrices that may be easily generalized to P-symmetric and P-skew-symmetric matrices. An example is the following well known result ([20] and [22]): given a matrix A ∈ gl(n, C), there exist symmetric matrices F and G such that A = FG. The next theorem shows that a similar result holds for P-symmetric and P-skew-symmetric matrices, provided that P satisfies (1).

Theorem 4.7 Let A ∈ gl(n, C). If P is orthogonal and P^2 = I (resp. P^2 = −I), then there exist complex P-symmetric (resp. P-skew-symmetric) matrices F and G such that A = FG, with rank(F) = k, rank(G) = n − k + rank(A), rank(A) ≤ k ≤ n. If A is real then F and G may be taken to be real.

Proof. We assume that P^2 = I. Since A = F_1 G_1, for some symmetric F_1 and G_1, it follows that A = (F_1 P)(P G_1) = FG, where F := F_1 P and G := P G_1 are P-symmetric. The remainder of the proof is an immediate consequence of Prop. 1.1 in [20].

A necessary and sufficient condition for a matrix A to be the product of a symmetric matrix by a skew-symmetric matrix is that A is similar to −A (see Theorem 2.1 in [20]). The next theorem states an analogous result.

Theorem 4.8 Let P be orthogonal with P^2 = ±I. A matrix A ∈ gl(n, C) can be represented as A = FG, where F is complex P-symmetric and G is complex P-skew-symmetric, if and only if A is similar to −A. If A is real then F and G may be taken to be real.

Proof. Immediate consequence of Theorem 2.1 in [20] and Theorem 2.2 in this paper.

5 Application to matrix equations

In this section we illustrate how the theory developed previously may be used to solve two particular matrix equations, which arise in the characterization of h-selfdual and σ-selfdual Euclidean norms in C^n (see [17]). Our goal is to solve the following problems:

Problem 1.
Given H complex Hermitian nonsingular, find Hermitian positive definite solutions X of the matrix equation

H^{-1} X = X^{-1} H; (5)

Problem 2. Given S complex symmetric nonsingular, find Hermitian positive definite solutions X of the matrix equation

S^{-1} X = X̄^{-1} S̄. (6)

Although these problems were completely solved in [17], here we propose an alternative method based on the previous developments.

We start with the Hermitian solutions of (5). Since H is Hermitian and invertible, it is congruent with D = diag(I_p, −I_q), where p and q are, respectively, the number of positive and negative eigenvalues of H (see p. 184 in [13]); that is, there exists a nonsingular T such that

H = T D T^*. (7)

Let Y be the matrix such that X = T Y T^*. Then Y is Hermitian if and only if X is also Hermitian, and therefore solving (5) is equivalent to finding Y Hermitian such that

D Y = Y^{-1} D. (8)

Setting W := DY, it turns out that solving (8) is equivalent to finding W such that

W^2 = I, W^* D = D W.

Thus, to solve Problem 1, we may alternatively solve the following problem:

Problem 1'. Given H complex Hermitian nonsingular, having p positive eigenvalues and q negative eigenvalues, find the D-Hermitian square roots of the identity, where D = diag(I_p, −I_q).

Theorem 5.1 Let D be as before. A matrix W is a D-Hermitian square root of I if and only if it can be written in the form

W = R Δ R^{-1}, (9)

where Δ = diag(±1, ..., ±1) and R is a nonsingular matrix such that R^* D R commutes with Δ.

Proof. (⇒) Let W denote a D-Hermitian square root of the identity matrix I. Since W is a square root of I, ±1 are the only eigenvalues that W allows. Also, W is diagonalizable, because the square root operation preserves the size and the number of the blocks in the Jordan canonical form. Hence there exist a nonsingular matrix R and Δ = diag(±1, ..., ±1) such that W = R Δ R^{-1}. On the other hand, since W is D-Hermitian, R^* D R and Δ commute.
(⇐) Immediate.

Remark 5.2 If R in (9) is D-unitary then R^* D R = D commutes with Δ. Thus all matrices of the form W = R Δ R^{*D}, where R is D-unitary, are D-Hermitian square roots of I.

Corollary 5.3 A matrix X is a Hermitian solution of (5) if and only if

X = T D R Δ R^{-1} T^*,

where T and D are as in (7), Δ = diag(±1, ..., ±1) and R is a nonsingular matrix such that R^* D R commutes with Δ.

Proof. Immediate consequence of Theorem 5.1.

After studying the Hermitian solutions of (5), we are now able to characterize the Hermitian positive definite solutions.

Corollary 5.4 Given a nonsingular Hermitian matrix H, the matrix X is a Hermitian positive definite solution of the equation H^{-1} X = X^{-1} H if and only if

X = T exp([[0, K], [K^*, 0]]) T^*,

where exp(·) stands for the matrix exponential, T is the matrix in (7) and K is an arbitrary p×q complex matrix, with p and q being, respectively, the number of positive and negative eigenvalues of the given Hermitian matrix H.

Proof. (⇒) Let X be a Hermitian positive definite solution of (5). By Corollary 5.3 there exist a nonsingular matrix R and Δ = diag(±1, ..., ±1) such that X = T Y T^*, with Y := D R Δ R^{-1}. Since X and Y are congruent, one may conclude that X is Hermitian positive definite if and only if Y is. Therefore we may proceed with Y instead of X. Since Y has to be Hermitian positive definite, its principal logarithm L := Log Y is Hermitian and Y can be represented by Y = e^L. On the other hand, Y is D-unitary, because both D and W are D-unitary. Hence Log Y is D-skew-Hermitian. Since L is simultaneously Hermitian and D-skew-Hermitian, there exists a complex p×q matrix K such that

L = [[0, K], [K^*, 0]].

(⇐) Immediate.

Let us now analyse equation (6). We start with its Hermitian solutions. Since S in (6) is complex, symmetric and nonsingular, using the Takagi factorization (see (4.4.4) in [11]) we may write

S = U Γ U^T, (10)

where U is unitary and Γ is diagonal with positive entries. If Y is the matrix such that X = U Y U^*, then X is Hermitian if and only if Y also is. Therefore, equation (6) reduces to

Γ^{-1} Y = Ȳ^{-1} Γ. (11)
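Corollary 5.4 is easy to test numerically. In the sketch below (my illustration, numpy assumed) we take the simplest congruence T = I, so that H = D = diag(1, −1) with p = q = 1, pick an arbitrary 1×1 block K, and verify that X = exp([[0, K], [K^*, 0]]) is a Hermitian positive definite solution of H^{-1} X = X^{-1} H:

```python
import numpy as np

# H = D = diag(I_p, -I_q) with p = q = 1, i.e. T = I in (7)
H = np.diag([1., -1.])
k = 0.4                              # an arbitrary (here real) 1x1 block K

L = np.array([[0., k], [k, 0.]])     # Hermitian and D-skew-Hermitian
w, V = np.linalg.eigh(L)
X = (V * np.exp(w)) @ V.T            # X = exp(L)

assert np.all(np.linalg.eigvalsh(X) > 0)   # Hermitian positive definite
lhs = np.linalg.solve(H, X)                # H^{-1} X
rhs = np.linalg.solve(X, H)                # X^{-1} H
assert np.allclose(lhs, rhs)               # X solves (5)
```

Here X works out to [[cosh k, sinh k], [sinh k, cosh k]], and the identity cosh^2 k − sinh^2 k = 1 is what makes (5) hold.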

If we let G := Γ^{-1} Y then finding Hermitian solutions of (11) is equivalent to finding G which is simultaneously Γ-Hermitian and Γ-orthogonal. Since Γ is Hermitian positive definite, by Theorem 2.3 and Lemma 1 in [7], ch. XI, we may write G = C^{-1} E e^{iK} C, where C is such that Γ = C^T C, E is a real involution and K is a real skew-symmetric matrix which commutes with E. If we choose C = Γ^{1/2}, then G = Γ^{-1/2} E e^{iK} Γ^{1/2}, and the Hermitian solution Y is given by

Y = Γ^{1/2} E e^{iK} Γ^{1/2}. (12)

If one wants Y to be Hermitian positive definite then, according to the proof of Lemma 1 in [7], it is enough to take E = I in (12). We now summarize the previous discussion in the following theorem.

Theorem 5.5 A matrix X is a Hermitian solution of (6) if and only if it can be represented by

X = U Γ^{1/2} E e^{iK} Γ^{1/2} U^*, (13)

where U and Γ are as in (10), E is a real involution and K is a real skew-symmetric matrix which commutes with E. Moreover, X is Hermitian positive definite if and only if X is given as in (13) with E = I.

References

[1] G. Ammar, C. Mehl, V. Mehrmann, Schur-like forms for matrix Lie groups, Lie algebras and Jordan algebras, Linear Algebra and its Applications, 287 (1999).

[2] J. R. Bar-On and C. W. Gray, A generalized polar decomposition, Linear Algebra and its Applications, 170 (1992).

[3] Y. Bolshakov, C. V. M. van der Mee, A. C. M. Ran, B. Reichstein, L. Rodman, Polar decompositions in finite dimensional indefinite scalar product spaces: general theory, Linear Algebra and its Applications, 261 (1997).

[4] P. Benner, R. Byers, H. Fassbender, V. Mehrmann, D. Watkins, Cholesky-like factorizations of skew-symmetric matrices, Electronic Transactions on Numerical Analysis, 11 (2000).

[5] J. R. Cardoso and F. Silva Leite, Theoretical and numerical considerations about Padé approximants for the matrix logarithm, Linear Algebra and its Applications, 330 (2001).

[6] H. Faßbender, D. Mackey, N. Mackey, H. Xu, Real and complex Hamiltonian square roots of skew-Hamiltonian matrices, Technical Report #92, Department of Mathematics and Statistics, Western Michigan University, 1999.

[7] F. R. Gantmacher, Theory of Matrices, Vol. II, Chelsea, New York.

[8] I. Gohberg, P. Lancaster, L. Rodman, Matrices and Indefinite Scalar Products, Operator Theory: Advances and Applications, Vol. 8, Birkhäuser Verlag.

[9] R. A. Horn and D. I. Merino, The Jordan canonical forms of complex orthogonal and skew-symmetric matrices, Linear Algebra and its Applications, 302/303 (1999).

[10] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press.

[11] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press.

[12] A. Iserles, On Cayley-transform methods for the discretization of Lie-group equations, Technical Report 1999/NA4, DAMTP, University of Cambridge.

[13] P. Lancaster, M. Tismenetsky, The Theory of Matrices, Academic Press.

[14] A. J. Laub and K. Meyer, Canonical forms for symplectic and Hamiltonian matrices, Celestial Mechanics, 9 (1974).

[15] J. D. Lawson, Polar and Ol'shanskii decompositions, J. Reine Angew. Math., 448 (1994).

[16] Anna Lee, Secondary symmetric, skew-symmetric and orthogonal matrices, Periodica Mathematica Hungarica, 7 (1976).

[17] E. Marques de Sá, M. C. Santos, Notes on selfdual norms and products of two involutions, preprint, 1992.

[18] C. V. M. van der Mee, A. C. M. Ran, L. Rodman, Stability of selfadjoint square roots and polar decompositions in indefinite scalar product spaces, Linear Algebra and its Applications (1999).

[19] V. Mehrmann, H. Xu, Structured Jordan canonical forms for structured matrices that are Hermitian, skew-Hermitian or unitary with respect to indefinite inner products, Electronic Journal of Linear Algebra, 5 (1999).

[20] L. Rodman, Products of symmetric and skew-symmetric matrices, Linear and Multilinear Algebra, 43 (1997).

[21] F. Silva Leite, P. Crouch, Closed forms for the exponential mapping on matrix Lie groups based on Putzer's method, Journal of Mathematical Physics, 40 (1999).

[22] O. Taussky, The role of symmetric matrices in the study of general matrices, Linear Algebra and its Applications, 5 (1972).


Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form. Jay Daigle Advised by Stephan Garcia Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form Jay Daigle Advised by Stephan Garcia April 4, 2008 2 Contents Introduction 5 2 Technical Background 9 3

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Lecture 19: Isometries, Positive operators, Polar and singular value decompositions; Unitary matrices and classical groups; Previews (1)

Lecture 19: Isometries, Positive operators, Polar and singular value decompositions; Unitary matrices and classical groups; Previews (1) Lecture 19: Isometries, Positive operators, Polar and singular value decompositions; Unitary matrices and classical groups; Previews (1) Travis Schedler Thurs, Nov 18, 2010 (version: Wed, Nov 17, 2:15

More information

Basic Concepts of Group Theory

Basic Concepts of Group Theory Chapter 1 Basic Concepts of Group Theory The theory of groups and vector spaces has many important applications in a number of branches of modern theoretical physics. These include the formal theory of

More information

Clifford Algebras and Spin Groups

Clifford Algebras and Spin Groups Clifford Algebras and Spin Groups Math G4344, Spring 2012 We ll now turn from the general theory to examine a specific class class of groups: the orthogonal groups. Recall that O(n, R) is the group of

More information

MINIMAL NORMAL AND COMMUTING COMPLETIONS

MINIMAL NORMAL AND COMMUTING COMPLETIONS INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 5 59 c 8 Institute for Scientific Computing and Information MINIMAL NORMAL AND COMMUTING COMPLETIONS DAVID P KIMSEY AND

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

E2 212: Matrix Theory (Fall 2010) Solutions to Test - 1

E2 212: Matrix Theory (Fall 2010) Solutions to Test - 1 E2 212: Matrix Theory (Fall 2010) s to Test - 1 1. Let X = [x 1, x 2,..., x n ] R m n be a tall matrix. Let S R(X), and let P be an orthogonal projector onto S. (a) If X is full rank, show that P can be

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information