A revisit to a reverse-order law for generalized inverses of a matrix product and its variations


A revisit to a reverse-order law for generalized inverses of a matrix product and its variations

Yongge Tian
CEMA, Central University of Finance and Economics, Beijing, China
E-mail: yongge.tian@gmail.com

Abstract. For a pair of complex matrices, an m × n matrix A and an n × p matrix B, the general form of the reverse-order laws for generalized inverses of the product AB can be written as (AB)^(i,...,j) = B^(i,...,j)A^(i,...,j), where A^(i,...,j), B^(i,...,j), and (AB)^(i,...,j) denote {i,...,j}-inverses of A, B, and AB, respectively. There are 15^3 = 3375 possible formulations for the different choices of the generalized inverses of A, B, and AB. This paper deals with the specific reverse-order law (AB)^(1) = B†A†, where A† and B† are the Moore–Penrose inverses of A and B, respectively. We collect and derive many known and novel statements equivalent to this reverse-order law and its variations by using the methodology of ranks and ranges of matrices.

AMS classifications: 15A03; 15A09; 15A24

Keywords: matrix product, Moore–Penrose inverse, reverse-order law, rank, range

1 Introduction

Throughout this paper, R^{m×n} and C^{m×n} stand for the collections of all m × n real and complex matrices, respectively. The symbols r(A), R(A), and N(A) stand for the rank, range (column space), and kernel (null space) of a matrix A ∈ C^{m×n}, respectively; I_m denotes the identity matrix of order m, and [A, B] denotes a row block matrix consisting of A and B. The Moore–Penrose inverse of A ∈ C^{m×n}, denoted by A†, is the unique matrix X ∈ C^{n×m} satisfying the four Penrose equations
(i) AXA = A, (ii) XAX = X, (iii) (AX)* = AX, (iv) (XA)* = XA. (1.1)
Further, let E_A = I_m - AA† and F_A = I_n - A†A stand for the two orthogonal projectors induced by A. Moreover, a matrix X is called an {i,...,j}-inverse of A, denoted by A^(i,...,j), if it satisfies the ith,...,jth equations in (1.1). The collection of all {i,...,j}-inverses of A is denoted by {A^(i,...,j)}.
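The four Penrose equations in (1.1) and the two induced projectors are easy to check numerically. The following sketch (not part of the paper; it relies on `numpy.linalg.pinv`, which computes the Moore–Penrose inverse) verifies them for a random rank-deficient matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5 x 4 matrix of rank 3

X = np.linalg.pinv(A)  # Moore-Penrose inverse A^dagger

# The four Penrose equations (1.1)
assert np.allclose(A @ X @ A, A)                 # (i)   AXA = A
assert np.allclose(X @ A @ X, X)                 # (ii)  XAX = X
assert np.allclose((A @ X).conj().T, A @ X)      # (iii) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)      # (iv)  (XA)* = XA

# The induced projectors E_A and F_A are idempotent and Hermitian
E_A = np.eye(5) - A @ X
F_A = np.eye(4) - X @ A
assert np.allclose(E_A @ E_A, E_A) and np.allclose(E_A.conj().T, E_A)
assert np.allclose(F_A @ F_A, F_A) and np.allclose(F_A.conj().T, F_A)
```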
There are 15 types of {i,...,j}-inverses under (1.1), but the eight most frequently used generalized inverses of A are A†, A^(1,3,4), A^(1,2,4), A^(1,2,3), A^(1,4), A^(1,3), A^(1,2), and A^(1). In matrix theory, a fundamental operation is to find the inverse of a square matrix when it is nonsingular, or to find its generalized inverses when it is singular. Assume that A ∈ C^{m×n} and B ∈ C^{n×p} are a pair of matrices. Then A^(i,...,j), B^(i,...,j), and (AB)^(i,...,j) always exist. In this setting, one of the fundamental equalities in the theory of generalized inverses is
(AB)^(i,...,j) = B^(i,...,j)A^(i,...,j). (1.2)
This equality is a direct extension of the well-known identity (AB)^{-1} = B^{-1}A^{-1} for two invertible matrices, and is usually called the reverse-order law for generalized inverses of the matrix product AB. Obviously, (1.2) includes 15^3 = 3375 formulations for all possible choices of A^(i,...,j), B^(i,...,j), and (AB)^(i,...,j) defined above. We next present an illustrative example in statistical analysis where reverse-order laws for generalized inverses of matrix products may occur. Consider a linear random-effects model defined by
M: y = Aα + ε, α = Bβ + γ, (1.3)
where, in the first-stage model, y ∈ R^{n×1} is a vector of observable response variables, A ∈ R^{n×p} is a known matrix of arbitrary rank, α ∈ R^{p×1} is a vector of unobservable random variables, and ε ∈ R^{n×1} is a vector of randomly distributed error terms; in the second-stage model, B ∈ R^{p×k} is a known matrix of arbitrary rank, β ∈ R^{k×1} is a vector of fixed but unknown parameters, and γ ∈ R^{p×1} is a vector of unobservable random variables. Substituting the second equation into the first equation in (1.3) yields
N: y = ABβ + Aγ + ε. (1.4)
There exist two methods for deriving ordinary least-squares estimators (OLSEs) of the fixed but unknown parameter vector β in (1.3) and (1.4). A direct method is to find a β that satisfies
‖y - ABβ‖² = (y - ABβ)^T(y - ABβ) = min (1.5)

in the context of (1.4), which leads to the general expression of the OLSE of β as follows:
OLSE_N(β) = [(AB)† + F_{AB} U]y = (AB)^(1,3) y, (1.6)
where U is an arbitrary matrix. On the other hand, solving ‖y - Aα‖² = min under (1.3) leads to the OLSE of α as follows:
OLSE_M(α) = (A† + F_A U_1)y = A^(1,3) y, (1.7)
where U_1 is an arbitrary matrix. Substituting it into the second equation in (1.3) yields
A^(1,3) y = Bβ + γ. (1.8)
In this case, solving ‖A^(1,3) y - Bβ‖² = min under (1.8) leads to
OLSE_M(β) = (B† + F_B U_2)A^(1,3) y = B^(1,3) A^(1,3) y, (1.9)
where U_2 is an arbitrary matrix. Eqs. (1.6) and (1.9) are not necessarily equal, but comparing the coefficient matrices of y in (1.6) and (1.9) and their special cases (AB)† and B†A† leads to the following reverse-order laws:
(AB)^(1,3) = B^(1,3) A^(1,3), (AB)† = B†A†. (1.10)
Because AA^(i,...,j), A^(i,...,j)A, BB^(i,...,j), and B^(i,...,j)B are not necessarily identity matrices, (1.2) does not hold in general; namely, the reverse product B^(i,...,j)A^(i,...,j) does not necessarily satisfy the four Penrose equations for the Moore–Penrose inverse of AB. In other words, (1.2) holds if and only if A and B satisfy certain conditions. The characterization of the reverse-order laws in (1.2) was greatly facilitated by the development of the matrix rank methodology. Reverse-order laws for generalized inverses have been widely investigated in the literature since the 1960s owing to their applications in simplifying matrix expressions involving generalized inverses. They have been a core topic in the theory of generalized inverses ever since Penrose defined the four equations in (1.1) in the 1950s; the topic attracted much attention from the 1960s onward (see, e.g., [1, 2, 5, 6]), and the research on reverse-order laws contributed essential progress to the theory of generalized inverses and had a profound influence on the development of methodology in matrix analysis.
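The two estimation routes above can be compared numerically. The following sketch (illustrative only, not from the paper) takes the arbitrary matrices U, U_1, U_2 to be zero, so that the coefficient matrices of y in (1.6) and (1.9) reduce to (AB)† and B†A†; for a generic pair (A, B) the two coefficient matrices, and hence the two OLSEs of β, do not coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))   # first-stage design matrix
B = rng.standard_normal((4, 2))   # second-stage design matrix
y = rng.standard_normal(6)        # observed responses

# Coefficient matrices of y in (1.6) and (1.9) with U = U_1 = U_2 = 0,
# so that the {1,3}-inverses reduce to Moore-Penrose inverses
direct    = np.linalg.pinv(A @ B)                  # (AB)^dagger
two_stage = np.linalg.pinv(B) @ np.linalg.pinv(A)  # B^dagger A^dagger

# Whether the two OLSEs of beta agree is precisely the question of
# whether the reverse-order law (AB)^dagger = B^dagger A^dagger holds
print(np.allclose(direct @ y, two_stage @ y))
```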
Because both A† and B† are unique, the reverse product B†A† is unique as well, and this product plays an important role in the characterization of various reverse-order laws. Obviously, for the special choice B†A† on the right-hand side of (1.2), there are 15 possible choices of (AB)^(i,...,j) on the left-hand side, but people are most interested in the following four specific situations:
(AB)^(1) = B†A†, (AB)^(1,3) = B†A†, (AB)^(1,4) = B†A†, (AB)† = B†A†. (1.11)
Much effort has been devoted to these four reverse-order laws in the literature, and many necessary and sufficient conditions have been derived for them to hold. In this paper, the present author reconsiders the first situation in (1.11) and its variations. By definition, the following equivalence holds:
(AB)^(1) = B†A† ⟺ ABB†A†AB = AB. (1.12)
The second fundamental matrix equality in (1.12) and its variations occur widely in the theory of generalized inverses and have been studied by many authors since the 1960s; in particular, some rank formulas associated with this matrix equality were established in [9, 10]. This paper aims at collecting and proving many known and new results for the reverse-order law in (1.12) to hold by using the methodology of ranks and ranges of matrices. In Section 2, we present various rank and range formulas for matrices that will be used in the proofs of the main results. In Section 3, we derive several groups of closed-form formulas for calculating the maximum ranks of AB - ABB^(i,...,j)A^(i,...,j)AB and ABB†A†AB - ABB^(i,...,j)A^(i,...,j)AB. The main results and their proofs are presented in Section 4.

2 Some preliminaries

The purpose of this section is to introduce the mathematical foundations of generalized inverses, followed by various matrix rank formulas and their usefulness in the theory of generalized inverses. These preparations serve as tools for solving the problems described in Section 1.
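The equivalence in (1.12) can be seen at work in a small numerical example (a sketch, not from the paper): for A = [1, 0] and B = (1, 1)^T we have AB = 1 but ABB†A†AB = 1/2, so B†A† fails to be a {1}-inverse of AB.

```python
import numpy as np

A = np.array([[1.0, 0.0]])    # 1 x 2
B = np.array([[1.0], [1.0]])  # 2 x 1
AB = A @ B                    # = [[1.0]]

X = np.linalg.pinv(B) @ np.linalg.pinv(A)  # B^dagger A^dagger = [[0.5]]

# Second equality in (1.12) fails here: AB B^dagger A^dagger AB != AB,
# so B^dagger A^dagger is not a {1}-inverse of AB
print(AB @ X @ AB)  # [[0.5]], not [[1.0]]
```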
Note from the definitions of generalized inverses of a matrix that they are in fact defined to be (common) solutions of certain matrix equations. Thus, analytical expressions for generalized inverses of a matrix can be written as certain matrix-valued functions with one or more variable matrices. Indeed, analytical formulas for generalized inverses of matrices and their functions are important issues and tools in the theory of generalized inverses, and can be found in most textbooks on the subject. For instance, the basic formulas in the following lemma can be found, e.g., in [3, 4, 8].

Lemma 2.1. Let A ∈ C^{m×n}. Then the following results hold.
(a) The general expressions of the last seven generalized inverses of A listed above can be written in the following parametric forms:
A^(1,3,4) = A† + F_A V E_A, (2.1)
A^(1,2,4) = A† + A†A W E_A, (2.2)
A^(1,2,3) = A† + F_A V A A†, (2.3)
A^(1,4) = A† + W E_A, (2.4)
A^(1,3) = A† + F_A V, (2.5)
A^(1,2) = (A† + F_A V)A(A† + W E_A), (2.6)
A^(1) = A† + F_A V + W E_A, (2.7)
where the two matrices V, W ∈ C^{n×m} are arbitrary.
(b) The following matrix equalities hold:
AA^(1,3,4) = AA^(1,2,3) = AA^(1,3) = AA†, (2.8)
A^(1,3,4)A = A^(1,2,4)A = A^(1,4)A = A†A, (2.9)
AA^(1,2,4) = AA^(1,4) = AA^(1,2) = AA^(1) = AA† + A W E_A, (2.10)
A^(1,2,3)A = A^(1,3)A = A^(1,2)A = A^(1)A = A†A + F_A V A, (2.11)
where the two matrices V and W are arbitrary.

Lemma 2.2 ([10]). Let A ∈ C^{m×n} and B ∈ C^{n×p} be given. Then the product B†A† can be written as
B†A† = [B*, 0] [B*BB*, B*A*; 0, -A*AA*]† [0; A*] := P J† Q, (2.12)
where [·, ·; ·, ·] denotes the corresponding 2 × 2 block matrix, and the block matrices P, J, and Q satisfy
r(J) = r(A) + r(B), R(Q) ⊆ R(J), R(P*) ⊆ R(J*). (2.13)

A powerful tool for approaching relations between matrix expressions and matrix equalities is the use of various expansion formulas for calculating ranks of matrices (also called the matrix rank methodology). Recall that A = 0 if and only if r(A) = 0, so that, for two matrices A and B of the same size,
A = B ⟺ r(A - B) = 0. (2.14)
Furthermore, for two sets S_1 and S_2 consisting of matrices of the same size, the following assertions hold:
S_1 ∩ S_2 ≠ ∅ ⟺ min_{A ∈ S_1, B ∈ S_2} r(A - B) = 0; (2.15)
S_2 ⊆ S_1 ⟺ max_{B ∈ S_2} min_{A ∈ S_1} r(A - B) = 0. (2.16)
These equivalences provide a highly flexible framework for characterizing equalities of matrices via ranks of matrices. Once expansion formulas for the rank of A - B are established, we can use them to characterize relations between two matrices A and B, and to obtain many valuable consequences. For instance, (1.2) holds if and only if
min_{(AB)^(i,...,j), A^(i,...,j), B^(i,...,j)} r[(AB)^(i,...,j) - B^(i,...,j)A^(i,...,j)] = 0.
This idea was first introduced by the present author in the early 1990s when characterizing reverse-order laws for Moore–Penrose inverses of matrix products. In the past thirty years, the present author has established thousands of closed-form formulas for calculating ranks of matrices and used them to derive a large number of results on matrix equalities, matrix equations, matrix set inclusions, matrix-valued functions, etc.; the matrix rank methodology has thus become one of the most important and widely used tools for characterizing matrix equalities that involve generalized inverses of matrices. In order to establish and simplify various matrix equalities composed of generalized inverses of matrices, we need the following well-known rank formulas, which also make the paper self-contained.
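As a quick sanity check on the parametric forms (a sketch, not part of the paper), one can verify numerically that A† + F_A V satisfies Penrose equations (i) and (iii) for an arbitrary V, as (2.5) claims:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5 x 4, rank 3
V = rng.standard_normal((4, 5))                                # arbitrary parameter matrix

Ad = np.linalg.pinv(A)
F_A = np.eye(4) - Ad @ A

X = Ad + F_A @ V  # a {1,3}-inverse of A by (2.5)

assert np.allclose(A @ X @ A, A)             # Penrose equation (i)
assert np.allclose((A @ X).conj().T, A @ X)  # Penrose equation (iii): AX = AA^dagger is Hermitian
```

The variable term F_A V is annihilated on the left by A, which is exactly why the choice of V is free.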

Lemma 2.3 ([7]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, and D ∈ C^{l×k}. Then
r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A), (2.17)
r[A; C] = r(A) + r(C F_A) = r(C) + r(A F_C), (2.18)
r[A, B; C, 0] = r(B) + r(C) + r(E_B A F_C), (2.19)
r[A, B; C, D] = r(A) + r[0, E_A B; C F_A, D - C A† B], (2.20)
where [A; C] denotes the column block matrix with A stacked on C and [A, B; C, D] the corresponding 2 × 2 block matrix. If R(B) ⊆ R(A) and R(C*) ⊆ R(A*), then
r[A, B; C, D] = r(A) + r(D - C A† B). (2.21)
Furthermore, the following results hold.
(a) r[A, B] = r(A) ⟺ R(B) ⊆ R(A) ⟺ AA†B = B ⟺ E_A B = 0.
(b) r[A; C] = r(A) ⟺ R(C*) ⊆ R(A*) ⟺ CA†A = C ⟺ C F_A = 0.
(c) r[A, B] = r(A) + r(B) ⟺ R(A) ∩ R(B) = {0} ⟺ R[(E_A B)*] = R(B*) ⟺ R[(E_B A)*] = R(A*).
(d) r[A; C] = r(A) + r(C) ⟺ R(A*) ∩ R(C*) = {0} ⟺ R(C F_A) = R(C) ⟺ R(A F_C) = R(A).

Lemma 2.4 ([10]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, and D ∈ C^{l×k}. Then
r(D - C A† B) = r[A*AA*, A*B; CA*, D] - r(A). (2.22)
In particular,
r(A† - A*) = r(AA*A - A). (2.23)

The following result is well known.

Lemma 2.5. If A ∈ C^{n×n}, then
r(A - A²) = r(I_n - A) + r(A) - n. (2.24)

Lemma 2.6 ([15]). Let P, Q ∈ C^{m×m} be two orthogonal projectors. Then
r(PQ - QP) = 2r[P, Q] + 2r(PQ) - 2r(P) - 2r(Q). (2.25)

In addition, we use the following simple properties (see [3, 4, 8]) when simplifying various matrix rank and range equalities:
R(A) ⊆ R(B) and r(A) = r(B) ⟹ R(A) = R(B), (2.26)
R(A) ⊆ R(B) ⟹ R(PA) ⊆ R(PB), (2.27)
R(A) = R(B) ⟹ R(PA) = R(PB), (2.28)
R(A) = R(AA*) = R(AA*A) = R(AA†) = R[(A†)*], (2.29)
R(A*) = R(A*A) = R(A*AA*) = R(A†) = R(A†A), (2.30)
R(ABB*) = R(ABB†) = R(AB), (2.31)
R(A_1) = R(A_2) and R(B_1) = R(B_2) ⟹ r[A_1, B_1] = r[A_2, B_2]. (2.32)
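The rank formulas of Lemma 2.3 are straightforward to confirm numerically (a sketch, not from the paper); here (2.17) is checked directly, and (2.21) is checked with B and C constructed so that the two range conditions hold:

```python
import numpy as np
from numpy.linalg import matrix_rank as r, pinv

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5 x 4, rank 3
B = rng.standard_normal((5, 2))

E_A = np.eye(5) - A @ pinv(A)

# (2.17): r[A, B] = r(A) + r(E_A B)
assert r(np.hstack([A, B])) == r(A) + r(E_A @ B)

# (2.21): choose B0 = AY and C0 = XA, so that R(B0) in R(A), R(C0*) in R(A*)
X = rng.standard_normal((3, 5))
Y = rng.standard_normal((4, 2))
B0, C0, D = A @ Y, X @ A, rng.standard_normal((3, 2))
block = np.block([[A, B0], [C0, D]])
assert r(block) == r(A) + r(D - C0 @ pinv(A) @ B0)
```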

Lemma 2.7 ([12, 13]). Let A ∈ C^{m×n} and B ∈ C^{n×p} be given. Then the following rank identities hold:
r(A†ABB†) = r(A†AB) = r(ABB†) = r(AB), (2.33)
r(BB†A†A) = r(BB†A†) = r(B†A†A) = r(B†A†) = r(AB), (2.34)
r(ABB*A*) = r(B*A*AB) = r(AB), (2.35)
r[(A*A)^{1/2}(BB*)^{1/2}] = r[(BB*)^{1/2}(A*A)^{1/2}] = r(AB), (2.36)
r(B*A*) = r(B†A*) = r(B*A†) = r(AB), (2.37)
r[(A†)*(B†)*] = r[(A†)*B] = r[A(B†)*] = r(AB), (2.38)
r(BB*A*A) = r(BB†A*A) = r(BB*A†A) = r(AB), (2.39)
r(A*ABB*) = r(A†ABB*) = r(A*ABB†) = r(AB), (2.40)
r(ABB*A†) = r(ABB†A*) = r(ABB†A†) = r(AB), (2.41)
r(B*A†AB) = r(B†A*AB) = r(B†A†AB) = r(AB), (2.42)
r(ABB†A†AB) = r(ABB*A†AB) = r(ABB†A*AB) = r(AB), (2.43)
r[(BB*)†(A*A)†] = r[(BB*)†(A*A)] = r[(BB*)(A*A)†] = r(AB), (2.44)
r[B*(A*A)†] = r(B*A*A) = r[B†(A*A)†] = r(AB), (2.45)
r[(BB*)†A*] = r[(BB*)†A†] = r(BB*A†) = r(AB). (2.46)

Lemma 2.8 ([11, 14]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, and C ∈ C^{l×n}. Then
max_{X ∈ C^{k×l}} r(A - BXC) = min{ r[A, B], r[A; C] }, (2.47)
min_{X ∈ C^{k×l}} r(A - BXC) = r[A, B] + r[A; C] - r[A, B; C, 0]. (2.48)

3 Rank formulas for some matrix expressions

By definition, B^(i,...,j)A^(i,...,j) ∈ {(AB)^(1)} if and only if ABB^(i,...,j)A^(i,...,j)AB = AB. Also, by (2.14),
ABB^(i,...,j)A^(i,...,j)AB = AB ⟺ r(AB - ABB^(i,...,j)A^(i,...,j)AB) = 0. (3.1)
In order to establish closed-form formulas for calculating the ranks of the differences, we need to rewrite
AB - ABB^(i,...,j)A^(i,...,j)AB (3.2)
and
ABB†A†AB - ABB^(i,...,j)A^(i,...,j)AB (3.3)
as certain linear or nonlinear matrix-valued functions that involve one or more variable matrices. We then establish exact formulas for calculating the maximum ranks of the two expressions, and use the formulas to derive identifying conditions for (1.12) to hold. The following expansion formulas follow directly from (2.1)–(2.11).
For every choice of B^(σ) with σ ∈ {†, (1,3,4), (1,2,3), (1,3)} and of A^(τ) with τ ∈ {†, (1,3,4), (1,2,4), (1,4)},
AB - ABB^(σ)A^(τ)AB = AB - ABB†A†AB. (3.4)

For every choice of B^(σ) with σ ∈ {†, (1,3,4), (1,2,3), (1,3)} and of A^(τ) with τ ∈ {(1,2,3), (1,3), (1,2), (1)},
AB - ABB^(σ)A^(τ)AB = AB - ABB†A†AB - ABB†F_A W AB. (3.5)
For every choice of B^(σ) with σ ∈ {(1,2,4), (1,4), (1,2), (1)} and of A^(τ) with τ ∈ {†, (1,3,4), (1,2,4), (1,4)},
AB - ABB^(σ)A^(τ)AB = AB - ABB†A†AB - ABV E_B A†AB. (3.6)
Correspondingly, for the choices in (3.5),
ABB†A†AB - ABB^(σ)A^(τ)AB = ABB†F_A W AB, (3.7)
and for the choices in (3.6),
ABB†A†AB - ABB^(σ)A^(τ)AB = ABV E_B A†AB, (3.8)
where V and W are variable matrices of appropriate sizes.
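One instance of (3.4) can be checked numerically (a sketch, not from the paper): building a {1,3,4}-inverse of B and a {1,4}-inverse of A from the parametric forms (2.1) and (2.4), the variable parts drop out of the difference:

```python
import numpy as np
from numpy.linalg import pinv

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5 x 4, rank 3
B = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # 4 x 3, rank 2

Ad, Bd = pinv(A), pinv(B)
E_A = np.eye(5) - A @ Ad
E_B = np.eye(4) - B @ Bd
F_B = np.eye(3) - Bd @ B

V = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 5))

B134 = Bd + F_B @ V @ E_B  # a {1,3,4}-inverse of B, by (2.1)
A14  = Ad + W @ E_A        # a {1,4}-inverse of A, by (2.4)

# One instance of (3.4): AB B^(1,3,4) A^(1,4) AB = AB B^dagger A^dagger AB,
# because AB F_B = 0 kills V and E_A AB = 0 kills W
lhs = A @ B - A @ B @ B134 @ A14 @ A @ B
rhs = A @ B - A @ B @ Bd @ Ad @ A @ B
assert np.allclose(lhs, rhs)
```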
Applying the preliminary formulas in Section 2, we obtain the following results.

Theorem 3.1. Let A ∈ C^{m×n} and B ∈ C^{n×p}, and denote M = [A*, B].
(a) ([9, 10]) The rank of AB - ABB†A†AB satisfies the identity
r(AB - ABB†A†AB) = r(M) + r(AB) - r(A) - r(B). (3.9)
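Identity (3.9) can be confirmed numerically for random rank-deficient matrices (a sketch, not from the paper):

```python
import numpy as np
from numpy.linalg import matrix_rank as r, pinv

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5 x 4, rank 3
B = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # 4 x 3, rank 2

D = A @ B - A @ B @ pinv(B) @ pinv(A) @ A @ B
M = np.hstack([A.conj().T, B])  # M = [A*, B]

assert r(D) == r(M) + r(A @ B) - r(A) - r(B)  # identity (3.9)
```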

(b) For each choice of B^(σ) with σ ∈ {†, (1,3,4), (1,2,3), (1,3)} and of A^(τ) with τ ∈ {(1,2,3), (1,3), (1,2), (1)},
max_{B^(σ), A^(τ)} r(AB - ABB^(σ)A^(τ)AB) = r(M) + r(AB) - r(A) - r(B), (3.10)
the maximum being taken over the variable matrices in the corresponding generalized inverses.
(c) For each choice of B^(σ) with σ ∈ {(1,2,4), (1,4), (1,2), (1)} and of A^(τ) with τ ∈ {†, (1,3,4), (1,2,4), (1,4)},
max_{B^(σ), A^(τ)} r(AB - ABB^(σ)A^(τ)AB) = r(M) + r(AB) - r(A) - r(B). (3.11)
(d) For each choice of B^(σ) and A^(τ) as in (b),
max_{B^(σ), A^(τ)} r(ABB†A†AB - ABB^(σ)A^(τ)AB) = r(M) + r(AB) - r(A) - r(B). (3.12)
(e) For each choice of B^(σ) and A^(τ) as in (c),
max_{B^(σ), A^(τ)} r(ABB†A†AB - ABB^(σ)A^(τ)AB) = r(M) + r(AB) - r(A) - r(B). (3.13)

Proof. Applying (2.21) to (2.12), using the rank and range conditions in (2.13), and simplifying by elementary block matrix operations, we obtain
r(AB - ABB†A†AB) = r(AB - AB·PJ†Q·AB) = r([A*, B]*[A*, B]) + r(AB) - r(A) - r(B) = r(M) + r(AB) - r(A) - r(B), (3.14)
thus establishing (3.9). Applying (2.47) to (3.5) gives
max_W r(AB - ABB†A†AB - ABB†F_A W AB) = min{ r[AB - ABB†A†AB, ABB†F_A], r(AB) }, (3.15)
min_W r(AB - ABB†A†AB - ABB†F_A W AB) = r[AB - ABB†A†AB, ABB†F_A] - r(ABB†F_A), (3.16)
where, by (2.17) and (2.18),
r[AB - ABB†A†AB, ABB†F_A] = r(M) + r(AB) - r(A) - r(B) (3.17)
and
r(ABB†F_A) = r(M) + r(AB) - r(A) - r(B). (3.18)
Substituting (3.17) and (3.18) into (3.15) and (3.16), and noticing that r(M) + r(AB) - r(A) - r(B) ≤ r(AB), we obtain (3.10). By a similar approach applied to (3.6), we have
max_V r(AB - ABB†A†AB - ABV E_B A†AB) = r(M) + r(AB) - r(A) - r(B),
as required for (3.11). Applying (2.47) and (3.18) to (3.7) and (3.8) gives
max_W r(ABB†F_A W AB) = r(ABB†F_A) = r(M) + r(AB) - r(A) - r(B), (3.19)
max_V r(ABV E_B A†AB) = r(E_B A†AB) = r(M) + r(AB) - r(A) - r(B), (3.20)
establishing (3.12) and (3.13).

4 Main Results

Theorem 4.1. Let A ∈ C^{m×n} and B ∈ C^{n×p}, and denote M = [A*, B]. Then the following statements are equivalent:
1. B†A† ∈ {(AB)^(1)}, namely, ABB†A†AB = AB.

2. B†A† ∈ {(AB)^(2)}, namely, B†A†ABB†A† = B†A†.
3. B†A† ∈ {(AB)^(1,2)}.
4. B†A† ∈ {[(A†)*B]^(1)}.
5. B†A† ∈ {[A(B†)*]^(1)}.
6. B†(A*A)† ∈ {(A*AB)^(1)} and/or (BB*)†A† ∈ {(ABB*)^(1)}.
7. (BB*)†(A*A)† ∈ {(A*ABB*)^(1)} and/or (A*A)†(BB*)† ∈ {(BB*A*A)^(1)}.
8. [(BB*)^{1/2}]†[(A*A)^{1/2}]† ∈ {[(A*A)^{1/2}(BB*)^{1/2}]^(1)} and/or [(A*A)^{1/2}]†[(BB*)^{1/2}]† ∈ {[(BB*)^{1/2}(A*A)^{1/2}]^(1)}.
9. (BB*B)†(AA*A)† ∈ {(AA*ABB*B)^(1)}.
10. B†A†A ∈ {(A†AB)^(1)} and/or BB†A† ∈ {(ABB†)^(1)}.
11. BB†A†A ∈ {(A†ABB†)^(1)} and/or A†ABB† ∈ {(BB†A†A)^(1)}.
12. BB†F_A ∈ {(F_A BB†)^(1)} and/or F_A BB† ∈ {(BB†F_A)^(1)}.
13. E_B A†A ∈ {(A†AE_B)^(1)} and/or A†AE_B ∈ {(E_B A†A)^(1)}.
14. E_B F_A ∈ {(F_A E_B)^(1)} and/or F_A E_B ∈ {(E_B F_A)^(1)}.
15. (A†ABB†)† = BB†A†A and/or (BB†A†A)† = A†ABB†.
16. (BB†F_A)† = F_A BB† and/or (F_A BB†)† = BB†F_A.
17. (A†AE_B)† = E_B A†A and/or (E_B A†A)† = A†AE_B.
18. (F_A E_B)† = E_B F_A and/or (E_B F_A)† = F_A E_B.
19. A†ABB† = BB†A†A and/or F_A BB† = BB†F_A and/or A†AE_B = E_B A†A and/or F_A E_B = E_B F_A.
20. (A†ABB†)² = A†ABB† and/or (BB†A†A)² = BB†A†A.
21. (F_A BB†)² = F_A BB† and/or (BB†F_A)² = BB†F_A.
22. (A†AE_B)² = A†AE_B and/or (E_B A†A)² = E_B A†A.
23. (F_A E_B)² = F_A E_B and/or (E_B F_A)² = E_B F_A.
24. (ABB†A†)² = ABB†A† and/or (B†A†AB)² = B†A†AB.
25. (AE_B A†)² = AE_B A† and/or (B†F_A B)² = B†F_A B.
26. E_B A†ABB† = 0 and/or BB†A†AE_B = 0.
27. A†ABB†F_A = 0 and/or F_A BB†A†A = 0.
28. [A*, B][A*, B]† = A†A + BB† - A†ABB† and/or [A*, B][A*, B]† = A†A + BB† - BB†A†A.
29. [F_A, B][F_A, B]† = F_A + BB† - F_A BB† and/or [F_A, B][F_A, B]† = F_A + BB† - BB†F_A.
30. [A*, E_B][A*, E_B]† = A†A + E_B - A†AE_B and/or [A*, E_B][A*, E_B]† = A†A + E_B - E_B A†A.
31. [F_A, E_B][F_A, E_B]† = F_A + E_B - F_A E_B and/or [F_A, E_B][F_A, E_B]† = F_A + E_B - E_B F_A.
32. {B†A^(1)} ⊆ {(AB)^(1)}.
33. {B†A^(1,2)} ⊆ {(AB)^(1)}.
34. {B†A^(1,3)} ⊆ {(AB)^(1)}.
35. {B†A^(1,4)} ⊆ {(AB)^(1)}.
36. {B†A^(1,2,3)} ⊆ {(AB)^(1)}.
37. {B†A^(1,2,4)} ⊆ {(AB)^(1)}.

38. {B†A^(1,3,4)} ⊆ {(AB)^(1)}.
39. {B^(1,3,4)A^(1)} ⊆ {(AB)^(1)}.
40. {B^(1,3,4)A^(1,2)} ⊆ {(AB)^(1)}.
41. {B^(1,3,4)A^(1,3)} ⊆ {(AB)^(1)}.
42. {B^(1,3,4)A^(1,4)} ⊆ {(AB)^(1)}.
43. {B^(1,3,4)A^(1,2,3)} ⊆ {(AB)^(1)}.
44. {B^(1,3,4)A^(1,2,4)} ⊆ {(AB)^(1)}.
45. {B^(1,3,4)A^(1,3,4)} ⊆ {(AB)^(1)}.
46. {B^(1,3,4)A†} ⊆ {(AB)^(1)}.
47. {B^(1,2,4)A^(1,4)} ⊆ {(AB)^(1)}.
48. {B^(1,2,4)A^(1,2,4)} ⊆ {(AB)^(1)}.
49. {B^(1,2,4)A^(1,3,4)} ⊆ {(AB)^(1)}.
50. {B^(1,2,4)A†} ⊆ {(AB)^(1)}.
51. {B^(1,2,3)A^(1)} ⊆ {(AB)^(1)}.
52. {B^(1,2,3)A^(1,2)} ⊆ {(AB)^(1)}.
53. {B^(1,2,3)A^(1,3)} ⊆ {(AB)^(1)}.
54. {B^(1,2,3)A^(1,4)} ⊆ {(AB)^(1)}.
55. {B^(1,2,3)A^(1,2,3)} ⊆ {(AB)^(1)}.
56. {B^(1,2,3)A^(1,2,4)} ⊆ {(AB)^(1)}.
57. {B^(1,2,3)A^(1,3,4)} ⊆ {(AB)^(1)}.
58. {B^(1,2,3)A†} ⊆ {(AB)^(1)}.
59. {B^(1,4)A^(1,4)} ⊆ {(AB)^(1)}.
60. {B^(1,4)A^(1,2,4)} ⊆ {(AB)^(1)}.
61. {B^(1,4)A^(1,3,4)} ⊆ {(AB)^(1)}.
62. {B^(1,4)A†} ⊆ {(AB)^(1)}.
63. {B^(1,3)A^(1)} ⊆ {(AB)^(1)}.
64. {B^(1,3)A^(1,2)} ⊆ {(AB)^(1)}.
65. {B^(1,3)A^(1,3)} ⊆ {(AB)^(1)}.
66. {B^(1,3)A^(1,4)} ⊆ {(AB)^(1)}.
67. {B^(1,3)A^(1,2,3)} ⊆ {(AB)^(1)}.
68. {B^(1,3)A^(1,2,4)} ⊆ {(AB)^(1)}.
69. {B^(1,3)A^(1,3,4)} ⊆ {(AB)^(1)}.
70. {B^(1,3)A†} ⊆ {(AB)^(1)}.
71. {B^(1,2)A^(1,4)} ⊆ {(AB)^(1)}.
72. {B^(1,2)A^(1,2,4)} ⊆ {(AB)^(1)}.

73. {B^(1,2)A^(1,3,4)} ⊆ {(AB)^(1)}.
74. {B^(1,2)A†} ⊆ {(AB)^(1)}.
75. {B^(1)A^(1,4)} ⊆ {(AB)^(1)}.
76. {B^(1)A^(1,2,4)} ⊆ {(AB)^(1)}.
77. {B^(1)A^(1,3,4)} ⊆ {(AB)^(1)}.
78. {B^(1)A†} ⊆ {(AB)^(1)}.
79. ABB^(1,3)A^(1)AB is invariant with respect to the choices of A^(1) and B^(1,3).
80. ABB^(1)A^(1,4)AB is invariant with respect to the choices of A^(1,4) and B^(1).
81. r[A*, B] = r(A) + r(B) - r(AB).
82. r[F_A, B] = r(F_A) + r(B) - r(F_A B).
83. r[A*, E_B] = r(A) + r(E_B) - r(AE_B).
84. r[F_A, E_B] = r(F_A) + r(E_B) - r(F_A E_B).
85. r(I_n - A†ABB†) = n - r(A†ABB†) and/or r(I_n - BB†A†A) = n - r(BB†A†A).
86. r(I_n - F_A BB†) = n - r(F_A BB†) and/or r(I_n - BB†F_A) = n - r(BB†F_A).
87. r(I_n - A†AE_B) = n - r(A†AE_B) and/or r(I_n - E_B A†A) = n - r(E_B A†A).
88. r(I_n - F_A E_B) = n - r(F_A E_B) and/or r(I_n - E_B F_A) = n - r(E_B F_A).
89. r(I_m - ABB†A†) = m - r(ABB†A†) and/or r(I_p - B†A†AB) = p - r(B†A†AB).
90. r(I_m - AE_B A†) = m - r(AE_B A†) and/or r(I_p - B†F_A B) = p - r(B†F_A B).
91. dim[R(A*) ∩ R(B)] = r(AB) and/or dim[R(F_A) ∩ R(B)] = r(F_A B) and/or dim[R(A*) ∩ R(E_B)] = r(AE_B) and/or dim[R(F_A) ∩ R(E_B)] = r(F_A E_B).
92. R(A†ABB†) ⊆ R(BB†) and/or R(BB†A†A) ⊆ R(A†A).
93. R(F_A BB†) ⊆ R(BB†) and/or R(BB†F_A) ⊆ R(F_A).
94. R(A†AE_B) ⊆ R(E_B) and/or R(E_B A†A) ⊆ R(A†A).
95. R(F_A E_B) ⊆ R(E_B) and/or R(E_B F_A) ⊆ R(F_A).
96. R(A†ABB†) = R(A†A) ∩ R(BB†) and/or R(BB†A†A) = R(A†A) ∩ R(BB†).
97. R(F_A BB†) = R(F_A) ∩ R(BB†) and/or R(BB†F_A) = R(F_A) ∩ R(BB†).
98. R(A†AE_B) = R(A†A) ∩ R(E_B) and/or R(E_B A†A) = R(A†A) ∩ R(E_B).
99. R(F_A E_B) = R(F_A) ∩ R(E_B) and/or R(E_B F_A) = R(F_A) ∩ R(E_B).
100. R(A†ABB†) = R(BB†A†A) and/or R(F_A BB†) = R(BB†F_A) and/or R(A†AE_B) = R(E_B A†A) and/or R(F_A E_B) = R(E_B F_A).
101. R(AB) ∩ R(AE_B) = {0} and/or R(F_A B) ∩ R(F_A E_B) = {0} and/or R(B*A*) ∩ R(B*F_A) = {0} and/or R(E_B A*) ∩ R(E_B F_A) = {0}.

Proof. Setting both sides of (3.9) equal to zero leads to the equivalence of 1 and 81.
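The equivalence of statements 1 and 81 is easy to probe numerically (a sketch, not from the paper): for each trial pair (A, B), the equality ABB†A†AB = AB and the rank condition r[A*, B] = r(A) + r(B) - r(AB) are tested jointly, on a pair for which both fail and on the pair (A, A*), for which both hold.

```python
import numpy as np
from numpy.linalg import matrix_rank as r, pinv

def check(A, B):
    AB = A @ B
    holds_1  = np.allclose(AB @ pinv(B) @ pinv(A) @ AB, AB)          # statement 1
    holds_81 = r(np.hstack([A.conj().T, B])) == r(A) + r(B) - r(AB)  # statement 81
    return holds_1, holds_81

# A pair for which the reverse-order law fails: both statements are False
A = np.array([[1.0, 0.0]])
B = np.array([[1.0], [1.0]])
assert check(A, B) == (False, False)

# The pair (A, A*): statement 81 always holds, and so does statement 1
rng = np.random.default_rng(6)
A2 = rng.standard_normal((4, 3))
assert check(A2, A2.conj().T) == (True, True)
```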
By (2.17) and (2.18),
r(F_A) = n - r(A), r(F_A B) = r[A*, B] - r(A), (4.1)
r(E_B) = n - r(B), r(AE_B) = r[A*, B] - r(B), (4.2)
r[F_A, B] = r(F_A) + r(A†AB) = n - r(A) + r(AB), (4.3)
r[A*, E_B] = r(ABB†) + r(E_B) = n - r(B) + r(AB), (4.4)
r[F_A, E_B] = r(F_A) + r(A†AE_B) = n - r(A) - r(B) + r[A*, B], (4.5)
r(F_A E_B) = r[A*, E_B] - r(A) = n - r(A) - r(B) + r(AB). (4.6)

Substituting (4.1)–(4.6) into the rank equalities in 82, 83, and 84 and simplifying, we obtain the equivalence of 81–84. Applying (3.9) to B†A† - B†A†ABB†A† and simplifying by (2.29), (2.30), (2.32), and (2.37), we obtain
r(B†A† - B†A†ABB†A†) = r[(B†)*, A†] + r(B†A†) - r(B†) - r(A†) = r[A*, B] + r(AB) - r(A) - r(B). (4.7)
Setting both sides of (4.7) equal to zero, we obtain the equivalence of 2 and 81. Both 1 and 2 imply 3; conversely, 3 implies 1 and 2. The following rank formulas can be established by a similar approach:
r[(A†)*B - (A†)*BB†A†(A†)*B] = r(M) + r(AB) - r(A) - r(B), (4.8)
r[A(B†)* - A(B†)*B†A†A(B†)*] = r(M) + r(AB) - r(A) - r(B), (4.9)
r[(A*A)B - (A*A)BB†(A*A)†(A*A)B] = r(M) + r(AB) - r(A) - r(B), (4.10)
r[A(BB*) - A(BB*)(BB*)†A†A(BB*)] = r(M) + r(AB) - r(A) - r(B), (4.11)
r[(A*A)(BB*) - (A*A)(BB*)(BB*)†(A*A)†(A*A)(BB*)] = r(M) + r(AB) - r(A) - r(B), (4.12)
r{(A*A)^{1/2}(BB*)^{1/2} - (A*A)^{1/2}(BB*)^{1/2}[(BB*)^{1/2}]†[(A*A)^{1/2}]†(A*A)^{1/2}(BB*)^{1/2}} = r(M) + r(AB) - r(A) - r(B), (4.13)
r[(AA*A)(BB*B) - (AA*A)(BB*B)(BB*B)†(AA*A)†(AA*A)(BB*B)] = r(M) + r(AB) - r(A) - r(B), (4.14)
r[A†AB - (A†AB)B†A†A(A†AB)] = r(M) + r(AB) - r(A) - r(B), (4.15)
r[ABB† - (ABB†)BB†A†(ABB†)] = r(M) + r(AB) - r(A) - r(B), (4.16)
r[A†ABB† - (A†ABB†)BB†A†A(A†ABB†)] = r(M) + r(AB) - r(A) - r(B). (4.17)
Setting all sides of (4.8)–(4.17) equal to zero leads to the equivalence of 4–11 and 81. The following rank formulas can be established by a similar approach:
r[F_A BB† - (F_A BB†)BB†F_A(F_A BB†)] = r[F_A, B] - r(F_A) - r(B) + r(F_A B), (4.18)
r[A†AE_B - (A†AE_B)E_B A†A(A†AE_B)] = r[A*, E_B] - r(A) - r(E_B) + r(AE_B), (4.19)
r[F_A E_B - (F_A E_B)E_B F_A(F_A E_B)] = r[F_A, E_B] - r(F_A) - r(E_B) + r(F_A E_B). (4.20)
Setting both sides of (4.18)–(4.20) equal to zero leads to the equivalences of 12 and 82, 13 and 83, and 14 and 84.

Applying (2.22) and simplifying by (2.17), we obtain
r(I_n - BB†A†A) = r(I_n - A†ABB†) = n + r(M) - r(A) - r(B), (4.21)
r(I_m - ABB†A†) = r(B*F_A B) - r(B) + m = r(F_A B) - r(B) + m = r(M) - r(A) - r(B) + m, (4.22)
r(I_p - B†A†AB) = r(AE_B A*) - r(A) + p = r(E_B A*) - r(A) + p = r(M) - r(A) - r(B) + p. (4.23)
Combining (4.21)–(4.23) with (2.39)–(2.42) yields
r(I_n - BB†A†A) - n + r(BB†A†A) = r(M) - r(A) - r(B) + r(AB), (4.24)
r(I_m - ABB†A†) - m + r(ABB†A†) = r(M) - r(A) - r(B) + r(AB), (4.25)
r(I_p - B†A†AB) - p + r(B†A†AB) = r(M) - r(A) - r(B) + r(AB). (4.26)
Setting both sides of (4.24)–(4.26) equal to zero leads to the equivalence of 85, 89, and 81. Replacing A†A with F_A in 85 leads to the equivalence of 86 and 82; replacing BB† with E_B in 85 leads to the equivalence of 87 and 83; replacing A†A with F_A and BB† with E_B in 85 leads to the equivalence of 88 and 84. Replacing A†A with F_A and BB† with E_B in 89, respectively, leads to the equivalences of 90 with 82 and 83. Applying (2.24) and (4.21)–(4.23) to (A†ABB†)² - A†ABB†, we obtain
r[(A†ABB†)² - A†ABB†] = r(I_n - A†ABB†) + r(A†ABB†) - n = r(M) - r(A) - r(B) + r(AB), (4.27)
r[(BB†A†A)² - BB†A†A] = r(I_n - BB†A†A) + r(BB†A†A) - n = r(M) - r(A) - r(B) + r(AB), (4.28)
r[(ABB†A†)² - ABB†A†] = r(I_m - ABB†A†) + r(ABB†A†) - m = r(M) - r(A) - r(B) + r(AB), (4.29)
r[(B†A†AB)² - B†A†AB] = r(I_p - B†A†AB) + r(B†A†AB) - p = r(M) - r(A) - r(B) + r(AB). (4.30)
Setting both sides of (4.27)–(4.30) equal to zero leads to the equivalence of 20, 24, and 81.

Applying (2.23) to (A†ABB†)† - BB†A†A, we obtain
r[(A†ABB†)† - BB†A†A] = r[(A†ABB†)† - (A†ABB†)*] = r[(A†ABB†)(A†ABB†)*(A†ABB†) - A†ABB†] = r[(A†ABB†)² - A†ABB†] = r(I_n - A†ABB†) + r(A†ABB†) - n = r(M) + r(AB) - r(A) - r(B). (4.31)
Setting both sides of (4.31) equal to zero leads to the equivalence of 15 and 81. Replacing A†A with F_A in 11 leads to the equivalence of 12 and 82; replacing BB† with E_B in 11 leads to the equivalence of 13 and 83; replacing A†A with F_A and BB† with E_B in 11 leads to the equivalence of 14 and 84. Replacing A†A with F_A in 15 leads to the equivalence of 16 and 82; replacing BB† with E_B in 15 leads to the equivalence of 17 and 83; replacing A†A with F_A and BB† with E_B in 15 leads to the equivalence of 18 and 84. Applying (2.25) to A†ABB† - BB†A†A and simplifying by (2.32) and (2.40), we obtain
r(A†ABB† - BB†A†A) = 2r[A†A, BB†] - 2r(A†A) - 2r(BB†) + 2r(A†ABB†) = 2r(M) - 2r(A) - 2r(B) + 2r(AB). (4.32)
Setting both sides of (4.32) equal to zero leads to the equivalence of 19 and 81. Replacing A†A with F_A in 20 leads to the equivalence of 21 and 82; replacing BB† with E_B in 20 leads to the equivalence of 22 and 83; replacing A†A with F_A and BB† with E_B in 20 leads to the equivalence of 23 and 84. Replacing A†A with F_A and BB† with E_B in 24, respectively, leads to the equivalences of 25 with 82 and 83. Applying (2.17) and simplifying by Lemma 2.3(c), we obtain
r(E_B A†ABB†) = r(BB†A†AE_B) = r[B, A†ABB†] - r(B) = r[B - A†AB, A†AB] - r(B) = r(B - A†AB) + r(A†AB) - r(B) = r(M) - r(A) - r(B) + r(AB), (4.33)
r(A†ABB†F_A) = r(F_A BB†A†A) = r[A*, BB†A†A] - r(A) = r[A* - BB†A*, BB†A*] - r(A) = r(A* - BB†A*) + r(BB†A*) - r(A) = r(M) - r(A) - r(B) + r(AB). (4.34)
Setting all sides of (4.33) and (4.34) equal to zero leads to the equivalence of 26, 27, and 81. The following rank identities were proved in [16]:
r([A*, B][A*, B]† - A†A - BB† + A†ABB†) = r([A*, B][A*, B]† - A†A - BB† + BB†A†A) = r(M) - r(A) - r(B) + r(AB).
Setting three sides of these two identities equal to zero leads to the equivalence of 29 and 81. Replacing A†A with F_A in 29 leads to the equivalence of 30 and 82. Replacing BB† with E_B in 29 leads to the equivalence of 31 and 83. Replacing A†A with F_A and BB† with E_B in 29 leads to the equivalence of 32 and 84. Setting all sides of (3.10) and (3.11) equal to zero leads to the equivalence of 32, 78, and 81. Setting all sides of (3.12) and (3.13) equal to zero leads to the equivalence of 79, 80, and 81. The equivalence of 81–84 and 91 follows from the following well-known dimension formulas

dim[R(A*) ∩ R(B)] = r(A) + r(B) − r[A*, B],

dim[R(F_A) ∩ R(B)] = r(F_A) + r(B) − r[F_A, B],

dim[R(A*) ∩ R(E_B)] = r(A) + r(E_B) − r[A*, E_B],

dim[R(F_A) ∩ R(E_B)] = r(F_A) + r(E_B) − r[F_A, E_B].

Applying Lemma 2.3(a) and (b) to 26 and 27 leads to the equivalence of 26, 27, and 92.
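The first of these dimension formulas admits an independent numerical check: dim[R(A*) ∩ R(B)] can be counted directly as the number of unit cosines among the principal angles between the two subspaces. The sketch below (assuming NumPy and real matrices; not part of the paper) compares that direct count with the rank formula:

```python
import numpy as np

rng = np.random.default_rng(3)
rk = np.linalg.matrix_rank

def rand_rank(rows, cols, r):
    # random real rows x cols matrix of rank r
    return rng.standard_normal((rows, r)) @ rng.standard_normal((r, cols))

def orth(X):
    # orthonormal basis of the column space of X via SVD
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, s > 1e-10 * s[0]]

m, n, p = 5, 4, 3
A, B = rand_rank(m, n, 3), rand_rank(n, p, 2)

# dim[R(A*) ∩ R(B)] counted directly: singular values of Q1^T Q2 equal to 1
Q1, Q2 = orth(A.T), orth(B)
direct = int(np.sum(np.linalg.svd(Q1.T @ Q2, compute_uv=False) > 1 - 1e-8))

formula = rk(A) + rk(B) - rk(np.hstack([A.T, B]))
print(direct == formula)                # expect True
```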

Replacing A†A with F_A in 92 leads to the equivalence of 93 and 82. Replacing BB† with E_B in 92 leads to the equivalence of 94 and 83. Replacing A†A with F_A and BB† with E_B in 92 leads to the equivalence of 95 and 84. It follows first from 92 that

R(A†ABB†) ⊆ R(A†A) ∩ R(BB†) and/or R(BB†A†A) ⊆ R(A†A) ∩ R(BB†). (4.35)

Furthermore, it follows from 81 that

dim[R(A†A) ∩ R(BB†)] = r(A†A) + r(BB†) − r[A†A, BB†] = r(A†ABB†) = r(BB†A†A). (4.36)

Applying (2.26) to (4.35) and (4.36) leads to the range identities in 96. Conversely, 96 obviously implies 92. The equivalence of 93 and 97, 94 and 98, and 95 and 99 can be established similarly. By Lemma 2.3(c) and (2.19),

r[A†ABB†, BB†A†A] = r[(I_n − BB†)A†ABB†, BB†A†A] = r[(I_n − BB†)A†ABB†] + r(BB†A†A) = r[B, A†AB] − r(B) + r(AB) = r[(I_n − A†A)B, A†AB] − r(B) + r(AB) = r[(I_n − A†A)B] + r(A†AB) − r(B) + r(AB) = r(M) − r(A) − r(B) + 2r(AB). (4.37)

Combining (4.37) with (2.39) and (2.40), we obtain

r[A†ABB†, BB†A†A] − r(A†ABB†) = r[A†ABB†, BB†A†A] − r(BB†A†A) = r(M) − r(A) − r(B) + r(AB). (4.38)

Setting both sides of (4.38) equal to zero, we obtain the equivalence of the first range equality in 100 and 81. Replacing A†A with F_A and BB† with E_B in the first range equality of 100, respectively, leads to the equivalences of the second, third, and fourth range equalities in 100 and those in 82, 83, and 84. Finally, by (2.17),

dim[R(AB) ∩ R(AE_B)] = r(AB) + r(AE_B) − r[AB, AE_B] = r[A*, B] − r(A) − r(B) + r(AB). (4.39)

Setting both sides of (4.39) equal to zero, we obtain the equivalence of the first range equality in 101 and 81. The equivalence of the third fact in 101 with 81 can be established similarly.
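A numerical spot-check of (4.37) in the same spirit (a sketch assuming NumPy and real matrices, with arbitrarily chosen ranks; not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
rk = np.linalg.matrix_rank

def rand_rank(rows, cols, r):
    # random real rows x cols matrix of rank r
    return rng.standard_normal((rows, r)) @ rng.standard_normal((r, cols))

m, n, p = 5, 4, 3
A, B = rand_rank(m, n, 3), rand_rank(n, p, 2)
Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
P, Q = Ad @ A, B @ Bd                   # orthogonal projectors A†A and BB†
rM, rA, rB, rAB = rk(np.hstack([A.T, B])), rk(A), rk(B), rk(A @ B)

# (4.37): r[A†ABB†, BB†A†A] = r(M) - r(A) - r(B) + 2r(AB)
lhs = rk(np.hstack([P @ Q, Q @ P]))
print(lhs == rM - rA - rB + 2 * rAB)    # expect True
```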
Replacing A†A with F_A and BB† with E_B in the first and third facts of 101, respectively, leads to the equivalences of the second and fourth facts in 101, 82, and 84.

5 Concluding remarks

We have collected and established, as a summary, hundreds of known and novel identifying conditions for the reverse-order law in (1.12) to hold by using various matrix rank and range formulas. These conditions link versatile results in matrix theory together, and thus provide different views of the reverse-order laws and a solid foundation for further approaches to (1.2). It is easy to use these results in various situations where generalized inverses of matrices are involved. Observe further that the last three reverse-order laws in (1.11) are special cases of the first one in (1.11); it is therefore possible to derive many novel and valuable identifying conditions for the last three reverse-order laws in (1.11) to hold based on the previous results. The matrix rank methodology has been recognized as a direct and effective tool for establishing and characterizing various matrix equalities since the 1970s, while this method was introduced into the establishment of reverse-order laws in the 1990s by the present author (see [9, 10]). As mentioned in Section 1, it is a tremendous task to establish necessary and sufficient conditions for the thousands of possible equalities in (1.2) and their variations to hold. It is expected that more matrix rank formulas associated with (1.2) can be produced, and that many new and unexpected necessary and sufficient conditions for (1.2) to hold will be derived from those rank formulas.

References

[1] E. Arghiriade. Sur les matrices qui sont permutables avec leur inverse généralisée. Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. 35(1963).

[2] E. Arghiriade. Remarques sur l'inverse généralisée d'un produit de matrice. Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. 42(1967).

[3] A. Ben-Israel, T.N.E. Greville. Generalized Inverses: Theory and Applications, 2nd Ed., Springer, New York.

[4] S.L. Campbell, C.D. Meyer. Generalized Inverses of Linear Transformations. Pitman, London.

[5] I. Erdelyi. On the reverse order law related to the generalized inverse of matrix products. J. Assoc. Comput. Mach. 13(1966).

[6] T.N.E. Greville. Note on the generalized inverse of a matrix product. SIAM Rev. 8(1966).

[7] G. Marsaglia, G.P.H. Styan. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra 2(1974).

[8] C.R. Rao, S.K. Mitra. Generalized Inverse of Matrices and Its Applications. Wiley, New York.

[9] Y. Tian. The Moore–Penrose inverse of a triple matrix product. Math. Theory Practice (in Chinese) 1(1992).

[10] Y. Tian. Reverse order laws for the generalized inverses of multiple matrix products. Linear Algebra Appl. 211(1994).

[11] Y. Tian. The maximal and minimal ranks of some expressions of generalized inverses of matrices. Southeast Asian Bull. Math. 25(2002).

[12] Y. Tian. Equalities and inequalities for ranks of products of generalized inverses of two matrices and their applications. J. Inequal. Appl. 182(2016).

[13] Y. Tian. How to establish exact formulas for calculating the max-min ranks of products of two matrices and their generalized inverses. Linear Multilinear Algebra, 2017.

[14] Y. Tian, S. Cheng. The maximal and minimal ranks of A − BXC with applications. New York J. Math. 9(2003).

[15] Y. Tian, G.P.H. Styan. Rank equalities for idempotent and involutory matrices. Linear Algebra Appl. 335(2001).

[16] Y. Tian, Y. Wang. Expansion formulas for orthogonal projectors onto ranges of row block matrices. J. Math. Res. Appl. 34(2014).


More information

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG College of Mathematics and Physics Chongqing University Chongqing, 400030, P.R. China EMail: lihy.hy@gmail.com,

More information

On Euclidean distance matrices

On Euclidean distance matrices On Euclidean distance matrices R. Balaji and R. B. Bapat Indian Statistical Institute, New Delhi, 110016 November 19, 2006 Abstract If A is a real symmetric matrix and P is an orthogonal projection onto

More information

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B

Matrix Algebra. Matrix Algebra. Chapter 8 - S&B Chapter 8 - S&B Algebraic operations Matrix: The size of a matrix is indicated by the number of its rows and the number of its columns. A matrix with k rows and n columns is called a k n matrix. The number

More information

HYPO-EP OPERATORS 1. (Received 21 May 2013; after final revision 29 November 2014; accepted 7 October 2015)

HYPO-EP OPERATORS 1. (Received 21 May 2013; after final revision 29 November 2014; accepted 7 October 2015) Indian J. Pure Appl. Math., 47(1): 73-84, March 2016 c Indian National Science Academy DOI: 10.1007/s13226-015-0168-x HYPO-EP OPERATORS 1 Arvind B. Patel and Mahaveer P. Shekhawat Department of Mathematics,

More information

Ep Matrices and Its Weighted Generalized Inverse

Ep Matrices and Its Weighted Generalized Inverse Vol.2, Issue.5, Sep-Oct. 2012 pp-3850-3856 ISSN: 2249-6645 ABSTRACT: If A is a con s S.Krishnamoorthy 1 and B.K.N.MuthugobaI 2 Research Scholar Ramanujan Research Centre, Department of Mathematics, Govt.

More information

The Newton Bracketing method for the minimization of convex functions subject to affine constraints

The Newton Bracketing method for the minimization of convex functions subject to affine constraints Discrete Applied Mathematics 156 (2008) 1977 1987 www.elsevier.com/locate/dam The Newton Bracketing method for the minimization of convex functions subject to affine constraints Adi Ben-Israel a, Yuri

More information

The Moore-Penrose inverse of differences and products of projectors in a ring with involution

The Moore-Penrose inverse of differences and products of projectors in a ring with involution The Moore-Penrose inverse of differences and products of projectors in a ring with involution Huihui ZHU [1], Jianlong CHEN [1], Pedro PATRÍCIO [2] Abstract: In this paper, we study the Moore-Penrose inverses

More information

Moore-Penrose-invertible normal and Hermitian elements in rings

Moore-Penrose-invertible normal and Hermitian elements in rings Moore-Penrose-invertible normal and Hermitian elements in rings Dijana Mosić and Dragan S. Djordjević Abstract In this paper we present several new characterizations of normal and Hermitian elements in

More information

On a matrix result in comparison of linear experiments

On a matrix result in comparison of linear experiments Linear Algebra and its Applications 32 (2000) 32 325 www.elsevier.com/locate/laa On a matrix result in comparison of linear experiments Czesław Stȩpniak, Institute of Mathematics, Pedagogical University

More information

A note on solutions of linear systems

A note on solutions of linear systems A note on solutions of linear systems Branko Malešević a, Ivana Jovović a, Milica Makragić a, Biljana Radičić b arxiv:134789v2 mathra 21 Jul 213 Abstract a Faculty of Electrical Engineering, University

More information

Some additive results on Drazin inverse

Some additive results on Drazin inverse Linear Algebra and its Applications 322 (2001) 207 217 www.elsevier.com/locate/laa Some additive results on Drazin inverse Robert E. Hartwig a, Guorong Wang a,b,1, Yimin Wei c,,2 a Mathematics Department,

More information

RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA

RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Discussiones Mathematicae General Algebra and Applications 23 (2003 ) 125 137 RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Seok-Zun Song and Kyung-Tae Kang Department of Mathematics,

More information

On Regularity of Incline Matrices

On Regularity of Incline Matrices International Journal of Algebra, Vol. 5, 2011, no. 19, 909-924 On Regularity of Incline Matrices A. R. Meenakshi and P. Shakila Banu Department of Mathematics Karpagam University Coimbatore-641 021, India

More information

Weak Monotonicity of Interval Matrices

Weak Monotonicity of Interval Matrices Electronic Journal of Linear Algebra Volume 25 Volume 25 (2012) Article 9 2012 Weak Monotonicity of Interval Matrices Agarwal N. Sushama K. Premakumari K. C. Sivakumar kcskumar@iitm.ac.in Follow this and

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Partial isometries and EP elements in rings with involution

Partial isometries and EP elements in rings with involution Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009) Article 55 2009 Partial isometries and EP elements in rings with involution Dijana Mosic dragan@pmf.ni.ac.yu Dragan S. Djordjevic Follow

More information

SECTION 3.3. PROBLEM 22. The null space of a matrix A is: N(A) = {X : AX = 0}. Here are the calculations of AX for X = a,b,c,d, and e. =

SECTION 3.3. PROBLEM 22. The null space of a matrix A is: N(A) = {X : AX = 0}. Here are the calculations of AX for X = a,b,c,d, and e. = SECTION 3.3. PROBLEM. The null space of a matrix A is: N(A) {X : AX }. Here are the calculations of AX for X a,b,c,d, and e. Aa [ ][ ] 3 3 [ ][ ] Ac 3 3 [ ] 3 3 [ ] 4+4 6+6 Ae [ ], Ab [ ][ ] 3 3 3 [ ]

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

arxiv: v1 [math.ra] 16 Nov 2016

arxiv: v1 [math.ra] 16 Nov 2016 Vanishing Pseudo Schur Complements, Reverse Order Laws, Absorption Laws and Inheritance Properties Kavita Bisht arxiv:1611.05442v1 [math.ra] 16 Nov 2016 Department of Mathematics Indian Institute of Technology

More information

Research Division. Computer and Automation Institute, Hungarian Academy of Sciences. H-1518 Budapest, P.O.Box 63. Ujvári, M. WP August, 2007

Research Division. Computer and Automation Institute, Hungarian Academy of Sciences. H-1518 Budapest, P.O.Box 63. Ujvári, M. WP August, 2007 Computer and Automation Institute, Hungarian Academy of Sciences Research Division H-1518 Budapest, P.O.Box 63. ON THE PROJECTION ONTO A FINITELY GENERATED CONE Ujvári, M. WP 2007-5 August, 2007 Laboratory

More information

MATH36001 Generalized Inverses and the SVD 2015

MATH36001 Generalized Inverses and the SVD 2015 MATH36001 Generalized Inverses and the SVD 201 1 Generalized Inverses of Matrices A matrix has an inverse only if it is square and nonsingular. However there are theoretical and practical applications

More information