
A revisit to a reverse-order law for generalized inverses of a matrix product and its variations

Yongge Tian
CEMA, Central University of Finance and Economics, Beijing 100081, China
E-mail: yongge.tian@gmail.com

Abstract. For a pair of complex matrices, an m×n matrix A and an n×p matrix B, the general form of a reverse-order law for generalized inverses of the product AB can be written as (AB)^(i,…,j) = B^(i,…,j)A^(i,…,j), where A^(i,…,j), B^(i,…,j), and (AB)^(i,…,j) denote {i,…,j}-inverses of A, B, and AB, respectively. In all there are 15^3 = 3375 possible formulations for the different choices of the generalized inverses of A, B, and AB. This paper deals with a specific reverse-order law (AB)^(1) = B†A†, where A† and B† are the Moore–Penrose inverses of A and B, respectively. We collect and derive many known and novel statements that are equivalent to this reverse-order law and its variations, using the methodology of ranks and ranges of matrices.

AMS classifications: 15A03; 15A09; 15A24

Keywords: matrix product, Moore–Penrose inverse, reverse-order law, rank, range

1 Introduction

Throughout this paper, R^{m×n} and C^{m×n} stand for the collections of all m×n real and complex matrices, respectively. The symbols r(A), R(A), and N(A) stand for the rank, range (column space), and kernel (null space) of a matrix A ∈ C^{m×n}, respectively; A* denotes the conjugate transpose of A; I_m denotes the identity matrix of order m; and [A, B] denotes a row block matrix consisting of A and B. The Moore–Penrose inverse of A ∈ C^{m×n}, denoted by A†, is the unique matrix X ∈ C^{n×m} satisfying the four Penrose equations

(i) AXA = A,  (ii) XAX = X,  (iii) (AX)* = AX,  (iv) (XA)* = XA. (1.1)

Further, let E_A = I_m − AA† and F_A = I_n − A†A stand for the two orthogonal projectors induced by A. Moreover, a matrix X is called an {i,…,j}-inverse of A, denoted by A^(i,…,j), if it satisfies the ith,…,jth equations in (1.1). The collection of all {i,…,j}-inverses of A is denoted by {A^(i,…,j)}.
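As a quick numerical aside (not part of the paper's formal apparatus), the four Penrose equations in (1.1) can be spot-checked with NumPy, whose pinv computes the Moore–Penrose inverse via the SVD; the random matrix below is for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
X = np.linalg.pinv(A)   # NumPy's Moore-Penrose inverse, computed via the SVD

assert np.allclose(A @ X @ A, A)             # Penrose equation (i)
assert np.allclose(X @ A @ X, X)             # Penrose equation (ii)
assert np.allclose(A @ X, (A @ X).T)         # (iii); real case, so * is transpose
assert np.allclose(X @ A, (X @ A).T)         # (iv)
```

Uniqueness of the solution X is what makes the notation A† well defined.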
There are in all 15 types of {i,…,j}-inverses under (1.1), of which the eight most frequently used generalized inverses of A are A†, A^(1,3,4), A^(1,2,4), A^(1,2,3), A^(1,4), A^(1,3), A^(1,2), and A^(1). In matrix theory, a fundamental operation is to find the inverse of a square matrix when it is nonsingular, or its generalized inverses when it is singular. Assume that A ∈ C^{m×n} and B ∈ C^{n×p} are a pair of matrices. Then A^(i,…,j), B^(i,…,j), and (AB)^(i,…,j) always exist. In this setting, one of the fundamental equalities in the theory of generalized inverses is

(AB)^(i,…,j) = B^(i,…,j)A^(i,…,j). (1.2)

This equality is a direct extension of the well-known identity (AB)^{-1} = B^{-1}A^{-1} for two invertible matrices, and is usually called a reverse-order law for generalized inverses of the matrix product AB. Obviously, (1.2) comprises 15^3 = 3375 formulations over all possible choices of A^(i,…,j), B^(i,…,j), and (AB)^(i,…,j) defined above.

We next present an illustrative example from statistical analysis where reverse-order laws for generalized inverses of matrix products occur. Consider the linear random-effects model

M : y = Aα + ε,  α = Bβ + γ, (1.3)

where, in the first-stage model, y ∈ R^{n×1} is a vector of observable response variables, A ∈ R^{n×p} is a known matrix of arbitrary rank, α ∈ R^{p×1} is a vector of unobservable random variables, and ε ∈ R^{n×1} is a vector of randomly distributed error terms; and, in the second-stage model, B ∈ R^{p×k} is a known matrix of arbitrary rank, β ∈ R^{k×1} is a vector of fixed but unknown parameters, and γ ∈ R^{p×1} is a vector of unobservable random variables. Substituting the second equation into the first equation in (1.3) yields

N : y = ABβ + Aγ + ε. (1.4)

There are two methods for deriving ordinary least-squares estimators (OLSEs) of the fixed but unknown parameter vector β in (1.3) and (1.4). A direct method is to find a β that satisfies

‖y − ABβ‖^2 = (y − ABβ)^T(y − ABβ) = min (1.5)

in the context of (1.4), which leads to the general expression of the OLSE of β:

OLSE_N(β) = [(AB)† + F_{AB}U]y = (AB)^(1,3)y, (1.6)

where U is an arbitrary matrix. On the other hand, solving ‖y − Aα‖^2 = min under (1.3) leads to the OLSE of α:

OLSE_M(α) = (A† + F_A U_1)y = A^(1,3)y, (1.7)

where U_1 is an arbitrary matrix. Substituting this estimator into the second equation in (1.3) yields

A^(1,3)y = Bβ + γ. (1.8)

In this case, solving ‖A^(1,3)y − Bβ‖^2 = min under (1.8) leads to

OLSE_M(β) = (B† + F_B U_2)A^(1,3)y = B^(1,3)A^(1,3)y, (1.9)

where U_2 is an arbitrary matrix. Eqs. (1.6) and (1.9) are not necessarily equal, but comparing the coefficient matrices of y in (1.6) and (1.9), together with their special cases (AB)† and B†A†, leads to the reverse-order laws

(AB)^(1,3) = B^(1,3)A^(1,3),  (AB)† = B†A†. (1.10)

Because AA^(i,…,j), A^(i,…,j)A, BB^(i,…,j), and B^(i,…,j)B are not necessarily identity matrices, (1.2) does not hold in general; that is, the reverse product B^(i,…,j)A^(i,…,j) does not necessarily satisfy the defining equations of the corresponding generalized inverse of AB. In other words, (1.2) holds if and only if A and B satisfy certain conditions. The characterization of the reverse-order laws in (1.2) has been greatly facilitated by the development of the matrix rank methodology. Reverse-order laws for generalized inverses of matrix products have been a core topic in the theory of generalized inverses ever since Penrose introduced the four equations in (1.1) in the 1950s; they have attracted much attention since the 1960s (see, e.g., [1, 2, 5, 6]) owing to their applications in simplifying matrix expressions involving generalized inverses, and the research on reverse-order laws has led to essential progress in the theory of generalized inverses and has had a profound influence on the development of methodology in matrix analysis.
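Both phenomena, the failure of the second law in (1.10) in general and its validity in special cases, are easy to witness numerically. The sketch below is illustrative only (the matrices are a standard textbook-style counterexample and the classical case B = A*, neither taken from this paper):

```python
import numpy as np

# A standard small counterexample: A = [1 0], B = [1; 1], so AB = [[1]].
A = np.array([[1.0, 0.0]])
B = np.array([[1.0], [1.0]])
pinv_AB = np.linalg.pinv(A @ B)              # equals [[1.0]]
rev = np.linalg.pinv(B) @ np.linalg.pinv(A)  # B†A†, equals [[0.5]]
assert not np.allclose(pinv_AB, rev)         # the reverse-order law fails here

# The classical case B = A* (real case: transpose), where (AB)† = B†A† holds.
rng = np.random.default_rng(0)
C = rng.standard_normal((4, 3))
D = C.T
lhs = np.linalg.pinv(C @ D)
rhs = np.linalg.pinv(D) @ np.linalg.pinv(C)
assert np.allclose(lhs, rhs)
```

The first pair fails because B†A† = [[0.5]] does not even satisfy Penrose equation (i) for AB = [[1]].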
Because both A† and B† are unique, the reverse product B†A† is unique as well, and this product plays an important role in the characterization of various reverse-order laws. Obviously, for the special choice B†A† on the right-hand side of (1.2) there are 15 possible choices of (AB)^(i,…,j) on the left-hand side, but one is chiefly interested in the following four specific situations:

(AB)^(1) = B†A†,  (AB)^(1,3) = B†A†,  (AB)^(1,4) = B†A†,  (AB)† = B†A†. (1.11)

Much effort has been devoted to these four reverse-order laws in the literature, and many necessary and sufficient conditions for them to hold have been derived. In this paper, the present author reconsiders the first situation in (1.11) and its variations. By definition, the equivalence

(AB)^(1) = B†A† ⟺ ABB†A†AB = AB (1.12)

holds. The second matrix equality in (1.12) and its variations occur widely in the theory of generalized inverses and have been studied by many authors since the 1960s; in particular, some rank formulas associated with this matrix equality were established in [9, 10]. This paper aims at collecting and proving many known and new results for the reverse-order law in (1.12) to hold by using the methodology of ranks and ranges of matrices. In Section 2, we present various rank and range formulas for matrices that will be used in the proofs of the main results. In Section 3, we derive several groups of closed-form formulas for calculating the maximum ranks of AB − ABB^(i,…,j)A^(i,…,j)AB and ABB†A†AB − ABB^(i,…,j)A^(i,…,j)AB. The main results and their proofs are presented in Section 4.

2 Some preliminaries

The scope of this section is to introduce the mathematical foundations of generalized inverses, and to present various matrix rank formulas together with their uses in the theory of generalized inverses. These preparations serve as tools for solving the problems described in Section 1.
Note from the definitions of generalized inverses of a matrix that they are in fact defined as (common) solutions of certain matrix equations. Thus, analytical expressions for generalized inverses of a matrix can be written as certain matrix-valued functions involving one or more variable matrices. Indeed, analytical formulas for generalized inverses of matrices and their functions are important issues and tools in the theory of generalized inverses, and can be found in most textbooks on the subject. For instance, the basic formulas in the following lemma can be found, e.g., in [3, 4, 8].

Lemma 2.1. Let A ∈ C^{m×n}. Then the following results hold.

(a) The general expressions of the seven generalized inverses of A other than A† listed above can be written in the parametric forms

A^(1,3,4) = A† + F_A V E_A, (2.1)
A^(1,2,4) = A† + A†A W E_A, (2.2)
A^(1,2,3) = A† + F_A V A A†, (2.3)
A^(1,4) = A† + W E_A, (2.4)
A^(1,3) = A† + F_A V, (2.5)
A^(1,2) = (A† + F_A V)A(A† + W E_A), (2.6)
A^(1) = A† + F_A V + W E_A, (2.7)

where the two matrices V, W ∈ C^{n×m} are arbitrary.

(b) The following matrix equalities hold, where the two matrices V and W are arbitrary:

AA^(1,3,4) = AA^(1,2,3) = AA^(1,3) = AA†, (2.8)
A^(1,3,4)A = A^(1,2,4)A = A^(1,4)A = A†A, (2.9)
AA^(1,2,4) = AA^(1,4) = AA^(1,2) = AA^(1) = AA† + AW E_A, (2.10)
A^(1,2,3)A = A^(1,3)A = A^(1,2)A = A^(1)A = A†A + F_A V A. (2.11)

Lemma 2.2 ([10]). Let A ∈ C^{m×n} and B ∈ C^{n×p} be given. Then the product B†A† can be written as

B†A† = [B*, 0] [B*BB*, B*A*; 0, −A*AA*]† [0; A*] := P J† Q, (2.12)

where [X, Y; Z, W] denotes the 2×2 block matrix with block rows [X, Y] and [Z, W], [X; Y] denotes the column block matrix with blocks X and Y, and the block matrices P, J, and Q satisfy

r(J) = r(A) + r(B),  R(Q) ⊆ R(J),  R(P*) ⊆ R(J*). (2.13)

A powerful tool for studying relations between matrix expressions is the use of expansion formulas for ranks of matrices (the matrix rank methodology). Recall that A = 0 if and only if r(A) = 0, so that for two matrices A and B of the same size,

A = B ⟺ r(A − B) = 0. (2.14)

Furthermore, for two sets S_1 and S_2 consisting of matrices of the same size, the following assertions hold:

S_1 ∩ S_2 ≠ ∅ ⟺ min_{A ∈ S_1, B ∈ S_2} r(A − B) = 0; (2.15)
S_1 ⊇ S_2 ⟺ max_{B ∈ S_2} min_{A ∈ S_1} r(A − B) = 0. (2.16)

These equivalences provide a highly flexible framework for characterizing equalities of matrices via ranks. Once expansion formulas for the rank of A − B are established, we can use them to characterize relations between two matrices A and B and obtain many valuable consequences. For instance, (1.2) holds if and only if

min over (AB)^(i,…,j), A^(i,…,j), B^(i,…,j) of r[(AB)^(i,…,j) − B^(i,…,j)A^(i,…,j)] = 0.
This idea was first introduced by the present author in the early 1990s when characterizing reverse-order laws for Moore–Penrose inverses of matrix products. In the past thirty years, the present author has established thousands of closed-form formulas for calculating ranks of matrices and used them to derive a large number of results on matrix equalities, matrix equations, matrix set inclusions, matrix-valued functions, etc.; the matrix rank methodology has thus become one of the most important and widely used tools for characterizing matrix equalities that involve generalized inverses of matrices. In order to establish and simplify various matrix equalities composed of generalized inverses, we need the following well-known rank formulas, included here to make the paper self-contained.
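As an aside, the parametric forms in Lemma 2.1(a) are easy to spot-check numerically. The following sketch (illustrative only; the random parameter matrix V is arbitrary) verifies that A† + F_A V from (2.5) satisfies Penrose equations (i) and (iii), i.e., that it is a {1,3}-inverse of A:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))
Ap = np.linalg.pinv(A)
F_A = np.eye(5) - Ap @ A              # F_A = I_n - A†A

V = rng.standard_normal((5, 4))       # arbitrary parameter matrix
X = Ap + F_A @ V                      # candidate A^(1,3) from (2.5)

assert np.allclose(A @ X @ A, A)      # Penrose equation (i)
assert np.allclose(A @ X, (A @ X).T)  # Penrose equation (iii): AX is Hermitian
```

The key point is that A F_A = A − AA†A = 0, so AX collapses to AA† regardless of V.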

Lemma 2.3 ([7]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, and D ∈ C^{l×k}. Then

r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A), (2.17)
r[A; C] = r(A) + r(C F_A) = r(C) + r(A F_C), (2.18)
r[A, B; C, 0] = r(B) + r(C) + r(E_B A F_C), (2.19)
r[A, B; C, D] = r(A) + r[0, E_A B; C F_A, D − C A† B]. (2.20)

If R(B) ⊆ R(A) and R(C*) ⊆ R(A*), then

r[A, B; C, D] = r(A) + r(D − C A† B). (2.21)

Furthermore, the following results hold.

(a) r[A, B] = r(A) ⟺ R(B) ⊆ R(A) ⟺ AA†B = B ⟺ E_A B = 0.
(b) r[A; C] = r(A) ⟺ R(C*) ⊆ R(A*) ⟺ CA†A = C ⟺ C F_A = 0.
(c) r[A, B] = r(A) + r(B) ⟺ R(A) ∩ R(B) = {0} ⟺ R[(E_A B)*] = R(B*) ⟺ R[(E_B A)*] = R(A*).
(d) r[A; C] = r(A) + r(C) ⟺ R(A*) ∩ R(C*) = {0} ⟺ R(C F_A) = R(C) ⟺ R(A F_C) = R(A).

Lemma 2.4 ([10]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, and D ∈ C^{l×k}. Then

r(D − C A† B) = r[A*AA*, A*B; CA*, D] − r(A). (2.22)

In particular,

r(A* − A†) = r(AA*A − A). (2.23)

The following result is well known.

Lemma 2.5. If A ∈ C^{n×n}, then

r(A − A^2) = r(I_n − A) + r(A) − n. (2.24)

Lemma 2.6 ([15]). Let P, Q ∈ C^{m×m} be two orthogonal projectors. Then

r(PQ − QP) = 2r[P, Q] + 2r(PQ) − 2r(P) − 2r(Q). (2.25)

In addition, we use the following simple properties (see [3, 4, 8]) when simplifying various matrix rank and range equalities:

R(A) ⊆ R(B) and r(A) = r(B) ⟹ R(A) = R(B), (2.26)
R(A) ⊆ R(B) ⟹ R(PA) ⊆ R(PB), (2.27)
R(A) = R(B) ⟹ R(PA) = R(PB), (2.28)
R(A) = R(AA*) = R(AA*A) = R(AA†) = R[(A†)*], (2.29)
R(A*) = R(A*A) = R(A*AA*) = R(A†) = R(A†A), (2.30)
R(ABB*) = R(ABB†) = R(AB), (2.31)
R(A_1) = R(A_2) and R(B_1) = R(B_2) ⟹ r[A_1, B_1] = r[A_2, B_2]. (2.32)
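The expansion formula (2.17) can likewise be confirmed numerically; a minimal sketch (random matrices for illustration, with A deliberately rank-deficient so that the projector E_A is nontrivial):

```python
import numpy as np

rng = np.random.default_rng(2)
# A rank-2 matrix A and a generic B with the same number of rows
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
B = rng.standard_normal((5, 3))

E_A = np.eye(5) - A @ np.linalg.pinv(A)   # E_A = I_m - AA†
lhs = np.linalg.matrix_rank(np.hstack([A, B]))
rhs = np.linalg.matrix_rank(A) + np.linalg.matrix_rank(E_A @ B)
assert lhs == rhs                          # r[A, B] = r(A) + r(E_A B)
```

Geometrically, E_A B is the part of the columns of B lying outside R(A), which is exactly what the extra rank in [A, B] measures.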

Lemma 2.7 ([12, 13]). Let A ∈ C^{m×n} and B ∈ C^{n×p} be given. Then the following rank identities hold:

r(A*ABB*) = r(A*AB) = r(ABB*) = r(AB), (2.33)
r(BB*A*A) = r(BB*A*) = r(B*A*A) = r(B*A*) = r(AB), (2.34)
r(ABB*A*) = r(B*A*AB) = r(AB), (2.35)
r[(A*A)^{1/2}(BB*)^{1/2}] = r[(BB*)^{1/2}(A*A)^{1/2}] = r(AB), (2.36)
r(B†A†) = r(B†A*) = r(B*A†) = r(AB), (2.37)
r[(A†)*(B†)*] = r[(A†)*B] = r[A(B†)*] = r(AB), (2.38)
r(BB†A†A) = r(BB†A*A) = r(BB*A†A) = r(AB), (2.39)
r(A†ABB†) = r(A†ABB*) = r(A*ABB†) = r(AB), (2.40)
r(ABB†A†) = r(ABB*A†) = r(ABB†A*) = r(AB), (2.41)
r(B†A†AB) = r(B*A†AB) = r(B†A*AB) = r(AB), (2.42)
r(ABB†A†AB) = r(ABB*A†AB) = r(ABB†A*AB) = r(AB), (2.43)
r[(BB*)†(A*A)†] = r[(BB*)†(A*A)] = r[(BB*)(A*A)†] = r(AB), (2.44)
r[B†(A*A)†] = r(B†A*A) = r[B*(A*A)†] = r(AB), (2.45)
r[(BB*)†A†] = r[(BB*)†A*] = r(BB*A†) = r(AB). (2.46)

Lemma 2.8 ([11, 14]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, and C ∈ C^{l×n}. Then

max_{X ∈ C^{k×l}} r(A − BXC) = min{ r[A, B], r[A; C] }, (2.47)
min_{X ∈ C^{k×l}} r(A − BXC) = r[A, B] + r[A; C] − r[A, B; C, 0]. (2.48)

3 Rank formulas for some matrix expressions

By definition, B^(i,…,j)A^(i,…,j) ∈ {(AB)^(1)} if and only if ABB^(i,…,j)A^(i,…,j)AB = AB. Also, by (2.14),

ABB^(i,…,j)A^(i,…,j)AB = AB ⟺ r(AB − ABB^(i,…,j)A^(i,…,j)AB) = 0. (3.1)

In order to establish closed-form formulas for the ranks of these differences, we need to rewrite

AB − ABB^(i,…,j)A^(i,…,j)AB, (3.2)
ABB†A†AB − ABB^(i,…,j)A^(i,…,j)AB (3.3)

as certain linear or nonlinear matrix-valued functions involving one or more variable matrices. We then establish exact formulas for the maximum ranks of the two expressions, and use these formulas to derive identifying conditions for (1.12) to hold. The following expansion formulas follow directly from (2.1)–(2.11).
AB − ABB^(s)A^(t)AB = AB − ABB†A†AB (3.4)

for every choice of B^(s) ∈ {B†, B^(1,3,4), B^(1,2,3), B^(1,3)} and A^(t) ∈ {A†, A^(1,3,4), A^(1,2,4), A^(1,4)},

AB − ABB^(s)A^(t)AB = AB − ABB†A†AB − ABB†F_A W AB (3.5)

for every choice of B^(s) ∈ {B†, B^(1,3,4), B^(1,2,3), B^(1,3)} and A^(t) ∈ {A^(1,2,3), A^(1,3), A^(1,2), A^(1)};

AB − ABB^(s)A^(t)AB = AB − ABB†A†AB − ABV E_B A†AB (3.6)

for every choice of B^(s) ∈ {B^(1,2,4), B^(1,4), B^(1,2), B^(1)} and A^(t) ∈ {A†, A^(1,3,4), A^(1,2,4), A^(1,4)};

ABB†A†AB − ABB^(s)A^(t)AB = −ABB†F_A W AB (3.7)

for every choice of B^(s) ∈ {B†, B^(1,3,4), B^(1,2,3), B^(1,3)} and A^(t) ∈ {A^(1,2,3), A^(1,3), A^(1,2), A^(1)};

ABB†A†AB − ABB^(s)A^(t)AB = −ABV E_B A†AB (3.8)

for every choice of B^(s) ∈ {B^(1,2,4), B^(1,4), B^(1,2), B^(1)} and A^(t) ∈ {A†, A^(1,3,4), A^(1,2,4), A^(1,4)},

where V and W are variable matrices of appropriate sizes.
Applying the preliminary formulas in Section 2, we obtain the following results.

Theorem 3.1. Let A ∈ C^{m×n} and B ∈ C^{n×p}, and denote M = [A*, B].

(a) [9, 10] The rank of AB − ABB†A†AB satisfies the identity

r(AB − ABB†A†AB) = r(M) + r(AB) − r(A) − r(B). (3.9)

(b) For every choice of B^(s) ∈ {B†, B^(1,3,4), B^(1,2,3), B^(1,3)} and A^(t) ∈ {A^(1,2,3), A^(1,3), A^(1,2), A^(1)}, the maximum rank over the free generalized inverses involved satisfies

max r(AB − ABB^(s)A^(t)AB) = r(M) + r(AB) − r(A) − r(B). (3.10)

(c) For every choice of B^(s) ∈ {B^(1,2,4), B^(1,4), B^(1,2), B^(1)} and A^(t) ∈ {A†, A^(1,3,4), A^(1,2,4), A^(1,4)},

max r(AB − ABB^(s)A^(t)AB) = r(M) + r(AB) − r(A) − r(B). (3.11)

(d) For every choice of B^(s) ∈ {B†, B^(1,3,4), B^(1,2,3), B^(1,3)} and A^(t) ∈ {A^(1,2,3), A^(1,3), A^(1,2), A^(1)},

max r(ABB†A†AB − ABB^(s)A^(t)AB) = r(M) + r(AB) − r(A) − r(B). (3.12)

(e) For every choice of B^(s) ∈ {B^(1,2,4), B^(1,4), B^(1,2), B^(1)} and A^(t) ∈ {A†, A^(1,3,4), A^(1,2,4), A^(1,4)},

max r(ABB†A†AB − ABB^(s)A^(t)AB) = r(M) + r(AB) − r(A) − r(B). (3.13)

Proof. Applying (2.21) to (2.12), we obtain

r(AB − ABB†A†AB) = r(AB − AB(PJ†Q)AB) = r[J, QAB; ABP, AB] − r(J).

Simplifying the bordered block matrix on the right-hand side by elementary block matrix operations, using the range equalities (2.29)–(2.31), and recalling from (2.13) that r(J) = r(A) + r(B), we obtain

r(AB − ABB†A†AB) = r([A*, B]*[A*, B]) + r(AB) − r(A) − r(B) = r(M) + r(AB) − r(A) − r(B), (3.14)

thus establishing (3.9). Applying (2.47) and (2.48) to (3.5) gives

max_W r(AB − ABB†A†AB − ABB†F_A W AB) = min{ r[AB − ABB†A†AB, ABB†F_A], r(AB) }, (3.15)

min_W r(AB − ABB†A†AB − ABB†F_A W AB) = r[AB − ABB†A†AB, ABB†F_A] − r(ABB†F_A), (3.16)

where, by (2.17) and (2.18),

r[AB − ABB†A†AB, ABB†F_A] = r(AB) + r(E_B A*) − r(A) = r(M) + r(AB) − r(A) − r(B) (3.17)

and

r(ABB†F_A) = r(M) + r(AB) − r(A) − r(B). (3.18)

Substituting (3.17) and (3.18) into (3.15) and (3.16), and noticing that r(M) + r(AB) − r(A) − r(B) ≤ r(AB), we obtain (3.10). By a similar approach applied to (3.6), we have

max_V r(AB − ABB†A†AB − ABV E_B A†AB) = r(M) + r(AB) − r(A) − r(B),

as required for (3.11). Applying (2.47) and (3.18) to (3.7) and (3.8) gives

max_W r(ABB†F_A W AB) = r(ABB†F_A) = r(M) + r(AB) − r(A) − r(B), (3.19)

max_V r(ABV E_B A†AB) = r(E_B A†AB) = r(M) + r(AB) − r(A) − r(B), (3.20)

establishing (3.12) and (3.13).

4 Main Results

Theorem 4.1. Let A ∈ C^{m×n} and B ∈ C^{n×p}, and denote M = [A*, B]. Then the following statements are equivalent:

1. B†A† ∈ {(AB)^(1)}, namely, ABB†A†AB = AB.

2. B†A† ∈ {(AB)^(2)}, namely, B†A†ABB†A† = B†A†.
3. B†A† ∈ {(AB)^(1,2)}.
4. B†A† ∈ {[(A†)*B]^(1)}.
5. B†A† ∈ {[A(B†)*]^(1)}.
6. B†(A*A)† ∈ {(A*AB)^(1)} and/or (BB*)†A† ∈ {(ABB*)^(1)}.
7. (BB*)†(A*A)† ∈ {(A*ABB*)^(1)} and/or (A*A)†(BB*)† ∈ {(BB*A*A)^(1)}.
8. [(BB*)^{1/2}]†[(A*A)^{1/2}]† ∈ {[(A*A)^{1/2}(BB*)^{1/2}]^(1)} and/or [(A*A)^{1/2}]†[(BB*)^{1/2}]† ∈ {[(BB*)^{1/2}(A*A)^{1/2}]^(1)}.
9. (BB*B)†(AA*A)† ∈ {(AA*ABB*B)^(1)}.
10. B†A†A ∈ {(A†AB)^(1)} and/or BB†A† ∈ {(ABB†)^(1)}.
11. BB†A†A ∈ {(A†ABB†)^(1)} and/or A†ABB† ∈ {(BB†A†A)^(1)}.
12. BB†F_A ∈ {(F_A BB†)^(1)} and/or F_A BB† ∈ {(BB†F_A)^(1)}.
13. E_B A†A ∈ {(A†AE_B)^(1)} and/or A†AE_B ∈ {(E_B A†A)^(1)}.
14. E_B F_A ∈ {(F_A E_B)^(1)} and/or F_A E_B ∈ {(E_B F_A)^(1)}.
15. (A†ABB†)† = BB†A†A and/or (BB†A†A)† = A†ABB†.
16. (BB†F_A)† = BB†F_A and/or (F_A BB†)† = F_A BB†.
17. (A†AE_B)† = A†AE_B and/or (E_B A†A)† = E_B A†A.
18. (F_A E_B)† = F_A E_B and/or (E_B F_A)† = E_B F_A.
19. A†ABB† = BB†A†A and/or F_A BB† = BB†F_A, A†AE_B = E_B A†A, F_A E_B = E_B F_A.
20. (A†ABB†)^2 = A†ABB† and/or (BB†A†A)^2 = BB†A†A.
21. (F_A BB†)^2 = F_A BB† and/or (BB†F_A)^2 = BB†F_A.
22. (A†AE_B)^2 = A†AE_B and/or (E_B A†A)^2 = E_B A†A.
23. (F_A E_B)^2 = F_A E_B and/or (E_B F_A)^2 = E_B F_A.
24. (ABB†A†)^2 = ABB†A† and/or (B†A†AB)^2 = B†A†AB.
25. (AE_B A†)^2 = AE_B A† and/or (B†F_A B)^2 = B†F_A B.
26. E_B A†ABB† = 0 and/or BB†A†AE_B = 0.
27. A†ABB†F_A = 0 and/or F_A BB†A†A = 0.
28. [A*, B][A*, B]† = A†A + BB† − A†ABB† and/or [A*, B][A*, B]† = A†A + BB† − BB†A†A.
29. [F_A, B][F_A, B]† = F_A + BB† − F_A BB† and/or [F_A, B][F_A, B]† = F_A + BB† − BB†F_A.
30. [A*, E_B][A*, E_B]† = A†A + E_B − A†AE_B and/or [A*, E_B][A*, E_B]† = A†A + E_B − E_B A†A.
31. [F_A, E_B][F_A, E_B]† = F_A + E_B − F_A E_B and/or [F_A, E_B][F_A, E_B]† = F_A + E_B − E_B F_A.
32. {B†A^(1)} ⊆ {(AB)^(1)}.
33. {B†A^(1,2)} ⊆ {(AB)^(1)}.
34. {B†A^(1,3)} ⊆ {(AB)^(1)}.
35. {B†A^(1,4)} ⊆ {(AB)^(1)}.
36. {B†A^(1,2,3)} ⊆ {(AB)^(1)}.
37. {B†A^(1,2,4)} ⊆ {(AB)^(1)}.

38. {B†A^(1,3,4)} ⊆ {(AB)^(1)}.
39. {B^(1,3,4)A^(1)} ⊆ {(AB)^(1)}.
40. {B^(1,3,4)A^(1,2)} ⊆ {(AB)^(1)}.
41. {B^(1,3,4)A^(1,3)} ⊆ {(AB)^(1)}.
42. {B^(1,3,4)A^(1,4)} ⊆ {(AB)^(1)}.
43. {B^(1,3,4)A^(1,2,3)} ⊆ {(AB)^(1)}.
44. {B^(1,3,4)A^(1,2,4)} ⊆ {(AB)^(1)}.
45. {B^(1,3,4)A^(1,3,4)} ⊆ {(AB)^(1)}.
46. {B^(1,3,4)A†} ⊆ {(AB)^(1)}.
47. {B^(1,2,4)A^(1,4)} ⊆ {(AB)^(1)}.
48. {B^(1,2,4)A^(1,2,4)} ⊆ {(AB)^(1)}.
49. {B^(1,2,4)A^(1,3,4)} ⊆ {(AB)^(1)}.
50. {B^(1,2,4)A†} ⊆ {(AB)^(1)}.
51. {B^(1,2,3)A^(1)} ⊆ {(AB)^(1)}.
52. {B^(1,2,3)A^(1,2)} ⊆ {(AB)^(1)}.
53. {B^(1,2,3)A^(1,3)} ⊆ {(AB)^(1)}.
54. {B^(1,2,3)A^(1,4)} ⊆ {(AB)^(1)}.
55. {B^(1,2,3)A^(1,2,3)} ⊆ {(AB)^(1)}.
56. {B^(1,2,3)A^(1,2,4)} ⊆ {(AB)^(1)}.
57. {B^(1,2,3)A^(1,3,4)} ⊆ {(AB)^(1)}.
58. {B^(1,2,3)A†} ⊆ {(AB)^(1)}.
59. {B^(1,4)A^(1,4)} ⊆ {(AB)^(1)}.
60. {B^(1,4)A^(1,2,4)} ⊆ {(AB)^(1)}.
61. {B^(1,4)A^(1,3,4)} ⊆ {(AB)^(1)}.
62. {B^(1,4)A†} ⊆ {(AB)^(1)}.
63. {B^(1,3)A^(1)} ⊆ {(AB)^(1)}.
64. {B^(1,3)A^(1,2)} ⊆ {(AB)^(1)}.
65. {B^(1,3)A^(1,3)} ⊆ {(AB)^(1)}.
66. {B^(1,3)A^(1,4)} ⊆ {(AB)^(1)}.
67. {B^(1,3)A^(1,2,3)} ⊆ {(AB)^(1)}.
68. {B^(1,3)A^(1,2,4)} ⊆ {(AB)^(1)}.
69. {B^(1,3)A^(1,3,4)} ⊆ {(AB)^(1)}.
70. {B^(1,3)A†} ⊆ {(AB)^(1)}.
71. {B^(1,2)A^(1,4)} ⊆ {(AB)^(1)}.
72. {B^(1,2)A^(1,2,4)} ⊆ {(AB)^(1)}.

73. {B^(1,2)A^(1,3,4)} ⊆ {(AB)^(1)}.
74. {B^(1,2)A†} ⊆ {(AB)^(1)}.
75. {B^(1)A^(1,4)} ⊆ {(AB)^(1)}.
76. {B^(1)A^(1,2,4)} ⊆ {(AB)^(1)}.
77. {B^(1)A^(1,3,4)} ⊆ {(AB)^(1)}.
78. {B^(1)A†} ⊆ {(AB)^(1)}.
79. ABB^(1,3)A^(1)AB is invariant with respect to the choice of A^(1) and B^(1,3).
80. ABB^(1)A^(1,4)AB is invariant with respect to the choice of A^(1,4) and B^(1).
81. r[A*, B] = r(A) + r(B) − r(AB).
82. r[F_A, B] = r(F_A) + r(B) − r(F_A B).
83. r[A*, E_B] = r(A) + r(E_B) − r(AE_B).
84. r[F_A, E_B] = r(F_A) + r(E_B) − r(F_A E_B).
85. r(I_n − A†ABB†) = n − r(A†ABB†) and/or r(I_n − BB†A†A) = n − r(BB†A†A).
86. r(I_n − F_A BB†) = n − r(F_A BB†) and/or r(I_n − BB†F_A) = n − r(BB†F_A).
87. r(I_n − A†A E_B) = n − r(A†A E_B) and/or r(I_n − E_B A†A) = n − r(E_B A†A).
88. r(I_n − F_A E_B) = n − r(F_A E_B) and/or r(I_n − E_B F_A) = n − r(E_B F_A).
89. r(I_m − ABB†A†) = m − r(ABB†A†) and/or r(I_p − B†A†AB) = p − r(B†A†AB).
90. r(I_m − AE_B A†) = m − r(AE_B A†) and/or r(I_p − B†F_A B) = p − r(B†F_A B).
91. dim[R(A*) ∩ R(B)] = r(AB) and/or dim[R(F_A) ∩ R(B)] = r(F_A B), dim[R(A*) ∩ R(E_B)] = r(AE_B), dim[R(F_A) ∩ R(E_B)] = r(F_A E_B).
92. R(A†ABB†) ⊆ R(BB†) and/or R(BB†A†A) ⊆ R(A†A).
93. R(F_A BB†) ⊆ R(BB†) and/or R(BB†F_A) ⊆ R(F_A).
94. R(A†A E_B) ⊆ R(E_B) and/or R(E_B A†A) ⊆ R(A†A).
95. R(F_A E_B) ⊆ R(E_B) and/or R(E_B F_A) ⊆ R(F_A).
96. R(A†ABB†) = R(A†A) ∩ R(BB†) and/or R(BB†A†A) = R(A†A) ∩ R(BB†).
97. R(F_A BB†) = R(F_A) ∩ R(BB†) and/or R(BB†F_A) = R(F_A) ∩ R(BB†).
98. R(A†A E_B) = R(A†A) ∩ R(E_B) and/or R(E_B A†A) = R(A†A) ∩ R(E_B).
99. R(F_A E_B) = R(F_A) ∩ R(E_B) and/or R(E_B F_A) = R(F_A) ∩ R(E_B).
100. R(A†ABB†) = R(BB†A†A) and/or R(A†A E_B) = R(E_B A†A), R(F_A BB†) = R(BB†F_A), R(F_A E_B) = R(E_B F_A).
101. R(AB) ∩ R(AE_B) = {0} and/or R(F_A B) ∩ R(F_A E_B) = {0}, R(B*A*) ∩ R(B*F_A) = {0}, R(E_B A*) ∩ R(E_B F_A) = {0}.

Proof. Setting both sides of (3.9) equal to zero leads to the equivalence of 1 and 81.
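Before entering the proof, the equivalences can be illustrated numerically. The following sketch (not part of the paper; the helper names `dag` and `r` are ad hoc) checks conditions 2 and 81 in a case where the reverse-order law is known to hold, namely B = A* (for real matrices, B = Aᵀ), for which (AA*)† = (A*)†A† is a classical identity:

```python
import numpy as np

rng = np.random.default_rng(0)
dag = np.linalg.pinv          # Moore-Penrose inverse
r = np.linalg.matrix_rank

A = rng.standard_normal((4, 6))
B = A.T                        # real case: A* = A^T
AB = A @ B
W = dag(B) @ dag(A)            # the candidate B†A†

# Condition 2: B†A† is a {2}-inverse of AB, i.e. B†A†·AB·B†A† = B†A†.
print(np.allclose(W @ AB @ W, W))        # True

# Condition 81: r[A*, B] = r(A) + r(B) - r(AB).
M = np.hstack([A.T, B])
print(r(M) == r(A) + r(B) - r(AB))       # True

# In this example the law itself holds: (AB)† = B†A†.
print(np.allclose(dag(AB), W))           # True
```

Any one of the hundred conditions can be probed the same way; by the theorem they all hold or all fail together for a given pair (A, B).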
By (2.17),

r(F_A) = n − r(A),  r(F_A B) = r[A*, B] − r(A*),  (4.1)
r(E_B) = n − r(B),  r(AE_B) = r[A*, B] − r(B),  (4.2)
r[F_A, B] = r(F_A) + r(A†AB) = n − r(A) + r(AB),  (4.3)
r[A*, E_B] = r(ABB†) + r(E_B) = n − r(B) + r(AB),  (4.4)
r[F_A, E_B] = r(F_A) + r(A†A E_B) = n − r(A) − r(B) + r[A*, B],  (4.5)
r(F_A E_B) = r[A*, E_B] − r(A*) = n − r(A) − r(B) + r(AB).  (4.6)

Substituting (4.1)–(4.6) into the three rank equalities in 82, 83, and 84 and simplifying, we obtain the equivalence of 81–84. Applying (3.9) to B†A† − B†A†ABB†A† and simplifying by (2.29), (2.30), (2.32), and (2.37), we obtain

r(B†A† − B†A†ABB†A†) = r[(B†)*, A†] + r(B†A†) − r(B†) − r(A†) = r[A*, B] + r(AB) − r(B) − r(A).  (4.7)

Setting both sides of (4.7) equal to zero, we obtain the equivalence of 2 and 81. Both 1 and 2 imply 3; conversely, 3 implies 1 and 2. The following rank formulas can be established by a similar approach:

r[(A*)†B − (A*)†BB†A†(A*)†B] = r(M) + r(AB) − r(A) − r(B),  (4.8)
r[A(B*)† − A(B*)†B†A†A(B*)†] = r(M) + r(AB) − r(A) − r(B),  (4.9)
r[(A†A)B − (A†A)BB†(A†A)†(A†A)B] = r(M) + r(AB) − r(A) − r(B),  (4.10)
r[A(BB†) − A(BB†)(BB†)†A†A(BB†)] = r(M) + r(AB) − r(A) − r(B),  (4.11)
r[(A†A)(BB†) − (A†A)(BB†)(BB†)†(A†A)†(A†A)(BB†)] = r(M) + r(AB) − r(A) − r(B),  (4.12)
r{(A†A)^(1/2)(BB†)^(1/2) − (A†A)^(1/2)(BB†)^(1/2)[(BB†)^(1/2)]†[(A†A)^(1/2)]†(A†A)^(1/2)(BB†)^(1/2)} = r(M) + r(AB) − r(A) − r(B),  (4.13)
r[(AA†A)(BB†B) − (AA†A)(BB†B)(BB†B)†(AA†A)†(AA†A)(BB†B)] = r(M) + r(AB) − r(A) − r(B),  (4.14)
r[A†AB − (A†AB)B†A†A(A†AB)] = r(M) + r(AB) − r(A) − r(B),  (4.15)
r[ABB† − (ABB†)BB†A†(ABB†)] = r(M) + r(AB) − r(A) − r(B),  (4.16)
r[A†ABB† − (A†ABB†)BB†A†A(A†ABB†)] = r(M) + r(AB) − r(A) − r(B).  (4.17)

Setting both sides of (4.8)–(4.17) equal to zero leads to the equivalence of 2–11 and 81. The following rank formulas can be established by a similar approach:

r[F_A BB† − (F_A BB†)BB†F_A(F_A BB†)] = r[F_A, B] − r(F_A) − r(B) + r(F_A B),  (4.18)
r[A†A E_B − (A†A E_B)E_B A†A(A†A E_B)] = r[A*, E_B] − r(A) − r(E_B) + r(AE_B),  (4.19)
r[F_A E_B − (F_A E_B)E_B F_A(F_A E_B)] = r[F_A, E_B] − r(F_A) − r(E_B) + r(F_A E_B).  (4.20)

Setting both sides of (4.18)–(4.20) equal to zero leads to the equivalence of 12–14 and 82–84.

Applying (2.22) and simplifying by (2.17), we obtain

r(I_n − BB†A†A) = r(I_n − A†ABB†)
= r([AA*, ABB†; A*, I_n]) − r(A)
= r([AA* − ABB†A*, 0; 0, I_n]) − r(A)
= n + r(AA* − ABB†A*) − r(A)
= n + r([A*, B]*[A*, B]) − r(A) − r(B)
= n + r(M) − r(A) − r(B),  (4.21)

r(I_m − ABB†A†) = r([B*B, B*A†; AB, I_m]) − r(B)
= r([B*B − B*A†AB, 0; 0, I_m]) − r(B)
= m + r(B*F_A B) − r(B)
= m + r(F_A B) − r(B)
= m + r(M) − r(A) − r(B),  (4.22)

r(I_p − B†A†AB) = r([AA*, AB; B†A*, I_p]) − r(A)
= r([AA* − ABB†A*, 0; 0, I_p]) − r(A)
= p + r(AE_B A*) − r(A)
= p + r(E_B A*) − r(A)
= p + r(M) − r(A) − r(B).  (4.23)

Combining (4.21)–(4.23) with (2.39)–(2.42) yields

r(I_n − BB†A†A) − n + r(BB†A†A) = r(M) − r(A) − r(B) + r(AB),  (4.24)
r(I_m − ABB†A†) − m + r(ABB†A†) = r(M) − r(A) − r(B) + r(AB),  (4.25)
r(I_p − B†A†AB) − p + r(B†A†AB) = r(M) − r(A) − r(B) + r(AB).  (4.26)

Setting both sides of (4.24)–(4.26) equal to zero leads to the equivalence of 85, 89, and 81. Replacing A†A with F_A in 85 leads to the equivalence of 86 and 82. Replacing BB† with E_B in 85 leads to the equivalence of 87 and 83. Replacing A†A with F_A and BB† with E_B in 85 leads to the equivalence of 88 and 84. Replacing A†A with F_A and BB† with E_B in 89, respectively, leads to the equivalence of 90, 82, and 83. Applying (2.24) and (4.21)–(4.23) to (A†ABB†)² − A†ABB†, we obtain

r[(A†ABB†)² − A†ABB†] = r(I_n − A†ABB†) + r(A†ABB†) − n = r(M) − r(A) − r(B) + r(AB),  (4.27)
r[(BB†A†A)² − BB†A†A] = r(I_n − BB†A†A) + r(BB†A†A) − n = r(M) − r(A) − r(B) + r(AB),  (4.28)
r[(ABB†A†)² − ABB†A†] = r(I_m − ABB†A†) + r(ABB†A†) − m = r(M) − r(A) − r(B) + r(AB),  (4.29)
r[(B†A†AB)² − B†A†AB] = r(I_p − B†A†AB) + r(B†A†AB) − p = r(M) − r(A) − r(B) + r(AB).  (4.30)

Setting both sides of (4.27)–(4.30) equal to zero leads to the equivalence of 20, 24, and 81.
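The identities (4.21) and (4.27) can also be checked numerically; the sketch below (ad hoc names, M = [A*, B]) does so on random low-rank real matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
r = np.linalg.matrix_rank
dag = np.linalg.pinv

m, n, p = 5, 6, 4
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank 2
B = rng.standard_normal((n, 3)) @ rng.standard_normal((3, p))  # rank 3

PA = dag(A) @ A           # A†A
PB = B @ dag(B)           # BB†
M = np.hstack([A.T, B])
rM, rA, rB, rAB = r(M), r(A), r(B), r(A @ B)

# (4.21): r(I_n - BB†A†A) = n + r(M) - r(A) - r(B)
assert r(np.eye(n) - PB @ PA) == n + rM - rA - rB

# (4.27): r((A†ABB†)^2 - A†ABB†) = r(M) - r(A) - r(B) + r(AB)
X = PA @ PB
assert r(X @ X - X) == rM - rA - rB + rAB
print("(4.21) and (4.27) verified on this example")
```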

Applying (2.23) to (A†ABB†)† − BB†A†A, we obtain

r[(A†ABB†)† − BB†A†A] = r[(A†ABB†)† − (A†ABB†)*]
= r[(A†ABB†)* − (A†ABB†)*(A†ABB†)(A†ABB†)*]
= r[(A†ABB†) − (A†ABB†)²]
= r(A†ABB†) + r(I_n − A†ABB†) − n
= r(M) + r(AB) − r(A) − r(B).  (4.31)

Setting both sides of (4.31) equal to zero leads to the equivalence of 15 and 81. Replacing A†A with F_A in 11 leads to the equivalence of 12 and 82. Replacing BB† with E_B in 11 leads to the equivalence of 13 and 83. Replacing A†A with F_A and BB† with E_B in 11 leads to the equivalence of 14 and 84. Replacing A†A with F_A in 15 leads to the equivalence of 16 and 82. Replacing BB† with E_B in 15 leads to the equivalence of 17 and 83. Replacing A†A with F_A and BB† with E_B in 15 leads to the equivalence of 18 and 84. Applying (2.25) to A†ABB† − BB†A†A and simplifying by (2.32) and (2.40), we obtain

r(A†ABB† − BB†A†A) = 2r[A†A, BB†] − 2r(A†A) − 2r(BB†) + 2r(A†ABB†) = 2r(M) − 2r(A) − 2r(B) + 2r(AB).  (4.32)

Setting both sides of (4.32) equal to zero leads to the equivalence of 19 and 81. Replacing A†A with F_A in 20 leads to the equivalence of 21 and 82. Replacing BB† with E_B in 20 leads to the equivalence of 22 and 83. Replacing A†A with F_A and BB† with E_B in 20 leads to the equivalence of 23 and 84. Replacing A†A with F_A and BB† with E_B in 24, respectively, leads to the equivalence of 25, 82, and 83. Applying (2.17) to E_B A†ABB† and F_A BB†A†A and simplifying by Lemma 2.3(c), we obtain

r(E_B A†ABB†) = r(BB†A†A E_B) = r[B, A†ABB†] − r(B)
= r[B − A†AB, A†AB] − r(B)
= r(B − A†AB) + r(A†AB) − r(B)
= r(M) − r(A) − r(B) + r(AB),  (4.33)

r(A†ABB†F_A) = r(F_A BB†A†A) = r[A*, BB†A†A] − r(A)
= r[A* − BB†A*, BB†A*] − r(A)
= r(A* − BB†A*) + r(BB†A*) − r(A)
= r(M) − r(A) − r(B) + r(AB).  (4.34)

Setting all sides of (4.33) and (4.34) equal to zero leads to the equivalence of 26, 27, and 81. The following rank identities were proved in [16]:

r([A*, B][A*, B]† − A†A − BB† + A†ABB†) = r([A*, B][A*, B]† − A†A − BB† + BB†A†A) = r(M) − r(A) − r(B) + r(AB).
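The commutator identity (4.32) and the annihilation identity (4.33) can be spot-checked numerically as well; the sketch below uses the same ad hoc names (M = [A*, B], E_B = I − BB†):

```python
import numpy as np

rng = np.random.default_rng(4)
r = np.linalg.matrix_rank
dag = np.linalg.pinv

m, n, p = 5, 6, 4
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank 2
B = rng.standard_normal((n, 3)) @ rng.standard_normal((3, p))  # rank 3

PA = dag(A) @ A            # A†A
PB = B @ dag(B)            # BB†
EB = np.eye(n) - PB        # E_B
M = np.hstack([A.T, B])
rM, rA, rB, rAB = r(M), r(A), r(B), r(A @ B)

# (4.32): r(A†ABB† - BB†A†A) = 2r(M) - 2r(A) - 2r(B) + 2r(AB)
assert r(PA @ PB - PB @ PA) == 2 * (rM - rA - rB + rAB)

# (4.33): r(E_B A†ABB†) = r(M) - r(A) - r(B) + r(AB)
assert r(EB @ PA @ PB) == rM - rA - rB + rAB
print("(4.32) and (4.33) verified on this example")
```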
Setting all sides of these two identities equal to zero leads to the equivalence of 28 and 81. Replacing A†A with F_A in 28 leads to the equivalence of 29 and 82. Replacing BB† with E_B in 28 leads to the equivalence of 30 and 83. Replacing A†A with F_A and BB† with E_B in 28 leads to the equivalence of 31 and 84. Setting all sides of (3.10) and (3.11) equal to zero leads to the equivalence of 32–78 and 81. Setting all sides of (3.12) and (3.13) equal to zero leads to the equivalence of 79, 80, and 81. The equivalence of 81–84 and 91 follows from the well-known dimension formulas

dim[R(A*) ∩ R(B)] = r(A) + r(B) − r[A*, B],
dim[R(F_A) ∩ R(B)] = r(F_A) + r(B) − r[F_A, B],
dim[R(A*) ∩ R(E_B)] = r(A) + r(E_B) − r[A*, E_B],
dim[R(F_A) ∩ R(E_B)] = r(F_A) + r(E_B) − r[F_A, E_B].

Applying Lemma 2.3(a) and (b) to 26 and 27 leads to the equivalence of 26, 27, and 92.

Replacing A†A with F_A in 92 leads to the equivalence of 93 and 82. Replacing BB† with E_B in 92 leads to the equivalence of 94 and 83. Replacing A†A with F_A and BB† with E_B in 92 leads to the equivalence of 95 and 84. It follows first from 92 that

R(A†ABB†) ⊆ R(A†A) ∩ R(BB†) and/or R(BB†A†A) ⊆ R(A†A) ∩ R(BB†).  (4.35)

Furthermore, it follows from 81 that

dim[R(A†A) ∩ R(BB†)] = r(A†A) + r(BB†) − r[A†A, BB†] = r(A†ABB†) = r(BB†A†A).  (4.36)

Applying (2.26) to (4.35) and (4.36) leads to the range identities in 96. Conversely, 96 obviously implies 92. The equivalence of 93 and 97, of 94 and 98, and of 95 and 99 can be established similarly. By Lemma 2.3(c) and (2.19),

r[A†ABB†, BB†A†A] = r[(I_n − BB†)A†ABB†, BB†A†A]
= r[(I_n − BB†)A†ABB†] + r(BB†A†A)
= r[B, A†AB] − r(B) + r(AB)
= r[(I_n − A†A)B, A†AB] − r(B) + r(AB)
= r[(I_n − A†A)B] + r(A†AB) − r(B) + r(AB)
= r(M) − r(A) − r(B) + 2r(AB).  (4.37)

Combining (4.37) with (2.39) and (2.40), we obtain

r[A†ABB†, BB†A†A] − r(A†ABB†) = r[A†ABB†, BB†A†A] − r(BB†A†A) = r(M) − r(A) − r(B) + r(AB).  (4.38)

Setting both sides of (4.38) equal to zero, we obtain the equivalence of the first range equality in 100 and 81. Replacing A†A with F_A and BB† with E_B in the first range equality of 100, respectively, leads to the equivalences of the second, third, and fourth range equalities in 100 and those in 82, 83, and 84. Finally, by (2.17),

dim[R(AB) ∩ R(AE_B)] = r(AB) + r(AE_B) − r[AB, AE_B] = r[A*, B] − r(A) − r(B) + r(AB).  (4.39)

Setting both sides of (4.39) equal to zero, we obtain the equivalence of the first range equality in 101 and 81. The equivalence of the third fact in 101 with 81 can be established similarly. Replacing A†A with F_A and BB† with E_B in the first and third facts of 101, respectively, leads to the equivalences of the second and fourth facts in 101 with 82 and 83.
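Closing the proof, the range-side identities (4.37) and (4.39) admit the same kind of numerical spot-check (ad hoc names, M = [A*, B], E_B = I − BB†); intersection dimensions are computed from the rank formula dim[R(X) ∩ R(Y)] = r(X) + r(Y) − r[X, Y]:

```python
import numpy as np

rng = np.random.default_rng(5)
r = np.linalg.matrix_rank
dag = np.linalg.pinv

m, n, p = 5, 6, 4
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank 2
B = rng.standard_normal((n, 3)) @ rng.standard_normal((3, p))  # rank 3

PA = dag(A) @ A            # A†A
PB = B @ dag(B)            # BB†
EB = np.eye(n) - PB        # E_B
M = np.hstack([A.T, B])
rM, rA, rB, rAB = r(M), r(A), r(B), r(A @ B)

# (4.37): r[A†ABB†, BB†A†A] = r(M) - r(A) - r(B) + 2r(AB)
assert r(np.hstack([PA @ PB, PB @ PA])) == rM - rA - rB + 2 * rAB

# (4.39): dim[R(AB) ∩ R(AE_B)] = r(AB) + r(AE_B) - r[AB, AE_B]
#         = r[A*, B] - r(A) - r(B) + r(AB)
d = rAB + r(A @ EB) - r(np.hstack([A @ B, A @ EB]))
assert d == rM - rA - rB + rAB
print("(4.37) and (4.39) verified on this example")
```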
5 Concluding remarks

We have collected and established, as a summary, hundreds of known and novel identifying conditions for the reverse-order law in (1.12) to hold by using various matrix rank and range formulas. These conditions link versatile results in matrix theory together, and thus provide different views of reverse-order laws as well as a solid foundation for further approaches to (1.2). The results are easy to use in the many situations where generalized inverses of matrices are involved. Observe further that the last three reverse-order laws in (1.11) are special cases of the first law in (1.11); it is therefore possible to derive many novel and valuable identifying conditions for the last three reverse-order laws in (1.11) to hold based on the previous results. The matrix rank methodology has been recognized as a direct and effective tool for establishing and characterizing various matrix equalities in matrix theory since the 1970s, and it was introduced into the study of reverse-order laws in the 1990s by the present author (see [9, 10]). As mentioned in Section 1, it is a tremendous task to establish necessary and sufficient conditions for the thousands of possible equalities in (1.2) and their variations to hold. It is expected that more matrix rank formulas associated with (1.2) can be produced, and that many new and unexpected necessary and sufficient conditions for (1.2) to hold will be derived from them.

References

[1] E. Arghiriade. Sur les matrices qui sont permutables avec leur inverse généralisée. Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. 35 (1963), 244–251.

[2] E. Arghiriade. Remarques sur l'inverse généralisée d'un produit de matrices. Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. 42 (1967), 621–625.
[3] A. Ben-Israel, T.N.E. Greville. Generalized Inverses: Theory and Applications, 2nd ed., Springer, New York, 2003.
[4] S.L. Campbell, C.D. Meyer. Generalized Inverses of Linear Transformations. Pitman, London, 1979.
[5] I. Erdelyi. On the reverse order law related to the generalized inverse of matrix products. J. Assoc. Comput. Mach. 13 (1966), 439–443.
[6] T.N.E. Greville. Note on the generalized inverse of a matrix product. SIAM Rev. 8 (1966), 518–521.
[7] G. Marsaglia, G.P.H. Styan. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra 2 (1974), 269–292.
[8] C.R. Rao, S.K. Mitra. Generalized Inverse of Matrices and Its Applications. Wiley, New York, 1971.
[9] Y. Tian. The Moore–Penrose inverse of a triple matrix product (in Chinese). Math. Theory Practice 1 (1992), 64–70.
[10] Y. Tian. Reverse order laws for the generalized inverses of multiple matrix products. Linear Algebra Appl. 211 (1994), 85–100.
[11] Y. Tian. The maximal and minimal ranks of some expressions of generalized inverses of matrices. Southeast Asian Bull. Math. 25 (2002), 745–755.
[12] Y. Tian. Equalities and inequalities for ranks of products of generalized inverses of two matrices and their applications. J. Inequal. Appl. 182 (2016), 1–51.
[13] Y. Tian. How to establish exact formulas for calculating the max-min ranks of products of two matrices and their generalized inverses. Linear Multilinear Algebra, 2017, http://dx.doi.org/10.1080/03081087.2017.1283388.
[14] Y. Tian, S. Cheng. The maximal and minimal ranks of A − BXC with applications. New York J. Math. 9 (2003), 345–362.
[15] Y. Tian, G.P.H. Styan. Rank equalities for idempotent and involutory matrices. Linear Algebra Appl. 335 (2001), 101–117.
[16] Y. Tian, Y. Wang. Expansion formulas for orthogonal projectors onto ranges of row block matrices. J. Math. Res. Appl. 34 (2014), 147–154.