Generalized inverses of partitioned matrices in Banachiewicz–Schur form


Linear Algebra and its Applications 354 (2002) 41–47
www.elsevier.com/locate/laa

Generalized inverses of partitioned matrices in Banachiewicz–Schur form

Jerzy K. Baksalary a,*, George P.H. Styan b

a Department of Mathematics, Zielona Góra University, pl. Słowiański 9, PL 65-069 Zielona Góra, Poland
b Department of Mathematics and Statistics, McGill University, 805 Sherbrooke Street West, Montreal, QC, Canada H3A 2K6

Received 3 October 2001; accepted 24 February 2002
Submitted by S. Puntanen

* Corresponding author. E-mail addresses: j.baksalary@im.uz.zgora.pl (J.K. Baksalary), styan@together.net (G.P.H. Styan).

Abstract

The problem of developing conditions under which generalized inverses of a partitioned matrix can be expressed in the so-called Banachiewicz–Schur form is reconsidered. Theorem of Marsaglia and Styan [Sankhyā Ser. A 36 (1974) 437], concerning the class of all generalized inverses, the class of reflexive generalized inverses, and the Moore–Penrose inverse, is strengthened and new results are established for the classes of outer inverses, least-squares generalized inverses, and minimum norm generalized inverses. © 2002 Elsevier Science Inc. All rights reserved.

AMS classification: 15A09

Keywords: Inner inverse; Outer inverse; Reflexive generalized inverse; Least-squares generalized inverse; Minimum norm generalized inverse; Moore–Penrose inverse; Schur complement

0024-3795/02/$ - see front matter © 2002 Elsevier Science Inc. All rights reserved. PII: S0024-3795(02)00334-8

1. Introduction

Let $\mathbb{C}_{m,n}$ denote the set of $m \times n$ complex matrices. It is known that if a matrix $M \in \mathbb{C}_{m+p,m+p}$, partitioned as
$$ M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \qquad (1.1) $$

is such that $A \in \mathbb{C}_{m,m}$ is nonsingular and the Schur complement $S \in \mathbb{C}_{p,p}$ of $A$ in $M$, defined as
$$ S = D - CA^{-1}B, \qquad (1.2) $$
is also nonsingular, then the inverse of $M$ is expressible in the Banachiewicz–Schur form
$$ M^{-1} = \begin{pmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{pmatrix}. \qquad (1.3) $$

Marsaglia and Styan [1] substantially extended this result, replacing (1.1) by a general partitioned matrix $M \in \mathbb{C}_{m+p,n+q}$ of the form
$$ M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \qquad (1.4) $$
and considering the problem of characterizing situations where generalized inverses of $M$ admit representations similar to (1.3). Let us recall that the set of generalized inverses of a given matrix $K \in \mathbb{C}_{s,t}$ is specified by
$$ K\{1\} = \{L \in \mathbb{C}_{t,s} : KLK = K\}. \qquad (1.5) $$
Perhaps the best known element of $K\{1\}$ is the Moore–Penrose inverse of $K$, denoted by $K^{+}$, which according to Penrose [2] is defined by
$$ L = K^{+} \;\Longleftrightarrow\; KLK = K, \quad LKL = L, \quad KL = (KL)^{*}, \quad LK = (LK)^{*}, \qquad (1.6) $$
where the asterisk superscript denotes the conjugate transpose. Other classes of generalized inverses considered in this paper are: the set of outer inverses
$$ K\{2\} = \{L \in \mathbb{C}_{t,s} : LKL = L\}, \qquad (1.7) $$
the set of reflexive generalized inverses
$$ K\{1,2\} = \{L \in \mathbb{C}_{t,s} : KLK = K,\ LKL = L\}, \qquad (1.8) $$
the set of least-squares generalized inverses
$$ K\{1,3\} = \{L \in \mathbb{C}_{t,s} : KLK = K,\ KL = (KL)^{*}\}, \qquad (1.9) $$
and the set of minimum-norm generalized inverses
$$ K\{1,4\} = \{L \in \mathbb{C}_{t,s} : KLK = K,\ LK = (LK)^{*}\}. \qquad (1.10) $$

In addition to (1.4), there is some further notation common to all results of this paper. Namely, for any fixed generalized inverse $G \in A\{1\}$, $S \in \mathbb{C}_{p,q}$ will denote the generalized Schur complement of $A$ in $M$ partitioned as in (1.4), i.e.,
$$ S = D - CGB, \qquad (1.11) $$
and $N \in \mathbb{C}_{n+q,m+p}$ will stand for a partitioned matrix having the specific structure
$$ N = \begin{pmatrix} G + GBHCG & -GBH \\ -HCG & H \end{pmatrix}, \qquad (1.12) $$
with $H \in \mathbb{C}_{q,p}$.
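
As a quick sanity check of the classical formula (1.3), the following sketch (assuming NumPy; the block sizes and the random seed are arbitrary choices, not from the paper) builds a random complex partitioned matrix with nonsingular $A$ and $S$ and compares the Banachiewicz–Schur expression with a direct inversion of $M$.

```python
# Numerical check of the Banachiewicz-Schur formula (1.3), assuming NumPy.
import numpy as np

rng = np.random.default_rng(0)
m, p = 4, 3
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
B = rng.standard_normal((m, p)) + 1j * rng.standard_normal((m, p))
C = rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))
D = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))

M = np.block([[A, B], [C, D]])
Ai = np.linalg.inv(A)                  # A^{-1}; random A is nonsingular a.s.
S = D - C @ Ai @ B                     # Schur complement (1.2)
Si = np.linalg.inv(S)                  # S^{-1}

# Banachiewicz-Schur form (1.3)
M_bs = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                 [-Si @ C @ Ai,              Si          ]])

print(np.allclose(M_bs, np.linalg.inv(M)))   # expected: True
```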

The first part of Theorem in [1] asserts that, under the assumption that $H \in S\{1\}$, a generalized inverse of $M$ is expressible in the form (1.12) if and only if
$$ F_A B E_S = 0, \qquad F_S C E_A = 0, \qquad F_A B H C E_A = 0, \qquad (1.13) $$
where
$$ E_A = I_n - GA, \qquad F_A = I_m - AG, \qquad E_S = I_q - HS, \qquad F_S = I_p - SH, \qquad (1.14) $$
and conditions (1.13) are independent of the choice of $G \in A\{1\}$ and $H \in S\{1\}$ involved in (1.14). Solutions of a similar type were given also for the problem of characterizing the cases where $N \in M\{1,2\}$ and the cases where $N = M^{+}$.

The purpose of this paper is twofold. Firstly, we show that the requirements concerning $H$ are actually also necessary conditions and therefore can be added to Marsaglia and Styan's [1] conditions to form new sets of necessary and sufficient conditions. It is clear that this leads to strengthened versions of the original results. Secondly, in addition to the problems of when $N \in M\{1\}$, $N \in M\{1,2\}$, and $N = M^{+}$, three new problems of when $N \in M\{2\}$, $N \in M\{1,3\}$, and $N \in M\{1,4\}$ are solved. Consequently, results corresponding to the second and third parts of Theorem in [1] are here obtained as corollaries by combining solutions for the classes $M\{1\}$ and $M\{2\}$ to get a solution for the class $M\{1,2\}$, and combining solutions for the classes $M\{2\}$, $M\{1,3\}$, and $M\{1,4\}$ to get a solution for the Moore–Penrose inverse $M^{+}$.
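
For readers who want to experiment with these objects, here is a small helper (a sketch assuming NumPy; the function names are ours, not from the paper) that, for fixed choices $G \in A\{1\}$ and $H \in S\{1\}$, assembles the generalized Schur complement (1.11), the candidate inverse $N$ of (1.12), and the four matrices of (1.14), and tests the three conditions (1.13).

```python
import numpy as np

def bs_objects(A, B, C, D, G, H):
    """Return S of (1.11), N of (1.12), and E_A, F_A, E_S, F_S of (1.14)."""
    S = D - C @ G @ B
    N = np.block([[G + G @ B @ H @ C @ G, -G @ B @ H],
                  [-H @ C @ G,             H        ]])
    E_A = np.eye(A.shape[1]) - G @ A
    F_A = np.eye(A.shape[0]) - A @ G
    E_S = np.eye(S.shape[1]) - H @ S
    F_S = np.eye(S.shape[0]) - S @ H
    return S, N, E_A, F_A, E_S, F_S

def conditions_1_13(B, C, H, E_A, F_A, E_S, F_S):
    """True when the three conditions of (1.13) hold (up to round-off)."""
    return (np.allclose(F_A @ B @ E_S, 0)
            and np.allclose(F_S @ C @ E_A, 0)
            and np.allclose(F_A @ B @ H @ C @ E_A, 0))
```

Taking G = np.linalg.pinv(A) and H = np.linalg.pinv(S) supplies particular elements of $A\{1\}$ and $S\{1\}$; the numerical checks after the theorems below pick their generalized inverses in exactly this way.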

2. Results

In view of the notation (1.4), (1.11), (1.12), and (1.14), it follows that
$$ MN = \begin{pmatrix} AG - F_A BHCG & F_A BH \\ F_S CG & SH \end{pmatrix} \qquad (2.1) $$
and
$$ NM = \begin{pmatrix} GA - GBHCE_A & GBE_S \\ HCE_A & HS \end{pmatrix}. \qquad (2.2) $$
(Parenthetically notice that (2.1) contains a correction to the formula (18) in [1], where the factor $CG$ in the south-west submatrix has been mistakenly specified as a generalized inverse of $C$.) In what follows, the abbreviations NW, NE, SW, and SE will be used to denote the north-west, north-east, south-west, and south-east submatrix of a given matrix, respectively.

According to the purpose of this paper presented in Section 1, we will establish necessary and sufficient conditions under which a matrix $N$ of the form (1.12) belongs to a given class of generalized inverses of the matrix $M$ partitioned as in (1.4). Our interest focuses on six relations: $N \in M\{1\}$, $N \in M\{2\}$, $N \in M\{1,2\}$, $N \in M\{1,3\}$, $N \in M\{1,4\}$, and $N = M^{+}$.

Theorem 1. With the notation (1.4), (1.11), (1.12), and (1.14), $N \in M\{1\}$ if and only if
$$ H \in S\{1\}, \qquad F_A B E_S = 0, \qquad F_S C E_A = 0, \qquad F_A B H C E_A = 0, \qquad (2.3) $$
the last three conditions being independent of the choice of $G \in A\{1\}$ and $H \in S\{1\}$ involved in $E_A$, $F_A$, $E_S$, and $F_S$.

Proof. In view of the notation (1.11) and (1.14), it follows from (2.1) and (1.4) that
$$ (MNM)_{\mathrm{NW}} = AGA - F_A BHCGA + F_A BHC = A + F_A BHCE_A, $$
$$ (MNM)_{\mathrm{NE}} = AGB - F_A BHCGB + F_A BHD = AGB + F_A BHS, $$
$$ (MNM)_{\mathrm{SW}} = F_S CGA + SHC = CGA - SHCGA + SHC = CGA + SHCE_A, $$
$$ (MNM)_{\mathrm{SE}} = F_S CGB + SHD = CGB - SHCGB + SHD = CGB + SHS. $$
Consequently, on account of the definition (1.5), $N \in M\{1\}$ if and only if
$$ (MNM)_{\mathrm{NW}} = M_{\mathrm{NW}} \;\Longleftrightarrow\; F_A BHCE_A = 0, $$
$$ (MNM)_{\mathrm{NE}} = M_{\mathrm{NE}} \;\Longleftrightarrow\; F_A BHS = F_A B \;\Longleftrightarrow\; F_A BE_S = 0, $$
$$ (MNM)_{\mathrm{SW}} = M_{\mathrm{SW}} \;\Longleftrightarrow\; SHCE_A = CE_A \;\Longleftrightarrow\; F_S CE_A = 0, $$
$$ (MNM)_{\mathrm{SE}} = M_{\mathrm{SE}} \;\Longleftrightarrow\; SHS = S, $$
which proves necessity and sufficiency of conditions (2.3). The independence of the last three conditions in (2.3) of the choice of $G \in A\{1\}$ and $H \in S\{1\}$ can be ascertained referring to the fact that, for any matrix $K \in \mathbb{C}_{s,t}$ and any generalized inverses $L_1, L_2 \in K\{1\}$,
$$ (I_t - L_1 K)(I_t - L_2 K) = I_t - L_2 K, \qquad (I_s - K L_2)(I_s - K L_1) = I_s - K L_2, $$
and noting that these expressions are of the type $E_K$ and $F_K$, respectively.

Theorem 1 strengthens the first part of Theorem in [1] in the sense that $H \in S\{1\}$ is no longer an assumption, but one of the conditions characterizing situations where $N \in M\{1\}$. The second part of the above-mentioned theorem contains an answer to the question of when a matrix $N$ of the form (1.12) is a reflexive generalized inverse of $M$ given in (1.4). In the present paper, a strengthened version of this result is obtained by combining Theorem 1 with its counterpart concerning outer inverses, which has not hitherto been considered in the literature. Conditions constituting a solution in the latter case appear to be quite mild.
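
The sufficiency half of Theorem 1 is easy to see numerically. In the sketch below (assuming NumPy; the sizes, seed, and construction are our own choices) $A$ is rank-deficient, the columns of $B$ lie in the column space of $A$ and the rows of $C$ lie in the row space of $A$, so $F_A B = 0$ and $CE_A = 0$ for $G = A^{+}$; hence the conditions (2.3) hold for any $H \in S\{1\}$, and $N$ of (1.12) satisfies $MNM = M$.

```python
# Concrete check of Theorem 1 (a sketch, assuming NumPy).
import numpy as np

rng = np.random.default_rng(1)
m, n, p, q, r = 5, 4, 3, 2, 2
X, Y = rng.standard_normal((m, r)), rng.standard_normal((r, n))
A = X @ Y                             # rank r, so A has no ordinary inverse
B = X @ rng.standard_normal((r, q))   # col(B) inside col(A), hence F_A B = 0
C = rng.standard_normal((p, r)) @ Y   # row(C) inside row(A), hence C E_A = 0
D = rng.standard_normal((p, q))

G = np.linalg.pinv(A)                 # one choice of G in A{1}
S = D - C @ G @ B                     # generalized Schur complement (1.11)
H = np.linalg.pinv(S)                 # one choice of H in S{1}

M = np.block([[A, B], [C, D]])
N = np.block([[G + G @ B @ H @ C @ G, -G @ B @ H],
              [-H @ C @ G,             H        ]])

print(np.allclose(M @ N @ M, M))      # expected: True, by Theorem 1
```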

Theorem 2. With the notation (1.4), (1.11), and (1.12), $N \in M\{2\}$ if and only if $G \in A\{1,2\}$ and $H \in S\{2\}$.

Proof. Arguing similarly as in the proof of Theorem 1, we find that, with the additional temporary notation $HSH - H = Q$,
$$ (NMN)_{\mathrm{NW}} - N_{\mathrm{NW}} = -(E_A + GBHCE_A)(G + GBHCG) + GBQCG, \qquad (2.4) $$
$$ (NMN)_{\mathrm{NE}} - N_{\mathrm{NE}} = (E_A + GBHCE_A)GBH - GBQ, \qquad (2.5) $$
$$ (NMN)_{\mathrm{SW}} - N_{\mathrm{SW}} = HCE_A(G + GBHCG) - QCG, \qquad (2.6) $$
$$ (NMN)_{\mathrm{SE}} - N_{\mathrm{SE}} = -HCE_A GBH + Q. \qquad (2.7) $$
Substituting (2.7) into (2.6) yields $HCE_A(G + GBHCG) = HCE_A GBHCG$. Consequently,
$$ HCE_A G = 0, \qquad (2.8) $$
and thus (2.7) entails $Q = 0$, which in view of (1.7) corresponds to $H \in S\{2\}$. Moreover, the conditions in (2.4) and (2.5) then reduce to $E_A(G + GBHCG) = 0$ and $E_A GBH = 0$, and hence to $E_A G = 0$. But this means that $GAG = G$, i.e., $G \in A\{2\}$. Combining this assertion with the assumption that $G \in A\{1\}$ leads to $G \in A\{1,2\}$.

In view of the definition (1.8) of reflexive generalized inverses, an immediate consequence of Theorems 1 and 2 is the following strengthened version of the second part of Theorem in [1].

Corollary 1. With the notation (1.4), (1.11), (1.12), and (1.14), $N \in M\{1,2\}$ if and only if
$$ G \in A\{1,2\}, \qquad H \in S\{1,2\}, \qquad F_A B E_S = 0, \qquad F_S C E_A = 0, \qquad F_A B H C E_A = 0, $$
the last three conditions being independent of the choice of $G \in A\{1\}$ and $H \in S\{1\}$ involved in $E_A$, $F_A$, $E_S$, and $F_S$.

In a similar way we can strengthen the third part of Theorem in [1], which deals with the Moore–Penrose inverse $M^{+}$. Our solution will be obtained as a corollary to Theorem 2 combined with the next two results, concerned with the classes $M\{1,3\}$ and $M\{1,4\}$.
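
Theorem 2, by contrast, imposes no conditions on $B$, $C$, or $D$. The following sketch (assuming NumPy; sizes and seed are arbitrary) takes $G = A^{+} \in A\{1,2\}$ and $H = S^{+} \in S\{2\}$ with completely unstructured off-diagonal blocks; $N$ then comes out as an outer inverse of $M$, while $MNM = M$ generally fails because (2.3) need not hold.

```python
# Numerical illustration of Theorem 2 (a sketch, assuming NumPy).
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q, r = 5, 4, 3, 2, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-deficient
B = rng.standard_normal((m, q))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, q))

G = np.linalg.pinv(A)                 # G in A{1,2}
S = D - C @ G @ B
H = np.linalg.pinv(S)                 # H in S{2}

M = np.block([[A, B], [C, D]])
N = np.block([[G + G @ B @ H @ C @ G, -G @ B @ H],
              [-H @ C @ G,             H        ]])

print(np.allclose(N @ M @ N, N))   # expected: True, by Theorem 2
print(np.allclose(M @ N @ M, M))   # generally False here: (2.3) need not hold
```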

Theorem 3. With the notation (1.4), (1.11), (1.12), and (1.14), $N \in M\{1,3\}$ if and only if
$$ G \in A\{1,3\}, \qquad H \in S\{1,3\}, \qquad F_A B = 0, \qquad F_S C = 0, \qquad (2.9) $$
the last two conditions being independent of the choice of $G \in A\{1\}$ and $H \in S\{1\}$ involved in $F_A$ and $F_S$.

Proof. From the definition (1.9) of the class of least-squares generalized inverses it is clear that $N \in M\{1,3\}$ if and only if conditions (2.3) hold along with
$$ (MN)_{\mathrm{NW}} = (MN)_{\mathrm{NW}}^{*}, \qquad (MN)_{\mathrm{NE}} = (MN)_{\mathrm{SW}}^{*}, \qquad (MN)_{\mathrm{SE}} = (MN)_{\mathrm{SE}}^{*}. \qquad (2.10) $$
In view of (2.1), a consequence of the second condition in (2.10) and the equality $HF_S = E_S H$ is
$$ (MN)_{\mathrm{NE}}(MN)_{\mathrm{NE}}^{*} = (MN)_{\mathrm{NE}}(MN)_{\mathrm{SW}} = F_A BHF_S CG = F_A BE_S HCG. $$
Hence it is clear that the second condition in (2.3) implies $(MN)_{\mathrm{NE}}(MN)_{\mathrm{NE}}^{*} = 0$, which is equivalent to $(MN)_{\mathrm{NE}} = 0$ and thus also to $(MN)_{\mathrm{SW}} = 0$. In view of (2.1), this means that
$$ F_A BH = 0 \quad\text{and}\quad F_S CG = 0. \qquad (2.11) $$
The first of these conditions transforms the equality $(MN)_{\mathrm{NW}} = (MN)_{\mathrm{NW}}^{*}$ to $AG = (AG)^{*}$. On account of (1.9), combining this observation with the assumption that $G \in A\{1\}$ leads to $G \in A\{1,3\}$. The third part of (2.10) simply means that $SH = (SH)^{*}$, and therefore combining this assertion with the condition $H \in S\{1\}$ yields $H \in S\{1,3\}$. The final step of the proof consists in observing that, under the equalities (2.11), the second and third conditions in (2.3) strengthen to $F_A B = 0$ and $F_S C = 0$, respectively. The independence of these conditions of the choice of $G \in A\{1\}$ in $F_A$ and $H \in S\{1\}$ in $F_S$ follows by the same arguments as in the proof of Theorem 1.

Theorem 4. With the notation (1.4), (1.11), (1.12), and (1.14), $N \in M\{1,4\}$ if and only if
$$ G \in A\{1,4\}, \qquad H \in S\{1,4\}, \qquad BE_S = 0, \qquad CE_A = 0, $$
the last two conditions being independent of the choice of $G \in A\{1\}$ and $H \in S\{1\}$ involved in $E_A$ and $E_S$.

Proof. The proof is similar to that of Theorem 3.

In view of (1.6), (1.7), (1.9), and (1.10), an immediate consequence of Theorems 2–4 is the following strengthened version of the third part of Theorem in [1].
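
A convenient special case for seeing Theorems 3 and 4 (and Corollary 2 below) in action is $p = q$ with a nonsingular generalized Schur complement, since then $E_S = 0 = F_S$ and half of the conditions hold automatically. The sketch below (assuming NumPy; the construction is ours) additionally puts the columns of $B$ in the column space of $A$ and the rows of $C$ in the row space of $A$, takes $G = A^{+}$ and $H = S^{-1}$, and checks all four Penrose conditions (1.6) for $N$.

```python
# Numerical illustration of Theorems 3 and 4 (a sketch, assuming NumPy).
import numpy as np

rng = np.random.default_rng(3)
m, n, p, r = 5, 4, 3, 2                      # p = q here
X, Y = rng.standard_normal((m, r)), rng.standard_normal((r, n))
A = X @ Y
B = X @ rng.standard_normal((r, p))          # col(B) in col(A): F_A B = 0
C = rng.standard_normal((p, r)) @ Y          # row(C) in row(A): C E_A = 0
D = rng.standard_normal((p, p))

G = np.linalg.pinv(A)                        # G in A{1,3} and A{1,4}
S = D - C @ G @ B
H = np.linalg.inv(S)                         # S nonsingular, so H = S^+

M = np.block([[A, B], [C, D]])
N = np.block([[G + G @ B @ H @ C @ G, -G @ B @ H],
              [-H @ C @ G,             H        ]])

print(np.allclose(M @ N @ M, M), np.allclose(N @ M @ N, N))           # True True
print(np.allclose(M @ N, (M @ N).T), np.allclose(N @ M, (N @ M).T))   # True True
print(np.allclose(N, np.linalg.pinv(M)))     # True: N = M^+, anticipating Cor. 2
```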

Corollary 2. With the notation (1.4), (1.11), (1.12), and (1.14), $N = M^{+}$ if and only if
$$ G = A^{+}, \qquad H = S^{+}, \qquad F_A B = 0, \qquad CE_A = 0, \qquad BE_S = 0, \qquad F_S C = 0, $$
the last four conditions being independent of the choice of $G \in A\{1\}$ and $H \in S\{1\}$ involved in $E_A$, $F_A$, $E_S$, and $F_S$.

The final comment refers to the choice of generalized inverses occurring in the results of this paper. In Theorems 1, 3, and 4 and Corollaries 1 and 2 we emphasized that in all conditions involved in them the choice of $G \in A\{1\}$ and $H \in S\{1\}$ is negligible. However, there is one important formula in which the independence property of this type is not valid, viz. the formula (1.11) defining a generalized Schur complement of $A$ in $M$ of the form (1.4). For example, if
$$ A = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & 1 \end{pmatrix}, \qquad D = (1), \qquad (2.12) $$
then
$$ A\{1\} = \left\{ G = \begin{pmatrix} s & 1 \\ u & v \end{pmatrix} : s, u, v \in \mathbb{C} \right\}, \qquad (2.13) $$
and hence the set of all possible generalized Schur complements of $A$ in $M$ is
$$ \{S = D - CGB : G \in A\{1\}\} = \{(-v) : v \in \mathbb{C}\}. \qquad (2.14) $$
The set (2.14) is obviously dependent on the choice of $v$ in the representation (2.13) of a generalized inverse of $A$. If $v \neq 0$, then $E_S = (0) = F_S$, and since $F_A B = 0$, it follows from Theorem 1 that a generalized inverse of the matrix $M$ with submatrices (2.12) is expressible in the Banachiewicz–Schur form. If, however, $v = 0$, then $E_S = (1) = F_S$, and since $CE_A = \begin{pmatrix} 0 & 1 \end{pmatrix}$, a generalized inverse of $M$ in the form (1.12) does not exist. Consequently, all results given in this paper should be understood as regarding any, but fixed, Schur complement $S$.

Acknowledgements

This research was begun when George P.H. Styan visited Poznań, Poland, in June 2000. The authors wish to thank an anonymous referee for valuable comments and suggestions.

References

[1] G. Marsaglia, G.P.H. Styan, Rank conditions for generalized inverses of partitioned matrices, Sankhyā Ser. A 36 (1974) 437–442.
[2] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Philos. Soc. 51 (1955) 406–413.
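
For completeness, the concluding example (2.12)–(2.14) can be replayed numerically. The sketch below (assuming NumPy; the particular values of $s$, $u$, $v$ fed to check() are arbitrary) fixes one element of $A\{1\}$ for each of the two cases $v \neq 0$ and $v = 0$ and tests whether the resulting $N$ of (1.12) is a generalized inverse of $M$.

```python
# The example (2.12)-(2.14) in code (a sketch, assuming NumPy).
import numpy as np

A = np.array([[0., 0.], [1., 0.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 1.]])
D = np.array([[1.]])
M = np.block([[A, B], [C, D]])

def check(s, u, v):
    G = np.array([[s, 1.], [u, v]])          # an element of A{1}, cf. (2.13)
    S = D - C @ G @ B                        # equals (-v), cf. (2.14)
    H = np.linalg.pinv(S)                    # a choice of H in S{1}
    N = np.block([[G + G @ B @ H @ C @ G, -G @ B @ H],
                  [-H @ C @ G,             H        ]])
    return np.allclose(M @ N @ M, M)

print(check(0.3, -2.0, 1.5))   # v != 0: True, N is a generalized inverse of M
print(check(0.3, -2.0, 0.0))   # v  = 0: False, no N of the form (1.12) works
```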