More on generalized inverses of partitioned matrices with Banachiewicz-Schur forms

Yongge Tian^a, Yoshio Takane^b

^a China Economics and Management Academy, Central University of Finance and Economics, Beijing, China
^b Department of Psychology, McGill University, Montréal, Québec, Canada

E-mails: yongge@mail.shufe.edu.cn, takane@psych.mcgill.ca

Abstract. Necessary and sufficient conditions are derived for a 2-by-2 partitioned matrix to have {1}-, {1,2}-, {1,3}-, {1,4}-inverses and the Moore-Penrose inverse with Banachiewicz-Schur forms. As applications, the Banachiewicz-Schur forms of {1}-, {1,2}-, {1,3}-, {1,4}-inverses and the Moore-Penrose inverse of a 2-by-2 partitioned Hermitian matrix are also given.

Keywords: Banachiewicz-Schur form; partitioned matrix; generalized inverse; Moore-Penrose inverse; Hermitian matrix; Schur complement; matrix rank method

Mathematics Subject Classifications: 15A03; 15A09

1 Introduction

Throughout this paper, $\mathbb{C}^{m\times n}$ stands for the set of all $m\times n$ matrices over the field of complex numbers. The symbols $A^*$, $r(A)$ and $\mathscr{R}(A)$ stand for the conjugate transpose, the rank and the range (column space) of a matrix $A\in\mathbb{C}^{m\times n}$, respectively; $[\,A,\ B\,]$ denotes a row block matrix consisting of $A$ and $B$. The Moore-Penrose inverse of $A\in\mathbb{C}^{m\times n}$, denoted by $A^{\dagger}$, is defined to be the unique matrix $X\in\mathbb{C}^{n\times m}$ satisfying the following four matrix equations

(1) $AXA = A$,  (2) $XAX = X$,  (3) $(AX)^* = AX$,  (4) $(XA)^* = XA$.

Further, let $E_A = I_m - AA^{\dagger}$ and $F_A = I_n - A^{\dagger}A$ stand for the two orthogonal projectors induced by $A$. A matrix $X$ is called an $\{i,\ldots,j\}$-inverse of $A$, denoted by $A^{(i,\ldots,j)}$, if it satisfies the $i$th, ..., $j$th equations. The collection of all $\{i,\ldots,j\}$-inverses of $A$ is denoted by $\{A^{(i,\ldots,j)}\}$. Some frequently used generalized inverses of $A$ are $A^{(1)}$, $A^{(1,2)}$, $A^{(1,3)}$ and $A^{(1,4)}$.

Let $M$ be a $2\times 2$ block matrix

$$ M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \tag{1.1} $$

where $A\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{m\times k}$, $C\in\mathbb{C}^{l\times n}$ and $D\in\mathbb{C}^{l\times k}$. If $A$ in (1.1) is square and nonsingular, then $M$ can be decomposed as

$$ M = \begin{bmatrix} I_m & 0 \\ CA^{-1} & I_l \end{bmatrix} \begin{bmatrix} A & 0 \\ 0 & D - CA^{-1}B \end{bmatrix} \begin{bmatrix} I_n & A^{-1}B \\ 0 & I_k \end{bmatrix}. \tag{1.2} $$

This decomposition is often called the Aitken block-diagonalization formula in the literature; see Puntanen and Styan [9]. Moreover, if both $M$ and $A$ are nonsingular, then the Schur complement $S = D - CA^{-1}B$ is nonsingular too, and the inverse of $M$ can be written in the following form

$$ M^{-1} = \begin{bmatrix} I_n & -A^{-1}B \\ 0 & I_k \end{bmatrix} \begin{bmatrix} A^{-1} & 0 \\ 0 & S^{-1} \end{bmatrix} \begin{bmatrix} I_m & 0 \\ -CA^{-1} & I_l \end{bmatrix} = \begin{bmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{bmatrix}. \tag{1.3} $$

This well-known formula is called the Banachiewicz inversion formula for the inverse of a nonsingular partitioned matrix in the literature (see Puntanen and Styan [9]) and can be found in most linear algebra books. The two formulas in (1.2) and (1.3) and their consequences are widely used in manipulating partitioned matrices and their operations.
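As a quick illustration that is not part of the original paper, the following numerical sketch (assuming Python with NumPy; the block sizes, random data and variable names are ad hoc) checks the Banachiewicz inversion formula (1.3) on a randomly generated example in which both $A$ and $M$ are nonsingular.

```python
# Minimal check of the Banachiewicz inversion formula (1.3) on a random
# example in which both A and M are nonsingular (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
m, l = 4, 3
A = rng.standard_normal((m, m))
B = rng.standard_normal((m, l))
C = rng.standard_normal((l, m))
D = rng.standard_normal((l, l))
M = np.block([[A, B], [C, D]])

Ai = np.linalg.inv(A)
S = D - C @ Ai @ B                      # Schur complement S = D - C A^{-1} B
Si = np.linalg.inv(S)
M_bs = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                 [-Si @ C @ Ai,              Si]])

print(np.allclose(M_bs, np.linalg.inv(M)))   # expected: True
```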

When both $A$ and $M$ in (1.1) are singular, the two formulas in (1.2) and (1.3) can be extended to generalized inverses of matrices. A reasonable extension of (1.3) with generalized inverses of the submatrices in (1.1) is given by

$$ N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)}) = \begin{bmatrix} I_n & -A^{(i,\ldots,j)}B \\ 0 & I_k \end{bmatrix} \begin{bmatrix} A^{(i,\ldots,j)} & 0 \\ 0 & S^{(i,\ldots,j)} \end{bmatrix} \begin{bmatrix} I_m & 0 \\ -CA^{(i,\ldots,j)} & I_l \end{bmatrix} = \begin{bmatrix} A^{(i,\ldots,j)} + A^{(i,\ldots,j)}BS^{(i,\ldots,j)}CA^{(i,\ldots,j)} & -A^{(i,\ldots,j)}BS^{(i,\ldots,j)} \\ -S^{(i,\ldots,j)}CA^{(i,\ldots,j)} & S^{(i,\ldots,j)} \end{bmatrix}, \tag{1.4} $$

where $S = D - CA^{(i,\ldots,j)}B$. Eq. (1.4) is called the Banachiewicz-Schur form induced from $M$; see Baksalary and Styan [1]. It can be seen from (1.4) that the matrix $N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)})$ varies with the choice of $A^{(i,\ldots,j)}$ and $S^{(i,\ldots,j)}$. Let $\{N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)})\}$ denote the collection of all $N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)})$. Although the right-hand side of (1.4) is obtained by replacing inverses with generalized inverses, it is not necessarily an $\{i,\ldots,j\}$-inverse of $M$. In this case, it is of interest to investigate relations between generalized inverses of $M$ in (1.1) and the matrix $N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)})$ in (1.4), in particular, to derive necessary and sufficient conditions for $N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)})$ to be a generalized inverse of $M$ in (1.1). Some authors have investigated the relations between $M^{(i,\ldots,j)}$ and $N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)})$ for some special choices of $\{i,\ldots,j\}$. A well-known result asserts that

$$ M^{\dagger} = N(A^{\dagger}, S^{\dagger}) \iff \mathscr{R}(B)\subseteq\mathscr{R}(A),\ \mathscr{R}(C^*)\subseteq\mathscr{R}(A^*),\ \mathscr{R}(C)\subseteq\mathscr{R}(S)\ \text{and}\ \mathscr{R}(B^*)\subseteq\mathscr{R}(S^*), \tag{1.5} $$

where $S = D - CA^{\dagger}B$; see, e.g., [1]. Other results can be found, e.g., in [2, 4, 5, 8, 15, 16]. In a recent paper [16], we considered relations between $M^{(1)}$ and $N(A^{(1)}, S^{(1)})$ through the matrix rank method and obtained the rank formula

$$ \min_{A^{(1)},\, M^{(1)}} r\big[\,M^{(1)} - N(A^{(1)}, S^{(1)})\,\big] = \max\left\{\, r(M) - r(A) - r[\,C,\ D\,],\ \ r(M) - r(A) - r\begin{bmatrix} B \\ D \end{bmatrix},\ \ 0 \,\right\}, \tag{1.6} $$

together with a companion closed-form expression, in terms of ranks of block matrices formed from $A$, $B$, $C$ and $D$, for

$$ \max_{A^{(1)}}\ \min_{M^{(1)}} r\big[\,M^{(1)} - N(A^{(1)}, S^{(1)})\,\big]. \tag{1.7} $$

By setting the right-hand sides of the two rank formulas to zero, we obtained that

(a) There exist $A^{(1)}$ and $S^{(1)}$ such that $N(A^{(1)}, S^{(1)})$ is a {1}-inverse of $M$ if and only if

$$ r(M) \leqslant \min\left\{\, r(A) + r[\,C,\ D\,],\ \ r(A) + r\begin{bmatrix} B \\ D \end{bmatrix} \,\right\} \tag{1.8} $$

holds.

(b) The set inclusion $\{N(A^{(1)}, S^{(1)})\} \subseteq \{M^{(1)}\}$ holds if and only if

$$ \max_{A^{(1)}}\ \min_{M^{(1)}} r\big[\,M^{(1)} - N(A^{(1)}, S^{(1)})\,\big] = 0, \tag{1.9} $$

or equivalently, if and only if the corresponding block-matrix rank equalities and range inclusions for $E_AB$ and $CF_A$ given in [16] hold.
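To illustrate the discussion above (this sketch is not from the paper; the block sizes, random data and variable names are assumptions made for the example), one can form the Banachiewicz-Schur form (1.4) from a randomly perturbed {1}-inverse $A^{(1)} = A^{\dagger} + F_AU + VE_A$ and test the {1}-inverse property $MNM = M$ against the rank bounds in (1.8):

```python
# Sketch: build N(A^(1), S^(1)) of (1.4) from a random {1}-inverse of A and
# test whether it is a {1}-inverse of M, i.e. whether M N M = M holds.
import numpy as np

rng = np.random.default_rng(1)
rk = np.linalg.matrix_rank
m = n = 4; l = k = 2
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # singular (rank-2) block
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))
D = rng.standard_normal((l, k))
M = np.block([[A, B], [C, D]])

Ad = np.linalg.pinv(A)
FA = np.eye(n) - Ad @ A
EA = np.eye(m) - A @ Ad
A1 = Ad + FA @ rng.standard_normal((n, m)) + rng.standard_normal((n, m)) @ EA  # a {1}-inverse
S = D - C @ A1 @ B                                   # generalized Schur complement
S1 = np.linalg.pinv(S)                               # one choice of S^(1)
N = np.block([[A1 + A1 @ B @ S1 @ C @ A1, -A1 @ B @ S1],
              [-S1 @ C @ A1,              S1]])

print("A A^(1) A = A ?", np.allclose(A @ A1 @ A, A))
print("M N M = M ?   ", np.allclose(M @ N @ M, M))
print("r(M) =", rk(M), " r(A) + r[C, D] =", rk(A) + rk(np.hstack([C, D])),
      " r(A) + r[B; D] =", rk(A) + rk(np.vstack([B, D])))
```

With these block sizes $r(M)$ is typically $6$ while both bounds in (1.8) equal $4$, so the test $MNM = M$ normally fails, in line with part (a).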

As an extension of the previous investigation, we derive in this paper some rank formulas for the difference

$$ M^{(i,\ldots,j)} - N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)}) \tag{1.10} $$

for {1,2}-, {1,3}-, {1,4}-inverses and the Moore-Penrose inverses of matrices, and use the rank formulas to characterize the following relations

$$ \{N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)})\} \cap \{M^{(i,\ldots,j)}\} \neq \varnothing, \tag{1.11} $$

$$ \{N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)})\} \subseteq \{M^{(i,\ldots,j)}\}. \tag{1.12} $$

In order to establish rank equalities associated with (1.10), we need a variety of rank formulas for partitioned matrices and generalized Schur complements.

Lemma 1.1 ([7]) Let $A\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{m\times k}$, $C\in\mathbb{C}^{l\times n}$ and $D\in\mathbb{C}^{l\times k}$. Then

$$ r[\,A,\ B\,] = r(A) + r(B - AA^{(1)}B) = r(B) + r(A - BB^{(1)}A), \tag{1.13} $$

$$ r\begin{bmatrix} A \\ C \end{bmatrix} = r(A) + r(C - CA^{(1)}A) = r(C) + r(A - AC^{(1)}C), \tag{1.14} $$

$$ r\begin{bmatrix} A & B \\ C & 0 \end{bmatrix} = r(B) + r(C) + r\big[(I_m - BB^{(1)})A(I_n - C^{(1)}C)\big], \tag{1.15} $$

$$ r\begin{bmatrix} A & B \\ C & D \end{bmatrix} = r(A) + r\begin{bmatrix} 0 & B - AA^{(1)}B \\ C - CA^{(1)}A & D - CA^{(1)}B \end{bmatrix}. \tag{1.16} $$

Lemma 1.2 ([12, 14]) Let $M$ be as given in (1.1) and let $S = D - CA^{(i,\ldots,j)}B$. Then the maximal and minimal ranks of $S$, as $A^{(i,\ldots,j)}$ runs over the sets $\{A^{(1)}\}$, $\{A^{(1,2)}\}$, $\{A^{(1,3)}\}$ and $\{A^{(1,4)}\}$, can be expressed in closed form through the ranks of block matrices built from $A$, $B$, $C$ and $D$; these eight formulas, taken from [12, 14], are referred to below as (1.17)-(1.24). In addition,

$$ r(D - CA^{\dagger}B) = r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix} - r(A). \tag{1.25} $$
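The following sketch (not part of the paper; it assumes real matrices, so the conjugate transpose is the ordinary transpose, and it uses the Moore-Penrose inverse as one particular {1}-inverse) numerically spot-checks the rank formulas (1.13), (1.15) and (1.25):

```python
# Numerical spot-check of (1.13), (1.15) and (1.25) on random real matrices.
import numpy as np

rng = np.random.default_rng(2)
rk = np.linalg.matrix_rank
pinv = np.linalg.pinv

A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5 x 4, rank 3
B = rng.standard_normal((5, 2))
C = rng.standard_normal((3, 4))
D = rng.standard_normal((3, 2))

# (1.13): r[A, B] = r(A) + r(B - A A^(1) B)
print(rk(np.hstack([A, B])) == rk(A) + rk(B - A @ pinv(A) @ B))

# (1.15): r[A B; C 0] = r(B) + r(C) + r((I_m - B B^(1)) A (I_n - C^(1) C))
EB = np.eye(5) - B @ pinv(B)
FC = np.eye(4) - pinv(C) @ C
lhs = rk(np.block([[A, B], [C, np.zeros((3, 2))]]))
print(lhs == rk(B) + rk(C) + rk(EB @ A @ FC))

# (1.25): r(D - C A^+ B) = r([A*AA*, A*B; CA*, D]) - r(A)  (A* = A.T for real data)
W = np.block([[A.T @ A @ A.T, A.T @ B], [C @ A.T, D]])
print(rk(D - C @ pinv(A) @ B) == rk(W) - rk(A))
```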

The following lemma is derived from Lemma 1.2 by setting $B$ and $C$ to identity matrices in (1.17), (1.19), (1.21) and (1.23).

Lemma 1.3 Let $A\in\mathbb{C}^{m\times n}$ and $B\in\mathbb{C}^{n\times m}$. Then

$$ \min_{A^{(1)}} r(A^{(1)} - B) = r(ABA - A), \tag{1.26} $$

$$ \min_{A^{(1,2)}} r(A^{(1,2)} - B) = \max\{\, r(ABA - A),\ r(A) + r(B) - r(AB) - r(BA) \,\}, \tag{1.27} $$

$$ \min_{A^{(1,3)}} r(A^{(1,3)} - B) = r(A^*AB - A^*), \tag{1.28} $$

$$ \min_{A^{(1,4)}} r(A^{(1,4)} - B) = r(BAA^* - A^*). \tag{1.29} $$

2 Generalized inverses of partitioned matrices with Banachiewicz-Schur forms

We first give two rank formulas for the difference $M^{(1,2)} - N(A^{(1,2)}, S^{(1,2)})$.

Theorem 2.1 Let $M$ and $N(A^{(1,2)}, S^{(1,2)})$ be as given in (1.1) and (1.4), respectively, where $S = D - CA^{(1,2)}B$. Then

$$ \min_{A^{(1,2)},\, M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{(1,2)}, S^{(1,2)})\,\big] = \max\{\, r_1,\ r_2,\ r_3,\ 0 \,\}, \tag{2.1} $$

$$ \max_{A^{(1,2)}}\ \min_{M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{(1,2)}, S^{(1,2)})\,\big] = \min\{\, s_1,\ s_2 \,\}, \tag{2.2} $$

where

$$ r_1 = r(M) - 2r(A) - r(D), \quad r_2 = r(M) - r(A) - r[\,C,\ D\,], \quad r_3 = r(M) - r(A) - r\begin{bmatrix} B \\ D \end{bmatrix}, $$

and $s_1$, $s_2$ are two explicit expressions in the ranks of block matrices formed from $A$, $B$, $C$, $D$ and $M$. Hence,

(a) [16] There exist $A^{(1,2)}$ and $S^{(1,2)}$ such that $N(A^{(1,2)}, S^{(1,2)})$ is a {1,2}-inverse of $M$ if and only if

$$ r(M) \leqslant \min\left\{\, 2r(A) + r(D),\ \ r(A) + r\begin{bmatrix} B \\ D \end{bmatrix},\ \ r(A) + r[\,C,\ D\,] \,\right\}. \tag{2.3} $$

(b) The set inclusion $\{N(A^{(1,2)}, S^{(1,2)})\} \subseteq \{M^{(1,2)}\}$ holds if and only if $s_1 \leqslant 0$ or $s_2 \leqslant 0$ in (2.2); the first alternative is equivalent to (1.9), and the second can be rewritten as a set of rank equalities in $r(M)$, $r(A)$, $r(D)$, $r[\,C,\ D\,]$ and $r\big[\begin{smallmatrix} B \\ D \end{smallmatrix}\big]$.

Proof It was shown in [16] that

$$ \min_{M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{(1,2)}, S^{(1,2)})\,\big] = r(M) - r(A) - r(D - CA^{(1,2)}B), \tag{2.5} $$

so that

$$ \min_{A^{(1,2)},\, M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{(1,2)}, S^{(1,2)})\,\big] = r(M) - r(A) - \max_{A^{(1,2)}} r(D - CA^{(1,2)}B), \tag{2.6} $$

$$ \max_{A^{(1,2)}}\ \min_{M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{(1,2)}, S^{(1,2)})\,\big] = r(M) - r(A) - \min_{A^{(1,2)}} r(D - CA^{(1,2)}B). \tag{2.7} $$

Substituting (1.19) and (1.20) into (2.6) and (2.7) gives (2.1) and (2.2). Setting the right-hand side of (2.1) to zero leads to $r_1 \leqslant 0$, $r_2 \leqslant 0$ and $r_3 \leqslant 0$, that is, (2.3) holds. Setting the right-hand side of (2.2) to zero leads to $s_1 \leqslant 0$ or $s_2 \leqslant 0$. It can be seen from (1.7) that $s_1 \leqslant 0$ is equivalent to (1.9). Moreover, $s_2$ can be rewritten as a sum of three parts, each of which is nonnegative, so that setting $s_2 = 0$ gives the rank equalities mentioned in (b). $\Box$

Theorem 2.2 Let $M$ and $N(A^{(1,3)}, S^{(1,3)})$ be as given in (1.1) and (1.4). Then

$$ \min_{A^{(1,3)},\, M^{(1,3)}} r\big[\,M^{(1,3)} - N(A^{(1,3)}, S^{(1,3)})\,\big] = \max\{\, r_1,\ r_2 \,\}, \tag{2.8} $$

$$ \max_{A^{(1,3)}}\ \min_{M^{(1,3)}} r\big[\,M^{(1,3)} - N(A^{(1,3)}, S^{(1,3)})\,\big] = r_1 + r_3, \tag{2.9} $$

where

$$ r_1 = r[\,A,\ B\,] + r[\,C,\ D\,] - r\begin{bmatrix} A^*A & A^*B \\ C & D \end{bmatrix}, \qquad r_2 = r[\,A,\ B\,] + r[\,C,\ D\,] - r(A) - r\begin{bmatrix} B \\ D \end{bmatrix}, $$

$$ r_3 = r\begin{bmatrix} A^*A & A^*B \\ C & D \\ 0 & B \end{bmatrix} - r(A) - r\begin{bmatrix} B \\ D \end{bmatrix}. $$

Hence,

(a) [16] There exist $A^{(1,3)}$ and $S^{(1,3)}$ such that $N(A^{(1,3)}, S^{(1,3)})$ is a {1,3}-inverse of $M$ if and only if

$$ \mathscr{R}(B)\subseteq\mathscr{R}(A), \quad \mathscr{R}\big([\,A,\ B\,]^*\big)\cap\mathscr{R}\big([\,C,\ D\,]^*\big) = \{0\} \quad \text{and} \quad r[\,C,\ D\,] \leqslant r\begin{bmatrix} B \\ D \end{bmatrix} $$

hold.

(b) The set inclusion $\{N(A^{(1,3)}, S^{(1,3)})\} \subseteq \{M^{(1,3)}\}$ holds if and only if

$$ \mathscr{R}(B)\subseteq\mathscr{R}(A), \quad \mathscr{R}\big([\,A,\ B\,]^*\big)\cap\mathscr{R}\big([\,C,\ D\,]^*\big) = \{0\} \quad \text{and} \quad r\begin{bmatrix} A^*A & A^*B \\ C & D \\ 0 & B \end{bmatrix} = r(A) + r\begin{bmatrix} B \\ D \end{bmatrix} $$

hold.

Proof The formula

$$ \min_{M^{(1,3)}} r\big[\,M^{(1,3)} - N(A^{(1,3)}, S^{(1,3)})\,\big] = r[\,A,\ B\,] + r[\,C,\ D\,] - r(A) - r(D - CA^{(1,3)}B) \tag{2.10} $$

was shown in [16]. Substituting (1.21) and (1.22) into (2.10) gives (2.8) and (2.9). Setting the right-hand side of (2.8) to zero, we see that there exist $A^{(1,3)}$ and $S^{(1,3)}$ such that $N(A^{(1,3)}, S^{(1,3)}) \in \{M^{(1,3)}\}$ if and only if

$$ r[\,A,\ B\,] + r[\,C,\ D\,] = r\begin{bmatrix} A^*A & A^*B \\ C & D \end{bmatrix} \quad \text{and} \quad r[\,A,\ B\,] + r[\,C,\ D\,] \leqslant r(A) + r\begin{bmatrix} B \\ D \end{bmatrix}. \tag{2.11} $$

Note that

$$ r[\,A,\ B\,] + r[\,C,\ D\,] \geqslant r(A) + r[\,C,\ D\,] \geqslant r\begin{bmatrix} A^*A & A^*B \\ C & D \end{bmatrix}. $$

Hence, the first equality in (2.11) is equivalent to the first two conditions in (a), and, given these, the second inequality in (2.11) is equivalent to $r[\,C,\ D\,] \leqslant r\big[\begin{smallmatrix} B \\ D\end{smallmatrix}\big]$. Setting the right-hand side of (2.9) to zero and noting that the two terms on the right-hand side of (2.9) are nonnegative, we obtain (b). $\Box$

The following theorem can be shown similarly.

Theorem 2.3 Let $M$ and $N(A^{(1,4)}, S^{(1,4)})$ be as given in (1.1) and (1.4). Then

$$ \min_{A^{(1,4)},\, M^{(1,4)}} r\big[\,M^{(1,4)} - N(A^{(1,4)}, S^{(1,4)})\,\big] = \max\{\, r_1,\ r_2 \,\}, \qquad \max_{A^{(1,4)}}\ \min_{M^{(1,4)}} r\big[\,M^{(1,4)} - N(A^{(1,4)}, S^{(1,4)})\,\big] = r_1 + r_3, $$

where

$$ r_1 = r\begin{bmatrix} A \\ C \end{bmatrix} + r\begin{bmatrix} B \\ D \end{bmatrix} - r\begin{bmatrix} AA^* & B \\ CA^* & D \end{bmatrix}, \qquad r_2 = r\begin{bmatrix} A \\ C \end{bmatrix} + r\begin{bmatrix} B \\ D \end{bmatrix} - r(A) - r[\,C,\ D\,], $$

$$ r_3 = r\begin{bmatrix} AA^* & B & 0 \\ CA^* & D & C \end{bmatrix} - r(A) - r[\,C,\ D\,]. $$

Hence,

(a) [16] There exist $A^{(1,4)}$ and $S^{(1,4)}$ such that $N(A^{(1,4)}, S^{(1,4)})$ is a {1,4}-inverse of $M$ if and only if $\mathscr{R}(C^*)\subseteq\mathscr{R}(A^*)$, $\mathscr{R}\big[\begin{smallmatrix} A \\ C\end{smallmatrix}\big]\cap\mathscr{R}\big[\begin{smallmatrix} B \\ D\end{smallmatrix}\big] = \{0\}$ and $r\big[\begin{smallmatrix} B \\ D\end{smallmatrix}\big] \leqslant r[\,C,\ D\,]$ hold.

(b) The set inclusion $\{N(A^{(1,4)}, S^{(1,4)})\} \subseteq \{M^{(1,4)}\}$ holds if and only if $\mathscr{R}(C^*)\subseteq\mathscr{R}(A^*)$, $\mathscr{R}\big[\begin{smallmatrix} A \\ C\end{smallmatrix}\big]\cap\mathscr{R}\big[\begin{smallmatrix} B \\ D\end{smallmatrix}\big] = \{0\}$ and $r\big[\begin{smallmatrix} AA^* & B & 0 \\ CA^* & D & C \end{smallmatrix}\big] = r(A) + r[\,C,\ D\,]$ hold.

A special case of (1.4) corresponding to the Moore-Penrose inverse is given by

$$ N(A^{\dagger}, S^{\dagger}) = \begin{bmatrix} A^{\dagger} + A^{\dagger}BS^{\dagger}CA^{\dagger} & -A^{\dagger}BS^{\dagger} \\ -S^{\dagger}CA^{\dagger} & S^{\dagger} \end{bmatrix}, \tag{2.12} $$

where $S = D - CA^{\dagger}B$. The relations between $N(A^{\dagger}, S^{\dagger})$ and the $\{i,\ldots,j\}$-inverses of $M$ are given in the following theorems.
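The next sketch (illustrative only, not part of the paper) constructs an example in which the four range inclusions in (1.5) hold by design, so that the Banachiewicz-Schur form (2.12) built from Moore-Penrose inverses reproduces $M^{\dagger}$ exactly; the construction $B = AX$, $C = YA$ and a nonsingular Schur complement are assumptions made for the test.

```python
# Construct M so that R(B) is in R(A), R(C^T) is in R(A^T), and S = D - C A^+ B
# is nonsingular; then N(A^+, S^+) from (2.12) should equal pinv(M), cf. (1.5).
import numpy as np

rng = np.random.default_rng(3)
rk = np.linalg.matrix_rank
pinv = np.linalg.pinv

A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))   # rank-2 block
X = rng.standard_normal((4, 2)); Y = rng.standard_normal((2, 4))
B = A @ X                         # forces R(B) contained in R(A)
C = Y @ A                         # forces R(C^T) contained in R(A^T)
S0 = rng.standard_normal((2, 2)) + 3 * np.eye(2)                # nonsingular Schur complement
D = C @ pinv(A) @ B + S0
M = np.block([[A, B], [C, D]])

Ap = pinv(A); Sp = pinv(S0)
N = np.block([[Ap + Ap @ B @ Sp @ C @ Ap, -Ap @ B @ Sp],
              [-Sp @ C @ Ap,              Sp]])

print("N equals pinv(M)?", np.allclose(N, pinv(M)))             # expected: True
print("r(M) =", rk(M), "  r(A) + r(S) =", rk(A) + rk(S0))       # equal, cf. Theorem 2.4(c)
```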

Theorem 2.4 Let $M$ and $N(A^{\dagger}, S^{\dagger})$ be as given in (1.1) and (2.12), respectively. Then

$$ \min_{M^{(1)}} r\big[\,M^{(1)} - N(A^{\dagger}, S^{\dagger})\,\big] = \min_{M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{\dagger}, S^{\dagger})\,\big] = r(M) - r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix}. \tag{2.13} $$

Hence, the following statements are equivalent:

(a) $N(A^{\dagger}, S^{\dagger})$ is a {1}-inverse of $M$.

(b) $N(A^{\dagger}, S^{\dagger})$ is a {1,2}-inverse of $M$.

(c) $r(M) = r(A) + r(D - CA^{\dagger}B)$.

(d) $r(M) = r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix}$.

Proof It follows from (1.26) and (1.27) that

$$ \min_{M^{(1)}} r\big[\,M^{(1)} - N(A^{\dagger}, S^{\dagger})\,\big] = r\big[\,M - MN(A^{\dagger}, S^{\dagger})M\,\big], \tag{2.14} $$

$$ \min_{M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{\dagger}, S^{\dagger})\,\big] = \max\big\{\, r[M - MN(A^{\dagger}, S^{\dagger})M],\ \ r[N(A^{\dagger}, S^{\dagger})] + r(M) - r[MN(A^{\dagger}, S^{\dagger})] - r[N(A^{\dagger}, S^{\dagger})M] \,\big\}. \tag{2.15} $$

It is also easy to verify that

$$ r[MN(A^{\dagger}, S^{\dagger})] = r[N(A^{\dagger}, S^{\dagger})M] = r[N(A^{\dagger}, S^{\dagger})] = r(A) + r(S), \qquad r\big[\,M - MN(A^{\dagger}, S^{\dagger})M\,\big] = r(M) - r(A) - r(S), $$

where $S = D - CA^{\dagger}B$. Substituting these equalities and (1.25) into (2.14) and (2.15) gives (2.13). The equivalence of (a), (b), (c) and (d) follows from (2.13). $\Box$

Theorem 2.5 Let $M$ and $N(A^{\dagger}, S^{\dagger})$ be as given in (1.1) and (2.12), respectively. Then

$$ \min_{M^{(1,3)}} r\big[\,M^{(1,3)} - N(A^{\dagger}, S^{\dagger})\,\big] = r[\,A,\ B\,] + r[\,C,\ D\,] - r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix}. \tag{2.16} $$

Hence, the following statements are equivalent:

(a) $N(A^{\dagger}, S^{\dagger})$ is a {1,3}-inverse of $M$.

(b) $r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix} = r[\,A,\ B\,] + r[\,C,\ D\,]$.

(c) $r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix} = r[\,A,\ B\,] + r[\,C,\ D\,]$, $r[\,CA^*,\ D\,] = r[\,C,\ D\,]$ and $r[\,A^*AA^*,\ A^*B\,] = r[\,A,\ B\,]$.

(d) $\mathscr{R}\big([\,A^*AA^*,\ A^*B\,]^*\big)\cap\mathscr{R}\big([\,CA^*,\ D\,]^*\big) = \{0\}$, $\mathscr{R}(B)\subseteq\mathscr{R}(A)$ and $\mathscr{R}\big([\,CA^*,\ D\,]^*\big) = \mathscr{R}\big([\,C,\ D\,]^*\big)$.

Proof It follows from (1.28) that

$$ \min_{M^{(1,3)}} r\big[\,M^{(1,3)} - N(A^{\dagger}, S^{\dagger})\,\big] = r\big[\,M^*MN(A^{\dagger}, S^{\dagger}) - M^*\,\big]. \tag{2.17} $$

A direct computation of $MN(A^{\dagger}, S^{\dagger})$, together with elementary block matrix operations and (1.13), shows that the right-hand side of (2.17) equals $r[\,A,\ B\,] + r[\,C,\ D\,] - r(A) - r(D - CA^{\dagger}B)$.

Recall that elementary block matrix operations do not change the rank of a matrix. Thus we have (2.16) by (1.25). Also note that

$$ r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix} \leqslant r[\,A^*AA^*,\ A^*B\,] + r[\,CA^*,\ D\,] \leqslant r[\,A,\ B\,] + r[\,C,\ D\,]. $$

Applying this inequality to (b) leads to the equivalence of (b) and (c). The equivalence of (c) and (d) is obvious. $\Box$

The following result can be shown similarly.

Theorem 2.6 Let $M$ and $N(A^{\dagger}, S^{\dagger})$ be as given in (1.1) and (2.12), respectively. Then

$$ \min_{M^{(1,4)}} r\big[\,M^{(1,4)} - N(A^{\dagger}, S^{\dagger})\,\big] = r\begin{bmatrix} A \\ C \end{bmatrix} + r\begin{bmatrix} B \\ D \end{bmatrix} - r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix}. $$

Hence, the following statements are equivalent:

(a) $N(A^{\dagger}, S^{\dagger})$ is a {1,4}-inverse of $M$.

(b) $r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix} = r\begin{bmatrix} A \\ C \end{bmatrix} + r\begin{bmatrix} B \\ D \end{bmatrix}$.

(c) $r\begin{bmatrix} A^*AA^* & A^*B \\ CA^* & D \end{bmatrix} = r\begin{bmatrix} A \\ C \end{bmatrix} + r\begin{bmatrix} B \\ D \end{bmatrix}$, $r\begin{bmatrix} A^*AA^* \\ CA^* \end{bmatrix} = r\begin{bmatrix} A \\ C \end{bmatrix}$ and $r\begin{bmatrix} A^*B \\ D \end{bmatrix} = r\begin{bmatrix} B \\ D \end{bmatrix}$.

(d) $\mathscr{R}\begin{bmatrix} A^*AA^* \\ CA^* \end{bmatrix}\cap\mathscr{R}\begin{bmatrix} A^*B \\ D \end{bmatrix} = \{0\}$, $\mathscr{R}(C^*)\subseteq\mathscr{R}(A^*)$ and $\mathscr{R}\begin{bmatrix} A^*B \\ D \end{bmatrix} = \mathscr{R}\begin{bmatrix} B \\ D \end{bmatrix}$.

3 Generalized inverses of partitioned Hermitian matrices

Let $M$ be a Hermitian matrix, and partition $M$ as

$$ M = \begin{bmatrix} A & B \\ B^* & D \end{bmatrix}, \tag{3.1} $$

where $A = A^*\in\mathbb{C}^{m\times m}$, $B\in\mathbb{C}^{m\times n}$ and $D = D^*\in\mathbb{C}^{n\times n}$. The Banachiewicz-Schur form induced from $M$ is

$$ N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)}) = \begin{bmatrix} A^{(i,\ldots,j)} + A^{(i,\ldots,j)}BS^{(i,\ldots,j)}B^*A^{(i,\ldots,j)} & -A^{(i,\ldots,j)}BS^{(i,\ldots,j)} \\ -S^{(i,\ldots,j)}B^*A^{(i,\ldots,j)} & S^{(i,\ldots,j)} \end{bmatrix}, \tag{3.2} $$

where $S = D - B^*A^{(i,\ldots,j)}B$. Applying the results in Section 2 to (3.1) and (3.2) gives the following results.
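The following sketch (not part of the paper; it uses random real symmetric test data, so Hermitian reduces to symmetric) builds the Hermitian Banachiewicz-Schur form (3.2) with Moore-Penrose inverses for a generic singular $A$ and reports whether it is Hermitian and whether it is a {1}-inverse of $M$; for generic data the {1}-inverse property is expected to fail, in line with the rank conditions in the theorems below.

```python
# Hermitian case: M = [A B; B^T D] with A singular; N from (3.2) with
# Moore-Penrose inverses is always Hermitian, but generally not a {1}-inverse.
import numpy as np

rng = np.random.default_rng(4)
rk = np.linalg.matrix_rank
pinv = np.linalg.pinv

G = rng.standard_normal((4, 2))
A = G @ G.T                                   # real symmetric, rank 2
B = rng.standard_normal((4, 2))
H = rng.standard_normal((2, 2)); D = H + H.T  # symmetric block
M = np.block([[A, B], [B.T, D]])

Ap = pinv(A)
S = D - B.T @ Ap @ B
Sp = pinv(S)
N = np.block([[Ap + Ap @ B @ Sp @ B.T @ Ap, -Ap @ B @ Sp],
              [-Sp @ B.T @ Ap,              Sp]])

print("N Hermitian?", np.allclose(N, N.T))
print("M N M = M ? ", np.allclose(M @ N @ M, M))
print("r(M) =", rk(M), "  r(A) + r(S) =", rk(A) + rk(S))
```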

Theorem 3.1 Let $M$ and $N(A^{(1)}, S^{(1)})$ be as given in (3.1) and (3.2). Then

$$ \min_{A^{(1)},\, M^{(1)}} r\big[\,M^{(1)} - N(A^{(1)}, S^{(1)})\,\big] = \max\big\{\, r(M) - r(A) - r[\,B^*,\ D\,],\ 0 \,\big\}, $$

and $\max_{A^{(1)}} \min_{M^{(1)}} r\big[\,M^{(1)} - N(A^{(1)}, S^{(1)})\,\big]$ equals twice a nonnegative block-matrix rank expression in $A$, $B$ and $D$. Hence,

(a) There exist $A^{(1)}$ and $S^{(1)}$ such that $N(A^{(1)}, S^{(1)})$ is a {1}-inverse of $M$ if and only if $r(M) \leqslant r(A) + r[\,B^*,\ D\,]$.

(b) The set inclusion $\{N(A^{(1)}, S^{(1)})\} \subseteq \{M^{(1)}\}$ holds if and only if that rank expression vanishes.

Proof It follows from (1.6) and (1.7) by setting $C = B^*$. $\Box$

Theorem 3.2 Let $M$ and $N(A^{(1,2)}, S^{(1,2)})$ be as given in (3.1) and (3.2), where $S = D - B^*A^{(1,2)}B$. Then

$$ \min_{A^{(1,2)},\, M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{(1,2)}, S^{(1,2)})\,\big] = \max\{\, r_1,\ r_2,\ 0 \,\}, \qquad \max_{A^{(1,2)}}\ \min_{M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{(1,2)}, S^{(1,2)})\,\big] = \min\{\, s_1,\ s_2 \,\}, $$

where

$$ r_1 = r(M) - 2r(A) - r(D), \qquad r_2 = r(M) - r(A) - r[\,B^*,\ D\,], $$

and $s_1$, $s_2$ are the Hermitian specializations of the corresponding quantities in Theorem 2.1. Hence,

(a) There exist $A^{(1,2)}$ and $S^{(1,2)}$ such that $N(A^{(1,2)}, S^{(1,2)})$ is a {1,2}-inverse of $M$ if and only if

$$ r(M) \leqslant \min\{\, 2r(A) + r(D),\ r(A) + r[\,B^*,\ D\,] \,\}. \tag{3.3} $$

(b) The set inclusion $\{N(A^{(1,2)}, S^{(1,2)})\} \subseteq \{M^{(1,2)}\}$ holds if and only if $s_1 \leqslant 0$ or $s_2 \leqslant 0$.

Proof It follows from Theorem 2.1 by setting $C = B^*$. $\Box$

Theorem 3.3 Let $M$ and $N(A^{(1,3)}, S^{(1,3)})$ be as given in (3.1) and (3.2). Then

$$ \min_{A^{(1,3)},\, M^{(1,3)}} r\big[\,M^{(1,3)} - N(A^{(1,3)}, S^{(1,3)})\,\big] = \max\{\, r_1,\ r_2 \,\}, \qquad \max_{A^{(1,3)}}\ \min_{M^{(1,3)}} r\big[\,M^{(1,3)} - N(A^{(1,3)}, S^{(1,3)})\,\big] = r_1 + r_3, $$

where

$$ r_1 = r[\,A,\ B\,] + r[\,B^*,\ D\,] - r\begin{bmatrix} A^2 & AB \\ B^* & D \end{bmatrix}, \quad r_2 = r[\,A,\ B\,] + r[\,B^*,\ D\,] - r(A) - r\begin{bmatrix} B \\ D \end{bmatrix}, \quad r_3 = r\begin{bmatrix} A^2 & AB \\ B^* & D \\ 0 & B \end{bmatrix} - r(A) - r\begin{bmatrix} B \\ D \end{bmatrix}. $$

Hence,

(a) There exist $A^{(1,3)}$ and $S^{(1,3)}$ such that $N(A^{(1,3)}, S^{(1,3)})$ is a {1,3}-inverse of $M$ if and only if $\mathscr{R}(B)\subseteq\mathscr{R}(A)$ and $\mathscr{R}\big([\,A,\ B\,]^*\big)\cap\mathscr{R}\big([\,B^*,\ D\,]^*\big) = \{0\}$.

(b) The set inclusion $\{N(A^{(1,3)}, S^{(1,3)})\} \subseteq \{M^{(1,3)}\}$ holds if and only if $\mathscr{R}(B)\subseteq\mathscr{R}(A)$ and $\mathscr{R}\big([\,A,\ B\,]^*\big)\cap\mathscr{R}\big([\,B^*,\ D\,]^*\big) = \{0\}$.

Proof It follows from Theorem 2.2 by setting $C = B^*$. $\Box$

A special case of (3.2) corresponding to the Moore-Penrose inverse is

$$ N(A^{\dagger}, S^{\dagger}) = \begin{bmatrix} A^{\dagger} + A^{\dagger}BS^{\dagger}B^*A^{\dagger} & -A^{\dagger}BS^{\dagger} \\ -S^{\dagger}B^*A^{\dagger} & S^{\dagger} \end{bmatrix}, \tag{3.4} $$

where $S = D - B^*A^{\dagger}B$. The relations between $N(A^{\dagger}, S^{\dagger})$ and the $\{i,\ldots,j\}$-inverses of $M$ are given in the following theorems.

Theorem 3.4 Let $M$ and $N(A^{\dagger}, S^{\dagger})$ be as given in (3.1) and (3.4), respectively. Then

$$ \min_{M^{(1)}} r\big[\,M^{(1)} - N(A^{\dagger}, S^{\dagger})\,\big] = \min_{M^{(1,2)}} r\big[\,M^{(1,2)} - N(A^{\dagger}, S^{\dagger})\,\big] = r(M) - r\begin{bmatrix} A^3 & AB \\ B^*A & D \end{bmatrix}. $$

Hence, the following statements are equivalent:

(a) $N(A^{\dagger}, S^{\dagger})$ is a {1}-inverse of $M$.

(b) $N(A^{\dagger}, S^{\dagger})$ is a {1,2}-inverse of $M$.

(c) $r(M) = r(A) + r(D - B^*A^{\dagger}B)$.

(d) $r(M) = r\begin{bmatrix} A^3 & AB \\ B^*A & D \end{bmatrix}$.

(e) $\mathscr{R}\begin{bmatrix} A^3 & AB \\ B^*A & D \end{bmatrix} = \mathscr{R}(M)$.

Proof It follows from Theorem 2.4 by setting $C = B^*$. $\Box$

Theorem 3.5 Let $M$ and $N(A^{\dagger}, S^{\dagger})$ be as given in (3.1) and (3.4), respectively. Then

$$ \min_{M^{(1,3)}} r\big[\,M^{(1,3)} - N(A^{\dagger}, S^{\dagger})\,\big] = \min_{M^{(1,4)}} r\big[\,M^{(1,4)} - N(A^{\dagger}, S^{\dagger})\,\big] = r[\,A,\ B\,] + r[\,B^*,\ D\,] - r\begin{bmatrix} A^3 & AB \\ B^*A & D \end{bmatrix}. $$

Hence, the following statements are equivalent:

(a) $N(A^{\dagger}, S^{\dagger})$ is a {1,3}-inverse of $M$.

(b) $N(A^{\dagger}, S^{\dagger})$ is a {1,4}-inverse of $M$.

(c) $r\begin{bmatrix} A^3 & AB \\ B^*A & D \end{bmatrix} = r[\,A,\ B\,] + r[\,B^*,\ D\,]$.

(d) $\mathscr{R}\big([\,A,\ B\,]^*\big)\cap\mathscr{R}\big([\,B^*,\ D\,]^*\big) = \{0\}$, $\mathscr{R}(B)\subseteq\mathscr{R}(A)$ and $\mathscr{R}\big([\,B^*A,\ D\,]^*\big) = \mathscr{R}\big([\,B^*,\ D\,]^*\big)$.

Proof It follows from Theorems 2.5 and 2.6 by setting $C = B^*$. $\Box$

Assume the Hermitian matrix $M$ in (3.1) is nonnegative definite, that is, there exists a matrix $U$ such that $M = UU^*$. In this case,

$$ \mathscr{R}(B)\subseteq\mathscr{R}(A), \qquad \mathscr{R}(B^*)\subseteq\mathscr{R}(D) \qquad \text{and} \qquad S = D - B^*A^{(1)}B = D - B^*A^{\dagger}B \ \text{ for any } A^{(1)} $$

hold. Relations between {1}- and {1,2}-inverses of $M$ and $N(A^{(1)}, S^{(1)})$ and $N(A^{(1,2)}, S^{(1,2)})$ were considered by Rohde [11]. Applying Theorems 3.1, 3.2, 3.3 and (1.5) to the nonnegative definite Hermitian matrix in (3.1) gives the following result.

Theorem 3.6 Let $M$ and $N(A^{(1)}, S^{(1)})$ be as given in (3.1) and (3.2), and assume $M$ is nonnegative definite. Also let $S = D - B^*A^{\dagger}B$. Then,

(a) [11] The set inclusion $\{N(A^{(1)}, S^{(1)})\} \subseteq \{M^{(1)}\}$ always holds.

(b) [11] The set inclusion $\{N(A^{(1,2)}, S^{(1,2)})\} \subseteq \{M^{(1,2)}\}$ always holds.

(c) The following statements are equivalent:

(i) The set inclusion $\{N(A^{(1,3)}, S^{(1,3)})\} \subseteq \{M^{(1,3)}\}$ holds.

(ii) The set inclusion $\{N(A^{(1,4)}, S^{(1,4)})\} \subseteq \{M^{(1,4)}\}$ holds.

(iii) $M^{\dagger} = N(A^{\dagger}, S^{\dagger})$.

(iv) $\mathscr{R}\big([\,A,\ B\,]^*\big)\cap\mathscr{R}\big([\,B^*,\ D\,]^*\big) = \{0\}$.

An $\{i,\ldots,j\}$-inverse of a square matrix $A$ is said to be a Hermitian $\{i,\ldots,j\}$-inverse of $A$ if it is Hermitian; we write $A_{\rm h}^{(i,\ldots,j)}$ for such an inverse. It is easy to verify that any Hermitian matrix always has a Hermitian $\{i,\ldots,j\}$-inverse for any given set $\{i,\ldots,j\}$. Further, the Hermitian Banachiewicz-Schur form induced from the Hermitian matrix $M$ in (3.1) is defined to be

$$ N\big(A_{\rm h}^{(i,\ldots,j)}, S_{\rm h}^{(i,\ldots,j)}\big) = \begin{bmatrix} A_{\rm h}^{(i,\ldots,j)} + A_{\rm h}^{(i,\ldots,j)}BS_{\rm h}^{(i,\ldots,j)}B^*A_{\rm h}^{(i,\ldots,j)} & -A_{\rm h}^{(i,\ldots,j)}BS_{\rm h}^{(i,\ldots,j)} \\ -S_{\rm h}^{(i,\ldots,j)}B^*A_{\rm h}^{(i,\ldots,j)} & S_{\rm h}^{(i,\ldots,j)} \end{bmatrix}, \tag{3.5} $$

where $S = D - B^*A_{\rm h}^{(i,\ldots,j)}B$. In order to characterize relations between $M$ and $N\big(A_{\rm h}^{(i,\ldots,j)}, S_{\rm h}^{(i,\ldots,j)}\big)$, we need to know the extremal ranks of $D - B^*A_{\rm h}^{(i,\ldots,j)}B$ with respect to $A_{\rm h}^{(i,\ldots,j)}$, which now are open problems.

4 Generalized inverses of a bordered Hermitian matrix

Setting $D = 0$ in (3.1) gives

$$ M = \begin{bmatrix} A & B \\ B^* & 0 \end{bmatrix}, \tag{4.1} $$

where $A\in\mathbb{C}^{m\times m}$ is nonnegative definite and $B\in\mathbb{C}^{m\times n}$. This matrix occurs widely in various problems in matrix theory, in particular, in regression analysis. The Banachiewicz-Schur form induced from $M$ is

$$ N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)}) = \begin{bmatrix} A^{(i,\ldots,j)} + A^{(i,\ldots,j)}BS^{(i,\ldots,j)}B^*A^{(i,\ldots,j)} & -A^{(i,\ldots,j)}BS^{(i,\ldots,j)} \\ -S^{(i,\ldots,j)}B^*A^{(i,\ldots,j)} & S^{(i,\ldots,j)} \end{bmatrix}, \tag{4.2} $$

where $S = -B^*A^{(i,\ldots,j)}B$. Applying the results in Section 2 to (4.1) and (4.2) gives us the following results.
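As a small numerical companion (not from the paper; the construction $B = AX$ with a nonnegative definite $A$ is an assumption made for the test), the sketch below checks the equivalence of (g) and (h) in Theorem 4.1 below: when $\mathscr{R}(B)\subseteq\mathscr{R}(A)$, the form (4.2) built with Moore-Penrose inverses equals $M^{\dagger}$, while for a generic $B$ it does not.

```python
# Bordered Hermitian matrix M = [A B; B^T 0] with A nonnegative definite.
# If R(B) lies in R(A), then N(A^+, S^+) of (4.2) equals pinv(M); otherwise
# it generally does not (cf. Theorem 4.1, (g) <=> (h)).
import numpy as np

rng = np.random.default_rng(5)
pinv = np.linalg.pinv

def bs_form(A, B):
    """Banachiewicz-Schur form (4.2) of [A B; B^T 0] with Moore-Penrose inverses."""
    Ap = pinv(A)
    S = -B.T @ Ap @ B
    Sp = pinv(S)
    return np.block([[Ap + Ap @ B @ Sp @ B.T @ Ap, -Ap @ B @ Sp],
                     [-Sp @ B.T @ Ap,              Sp]])

G = rng.standard_normal((4, 2))
A = G @ G.T                                   # nonnegative definite, rank 2

B_in = A @ rng.standard_normal((4, 2))        # R(B_in) contained in R(A)
M_in = np.block([[A, B_in], [B_in.T, np.zeros((2, 2))]])
print(np.allclose(bs_form(A, B_in), pinv(M_in)))    # expected: True

B_out = rng.standard_normal((4, 2))           # generically not contained in R(A)
M_out = np.block([[A, B_out], [B_out.T, np.zeros((2, 2))]])
print(np.allclose(bs_form(A, B_out), pinv(M_out)))  # expected: False
```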

Theorem 4.1 Let $M$ and $N(A^{(1)}, S^{(1)})$ be as given in (4.1) and (4.2). Then the following statements are equivalent:

(a) There exist $A^{(1)}$ and $S^{(1)}$ such that $N(A^{(1)}, S^{(1)}) \in \{M^{(1)}\}$.

(b) The set inclusion $\{N(A^{(1)}, S^{(1)})\} \subseteq \{M^{(1)}\}$ holds.

(c) There exist $A^{(1,3)}$ and $S^{(1,3)}$ such that $N(A^{(1,3)}, S^{(1,3)}) \in \{M^{(1,3)}\}$.

(d) The set inclusion $\{N(A^{(1,3)}, S^{(1,3)})\} \subseteq \{M^{(1,3)}\}$ holds.

(e) There exist $A^{(1,4)}$ and $S^{(1,4)}$ such that $N(A^{(1,4)}, S^{(1,4)}) \in \{M^{(1,4)}\}$.

(f) The set inclusion $\{N(A^{(1,4)}, S^{(1,4)})\} \subseteq \{M^{(1,4)}\}$ holds.

(g) $M^{\dagger} = N(A^{\dagger}, S^{\dagger})$.

(h) $\mathscr{R}(B)\subseteq\mathscr{R}(A)$.

Theorem 4.2 Let $M$ and $N(A^{(1)}, S^{(1)})$ be as given in (4.1) and (4.2). Then the following statements are equivalent:

(a) There exist $A^{(1,2)}$ and $S^{(1,2)}$ such that $N(A^{(1,2)}, S^{(1,2)}) \in \{M^{(1,2)}\}$.

(b) The set inclusion $\{N(A^{(1,2)}, S^{(1,2)})\} \subseteq \{M^{(1,2)}\}$ holds.

(c) $r[\,A,\ B\,] \leqslant 2r(A) - r(B)$ or $\mathscr{R}(B)\subseteq\mathscr{R}(A)$.

5 Concluding remarks

If $D$ in (1.1) is square and nonsingular, then the matrix $M$ in (1.1) can also be decomposed as

$$ M = \begin{bmatrix} I_m & BD^{-1} \\ 0 & I_l \end{bmatrix} \begin{bmatrix} A - BD^{-1}C & 0 \\ 0 & D \end{bmatrix} \begin{bmatrix} I_n & 0 \\ D^{-1}C & I_k \end{bmatrix}. \tag{5.1} $$

If both $M$ and $D$ are nonsingular in (1.1), then the Schur complement $T = A - BD^{-1}C$ is nonsingular, too, and the inverse of $M$ can also be written as

$$ M^{-1} = \begin{bmatrix} T^{-1} & -T^{-1}BD^{-1} \\ -D^{-1}CT^{-1} & D^{-1} + D^{-1}CT^{-1}BD^{-1} \end{bmatrix}. \tag{5.2} $$

By symmetry, another type of Banachiewicz-Schur form induced from $M$ is given by

$$ K(D^{(i,\ldots,j)}, T^{(i,\ldots,j)}) = \begin{bmatrix} T^{(i,\ldots,j)} & -T^{(i,\ldots,j)}BD^{(i,\ldots,j)} \\ -D^{(i,\ldots,j)}CT^{(i,\ldots,j)} & D^{(i,\ldots,j)} + D^{(i,\ldots,j)}CT^{(i,\ldots,j)}BD^{(i,\ldots,j)} \end{bmatrix}, \tag{5.3} $$

where $T = A - BD^{(i,\ldots,j)}C$. Although (1.3) and (5.2) are identical, (1.4) and (5.3) are not necessarily the same. Applying the results in the previous sections to (5.3), we can derive various conclusions on relations between $M$ and $K(D^{(i,\ldots,j)}, T^{(i,\ldots,j)})$. Furthermore, it is of interest to give necessary and sufficient conditions for the following equality

$$ N(A^{(i,\ldots,j)}, S^{(i,\ldots,j)}) = K(D^{(i,\ldots,j)}, T^{(i,\ldots,j)}) \tag{5.4} $$

to hold, or for equalities of the corresponding submatrices in them to hold. All the results in this paper on partitioned matrices and the Banachiewicz-Schur forms induced from them can be used to further study various problems related to block matrices and their generalized inverses.
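To accompany the remark that (1.4) and (5.3) need not coincide (this sketch is not part of the paper; the data are random and the behaviour described is only what one typically observes), the code below compares the two Banachiewicz-Schur forms $N(A^{\dagger}, S^{\dagger})$ and $K(D^{\dagger}, T^{\dagger})$ for singular diagonal blocks and then for nonsingular blocks, where both reduce to $M^{-1}$:

```python
# Compare the two Banachiewicz-Schur forms N(A^+, S^+) from (1.4)/(2.12) and
# K(D^+, T^+) from (5.3).  With singular A and D they generally differ; with
# nonsingular A, D and M both coincide with inv(M).
import numpy as np

rng = np.random.default_rng(6)
pinv = np.linalg.pinv

def n_form(A, B, C, D):
    Ap = pinv(A); S = D - C @ Ap @ B; Sp = pinv(S)
    return np.block([[Ap + Ap @ B @ Sp @ C @ Ap, -Ap @ B @ Sp],
                     [-Sp @ C @ Ap,              Sp]])

def k_form(A, B, C, D):
    Dp = pinv(D); T = A - B @ Dp @ C; Tp = pinv(T)
    return np.block([[Tp,            -Tp @ B @ Dp],
                     [-Dp @ C @ Tp,  Dp + Dp @ C @ Tp @ B @ Dp]])

# Singular diagonal blocks: the two forms typically differ.
A = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 3))
D = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 3))
B = rng.standard_normal((3, 3)); C = rng.standard_normal((3, 3))
print(np.allclose(n_form(A, B, C, D), k_form(A, B, C, D)))   # typically False

# Nonsingular blocks: both forms equal inv(M), as in (1.3) and (5.2).
A = rng.standard_normal((3, 3)); D = rng.standard_normal((3, 3))
M = np.block([[A, B], [C, D]])
print(np.allclose(n_form(A, B, C, D), np.linalg.inv(M)),
      np.allclose(k_form(A, B, C, D), np.linalg.inv(M)))     # expected: True True
```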

Acknowledgements. We are grateful to the anonymous referees for their helpful comments and suggestions on an earlier version of this paper.

References

[1] J.K. Baksalary and G.P.H. Styan. Generalized inverses of partitioned matrices in Banachiewicz-Schur form. Linear Algebra Appl. 354(2002), 41-47.

[2] A. Ben-Israel. A note on partitioned matrices and equations. SIAM Rev. 11(1969), 247-250.

[3] A. Ben-Israel and T.N.E. Greville. Generalized Inverses: Theory and Applications, 2nd Ed., Springer-Verlag, New York, 2003.

[4] P. Bhimasankaram. On generalized inverses of partitioned matrices. Sankhyā Ser. A 33(1971), 311-314.

[5] F. Burns, D. Carlson, E. Haynsworth and T. Markham. Generalized inverse formulas using the Schur complement. SIAM J. Appl. Math. 26(1974), 254-259.

[6] S.L. Campbell and C.D. Meyer. Generalized Inverses of Linear Transformations, Corrected reprint of the 1979 original, Dover Publications, Inc., New York, 1991.

[7] G. Marsaglia and G.P.H. Styan. Equalities and inequalities for ranks of matrices. Linear and Multilinear Algebra 2(1974), 269-292.

[8] G. Marsaglia and G.P.H. Styan. Rank conditions for generalized inverses of partitioned matrices. Sankhyā Ser. A 36(1974), 437-442.

[9] S. Puntanen and G.P.H. Styan. Schur complements in statistics and probability. In: The Schur Complement and Its Applications (F. Zhang, ed.), Springer, pp. 163-226, 2005.

[10] C.R. Rao and S.K. Mitra. Generalized Inverse of Matrices and Its Applications. Wiley, New York, 1971.

[11] C.A. Rohde. Generalized inverses of partitioned matrices. SIAM J. Appl. Math. 13(1965), 1033-1035.

[12] Y. Tian. Upper and lower bounds for ranks of matrix expressions using generalized inverses. Linear Algebra Appl. 355(2002), 187-214.

[13] Y. Tian. Rank equalities for block matrices and their Moore-Penrose inverses. Houston J. Math. 30(2004), 483-510.

[14] Y. Tian. More on maximal and minimal ranks of Schur complements with applications. Appl. Math. Comput. 152(2004), 675-692.

[15] Y. Tian. Eight expressions for generalized inverses of a bordered matrix. Linear and Multilinear Algebra, in press.

[16] Y. Tian and Y. Takane. Schur complements and Banachiewicz-Schur forms. Electron. J. Linear Algebra 13(2005), 405-418.