On V-orthogonal projectors associated with a semi-norm


Short Title: V-orthogonal projectors

Yongge Tian^a and Yoshio Takane^b
a School of Economics, Shanghai University of Finance and Economics, Shanghai 200433, China
b Department of Psychology, McGill University, Montréal, Québec, Canada
E-mails: yongge@mail.shufe.edu.cn, takane@psych.mcgill.ca

Abstract. For any n × p matrix X and n × n nonnegative definite matrix V, the matrix X(X'VX)^+X'V is called a V-orthogonal projector with respect to the semi-norm ‖·‖_V, where (·)^+ denotes the Moore-Penrose inverse of a matrix. Various new properties of the V-orthogonal projector are derived under the condition that rank(VX) = rank(X), including its rank, complement, equivalent expressions, conditions for additive decomposability, and equivalence conditions between two (V-)orthogonal projectors.

Mathematics Subject Classifications (2000): 62J05; 62H12; 15A09

Keywords: General linear model; weighted least-squares estimator; V-orthogonal projector; Moore-Penrose inverses of matrices; rank formulas for partitioned matrices

1 Introduction

Throughout this paper, R^{m×n} stands for the collection of all m × n real matrices. The symbols A', r(A), R(A) and N(A) stand for the transpose, the rank, the range (column space) and the kernel (null space) of a matrix A ∈ R^{m×n}, respectively; R^⊥(A) stands for the orthogonal complement of R(A). The Moore-Penrose inverse of A, denoted by A^+, is defined to be the unique solution G to the four matrix equations

(i) AGA = A,  (ii) GAG = G,  (iii) (AG)' = AG,  (iv) (GA)' = GA.

A matrix G is called a generalized inverse (g-inverse) of A, denoted by A^-, if it satisfies (i), and an outer inverse of A if it satisfies (ii). Further, let P_A, E_A and F_A stand for the three orthogonal projectors

P_A = AA^+,  E_A = I_m − AA^+,  F_A = I_n − A^+A.

Let V ∈ R^{n×n} be a nonnegative definite (nnd) matrix, i.e., V can be expressed as V = ZZ' for some Z. The semi-norm of a vector x ∈ R^{n×1} induced by V is defined by ‖x‖_V = (x'Vx)^{1/2}.
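As a quick numerical illustration of these definitions (not part of the original paper), the following Python/NumPy sketch checks the four Penrose equations and the three projectors on a random rank-deficient matrix; numpy.linalg.pinv plays the role of (·)^+.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # a rank-3 5x4 matrix
G = np.linalg.pinv(A)                                          # Moore-Penrose inverse A^+

# The four Penrose equations (i)-(iv)
assert np.allclose(A @ G @ A, A)          # (i)
assert np.allclose(G @ A @ G, G)          # (ii)
assert np.allclose((A @ G).T, A @ G)      # (iii)
assert np.allclose((G @ A).T, G @ A)      # (iv)

# The orthogonal projectors P_A, E_A and F_A
P_A = A @ G                  # projects onto R(A)
E_A = np.eye(5) - P_A        # projects onto R(A)^perp
F_A = np.eye(4) - G @ A      # projects onto N(A)
assert np.allclose(P_A @ A, A) and np.allclose(A @ F_A, 0)
```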

Suppose we are given a general linear model

y = Xβ + ε, E(ε) = 0, Cov(ε) = σ²Σ, (1)

or in the triplet form

M = {y, Xβ, σ²Σ}, (2)

where X ∈ R^{n×p} is a known matrix, y ∈ R^{n×1} is an observable random vector, ε ∈ R^{n×1} is a random error vector, β ∈ R^{p×1} is a vector of unknown parameters, σ² is a positive unknown parameter, and Σ ∈ R^{n×n} is a known nonnegative definite matrix. As is well known, for a given nnd matrix V, the weighted least-squares estimator (WLSE) of β under (2), denoted by WLSE_M(β), is defined to be

β̂ = argmin_β (y − Xβ)'V(y − Xβ). (3)

The WLSE of Xβ under (2), denoted by WLSE_M(Xβ), is defined to be Xβ̂. The normal equation associated with (3) is X'VXβ = X'Vy. This equation is always consistent. Solving it gives the following well-known result.

Lemma 1 The general expression of the WLSE of β under (2) is given by

β̂ = (X'VX)^+X'Vy + [ I_p − (VX)^+(VX) ]u = (X'VX)^+X'Vy + F_{VX}u, (4)

where u ∈ R^{p×1} is arbitrary.

For y ≠ 0, let u = Uy in (4), where U ∈ R^{p×n} is arbitrary. Then the WLSEs of β and Xβ under (2) can be written in the following homogeneous forms:

WLSE_M(β) = [ (X'VX)^+X'V + F_{VX}U ]y, (5)

WLSE_M(Xβ) = [ X(X'VX)^+X'V + XF_{VX}U ]y. (6)

Further, let P_{X:V} denote the matrix pre-multiplying y in (6):

P_{X:V} = X(X'VX)^+X'V + XF_{VX}U, (7)

which is called the projector into R(X) with respect to the semi-norm ‖·‖_V; see Mitra and Rao (1974). Because of the arbitrary matrix U in (7), U can be chosen so that P_{X:V} takes some special forms, for example,

P_{X:V} = X(X'VX)^-X'V, (8)

P_{X:V} = X(X'VX)^+X'V, (9)

P_{X:V} = XX^- + X(X'VX)^-X'V( I_n − XX^- ), (10)

P_{X:V} = XX^+ + X(X'VX)^+X'V( I_n − XX^+ ). (11)

It can be seen from (7) that P_{X:V} is not necessarily unique. In addition, P²_{X:V} = P_{X:V} and P_{X:V}X = X do not necessarily hold for a given U in (7). Using the notation in (7), the expectation and the covariance matrix of WLSE_M(Xβ) are given by

E[ WLSE_M(Xβ) ] = P_{X:V}Xβ and Cov[ WLSE_M(Xβ) ] = σ²P_{X:V}ΣP'_{X:V}. (12)
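The quantities above are easy to compute explicitly. The following NumPy sketch (an illustration under randomly generated data, not from the paper) forms the WLSE with the choice u = 0 in (4) and the projector (9), and confirms the relation WLSE_M(Xβ) = P_{X:V}y of (6); since V here is almost surely pd, rank(VX) = rank(X) holds and P_{X:V} is idempotent.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, n))
V = Z @ Z.T                              # nnd weight matrix (almost surely pd here)
y = rng.standard_normal(n)

# WLSE of beta: the choice u = 0 in (4)
beta = np.linalg.pinv(X.T @ V @ X) @ X.T @ V @ y

# The projector (9); by (6), WLSE_M(X beta) = P_{X:V} y
P = X @ np.linalg.pinv(X.T @ V @ X) @ X.T @ V
assert np.allclose(P @ y, X @ beta)
assert np.allclose(P @ P, P)             # idempotent, since rank(VX) = rank(X) here
```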

Eqs. (6) and (12) indicate that the algebraic and statistical properties of the WLSE of Xβ under (2) are mainly determined by the projector P_{X:V} in (7). Hence it is quite essential to know various properties of this projector when applying WLSE_M(Xβ) in statistical practice, for example, the rank, range, trace, norm, uniqueness, idempotency, symmetry and decompositions of the projector, as well as various equalities involving projectors. The projector P_{X:V} in (7) and its special cases have been widely investigated in the literature since the 1970s; see, e.g., Harville (1997), Mitra and Rao (1974), Rao (1974), Rao and Mitra (1971a, 1971b), Takane and Yanai (1999) and Tian and Takane (2007b). In Tian and Takane (2007b), a variety of new properties of projectors associated with WLSEs were derived by making use of the matrix rank method. As further extensions of Tian and Takane (2007b), of particular interest in the present paper are various properties of P_{X:V} when it is unique.

Some rank formulas for partitioned matrices due to Marsaglia and Styan (1974, Theorem 19) are given below; they can be used to simplify various matrix expressions involving generalized inverses. In what follows, [A, B] denotes a row block matrix and [A; C] denotes the block matrix with A stacked on top of C.

Lemma 2 Let A ∈ R^{m×n}, B ∈ R^{m×k} and C ∈ R^{l×n}. Then

r[ A, B ] = r(A) + r[ ( I_m − AA^- )B ] = r(B) + r[ ( I_m − BB^- )A ], (13)

r[ A; C ] = r(A) + r[ C( I_n − A^-A ) ] = r(C) + r[ A( I_n − C^-C ) ], (14)

where the ranks do not depend on the particular choice of A^-, B^- and C^-.

The results in the following lemma are given in Tian and Styan (2001, Theorem 2.1 and Corollary 2.3).

Lemma 3 Any pair of idempotent matrices A and B of order m satisfy the following two rank formulas:

r( A − B ) = r[ A; B ] + r[ A, B ] − r(A) − r(B), (15)

r( I_m − A − B ) = m + r(AB) + r(BA) − r(A) − r(B). (16)

Hence,

A = B ⇔ R(A) = R(B) and R(A') = R(B'), (17)

A + B = I_m ⇔ r(A) + r(B) = m + r(AB) + r(BA). (18)

The result in the following lemma is shown in Tian and Takane (2007a, Lemma 1.3).

Lemma 4 Let A ∈ R^{m×n}, and let G_1, G_2, G_3 ∈ R^{n×m} be three outer inverses of A, i.e., G_iAG_i = G_i, i = 1, 2, 3. Also suppose R(G_i) ⊆ R(G_1) and R(G'_i) ⊆ R(G'_1), i = 2, 3. Then

r( G_1 − G_2 − G_3 ) = r(G_1) − r(G_2) − r(G_3) + r(G_2AG_3) + r(G_3AG_2). (19)
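Lemmas 2 and 3 can be spot-checked numerically; in the sketch below (illustrative only, not from the paper), formula (13) is tested with the particular g-inverse A^- = A^+, and formula (15) with two random orthogonal projectors, which are in particular idempotent.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
A = rng.standard_normal((m, 4)) @ rng.standard_normal((4, 5))  # rank-4 6x5 matrix
B = rng.standard_normal((m, 3))
r = np.linalg.matrix_rank

# Marsaglia-Styan column formula (13), with A^- chosen as A^+
lhs = r(np.hstack([A, B]))
rhs = r(A) + r((np.eye(m) - A @ np.linalg.pinv(A)) @ B)
assert lhs == rhs

# Rank formula (15) for two idempotent matrices (here orthogonal projectors)
U = rng.standard_normal((m, 2))
W = rng.standard_normal((m, 3))
PU = U @ np.linalg.pinv(U)
PW = W @ np.linalg.pinv(W)
assert r(PU - PW) == r(np.vstack([PU, PW])) + r(np.hstack([PU, PW])) - r(PU) - r(PW)
```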

The following result is given in Tian (2004, Corollary 2).

Lemma 5 Let A ∈ R^{m×n} and B ∈ R^{m×k}. Then

( P_AP_B )^+ = P_BP_A − P_B( E_BE_A )^+P_A. (20)

We also use the following well-known results on Moore-Penrose inverses, ranges and ranks of matrices:

A = AA'(A^+)' = (A^+)'A'A, (A^+)^+ = A, (A')^+ = (A^+)', (21)

A^+ = (A'A)^+A' = A'(AA')^+, (22)

R(B) ⊆ R(A) ⇔ r[ A, B ] = r(A) ⇔ AA^+B = B, (23)

R(B) ⊆ R(A) and r(B) = r(A) ⇒ R(B) = R(A), (24)

R(A) = R(AA') = R(AA^+) = R[ (A^+)' ], (25)

R(A') = R(A'A) = R(A^+A) = R(A^+), (26)

R(AB^+B) = R(AB^+) = R(AB'), (27)

R(A_1) = R(A_2) and R(B_1) = R(B_2) ⇒ r[ A_1, B_1 ] = r[ A_2, B_2 ]. (28)

Moreover, if V is nnd, then

VV^+ = V^+V, R(V) = R(V^{1/2}) = R(V^+), R(A'V) = R(A'V^{1/2}) = R(A'V^+), (29)

where V^{1/2} is the nnd square root of V; see, e.g., Ben-Israel and Greville (2003) and Rao and Mitra (1971b).
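A few of these identities are verified numerically below (an illustrative sketch, not from the paper); the nnd square root V^{1/2} in (29) is built from an eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
pinv = np.linalg.pinv
r = np.linalg.matrix_rank

# (22): A^+ = (A'A)^+ A' = A'(AA')^+
assert np.allclose(pinv(A), pinv(A.T @ A) @ A.T)
assert np.allclose(pinv(A), A.T @ pinv(A @ A.T))

# (29): for nnd V, VV^+ = V^+V and R(A'V) = R(A'V^{1/2})
Z = rng.standard_normal((6, 2))
V = Z @ Z.T                                          # nnd with rank 2
assert np.allclose(V @ pinv(V), pinv(V) @ V)
w, Q = np.linalg.eigh(V)
V_half = Q @ np.diag(np.sqrt(w.clip(min=0))) @ Q.T   # nnd square root V^{1/2}
assert r(np.hstack([A.T @ V, A.T @ V_half])) == r(A.T @ V) == r(A.T @ V_half)
```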

2 Properties of P_{X:V} when it is unique

We first give a series of equivalent statements on the uniqueness of the projector P_{X:V} in (7).

Theorem 6 Let P_{X:V} be as given in (7). Then the following statements are equivalent:

(a) P_{X:V} is unique.

(b) R(X'V) = R(X').

(c) r(VX) = r(X).

(d) R(X) ∩ R(E_V) = {0}.

(e) R(X) ∩ R(E_{VX}) = {0}.

(f) r[ E_X, V ] = n.

(g) E_X + V is positive definite (pd).

In this case, the unique V-orthogonal projector can be written as

P_{X:V} = X(X'VX)^+X'V. (30)

Proof. From (7), the projector P_{X:V} is unique if and only if X(VX)^+(VX) = X, which, by (23), is equivalent to R(X') ⊆ R(X'V), that is, to R(X'V) = R(X'), and hence to r(VX) = r(X). It is easy to find from (13) that

r[ X, E_V ] = r(P_V X) + r(E_V) = r(VX) + r(E_V) = r(VX) + n − r(V),
r[ X, E_{VX} ] = r(P_{VX}X) + r(E_{VX}) = r(VX) + r(E_{VX}) = n,
r[ E_X, V ] = r(E_X) + r(P_X V) = r(E_X) + r(VX) = n − r(X) + r(VX),
r( E_X + V ) = r[ E_X, V ] = r(E_X) + r(VX) = n − r(X) + r(VX)

hold. Hence,

R(X) ∩ R(E_V) = {0} ⇔ r[ X, E_V ] = r(X) + r(E_V) ⇔ r(VX) = r(X),
R(X) ∩ R(E_{VX}) = {0} ⇔ r[ X, E_{VX} ] = r(X) + r(E_{VX}) ⇔ r(VX) = r(X),
r[ E_X, V ] = n ⇔ r(VX) = r(X),
E_X + V is pd ⇔ r[ E_X, V ] = n,

establishing the equivalence of (c), (d), (e), (f) and (g).

If the projector P_{X:V} in (7) is unique, it is often called the V-orthogonal projector onto R(X).
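The following sketch (illustrative, with generic random matrices, so the stated ranks hold almost surely; not part of the paper) exercises Theorem 6: the weight V is singular, yet rank(VX) = rank(X), E_X + V is positive definite, and the projector (30) is idempotent and acts as the identity on R(X).

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 6, 2
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, 4))
V = Z @ Z.T                      # singular nnd weight: rank(V) = 4 < n
r = np.linalg.matrix_rank

# Theorem 6(c): rank(VX) = rank(X) although V itself is singular
assert r(V @ X) == r(X)

# Theorem 6(g): E_X + V is positive definite
E_X = np.eye(n) - X @ np.linalg.pinv(X)
assert np.all(np.linalg.eigvalsh(E_X + V) > 0)

# The unique V-orthogonal projector (30): idempotent and identity on R(X)
P = X @ np.linalg.pinv(X.T @ V @ X) @ X.T @ V
assert np.allclose(P @ P, P) and np.allclose(P @ X, X)
```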

The following theorem gives a variety of properties of the V-orthogonal projector P_{X:V} in (30), some of which were given in the literature; see, e.g., Harville (1997) and Mitra and Rao (1974).

Theorem 7 Let P_{X:V} be as given in (30) and suppose r(VX) = r(X). Then:

(a) P_{X:V}Z = Z holds for any Z with R(Z) ⊆ R(X).

(b) P²_{X:V} = P_{X:V}.

(c) R(P_{X:V}) = R(X) and R(P'_{X:V}) = R(VX).

(d) N(P_{X:V}) = N(X'V) and N(P'_{X:V}) = N(X').

(e) P_{X:V} can be written in the following nine forms:

P_{X:V} = X(V^{1/2}X)^+V^{1/2} (31)
= XX'V( X'VXX'V )^+X'V (32)
= X( XX'VX )^+XX'V (33)
= P_XV( X'VP_XV )^+X'V (34)
= X( P_XVX )^+P_XV (35)
= ( P_{VX}P_X )^+ (36)
= P_XP_{VX} − P_X( E_XE_{VX} )^+P_{VX} (37)
= XX'( XX' + E_{VX} )^{-1} (38)
= P_X( P_X + E_{VX} )^{-1}, (39)

where V^{1/2} is the nonnegative definite square root of V.

(f) P_{X:V} = P_{X:V}VV^+ and VP_{X:V} = P'_{X:V}V.

(g) P_XP_{X:V} = P_{X:V}, P_{X:V}P_X = P_X and ( P_X − P_{X:V} )² = 0.

(h) P_{XX':V}, P_{P_X:V}, P_{P_{X:V}:V}, P_{X:(λE_X+V)} and P_{X:P_{VX}} are unique, and

P_{X:V} = P_{XX':V} = P_{P_X:V} = P_{P_{X:V}:V} = P_{X:(λE_X+V)} = P_{X:P_{VX}},

where λ is any real number.

(i) P'_{X:V} = P_{VX:(E_X+V)^{-1}}.

(j) P_{X:V} + P'_{E_X:(E_X+V)^{-1}} = I_n.

(k) P_{X:V} + P_{(E_X+V)^{-1}E_X:(E_X+V)} = I_n.

(l) P_{VX:V^+} is unique and P_{VX:V^+} = VP_{X:V}V^+.

Proof. Parts (a) and (b) are obvious. Parts (c) and (d) are derived by (27). Applying (22) to (V^{1/2}X)^+ gives X(V^{1/2}X)^+V^{1/2} = X(X'VX)^+X'V, as required for (31). Denote the eight matrices in (32)-(39) by

Z_1 = XX'V( X'VXX'V )^+X'V, Z_2 = X( XX'VX )^+XX'V,
Z_3 = P_XV( X'VP_XV )^+X'V, Z_4 = X( P_XVX )^+P_XV,
Z_5 = ( P_{VX}P_X )^+, Z_6 = P_XP_{VX} − P_X( E_XE_{VX} )^+P_{VX},
Z_7 = XX'( XX' + E_{VX} )^{-1}, Z_8 = P_X( P_X + E_{VX} )^{-1}.

Then it is easy to verify that Z_1, ..., Z_5 are all idempotent and that

R(Z_1) = R(Z_2) = R(Z_3) = R(Z_4) = R(Z_5) = R(X) = R(P_{X:V}),
R(Z'_1) = R(Z'_2) = R(Z'_3) = R(Z'_4) = R(Z'_5) = R(VX) = R(P'_{X:V}).

In these cases, applying (17) to P_{X:V}, Z_1, ..., Z_5 yields P_{X:V} = Z_1 = ... = Z_5. The equality Z_5 = Z_6 follows from (20). By (13),

r( XX' + E_{VX} ) = r[ X, E_{VX} ] = r(P_{VX}X) + r(E_{VX}) = r(VX) + r(E_{VX}) = n,
r( P_X + E_{VX} ) = r[ P_X, E_{VX} ] = r(P_{VX}P_X) + r(E_{VX}) = r(VX) + r(E_{VX}) = n.

Hence both XX' + E_{VX} and P_X + E_{VX} are nonsingular. From (a) and (30), we obtain P_{X:V}XX' = XX', P_{X:V}P_X = P_X and P_{X:V}(VX)(VX)^+ = P_{X:V}, so that

r( P_{X:V} − Z_7 ) = r[ P_{X:V} − XX'( XX' + E_{VX} )^{-1} ] = r[ P_{X:V}( XX' + E_{VX} ) − XX' ] = r[ P_{X:V} − P_{X:V}(VX)(VX)^+ ] = 0,
r( P_{X:V} − Z_8 ) = r[ P_{X:V} − P_X( P_X + E_{VX} )^{-1} ] = r[ P_{X:V}( P_X + E_{VX} ) − P_X ] = r[ P_{X:V} − P_{X:V}(VX)(VX)^+ ] = 0.

Hence P_{X:V} = Z_7 = Z_8, establishing (e). The result in (f) is straightforward from (30). The first two equalities in (g) are derived from (a) and (c). The third equality in (g) follows from (b) and the first two equalities in (g). If P_{X:V} is unique, then the five projectors P_{XX':V}, P_{P_X:V}, P_{P_{X:V}:V}, P_{X:(λE_X+V)} and P_{X:P_{VX}} are also unique by Theorem 6. In these cases, it is easy to verify that

R(P_{XX':V}) = R(P_{P_X:V}) = R(P_{P_{X:V}:V}) = R(P_{X:(λE_X+V)}) = R(P_{X:P_{VX}}) = R(X),
R(P'_{XX':V}) = R(P'_{P_X:V}) = R(P'_{P_{X:V}:V}) = R(P'_{X:(λE_X+V)}) = R(P'_{X:P_{VX}}) = R(VX).

Hence the equalities in (h) hold by (17). If P_{X:V} is unique, then E_X + V is pd by Theorem 6(g), so that the projector P_{VX:(E_X+V)^{-1}} is unique, too. In this case,

r[ (E_X+V)^{-1}VX, X ] = r[ VX, (E_X+V)X ] = r[ VX, VX ] = r(VX) = r(X). (40)

This implies R[ (E_X+V)^{-1}VX ] = R(X) by (24). In these cases, we derive from (c) that

r(P_{VX:(E_X+V)^{-1}}) = r(VX) = r(X),
R(P_{VX:(E_X+V)^{-1}}) = R(VX),
R(P'_{VX:(E_X+V)^{-1}}) = R[ (E_X+V)^{-1}VX ] = R(X).

Hence applying (17) to the two idempotent matrices P'_{X:V} and P_{VX:(E_X+V)^{-1}} leads to (i). Since both P_{X:V} and P_{E_X:(E_X+V)^{-1}} are unique, we obtain from (c) that

r(P_{X:V}) = r(X) and r(P_{E_X:(E_X+V)^{-1}}) = r(E_X) = n − r(X).

Thus

r(P_{X:V}) + r(P_{E_X:(E_X+V)^{-1}}) = n. (41)

On the other hand, (40) is equivalent to E_X(E_X+V)^{-1}VX = 0 by (13). In such a case, it is easy to verify that

P_{X:V}P'_{E_X:(E_X+V)^{-1}} = 0 and P'_{E_X:(E_X+V)^{-1}}P_{X:V} = 0. (42)

Applying (18) to (41) and (42) results in (j). Similarly, we can show that

r(P_{X:V}) + r(P_{(E_X+V)^{-1}E_X:(E_X+V)}) = n,
P_{X:V}P_{(E_X+V)^{-1}E_X:(E_X+V)} = P_{(E_X+V)^{-1}E_X:(E_X+V)}P_{X:V} = 0.

Hence we obtain (k) from (18). Since r(V^+VX) = r(VX), the projector P_{VX:V^+} is unique by Theorem 6(c). In such a case, we obtain from (30) that

P_{VX:V^+} = VX( X'VV^+VX )^+X'VV^+ = VX( X'VX )^+X'VV^+ = VP_{X:V}V^+,

as required for (l).
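As a numerical spot-check of Theorem 7(e) (not part of the paper), the sketch below compares expression (30) with the alternative forms (31), (36) and (38) on random data; proj is a hypothetical helper returning the orthogonal projector P_M = MM^+.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 7, 3
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, n))
V = Z @ Z.T                                         # pd weight (almost surely)
pinv = np.linalg.pinv

def proj(M):
    """Orthogonal projector P_M = M M^+ (hypothetical helper)."""
    return M @ pinv(M)

w, Q = np.linalg.eigh(V)
V_half = Q @ np.diag(np.sqrt(w.clip(min=0))) @ Q.T  # V^{1/2}

P30 = X @ pinv(X.T @ V @ X) @ X.T @ V               # (30)
P31 = X @ pinv(V_half @ X) @ V_half                 # (31)
P36 = pinv(proj(V @ X) @ proj(X))                   # (36)
E_VX = np.eye(n) - proj(V @ X)
P38 = X @ X.T @ np.linalg.inv(X @ X.T + E_VX)       # (38)
for Pk in (P31, P36, P38):
    assert np.allclose(Pk, P30)
```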

In the remaining part of this section, we give some rank equalities for two projectors and then use them to characterize relations between the projectors. The results obtained can be used to investigate relationships between two estimators under two linear models.

Theorem 8 Let P_{X:V} be as given in (30) and suppose r(VX) = r(X). Then

r( P_{X:V} − P'_{X:V} ) = 2r[ X, VX ] − 2r(X), (43)

r( P_{X:V} − P_X ) = r[ X, VX ] − r(X), (44)

r( P_{VX} − P_X ) = 2r[ X, VX ] − 2r(X). (45)

Hence, the following statements are equivalent:

(a) P_{X:V} = P'_{X:V}.

(b) P_{X:V} = P_X, i.e., P_{X:V} is the orthogonal projector onto R(X).

(c) P_{VX} = P_X, i.e., P_{VX} is the orthogonal projector onto R(X).

(d) P_{X:V} = P_{VX:(E_X+V)^{-1}}.

(e) P_{E_X:(E_X+V)^{-1}} is the orthogonal projector onto R^⊥(X).

(f) P_{(E_X+V)^{-1}E_X:(E_X+V)} is the orthogonal projector onto R^⊥(X).

(g) R(VX) ⊆ R(X).

Proof. Since both P_{X:V} and P'_{X:V} are idempotent, we derive from (15) that

r( P_{X:V} − P'_{X:V} ) = r[ P_{X:V}; P'_{X:V} ] + r[ P_{X:V}, P'_{X:V} ] − 2r(P_{X:V}) = 2r[ X, VX ] − 2r(X) (by (28) and Theorem 7(c)),

establishing (43). Also by (15), we find that

r( P_{X:V} − P_X ) = r[ P_{X:V}; P_X ] + r[ P_{X:V}, P_X ] − r(P_{X:V}) − r(X) = r[ VX, X ] − r(X)

and

r( P_{VX} − P_X ) = 2r[ P_{VX}, P_X ] − r(P_{VX}) − r(X) = 2r[ VX, X ] − 2r(X),

establishing (44) and (45). The equivalence of (a), (b), (c) and (g) follows from (43), (44) and (45). The equivalence of (a), (d), (e) and (f) follows from Theorem 7(i), (j) and (k).

Harville (1997, Sec. 14.2e) investigated relationships between two projectors and gave some necessary and sufficient conditions for P_{X:V_1} = P_{X:V_2} to hold when V_1 and V_2 are pd. A general result on the equality P_{X_1:V_1} = P_{X_2:V_2} is given below.

Theorem 9 Let X_1 ∈ R^{n×p} and X_2 ∈ R^{n×k}, and let V_1, V_2 ∈ R^{n×n} be nnd with r(V_1X_1) = r(X_1) and r(V_2X_2) = r(X_2). Then

r( P_{X_1:V_1} − P_{X_2:V_2} ) = r[ V_1X_1, V_2X_2 ] + r[ X_1, X_2 ] − r(X_1) − r(X_2). (46)

Hence, the following statements are equivalent:

(a) P_{X_1:V_1} = P_{X_2:V_2}.

(b) R(X_1) = R(X_2) and R(V_1X_1) = R(V_2X_2).

Proof. Since both P_{X_1:V_1} and P_{X_2:V_2} are idempotent, we obtain by (15), (28) and Theorem 7(c) that

r( P_{X_1:V_1} − P_{X_2:V_2} ) = r[ P_{X_1:V_1}; P_{X_2:V_2} ] + r[ P_{X_1:V_1}, P_{X_2:V_2} ] − r(P_{X_1:V_1}) − r(P_{X_2:V_2}) = r[ V_1X_1, V_2X_2 ] + r[ X_1, X_2 ] − r(X_1) − r(X_2),

establishing (46). Also note that

r[ V_1X_1, V_2X_2 ] ≥ r(V_1X_1) = r(X_1), r[ V_1X_1, V_2X_2 ] ≥ r(V_2X_2) = r(X_2),
r[ X_1, X_2 ] ≥ r(X_1), r[ X_1, X_2 ] ≥ r(X_2).

Thus it follows from (46) that P_{X_1:V_1} = P_{X_2:V_2} holds if and only if

r[ V_1X_1, V_2X_2 ] = r(V_1X_1) = r(V_2X_2) and r[ X_1, X_2 ] = r(X_1) = r(X_2),

both of which are equivalent to the two range equalities in (b) by (23) and (24).

For more relations between P_{X_1:V_1} and P_{X_2:V_2} without the assumptions r(V_1X_1) = r(X_1) and r(V_2X_2) = r(X_2), see Tian and Takane (2007b). Two results on sum decompositions of V-orthogonal projectors are given below.

Theorem 10 Let P_{X:V} be as given in (30) with r(VX) = r(X), and partition X as X = [ X_1, X_2 ]. Then:

(a) Both P_{X_1:V} and P_{X_2:V} are unique.

(b) P_{X:V}P_{X_i:V} = P_{X_i:V}, i = 1, 2.

(c) r( P_{X:V} − P_{X_i:V} ) = r(X) − r(X_i), i = 1, 2.

(d) r( P_{X:V} − P_{X_1:V} − P_{X_2:V} ) = r(X) + 2r(X'_1VX_2) − r(X_1) − r(X_2).

(e) P_{X:V} = P_{X_1:V} holds if and only if R(X_2) ⊆ R(X_1); P_{X:V} = P_{X_2:V} holds if and only if R(X_1) ⊆ R(X_2).

(f) P_{X:V} = P_{X_1:V} + P_{X_2:V} holds if and only if X'_1VX_2 = 0.

Proof. Note that R(X'V) = R(X') obviously implies R(X'_1V) = R(X'_1) and R(X'_2V) = R(X'_2). Hence both P_{X_1:V} and P_{X_2:V} are unique, too, by Theorem 6(b). Also note from Theorem 7(c) that

R(P_{X:V}) = R(X), R(P_{X_1:V}) = R(X_1), R(P_{X_2:V}) = R(X_2),

and that R(X_1) ⊆ R(X) and R(X_2) ⊆ R(X). Hence the results in (b) follow from Theorem 7(a). Since P_{X:V}, P_{X_1:V} and P_{X_2:V} are idempotent when they are unique, we obtain from (15) and (28) that

r( P_{X:V} − P_{X_1:V} ) = r[ P_{X:V}; P_{X_1:V} ] + r[ P_{X:V}, P_{X_1:V} ] − r(P_{X:V}) − r(P_{X_1:V}) = r[ X'V; X'_1V ] + r[ X, X_1 ] − r(X) − r(X_1) = r(X) − r(X_1),

establishing the rank equality in (c) for i = 1. The equality in (c) for i = 2 can be shown similarly. The rank equality in (d) is derived from (19); the details are omitted. The results in (e) and (f) follow directly from (c) and (d).

For further results on relations between P_{X:V} in (7) and P_{X_1:V} + P_{X_2:V} and their applications to estimation under (2), see Tian and Takane (2007a).
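Theorem 10(f) can be illustrated as follows (a sketch under random inputs, not from the paper): the block X_2 is constructed inside N(X'_1V), so that X'_1VX_2 = 0 and the V-orthogonal projector decomposes additively; P is a hypothetical helper implementing (30).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
X1 = rng.standard_normal((n, 2))
Z = rng.standard_normal((n, n))
V = Z @ Z.T                       # pd weight (almost surely)
pinv = np.linalg.pinv

def P(X, V):
    """The V-orthogonal projector (30) (hypothetical helper)."""
    return X @ pinv(X.T @ V @ X) @ X.T @ V

# Build X2 with X1' V X2 = 0 by projecting random columns onto N(X1'V)
M = X1.T @ V
X2 = (np.eye(n) - pinv(M) @ M) @ rng.standard_normal((n, 3))
assert np.allclose(X1.T @ V @ X2, 0)

# Theorem 10(f): additive decomposition of the V-orthogonal projector
X = np.hstack([X1, X2])
assert np.allclose(P(X, V), P(X1, V) + P(X2, V))
```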

Theorem 11 Let X, Y ∈ R^{n×p}, let V ∈ R^{n×n} be nnd, and suppose r( VX + VY ) = r( X + Y ), r(VX) = r(X) and r(VY) = r(Y). If X'VY = 0 and XY' = 0, then

P_{(X+Y):V} = P_{X:V} + P_{Y:V}.

Proof. It is well known that if X'Y = 0 and XY' = 0, which are equivalent to X^+Y = Y^+X = 0 and YX^+ = XY^+ = 0, then ( X + Y )^+ = X^+ + Y^+. Hence if X'VY = 0 and XY' = 0, then ( V^{1/2}X + V^{1/2}Y )^+ = (V^{1/2}X)^+ + (V^{1/2}Y)^+. Applying this equality and (31) to P_{(X+Y):V} leads to

P_{(X+Y):V} = (X + Y)[ V^{1/2}(X + Y) ]^+V^{1/2} = (X + Y)[ (V^{1/2}X)^+ + (V^{1/2}Y)^+ ]V^{1/2} = X(V^{1/2}X)^+V^{1/2} + Y(V^{1/2}Y)^+V^{1/2} = P_{X:V} + P_{Y:V},

as required.

3 Properties of P_{X:V} when V is positive definite

In statistical practice, the weight matrix V is often assumed to be positive definite. In this case, the rank equality r(VX) = r(X) is automatically satisfied, so the projector P_{X:V} in (7) is unique and possesses all the properties presented in the previous section. Because the inverse of V^{1/2} exists, the V-orthogonal projector P_{X:V} in (30) can be written as

P_{X:V} = X(V^{1/2}X)^+V^{1/2} = V^{-1/2}(V^{1/2}X)(V^{1/2}X)^+V^{1/2} = V^{-1/2}P_{V^{1/2}X}V^{1/2}. (47)

This result indicates that P_{X:V} is similar to the orthogonal projector P_{V^{1/2}X}.

In addition, the V-orthogonal projector in (47) can also be expressed through the weighted Moore-Penrose inverse of X. Recall that the weighted Moore-Penrose inverse of a matrix A ∈ R^{m×n} with respect to two pd matrices M ∈ R^{m×m} and N ∈ R^{n×n} is defined to be the unique solution G satisfying the following four matrix equations:

(i) AGA = A, (ii) GAG = G, (iii) (MAG)' = MAG, (iv) (NGA)' = NGA,

and is denoted by G = A^+_{M,N}. It is well known that A^+_{M,N} can be rewritten as

A^+_{M,N} = N^{-1/2}( M^{1/2}AN^{-1/2} )^+M^{1/2}, (48)

where M^{1/2} and N^{1/2} are the pd square roots of M and N, respectively; see, e.g., Ben-Israel and Greville (2003).
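The following sketch (illustrative only, not from the paper) implements (48) for random pd weights M and N and verifies the four weighted matrix equations (i)-(iv); pd_sqrt is a hypothetical helper computing the pd square root via an eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 5, 3
A = rng.standard_normal((m, n))
ZM = rng.standard_normal((m, m)); M = ZM @ ZM.T + np.eye(m)  # pd weight on the range side
ZN = rng.standard_normal((n, n)); N = ZN @ ZN.T + np.eye(n)  # pd weight on the domain side

def pd_sqrt(S):
    """Positive definite square root via an eigendecomposition (hypothetical helper)."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

M12 = pd_sqrt(M)
N12_inv = np.linalg.inv(pd_sqrt(N))
G = N12_inv @ np.linalg.pinv(M12 @ A @ N12_inv) @ M12   # formula (48)

# The four weighted Penrose equations (i)-(iv)
assert np.allclose(A @ G @ A, A)
assert np.allclose(G @ A @ G, G)
assert np.allclose((M @ A @ G).T, M @ A @ G)
assert np.allclose((N @ G @ A).T, N @ G @ A)
```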

The following result reveals relations between V-orthogonal projectors and weighted Moore-Penrose inverses of matrices.

Theorem 12 Let X ∈ R^{n×p}, and let M ∈ R^{n×n} and N ∈ R^{p×p} be pd. Then

P_{X:M} = XX^+_{M,I_p}, P'_{X':N^{-1}} = X^+_{I_n,N}X, X^+_{M,N} = P'_{X':N^{-1}}X^-P_{X:M}. (49)

Proof. Applying (30) and (48) to XX^+_{M,I_p} and X^+_{I_n,N}X gives

XX^+_{M,I_p} = X( M^{1/2}X )^+M^{1/2} = P_{X:M} and X^+_{I_n,N}X = N^{-1/2}( XN^{-1/2} )^+X = P'_{X':N^{-1}}.

Also, by definition and (48),

X^+_{M,N} = X^+_{M,N}XX^+_{M,N} = ( X^+_{M,N}X )X^-( XX^+_{M,N} ) = ( X^+_{I_n,N}X )X^-( XX^+_{M,I_p} ) = P'_{X':N^{-1}}X^-P_{X:M}.

Hence, the three equalities in (49) hold.

Further properties of P_{X:V} with V pd are given below.

Theorem 13 Let X ∈ R^{n×p}, and let V ∈ R^{n×n} be pd. Then:

(a) P_{X:V} + P'_{E_X:V^{-1}} = I_n.

(b) P_{X:V} + P_{V^{-1}E_X:V} = I_n, i.e., I_n − P_{X:V} is the V-orthogonal projector onto R(V^{-1}E_X).

(c) P_{X:V} = XX'( XX' + V^{-1}E_XV^{-1} )^{-1} = XX'V( VXX'V + E_X )^{-1}V.

(d) P_{X:V} = P_X( P_X + V^{-1}E_XV^{-1} )^{-1} = P_XV( VP_XV + E_X )^{-1}V.

(e) P_{VX:V^{-1}} = VP_{X:V}V^{-1} = P'_{X:V}.

(f) P_{V^{-1}X:V} = V^{-1}P_{X:V^{-1}}V = P'_{X:V^{-1}}.

Theorem 13(a) is called Khatri's lemma (Khatri, 1966). Its generalization has been given by Yanai and Takane (1992).

Proof. It is easy to verify that

r(P_{X:V}) + r(P_{E_X:V^{-1}}) = n, r(P_{X:V}) + r(P_{V^{-1}E_X:V}) = n,
P_{X:V}P'_{E_X:V^{-1}} = P'_{E_X:V^{-1}}P_{X:V} = 0, P_{X:V}P_{V^{-1}E_X:V} = P_{V^{-1}E_X:V}P_{X:V} = 0.

Hence (a) and (b) follow from (18). It can be seen from (13) that

r[ X, V^{-1}E_X ] = r(X) + r(E_XV^{-1}E_X) = r(X) + r(E_X) = n,
r[ P_X, V^{-1}E_X ] = r(P_X) + r(E_XV^{-1}E_X) = r(X) + r(E_X) = n.

Hence the following four matrices

[ X, V^{-1}E_X ][ X, V^{-1}E_X ]' = XX' + V^{-1}E_XV^{-1},
[ VX, E_X ][ VX, E_X ]' = VXX'V + E_X,
[ P_X, V^{-1}E_X ][ P_X, V^{-1}E_X ]' = P_X + V^{-1}E_XV^{-1},
[ VP_X, E_X ][ VP_X, E_X ]' = VP_XV + E_X

are nonsingular. In these cases,

r[ P_{X:V} − XX'( XX' + V^{-1}E_XV^{-1} )^{-1} ] = r[ P_{X:V}( XX' + V^{-1}E_XV^{-1} ) − XX' ] = r[ P_{X:V}XX' + P_{X:V}V^{-1}E_XV^{-1} − XX' ] = r( XX' − XX' ) = 0,
r[ P_{X:V} − P_X( P_X + V^{-1}E_XV^{-1} )^{-1} ] = r[ P_{X:V}( P_X + V^{-1}E_XV^{-1} ) − P_X ] = r[ P_{X:V}P_X + P_{X:V}V^{-1}E_XV^{-1} − P_X ] = r( P_X − P_X ) = 0,

so that

P_{X:V} = XX'( XX' + V^{-1}E_XV^{-1} )^{-1} = P_X( P_X + V^{-1}E_XV^{-1} )^{-1}. (50)

Substituting the two equalities

( XX' + V^{-1}E_XV^{-1} )^{-1} = V( VXX'V + E_X )^{-1}V,
( P_X + V^{-1}E_XV^{-1} )^{-1} = V( VP_XV + E_X )^{-1}V

into (50) gives the equalities in (c) and (d). By (30),

P_{VX:V^{-1}} = VX( X'VV^{-1}VX )^+X'VV^{-1} = VX( X'VX )^+X'VV^{-1} = VP_{X:V}V^{-1},
P_{VX:V^{-1}} = VX( X'VX )^+X' = P'_{X:V},

as required for (e). Replacing V with V^{-1} in (e) leads to (f).

Theorem 14 Let X_1 ∈ R^{n×p} and X_2 ∈ R^{n×k}, and let V_1, V_2 ∈ R^{n×n} be pd. Then the following statements are equivalent:

(a) P_{X_1:V_1} = P_{X_2:V_2}.

(b) P_{E_{X_1}:V_1^{-1}} = P_{E_{X_2}:V_2^{-1}}.

(c) P_{V_1^{-1}E_{X_1}:V_1} = P_{V_2^{-1}E_{X_2}:V_2}.

(d) P_{V_1X_1:V_1^{-1}} = P_{V_2X_2:V_2^{-1}}.

(e) R(X_1) = R(X_2) and R(V_1X_1) = R(V_2X_2).

(f) R(E_{X_1}) = R(E_{X_2}) and R(V_1^{-1}E_{X_1}) = R(V_2^{-1}E_{X_2}).

Proof. It follows from Theorem 9(a) and (b) and Theorem 13(a), (b) and (e).

When X_1 = X_2, statements (a) and (e) above are equivalent to Theorem 14.12.18 of Harville (1997).
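Theorem 13(a) and (e) admit a direct numerical check (a sketch on a random pd weight, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 6, 2
X = rng.standard_normal((n, p))
Z = rng.standard_normal((n, n))
V = Z @ Z.T + np.eye(n)           # pd weight
pinv = np.linalg.pinv

def P(X, V):
    """The V-orthogonal projector (30) (hypothetical helper)."""
    return X @ pinv(X.T @ V @ X) @ X.T @ V

E_X = np.eye(n) - X @ pinv(X)     # orthogonal projector onto R(X)^perp
V_inv = np.linalg.inv(V)

# Theorem 13(a), Khatri's lemma: P_{X:V} + P'_{E_X:V^{-1}} = I_n
assert np.allclose(P(X, V) + P(E_X, V_inv).T, np.eye(n))

# Theorem 13(e): P_{VX:V^{-1}} = P'_{X:V}
assert np.allclose(P(V @ X, V_inv), P(X, V).T)
```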

Theorem 15 Let X ∈ R^{n×p} and let V ∈ R^{n×n} be pd. Then the following statements are equivalent:

(a) P_{X:V} is the orthogonal projector onto R(X).

(b) P_{VX} is the orthogonal projector onto R(X).

(c) P_{X:V} = P_{VX:V^{-1}}.

(d) P_{E_X:V^{-1}} is the orthogonal projector onto R^⊥(X).

(e) P_{V^{-1}E_X:V} is the orthogonal projector onto R^⊥(X).

(f) R(VX) ⊆ R(X).

Proof. It follows from Theorem 8(b), (c) and (g), and Theorem 13(a), (b) and (e).

References

Ben-Israel, A. and Greville, T.N.E. (2003). Generalized Inverses: Theory and Applications, 2nd Ed. Springer, New York.

Harville, D.A. (1997). Matrix Algebra from a Statistician's Perspective. Springer, New York.

Khatri, C.G. (1966). A note on a MANOVA model applied to problems in growth curves. Annals of the Institute of Statistical Mathematics 18, 75-86.

Marsaglia, G. and Styan, G.P.H. (1974). Equalities and inequalities for ranks of matrices. Linear and Multilinear Algebra 2, 269-292.

Mitra, S.K. and Rao, C.R. (1974). Projections under seminorms and generalized Moore-Penrose inverses. Linear Algebra and Its Applications 9, 155-167.

Rao, C.R. (1973). Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix. Journal of Multivariate Analysis 3, 276-292.

Rao, C.R. (1974). Projectors, generalized inverses and the BLUE's. Journal of the Royal Statistical Society, Series B 36, 442-448.

Rao, C.R. and Mitra, S.K. (1971a). Further contributions to the theory of generalized inverse of matrices and its applications. Sankhyā, Series A 33, 289-300.

Rao, C.R. and Mitra, S.K. (1971b). Generalized Inverse of Matrices and Its Applications. Wiley, New York.

Takane, Y. and Yanai, H. (1999). On oblique projectors. Linear Algebra and Its Applications 289, 297-310.

Tian, Y. (2004). On mixed-type reverse-order laws for the Moore-Penrose inverse of a matrix product. International Journal of Mathematics and Mathematical Sciences 58, 3103-3116.

Tian, Y. and Styan, G.P.H. (2001). Rank equalities for idempotent and involutory matrices. Linear Algebra and Its Applications 335, 101-117.

Tian, Y. and Takane, Y. (2007a). On sum decompositions of weighted least-squares estimators under the partitioned linear model. Communications in Statistics: Theory and Methods, in press.

Tian, Y. and Takane, Y. (2007b). Some properties of projectors associated with WLSE under a general linear model. Submitted.

Yanai, H. and Takane, Y. (1992). Canonical correlation analysis with linear constraints. Linear Algebra and Its Applications 176, 75-82.