On equality and proportionality of ordinary least squares, weighted least squares and best linear unbiased estimators in the general linear model


Statistics & Probability Letters 76 (2006) 1265-1272
www.elsevier.com/locate/stapro
doi:10.1016/j.spl.2006.01.005

Yongge Tian^a, Douglas P. Wiens^b

^a School of Economics, Shanghai University of Finance and Economics, Shanghai 200433, China
^b Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alta., Canada T6G 2G1
E-mail addresses: yongge@mail.shufe.edu.cn (Y. Tian), doug.wiens@ualberta.ca (D.P. Wiens)

Received 4 November 2004; received in revised form 23 December 2005; available online 20 February 2006

Abstract

Equality and proportionality of the ordinary least-squares estimator (OLSE), the weighted least-squares estimator (WLSE), and the best linear unbiased estimator (BLUE) for $X\beta$ in the general linear (Gauss-Markov) model $\mathcal{M} = \{y,\ X\beta,\ \sigma^2\Sigma\}$ are investigated through the matrix rank method.

MSC: primary 62J05; 62H12; 15A24

Keywords: General linear model; Equality; Proportionality; OLSE; WLSE; BLUE; Projectors; Rank formulas for partitioned matrices

1. Introduction

Consider the general linear (Gauss-Markov) model
$$\mathcal{M} = \{y,\ X\beta,\ \Sigma\}, \tag{1}$$
where $X$ is a nonnull $n \times p$ known matrix, $y$ is an $n \times 1$ observable random vector with expectation $E(y) = X\beta$ and covariance matrix $\operatorname{Cov}(y) = \Sigma$, $\beta$ is a $p \times 1$ vector of unknown parameters, and $\Sigma$ is an $n \times n$ symmetric nonnegative definite (n.n.d.) matrix, known entirely except for a positive constant multiplier. In this paper, we investigate some special relations among the ordinary least-squares estimator (OLSE), the weighted least-squares estimator (WLSE), and the best linear unbiased estimator (BLUE) of $X\beta$ in the model $\mathcal{M}$.

Throughout this paper, $A'$, $r(A)$ and $\mathcal{R}(A)$ denote the transpose, the rank and the range (column space) of a real matrix $A$, respectively; $A^{\dagger}$ denotes the Moore-Penrose inverse of $A$, i.e., the unique matrix satisfying $AA^{\dagger}A = A$, $A^{\dagger}AA^{\dagger} = A^{\dagger}$, $(AA^{\dagger})' = AA^{\dagger}$ and $(A^{\dagger}A)' = A^{\dagger}A$. In addition, let $P_A = AA^{\dagger}$ and $Q_A = I - P_A$.
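As a quick numerical illustration of this notation (our addition, not part of the paper), the following NumPy sketch forms $P_A = AA^{\dagger}$ and $Q_A = I - P_A$ for a rank-deficient matrix and checks the four Moore-Penrose conditions together with the symmetry and idempotency of the two projectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))  # a 6x4 matrix of rank 3

A_dag = np.linalg.pinv(A)       # Moore-Penrose inverse A^dagger
P_A = A @ A_dag                 # orthogonal projector onto the column space of A
Q_A = np.eye(6) - P_A           # projector onto its orthogonal complement

def close(M, N):
    return np.allclose(M, N, atol=1e-10)

# the four Penrose conditions
assert close(A @ A_dag @ A, A) and close(A_dag @ A @ A_dag, A_dag)
assert close((A @ A_dag).T, A @ A_dag) and close((A_dag @ A).T, A_dag @ A)

# P_A and Q_A are symmetric idempotents with P_A A = A and Q_A A = 0
assert close(P_A, P_A.T) and close(P_A @ P_A, P_A)
assert close(P_A @ A, A) and close(Q_A @ A, np.zeros_like(A))
print("Moore-Penrose and projector identities verified")
```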

Let $\mathcal{M}$ be as given in (1). The OLSE $\hat{\beta}$ of $\beta$ is defined to be a vector minimizing $\|y - X\beta\|^2 = (y - X\beta)'(y - X\beta)$, and $\operatorname{OLSE}(X\beta)$ is defined to be $X\hat{\beta}$. Suppose $V$ is an $n \times n$ n.n.d. matrix. The WLSE $\tilde{\beta}$ of $\beta$ is defined to be a vector minimizing $\|y - X\beta\|_V^2 = (y - X\beta)'V(y - X\beta)$, and $\operatorname{WLSE}_V(X\beta)$ is defined to be $X\tilde{\beta}$. When $V = I_n$, $\operatorname{WLSE}_{I_n}(X\beta) = \operatorname{OLSE}(X\beta)$. The weight matrix $V$ in $\operatorname{WLSE}_V(X\beta)$ can be constructed from the given matrices $X$ and $\Sigma$, for example, $V = \Sigma^{\dagger}$ or $V = (XTX' + \Sigma)^{\dagger}$; see, e.g., Rao (1971). The BLUE of $X\beta$ is a linear estimator $Gy$ such that $E(Gy) = X\beta$ and, for any other linear unbiased estimator $My$ of $X\beta$, $\operatorname{Cov}(My) - \operatorname{Cov}(Gy) = M\Sigma M' - G\Sigma G'$ is nonnegative definite. The three estimators are well known and have been extensively investigated in the literature.

Definition 1. Let $X$ be an $n \times p$ matrix, and let $V$ and $\Sigma$ be two $n \times n$ symmetric n.n.d. matrices.

(a) The projector onto the range of $X$ under the seminorm $\|x\|_V^2 = x'Vx$ is defined to be
$$P_{X:V} = X(X'VX)^{\dagger}X'V + [X - X(X'VX)^{\dagger}X'VX]U, \tag{2}$$
where $U$ is arbitrary.

(b) The BLUE projector $P_{X\|\Sigma}$ is defined to be the solution $G$ of the equation $G[X,\ \Sigma Q_X] = [X,\ 0]$, which can be expressed as
$$P_{X\|\Sigma} = [X,\ 0][X,\ \Sigma Q_X]^{\dagger} + U_1(I_n - [X,\ \Sigma Q_X][X,\ \Sigma Q_X]^{\dagger}),$$
where $U_1$ is arbitrary. The product $P_{X\|\Sigma}\Sigma$ is unique and can be written as
$$P_{X\|\Sigma}\Sigma = P_X\Sigma - P_X\Sigma(Q_X\Sigma Q_X)^{\dagger}\Sigma. \tag{3}$$

The following results on the OLSE, WLSE and BLUE of $X\beta$ in $\mathcal{M}$ are well known; see, e.g., Mitra and Rao (1974) and Puntanen and Styan (1989).

Lemma 1. Let $\mathcal{M}$ be as given in (1). Then:

(a) The OLSE of $X\beta$ in $\mathcal{M}$ is unique and can be written as $\operatorname{OLSE}(X\beta) = P_Xy$. In this case, $E[\operatorname{OLSE}(X\beta)] = X\beta$ and $\operatorname{Cov}[\operatorname{OLSE}(X\beta)] = P_X\Sigma P_X$.

(b) The general expression of the WLSE of $X\beta$ in $\mathcal{M}$ can be written as $\operatorname{WLSE}_V(X\beta) = P_{X:V}y$. In this case, $E[\operatorname{WLSE}_V(X\beta)] = P_{X:V}X\beta$ and $\operatorname{Cov}[\operatorname{WLSE}_V(X\beta)] = P_{X:V}\Sigma P_{X:V}'$. In particular, $\operatorname{WLSE}_V(X\beta)$ is unique if and only if $r(VX) = r(X)$. In such a case, $P_{X:V}X = X$ and $\operatorname{WLSE}_V(X\beta)$ is unbiased.

(c) The general expression of the BLUE of $X\beta$ in $\mathcal{M}$ can be written as $\operatorname{BLUE}(X\beta) = P_{X\|\Sigma}y$. In this case, $E[\operatorname{BLUE}(X\beta)] = X\beta$ and $\operatorname{Cov}[\operatorname{BLUE}(X\beta)] = P_{X\|\Sigma}\Sigma P_{X\|\Sigma}'$. Moreover, this covariance matrix can be expressed as $\operatorname{Cov}[\operatorname{BLUE}(X\beta)] = P_X\Sigma P_X - P_X\Sigma(Q_X\Sigma Q_X)^{\dagger}\Sigma P_X$. In particular, $\operatorname{BLUE}(X\beta)$ is unique if $y \in \mathcal{R}[X,\ \Sigma]$, the range of $[X,\ \Sigma]$.

Various properties of $P_{X:V}$ and $P_{X\|\Sigma}$ can be found in Mitra and Rao (1974) and Puntanen and Styan (1989). Because these estimators are derived from different optimality criteria, they are not necessarily equal. An interesting problem is therefore to give necessary and sufficient conditions for them to be equal; when such conditions hold, one can use the OLSE of $X\beta$ instead of the BLUE of $X\beta$. Further, it is of interest to consider the proportionality of these estimators.
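The closed forms in Lemma 1 are easy to exercise numerically. The sketch below is our own illustration (with the particular choice $U = 0$ in (2) and an arbitrary positive definite weight matrix): it computes the covariance matrices of the three estimators from Lemma 1 and confirms the defining property of the BLUE, namely that $\operatorname{Cov}[\operatorname{OLSE}] - \operatorname{Cov}[\operatorname{BLUE}]$ and $\operatorname{Cov}[\operatorname{WLSE}_V] - \operatorname{Cov}[\operatorname{BLUE}]$ are nonnegative definite.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3
pinv = np.linalg.pinv

X = rng.standard_normal((n, p))
B = rng.standard_normal((n, n - 2))
Sigma = B @ B.T                           # a singular n.n.d. covariance matrix
C = rng.standard_normal((n, n))
V = C @ C.T + np.eye(n)                   # an arbitrary positive definite weight matrix

P_X = X @ pinv(X)
Q_X = np.eye(n) - P_X
P_XV = X @ pinv(X.T @ V @ X) @ X.T @ V    # P_{X:V} with U = 0 in (2)

cov_olse = P_X @ Sigma @ P_X                                               # Lemma 1(a)
cov_wlse = P_XV @ Sigma @ P_XV.T                                           # Lemma 1(b)
cov_blue = cov_olse - P_X @ Sigma @ pinv(Q_X @ Sigma @ Q_X) @ Sigma @ P_X  # Lemma 1(c)

# The BLUE is "best": both differences below should have no negative eigenvalues
# (up to rounding error).
for name, cov in [("OLSE", cov_olse), ("WLSE", cov_wlse)]:
    smallest = np.linalg.eigvalsh(cov - cov_blue).min()
    print(f"min eigenvalue of Cov[{name}] - Cov[BLUE]: {smallest:.2e}")
```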

In order to compare two estimators of an unknown parameter vector, various efficiency functions have been introduced. The most frequently used measure of the relation between two unbiased estimators $L_1y$ and $L_2y$ of $\beta$, with both $\operatorname{Cov}(L_1y)$ and $\operatorname{Cov}(L_2y)$ nonsingular, is the D-relative efficiency
$$\operatorname{eff}_D(L_1y,\ L_2y) = \frac{\det[\operatorname{Cov}(L_1y)]}{\det[\operatorname{Cov}(L_2y)]}.$$
A similar relative efficiency function is also defined through the determinants of the information matrices corresponding to two designs for a linear regression model. Other relative efficiency functions can be defined through the traces and norms of covariance matrices, for example,
$$\operatorname{eff}_A(L_1y,\ L_2y) = \frac{\operatorname{tr}[\operatorname{Cov}(L_1y)]}{\operatorname{tr}[\operatorname{Cov}(L_2y)]}.$$
If $\operatorname{eff}_D(L_1y, L_2y) = 1$, i.e., $\det[\operatorname{Cov}(L_1y)] = \det[\operatorname{Cov}(L_2y)]$, the two estimators $L_1y$ and $L_2y$ are said to have the same D-efficiency, and this is denoted by $L_1y \overset{\mathrm{D}}{=} L_2y$. In addition to the determinant equality $\det[\operatorname{Cov}(L_1y)] = \det[\operatorname{Cov}(L_2y)]$, some stronger relations between two unbiased estimators are defined as follows.

Definition 2. Suppose that $L_1y$ and $L_2y$ are two unbiased linear estimators of $\beta$ in (1).

(a) The two estimators are said to have the same efficiency if
$$\operatorname{Cov}(L_1y) = \operatorname{Cov}(L_2y), \tag{4}$$
and this is denoted by $L_1y \overset{\mathrm{C}}{=} L_2y$.

(b) The two estimators are said to be identical (to coincide) with probability 1 if
$$\operatorname{Cov}(L_1y - L_2y) = 0, \tag{5}$$
and this is denoted by $L_1y \overset{\mathrm{P}}{=} L_2y$.

Clearly (4) is equivalent to
$$L_1\Sigma L_1' = L_2\Sigma L_2', \tag{6}$$
and (5) is equivalent to $(L_1 - L_2)\Sigma(L_1 - L_2)' = 0$, i.e.,
$$L_1\Sigma = L_2\Sigma. \tag{7}$$
It is easy to see from (6) and (7) that
$$L_1y \overset{\mathrm{P}}{=} L_2y \ \Rightarrow\ L_1y \overset{\mathrm{C}}{=} L_2y \ \Rightarrow\ L_1y \overset{\mathrm{D}}{=} L_2y.$$
However, the reverse implications are not true. For example, let
$$\Sigma = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}, \quad L_1 = [1,\ -1], \quad L_2 = [-1,\ 1].$$
Then $L_1\Sigma L_1' = L_2\Sigma L_2' = 2$, but $(L_1 - L_2)\Sigma = [2,\ -2] \neq 0$.

Many authors have investigated the equalities among these estimators using Definition 2(a); see, e.g., Baksalary and Puntanen (1989), Puntanen and Styan (1989), Young et al. (2000) and Groß et al. (2001), among others. In this paper, we use Definition 2(b) to characterize the relations between two estimators.

Lemma 2. Let $\mathcal{M}$ be as given in (1), and suppose $L_1y$ and $L_2y$ are two linear estimators of $X\beta$ in $\mathcal{M}$. Then $L_1y = L_2y$ holds with probability 1, i.e., $L_1y \overset{\mathrm{P}}{=} L_2y$, if and only if $L_1X = L_2X$ and $L_1\Sigma = L_2\Sigma$, i.e., $L_1[X,\ \Sigma] = L_2[X,\ \Sigma]$.
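The 2x2 counterexample given before Lemma 2 is easy to reproduce numerically. The short check below (our addition) confirms that $L_1y$ and $L_2y$ have equal covariances, and hence the same C- and D-efficiency, while failing the probability-one criterion (7), since $(L_1 - L_2)\Sigma \neq 0$.

```python
import numpy as np

Sigma = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
L1 = np.array([[1.0, -1.0]])
L2 = np.array([[-1.0, 1.0]])

cov1 = L1 @ Sigma @ L1.T              # Cov(L1 y) = [[2]]
cov2 = L2 @ Sigma @ L2.T              # Cov(L2 y) = [[2]]
print("equal covariances (C- and D-efficiency):", np.allclose(cov1, cov2))   # True

diff = (L1 - L2) @ Sigma              # (L1 - L2) Sigma = [[2, -2]]
print("coincide with probability 1, criterion (7):", np.allclose(diff, 0))   # False
```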

The rank of a matrix is the dimension of its column space (equivalently, of its row space). It has been recognized since the 1970s that rank equalities for matrices provide a powerful method for establishing general properties of matrix expressions. In fact, for any two matrices $A$ and $B$ of the same size, the equality $A = B$ holds if and only if $r(A - B) = 0$; two sets $S_1$ and $S_2$ consisting of matrices of the same size have a common matrix if and only if $\min_{A \in S_1,\, B \in S_2} r(A - B) = 0$; and the set inclusion $S_1 \subseteq S_2$ holds if and only if $\max_{A \in S_1}\min_{B \in S_2} r(A - B) = 0$. Hence, once a formula for the rank of $A - B$ is derived, it can be used to characterize the equality $A = B$.

In order to simplify various matrix expressions involving generalized inverses, we need some rank equalities for partitioned matrices due to Marsaglia and Styan (1974).

Lemma 3. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{m \times k}$ and $C \in \mathbb{R}^{l \times n}$. Then
$$r[A,\ B] = r(A) + r[(I_m - AA^{\dagger})B] = r(B) + r[(I_m - BB^{\dagger})A], \tag{8}$$
$$r\begin{bmatrix} A \\ C \end{bmatrix} = r(A) + r[C(I_n - A^{\dagger}A)] = r(C) + r[A(I_n - C^{\dagger}C)], \tag{9}$$
$$r\begin{bmatrix} A & B \\ C & 0 \end{bmatrix} = r(B) + r(C) + r[(I_m - BB^{\dagger})A(I_n - C^{\dagger}C)]. \tag{10}$$

The following result is given in Tian (2002) and Tian and Cheng (2003).

Lemma 4. Let $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{m \times k}$ and $C \in \mathbb{R}^{l \times n}$. Then
$$\min_{X \in \mathbb{R}^{k \times l}} r(A - BXC) = r[A,\ B] + r\begin{bmatrix} A \\ C \end{bmatrix} - r\begin{bmatrix} A & B \\ C & 0 \end{bmatrix}. \tag{11}$$
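Rank identities of this kind are convenient to sanity-check numerically. The sketch below (ours) verifies (8)-(10) for randomly generated matrices, using the Moore-Penrose inverse, and checks that the value given by the right-hand side of (11) is indeed a lower bound: random choices of the free matrix never achieve a smaller rank.

```python
import numpy as np

rng = np.random.default_rng(2)
rk, pinv = np.linalg.matrix_rank, np.linalg.pinv

m, n, k, l = 7, 6, 4, 3
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # a rank-2 matrix
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))
Im, In = np.eye(m), np.eye(n)

# Lemma 3, (8)-(10)
assert rk(np.hstack([A, B])) == rk(A) + rk((Im - A @ pinv(A)) @ B)
assert rk(np.vstack([A, C])) == rk(A) + rk(C @ (In - pinv(A) @ A))
M = np.block([[A, B], [C, np.zeros((l, k))]])
assert rk(M) == rk(B) + rk(C) + rk((Im - B @ pinv(B)) @ A @ (In - pinv(C) @ C))

# Lemma 4, (11): the formula gives the minimal possible rank of A - BXC,
# so randomly chosen X can only do as well or worse.
min_rank = rk(np.hstack([A, B])) + rk(np.vstack([A, C])) - rk(M)
assert all(rk(A - B @ rng.standard_normal((k, l)) @ C) >= min_rank for _ in range(200))
print("Lemma 3 verified; minimal rank in (11) equals", min_rank)
```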

2. Equality and proportionality of estimators

Theorem 1. Let $\mathcal{M}$ be as given in (1).

(a) There is a $\operatorname{WLSE}_V(X\beta)$ such that $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \operatorname{OLSE}(X\beta)$ if and only if $P_XVQ_X\Sigma = 0$.

(b) Assume $r[X,\ \Sigma] = n$. Then there is a $\operatorname{WLSE}_V(X\beta)$ such that $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \operatorname{OLSE}(X\beta)$ if and only if $P_XV = VP_X$.

Proof. From Lemma 2, we see that $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \operatorname{OLSE}(X\beta)$ if and only if there is a $P_{X:V}$ such that $P_{X:V}[X,\ \Sigma] = [X,\ P_X\Sigma]$. Let $M_1 = [X,\ \Sigma]$ and $M_2 = [X,\ P_X\Sigma]$. Then by (2),
$$P_{X:V}M_1 - M_2 = X(X'VX)^{\dagger}X'VM_1 - M_2 + [X - X(X'VX)^{\dagger}X'VX]UM_1,$$
where $U$ is arbitrary. From (11),
$$\min_{P_{X:V}} r(P_{X:V}M_1 - M_2) = \min_{U} r\{X(X'VX)^{\dagger}X'VM_1 - M_2 + [X - X(X'VX)^{\dagger}X'VX]UM_1\}$$
$$= r[X(X'VX)^{\dagger}X'VM_1 - M_2,\ X - X(X'VX)^{\dagger}X'VX] + r\begin{bmatrix} X(X'VX)^{\dagger}X'VM_1 - M_2 \\ M_1 \end{bmatrix} - r\begin{bmatrix} X(X'VX)^{\dagger}X'VM_1 - M_2 & X - X(X'VX)^{\dagger}X'VX \\ M_1 & 0 \end{bmatrix}. \tag{12}$$
By (8)-(10) and elementary block matrix operations,
$$r[X(X'VX)^{\dagger}X'VM_1 - M_2,\ X - X(X'VX)^{\dagger}X'VX] = r[VX(X'VX)^{\dagger}X'VM_1 - VM_2] + r(X) - r(VX),$$
$$r\begin{bmatrix} X(X'VX)^{\dagger}X'VM_1 - M_2 \\ M_1 \end{bmatrix} = r(M_1),$$
$$r\begin{bmatrix} X(X'VX)^{\dagger}X'VM_1 - M_2 & X - X(X'VX)^{\dagger}X'VX \\ M_1 & 0 \end{bmatrix} = r(M_1) + r(X) - r(VX).$$
Also note that $VX(X'VX)^{\dagger}X'VX = VX$. Hence
$$VX(X'VX)^{\dagger}X'VM_1 - VM_2 = [0,\ VX(X'VX)^{\dagger}X'V\Sigma - VP_X\Sigma].$$
Moreover, $P_X[VX(X'VX)^{\dagger}X'V\Sigma - VP_X\Sigma] = P_XV\Sigma - P_XVP_X\Sigma = P_XVQ_X\Sigma$, so that
$$r[VX(X'VX)^{\dagger}X'VM_1 - VM_2] = r(P_XVQ_X\Sigma).$$
Substituting these rank equalities into (12) and simplifying gives
$$\min_{P_{X:V}} r(P_{X:V}[X,\ \Sigma] - [X,\ P_X\Sigma]) = r(P_XVQ_X\Sigma).$$
Setting the right-hand side equal to zero, we obtain the result in (a).

By (8), the rank equality $r[X,\ \Sigma] = n$ is equivalent to $r(Q_X\Sigma) = r(Q_X)$. Hence $P_XVQ_X\Sigma = 0$ is equivalent to $P_XVQ_X = 0$, i.e., $P_XV = P_XVP_X$. Since $P_XVP_X$ is symmetric, $P_XV = P_XVP_X$ is equivalent to $P_XV = VP_X$. □

Some other problems related to $\operatorname{WLSE}_V(X\beta) = \operatorname{OLSE}(X\beta)$ can also be investigated. For example, one can solve for an n.n.d. $V$ from the equation $P_XVQ_X\Sigma = 0$ when both $X$ and $\Sigma$ are given, or solve for an n.n.d. $\Sigma$ from this equation when both $X$ and $V$ are given.
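Theorem 1(a) can be illustrated numerically. The construction below is ours: it builds a positive definite weight matrix satisfying $P_XVQ_X = 0$, so that $P_XVQ_X\Sigma = 0$ for any $\Sigma$, and then confirms via Lemma 2 that the resulting (unique) WLSE coincides with the OLSE with probability 1, i.e., $P_{X:V}[X,\ \Sigma] = [X,\ P_X\Sigma]$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 7, 3
pinv = np.linalg.pinv

X = rng.standard_normal((n, p))
P_X = X @ pinv(X)
Q_X = np.eye(n) - P_X
B = rng.standard_normal((n, n - 2))
Sigma = B @ B.T                          # a singular n.n.d. covariance matrix

# A positive definite V with P_X V Q_X = 0, hence P_X V Q_X Sigma = 0 (Theorem 1(a)).
W1 = rng.standard_normal((n, n)); W1 = W1 @ W1.T + np.eye(n)
W2 = rng.standard_normal((n, n)); W2 = W2 @ W2.T + np.eye(n)
V = P_X @ W1 @ P_X + Q_X @ W2 @ Q_X
assert np.allclose(P_X @ V @ Q_X @ Sigma, 0)

P_XV = X @ pinv(X.T @ V @ X) @ X.T @ V   # P_{X:V}; unique here since r(VX) = r(X)
# Lemma 2: coincidence with probability 1  <=>  P_{X:V}[X, Sigma] = [X, P_X Sigma]
print("WLSE_V = OLSE with probability 1:",
      np.allclose(P_XV @ X, X) and np.allclose(P_XV @ Sigma, P_X @ Sigma))
```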

Theorem 2. Let $\mathcal{M}$ be as given in (1). Then the following statements are equivalent:

(a) There is a $\operatorname{BLUE}(X\beta)$ such that $\operatorname{BLUE}(X\beta) \overset{\mathrm{P}}{=} \operatorname{OLSE}(X\beta)$.

(b) There is a $\operatorname{BLUE}(X\beta)$ such that $\operatorname{BLUE}(X\beta) \overset{\mathrm{C}}{=} \operatorname{OLSE}(X\beta)$.

(c) $P_X\Sigma Q_X = 0$, i.e., $\mathcal{R}(\Sigma X) \subseteq \mathcal{R}(X)$.

(d) $P_X\Sigma = \Sigma P_X$.

Proof. From Definition 1(b), Lemma 1(a) and (c), and Lemma 2, we see that there is a $\operatorname{BLUE}(X\beta)$ such that $\operatorname{OLSE}(X\beta) \overset{\mathrm{P}}{=} \operatorname{BLUE}(X\beta)$ if and only if $P_X\Sigma = P_{X\|\Sigma}\Sigma$. From (3), we obtain
$$P_X\Sigma - P_{X\|\Sigma}\Sigma = P_X\Sigma(Q_X\Sigma Q_X)^{\dagger}\Sigma.$$
It is easy to verify that
$$r[P_X\Sigma(Q_X\Sigma Q_X)^{\dagger}\Sigma] = r(P_X\Sigma Q_X\Sigma Q_X) = r(P_X\Sigma Q_X).$$
The equivalence of (a), (c) and (d) follows from this result.

From Lemma 1(a) and (c), we see that
$$\operatorname{Cov}[\operatorname{OLSE}(X\beta)] - \operatorname{Cov}[\operatorname{BLUE}(X\beta)] = P_X\Sigma(Q_X\Sigma Q_X)^{\dagger}\Sigma P_X.$$
It is easy to verify that
$$r[P_X\Sigma(Q_X\Sigma Q_X)^{\dagger}\Sigma P_X] = r[P_X\Sigma(Q_X\Sigma Q_X)^{\dagger}] = r(P_X\Sigma Q_X\Sigma Q_X) = r(P_X\Sigma Q_X).$$
The equivalence of (b) and (d) follows from this result. □

The equivalences in Theorem 2(b)-(d) are well known; see, e.g., Puntanen and Styan (1989) and Young et al. (2000).
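For a numerical illustration of Theorem 2 (our addition), take a covariance matrix that commutes with $P_X$; by (3) and Lemma 1(c), the BLUE then coincides with the OLSE with probability 1 and the two covariance matrices agree.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 7, 3
pinv = np.linalg.pinv

X = rng.standard_normal((n, p))
P_X = X @ pinv(X)
Q_X = np.eye(n) - P_X

# A covariance matrix commuting with P_X, i.e., P_X Sigma = Sigma P_X (Theorem 2(d)).
W1 = rng.standard_normal((n, 4)); W2 = rng.standard_normal((n, 4))
Sigma = P_X @ W1 @ W1.T @ P_X + Q_X @ W2 @ W2.T @ Q_X
assert np.allclose(P_X @ Sigma, Sigma @ P_X) and np.allclose(P_X @ Sigma @ Q_X, 0)

P_blue_Sigma = P_X @ Sigma - P_X @ Sigma @ pinv(Q_X @ Sigma @ Q_X) @ Sigma  # (3)
cov_olse = P_X @ Sigma @ P_X
cov_blue = cov_olse - P_X @ Sigma @ pinv(Q_X @ Sigma @ Q_X) @ Sigma @ P_X   # Lemma 1(c)

print("BLUE = OLSE with probability 1:", np.allclose(P_blue_Sigma, P_X @ Sigma))  # Theorem 2(a)
print("equal covariance matrices     :", np.allclose(cov_blue, cov_olse))         # Theorem 2(b)
```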

Theorem 3. Let $\mathcal{M}$ be as given in (1).

(a) There are $\operatorname{WLSE}_V(X\beta)$ and $\operatorname{BLUE}(X\beta)$ such that $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \operatorname{BLUE}(X\beta)$ if and only if $P_XV\Sigma Q_X = 0$, that is, $\mathcal{R}(\Sigma VX) \subseteq \mathcal{R}(X)$.

(b) If $V = (XX' + \Sigma)^{\dagger}$, then $\operatorname{WLSE}_V(X\beta)$ is unique and $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \operatorname{BLUE}(X\beta)$.

Proof. From Lemma 2, $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \operatorname{BLUE}(X\beta)$ if and only if $P_{X:V}[X,\ \Sigma] = [X,\ P_{X\|\Sigma}\Sigma]$. Let $M_1 = [X,\ \Sigma]$ and $M_2 = [X,\ P_{X\|\Sigma}\Sigma]$. Then by (11),
$$\min_{P_{X:V}} r(P_{X:V}M_1 - M_2) = \min_{U} r\{X(X'VX)^{\dagger}X'VM_1 - M_2 + [X - X(X'VX)^{\dagger}X'VX]UM_1\}$$
$$= r[X(X'VX)^{\dagger}X'VM_1 - M_2,\ X - X(X'VX)^{\dagger}X'VX] + r\begin{bmatrix} X(X'VX)^{\dagger}X'VM_1 - M_2 \\ M_1 \end{bmatrix} - r\begin{bmatrix} X(X'VX)^{\dagger}X'VM_1 - M_2 & X - X(X'VX)^{\dagger}X'VX \\ M_1 & 0 \end{bmatrix}. \tag{13}$$
By (8)-(10),
$$r[X(X'VX)^{\dagger}X'VM_1 - M_2,\ X - X(X'VX)^{\dagger}X'VX] = r[VX(X'VX)^{\dagger}X'VM_1 - VM_2] + r(X) - r(VX),$$
$$r\begin{bmatrix} X(X'VX)^{\dagger}X'VM_1 - M_2 \\ M_1 \end{bmatrix} = r(M_1),$$
$$r\begin{bmatrix} X(X'VX)^{\dagger}X'VM_1 - M_2 & X - X(X'VX)^{\dagger}X'VX \\ M_1 & 0 \end{bmatrix} = r(M_1) + r(X) - r(VX).$$
Also note that $VX(X'VX)^{\dagger}X'VX = VX$. Hence
$$VX(X'VX)^{\dagger}X'VM_1 - VM_2 = [0,\ VX(X'VX)^{\dagger}X'V\Sigma - VP_{X\|\Sigma}\Sigma].$$
It can be verified that
$$r[VX(X'VX)^{\dagger}X'V\Sigma - VP_{X\|\Sigma}\Sigma] = r(P_XV\Sigma Q_X).$$
Substituting these rank equalities into (13) and simplifying leads to
$$\min_{P_{X:V}} r(P_{X:V}[X,\ \Sigma] - [X,\ P_{X\|\Sigma}\Sigma]) = r(P_XV\Sigma Q_X),$$
completing the proof. □

The following results are concerned with the proportionality of the three estimators.

Theorem 4. Let $\mathcal{M}$ be as given in (1) and suppose that $VX \neq 0$. Then there are $\operatorname{WLSE}_V(X\beta)$ and a scalar $\lambda$ such that $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \lambda\operatorname{OLSE}(X\beta)$ if and only if $\lambda = 1$ and $P_XVQ_X\Sigma = 0$; that is, there is a $\operatorname{WLSE}_V(X\beta)$ proportional to $\operatorname{OLSE}(X\beta)$ with probability 1 if and only if that $\operatorname{WLSE}_V(X\beta)$ satisfies $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \operatorname{OLSE}(X\beta)$.

Proof. From Lemma 2, there is a $\operatorname{WLSE}_V(X\beta)$ such that $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \lambda\operatorname{OLSE}(X\beta)$ if and only if $P_{X:V}[X,\ \Sigma] = [\lambda X,\ \lambda P_X\Sigma]$. Let $M_1 = [X,\ \Sigma]$ and $M_2 = [X,\ P_X\Sigma]$. Then we find through the matrix rank method that
$$\min_{P_{X:V}} r(P_{X:V}M_1 - \lambda M_2) = r[(1 - \lambda)VX,\ VX(X'VX)^{\dagger}X'V\Sigma - \lambda VP_X\Sigma]. \tag{14}$$
Setting the right-hand side of (14) equal to zero, we see that $\lambda = 1$ and $VX(X'VX)^{\dagger}X'V\Sigma = VP_X\Sigma$. Hence the result follows from Theorem 1. □

Theorem 5. Let $\mathcal{M}$ be as given in (1) and suppose that $X \neq 0$. Then there are $\operatorname{BLUE}(X\beta)$ and a scalar $\lambda$ such that $\operatorname{BLUE}(X\beta) \overset{\mathrm{P}}{=} \lambda\operatorname{OLSE}(X\beta)$ if and only if $P_X\Sigma = \Sigma P_X$; that is, there is a $\operatorname{BLUE}(X\beta)$ proportional to $\operatorname{OLSE}(X\beta)$ with probability 1 if and only if that $\operatorname{BLUE}(X\beta)$ satisfies $\operatorname{BLUE}(X\beta) \overset{\mathrm{P}}{=} \operatorname{OLSE}(X\beta)$.

Proof. From Lemma 2, $\operatorname{BLUE}(X\beta) \overset{\mathrm{P}}{=} \lambda\operatorname{OLSE}(X\beta)$ holds if and only if $X = \lambda X$ and $P_{X\|\Sigma}\Sigma = \lambda P_X\Sigma$, which, since $X \neq 0$, are equivalent to $\lambda = 1$ and $P_{X\|\Sigma}\Sigma = P_X\Sigma$. Hence the result follows from Theorem 2. □

Theorem 6. Let $\mathcal{M}$ be as given in (1) and suppose that $VX \neq 0$. Then there are $\operatorname{WLSE}_V(X\beta)$ and a scalar $\lambda$ such that $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \lambda\operatorname{BLUE}(X\beta)$ if and only if $\lambda = 1$ and $\mathcal{R}(\Sigma VX) \subseteq \mathcal{R}(X)$; that is, there is a $\operatorname{WLSE}_V(X\beta)$ proportional to $\operatorname{BLUE}(X\beta)$ with probability 1 if and only if that $\operatorname{WLSE}_V(X\beta)$ satisfies $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \operatorname{BLUE}(X\beta)$.

Proof. From Lemma 2, $\operatorname{WLSE}_V(X\beta) \overset{\mathrm{P}}{=} \lambda\operatorname{BLUE}(X\beta)$ if and only if $P_{X:V}[X,\ \Sigma] = \lambda[X,\ P_{X\|\Sigma}\Sigma]$. Let $M_1 = [X,\ \Sigma]$ and $M_2 = [X,\ P_{X\|\Sigma}\Sigma]$. Then by (11), and simplifying as in the proofs of Theorems 1 and 3,
$$\min_{P_{X:V}} r(P_{X:V}M_1 - \lambda M_2) = \min_{U} r\{X(X'VX)^{\dagger}X'VM_1 - \lambda M_2 + [X - X(X'VX)^{\dagger}X'VX]UM_1\}$$
$$= r[VX(X'VX)^{\dagger}X'VM_1 - \lambda VM_2] = r[(1 - \lambda)VX,\ VX(X'VX)^{\dagger}X'V\Sigma - \lambda VP_{X\|\Sigma}\Sigma]. \tag{15}$$
Setting the right-hand side of (15) equal to zero, we obtain $\lambda = 1$ and $VX(X'VX)^{\dagger}X'V\Sigma = VP_{X\|\Sigma}\Sigma$. Hence the result follows from Theorem 3. □
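Theorem 3(b) also lends itself to a direct numerical check (our addition): with $V = (XX' + \Sigma)^{\dagger}$, the unique WLSE should reproduce the BLUE with probability 1, which by Lemma 2 amounts to $P_{X:V}X = X$ and $P_{X:V}\Sigma = P_{X\|\Sigma}\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 8, 3
pinv = np.linalg.pinv

X = rng.standard_normal((n, p))
P_X = X @ pinv(X)
Q_X = np.eye(n) - P_X
B = rng.standard_normal((n, n - 3))
Sigma = B @ B.T                               # a singular n.n.d. covariance matrix

V = pinv(X @ X.T + Sigma)                     # the weight matrix of Theorem 3(b)
P_XV = X @ pinv(X.T @ V @ X) @ X.T @ V        # unique P_{X:V}, since r(VX) = r(X)
P_blue_Sigma = P_X @ Sigma - P_X @ Sigma @ pinv(Q_X @ Sigma @ Q_X) @ Sigma  # (3)

print("unbiased, P_{X:V} X = X         :", np.allclose(P_XV @ X, X))
print("WLSE_V = BLUE with probability 1:", np.allclose(P_XV @ Sigma, P_blue_Sigma))
```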

It can be seen from Theorems 4-6 that if any two of the OLSE, WLSE and BLUE of $X\beta$ in the model (1) are proportional with probability 1, then the two estimators are in fact identical with probability 1.

Many other problems concerning the model (1) can be investigated through the matrix rank method. For example, assume that the general linear model $\{y,\ X\beta,\ \Sigma\}$ incorrectly specifies the covariance matrix as $\sigma^2\Sigma_0$, where $\Sigma_0$ is a given n.n.d. matrix and $\sigma^2$ is a positive parameter (possibly unknown), and consider the relations between the BLUEs of $X\beta$ under the original model and the misspecified model $\{y,\ X\beta,\ \sigma^2\Sigma_0\}$. Some previous results on the relations between the BLUEs of $X\beta$ under the two models can be found in Mitra and Moore (1973) and Mathew (1983). In addition, it is of interest to consider partial coincidence and partial proportionality of the OLSE, WLSE and BLUE of $X\beta$ in (1), as well as coincidence and proportionality of the OLSE, WLSE and BLUE of $X\beta$ in (1) under a restriction $A\beta = b$.

Acknowledgment

The research of both authors was supported by the Natural Sciences and Engineering Research Council of Canada.

References

Baksalary, J.K., Puntanen, S., 1989. Weighted-least-squares estimation in the general Gauss-Markov model. In: Dodge, Y. (Ed.), Statistical Data Analysis and Inference. Elsevier, New York, pp. 355-368.
Groß, J., Trenkler, G., Werner, H.J., 2001. The equality of linear transformations of the ordinary least squares estimator and the best linear unbiased estimator. Sankhyā Ser. A 63, 118-127.
Marsaglia, G., Styan, G.P.H., 1974. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra 2, 269-292.
Mathew, T., 1983. Linear estimation with an incorrect dispersion matrix in linear models with a common linear part. J. Amer. Statist. Assoc. 78, 468-471.
Mitra, S.K., Moore, B.J., 1973. Gauss-Markov estimation with an incorrect dispersion matrix. Sankhyā Ser. A 35, 139-152.
Mitra, S.K., Rao, C.R., 1974. Projections under seminorms and generalized Moore-Penrose inverses. Linear Algebra Appl. 9, 155-167.
Puntanen, S., Styan, G.P.H., 1989. The equality of the ordinary least squares estimator and the best linear unbiased estimator (with comments by O. Kempthorne and S.R. Searle, and a reply by the authors). Amer. Statist. 43, 153-164.
Rao, C.R., 1971. Unified theory of linear estimation. Sankhyā Ser. A 33, 371-394.
Tian, Y., 2002. The maximal and minimal ranks of some expressions of generalized inverses of matrices. Southeast Asian Bull. Math. 25, 745-755.
Tian, Y., Cheng, S., 2003. The maximal and minimal ranks of A - BXC with applications. New York J. Math. 9, 345-362.
Young, D.M., Odell, P.L., Hahn, W., 2000. Nonnegative-definite covariance structures for which the BLU, WLS, and LS estimators are equal. Statist. Probab. Lett. 49, 271-276.