The equalities of ordinary least-squares estimators and best linear unbiased estimators for the restricted linear model
Yongge Tian^a and Douglas P. Wiens^b

a School of Economics, Shanghai University of Finance and Economics, Shanghai, China
b Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, Canada T6G 2G1

Abstract. We investigate in this paper a variety of equalities for the ordinary least-squares estimators and the best linear unbiased estimators under the general linear (Gauss-Markov) model {y, Xβ, σ²Σ} and the restricted model {y, Xβ | Aβ = b, σ²Σ}.

Keywords: OLSE; BLUE; general linear model; linear restriction; parametric function; matrix rank

1 Introduction

Suppose we are given the following general linear (Gauss-Markov) model

y = Xβ + ε, E(ε) = 0 and Cov(ε) = σ²Σ, (1)

constrained by a consistent linear matrix equation

Aβ = b, (2)

where X is a known n × p matrix with rank(X) ≤ p, A is a known m × p matrix with rank(A) = m, b is a known m × 1 vector, y is an n × 1 observable random vector, β is a p × 1 vector of parameters to be estimated, σ² > 0 is an unknown parameter, and Σ is an n × n known nonnegative definite matrix. The model (1) is often written as the triplet

M = {y, Xβ, σ²Σ}; (3)

the model (1) with the parameters constrained by the equation (2) is written as

M_c = {y, Xβ | Aβ = b, σ²Σ}. (4)

In the investigation of linear models, parameter constraints are usually handled by transforming an explicitly constrained model into an implicitly constrained model. The most popular transformations are based on model reduction, Lagrangian multipliers, and reparameterization through the general solution of the equation (2). Through block matrices, (1) and (2) can be written as

M_c = { [y; b], [X; A]β, σ²[Σ, 0; 0, 0] }. (5)
If the observable vector y in (3) arises from the combination of two sub-samples y_1 and y_2, where y_1 and y_2 are n_1 × 1 and n_2 × 1 vectors with n_1 + n_2 = n, then (3) can also be written in the partitioned form

M = { [y_1; y_2], [X_(1); X_(2)]β, σ²[Σ_11, Σ_12; Σ_21, Σ_22] }, (6)

E-mail addresses: yongge@mail.shufe.edu.cn, doug.wiens@ualberta.ca
where X_(1) and X_(2) are n_1 × p and n_2 × p matrices and Σ_11 is an n_1 × n_1 matrix. In this case, the first sub-sample model in (6) can be written as

M_1 = {y_1, X_(1)β, σ²Σ_11}, (7)

and the first sub-sample model in (4) can be written as

M_1c = {y_1, X_(1)β | Aβ = b, σ²Σ_11}. (8)

Because (4) has a linear restriction on β, the estimation of β is more complicated than under (3). Of interest to us are the relations between the estimators of β under (3) and (4). In particular, it is important to give necessary and sufficient conditions for the estimators of β under (3) and (4) to be equal. If two estimators of β under (3) and (4) are equal, it is natural to use the estimator of β under (3) instead of that under (4). The restricted linear model (4) also has a close link with the hypothesis test Aβ = b for β under (3): from the equalities of the estimators of β under (3) and (4), one can also check the testability of the hypothesis. Linear models with restrictions occur widely in statistics; see, e.g., Amemiya (1985), Bewley (1986), Chipman and Rao (1964), Dent (1980), Haupt and Oberhofer (2002), Ravikumar, Ray and Savin (2000) and Theil (1971).

Throughout this paper, R^{m×n} stands for the set of all real m × n matrices; M′, r(M) and R(M) denote the transpose, the rank and the range (column space) of a matrix M, respectively. For M ∈ R^{m×n}, M⁻ denotes a g-inverse of M, i.e., MM⁻M = M; M⁺ denotes the Moore-Penrose inverse of M, i.e., MM⁺M = M, M⁺MM⁺ = M⁺, (MM⁺)′ = MM⁺ and (M⁺M)′ = M⁺M. Further, let P_M = MM⁺, E_M = I_m − MM⁺ and F_M = I_n − M⁺M.

Let M be as given in (3). The OLSE of the parameter vector β under (3), denoted by β̂, is defined to be any vector minimizing ‖y − Xβ‖² = (y − Xβ)′(y − Xβ). The OLSE of the vector Xβ under (3) is defined to be OLSE(Xβ) = Xβ̂. In general, for any given l × p matrix K, the OLSE of the vector Kβ under (3) is defined to be OLSE(Kβ) = Kβ̂.
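These definitions are easy to illustrate numerically. The sketch below (assuming Python with numpy; the randomly generated data are ours, purely illustrative) computes the OLSE of β via the Moore-Penrose inverse and checks that OLSE(Xβ) is the orthogonal projection P_X y:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# OLSE of beta: a minimizer of ||y - X beta||^2; the minimum-norm one is X^+ y.
beta_ols = np.linalg.pinv(X) @ y

# OLSE(X beta) is the orthogonal projection of y onto R(X), with P_X = X X^+.
P_X = X @ np.linalg.pinv(X)
assert np.allclose(X @ beta_ols, P_X @ y)

# The normal equations X'(y - X beta_ols) = 0 hold at the minimizer.
assert np.allclose(X.T @ (y - X @ beta_ols), 0, atol=1e-10)

# For any K, OLSE(K beta) = K beta_ols.
K = rng.standard_normal((2, p))
olse_K = K @ beta_ols
```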
The BLUE of Xβ under (3) is defined to be a linear estimator Gy such that E(Gy) = Xβ and, for any other linear unbiased estimator My of Xβ, Cov(My) − Cov(Gy) = MΣM′ − GΣG′ is nonnegative definite. The two estimators are well known and have been extensively investigated in the literature. Some frequently used OLSEs and BLUEs under the models (3), (4), (6), (7) and (8) are

OLSE_M(Xβ), OLSE_Mc(Xβ), OLSE_M(β), OLSE_Mc(β), OLSE_M1(β), OLSE_M1c(β),
BLUE_M(Xβ), BLUE_Mc(Xβ), BLUE_M(β), BLUE_Mc(β), BLUE_M1(β), BLUE_M1c(β).

The OLSEs and BLUEs of Xβ and β under (3) and (4) can be derived through matrix equations, generalized inverses of matrices or the Lagrange multipliers method. A well-known result on the solvability and general solution of a linear matrix equation is given below.

Lemma 1 The linear matrix equation Ax = b is consistent if and only if AA⁻b = b. In this case, the general solution of Ax = b can be written as x = A⁻b + (I_p − A⁻A)v, where the vector v is arbitrary. If A⁻ is taken as the Moore-Penrose inverse A⁺, the general solution of Ax = b can be written as

x = A⁺b + (I_p − A⁺A)v, (9)

where the vector v is arbitrary. If Ax = b is not consistent, then (9) is the general expression of the least-squares solutions of Ax = b.
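Lemma 1 can be verified numerically. The following sketch (assuming numpy; the sizes and random data are arbitrary) builds a consistent system, checks the solvability test AA⁺b = b, and confirms that (9) solves the equation for an arbitrary v:

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 2, 5
A = rng.standard_normal((m, p))       # full row rank with probability 1
b = A @ rng.standard_normal(p)        # consistent by construction

Ap = np.linalg.pinv(A)
assert np.allclose(A @ Ap @ b, b)     # solvability test A A^+ b = b

# General solution (9): x = A^+ b + (I - A^+ A) v, v arbitrary.
v = rng.standard_normal(p)
x = Ap @ b + (np.eye(p) - Ap @ A) @ v
assert np.allclose(A @ x, b)
```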
Note that [X, Σ][X, Σ]⁺[X, Σ] = [X, Σ]. Hence if the model in (1) is correct, then

E([X, Σ][X, Σ]⁺y − y) = [X, Σ][X, Σ]⁺Xβ − Xβ = 0

and

Cov([X, Σ][X, Σ]⁺y − y) = σ²([X, Σ][X, Σ]⁺ − I)Σ([X, Σ][X, Σ]⁺ − I)′ = 0,

so that [X, Σ][X, Σ]⁺y = y, or equivalently, y ∈ R[X, Σ] holds with probability 1.

Definition 1 The linear model in (1) is said to be consistent if y ∈ R[X, Σ] holds with probability 1.

The consistency concept for a general linear model was introduced in Rao (1971, 1973); it is a basic assumption in the investigation of the best linear unbiased estimator (BLUE) of Xβ under the general linear model in (1).

Definition 2 Suppose M in (3) is consistent. Then two linear estimators L_1y and L_2y of Xβ under (3) are said to be equal with probability 1 if (L_1 − L_2)[X, Σ] = 0.

Lemma 2 Let M be as given in (3), and suppose K_1y + z_1 and K_2y + z_2 are two linear unbiased estimators for Xβ under (3). Then K_1y + z_1 = K_2y + z_2 holds with probability 1 if and only if K_1Σ = K_2Σ holds.

Lemma 3 Let M and M_c be as given in (3) and (4). Then:

(a) The OLSE of Xβ under M can uniquely be written as

OLSE_M(Xβ) = P_X y (10)

with E[OLSE_M(Xβ)] = Xβ and Cov[OLSE_M(Xβ)] = σ²P_XΣP_X.

(b) The BLUE of Xβ under M can be written as

BLUE_M(Xβ) = P_{X|Σ}y, (11)

where P_{X|Σ} is a solution of the consistent matrix equation Z[X, ΣE_X] = [X, 0], in which the two partitioned matrices satisfy

R[X, ΣE_X] = R[X, Σ] and R([X, 0]′) ⊆ R([X, ΣE_X]′). (12)

The general expression of P_{X|Σ} can be written as

P_{X|Σ} = [X, 0][X, ΣE_X]⁺ + V(I_n − [X, ΣE_X][X, ΣE_X]⁺), (13)

where V ∈ R^{n×n} is arbitrary. Moreover, E[BLUE_M(Xβ)] = Xβ and Cov[BLUE_M(Xβ)] = σ²P_{X|Σ}ΣP′_{X|Σ}. In particular, BLUE_M(Xβ) is unique with probability 1 if (3) is consistent.

(c) The OLSE of Xβ under M_c can uniquely be written as

OLSE_Mc(Xβ) = (I_n − P_{X_A})XA⁺b + P_{X_A}y, (14)

where X_A = XF_A. In this case,

E[OLSE_Mc(Xβ)] = Xβ and Cov[OLSE_Mc(Xβ)] = σ²P_{X_A}ΣP_{X_A}.
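Part (c) can be checked numerically by comparing (14) with the reparameterization β = A⁺b + F_Aγ used in the proof below (a sketch assuming numpy; the random data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, m = 10, 4, 2
X = rng.standard_normal((n, p))
A = rng.standard_normal((m, p))
b = A @ rng.standard_normal(p)            # consistent restriction A beta = b
y = rng.standard_normal(n)

Ap = np.linalg.pinv(A)
F_A = np.eye(p) - Ap @ A                  # F_A = I - A^+ A
X_A = X @ F_A
P_XA = X_A @ np.linalg.pinv(X_A)          # projector onto R(X F_A)

# Restricted OLSE of X beta from (14).
fit = (np.eye(n) - P_XA) @ X @ Ap @ b + P_XA @ y

# Same fit via the reparameterization beta = A^+ b + F_A gamma.
gamma = np.linalg.pinv(X_A) @ (y - X @ Ap @ b)
beta_r = Ap @ b + F_A @ gamma
assert np.allclose(X @ beta_r, fit)
assert np.allclose(A @ beta_r, b)         # the restriction is satisfied exactly
```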
(d) The BLUE of Xβ under M_c can be written as

BLUE_Mc(Xβ) = (I_n − P_{X_A|Σ})XA⁺b + P_{X_A|Σ}y (15)

with E[BLUE_Mc(Xβ)] = Xβ and Cov[BLUE_Mc(Xβ)] = σ²P_{X_A|Σ}ΣP′_{X_A|Σ}, where P_{X_A|Σ} is a solution to the consistent equation Z_1[X_A, ΣE_{X_A}] = [X_A, 0], and the two block matrices satisfy

R[X_A, ΣE_{X_A}] = R[X_A, Σ] and R([X_A, 0]′) ⊆ R([X_A, ΣE_{X_A}]′). (16)

The general expression of P_{X_A|Σ} can be written as

P_{X_A|Σ} = [X_A, 0][X_A, ΣE_{X_A}]⁺ + V_1(I_n − [X_A, ΣE_{X_A}][X_A, ΣE_{X_A}]⁺), (17)

where V_1 ∈ R^{n×n} is arbitrary. P_{X_A|Σ} is unique if and only if r([X, Σ; A, 0]) = r(A) + n; BLUE_Mc(Xβ) is unique with probability 1 if and only if

[y; b] ∈ R([X, Σ; A, 0]) (18)

holds with probability 1.

Proof The results in (a) and (b) are well known; see, e.g., Puntanen and Styan (1989), Puntanen, Styan and Werner (2000). If (2) is consistent, its general solution is

β = A⁺b + F_Aγ, (19)

where the vector γ is arbitrary. Hence,

OLSE_Mc(Xβ) = XA⁺b + OLSE(X_Aγ) and BLUE_Mc(Xβ) = XA⁺b + BLUE(X_Aγ). (20)

Substituting (19) into the equation in (1) yields the following reparameterized model

z = X_Aγ + ε, (21)

where z = y − XA⁺b and X_A = XF_A. The OLSE of γ under (21) can be written as γ̂ = X_A⁺z + F_{X_A}u, where the vector u is arbitrary. Substituting this result into (19) gives the OLSE of β under (4) as follows

OLSE_Mc(β) = A⁺b + F_AX_A⁺z + F_AF_{X_A}u.

Hence,

OLSE_Mc(Xβ) = XA⁺b + P_{X_A}z = (I_n − P_{X_A})XA⁺b + P_{X_A}y.

It can be derived from (b) that the BLUE of X_Aγ under (21) is BLUE(X_Aγ) = P_{X_A|Σ}z. Substituting this result into the second equality in (20) gives the BLUE of Xβ under (4),

BLUE_Mc(Xβ) = XA⁺b + BLUE(X_Aγ) = XA⁺b + P_{X_A|Σ}z, (22)

as required for (15). It can be seen from (15) and (22) that BLUE_Mc(Xβ) is unique with probability 1 if and only if

r[z, X_A, Σ] = r[X_A, Σ] (23)
holds with probability 1. Applying Lemma 4(b), AA⁺b = b and elementary block matrix operations to both sides of (23) gives

r[z, X_A, Σ] = r[y − XA⁺b, XF_A, Σ] = r([y − XA⁺b, X, Σ; 0, A, 0]) − r(A) = r([y, X, Σ; b, A, 0]) − r(A)

and

r[X_A, Σ] = r[XF_A, Σ] = r([X, Σ; A, 0]) − r(A).

Thus, (23) is equivalent to r([y, X, Σ; b, A, 0]) = r([X, Σ; A, 0]), i.e., (18) holds.

For the parametric vector Xβ, the four linear estimators OLSE_M(Xβ), BLUE_M(Xβ), OLSE_Mc(Xβ) and BLUE_Mc(Xβ) of Xβ under (3) and (4) are not necessarily equal. It is therefore important to reveal relations among these estimators. In Section 2, we investigate the following six equalities among the four estimators for Xβ under (3) and (4):

(a1) OLSE_M(Xβ) = BLUE_M(Xβ), (b1) OLSE_M(Xβ) = OLSE_Mc(Xβ),
(c1) BLUE_M(Xβ) = BLUE_Mc(Xβ), (d1) OLSE_Mc(Xβ) = BLUE_Mc(Xβ),
(e1) OLSE_M(Xβ) = BLUE_Mc(Xβ), (f1) OLSE_Mc(Xβ) = BLUE_M(Xβ).

In Section 3, we investigate the following four equalities for the estimators of an estimable parametric function Kβ under (3) and (4):

(a2) OLSE_M(Kβ) = BLUE_M(Kβ), (b2) OLSE_M(Kβ) = OLSE_Mc(Kβ),
(c2) BLUE_M(Kβ) = BLUE_Mc(Kβ), (d2) OLSE_Mc(Kβ) = BLUE_Mc(Kβ).

In Section 4, we investigate the following three equalities:

(a3) OLSE_M(β) = OLSE_M1(β), (b3) BLUE_M(β) = BLUE_M1(β), (c3) OLSE_Mc(β) = OLSE_M1c(β).

Some useful rank formulas for partitioned matrices due to Marsaglia and Styan (1974) are given below. These rank formulas can be used to simplify various matrix equalities involving generalized inverses and Moore-Penrose inverses.

Lemma 4 Let A ∈ R^{m×n}, B ∈ R^{m×k}, C ∈ R^{l×n} and D ∈ R^{l×k}. Then:

(a) r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A).
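Part (a) of the rank formulas is easy to confirm numerically (a sketch assuming numpy; the ranks below are the generic values for random matrices of these sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
# A is 6 x 4 with rank 3 by construction; B is 6 x 2 with full rank 2.
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))
B = rng.standard_normal((6, 2))

r = np.linalg.matrix_rank
E_A = np.eye(6) - A @ np.linalg.pinv(A)      # E_A = I - A A^+
E_B = np.eye(6) - B @ np.linalg.pinv(B)      # E_B = I - B B^+

# Lemma 4(a): r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A).
assert r(np.hstack([A, B])) == r(A) + r(E_A @ B)
assert r(np.hstack([A, B])) == r(B) + r(E_B @ A)
```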
(b) r([A; C]) = r(A) + r(CF_A) = r(C) + r(AF_C).

(c) r([A, B; C, 0]) = r(B) + r(C) + r(E_B A F_C).

(d) r[A, B] = r(A) ⇔ E_A B = 0 ⇔ R(B) ⊆ R(A).

(e) r([A; C]) = r(A) ⇔ CF_A = 0 ⇔ R(C′) ⊆ R(A′).

(f) If R(A) ∩ R(B) = {0}, then (E_A B)⁺(E_A B) = B⁺B.

(g) If R(A′) ∩ R(C′) = {0}, then (CF_A)(CF_A)⁺ = CC⁺.

(h) If R(B) ⊆ R(A) and R(C′) ⊆ R(A′), then CA⁻B is unique, CA⁻B = CA⁺B, and

r([A, B; C, D]) = r(A) + r(D − CA⁺B).

2 Relations between the four estimators of Xβ

Relations between OLSE_M(Xβ) and BLUE_M(Xβ) under (3), especially the equality OLSE_M(Xβ) = BLUE_M(Xβ), have been extensively investigated in the literature; see, e.g., Baltagi (1989), Puntanen and Styan (1989), Puntanen, Styan and Werner (2000), Qian and Schmidt (2003), and Tian and Wiens (2006), among others. A well-known result on this equality is summarized below.

Theorem 1 Let M be as given in (3). Then the following statements are equivalent:

(a) BLUE_M(Xβ) = OLSE_M(Xβ) holds with probability 1.

(b) P_XΣE_X = 0, i.e., R(ΣX) ⊆ R(X).

(c) P_XΣ = ΣP_X.

Proof Note that both OLSE_M(Xβ) and BLUE_M(Xβ) are unbiased for Xβ. Then by Lemma 2, OLSE_M(Xβ) = BLUE_M(Xβ) holds with probability 1 if and only if (P_{X|Σ} − P_X)Σ = 0. From (13), we obtain

P_{X|Σ}Σ = [X, 0][X, ΣE_X]⁺Σ. (24)

It is easy to verify that any matrix [A, B] has a g-inverse [A, B]⁻ = [A⁺ − A⁺B(E_A B)⁺; (E_A B)⁺]. Applying this to [X, ΣE_X] gives

[X, ΣE_X]⁻ = [X⁺ − X⁺ΣE_X(E_XΣE_X)⁺; (E_XΣE_X)⁺].

Hence, we obtain by Lemma 4(h) and (24) the following result

P_{X|Σ}Σ = [X, 0][X, ΣE_X]⁺Σ = [X, 0][X, ΣE_X]⁻Σ = P_XΣ − P_XΣE_X(E_XΣE_X)⁺Σ. (25)

In this case, (P_{X|Σ} − P_X)Σ = 0 is equivalent to

P_XΣE_X(E_XΣE_X)⁺Σ = 0. (26)

Simplifying (26) by the equivalences

MN⁺ = 0 ⇔ MN⁺N = 0 ⇔ MN′N = 0 ⇔ MN′ = 0, (27)

applied with the symmetric matrix N = E_XΣE_X, leads to the equivalence of (a), (b) and (c).
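The sufficiency direction of Theorem 1 can be illustrated with a covariance matrix built to commute with P_X (a sketch assuming numpy; the choice Σ = 2P_X + 5E_X is ours, purely illustrative, and is positive definite):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 7, 3
X = rng.standard_normal((n, p))
P_X = X @ np.linalg.pinv(X)
E_X = np.eye(n) - P_X

# Sigma commutes with P_X by construction, so condition (c) of Theorem 1 holds.
Sigma = 2 * P_X + 5 * E_X
assert np.allclose(P_X @ Sigma, Sigma @ P_X)

y = rng.standard_normal(n)
ols_fit = P_X @ y
Si = np.linalg.inv(Sigma)
# GLS form of the BLUE of X beta (X has full column rank with probability 1).
blue_fit = X @ np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
assert np.allclose(ols_fit, blue_fit)
```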
Theorem 2 Let M and M_c be as given in (3) and (4). Then:

(a) OLSE_M(Xβ) = OLSE_Mc(Xβ) holds with probability 1 if and only if

R([X′Σ; 0]) ⊆ R([X′X; A]). (28)

(b) If R(X′) ∩ R(A′) = {0}, then OLSE_M(Xβ) = OLSE_Mc(Xβ) holds with probability 1.

(c) Under the condition R(X) ⊆ R(Σ), OLSE_M(Xβ) = OLSE_Mc(Xβ) holds with probability 1 if and only if R(X′) ∩ R(A′) = {0}.

(d) Under the condition r(X) = p, OLSE_M(Xβ) = OLSE_Mc(Xβ) holds with probability 1 if and only if AX⁺Σ = 0.

(e) Under the conditions R(X) ⊆ R(Σ) and r(X) = p, OLSE_M(Xβ) = OLSE_Mc(Xβ) holds with probability 1 if and only if A = 0.

Proof Note that both OLSE_M(Xβ) and OLSE_Mc(Xβ) are unbiased for Xβ. Then by Lemma 2, OLSE_M(Xβ) = OLSE_Mc(Xβ) holds with probability 1 if and only if

P_XΣ = P_{X_A}Σ. (29)

To simplify this equality, we first find the rank of P_XΣ − P_{X_A}Σ. It is easy to see P_{X_A}P_X = P_{X_A}. Hence, we find by Lemma 4(a) and (b) that

r(P_XΣ − P_{X_A}Σ) = r(P_XΣ − P_{X_A}P_XΣ) = r[P_XΣ, X_A] − r(X_A) = r[P_XΣ, XF_A] − r(XF_A) = r([X′X, X′Σ; A, 0]) − r([X; A]).

Hence, OLSE_M(Xβ) = OLSE_Mc(Xβ) holds with probability 1 if and only if

r([X′X, X′Σ; A, 0]) = r([X; A]), (30)

which is equivalent to (28) by Lemma 4(d). Results (b)-(e) are also derived from (30).

Relations between BLUE_M(Xβ) and BLUE_Mc(Xβ) were also studied by several authors, see, e.g., Baksalary and Kala (1979), Hallum, Lewis and Boullion (1973), Mathew (1983), and Yang, Wen, Cui and Sun (1987), and there are some disputes among these papers. Our result for the equality between BLUE_M(Xβ) and BLUE_Mc(Xβ) is given below.

Theorem 3 (a) BLUE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if

r([X, Σ; A, 0]) = r([X; A]) + r[X, Σ] − r(X).

(b) Under the condition R(X) ∩ R(Σ) = {0}, BLUE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1.

(c) Under the condition R(X′) ∩ R(A′) = {0}, BLUE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1.

(d) Under the condition R(X) ⊆ R(Σ), BLUE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if R(X′) ∩ R(A′) = {0}.
Proof Because both BLUE_M(Xβ) and BLUE_Mc(Xβ) are unbiased for Xβ, it follows from Lemma 2 that BLUE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if

P_{X|Σ}Σ = P_{X_A|Σ}Σ.

Note from (13) that the product P_{X_A|Σ}Σ is

P_{X_A|Σ}Σ = [X_A, 0][X_A, ΣE_{X_A}]⁺Σ. (31)

From (24), (25), (12), (16), Lemma 4(h) and elementary block matrix operations, we find that

r(P_{X|Σ}Σ − P_{X_A|Σ}Σ)
= r([X, 0][X, ΣE_X]⁺Σ − [X_A, 0][X_A, ΣE_{X_A}]⁺Σ)
= r([X, ΣE_X, 0, 0, Σ; 0, 0, X_A, ΣE_{X_A}, Σ; X, 0, X_A, 0, 0]) − r[X, ΣE_X] − r[X_A, ΣE_{X_A}]
= r([X, 0, X_A, 0, 0; 0, 0, X_A, 0, Σ; 0, ΣE_X, 0, ΣE_{X_A}, 0]) − r[X, Σ] − r[X_A, Σ]
= r[X_A, Σ] + r[ΣE_X, ΣE_{X_A}] + r(X) − r[X, Σ] − r[X_A, Σ]
= r(ΣE_{X_A}) + r(X) − r[X, Σ]
= r[X_A, Σ] − r(X_A) + r(X) − r[X, Σ]
= r([X, Σ; A, 0]) − r([X; A]) − r[X, Σ] + r(X).

Setting this equality to zero gives (a). Results (b), (c) and (d) follow from (a).

Mathew (1983) showed that BLUE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if R(Σ) ∩ R(X) ⊆ R[X(I_n − A⁺A)]. It can be shown by Lemma 4(a) and (b) that this result is equivalent to Theorem 3(a).

Theorem 4 Let M and M_c be as given in (3) and (4). Then:

(a) OLSE_Mc(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if (XF_A)(XF_A)⁺Σ = Σ(XF_A)(XF_A)⁺, or equivalently,

r([ΣX, X; 0, A; A, 0]) = r([X; A]) + r(A).

(b) Under the condition R(X′) ∩ R(A′) = {0}, OLSE_Mc(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if OLSE_M(Xβ) = BLUE_M(Xβ).

Proof The proof is left for the reader.

Theorem 5 Let M and M_c be as given in (3) and (4). Then:

(a) OLSE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if R([ΣX; 0]) ⊆ R([X; A]). In this case, OLSE_M(Xβ) = OLSE_Mc(Xβ) = BLUE_Mc(Xβ) holds with probability 1.

(b) Under the condition R(X′) ∩ R(A′) = {0}, OLSE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if OLSE_M(Xβ) = BLUE_M(Xβ) holds with probability 1. In this case,

OLSE_M(Xβ) = OLSE_Mc(Xβ) = BLUE_M(Xβ) = BLUE_Mc(Xβ)

holds with probability 1.
Proof Since both OLSE_M(Xβ) and BLUE_Mc(Xβ) are unbiased for Xβ, it follows from Lemma 2 that OLSE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1 if and only if P_XΣ = P_{X_A|Σ}Σ. From (31), Lemma 4(a), (b) and (h), (16) and elementary block matrix operations, we find that

r(P_XΣ − P_{X_A|Σ}Σ)
= r(P_XΣ − [X_A, 0][X_A, ΣE_{X_A}]⁺Σ)
= r([X_A, ΣE_{X_A}, Σ; X_A, 0, P_XΣ]) − r[X_A, ΣE_{X_A}]
= r([X_A, 0, Σ; 0, P_XΣE_{X_A}, 0]) − r[X_A, Σ]
= r(E_{X_A}ΣP_X)
= r[X_A, ΣX] − r(X_A)
= r[XF_A, ΣX] − r(XF_A)
= r([X, ΣX; A, 0]) − r([X; A]).

Setting this equality to zero yields the desired results.

Theorem 6 Let M and M_c be as given in (3) and (4). Then OLSE_Mc(Xβ) = BLUE_M(Xβ) holds with probability 1 if and only if

r([X, Σ; A, 0]) = r([X; A]) + r[X, Σ] − r(X) and r([ΣX, X; A, 0]) = r(X) + r(A).

In this case, OLSE_M(Xβ) = BLUE_M(Xβ) = BLUE_Mc(Xβ) holds with probability 1.

Proof The proof is left for the reader.

3 Relations between estimators of parametric functions

Let K be an l × p known matrix. Then the product Kβ is called a parametric function of β under (3). When K = I_p, Kβ = β is the unknown parameter vector under (3). The parametric function Kβ is said to be estimable under (3) if there exists an l × n matrix L so that E(Ly) = Kβ holds. It is well known that Kβ is estimable under (3) if and only if R(K′) ⊆ R(X′); see Alalouf and Styan (1979). Since there is a linear restriction on β under (4), the estimability of Kβ under (4) is defined as below.

Definition 3 Let K be an l × p known matrix. The vector Kβ is said to be estimable under M_c in (4) if there exist an l × n constant matrix L and an l × 1 constant vector c such that E(Ly + c) = Kβ.

Note E(Ly + c) = LXβ + c. Hence E(Ly + c) = Kβ is equivalent to LXβ + c = Kβ. However, we cannot simply say that LX = K and c = 0, because β is not free but subject to the linear equation Aβ = b. A well-known result on the estimability of Kβ under (4) asserts that Kβ is estimable under (4) if and only if

R(K′) ⊆ R[X′, A′].
(32)

It can be seen from (32) that if Kβ is estimable under (3), then Kβ is estimable under (4). Let M be as given in (3). If K = K_1K_2, then

OLSE_M(Kβ) = K_1 OLSE_M(K_2β); (33)

see Groß, Trenkler and Werner (2001). Suppose Kβ is estimable under (3), i.e., K = L_1X for some L_1. Then the BLUE of Kβ is a linear estimator Ly such that E(Ly) = Kβ and, for any other linear unbiased estimator My of Kβ, Cov(My) − Cov(Ly) = MΣM′ − LΣL′ is nonnegative definite. It is easy to verify that if Kβ is estimable under (3) with K = LX, then

BLUE_M(Kβ) = L BLUE_M(Xβ); (34)

see, e.g., Groß, Trenkler and Werner (2001). Results (33) and (34) imply that the OLSEs and BLUEs of the estimable parametric function Kβ under (3) and (4) can be derived from the OLSEs and BLUEs of Xβ under (3) and (4). Suppose Kβ is estimable under (4), i.e., there exists an L = [L_1, L_2] such that K = L_1X + L_2A. From (33) and (34), we see

OLSE_Mc(Kβ) = L_1 OLSE_Mc(Xβ) + L_2b, BLUE_Mc(Kβ) = L_1 BLUE_Mc(Xβ) + L_2b. (35)

From these expressions and Lemma 3, we obtain the following two consequences.

Theorem 7 Let M be as given in (3) and suppose Kβ is estimable under M, i.e., K can be written as K = LX. Then:

(a) OLSE_M(Kβ) = LP_Xy = KX⁺y with E[OLSE_M(Kβ)] = Kβ and Cov[OLSE_M(Kβ)] = σ²KX⁺Σ(KX⁺)′ = σ²(LP_X)Σ(LP_X)′.

(b) BLUE_M(Kβ) = Sy, where S is a solution of the consistent equation S[X, ΣE_X] = [K, 0]. Moreover, BLUE_M(Kβ) can be written as BLUE_M(Kβ) = L BLUE_M(Xβ) = LP_{X|Σ}y with E[BLUE_M(Kβ)] = Kβ and Cov[BLUE_M(Kβ)] = σ²(LP_{X|Σ})Σ(LP_{X|Σ})′.

Theorem 8 Let M_c be as given in (4) and suppose Kβ is estimable under (4), i.e., K can be written as K = L_1X + L_2A. Then:

(a) The OLSE of Kβ under M_c can be written as

OLSE_Mc(Kβ) = L_1(I_n − P_{X_A})XA⁺b + L_1P_{X_A}y + L_2b,

where X_A = XF_A. In this case, E[OLSE_Mc(Kβ)] = Kβ and Cov[OLSE_Mc(Kβ)] = σ²L_1P_{X_A}ΣP_{X_A}L_1′.

(b) The BLUE of Kβ under M_c can be written as

BLUE_Mc(Kβ) = L_1(I_n − P_{X_A|Σ})XA⁺b + L_1P_{X_A|Σ}y + L_2b

with E[BLUE_Mc(Kβ)] = Kβ and Cov[BLUE_Mc(Kβ)] = σ²(L_1P_{X_A|Σ})Σ(L_1P_{X_A|Σ})′.

We now consider the relations among the four estimators of Kβ in Theorems 7 and 8.
Theorem 9 Let M be as given in (3), and suppose Kβ is estimable under M, i.e., K can be written as K = LX. Then OLSE_M(Kβ) = BLUE_M(Kβ) holds with probability 1 if and only if R[(KX⁺Σ)′] ⊆ R(X).

Proof Since both OLSE_M(Kβ) and BLUE_M(Kβ) are unbiased for Kβ under M, it follows from Lemma 2 and Theorem 7(a) and (b) that OLSE_M(Kβ) = BLUE_M(Kβ) holds with probability 1 if and only if LP_XΣ = LP_{X|Σ}Σ. From (25), we see

LP_XΣ − LP_{X|Σ}Σ = LP_XΣE_X(E_XΣE_X)⁺Σ.

Hence, LP_XΣ = LP_{X|Σ}Σ is equivalent to LP_XΣE_X(E_XΣE_X)⁺Σ = 0. It is easy to verify by (27) that this equality is equivalent to LP_XΣE_X = 0, which can also be written as KX⁺Σ = KX⁺ΣP_X.
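Theorem 9 implies that the equality can hold for a particular estimable Kβ even when OLSE_M(Xβ) and BLUE_M(Xβ) differ. The following sketch (assuming numpy; the rank-one construction of Σ and the choice of L are ours, purely illustrative) exhibits such a case:

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 6, 2
X = rng.standard_normal((n, p))
P_X = X @ np.linalg.pinv(X)
E_X = np.eye(n) - P_X

# A rank-one perturbation of I with u neither in R(X) nor in R(X)^perp:
# then P_X Sigma E_X = (P_X u)(E_X u)' != 0, so the full fits differ.
u = X[:, 0] + rng.standard_normal(n)
Sigma = np.eye(n) + np.outer(u, u)
M0 = P_X @ Sigma @ E_X
assert not np.allclose(M0, 0)

# Any row L orthogonal to P_X u gives L P_X Sigma E_X = 0, so K = L X
# satisfies the condition of Theorem 9.
g = P_X @ u
r_vec = rng.standard_normal(n)
L = (r_vec - (r_vec @ g) / (g @ g) * g)[None, :]
K = L @ X

Si = np.linalg.inv(Sigma)
ols_mat = K @ np.linalg.pinv(X)                          # coefficient of OLSE_M(K beta)
blue_mat = K @ np.linalg.solve(X.T @ Si @ X, X.T @ Si)   # coefficient of BLUE_M(K beta)
assert np.allclose(ols_mat, blue_mat)
```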
Theorem 10 Let M and M_c be as given in (3) and (4), and suppose Kβ is estimable under M, i.e., K = LX. Then:

(a) OLSE_M(Kβ) = OLSE_Mc(Kβ) holds with probability 1 if and only if KX⁺Σ = K(XF_A)⁺Σ.

(b) Under the condition R(X′) ∩ R(A′) = {0}, OLSE_M(Kβ) = OLSE_Mc(Kβ) holds with probability 1.

(c) Under the condition R(X) ⊆ R(Σ), OLSE_M(Kβ) = OLSE_Mc(Kβ) holds with probability 1 if and only if R([K′; 0]) ⊆ R([X′X; A]).

(d) Under the conditions R(X) ⊆ R(Σ) and r(X) = p, OLSE_M(Kβ) = OLSE_Mc(Kβ) holds with probability 1 if and only if A(X′X)⁺K′ = 0.

Proof Since both OLSE_M(Kβ) and OLSE_Mc(Kβ) are unbiased for Kβ, it follows from Lemma 2 and Theorems 7(a) and 8(a) that OLSE_M(Kβ) = OLSE_Mc(Kβ) holds with probability 1 if and only if LP_XΣ = LP_{X_A}Σ, that is, KX⁺Σ = K(XF_A)⁺Σ. If R(X′) ∩ R(A′) = {0}, then (XF_A)(XF_A)⁺ = XX⁺ by Lemma 4(g), so that P_{X_A} = P_X. Hence, (b) follows. The proofs of (c) and (d) are left for the reader.

Theorem 11 Let M and M_c be as given in (3) and (4), and suppose Kβ is estimable under M, i.e., K = LX. Then:

(a) BLUE_M(Kβ) = BLUE_Mc(Kβ) holds with probability 1 if and only if

r([Σ, X, 0; X′, 0, K′; 0, A, 0]) = r([X; A]) + r[X, Σ].

(b) Under the condition R(X) ⊆ R(Σ), BLUE_M(Kβ) = BLUE_Mc(Kβ) holds with probability 1 if and only if R([K′; 0]) ⊆ R([X′Σ⁺X; A]).

(c) Under the conditions that Σ is positive definite and r(X) = p, BLUE_M(Kβ) = BLUE_Mc(Kβ) holds with probability 1 if and only if A(X′Σ⁻¹X)⁻¹K′ = 0.

Proof It can be seen from Lemma 2 and Theorems 7(b) and 8(b) that BLUE_M(Kβ) = BLUE_Mc(Kβ) holds with probability 1 if and only if LP_{X|Σ}Σ = LP_{X_A|Σ}Σ, that is,

L[X, 0][X, ΣE_X]⁺Σ − L[X_A, 0][X_A, ΣE_{X_A}]⁺Σ = 0.

Applying Lemma 4(h) to the left-hand side gives

r(L[X, 0][X, ΣE_X]⁺Σ − L[X_A, 0][X_A, ΣE_{X_A}]⁺Σ) = r([Σ, X, 0; X′, 0, K′; 0, A, 0]) − r([X; A]) − r[X, Σ].

The verification of this rank equality is left for the reader.

Theorem 12 Let M_c be as given in (4), and suppose Kβ is estimable under M_c, i.e., K = L_1X + L_2A.
Then OLSE_Mc(Kβ) = BLUE_Mc(Kβ) holds with probability 1 if and only if KX_A⁺Σ = KX_A⁺ΣP_{X_A}.
Proof Since both OLSE_Mc(Kβ) and BLUE_Mc(Kβ) are unbiased for Kβ, it follows from Lemma 2 and Theorem 8(a) and (b) that OLSE_Mc(Kβ) = BLUE_Mc(Kβ) holds with probability 1 if and only if L_1P_{X_A}Σ = L_1P_{X_A|Σ}Σ, i.e.,

L_1P_{X_A}Σ − L_1[X_A, 0][X_A, ΣE_{X_A}]⁺Σ = 0.

From (25),

L_1P_{X_A}Σ − L_1P_{X_A|Σ}Σ = L_1P_{X_A}ΣE_{X_A}(E_{X_A}ΣE_{X_A})⁺Σ.

Hence, L_1P_{X_A}Σ = L_1P_{X_A|Σ}Σ is equivalent to L_1P_{X_A}ΣE_{X_A}(E_{X_A}ΣE_{X_A})⁺Σ = 0. It is easy to verify by (27) that this equality is equivalent to L_1P_{X_A}ΣE_{X_A} = 0, which can also be written as KX_A⁺Σ = KX_A⁺ΣP_{X_A}.

4 Relations between estimators for the nonsingular linear model and its sub-sample model

We have seen that the model (7) is the first part of the model (3). In many situations, a general linear model is the combination of a set of sub-sample models, for example, the fixed-effect model for panel data in econometrics; see Gourieroux and Monfort (1980), Qian and Schmidt (2003). If r(X) = r(X_(1)) = p and r(Σ) = n, then (3), (4), (7) and (8) are said to be nonsingular models. In such cases, X⁺X = I_p, and β is estimable under (3), (4), (7) and (8). From Lemma 3, we obtain the OLSEs and BLUEs of β under (3), (4), (7) and (8) as follows.

Lemma 5 Let M and M_c be as given in (3) and (4), and suppose r(X) = p and r(Σ) = n. Then:

(a) The OLSE of β under M can be written as OLSE_M(β) = X⁺y with E[OLSE_M(β)] = β and Cov[OLSE_M(β)] = σ²X⁺Σ(X⁺)′.

(b) The BLUE of β under M can be written as BLUE_M(β) = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y with E[BLUE_M(β)] = β and Cov[BLUE_M(β)] = σ²(X′Σ⁻¹X)⁻¹.

(c) The OLSE of β under M_c can be written as OLSE_Mc(β) = (I_p − X_A⁺X)A⁺b + X_A⁺y, where X_A = XF_A. In this case, E[OLSE_Mc(β)] = β and Cov[OLSE_Mc(β)] = σ²X_A⁺Σ(X_A⁺)′.
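Parts (a) and (b) of Lemma 5, together with the Gauss-Markov property that Cov[OLSE_M(β)] − Cov[BLUE_M(β)] is nonnegative definite, can be checked numerically (a sketch assuming numpy; the random design and covariance are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 9, 3
X = rng.standard_normal((n, p))        # r(X) = p with probability 1
M = rng.standard_normal((n, n))
Sigma = M @ M.T + np.eye(n)            # positive definite, so r(Sigma) = n
y = rng.standard_normal(n)

Xp = np.linalg.pinv(X)
Si = np.linalg.inv(Sigma)
beta_ols = Xp @ y                                           # Lemma 5(a)
beta_blue = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)     # Lemma 5(b)

# Gauss-Markov: X^+ Sigma (X^+)' - (X' Sigma^{-1} X)^{-1} is nonnegative definite
# (taking sigma^2 = 1 without loss of generality).
gap = Xp @ Sigma @ Xp.T - np.linalg.inv(X.T @ Si @ X)
assert np.all(np.linalg.eigvalsh(gap) > -1e-9)
```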
Lemma 6 Let M_1 and M_1c be as given in (7) and (8), and suppose r(X_(1)) = p and r(Σ) = n. Then:

(a) The OLSE of β under M_1 can be written as OLSE_M1(β) = X_(1)⁺y_1 with E[OLSE_M1(β)] = β and Cov[OLSE_M1(β)] = σ²X_(1)⁺Σ_11(X_(1)⁺)′.

(b) The BLUE of β under M_1 can be written as BLUE_M1(β) = (X_(1)′Σ_11⁻¹X_(1))⁻¹X_(1)′Σ_11⁻¹y_1 with E[BLUE_M1(β)] = β and Cov[BLUE_M1(β)] = σ²(X_(1)′Σ_11⁻¹X_(1))⁻¹.

(c) The OLSE of β under M_1c can be written as OLSE_M1c(β) = [I_p − (X_(1))_A⁺X_(1)]A⁺b + (X_(1))_A⁺y_1, where (X_(1))_A = X_(1)F_A. In this case, E[OLSE_M1c(β)] = β and Cov[OLSE_M1c(β)] = σ²(X_(1))_A⁺Σ_11[(X_(1))_A⁺]′.

The following results are derived from Lemmas 5 and 6.
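One of these results, Theorem 14 below, is easy to probe numerically: when X_(2) = Σ_21Σ_11⁻¹X_(1), the full-sample and sub-sample BLUEs of β coincide. A sketch assuming numpy (the block construction is ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2, p = 6, 4, 2
n = n1 + n2
M = rng.standard_normal((n, n))
Sigma = M @ M.T + np.eye(n)                  # positive definite
S11, S21 = Sigma[:n1, :n1], Sigma[n1:, :n1]

X1 = rng.standard_normal((n1, p))
X2 = S21 @ np.linalg.inv(S11) @ X1           # condition (b) of Theorem 14
X = np.vstack([X1, X2])
y = rng.standard_normal(n)

# Full-sample BLUE versus first-sub-sample BLUE (Lemmas 5(b) and 6(b)).
Si = np.linalg.inv(Sigma)
blue_full = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
S11i = np.linalg.inv(S11)
blue_sub = np.linalg.solve(X1.T @ S11i @ X1, X1.T @ S11i @ y[:n1])
assert np.allclose(blue_full, blue_sub)      # the second sub-sample is uninformative
```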
Theorem 13 Let M and M_1 be as given in (3) and (7), and suppose r(X_(1)) = p and r(Σ) = n. Then OLSE_M(β) = OLSE_M1(β) holds with probability 1 if and only if X_(2) = 0.

Proof Note that both OLSE_M(β) and OLSE_M1(β) are unbiased for β. Also note that y_1 = [I_n1, 0]y and Σ is nonsingular. Hence we find from Lemma 2 and Lemmas 5(a) and 6(a) that OLSE_M(β) = OLSE_M1(β) holds with probability 1 if and only if X⁺ = [X_(1)⁺, 0], which is equivalent to X = [X_(1); 0], i.e., X_(2) = 0.

Theorem 14 Let M and M_1 be as given in (3) and (7), and suppose r(X_(1)) = p and r(Σ) = n. Then the following statements are equivalent:

(a) BLUE_M(β) = BLUE_M1(β) holds with probability 1.

(b) X_(2) = Σ_21Σ_11⁻¹X_(1).

(c) r([Σ_11, X_(1); Σ_21, X_(2)]) = r(Σ_11).

Proof Note that both BLUE_M(β) and BLUE_M1(β) are unbiased for β. Then we see from Lemma 2 and Lemmas 5(b) and 6(b) that BLUE_M(β) = BLUE_M1(β) holds with probability 1 if and only if

(X′Σ⁻¹X)⁻¹X′ = [(X_(1)′Σ_11⁻¹X_(1))⁻¹X_(1)′Σ_11⁻¹, 0]Σ. (36)

This is equivalent to

X′ = X′Σ⁻¹X(X_(1)′Σ_11⁻¹X_(1))⁻¹[X_(1)′, X_(1)′Σ_11⁻¹Σ_12].

It can be found from Lemma 4(h) that

r(X′ − X′Σ⁻¹X(X_(1)′Σ_11⁻¹X_(1))⁻¹[X_(1)′, X_(1)′Σ_11⁻¹Σ_12]) = r[X′Σ⁻¹X − X_(1)′Σ_11⁻¹X_(1), X_(2)′ − X_(1)′Σ_11⁻¹Σ_12].

Hence (36) is equivalent to (b). The equivalence of (b) and (c) follows from Lemma 4(h).

The verification of the following theorem is left for the reader.

Theorem 15 Let M_c and M_1c be as given in (4) and (8), and suppose r(X_(1)) = p and r(Σ) = n. Then OLSE_Mc(β) = OLSE_M1c(β) holds with probability 1 if and only if R(X_(2)′) ⊆ R(A′).

Remarks Many problems on relations between estimators for the general linear model can reasonably be investigated. For example, assume that the restricted linear model M_c in (4) incorrectly specifies the dispersion matrix σ²Σ as σ²Σ_0, where Σ_0 is a given nonnegative definite matrix and σ² is a positive parameter (possibly unknown).
Then consider relations between the BLUEs of the original model M_c and the misspecified model {y, Xβ | Aβ = b, σ²Σ_0}. Some results on relations between the BLUEs of two general linear models can be found in Mathew (1983) and Mitra and Moore (1973). Through the matrix rank method, we can also derive necessary and sufficient conditions for the BLUEs of the original model and the misspecified model to be equal with probability 1. This work will be presented in another paper.

In addition to the explicit restrictions in (2), the parameter vector β may be subject to some implicit restrictions, represented by adding-up conditions on the endogenous variables, By = c, where both the q × n matrix B and the q × 1 vector c are fixed and known. Note that y is a random vector. Hence By = c is equivalent to E(By) = c and Cov(By) = 0 with probability 1, i.e.,

BXβ = c and BΣ = 0. (37)
In this case, (3), (4) and (37) can be written as

M_r = {y, Xβ | Aβ = b, BXβ = c, BΣ = 0, σ²Σ}.

This model is called the fully restricted linear model; see Haupt and Oberhofer (2002). It may also be of interest to investigate various equalities for the estimators of Xβ in this model.

References

I.S. Alalouf and G.P.H. Styan (1979). Characterizations of estimability in the general linear model. Ann. Statist. 7,

T. Amemiya (1985). Advanced Econometrics. Basil Blackwell, Oxford.

J.K. Baksalary and R. Kala (1979). Best linear unbiased estimation in the restricted general linear model. Math. Operationsforsch. Statist. Ser. Statist. 10,

J.K. Baksalary, S. Puntanen and G.P.H. Styan (1990). A property of the dispersion matrix of the best linear unbiased estimator in the general Gauss-Markov model. Sankhyā Ser. A 52,

B.H. Baltagi (1989). Applications of a necessary and sufficient condition for OLS to be BLUE. Statist. & Probab. Letters 8,

R. Bewley (1986). Allocation Models: Specifications, Estimation, and Application. Ballinger Publishing Company.

P. Bhimasankaram and R. SahaRay (1997). On a partitioned linear model and some associated reduced models. Linear Algebra Appl. 264,

J.S. Chipman and M.M. Rao (1964). The treatment of linear restrictions in regression analysis. Econometrica 32,

W.T. Dent (1980). On restricted estimation in linear models. J. Econometrics 12,

C. Gourieroux and A. Monfort (1980). Sufficient linear structures: econometric applications. Econometrica 48,

J. Groß and S. Puntanen (2000). Estimation under a general partitioned linear model. Linear Algebra Appl. 321,

J. Groß, G. Trenkler and H.J. Werner (2001). The equality of linear transformations of the ordinary least squares estimator and the best linear unbiased estimator. Sankhyā 63,

C.R. Hallum, T.O. Lewis and T.L. Boullion (1973). Estimation in the restricted general linear model with a positive semidefinite covariance matrix. Comm. Statist. 1,

H. Haupt and W. Oberhofer (2002).
Fully restricted linear regression: A pedagogical note. Economics Bulletin 3, 1-7.

G. Marsaglia and G.P.H. Styan (1974). Equalities and inequalities for ranks of matrices. Linear and Multilinear Algebra 2,

T. Mathew (1983). A note on best linear unbiased estimation in the restricted general linear model. Math. Operationsforsch. Statist. Ser. Statist. 14, 3-6.

S.K. Mitra and B.J. Moore (1973). Gauss-Markov estimation with an incorrect dispersion matrix. Sankhyā Ser. A 35,

M. Nurhonen and S. Puntanen (1992). A property of partitioned generalized regression. Comm. Statist. A-Theory Meth. 21,

S. Puntanen (1996). Some matrix results related to a partitioned singular linear model. Comm. Statist.-Theory Meth. 25,

S. Puntanen and A.J. Scott (1996). Some further remarks on the singular linear model. Linear Algebra Appl. 237/238,

S. Puntanen and G.P.H. Styan (1989). The equality of the ordinary least squares estimator and the best linear unbiased estimator. With comments by O. Kempthorne, S.R. Searle, and a reply by the authors. Amer. Statist. 43,
S. Puntanen, G.P.H. Styan and H.J. Werner (2000). Two matrix-based proofs that the linear estimator Gy is the best linear unbiased estimator. J. Statist. Plan. Infer. 88,

H. Qian and P. Schmidt (2003). Partial GLS regression. Economics Letters 79,

H. Qian and Y. Tian (2006). Partially superfluous observations. Econometric Theory 22,

C.R. Rao (1971). Unified theory of linear estimation. Sankhyā Ser. A 33,

C.R. Rao (1973). Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix. J. Multivariate Anal. 3,

C.R. Rao and S.K. Mitra (1971). Generalized Inverse of Matrices and Its Applications. Wiley, New York.

B. Ravikumar, S. Ray and N.E. Savin (2000). Robust Wald tests in SUR systems with adding-up restrictions. Econometrica 68,

S.R. Searle (1971). Linear Models. Wiley, New York.

H. Theil (1971). Principles of Econometrics. Wiley, New York.

Y. Tian and D.P. Wiens (2006). On equality and proportionality of ordinary least-squares, weighted least-squares and best linear unbiased estimators in the general linear model. Statist. Prob. Lett. 76,

H.J. Werner and C. Yapar (1995). More on partitioned possibly restricted linear regression. In Proceedings of the 5th Tartu Conference on Multivariate Statistics (E.-M. Tiit et al., eds.),

W.-L. Yang, H.-J. Cui and G.-W. Sun (1987). On best linear unbiased estimation in the restricted general linear model. Statistics 18,

B.-X. Zhang, B.-S. Liu and C.-Y. Lu (2004). A study of the equivalence of the BLUEs between a partitioned singular linear model and its reduced singular linear models. Acta Math. Sinica 20,