Estimation under a general partitioned linear model
Linear Algebra and its Applications 321 (2000) 131-144

Jürgen Groß (a,*), Simo Puntanen (b)

(a) Department of Statistics, University of Dortmund, Vogelpothsweg 87, D-44221 Dortmund, Germany
(b) Department of Mathematics, Statistics and Philosophy, University of Tampere, P.O. Box 607, FIN-33101 Tampere, Finland

E-mail addresses: gross@amadeus.statistik.uni-dortmund.de (J. Groß), sjp@uta.fi (S. Puntanen)

Received 7 November 1998; accepted 5 January 2000
Submitted by G.P.H. Styan

Abstract

In this paper, we consider a general partitioned linear model and a corresponding reduced model. We derive a necessary and sufficient condition for the BLUE for the expectation of the observable random vector under the reduced model to remain BLUE in the partitioned model. The former is shown to be always an admissible estimator under a mild condition. We also consider alternative linear estimators and their coincidence with the BLUE under the partitioned model. © 2000 Elsevier Science Inc. All rights reserved.

AMS classification: 62J05; 62H12

Keywords: Partitioned linear model; Best linear unbiased estimation; Admissible estimation; Orthogonal projector

1. Introduction

Let $\mathbb{R}^{m,n}$ denote the set of $m \times n$ real matrices. The symbols $A'$, $A^+$, $A^-$, $\mathcal{C}(A)$, $\mathcal{N}(A)$, and $\mathrm{rk}(A)$ will stand for the transpose, the Moore-Penrose inverse, any generalized inverse, the column space, the null space, and the rank, respectively, of $A \in \mathbb{R}^{m,n}$. By $A^\perp$ we denote any matrix satisfying $\mathcal{C}(A^\perp) = \mathcal{N}(A')$. Further, $P_A = AA^+$ denotes the orthogonal projector (with respect to the standard inner product) onto $\mathcal{C}(A)$, and $M_A = I - P_A$. In particular, we write $P_i = P_{X_i}$ and $M_i = I - P_i$, $i = 1, 2$.
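As a concrete illustration of this notation, the following minimal numpy sketch (with matrices of our own choosing, not part of the original paper) computes $P_A = AA^+$ and $M_A = I - P_A$ and checks the defining projector properties:

    import numpy as np

    def proj(A):
        # Orthogonal projector onto C(A): P_A = A A^+.
        return A @ np.linalg.pinv(A)

    A = np.array([[1., 0.],
                  [1., 1.],
                  [0., 2.]])
    PA = proj(A)
    MA = np.eye(3) - PA                 # M_A = I - P_A
    assert np.allclose(PA, PA.T)        # P_A is symmetric
    assert np.allclose(PA @ PA, PA)     # ... and idempotent
    assert np.allclose(MA @ A, 0)       # M_A annihilates C(A)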
Consider a general Gauss-Markov model denoted by

$\mathcal{M} = \{y, X\beta, \sigma^2 V\}$: $\quad E(y) = X\beta$, $D(y) = \sigma^2 V$,  (1.1)

where $X$ is a known $n \times p$ matrix, $\beta$ a $p \times 1$ vector of unknown parameters, $V$ a known $n \times n$ nonnegative definite matrix, and $\sigma^2 > 0$ an unknown scalar. $E(\cdot)$ and $D(\cdot)$ denote expectation and dispersion of a random vector argument. It is assumed that the model is consistent, that is,

$y \in \mathcal{C}(X : V)$,  (1.2)

see [6,14,15].

A vector of parametric functions $K\beta$, where $K \in \mathbb{R}^{k,p}$, is estimable under the model $\mathcal{M}$ if and only if $K = CX$ for some $C \in \mathbb{R}^{k,n}$. It is well known, see e.g. [16, p. 282], that under the model $\mathcal{M}$ the best linear unbiased estimator (BLUE) for an estimable vector of parametric functions $CX\beta$ is given by $Fy$, where $F \in \mathbb{R}^{k,n}$ is any solution to

$F(X : VX^\perp) = C(X : 0)$.  (1.3)

We may also conclude that if $Gy$ is the BLUE for $X\beta$, i.e., the matrix $G \in \mathbb{R}^{n,n}$ is a solution to

$G(X : VX^\perp) = (X : 0)$,  (1.4)

then $CG$ is a solution to (1.3), and therefore $CGy$ is the BLUE for $CX\beta$.

By partitioning $X = (X_1 : X_2)$ so that $X_1$ has $p_1$ columns and $X_2$ has $p_2$ columns with $p = p_1 + p_2$, and by accordingly writing $\beta = (\beta_1' : \beta_2')'$, we can express $\mathcal{M}$ in its partitioned form

$\mathcal{M} = \{y, X_1\beta_1 + X_2\beta_2, \sigma^2 V\}$.  (1.5)

Regarding $\beta_1$ as a nuisance parameter, our interest focuses on estimation of a vector of estimable parametric functions $K_2\beta_2$.

Lemma 1. Under the model $\mathcal{M} = \{y, X_1\beta_1 + X_2\beta_2, \sigma^2 V\}$, the vector of parametric functions $K_2\beta_2$ is estimable if and only if

$K_2 = C_2M_1X_2$  (1.6)

for some matrix $C_2$, where $M_1 = I - P_1$ is the orthogonal projector onto $\mathcal{N}(X_1')$.

Proof. The vector $K_2\beta_2$ is estimable under the model $\mathcal{M}$ if and only if

$(0 : K_2) = C(X_1 : X_2)$  (1.7)

for some $C$. If $K_2 = C_2M_1X_2$, then $C = C_2M_1$ satisfies (1.7). Conversely, if (1.7) holds for some $C$, then $CX_1 = 0$, implying $C = C_2M_1$ for some $C_2$. Hence $K_2 = C_2M_1X_2$.
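Equation (1.4) can be solved numerically. The sketch below (our own illustrative example, not from the paper) builds a matrix $X^\perp$ from the SVD of $X$ and recovers a representation $G$ of the BLUE of $X\beta$; for this particular $X$ and $V$ the coefficient matrix happens to be nonsingular, so the solution is exact:

    import numpy as np

    def null_basis(A):
        # Columns spanning N(A'), i.e. a choice of A^perp.
        U, s, _ = np.linalg.svd(A)
        return U[:, np.sum(s > 1e-12):]

    X = np.array([[1., 0.], [1., 1.], [1., 2.]])
    V = np.diag([1., 2., 3.])
    Xp  = null_basis(X)
    W   = np.hstack([X, V @ Xp])
    RHS = np.hstack([X, np.zeros_like(V @ Xp)])
    G   = RHS @ np.linalg.pinv(W)       # solves G (X : V X^perp) = (X : 0)
    assert np.allclose(G @ X, X)
    assert np.allclose(G @ V @ Xp, 0)   # so Gy represents the BLUE of X beta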
2. Reduced models

In view of Lemma 1, our interest is led to estimation of $M_1X_2\beta_2$. Under model $\mathcal{M}$, the linear transform $M_1y$ of the observable random vector $y$ has expectation $E(M_1y) = M_1X_2\beta_2$ and dispersion $D(M_1y) = \sigma^2 M_1VM_1$. Hence, we obtain a reduced linear model

$\mathcal{M}_{cr} = \{M_1y, M_1X_2\beta_2, \sigma^2 M_1VM_1\}$,  (2.1)

which is in accordance with model $\mathcal{M}$ and is appropriate for inferences about $M_1X_2\beta_2$. Such a correctly reduced model has been considered for example in [3-5,10-12].

On the other hand, the triplet $\{y, M_1X_2\beta_2, \sigma^2 V\}$ contains all the information which we need for estimating $M_1X_2\beta_2$. Hence, we can raise the question whether it is possible to obtain estimators for $M_1X_2\beta_2$ by regarding this triplet as a reduced model

$\mathcal{M}_r = \{y, M_1X_2\beta_2, \sigma^2 V\}$: $\quad E(y) = M_1X_2\beta_2$, $D(y) = \sigma^2 V$.  (2.2)

Such a model has been considered by Bhimasankaram and Saha Ray [3] and Bhimasankaram et al. [5].

It should be emphasized that we do not propose to consider model $\mathcal{M}_r$ for practical purposes. For this, model $\mathcal{M}_{cr}$ would probably be the better choice in most cases. As a matter of fact, one of the referees pointed out that using model $\mathcal{M}_r$ in practice could be quite obscure. Therefore we rather like to think of model $\mathcal{M}_r$ as a source of estimators whose properties under the true model $\mathcal{M}$ are investigated in Sections 3-5. Eventually, in Section 6 we reconsider the correctly reduced model $\mathcal{M}_{cr}$.

Let us now turn our attention to model $\mathcal{M}_r$. Clearly, if we consider $\mathcal{M}_r$ as a linear model, then we have to assume that

$y \in \mathcal{C}(M_1X_2 : V)$ with probability one.  (2.3)

On the other hand, it would not contradict our original inference base under the model $\mathcal{M}$ if $y$ realizes in $\mathcal{C}(X_1 : X_2 : V)$ but not in $\mathcal{C}(M_1X_2 : V)$. In other words, inconsistency of model $\mathcal{M}_r$, i.e., $y \notin \mathcal{C}(M_1X_2 : V)$, does not automatically imply inconsistency of model $\mathcal{M}$. To overcome any logical difficulties which arise from regarding a reduced model $\mathcal{M}_r$, we assume that the subspace in which $y$ realizes almost surely is the same under both models, i.e.,

$\mathcal{C}(X_1 : X_2 : V) = \mathcal{C}(M_1X_2 : V)$.  (2.4)

In that case we will call $\mathcal{M}$ not contradictory to $\mathcal{M}_r$. Obviously, if $\mathcal{M}$ is not contradictory to $\mathcal{M}_r$, then consistency of $\mathcal{M}$ implies consistency of $\mathcal{M}_r$ in view of $\mathcal{C}(M_1X_2 : V) \subseteq \mathcal{C}(X_1 : X_2 : V)$. If model $\mathcal{M}$ is only weakly singular, that is,

$\mathcal{C}(X_1 : X_2) \subseteq \mathcal{C}(V)$,  (2.5)

then $\mathcal{M}$ is never contradictory to $\mathcal{M}_r$.
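Conditions (2.4) and (2.5) are rank-checkable, since for column spaces with one side always contained in the other, equality is equivalent to equality of ranks. The following helpers (a minimal numpy sketch of our own, not part of the paper) test both conditions:

    import numpy as np

    rk, pinv = np.linalg.matrix_rank, np.linalg.pinv

    def not_contradictory(X1, X2, V):
        # Condition (2.4): C(X1 : X2 : V) = C(M1 X2 : V); the right-hand
        # side is always contained in the left, so equal ranks suffice.
        M1 = np.eye(X1.shape[0]) - X1 @ pinv(X1)
        return rk(np.hstack([X1, X2, V])) == rk(np.hstack([M1 @ X2, V]))

    def weakly_singular(X1, X2, V):
        # Condition (2.5): C(X1 : X2) is contained in C(V).
        return rk(np.hstack([V, X1, X2])) == rk(V)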
In the following lemma we collect together some properties related to condition (2.4).

Lemma 2. Let $X_1 \in \mathbb{R}^{n,p_1}$, $X_2 \in \mathbb{R}^{n,p_2}$, and let $V \in \mathbb{R}^{n,n}$ be nonnegative definite. Then:

(i) The following three conditions are equivalent:
  (a1) $\mathcal{C}(X_1 : X_2 : V) = \mathcal{C}(M_1X_2 : V)$,
  (a2) $\mathcal{C}(X_1) \subseteq \mathcal{C}(M_1X_2 : V)$,
  (a3) $\mathrm{rk}(X_1) + \dim[\mathcal{C}(M_1X_2) \cap \mathcal{C}(V)] = \dim[\mathcal{C}(X_1 : X_2) \cap \mathcal{C}(V)]$.

(ii) Condition (a1) implies the following four conditions (the first three being equivalent to each other):
  (b1) $\mathcal{C}(X_1) = \mathcal{C}(P_1V)$,
  (b2) $\mathrm{rk}(X_1) = \mathrm{rk}(VX_1)$,
  (b3) $\mathcal{C}(X_1) \cap \mathcal{C}(V)^\perp = \{0\}$,
  (b4) $\mathcal{C}(X_1) \subseteq \mathcal{C}[(I - P_{M_1X_2})V]$.

(iii) The following two conditions are equivalent:
  (c1) $\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$,
  (c2) $\mathcal{C}(X_1) \oplus [\mathcal{C}(M_1X_2) \cap \mathcal{C}(V)] = \mathcal{C}(X_1 : X_2) \cap \mathcal{C}(V)$.

(iv) Furthermore,
  (d1) condition (c1) implies (a1),
  (d2) condition (a1) does not imply (c1).

Proof. In view of

$\mathcal{C}(X_1 : X_2 : V) = \mathcal{C}(X_1 : X_2) + \mathcal{C}(V) = \mathcal{C}(X_1) + \mathcal{C}(M_1X_2) + \mathcal{C}(V)$

(cf. e.g. [9]), condition (a1) is equivalent to (a2). Since always $\mathcal{C}(M_1X_2 : V) \subseteq \mathcal{C}(X_1 : X_2 : V)$, (a1) is equivalent to

$\mathrm{rk}(X_1 : X_2 : V) = \mathrm{rk}(M_1X_2 : V)$.  (2.6)

From the identities

$\mathrm{rk}(M_1X_2 : V) = \mathrm{rk}(M_1X_2) + \mathrm{rk}(V) - \dim[\mathcal{C}(M_1X_2) \cap \mathcal{C}(V)]$,
$\mathrm{rk}(X_1 : X_2 : V) = \mathrm{rk}(X_1 : X_2) + \mathrm{rk}(V) - \dim[\mathcal{C}(X_1 : X_2) \cap \mathcal{C}(V)]$

(and from $\mathrm{rk}(X_1 : X_2) = \mathrm{rk}(X_1) + \mathrm{rk}(M_1X_2)$) we see that (2.6) is equivalent to (a3). Thus part (i) of the lemma is proved.
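As an aside, the rank identities used in part (i) make (a3) directly computable, since $\dim[\mathcal{C}(A) \cap \mathcal{C}(B)] = \mathrm{rk}(A) + \mathrm{rk}(B) - \mathrm{rk}(A : B)$. A small numpy helper (our own sketch, with an illustrative positive definite $V$ for which (a1) and (a3) both hold):

    import numpy as np

    rk, pinv = np.linalg.matrix_rank, np.linalg.pinv

    def dim_cap(A, B):
        # dim[C(A) & C(B)] = rk(A) + rk(B) - rk(A : B)
        return rk(A) + rk(B) - rk(np.hstack([A, B]))

    X1 = np.array([[1.], [1.], [0.]])
    X2 = np.array([[0.], [1.], [1.]])
    V  = np.eye(3)
    M1 = np.eye(3) - X1 @ pinv(X1)
    lhs = rk(X1) + dim_cap(M1 @ X2, V)          # left side of (a3)
    rhs = dim_cap(np.hstack([X1, X2]), V)       # right side of (a3)
    print(lhs == rhs)   # True, in line with (a1) <=> (a3)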
Assume for part (ii) that (a2) holds. Then there exist matrices $A$ and $B$ such that

$X_1 = M_1X_2A + VB$.  (2.7)

Premultiplying (2.7) by $P_1$ yields $X_1 = P_1VB$, i.e.,

$\mathcal{C}(X_1) \subseteq \mathcal{C}(P_1V)$.  (2.8)

Since $\mathcal{C}(P_1V) \subseteq \mathcal{C}(X_1)$, (2.8) is equivalent to

$\mathrm{rk}(X_1) = \mathrm{rk}(P_1V) = \mathrm{rk}(X_1'V) = \mathrm{rk}(VX_1)$.  (2.9)

In light of [9], $\mathrm{rk}(X_1'V) = \mathrm{rk}(X_1) - \dim[\mathcal{C}(X_1) \cap \mathcal{C}(V)^\perp]$, so that (2.9) holds if and only if (b3) holds. If (2.7) is premultiplied by $I - P_{M_1X_2}$, then we obtain $X_1 = (I - P_{M_1X_2})VB$, thus showing that (a1) indeed implies (b4). This shows part (ii) of the lemma.

For part (iii) let (c1) be satisfied. It is obvious that (c1) implies (a1) and hence (a3). But since $\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$ is equivalent to $\mathcal{C}(X_1) = \mathcal{C}(X_1) \cap \mathcal{C}(V)$, and since always

$[\mathcal{C}(X_1) \cap \mathcal{C}(V)] \oplus [\mathcal{C}(M_1X_2) \cap \mathcal{C}(V)] \subseteq \mathcal{C}(X_1 : X_2) \cap \mathcal{C}(V)$,  (2.10)

where $\mathcal{C}(X_1 : X_2) = \mathcal{C}(X_1) \oplus \mathcal{C}(M_1X_2)$, (a3) is equivalent to (c2), showing that (c1) implies (c2). Conversely, if (c2) holds, then $\mathcal{C}(X_1) \subseteq \mathcal{C}(X_1 : X_2) \cap \mathcal{C}(V) \subseteq \mathcal{C}(V)$, i.e., (c1). Hence part (iii) is shown.

For part (iv) we note that (d1) has already been mentioned above. To prove (d2), take

$(X_1 : X_2) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, $\quad V = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$.  (2.11)

Then (a1) holds but $\mathcal{C}(X_1)$ is not contained in $\mathcal{C}(V)$.

We note that statement (d2) is in contradiction with a statement by Bhimasankaram et al. [5, Section 1]. Unfortunately, the aforementioned authors seem to believe that condition (a1) is equivalent to condition (c1), which is easily disproved by (2.11).
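A quick numerical confirmation of (d2) via (2.11) (a minimal numpy sketch, not part of the original paper):

    import numpy as np

    rk, pinv = np.linalg.matrix_rank, np.linalg.pinv

    X1 = np.array([[0.], [1.]])
    X2 = np.array([[1.], [0.]])
    V  = np.array([[1., 1.], [1., 1.]])      # nonnegative definite, rank 1
    M1 = np.eye(2) - X1 @ pinv(X1)
    # (a1): both column spaces equal R^2 here
    print(rk(np.hstack([X1, X2, V])) == rk(np.hstack([M1 @ X2, V])))  # True
    # (c1) fails: (0,1)' is not a multiple of (1,1)', so C(X1) is not in C(V)
    print(rk(np.hstack([V, X1])) == rk(V))                            # False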
In Section 3, we investigate conditions under which the BLUE for $M_1X_2\beta_2$ under model $\mathcal{M}_r$ remains BLUE under the partitioned model $\mathcal{M}$, where it is assumed that $\mathcal{M}$ is not contradictory to $\mathcal{M}_r$. Note that Bhimasankaram and Saha Ray [3, Theorem 2.4] investigate a similar problem when $V$ is positive definite by supplying a sufficient condition for coincidence of both BLUEs. Bhimasankaram et al. [5, Theorem 3.1] study a generalization to the case of a singular matrix $V$ such that $\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$.

3. Best linear unbiased estimation

The following two lemmas give characterizations of the BLUEs of $M_1X_2\beta_2$ under models $\mathcal{M}$ and $\mathcal{M}_r$, respectively. Since model $\mathcal{M}$ is assumed to be not contradictory to model $\mathcal{M}_r$, every two representations $F_1y$ and $F_2y$ of the BLUE of $M_1X_2\beta_2$ under model $\mathcal{M}_r$, where possibly $F_1 \neq F_2$, satisfy

$F_1y = F_2y$ for all $y \in \mathcal{C}(M_1X_2 : V) = \mathcal{C}(X_1 : X_2 : V)$.  (3.1)

This means that if we consider the set of different representations of the BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r$ as a set of linear estimators for $M_1X_2\beta_2$ under $\mathcal{M}$, then all these estimators coincide almost surely under $\mathcal{M}$, provided the model $\mathcal{M}$ is not contradictory to the model $\mathcal{M}_r$. This is true even if no BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r$ remains BLUE under $\mathcal{M}$.

Lemma 3. Let $Z = I - P_{M_1X_2}$, and let $\bar{V} = M_1VM_1$. The following four statements are equivalent:
(i) $Fy$ is BLUE for $M_1X_2\beta_2$ under the model $\mathcal{M} = \{y, X_1\beta_1 + X_2\beta_2, \sigma^2 V\}$.
(ii) $F$ satisfies $F(X_1 : X_2) = (0 : M_1X_2)$ and $FVM_1Z = 0$.
(iii) $F = NM_1$, where $N$ satisfies $NM_1X_2 = M_1X_2$ and $N\bar{V}Z = 0$.
(iv) $F = [I - \bar{V}Z(Z\bar{V}Z)^+Z]M_1 + P[I - Z\bar{V}Z(Z\bar{V}Z)^+]ZM_1$ for some $P$.

Proof. An estimator $Fy$ is BLUE for $M_1X_2\beta_2$ under the model $\mathcal{M}$ if and only if $F(X_1 : X_2) = (0 : M_1X_2)$ and $FV(X_1 : X_2)^\perp = 0$, where $(X_1 : X_2)^\perp$ is any matrix satisfying $\mathcal{C}[(X_1 : X_2)^\perp] = \mathcal{N}[(X_1 : X_2)']$. But since

$M_1Z = M_1(I - P_{M_1X_2}) = M_1 - P_{M_1X_2} = I - P_{(X_1:X_2)}$,

equivalence between (i) and (ii) is shown. It is clear that (iii) implies (ii). Conversely, if (ii) is satisfied, then $FX_1 = 0$ implies $F = NM_1$ for some $N$, and (iii) follows. From [17, Theorem 1] we know that the general solution to the equations $NM_1X_2 = M_1X_2$ and $N\bar{V}Z = 0$ with respect to $N$ is

$N = [I - \bar{V}Z(Z\bar{V}Z)^+Z] + A[I - Z\bar{V}Z(Z\bar{V}Z)^+]Z$

for arbitrary $A$. Hence equivalence between (iii) and (iv) holds.

Note that we will not need condition (iv) of Lemma 3 in this paper. The BLUE under the reduced model can be characterized as follows.

Lemma 4. Let $Z = I - P_{M_1X_2}$. The following three statements are equivalent:
(i) $Fy$ is BLUE for $M_1X_2\beta_2$ under the model $\mathcal{M}_r = \{y, M_1X_2\beta_2, \sigma^2 V\}$.
(ii) $F$ satisfies $FM_1X_2 = M_1X_2$ and $FVZ = 0$.
(iii) $F = [I - VZ(ZVZ)^+Z] + B[I - ZVZ(ZVZ)^+]Z$ for some $B$.

Proof. Equivalence between (i) and (ii) follows immediately by noting that $Z$ is the orthogonal projector onto $\mathcal{N}(X_2'M_1)$, and therefore is a special choice for $(M_1X_2)^\perp$. Equivalence between (ii) and (iii) follows from [17, Theorem 1].
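Lemma 4(iii) is constructive: taking $B = 0$ gives the explicit representation $F = I - VZ(ZVZ)^+Z$ of a BLUE under $\mathcal{M}_r$. A minimal numpy check of the two conditions in Lemma 4(ii), with illustrative matrices of our own choosing (not from the paper):

    import numpy as np

    pinv = np.linalg.pinv

    X1 = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
    X2 = np.array([[0.], [1.], [4.], [9.]])
    V  = np.diag([1., 2., 1., 2.])
    n  = X1.shape[0]
    M1 = np.eye(n) - X1 @ pinv(X1)
    MX = M1 @ X2
    Z  = np.eye(n) - MX @ pinv(MX)                  # Z = I - P_{M1 X2}
    F  = np.eye(n) - V @ Z @ pinv(Z @ V @ Z) @ Z    # Lemma 4(iii), B = 0
    assert np.allclose(F @ MX, MX)                  # F M1 X2 = M1 X2
    assert np.allclose(F @ V @ Z, 0)                # F V Z = 0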
We may now state the main result of this section.

Theorem 1. Let the partitioned model $\mathcal{M} = \{y, X_1\beta_1 + X_2\beta_2, \sigma^2 V\}$ be not contradictory to the reduced model $\mathcal{M}_r$. Then every BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r = \{y, M_1X_2\beta_2, \sigma^2 V\}$ remains BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$ if and only if

$\mathcal{C}(X_1) \subseteq \mathcal{C}[V(I - P_{M_1X_2})]$.  (3.2)

Proof. Suppose that $Fy$ is BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r$, meaning that $F$ is any matrix from Lemma 4. By Lemma 4(ii) it follows that $FV = FVP_{M_1X_2}$. Since $P_{M_1X_2} = P_{M_1X_2}M_1$, this gives

$FV = FVM_1$.  (3.3)

Hence from condition (ii) of Lemma 3, $Fy$ is BLUE under $\mathcal{M}$ if and only if

$FX_1 = 0$ and $FX_2 = M_1X_2$.  (3.4)

But $FX_1 = 0$ if and only if $FM_1 = F$, so that $FX_1 = 0$ implies $FX_2 = FM_1X_2 = M_1X_2$, where the latter equality comes from Lemma 4(ii). This shows that $Fy$, being the BLUE under $\mathcal{M}_r$, is BLUE under $\mathcal{M}$ if and only if

$FX_1 = 0$.  (3.5)

It remains to show that (3.2) is equivalent to (3.5) for every BLUE $Fy$ under $\mathcal{M}_r$. Suppose that the latter is satisfied, i.e., (3.5) holds for every $F$ from condition (iii) in Lemma 4. Then $[I - VZ(ZVZ)^+Z]X_1 = 0$, showing (3.2). Conversely, if (3.2) holds, then $\mathcal{C}(ZX_1) \subseteq \mathcal{C}(ZVZ)$, i.e., $ZVZ(ZVZ)^+ZX_1 = ZX_1$. Hence from Lemma 4(iii),

$FX_1 = [I - VZ(ZVZ)^+Z]X_1$  (3.6)

for every BLUE $Fy$ under $\mathcal{M}_r$. But in view of (3.2), $X_1 = VZA$ for some $A$. Since clearly $VZ(ZVZ)^+ZVZ = VZ$, we arrive at $FX_1 = 0$, thus concluding the proof.

Remark 1. From the proof of Theorem 1 it becomes evident that the assertion remains true when the phrase "remains BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$" is replaced by "is unbiased for $M_1X_2\beta_2$ under $\mathcal{M}$".
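With $Z = I - P_{M_1X_2}$, condition (3.2) reads $\mathcal{C}(X_1) \subseteq \mathcal{C}(VZ)$ and can be tested by a rank comparison: the inclusion $\mathcal{C}(A) \subseteq \mathcal{C}(B)$ holds exactly when appending $A$ to $B$ does not increase the rank. A small helper (our own sketch, not from the paper):

    import numpy as np

    rk, pinv = np.linalg.matrix_rank, np.linalg.pinv

    def blue_carries_over(X1, X2, V):
        # Theorem 1: C(X1) subset of C(VZ), with Z = I - P_{M1 X2}.
        n  = X1.shape[0]
        M1 = np.eye(n) - X1 @ pinv(X1)
        MX = M1 @ X2
        Z  = np.eye(n) - MX @ pinv(MX)
        VZ = V @ Z
        return rk(np.hstack([VZ, X1])) == rk(VZ)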
In connection with a weakly singular model $\mathcal{M}$, we may state a result which is in accordance with the equivalence of (a) and (c) in [12, Theorem 1].

Corollary 1. Let the partitioned model $\mathcal{M}$ be weakly singular. Then every BLUE for $M_1X_2\beta_2$ under the reduced model $\mathcal{M}_r$ remains BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$ if and only if

$X_2'M_1V^-X_1 = 0$  (3.7)

for any generalized inverse $V^-$ of $V$.

Proof. In advance we note that the invariance of $X_2'M_1V^-X_1$ with respect to the choice of generalized inverse $V^-$ is equivalent to

$\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$ and $\mathcal{C}(M_1X_2) \subseteq \mathcal{C}(V)$,  (3.8)

cf. [18, p. 43]. Of course it is assumed that $X_1 \neq 0$ and $X_2'M_1 \neq 0$. Conditions (3.8) are equivalent to $\mathcal{C}(X_1 : X_2) \subseteq \mathcal{C}(V)$, which means that $\mathcal{M}$ is weakly singular.

If (3.7) is satisfied, then obviously $V^-X_1 = ZA$ for some matrix $A$, where $Z = I - P_{M_1X_2}$. In addition, the weak singularity implies $VV^-X_1 = X_1$ and hence $X_1 = VZA$. Thus from Theorem 1 every BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r$ remains BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$. Conversely, if condition (3.2) from Theorem 1 is satisfied, i.e., $\mathcal{C}(X_1) \subseteq \mathcal{C}(VZ)$, and $\mathcal{M}$ is weakly singular, i.e., $\mathcal{C}(X_1 : X_2) \subseteq \mathcal{C}(V)$, then

$X_2'M_1V^-X_1 = X_2'M_1V^-VZA = X_2'M_1ZA = 0$

for some matrix $A$ and every generalized inverse $V^-$ of $V$.

Obviously, when $\mathcal{C}(X_1) \not\subseteq \mathcal{C}(V)$, we can never expect the BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r$ to remain BLUE under $\mathcal{M}$. On the other hand, when $\mathcal{C}(X_1) = \mathcal{C}(VX_1)$, then condition (3.2) is always satisfied in view of $X_1 = ZX_1$. When the partitioned model $\mathcal{M}$ is not contradictory to $\mathcal{M}_r$, then the condition $\mathcal{C}(X_1) = \mathcal{C}(VX_1)$ is equivalent to $\mathcal{C}(VX_1) \subseteq \mathcal{C}(X_1)$, as shown in the proof of the following corollary.

Corollary 2. Let the partitioned model $\mathcal{M}$ be not contradictory to the reduced model $\mathcal{M}_r$. Then every BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r$ remains BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$ if

$\mathcal{C}(VX_1) \subseteq \mathcal{C}(X_1)$.  (3.9)

Proof. If model $\mathcal{M}$ is not contradictory to $\mathcal{M}_r$, then condition (b2) of Lemma 2 implies $\mathrm{rk}(X_1) = \mathrm{rk}(VX_1)$. Hence, (3.9) is equivalent to $\mathcal{C}(X_1) = \mathcal{C}(VX_1)$, and therefore $\mathcal{C}(X_1) = \mathcal{C}(VZX_1) \subseteq \mathcal{C}(VZ)$. This shows that condition (3.2) from Theorem 1 is satisfied under (3.9).

It follows easily from Lemmas 3(ii) and 4(ii) that under $\mathcal{C}(VX_1) \subseteq \mathcal{C}(X_1)$ every BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$ remains BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r$. In other words, if $\mathcal{C}(VX_1) \subseteq \mathcal{C}(X_1)$, then the sets of BLUEs under $\mathcal{M}$ and $\mathcal{M}_r$, respectively, coincide. We note that the condition $\mathcal{C}(VX_1) \subseteq \mathcal{C}(X_1)$ is necessary and sufficient for equality of the ordinary least squares estimator (OLSE) and the BLUE for $X_1\beta_1$ under a linear model $\{y, X_1\beta_1, \sigma^2 V\}$, cf. [13]. Corollary 2 has also been established by Bhimasankaram and Saha Ray [3, Theorem 2.4] and Bhimasankaram et al. [5, Theorem 3.1] under more restrictive assumptions.
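Condition (3.9) is easy to exhibit: for instance, $V = I + X_1X_1'$ satisfies $VX_1 = X_1(I + X_1'X_1) \in \mathcal{C}(X_1)$. A minimal check (our own illustrative construction, not from the paper):

    import numpy as np

    rk = np.linalg.matrix_rank

    X1 = np.array([[1., 0.], [1., 1.], [1., 2.]])
    V  = np.eye(3) + X1 @ X1.T     # then V X1 = X1 (I + X1'X1)
    # Condition (3.9): C(V X1) subset of C(X1):
    print(rk(np.hstack([X1, V @ X1])) == rk(X1))   # True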
We will now consider the estimator $P_{M_1X_2}y$ as the OLSE for $M_1X_2\beta_2$. If this estimator is BLUE under $\mathcal{M}_r$, then it remains BLUE under $\mathcal{M}$, as demonstrated by the following corollary.

Corollary 3. Let the partitioned model $\mathcal{M}$ be not contradictory to the reduced model $\mathcal{M}_r$. If $P_{M_1X_2}y$ is the BLUE for $M_1X_2\beta_2$ under the reduced model $\mathcal{M}_r$, then it remains BLUE under $\mathcal{M}$.

Proof. Condition (b4) of Lemma 2 implies that $\mathcal{C}(X_1) \subseteq \mathcal{C}(ZV)$, where $Z = I - P_{M_1X_2}$. But if $P_{M_1X_2}y$ is BLUE for $M_1X_2\beta_2$ under the reduced model $\mathcal{M}_r$, then from [13], $VZ = ZV$. Hence, $\mathcal{C}(X_1) \subseteq \mathcal{C}(VZ)$, showing that condition (3.2) from Theorem 1 is satisfied.

4. Admissible estimation

When the BLUE for $M_1X_2\beta_2$ under model $\mathcal{M}_r$ does not remain BLUE under model $\mathcal{M}$, it is natural to ask whether this estimator would make any sense under the partitioned model $\mathcal{M}$. In this section, we give a partial answer to this question by demonstrating that there cannot exist a linear estimator which is uniformly better, provided $\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$. More precisely, when $\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$, then the BLUE for $M_1X_2\beta_2$ under model $\mathcal{M}_r$ is an admissible estimator for $M_1X_2\beta_2$ among the set of linear estimators

$\mathcal{L}_n(y) = \{Ly + \lambda : L \in \mathbb{R}^{n,n}, \lambda \in \mathbb{R}^{n,1}\}$  (4.1)

for $M_1X_2\beta_2$ under the partitioned model $\mathcal{M}$.

According to Baksalary and Markiewicz [2], a linear estimator $Ay + a$ is called admissible among $\mathcal{L}_n(y)$ under $\mathcal{M}$ if there does not exist $Ly + \lambda \in \mathcal{L}_n(y)$ such that the inequality $\varrho(Ly + \lambda; M_1X_2\beta_2) \leq \varrho(Ay + a; M_1X_2\beta_2)$ holds for every pair $(\beta, \sigma^2) \in \Theta$ and is strict for at least one pair $(\beta, \sigma^2) \in \Theta$, where $\Theta$ is the parameter space corresponding to model $\mathcal{M}$, and

$\varrho(Ly + \lambda; M_1X_2\beta_2) = E[(Ly + \lambda - M_1X_2\beta_2)'(Ly + \lambda - M_1X_2\beta_2)]$  (4.2)

is the quadratic risk of $Ly + \lambda \in \mathcal{L}_n(y)$ under $\mathcal{M}$.

The following lemma, which follows easily from the main result in [2, Theorem], characterizes homogeneous linear admissible estimators for $M_1X_2\beta_2$ under model $\mathcal{M}$.

Lemma 5. An estimator $Ay$ is admissible for $M_1X_2\beta_2$ among $\mathcal{L}_n(y)$ under the partitioned model $\mathcal{M}$ if and only if $A \in \mathbb{R}^{n,n}$ satisfies the following four conditions:

$\mathcal{C}(VA') \subseteq \mathcal{C}(X_1 : X_2)$,  (4.3)
$AVM_1$ is symmetric,  (4.4)
$AV(M_1 - A)'$ is nonnegative definite,  (4.5)
$\mathcal{C}[(A - M_1)(X_1 : X_2)] = \mathcal{C}[(A - M_1)W]$,  (4.6)

where $W$ is any matrix such that $\mathcal{C}(W) = \mathcal{C}(X_1 : X_2) \cap \mathcal{C}(V)$.
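For simulation purposes, the risk (4.2) has the familiar decomposition into a variance term and a squared bias term, $\varrho(Ly + \lambda; M_1X_2\beta_2) = \sigma^2\,\mathrm{tr}(LVL') + \|(L(X_1 : X_2) - (0 : M_1X_2))\beta + \lambda\|^2$. A direct transcription (our own sketch, not part of the paper):

    import numpy as np

    def quad_risk(L, lam, X1, X2, V, beta, sigma2):
        # rho(Ly + lam; M1 X2 beta2): variance term plus squared bias, (4.2)
        n  = X1.shape[0]
        M1 = np.eye(n) - X1 @ np.linalg.pinv(X1)
        X  = np.hstack([X1, X2])
        T  = np.hstack([np.zeros_like(X1), M1 @ X2])   # target matrix
        bias = (L @ X - T) @ beta + lam
        return sigma2 * np.trace(L @ V @ L.T) + float(bias @ bias)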
The following result shows that the BLUE for $M_1X_2\beta_2$ under the reduced model $\mathcal{M}_r$ can be regarded as a reasonable choice of estimator under the partitioned model $\mathcal{M}$, provided $\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$, since it cannot be uniformly outperformed by a different linear estimator.

Theorem 2. Let the partitioned model $\mathcal{M} = \{y, X_1\beta_1 + X_2\beta_2, \sigma^2 V\}$ be not contradictory to the reduced model $\mathcal{M}_r = \{y, M_1X_2\beta_2, \sigma^2 V\}$. Then every BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r$ is admissible for $M_1X_2\beta_2$ among $\mathcal{L}_n(y)$ under $\mathcal{M}$ if

$\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$.  (4.7)

Proof. The assertion is proved when the four conditions (4.3)-(4.6) are true for $F$ replacing $A$, where $F$ is any matrix satisfying Lemma 4. From Lemma 4(ii) we have $FVZ = 0$, where $Z = I - P_{M_1X_2}$. This may equivalently be expressed as $\mathcal{C}(VF') \subseteq \mathcal{N}(Z)$, where $\mathcal{N}(Z) = \mathcal{C}(M_1X_2) \subseteq \mathcal{C}(X_1 : X_2)$. Thus,

$\mathcal{C}(VF') \subseteq \mathcal{C}(X_1 : X_2)$.  (4.8)

Moreover, $\mathcal{C}(VF') \subseteq \mathcal{C}(M_1X_2)$ implies $FV = FVM_1$. In view of the identity $ZVZ(ZVZ)^+ZV = ZV$, it follows from Lemma 4(iii) that $FV = V - VZ(ZVZ)^+ZV$. Therefore,

$FVM_1 = V - VZ(ZVZ)^+ZV$ is symmetric.  (4.9)

Since we easily compute $FVF' = FV$, it is seen that

$FVM_1 - FVF' = FV - FV = 0$ is nonnegative definite.  (4.10)

From Lemma 4(ii) we have $(F - M_1)M_1X_2 = 0$. Hence, in view of $\mathcal{C}(X_1 : X_2) = \mathcal{C}(X_1) \oplus \mathcal{C}(M_1X_2)$, and in view of $\mathcal{C}(W) = \mathcal{C}(X_1) \oplus [\mathcal{C}(M_1X_2) \cap \mathcal{C}(V)]$ by Lemma 2(iii), we obtain

$\mathcal{C}[(F - M_1)(X_1 : X_2)] = \mathcal{C}[(F - M_1)W] = \mathcal{C}(FX_1)$.  (4.11)

The assertion now follows from (4.8)-(4.11) by Lemma 5.

5. Alternative estimation

We will now consider estimators for $M_1X_2\beta_2$ of the form $FM_1y$, where $F$ is any matrix such that $Fy$ is the BLUE for $M_1X_2\beta_2$ under the reduced model $\mathcal{M}_r$. An estimator $FM_1y$ can be seen as a generalized version of an estimator which has been considered by Aigner and Balestra [1]; see also [19] for related results. We pose the question whether an estimator of the form $FM_1y$ can reasonably be used under the partitioned model $\mathcal{M}$. It is clear that $FM_1y$ is unbiased for $M_1X_2\beta_2$ under model $\mathcal{M}$ in view of

$FM_1(X_1 : X_2) = (0 : M_1X_2)$.  (5.1)

Obviously, since $FM_1y$ is unbiased for $M_1X_2\beta_2$, the BLUE for $M_1X_2\beta_2$ is uniformly not worse than $FM_1y$ with respect to the quadratic risk of estimators. Therefore, $FM_1y$ can be admissible for $M_1X_2\beta_2$ among $\mathcal{L}_n(y)$ under $\mathcal{M}$ only if it coincides with the BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$.
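The unbiasedness claim (5.1) is immediate to verify numerically, reusing the representation $F = I - VZ(ZVZ)^+Z$ from Lemma 4(iii) (our own illustrative matrices, not from the paper):

    import numpy as np

    pinv = np.linalg.pinv

    X1 = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
    X2 = np.array([[0.], [1.], [4.], [9.]])
    V  = np.diag([1., 2., 3., 4.])
    n  = X1.shape[0]
    M1 = np.eye(n) - X1 @ pinv(X1)
    MX = M1 @ X2
    Z  = np.eye(n) - MX @ pinv(MX)
    F  = np.eye(n) - V @ Z @ pinv(Z @ V @ Z) @ Z
    # (5.1): F M1 (X1 : X2) = (0 : M1 X2), hence E(F M1 y) = M1 X2 beta2
    assert np.allclose(F @ M1 @ X1, 0)
    assert np.allclose(F @ M1 @ X2, MX)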
The following theorem gives a necessary and sufficient condition for the latter.

Theorem 3. Let the partitioned model $\mathcal{M} = \{y, X_1\beta_1 + X_2\beta_2, \sigma^2 V\}$ be not contradictory to the reduced model $\mathcal{M}_r$, and let $Fy$ be BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_r = \{y, M_1X_2\beta_2, \sigma^2 V\}$. Then every estimator $FM_1y$ is BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$ if and only if

$\mathcal{C}(P_1VM_1Z) \subseteq \mathcal{C}(VZ)$,  (5.2)

where $Z = I - P_{M_1X_2}$.

Proof. Since for any matrix $F$ from Lemma 4 identity (5.1) holds, it follows by Lemma 3(ii) that $FM_1y$ is BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$ if and only if

$FM_1VM_1Z = 0$.  (5.3)

Now, let (5.3) be satisfied for every matrix $F$ from Lemma 4(iii). Then (choosing $B = 0$),

$[I - VZ(ZVZ)^+Z]M_1VM_1Z = 0$.  (5.4)

In view of $M_1 = I - P_1$ and $M_1Z = ZM_1$, (5.4) can be written as

$[I - VZ(ZVZ)^+Z](VZM_1 - P_1VM_1Z) = 0$,  (5.5)

which in view of $VZ(ZVZ)^+ZVZ = VZ$ is equivalent to

$[I - VZ(ZVZ)^+Z]P_1VM_1Z = 0$.  (5.6)

Since it is easily seen that $\mathcal{N}[I - VZ(ZVZ)^+Z] = \mathcal{C}(VZ)$, identity (5.6) is equivalent to (5.2).

Conversely, let (5.2) be satisfied. As just shown above, (5.2) is equivalent to (5.6), which in turn is equivalent to (5.4). Condition (5.4) also entails

$[Z - ZVZ(ZVZ)^+Z]M_1VM_1Z = 0$.  (5.7)

Now, (5.4) and (5.7) show that for every matrix $F$ from Lemma 4(iii) condition (5.3) is satisfied, and hence $FM_1y$ is BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$.

Remark 2. Condition (5.2) from Theorem 3 may alternatively be expressed as

$\mathcal{C}(P_1VM) \subseteq \mathcal{C}(VZ)$,  (5.8)

where $P_1 = X_1X_1^+$, $M = I - P_{(X_1:X_2)}$ and $Z = I - P_{M_1X_2}$.

The condition $\mathcal{C}(P_1VM_1Z) \subseteq \mathcal{C}(VZ)$ from Theorem 3 is obviously weaker than the condition $\mathcal{C}(X_1) \subseteq \mathcal{C}(VZ)$ from Theorem 1, since the latter implies the former. That the converse is not true can be seen from the matrices

$X_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, $\quad X_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, $\quad V = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$.  (5.9)

Then $\mathcal{C}(P_1VM_1Z) \subseteq \mathcal{C}(VZ)$ but $\mathcal{C}(X_1) \not\subseteq \mathcal{C}(VZ)$.
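A numerical confirmation of these claims about (5.9), in a minimal numpy sketch (not part of the original paper):

    import numpy as np

    rk, pinv = np.linalg.matrix_rank, np.linalg.pinv

    X1 = np.array([[1.], [1.]])
    X2 = np.array([[0.], [1.]])
    V  = np.diag([2., 1.])                       # positive definite
    M1 = np.eye(2) - X1 @ pinv(X1)
    MX = M1 @ X2
    Z  = np.eye(2) - MX @ pinv(MX)
    P1 = X1 @ pinv(X1)
    VZ = V @ Z
    # (5.2) holds (here M1 Z = 0, so P1 V M1 Z = 0):
    print(rk(np.hstack([VZ, P1 @ V @ M1 @ Z])) == rk(VZ))   # True
    # ... but Theorem 1's condition C(X1) subset of C(VZ) fails:
    print(rk(np.hstack([VZ, X1])) == rk(VZ))                # False
    # X2' M1 V^{-1} X1 is nonzero, which matters for the claim of [7] below:
    print((X2.T @ M1 @ np.linalg.inv(V) @ X1).item())       # 0.25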
The same matrices can be used to demonstrate that [7, Theorem 1] is false. Under a partitioned model $\mathcal{M}$ with positive definite $V$ and $(X_1 : X_2)$ of full column rank, the authors claim that a necessary and sufficient condition for equality of the generalized least squares (GLS) estimator and the so-called pseudo-GLS estimator for (the unbiasedly estimable vector) $\beta_2$ is $X_2'M_1V^{-1}X_1 = 0$. However, under (5.9) it is easily computed that the estimators in question coincide without satisfying this condition. The correct condition, namely $X_2'M_1V^+P_1VM_1Z = 0$, appears in [12, Theorem 1]. It is related to our Theorem 3 in the following way.

Corollary 4. Let the partitioned model $\mathcal{M}$ be weakly singular, and let $Fy$ be BLUE for $M_1X_2\beta_2$ under the reduced model $\mathcal{M}_r$. Then every estimator $FM_1y$ is BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$ if and only if

$X_2'M_1V^-P_1VM_1Z = 0$  (5.10)

for any generalized inverse $V^-$ of $V$, where $Z = I - P_{M_1X_2}$.

Proof. From the proof of Corollary 1 we know that $X_2'M_1V^-X_1$ is invariant with respect to the choice of generalized inverse $V^-$ in view of

$\mathcal{C}(X_1) \subseteq \mathcal{C}(V)$ and $\mathcal{C}(M_1X_2) \subseteq \mathcal{C}(V)$.  (5.11)

Hence the left-hand side of (5.10) does not depend on the choice of $V^-$. From Theorem 3, the assertion is true if (5.10) is equivalent to (5.2). Clearly (5.2) implies (5.10). To go the other way, assume that (5.10) holds. Then

$V^-P_1VM_1Z = ZA$  (5.12)

for some matrix $A$. Premultiplying (5.12) by $V$ yields (5.2), since $VV^-P_1 = P_1$.

6. Frisch-Waugh estimation

In Section 2, we introduced the correctly reduced model

$\mathcal{M}_{cr} = \{M_1y, M_1X_2\beta_2, \sigma^2 M_1VM_1\}$: $\quad E(M_1y) = M_1X_2\beta_2$, $D(M_1y) = \sigma^2 M_1VM_1$,  (6.1)

which is in accordance with model $\mathcal{M}$. We state the following result as an immediate consequence of Lemma 3.

Theorem 4. Every BLUE for $M_1X_2\beta_2$ under the correctly reduced model $\mathcal{M}_{cr}$ remains BLUE for $M_1X_2\beta_2$ under the partitioned model $\mathcal{M}$.

Proof. An estimator is BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_{cr}$ if and only if it is of the form $NM_1y$, where $N$ satisfies $NM_1X_2 = M_1X_2$ and $NM_1VM_1Z = 0$ with $Z = I - P_{M_1X_2}$. But then from Lemma 3(iii), $NM_1y$ is BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$.

It is obvious that also the reverse relation holds in Theorem 4, i.e., every BLUE for $M_1X_2\beta_2$ under $\mathcal{M}$ remains BLUE for $M_1X_2\beta_2$ under $\mathcal{M}_{cr}$. In other words, the sets of BLUEs under $\mathcal{M}$ and $\mathcal{M}_{cr}$, respectively, coincide. Estimation under the correctly reduced model $\mathcal{M}_{cr}$ as carried out in Theorem 4 can be seen as a generalization of a well-known procedure due to [8] to the case of a possibly singular $V$ and a possibly nonestimable $\beta_2$.
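For $V = I$, Theorem 4 reduces to the classical Frisch-Waugh result [8]: the OLS coefficient of $X_2$ in the full model equals the OLS coefficient from regressing $M_1y$ on $M_1X_2$. A minimal simulation (our own sketch, assuming the classical full-column-rank case):

    import numpy as np

    rng = np.random.default_rng(0)
    n  = 20
    X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
    X2 = rng.normal(size=(n, 1))
    y  = X1 @ np.array([1., 2.]) + X2[:, 0] * 3. + rng.normal(size=n)
    M1 = np.eye(n) - X1 @ np.linalg.pinv(X1)
    # OLS coefficient of X2 in the full model y ~ (X1 : X2):
    b_full = np.linalg.lstsq(np.hstack([X1, X2]), y, rcond=None)[0][-1]
    # Frisch-Waugh: regress M1 y on M1 X2 (the correctly reduced model):
    b_red = np.linalg.lstsq(M1 @ X2, M1 @ y, rcond=None)[0][0]
    print(np.isclose(b_full, b_red))   # True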
Acknowledgements

The first author was supported by the Deutsche Forschungsgemeinschaft under grant Tr 253/2-3. This research was done when the second author was a Senior Scientist of the Academy of Finland.

References

[1] D.J. Aigner, P. Balestra, Optimal experimental design for error components models, Econometrica 56 (1988).
[2] J.K. Baksalary, A. Markiewicz, Admissible linear estimators in the general Gauss-Markov model, J. Statist. Plann. Inference 19 (1988).
[3] P. Bhimasankaram, R. Saha Ray, On a partitioned linear model and some associated reduced models, Linear Algebra Appl. 264 (1997).
[4] P. Bhimasankaram, D. Sengupta, The linear zero functions approach to linear models, Sankhyā Ser. B 58 (1996).
[5] P. Bhimasankaram, K.R. Shah, R. Saha Ray, On a singular partitioned linear model and some associated reduced models, J. Combin. Inform. Systems Sci. (to appear).
[6] A. Feuerverger, D.A.S. Fraser, Categorical information and the singular linear model, Canad. J. Statist. 8 (1980).
[7] D.G. Fiebig, R. Bartels, W. Krämer, The Frisch-Waugh theorem and generalized least squares, Econometric Rev. 15 (1996).
[8] R. Frisch, F.V. Waugh, Partial time regressions as compared with individual trends, Econometrica 1 (1933).
[9] G. Marsaglia, G.P.H. Styan, Equalities and inequalities for ranks of matrices, Linear and Multilinear Algebra 2 (1974).
[10] M. Nurhonen, S. Puntanen, A property of partitioned generalized regression, Commun. Statist. Theory Methods 21 (1992).
[11] S. Puntanen, Some matrix results related to a partitioned singular linear model, Commun. Statist. Theory Methods 25 (1996).
[12] S. Puntanen, Some further results related to reduced singular linear models, Commun. Statist. Theory Methods 26 (1997).
[13] S. Puntanen, G.P.H. Styan, The equality of the ordinary least squares estimator and the best linear unbiased estimator (with discussion), Amer. Statist. 43 (1989).
[14] C.R. Rao, Unified theory of linear estimation, Sankhyā Ser. A 33 (1971).
[15] C.R. Rao, Corrigenda, Sankhyā Ser. A 34 (1972) 194, 477.
[16] C.R. Rao, Representations of best linear unbiased estimators in the Gauss-Markoff model with a singular dispersion matrix, J. Multivariate Anal. 3 (1973).
[17] C.R. Rao, Choice of best linear estimators in the Gauss-Markoff model with a singular dispersion matrix, Commun. Statist. Theory Methods 7 (1978).
[18] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and its Applications, Wiley, New York, 1971.
[19] H.J. Werner, C. Yapar, More on partitioned possibly restricted linear regression, in: E.-M. Tiit, T. Kollo, H. Niemi (Eds.), Multivariate Statistics and Matrices in Statistics, VSP, Utrecht / TEV, Vilnius, 1995.