Estimation for parameters of interest in random effects growth curve models


Journal of Multivariate Analysis 98 (2007)

Wai-Cheung Ip (a,*), Mi-Xia Wu (b), Song-Gui Wang (b), Heung Wong (a)

(a) Department of Applied Mathematics, Hong Kong Polytechnic University, Hong Kong
(b) College of Applied Sciences, Beijing University of Technology, Beijing 100022, China

Received March 2005
Available online 8 September 2006

Abstract

In this paper, we consider the general growth curve model with multivariate random effects covariance structure and provide a new, simple estimator for the parameters of interest. This estimator is not only convenient for testing hypotheses on the corresponding parameters, but also has higher efficiency than the least-squares estimator and the improved two-stage estimator obtained by Rao under certain conditions. Moreover, we obtain the necessary and sufficient condition for the new estimator to be identical to the best linear unbiased estimator. Examples of its application are given.
© 2006 Elsevier Inc. All rights reserved.

AMS 1991 subject classification: 62H12; 62H15; 62F10

Keywords: Growth curve models; Random effects; Exact test; Best linear unbiased estimator; Maximum likelihood estimator

1. Introduction

The linear mixed model is a popular choice for the treatment of longitudinal data with random effects; see Laird and Ware [6], Diggle et al. [3], Srivastava and von Rosen [14] and Verbeke and Molenberghs [15]. In this paper, we consider the multivariate model in which m distinct characteristics on each of N individuals taken from r different groups are measured on each of p different occasions. The jth characteristic on the ith individual can be assumed to follow the mixed model

y_ij = X_j β_j^(k) + Z_j u_ij + ε_ij,  i = 1, ..., N,  j = 1, ..., m,  i in the kth group, (1)

* Corresponding author. E-mail address: mathipwc@polyu.edu.hk (W.C. Ip).

W.C. Ip et al. / Journal of Multivariate Analysis 98 (2007)

where y_ij = (y_ij1, ..., y_ijp)', y_ijl is the measurement of the jth characteristic at occasion l on the ith individual, X_j and Z_j are, respectively, known p × q and p × c design matrices of full column rank, β_j^(k) is the q × 1 vector of regression parameters of the jth characteristic in treatment group k (k = 1, ..., r), u_ij is a c × 1 random effect vector, and ε_ij is a p × 1 random error vector. If each of the m characteristics follows a response curve of the same general type over the p occasions, that is, X_j = X and Z_j = Z, then for the ith individual,

y_i = (X ⊗ I_m)β^(k) + (Z ⊗ I_m)u_i + ε_i,

where

y_i = Vec((y_i1, ..., y_im)'),  β^(k) = Vec((β_1^(k), ..., β_m^(k))'),  ε_i = Vec((ε_i1, ..., ε_im)'),  u_i = Vec((u_i1, ..., u_im)'),

⊗ denotes the Kronecker product, and the Vec(·) operator stacks the columns of a matrix below one another to form a column vector. The individual random effects u_i are assumed to be distributed independently as N(0, Σ_u), and independently of the errors ε_i, which have distribution N(0, I_p ⊗ Σ_e), where Σ_e is an m × m positive definite matrix and Σ_u is a cm × cm nonnegative definite matrix, that is, Σ_e > 0 and Σ_u ≥ 0. Thus the covariance matrix of y_i is

Cov(y_i) = I_p ⊗ Σ_e + (Z ⊗ I_m)Σ_u(Z ⊗ I_m)' = Ω (say). (2)

Let Y = (y_1, ..., y_N), U = (u_1, ..., u_N), and E = (ε_1, ..., ε_N); then the full multivariate model (1) can be expressed as

Y = (X ⊗ I_m)ξA + (Z ⊗ I_m)U + E,  Cov(Vec(Y)) = I_N ⊗ Ω, (3)

which is a general growth curve model with multivariate random effects covariance structure, where ξ = (β^(1), ..., β^(r)) is the qm × r matrix of growth curve coefficients, and A = (a_1, ..., a_N) is an r × N matrix of full row rank. In particular, if its elements are either 1 or 0, indicating the group from which an observation comes, A is called the group indicator matrix in the literature. When m = 1, model (3) becomes Y = XξA + ZU + E with Ω = σ_e² I_p + ZΣ_u Z', reducing to the single-variable case.
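The Kronecker-structured covariance (2) can be written down directly in a few lines of NumPy. The sketch below is purely illustrative: the dimensions, the design Z, and the covariance matrices Σ_e and Σ_u are hypothetical choices, not values from the paper.

```python
import numpy as np

# Hypothetical dimensions: p = 4 occasions, m = 2 characteristics, c = 2 random effects.
rng = np.random.default_rng(0)
p, m, c = 4, 2, 2

Z = np.column_stack([np.ones(p), np.arange(1.0, p + 1)])  # p x c random-effects design
Sigma_e = np.array([[1.0, 0.3],
                    [0.3, 2.0]])                          # m x m, positive definite
B = rng.standard_normal((c * m, c * m))
Sigma_u = B @ B.T                                         # cm x cm, nonnegative definite

# Omega = I_p (x) Sigma_e + (Z (x) I_m) Sigma_u (Z (x) I_m)'  -- equation (2)
ZI = np.kron(Z, np.eye(m))
Omega = np.kron(np.eye(p), Sigma_e) + ZI @ Sigma_u @ ZI.T

assert Omega.shape == (p * m, p * m)
assert np.allclose(Omega, Omega.T)
assert np.all(np.linalg.eigvalsh(Omega) > 0)  # Sigma_e > 0 makes Omega > 0
```

Since Σ_e > 0 and the second term is nonnegative definite, Ω is positive definite regardless of the rank of Σ_u, which the final assertion checks numerically.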
Reinsel [11,12], Azzalini [1], Lange and Laird [7], and Nummi [8,9] considered two special cases, Z = X and Z = X_c, where X is partitioned as X = (X_c : X_c̄), and showed that the maximum likelihood estimator (MLE) of ξ is identical to its least-squares estimator (LSE)

ξ̂ = ((X'X)^{-1}X' ⊗ I_m) Y A'(AA')^{-1}. (4)

However, this is not the case for a general design matrix Z, for which an explicit MLE of ξ usually does not exist. Moreover, both the two-stage estimator provided by Khatri [5] and the LSE ξ̂ ignore the structural information in the covariance matrix Ω. One alternative is to adopt the improved two-stage estimator suggested by Rao [10], which uses the structured covariance matrix of the mixed effects model.

In many practical situations, only some of the parameters of model (3) are meaningful to the researcher. For example, in the rat data of Verbeke and Molenberghs [15], of primary interest is the estimation of changes over time and testing whether these changes are treatment dependent. For the mixed linear model with one random effect, Wu and Wang [17] gave a simple estimator

of the part parameter via a reduced model, which is often used to remove nuisance parameters. In this paper, we mainly consider this problem under the general growth curve model (3) with random effects. In Section 2, we introduce a new simple estimator for the part parameter of ξ based on a reduced model, which can be superior to the LSE under certain conditions. In Section 3, we consider the optimality of the new estimator and obtain the necessary and sufficient condition for the new estimator to be identical to the best linear unbiased estimator (BLUE). Furthermore, we show that the new estimator is superior to the corresponding two-stage estimator under certain conditions. Examples are presented in Section 4.

2. New estimator

Without loss of generality, we partition X = (X_1 : X_2) in model (3) such that

M(X_1) ⊆ M(Z),  M(X_2) ∩ M(Z) = {0}, (5)

where X_1 and X_2 are p × l and p × (q − l) matrices, respectively (0 ≤ l ≤ q), and M(C) denotes the range space of a matrix C. Partition ξ = (ξ_1' : ξ_2')' conformably. Then model (3) can be rewritten as

Y = (X_1 ⊗ I_m)ξ_1 A + (X_2 ⊗ I_m)ξ_2 A + (Z ⊗ I_m)U + E, (6)

where ξ_2 is the parameter matrix of primary interest. In what follows, we mainly consider the estimation of ξ_2.

Denote by Q the p × (p − c) matrix such that Q'Q = I_{p−c} and Q'Z = 0. Thus QQ' = I_p − P_Z, where P_Z = Z(Z'Z)^{-1}Z' is the orthogonal projector onto the space M(Z). Let M_Z = I_p − P_Z = QQ'. Premultiplying model (6) by Q' ⊗ I_m, we obtain the reduced model

(Q' ⊗ I_m)Y = (Q'X_2 ⊗ I_m)ξ_2 A + ε,  Cov(Vec(ε)) = I_{N(p−c)} ⊗ Σ_e, (7)

which involves only the matrix parameters ξ_2 and Σ_e. It is readily seen that, under model (7), the MLE of ξ_2 equals its LSE

ξ̃_2 = ((X_2'M_Z X_2)^{-1}X_2'M_Z ⊗ I_m) Y A'(AA')^{-1}. (8)

Let ξ_2 be partitioned as ξ_2 = (ξ_21', ..., ξ_2(q−l)')', where the ξ_2i (i = 1, ..., q − l) are m × r matrices.
Assume that the elements of X are functions of the time variable t, say the (l, i)th element of X is f_i(t_l); then the ξ_2i represent the regression coefficients attached to f_{l+1}(t), ..., f_q(t) for the m characteristics and r groups. Denote Ψ = (ξ_21, ..., ξ_2(q−l)). Applying the definition of the restricted maximum likelihood (REML) estimator, we obtain the REML estimator of Σ_e under the reduced model (7), namely

Σ̃_e = Σ_{i=1}^N (Y_i'Q − Ψ̃(X_2'Q ⊗ a_i))(Y_i'Q − Ψ̃(X_2'Q ⊗ a_i))' / k, (9)

where k = N(p − c) − (q − l)r, Y_i = (y_i1, ..., y_im), and Ψ̃ = (ξ̃_21, ..., ξ̃_2(q−l)). Under the original model (6), the estimators ξ̃_2 and Σ̃_e remain unbiased for ξ_2 and Σ_e, respectively.
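Estimator (8) needs only M_Z and ordinary least squares. The sketch below uses hypothetical designs and stand-in data, and also checks numerically that fitting the explicitly reduced model (7) through a matrix Q with Q'Q = I and Q'Z = 0 gives the same estimate as the M_Z = QQ' form of (8).

```python
import numpy as np

# Hypothetical sizes: p occasions, m characteristics, r groups, N individuals.
rng = np.random.default_rng(1)
p, m, r, N = 5, 2, 2, 8
t = np.arange(1.0, p + 1)
Z = np.ones((p, 1))                 # X_1 = Z: random intercept, c = l = 1
X2 = t[:, None]                     # q - l = 1 regressor of interest
A = np.zeros((r, N)); A[0, :4] = 1.0; A[1, 4:] = 1.0   # group indicator matrix

P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)
M_Z = np.eye(p) - P_Z
# Q: p x (p - c) with Q'Q = I and Q'Z = 0, from the unit eigenvectors of M_Z
w, V = np.linalg.eigh(M_Z)
Q = V[:, w > 0.5]

Y = rng.standard_normal((p * m, N))  # stand-in data, pm x N

# Estimator (8) written with M_Z
G = np.linalg.solve(X2.T @ M_Z @ X2, X2.T @ M_Z)        # (q-l) x p
xi2_tilde = np.kron(G, np.eye(m)) @ Y @ A.T @ np.linalg.inv(A @ A.T)

# Same estimate from the explicitly reduced data (Q' (x) I_m) Y, model (7)
QX2 = Q.T @ X2
G_red = np.linalg.solve(QX2.T @ QX2, QX2.T) @ Q.T       # equals G since QQ' = M_Z
xi2_red = np.kron(G_red, np.eye(m)) @ Y @ A.T @ np.linalg.inv(A @ A.T)

assert xi2_tilde.shape == ((2 - 1) * m, r)   # (q - l)m x r
assert np.allclose(xi2_tilde, xi2_red)
```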

Theorem 2.1. (a) ξ̃_2 ~ N(ξ_2, (AA')^{-1} ⊗ (X_2'M_Z X_2)^{-1} ⊗ Σ_e); (b) kΣ̃_e ~ W_m(k, Σ_e) and is independent of ξ̃_2, where W_m(k, Σ_e) is the m-dimensional Wishart distribution with k degrees of freedom and parameter matrix Σ_e.

Proof. (a) is obvious, so we only need to prove (b). Noting that Vec(AXC) = (C' ⊗ A)Vec(X), model (7) can be rewritten as

Y_i'Q = Ψ(X_2'Q ⊗ a_i) + E_i,  Cov(Vec(E_i)) = I_{p−c} ⊗ Σ_e,  i = 1, ..., N, (10)

where Y_i is the same as in (9), Ψ = (ξ_21, ..., ξ_2(q−l)), and E_i = (ε_i1, ..., ε_im)'Q. Denote Y_0 = (Y_1'Q, ..., Y_N'Q), X_0 = (X_2'Q ⊗ a_1, ..., X_2'Q ⊗ a_N), E_0 = (E_1, ..., E_N); then model (7) can also be rewritten as

Y_0 = ΨX_0 + E_0,  Cov(Vec(E_0)) = I_{N(p−c)} ⊗ Σ_e. (11)

Thus we get the equivalent forms of Ψ̃ = (ξ̃_21, ..., ξ̃_2(q−l)) and Σ̃_e, respectively,

Ψ̃ = Y_0 X_0'(X_0 X_0')^{-1} = Σ_{i=1}^N Y_i'Q(Q'X_2(X_2'M_Z X_2)^{-1} ⊗ a_i'(AA')^{-1}),

Σ̃_e = Y_0(I_{N(p−c)} − X_0'(X_0 X_0')^{-1}X_0)Y_0' / k = (Σ_{i=1}^N Y_i'M_Z Y_i − Ψ̃(X_2'M_Z X_2 ⊗ AA')Ψ̃') / k.

It follows readily from (11) that Σ̃_e is independent of Ψ̃ and that kΣ̃_e ~ W_m(k, Σ_e). The proof of Theorem 2.1 is complete.

Remark 2.1. Cov(Vec(ξ̃_2)) does not depend on the matrix Σ_u.

According to Theorem 2.1, the new estimator ξ̃_2 can be used to construct an exact test on Ψ, i.e. on ξ_2, for the general linear hypothesis H_0: LΨG = 0. In fact, Wilks's Λ is given by

Λ = |W| / |W + H|,

where W = k(LΣ̃_e L') and H = (LΨ̃G)[G'(X_2'M_Z X_2 ⊗ AA')^{-1}G]^{-1}(LΨ̃G)'.

On the other hand, from (4), it is easy to obtain the LSE of ξ_2 under model (6),

ξ̂_2 = ((X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1} ⊗ I_m) Y A'(AA')^{-1}, (12)

and

Cov(Vec(ξ̂_2)) = (AA')^{-1} ⊗ [(X_2'M_{X_1}X_2)^{-1} ⊗ Σ_e + ((X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}Z ⊗ I_m)Σ_u(Z'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1} ⊗ I_m)]. (13)
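The exact test can be sketched numerically. Everything below is hypothetical (sizes, designs, the contrast G, and the stand-in data): the code builds Ψ̃, the REML matrix Σ̃_e via the identity used in the proof of Theorem 2.1, and Wilks's Λ, and checks that Λ lies in (0, 1].

```python
import numpy as np

# Hypothetical setup: m = 2 characteristics, one regressor of interest
# (q - l = 1), r = 2 groups, random intercept (c = 1, X_1 = Z).
rng = np.random.default_rng(4)
p, m, c, r, N = 6, 2, 1, 2, 10
Z = np.ones((p, 1))
X2 = np.arange(1.0, p + 1)[:, None]
A = np.zeros((r, N)); A[0, :5] = 1.0; A[1, 5:] = 1.0
M_Z = np.eye(p) - Z @ np.linalg.solve(Z.T @ Z, Z.T)

Y = rng.standard_normal((p * m, N))                  # pm x N stand-in data

Gmap = np.linalg.solve(X2.T @ M_Z @ X2, X2.T @ M_Z)
Psi = np.kron(Gmap, np.eye(m)) @ Y @ A.T @ np.linalg.inv(A @ A.T)  # m x r

# REML estimator (9) via the equivalent M_Z form from the proof
k = N * (p - c) - 1 * r
xmx = (X2.T @ M_Z @ X2).item()
S = sum(Y[:, i].reshape(p, m).T @ M_Z @ Y[:, i].reshape(p, m) for i in range(N))
Sigma_e = (S - Psi @ (xmx * (A @ A.T)) @ Psi.T) / k

# Wilks Lambda for H0: Psi G = 0 (equal group effects), with L = I_m
G = np.array([[1.0], [-1.0]])
W = k * Sigma_e
H = (Psi @ G) @ np.linalg.inv(G.T @ np.linalg.inv(xmx * (A @ A.T)) @ G) @ (Psi @ G).T
Lam = np.linalg.det(W) / np.linalg.det(W + H)
assert 0.0 < Lam <= 1.0
```

Since W is positive definite (almost surely) and H is nonnegative definite, |W + H| ≥ |W|, which is what the final assertion verifies.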

Combining with Remark 2.1, it is reasonable to expect that ξ̃_2 may be superior to ξ̂_2 under some conditions. Suppose that X_1, X_2 and Z in (5) satisfy the conditions

X_1'X_2 = 0,  Z = (X_1, Z_0),  M(Z_0) ∩ M(X_i) = {0}, i = 1, 2; (14)

then

M_Z X_1 = 0,  M_{X_1}X_2 = X_2  and  M_Z = M_{X_1} − M_{X_1}Z_0(Z_0'M_{X_1}Z_0)^{-1}Z_0'M_{X_1}. (15)

For the proof of the last equality see [13]. By the use of (14) and the matrix identity

(A + BCB')^{-1} = A^{-1} − A^{-1}B(B'A^{-1}B + C^{-1})^{-1}B'A^{-1},

we obtain

(X_2'M_Z X_2)^{-1} = (X_2'X_2)^{-1} + (X_2'X_2)^{-1}X_2'Z_0(Z_0'M_X Z_0)^{-1}Z_0'X_2(X_2'X_2)^{-1},

where M_X = I_p − P_X with X = (X_1 : X_2). Thus Cov(Vec(ξ̃_2)) can be rewritten as

Cov(Vec(ξ̃_2)) = (AA')^{-1} ⊗ [(X_2'X_2)^{-1} ⊗ Σ_e + ((X_2'X_2)^{-1}X_2'Z_0 ⊗ I_m)((Z_0'M_X Z_0)^{-1} ⊗ Σ_e)(Z_0'X_2(X_2'X_2)^{-1} ⊗ I_m)]. (16)

Furthermore, under assumption (14), (13) simplifies to

Cov(Vec(ξ̂_2)) = (AA')^{-1} ⊗ [(X_2'X_2)^{-1} ⊗ Σ_e + ((X_2'X_2)^{-1}X_2'Z_0 ⊗ I_m)(Σ_u)_{22}(Z_0'X_2(X_2'X_2)^{-1} ⊗ I_m)], (17)

where (Σ_u)_{22} = (0, I_{m(c−l)})Σ_u(0, I_{m(c−l)})' is an m(c − l) × m(c − l) nonnegative definite matrix. Comparing (16) with (17), we obtain, in the following theorem, the condition under which the new estimator ξ̃_2 is superior to the LSE ξ̂_2.

Theorem 2.2. If conditions (14) hold, then Cov(Vec(ξ̃_2)) < Cov(Vec(ξ̂_2)) if and only if

X_2'Z_0 ≠ 0  and  (Z_0'M_X Z_0)^{-1} ⊗ Σ_e < (Σ_u)_{22}. (18)

For example, let X_1 = (1, 1, 1)', X_2 = (1, 0, −1)', Z_0 = (1, 4, 9)', and (Σ_u)_{22} = aΣ_e; then ξ̃_2 is superior to ξ̂_2 only if a > 3/2.

Remark 2.2. Condition (18) involves no sub-matrices of Σ_u other than (Σ_u)_{22}.

In the following section, we will show that the new estimator ξ̃_2 can also be superior to the improved two-stage estimator of ξ_2 under some conditions.

3. The optimality of the new estimator

In this section, we consider the optimality of the new estimator ξ̃_2 of ξ_2, the parameter matrix of primary interest, and give the necessary and sufficient condition under which ξ̃_2 equals the BLUE of ξ_2 for the original model (6).
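The threshold in the example following Theorem 2.2 can be checked numerically. The design vectors below are the ones used in that example (as reconstructed here) and are otherwise arbitrary; the code computes (Z_0'M_X Z_0)^{-1}, the constant that a must exceed when (Σ_u)_{22} = aΣ_e.

```python
import numpy as np

# Example design: X_1 = (1,1,1)', X_2 = (1,0,-1)', Z_0 = (1,4,9)'.
X1 = np.array([[1.0], [1.0], [1.0]])
X2 = np.array([[1.0], [0.0], [-1.0]])
Z0 = np.array([[1.0], [4.0], [9.0]])

X = np.hstack([X1, X2])
M_X = np.eye(3) - X @ np.linalg.solve(X.T @ X, X.T)

# With (Sigma_u)_22 = a * Sigma_e, the second part of (18) reads
# (Z_0' M_X Z_0)^{-1} Sigma_e < a Sigma_e, i.e. a must exceed this scalar:
threshold = 1.0 / (Z0.T @ M_X @ Z0).item()
assert abs(threshold - 1.5) < 1e-9   # Z_0' M_X Z_0 = 2/3, threshold = 3/2
```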

Theorem 3.1. ξ̃_2 is the BLUE of ξ_2 under model (6) if and only if

Z'M_{X_1}X_2 = 0,  where M_{X_1} = I_p − X_1(X_1'X_1)^{-1}X_1'. (19)

Proof. By Lemma 1 of [16], ξ̃_2 is the BLUE of ξ_2 under model (6) if and only if

Cov(Vec(ξ̃_2), (I_n − P_{A'} ⊗ P_X ⊗ I_m)Vec(Y)) = 0, (20)

where n = Npm, that is,

((AA')^{-1}A) ⊗ ((X_2'M_Z X_2)^{-1}X_2'M_Z ⊗ I_m)Ω(M_X ⊗ I_m) = 0,

which is equivalent to

(X_2'M_Z ⊗ I_m)Ω(M_X ⊗ I_m) = 0. (21)

Since M_Z Z = 0, (21) reduces to X_2'M_Z M_X = 0. From (5), we have X_1'M_Z = 0, and X'M_X = 0 gives

X'P_Z M_X = 0  ⟺  P_X P_Z M_X = 0  ⟺  P_X P_Z = P_X P_Z P_X.

Combining Theorem 1 of [2] and the fact that

P_X = P_{X_1} + M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}, (22)

Eq. (21) can be simplified to Z'M_{X_1}X_2 = 0. The proof of Theorem 3.1 is complete.

The BLUE of ξ_2 under model (6) is

ξ_2* = C_{22·1}^{-1}C_2'M_{C_1}Ω^{-1/2}YA'(AA')^{-1}, (23)

where C_i = Ω^{-1/2}(X_i ⊗ I_m), i = 1, 2, C_{22·1} = C_2'M_{C_1}C_2, and M_{C_1} = I − C_1(C_1'C_1)^{-1}C_1'. Obviously, ξ_2* is usually an infeasible estimator, since Ω in (23) involves the two unknown covariance matrices Σ_u and Σ_e. However, if condition (19) holds, then by Theorem 3.1 we have ξ̃_2 = ξ_2*, which means that an explicit ML estimator of the part parameter ξ_2 exists.

Theorem 3.2. Under model (6), the following statements are equivalent: (a) Z'M_{X_1}X_2 = 0; (b) ξ̃_2 = ξ̂_2; (c) ξ̂_2 = ξ_2*.

Proof. (i) Proof of (a) ⟺ (c). Similar to the proof of Theorem 3.1, we have

ξ̂_2 = ξ_2*  ⟺  Cov(Vec(ξ̂_2), (I_n − P_{A'} ⊗ P_X ⊗ I_m)Vec(Y)) = 0,

which can be simplified to

(X_2'M_{X_1}Z ⊗ I_m)Σ_u(Z'M_X ⊗ I_m) = 0. (24)

Since Σ_u ≥ 0 is arbitrary, (24) is equivalent to Z'M_{X_1}X_2 = 0 or Z'M_X = 0. From (5), we have Z'M_X = 0 ⟹ M(Z) = M(X_1) ⟹ Z'M_{X_1}X_2 = 0; thus (24) is equivalent to Z'M_{X_1}X_2 = 0, that is, (a) ⟺ (c).

(ii) Proof of (a) ⟺ (b). Obviously, (a) ⟹ (b). Now consider (b) ⟹ (a). If (b) holds, then

(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1} = (X_2'M_Z X_2)^{-1}X_2'M_Z. (25)

Postmultiplying (25) by Z and using M_Z Z = 0, we have (X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}Z = 0, which is equivalent to Z'M_{X_1}X_2 = 0. The proof of Theorem 3.2 is complete.

Theorem 3.2 shows that the new estimator ξ̃_2 and the LSE ξ̂_2 achieve optimality simultaneously, and the common necessary and sufficient condition is Z'M_{X_1}X_2 = 0.

Remark 3.1. The condition Z'M_{X_1}X_2 = 0 is weaker than the necessary and sufficient condition for ξ̂ = ξ* under model (6). In fact, by Lemma 1 of [16], one can verify that ξ̂ = ξ* if and only if Z'X = 0 or M(Z) = M(X_1), which are two special cases of Z'M_{X_1}X_2 = 0.

Remark 3.1 sharpens the intuition that, while an explicit MLE of the whole parameter matrix may not exist, one may still exist for the part parameter matrix.

For a covariance matrix with random effects such as (2), Rao [10] presented an improved two-stage estimator of ξ,

ξ̂_T = ξ̂ − ((X'X)^{-1}X' ⊗ I_m)SQ(Q'SQ)^{-1}Q'YA'(AA')^{-1},

where Q = M_X Z ⊗ I_m, T = Q'Y, and S = Y(I_N − P_{A'})Y'. ξ̂_T is derived by covariance adjustment of the LSE ξ̂ using the concomitant variable T. Rao [10] and Grizzle and Allen [4] proved that

Cov(Vec(ξ̂_T)) = (N − r − 1)/(N − r − mk_0 − 1) · (AA')^{-1} ⊗ {(X ⊗ I_m)'Ω^{-1}(X ⊗ I_m)}^{-1}
             = (N − r − 1)/(N − r − mk_0 − 1) · Cov(Vec(ξ*)) ≥ Cov(Vec(ξ*)), (26)

where k_0 = rk(M_X Z). Clearly, equality holds in (26) if and only if k_0 = 0, which requires M(Z) ⊆ M(X). In this case the improved two-stage estimator ξ̂_T equals the LSE ξ̂, and ξ̃_2 = ξ̂_2 = ξ̂_{2T}, where ξ̂_{2T} is the corresponding improved two-stage estimator of ξ_2, ξ̂_T = (ξ̂_{1T}' : ξ̂_{2T}')'. In the following corollary, we give a set of conditions for the new estimator ξ̃_2 to be superior to the improved two-stage estimator ξ̂_{2T} in the case k_0 ≠ 0.

Corollary 3.1. If M(X) ∩ M(Z) = M(X_1), M(Z) ≠ M(X_1), and Z'M_{X_1}X_2 = 0, then

Cov(Vec(ξ̂_{2T})) > Cov(Vec(ξ_2*)) = Cov(Vec(ξ̃_2)) = Cov(Vec(ξ̂_2)).
(27)

The conditions M(X) ∩ M(Z) = M(X_1) and M(Z) ≠ M(X_1) ensure k_0 ≠ 0. Upon use of Theorem 3.1 and (26), we obtain (27).

In order to understand the set of conditions in Corollary 3.1, assume without loss of generality that Z = (X_1 : Z_0). By (5), we have

M(Z_0) ∩ M(X) = {0}  and  Z'M_{X_1}X_2 = 0 ⟺ Z_0'M_{X_1}X_2 = 0.

According to Theorem 1 of [2], Z_0'M_{X_1}X_2 = 0 is equivalent to P_{Z_0}P_X = P_X P_{Z_0}; thus

Z'M_{X_1}X_2 = 0 ⟺ Z_0'X = 0, (28)

and hence the conditions in Corollary 3.1 are equivalent to Z_0'X = 0 and Z_0 ≠ 0. For example, take X_1 = (1, 1, 1, 1)', X_2 = (1, 2, 3, 4)', and Z = (X_1, Z_0) with Z_0 = (−1, 1, 1, −1)'; clearly, Z_0 ≠ 0 and Z_0'X = 0.

4. Examples

In this section, we give two simple examples to illustrate the foregoing results.

Example 1. Consider the model for the rat data (see Verbeke and Molenberghs [15]):

y_ij = β_01 + u_0j + β_1 t_i + u_1j z_i + ε_ij  if low dose,
y_ij = β_02 + u_0j + β_2 t_i + u_1j z_i + ε_ij  if high dose,  (29)
y_ij = β_03 + u_0j + β_3 t_i + u_1j z_i + ε_ij  if control,

where y_ij is the observation on the jth individual at time point t_i, i = 1, ..., p, u_j = (u_0j, u_1j)' is a random effect vector with normal distribution N(0, Σ_u), and the random error ε_ij follows the distribution N(0, σ_e²). We assume that u_1, ..., u_N, ε_11, ..., ε_1N, ..., ε_p1, ..., ε_pN are mutually independent. Of primary interest is the estimation of the slopes β_1, β_2 and β_3, and testing whether these slopes are equal to each other.

Without loss of generality, we assume that Y_1 = (y_1, ..., y_{n_1}), Y_2 = (y_{n_1+1}, ..., y_{n_2}) and Y_3 = (y_{n_2+1}, ..., y_N) are the matrices of observations for the low-dose, high-dose and control groups, respectively, where y_j = (y_{1j}, ..., y_{pj})'. Denote

Y = (Y_1 : Y_2 : Y_3), U = (u_1, ..., u_N), T = (t_1, ..., t_p)', Z_0 = (z_1, ..., z_p)', X = (1_p : T), Z = (1_p : Z_0),

and let ξ = (ξ_1' : ξ_2')' with ξ_1 = (β_01, β_02, β_03) and ξ_2 = (β_1, β_2, β_3), and let A be the 3 × N group indicator matrix with rows (1_{n_1}', 0', 0'), (0', 1_{n_2−n_1}', 0') and (0', 0', 1_{N−n_2}'). Then model (29) can be rewritten as

Y = XξA + ZU + E = 1_p ξ_1 A + T ξ_2 A + ZU + E. (30)

The covariance matrix of Y is Cov(Vec(Y)) = I_N ⊗ (ZΣ_u Z' + σ_e² I_p). Denote by Q the p × (p − 2) matrix such that QQ' = M_Z = I_p − P_Z; then the reduced model of (30) may be represented as

Q'Y = Q'T ξ_2 A + Q'E,  Cov(Vec(Q'Y)) = σ_e² I_{N(p−2)}. (31)
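The design at the end of Section 3 gives a quick numerical check of Theorem 3.2: with X_1 = 1_4, X_2 = (1, 2, 3, 4)' and Z_0 = (−1, 1, 1, −1)' we have Z'M_{X_1}X_2 = 0, so the reduced-model estimator (8) and the LSE (12) coincide for every data matrix. The data and group sizes below are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
p, m, r, N = 4, 1, 2, 6

X1 = np.ones((p, 1))
X2 = np.array([[1.0], [2.0], [3.0], [4.0]])
Z0 = np.array([[-1.0], [1.0], [1.0], [-1.0]])
Z = np.hstack([X1, Z0])

def M(C):
    """Projector onto the orthogonal complement of the column space of C."""
    return np.eye(C.shape[0]) - C @ np.linalg.solve(C.T @ C, C.T)

assert np.allclose(Z.T @ M(X1) @ X2, 0)   # condition (19) holds for this design

A = np.zeros((r, N)); A[0, :3] = 1.0; A[1, 3:] = 1.0
Y = rng.standard_normal((p * m, N))        # arbitrary stand-in data

def part_est(Mmat):
    """The common form of estimators (8) and (12) for a given projector."""
    G = np.linalg.solve(X2.T @ Mmat @ X2, X2.T @ Mmat)
    return np.kron(G, np.eye(m)) @ Y @ A.T @ np.linalg.inv(A @ A.T)

xi2_tilde = part_est(M(Z))    # estimator (8), projector M_Z
xi2_hat = part_est(M(X1))     # LSE (12), projector M_{X_1}
assert np.allclose(xi2_tilde, xi2_hat)
```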
By (8) and (9), we obtain the new estimators of ξ_2 = (β_1, β_2, β_3) and σ_e²:

ξ̃_2 = (T'M_Z T)^{-1}T'M_Z (Ȳ_1 : Ȳ_2 : Ȳ_3),

or

β̃_r = (T'M_Z T)^{-1}T'M_Z Ȳ_r,  r = 1, 2, 3, (32)

and

σ̃_e² = Σ_{r=1}^3 SS_r / k, (33)

where k = N(p − 2) − 3, Ȳ_r = (Ȳ_{1r}, ..., Ȳ_{pr})',

Ȳ_{i1} = Σ_{j=1}^{n_1} y_{ij}/n_1,  Ȳ_{i2} = Σ_{j=n_1+1}^{n_2} y_{ij}/(n_2 − n_1),  Ȳ_{i3} = Σ_{j=n_2+1}^{N} y_{ij}/(N − n_2),

and

SS_1 = Σ_{j=1}^{n_1} (y_j − β̃_1 T)'M_Z(y_j − β̃_1 T),
SS_2 = Σ_{j=n_1+1}^{n_2} (y_j − β̃_2 T)'M_Z(y_j − β̃_2 T),
SS_3 = Σ_{j=n_2+1}^{N} (y_j − β̃_3 T)'M_Z(y_j − β̃_3 T).

By Theorem 2.1, ξ̃_2 and σ̃_e² are independent, and based on them we can construct a test statistic for testing H_0: β_1 = β_2 = β_3. Let

H = (h_1 : h_2)  with  h_1 = (1, −1, 0)', h_2 = (0, 1, −1)';

then H_0 is equivalent to testing ξ_2 H = 0. The exact test statistic is

F = (T'M_Z T) ξ̃_2 H(H'(AA')^{-1}H)^{-1}H'ξ̃_2' / (2σ̃_e²),

which is an F-statistic, and hence F ~ F_{2,k} if ξ_2 H = 0 holds.

By Theorem 2.2, in the case T'Z_0 ≠ 0, if the error variance σ_e² satisfies

σ_e² < (Z_0'M_X Z_0)(Σ_u)_{22} = (Z_0'M_X Z_0) Var(u_1j), (34)

then ξ̃_2 is superior to the LSE of ξ_2,

ξ̂_2 = ((T − t̄1_p)'(T − t̄1_p))^{-1}(T − t̄1_p)'(Ȳ_1 : Ȳ_2 : Ȳ_3). (35)

By Theorem 3.2, if T'Z_0 = 0 and 1_p'Z_0 = 0, then ξ̃_2 = ξ̂_2, and ξ̃_2 is the BLUE of ξ_2 under model (29). Furthermore, if in addition Z_0 ≠ 0, then by Corollary 3.1, ξ̃_2 is superior to the improved two-stage estimator of ξ_2.
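The exact F test of Example 1 can be sketched end to end. The designs T and Z_0, the group sizes, and the data-generating values below are hypothetical; the data are simulated under H_0 (all three slopes equal), so F should follow F_{2,k}.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n1, n2, N = 6, 5, 10, 15
T = np.arange(1.0, p + 1)[:, None]
Z0 = T ** 2                                   # hypothetical random-effect design
one = np.ones((p, 1))
Z = np.hstack([one, Z0])
M_Z = np.eye(p) - Z @ np.linalg.solve(Z.T @ Z, Z.T)

A = np.zeros((3, N))
A[0, :n1] = 1.0; A[1, n1:n2] = 1.0; A[2, n2:] = 1.0

# Simulate under H0: common intercept 1 and common slope 1, plus random effects.
U = rng.standard_normal((2, N))               # (u_0j, u_1j)'
Y = (one + T) @ np.ones((1, N)) + Z @ U + 0.5 * rng.standard_normal((p, N))

TMT = (T.T @ M_Z @ T).item()
xi2 = (T.T @ M_Z @ Y @ A.T) / TMT @ np.linalg.inv(A @ A.T)   # (32), 1 x 3
Yc = Y - T @ xi2 @ A                                          # per-group residuals
k = N * (p - 2) - 3
sigma2 = np.sum((M_Z @ Yc) * Yc) / k                          # (33)

Hmat = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])       # H0: xi_2 H = 0
xh = xi2 @ Hmat
F = TMT * (xh @ np.linalg.inv(Hmat.T @ np.linalg.inv(A @ A.T) @ Hmat) @ xh.T).item() / (2 * sigma2)
assert k == 57 and F >= 0.0                   # F ~ F_{2,57} under H0
```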

Example 2. Consider the model for systolic and diastolic blood pressure data:

y_ijl = β_0j^(1) + u_ij + β_j^(1) x_l + v_ij t_l + ε_ijl  if treatment 1 group,
y_ijl = β_0j^(2) + u_ij + β_j^(2) x_l + v_ij t_l + ε_ijl  if treatment 2 group,  (36)
i = 1, ..., N,  j = 1, 2,  l = 1, ..., p,

where y_i1l and y_i2l are the systolic and diastolic blood pressures of the ith patient (with moderate essential hypertension) at time point t_l, respectively, β_0j^(1) and β_0j^(2) are fixed intercepts, and β_j^(1) and β_j^(2) are the fixed effects of treatments 1 and 2 on the endpoints. We assume that the random individual effect u_i = (u_i1, u_i2, v_i1, v_i2)' follows the distribution N(0, Σ_u) and is independent of the error ε_i = (ε_i11, ε_i21, ..., ε_i1p, ε_i2p)', which has distribution N(0, I_p ⊗ Σ_e). Clearly, (36) is the case of model (3) with m = 2,

ξ = (ξ_1' : ξ_2')',  ξ_1 = [β_01^(1), β_01^(2); β_02^(1), β_02^(2)],  ξ_2 = [β_1^(1), β_1^(2); β_2^(1), β_2^(2)],  A = [1_{n_1}', 0'; 0', 1_{N−n_1}'], (37)

and X = (1_p, x), Z = (1_p, t), U = (u_1, ..., u_N). Here x = (x_1, ..., x_p)' and t = (t_1, ..., t_p)' are design vectors of dose and time points. Of primary interest is testing on ξ_2, the effects of treatments 1 and 2 on systolic and diastolic blood pressure.

According to Theorem 2.1, we can obtain an exact test statistic on ξ_2, the Wilks Λ statistic of Section 2, constructed from the new estimator ξ̃_2 given by (8). Moreover, the new estimator of ξ_2 is superior to the LSE and to the two-stage estimator under the conditions of Theorem 2.2 and Corollary 3.1, respectively. Consider the following typical designs of dose x and time point t.

Take x = (1, 1, 0, −1, −1)' and t = (1, 2, 3, 4, 5)'; then conditions (18) are equivalent to

Σ_e = Cov((ε_i1l, ε_i2l)') < Cov((v_i1, v_i2)'). (38)

That is, if the covariance matrix of the error is smaller than the covariance matrix of the time effect in the Löwner partial ordering, then the new estimator of ξ_2 is superior to the LSE.
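The first design can be checked numerically. With the dose and time vectors as reconstructed above (they are the assumed example values), 1_p'x = 0, x't ≠ 0, and t'M_X t = 1, so the threshold matrix (t'M_X t)^{-1}Σ_e in condition (18) is exactly Σ_e, as stated in (38).

```python
import numpy as np

# Assumed example design: x = (1,1,0,-1,-1)', t = (1,2,3,4,5)'.
x = np.array([[1.0], [1.0], [0.0], [-1.0], [-1.0]])
t = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
one = np.ones((5, 1))

X = np.hstack([one, x])
M_X = np.eye(5) - X @ np.linalg.solve(X.T @ X, X.T)

assert abs((one.T @ x).item()) < 1e-12           # X_1'X_2 = 0: condition (14)
assert abs((x.T @ t).item() + 6.0) < 1e-12       # x't = -6, nonzero as (18) requires
assert abs((t.T @ M_X @ t).item() - 1.0) < 1e-9  # threshold multiplier equals 1
```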
Take x = (0, 2, 2, 2, 0)' and t = (−2, −1, 0, 1, 2)'; it is easy to verify that Z'M_{X_1}x = (0, t'x)' = (0, 0)', so the new estimator of ξ_2 is superior to the two-stage estimator provided by Rao.

The above two examples show that Theorem 2.2 and Corollary 3.1 are also helpful for the study of optimal design of experiments.

Acknowledgments

This research was supported by a grant from The Hong Kong Polytechnic University Research Committee. Wu's and Wang's research was also partially supported by the National Natural Science

Foundation of China. We are grateful to the two referees for their detailed suggestions, which considerably improved the quality of the paper.

References

[1] A. Azzalini, Growth curves analysis for patterned covariance matrices, in: M.L. Puri, J.P. Vilaplana, W. Wertz (Eds.), New Perspectives in Theoretical and Applied Statistics, Wiley, New York, 1987.
[2] J.K. Baksalary, Algebraic characterizations and statistical implications of the commutativity of orthogonal projectors, in: T. Pukkila, S. Puntanen (Eds.), Proceedings of the Second International Tampere Conference in Statistics, 1987.
[3] P.J. Diggle, K.E. Liang, S.L. Zeger, Analysis of Longitudinal Data, Oxford Science, New York, 1994.
[4] J.E. Grizzle, D.M. Allen, Analysis of growth and dose response curves, Biometrics 25 (1969).
[5] C.G. Khatri, A note on a MANOVA model applied to problems in growth curves, Ann. Inst. Statist. Math. 18 (1966).
[6] N.M. Laird, J.H. Ware, Random-effects models for longitudinal data, Biometrics 38 (1982).
[7] N. Lange, N.M. Laird, The effect of covariance structure on variance estimation in balanced growth-curve models with random parameters, J. Amer. Statist. Assoc. 84 (1989).
[8] T. Nummi, Estimation in a random effects growth curve model, J. Appl. Statist. 24 (1997).
[9] T. Nummi, J. Möttönen, On the analysis of multivariate growth curves, Metrika 52 (2000).
[10] C.R. Rao, Least squares theory using an estimated dispersion matrix and its application to measurement of signals, in: L. LeCam, J. Neyman (Eds.), Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1967.
[11] G. Reinsel, Multivariate repeated-measurement or growth curve models with multivariate random-effects covariance structure, J. Amer. Statist. Assoc. 77 (1982).
[12] G. Reinsel, Estimation and prediction in a multivariate random effects generalized linear model, J. Amer. Statist. Assoc. 79 (1984).
[13] G.A.F. Seber, Linear Regression Analysis, Wiley, New York, 1977, p. 66.
[14] M.S. Srivastava, D. von Rosen, Growth curve models, in: S. Ghosh (Ed.), Multivariate Analysis, Design of Experiments, and Survey Sampling, Marcel Dekker Inc., New York, 1999.
[15] G. Verbeke, G. Molenberghs, Linear Mixed Models for Longitudinal Data, Springer, New York, 2000.
[16] S.G. Wang, S.C. Chow, Advanced Linear Models, Marcel Dekker Inc., New York, 1994.
[17] M.X. Wu, S.G. Wang, Parameter estimation in a partitioned mixed-effects model, Chinese J. Eng. Math. 19 (2) (2002) 31–38.


Zellner s Seemingly Unrelated Regressions Model. James L. Powell Department of Economics University of California, Berkeley Zellner s Seemingly Unrelated Regressions Model James L. Powell Department of Economics University of California, Berkeley Overview The seemingly unrelated regressions (SUR) model, proposed by Zellner,

More information

Comparison of prediction quality of the best linear unbiased predictors in time series linear regression models

Comparison of prediction quality of the best linear unbiased predictors in time series linear regression models 1 Comparison of prediction quality of the best linear unbiased predictors in time series linear regression models Martina Hančová Institute of Mathematics, P. J. Šafárik University in Košice Jesenná 5,

More information

Large Sample Properties of Estimators in the Classical Linear Regression Model

Large Sample Properties of Estimators in the Classical Linear Regression Model Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in

More information

Moore Penrose inverses and commuting elements of C -algebras

Moore Penrose inverses and commuting elements of C -algebras Moore Penrose inverses and commuting elements of C -algebras Julio Benítez Abstract Let a be an element of a C -algebra A satisfying aa = a a, where a is the Moore Penrose inverse of a and let b A. We

More information

Orthogonal arrays obtained by repeating-column difference matrices

Orthogonal arrays obtained by repeating-column difference matrices Discrete Mathematics 307 (2007) 246 261 www.elsevier.com/locate/disc Orthogonal arrays obtained by repeating-column difference matrices Yingshan Zhang Department of Statistics, East China Normal University,

More information

Multivariate Regression (Chapter 10)

Multivariate Regression (Chapter 10) Multivariate Regression (Chapter 10) This week we ll cover multivariate regression and maybe a bit of canonical correlation. Today we ll mostly review univariate multivariate regression. With multivariate

More information

Estimation and Testing for Common Cycles

Estimation and Testing for Common Cycles Estimation and esting for Common Cycles Anders Warne February 27, 2008 Abstract: his note discusses estimation and testing for the presence of common cycles in cointegrated vector autoregressions A simple

More information

STAT 100C: Linear models

STAT 100C: Linear models STAT 100C: Linear models Arash A. Amini June 9, 2018 1 / 56 Table of Contents Multiple linear regression Linear model setup Estimation of β Geometric interpretation Estimation of σ 2 Hat matrix Gram matrix

More information

On a matrix result in comparison of linear experiments

On a matrix result in comparison of linear experiments Linear Algebra and its Applications 32 (2000) 32 325 www.elsevier.com/locate/laa On a matrix result in comparison of linear experiments Czesław Stȩpniak, Institute of Mathematics, Pedagogical University

More information

GLM Repeated Measures

GLM Repeated Measures GLM Repeated Measures Notation The GLM (general linear model) procedure provides analysis of variance when the same measurement or measurements are made several times on each subject or case (repeated

More information

High-dimensional asymptotic expansions for the distributions of canonical correlations

High-dimensional asymptotic expansions for the distributions of canonical correlations Journal of Multivariate Analysis 100 2009) 231 242 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva High-dimensional asymptotic

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 431 (29) 188 195 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa Lattices associated with

More information

TAMS39 Lecture 2 Multivariate normal distribution

TAMS39 Lecture 2 Multivariate normal distribution TAMS39 Lecture 2 Multivariate normal distribution Martin Singull Department of Mathematics Mathematical Statistics Linköping University, Sweden Content Lecture Random vectors Multivariate normal distribution

More information

MULTIVARIATE POPULATIONS

MULTIVARIATE POPULATIONS CHAPTER 5 MULTIVARIATE POPULATIONS 5. INTRODUCTION In the following chapters we will be dealing with a variety of problems concerning multivariate populations. The purpose of this chapter is to provide

More information

On the conservative multivariate Tukey-Kramer type procedures for multiple comparisons among mean vectors

On the conservative multivariate Tukey-Kramer type procedures for multiple comparisons among mean vectors On the conservative multivariate Tukey-Kramer type procedures for multiple comparisons among mean vectors Takashi Seo a, Takahiro Nishiyama b a Department of Mathematical Information Science, Tokyo University

More information

On Hadamard and Kronecker Products Over Matrix of Matrices

On Hadamard and Kronecker Products Over Matrix of Matrices General Letters in Mathematics Vol 4, No 1, Feb 2018, pp13-22 e-issn 2519-9277, p-issn 2519-9269 Available online at http:// wwwrefaadcom On Hadamard and Kronecker Products Over Matrix of Matrices Z Kishka1,

More information

The equivalence of the Maximum Likelihood and a modified Least Squares for a case of Generalized Linear Model

The equivalence of the Maximum Likelihood and a modified Least Squares for a case of Generalized Linear Model Applied and Computational Mathematics 2014; 3(5): 268-272 Published online November 10, 2014 (http://www.sciencepublishinggroup.com/j/acm) doi: 10.11648/j.acm.20140305.22 ISSN: 2328-5605 (Print); ISSN:

More information

SOLVING FUZZY LINEAR SYSTEMS BY USING THE SCHUR COMPLEMENT WHEN COEFFICIENT MATRIX IS AN M-MATRIX

SOLVING FUZZY LINEAR SYSTEMS BY USING THE SCHUR COMPLEMENT WHEN COEFFICIENT MATRIX IS AN M-MATRIX Iranian Journal of Fuzzy Systems Vol 5, No 3, 2008 pp 15-29 15 SOLVING FUZZY LINEAR SYSTEMS BY USING THE SCHUR COMPLEMENT WHEN COEFFICIENT MATRIX IS AN M-MATRIX M S HASHEMI, M K MIRNIA AND S SHAHMORAD

More information

EPMC Estimation in Discriminant Analysis when the Dimension and Sample Sizes are Large

EPMC Estimation in Discriminant Analysis when the Dimension and Sample Sizes are Large EPMC Estimation in Discriminant Analysis when the Dimension and Sample Sizes are Large Tetsuji Tonda 1 Tomoyuki Nakagawa and Hirofumi Wakaki Last modified: March 30 016 1 Faculty of Management and Information

More information

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES olume 10 2009, Issue 2, Article 41, 10 pp. ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG, AND HUA SHAO COLLEGE OF MATHEMATICS AND PHYSICS CHONGQING UNIERSITY

More information

Minimax design criterion for fractional factorial designs

Minimax design criterion for fractional factorial designs Ann Inst Stat Math 205 67:673 685 DOI 0.007/s0463-04-0470-0 Minimax design criterion for fractional factorial designs Yue Yin Julie Zhou Received: 2 November 203 / Revised: 5 March 204 / Published online:

More information

Rejoinder. 1 Phase I and Phase II Profile Monitoring. Peihua Qiu 1, Changliang Zou 2 and Zhaojun Wang 2

Rejoinder. 1 Phase I and Phase II Profile Monitoring. Peihua Qiu 1, Changliang Zou 2 and Zhaojun Wang 2 Rejoinder Peihua Qiu 1, Changliang Zou 2 and Zhaojun Wang 2 1 School of Statistics, University of Minnesota 2 LPMC and Department of Statistics, Nankai University, China We thank the editor Professor David

More information

Generalized Multivariate Rank Type Test Statistics via Spatial U-Quantiles

Generalized Multivariate Rank Type Test Statistics via Spatial U-Quantiles Generalized Multivariate Rank Type Test Statistics via Spatial U-Quantiles Weihua Zhou 1 University of North Carolina at Charlotte and Robert Serfling 2 University of Texas at Dallas Final revision for

More information

Expected probabilities of misclassification in linear discriminant analysis based on 2-Step monotone missing samples

Expected probabilities of misclassification in linear discriminant analysis based on 2-Step monotone missing samples Expected probabilities of misclassification in linear discriminant analysis based on 2-Step monotone missing samples Nobumichi Shutoh, Masashi Hyodo and Takashi Seo 2 Department of Mathematics, Graduate

More information

Orthogonal decompositions in growth curve models

Orthogonal decompositions in growth curve models ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 4, Orthogonal decompositions in growth curve models Daniel Klein and Ivan Žežula Dedicated to Professor L. Kubáček on the occasion

More information

MULTIVARIATE ANALYSIS OF VARIANCE

MULTIVARIATE ANALYSIS OF VARIANCE MULTIVARIATE ANALYSIS OF VARIANCE RAJENDER PARSAD AND L.M. BHAR Indian Agricultural Statistics Research Institute Library Avenue, New Delhi - 0 0 lmb@iasri.res.in. Introduction In many agricultural experiments,

More information

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1 Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is

More information

Department of Statistics

Department of Statistics Research Report Department of Statistics Research Report Department of Statistics No. 05: Testing in multivariate normal models with block circular covariance structures Yuli Liang Dietrich von Rosen Tatjana

More information

Regression models for multivariate ordered responses via the Plackett distribution

Regression models for multivariate ordered responses via the Plackett distribution Journal of Multivariate Analysis 99 (2008) 2472 2478 www.elsevier.com/locate/jmva Regression models for multivariate ordered responses via the Plackett distribution A. Forcina a,, V. Dardanoni b a Dipartimento

More information

Determinants of Partition Matrices

Determinants of Partition Matrices journal of number theory 56, 283297 (1996) article no. 0018 Determinants of Partition Matrices Georg Martin Reinhart Wellesley College Communicated by A. Hildebrand Received February 14, 1994; revised

More information

A MODIFICATION OF THE HARTUNG KNAPP CONFIDENCE INTERVAL ON THE VARIANCE COMPONENT IN TWO VARIANCE COMPONENT MODELS

A MODIFICATION OF THE HARTUNG KNAPP CONFIDENCE INTERVAL ON THE VARIANCE COMPONENT IN TWO VARIANCE COMPONENT MODELS K Y B E R N E T I K A V O L U M E 4 3 ( 2 0 0 7, N U M B E R 4, P A G E S 4 7 1 4 8 0 A MODIFICATION OF THE HARTUNG KNAPP CONFIDENCE INTERVAL ON THE VARIANCE COMPONENT IN TWO VARIANCE COMPONENT MODELS

More information

Multivariate Linear Models

Multivariate Linear Models Multivariate Linear Models Stanley Sawyer Washington University November 7, 2001 1. Introduction. Suppose that we have n observations, each of which has d components. For example, we may have d measurements

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

Vector Auto-Regressive Models

Vector Auto-Regressive Models Vector Auto-Regressive Models Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

The Algebra of the Kronecker Product. Consider the matrix equation Y = AXB where

The Algebra of the Kronecker Product. Consider the matrix equation Y = AXB where 21 : CHAPTER Seemingly-Unrelated Regressions The Algebra of the Kronecker Product Consider the matrix equation Y = AXB where Y =[y kl ]; k =1,,r,l =1,,s, (1) X =[x ij ]; i =1,,m,j =1,,n, A=[a ki ]; k =1,,r,i=1,,m,

More information

Shrinkage Estimation of the Slope Parameters of two Parallel Regression Lines Under Uncertain Prior Information

Shrinkage Estimation of the Slope Parameters of two Parallel Regression Lines Under Uncertain Prior Information Shrinkage Estimation of the Slope Parameters of two Parallel Regression Lines Under Uncertain Prior Information Shahjahan Khan Department of Mathematics & Computing University of Southern Queensland Toowoomba,

More information

VAR Models and Applications

VAR Models and Applications VAR Models and Applications Laurent Ferrara 1 1 University of Paris West M2 EIPMC Oct. 2016 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

Two remarks on normality preserving Borel automorphisms of R n

Two remarks on normality preserving Borel automorphisms of R n Proc. Indian Acad. Sci. (Math. Sci.) Vol. 3, No., February 3, pp. 75 84. c Indian Academy of Sciences Two remarks on normality preserving Borel automorphisms of R n K R PARTHASARATHY Theoretical Statistics

More information

The Hausdorff measure of a class of Sierpinski carpets

The Hausdorff measure of a class of Sierpinski carpets J. Math. Anal. Appl. 305 (005) 11 19 www.elsevier.com/locate/jmaa The Hausdorff measure of a class of Sierpinski carpets Yahan Xiong, Ji Zhou Department of Mathematics, Sichuan Normal University, Chengdu

More information

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG College of Mathematics and Physics Chongqing University Chongqing, 400030, P.R. China EMail: lihy.hy@gmail.com,

More information

ON COMBINING CORRELATED ESTIMATORS OF THE COMMON MEAN OF A MULTIVARIATE NORMAL DISTRIBUTION

ON COMBINING CORRELATED ESTIMATORS OF THE COMMON MEAN OF A MULTIVARIATE NORMAL DISTRIBUTION ON COMBINING CORRELATED ESTIMATORS OF THE COMMON MEAN OF A MULTIVARIATE NORMAL DISTRIBUTION K. KRISHNAMOORTHY 1 and YONG LU Department of Mathematics, University of Louisiana at Lafayette Lafayette, LA

More information

A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity

A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity Linear Algebra and its Applications 384 2004) 43 54 www.elsevier.com/locate/laa A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity Qing-Wen Wang Department

More information

Reference: Davidson and MacKinnon Ch 2. In particular page

Reference: Davidson and MacKinnon Ch 2. In particular page RNy, econ460 autumn 03 Lecture note Reference: Davidson and MacKinnon Ch. In particular page 57-8. Projection matrices The matrix M I X(X X) X () is often called the residual maker. That nickname is easy

More information

Solutions to the generalized Sylvester matrix equations by a singular value decomposition

Solutions to the generalized Sylvester matrix equations by a singular value decomposition Journal of Control Theory Applications 2007 5 (4) 397 403 DOI 101007/s11768-006-6113-0 Solutions to the generalized Sylvester matrix equations by a singular value decomposition Bin ZHOU Guangren DUAN (Center

More information

Recursive Determination of the Generalized Moore Penrose M-Inverse of a Matrix

Recursive Determination of the Generalized Moore Penrose M-Inverse of a Matrix journal of optimization theory and applications: Vol. 127, No. 3, pp. 639 663, December 2005 ( 2005) DOI: 10.1007/s10957-005-7508-7 Recursive Determination of the Generalized Moore Penrose M-Inverse of

More information

6 Pattern Mixture Models

6 Pattern Mixture Models 6 Pattern Mixture Models A common theme underlying the methods we have discussed so far is that interest focuses on making inference on parameters in a parametric or semiparametric model for the full data

More information

Statistical Inference with Monotone Incomplete Multivariate Normal Data

Statistical Inference with Monotone Incomplete Multivariate Normal Data Statistical Inference with Monotone Incomplete Multivariate Normal Data p. 1/4 Statistical Inference with Monotone Incomplete Multivariate Normal Data This talk is based on joint work with my wonderful

More information

Application of Ghosh, Grizzle and Sen s Nonparametric Methods in. Longitudinal Studies Using SAS PROC GLM

Application of Ghosh, Grizzle and Sen s Nonparametric Methods in. Longitudinal Studies Using SAS PROC GLM Application of Ghosh, Grizzle and Sen s Nonparametric Methods in Longitudinal Studies Using SAS PROC GLM Chan Zeng and Gary O. Zerbe Department of Preventive Medicine and Biometrics University of Colorado

More information

MA 575 Linear Models: Cedric E. Ginestet, Boston University Mixed Effects Estimation, Residuals Diagnostics Week 11, Lecture 1

MA 575 Linear Models: Cedric E. Ginestet, Boston University Mixed Effects Estimation, Residuals Diagnostics Week 11, Lecture 1 MA 575 Linear Models: Cedric E Ginestet, Boston University Mixed Effects Estimation, Residuals Diagnostics Week 11, Lecture 1 1 Within-group Correlation Let us recall the simple two-level hierarchical

More information

Analysis of variance and linear contrasts in experimental design with generalized secant hyperbolic distribution

Analysis of variance and linear contrasts in experimental design with generalized secant hyperbolic distribution Journal of Computational and Applied Mathematics 216 (2008) 545 553 www.elsevier.com/locate/cam Analysis of variance and linear contrasts in experimental design with generalized secant hyperbolic distribution

More information

Yimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract

Yimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory

More information

The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B

The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B International Journal of Algebra, Vol. 6, 0, no. 9, 903-9 The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B Qingfeng Xiao Department of Basic Dongguan olytechnic Dongguan 53808, China

More information

A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES

A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES Bull. Korean Math. Soc. 52 (205), No. 3, pp. 825 836 http://dx.doi.org/0.434/bkms.205.52.3.825 A NOTE ON THE COMPLETE MOMENT CONVERGENCE FOR ARRAYS OF B-VALUED RANDOM VARIABLES Yongfeng Wu and Mingzhu

More information

A measure for the reliability of a rating scale based on longitudinal clinical trial data Link Peer-reviewed author version

A measure for the reliability of a rating scale based on longitudinal clinical trial data Link Peer-reviewed author version A measure for the reliability of a rating scale based on longitudinal clinical trial data Link Peer-reviewed author version Made available by Hasselt University Library in Document Server@UHasselt Reference

More information

MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete Observations

MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete Observations Sankhyā : The Indian Journal of Statistics 2006, Volume 68, Part 3, pp. 409-435 c 2006, Indian Statistical Institute MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation Using the least squares estimator for β we can obtain predicted values and compute residuals: Ŷ = Z ˆβ = Z(Z Z) 1 Z Y ˆɛ = Y Ŷ = Y Z(Z Z) 1 Z Y = [I Z(Z Z) 1 Z ]Y. The usual decomposition

More information

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction Acta Math. Univ. Comenianae Vol. LXV, 1(1996), pp. 129 139 129 ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES V. WITKOVSKÝ Abstract. Estimation of the autoregressive

More information

arxiv: v1 [math.ra] 11 Aug 2014

arxiv: v1 [math.ra] 11 Aug 2014 Double B-tensors and quasi-double B-tensors Chaoqian Li, Yaotang Li arxiv:1408.2299v1 [math.ra] 11 Aug 2014 a School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P. R. China 650091

More information

On the conservative multivariate multiple comparison procedure of correlated mean vectors with a control

On the conservative multivariate multiple comparison procedure of correlated mean vectors with a control On the conservative multivariate multiple comparison procedure of correlated mean vectors with a control Takahiro Nishiyama a a Department of Mathematical Information Science, Tokyo University of Science,

More information

Efficient Estimation of Dynamic Panel Data Models: Alternative Assumptions and Simplified Estimation

Efficient Estimation of Dynamic Panel Data Models: Alternative Assumptions and Simplified Estimation Efficient Estimation of Dynamic Panel Data Models: Alternative Assumptions and Simplified Estimation Seung C. Ahn Arizona State University, Tempe, AZ 85187, USA Peter Schmidt * Michigan State University,

More information

Biostatistics Workshop Longitudinal Data Analysis. Session 4 GARRETT FITZMAURICE

Biostatistics Workshop Longitudinal Data Analysis. Session 4 GARRETT FITZMAURICE Biostatistics Workshop 2008 Longitudinal Data Analysis Session 4 GARRETT FITZMAURICE Harvard University 1 LINEAR MIXED EFFECTS MODELS Motivating Example: Influence of Menarche on Changes in Body Fat Prospective

More information

Statistical Inference On the High-dimensional Gaussian Covarianc

Statistical Inference On the High-dimensional Gaussian Covarianc Statistical Inference On the High-dimensional Gaussian Covariance Matrix Department of Mathematical Sciences, Clemson University June 6, 2011 Outline Introduction Problem Setup Statistical Inference High-Dimensional

More information

STAT5044: Regression and Anova. Inyoung Kim

STAT5044: Regression and Anova. Inyoung Kim STAT5044: Regression and Anova Inyoung Kim 2 / 47 Outline 1 Regression 2 Simple Linear regression 3 Basic concepts in regression 4 How to estimate unknown parameters 5 Properties of Least Squares Estimators:

More information

The symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation

The symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation Electronic Journal of Linear Algebra Volume 18 Volume 18 (2009 Article 23 2009 The symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation Qing-feng Xiao qfxiao@hnu.cn

More information

Inequalities for the numerical radius, the norm and the maximum of the real part of bounded linear operators in Hilbert spaces

Inequalities for the numerical radius, the norm and the maximum of the real part of bounded linear operators in Hilbert spaces Available online at www.sciencedirect.com Linear Algebra its Applications 48 008 980 994 www.elsevier.com/locate/laa Inequalities for the numerical radius, the norm the maximum of the real part of bounded

More information

The Drazin inverses of products and differences of orthogonal projections

The Drazin inverses of products and differences of orthogonal projections J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,

More information

This note derives marginal and conditional means and covariances when the joint distribution may be singular and discusses the resulting invariants.

This note derives marginal and conditional means and covariances when the joint distribution may be singular and discusses the resulting invariants. of By W. A. HARRIS, Jr. ci) and T. N. This note derives marginal and conditional means and covariances when the joint distribution may be singular and discusses the resulting invariants. 1. Introduction.

More information

Economics 620, Lecture 5: exp

Economics 620, Lecture 5: exp 1 Economics 620, Lecture 5: The K-Variable Linear Model II Third assumption (Normality): y; q(x; 2 I N ) 1 ) p(y) = (2 2 ) exp (N=2) 1 2 2(y X)0 (y X) where N is the sample size. The log likelihood function

More information

The Matrix Algebra of Sample Statistics

The Matrix Algebra of Sample Statistics The Matrix Algebra of Sample Statistics James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) The Matrix Algebra of Sample Statistics

More information

SOME INEQUALITIES FOR THE KHATRI-RAO PRODUCT OF MATRICES

SOME INEQUALITIES FOR THE KHATRI-RAO PRODUCT OF MATRICES SOME INEQUALITIES FOR THE KHATRI-RAO PRODUCT OF MATRICES CHONG-GUANG CAO, XIAN ZHANG, AND ZHONG-PENG YANG Abstract. Several inequalities for the Khatri-Rao product of complex positive definite Hermitian

More information

Re-nnd solutions of the matrix equation AXB = C

Re-nnd solutions of the matrix equation AXB = C Re-nnd solutions of the matrix equation AXB = C Dragana S. Cvetković-Ilić Abstract In this article we consider Re-nnd solutions of the equation AXB = C with respect to X, where A, B, C are given matrices.

More information

Formulas for the Drazin Inverse of Matrices over Skew Fields

Formulas for the Drazin Inverse of Matrices over Skew Fields Filomat 30:12 2016 3377 3388 DOI 102298/FIL1612377S Published by Faculty of Sciences and Mathematics University of Niš Serbia Available at: http://wwwpmfniacrs/filomat Formulas for the Drazin Inverse of

More information