On the Efficiencies of Several Generalized Least Squares Estimators in a Seemingly Unrelated Regression Model and a Heteroscedastic Model


Journal of Multivariate Analysis 70, 86-94 (1999)
Article ID jmva.1999.1817, available online at http://www.idealibrary.com

On the Efficiencies of Several Generalized Least Squares Estimators in a Seemingly Unrelated Regression Model and a Heteroscedastic Model

Hiroshi Kurata*
Yamaguchi University, Yamaguchi, Japan
E-mail: kurata@po.cc.yamaguchi-u.ac.jp

Received September 18, 1997

This paper investigates the efficiencies of several generalized least squares estimators (GLSEs) in terms of the covariance matrix. Two models are analyzed: a seemingly unrelated regression model and a heteroscedastic model. In both models, we define a class of unbiased GLSEs and show that their covariance matrices remain the same even if the distribution of the error term deviates from the normal distribution. The results are applied to the problem of evaluating the lower and upper bounds for the covariance matrices of the GLSEs. © 1999 Academic Press

AMS 1991 subject classifications: 62J05.
Key words and phrases: elliptically symmetric distributions; covariance matrix; generalized least squares estimators; seemingly unrelated regression model; heteroscedastic model.

1. INTRODUCTION

In this paper, we consider the linear regression model

$y = X\beta + \varepsilon$ with $E(\varepsilon) = 0$ and $E(\varepsilon\varepsilon') = \Omega$,  (1)

where $\Omega = \Omega(\theta) \in \mathcal{S}_+(n)$. Here, $X$ is an $n \times k$ known matrix of full rank, $\Omega$ is a function of an unknown vector $\theta$, and $\mathcal{S}_+(n)$ denotes the set of $n \times n$ positive definite matrices. A generalized least squares estimator (GLSE) of $\beta$ is defined as

$\hat\beta(\hat\Omega) = (X'\hat\Omega^{-1}X)^{-1} X'\hat\Omega^{-1} y$ with $\hat\Omega = \Omega(\hat\theta)$,  (2)

* The author thanks the editors and an anonymous referee for their valuable comments on an earlier version of this paper. This research is financially supported by the Faculty of Economics Research Fund at Yamaguchi University.

Copyright 1999 by Academic Press. All rights of reproduction in any form reserved.

where $\hat\theta$ is an estimator of $\theta$. This paper investigates the finite sample efficiencies of several unbiased GLSEs in terms of the covariance matrix. We assume that the distribution of the error term $\varepsilon$ is elliptically symmetric with mean $0$ and covariance matrix $\Omega$; that is, the probability density function $f(\varepsilon)$ of $\varepsilon$ is expressed as

$f(\varepsilon) = |\Omega|^{-1/2} f_0(\varepsilon'\Omega^{-1}\varepsilon)$

for some nonnegative function $f_0$ such that $\int_{R^n} f_0(x'x)\,dx = 1$ and $\int_{R^n} xx' f_0(x'x)\,dx = I_n$. This will be written as $\mathcal{L}(\varepsilon) \in E_n(0, \Omega)$, where $\mathcal{L}(\varepsilon)$ denotes the distribution of $\varepsilon$. The class $E_n(0, \Omega)$ contains distributions whose tails are longer than those of the normal distribution $N_n(0, \Omega)$.

In the case where $\varepsilon$ is distributed as $N_n(0, \Omega)$, several GLSEs satisfy the following condition: conditional on $\hat\Omega$,

$E(\hat\beta(\hat\Omega) \mid \hat\Omega) = \beta$ and $\mathrm{Cov}(\hat\beta(\hat\Omega) \mid \hat\Omega) = H$,  (3)

where $H$ is given by

$H \equiv H(\hat\Omega, \Omega) = (X'\hat\Omega^{-1}X)^{-1} X'\hat\Omega^{-1}\Omega\hat\Omega^{-1}X (X'\hat\Omega^{-1}X)^{-1}$.  (4)

Clearly (3) implies that $\mathrm{Cov}(\hat\beta(\hat\Omega)) = E(H(\hat\Omega, \Omega))$. Typical examples are the unrestricted Zellner estimator (UZE) in a seemingly unrelated regression (SUR) model (Zellner, 1962, 1963) and a GLSE in a heteroscedastic model. For such a GLSE, Kurata and Kariya (1996) and Kariya (1981) derived the lower and upper bounds for the covariance matrix $\mathrm{Cov}(\hat\beta(\hat\Omega))$ as

$(X'\Omega^{-1}X)^{-1} \le \mathrm{Cov}(\hat\beta(\hat\Omega)) \le E[L(\hat\Omega, \Omega)]\,(X'\Omega^{-1}X)^{-1}$,  (5)

where $L(\hat\Omega, \Omega) = (l_1 + l_n)^2 / (4 l_1 l_n)$ and $l_1 \ge \cdots \ge l_n$ are the latent roots of $\Omega^{-1/2}\hat\Omega\Omega^{-1/2}$. Bilodeau (1990) regarded $L(\hat\Omega, \Omega)$ as a loss function for choosing an estimator $\hat\Omega$ in $\hat\beta(\hat\Omega)$ and derived the optimal estimator with respect to $L(\hat\Omega, \Omega)$.

However, whether a GLSE satisfies condition (3) depends strongly on the normality of $\varepsilon$. Hence we need to investigate the cases where the distribution of $\varepsilon$ may deviate from normality. In the next section, we consider the SUR model under the elliptical symmetry of $\varepsilon$. We first define a class of unbiased GLSEs and show that their covariance matrices remain the same as long as $\mathcal{L}(\varepsilon) \in E_n(0, \Omega)$.
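The class $E_n(0, \Omega)$ can be made concrete numerically: a multivariate $t$ vector rescaled to have covariance $\Omega$ belongs to it alongside $N_n(0, \Omega)$, with longer tails but the same covariance. The sketch below is our own illustration (the dimensions, seed, and helper names are assumptions, not from the paper), using NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
n, nu = 4, 5                                  # dimension; t degrees of freedom (nu > 4 so 4th moments exist)
A = rng.standard_normal((n, n))
Omega = A @ A.T + n * np.eye(n)               # an arbitrary positive definite covariance
R = np.linalg.cholesky(Omega)                 # Omega = R R'

def rnormal(size):
    """Draws from N_n(0, Omega), one member of E_n(0, Omega)."""
    return rng.standard_normal((size, n)) @ R.T

def rt_scaled(size):
    """Multivariate t_nu draws rescaled so that Cov = Omega exactly."""
    z = rng.standard_normal((size, n)) @ R.T
    g = rng.chisquare(nu, size=size) / nu
    t = z / np.sqrt(g)[:, None]               # t_nu(0, Omega); its covariance is nu/(nu-2) Omega
    return t * np.sqrt((nu - 2) / nu)         # rescale so the covariance becomes Omega

for draws in (rnormal(200_000), rt_scaled(200_000)):
    err = np.linalg.norm(np.cov(draws.T) - Omega) / np.linalg.norm(Omega)
    print(f"relative covariance error: {err:.3f}")  # small: both families share covariance Omega
```

Both samples reproduce $\Omega$ up to Monte Carlo error while the rescaled $t$ has heavier tails; this is precisely the situation in which the normal-theory argument behind condition (3) can fail.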
This leads to a generalization of the results of Kariya (1981), Bilodeau (1990), and Kurata and Kariya (1996) stated above. In Section 3, we pursue a similar analysis for some GLSEs in the heteroscedastic model.
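For reference, the quantities in (2), (4), and (5) are straightforward to compute. The following sketch (an illustration under assumed data; the design, seed, and function names are ours, not the paper's) evaluates a GLSE, the matrix $H(\hat\Omega, \Omega)$, and the loss $L(\hat\Omega, \Omega)$ with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3
X = rng.standard_normal((n, k))            # n x k design of full column rank (a.s.)
A = rng.standard_normal((n, n))
Omega = A @ A.T + n * np.eye(n)            # true error covariance, positive definite
Omega_hat = Omega + 0.1 * np.eye(n)        # stand-in for an estimated covariance

def glse(X, Omega_hat, y):
    """GLSE of eq. (2): (X' Omega_hat^{-1} X)^{-1} X' Omega_hat^{-1} y."""
    W = np.linalg.solve(Omega_hat, X)      # Omega_hat^{-1} X
    return np.linalg.solve(X.T @ W, W.T @ y)

def H(X, Omega_hat, Omega):
    """Conditional covariance H(Omega_hat, Omega) of eq. (4)."""
    W = np.linalg.solve(Omega_hat, X)
    G = np.linalg.inv(X.T @ W)             # (X' Omega_hat^{-1} X)^{-1}
    return G @ W.T @ Omega @ W @ G

def L(Omega_hat, Omega):
    """Loss (l1 + ln)^2 / (4 l1 ln) of eq. (5), from the roots of Omega^{-1/2} Omega_hat Omega^{-1/2}."""
    R = np.linalg.cholesky(Omega)
    M = np.linalg.solve(R, np.linalg.solve(R, Omega_hat).T).T   # same eigenvalues as Omega^{-1} Omega_hat
    l = np.linalg.eigvalsh((M + M.T) / 2)
    return (l[-1] + l[0]) ** 2 / (4 * l[-1] * l[0])

# Sanity check: with Omega_hat = Omega, the lower bound in (5) is attained and L = 1.
assert np.allclose(H(X, Omega, Omega), np.linalg.inv(X.T @ np.linalg.solve(Omega, X)))
assert abs(L(Omega, Omega) - 1.0) < 1e-8
```

The scale-invariance $L(a\hat\Omega, \Omega) = L(\hat\Omega, \Omega)$, which drives the arguments of Section 2, can be checked directly by comparing `L(2 * Omega_hat, Omega)` with `L(Omega_hat, Omega)`.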

In the recent literature, Hasegawa (1994, 1995, 1996) treated Revankar's SUR model (Revankar, 1974), an SUR model of simple structure, and investigated the finite sample efficiencies of typical GLSEs under several families of non-normal distributions; see also the references therein. For the asymptotic efficiencies of GLSEs under non-normality, consult Srivastava and Maekawa (1995). Fundamental results on statistical inference in the SUR model are summarized in Srivastava and Giles (1987).

Here we briefly review several facts on elliptically symmetric distributions. Let an $n \times 1$ random vector $x$ be distributed as $E_n(0, \Omega)$. Then $\mathcal{L}(\Gamma\Omega^{-1/2}x) = \mathcal{L}(\Omega^{-1/2}x)$ holds for any $\Gamma \in O(n)$, where $O(n)$ denotes the group of $n \times n$ orthogonal matrices. If $\Omega = I_n$, then $\|x\| = \sqrt{x'x}$ and $x/\|x\|$ are independent, and $x/\|x\|$ is distributed uniformly on the unit sphere in $R^n$ (Muirhead, 1982, Chap. 1). Decompose $x$ as $x = (x_1', x_2')'$ with $x_j : n_j \times 1$ and $n_1 + n_2 = n$. Then the marginal distribution of $x_j$ is $n_j$-variate elliptical with mean $0$ and covariance $I_{n_j}$. Further, the conditional distribution of $x_1$ given $x_2$ is also $n_1$-variate elliptical with conditional mean and covariance

$E(x_1 \mid x_2) = 0$  (6)

$\mathrm{Cov}(x_1 \mid x_2) = \tilde c(x_2'x_2)\, I_{n_1}$  (7)

for some function $\tilde c$ (Fang and Zhang, 1990, Chap. 2). The function $\tilde c$ satisfies $E(\tilde c(x_2'x_2)) = 1$, since $\mathrm{Cov}(x_1) = I_{n_1}$. Note that if $x$ is normal, then $\tilde c \equiv 1$.

2. EFFICIENCIES OF GLSEs IN THE SUR MODEL

The SUR model considered here is the model (1) with the structure

$y = (y_1', \ldots, y_N')'$, $X = \mathrm{diag}\{X_1, \ldots, X_N\}$, $\beta = (\beta_1', \ldots, \beta_N')'$, $\varepsilon = (\varepsilon_1', \ldots, \varepsilon_N')'$, $\Omega = \Sigma \otimes I_m$ with $\Sigma \in \mathcal{S}_+(N)$,  (8)

where $y_j : m \times 1$, $X_j : m \times k_j$, $\beta_j : k_j \times 1$, $k = \sum_{j=1}^N k_j$, $n = Nm$, and $\mathrm{diag}$ denotes the block diagonal matrix. $\mathcal{L}(\varepsilon) \in E_n(0, \Sigma \otimes I_m)$ is assumed. Let $\hat\Sigma = \hat\Sigma(S)$ be an estimator of $\Sigma$ which depends on $y$ only through the random matrix $S$, where

$S = Y'[I - X_*(X_*'X_*)^+ X_*']\,Y = E'[I - X_*(X_*'X_*)^+ X_*']\,E : N \times N$,

$Y = (y_1, \ldots, y_N) : m \times N$, $X_* = (X_1, \ldots, X_N) : m \times k$, $E = (\varepsilon_1, \ldots, \varepsilon_N) : m \times N$, and $A^+$ denotes the Moore-Penrose generalized inverse of $A$. Let $C$ be the class of GLSEs of the form $\hat\beta(\hat\Sigma \otimes I_m)$ with $\hat\Sigma = \hat\Sigma(S)$ and $\hat\Sigma \in \mathcal{S}_+(N)$ a.s. Any GLSE in $C$ is unbiased if the expectation exists, because $S$ is an even function of the ordinary least squares residual vector (Kariya and Toyooka, 1985; Eaton, 1985); see also Theorem 2.1 below. We define a subclass $\mathcal{C}$ of $C$ as

$\mathcal{C} = \{\hat\beta(\hat\Sigma \otimes I_m) \in C \mid \hat\Sigma(S)$ is a one-to-one continuous function of $S$, and for any $a > 0$ there exists a positive number $\gamma \equiv \gamma(a)$ such that $\hat\Sigma(aS) = \gamma(a)\hat\Sigma(S)\}$.

The class $\mathcal{C}$ contains the GLSEs with $\hat\Sigma(S)$'s of the form

$\hat\Sigma(S) = TDT'$,  (9)

where $T$ is the lower triangular matrix with positive diagonal elements such that $S = TT'$ and $D$ is a diagonal matrix with positive elements. In this case, $\gamma(a) = a$. (It seems difficult to find reasonable estimators with $\gamma(a) \ne a$.) By letting $D = I_N$ in (9), we see that the GLSE with $\hat\Sigma(S) = S$, the unrestricted Zellner estimator (UZE), is in $\mathcal{C}$. It is easily shown from the general result of Kariya and Toyooka (1985) that any GLSE in $\mathcal{C}$ has finite second moments.

We introduce some notation. Let $p = \mathrm{rank}\,X_*$ and $q = m - p$. Let $\bar X$ and $\bar Z$ be any $m \times p$ and $m \times q$ matrices such that

$\bar X\bar X' = X_*(X_*'X_*)^+X_*'$, $\bar X'\bar X = I_p$

and

$\bar Z\bar Z' = I_m - X_*(X_*'X_*)^+X_*'$, $\bar Z'\bar Z = I_q$.

Then $\Gamma \equiv (\bar X, \bar Z) \in O(m)$. Let

$\tilde\varepsilon = (\Sigma^{-1/2} \otimes I_m)\varepsilon \equiv (\tilde\varepsilon_1', \ldots, \tilde\varepsilon_N')'$,

$\eta = (I_N \otimes \Gamma')\tilde\varepsilon \equiv (\eta_1', \ldots, \eta_N')'$ with $\eta_j = (\bar X'\tilde\varepsilon_j', \bar Z'\tilde\varepsilon_j')' \equiv (\delta_j', \xi_j')'$,

where $\tilde\varepsilon_j : m \times 1$, $\delta_j : p \times 1$, and $\xi_j : q \times 1$. Then we can easily see that

$\mathcal{L}(\tilde\varepsilon) = \mathcal{L}(\eta) = \mathcal{L}((\delta', \xi')') \in E_n(0, I_n)$ and $S = \Sigma^{1/2} U'U \Sigma^{1/2}$,

where $\delta = (\delta_1', \ldots, \delta_N')' : Np \times 1$, $\xi = (\xi_1', \ldots, \xi_N')' : Nq \times 1$, and $U = (\xi_1, \ldots, \xi_N) : q \times N$. As a function of $\xi$, $S \equiv S(\xi)$ satisfies

$S(a\xi) = a^2 S(\xi)$ for any $a > 0$.  (10)

If $\varepsilon$ is normally distributed, then $S$ is distributed as $W_N(\Sigma, q)$, the Wishart distribution with mean $q\Sigma$ and $q$ degrees of freedom.

Theorem 2.1. Suppose that $\mathcal{L}(\varepsilon) \in E_n(0, \Sigma \otimes I_m)$.
(i) If $\hat\beta(\hat\Sigma \otimes I_m) \in C$, then $E(\hat\beta(\hat\Sigma \otimes I_m) \mid \hat\Sigma) = \beta$.
(ii) If $\hat\beta(\hat\Sigma \otimes I_m) \in C$, then

$\mathrm{Cov}(\hat\beta(\hat\Sigma \otimes I_m) \mid \hat\Sigma) = c(\hat\Sigma)\, H(\hat\Sigma \otimes I_m, \Sigma \otimes I_m)$  (11)

for some function $c$ such that $E(c(\hat\Sigma)) = 1$.
(iii) If $\hat\beta(\hat\Sigma \otimes I_m) \in \mathcal{C}$, then $c(\hat\Sigma)$ and $H(\hat\Sigma \otimes I_m, \Sigma \otimes I_m)$ are independent.

Proof. By using $X_j'\bar Z = 0$ and (6), the following two equalities are proved:

$X'(\hat\Sigma^{-1} \otimes I_m)\varepsilon = X'(\hat\Sigma^{-1}\Sigma^{1/2} \otimes \bar X)\delta$  (12)

and

$E(\delta \mid \hat\Sigma) = E[E(E(\delta \mid \xi) \mid S) \mid \hat\Sigma] = 0$.  (13)

Hence we obtain

$E(\hat\beta(\hat\Sigma \otimes I_m) \mid \hat\Sigma) = (X'(\hat\Sigma^{-1} \otimes I_m)X)^{-1} X'(\hat\Sigma^{-1}\Sigma^{1/2} \otimes \bar X)\,E(\delta \mid \hat\Sigma) + \beta = \beta$,

proving (i). Similarly,

$E(\delta\delta' \mid \hat\Sigma) = E[E(E(\delta\delta' \mid \xi) \mid S) \mid \hat\Sigma] = E[E(\tilde c(\|\xi\|^2) \mid S) \mid \hat\Sigma]\, I_{Np}$

holds for some function $\tilde c$ (see (7)). Noting that $\|\xi\|^2 = \mathrm{tr}(S\Sigma^{-1})$ and letting $c(\hat\Sigma) = E(\tilde c(\mathrm{tr}(S\Sigma^{-1})) \mid \hat\Sigma)$ yield

$E(\delta\delta' \mid \hat\Sigma) = c(\hat\Sigma)\, I_{Np}$.  (14)

The function $c$ satisfies $E(c(\hat\Sigma)) = 1$, since $\mathrm{Cov}(\delta) = I_{Np}$. Therefore, from (12) and (14), we obtain

$\mathrm{Cov}(\hat\beta(\hat\Sigma \otimes I_m) \mid \hat\Sigma) = (X'(\hat\Sigma^{-1} \otimes I_m)X)^{-1} X'(\hat\Sigma^{-1}\Sigma^{1/2} \otimes \bar X)\,E(\delta\delta' \mid \hat\Sigma)\,(\Sigma^{1/2}\hat\Sigma^{-1} \otimes \bar X')X\,(X'(\hat\Sigma^{-1} \otimes I_m)X)^{-1}$
$= c(\hat\Sigma)\,(X'(\hat\Sigma^{-1} \otimes I_m)X)^{-1} X'(\hat\Sigma^{-1}\Sigma\hat\Sigma^{-1} \otimes \bar X\bar X')X\,(X'(\hat\Sigma^{-1} \otimes I_m)X)^{-1}$.

Here it is easily proved by a direct calculation that $X'(\hat\Sigma^{-1}\Sigma\hat\Sigma^{-1} \otimes \bar X\bar X')X = X'(\hat\Sigma^{-1}\Sigma\hat\Sigma^{-1} \otimes I_m)X$. Thus we establish (ii).

To prove (iii), we assume that $\hat\beta(\hat\Sigma \otimes I_m) \in \mathcal{C}$. As a function of $\xi$,

$H(\hat\Sigma(S(\xi)) \otimes I_m, \Sigma \otimes I_m) \equiv \bar H(\xi)$  (15)

depends on $\xi$ only through $\xi/\|\xi\|$, since for any $a > 0$,

$\bar H(a\xi) = H(\hat\Sigma(S(a\xi)) \otimes I_m, \Sigma \otimes I_m) = H(\hat\Sigma(a^2 S(\xi)) \otimes I_m, \Sigma \otimes I_m) = H(\gamma(a^2)\hat\Sigma(S(\xi)) \otimes I_m, \Sigma \otimes I_m) = H(\hat\Sigma(S(\xi)) \otimes I_m, \Sigma \otimes I_m) = \bar H(\xi)$,

where the second equality follows from (10), the third follows from the definition of $\mathcal{C}$, and the fourth follows because $H(a\hat\Omega, \Omega) = H(\hat\Omega, \Omega)$ holds for any $a > 0$ in general (see (4)). On the other hand, $c$ is a function of $\|\xi\|^2$, because

$c(\hat\Sigma) = E[\tilde c(\mathrm{tr}(S\Sigma^{-1})) \mid \hat\Sigma] = E[\tilde c(\mathrm{tr}(S\Sigma^{-1})) \mid S]$ ($\hat\Sigma$ is a one-to-one function of $S$) $= \tilde c(\mathrm{tr}(S\Sigma^{-1})) = \tilde c(\|\xi\|^2)$.

This proves (iii). ∎

Since the distribution of $\xi/\|\xi\|$ does not depend on the particular member of $E_n(0, \Sigma \otimes I_m)$, the following theorem is obtained.

Theorem 2.2. For any GLSE $\hat\beta(\hat\Sigma \otimes I_m) \in \mathcal{C}$, the covariance matrix $\mathrm{Cov}(\hat\beta(\hat\Sigma \otimes I_m))$ remains the same as long as $\mathcal{L}(\varepsilon) \in E_n(0, \Sigma \otimes I_m)$.

It follows from this result and (5) that any GLSE $\hat\beta(\hat\Sigma \otimes I_m) \in \mathcal{C}$ satisfies

$(X'(\Sigma^{-1} \otimes I_m)X)^{-1} \le \mathrm{Cov}(\hat\beta(\hat\Sigma \otimes I_m)) \le E[L(\hat\Sigma, \Sigma)]\,(X'(\Sigma^{-1} \otimes I_m)X)^{-1}$ with $L(\hat\Sigma, \Sigma) = (l_1 + l_N)^2 / (4 l_1 l_N)$,  (16)

where $l_1 \ge \cdots \ge l_N$ are the latent roots of $\Sigma^{-1/2}\hat\Sigma(S)\Sigma^{-1/2}$ and the expectation $E[L(\hat\Sigma, \Sigma)]$ is calculated under $S \sim W_N(\Sigma, q)$. In particular, when $N = 2$, the upper bound for the covariance matrix of the UZE is given by $1 + 2/(q-3)$ (Kariya, 1981). For general $N$, see Kurata and Kariya (1996).

As a loss function for estimating $\Sigma$, $L(\hat\Sigma, \Sigma)$ is invariant under the group $GT_+(N)$ of $N \times N$ lower triangular matrices with positive diagonal elements, with action $\hat\Sigma \to G\hat\Sigma G'$ and $\Sigma \to G\Sigma G'$, where $G \in GT_+(N)$. An

equivariant estimator of $\Sigma$ is of the form (9). In the case where $N = 2$, Bilodeau (1990) derived the optimal estimator of $\Sigma$ with respect to this loss function as

$\hat\Sigma_B \equiv \hat\Sigma_B(S) = TD_BT'$ with $D_B = \mathrm{diag}\{1, \sqrt{(q+3)/(q-1)}\}$

and proposed the GLSE $\hat\beta(\hat\Sigma_B \otimes I_m)$. Since

$L(\hat\Sigma(S(a\xi)), \Sigma) = L(\gamma(a^2)\hat\Sigma(S(\xi)), \Sigma) = L(\hat\Sigma(S(\xi)), \Sigma)$

holds for any $a > 0$, the upper bounds themselves also remain the same as long as $\mathcal{L}(\varepsilon) \in E_n(0, \Sigma \otimes I_m)$. This implies that the GLSE $\hat\beta(\hat\Sigma_B \otimes I_m)$ is still optimal under the elliptical symmetry of $\varepsilon$.

3. EFFICIENCIES OF GLSEs IN THE HETEROSCEDASTIC MODEL

In this section, we consider the heteroscedastic model of the form

$y = (y_1', \ldots, y_N')'$, $X = (X_1', \ldots, X_N')'$, $\varepsilon = (\varepsilon_1', \ldots, \varepsilon_N')'$, $\Omega = \Omega(\theta) = \mathrm{diag}\{\theta_1 I_{m_1}, \ldots, \theta_N I_{m_N}\}$ with $\theta = (\theta_1, \ldots, \theta_N)' : N \times 1$,  (17)

where $y_j : m_j \times 1$, $X_j : m_j \times k$, $\varepsilon_j : m_j \times 1$, $n = \sum_{j=1}^N m_j$, and $\mathcal{L}(\varepsilon) \in E_n(0, \Omega(\theta))$. Since the analysis in this section is quite similar to that of Section 2, we often omit the details.

Let $\hat\theta = \hat\theta(s)$ be an estimator of $\theta$ which depends on $y$ only through the random vector $s = (s_1, \ldots, s_N)' : N \times 1$, where

$s_j = y_j'[I_{m_j} - X_j(X_j'X_j)^+X_j']\,y_j / q_j$,  (18)

$p_j = \mathrm{rank}\,X_j$, and $q_j = m_j - p_j$. Let $C$ be the class of unbiased GLSEs of the form $\hat\beta(\Omega(\hat\theta))$ with $\Omega(\hat\theta) \in \mathcal{S}_+(n)$ a.s. We consider the following subclass $\mathcal{C}$ of $C$, whose definition is similar to that of Section 2:

$\mathcal{C} = \{\hat\beta(\Omega(\hat\theta)) \in C \mid \hat\theta(s)$ is a one-to-one continuous function of $s$, and for any $a > 0$ there exists a positive number $\gamma \equiv \gamma(a)$ such that $\hat\theta(as) = \gamma(a)\hat\theta(s)\}$.

This class contains the GLSEs with $\hat\theta(s)$'s of the form

$\hat\theta(s) = (f_1 s_1, \ldots, f_N s_N)'$,  (19)

where the $f_j$'s are positive constants. Let $\bar X_j$ and $\bar Z_j$ be any $m_j \times p_j$ and $m_j \times q_j$ matrices such that

$\bar X_j\bar X_j' = X_j(X_j'X_j)^+X_j'$, $\bar X_j'\bar X_j = I_{p_j}$

and

$\bar Z_j\bar Z_j' = I_{m_j} - X_j(X_j'X_j)^+X_j'$, $\bar Z_j'\bar Z_j = I_{q_j}$.

Then $\Gamma \equiv \mathrm{diag}\{\Gamma_1, \ldots, \Gamma_N\} \in O(n)$, where $\Gamma_j = (\bar X_j, \bar Z_j) \in O(m_j)$. The following notation is essentially the same as that of Section 2:

$\tilde\varepsilon = \Omega^{-1/2}\varepsilon \equiv (\tilde\varepsilon_1', \ldots, \tilde\varepsilon_N')'$,

$\eta = \Gamma'\tilde\varepsilon \equiv (\eta_1', \ldots, \eta_N')'$ with $\eta_j = (\bar X_j'\tilde\varepsilon_j', \bar Z_j'\tilde\varepsilon_j')' \equiv (\delta_j', \xi_j')'$,

where $\tilde\varepsilon_j : m_j \times 1$, $\delta_j : p_j \times 1$, and $\xi_j : q_j \times 1$. Let $\delta = (\delta_1', \ldots, \delta_N')' : (p_1 + \cdots + p_N) \times 1$ and $\xi = (\xi_1', \ldots, \xi_N')' : (q_1 + \cdots + q_N) \times 1$. As a function of $\xi$, $s \equiv s(\xi)$ satisfies $s(a\xi) = a^2 s(\xi)$ for any $a > 0$.

Theorem 3.1. Suppose that $\mathcal{L}(\varepsilon) \in E_n(0, \Omega(\theta))$.
(i) If $\hat\beta(\Omega(\hat\theta)) \in C$, then $E(\hat\beta(\Omega(\hat\theta)) \mid \hat\theta) = \beta$.
(ii) If $\hat\beta(\Omega(\hat\theta)) \in C$, then

$\mathrm{Cov}(\hat\beta(\Omega(\hat\theta)) \mid \hat\theta) = c(\hat\theta)\, H(\Omega(\hat\theta), \Omega(\theta))$  (20)

for some function $c$ such that $E(c(\hat\theta)) = 1$.
(iii) If $\hat\beta(\Omega(\hat\theta)) \in \mathcal{C}$, then $c(\hat\theta)$ and $H(\Omega(\hat\theta), \Omega(\theta))$ are independent.

Proof. Let $\bar X = \mathrm{diag}\{\bar X_1, \ldots, \bar X_N\}$. Then we can see that $X'\Omega(\hat\theta)^{-1}\varepsilon = X'\Omega(\hat\theta)^{-1}\Omega(\theta)^{1/2}\bar X\delta$. This proves (i). The proofs of (ii) and (iii) are parallel to those of Theorem 2.1. ∎

Theorem 3.2. For any GLSE $\hat\beta(\Omega(\hat\theta)) \in \mathcal{C}$, the covariance matrix $\mathrm{Cov}(\hat\beta(\Omega(\hat\theta)))$ remains the same as long as $\mathcal{L}(\varepsilon) \in E_n(0, \Omega(\theta))$.

The implications of this theorem are similar to those of Theorem 2.2 and are omitted.

REFERENCES

1. M. Bilodeau, On the choice of the loss function in covariance estimation, Statist. Decisions 8 (1990), 131-139.
2. M. L. Eaton, The Gauss-Markov theorem in multivariate analysis, in "Multivariate Analysis" (P. R. Krishnaiah, Ed.), Vol. 6, pp. 177-201, 1985.
3. K. T. Fang and Y. T. Zhang, "Generalized Multivariate Analysis," Springer-Verlag, New York/Berlin, 1990.

4. H. Hasegawa, On small sample properties of estimators after a preliminary test of independence in a constrained bivariate linear regression model, Econom. Stud. Quart. 45 (1994), 306-320.
5. H. Hasegawa, On small sample properties of Zellner's estimator for the case of two SUR equations with compound normal disturbances, Comm. Statist. Simulation Comput. 24 (1995), 45-59.
6. H. Hasegawa, On small sample properties of Zellner's estimator for two SUR equations model with specification error, J. Japan. Statist. Soc. 26 (1996), 25-40.
7. T. Kariya, Bounds for the covariance matrices of Zellner's estimator in the SUR model and the 2SAE in a heteroscedastic model, J. Amer. Statist. Assoc. 76 (1981), 975-979.
8. T. Kariya and Y. Toyooka, Nonlinear versions of the Gauss-Markov theorem and GLSE, in "Multivariate Analysis" (P. R. Krishnaiah, Ed.), Vol. 6, pp. 345-354, 1985.
9. H. Kurata and T. Kariya, Least upper bound for the covariance matrix of a generalized least squares estimator in regression with applications to a seemingly unrelated regression model and a heteroscedastic model, Ann. Statist. 24 (1996), 1547-1559.
10. R. J. Muirhead, "Aspects of Multivariate Statistical Theory," Wiley, New York, 1982.
11. N. S. Revankar, Some finite sample results in the context of two seemingly unrelated regression equations, J. Amer. Statist. Assoc. 69 (1974), 634-639.
12. V. K. Srivastava and D. E. A. Giles, "Seemingly Unrelated Regression Equations Models," Dekker, New York, 1987.
13. V. K. Srivastava and K. Maekawa, Efficiency properties of feasible generalized least squares estimators in SURE models under non-normal disturbances, J. Econometrics 66 (1995), 99-121.
14. A. Zellner, An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias, J. Amer. Statist. Assoc. 57 (1962), 348-368.
15. A. Zellner, Estimators for seemingly unrelated regression equations: Some exact finite sample results, J. Amer. Statist. Assoc. 58 (1963), 977-992.
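As a closing numerical sketch of the Section 3 setup (an illustration of ours, not part of the paper; the group sizes, seed, and variable names are assumptions), the statistics $s_j$ of (18) and the GLSE with $\hat\theta(s) = (f_1 s_1, \ldots, f_N s_N)'$ of (19), taking all $f_j = 1$, can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 3, 2
m = [8, 10, 12]                            # group sizes m_j
theta = np.array([0.5, 1.0, 2.0])          # true variances theta_j
beta = np.array([1.0, -1.0])

X_blocks = [rng.standard_normal((mj, k)) for mj in m]
X = np.vstack(X_blocks)                    # X = (X_1', ..., X_N')' as in (17)
eps = np.concatenate([np.sqrt(t) * rng.standard_normal(mj) for t, mj in zip(theta, m)])
y = X @ beta + eps

# s_j = y_j' [I - X_j (X_j' X_j)^+ X_j'] y_j / q_j as in (18), with q_j = m_j - rank X_j
s, start = [], 0
for Xj, mj in zip(X_blocks, m):
    yj = y[start:start + mj]
    resid = yj - Xj @ (np.linalg.pinv(Xj.T @ Xj) @ (Xj.T @ yj))  # within-group OLS residual
    qj = mj - np.linalg.matrix_rank(Xj)
    s.append(resid @ resid / qj)
    start += mj
s = np.array(s)

# Feasible GLSE with theta_hat(s) = s (all f_j = 1): weight row i by 1 / theta_hat_{j(i)},
# which applies Omega(theta_hat)^{-1} since Omega is block diagonal.
w = np.concatenate([np.full(mj, 1.0 / sj) for sj, mj in zip(s, m)])
Xw = X * w[:, None]                        # Omega(theta_hat)^{-1} X
beta_hat = np.linalg.solve(Xw.T @ X, Xw.T @ y)
print(beta_hat)
```

Because $\hat\theta(as) = a\hat\theta(s)$ (so $\gamma(a) = a$) and $\hat\theta$ is a one-to-one continuous function of $s$, this GLSE lies in $\mathcal{C}$, so by Theorem 3.2 its covariance matrix is the same for every elliptical error law in $E_n(0, \Omega(\theta))$.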