A note on the equality of the BLUPs for new observations under two linear models
ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA, Volume 14, 2010

A note on the equality of the BLUPs for new observations under two linear models

Stephen J. Haslett and Simo Puntanen

Abstract. We consider two linear models, $\mathcal{L}_1$ and $\mathcal{L}_2$, say, with new unobserved future observations. We give necessary and sufficient conditions, in the general situation without any rank assumptions, for the best linear unbiased predictor (BLUP) of the new observations under the model $\mathcal{L}_1$ to continue to be BLUP also under the model $\mathcal{L}_2$.

1. Introduction

In the literature the invariance of the best linear unbiased estimator (BLUE) of fixed effects under the general linear model
$$y = X\beta + \varepsilon, \tag{1.1}$$
where $E(y) = X\beta$, $E(\varepsilon) = 0$, and $\operatorname{cov}(y) = \operatorname{cov}(\varepsilon) = V_{11}$, has received much attention; see, for example, the papers by Rao (1967, 1971, 1973) and Mitra and Moore (1973). In particular, the connection between the ordinary least squares estimator and the BLUE of $X\beta$ has been studied extensively; see, e.g., Rao (1967), Zyskind (1967), and the review papers by Puntanen and Styan (1989) and Baksalary, Puntanen and Styan (1990a). To our knowledge, the equality of the best linear unbiased predictors (BLUPs) in two models $\mathcal{L}_1$ and $\mathcal{L}_2$, defined below, has received very little attention in the literature. Haslett and Puntanen (2010a,b) considered the equality of the BLUPs of the random factor under two mixed models. In their (2010a) paper they gave, without a proof, necessary and sufficient conditions in the general situation for the BLUP of the new observation under the model $\mathcal{L}_1$ to continue to be BLUP also under the model $\mathcal{L}_2$. The purpose of this paper is to provide a complete proof of this result.

Received March 8.
Mathematics Subject Classification. 15A42, 62J05, 62H12, 62H20.
Key words and phrases. Best linear unbiased estimator (BLUE), best linear unbiased predictor (BLUP), generalized inverse, linear fixed effects model, orthogonal projector.
Let us start formally by considering the general linear model (1.1), which can be represented as a triplet
$$\mathcal{F} = \{y, X\beta, V_{11}\}. \tag{1.2}$$
Vector $y$ is an $n \times 1$ observable random vector, $\varepsilon$ is an $n \times 1$ random error vector, $X$ is a known $n \times p$ model matrix, $\beta$ is a $p \times 1$ vector of fixed but unknown parameters, and $V_{11}$ is a known $n \times n$ nonnegative definite matrix. Let $y_f$ denote the $q \times 1$ unobservable random vector containing new future observations. The new observations are assumed to follow the linear model $y_f = X_f\beta + \varepsilon_f$, where $X_f$ is a known $q \times p$ model matrix associated with the new observations, $\beta$ is the same vector of unknown parameters as in (1.2), and $\varepsilon_f$ is a $q \times 1$ random error vector associated with the new observations. The expectation and the covariance matrix are
$$E\begin{pmatrix} y \\ y_f \end{pmatrix} = \begin{pmatrix} X\beta \\ X_f\beta \end{pmatrix} = \begin{pmatrix} X \\ X_f \end{pmatrix}\beta, \qquad \operatorname{cov}\begin{pmatrix} y \\ y_f \end{pmatrix} = \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix} = V,$$
where the entire covariance matrix $V$ is assumed to be known. For brevity, we denote
$$\mathcal{L}_1 = \left\{ \begin{pmatrix} y \\ y_f \end{pmatrix},\ \begin{pmatrix} X \\ X_f \end{pmatrix}\beta,\ \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix} \right\}.$$
Before proceeding, we introduce the notation used in this paper. We denote by $\mathbb{R}^{m \times n}$ the set of $m \times n$ real matrices, and $\mathbb{R}^m = \mathbb{R}^{m \times 1}$. We use the symbols $A'$, $A^-$, $A^+$, $\mathcal{C}(A)$, $\mathcal{C}(A)^\perp$, and $\mathcal{N}(A)$ to denote the transpose, a generalized inverse, the Moore–Penrose inverse, the column space, the orthogonal complement of the column space, and the null space of the matrix $A$, respectively. By $(A : B)$ we denote the partitioned matrix with $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{m \times k}$ as submatrices. By $A^\perp$ we denote any matrix satisfying $\mathcal{C}(A^\perp) = \mathcal{N}(A') = \mathcal{C}(A)^\perp$. Furthermore, we write
$$P_A = AA^+ = A(A'A)^-A'$$
to denote the orthogonal projector (with respect to the standard inner product) onto $\mathcal{C}(A)$. In particular, we denote $H = P_X$ and $M = I_n - P_X$. Notice that one choice for $X^\perp$ is of course $M$. We assume the model $\mathcal{L}_1$ to be consistent in the sense that
$$y \in \mathcal{C}(X : V_{11}) = \mathcal{C}(X : V_{11}M), \tag{1.3}$$
i.e., the observed value of $y$ lies in $\mathcal{C}(X : V_{11})$ with probability 1. The corresponding consistency is assumed in all models that we will consider.
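The setup above, including the column-space identity $\mathcal{C}(X : V_{11}) = \mathcal{C}(X : V_{11}M)$ used in (1.3), can be checked numerically. The following sketch is illustrative only: the sizes and the randomly generated matrices are assumptions of the example, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 6, 2, 2                      # illustrative sizes (assumptions)

X = rng.standard_normal((n, p))        # model matrix for y
Xf = rng.standard_normal((q, p))       # model matrix for the new observations y_f

# A nonnegative definite joint covariance matrix V of (y', y_f')'
A = rng.standard_normal((n + q, n + q))
V = A @ A.T
V11, V12 = V[:n, :n], V[:n, n:]
V21, V22 = V[n:, :n], V[n:, n:]

# Orthogonal projectors H = P_X and M = I_n - P_X
H = X @ np.linalg.pinv(X)
M = np.eye(n) - H

# C(X : V11) = C(X : V11 M): the two partitioned matrices span the same space,
# which we check via ranks (C(X : V11 M) is contained in C(X : V11) trivially)
r_left = np.linalg.matrix_rank(np.hstack([X, V11]))
r_right = np.linalg.matrix_rank(np.hstack([X, V11 @ M]))
```

With $V_{11}$ nonnegative definite the two ranks agree, which is the standard rank-additivity argument behind (1.3).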
The linear predictor $Gy$ is said to be unbiased for $y_f$ if the expected prediction error is zero: $E(y_f - Gy) = 0$ for all $\beta \in \mathbb{R}^p$. This is equivalent to $GX = X_f$, i.e., $X_f' = X'G'$. The requirement $\mathcal{C}(X_f') \subseteq \mathcal{C}(X')$ means that $X_f\beta$ is estimable under $\mathcal{F} = \{y, X\beta, V_{11}\}$. Now an unbiased
linear predictor $Gy$ is the best linear unbiased predictor, BLUP, for $y_f$ if the Löwner ordering
$$\operatorname{cov}(Gy - y_f) \leq_L \operatorname{cov}(Fy - y_f)$$
holds for all $F$ such that $Fy$ is an unbiased linear predictor for $y_f$. The following lemma characterizes the BLUP; for the proof, see, e.g., Christensen (2002, p. 283) and Isotalo and Puntanen (2006, p. 1015).

Lemma 1.1. Consider the linear model $\mathcal{L}_1$ (with new unobserved future observations), where $X_f\beta$ is a given vector of estimable parametric functions. The linear predictor $Gy$ is the best linear unbiased predictor (BLUP) for $y_f$ if and only if $G$ satisfies the equation
$$G(X : V_{11}X^\perp) = (X_f : V_{21}X^\perp) \tag{1.4}$$
for any given choice of $X^\perp$.

We can obtain, for example, the following matrices $G_i$ such that $G_iy$ equals $\operatorname{BLUP}(y_f)$:
$$G_1 = X_f(X'W^-X)^-X'W^- + V_{21}W^-[I_n - X(X'W^-X)^-X'W^-],$$
$$G_2 = X_f(X'W^-X)^-X'W^- + V_{21}M(MV_{11}M)^-M,$$
$$G_3 = X_f(X'X)^-X' + [V_{21} - X_f(X'X)^-X'V_{11}]M(MV_{11}M)^-M,$$
where $W$ (and the related $U$) is any matrix such that
$$W = V_{11} + XUX', \qquad \mathcal{C}(W) = \mathcal{C}(X : V_{11}).$$
Notice that equation (1.4) has a unique solution if and only if $\mathcal{C}(X : V_{11}M) = \mathbb{R}^n$. According to Rao and Mitra (1971, p. 24) the general solution to (1.4) can be written, for example, as
$$G = G_i + F(I_n - P_{(X : V_{11}M)}),$$
where the matrix $F$ is free to vary and $P_{(X : V_{11}M)}$ denotes the orthogonal projector onto the column space of the matrix $(X : V_{11}M)$. Even though the multiplier $G$ may not be unique, the observed value $Gy$ of the BLUP is unique with probability 1; this is due to the consistency requirement (1.3).

Consider now another linear model $\mathcal{L}_2$, which may differ from $\mathcal{L}_1$ through its covariance matrix and its model matrix; denoting the matrices of $\mathcal{L}_2$ with underlines, we write
$$\mathcal{L}_2 = \left\{ \begin{pmatrix} y \\ y_f \end{pmatrix},\ \begin{pmatrix} \underline{X} \\ \underline{X}_f \end{pmatrix}\beta,\ \begin{pmatrix} \underline{V}_{11} & \underline{V}_{12} \\ \underline{V}_{21} & \underline{V}_{22} \end{pmatrix} \right\}.$$
In the next section we consider the conditions under which the BLUP for $y_f$ under $\mathcal{L}_1$ continues to be BLUP under $\mathcal{L}_2$. Naturally, as pointed out by an anonymous referee, $\underline{X}_f\beta$ must be estimable under $\{y, \underline{X}\beta, \underline{V}_{11}\}$; otherwise $y_f$ does not have a BLUP under $\mathcal{L}_2$.
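The characterization in Lemma 1.1 and the representations $G_2$ and $G_3$ can be illustrated numerically. In the sketch below everything specific is an assumption of the example rather than part of the paper: the sizes, the randomly generated matrices, the choice $W = V_{11} + XX'$ (i.e., $U = I_p$), and the use of Moore–Penrose inverses for all the generalized inverses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 6, 2, 2                      # illustrative sizes (assumptions)

X = rng.standard_normal((n, p))
Xf = rng.standard_normal((q, p))
A = rng.standard_normal((n + q, n + q))
V = A @ A.T                            # nonnegative definite joint covariance
V11, V21 = V[:n, :n], V[n:, :n]

M = np.eye(n) - X @ np.linalg.pinv(X)  # M = I_n - P_X
W = V11 + X @ X.T                      # W = V11 + X U X' with U = I_p
Wm = np.linalg.pinv(W)                 # generalized inverse of W
XtX_m = np.linalg.pinv(X.T @ X)        # generalized inverse of X'X
XWX_m = np.linalg.pinv(X.T @ Wm @ X)   # generalized inverse of X'W^- X
MVM_m = np.linalg.pinv(M @ V11 @ M)    # generalized inverse of M V11 M

# The representations G_2 and G_3 of the text
G2 = Xf @ XWX_m @ X.T @ Wm + V21 @ M @ MVM_m @ M
G3 = Xf @ XtX_m @ X.T + (V21 - Xf @ XtX_m @ X.T @ V11) @ M @ MVM_m @ M

# Both solve the fundamental BLUP equation (1.4) with the choice X^perp = M:
# G (X : V11 M) = (Xf : V21 M)
lhs2 = G2 @ np.hstack([X, V11 @ M])
lhs3 = G3 @ np.hstack([X, V11 @ M])
rhs = np.hstack([Xf, V21 @ M])
```

Since both $G_2$ and $G_3$ reproduce $(X_f : V_{21}M)$, the values $G_2y$ and $G_3y$ coincide for every $y$ satisfying the consistency condition (1.3), even though $G_2$ and $G_3$ need not be equal as matrices.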
2. Main results

Theorem 2.1. Consider the models $\mathcal{L}_1$ and $\mathcal{L}_2$ (with new unobserved future observations), where $\underline{X}$, $\underline{X}_f$ and $\underline{V}_{ij}$ denote the model and covariance matrices of $\mathcal{L}_2$, and where $\mathcal{C}(X_f') \subseteq \mathcal{C}(X')$ and $\mathcal{C}(\underline{X}_f') \subseteq \mathcal{C}(\underline{X}')$. Write $\underline{M} = I_n - P_{\underline{X}}$. Then every representation of the BLUP for $y_f$ under the model $\mathcal{L}_1$ is also a BLUP for $y_f$ under the model $\mathcal{L}_2$ if and only if
$$\mathcal{C}\begin{pmatrix} \underline{X} & \underline{V}_{11}\underline{M} \\ \underline{X}_f & \underline{V}_{21}\underline{M} \end{pmatrix} \subseteq \mathcal{C}\begin{pmatrix} X & V_{11}M \\ X_f & V_{21}M \end{pmatrix}. \tag{2.1}$$

Proof. For the proof it is convenient to observe that (2.1) holds if and only if
$$\mathcal{C}\begin{pmatrix} \underline{V}_{11}\underline{M} \\ \underline{V}_{21}\underline{M} \end{pmatrix} \subseteq \mathcal{C}\begin{pmatrix} X & V_{11}M \\ X_f & V_{21}M \end{pmatrix} \tag{2.2a}$$
and
$$\mathcal{C}\begin{pmatrix} \underline{X} \\ \underline{X}_f \end{pmatrix} \subseteq \mathcal{C}\begin{pmatrix} X & V_{11}M \\ X_f & V_{21}M \end{pmatrix}. \tag{2.2b}$$
Let us first assume that every representation of the BLUP for $y_f$ under $\mathcal{L}_1$ continues to be BLUP under $\mathcal{L}_2$. Let $G_0$ be a general solution to (1.4):
$$G_0 = X_f(X'W^-X)^-X'W^- + V_{21}M(MV_{11}M)^-M + F(I_n - P_{(X : V_{11}M)}),$$
where $F$ is free to vary. Then $G_0$ has to satisfy also the other fundamental BLUP equation:
$$G_0(\underline{X} : \underline{V}_{11}\underline{M}) = (\underline{X}_f : \underline{V}_{21}\underline{M}), \tag{2.3}$$
where $\underline{M} = I_n - P_{\underline{X}}$. The $\underline{X}$-part of the condition (2.3) is
$$X_f(X'W^-X)^-X'W^-\underline{X} + V_{21}M(MV_{11}M)^-M\underline{X} + F(I_n - P_{(X : V_{11}M)})\underline{X} = \underline{X}_f. \tag{2.4}$$
Because (2.4) must hold for all matrices $F$, we necessarily have $F(I_n - P_{(X : V_{11}M)})\underline{X} = 0$ for all $F \in \mathbb{R}^{q \times n}$, which further implies $\mathcal{C}(\underline{X}) \subseteq \mathcal{C}(X : V_{11}M)$, i.e.,
$$\underline{X} = XK_1 + V_{11}MK_2 \quad \text{for some } K_1 \in \mathbb{R}^{p \times p} \text{ and } K_2 \in \mathbb{R}^{n \times p}, \tag{2.5}$$
and hence
$$X_f(X'W^-X)^-X'W^-\underline{X} = X_f(X'W^-X)^-X'W^-(XK_1 + V_{11}MK_2) = X_fK_1 + X_f(X'W^-X)^-X'W^-WMK_2 = X_fK_1, \tag{2.6}$$
where we have used $V_{11}M = WM$ and $X'W^-W = X'$. Note also that the assumption $\mathcal{C}(X_f') \subseteq \mathcal{C}(X')$ implies $X_f(X'W^-X)^-X'W^-X = X_f$; for properties of matrices of the type $(X'W^-X)^-$, see, e.g., Baksalary and Mathew (1990, Th. 2) and Baksalary, Puntanen and Styan (1990b, Th. 2). In view of (2.5) we have
$$V_{21}M(MV_{11}M)^-M\underline{X} = V_{21}M(MV_{11}M)^-MV_{11}MK_2.$$
Because $V$ is nonnegative definite, we have $\mathcal{C}(V_{12}) \subseteq \mathcal{C}(V_{11})$, and thereby
$$\mathcal{C}(MV_{12}) \subseteq \mathcal{C}(MV_{11}) = \mathcal{C}(MV_{11}M),$$
and so
$$V_{21}M(MV_{11}M)^-M\underline{X} = V_{21}MK_2. \tag{2.7}$$
Combining (2.6), (2.7) and (2.4) shows that $\underline{X}_f = X_fK_1 + V_{21}MK_2$, which together with (2.5) yields the following:
$$\begin{pmatrix} \underline{X} \\ \underline{X}_f \end{pmatrix} = \begin{pmatrix} X & V_{11}M \\ X_f & V_{21}M \end{pmatrix}\begin{pmatrix} K_1 \\ K_2 \end{pmatrix}, \tag{2.8}$$
i.e., (2.2b) holds. The right-hand part of the condition (2.3) is
$$X_f(X'W^-X)^-X'W^-\underline{V}_{11}\underline{M} \tag{2.9a}$$
$$+\ V_{21}M(MV_{11}M)^-M\underline{V}_{11}\underline{M} \tag{2.9b}$$
$$+\ F(I_n - P_{(X : V_{11}M)})\underline{V}_{11}\underline{M} \tag{2.9c}$$
$$=\ \underline{V}_{21}\underline{M}. \tag{2.9d}$$
Again, because (2.9) must hold for all matrices $F$, we necessarily have $F(I_n - P_{(X : V_{11}M)})\underline{V}_{11}\underline{M} = 0$ for all $F \in \mathbb{R}^{q \times n}$, which further implies $\mathcal{C}(\underline{V}_{11}\underline{M}) \subseteq \mathcal{C}(X : V_{11}M)$, and hence
$$\underline{V}_{11}\underline{M} = XL_1 + V_{11}ML_2 \quad \text{for some } L_1 \in \mathbb{R}^{p \times n} \text{ and } L_2 \in \mathbb{R}^{n \times n}. \tag{2.10}$$
Substituting (2.10) into (2.9a) gives (proceeding as was done above with the $\underline{X}$-part)
$$X_f(X'W^-X)^-X'W^-\underline{V}_{11}\underline{M} = X_fL_1,$$
while the term (2.9b) becomes
$$V_{21}M(MV_{11}M)^-MV_{11}ML_2 = V_{21}ML_2,$$
and hence (2.9) gives
$$X_fL_1 + V_{21}ML_2 = \underline{V}_{21}\underline{M}. \tag{2.11}$$
Combining (2.10) and (2.11) yields
$$\begin{pmatrix} \underline{V}_{11}\underline{M} \\ \underline{V}_{21}\underline{M} \end{pmatrix} = \begin{pmatrix} X & V_{11}M \\ X_f & V_{21}M \end{pmatrix}\begin{pmatrix} L_1 \\ L_2 \end{pmatrix}, \tag{2.12}$$
i.e., (2.2a) holds. To go the other way, suppose that (2.2a) and (2.2b) hold, and that $K_1$ and $K_2$, and $L_1$ and $L_2$, are defined as in (2.8) and (2.12), respectively. Moreover, assume that $Gy$ is the BLUP of $y_f$ under $\mathcal{L}_1$, i.e.,
$$G(X : V_{11}M) = (X_f : V_{21}M). \tag{2.13}$$
Postmultiplying (2.13) by $\begin{pmatrix} K_1 & L_1 \\ K_2 & L_2 \end{pmatrix}$ yields $G(\underline{X} : \underline{V}_{11}\underline{M}) = (\underline{X}_f : \underline{V}_{21}\underline{M})$, which confirms that $Gy$ is also the BLUP of $y_f$ under $\mathcal{L}_2$. Thus the proof is completed. $\square$

If both models $\mathcal{L}_1$ and $\mathcal{L}_2$ have the same model matrix part, we get the following corollary.

Corollary 2.1. Consider the same situation as in Theorem 2.1, but suppose that the two models $\mathcal{L}_1$ and $\mathcal{L}_2$ have the same model matrix part $\begin{pmatrix} X \\ X_f \end{pmatrix}$. Then every representation of the BLUP for $y_f$ under the model $\mathcal{L}_1$ is also a BLUP for $y_f$ under the model $\mathcal{L}_2$ if and only if
$$\mathcal{C}\begin{pmatrix} \underline{V}_{11}M \\ \underline{V}_{21}M \end{pmatrix} \subseteq \mathcal{C}\begin{pmatrix} X & V_{11}M \\ X_f & V_{21}M \end{pmatrix}.$$
Moreover, the sets of all representations of the BLUPs for $y_f$ under $\mathcal{L}_1$ and $\mathcal{L}_2$ are identical if and only if
$$\mathcal{C}\begin{pmatrix} X & \underline{V}_{11}M \\ X_f & \underline{V}_{21}M \end{pmatrix} = \mathcal{C}\begin{pmatrix} X & V_{11}M \\ X_f & V_{21}M \end{pmatrix}.$$

Acknowledgments. Thanks go to the anonymous referees for constructive remarks.

References

Baksalary, J. K. and Mathew, T. (1990). Rank invariance criterion and its application to the unified theory of least squares. Linear Algebra Appl. 127.
Baksalary, J. K., Puntanen, S. and Styan, G. P. H. (1990a). On T. W. Anderson's contributions to solving the problem of when the ordinary least-squares estimator is best linear unbiased and to characterizing rank additivity of matrices. In: The Collected Papers of T. W. Anderson (G. P. H. Styan, ed.), Wiley, New York.
Baksalary, J. K., Puntanen, S. and Styan, G. P. H. (1990b). A property of the dispersion matrix of the best linear unbiased estimator in the general Gauss–Markov model. Sankhyā Ser. A 52.
Christensen, R. (2002). Plane Answers to Complex Questions: The Theory of Linear Models, Third Edition. Springer, New York.
Haslett, S. J. and Puntanen, S. (2010a). On the equality of the BLUPs under two linear mixed models. Metrika, in press, available online.
Haslett, S. J. and Puntanen, S. (2010b). Equality of BLUEs or BLUPs under two linear models using stochastic restrictions. Statist. Papers, in press, available online.
Isotalo, J. and Puntanen, S. (2006). Linear prediction sufficiency for new observations in the general Gauss–Markov model. Comm. Statist. Theory Methods 35.
Mitra, S. K. and Moore, B. J. (1973). Gauss–Markov estimation with an incorrect dispersion matrix. Sankhyā Ser. A 35.
Puntanen, S. and Styan, G. P. H. (1989). The equality of the ordinary least squares estimator and the best linear unbiased estimator (with discussion). Amer. Statist. 43. [Commented by Oscar Kempthorne and by Shayle R. Searle; Reply by the authors on p. 164.]
Rao, C. R. (1967). Least squares theory using an estimated dispersion matrix and its application to measurement of signals. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, California, 1965/1966, Vol. 1 (L. M. Le Cam and J. Neyman, eds.), Univ. of California Press, Berkeley.
Rao, C. R. (1971). Unified theory of linear estimation. Sankhyā Ser. A 33. [Corrigendum (1972), 34, p. 194 and p. 477.]
Rao, C. R. (1973). Representations of best linear unbiased estimators in the Gauss–Markoff model with a singular dispersion matrix. J. Multivariate Anal. 3.
Rao, C. R. and Mitra, S. K. (1971). Generalized Inverse of Matrices and Its Applications. Wiley, New York.
Zyskind, G. (1967). On canonical forms, non-negative covariance matrices and best and simple least squares linear estimators in linear models. Ann. Math. Statist. 38.

Institute of Fundamental Sciences, Massey University, Palmerston North, New Zealand
E-mail address: sjhaslett@massey.ac.nz

Department of Mathematics and Statistics, FI University of Tampere, Tampere, Finland
E-mail address: simo.puntanen@uta.fi
More informationYongge Tian. China Economics and Management Academy, Central University of Finance and Economics, Beijing , China
On global optimizations of the rank and inertia of the matrix function A 1 B 1 XB 1 subject to a pair of matrix equations B 2 XB 2, B XB = A 2, A Yongge Tian China Economics and Management Academy, Central
More informationMatrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory. Dennis S. Bernstein
Matrix Mathematics Theory, Facts, and Formulas with Application to Linear Systems Theory Dennis S. Bernstein PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Contents Special Symbols xv Conventions, Notation,
More informationMi-Hwa Ko. t=1 Z t is true. j=0
Commun. Korean Math. Soc. 21 (2006), No. 4, pp. 779 786 FUNCTIONAL CENTRAL LIMIT THEOREMS FOR MULTIVARIATE LINEAR PROCESSES GENERATED BY DEPENDENT RANDOM VECTORS Mi-Hwa Ko Abstract. Let X t be an m-dimensional
More informationJournal of Multivariate Analysis. Sphericity test in a GMANOVA MANOVA model with normal error
Journal of Multivariate Analysis 00 (009) 305 3 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva Sphericity test in a GMANOVA MANOVA
More informationEcon 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A )
Econ 60 Matrix Differentiation Let a and x are k vectors and A is an k k matrix. a x a x = a = a x Ax =A + A x Ax x =A + A x Ax = xx A We don t want to prove the claim rigorously. But a x = k a i x i i=
More informationFirst-order random coefficient autoregressive (RCA(1)) model: Joint Whittle estimation and information
ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 9, Number, June 05 Available online at http://acutm.math.ut.ee First-order random coefficient autoregressive RCA model: Joint Whittle
More informationMatrix Inequalities by Means of Block Matrices 1
Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,
More informationSTAT 151A: Lab 1. 1 Logistics. 2 Reference. 3 Playing with R: graphics and lm() 4 Random vectors. Billy Fang. 2 September 2017
STAT 151A: Lab 1 Billy Fang 2 September 2017 1 Logistics Billy Fang (blfang@berkeley.edu) Office hours: Monday 9am-11am, Wednesday 10am-12pm, Evans 428 (room changes will be written on the chalkboard)
More informationAbstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.
HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization
More informationChapter 1. Matrix Algebra
ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface
More informationEXPLICIT SOLUTION TO MODULAR OPERATOR EQUATION T XS SX T = A
EXPLICIT SOLUTION TO MODULAR OPERATOR EQUATION T XS SX T A M MOHAMMADZADEH KARIZAKI M HASSANI AND SS DRAGOMIR Abstract In this paper by using some block operator matrix techniques we find explicit solution
More informationAdvanced Econometrics I
Lecture Notes Autumn 2010 Dr. Getinet Haile, University of Mannheim 1. Introduction Introduction & CLRM, Autumn Term 2010 1 What is econometrics? Econometrics = economic statistics economic theory mathematics
More informationOn product of range symmetric matrices in an indefinite inner product space
Functional Analysis, Approximation and Computation 8 (2) (2016), 15 20 Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/faac On product
More informationNOTES ON MATRICES OF FULL COLUMN (ROW) RANK. Shayle R. Searle ABSTRACT
NOTES ON MATRICES OF FULL COLUMN (ROW) RANK Shayle R. Searle Biometrics Unit, Cornell University, Ithaca, N.Y. 14853 BU-1361-M August 1996 ABSTRACT A useful left (right) inverse of a full column (row)
More informationX -1 -balance of some partially balanced experimental designs with particular emphasis on block and row-column designs
DOI:.55/bile-5- Biometrical Letters Vol. 5 (5), No., - X - -balance of some partially balanced experimental designs with particular emphasis on block and row-column designs Ryszard Walkowiak Department
More informationMore Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction
Sankhyā : The Indian Journal of Statistics 2007, Volume 69, Part 4, pp. 700-716 c 2007, Indian Statistical Institute More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order
More informationAPPLICATIONS OF THE HYPER-POWER METHOD FOR COMPUTING MATRIX PRODUCTS
Univ. Beograd. Publ. Eletrotehn. Fa. Ser. Mat. 15 (2004), 13 25. Available electronically at http: //matematia.etf.bg.ac.yu APPLICATIONS OF THE HYPER-POWER METHOD FOR COMPUTING MATRIX PRODUCTS Predrag
More informationRegression. Oscar García
Regression Oscar García Regression methods are fundamental in Forest Mensuration For a more concise and general presentation, we shall first review some matrix concepts 1 Matrices An order n m matrix is
More informationRECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK
RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK TRNKA PAVEL AND HAVLENA VLADIMÍR Dept of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic mail:
More informationELA
Electronic Journal of Linear Algebra ISSN 181-81 A publication of te International Linear Algebra Society ttp://mat.tecnion.ac.il/iic/ela RANK AND INERTIA OF SUBMATRICES OF THE MOORE PENROSE INVERSE OF
More informationOn some matrices related to a tree with attached graphs
On some matrices related to a tree with attached graphs R. B. Bapat Indian Statistical Institute New Delhi, 110016, India fax: 91-11-26856779, e-mail: rbb@isid.ac.in Abstract A tree with attached graphs
More informationWeaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms
DOI: 10.1515/auom-2017-0004 An. Şt. Univ. Ovidius Constanţa Vol. 25(1),2017, 49 60 Weaker assumptions for convergence of extended block Kaczmarz and Jacobi projection algorithms Doina Carp, Ioana Pomparău,
More informationLECTURES IN ECONOMETRIC THEORY. John S. Chipman. University of Minnesota
LCTURS IN CONOMTRIC THORY John S. Chipman University of Minnesota Chapter 5. Minimax estimation 5.. Stein s theorem and the regression model. It was pointed out in Chapter 2, section 2.2, that if no a
More informationOn Bayesian Inference with Conjugate Priors for Scale Mixtures of Normal Distributions
Journal of Applied Probability & Statistics Vol. 5, No. 1, xxx xxx c 2010 Dixie W Publishing Corporation, U. S. A. On Bayesian Inference with Conjugate Priors for Scale Mixtures of Normal Distributions
More informationSOLUTIONS TO PROBLEMS
Z Z T Z conometric Theory, 1, 005, 483 488+ Printed in the United States of America+ DI: 10+10170S06646660505079 SLUTINS T PRBLMS Solutions to Problems Posed in Volume 0(1) and 0() 04.1.1. A Hausman Test
More informationSome results on matrix partial orderings and reverse order law
Electronic Journal of Linear Algebra Volume 20 Volume 20 2010 Article 19 2010 Some results on matrix partial orderings and reverse order law Julio Benitez jbenitez@mat.upv.es Xiaoji Liu Jin Zhong Follow
More informationLarge Sample Properties of Estimators in the Classical Linear Regression Model
Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in
More information