A matrix handling of predictions of new observations under a general random-effects model


Electronic Journal of Linear Algebra
Volume 29: Special volume for the Proceedings of the International Conference on Linear Algebra and its Applications dedicated to Professor Ravindra B. Bapat (2015)

Yongge Tian, China Economics and Management Academy, Central University of Finance and Economics, Beijing 100081, China (yongge.tian@gmail.com)

Recommended citation: Tian, Yongge (2015). "A matrix handling of predictions of new observations under a general random-effects model", Electronic Journal of Linear Algebra, Volume 29, pp. 30-45.

A MATRIX HANDLING OF PREDICTIONS UNDER A GENERAL LINEAR RANDOM-EFFECTS MODEL WITH NEW OBSERVATIONS

YONGGE TIAN

Abstract. Linear regression models that include random effects are commonly used to analyze longitudinal and correlated data. Assume that a general linear random-effects model $y = X\beta + \varepsilon$ with $\beta = A\alpha + \gamma$ is given, and that new observations in the future follow the linear model $y_f = X_f\beta + \varepsilon_f$. This paper shows how to establish a group of matrix equations and analytical formulas for calculating the best linear unbiased predictor (BLUP) of the vector $\phi = F\alpha + G\gamma + H\varepsilon + H_f\varepsilon_f$ of all unknown parameters in the two models, under a general assumption on the covariance matrix among the random vectors $\gamma$, $\varepsilon$ and $\varepsilon_f$, via solving a constrained quadratic matrix-valued function optimization problem. Many consequences on the BLUPs of $\phi$ and their covariance matrices, as well as additive decomposition equalities of the BLUPs with respect to the components of $\phi$, are established under various assumptions.

Key words. Linear random-effects model, Quadratic matrix-valued function, Löwner partial ordering, BLUP, BLUE, Covariance matrix.

AMS subject classifications. 15A09, 62H12, 62J05.

Dedicated to Professor R. B. Bapat on the occasion of his 60th birthday.

Received by the editors on February 22, 2015. Accepted for publication on May 28, 2015. Handling Editor: Simo Puntanen. Author address: China Economics and Management Academy, Central University of Finance and Economics, Beijing 100081, China (yongge.tian@gmail.com). Supported by the National Natural Science Foundation of China.

1. Introduction. Throughout this paper, $\mathbb{R}^{m\times n}$ stands for the collection of all $m\times n$ real matrices. The symbols $A'$, $r(A)$ and $\mathscr{R}(A)$ stand for the transpose, the rank and the range (column space) of a matrix $A \in \mathbb{R}^{m\times n}$, respectively. $I_m$ denotes the identity matrix of order $m$. The Moore-Penrose inverse of $A$, denoted by $A^+$, is defined to be the unique solution $G$ satisfying the four matrix equations $AGA = A$, $GAG = G$, $(AG)' = AG$, and $(GA)' = GA$. $P_A$, $E_A$, and $F_A$ stand for the three orthogonal projectors (symmetric idempotent matrices) $P_A = AA^+$, $E_A = A^{\perp} = I_m - AA^+$, and $F_A = I_n - A^+A$, where $E_A$ and $F_A$ satisfy $E_A = F_{A'}$ and $F_A = E_{A'}$. Two symmetric matrices $A$ and $B$ of the same size are said to satisfy the Löwner partial ordering $A \succeq B$ if $A - B$ is nonnegative definite.
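
Since the Moore-Penrose inverse and the projectors $P_A$, $E_A$ and $F_A$ are used throughout, a small numerical sketch may help fix the notation. The paper itself contains no code; the snippet below is an illustration in Python with NumPy (an assumed tool, not part of the original), checking the four Penrose equations and the identities $E_A = F_{A'}$ and $F_A = E_{A'}$ on a random rank-deficient matrix.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5x4 matrix of rank <= 3

Ap = np.linalg.pinv(A)  # Moore-Penrose inverse A^+

# The four Penrose equations: AGA = A, GAG = G, (AG)' = AG, (GA)' = GA.
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose((A @ Ap).T, A @ Ap)
assert np.allclose((Ap @ A).T, Ap @ A)

m, n = A.shape
P_A = A @ Ap                   # orthogonal projector onto R(A)
E_A = np.eye(m) - A @ Ap       # A^perp, projector onto the orthogonal complement of R(A)
F_A = np.eye(n) - Ap @ A       # projector onto the null space of A

# E_A = F_{A'} and F_A = E_{A'}.
Atp = np.linalg.pinv(A.T)
assert np.allclose(E_A, np.eye(m) - Atp @ A.T)
assert np.allclose(F_A, np.eye(n) - A.T @ Atp)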

Consider a general Linear Random-effects Model (LRM) defined by

$$y = X\beta + \varepsilon, \quad \beta = A\alpha + \gamma, \quad (1.1)$$

or marginally,

$$y = XA\alpha + X\gamma + \varepsilon, \quad (1.2)$$

where in the first-stage model, $y \in \mathbb{R}^{n\times 1}$ is a vector of observable response variables, $X \in \mathbb{R}^{n\times p}$ is a known matrix of arbitrary rank, and $\varepsilon \in \mathbb{R}^{n\times 1}$ is a vector of unobservable random errors, while in the second-stage model, $\beta \in \mathbb{R}^{p\times 1}$ is a vector of unobservable random variables, $A \in \mathbb{R}^{p\times k}$ is a known matrix of arbitrary rank, $\alpha \in \mathbb{R}^{k\times 1}$ is a vector of fixed but unknown parameters (fixed effects), and $\gamma \in \mathbb{R}^{p\times 1}$ is a vector of unobservable random variables (random effects). Concerning the expectation and covariance matrix of the random vectors $\gamma$ and $\varepsilon$ in (1.1), we adopt the following general assumption

$$E\begin{bmatrix} \gamma \\ \varepsilon \end{bmatrix} = 0, \quad \mathrm{Cov}\begin{bmatrix} \gamma \\ \varepsilon \end{bmatrix} = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} := \Sigma, \quad (1.3)$$

where $\Sigma_{11} \in \mathbb{R}^{p\times p}$, $\Sigma_{12} = \Sigma_{21}' \in \mathbb{R}^{p\times n}$, and $\Sigma_{22} \in \mathbb{R}^{n\times n}$ are known, while $\Sigma \in \mathbb{R}^{(p+n)\times(p+n)}$ is a nonnegative definite (nnd) matrix of arbitrary rank.

One of the ultimate goals of statistical modelling is to be able to predict future observations based on currently available information. Assume that new observations of response variables in the future follow the model

$$y_f = X_f\beta + \varepsilon_f = X_fA\alpha + X_f\gamma + \varepsilon_f, \quad (1.4)$$

where $X_f \in \mathbb{R}^{n_f\times p}$ is a known model matrix associated with the new observations, $\beta$ is the same vector of unknown parameters as in (1.1), and $\varepsilon_f \in \mathbb{R}^{n_f\times 1}$ is a vector of measurement errors associated with the new observations. Combining (1.2) and (1.4) yields the following marginal model

$$\widetilde{y} = \widetilde{X}A\alpha + \widetilde{X}\gamma + \widetilde{\varepsilon}, \quad \widetilde{y} = \begin{bmatrix} y \\ y_f \end{bmatrix}, \ \widetilde{X} = \begin{bmatrix} X \\ X_f \end{bmatrix}, \ \widetilde{\varepsilon} = \begin{bmatrix} \varepsilon \\ \varepsilon_f \end{bmatrix}. \quad (1.5)$$

In order to establish some general results on prediction analysis of (1.5), we assume that the expectation and covariance matrix of $\gamma$, $\varepsilon$, and $\varepsilon_f$ are given by

$$E\begin{bmatrix} \gamma \\ \varepsilon \\ \varepsilon_f \end{bmatrix} = 0, \quad \mathrm{Cov}\begin{bmatrix} \gamma \\ \varepsilon \\ \varepsilon_f \end{bmatrix} = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{21} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{31} & \Sigma_{32} & \Sigma_{33} \end{bmatrix} := \Sigma, \quad (1.6)$$

where we do not attach any further restrictions to the patterns of the submatrices $\Sigma_{ij}$ in (1.6), although they are usually taken to be of certain prescribed forms for a specified LRM in the statistical literature. In particular, the covariances between the residual or error vector and the other random factors in the model are usually assumed to be zero; this assumption is routinely imposed in practical applications in the biological sciences even in situations where it is invalid.

Linear regression models that include random effects are commonly used to analyze longitudinal and correlated data, since they are able to account for the variability of model parameters due to different factors that influence a response variable. The LRM in (1.1) is also called a nested model or two-level model in the statistical literature, where the two equations are called the first-stage model and the second-stage model, respectively. Statistical inference on LRMs is now an important part of data analysis, and a huge amount of literature is spread across statistics and other disciplines.
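
To make the assumptions (1.1)-(1.6) concrete, here is a minimal simulation sketch (again Python/NumPy, with hypothetical dimensions chosen only for illustration): a joint nnd covariance matrix $\Sigma$ of $(\gamma', \varepsilon', \varepsilon_f')'$ is generated without any structural restrictions, and $y$ and $y_f$ are then drawn according to (1.1) and (1.4).

import numpy as np

rng = np.random.default_rng(1)
n, p, k, nf = 8, 4, 2, 3      # hypothetical dimensions

X  = rng.standard_normal((n, p))
Xf = rng.standard_normal((nf, p))
A  = rng.standard_normal((p, k))
alpha = rng.standard_normal(k)

# An unstructured nonnegative-definite joint covariance of (gamma, eps, eps_f), eq. (1.6).
B = rng.standard_normal((p + n + nf, p + n + nf))
Sigma = B @ B.T

# Draw (gamma, eps, eps_f) jointly, then form y and y_f as in (1.1) and (1.4).
z = rng.multivariate_normal(np.zeros(p + n + nf), Sigma)
gamma, eps, eps_f = z[:p], z[p:p + n], z[p + n:]

beta = A @ alpha + gamma
y  = X @ beta + eps           # observed vector, eq. (1.1)
yf = Xf @ beta + eps_f        # future observations, eq. (1.4)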

As usual, a main task in the investigation of LRMs is to establish predictors/estimators of all unknown parameters in the model. Recall that the Best Linear Unbiased Predictors (BLUPs) of unknown random parameters and the Best Linear Unbiased Estimators (BLUEs) of fixed but unknown parameters in LRMs are fundamental concepts in current regression analysis, which are defined directly from the requirement of both unbiasedness and minimum covariance matrices of predictors/estimators of the unknown parameters. In fact, BLUPs/BLUEs are primary choices among all possible predictors/estimators due to their simplicity and optimality properties, and have wide applications in both pure and applied disciplines of statistical inference. The theory of BLUPs/BLUEs under linear regression models belongs to the classical methods of mathematical statistics. Along with the recent development of optimization methods in matrix theory, it is now easy to deal with the various complicated matrix operations occurring in the statistical inference of (1.5). In [25], the present author established a group of fundamental matrix equations and analytical formulas for calculating the BLUPs/BLUEs of all unknown parameters in (1.1) via solving a constrained quadratic matrix-valued function optimization problem, and also formulated an open problem of establishing matrix equations and formulas for calculating the BLUPs/BLUEs of the future $y_f$, $X_f\beta$, $X_fA\alpha$, $X_f\gamma$, and $\varepsilon_f$ in (1.4) from the observed response vector $y$ in (1.1). For convenience of representation, let

$$R = \begin{bmatrix} R_1 \\ R_2 \end{bmatrix} = \begin{bmatrix} X & I_n & 0 \\ X_f & 0 & I_{n_f} \end{bmatrix} = [\widetilde{X},\ I_{n+n_f}]. \quad (1.7)$$

Under the general assumptions in (1.3) and (1.6), the covariance matrix of the combined random vector $\widetilde{y}$ in (1.5) is given by

$$\mathrm{Cov}(\widetilde{y}) = \begin{bmatrix} \mathrm{Cov}(y) & \mathrm{Cov}\{y, y_f\} \\ \mathrm{Cov}\{y_f, y\} & \mathrm{Cov}(y_f) \end{bmatrix} = R\Sigma R' := V, \quad (1.8)$$

where

$$\mathrm{Cov}(y) = R_1\Sigma R_1' := V_{11}, \quad \mathrm{Cov}\{y, y_f\} = R_1\Sigma R_2' := V_{12}, \quad (1.9)$$

$$\mathrm{Cov}\{y_f, y\} = R_2\Sigma R_1' := V_{21}, \quad \mathrm{Cov}(y_f) = R_2\Sigma R_2' := V_{22}. \quad (1.10)$$

They are all known matrices under the assumptions in (1.1)-(1.6), and will occur in the statistical inference of (1.5). The assumptions in (1.1)-(1.6) are so general that they include almost all LRMs with different structures of covariance matrices as special cases. Note from (1.5) that under the general assumptions in (1.1)-(1.6), $y$ and $y_f$ are correlated. Hence, it is desirable to give predictions of the future observations $y_f$, as well as $X_f\beta$ and $\varepsilon_f$ in (1.4), from the original observation vector $y$ in (1.1) under the assumptions in (1.1)-(1.6). It is of great practical interest to simultaneously identify the important predictors that correspond to both the fixed- and random-effects components in an LRM. Some previous and recent work on simultaneous estimation and prediction of combined unknown parameters under regression models can be found in [3, 17, 21, 25, 26].
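
The block matrices in (1.7)-(1.10) translate directly into code. The sketch below (a self-contained Python/NumPy illustration, with the same hypothetical dimensions as above) assembles $R_1$ and $R_2$ and the four covariance blocks $V_{11}$, $V_{12}$, $V_{21}$, $V_{22}$.

import numpy as np

rng = np.random.default_rng(1)
n, p, nf = 8, 4, 3
X  = rng.standard_normal((n, p))
Xf = rng.standard_normal((nf, p))
B  = rng.standard_normal((p + n + nf, p + n + nf))
Sigma = B @ B.T                                              # joint Cov of (gamma, eps, eps_f)

R1 = np.hstack([X,  np.eye(n),         np.zeros((n, nf))])   # R_1 = [X, I_n, 0]
R2 = np.hstack([Xf, np.zeros((nf, n)), np.eye(nf)])          # R_2 = [X_f, 0, I_{n_f}]
R  = np.vstack([R1, R2])                                     # eq. (1.7)

V   = R  @ Sigma @ R.T    # Cov(y-tilde), eq. (1.8)
V11 = R1 @ Sigma @ R1.T   # Cov(y)
V12 = R1 @ Sigma @ R2.T   # Cov{y, y_f}
V21 = R2 @ Sigma @ R1.T   # Cov{y_f, y}
V22 = R2 @ Sigma @ R2.T   # Cov(y_f)

# V is the 2x2 block matrix of (1.8) built from the V_ij of (1.9)-(1.10).
assert np.allclose(V, np.block([[V11, V12], [V21, V22]]))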

In order to predict/estimate all unknown parameters in (1.5) simultaneously, we construct a vector containing the fixed effects, random effects, and error terms in (1.5) as follows:

$$\phi = F\alpha + G\gamma + H\varepsilon + H_f\varepsilon_f, \quad (1.11)$$

where $F \in \mathbb{R}^{s\times k}$, $G \in \mathbb{R}^{s\times p}$, $H \in \mathbb{R}^{s\times n}$, and $H_f \in \mathbb{R}^{s\times n_f}$ are known matrices. In this case,

$$E(\phi) = F\alpha, \quad \mathrm{Cov}(\phi) = J\Sigma J', \quad \mathrm{Cov}\{\phi, y\} = J\Sigma R_1', \quad J = [G,\ H,\ H_f]. \quad (1.12)$$

Eq. (1.11) contains all possible matrix and vector operations in (1.1)-(1.5) as special cases. For instance, if $F = T\widetilde{X}A$, $G = T\widetilde{X}$, and $[H,\ H_f] = T$, then (1.11) becomes

$$\phi = T\widetilde{X}A\alpha + T\widetilde{X}\gamma + T\widetilde{\varepsilon} = T\widetilde{y}, \quad (1.13)$$

which contains $y$, $y_f$, and $\widetilde{y}$ as special cases for different choices of $T$. Another well-known form of $\phi$ in (1.11) is the following target function discussed in [3, 4, 21, 26], which allows the prediction of both $y_f$ and $E(y_f)$:

$$\tau = \lambda y_f + (1-\lambda)E(y_f) = X_fA\alpha + \lambda X_f\gamma + \lambda\varepsilon_f, \quad (1.14)$$

where $\lambda$ $(0 \le \lambda \le 1)$ is a non-stochastic scalar assigning weights to the actual and expected values of $y_f$. Clearly, the problem of predicting a linear combination of the fixed and random effects can be formulated as a special case of the general prediction problem on $\phi$ in (1.11). Thus, the simultaneous statistical inference of all unknown parameters in (1.11) is a comprehensive piece of work, and will play a prescriptive role for various special statistical inference problems under (1.1) from both theoretical and applied points of view. Note that there are 13 given matrices in (1.1)-(1.6) and (1.11). Hence, the statistical inference of $\phi$ in (1.11) is no easy task, and we will encounter many tedious matrix operations involving the 13 given matrices, as demonstrated in Section 3 below.

The paper is organized as follows. Section 2 introduces the definitions of the BLUPs/BLUEs of all unknown parameters in (1.5) and (1.6), and gives a variety of matrix formulas needed to establish the BLUPs/BLUEs. Section 3 derives (I) equations and formulas for the BLUPs/BLUEs of $\phi$ and its components; (II) additive decompositions of the BLUPs of $\phi$ with respect to its components; (III) various formulas for the covariance matrix operations of the BLUPs/BLUEs of $\phi$ and its components. Section 4 formulates some further work on statistical inference of LRMs, and describes how to do prediction analysis from a given linear equation of some random vectors.

2. Preliminaries. We first introduce the definitions of the BLUPs/BLUEs of all unknown parameters in (1.5). A linear statistic $Ly$ under (1.1), where $L \in \mathbb{R}^{s\times n}$, is

said to have the same expectation as $\phi$ in (1.11) if and only if $E(Ly - \phi) = 0$ holds. If there exists an $L_0y$ such that

$$E(L_0y - \phi) = 0 \quad \text{and} \quad \mathrm{Cov}(Ly - \phi) \succeq \mathrm{Cov}(L_0y - \phi) \ \text{ s.t. } \ E(Ly - \phi) = 0 \quad (2.1)$$

hold, where $\mathrm{Cov}(Ly - \phi) = E[(Ly - \phi)(Ly - \phi)']$ is the matrix mean squared error (MMSE) of $\phi$ under $E(Ly - \phi) = 0$, then the linear statistic $L_0y$ is defined to be the BLUP of $\phi$ in (1.11), and is denoted by

$$L_0y = \mathrm{BLUP}(\phi) = \mathrm{BLUP}(F\alpha + G\gamma + H\varepsilon + H_f\varepsilon_f). \quad (2.2)$$

If $G = 0$, $H = 0$, and $H_f = 0$ in (1.11), the $L_0y$ satisfying (2.1) is called the BLUE of $F\alpha$ under (1.1), and is denoted by

$$L_0y = \mathrm{BLUE}(F\alpha). \quad (2.3)$$

It should be pointed out that (2.1) can equivalently be converted to a certain constrained matrix-valued function optimization problem in the Löwner partial ordering. This kind of equivalence between covariance matrix minimization problems and matrix-valued function minimization problems was first characterized in [17]; see also [19]. When $A = I_p$ and $\Sigma_{11} = 0$, (1.1) is the well-known general linear fixed-effects model. In this instance, the work on predictions of new observations has been widely considered since the 1970s; see, e.g., [5, 7, 8, 9, 10, 11, 12]. On the other hand, (1.1) is a special case of general Linear Mixed-effects Models (LMMs), and some previous results on BLUPs/BLUEs under LMMs can be found in the literature; see, e.g., [1, 6, 16, 18, 19].

The following lemma is well known; see [15].

Lemma 2.1. The linear matrix equation $AX = B$ is consistent if and only if $r[A,\ B] = r(A)$, or equivalently, $AA^+B = B$. In this case, the general solution of the equation can be written in the following parametric form: $X = A^+B + (I - A^+A)U$, where $U$ is an arbitrary matrix.

We also need the following known formulas on ranks of matrices; see [13, 22].

Lemma 2.2. Let $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{m\times k}$, and $C \in \mathbb{R}^{l\times n}$. Then

$$r[A,\ B] = r(A) + r(E_AB) = r(B) + r(E_BA), \quad (2.4)$$

$$r\begin{bmatrix} A \\ C \end{bmatrix} = r(A) + r(CF_A) = r(C) + r(AF_C), \quad (2.5)$$

$$r\begin{bmatrix} AA' & B \\ B' & 0 \end{bmatrix} = r[A,\ B] + r(B). \quad (2.6)$$

If $\mathscr{R}(A_1') \subseteq \mathscr{R}(B_1)$, $\mathscr{R}(A_2) \subseteq \mathscr{R}(B_1)$, $\mathscr{R}(A_2') \subseteq \mathscr{R}(B_2)$ and $\mathscr{R}(A_3) \subseteq \mathscr{R}(B_2)$, then

$$r(A_1B_1^+A_2) = r\begin{bmatrix} B_1 & A_2 \\ A_1 & 0 \end{bmatrix} - r(B_1), \quad (2.7)$$

$$r(A_1B_1^+A_2B_2^+A_3) = r\begin{bmatrix} B_1 & A_2 & 0 \\ 0 & B_2 & A_3 \\ A_1 & 0 & 0 \end{bmatrix} - r(B_1) - r(B_2). \quad (2.8)$$
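
As a quick check on Lemmas 2.1 and 2.2, the following Python/NumPy sketch (with randomly generated matrices, assumed purely for illustration) tests the consistency criterion $AA^+B = B$, sweeps the parametric form of the general solution with a few arbitrary $U$, and verifies the rank formula (2.4).

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
B = A @ rng.standard_normal((4, 3))     # built so that AX = B is consistent

Ap = np.linalg.pinv(A)
assert np.allclose(A @ Ap @ B, B)       # consistency criterion of Lemma 2.1

# General solution X = A^+ B + (I - A^+ A) U of Lemma 2.1, for arbitrary U.
for _ in range(3):
    U = rng.standard_normal((4, 3))
    Xsol = Ap @ B + (np.eye(4) - Ap @ A) @ U
    assert np.allclose(A @ Xsol, B)

# Rank formula (2.4): r[A, B2] = r(A) + r(E_A B2), with E_A = I - AA^+.
B2  = rng.standard_normal((6, 2))
E_A = np.eye(6) - A @ Ap
assert (np.linalg.matrix_rank(np.hstack([A, B2]))
        == np.linalg.matrix_rank(A) + np.linalg.matrix_rank(E_A @ B2))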

The following result on analytical solutions of a constrained matrix-valued function optimization problem was given in [25].

Lemma 2.3. Let

$$f(L) = (LC + D)M(LC + D)' \ \text{ s.t. } \ LA = B, \quad (2.9)$$

where $A \in \mathbb{R}^{p\times q}$, $B \in \mathbb{R}^{n\times q}$, $C \in \mathbb{R}^{p\times m}$, and $D \in \mathbb{R}^{n\times m}$ are given, $M \in \mathbb{R}^{m\times m}$ is nnd, and the matrix equation $LA = B$ is consistent. Then, there always exists a solution $L_0$ of $L_0A = B$ such that

$$f(L) \succeq f(L_0) \quad (2.10)$$

holds for all solutions of $LA = B$. In this case, the matrix $L_0$ satisfying (2.10) is determined by the following consistent matrix equation

$$L_0[A,\ CMC'A^{\perp}] = [B,\ -DMC'A^{\perp}], \quad (2.11)$$

while the general expression of $L_0$ and the corresponding $f(L_0)$ are given by

$$L_0 = [B,\ -DMC'A^{\perp}][A,\ CMC'A^{\perp}]^+ + U[A,\ CMC'A^{\perp}]^{\perp}, \quad (2.12)$$

$$f(L_0) = KMK' - KMC'A^{\perp}(A^{\perp}CMC'A^{\perp})^+A^{\perp}CMK', \quad (2.13)$$

where $K = BA^+C + D$, and $U \in \mathbb{R}^{n\times p}$ is arbitrary.

Many optimization problems in parametric statistical inference, as demonstrated below, can be converted to the minimization of (2.9) in the Löwner partial ordering, while analytical solutions to these optimization problems in statistics can be derived from Lemma 2.3. More results on (constrained) quadratic matrix-valued function optimization problems in the Löwner partial ordering can be found in [23, 24].
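
Lemma 2.3 is the computational engine of Section 3, so a numerical sanity check is worthwhile. The sketch below (Python/NumPy, randomly generated data; $A^{\perp}$ is computed as $I - AA^+$) builds one minimizer $L_0$ from (2.11)-(2.12) with $U = 0$, then verifies both feasibility and the Löwner inequality (2.10) against other feasible $L$.

import numpy as np

rng = np.random.default_rng(3)
p, q, n, m = 6, 3, 4, 5

A  = rng.standard_normal((p, q))
C  = rng.standard_normal((p, m))
D  = rng.standard_normal((n, m))
M0 = rng.standard_normal((m, m))
M  = M0 @ M0.T                          # nnd weight matrix
B  = rng.standard_normal((n, p)) @ A    # guarantees that LA = B is consistent

def f(L):
    """The quadratic matrix-valued function of (2.9)."""
    K = L @ C + D
    return K @ M @ K.T

Ap    = np.linalg.pinv(A)
Aperp = np.eye(p) - A @ Ap              # A^perp = E_A = I - AA^+

# Eq. (2.12) with U = 0: one solution of the consistent equation (2.11).
G  = np.hstack([A, C @ M @ C.T @ Aperp])
L0 = np.hstack([B, -D @ M @ C.T @ Aperp]) @ np.linalg.pinv(G)

assert np.allclose(L0 @ A, B)           # L0 satisfies the constraint

# By Lemma 2.1, every feasible L is B A^+ + U A^perp; check f(L) >= f(L0)
# in the Loewner ordering for a few random feasible L.
for _ in range(5):
    L = B @ Ap + rng.standard_normal((n, p)) @ Aperp
    assert np.linalg.eigvalsh(f(L) - f(L0)).min() > -1e-6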

3. Equations and formulas for BLUPs/BLUEs of all unknown parameters in LRM. In what follows, we assume that (1.1) is consistent, i.e., $y \in \mathscr{R}[XA,\ V_{11}]$ holds with probability 1. In this section, we first show how to derive the BLUP of the vector $\phi$ in (1.11), and then give some direct consequences under different assumptions.

Lemma 3.1. The vector $\phi$ in (1.11) is predictable by $y$ in (1.1) if and only if there exists $L \in \mathbb{R}^{s\times n}$ such that $LXA = F$, or equivalently,

$$\mathscr{R}[(XA)'] \supseteq \mathscr{R}(F'). \quad (3.1)$$

Proof. It is obvious that $E(Ly - \phi) = 0 \Leftrightarrow LXA\alpha - F\alpha = 0$ for all $\alpha \Leftrightarrow LXA = F$. From Lemma 2.1, this matrix equation is consistent if and only if (3.1) holds.

Theorem 3.2. Assume that $\phi$ in (1.11) is predictable by $y$ in (1.1), namely, (3.1) holds, and let $\widetilde{X}$, $R_1$, $V_{ij}$, and $J$ be as given in (1.5), (1.7), (1.9), (1.10), and (1.12). Also denote $\widehat{X} = XA$. Then

$$E(Ly - \phi) = 0 \ \text{and} \ \mathrm{Cov}(Ly - \phi) = \min \ \Leftrightarrow \ L[\widehat{X},\ \mathrm{Cov}(y)\widehat{X}^{\perp}] = [F,\ \mathrm{Cov}\{\phi, y\}\widehat{X}^{\perp}]. \quad (3.2)$$

The matrix equation in (3.2), called the fundamental equation for BLUP, is consistent as well under (3.1). In this case, the general solution $L$ and $\mathrm{BLUP}(\phi)$ can be written as

$$\mathrm{BLUP}(\phi) = Ly = \big([F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.3)$$

where $U \in \mathbb{R}^{s\times n}$ is arbitrary. In particular,

$$\mathrm{BLUP}(X\beta) = \big([\widehat{X},\ [X,\ 0,\ 0]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_1[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.4)$$

$$\mathrm{BLUP}(X_f\beta) = \big([X_fA,\ [X_f,\ 0,\ 0]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_2[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.5)$$

$$\mathrm{BLUE}(XA\alpha) = \big([\widehat{X},\ 0][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_3[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.6)$$

$$\mathrm{BLUE}(X_fA\alpha) = \big([X_fA,\ 0][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_4[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.7)$$

$$\mathrm{BLUP}(X\gamma) = \big([0,\ [X,\ 0,\ 0]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_5[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.8)$$

$$\mathrm{BLUP}(X_f\gamma) = \big([0,\ [X_f,\ 0,\ 0]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_6[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.9)$$

$$\mathrm{BLUP}(\varepsilon) = \big([0,\ [0,\ I_n,\ 0]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_7[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.10)$$

$$\mathrm{BLUP}(\varepsilon_f) = \big([0,\ [0,\ 0,\ I_{n_f}]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_8[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.11)$$

where the $U_i$ are arbitrary matrices of appropriate sizes, $i = 1, 2, \ldots, 8$. Further, the following results hold.

(a) $r[\widehat{X},\ V_{11}\widehat{X}^{\perp}] = r[\widehat{X},\ R_1\Sigma]$, $\mathscr{R}[\widehat{X},\ V_{11}\widehat{X}^{\perp}] = \mathscr{R}[\widehat{X},\ R_1\Sigma]$, and $\mathscr{R}(\widehat{X}) \cap \mathscr{R}(V_{11}\widehat{X}^{\perp}) = \{0\}$.

(b) $L_0$ is unique if and only if $r[\widehat{X},\ V_{11}] = n$.

(c) $\mathrm{BLUP}(\phi)$ is unique with probability 1 if and only if $y \in \mathscr{R}[\widehat{X},\ V_{11}]$, i.e., (1.1) is consistent.

(d) $\mathrm{BLUP}(\phi)$ satisfies

$$\mathrm{Cov}[\mathrm{BLUP}(\phi)] = [F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+V_{11}\big([F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+\big)', \quad (3.12)$$

$$\mathrm{Cov}\{\mathrm{BLUP}(\phi),\ \phi\} = [F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1\Sigma J', \quad (3.13)$$

$$\mathrm{Cov}(\phi) - \mathrm{Cov}[\mathrm{BLUP}(\phi)] = J\Sigma J' - [F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+V_{11}\big([F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+\big)', \quad (3.14)$$

$$\mathrm{Cov}[\phi - \mathrm{BLUP}(\phi)] = \big([F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1 - J\big)\Sigma\big([F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1 - J\big)'. \quad (3.15)$$

Proof. By noticing that

$$Ly - \phi = L\widehat{X}\alpha + LX\gamma + L\varepsilon - F\alpha - G\gamma - H\varepsilon - H_f\varepsilon_f = (L\widehat{X} - F)\alpha + (LX - G)\gamma + (L - H)\varepsilon - H_f\varepsilon_f,$$

we see that the covariance matrix of $Ly - \phi$ can be written as

$$\mathrm{Cov}(Ly - \phi) = \mathrm{Cov}[(LX - G)\gamma + (L - H)\varepsilon - H_f\varepsilon_f] = [LX - G,\ L - H,\ -H_f]\Sigma[LX - G,\ L - H,\ -H_f]'$$

$$= \big(L[X,\ I_n,\ 0] - [G,\ H,\ H_f]\big)\Sigma\big(L[X,\ I_n,\ 0] - [G,\ H,\ H_f]\big)' = (LR_1 - J)\Sigma(LR_1 - J)' := f(L). \quad (3.16)$$

In this setting, we see from Lemma 2.3 that the first part of (3.2) is equivalent to finding a solution $L_0$ of the consistent matrix equation $L_0\widehat{X} = F$ such that

$$f(L) \succeq f(L_0) \ \text{ s.t. } \ L\widehat{X} = F \quad (3.17)$$

holds in the Löwner partial ordering. Further from Lemma 2.3, there always exists a solution $L_0$ of $L_0\widehat{X} = F$ such that (3.17) holds, and this $L_0$ is determined by the matrix equation

$$L_0[\widehat{X},\ V_{11}\widehat{X}^{\perp}] = [F,\ J\Sigma R_1'\widehat{X}^{\perp}], \quad (3.18)$$

establishing the matrix equation in (3.2). Solving this equation by Lemma 2.1 gives the $L_0$ in (3.3). Also from (2.13),

$$f(L_0) = \mathrm{Cov}(L_0y - \phi) = \big([F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1 - J\big)\Sigma\big([F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1 - J\big)',$$

as required for (3.15). Result (a) is well known. Results (b) and (c) follow directly from (3.3). Taking the covariance matrix of (3.3), and simplifying by (1.9) and $\mathscr{R}(V_{11}) \subseteq \mathscr{R}[\widehat{X},\ V_{11}\widehat{X}^{\perp}]$, yields (3.12). From (1.11) and (3.3),

$$\mathrm{Cov}\{\mathrm{BLUP}(\phi),\ \phi\} = \mathrm{Cov}\{L_0y,\ \phi\} = \mathrm{Cov}\Big\{[F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1\begin{bmatrix} \gamma \\ \varepsilon \\ \varepsilon_f \end{bmatrix},\ J\begin{bmatrix} \gamma \\ \varepsilon \\ \varepsilon_f \end{bmatrix}\Big\} = [F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1\Sigma J',$$

establishing (3.13). Eq. (3.14) follows from (1.12) and (3.12).
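
Formula (3.3) is directly computable. The sketch below (Python/NumPy; the dimensions and the helper name blup_coefficient are illustrative assumptions, not from the paper) solves the fundamental BLUP equation (3.2) with $U = 0$, instantiates it for $\phi = y_f$ (i.e., $F = X_fA$ and $J = [X_f,\ 0,\ I_{n_f}]$), and verifies the unbiasedness condition $L_0\widehat{X} = F$ together with the Löwner optimality of Theorem 3.2 against other unbiased linear predictors.

import numpy as np

rng = np.random.default_rng(4)
n, p, k, nf = 8, 4, 2, 3

X  = rng.standard_normal((n, p))
Xf = rng.standard_normal((nf, p))
A  = rng.standard_normal((p, k))
B0 = rng.standard_normal((p + n + nf, p + n + nf))
Sigma = B0 @ B0.T                           # joint Cov of (gamma, eps, eps_f), eq. (1.6)

R1 = np.hstack([X, np.eye(n), np.zeros((n, nf))])
Xh = X @ A                                  # X-hat = XA
V11 = R1 @ Sigma @ R1.T                     # Cov(y)
Xp = np.eye(n) - Xh @ np.linalg.pinv(Xh)    # X-hat^perp

def blup_coefficient(F, J):
    """Solve L [X-hat, V11 X-hat^perp] = [F, J Sigma R1' X-hat^perp], eq. (3.2), with U = 0."""
    G = np.hstack([Xh, V11 @ Xp])
    return np.hstack([F, J @ Sigma @ R1.T @ Xp]) @ np.linalg.pinv(G)

F = Xf @ A                                                 # phi = y_f
J = np.hstack([Xf, np.zeros((nf, n)), np.eye(nf)])
L0 = blup_coefficient(F, J)

assert np.allclose(L0 @ Xh, F)              # unbiasedness: E(L0 y - y_f) = 0

def mmse(L):
    """Cov(L y - phi) = (L R1 - J) Sigma (L R1 - J)', cf. (3.16)."""
    K = L @ R1 - J
    return K @ Sigma @ K.T

# Any unbiased L is F X-hat^+ + W X-hat^perp; the BLUP must not lose to any of them.
for _ in range(5):
    L = F @ np.linalg.pinv(Xh) + rng.standard_normal((nf, n)) @ Xp
    assert np.linalg.eigvalsh(mmse(L) - mmse(L0)).min() > -1e-6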

The matrix equation in (3.2) shows that the BLUPs/BLUEs of all unknown parameters in (1.5) generally depend on the covariance matrix of the observed random vector $y$, and on the covariance matrix between $\phi$ and $y$. Because the matrix equation in (3.2) and the formulas in (3.3)-(3.15) are presented by common operations of the given matrices and their generalized inverses, Theorem 3.2 and its proof in fact provide a standard procedure for handling the matrix operations that occur in the theory of BLUPs/BLUEs under general LRMs. From the fundamental equations and formulas in Theorem 3.2, we are now able to derive many new and valuable consequences on the properties of BLUPs/BLUEs under various conditions.

Corollary 3.3. Let $\phi$ be as given in (1.11). Then, the following results hold.

(a) If $\phi$ is predictable by $y$ in (1.1), then $T\phi$ is predictable by $y$ in (1.1) as well for any matrix $T \in \mathbb{R}^{t\times s}$, and

$$\mathrm{BLUP}(T\phi) = T\,\mathrm{BLUP}(\phi) \quad (3.19)$$

holds.

(b) If $\phi$ is predictable by $y$ in (1.1), then $F\alpha$ is estimable by $y$ in (1.1) as well, the BLUP of $\phi$ can be decomposed as the sum

$$\mathrm{BLUP}(\phi) = \mathrm{BLUE}(F\alpha) + \mathrm{BLUP}(G\gamma) + \mathrm{BLUP}(H\varepsilon) + \mathrm{BLUP}(H_f\varepsilon_f), \quad (3.20)$$

and the following formulas for covariance matrices hold:

$$\mathrm{Cov}\{\mathrm{BLUE}(F\alpha),\ \mathrm{BLUP}(G\gamma + H\varepsilon + H_f\varepsilon_f)\} = 0, \quad (3.21)$$

$$\mathrm{Cov}[\mathrm{BLUP}(\phi)] = \mathrm{Cov}[\mathrm{BLUE}(F\alpha)] + \mathrm{Cov}[\mathrm{BLUP}(G\gamma + H\varepsilon + H_f\varepsilon_f)]. \quad (3.22)$$

(c) If $\alpha$ in (1.1) is estimable by $y$ in (1.1), i.e., $r(XA) = k$, then $\phi$ is predictable by $y$ in (1.1) as well. In this case, the following BLUP/BLUE decomposition equalities hold:

$$\mathrm{BLUP}\begin{bmatrix} \alpha \\ \gamma \\ \varepsilon \\ \varepsilon_f \end{bmatrix} = \begin{bmatrix} \mathrm{BLUE}(\alpha) \\ \mathrm{BLUP}(\gamma) \\ \mathrm{BLUP}(\varepsilon) \\ \mathrm{BLUP}(\varepsilon_f) \end{bmatrix}, \quad (3.23)$$

$$\mathrm{BLUP}(\phi) = F\,\mathrm{BLUE}(\alpha) + G\,\mathrm{BLUP}(\gamma) + H\,\mathrm{BLUP}(\varepsilon) + H_f\,\mathrm{BLUP}(\varepsilon_f). \quad (3.24)$$

Proof. The predictability of $T\phi$ follows from $\mathscr{R}[(XA)'] \supseteq \mathscr{R}(F') \supseteq \mathscr{R}(F'T')$. Also from (3.3),

$$\mathrm{BLUP}(T\phi) = \big([TF,\ TJ\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y = T\big([F,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_1[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y = T\,\mathrm{BLUP}(\phi),$$

where $U = TU_1$, establishing (3.19).

Note that $Ly$ in (3.3) can be decomposed as $Ly = (S_1 + S_2 + S_3 + S_4)y = S_1y + S_2y + S_3y + S_4y$, where

$$S_1 = [F,\ 0][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_1[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp},$$

$$S_2 = [0,\ [G,\ 0,\ 0]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_2[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp},$$

$$S_3 = [0,\ [0,\ H,\ 0]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_3[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp},$$

$$S_4 = [0,\ [0,\ 0,\ H_f]\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_4[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp},$$

and

$$\mathrm{BLUE}(F\alpha) = S_1y, \quad \mathrm{BLUP}(G\gamma) = S_2y, \quad \mathrm{BLUP}(H\varepsilon) = S_3y, \quad \mathrm{BLUP}(H_f\varepsilon_f) = S_4y,$$

establishing (3.20). We also obtain from (3.3) that

$$\mathrm{Cov}\{\mathrm{BLUE}(F\alpha),\ \mathrm{BLUP}(G\gamma + H\varepsilon + H_f\varepsilon_f)\} = [F,\ 0][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+V_{11}\big([0,\ J\Sigma R_1'\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+\big)'. \quad (3.25)$$

Applying (2.8) to (3.25) and simplifying by block elementary matrix operations together with (2.4) and (2.6), we obtain

$$r\big(\mathrm{Cov}\{\mathrm{BLUE}(F\alpha),\ \mathrm{BLUP}(G\gamma + H\varepsilon + H_f\varepsilon_f)\}\big) = r[\widehat{X},\ V_{11},\ R_1\Sigma J'] - r[\widehat{X},\ V_{11}] = r[\widehat{X},\ V_{11}] - r[\widehat{X},\ V_{11}] = 0 \quad (\text{by Theorem 3.2(a)}),$$

which implies that $\mathrm{Cov}\{\mathrm{BLUE}(F\alpha),\ \mathrm{BLUP}(G\gamma + H\varepsilon + H_f\varepsilon_f)\}$ is a null matrix, establishing (3.21). Eq. (3.22) follows from (3.20) and (3.21). Eqs. (3.23) and (3.24) follow from (1.11), (3.19) and (3.20).
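
The additivity (3.20) and the zero covariance (3.21) just established can also be watched numerically. In the sketch below (self-contained Python/NumPy, hypothetical dimensions), the helper coeff mirrors the coefficients $S_1, \ldots, S_4$ of the proof above with all $U_i = 0$, and their sum is compared with the joint coefficient of $\mathrm{BLUP}(\phi)$.

import numpy as np

rng = np.random.default_rng(5)
n, p, k, nf, s = 8, 4, 2, 3, 3

X  = rng.standard_normal((n, p))
A  = rng.standard_normal((p, k))
B0 = rng.standard_normal((p + n + nf, p + n + nf))
Sigma = B0 @ B0.T

R1  = np.hstack([X, np.eye(n), np.zeros((n, nf))])
Xh  = X @ A
V11 = R1 @ Sigma @ R1.T
Xp  = np.eye(n) - Xh @ np.linalg.pinv(Xh)
Gp  = np.linalg.pinv(np.hstack([Xh, V11 @ Xp]))     # [X-hat, V11 X-hat^perp]^+

def coeff(F, J):
    """Coefficient from (3.3) with U = 0; the S_i of the proof arise for one-block J."""
    return np.hstack([F, J @ Sigma @ R1.T @ Xp]) @ Gp

F  = rng.standard_normal((s, k))
G  = rng.standard_normal((s, p))
H  = rng.standard_normal((s, n))
Hf = rng.standard_normal((s, nf))
Z  = np.zeros

S1 = coeff(F,         Z((s, p + n + nf)))                       # BLUE(F alpha)
S2 = coeff(Z((s, k)), np.hstack([G, Z((s, n)), Z((s, nf))]))    # BLUP(G gamma)
S3 = coeff(Z((s, k)), np.hstack([Z((s, p)), H, Z((s, nf))]))    # BLUP(H eps)
S4 = coeff(Z((s, k)), np.hstack([Z((s, p)), Z((s, n)), Hf]))    # BLUP(H_f eps_f)
L  = coeff(F,         np.hstack([G, H, Hf]))                    # BLUP(phi)

assert np.allclose(L, S1 + S2 + S3 + S4)                        # eq. (3.20)

# Zero covariance between the BLUE part and the BLUP part, eq. (3.21).
C = S1 @ V11 @ (S2 + S3 + S4).T
assert np.abs(C).max() < 1e-6 * np.abs(V11).max()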

Corollary 3.4. Let

$$\phi_1 = F_1\alpha + G_1\gamma + H_1\varepsilon + H_{f1}\varepsilon_f, \quad \phi_2 = F_2\alpha + G_2\gamma + H_2\varepsilon + H_{f2}\varepsilon_f,$$

where $F_1, F_2 \in \mathbb{R}^{s\times k}$, $G_1, G_2 \in \mathbb{R}^{s\times p}$, $H_1, H_2 \in \mathbb{R}^{s\times n}$, and $H_{f1}, H_{f2} \in \mathbb{R}^{s\times n_f}$ are known matrices, and assume that both are predictable by $y$ in (1.1). Then, the following results hold.

(a) The sum $\phi_1 + \phi_2$ is predictable by $y$ in (1.1), and the BLUP of $\phi_1 + \phi_2$ satisfies

$$\mathrm{BLUP}(\phi_1 + \phi_2) = \mathrm{BLUP}(\phi_1) + \mathrm{BLUP}(\phi_2). \quad (3.26)$$

(b) $\mathrm{BLUP}(\phi_1) = \mathrm{BLUP}(\phi_2) \Leftrightarrow F_1 = F_2$ and $\mathscr{R}(R_1\Sigma J_1' - R_1\Sigma J_2') \subseteq \mathscr{R}(\widehat{X})$, where $J_1 = [G_1,\ H_1,\ H_{f1}]$ and $J_2 = [G_2,\ H_2,\ H_{f2}]$.

Proof. Eq. (3.26) follows from (3.20). From Theorem 3.2, the two equations for the coefficient matrices of $\mathrm{BLUP}(\phi_1) = L_1y$ and $\mathrm{BLUP}(\phi_2) = L_2y$ are given by

$$L_1[\widehat{X},\ V_{11}\widehat{X}^{\perp}] = [F_1,\ J_1\Sigma R_1'\widehat{X}^{\perp}], \quad L_2[\widehat{X},\ V_{11}\widehat{X}^{\perp}] = [F_2,\ J_2\Sigma R_1'\widehat{X}^{\perp}].$$

This pair of matrix equations has a common solution if and only if

$$r\begin{bmatrix} \widehat{X} & V_{11}\widehat{X}^{\perp} & \widehat{X} & V_{11}\widehat{X}^{\perp} \\ F_1 & J_1\Sigma R_1'\widehat{X}^{\perp} & F_2 & J_2\Sigma R_1'\widehat{X}^{\perp} \end{bmatrix} = r[\widehat{X},\ V_{11}\widehat{X}^{\perp},\ \widehat{X},\ V_{11}\widehat{X}^{\perp}], \quad (3.27)$$

where, by block elementary matrix operations,

$$r\begin{bmatrix} \widehat{X} & V_{11}\widehat{X}^{\perp} & \widehat{X} & V_{11}\widehat{X}^{\perp} \\ F_1 & J_1\Sigma R_1'\widehat{X}^{\perp} & F_2 & J_2\Sigma R_1'\widehat{X}^{\perp} \end{bmatrix} = r\begin{bmatrix} \widehat{X} & V_{11}\widehat{X}^{\perp} & 0 & 0 \\ 0 & 0 & F_2 - F_1 & (J_2\Sigma R_1' - J_1\Sigma R_1')\widehat{X}^{\perp} \end{bmatrix} = r[\widehat{X},\ V_{11}\widehat{X}^{\perp}] + r[F_2 - F_1,\ (J_2\Sigma R_1' - J_1\Sigma R_1')\widehat{X}^{\perp}],$$

and

$$r[\widehat{X},\ V_{11}\widehat{X}^{\perp},\ \widehat{X},\ V_{11}\widehat{X}^{\perp}] = r[\widehat{X},\ V_{11}\widehat{X}^{\perp}].$$

Hence, (3.27) is equivalent to $[F_2 - F_1,\ (J_2\Sigma R_1' - J_1\Sigma R_1')\widehat{X}^{\perp}] = 0$, which is further equivalent to (b) by Lemma 2.1.

As direct consequences of Theorem 3.2 and Corollary 3.3, we next give the BLUPs under (1.4).

Corollary 3.5. Let $R$, $R_1$, and $R_2$ be as given in (1.7), and denote

$$Z_1 = \mathrm{Cov}\{X_f\beta,\ y\} = X_f\Sigma_{11}X' + X_f\Sigma_{12}, \quad Z_2 = \mathrm{Cov}\{\varepsilon_f,\ y\} = \Sigma_{31}X' + \Sigma_{32}.$$

Then, the following results hold.

(a) The future observation vector $y_f$ in (1.4) is predictable by $y$ in (1.1) if and only if $\mathscr{R}(\widehat{X}') \supseteq \mathscr{R}[(X_fA)']$. In this case,

$$\mathrm{BLUP}(y_f) = \big([X_fA,\ V_{21}\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U[\widehat{X},\ V_{11}]^{\perp}\big)y, \quad (3.28)$$

and

$$\mathrm{Cov}[\mathrm{BLUP}(y_f)] = [X_fA,\ V_{21}\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+V_{11}\big([X_fA,\ V_{21}\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+\big)', \quad (3.29)$$

$$\mathrm{Cov}\{\mathrm{BLUP}(y_f),\ y_f\} = [X_fA,\ V_{21}\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+V_{12}, \quad (3.30)$$

$$\mathrm{Cov}(y_f) - \mathrm{Cov}[\mathrm{BLUP}(y_f)] = V_{22} - [X_fA,\ V_{21}\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+V_{11}\big([X_fA,\ V_{21}\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+\big)', \quad (3.31)$$

$$\mathrm{Cov}[y_f - \mathrm{BLUP}(y_f)] = \big([X_fA,\ V_{21}\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1 - R_2\big)\Sigma\big([X_fA,\ V_{21}\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1 - R_2\big)', \quad (3.32)$$

where $U \in \mathbb{R}^{n_f\times n}$ is arbitrary.

(b) $X_f\beta$ in (1.4) is predictable by $y$ in (1.1) if and only if $\mathscr{R}(\widehat{X}') \supseteq \mathscr{R}[(X_fA)']$. In this case,

$$\mathrm{BLUP}(X_f\beta) = \big([X_fA,\ Z_1\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+ + U_1[\widehat{X},\ V_{11}\widehat{X}^{\perp}]^{\perp}\big)y, \quad (3.33)$$

and

$$\mathrm{Cov}[\mathrm{BLUP}(X_f\beta)] = [X_fA,\ Z_1\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+V_{11}\big([X_fA,\ Z_1\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+\big)', \quad (3.34)$$

$$\mathrm{Cov}\{\mathrm{BLUP}(X_f\beta),\ X_f\beta\} = [X_fA,\ Z_1\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+Z_1', \quad (3.35)$$

$$\mathrm{Cov}(X_f\beta) - \mathrm{Cov}[\mathrm{BLUP}(X_f\beta)] = X_f\Sigma_{11}X_f' - [X_fA,\ Z_1\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+V_{11}\big([X_fA,\ Z_1\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+\big)', \quad (3.36)$$

$$\mathrm{Cov}[X_f\beta - \mathrm{BLUP}(X_f\beta)] = \big([X_fA,\ Z_1\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1 - [X_f,\ 0,\ 0]\big)\Sigma\big([X_fA,\ Z_1\widehat{X}^{\perp}][\widehat{X},\ V_{11}\widehat{X}^{\perp}]^+R_1 - [X_f,\ 0,\ 0]\big)', \quad (3.37)$$

where $U_1 \in \mathbb{R}^{n_f\times n}$ is arbitrary.

(c) $\varepsilon_f$ in (1.4) is always predictable by $y$ in (1.1), and

$$\mathrm{BLUP}(\varepsilon_f) = \big(Z_2(\widehat{X}^{\perp}V_{11}\widehat{X}^{\perp})^+ + U_2[\widehat{X},\ V_{11}]^{\perp}\big)y, \quad (3.38)$$

$$\mathrm{Cov}[\mathrm{BLUP}(\varepsilon_f)] = \mathrm{Cov}\{\mathrm{BLUP}(\varepsilon_f),\ \varepsilon_f\} = Z_2(\widehat{X}^{\perp}V_{11}\widehat{X}^{\perp})^+Z_2', \quad (3.39)$$

$$\mathrm{Cov}[\varepsilon_f - \mathrm{BLUP}(\varepsilon_f)] = \mathrm{Cov}(\varepsilon_f) - \mathrm{Cov}[\mathrm{BLUP}(\varepsilon_f)] = \Sigma_{33} - Z_2(\widehat{X}^{\perp}V_{11}\widehat{X}^{\perp})^+Z_2', \quad (3.40)$$

where $U_2 \in \mathbb{R}^{n_f\times n}$ is arbitrary.

Finally, we give a group of fundamental decomposition equalities for the BLUPs of $y$, $y_f$, and $\widetilde{y}$ in (1.5).

Corollary 3.6. The vector $\widetilde{y}$ in (1.5) is predictable by $y$ in (1.1) if and only if $\mathscr{R}[(XA)'] \supseteq \mathscr{R}[(X_fA)']$. In this case, the following decomposition equalities always hold:

$$y = \mathrm{BLUP}(y) = \mathrm{BLUE}(XA\alpha) + \mathrm{BLUP}(X\gamma) + \mathrm{BLUP}(\varepsilon), \quad (3.41)$$

$$\mathrm{BLUP}(y_f) = \mathrm{BLUE}(X_fA\alpha) + \mathrm{BLUP}(X_f\gamma) + \mathrm{BLUP}(\varepsilon_f), \quad (3.42)$$

$$\mathrm{BLUP}(\widetilde{y}) = \begin{bmatrix} \mathrm{BLUP}(y) \\ \mathrm{BLUP}(y_f) \end{bmatrix} = \begin{bmatrix} y \\ \mathrm{BLUP}(y_f) \end{bmatrix}. \quad (3.43)$$

The additive decomposition equalities of the BLUPs in (3.41) and (3.42) are in fact built-in restrictions on BLUPs/BLUEs, which sufficiently demonstrate the key roles of BLUPs/BLUEs in statistical inference of LRMs. Some previous discussions on built-in restrictions on BLUPs/BLUEs can be found, e.g., in [2, 14, 19, 20].
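
As a numerical companion to Corollary 3.5(c), the sketch below (Python/NumPy, hypothetical dimensions) checks that the compact coefficient $Z_2(\widehat{X}^{\perp}V_{11}\widehat{X}^{\perp})^+$ of (3.38) satisfies the fundamental BLUP equation (3.2) for $\varepsilon_f$, i.e., with $F = 0$ and $J = [0,\ 0,\ I_{n_f}]$, so that it yields the same predictor as the general formula (3.11) with probability 1.

import numpy as np

rng = np.random.default_rng(7)
n, p, k, nf = 8, 4, 2, 3

X  = rng.standard_normal((n, p))
A  = rng.standard_normal((p, k))
B0 = rng.standard_normal((p + n + nf, p + n + nf))
Sigma = B0 @ B0.T

R1  = np.hstack([X, np.eye(n), np.zeros((n, nf))])
Xh  = X @ A
V11 = R1 @ Sigma @ R1.T
Xp  = np.eye(n) - Xh @ np.linalg.pinv(Xh)          # X-hat^perp

# Z2 = Cov{eps_f, y} = Sigma_31 X' + Sigma_32, read off from the blocks of Sigma.
S31 = Sigma[p + n:, :p]
S32 = Sigma[p + n:, p:p + n]
Z2  = S31 @ X.T + S32

L0 = Z2 @ np.linalg.pinv(Xp @ V11 @ Xp)            # coefficient of eq. (3.38), U_2 = 0

# The fundamental equation (3.2) for phi = eps_f:
# L0 [X-hat, V11 X-hat^perp] = [0, Z2 X-hat^perp].
assert np.allclose(L0 @ Xh, np.zeros((nf, k)), atol=1e-6)
assert np.allclose(L0 @ V11 @ Xp, Z2 @ Xp)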

4. Remarks. This paper established a general theory on BLUPs/BLUEs under an LRM with original and future observations, and obtained many equations and formulas for calculating the BLUPs/BLUEs of all unknown parameters in the LRM via solving a constrained quadratic matrix-valued function optimization problem in the Löwner partial ordering. Because the BLUPs/BLUEs in the previous sections are formulated by common operations of the given matrices and their generalized inverses in the LRM, we can easily derive many new mathematical and statistical properties of the BLUPs/BLUEs under various assumptions. It is expected that more interesting results on statistical inference of LRMs can be derived from the equations and formulas in the previous sections. Here we mention a few:

(a) Derive closed-form formulas for calculating the ranks and inertias of the difference $\mathrm{Cov}[\phi - \mathrm{BLUP}(\phi)] - A$, and use them to derive necessary and sufficient conditions for the following equality and inequalities

$$\mathrm{Cov}[\phi - \mathrm{BLUP}(\phi)] = A \ (\succ A,\ \prec A,\ \succeq A,\ \preceq A)$$

to hold under the assumptions in (1.1)-(1.12), where $A$ is a symmetric matrix, say, $A = \mathrm{Cov}(\phi) - \mathrm{Cov}[\mathrm{BLUP}(\phi)]$.

(b) Establish necessary and sufficient conditions for the following decomposition equalities

$$\mathrm{Cov}[\mathrm{BLUP}(\phi)] = \mathrm{Cov}[\mathrm{BLUE}(F\alpha)] + \mathrm{Cov}[\mathrm{BLUP}(G\gamma)] + \mathrm{Cov}[\mathrm{BLUP}(H\varepsilon)] + \mathrm{Cov}[\mathrm{BLUP}(H_f\varepsilon_f)],$$

$$\mathrm{Cov}[\phi - \mathrm{BLUP}(\phi)] = \mathrm{Cov}[\mathrm{BLUE}(F\alpha)] + \mathrm{Cov}[G\gamma - \mathrm{BLUP}(G\gamma)] + \mathrm{Cov}[H\varepsilon - \mathrm{BLUP}(H\varepsilon)] + \mathrm{Cov}[H_f\varepsilon_f - \mathrm{BLUP}(H_f\varepsilon_f)]$$

to hold, respectively, under the assumptions in (1.1)-(1.12).

(c) Establish necessary and sufficient conditions for $\mathrm{BLUP}(\phi) = \phi$ (like fixed points of a matrix map) to hold. A special case is that $\mathrm{BLUP}(Ty) = Ty$ always holds for any matrix $T$; is this case unique? It is expected that this type of work will bring a deeper understanding of the statistical inference of BLUPs/BLUEs from many new aspects.

Finally, we give a general matrix formulation and derivation of BLUPs under random vector equations. Note that (1.1) is a special case of the following equation of random vectors

$$A_1y_1 + A_2y_2 + A_3y_3 = 0, \quad (4.1)$$

where $A_1$, $A_2$, and $A_3$ are three given matrices of appropriate sizes, and $y_1$, $y_2$ and $y_3$ are three random vectors of appropriate sizes satisfying

$$E\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}, \quad \mathrm{Cov}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{21} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{31} & \Sigma_{32} & \Sigma_{33} \end{bmatrix}, \quad (4.2)$$

in which $b_1$, $b_2$ and $b_3$ are constant vectors, or fixed but unknown parameter vectors. In this setting, taking the expectation and covariance matrix of (4.1) yields

$$A_1b_1 + A_2b_2 + A_3b_3 = 0 \quad \text{and} \quad [A_1,\ A_2,\ A_3]\begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{21} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{31} & \Sigma_{32} & \Sigma_{33} \end{bmatrix}\begin{bmatrix} A_1' \\ A_2' \\ A_3' \end{bmatrix} = 0. \quad (4.3)$$

Assume now that one of $y_1$, $y_2$ and $y_3$ is observed, say $y_1$, while the other two are unobservable, and that we want to predict the vector $L_2y_2 + L_3y_3$ from (4.1) and (4.2). In this case, we let

$$S = \{L_1y_1 - L_2y_2 - L_3y_3 \mid E(L_1y_1 - L_2y_2 - L_3y_3) = 0\}. \quad (4.4)$$

If there exists a matrix $\widehat{L}_1$ such that

$$\mathrm{Cov}(\widehat{L}_1y_1 - L_2y_2 - L_3y_3) \preceq \mathrm{Cov}(L_1y_1 - L_2y_2 - L_3y_3) \ \text{ s.t. } \ L_1y_1 - L_2y_2 - L_3y_3 \in S, \quad (4.5)$$

then the linear statistic $\widehat{L}_1y_1$ is defined to be the BLUP of $L_2y_2 + L_3y_3$ under (4.1), and is denoted by $\widehat{L}_1y_1 = \mathrm{BLUP}(L_2y_2 + L_3y_3)$. Note that

$$\mathrm{Cov}(L_1y_1 - L_2y_2 - L_3y_3) = [L_1,\ -L_2,\ -L_3]\begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{21} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{31} & \Sigma_{32} & \Sigma_{33} \end{bmatrix}\begin{bmatrix} L_1' \\ -L_2' \\ -L_3' \end{bmatrix} := f(L_1).$$

Thus, (4.5) is equivalent to

$$f(L_1) \succeq f(\widehat{L}_1) \ \text{ s.t. } \ L_1b_1 = L_2b_2 + L_3b_3. \quad (4.6)$$

By an approach similar to that given in the previous sections, we can establish a group of equations as follows:

$$A_1y_1 + \mathrm{BLUP}(A_2y_2) + \mathrm{BLUP}(A_3y_3) = 0 \ \text{ if } y_1 \text{ is observable},$$

$$\mathrm{BLUP}(A_1y_1) + A_2y_2 + \mathrm{BLUP}(A_3y_3) = 0 \ \text{ if } y_2 \text{ is observable},$$

$$\mathrm{BLUP}(A_1y_1) + \mathrm{BLUP}(A_2y_2) + A_3y_3 = 0 \ \text{ if } y_3 \text{ is observable}.$$

Eqs. (4.1)-(4.3) contain almost all linear structures occurring in regression analysis. It is expected that more valuable results can be obtained on (4.1)-(4.6), which can serve as a mathematical foundation for prediction analysis under various regression model assumptions.

REFERENCES

[1] R.B. Bapat. Linear Algebra and Linear Models. 3rd ed., Springer.
[2] B.A. Brumback. On the built-in restrictions in linear mixed models with application to smoothing spline analysis of variance. Commun. Statist. Theor. Meth., 39.
[3] A. Chaturvedi, S. Kesarwani and R. Chandra. Simultaneous prediction based on shrinkage estimator. In: Recent Advances in Linear Models and Related Areas, Essays in Honour of Helge Toutenburg, Springer.
[4] A. Chaturvedi, A.T.K. Wan and S.P. Singh. Improved multivariate prediction in a general linear model with an unknown error covariance matrix. J. Multivariate Analysis, 83.
[5] R. Christensen. Plane Answers to Complex Questions: The Theory of Linear Models. 3rd ed., Springer, New York.
[6] D. Harville. Extension of the Gauss-Markov theorem to include the estimation of random effects. Ann. Statist., 4, 1976.
[7] S.J. Haslett and S. Puntanen. Equality of BLUEs or BLUPs under two linear models using stochastic restrictions. Statist. Papers, 51.
[8] S.J. Haslett and S. Puntanen. Note on the equality of the BLUPs for new observations under two linear models. Acta Comm. Univ. Tartu. Math., 14:27-33, 2010.
[9] S.J. Haslett and S. Puntanen. On the equality of the BLUPs under two linear mixed models. Metrika, 74.
[10] C.R. Henderson. Best linear unbiased estimation and prediction under a selection model. Biometrics, 31, 1975.
[11] J. Isotalo and S. Puntanen. Linear prediction sufficiency for new observations in the general Gauss-Markov model. Comm. Statist. Theor. Meth., 35.
[12] B. Jonsson. Prediction with a linear regression model and errors in a regressor. Internat. J. Forecast., 10.
[13] G. Marsaglia and G.P.H. Styan. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra, 2, 1974.

[14] R.A. McLean, W.L. Sanders and W.W. Stroup. A unified approach to mixed linear models. Amer. Statist., 45:54-64, 1991.
[15] R. Penrose. A generalized inverse for matrices. Proc. Cambridge Phil. Soc., 51:406-413, 1955.
[16] D. Pfeffermann. On extensions of the Gauss-Markov theorem to the case of stochastic regression coefficients. J. Roy. Statist. Soc. Ser. B, 46, 1984.
[17] C.R. Rao. A lemma on optimization of matrix function and a review of the unified theory of linear estimation. In: Statistical Data Analysis and Inference (Y. Dodge, ed.), North-Holland, 1989.
[18] G.K. Robinson. That BLUP is a good thing: the estimation of random effects. Statist. Science, 6:15-32, 1991.
[19] S.R. Searle. The matrix handling of BLUE and BLUP in the mixed linear model. Linear Algebra Appl., 264:291-311, 1997.
[20] S.R. Searle. Built-in restrictions on best linear unbiased predictors (BLUP) of random effects in mixed models. Amer. Statist., 51:19-21, 1997.
[21] Shalabh. Performance of Stein-rule procedure for simultaneous prediction of actual and average values of study variable in linear regression models. Bull. Internat. Stat. Instit., 56.
[22] Y. Tian. More on maximal and minimal ranks of Schur complements with applications. Appl. Math. Comput., 152, 2004.
[23] Y. Tian. Solving optimization problems on ranks and inertias of some constrained nonlinear matrix functions via an algebraic linearization method. Nonlinear Analysis, 75, 2012.
[24] Y. Tian. Formulas for calculating the extremum ranks and inertias of a four-term quadratic matrix-valued function and their applications. Linear Algebra Appl., 437, 2012.
[25] Y. Tian. A new derivation of BLUPs under random-effects model. Metrika, DOI: /s
[26] H. Toutenburg. Prior Information in Linear Models. Wiley, New York, 1982.


More information

ON EXACT INFERENCE IN LINEAR MODELS WITH TWO VARIANCE-COVARIANCE COMPONENTS

ON EXACT INFERENCE IN LINEAR MODELS WITH TWO VARIANCE-COVARIANCE COMPONENTS Ø Ñ Å Ø Ñ Ø Ð ÈÙ Ð Ø ÓÒ DOI: 10.2478/v10127-012-0017-9 Tatra Mt. Math. Publ. 51 (2012), 173 181 ON EXACT INFERENCE IN LINEAR MODELS WITH TWO VARIANCE-COVARIANCE COMPONENTS Júlia Volaufová Viktor Witkovský

More information

Peter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8

Peter Hoff Linear and multilinear models April 3, GLS for multivariate regression 5. 3 Covariance estimation for the GLM 8 Contents 1 Linear model 1 2 GLS for multivariate regression 5 3 Covariance estimation for the GLM 8 4 Testing the GLH 11 A reference for some of this material can be found somewhere. 1 Linear model Recall

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

A generalization of Moore-Penrose biorthogonal systems

A generalization of Moore-Penrose biorthogonal systems Electronic Journal of Linear Algebra Volume 10 Article 11 2003 A generalization of Moore-Penrose biorthogonal systems Masaya Matsuura masaya@mist.i.u-tokyo.ac.jp Follow this and additional works at: http://repository.uwyo.edu/ela

More information

arxiv: v1 [math.ra] 14 Apr 2018

arxiv: v1 [math.ra] 14 Apr 2018 Three it representations of the core-ep inverse Mengmeng Zhou a, Jianlong Chen b,, Tingting Li c, Dingguo Wang d arxiv:180.006v1 [math.ra] 1 Apr 018 a School of Mathematics, Southeast University, Nanjing,

More information

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley

Review of Classical Least Squares. James L. Powell Department of Economics University of California, Berkeley Review of Classical Least Squares James L. Powell Department of Economics University of California, Berkeley The Classical Linear Model The object of least squares regression methods is to model and estimate

More information

Mohsen Pourahmadi. 1. A sampling theorem for multivariate stationary processes. J. of Multivariate Analysis, Vol. 13, No. 1 (1983),

Mohsen Pourahmadi. 1. A sampling theorem for multivariate stationary processes. J. of Multivariate Analysis, Vol. 13, No. 1 (1983), Mohsen Pourahmadi PUBLICATIONS Books and Editorial Activities: 1. Foundations of Time Series Analysis and Prediction Theory, John Wiley, 2001. 2. Computing Science and Statistics, 31, 2000, the Proceedings

More information

Decomposition of a ring induced by minus partial order

Decomposition of a ring induced by minus partial order Electronic Journal of Linear Algebra Volume 23 Volume 23 (2012) Article 72 2012 Decomposition of a ring induced by minus partial order Dragan S Rakic rakicdragan@gmailcom Follow this and additional works

More information

Operators with Compatible Ranges

Operators with Compatible Ranges Filomat : (7), 579 585 https://doiorg/98/fil7579d Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://wwwpmfniacrs/filomat Operators with Compatible Ranges

More information

The least eigenvalue of the signless Laplacian of non-bipartite unicyclic graphs with k pendant vertices

The least eigenvalue of the signless Laplacian of non-bipartite unicyclic graphs with k pendant vertices Electronic Journal of Linear Algebra Volume 26 Volume 26 (2013) Article 22 2013 The least eigenvalue of the signless Laplacian of non-bipartite unicyclic graphs with k pendant vertices Ruifang Liu rfliu@zzu.edu.cn

More information

of a Two-Operator Product 1

of a Two-Operator Product 1 Applied Mathematical Sciences, Vol. 7, 2013, no. 130, 6465-6474 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2013.39501 Reverse Order Law for {1, 3}-Inverse of a Two-Operator Product 1 XUE

More information

Econ 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A )

Econ 620. Matrix Differentiation. Let a and x are (k 1) vectors and A is an (k k) matrix. ) x. (a x) = a. x = a (x Ax) =(A + A (x Ax) x x =(A + A ) Econ 60 Matrix Differentiation Let a and x are k vectors and A is an k k matrix. a x a x = a = a x Ax =A + A x Ax x =A + A x Ax = xx A We don t want to prove the claim rigorously. But a x = k a i x i i=

More information

Additivity Of Jordan (Triple) Derivations On Rings

Additivity Of Jordan (Triple) Derivations On Rings Fayetteville State University DigitalCommons@Fayetteville State University Math and Computer Science Working Papers College of Arts and Sciences 2-1-2011 Additivity Of Jordan (Triple) Derivations On Rings

More information

Sociedad de Estadística e Investigación Operativa

Sociedad de Estadística e Investigación Operativa Sociedad de Estadística e Investigación Operativa Test Volume 14, Number 2. December 2005 Estimation of Regression Coefficients Subject to Exact Linear Restrictions when Some Observations are Missing and

More information

Generalized Principal Pivot Transform

Generalized Principal Pivot Transform Generalized Principal Pivot Transform M. Rajesh Kannan and R. B. Bapat Indian Statistical Institute New Delhi, 110016, India Abstract The generalized principal pivot transform is a generalization of the

More information

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION

BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 4, P AGES 655 664 BOUNDS OF MODULUS OF EIGENVALUES BASED ON STEIN EQUATION Guang-Da Hu and Qiao Zhu This paper is concerned with bounds of eigenvalues of a complex

More information

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG College of Mathematics and Physics Chongqing University Chongqing, 400030, P.R. China EMail: lihy.hy@gmail.com,

More information

An improved characterisation of the interior of the completely positive cone

An improved characterisation of the interior of the completely positive cone Electronic Journal of Linear Algebra Volume 2 Volume 2 (2) Article 5 2 An improved characterisation of the interior of the completely positive cone Peter J.C. Dickinson p.j.c.dickinson@rug.nl Follow this

More information

ENTANGLED STATES ARISING FROM INDECOMPOSABLE POSITIVE LINEAR MAPS. 1. Introduction

ENTANGLED STATES ARISING FROM INDECOMPOSABLE POSITIVE LINEAR MAPS. 1. Introduction Trends in Mathematics Information Center for Mathematical Sciences Volume 6, Number 2, December, 2003, Pages 83 91 ENTANGLED STATES ARISING FROM INDECOMPOSABLE POSITIVE LINEAR MAPS SEUNG-HYEOK KYE Abstract.

More information

Associated Hypotheses in Linear Models for Unbalanced Data

Associated Hypotheses in Linear Models for Unbalanced Data University of Wisconsin Milwaukee UWM Digital Commons Theses and Dissertations May 5 Associated Hypotheses in Linear Models for Unbalanced Data Carlos J. Soto University of Wisconsin-Milwaukee Follow this

More information

On the adjacency matrix of a block graph

On the adjacency matrix of a block graph On the adjacency matrix of a block graph R. B. Bapat Stat-Math Unit Indian Statistical Institute, Delhi 7-SJSS Marg, New Delhi 110 016, India. email: rbb@isid.ac.in Souvik Roy Economics and Planning Unit

More information

Advanced Econometrics I

Advanced Econometrics I Lecture Notes Autumn 2010 Dr. Getinet Haile, University of Mannheim 1. Introduction Introduction & CLRM, Autumn Term 2010 1 What is econometrics? Econometrics = economic statistics economic theory mathematics

More information

Singular Value and Norm Inequalities Associated with 2 x 2 Positive Semidefinite Block Matrices

Singular Value and Norm Inequalities Associated with 2 x 2 Positive Semidefinite Block Matrices Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 8 2017 Singular Value Norm Inequalities Associated with 2 x 2 Positive Semidefinite Block Matrices Aliaa Burqan Zarqa University,

More information

Generalization of Gracia's Results

Generalization of Gracia's Results Electronic Journal of Linear Algebra Volume 30 Volume 30 (015) Article 16 015 Generalization of Gracia's Results Jun Liao Hubei Institute, jliao@hubueducn Heguo Liu Hubei University, ghliu@hubueducn Yulei

More information

The generalized Schur complement in group inverses and (k + 1)-potent matrices

The generalized Schur complement in group inverses and (k + 1)-potent matrices The generalized Schur complement in group inverses and (k + 1)-potent matrices Julio Benítez Néstor Thome Abstract In this paper, two facts related to the generalized Schur complement are studied. The

More information

Application of Theorems of Schur and Albert

Application of Theorems of Schur and Albert University of South Carolina Scholar Commons Faculty Publications Mathematics, Department of 9-1-1976 Application of Theorems of Schur and Albert Thomas L. Markham University of South Carolina - Columbia,

More information