Journal of Multivariate Analysis: Sphericity test in a GMANOVA–MANOVA model with normal error


Journal of Multivariate Analysis 100 (2009) 305–312

Contents lists available at ScienceDirect: Journal of Multivariate Analysis. Journal homepage: www.elsevier.com/locate/jmva

Sphericity test in a GMANOVA–MANOVA model with normal error

Peng Bai
Statistics and Mathematics College, Yunnan University of Finance and Economics, Kunming 650221, PR China

ARTICLE INFO
Article history: Received 4 May 2008; available online 5 March 2009.
AMS 2000 subject classifications: 62E15; 62E20; 62H10; 62H15
Keywords: GMANOVA–MANOVA model; Sphericity test; Null distribution; Meijer's $G^{p,0}_{p,p}$ function; Asymptotic distribution; Bartlett type correction

ABSTRACT
For the GMANOVA–MANOVA model with normal error, $Y = XB_1Z_1' + B_2Z_2' + E$, $E \sim N_{q\times n}(0, I_n\otimes\Sigma)$, we study the sphericity hypothesis testing problem for the covariance matrix: $\Sigma = \lambda I_q$ ($\lambda$ unknown). It is shown that the null distribution of $\Lambda^{2/n}$, where $\Lambda$ is the likelihood ratio statistic, can be expressed in terms of Meijer's $G^{q,0}_{q,q}$ function, and that the asymptotic null distribution of $-2\log\Lambda$ is $\chi^2_{q(q+1)/2-1}$ (as $n\to\infty$). In addition, the Bartlett type correction $-2\rho\log\Lambda$ of $-2\log\Lambda$ is shown to be asymptotically distributed as $\chi^2_{q(q+1)/2-1}$ up to order $n^{-2}$ for an appropriate Bartlett adjustment factor $\rho$ under the null hypothesis. © 2009 Elsevier Inc. All rights reserved.

1. Introduction

The model considered here is a GMANOVA–MANOVA model, which can be defined as
$$Y = XB_1Z_1' + B_2Z_2' + E, \qquad (1)$$
where $Y$ is a $q\times n$ observable random response matrix, $X$ is a $q\times p$ known constant matrix, $Z_1$ and $Z_2$ are $n\times m$ and $n\times s$ known design matrices, respectively, $B_1$ and $B_2$ are $p\times m$ and $q\times s$ unknown regression coefficient matrices, respectively, $E$ is a $q\times n$ unobservable random error matrix, and $A'$ denotes the transpose of a matrix $A$. The model (1) was first proposed by Chinchilli and Elswick [1], and has been applied extensively in various fields, including biology, medicine and economics. The error matrix $E$ is often assumed to be normal:
$$E \sim N_{q\times n}(0, I_n\otimes\Sigma), \qquad (2)$$
i.e.
$E_1,\dots,E_n$ i.i.d. $N_q(0,\Sigma)$, where $E \hat{=} (E_1,\dots,E_n)$ and $\Sigma\,(>0)$ is a $q\times q$ unknown covariance matrix.

Under the assumption (2), a variety of investigations have been made into statistical inference for the parameter matrices $B_1$, $B_2$ and $\Sigma$; a good summary of the related results can be found in Kollo and von Rosen [2], and the many other published papers are not listed here, being not directly relevant to our subject. The available literature clearly shows that most published work on the model (1) with (2) has focused on inference for $B_1$ and $B_2$, and that few studies have considered $\Sigma$. In this paper, we study an inference problem for $\Sigma$, referred to as the sphericity hypothesis, which can be described as
$$H: \Sigma = \lambda I_q, \quad \lambda\,(>0)\ \text{unknown}. \qquad (3)$$

E-mail address: baipeng68@hotmail.com

0047-259X/$ – see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.jmva
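For readers who want to experiment numerically, the model and the sphericity hypothesis $H$ above are straightforward to simulate. The sketch below (NumPy; all dimensions, the design matrices and the value of $\lambda$ are illustrative assumptions, not taken from the paper) generates one draw of the observable response matrix $Y$ under $H$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical choices, not from the paper).
q, n, p, m, s = 4, 60, 2, 3, 2
lam = 2.0  # unknown scale under the sphericity hypothesis H: Sigma = lam * I_q

X = rng.standard_normal((q, p))    # known q x p constant matrix
Z1 = rng.standard_normal((n, m))   # known n x m design matrix
Z2 = rng.standard_normal((n, s))   # known n x s design matrix
B1 = rng.standard_normal((p, m))   # unknown p x m regression coefficients
B2 = rng.standard_normal((q, s))   # unknown q x s regression coefficients

# E ~ N_{q x n}(0, I_n (x) Sigma): the n columns are i.i.d. N_q(0, Sigma);
# under H, Sigma = lam * I_q, so E is scaled white noise.
E = np.sqrt(lam) * rng.standard_normal((q, n))

# Model (1): Y = X B1 Z1' + B2 Z2' + E, an observable q x n matrix.
Y = X @ B1 @ Z1.T + B2 @ Z2.T + E
print(Y.shape)
```

With these choices $\mathrm{rk}(Z) = m + s = 5$ almost surely, so the sample size condition $n \ge \mathrm{rk}(Z) + q$, needed later for the residual matrix $S$ to be positive definite, is comfortably satisfied.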

To the best of our knowledge, the likelihood ratio test for the above hypothesis in the model (1) under the assumption (2) has not been studied before. The remainder of this article is arranged as follows. Section 2 derives the likelihood ratio statistic $\Lambda$ for the sphericity hypothesis (3). In Section 3, the exact null density function of $\Lambda^{2/n}$ is expressed in terms of Meijer's $G^{q,0}_{q,q}$ function, the asymptotic null distribution of $-2\log\Lambda$ is shown to be $\chi^2_{q(q+1)/2-1}$ (as $n\to\infty$), and $-2\rho\log\Lambda$ is shown to be asymptotically distributed as $\chi^2_{q(q+1)/2-1}$ up to order $n^{-2}$ for an appropriate Bartlett adjustment factor $\rho$ under the null hypothesis.

2. Likelihood ratio statistic

In order to obtain the likelihood ratio test statistic for the sphericity hypothesis (3), we need the following results. We follow the symbols and notation of Muirhead [3] unless otherwise specified.

Lemma 1 (Bai [4]). For the GMANOVA–MANOVA model (1) with normal error (2), the maximum likelihood estimates of $B_1$, $B_2$ and $\Sigma$ are given (with probability one) by
$$\hat B_1 = (X'S^{-1}X)^- X'S^{-1}YQ_{Z_2}Z_1(Z_1'Q_{Z_2}Z_1)^-,$$
$$\hat B_2 = (Y - X\hat B_1Z_1')Z_2(Z_2'Z_2)^-,$$
$$\hat\Sigma = \frac1n (Y - X\hat B_1Z_1')Q_{Z_2}(Y - X\hat B_1Z_1')',$$
respectively, where $S = YQ_ZY'$, $Z \hat{=} (Z_1, Z_2)$, $P_A \hat{=} A(A'A)^-A'$, $Q_A \hat{=} I_p - P_A$ (for a $p\times q$ matrix $A$), and $A^-$ denotes an arbitrary g-inverse of $A$ such that $AA^-A = A$.

Remark 1. The necessary and sufficient condition for the random matrix $S$ in Lemma 1 to be positive definite with probability one is $n \ge \mathrm{rk}(Z) + q$ (Okamoto [5]), where $\mathrm{rk}(A)$ denotes the rank of a matrix $A$. In addition, although the expressions for both $\hat B_1$ and $\hat B_2$ contain g-inverses, we have
$$\hat\Sigma = \frac1n\left\{S + [I_q - X(X'S^{-1}X)^-X'S^{-1}](S_2 - S)[I_q - X(X'S^{-1}X)^-X'S^{-1}]'\right\}, \qquad (4)$$
which, together with $\mathcal R(X') = \mathcal R(X'S^{-1}X)$, shows that $\hat\Sigma$ is unique, where $S_2 = YQ_{Z_2}Y'$ and $\mathcal R(A)$ denotes the linear subspace spanned by the columns of a matrix $A$.

Lemma 2. Let $K(B_1, B_2, \Sigma; Y)$ denote the likelihood function of $(B_1, B_2, \Sigma)$ based on $Y$ in the model (1) and (2), i.e.
$$K(B_1,B_2,\Sigma;Y) = (2\pi)^{-qn/2}|\Sigma|^{-n/2}\,\mathrm{etr}\left\{-\frac12 (Y - XB_1Z_1' - B_2Z_2')'\Sigma^{-1}(Y - XB_1Z_1' - B_2Z_2')\right\},$$
$B_1\in\mathbb R^{p\times m}$, $B_2\in\mathbb R^{q\times s}$, $\Sigma>0$; (5)

then
$$\sup_{B_1\in\mathbb R^{p\times m},\,B_2\in\mathbb R^{q\times s},\,\lambda>0} K(B_1,B_2,\lambda I_q;Y) = (2\pi e\hat\lambda)^{-qn/2}, \qquad (6)$$
where $\hat\lambda = \frac{1}{qn}\,\mathrm{tr}(P_XS + Q_XS_2)$.

Proof. It follows from (5) that
$$L(B_1,B_2,\lambda;Y) \hat{=} \log K(B_1,B_2,\lambda I_q;Y) = -\frac{qn}{2}\log(2\pi\lambda) - \frac{1}{2\lambda}\,\mathrm{tr}\{(Y-XB_1Z_1'-B_2Z_2')(Y-XB_1Z_1'-B_2Z_2')'\},$$
$B_1\in\mathbb R^{p\times m}$, $B_2\in\mathbb R^{q\times s}$, $\lambda>0$, (7)

which implies that
$$L(B_1,B_2;Y) \hat{=} \sup_{\lambda>0} L(B_1,B_2,\lambda;Y) = -\frac{qn}{2}\log\{2\pi e\,\tilde\lambda(B_1,B_2)\}, \quad B_1\in\mathbb R^{p\times m},\ B_2\in\mathbb R^{q\times s}, \qquad (8)$$

where $\tilde\lambda(B_1,B_2) = \frac{1}{qn}\,\mathrm{tr}\{(Y-XB_1Z_1'-B_2Z_2')(Y-XB_1Z_1'-B_2Z_2')'\}$ and $\mathrm{tr}(A)$ denotes the trace of a matrix $A$. Note that
$$(Y-XB_1Z_1'-B_2Z_2')(Y-XB_1Z_1'-B_2Z_2')' = (Y-XB_1Z_1')Q_{Z_2}(Y-XB_1Z_1')' + (B_2-\tilde B_2(B_1))Z_2'Z_2(B_2-\tilde B_2(B_1))'$$
for all $B_1\in\mathbb R^{p\times m}$, $B_2\in\mathbb R^{q\times s}$, where $\tilde B_2(B_1) = (Y-XB_1Z_1')Z_2(Z_2'Z_2)^-$; hence
$$\mathrm{tr}\{(Y-XB_1Z_1'-B_2Z_2')(Y-XB_1Z_1'-B_2Z_2')'\} \ge \mathrm{tr}\{(Y-XB_1Z_1')Q_{Z_2}(Y-XB_1Z_1')'\}, \quad B_1\in\mathbb R^{p\times m}, \qquad (9)$$
where equality holds if $B_2 = \tilde B_2(B_1)$. Again note that
$$(Y-XB_1Z_1')Q_{Z_2}(Y-XB_1Z_1')' = [Y-X\hat B_1^0Z_1'-X(B_1-\hat B_1^0)Z_1']\,Q_{Z_2}\,[Y-X\hat B_1^0Z_1'-X(B_1-\hat B_1^0)Z_1']'$$
$$= [Q_XYP_{Q_{Z_2}Z_1}-X(B_1-\hat B_1^0)Z_1'Q_{Z_2}][Q_XYP_{Q_{Z_2}Z_1}-X(B_1-\hat B_1^0)Z_1'Q_{Z_2}]' + S, \quad B_1\in\mathbb R^{p\times m},$$
where $\hat B_1^0 = (X'X)^-X'YQ_{Z_2}Z_1(Z_1'Q_{Z_2}Z_1)^-$; thus
$$\mathrm{tr}\{(Y-XB_1Z_1')Q_{Z_2}(Y-XB_1Z_1')'\} = \mathrm{tr}(P_XS+Q_XS_2) + \mathrm{tr}\{X(B_1-\hat B_1^0)Z_1'Q_{Z_2}Z_1(B_1-\hat B_1^0)'X'\} \ge \mathrm{tr}(P_XS+Q_XS_2), \qquad (10)$$
where equality holds if $B_1 = \hat B_1^0$. From the definition of $\tilde\lambda(B_1,B_2)$, (9) and (10), we have
$$\tilde\lambda(B_1,B_2) \ge \hat\lambda \hat{=} \frac{1}{qn}\,\mathrm{tr}(P_XS+Q_XS_2), \quad B_1\in\mathbb R^{p\times m},\ B_2\in\mathbb R^{q\times s},$$
where equality holds if $B_2 = \tilde B_2(B_1)$ and $B_1 = \hat B_1^0$. This and (8) show that
$$\sup_{B_1\in\mathbb R^{p\times m},\,B_2\in\mathbb R^{q\times s},\,\lambda>0} L(B_1,B_2,\lambda;Y) = \sup_{B_1\in\mathbb R^{p\times m},\,B_2\in\mathbb R^{q\times s}} L(B_1,B_2;Y) = -\frac{qn}{2}\log(2\pi e\hat\lambda),$$
where the supremum is attained at $B_2 = \tilde B_2(B_1)$, $B_1 = \hat B_1^0$. Therefore, from (7), we obtain (6). □

From Lemmas 1 and 2, we immediately have

Corollary 1. For the model (1) and (2), the likelihood ratio statistic for testing the sphericity hypothesis (3) is
$$\Lambda = \left(\frac{|\hat\Sigma|}{\hat\lambda^q}\right)^{n/2}, \qquad (11)$$
where $\hat\Sigma$ and $\hat\lambda$ are given in Remark 1 and Lemma 2, respectively.

Proof. It follows from Lemma 1 and (5) that
$$\sup_{B_1\in\mathbb R^{p\times m},\,B_2\in\mathbb R^{q\times s},\,\Sigma>0} K(B_1,B_2,\Sigma;Y) = (2\pi e)^{-qn/2}|\hat\Sigma|^{-n/2},$$
which together with (6) means that the likelihood ratio statistic for testing the sphericity hypothesis (3) is given by (11). □

3. Null distribution

In this section, we establish the exact null density function of $\Lambda^{2/n}$ and the asymptotic null distribution of $-2\log\Lambda$. The following theorem plays the key role in deriving these results.

Theorem 1.
The null distribution of the likelihood ratio statistic $\Lambda$ is determined by
$$\Lambda^{2/n} \overset{d}{=} \frac{q^q\,|U|\,|V|}{[\mathrm{tr}(U)+\mathrm{tr}(V)+W]^q}, \qquad (12)$$
where $X \overset{d}{=} Y$ denotes that the random variables $X$ and $Y$ have the same distribution, and $U$, $V$, $W$ are mutually independent with
$$U \sim W_{\mathrm{rk}(X)}(n_1, I_{\mathrm{rk}(X)}), \qquad V \sim W_{q-\mathrm{rk}(X)}(n_2, I_{q-\mathrm{rk}(X)}), \qquad W \sim \chi^2_{\mathrm{rk}(X)(q-\mathrm{rk}(X))}, \qquad (13)$$

where $n_1 = n - \mathrm{rk}(Z) - q + \mathrm{rk}(X)$ and $n_2 = n - \mathrm{rk}(Z_2)$.

Proof. Let the singular value decomposition of $X$ be
$$X = P\begin{pmatrix}\Delta & 0\\ 0 & 0\end{pmatrix}Q', \qquad (14)$$
where $P$ and $Q$ are $q\times q$ and $p\times p$ orthogonal matrices, respectively, and $\Delta$ is a $\mathrm{rk}(X)\times\mathrm{rk}(X)$ nonsingular diagonal matrix; then
$$P_X = P\begin{pmatrix}I_{\mathrm{rk}(X)} & 0\\ 0 & 0\end{pmatrix}P', \qquad Q_X = P\begin{pmatrix}0 & 0\\ 0 & I_{q-\mathrm{rk}(X)}\end{pmatrix}P'. \qquad (15)$$
Make the transformation
$$T \hat{=} \begin{pmatrix}T_{11} & T_{12}\\ T_{21} & T_{22}\end{pmatrix} = P'SP, \qquad (16)$$
where $T_{11}$ is a $\mathrm{rk}(X)\times\mathrm{rk}(X)$ random matrix; then
$$S = PTP' = P\begin{pmatrix}T_{11} & T_{12}\\ T_{21} & T_{22}\end{pmatrix}P', \qquad (17)$$
where we write $T_{11\cdot2} = T_{11} - T_{12}T_{22}^{-1}T_{21}$ and $T_{22\cdot1} = T_{22} - T_{21}T_{11}^{-1}T_{12}$. Hence, from (14) and (17), we have
$$X'S^{-1}X = Q\begin{pmatrix}\Delta T_{11\cdot2}^{-1}\Delta & 0\\ 0 & 0\end{pmatrix}Q',$$
which means that
$$(X'S^{-1}X)^- = Q\begin{pmatrix}\Delta^{-1}T_{11\cdot2}\Delta^{-1} & C_{12}\\ C_{21} & C_{22}\end{pmatrix}Q', \qquad (18)$$
where $C_{12}$, $C_{21}$ and $C_{22}$ are arbitrary. Substituting (14) and (16)–(18) into (4) yields
$$\hat\Sigma = \frac1n P\left\{T + \begin{pmatrix}0 & T_{12}T_{22}^{-1}\\ 0 & I_{q-\mathrm{rk}(X)}\end{pmatrix}P'(\tilde S_2 - S)P\begin{pmatrix}0 & T_{12}T_{22}^{-1}\\ 0 & I_{q-\mathrm{rk}(X)}\end{pmatrix}'\right\}P', \qquad (19)$$
where $S = EQ_ZE'$ and $\tilde S_2 = EQ_{Z_2}E'$. Furthermore, make the transformation
$$\tilde T \hat{=} \begin{pmatrix}\tilde T_{11} & \tilde T_{12}\\ \tilde T_{21} & \tilde T_{22}\end{pmatrix} = P'(\tilde S_2 - S)P, \qquad (20)$$
where $\tilde T_{11}$ is a $\mathrm{rk}(X)\times\mathrm{rk}(X)$ random matrix; then from (16) and (19) we obtain
$$\hat\Sigma = \frac1n P\begin{pmatrix}T_{11} + T_{12}T_{22}^{-1}\tilde T_{22}T_{22}^{-1}T_{21} & T_{12}(I_{q-\mathrm{rk}(X)} + T_{22}^{-1}\tilde T_{22})\\ (I_{q-\mathrm{rk}(X)} + \tilde T_{22}T_{22}^{-1})T_{21} & T_{22} + \tilde T_{22}\end{pmatrix}P',$$
which shows that (Theorem A5.3 in [3])
$$|\hat\Sigma| = n^{-q}\,|T_{11\cdot2}|\,|\tilde T_{22} + T_{22}|. \qquad (21)$$
In addition, it follows from (15), (16) and (20) that
$$\hat\lambda = \frac{1}{qn}\,\mathrm{tr}\{S + Q_X(\tilde S_2 - S)\} = \frac{1}{qn}\left[\mathrm{tr}(T_{11\cdot2}) + \mathrm{tr}(T_{12}T_{22}^{-1}T_{21}) + \mathrm{tr}(T_{22} + \tilde T_{22})\right]. \qquad (22)$$
When the sphericity hypothesis (3) holds, from (2) we have
$$E \sim N_{q\times n}(0, \lambda I_n\otimes I_q), \qquad (23)$$
which implies that
$$S \sim W_q(n-\mathrm{rk}(Z), \lambda I_q), \qquad \tilde S_2 - S \sim W_q(\mathrm{rk}(Z)-\mathrm{rk}(Z_2), \lambda I_q). \qquad (24)$$
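The two Wishart laws just obtained for $S = EQ_ZE'$ and for $E(Q_{Z_2}-Q_Z)E'$ can be sanity-checked by simulation, since a $W_q(k, \lambda I_q)$ matrix has mean $k\lambda I_q$. A minimal Monte Carlo sketch (NumPy; the sizes and design matrices below are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, lam = 3, 40, 1.5
Z1 = rng.standard_normal((n, 3))
Z2 = rng.standard_normal((n, 2))
Z = np.hstack((Z1, Z2))            # Z = (Z1, Z2), rk(Z) = 5 almost surely

def proj(A):
    # Orthogonal projector P_A = A (A'A)^- A' onto the column space of A.
    return A @ np.linalg.pinv(A.T @ A) @ A.T

QZ = np.eye(n) - proj(Z)
QZ2 = np.eye(n) - proj(Z2)

reps = 3000
S_mean = np.zeros((q, q))
D_mean = np.zeros((q, q))
for _ in range(reps):
    E = np.sqrt(lam) * rng.standard_normal((q, n))  # under H: Sigma = lam I_q
    S_mean += E @ QZ @ E.T                          # S = E Q_Z E'
    D_mean += E @ (QZ2 - QZ) @ E.T                  # E (Q_{Z2} - Q_Z) E'
S_mean /= reps
D_mean /= reps

# S ~ W_q(n - rk(Z), lam I_q)            => E[S] = (n - 5) lam I_q
# E(Q_{Z2}-Q_Z)E' ~ W_q(rk(Z) - rk(Z2), lam I_q) => mean = 3 lam I_q
print(np.round(S_mean, 1))
print(np.round(D_mean, 1))
```

The empirical means reproduce $(n-\mathrm{rk}(Z))\lambda I_q$ and $(\mathrm{rk}(Z)-\mathrm{rk}(Z_2))\lambda I_q$ to Monte Carlo accuracy, as the two Wishart distributions require.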

Note that $Q_Z(Q_{Z_2} - Q_Z) = 0$; hence, from (23) and Theorem 10.4 in Schott [6], $S = EQ_ZE'$ and $\tilde S_2 - S = E(Q_{Z_2} - Q_Z)E'$ are mutually independent. Therefore, from (16), (20) and (24), we know that $T$ and $\tilde T$ are mutually independent with
$$T \sim W_q(n-\mathrm{rk}(Z), \lambda I_q), \qquad \tilde T \sim W_q(\mathrm{rk}(Z)-\mathrm{rk}(Z_2), \lambda I_q),$$
which, by Theorem 3.2.10 in Muirhead [3], indicates that
$$T_{11\cdot2} \sim W_{\mathrm{rk}(X)}(n_1, \lambda I_{\mathrm{rk}(X)}), \qquad T_{12}\mid T_{22} \sim N_{\mathrm{rk}(X)\times(q-\mathrm{rk}(X))}(0, \lambda\,T_{22}\otimes I_{\mathrm{rk}(X)}),$$
$$T_{22} \sim W_{q-\mathrm{rk}(X)}(n-\mathrm{rk}(Z), \lambda I_{q-\mathrm{rk}(X)}), \qquad \tilde T_{22} \sim W_{q-\mathrm{rk}(X)}(\mathrm{rk}(Z)-\mathrm{rk}(Z_2), \lambda I_{q-\mathrm{rk}(X)}), \qquad (25)$$
and $T_{11\cdot2}$ is independent of $(T_{12}, T_{22})$. It follows from the independence of $T$ and $\tilde T$ that $(T_{11\cdot2}, T_{12}, T_{22})$ and $\tilde T_{22}$ are independent; hence $T_{11\cdot2}$, $(T_{12}, T_{22})$ and $\tilde T_{22}$ are mutually independent. Note that, from the second distribution in (25),
$$T_{12}T_{22}^{-1}T_{21} \sim W_{\mathrm{rk}(X)}(q-\mathrm{rk}(X), \lambda I_{\mathrm{rk}(X)}), \qquad (26)$$
and $T_{12}T_{22}^{-1}T_{21}$ and $T_{22}$ are independent. Thus $T_{11\cdot2}$, $T_{12}T_{22}^{-1}T_{21}$, $T_{22}$ and $\tilde T_{22}$ are mutually independent. Let
$$U = \lambda^{-1}T_{11\cdot2}, \qquad V = \lambda^{-1}(T_{22}+\tilde T_{22}), \qquad W = \lambda^{-1}\,\mathrm{tr}(T_{12}T_{22}^{-1}T_{21}); \qquad (27)$$
then $U$, $V$, $W$ are mutually independent and, from (25) and (26), we obtain (13). Finally, it follows from (21), (22) and (27) that
$$|\hat\Sigma| = \frac{\lambda^q}{n^q}\,|U|\,|V|, \qquad \hat\lambda = \frac{\lambda}{qn}\,[\mathrm{tr}(U)+\mathrm{tr}(V)+W],$$
which together with Corollary 1 shows that (12) holds. □

In order to obtain the exact null density function of $\Lambda^{2/n}$ from Theorem 1, we need the following definition and lemma.

Definition 1. If an $m\times m$ nonnegative definite random matrix $X$ has the density function
$$\frac{1}{2^{ma}\,\Gamma_m(a)\,|\Sigma|^a}\,|x|^{a-(m+1)/2}\,\mathrm{etr}\left(-\frac12\Sigma^{-1}x\right)(dx), \quad x>0, \qquad (28)$$
where $\mathrm{Re}(a) > \frac12(m-1)$ and $\Sigma$ is an $m\times m$ symmetric matrix such that $\mathrm{Re}(\Sigma)>0$, then $X$ is said to have an $m$-variate gamma distribution with parameters $(a,\Sigma)$, denoted by $X \sim \Gamma_m(a,\Sigma)$. When $a = n/2$ for an integer $n$, $\Gamma_m(n/2,\Sigma)$ is the Wishart distribution $W_m(n,\Sigma)$ (Muirhead [3]).

Lemma 3. If $X \sim \Gamma_m(a,\Sigma)$, then $\mathrm{tr}(\Sigma^{-1}X) \sim \Gamma_1(ma, 1)$.

Proof.
Make the transformation $\Sigma^{-1/2}X\Sigma^{-1/2} = T'T$, where $T = (T_{ij})_{m\times m}$ is upper-triangular with positive diagonal elements; then (Theorem 2.1.9 in Muirhead [3])
$$(dX) = 2^m\,|\Sigma|^{(m+1)/2}\prod_{i=1}^m T_{ii}^{m+1-i}\prod_{i\le j}dT_{ij},$$
which together with (28) means that the joint density function of the $T_{ij}$, $1\le i\le j\le m$, can be written as
$$\prod_{i<j}\left[\frac{1}{(2\pi)^{1/2}}\exp\left(-\frac{T_{ij}^2}{2}\right)dT_{ij}\right]\ \prod_{i=1}^m\frac{(T_{ii}^2)^{a-(i+1)/2}}{2^{a-(i-1)/2-1}\,\Gamma[a-(i-1)/2]}\,T_{ii}\exp\left(-\frac{T_{ii}^2}{2}\right)dT_{ii}, \qquad (29)$$

which shows that $T_{ij} \sim N(0,1)$ for $1\le i<j\le m$, $T_{ii}^2 \sim \Gamma_1(a-(i-1)/2,\,1)$ for $i=1,\dots,m$, and that the $T_{ij}$, $1\le i\le j\le m$, are mutually independent. Therefore, from (29) and the additivity of the gamma distribution [7], we have
$$\mathrm{tr}(\Sigma^{-1}X) = \mathrm{tr}(T'T) = \sum_{i\le j}T_{ij}^2 \sim \Gamma_1(ma, 1). \qquad \Box$$

Theorem 2. Let $\tilde\Lambda = \Lambda^{2/n}$. When the sphericity hypothesis (3) holds, we have
$$E(\tilde\Lambda^z) = q^{qz}\,\frac{\Gamma_{\mathrm{rk}(X)}(n_1/2+z)}{\Gamma_{\mathrm{rk}(X)}(n_1/2)}\,\frac{\Gamma_{q-\mathrm{rk}(X)}(n_2/2+z)}{\Gamma_{q-\mathrm{rk}(X)}(n_2/2)}\,\frac{\Gamma(n_3/2)}{\Gamma(n_3/2+qz)}, \quad \mathrm{Re}(z)\ge0, \qquad (30)$$
where $n_3 = \mathrm{rk}(X)(n-\mathrm{rk}(Z)) + (q-\mathrm{rk}(X))(n-\mathrm{rk}(Z_2))$.

Proof. When the sphericity hypothesis (3) holds, from Theorem 1 we know that
$$E(\tilde\Lambda^z) = q^{qz}\,E\left\{\frac{|U|^z|V|^z}{[\mathrm{tr}(U)+\mathrm{tr}(V)+W]^{qz}}\right\} = (2q)^{qz}\,\frac{\Gamma_{\mathrm{rk}(X)}(n_1/2+z)}{\Gamma_{\mathrm{rk}(X)}(n_1/2)}\,\frac{\Gamma_{q-\mathrm{rk}(X)}(n_2/2+z)}{\Gamma_{q-\mathrm{rk}(X)}(n_2/2)}\,E\left\{\frac{1}{[\mathrm{tr}(\tilde U)+\mathrm{tr}(\tilde V)+W]^{qz}}\right\}, \quad \mathrm{Re}(z)\ge0, \qquad (31)$$
where $\tilde U$, $\tilde V$, $W$ are mutually independent and
$$\tilde U \sim \Gamma_{\mathrm{rk}(X)}\left(\frac{n_1}{2}+z,\ I_{\mathrm{rk}(X)}\right), \qquad \tilde V \sim \Gamma_{q-\mathrm{rk}(X)}\left(\frac{n_2}{2}+z,\ I_{q-\mathrm{rk}(X)}\right). \qquad (32)$$
It follows from the additivity of the gamma distribution [7], Lemma 3, (13) and (32) that
$$\mathrm{tr}(\tilde U)+\mathrm{tr}(\tilde V)+W \sim \Gamma_1\left(\frac{n_3}{2}+qz,\ 1\right),$$
which indicates that
$$E\left\{\frac{1}{[\mathrm{tr}(\tilde U)+\mathrm{tr}(\tilde V)+W]^{qz}}\right\} = 2^{-qz}\,\frac{\Gamma(n_3/2)}{\Gamma(n_3/2+qz)}, \quad \mathrm{Re}(z)\ge0.$$
Substituting the above equality into (31) gives (30). □

Theorem 3. As a function of the likelihood ratio statistic, the null density function of $\tilde\Lambda = \Lambda^{2/n}$ can be expressed as
$$f_{\tilde\Lambda}(\tilde\lambda) = \frac{(2\pi)^{(q-1)/2}\,\pi^{[\mathrm{rk}(X)(\mathrm{rk}(X)-1)+(q-\mathrm{rk}(X))(q-\mathrm{rk}(X)-1)]/4}\,\Gamma(n_3/2)}{q^{(n_3-1)/2}\,\Gamma_{\mathrm{rk}(X)}(n_1/2)\,\Gamma_{q-\mathrm{rk}(X)}(n_2/2)}\;G^{q,0}_{q,q}\!\left(\tilde\lambda\ \middle|\ \begin{matrix}a_1,\dots,a_q\\ b_1,\dots,b_q\end{matrix}\right), \quad 0<\tilde\lambda<1, \qquad (33)$$
where $G^{m,n}_{p,q}\big(z\mid{}^{a_1,\dots,a_p}_{b_1,\dots,b_q}\big)$ denotes Meijer's G-function, $a_i = \frac{n_1-i-1}{2}$ for $1\le i\le\mathrm{rk}(X)$, $a_i = \frac{n_2-i+\mathrm{rk}(X)-1}{2}$ for $\mathrm{rk}(X)+1\le i\le q$, and $b_i = \frac{1}{2q}[n_3+2(i-1)]-1$ for $1\le i\le q$.

Proof. It follows from Theorem 2 that the Mellin transform of the null density of $\tilde\Lambda$ is
$$g_{\tilde\Lambda}(z) = E(\tilde\Lambda^{z-1}) = q^{q(z-1)}\,\frac{\Gamma_{\mathrm{rk}(X)}(n_1/2+z-1)}{\Gamma_{\mathrm{rk}(X)}(n_1/2)}\,\frac{\Gamma_{q-\mathrm{rk}(X)}(n_2/2+z-1)}{\Gamma_{q-\mathrm{rk}(X)}(n_2/2)}\,\frac{\Gamma(n_3/2)}{\Gamma[n_3/2+q(z-1)]}, \quad \mathrm{Re}(z)\ge1. \qquad (34)$$
From Gauss's multiplication formula for the gamma function [8], we have
$$\Gamma\left[\frac{n_3}{2}+q(z-1)\right] = q^{(n_3-1)/2+q(z-1)}\,(2\pi)^{(1-q)/2}\prod_{i=0}^{q-1}\Gamma\left[\frac{n_3+2i}{2q}+z-1\right]. \qquad (35)$$
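The moment formula (30) can be cross-checked numerically against the distributional representation of $\Lambda^{2/n}$ from Theorem 1: simulate $U$, $V$, $W$, form $\tilde\Lambda$, and compare the empirical mean of $\tilde\Lambda^z$ with the gamma-function expression. A Monte Carlo sketch (NumPy/SciPy; the values of $q$, $\mathrm{rk}(X)$, $n$, $\mathrm{rk}(Z)$ and $\mathrm{rk}(Z_2)$ are hypothetical choices):

```python
import numpy as np
from scipy.special import gammaln, multigammaln

rng = np.random.default_rng(3)

# Hypothetical ranks and sample size (illustrative only).
q, rX, n, rZ, rZ2 = 4, 2, 30, 5, 2
n1 = n - rZ - q + rX                          # n1 = n - rk(Z) - q + rk(X)
n2 = n - rZ2                                  # n2 = n - rk(Z2)
n3 = rX * (n - rZ) + (q - rX) * (n - rZ2)     # n3 as in Theorem 2

def wishart_identity(m, df):
    # W_m(df, I_m) via G G' with G an m x df standard normal matrix.
    G = rng.standard_normal((m, df))
    return G @ G.T

z = 1.0
reps = 20000
vals = np.empty(reps)
for i in range(reps):
    U = wishart_identity(rX, n1)
    V = wishart_identity(q - rX, n2)
    W = rng.chisquare(rX * (q - rX))
    Lt = q**q * np.linalg.det(U) * np.linalg.det(V) \
         / (np.trace(U) + np.trace(V) + W)**q      # representation of Lambda^{2/n}
    vals[i] = Lt**z
mc = vals.mean()

# Right-hand side of (30), computed on the log scale;
# multigammaln is the log of the multivariate gamma function Gamma_m(.).
exact = np.exp(
    q * z * np.log(q)
    + multigammaln(n1 / 2 + z, rX) - multigammaln(n1 / 2, rX)
    + multigammaln(n2 / 2 + z, q - rX) - multigammaln(n2 / 2, q - rX)
    + gammaln(n3 / 2) - gammaln(n3 / 2 + q * z)
)
print(mc, exact)
```

The empirical mean and the closed form agree to Monte Carlo accuracy, which also confirms that the simulated statistic lies in $(0,1]$, as the arithmetic–geometric mean inequality requires.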
On the other hand,
$$\Gamma_{\mathrm{rk}(X)}\left(\frac{n_1}{2}+z-1\right) = \pi^{\mathrm{rk}(X)(\mathrm{rk}(X)-1)/4}\prod_{i=1}^{\mathrm{rk}(X)}\Gamma\left[\frac{n_1-i+1}{2}+z-1\right], \qquad (36)$$

$$\Gamma_{q-\mathrm{rk}(X)}\left(\frac{n_2}{2}+z-1\right) = \pi^{(q-\mathrm{rk}(X))(q-\mathrm{rk}(X)-1)/4}\prod_{i=1}^{q-\mathrm{rk}(X)}\Gamma\left[\frac{n_2-i+1}{2}+z-1\right]. \qquad (37)$$
Substituting (35)–(37) into (34) gives
$$g_{\tilde\Lambda}(z) = \frac{(2\pi)^{(q-1)/2}\,\pi^{[\mathrm{rk}(X)(\mathrm{rk}(X)-1)+(q-\mathrm{rk}(X))(q-\mathrm{rk}(X)-1)]/4}\,\Gamma(n_3/2)}{q^{(n_3-1)/2}\,\Gamma_{\mathrm{rk}(X)}(n_1/2)\,\Gamma_{q-\mathrm{rk}(X)}(n_2/2)}\cdot\frac{\prod_{i=1}^{\mathrm{rk}(X)}\Gamma\left[\frac{n_1-i+1}{2}+z-1\right]\prod_{i=1}^{q-\mathrm{rk}(X)}\Gamma\left[\frac{n_2-i+1}{2}+z-1\right]}{\prod_{i=0}^{q-1}\Gamma\left[\frac{n_3+2i}{2q}+z-1\right]}, \quad \mathrm{Re}(z)\ge1.$$
Applying the definition of Meijer's G-function [9] to the above equality, together with the inversion formula
$$f_{\tilde\Lambda}(\tilde\lambda) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\tilde\lambda^{-z}\,g_{\tilde\Lambda}(z)\,dz, \quad 0<\tilde\lambda<1,$$
we obtain (33). □

Remark 2. Davis [10] provided an effective algorithm for computing the quantiles of a distribution whose density is expressed in terms of Meijer's $G^{p,0}_{p,p}$ function.

Furthermore, we have

Theorem 4. When the sphericity hypothesis (3) holds,
$$-2\log\Lambda \xrightarrow{\ L\ } \chi^2_{q(q+1)/2-1}, \quad n\to\infty, \qquad (38)$$
where $\xrightarrow{L}$ denotes convergence in distribution.

Proof. It follows from Theorem 2 that the characteristic function of $-2\log\Lambda$ under the hypothesis $H$ is
$$\varphi_{-2\log\Lambda}(t) = E(\Lambda^{-2it}) = q^{-iqnt}\,\frac{\Gamma_{\mathrm{rk}(X)}(n_1/2-int)}{\Gamma_{\mathrm{rk}(X)}(n_1/2)}\,\frac{\Gamma_{q-\mathrm{rk}(X)}(n_2/2-int)}{\Gamma_{q-\mathrm{rk}(X)}(n_2/2)}\,\frac{\Gamma(n_3/2)}{\Gamma(n_3/2-iqnt)}, \quad t\in(-\infty,+\infty),$$
which shows that
$$\log\varphi_{-2\log\Lambda}(t) = -iqnt\log q + \sum_{k=1}^{\mathrm{rk}(X)}\left\{\log\Gamma\left(\frac{n_1-k+1}{2}-int\right) - \log\Gamma\left(\frac{n_1-k+1}{2}\right)\right\} + \sum_{k=1}^{q-\mathrm{rk}(X)}\left\{\log\Gamma\left(\frac{n_2-k+1}{2}-int\right) - \log\Gamma\left(\frac{n_2-k+1}{2}\right)\right\} + \log\Gamma\left(\frac{n_3}{2}\right) - \log\Gamma\left(\frac{n_3}{2}-iqnt\right), \quad t\in(-\infty,+\infty). \qquad (39)$$
Using the asymptotic formula for $\log\Gamma(z+a)$ [3],
$$\log\Gamma(z+a) = \left(z+a-\frac12\right)\log z - z + \frac12\log(2\pi) + O(z^{-1}),$$
it is a simple matter to show from (39) that
$$\log\varphi_{-2\log\Lambda}(t) \to -\frac12\left[\frac{q(q+1)}{2}-1\right]\log(1-2it), \quad n\to\infty,$$
which indicates that (38) is true. □

In order to improve the order of the approximation of the null distribution of $-2\log\Lambda$ by $\chi^2_{q(q+1)/2-1}$ (as $n\to\infty$) obtained in Theorem 4, note that, from Theorem 2, under the null hypothesis (3) we have
$$E(\Lambda^z) = E(\tilde\Lambda^{nz/2})$$

$$= C\left(\frac{y^y}{\prod_{k=1}^q x_k^{x_k}}\right)^{z}\,\frac{\prod_{k=1}^q\Gamma[x_k(1+z)+\xi_k]}{\Gamma[y(1+z)+\eta]}, \quad \mathrm{Re}(z)\ge0,$$
where $C$ is a constant determined by $E(\Lambda^0)=1$, $x_k = n/2$, $k=1,\dots,q$, $y = qn/2$,
$$\xi_k = -\frac{\mathrm{rk}(Z)+q-\mathrm{rk}(X)+k-1}{2},\quad k=1,\dots,\mathrm{rk}(X); \qquad \xi_{\mathrm{rk}(X)+k} = -\frac{\mathrm{rk}(Z_2)+k-1}{2},\quad k=1,\dots,q-\mathrm{rk}(X);$$
$$\eta = -\frac{\mathrm{rk}(X)\mathrm{rk}(Z)+(q-\mathrm{rk}(X))\mathrm{rk}(Z_2)}{2}.$$
Therefore, based on the discussions in Muirhead [3] or Box [11], we immediately obtain

Theorem 5. When the sphericity hypothesis (3) holds,
$$P(-2\log\Lambda\le u) = P(\chi^2_{q(q+1)/2-1}\le u) + O(n^{-1}),$$
and
$$P(-2\rho\log\Lambda\le u) = P(\chi^2_{q(q+1)/2-1}\le u) + O(n^{-2}),$$
where
$$\rho = 1 - \frac{2}{[q(q+1)-2]\,n}(A_1 - A_2),$$
$$A_1 = \frac{\mathrm{rk}(X)}{2}\left[(\mathrm{rk}(Z)+q-\mathrm{rk}(X))(\mathrm{rk}(Z)+q+1) + \frac{(\mathrm{rk}(X)-1)(2\,\mathrm{rk}(X)+5)}{6}\right] + \frac{q-\mathrm{rk}(X)}{2}\left[\mathrm{rk}(Z_2)(\mathrm{rk}(Z_2)+q-\mathrm{rk}(X)+1) + \frac{(q-\mathrm{rk}(X)-1)(2(q-\mathrm{rk}(X))+5)}{6}\right] + \frac{q}{3},$$
$$A_2 = \frac{1}{2q}\left\{[\mathrm{rk}(X)\mathrm{rk}(Z)+(q-\mathrm{rk}(X))\mathrm{rk}(Z_2)+1]^2 - \frac13\right\}.$$

Acknowledgments

This research was supported by the National Science Foundation of China (No. and 07600). The author would like to thank the referees, whose comments helped to improve the results.

References

[1] V.M. Chinchilli, R.K. Elswick, A mixture of the MANOVA and GMANOVA models, Commun. Statist. Theory Methods 14 (1985).
[2] T. Kollo, D. von Rosen, Advanced Multivariate Statistics with Matrices, Mathematics and its Applications, Springer, Dordrecht, 2005.
[3] R.J. Muirhead, Aspects of Multivariate Statistical Theory, Wiley, New York, 1982.
[4] P. Bai, Exact distribution of MLE of covariance matrix in a GMANOVA–MANOVA model, Sci. China Ser. A: Math. 48 (2005).
[5] M. Okamoto, Distinctness of the eigenvalues of a quadratic form in a multivariate sample, Ann. Statist. 1 (4) (1973).
[6] J.R. Schott, Matrix Analysis for Statistics, second ed., Wiley, New York, 2005.
[7] P.J. Bickel, K.A. Doksum, Mathematical Statistics: Basic Ideas and Selected Topics, Holden-Day, San Francisco, 1977.
[8] G.E. Andrews, R. Askey, R. Roy, Special Functions, Cambridge University Press, Cambridge, 1999.
[9] A. Erdélyi (Ed.), Higher Transcendental Functions, Vol. I, McGraw-Hill, New York, 1953.
[10] A.W. Davis, On the differential equation for Meijer's $G^{p,0}_{p,p}$ function, and further tables of Wilks's likelihood ratio criterion, Biometrika 66 (1979).
[11] G.E.P. Box, A general distribution theory for a class of likelihood criteria, Biometrika 36 (1949).


More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

On V-orthogonal projectors associated with a semi-norm

On V-orthogonal projectors associated with a semi-norm On V-orthogonal projectors associated with a semi-norm Short Title: V-orthogonal projectors Yongge Tian a, Yoshio Takane b a School of Economics, Shanghai University of Finance and Economics, Shanghai

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Some inequalities for sum and product of positive semide nite matrices

Some inequalities for sum and product of positive semide nite matrices Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,

More information

Likelihood Ratio Tests in Multivariate Linear Model

Likelihood Ratio Tests in Multivariate Linear Model Likelihood Ratio Tests in Multivariate Linear Model Yasunori Fujikoshi Department of Mathematics, Graduate School of Science, Hiroshima University, 1-3-1 Kagamiyama, Higashi Hiroshima, Hiroshima 739-8626,

More information

Exam 2. Jeremy Morris. March 23, 2006

Exam 2. Jeremy Morris. March 23, 2006 Exam Jeremy Morris March 3, 006 4. Consider a bivariate normal population with µ 0, µ, σ, σ and ρ.5. a Write out the bivariate normal density. The multivariate normal density is defined by the following

More information

Canonical Correlation Analysis of Longitudinal Data

Canonical Correlation Analysis of Longitudinal Data Biometrics Section JSM 2008 Canonical Correlation Analysis of Longitudinal Data Jayesh Srivastava Dayanand N Naik Abstract Studying the relationship between two sets of variables is an important multivariate

More information

Multivariate Analysis and Likelihood Inference

Multivariate Analysis and Likelihood Inference Multivariate Analysis and Likelihood Inference Outline 1 Joint Distribution of Random Variables 2 Principal Component Analysis (PCA) 3 Multivariate Normal Distribution 4 Likelihood Inference Joint density

More information

Lecture 15. Hypothesis testing in the linear model

Lecture 15. Hypothesis testing in the linear model 14. Lecture 15. Hypothesis testing in the linear model Lecture 15. Hypothesis testing in the linear model 1 (1 1) Preliminary lemma 15. Hypothesis testing in the linear model 15.1. Preliminary lemma Lemma

More information

Matrix Inequalities by Means of Block Matrices 1

Matrix Inequalities by Means of Block Matrices 1 Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,

More information

Solutions of a constrained Hermitian matrix-valued function optimization problem with applications

Solutions of a constrained Hermitian matrix-valued function optimization problem with applications Solutions of a constrained Hermitian matrix-valued function optimization problem with applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 181, China Abstract. Let f(x) =

More information

Central Limit Theorems for Classical Likelihood Ratio Tests for High-Dimensional Normal Distributions

Central Limit Theorems for Classical Likelihood Ratio Tests for High-Dimensional Normal Distributions Central Limit Theorems for Classical Likelihood Ratio Tests for High-Dimensional Normal Distributions Tiefeng Jiang 1 and Fan Yang 1, University of Minnesota Abstract For random samples of size n obtained

More information

10-725/36-725: Convex Optimization Prerequisite Topics

10-725/36-725: Convex Optimization Prerequisite Topics 10-725/36-725: Convex Optimization Prerequisite Topics February 3, 2015 This is meant to be a brief, informal refresher of some topics that will form building blocks in this course. The content of the

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Laurenz Wiskott Institute for Theoretical Biology Humboldt-University Berlin Invalidenstraße 43 D-10115 Berlin, Germany 11 March 2004 1 Intuition Problem Statement Experimental

More information

Analysis of the AIC Statistic for Optimal Detection of Small Changes in Dynamic Systems

Analysis of the AIC Statistic for Optimal Detection of Small Changes in Dynamic Systems Analysis of the AIC Statistic for Optimal Detection of Small Changes in Dynamic Systems Jeremy S. Conner and Dale E. Seborg Department of Chemical Engineering University of California, Santa Barbara, CA

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Maximum Likelihood Estimation Merlise Clyde STA721 Linear Models Duke University August 31, 2017 Outline Topics Likelihood Function Projections Maximum Likelihood Estimates Readings: Christensen Chapter

More information

On the Efficiencies of Several Generalized Least Squares Estimators in a Seemingly Unrelated Regression Model and a Heteroscedastic Model

On the Efficiencies of Several Generalized Least Squares Estimators in a Seemingly Unrelated Regression Model and a Heteroscedastic Model Journal of Multivariate Analysis 70, 8694 (1999) Article ID jmva.1999.1817, available online at http:www.idealibrary.com on On the Efficiencies of Several Generalized Least Squares Estimators in a Seemingly

More information

Lecture 3. Inference about multivariate normal distribution

Lecture 3. Inference about multivariate normal distribution Lecture 3. Inference about multivariate normal distribution 3.1 Point and Interval Estimation Let X 1,..., X n be i.i.d. N p (µ, Σ). We are interested in evaluation of the maximum likelihood estimates

More information

Near-exact approximations for the likelihood ratio test statistic for testing equality of several variance-covariance matrices

Near-exact approximations for the likelihood ratio test statistic for testing equality of several variance-covariance matrices Near-exact approximations for the likelihood ratio test statistic for testing euality of several variance-covariance matrices Carlos A. Coelho 1 Filipe J. Marues 1 Mathematics Department, Faculty of Sciences

More information

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

More information

Analysis of Microtubules using. for Growth Curve modeling.

Analysis of Microtubules using. for Growth Curve modeling. Analysis of Microtubules using Growth Curve Modeling Md. Aleemuddin Siddiqi S. Rao Jammalamadaka Statistics and Applied Probability, University of California, Santa Barbara March 1, 2006 1 Introduction

More information

MAXIMUM LIKELIHOOD IN GENERALIZED FIXED SCORE FACTOR ANALYSIS 1. INTRODUCTION

MAXIMUM LIKELIHOOD IN GENERALIZED FIXED SCORE FACTOR ANALYSIS 1. INTRODUCTION MAXIMUM LIKELIHOOD IN GENERALIZED FIXED SCORE FACTOR ANALYSIS JAN DE LEEUW ABSTRACT. We study the weighted least squares fixed rank approximation problem in which the weight matrices depend on unknown

More information

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures

More information

More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction

More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction Sankhyā : The Indian Journal of Statistics 2007, Volume 69, Part 4, pp. 700-716 c 2007, Indian Statistical Institute More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order

More information

2 Eigenvectors and Eigenvalues in abstract spaces.

2 Eigenvectors and Eigenvalues in abstract spaces. MA322 Sathaye Notes on Eigenvalues Spring 27 Introduction In these notes, we start with the definition of eigenvectors in abstract vector spaces and follow with the more common definition of eigenvectors

More information

Research Article On the Marginal Distribution of the Diagonal Blocks in a Blocked Wishart Random Matrix

Research Article On the Marginal Distribution of the Diagonal Blocks in a Blocked Wishart Random Matrix Hindawi Publishing Corporation nternational Journal of Analysis Volume 6, Article D 59678, 5 pages http://dx.doi.org/.55/6/59678 Research Article On the Marginal Distribution of the Diagonal Blocks in

More information

A new test for the proportionality of two large-dimensional covariance matrices. Citation Journal of Multivariate Analysis, 2014, v. 131, p.

A new test for the proportionality of two large-dimensional covariance matrices. Citation Journal of Multivariate Analysis, 2014, v. 131, p. Title A new test for the proportionality of two large-dimensional covariance matrices Authors) Liu, B; Xu, L; Zheng, S; Tian, G Citation Journal of Multivariate Analysis, 04, v. 3, p. 93-308 Issued Date

More information

Quantum Statistics -First Steps

Quantum Statistics -First Steps Quantum Statistics -First Steps Michael Nussbaum 1 November 30, 2007 Abstract We will try an elementary introduction to quantum probability and statistics, bypassing the physics in a rapid first glance.

More information

The Drazin inverses of products and differences of orthogonal projections

The Drazin inverses of products and differences of orthogonal projections J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,

More information

Likelihood Methods. 1 Likelihood Functions. The multivariate normal distribution likelihood function is

Likelihood Methods. 1 Likelihood Functions. The multivariate normal distribution likelihood function is Likelihood Methods 1 Likelihood Functions The multivariate normal distribution likelihood function is The log of the likelihood, say L 1 is Ly = π.5n V.5 exp.5y Xb V 1 y Xb. L 1 = 0.5[N lnπ + ln V +y Xb

More information

Mean. Pranab K. Mitra and Bimal K. Sinha. Department of Mathematics and Statistics, University Of Maryland, Baltimore County

Mean. Pranab K. Mitra and Bimal K. Sinha. Department of Mathematics and Statistics, University Of Maryland, Baltimore County A Generalized p-value Approach to Inference on Common Mean Pranab K. Mitra and Bimal K. Sinha Department of Mathematics and Statistics, University Of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore,

More information

Some New Properties of Wishart Distribution

Some New Properties of Wishart Distribution Applied Mathematical Sciences, Vol., 008, no. 54, 673-68 Some New Properties of Wishart Distribution Evelina Veleva Rousse University A. Kanchev Department of Numerical Methods and Statistics 8 Studentska

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

ON COMBINING CORRELATED ESTIMATORS OF THE COMMON MEAN OF A MULTIVARIATE NORMAL DISTRIBUTION

ON COMBINING CORRELATED ESTIMATORS OF THE COMMON MEAN OF A MULTIVARIATE NORMAL DISTRIBUTION ON COMBINING CORRELATED ESTIMATORS OF THE COMMON MEAN OF A MULTIVARIATE NORMAL DISTRIBUTION K. KRISHNAMOORTHY 1 and YONG LU Department of Mathematics, University of Louisiana at Lafayette Lafayette, LA

More information

(Part 1) High-dimensional statistics May / 41

(Part 1) High-dimensional statistics May / 41 Theory for the Lasso Recall the linear model Y i = p j=1 β j X (j) i + ɛ i, i = 1,..., n, or, in matrix notation, Y = Xβ + ɛ, To simplify, we assume that the design X is fixed, and that ɛ is N (0, σ 2

More information

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015 Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Statistical Inference with Monotone Incomplete Multivariate Normal Data

Statistical Inference with Monotone Incomplete Multivariate Normal Data Statistical Inference with Monotone Incomplete Multivariate Normal Data p. 1/4 Statistical Inference with Monotone Incomplete Multivariate Normal Data This talk is based on joint work with my wonderful

More information

On some interpolation problems

On some interpolation problems On some interpolation problems A. Gombani Gy. Michaletzky LADSEB-CNR Eötvös Loránd University Corso Stati Uniti 4 H-1111 Pázmány Péter sétány 1/C, 35127 Padova, Italy Computer and Automation Institute

More information

Understanding Regressions with Observations Collected at High Frequency over Long Span

Understanding Regressions with Observations Collected at High Frequency over Long Span Understanding Regressions with Observations Collected at High Frequency over Long Span Yoosoon Chang Department of Economics, Indiana University Joon Y. Park Department of Economics, Indiana University

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

Statement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:.

Statement: With my signature I confirm that the solutions are the product of my own work. Name: Signature:. MATHEMATICAL STATISTICS Homework assignment Instructions Please turn in the homework with this cover page. You do not need to edit the solutions. Just make sure the handwriting is legible. You may discuss

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Chapter 7, continued: MANOVA

Chapter 7, continued: MANOVA Chapter 7, continued: MANOVA The Multivariate Analysis of Variance (MANOVA) technique extends Hotelling T 2 test that compares two mean vectors to the setting in which there are m 2 groups. We wish to

More information

Testing Block-Diagonal Covariance Structure for High-Dimensional Data

Testing Block-Diagonal Covariance Structure for High-Dimensional Data Testing Block-Diagonal Covariance Structure for High-Dimensional Data MASASHI HYODO 1, NOBUMICHI SHUTOH 2, TAKAHIRO NISHIYAMA 3, AND TATJANA PAVLENKO 4 1 Department of Mathematical Sciences, Graduate School

More information

2. Matrix Algebra and Random Vectors

2. Matrix Algebra and Random Vectors 2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns

More information

Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications

Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081,

More information

[y i α βx i ] 2 (2) Q = i=1

[y i α βx i ] 2 (2) Q = i=1 Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation

More information

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,

More information

ANOVA: Analysis of Variance - Part I

ANOVA: Analysis of Variance - Part I ANOVA: Analysis of Variance - Part I The purpose of these notes is to discuss the theory behind the analysis of variance. It is a summary of the definitions and results presented in class with a few exercises.

More information

Seminar on Linear Algebra

Seminar on Linear Algebra Supplement Seminar on Linear Algebra Projection, Singular Value Decomposition, Pseudoinverse Kenichi Kanatani Kyoritsu Shuppan Co., Ltd. Contents 1 Linear Space and Projection 1 1.1 Expression of Linear

More information

HOSTOS COMMUNITY COLLEGE DEPARTMENT OF MATHEMATICS

HOSTOS COMMUNITY COLLEGE DEPARTMENT OF MATHEMATICS HOSTOS COMMUNITY COLLEGE DEPARTMENT OF MATHEMATICS MAT 217 Linear Algebra CREDIT HOURS: 4.0 EQUATED HOURS: 4.0 CLASS HOURS: 4.0 PREREQUISITE: PRE/COREQUISITE: MAT 210 Calculus I MAT 220 Calculus II RECOMMENDED

More information