Improving the sample covariance matrix for a complex elliptically contoured distribution
September 13, 2006

Improving the sample covariance matrix for a complex elliptically contoured distribution

Yoshihiko Konno
Faculty of Science, Japan Women's University, 2-8-1 Mejirodai, Bunkyo-ku, Tokyo, Japan
Address: konno@fc.jwu.ac.jp. Supported in part by the Japan Society for the Promotion of Science through Grants-in-Aid for Scientific Research (C)(No.

Abstract. In this paper the problem of estimating the scale matrix in a complex elliptically contoured distribution (complex ECD) is addressed. An extended Haff-Stein identity for this model is derived. It is shown that the minimax estimators of the covariance matrix obtained under the complex normal model remain robust under the complex ECD model when the Stein loss function is employed.

Key words: covariance matrix, shrinkage estimator, complex Wishart distribution, minimax, unbiased estimate of risk, complex normal distribution, eigenvalue distribution of the sample covariance matrix.

MSC: primary 62H12; secondary 62F10.

1 Introduction

The focus of the present paper is on estimating the scale matrix Σ that appears in the standard complex MANOVA model

\[
Y = C\beta + E, \tag{1}
\]

in which Y is an N × p matrix of complex response variables, C is an N × m known matrix of complex entries, β is an m × p unknown parameter matrix of complex entries, and E is an
N × p error matrix of complex entries. The following assumptions on (1) are made to hold: (i) the error matrix E follows a complex elliptically contoured distribution with the density function

\[
\mathrm{Det}(\Sigma)^{-N} f\big(\mathrm{Tr}(\Sigma^{-1} e^* e)\big), \qquad e \in \mathbb{C}^{N\times p},
\]

in which Σ is a p × p unknown positive definite Hermitian matrix, and f(·) is a nonnegative function on the nonnegative real line such that \(\int_0^\infty f(t)\,dt = 1\); (ii) N − m − p + 1 > 0. Here \(\mathbb{C}^{N\times p}\) is the space of N × p matrices of complex entries, * stands for the conjugate transpose of a matrix of complex entries, and Tr and Det mean the usual trace and the determinant of a square matrix, respectively. The model (1), along with the previous assumptions, includes the complex MANOVA model with complex normal errors, which is extensively explored by Andersen et al. (1995). The practical significance of this model in signal processing and communications is discussed, for example, in Dogandžić and Nehorai (2003). See Krishnaiah and Lin (1986) and Micheas et al. (2005) for the construction of an elliptical family of complex distributions via an extension of the elliptical family of real distributions.

Stein (1977) pointed out that the eigenvalues of the sample covariance matrix obtained from a multivariate real normal sample spread out more than the corresponding eigenvalues of the parameter covariance matrix, and that the problem gets more serious when the parameter covariance matrix is nearly spherical. Since this fact was pointed out, there has been a growing literature on improved estimators of the covariance matrix of multivariate real normal distributions. We refer to Lin and Perlman (1985), Young and Berger (1994), Kubokawa and Srivastava (1999), Daniels and Kass (1999, 2001), and the references cited therein for further information on the improved estimation of the real normal covariance matrix.
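Stein's eigenvalue-spreading phenomenon is easy to observe numerically. The following sketch is our own illustration, not code from the paper; it assumes the circularly symmetric complex normal convention in which real and imaginary parts are i.i.d. N(0, 1/2), and a spherical truth whose population eigenvalues all equal 1:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 5, 20, 2000                 # dimension, sample size, replications
# true covariance is the identity: every population eigenvalue equals 1
eigs = np.zeros((reps, p))
for r in range(reps):
    # complex normal sample; real and imaginary parts i.i.d. N(0, 1/2)
    z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    s = z.conj().T @ z / n               # sample covariance matrix
    eigs[r] = np.sort(np.linalg.eigvalsh(s))[::-1]

mean_eigs = eigs.mean(axis=0)
print(mean_eigs)                         # largest well above 1, smallest well below 1
```

On average the largest sample eigenvalue overshoots 1 and the smallest undershoots it, even though their mean over all coordinates stays at 1; this is the spreading that shrinkage-expansion estimators are designed to correct.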
The sample covariance matrix obtained from independently and identically distributed complex random vectors may be improved by applying shrinkage-expansion techniques to its eigenvalues, since the eigenvalues of the sample covariance matrix are biased away from those of the true parameter covariance matrix. Abraham and Dey (1994) demonstrated that the adaptive beamforming and detection performance loss due to the estimation of an unstructured covariance matrix is reduced by modifying the eigenvalues of the sample covariance matrix. The methods of Abraham and Dey (1994) heavily depend on the results of Dey and Srinivasan (1985) for estimating the covariance matrix of a real multivariate normal random vector. It is conjectured in the paper of Abraham and Dey (1994) that the results of Dey and Srinivasan (1985) may be extended to the complex multivariate normal
setting. The underlying techniques in the paper of Dey and Srinivasan (1985) are twofold: one is integration-by-parts for the real Wishart distribution, which was derived by Stein (1977) and Haff (1979) independently; the other is an algebraic derivation for the eigenstructure of a positive definite matrix of real entries, which was developed by Stein (1977) and Haff (1991). Later, Maiwald and Kraus (1994) used the well-known isomorphism between real and complex random matrices in order to give the real representation of integration-by-parts for the complex Wishart distribution. More recently, Svensson and Lundberg (2004) obtained an unbiased estimate of the risk for unitarily invariant estimators of the scale matrix of the complex Wishart distribution and used the techniques of Haff (1991) in order to derive the variational form of the Bayes estimator.

In this paper we consider the problem of estimating the Hermitian scale matrix Σ for the complex ECD model (1) in a decision-theoretic manner. We employ the Stein loss function in order to evaluate the performance of every estimator of Σ. We first generalize the techniques of Dey and Srinivasan (1985) to complex elliptically contoured distributions. Namely, we develop integration-by-parts for the complex ECD model (1) and some algebraic derivations which are applied to the eigenstructure of a positive definite Hermitian matrix. Using these results we show that the dominance results for the real ECD model given by Kubokawa and Srivastava (1999) generalize to the complex setting. In other words, we obtain dominance results in the complex case similar to those in the real case given by James and Stein (1961) and Dey and Srinivasan (1985).

The present paper is organized in the following way. In Section 2, we give an integration-by-parts formula for the complex elliptically contoured distribution via an extension of a formula for the real ECD models obtained by Kubokawa and Srivastava (1999).
We also give calculus lemmas on the eigenstructure of Hermitian matrices. These lemmas are similar to algebraic lemmas on the eigenstructure of symmetric matrices given by Haff (1991); they appear in an unpublished manuscript of Svensson (2004). Utilizing the Haff-Stein identity for the complex ECD models, we show that minimax estimators which are obtained in a similar way to those under the real normal distributions remain robust under the complex ECD models (1). In Section 3 the proofs of the calculus lemmas are presented, since our proofs of the lemmas are more elementary.
2 Main Results

To describe a canonical form of our estimation problem, let P be an N × N unitary matrix such that

\[
PC = \big((C^* C)^{1/2},\; 0_{m\times(N-m)}\big)^*, \qquad PP^* = P^*P = I_N.
\]

Also set µ = (C*C)^{1/2}β. Let n = N − m and let X and Z be m × p and n × p matrices of complex entries such that (X*, Z*)* = PY. Then the joint density function of X and Z has the form

\[
\mathrm{Det}(\Sigma)^{-N} f\big(\mathrm{Tr}(\Sigma^{-1}(x-\mu)^*(x-\mu) + \Sigma^{-1} z^* z)\big), \qquad x \in \mathbb{C}^{m\times p},\; z \in \mathbb{C}^{n\times p}. \tag{2}
\]

Set S = Z*Z. Based on S, we consider the problem of estimating Σ under the Stein loss function

\[
L(\hat\Sigma, \Sigma) = \mathrm{Tr}(\hat\Sigma\Sigma^{-1}) - \log \mathrm{Det}(\hat\Sigma\Sigma^{-1}) - p, \tag{3}
\]

in which \(\hat\Sigma\) is an estimator of the p × p Hermitian matrix Σ. The loss function given by (3) was suggested by James and Stein (1961) for the problem of estimating the real normal covariance matrix. Certainly there are many other possible loss functions which we may employ. However, the above loss function has the attractive feature that it is relatively easy to work with [see Muirhead (1982)].

2.1 Integration-by-parts formula

Let \(\mathbb{C}\) denote the field of complex numbers. The imaginary unit is denoted by \(\sqrt{-1}\). For every c in \(\mathbb{C}\) we denote by Re c and Im c the real and the imaginary parts of c, respectively. The conjugate of a complex number c is given by \(\bar c = \mathrm{Re}\,c - \sqrt{-1}\,\mathrm{Im}\,c\). The absolute value of c is given by \(|c| = (\bar c c)^{1/2}\). We denote by \(\mathbb{C}^{n\times p}\) the set of all n × p matrices of complex entries. To describe integration-by-parts for S = Z*Z, we need the following notation. For a nonnegative x and f(·) in (2), set

\[
F(x) = \int_x^\infty f(t)\,dt,
\]

and define

\[
\mathrm{E}^F[h(X, Z)] = \mathrm{E}\Big[h(X, Z)\,\frac{F(R)}{f(R)}\Big], \qquad R = \mathrm{Tr}\big(\Sigma^{-1}(X-\mu)^*(X-\mu) + \Sigma^{-1}Z^*Z\big). \tag{4}
\]

Here, h(·, ·) is a real-valued integrable function. The notation in (4) was introduced by Kubokawa and Srivastava (1999) in order to describe integration-by-parts for the real elliptically contoured distributions. Let G(S) be a p × p
matrix of complex entries, the (i, j)-element \(g_{ij}(S)\) of which is a function of \(S = (s_{ij})\). For a p × p Hermitian matrix \(S = (s_{jk})\), let \(D_S = (d^S_{jk})\) be a p × p operator matrix, the (j, k)-element of which is given by

\[
d^S_{jk} = \frac{1+\delta_{jk}}{2}\Big(\frac{\partial}{\partial(\mathrm{Re}\,s_{jk})} + \sqrt{-1}\,\frac{\partial}{\partial(\mathrm{Im}\,s_{jk})}\Big), \qquad j, k = 1, 2, \ldots, p. \tag{5}
\]

Here, \(\delta_{jk}\) is the Kronecker delta (= 1 if j = k and = 0 if j ≠ k). Thus the (j, k)-element of \(D_S G(S)\) is

\[
\{D_S G(S)\}_{jk} = \sum_{l=1}^p d^S_{jl}\, g_{lk}(S) = \sum_{l=1}^p \frac{1+\delta_{jl}}{2}\Big(\frac{\partial g_{lk}}{\partial(\mathrm{Re}\,s_{jl})}(S) + \sqrt{-1}\,\frac{\partial g_{lk}}{\partial(\mathrm{Im}\,s_{jl})}(S)\Big).
\]

Lemma 2.1 Assume that each entry of G(S) is a partially differentiable function with respect to Re \(s_{jk}\) and Im \(s_{jk}\), j, k = 1, 2, ..., p. Also assume that, for any real a, (i) \(\mathrm{E}[|\mathrm{Tr}(G(S)\Sigma^{-1})|]\) is finite; (ii) \(\lim_{|z_{jk}|\to\infty} G(S)S^{-1}\, F(|z_{jk}|^2 + a) = 0\) for j = 1, 2, ..., n and k = 1, 2, ..., p. Then we have

\[
\mathrm{E}[\mathrm{Tr}(G(S)\Sigma^{-1})] = \mathrm{E}^F\big[(n-p)\,\mathrm{Tr}(G(S)S^{-1}) + \mathrm{Tr}(D_S G(S))\big]. \tag{6}
\]

Proof. The proof is given in Section 3.

Remark 2.1 When \(f(x) = \pi^{-np}\exp(-x)\), the model (1) becomes the multivariate linear complex normal model and S follows the complex Wishart distribution \(CW_p(n, \Sigma)\), the density function of which is

\[
\frac{\mathrm{Det}(s)^{n-p}\exp(-\mathrm{Tr}(s\Sigma^{-1}))}{\mathrm{Det}(\Sigma)^{n}\,\pi^{p(p-1)/2}\prod_{j=1}^p \Gamma(n+1-j)}, \qquad s \in \mathbb{C}^{p\times p}_S.
\]

Here Γ(·) indicates the usual gamma function and \(\mathbb{C}^{p\times p}_S\) stands for the space of p × p semipositive definite Hermitian matrices. See Goodman (1963), Khatri (1965), and Andersen et al. (1995) for further information on the complex Wishart distribution. Under the assumption of the complex normal error, \(\mathrm{E}^F\) on the right-hand side of (6) becomes the usual expectation taken with respect to the above density function. Therefore the identity (6) becomes a complex extension of the Haff-Stein identity for the real Wishart distribution given by Haff (1979). This complex analogue of the Haff-Stein identity for the complex Wishart distribution is given by Svensson and Lundberg (2004).
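As a numerical companion to the complex Wishart distribution discussed in Remark 2.1, the sketch below (our own illustration, not from the paper; it uses the convention that a standard complex normal entry has real and imaginary parts i.i.d. N(0, 1/2)) simulates S = Z*Z for rows of Z complex normal with scale matrix Σ and checks the first-moment identity E[S] = nΣ:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 3, 10, 4000

# an arbitrary positive definite Hermitian scale matrix (illustrative choice)
b = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
sigma = b @ b.conj().T + p * np.eye(p)
chol = np.linalg.cholesky(sigma)         # sigma = chol @ chol*

acc = np.zeros((p, p), dtype=complex)
for _ in range(reps):
    w = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    z = w @ chol.conj().T                # rows of z: complex normal with covariance sigma
    acc += z.conj().T @ z                # S = Z*Z ~ CW_p(n, sigma)

mean_s = acc / reps
print(np.max(np.abs(mean_s - n * sigma)))   # small relative to n * sigma: E[S] = n Sigma
```

The averaged S is Hermitian and close to nΣ, which is the moment one expects from the density in Remark 2.1.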
The next two lemmas were given by Svensson (2004). Since our proofs are more elementary and self-contained, we record the proofs of the lemmas in Section 3.

Lemma 2.2 Decompose S = ULU*, in which \(U = (u_{jk})\) is a p × p unitary matrix and \(L = \mathrm{Diag}(l_1, l_2, \ldots, l_p)\) with \(l_1 > l_2 > \cdots > l_p > 0\). Then we have, for j, k, s, t = 1, 2, ..., p,

\[
d^S_{st}\, l_k = u_{sk}\,\bar u_{tk}, \qquad
d^S_{st}\, u_{jk} = \sum_{a\neq k}\frac{u_{ja}\,\bar u_{ta}\, u_{sk}}{l_k - l_a}, \qquad
d^S_{st}\, \bar u_{jk} = \sum_{a\neq k}\frac{\bar u_{ja}\, u_{sa}\,\bar u_{tk}}{l_k - l_a},
\]

in which \(d^S_{st}\) is given by (5).

Proof. The proof is given in Section 3.

Let \(R_p = \{(x_1, x_2, \ldots, x_p) \in \mathbb{R}^p : x_1 > x_2 > \cdots > x_p > 0\}\). The next lemma is a complex analogue of Theorem 5.1 given by Haff (1991).

Lemma 2.3 Let \(\varphi_1(L), \varphi_2(L), \ldots, \varphi_p(L)\) be differentiable real-valued functions on \(R_p\) with \(\varphi_k(L) \ge 0\), k = 1, 2, ..., p, and let \(\Phi(L) = \mathrm{Diag}(\varphi_1(L), \varphi_2(L), \ldots, \varphi_p(L))\). Then

\[
D_S[U\Phi(L)U^*] = U\Phi^{(1)}(L)U^*,
\]

in which \(\Phi^{(1)}(L) = \mathrm{Diag}(\varphi^{(1)}_1(L), \ldots, \varphi^{(1)}_p(L))\) and

\[
\varphi^{(1)}_k(L) = \sum_{a\neq k}\frac{\varphi_k(L) - \varphi_a(L)}{l_k - l_a} + \frac{\partial\varphi_k(L)}{\partial l_k}, \qquad k = 1, 2, \ldots, p.
\]

Proof. The proof is given in Section 3.

Lemma 2.4 Let \(\varphi_1(L), \ldots, \varphi_p(L)\) and \(\Phi(L)\) be as in Lemma 2.3. Then, under suitable conditions corresponding to those of Lemma 2.1, we have

\[
\mathrm{E}[\mathrm{Tr}(U\Phi(L)U^*\Sigma^{-1})] = \mathrm{E}^F\Big[\sum_{k=1}^p\Big\{\frac{\partial\varphi_k(L)}{\partial l_k} + \sum_{a\neq k}\frac{\varphi_k(L)-\varphi_a(L)}{l_k - l_a} + (n-p)\,\frac{\varphi_k(L)}{l_k}\Big\}\Big].
\]

Proof. The proof is obtained immediately from Lemmas 2.1 and 2.3.

Remark 2.2 As in Remark 2.1, set \(f(x) = \pi^{-np}\exp(-x)\). Then the identity in Lemma 2.4 reduces to that appearing in Theorem 1 of Svensson and Lundberg (2004).
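The first identity of Lemma 2.2 can be checked by finite differences. The sketch below is our own hypothetical verification, not part of the paper; it assumes the operator in (5) acts as \(d^S_{st} = \tfrac{1+\delta_{st}}{2}(\partial/\partial\mathrm{Re}\,s_{st} + \sqrt{-1}\,\partial/\partial\mathrm{Im}\,s_{st})\) on the independent Hermitian coordinates of S:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
z = rng.standard_normal((8, p)) + 1j * rng.standard_normal((8, p))
s = z.conj().T @ z                       # Hermitian positive definite, p x p

l, u = np.linalg.eigh(s)
l, u = l[::-1], u[:, ::-1]               # descending eigenvalues, as in the lemma

def eig_k(smat, k):
    """k-th largest eigenvalue (0-based) of a Hermitian matrix."""
    return np.sort(np.linalg.eigvalsh(smat))[::-1][k]

def d_st_lk(smat, s_idx, t_idx, k, eps=1e-6):
    """Numerical d^S_{st} l_k under the operator convention stated above."""
    delta = 1.0 if s_idx == t_idx else 0.0
    e_re = np.zeros((p, p), dtype=complex)
    e_re[s_idx, t_idx] += 1.0
    e_re[t_idx, s_idx] += 1.0            # Hermitian perturbation of Re s_st
    if s_idx == t_idx:
        e_re[s_idx, t_idx] -= 1.0        # diagonal coordinate is perturbed once
    d_re = (eig_k(smat + eps * e_re, k) - eig_k(smat - eps * e_re, k)) / (2 * eps)
    d_im = 0.0
    if s_idx != t_idx:
        e_im = np.zeros((p, p), dtype=complex)
        e_im[s_idx, t_idx] += 1j         # Hermitian perturbation of Im s_st
        e_im[t_idx, s_idx] -= 1j
        d_im = (eig_k(smat + eps * e_im, k) - eig_k(smat - eps * e_im, k)) / (2 * eps)
    return 0.5 * (1 + delta) * (d_re + 1j * d_im)

approx = d_st_lk(s, 0, 2, 1)             # (s, t, k) = (1, 3, 2) in 1-based notation
exact = u[0, 1] * np.conj(u[2, 1])       # u_{sk} * conj(u_{tk})
approx_diag = d_st_lk(s, 1, 1, 0)        # a diagonal case
exact_diag = abs(u[1, 0]) ** 2
print(abs(approx - exact), abs(approx_diag - exact_diag))   # both tiny (finite-difference noise)
```

The agreement is phase-invariant, since \(u_{sk}\bar u_{tk}\) does not depend on the arbitrary phase of each eigenvector column.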
2.2 Alternative estimators

The next proposition is a complex analogue of Proposition 1 of Kubokawa and Srivastava (1999).

Proposition 2.1 Decompose S = TT*, in which T is a p × p lower triangular matrix. Also set \(V = \mathrm{Diag}(v_1, \ldots, v_p)\) with the \(v_k\)'s, k = 1, ..., p, being positive constants. Then we have

\[
\mathrm{E}\big[\mathrm{Tr}(S\Sigma^{-1}) + \mathrm{Tr}(TVT^*\Sigma^{-1})\big] = \mathrm{E}^F\Big[np + \sum_{k=1}^p (n+p+1-2k)\,v_k\Big],
\]

where \(\mathrm{E}^F\) is defined by (4).

Proof. The above equality follows from two equalities, i.e., \(\mathrm{E}[\mathrm{Tr}(S\Sigma^{-1})] = \mathrm{E}^F[np]\) and \(\mathrm{E}[\mathrm{Tr}(TVT^*\Sigma^{-1})] = \mathrm{E}^F[\sum_{k=1}^p (n+p+1-2k)v_k]\). The first equality is derived from Lemma 2.4 with \(\varphi_k(L) = l_k\), k = 1, 2, ..., p. By an invariance argument it suffices to prove that the second equality holds in the case that \(\Sigma = I_p\). From an application of Lemma 2.1 and the fact that \(\mathrm{Tr}(D_S S) = p^2\), we have \(\mathrm{E}[\mathrm{Tr}(S)] = \mathrm{E}^F[p(n-p) + \mathrm{Tr}(D_S S)] = np\,\mathrm{E}^F[1]\). Furthermore, since the distribution of S is invariant under permutation of the coordinates, we see that

\[
\mathrm{E}[s_{kk}] = n\,\mathrm{E}^F[1] \tag{7}
\]

for k = 1, 2, ..., p. Also, note that the Jacobian of the transformation from S to T depends only on the diagonal elements of T. Thus, it suffices to show that \(\mathrm{E}[|t_{21}|^2] = \mathrm{E}^F[1]\) in the case that p = 2 in order to prove that, for k > l,

\[
\mathrm{E}[|t_{kl}|^2] = \mathrm{E}^F[1]. \tag{8}
\]

To prove that \(\mathrm{E}[|t_{21}|^2] = \mathrm{E}^F[1]\), let \(Z = (z_1, z_2)\) with an n × 1 vector \(z_1\). Set \(v = Qz_2\), where Q is a unitary matrix such that \(Qz_1 = (t_{11}, 0, \ldots, 0)^*\). We decompose \(v = (v_1, v_2^*)^*\) with a scalar \(v_1\). Recall that S = Z*Z to see that \(s_{21} = z_2^* z_1 = z_2^* Q^* Q z_1 = \bar v_1 t_{11}\). Furthermore the distribution of \((X, z_1, v_1, v_2)\) is

\[
\mathrm{Det}(\Sigma)^{-m} f\big(\mathrm{Tr}\{\Sigma^{-1}(x-\mu)^*(x-\mu)\} + z_1^* z_1 + |v_1|^2 + v_2^* v_2\big).
\]

Proceed in a manner similar to that used in the derivation of (17) in Section 3 to see that \(\mathrm{E}[|t_{21}|^2] = \mathrm{E}[|v_1|^2] = \mathrm{E}^F[1]\). By using (7) and (8), some algebraic computations complete the proof of the second equality.
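The two expectation identities behind Proposition 2.1 can be spot-checked by Monte Carlo in the complex normal case, where F = f and hence \(\mathrm{E}^F[1] = 1\). This is our own illustrative sketch (the scale matrix and the constants \(v_k\) are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 3, 8, 4000
sigma = np.diag([3.0, 1.0, 0.5]).astype(complex)
sigma_inv = np.linalg.inv(sigma)
chol = np.linalg.cholesky(sigma)
v = np.array([0.3, 0.2, 0.1])            # arbitrary positive constants v_k
ks = np.arange(1, p + 1)

lhs1 = 0.0
lhs2 = 0.0
for _ in range(reps):
    w = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    z = w @ chol.conj().T                # rows are complex normal with covariance sigma
    s = z.conj().T @ z                   # S ~ CW_p(n, sigma)
    t = np.linalg.cholesky(s)            # S = T T*, T lower triangular
    lhs1 += np.trace(s @ sigma_inv).real
    lhs2 += np.trace(t @ np.diag(v) @ t.conj().T @ sigma_inv).real

print(lhs1 / reps, n * p)                               # matches np
print(lhs2 / reps, np.sum((n + p + 1 - 2 * ks) * v))    # matches sum_k (n+p+1-2k) v_k
```

The second average does not depend on Σ, which is the invariance used in the proof.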
8 Remark.3 If S follows the complex Wishart distribution with the degrees of freedom n and a scale matrix Σ, i.e., CW p (n, Σ, then we have F = f in (4 from which it follows that E F [1] = 1. Thus, Proposition.1 results in the equations E[Tr (TVT Σ 1 ] = p (n + p +1 kv k and E[Tr (SΣ 1 ] = np. Consider estimators of the form Σ (JS = TDiag(ṽ 1,..., ṽ p T, (9 in which T is defined in Proposition.1 and ṽ k =(n + p +1 k 1,,,...,p. Proposition. Using the loss function L ( Σ, Σ given by (3 the James-Stein type estimator Σ (JS given by (9 is better than the usual estimator n 1 S uniformly for every function f( in (. Proof. The proof is obtained immediately from an application of Proposition.1. Next we consider a class of unitary invariant estimators of the form Σ (Φ = UDiag(ϕ 1 (L,...,ϕ p (LU, (10 in which U is defined in Lemma. and the ϕ k (L s, k =1,,...,p, are differentiable functions from R p to R. This class of estimators has been suggested by Abraham and Dey (1994 via an extension of orthogonally invariant estimators for the real normal covariance matrix. Proposition.3 Using the loss function L ( Σ, Σ given by (3 the estimators Σ (Φ given by (10 are better than the James-Stein type estimator Σ (JS given by (9 uniformly for every function f( in (, if the following inequalities hold almost surely; { p ϕ k p (L+ } ϕ k (L ϕ j (L +(n p ϕ k(l 0, (11 l k l k l j l k j k p { log(n + p +1 k + log ϕ } k(l 0. (1 l k Proof. For simplicity, we omit the argument L in ϕ k (L asϕ k for k =1,,...,p. Using Lemma.4 and Proposition.1 the risk difference between the estimators Σ (JS and Σ (Φ is [ p ] L ( Σ (JS, Σ L ( Σ (Φ, Σ = E F [p] E { log(n + p +1 k + log l k } [ p { E F ϕ k + l k j k [ p ] +E log ϕ k. 8 ϕ k ϕ j l k l j +(n p ϕ k l k }]
Rearranging the terms inside each of the two different types of expectations separately, we obtain the desired result.

Theorem 2.1 Let \(\tilde v_k = (n+p+1-2k)^{-1}\), k = 1, 2, ..., p, and decompose S = ULU*, in which \(U = (u_{jk})\) is a p × p unitary matrix and \(L = \mathrm{Diag}(l_1, l_2, \ldots, l_p)\) with \(l_1 > l_2 > \cdots > l_p > 0\). Under the loss function \(L(\hat\Sigma, \Sigma)\) given by (3), the Dey-Srinivasan type estimator

\[
\hat\Sigma^{DS} = U\,\mathrm{Diag}(\tilde v_1 l_1, \ldots, \tilde v_p l_p)\,U^*
\]

is better than the James-Stein type estimator \(\hat\Sigma^{JS}\) given by (9) uniformly for every function f(·) in (2).

Proof. The proof follows immediately from the verification of the inequalities (11) and (12) with \(\varphi_k = \tilde v_k l_k\), k = 1, 2, ..., p.

Remark 2.4 Sheena and Takemura (1992) called (10) order-preserving if it satisfies the condition \(\varphi_1(L) \ge \varphi_2(L) \ge \cdots \ge \varphi_p(L)\). The estimator \(\hat\Sigma^{DS}\) is further improved upon by the corresponding order-preserving estimator. This result follows from the fact that Lemma 1 of Sheena and Takemura holds for nonincreasing functions f(·) in (2). It can also be seen that an estimator similar to that given by Perron (1992) for the real normal covariance matrix improves upon the James-Stein type estimator \(\hat\Sigma^{JS}\) uniformly for every function f(·).

3 Proofs of Lemmas

Before we get into proving the lemmas presented in the previous section, we state three approaches to proving the Wishart identity in the real case. Firstly, Stein (1977) used integration-by-parts for the multivariate real normal distribution in which the mean vector is zero and the covariance matrix is the identity. Then Stein (1977) used an algebraic lemma on the eigenstructure of a symmetric matrix in order to obtain the result for the real case similar to that presented in Lemma 2.4. This approach was taken by Kubokawa and Srivastava (1999) in order to extend the Wishart identity to real elliptically contoured distributions, and it is also the approach taken in the present paper. Secondly, Haff used integration-by-parts for the real Wishart distribution and applied the same algebraic lemma as in Stein (1977).
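The dominance chain asserted by Proposition 2.2 and Theorem 2.1 can be illustrated by simulation in the complex Wishart case. A minimal sketch (our own illustration; the dimensions are arbitrary, the estimators follow (9) and Theorem 2.1, and the loss is (3)):

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, reps = 4, 10, 3000
sigma = np.eye(p, dtype=complex)         # near-spherical truth, where shrinkage helps most
ks = np.arange(1, p + 1)
v = 1.0 / (n + p + 1 - 2 * ks)           # constants v_k = (n + p + 1 - 2k)^(-1)

def stein_loss(est, truth):
    """Stein loss Tr(A) - log det(A) - p, with A = est @ inv(truth)."""
    a = est @ np.linalg.inv(truth)
    return np.trace(a).real - np.linalg.slogdet(a)[1] - truth.shape[0]

risk = {"mle": 0.0, "js": 0.0, "ds": 0.0}
for _ in range(reps):
    z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    s = z.conj().T @ z                   # S ~ CW_p(n, I)
    t = np.linalg.cholesky(s)            # S = T T*, T lower triangular
    l, u = np.linalg.eigh(s)
    l, u = l[::-1], u[:, ::-1]           # descending eigenvalues
    risk["mle"] += stein_loss(s / n, sigma)
    risk["js"] += stein_loss(t @ np.diag(v) @ t.conj().T, sigma)
    risk["ds"] += stein_loss(u @ np.diag(v * l) @ u.conj().T, sigma)

for key in risk:
    risk[key] /= reps
print(risk)                              # estimated risks decrease: mle > js > ds
```

At a spherical Σ the improvement of the James-Stein type estimator over \(n^{-1}S\) is a deterministic log-determinant gain, and the eigenvalue-based Dey-Srinivasan type estimator improves further, consistent with the theory.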
This approach was taken by Svensson (2004) in order to obtain the Haff-Stein identity for the complex Wishart distribution. Thirdly, Sheena (1995) used integration-by-parts for the distribution of the eigenvalues of the real Wishart distribution. The third approach gives a direct proof of the result for the real case similar to that presented in Lemma 2.4. Also, this approach was taken by Konno (2005)
in order to obtain integration-by-parts for the Wishart distribution on irreducible symmetric cones.

The next lemma describes the chain rule for differentiation, which is useful for the proof of Lemma 2.1.

Lemma 3.1 Let \(Y = (y_{jk})\) be an n × p matrix of complex entries and set W = Y*Y. Furthermore, let

\[
\frac{\partial}{\partial y_{jk}} = \frac{\partial}{\partial(\mathrm{Re}\,y_{jk})} + \sqrt{-1}\,\frac{\partial}{\partial(\mathrm{Im}\,y_{jk})}, \qquad
\frac{\partial}{\partial w_{kl}} = \frac{\partial}{\partial(\mathrm{Re}\,w_{kl})} + \sqrt{-1}\,\frac{\partial}{\partial(\mathrm{Im}\,w_{kl})},
\]

for k, l = 1, 2, ..., p and j = 1, 2, ..., n. Then we have

\[
\frac{\partial}{\partial y_{jl}} = \sum_{k=1}^p (1+\delta_{kl})\, y_{jk}\,\frac{\partial}{\partial w_{kl}}.
\]

Proof. The real and the imaginary parts of the \((c_1, c_2)\)-element of the matrix W are, respectively,

\[
\mathrm{Re}\,w_{c_1 c_2} = \sum_{c_3}\big\{\mathrm{Re}\,y_{c_3 c_1}\,\mathrm{Re}\,y_{c_3 c_2} + \mathrm{Im}\,y_{c_3 c_1}\,\mathrm{Im}\,y_{c_3 c_2}\big\}, \qquad
\mathrm{Im}\,w_{c_1 c_2} = \sum_{c_3}\big\{\mathrm{Re}\,y_{c_3 c_1}\,\mathrm{Im}\,y_{c_3 c_2} - \mathrm{Im}\,y_{c_3 c_1}\,\mathrm{Re}\,y_{c_3 c_2}\big\}.
\]

From these two equations, the partial derivatives of the coordinates of W with respect to the real part of \(y_{jl}\) are

\[
\frac{\partial(\mathrm{Re}\,w_{c_1 c_2})}{\partial(\mathrm{Re}\,y_{jl})} = \delta_{c_1 l}\,\mathrm{Re}\,y_{j c_2} + \delta_{c_2 l}\,\mathrm{Re}\,y_{j c_1}, \qquad
\frac{\partial(\mathrm{Im}\,w_{c_1 c_2})}{\partial(\mathrm{Re}\,y_{jl})} = \delta_{c_1 l}\,\mathrm{Im}\,y_{j c_2} - \delta_{c_2 l}\,\mathrm{Im}\,y_{j c_1},
\]

while those with respect to the imaginary part are

\[
\frac{\partial(\mathrm{Re}\,w_{c_1 c_2})}{\partial(\mathrm{Im}\,y_{jl})} = \delta_{c_1 l}\,\mathrm{Im}\,y_{j c_2} + \delta_{c_2 l}\,\mathrm{Im}\,y_{j c_1}, \qquad
\frac{\partial(\mathrm{Im}\,w_{c_1 c_2})}{\partial(\mathrm{Im}\,y_{jl})} = -\delta_{c_1 l}\,\mathrm{Re}\,y_{j c_2} + \delta_{c_2 l}\,\mathrm{Re}\,y_{j c_1}.
\]
Therefore, applying the chain rule to the independent coordinates of W, namely \(\mathrm{Re}\,w_{c_1 c_2}\) for \(c_1 \le c_2\) and \(\mathrm{Im}\,w_{c_1 c_2}\) for \(c_1 < c_2\), and collecting the coefficients from the four expansions above, we have

\[
\frac{\partial}{\partial(\mathrm{Re}\,y_{jl})} + \sqrt{-1}\,\frac{\partial}{\partial(\mathrm{Im}\,y_{jl})}
= \sum_{k=1}^p (1+\delta_{kl})\, y_{jk}\Big(\frac{\partial}{\partial(\mathrm{Re}\,w_{kl})} + \sqrt{-1}\,\frac{\partial}{\partial(\mathrm{Im}\,w_{kl})}\Big),
\]

where for k > l we interpret \(\partial/\partial(\mathrm{Re}\,w_{kl}) = \partial/\partial(\mathrm{Re}\,w_{lk})\) and \(\partial/\partial(\mathrm{Im}\,w_{kl}) = -\partial/\partial(\mathrm{Im}\,w_{lk})\). Since the real part of W is symmetric and the imaginary part of W is skew-symmetric, this completes the proof.

3.1 Proof of Lemma 2.1

To reduce the identity (6) to the case when \(\Sigma = I_p\), we first observe that

\[
\mathrm{Tr}(D_S\, GS) = \mathrm{Tr}(S D_S G) + p\,\mathrm{Tr}(G), \tag{13}
\]

since the (i, i)-element of \(D_S\, GS\) can be written as

\[
(D_S\, GS)_{ii} = \sum_{j,k}\big\{s_{ji}\,(d^S_{ik}\, g_{kj}) + g_{kj}\,(d^S_{ik}\, s_{ji})\big\}
= \sum_k s_{ki}\,(D_S G)_{ik} + \sum_j g_{jj}.
\]
The second last equality in the above equations follows from the fact that the real part of S is symmetric and the imaginary part is skew-symmetric, which gives \(d^S_{ik}\, s_{ji} = \delta_{jk}\). Replacing G with GS in (6), we get

\[
\mathrm{E}[\mathrm{Tr}(GS\Sigma^{-1})] = \mathrm{E}^F\big[(n-p)\,\mathrm{Tr}(G) + \mathrm{Tr}(D_S\, GS)\big].
\]

Substitute (13) into the above equation to get

\[
\mathrm{E}[\mathrm{Tr}(GS\Sigma^{-1})] = \mathrm{E}^F\big[n\,\mathrm{Tr}(G) + \mathrm{Tr}(S D_S G)\big].
\]

Hence it suffices to prove that, for a complex-valued function h(S),

\[
\mathrm{E}[h(S)\, S\Sigma^{-1}] = \mathrm{E}^F\big[n\, h(S)\, I_p + S D_S h(S)\big]. \tag{14}
\]

To this end, write \(\Sigma = AA^*\) and \(S = AWA^*\), in which \(W = (w_{ij})\) is a p × p Hermitian matrix and \(A = (a_{ij})\) is a p × p nonsingular matrix with its inverse \(A^{-1} = (a^{ij})\). Since \(s_{ij} = \sum_{c_1, c_4} a_{i c_1}\, w_{c_1 c_4}\,\bar a_{j c_4}\), a computation parallel to the proof of Lemma 3.1 shows that the operator matrix transforms as

\[
d^S_{ij} = \sum_{c_1, c_4} \bar a^{c_1 i}\, d^W_{c_1 c_4}\, a^{c_4 j},
\]

which is the (i, j)-element of \((A^{-1})^* D_W A^{-1}\). Using this equation, we can reduce (14) to

\[
\mathrm{E}[W h(S)] = \mathrm{E}^F\big[n\, h(S)\, I_p + W D_W h(S)\big]. \tag{15}
\]

Furthermore, let \(Z = YA^*\) with \(Y = (y_{ij})\). Since W = Y*Y, it follows from Lemma 3.1 that

\[
(W D_W h(S))_{ij} = \sum_{c_3} w_{i c_3}\,\frac{1+\delta_{j c_3}}{2}\,\frac{\partial h}{\partial w_{c_3 j}}(S) = \frac{1}{2}\sum_{k=1}^n \bar y_{ki}\,\frac{\partial h}{\partial y_{kj}}(S).
\]

Furthermore, note that

\[
\frac{\partial}{\partial y_{kj}}\{\bar y_{ki}\, h(S)\} = 2\delta_{ij}\, h(S) + \bar y_{ki}\,\frac{\partial h}{\partial y_{kj}}(S).
\]

Thus, it suffices to prove that

\[
\mathrm{E}[w_{ij}\, h(S)] = \mathrm{E}^F\Big[n\delta_{ij}\, h(S) + \frac{1}{2}\sum_{k=1}^n \bar y_{ki}\,\frac{\partial h}{\partial y_{kj}}(S)\Big], \tag{16}
\]

that is,

\[
\mathrm{E}[\bar y_{ki}\, y_{kj}\, h(S)] = \mathrm{E}^F\Big[\delta_{ij}\, h(S) + \frac{1}{2}\,\bar y_{ki}\,\frac{\partial h}{\partial y_{kj}}(S)\Big], \tag{17}
\]
in which the expectation is taken with respect to the density function \(\mathrm{Det}(\Sigma)^{-m} f(\tilde R)\) with

\[
\tilde R = \mathrm{Tr}\big(\Sigma^{-1}(x-\mu)^*(x-\mu)\big) + \sum_{j,k}\bar y_{jk}\, y_{jk}.
\]

But, from integration-by-parts, we note that

\[
\int \bar y_{ki}\, y_{kj}\, h(S)\, f(\tilde R)\, dx \prod_{c_1, c_2} dy_{c_1 c_2}
= \frac{1}{2}\int \frac{\partial}{\partial y_{kj}}\{\bar y_{ki}\, h(S)\}\, F(\tilde R)\, dx \prod_{c_1, c_2} dy_{c_1 c_2},
\]

since \(y_{kj} f(\tilde R) = -\tfrac{1}{2}\,\partial F(\tilde R)/\partial y_{kj}\), and that

\[
\frac{\partial}{\partial y_{kj}}\{\bar y_{ki}\, h(S)\} = 2\delta_{ij}\, h(S) + \bar y_{ki}\,\frac{\partial}{\partial y_{kj}}\{h(S)\}.
\]

Here, dx and \(dy_{c_1 c_2}\) indicate \(\prod_{j=1}^m \prod_{k=1}^p d(\mathrm{Re}\,x_{jk})\, d(\mathrm{Im}\,x_{jk})\) and \(d(\mathrm{Re}\,y_{c_1 c_2})\, d(\mathrm{Im}\,y_{c_1 c_2})\), respectively. These two equations give the equation (17), which completes the proof of the lemma.

3.2 Proof of Lemma 2.2

Since the real and the imaginary parts of S are symmetric and skew-symmetric matrices respectively, we have

\[
ds_{ab}(d^S_{st}) = \frac{1+\delta_{st}}{2}\big(d(\mathrm{Re}\,s_{ab}) + \sqrt{-1}\, d(\mathrm{Im}\,s_{ab})\big)\Big(\frac{\partial}{\partial(\mathrm{Re}\,s_{st})} + \sqrt{-1}\,\frac{\partial}{\partial(\mathrm{Im}\,s_{st})}\Big)
= \frac{1+\delta_{st}}{2}\Big\{\frac{\delta_{as}\delta_{bt}+\delta_{at}\delta_{bs}}{1+\delta_{st}} - \frac{(\delta_{as}\delta_{bt}-\delta_{at}\delta_{bs})(1-\delta_{st})}{1+\delta_{st}}\Big\}
= \delta_{at}\delta_{bs}. \tag{18}
\]

Now, taking the differentials of S = ULU* and multiplying on the left by U* and on the right by U, we get

\[
U^*\, dS\, U = (U^* dU)L + L(U^* dU)^* + dL. \tag{19}
\]

But, from the differentials of \(U^*U = I_p\), we can see that \(U^* dU\) is skew-Hermitian. Expressing the diagonal and the off-diagonal elements of (19), we get

\[
dl_k = \{U^*\, dS\, U\}_{kk}, \tag{20}
\]

\[
\{U^* dU\}_{ak} = \begin{cases}\dfrac{1}{l_k - l_a}\{U^*\, dS\, U\}_{ak} & (a \neq k),\\[4pt] 0 & (a = k).\end{cases} \tag{21}
\]
Using (18), (20), and (21), we get

\[
d^S_{st}\, l_k = \sum_{a,b} \bar u_{ak}\, ds_{ab}(d^S_{st})\, u_{bk} = \sum_{a,b} \bar u_{ak}\,\delta_{at}\delta_{bs}\, u_{bk} = \bar u_{tk}\, u_{sk},
\]

\[
d^S_{st}\, u_{jk} = \sum_{a\neq k} u_{ja}\,(U^* dU)_{ak}(d^S_{st})
= \sum_{a\neq k}\frac{1}{l_k - l_a}\sum_{c_1, c_2} u_{ja}\,\bar u_{c_1 a}\,\delta_{c_1 t}\delta_{c_2 s}\, u_{c_2 k}
= \sum_{a\neq k}\frac{u_{ja}\,\bar u_{ta}\, u_{sk}}{l_k - l_a},
\]

which completes the proof of the first and the second equalities in this lemma. The third equality in this lemma is obtained by taking the complex conjugate of (19) and proceeding via a calculation similar to that presented in the derivation of the second equality.

3.3 Proof of Lemma 2.3

Use Lemma 2.2 to see that the (i, j)-element of \(D_S[U\Phi(L)U^*]\) is

\[
\sum_{k_1, k_2} d^S_{i k_1}\big(u_{k_1 k_2}\,\varphi_{k_2}\,\bar u_{j k_2}\big)
= \sum_{k_1, k_2}\Big[\varphi_{k_2}\,\bar u_{j k_2}\, d^S_{i k_1} u_{k_1 k_2}
+ \varphi_{k_2}\, u_{k_1 k_2}\, d^S_{i k_1}\bar u_{j k_2}
+ u_{k_1 k_2}\,\bar u_{j k_2}\sum_{k_3}\frac{\partial\varphi_{k_2}}{\partial l_{k_3}}\, d^S_{i k_1} l_{k_3}\Big]
= \sum_k u_{ik}\,\bar u_{jk}\Big(\frac{\partial\varphi_k}{\partial l_k} + \sum_{a\neq k}\frac{\varphi_k - \varphi_a}{l_k - l_a}\Big),
\]

which completes the proof.

Acknowledgements

I am grateful to the anonymous referee and the advisory editor, Professor Michael D. Perlman, for their detailed and helpful comments and suggestions, which enabled me to improve the presentation of the paper. I would like to thank Dr. Lennart Svensson for providing me with his unpublished manuscript.

References

Abraham, D.A., Dey, D.K., 1994. The application of improved covariance estimation to
adaptive beamforming and detection. Signals, Systems and Computers, Conference Record of the Twenty-Eighth Asilomar Conference, Volume 1.

Andersen, H.H., Højbjerre, M., Sørensen, D., Eriksen, P.S., 1995. Linear and Graphical Models. Springer-Verlag, New York.

Daniels, M.J., Kass, R.E., 1999. Nonconjugate Bayesian estimation of covariance matrices and its use in hierarchical models. J. Amer. Statist. Assoc. 94.

Daniels, M.J., Kass, R.E., 2001. Shrinkage estimators for covariance matrices. Biometrics 57.

Dey, D.K., Srinivasan, C., 1985. Estimation of a covariance matrix under Stein's loss. Ann. Statist. 13.

Dogandžić, A., Nehorai, A., 2003. Generalized multivariate analysis of variance. IEEE Signal Processing Magazine.

Goodman, N.R., 1963. Statistical analysis based on a certain multivariate complex Gaussian distribution (An introduction). Ann. Math. Statist. 34.

Haff, L.R., 1979. Estimation of the inverse covariance matrix: random mixtures of the inverse Wishart matrix and the identity. Ann. Statist. 7.

Haff, L.R., 1991. The variational form of certain Bayes estimators. Ann. Statist. 19.

James, W., Stein, C., 1961. Estimation with quadratic loss. In: Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1, Univ. California Press.

Khatri, C.G., 1965. Classical statistical analysis based on a certain multivariate complex Gaussian distribution. Ann. Math. Statist. 36.

Konno, Y., 2005. Estimation of a normal covariance matrix parametrized by irreducible symmetric cones under Stein's loss. Accepted for publication in Journal of Multivariate Analysis.

Krishnaiah, P.R., Lin, J., 1986. Complex elliptically symmetric distributions. Comm. Statist. A Theory Methods 15.

Kubokawa, T., Srivastava, M.S., 1999. Robust improvement in estimation of a covariance matrix in an elliptically contoured distribution. Ann. Statist. 27.

Lin, S.P., Perlman, M.D., 1985. A Monte Carlo comparison of four estimators for a covariance
matrix. In: Multivariate Analysis VI (P.R. Krishnaiah, ed.), North-Holland, Amsterdam.

Maiwald, D., Kraus, D., 1994. Calculation of moments of complex Wishart and complex inverse Wishart distributed matrices. Radar, Sonar and Navigation, IEEE Proceedings 147.

Micheas, A.C., Dey, D.K., Mardia, K.V., 2005. Complex elliptical distributions with application to shape analysis. Journal of Statistical Planning and Inference, in press; corrected proof available online 7 April 2005.

Muirhead, R.J., 1982. Aspects of Multivariate Statistical Analysis. John Wiley & Sons, Inc.

Perron, F., 1992. Minimax estimators of a covariance matrix. J. Multivariate Anal. 43.

Sheena, Y., 1995. Unbiased estimator of risk for an orthogonally invariant estimator of a covariance matrix. J. Japan Statist. Soc. 25.

Sheena, Y., Takemura, A., 1992. Inadmissibility of non-order-preserving orthogonally invariant estimators of the covariance matrix in the case of Stein's loss. J. Multivariate Anal. 41.

Stein, C., 1977. Lectures on the theory of estimation of many parameters. In: Studies in the Statistical Theory of Estimation I (I.A. Ibragimov and M.S. Nikulin, eds.), Proceedings of Scientific Seminars of the Steklov Institute, Leningrad Division 74, 4-65 (in Russian). English translation available in J. Sov. Math. 34.

Svensson, L., 2004. A useful identity for complex Wishart forms. Technical report, Department of Signals and Systems, Chalmers University of Technology, December 2004.

Svensson, L., Lundberg, M., 2004. Estimating complex covariance matrices. Signals, Systems and Computers, Conference Record of the Thirty-Eighth Asilomar Conference, Volume 2.

Young, R., Berger, J.O., 1994. Estimation of a covariance matrix using the reference prior. Ann. Statist. 22.
More informationMore Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order Restriction
Sankhyā : The Indian Journal of Statistics 2007, Volume 69, Part 4, pp. 700-716 c 2007, Indian Statistical Institute More Powerful Tests for Homogeneity of Multivariate Normal Mean Vectors under an Order
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationDecomposable and Directed Graphical Gaussian Models
Decomposable Decomposable and Directed Graphical Gaussian Models Graphical Models and Inference, Lecture 13, Michaelmas Term 2009 November 26, 2009 Decomposable Definition Basic properties Wishart density
More informationA Note on Some Wishart Expectations
Metrika, Volume 33, 1986, page 247-251 A Note on Some Wishart Expectations By R. J. Muirhead 2 Summary: In a recent paper Sharma and Krishnamoorthy (1984) used a complicated decisiontheoretic argument
More informationInferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data
Journal of Multivariate Analysis 78, 6282 (2001) doi:10.1006jmva.2000.1939, available online at http:www.idealibrary.com on Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone
More informationEstimation of parametric functions in Downton s bivariate exponential distribution
Estimation of parametric functions in Downton s bivariate exponential distribution George Iliopoulos Department of Mathematics University of the Aegean 83200 Karlovasi, Samos, Greece e-mail: geh@aegean.gr
More information1 Data Arrays and Decompositions
1 Data Arrays and Decompositions 1.1 Variance Matrices and Eigenstructure Consider a p p positive definite and symmetric matrix V - a model parameter or a sample variance matrix. The eigenstructure is
More informationElliptically Contoured Distributions
Elliptically Contoured Distributions Recall: if X N p µ, Σ), then { 1 f X x) = exp 1 } det πσ x µ) Σ 1 x µ) So f X x) depends on x only through x µ) Σ 1 x µ), and is therefore constant on the ellipsoidal
More informationABOUT PRINCIPAL COMPONENTS UNDER SINGULARITY
ABOUT PRINCIPAL COMPONENTS UNDER SINGULARITY José A. Díaz-García and Raúl Alberto Pérez-Agamez Comunicación Técnica No I-05-11/08-09-005 (PE/CIMAT) About principal components under singularity José A.
More informationConvergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit
Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Evan Kwiatkowski, Jan Mandel University of Colorado Denver December 11, 2014 OUTLINE 2 Data Assimilation Bayesian Estimation
More informationLECTURES IN ECONOMETRIC THEORY. John S. Chipman. University of Minnesota
LCTURS IN CONOMTRIC THORY John S. Chipman University of Minnesota Chapter 5. Minimax estimation 5.. Stein s theorem and the regression model. It was pointed out in Chapter 2, section 2.2, that if no a
More informationMultivariate Gaussian Analysis
BS2 Statistical Inference, Lecture 7, Hilary Term 2009 February 13, 2009 Marginal and conditional distributions For a positive definite covariance matrix Σ, the multivariate Gaussian distribution has density
More informationKernel Method: Data Analysis with Positive Definite Kernels
Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University
More informationNotes on Random Vectors and Multivariate Normal
MATH 590 Spring 06 Notes on Random Vectors and Multivariate Normal Properties of Random Vectors If X,, X n are random variables, then X = X,, X n ) is a random vector, with the cumulative distribution
More informationELEC633: Graphical Models
ELEC633: Graphical Models Tahira isa Saleem Scribe from 7 October 2008 References: Casella and George Exploring the Gibbs sampler (1992) Chib and Greenberg Understanding the Metropolis-Hastings algorithm
More informationMohsen Pourahmadi. 1. A sampling theorem for multivariate stationary processes. J. of Multivariate Analysis, Vol. 13, No. 1 (1983),
Mohsen Pourahmadi PUBLICATIONS Books and Editorial Activities: 1. Foundations of Time Series Analysis and Prediction Theory, John Wiley, 2001. 2. Computing Science and Statistics, 31, 2000, the Proceedings
More informationEquality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.
Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read
More informationHigh-dimensional two-sample tests under strongly spiked eigenvalue models
1 High-dimensional two-sample tests under strongly spiked eigenvalue models Makoto Aoshima and Kazuyoshi Yata University of Tsukuba Abstract: We consider a new two-sample test for high-dimensional data
More informationSection 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices
3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices 1 Section 3.2. Multiplication of Matrices and Multiplication of Vectors and Matrices Note. In this section, we define the product
More informationA note on 5 5 Completely positive matrices
A note on 5 5 Completely positive matrices Hongbo Dong and Kurt Anstreicher October 2009; revised April 2010 Abstract In their paper 5 5 Completely positive matrices, Berman and Xu [BX04] attempt to characterize
More informationTesting Some Covariance Structures under a Growth Curve Model in High Dimension
Department of Mathematics Testing Some Covariance Structures under a Growth Curve Model in High Dimension Muni S. Srivastava and Martin Singull LiTH-MAT-R--2015/03--SE Department of Mathematics Linköping
More informationTrace Inequalities for a Block Hadamard Product
Filomat 32:1 2018), 285 292 https://doiorg/102298/fil1801285p Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://wwwpmfniacrs/filomat Trace Inequalities for
More informationOPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY
published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower
More informationCLASSICAL GROUPS DAVID VOGAN
CLASSICAL GROUPS DAVID VOGAN 1. Orthogonal groups These notes are about classical groups. That term is used in various ways by various people; I ll try to say a little about that as I go along. Basically
More informationLecture 2 INF-MAT : A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems
Lecture 2 INF-MAT 4350 2008: A boundary value problem and an eigenvalue problem; Block Multiplication; Tridiagonal Systems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University
More informationStatistical Inference On the High-dimensional Gaussian Covarianc
Statistical Inference On the High-dimensional Gaussian Covariance Matrix Department of Mathematical Sciences, Clemson University June 6, 2011 Outline Introduction Problem Setup Statistical Inference High-Dimensional
More informationMATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.
MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation
More informationOctober 25, 2013 INNER PRODUCT SPACES
October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal
More informationIntroduction to Group Theory
Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)
More informationStein s Method and Characteristic Functions
Stein s Method and Characteristic Functions Alexander Tikhomirov Komi Science Center of Ural Division of RAS, Syktyvkar, Russia; Singapore, NUS, 18-29 May 2015 Workshop New Directions in Stein s method
More informationLecture 10: A (Brief) Introduction to Group Theory (See Chapter 3.13 in Boas, 3rd Edition)
Lecture 0: A (Brief) Introduction to Group heory (See Chapter 3.3 in Boas, 3rd Edition) Having gained some new experience with matrices, which provide us with representations of groups, and because symmetries
More informationA CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES
A CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES NICOLAE POPA Abstract In this paper we characterize the Schur multipliers of scalar type (see definition below) acting on scattered
More informationLinear Algebra. Workbook
Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx
More informationAlgebra I Fall 2007
MIT OpenCourseWare http://ocw.mit.edu 18.701 Algebra I Fall 007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 18.701 007 Geometry of the Special Unitary
More informationMultivariate Normal-Laplace Distribution and Processes
CHAPTER 4 Multivariate Normal-Laplace Distribution and Processes The normal-laplace distribution, which results from the convolution of independent normal and Laplace random variables is introduced by
More informationDeterminants of Partition Matrices
journal of number theory 56, 283297 (1996) article no. 0018 Determinants of Partition Matrices Georg Martin Reinhart Wellesley College Communicated by A. Hildebrand Received February 14, 1994; revised
More informationESTIMATORS FOR GAUSSIAN MODELS HAVING A BLOCK-WISE STRUCTURE
Statistica Sinica 9 2009, 885-903 ESTIMATORS FOR GAUSSIAN MODELS HAVING A BLOCK-WISE STRUCTURE Lawrence D. Brown and Linda H. Zhao University of Pennsylvania Abstract: Many multivariate Gaussian models
More information18Ï È² 7( &: ÄuANOVAp.O`û5 571 Based on this ANOVA model representation, Sobol (1993) proposed global sensitivity index, S i1...i s = D i1...i s /D, w
A^VÇÚO 1 Êò 18Ï 2013c12 Chinese Journal of Applied Probability and Statistics Vol.29 No.6 Dec. 2013 Optimal Properties of Orthogonal Arrays Based on ANOVA High-Dimensional Model Representation Chen Xueping
More informationSpectral Properties of Matrix Polynomials in the Max Algebra
Spectral Properties of Matrix Polynomials in the Max Algebra Buket Benek Gursoy 1,1, Oliver Mason a,1, a Hamilton Institute, National University of Ireland, Maynooth Maynooth, Co Kildare, Ireland Abstract
More informationSTABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin
On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting
More informationOn Hadamard and Kronecker Products Over Matrix of Matrices
General Letters in Mathematics Vol 4, No 1, Feb 2018, pp13-22 e-issn 2519-9277, p-issn 2519-9269 Available online at http:// wwwrefaadcom On Hadamard and Kronecker Products Over Matrix of Matrices Z Kishka1,
More informationRadon Transforms and the Finite General Linear Groups
Claremont Colleges Scholarship @ Claremont All HMC Faculty Publications and Research HMC Faculty Scholarship 1-1-2004 Radon Transforms and the Finite General Linear Groups Michael E. Orrison Harvey Mudd
More informationOn The Weights of Binary Irreducible Cyclic Codes
On The Weights of Binary Irreducible Cyclic Codes Yves Aubry and Philippe Langevin Université du Sud Toulon-Var, Laboratoire GRIM F-83270 La Garde, France, {langevin,yaubry}@univ-tln.fr, WWW home page:
More informationPHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review
1 PHYS 705: Classical Mechanics Rigid Body Motion Introduction + Math Review 2 How to describe a rigid body? Rigid Body - a system of point particles fixed in space i r ij j subject to a holonomic constraint:
More informationTwo remarks on normality preserving Borel automorphisms of R n
Proc. Indian Acad. Sci. (Math. Sci.) Vol. 3, No., February 3, pp. 75 84. c Indian Academy of Sciences Two remarks on normality preserving Borel automorphisms of R n K R PARTHASARATHY Theoretical Statistics
More informationIntegral Similarity and Commutators of Integral Matrices Thomas J. Laffey and Robert Reams (University College Dublin, Ireland) Abstract
Integral Similarity and Commutators of Integral Matrices Thomas J. Laffey and Robert Reams (University College Dublin, Ireland) Abstract Let F be a field, M n (F ) the algebra of n n matrices over F and
More informationOn prediction and density estimation Peter McCullagh University of Chicago December 2004
On prediction and density estimation Peter McCullagh University of Chicago December 2004 Summary Having observed the initial segment of a random sequence, subsequent values may be predicted by calculating
More informationNew insights into best linear unbiased estimation and the optimality of least-squares
Journal of Multivariate Analysis 97 (2006) 575 585 www.elsevier.com/locate/jmva New insights into best linear unbiased estimation and the optimality of least-squares Mario Faliva, Maria Grazia Zoia Istituto
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationMathematics Department Stanford University Math 61CM/DM Inner products
Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector
More informationStochastic Design Criteria in Linear Models
AUSTRIAN JOURNAL OF STATISTICS Volume 34 (2005), Number 2, 211 223 Stochastic Design Criteria in Linear Models Alexander Zaigraev N. Copernicus University, Toruń, Poland Abstract: Within the framework
More informationBayesian Estimation of Regression Coefficients Under Extended Balanced Loss Function
Communications in Statistics Theory and Methods, 43: 4253 4264, 2014 Copyright Taylor & Francis Group, LLC ISSN: 0361-0926 print / 1532-415X online DOI: 10.1080/03610926.2012.725498 Bayesian Estimation
More informationExpected probabilities of misclassification in linear discriminant analysis based on 2-Step monotone missing samples
Expected probabilities of misclassification in linear discriminant analysis based on 2-Step monotone missing samples Nobumichi Shutoh, Masashi Hyodo and Takashi Seo 2 Department of Mathematics, Graduate
More informationTwo Results About The Matrix Exponential
Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about
More informationPolynomials of small degree evaluated on matrices
Polynomials of small degree evaluated on matrices Zachary Mesyan January 1, 2013 Abstract A celebrated theorem of Shoda states that over any field K (of characteristic 0), every matrix with trace 0 can
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.
More informationTesting linear hypotheses of mean vectors for high-dimension data with unequal covariance matrices
Testing linear hypotheses of mean vectors for high-dimension data with unequal covariance matrices Takahiro Nishiyama a,, Masashi Hyodo b, Takashi Seo a, Tatjana Pavlenko c a Department of Mathematical
More informationMean squared error matrix comparison of least aquares and Stein-rule estimators for regression coefficients under non-normal disturbances
METRON - International Journal of Statistics 2008, vol. LXVI, n. 3, pp. 285-298 SHALABH HELGE TOUTENBURG CHRISTIAN HEUMANN Mean squared error matrix comparison of least aquares and Stein-rule estimators
More informationWavelets and Linear Algebra
Wavelets and Linear Algebra 4(1) (2017) 43-51 Wavelets and Linear Algebra Vali-e-Asr University of Rafsanan http://walavruacir Some results on the block numerical range Mostafa Zangiabadi a,, Hamid Reza
More informationAbstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.
HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization
More informationAlgebraic Information Geometry for Learning Machines with Singularities
Algebraic Information Geometry for Learning Machines with Singularities Sumio Watanabe Precision and Intelligence Laboratory Tokyo Institute of Technology 4259 Nagatsuta, Midori-ku, Yokohama, 226-8503
More informationStat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2
Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, 2010 Jeffreys priors Lecturer: Michael I. Jordan Scribe: Timothy Hunter 1 Priors for the multivariate Gaussian Consider a multivariate
More informationA Characterization of (3+1)-Free Posets
Journal of Combinatorial Theory, Series A 93, 231241 (2001) doi:10.1006jcta.2000.3075, available online at http:www.idealibrary.com on A Characterization of (3+1)-Free Posets Mark Skandera Department of
More informationMathematical Institute, University of Utrecht. The problem of estimating the mean of an observed Gaussian innite-dimensional vector
On Minimax Filtering over Ellipsoids Eduard N. Belitser and Boris Y. Levit Mathematical Institute, University of Utrecht Budapestlaan 6, 3584 CD Utrecht, The Netherlands The problem of estimating the mean
More informationON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES
Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.
More informationRandom Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws. Symeon Chatzinotas February 11, 2013 Luxembourg
Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws Symeon Chatzinotas February 11, 2013 Luxembourg Outline 1. Random Matrix Theory 1. Definition 2. Applications 3. Asymptotics 2. Ensembles
More informationarxiv: v1 [math.dg] 1 Jul 2014
Constrained matrix Li-Yau-Hamilton estimates on Kähler manifolds arxiv:1407.0099v1 [math.dg] 1 Jul 014 Xin-An Ren Sha Yao Li-Ju Shen Guang-Ying Zhang Department of Mathematics, China University of Mining
More informationOn the smallest eigenvalues of covariance matrices of multivariate spatial processes
On the smallest eigenvalues of covariance matrices of multivariate spatial processes François Bachoc, Reinhard Furrer Toulouse Mathematics Institute, University Paul Sabatier, France Institute of Mathematics
More informationJournal of Statistical Research 2007, Vol. 41, No. 1, pp Bangladesh
Journal of Statistical Research 007, Vol. 4, No., pp. 5 Bangladesh ISSN 056-4 X ESTIMATION OF AUTOREGRESSIVE COEFFICIENT IN AN ARMA(, ) MODEL WITH VAGUE INFORMATION ON THE MA COMPONENT M. Ould Haye School
More informationMath 489AB Exercises for Chapter 1 Fall Section 1.0
Math 489AB Exercises for Chapter 1 Fall 2008 Section 1.0 1.0.2 We want to maximize x T Ax subject to the condition x T x = 1. We use the method of Lagrange multipliers. Let f(x) = x T Ax and g(x) = x T
More informationLinear Ordinary Differential Equations
MTH.B402; Sect. 1 20180703) 2 Linear Ordinary Differential Equations Preliminaries: Matrix Norms. Denote by M n R) the set of n n matrix with real components, which can be identified the vector space R
More informationMultivariate Analysis and Likelihood Inference
Multivariate Analysis and Likelihood Inference Outline 1 Joint Distribution of Random Variables 2 Principal Component Analysis (PCA) 3 Multivariate Normal Distribution 4 Likelihood Inference Joint density
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More informationSometimes the domains X and Z will be the same, so this might be written:
II. MULTIVARIATE CALCULUS The first lecture covered functions where a single input goes in, and a single output comes out. Most economic applications aren t so simple. In most cases, a number of variables
More informationGaps between classes of matrix monotone functions
arxiv:math/0204204v [math.oa] 6 Apr 2002 Gaps between classes of matrix monotone functions Frank Hansen, Guoxing Ji and Jun Tomiyama Introduction April 6, 2002 Almost seventy years have passed since K.
More informationforms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms
Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.
More informationAn estimation of the spectral radius of a product of block matrices
Linear Algebra and its Applications 379 (2004) 267 275 wwwelseviercom/locate/laa An estimation of the spectral radius of a product of block matrices Mei-Qin Chen a Xiezhang Li b a Department of Mathematics
More information