On Expected Gaussian Random Determinants


Moo K. Chung
Department of Statistics, University of Wisconsin-Madison
1210 West Dayton St., Madison, WI 53706
E-mail address: mchung@stat.wisc.edu

Abstract

The expectation of random determinants whose entries are real-valued, identically distributed, mean zero, correlated Gaussian random variables is examined using Kronecker tensor products and some combinatorial arguments. This result is used to derive the expected determinants of $X + B$ and $AX + X'B$.

Key words: Random Determinant; Covariance Matrix; Permutation Matrix; Kronecker Product; Principal Minor; Random Matrix

Preprint submitted to Linear Algebra and Its Applications, 4 September 2000.

1 Introduction

We shall consider an $n \times n$ matrix $X = (x_{ij})$, where $x_{ij}$ is a real-valued, identically distributed Gaussian random variable with expectation $Ex_{ij} = 0$ for all $i, j$. The individual elements of the matrix are not required to be independent. We shall call such a matrix a mean zero Gaussian random matrix and its determinant a Gaussian random determinant, which shall be denoted by $|X|$. We are interested in finding the expectation of the Gaussian random determinant, $E|X|$. When the $x_{ij}$ are independent and identically distributed, the odd order moments of $|X|$, $E|X|^{2k-1}$, $k = 1, 2, \dots$, are equal to 0 since $|X|$ has

a symmetrical distribution with respect to 0. The exact expression for the even order moments $E|X|^{2k}$, $k = 1, 2, \dots$, is also well known in relation to the Wishart distribution [8, p. 85-108]. Although one can study the moments of the Gaussian random determinant through standard techniques of matrix variate normal distributions [2,8], the aim of this paper is to examine the expected Gaussian random determinant whose entries are correlated, via the Kronecker tensor products that will be used in representing the covariance structure of $X$. When $n$ is odd, the expected determinant $E|X|$ equals zero regardless of the covariance structure of $X$. When $n$ is even, the expected determinant can be computed if there is a certain underlying symmetry in the covariance structure of $X$.

Let us start with the following well known lemma [6].

Lemma 1 For mean zero Gaussian random variables $Z_1, \dots, Z_{2m+1}$,
$$E[Z_1 Z_2 \cdots Z_{2m+1}] = 0, \qquad E[Z_1 Z_2 \cdots Z_{2m}] = \sum_{i \in Q_m} E[Z_{i_1} Z_{i_2}] \cdots E[Z_{i_{2m-1}} Z_{i_{2m}}],$$
where $Q_m$ is the set of the $(2m)!/(m!\,2^m)$ different ways of grouping the $2m$ distinct elements of $\{1, 2, \dots, 2m\}$ into $m$ distinct pairs $(i_1, i_2), \dots, (i_{2m-1}, i_{2m})$, and each element of $Q_m$ is indexed by $i = \{(i_1, i_2), \dots, (i_{2m-1}, i_{2m})\}$.

Lemma 1 is a uniquely Gaussian property. For example,
$$E[Z_1 Z_2 Z_3 Z_4] = E[Z_1 Z_2]E[Z_3 Z_4] + E[Z_1 Z_3]E[Z_2 Z_4] + E[Z_1 Z_4]E[Z_2 Z_3].$$

The determinant of the matrix $X = (x_{ij})$ can be expanded as
$$|X| = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma)\, x_{1\sigma(1)} \cdots x_{n\sigma(n)},$$
where $S_n$ is the set whose $n!$ elements are the permutations of $\{1, 2, \dots, n\}$ and $\mathrm{sgn}(\sigma)$ is the sign function of the permutation $\sigma$. Applying Lemma 1 to this expansion, we obtain the expansion of $E|X|$ in terms of the pair-wise covariances $E[x_{ij} x_{kl}]$ [1].
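As a quick numerical illustration of Lemma 1 (not part of the original paper), the following Python/NumPy sketch checks the four-variable pairing identity by Monte Carlo; the particular covariance matrix S is an arbitrary choice made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary positive definite covariance matrix for (Z1, Z2, Z3, Z4), assumed for illustration.
S = np.array([[2.0, 0.5, 0.3, 0.1],
              [0.5, 1.0, 0.4, 0.2],
              [0.3, 0.4, 1.5, 0.6],
              [0.1, 0.2, 0.6, 1.0]])

Z = rng.multivariate_normal(np.zeros(4), S, size=1_000_000)

# Monte Carlo estimate of E[Z1 Z2 Z3 Z4].
mc = np.mean(Z[:, 0] * Z[:, 1] * Z[:, 2] * Z[:, 3])

# Pairing formula from Lemma 1 (three pairings of four indices).
pairing = S[0, 1]*S[2, 3] + S[0, 2]*S[1, 3] + S[0, 3]*S[1, 2]

print(mc, pairing)   # the two numbers should agree up to Monte Carlo error
```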

Lemma 2 For an $n \times n$ mean zero Gaussian random matrix $X$: for $n$ odd, $E|X| = 0$, and for $n = 2m$ even,
$$E|X| = \sum_{\sigma \in S_{2m}} \mathrm{sgn}(\sigma) \sum_{i \in Q_m} E[x_{i_1\sigma(i_1)} x_{i_2\sigma(i_2)}] \cdots E[x_{i_{2m-1}\sigma(i_{2m-1})} x_{i_{2m}\sigma(i_{2m})}].$$

Lemma 2 will be our main tool for computing $E|X|$. Before we do any computations, let us introduce some known results on Kronecker products and the vec operator, which will be used in representing the covariance structures of random matrices.

2 Preliminaries

The covariance structure of a random matrix $X = (x_{ij})$ is somewhat difficult to represent. We need to know $\mathrm{cov}(x_{ij}, x_{kl}) = E[x_{ij} x_{kl}] - Ex_{ij} Ex_{kl}$ for all $i, j, k, l$, which has 4 indexes and can be represented as a 4-dimensional array or 4th order tensor; but by vectorizing the matrix, we may use the standard method of representing the covariance structure of random vectors by a covariance matrix. Let $\mathrm{vec}X$ be the vector of size $pq$ defined by stacking the columns of the $p \times q$ matrix $X$ one underneath the other. If $x_i$ is the $i$th column of $X$, then $\mathrm{vec}X = (x_1', \dots, x_q')'$. The covariance matrix of a $p \times q$ random matrix $X$, denoted by $\mathrm{cov}X$, is defined as the $pq \times pq$ covariance matrix of $\mathrm{vec}X$:
$$\mathrm{cov}X \equiv \mathrm{cov}(\mathrm{vec}X) = E[\mathrm{vec}X(\mathrm{vec}X)'] - E[\mathrm{vec}X]E[(\mathrm{vec}X)'].$$
Following the convention of multivariate normal distributions, if the mean zero Gaussian random matrix $X$ has covariance matrix $\Sigma$, not necessarily nonsingular, we write $X \sim N(0, \Sigma)$. For example, if the components of the $n \times n$ matrix $X$ are independent and identically distributed as Gaussian with

zero mean and unit variance, then $X \sim N(0, I_{n^2})$. Some authors have used $\mathrm{vec}(A')$ instead of $\mathrm{vec}A$ in defining the covariance matrix of random matrices [8, p. 79].

The pair-wise covariance $E[x_{ij} x_{kl}]$ is related to the covariance matrix $\mathrm{cov}X$ by the following lemma via the Kronecker tensor product.

Lemma 3 For an $n \times n$ mean zero random matrix $X = (x_{ij})$,
$$\mathrm{cov}X = \sum_{i,j,k,l=1}^{n} E[x_{ij} x_{kl}]\, U_{jl} \otimes U_{ik},$$
where $U_{ij}$ is the $n \times n$ matrix whose $ij$th entry is 1 and whose remaining entries are 0.

Proof. Note that $\mathrm{cov}X = E[\mathrm{vec}X(\mathrm{vec}X)'] = \sum_{j,l=1}^{n} U_{jl} \otimes E[x_j x_l']$ and $E[x_j x_l'] = \sum_{i,k=1}^{n} E[x_{ij} x_{kl}] U_{ik}$. Combining these two results proves the lemma.

Hence, the covariance matrix of $X$ is an $n \times n$ block matrix whose $ij$th submatrix is the cross-covariance matrix between the $i$th and $j$th columns of $X$.

Now we need to define two special matrices $K_{pq}$ and $L_{pq}$. For a $p \times q$ matrix $X$, $\mathrm{vec}(X')$ can be obtained by permuting the elements of $\mathrm{vec}X$. Hence there exists a $pq \times pq$ orthogonal matrix $K_{pq}$, called a permutation matrix [4], such that
$$\mathrm{vec}(X') = K_{pq}\,\mathrm{vec}X. \qquad (1)$$
The permutation matrix $K_{pq}$ has the following representation [3]:
$$K_{pq} = \sum_{i=1}^{p}\sum_{j=1}^{q} U_{ij} \otimes U_{ij}', \qquad (2)$$
where $U_{ij}$ is the $p \times q$ matrix whose $ij$th entry is 1 and whose remaining entries are 0. We shall define a companion matrix $L_{pq}$ of $K_{pq}$ as the $p^2 \times q^2$ matrix

given by
$$L_{pq} = \sum_{i=1}^{p}\sum_{j=1}^{q} U_{ij} \otimes U_{ij}.$$
Unlike the permutation matrix $K_{pq}$, the matrix $L_{pq}$ has not been studied much. The matrix $L_{pp}$ has the following properties:
$$L_{pp} = L_{pp}K_{pp} = K_{pp}L_{pp} = \frac{1}{p} L_{pp}^2.$$
Let $e_i$ be the $i$th column of the $p \times p$ identity matrix $I_p$. Then $L_{pp}$ can be represented in a different way [7]:
$$L_{pp} = \sum_{i,j=1}^{p} (e_i e_j') \otimes (e_i e_j') = \sum_{i,j=1}^{p} (e_i \otimes e_i)(e_j \otimes e_j)' = \mathrm{vec}I_p(\mathrm{vec}I_p)'. \qquad (3)$$

Example 4 For $p \times p$ matrices $A$ and $B$,
$$\mathrm{tr}(A)\,\mathrm{tr}(B) = (\mathrm{vec}A)'\, L_{pp}\, \mathrm{vec}B. \qquad (4)$$
To see this, use the identity $\mathrm{tr}(X) = (\mathrm{vec}I_p)'\mathrm{vec}X = (\mathrm{vec}X)'\mathrm{vec}I_p$ and apply Equation (3). It is interesting to compare Equation (4) with the identity $\mathrm{tr}(AB) = (\mathrm{vec}A)'\, K_{pp}\, \mathrm{vec}B$.

Lemma 5 If the $p \times q$ matrix $X \sim N(0, I_{pq})$, then for an $s \times p$ matrix $A$ and a $q \times r$ matrix $B$,
$$AXB \sim N(0, (B'B) \otimes (AA')). \qquad (5)$$

Proof. Since $\mathrm{vec}(AXB) = (B' \otimes A)\mathrm{vec}X$ [3, Theorem 16.2.1],
$$\mathrm{cov}(AXB) = (B' \otimes A)\,\mathrm{cov}X\,(B' \otimes A)' = (B' \otimes A)(B \otimes A') = (B'B) \otimes (AA').$$

Lemma 6 If the $p \times p$ matrix $X \sim N(0, L_{pp})$, then for an $s \times p$ matrix $A$ and a $p \times r$ matrix $B$,
$$AXB \sim N(0, \mathrm{vec}(AB)\,\mathrm{vec}(AB)'). \qquad (6)$$

Proof. We have $\mathrm{cov}(AXB) = (B' \otimes A)\,\mathrm{cov}X\,(B' \otimes A)'$. Using the identity (3),
$$\mathrm{cov}(AXB) = \big((B' \otimes A)\mathrm{vec}I_p\big)\big((B' \otimes A)\mathrm{vec}I_p\big)' = \mathrm{vec}(AB)\big(\mathrm{vec}(AB)\big)'.$$
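The definitions of $K_{pq}$ and $L_{pp}$ are easy to check numerically. The following sketch (an illustration added here, not part of the paper) builds $K_{pp}$ from Equation (2) and $L_{pp}$ from its companion definition, then verifies Equations (1), (3) and (4) together with the identity $\mathrm{tr}(AB) = (\mathrm{vec}A)'K_{pp}\mathrm{vec}B$; the dimension $p = 3$ and the test matrices are arbitrary choices.

```python
import numpy as np

p = 3
rng = np.random.default_rng(1)

def unit(i, j, rows, cols):
    """U_ij: matrix with a single 1 in position (i, j)."""
    U = np.zeros((rows, cols))
    U[i, j] = 1.0
    return U

# K_pp from Equation (2) and the companion matrix L_pp.
K = sum(np.kron(unit(i, j, p, p), unit(i, j, p, p).T) for i in range(p) for j in range(p))
L = sum(np.kron(unit(i, j, p, p), unit(i, j, p, p)) for i in range(p) for j in range(p))

X = rng.standard_normal((p, p))
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, p))
vec = lambda M: M.reshape(-1, order="F")      # column-stacking vec operator

assert np.allclose(K @ vec(X), vec(X.T))                              # Equation (1)
assert np.allclose(L, np.outer(vec(np.eye(p)), vec(np.eye(p))))       # Equation (3)
assert np.allclose(vec(A) @ L @ vec(B), np.trace(A) * np.trace(B))    # Equation (4)
assert np.allclose(vec(A) @ K @ vec(B), np.trace(A @ B))              # tr(AB) identity
print("all identities verified")
```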

Note that for an orthogonal matrix $Q$ and $X \sim N(0, L_{pp})$, the above lemma shows $Q'XQ \sim N(0, L_{pp})$.

3 Basic covariance structures

In this section, we consider three specific covariance structures: $E[x_{ij} x_{kl}] = a_{ij} a_{kl}$ (Theorem 7), $E[x_{ij} x_{kl}] = a_{il} a_{jk}$ (Theorem 8) and $E[x_{ij} x_{kl}] = a_{ik} a_{jl}$ (Theorem 9). The results on these three basic types of covariance structures will be the basis for constructing more complex covariance structures.

Theorem 7 For a $2m \times 2m$ Gaussian random matrix $X \sim N(0, \mathrm{vec}A(\mathrm{vec}A)')$,
$$E|X| = \frac{(2m)!}{m!\,2^m}\,|A|.$$

Proof. Let $a_i$ be the $i$th column of $A = (a_{ij})$ and $e_i$ be the $i$th column of $I_{2m}$. Then
$$\mathrm{vec}A(\mathrm{vec}A)' = \Big(\sum_{j=1}^{2m} e_j \otimes a_j\Big)\Big(\sum_{l=1}^{2m} e_l \otimes a_l\Big)' = \sum_{j,l=1}^{2m} (e_j e_l') \otimes (a_j a_l'). \qquad (7)$$
Substituting $a_j a_l' = \sum_{i,k=1}^{2m} a_{ij} a_{kl} U_{ik}$ into Equation (7) and applying Lemma 3, we get $E[x_{ij} x_{kl}] = a_{ij} a_{kl}$. Now apply Lemma 2 directly:
$$E|X| = \sum_{i \in Q_m} \sum_{\sigma \in S_{2m}} \mathrm{sgn}(\sigma)\, a_{i_1\sigma(i_1)} \cdots a_{i_{2m}\sigma(i_{2m})}.$$
Note that $\{i_1, \dots, i_{2m}\} = \{1, 2, \dots, 2m\}$. Therefore the inner summation is the determinant of $A$, and there are $(2m)!/(m!\,2^m)$ such determinants.

When $A = I_{2m}$, we have $X \sim N(0, L_{2m\,2m})$ and $E|X| = (2m)!/(m!\,2^m)$.

One might try to generalize Theorem 7 to $E[x_{ij} x_{kl}] = a_{ij} b_{kl}$, i.e. $\mathrm{cov}X = \mathrm{vec}A(\mathrm{vec}B)'$, but this case degenerates into $E[x_{ij} x_{kl}] = a_{ij} a_{kl}$. To see this, note that $E[x_{ij} x_{kl}] = E[x_{kl} x_{ij}]$, so $a_{ij} b_{kl} = a_{kl} b_{ij}$. Then $a_{ij}$ and $b_{ij}$ must satisfy $a_{ij} = c\, b_{ij}$ for some constant $c$ and for all $i, j$.
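Theorem 7 can also be checked by simulation. Since $\mathrm{cov}(\mathrm{vec}X) = \mathrm{vec}A(\mathrm{vec}A)'$ has rank one, one way to realize it (a construction assumed here for illustration, not stated in the paper) is $X = zA$ with a single scalar $z \sim N(0,1)$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)
m = 2
n = 2 * m

# Arbitrary test matrix A (assumed for illustration).
A = rng.standard_normal((n, n))

# cov(vecX) = vecA (vecA)' is rank one, so X can be sampled as X = z*A with z ~ N(0,1).
z = rng.standard_normal(200_000)
dets = np.linalg.det(z[:, None, None] * A)     # det(z*A) for every sample

predicted = factorial(2 * m) / (factorial(m) * 2**m) * np.linalg.det(A)
print(dets.mean(), predicted)                  # should agree up to Monte Carlo error
```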

The case when $E[x_{ij} x_{kl}] = \epsilon_{ijkl} - \delta_{ij}\delta_{kl}$, where $\epsilon_{ijkl}$ is a symmetric function in $i, j, k, l$ and $\delta_{ij}$ is the Kronecker delta, is given in [1, Lemma 5.3.2].

Theorem 8 For a $2m \times 2m$ matrix $X = (x_{ij})$ and symmetric $A = (a_{ij})$ with $E[x_{ij} x_{kl}] = a_{il} a_{jk}$ for all $i, j, k, l$,
$$E|X| = (-1)^m \frac{(2m)!}{m!\,2^m}\,|A|.$$

Proof. The condition $A = A'$ is necessary. To see this, note that $E[x_{ij} x_{kl}] = E[x_{kl} x_{ij}]$, so $a_{il} a_{jk} = a_{kj} a_{li}$. By letting $j = k$, we get $a_{il} = a_{li}$ for all $i, l$. Then, by interchanging the order of the summations in Lemma 2,
$$E|X| = \sum_{i \in Q_m} \sum_{\sigma \in S_{2m}} \mathrm{sgn}(\sigma)\, a_{i_1\sigma(i_2)} a_{i_2\sigma(i_1)} \cdots a_{i_{2m-1}\sigma(i_{2m})} a_{i_{2m}\sigma(i_{2m-1})}. \qquad (8)$$
There exists a permutation $\tau$ such that
$$\tau(i_1) = \sigma(i_2),\ \tau(i_2) = \sigma(i_1),\ \dots,\ \tau(i_{2m-1}) = \sigma(i_{2m}),\ \tau(i_{2m}) = \sigma(i_{2m-1}).$$
Then
$$\sigma^{-1}\tau(i_1) = i_2,\ \sigma^{-1}\tau(i_2) = i_1,\ \dots,\ \sigma^{-1}\tau(i_{2m-1}) = i_{2m},\ \sigma^{-1}\tau(i_{2m}) = i_{2m-1}.$$
Note that $\sigma^{-1}\tau$ is the product of $m$ transpositions, the odd permutations which interchange two numbers and leave the other numbers fixed. Hence $\mathrm{sgn}(\sigma^{-1}\tau) = (-1)^m$. Then, by changing the index from $\sigma$ to $\tau$ in Equation (8) with $\mathrm{sgn}(\sigma) = (-1)^m \mathrm{sgn}(\tau)$, we get
$$E|X| = (-1)^m \sum_{i \in Q_m} \sum_{\tau \in S_{2m}} \mathrm{sgn}(\tau)\, a_{i_1\tau(i_1)} a_{i_2\tau(i_2)} \cdots a_{i_{2m}\tau(i_{2m})}.$$
The inner summation is the determinant of $A$, and there are $(2m)!/(m!\,2^m)$ such determinants.

Suppose $X \sim N(0, A \otimes A)$ with $A = (a_{ij})$. Since covariance matrices are symmetric, $A \otimes A = (A \otimes A)' = A' \otimes A'$. Then $(a_{ij})$ must satisfy $a_{ij} a_{kl} = a_{ji} a_{lk}$ for all $i, j, k, l$. By letting $i = j$, $a_{kl} = a_{lk}$ for all $k, l$, so $A$ must be symmetric.
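The covariance structure in Theorem 8 need not be realizable as a non-negative definite covariance matrix on its own, so a direct simulation is not always available; instead, the formula can be verified by brute force through Lemma 2, enumerating all permutations and all pairings for small $m$. The following sketch (ours, not part of the paper) does this for $m = 1, 2$ with a random symmetric $A$:

```python
import numpy as np
from itertools import permutations
from math import factorial

def pairings(elems):
    """All ways of grouping an even-sized list into unordered pairs (the set Q_m)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest)):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + tail

def sign(perm):
    """Sign of a permutation given as a tuple of images of 0..n-1."""
    n, visited, sgn = len(perm), [False] * len(perm), 1
    for start in range(n):
        if not visited[start]:
            length, j = 0, start
            while not visited[j]:
                visited[j], j, length = True, perm[j], length + 1
            sgn *= (-1) ** (length - 1)
    return sgn

def expected_det(cov, n):
    """Lemma 2: E|X| from the pair-wise covariances cov(i,j,k,l) = E[x_ij x_kl]."""
    total = 0.0
    for perm in permutations(range(n)):
        s = sign(perm)
        for pairing in pairings(list(range(n))):
            prod = 1.0
            for p, q in pairing:
                prod *= cov(p, perm[p], q, perm[q])
            total += s * prod
    return total

rng = np.random.default_rng(3)
for m in (1, 2):
    n = 2 * m
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                                   # Theorem 8 requires A symmetric
    lhs = expected_det(lambda i, j, k, l: A[i, l] * A[j, k], n)
    rhs = (-1)**m * factorial(n) / (factorial(m) * 2**m) * np.linalg.det(A)
    print(m, lhs, rhs)                                  # the two columns should match
```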

Now let us find the pair-wise covariance $E[x_{ij} x_{kl}]$ when $\mathrm{cov}X = A \otimes A$ and $A = A'$. Note that
$$\mathrm{cov}X = A \otimes A = \Big(\sum_{j,l=1}^{n} a_{jl} U_{jl}\Big) \otimes \Big(\sum_{i,k=1}^{n} a_{ik} U_{ik}\Big) = \sum_{i,j,k,l=1}^{n} a_{ik} a_{jl}\, U_{jl} \otimes U_{ik}.$$
Following Lemma 3, the covariance structure $\mathrm{cov}X = A \otimes A$, $A = A'$, is equivalent to $E[x_{ij} x_{kl}] = a_{ik} a_{jl}$ with $a_{ij} = a_{ji}$ for all $i, j, k, l$. Then we have the following theorem for the case $E[x_{ij} x_{kl}] = a_{ik} a_{jl}$.

Theorem 9 For a $2m \times 2m$ Gaussian random matrix $X \sim N(0, A \otimes A)$ and a symmetric positive definite $2m \times 2m$ matrix $A$, $E|X| = 0$.

Proof. Since $A$ is symmetric positive definite, $A^{-1/2}$ exists. Then, following the proof of Lemma 5,
$$\mathrm{cov}(A^{-1/2} X A^{-1/2}) = (A^{-1/2} \otimes A^{-1/2})(A \otimes A)(A^{-1/2} \otimes A^{-1/2}) = I_{n^2}.$$
Hence $Y = A^{-1/2} X A^{-1/2} \sim N(0, I_{n^2})$. Since the components of $Y$ are all independent, trivially $E|Y| = 0$. It then follows that $E|X| = |A|\,E|Y| = 0$.

4 The expected determinants of $X + B$ and $AX + X'B$

The results developed in the previous sections can be applied to a wide range of Gaussian random matrices with more complex covariance structures. Since a linear combination of Gaussian random variables is again Gaussian, $X + B$ and $AX + X'B$ will also be Gaussian random matrices if $X$ is a Gaussian random matrix and $A$ and $B$ are constant matrices. In this section, we examine the expected determinants of $X + B$ and $AX + X'B$.

Theorem 10 Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, I_{n^2} + K_{nn})$,
$$E|X| = (-1)^m \frac{(2m)!}{m!\,2^m}.$$

Proof. Note that
$$I_{n^2} = I_n \otimes I_n = \sum_{i,j,k,l=1}^{n} \delta_{ik}\delta_{jl}\, U_{jl} \otimes U_{ik} \qquad (9)$$
and, from Equation (2),
$$K_{nn} = \sum_{j,l=1}^{n} U_{jl} \otimes U_{lj} = \sum_{i,j,k,l=1}^{n} \delta_{jk}\delta_{il}\, U_{jl} \otimes U_{ik}.$$
Then, from Lemma 3, $E[x_{ij} x_{kl}] = \delta_{ik}\delta_{jl} + \delta_{jk}\delta_{il}$. Now apply Lemma 2 directly:
$$E[x_{i_j\sigma(i_j)} x_{i_{j+1}\sigma(i_{j+1})}] = \delta_{i_j, i_{j+1}}\delta_{\sigma(i_j), \sigma(i_{j+1})} + \delta_{i_j, \sigma(i_{j+1})}\delta_{i_{j+1}, \sigma(i_j)}.$$
Since $i_j \neq i_{j+1}$, the first term vanishes. Hence we can modify the covariance structure of $X$ to $E[x_{ij} x_{kl}] = \delta_{jk}\delta_{il}$ and still get the same expected determinant. Now apply Theorem 8 with $A = I_{2m}$.

Similarly we have the following.

Theorem 11 Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, A \otimes A + \mathrm{vec}B(\mathrm{vec}B)')$ and a symmetric positive definite $n \times n$ matrix $A$,
$$E|X| = \frac{(2m)!}{m!\,2^m}\,|B|.$$

Proof. Since $A$ is symmetric positive definite, $A^{-1/2}$ exists. Let $Y = (y_{ij}) = A^{-1/2} X A^{-1/2}$. Note that $E|X| = |A|\,E|Y|$. Now find the pair-wise covariance $E[y_{ij} y_{kl}]$ and apply Lemma 2. Following the proof of Theorem 9,
$$\mathrm{cov}(Y) = I_{n^2} + (A^{-1/2} \otimes A^{-1/2})\big(\mathrm{vec}B(\mathrm{vec}B)'\big)(A^{-1/2} \otimes A^{-1/2}).$$
Since $\mathrm{vec}(A^{-1/2} B A^{-1/2}) = (A^{-1/2} \otimes A^{-1/2})\mathrm{vec}B$,
$$\mathrm{cov}(Y) = I_{n^2} + \mathrm{vec}(A^{-1/2} B A^{-1/2})\big(\mathrm{vec}(A^{-1/2} B A^{-1/2})\big)'. \qquad (10)$$
Then $E[y_{ij} y_{kl}] = \delta_{ik}\delta_{jl} + \cdots$, where $\delta_{ik}\delta_{jl}$ corresponds to the first term $I_{n^2}$ in Equation (10) and $\cdots$ indicates the second term, which we do not compute.

To apply Lemma 2, we need the pair-wise covariance
$$E[y_{i_j\sigma(i_j)} y_{i_{j+1}\sigma(i_{j+1})}] = \delta_{i_j i_{j+1}}\delta_{\sigma(i_j)\sigma(i_{j+1})} + \cdots.$$
Since $i_j \neq i_{j+1}$, the first term vanishes. Therefore the expectation $E|Y|$ will not change even if we modify the covariance matrix from Equation (10) to $\mathrm{cov}(Y) = \mathrm{vec}(A^{-1/2} B A^{-1/2})\big(\mathrm{vec}(A^{-1/2} B A^{-1/2})\big)'$. Then, by applying Theorem 7, $E|Y| = \frac{(2m)!}{m!\,2^m}\,|A^{-1/2} B A^{-1/2}|$, so that $E|X| = |A|\,E|Y| = \frac{(2m)!}{m!\,2^m}\,|B|$.

By letting $A = B = I_n$, we get

Corollary 12 Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, I_{n^2} + L_{nn})$,
$$E|X| = \frac{(2m)!}{m!\,2^m}.$$

The following theorem is due to [9], where the covariance structure is slightly different.

Theorem 13 For an $n \times n$ matrix $X \sim N(0, L_{nn})$ and a constant symmetric $n \times n$ matrix $B$,
$$E|X + B| = \sum_{j=0}^{\lfloor n/2 \rfloor} \frac{(2j)!}{2^j j!}\,|B|_{n-2j},$$
where $\lfloor n/2 \rfloor$ is the largest integer not exceeding $n/2$ and $|B|_j$ is the sum of the $j \times j$ principal minors of $B$.

Proof. Let $Q$ be an orthogonal matrix such that $Q'BQ = D$, where $D = \mathrm{Diag}(\lambda_1, \dots, \lambda_n)$ is the diagonal matrix of eigenvalues of $B$. Then $|X + B| = |Q'XQ + D|$. The determinant $|Q'XQ + D|$ can be expanded in the following way [3, p. 196]:
$$|Q'XQ + D| = \sum_{\{i_1,\dots,i_r\}} \lambda_{i_1} \cdots \lambda_{i_r}\, \big|(Q'XQ)_{\{i_1,\dots,i_r\}}\big|, \qquad (11)$$
where the summation is taken over all $2^n$ subsets of $\{1, \dots, n\}$ and $(Q'XQ)_{\{i_1,\dots,i_r\}}$ is the $(n-r) \times (n-r)$ principal submatrix of $Q'XQ$ obtained by striking out the $i_1, \dots, i_r$th rows and columns. From Lemma 6, $Q'XQ \sim N(0, L_{nn})$. It then follows that the distribution of any $(n-r) \times (n-r)$ principal submatrix of $Q'XQ$

is $N(0, L_{(n-r)(n-r)})$. Using Theorem 7, $E\big|(Q'XQ)_{\{i_1,\dots,i_r\}}\big| = \frac{(2j)!}{j!\,2^j}$ for any $i_1, \dots, i_r$ if $n - r = 2j$. If $n - r = 2j + 1$, the expected principal minor equals 0. Therefore,
$$E|X + B| = \sum_{j=0}^{\lfloor n/2 \rfloor} \frac{(2j)!}{j!\,2^j} \sum_{\{i_1,\dots,i_r\}} \lambda_{i_1} \cdots \lambda_{i_r},$$
where the inner summation is taken over subsets of $\{1, \dots, n\}$ with $r = n - 2j$ fixed. The inner summation is called the $r$th elementary symmetric function of the $n$ numbers $\lambda_1, \dots, \lambda_n$, and it is identical to the sum of the $r \times r$ principal minors of $B$ [5, Theorem 1.2.12].

Theorem 14 For an $n \times n$ matrix $X \sim N(0, A \otimes A)$, an $n \times n$ symmetric positive definite matrix $A$ and a symmetric $n \times n$ matrix $B$,
$$E|X + B| = |B|.$$

Proof. Let $Y = A^{-1/2} X A^{-1/2}$. Then $Y \sim N(0, I_{n^2})$. Note that $E|X + B| = |A|\,E|Y + A^{-1/2} B A^{-1/2}|$. Following the proof of Theorem 13 closely,
$$E|Y + A^{-1/2} B A^{-1/2}| = \sum_{\{i_1,\dots,i_r\}} \lambda_{i_1} \cdots \lambda_{i_r}\, E\big|(Q'YQ)_{\{i_1,\dots,i_r\}}\big|,$$
where $\lambda_1, \dots, \lambda_n$ are the eigenvalues of $A^{-1/2} B A^{-1/2}$. Since $(Q'YQ)_{\{i_1,\dots,i_r\}} \sim N(0, I_{(n-r)^2})$, $E\big|(Q'YQ)_{\{i_1,\dots,i_r\}}\big| = 0$ if $r < n$, while $E\big|(Q'YQ)_{\{i_1,i_2,\dots,i_n\}}\big| = E\big|(Q'YQ)_{\{1,2,\dots,n\}}\big| = 1$. Hence $E|Y + A^{-1/2} B A^{-1/2}| = \lambda_1 \cdots \lambda_n = |A^{-1}B|$, so that $E|X + B| = |A|\,|A^{-1}B| = |B|$.

Theorem 15 Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, I_{n^2})$ and $n \times n$ constant matrices $A$ and $B$,
$$E|AX + X'B| = (-1)^m\, m!\, |AB|_m,$$
where $|AB|_m$ is the sum of the $m \times m$ principal minors of $AB$.
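Before turning to the proof of Theorem 15, a short Monte Carlo sanity check of Theorems 10 and 13 may be helpful (an illustration added here, not part of the paper). The sampling constructions are ours: $X = (Y + Y')/\sqrt{2}$ with $Y$ having iid $N(0,1)$ entries realizes $\mathrm{cov}(\mathrm{vec}X) = I_{n^2} + K_{nn}$, and $X = zI_n$ with $z \sim N(0,1)$ realizes $N(0, L_{nn})$; the principal-minor sums $|B|_r$ are read off the characteristic polynomial of $B$.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(4)
n, m, N = 4, 2, 200_000

# Theorem 10: X ~ N(0, I_{n^2} + K_nn).  Since K^2 = I, taking X = (Y + Y')/sqrt(2)
# with iid N(0,1) entries in Y gives cov(vecX) = (I + K)(I + K)'/2 = I + K.
Y = rng.standard_normal((N, n, n))
X = (Y + np.transpose(Y, (0, 2, 1))) / np.sqrt(2)
mc10 = np.linalg.det(X).mean()
th10 = (-1)**m * factorial(2*m) / (factorial(m) * 2**m)

# Theorem 13: X ~ N(0, L_nn) means vecX = z * vecI_n, i.e. X = z*I_n with z ~ N(0,1).
B = rng.standard_normal((n, n)); B = (B + B.T) / 2        # constant symmetric B
z = rng.standard_normal(N)
mc13 = np.linalg.det(z[:, None, None] * np.eye(n) + B).mean()
# Sums of r x r principal minors |B|_r from the characteristic polynomial:
# np.poly(B) gives det(tI - B), whose r-th coefficient is (-1)^r |B|_r.
coef = np.poly(B)
minors = [(-1)**r * coef[r] for r in range(n + 1)]        # |B|_r, r = 0..n
th13 = sum(factorial(2*j) / (2**j * factorial(j)) * minors[n - 2*j] for j in range(n//2 + 1))

print(mc10, th10)   # each pair should agree up to Monte Carlo error
print(mc13, th13)
```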

Proof. Let $Y = (y_{ij}) = AX + X'B$. We need to find the pair-wise covariance $E[y_{ij} y_{kl}]$ using $E[x_{ij} x_{kl}] = \delta_{ik}\delta_{jl}$. Note that $y_{ij} = \sum_u (a_{iu} x_{uj} + b_{uj} x_{ui})$. Let us use the Einstein convention of not writing down the summation over $u$, so that $y_{ij} \equiv a_{iu} x_{uj} + b_{uj} x_{ui}$. Then the pair-wise covariances of $Y$ can be easily computed:
$$E[y_{ij} y_{kl}] = a_{iu} a_{kv} E[x_{uj} x_{vl}] + a_{iu} b_{vl} E[x_{uj} x_{vk}] + b_{uj} a_{kv} E[x_{ui} x_{vl}] + b_{uj} b_{vl} E[x_{ui} x_{vk}]$$
$$= a_{iu} a_{ku}\,\delta_{jl} + a_{iu} b_{ul}\,\delta_{jk} + b_{uj} a_{ku}\,\delta_{il} + b_{uj} b_{ul}\,\delta_{ik}.$$
Let $a_{(i)}$ be the $i$th row of $A$ and $b_i$ the $i$th column of $B$, respectively. Then $a_{iu} a_{ku} \equiv \sum_{u=1}^{n} a_{iu} a_{ku} = a_{(i)} a_{(k)}'$, and the other terms can be expressed similarly:
$$E[y_{ij} y_{kl}] = a_{(i)} a_{(k)}'\,\delta_{jl} + a_{(i)} b_l\,\delta_{jk} + a_{(k)} b_j\,\delta_{il} + b_j' b_l\,\delta_{ik}. \qquad (12)$$
When we apply Equation (12) to Lemma 2, the first and the last terms vanish:
$$E[y_{i_j\sigma(i_j)} y_{i_{j+1}\sigma(i_{j+1})}] = a_{(i_j)} b_{\sigma(i_{j+1})}\,\delta_{i_{j+1}\sigma(i_j)} + a_{(i_{j+1})} b_{\sigma(i_j)}\,\delta_{i_j\sigma(i_{j+1})}.$$
Let $\tau$ be a permutation satisfying
$$\tau(i_1) = \sigma(i_2),\ \tau(i_2) = \sigma(i_1),\ \dots,\ \tau(i_{2m-1}) = \sigma(i_{2m}),\ \tau(i_{2m}) = \sigma(i_{2m-1}).$$
Then
$$E[y_{i_j\sigma(i_j)} y_{i_{j+1}\sigma(i_{j+1})}] = a_{(i_j)} b_{\tau(i_j)}\,\delta_{i_{j+1}\tau(i_{j+1})} + a_{(i_{j+1})} b_{\tau(i_{j+1})}\,\delta_{i_j\tau(i_j)}$$
and $\mathrm{sgn}(\sigma) = (-1)^m \mathrm{sgn}(\tau)$. By changing the summation index from $\sigma$ to $\tau$ in Lemma 2,
$$E|Y| = (-1)^m \sum_{\tau \in S_{2m}} \sum_{i \in Q_m} \mathrm{sgn}(\tau) \big(a_{(i_1)} b_{\tau(i_1)}\delta_{i_2\tau(i_2)} + a_{(i_2)} b_{\tau(i_2)}\delta_{i_1\tau(i_1)}\big) \cdots \big(a_{(i_{2m-1})} b_{\tau(i_{2m-1})}\delta_{i_{2m}\tau(i_{2m})} + a_{(i_{2m})} b_{\tau(i_{2m})}\delta_{i_{2m-1}\tau(i_{2m-1})}\big).$$
The product term inside the inner summation can be expanded as
$$\sum_{\substack{j_1,\dots,j_m \\ k_1,\dots,k_m}} \delta_{j_1\tau(j_1)} \cdots \delta_{j_m\tau(j_m)}\, a_{(k_1)} b_{\tau(k_1)} \cdots a_{(k_m)} b_{\tau(k_m)}, \qquad (13)$$
where the summation is taken over the $2^m$ possible ways of choosing $(j_l, k_l) \in \{(i_{2l-1}, i_{2l}), (i_{2l}, i_{2l-1})\}$ for all $l = 1, \dots, m$. In order to have a non-vanishing

term in Equation (13), we must have $\tau(j_1) = j_1, \dots, \tau(j_m) = j_m$. Let $\rho \in S_m$ be a permutation of the $m$ numbers $\{k_1, \dots, k_m\}$. Then, by changing the index from $\tau$ to $\rho$,
$$E|Y| = (-1)^m \sum_{i \in Q_m} \sum_{k_1,\dots,k_m} \sum_{\rho \in S_m} \mathrm{sgn}(\rho)\, a_{(k_1)} b_{\rho(k_1)} \cdots a_{(k_m)} b_{\rho(k_m)} = (-1)^m \sum_{i \in Q_m} \sum_{k_1,\dots,k_m} \big|(AB)_{\{k_1,\dots,k_m\}}\big|,$$
where $(AB)_{\{k_1,\dots,k_m\}}$ is the $m \times m$ principal submatrix consisting of the $k_1, \dots, k_m$th rows and columns of $AB$. Note that there are $\frac{(2m)!}{m!\,2^m} \cdot 2^m = \frac{(2m)!}{m!}$ terms of principal minors $\big|(AB)_{\{k_1,\dots,k_m\}}\big|$ in the summation $\sum_{i \in Q_m} \sum_{k_1,\dots,k_m}$, but there are only $\binom{2m}{m}$ unique principal minors of $AB$. Then there must be repetitions of principal minors in the summation. Because of symmetry, the number of repetitions for each principal minor must be $\frac{(2m)!}{m!} \big/ \binom{2m}{m} = m!$. Hence
$$\sum_{i \in Q_m} \sum_{k_1,\dots,k_m} \big|(AB)_{\{k_1,\dots,k_m\}}\big| = m!\,|AB|_m.$$

Corollary 16 Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, I_{n^2})$,
$$E|X + X'| = (-1)^m \frac{(2m)!}{m!}.$$

Proof. Let $A = B = I_{2m}$ in Theorem 15 and use the fact that the sum of the $m \times m$ principal minors of $I_{2m}$ is $\binom{2m}{m}$.

Finally, we propose an open problem. The difficulty of this problem arises from the restriction $m > n$.

Problem 17 Let $m > n$. Let the $m \times n$ random matrix $X \sim N(0, A \otimes I_n)$, where the $m \times m$ matrix $A$ is symmetric non-negative definite. For an $m \times m$ symmetric matrix $C$, determine $E|X'CX|$.
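As a final numerical sanity check (ours, not part of the paper), Theorem 15 and Corollary 16 can be verified by direct Monte Carlo simulation with iid standard Gaussian $X$, again reading off the principal-minor sum $|AB|_m$ from the characteristic polynomial of $AB$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(5)
m = 2
n = 2 * m
N = 200_000

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

X = rng.standard_normal((N, n, n))           # X ~ N(0, I_{n^2}): iid N(0,1) entries
Xt = np.transpose(X, (0, 2, 1))

# Theorem 15: E|AX + X'B| = (-1)^m m! |AB|_m.
mc15 = np.linalg.det(A @ X + Xt @ B).mean()
coef = np.poly(A @ B)                        # det(tI - AB); coef[r] = (-1)^r |AB|_r
ABm = float(np.real((-1)**m * coef[m]))      # |AB|_m: sum of m x m principal minors
th15 = (-1)**m * factorial(m) * ABm

# Corollary 16: E|X + X'| = (-1)^m (2m)!/m!  (Theorem 15 with A = B = I).
mc16 = np.linalg.det(X + Xt).mean()
th16 = (-1)**m * factorial(2*m) / factorial(m)

print(mc15, th15)   # each pair should agree up to Monte Carlo error
print(mc16, th16)
```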

References

[1] R.J. Adler, The Geometry of Random Fields, John Wiley and Sons, 1980.
[2] S.F. Arnold, The Theory of Linear Models and Multivariate Analysis, John Wiley & Sons.
[3] D.A. Harville, Matrix Algebra from a Statistician's Perspective, Springer-Verlag, 1997.
[4] H.V. Henderson, S.R. Searle, Vec and Vech Operators for Matrices, with Some Uses in Jacobians and Multivariate Statistics, The Canadian Journal of Statistics 7 (1979) 65-81.
[5] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[6] Y.K. Lin, Probabilistic Theory of Structural Dynamics, McGraw-Hill, 1967, pp. 78-80.
[7] J.R. Magnus, The Commutation Matrix: Some Properties and Applications, The Annals of Statistics 7 (1979) 381-394.
[8] R.B. Muirhead, Aspects of Multivariate Statistical Theory, John Wiley & Sons, 1982.
[9] K.J. Worsley, Local Maxima and the Expected Euler Characteristic of Excursion Sets of χ², F and t Fields, Advances in Applied Probability 26 (1995) 13-42.