# On Expected Gaussian Random Determinants


Moo K. Chung
Department of Statistics, University of Wisconsin-Madison
1210 West Dayton St., Madison, WI

Preprint submitted to Linear Algebra and Its Applications, 4 September 2000.

**Abstract.** The expectation of random determinants whose entries are real-valued, identically distributed, mean zero, correlated Gaussian random variables is examined using Kronecker tensor products and some combinatorial arguments. The result is used to derive the expected determinants of $X + B$ and $AX + X'B$.

**Key words:** Random Determinant; Covariance Matrix; Permutation Matrix; Kronecker Product; Principal Minor; Random Matrix

## 1 Introduction

We shall consider an $n \times n$ matrix $X = (x_{ij})$, where $x_{ij}$ is a real-valued, identically distributed Gaussian random variable with expectation $Ex_{ij} = 0$ for all $i, j$. The individual elements of the matrix are not required to be independent. We shall call such a matrix a mean zero Gaussian random matrix and its determinant a Gaussian random determinant, denoted by $|X|$. We are interested in finding the expectation $E|X|$ of the Gaussian random determinant. When the $x_{ij}$ are independent and identically distributed, the odd-order moments of $|X|$, $E|X|^{2k-1}$, $k = 1, 2, \ldots$, are equal to $0$ since $|X|$ has

a symmetric distribution with respect to $0$. The exact expression for the even-order moments $E|X|^{2k}$, $k = 1, 2, \ldots$, is also well known in relation to the Wishart distribution [8]. Although one can study the moments of the Gaussian random determinant through standard techniques of matrix variate normal distributions [2,8], the aim of this paper is to examine the expected Gaussian random determinant whose entries are correlated, via the Kronecker tensor products that will be used to represent the covariance structure of $X$. When $n$ is odd, the expected determinant $E|X|$ equals zero regardless of the covariance structure of $X$. When $n$ is even, the expected determinant can be computed if there is a certain underlying symmetry in the covariance structure of $X$.

Let us start with the following well-known lemma [6].

**Lemma 1** For mean zero Gaussian random variables $Z_1, \ldots, Z_{2m+1}$,
$$E[Z_1 Z_2 \cdots Z_{2m+1}] = 0, \qquad
E[Z_1 Z_2 \cdots Z_{2m}] = \sum_{i \in Q_m} E[Z_{i_1} Z_{i_2}] \cdots E[Z_{i_{2m-1}} Z_{i_{2m}}],$$
where $Q_m$ is the set of the $(2m)!/(m!\,2^m)$ different ways of grouping the $2m$ distinct elements of $\{1, 2, \ldots, 2m\}$ into $m$ distinct pairs $(i_1, i_2), \ldots, (i_{2m-1}, i_{2m})$, and each element of $Q_m$ is indexed by $i = \{(i_1, i_2), \ldots, (i_{2m-1}, i_{2m})\}$.

Lemma 1 is a uniquely Gaussian property. For example,
$$E[Z_1 Z_2 Z_3 Z_4] = E[Z_1 Z_2]E[Z_3 Z_4] + E[Z_1 Z_3]E[Z_2 Z_4] + E[Z_1 Z_4]E[Z_2 Z_3].$$

The determinant of the matrix $X = (x_{ij})$ can be expanded as
$$|X| = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, x_{1\sigma(1)} \cdots x_{n\sigma(n)},$$
where $S_n$ is the set whose $n!$ elements are the permutations of $\{1, 2, \ldots, n\}$ and $\operatorname{sgn}(\sigma)$ is the sign function for the permutation $\sigma$. Applying Lemma 1 to this expansion, we obtain an expansion for $E|X|$ in terms of the pairwise covariances $E[x_{ij} x_{kl}]$ [1].
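Lemma 1 is easy to check combinatorially. The sketch below (hypothetical helper names, not part of the paper) enumerates the pairings in $Q_m$, counts them, and reproduces the four-variable example for an arbitrary covariance matrix.

```python
import math

def pairings(idx):
    """Yield every way of grouping the indices in idx into unordered pairs."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for k in range(len(rest)):
        for sub in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + sub

def wick_moment(C, idx):
    """E[Z_{i1} ... Z_{i2m}] for mean zero Gaussians with covariance matrix C."""
    return sum(math.prod(C[a][b] for a, b in p) for p in pairings(list(idx)))

# |Q_m| = (2m)!/(m! 2^m): for 2m = 6 there are 6!/(3! 2^3) = 15 pairings
assert sum(1 for _ in pairings([0, 1, 2, 3, 4, 5])) == 15

# The four-variable example, with an arbitrary symmetric covariance C
C = [[2.0, 0.5, 0.1, 0.3],
     [0.5, 1.0, 0.4, 0.2],
     [0.1, 0.4, 3.0, 0.6],
     [0.3, 0.2, 0.6, 1.5]]
lhs = wick_moment(C, [0, 1, 2, 3])
rhs = C[0][1]*C[2][3] + C[0][2]*C[1][3] + C[0][3]*C[1][2]
assert abs(lhs - rhs) < 1e-12
```

The same `pairings` enumeration is exactly the index set $Q_m$ used throughout the paper.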

**Lemma 2** For an $n \times n$ mean zero Gaussian random matrix $X$: for $n$ odd, $E|X| = 0$, and for $n = 2m$ even,
$$E|X| = \sum_{\sigma \in S_{2m}} \operatorname{sgn}(\sigma) \sum_{i \in Q_m} E[x_{i_1 \sigma(i_1)} x_{i_2 \sigma(i_2)}] \cdots E[x_{i_{2m-1} \sigma(i_{2m-1})} x_{i_{2m} \sigma(i_{2m})}].$$

Lemma 2 will be our main tool for computing $E|X|$. Before we do any computations, let us introduce some known results on Kronecker products and the vec operator, which will be used to represent the covariance structures of random matrices.

## 2 Preliminaries

The covariance structure of a random matrix $X = (x_{ij})$ is somewhat difficult to represent. We need to know
$$\operatorname{cov}(x_{ij}, x_{kl}) = E[x_{ij} x_{kl}] - Ex_{ij}\, Ex_{kl}$$
for all $i, j, k, l$, which has four indexes and can be represented as a 4-dimensional array or 4th-order tensor; but by vectorizing the matrix, we may use the standard method of representing the covariance structure of random vectors by a covariance matrix. Let $\operatorname{vec} X$ be the vector of size $pq$ defined by stacking the columns of the $p \times q$ matrix $X$ one underneath the other. If $x_i$ is the $i$th column of $X$, then $\operatorname{vec} X = (x_1', \ldots, x_q')'$. The covariance matrix of a $p \times q$ random matrix $X$, denoted by $\operatorname{cov} X$, shall be defined as the $pq \times pq$ covariance matrix of $\operatorname{vec} X$:
$$\operatorname{cov} X \equiv \operatorname{cov}(\operatorname{vec} X) = E[\operatorname{vec} X (\operatorname{vec} X)'] - E[\operatorname{vec} X]\, E[(\operatorname{vec} X)'].$$
Following the convention of multivariate normal distributions, if the mean zero Gaussian random matrix $X$ has covariance matrix $\Sigma$, not necessarily nonsingular, we shall write $X \sim N(0, \Sigma)$. For example, if the components of an $n \times n$ matrix $X$ are independent and identically distributed as Gaussian with

zero mean and unit variance, then $X \sim N(0, I_{n^2})$. Some authors have used $\operatorname{vec}(A')$ instead of $\operatorname{vec} A$ in defining the covariance matrix of random matrices [8, p. 79].

The pairwise covariance $E[x_{ij} x_{kl}]$ is related to the covariance matrix $\operatorname{cov} X$ by the following lemma via the Kronecker tensor product.

**Lemma 3** For an $n \times n$ mean zero random matrix $X = (x_{ij})$,
$$\operatorname{cov} X = \sum_{i,j,k,l=1}^{n} E[x_{ij} x_{kl}]\, U_{jl} \otimes U_{ik},$$
where $U_{ij}$ is an $n \times n$ matrix whose $ij$th entry is $1$ and whose remaining entries are $0$.

**Proof.** Note that $\operatorname{cov} X = E[\operatorname{vec} X (\operatorname{vec} X)'] = \sum_{j,l=1}^{n} U_{jl} \otimes E[x_j x_l']$ and $E[x_j x_l'] = \sum_{i,k=1}^{n} E[x_{ij} x_{kl}]\, U_{ik}$. Combining these two results proves the lemma.

Hence the covariance matrix of $X$ is an $n \times n$ block matrix whose $ij$th submatrix is the cross-covariance matrix between the $i$th and $j$th columns of $X$.

Now we need to define two special matrices $K_{pq}$ and $L_{pq}$. For a $p \times q$ matrix $X$, $\operatorname{vec}(X')$ can be obtained by permuting the elements of $\operatorname{vec} X$. Hence there exists a $pq \times pq$ orthogonal matrix $K_{pq}$, called a permutation matrix [4], such that
$$\operatorname{vec}(X') = K_{pq} \operatorname{vec} X. \tag{1}$$
The permutation matrix $K_{pq}$ has the following representation [3]:
$$K_{pq} = \sum_{i=1}^{p} \sum_{j=1}^{q} U_{ij} \otimes U_{ij}', \tag{2}$$
where $U_{ij}$ is a $p \times q$ matrix whose $ij$th entry is $1$ and whose remaining entries are $0$. We shall define a companion matrix $L_{pq}$ of $K_{pq}$ as the $p^2 \times q^2$ matrix

given by
$$L_{pq} = \sum_{i=1}^{p} \sum_{j=1}^{q} U_{ij} \otimes U_{ij}.$$
Unlike the permutation matrix $K_{pq}$, the matrix $L_{pq}$ has not been studied much. The matrix $L_{pp}$ has the following properties:
$$L_{pp}' = L_{pp} = L_{pp} K_{pp} = K_{pp} L_{pp} = \tfrac{1}{p} L_{pp}^2.$$
Let $e_i$ be the $i$th column of the $p \times p$ identity matrix $I_p$. Then $L_{pp}$ can be represented in a different way [7]:
$$L_{pp} = \sum_{i,j=1}^{p} (e_i e_j') \otimes (e_i e_j') = \sum_{i,j=1}^{p} (e_i \otimes e_i)(e_j \otimes e_j)' = \operatorname{vec} I_p (\operatorname{vec} I_p)'. \tag{3}$$

**Example 4** For $p \times p$ matrices $A$ and $B$,
$$\operatorname{tr}(A)\operatorname{tr}(B) = (\operatorname{vec} A)' L_{pp} \operatorname{vec} B. \tag{4}$$
To see this, use the identity $\operatorname{tr}(X) = (\operatorname{vec} I_p)' \operatorname{vec} X = (\operatorname{vec} X)' \operatorname{vec} I_p$ and apply Equation (3). It is interesting to compare Equation (4) with the identity $\operatorname{tr}(AB) = (\operatorname{vec} A)' K_{pp} \operatorname{vec} B$.

**Lemma 5** If the $p \times q$ matrix $X \sim N(0, I_{pq})$, then for an $s \times p$ matrix $A$ and a $q \times r$ matrix $B$,
$$AXB \sim N(0, (B'B) \otimes (AA')). \tag{5}$$

**Proof.** Since $\operatorname{vec}(AXB) = (B' \otimes A)\operatorname{vec} X$ [3],
$$\operatorname{cov}(AXB) = (B' \otimes A)\operatorname{cov} X\,(B' \otimes A)' = (B' \otimes A)(B \otimes A') = (B'B) \otimes (AA').$$

**Lemma 6** If the $p \times p$ matrix $X \sim N(0, L_{pp})$, then for an $s \times p$ matrix $A$ and a $p \times r$ matrix $B$,
$$AXB \sim N(0, \operatorname{vec}(AB)\,(\operatorname{vec}(AB))'). \tag{6}$$

**Proof.** We have $\operatorname{cov}(AXB) = (B' \otimes A)\operatorname{cov} X\,(B' \otimes A)'$. Using the identity (3),
$$\operatorname{cov}(AXB) = \big((B' \otimes A)\operatorname{vec} I_p\big)\big((B' \otimes A)\operatorname{vec} I_p\big)' = \operatorname{vec}(AB)\,(\operatorname{vec}(AB))'.$$
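The identities above are easy to check numerically. The sketch below (hypothetical helper names, assuming NumPy) builds $K_{pq}$ and $L_{pq}$ directly from their defining sums and verifies Equation (1), the representation (3), and Example 4.

```python
import numpy as np

def unit(p, q, i, j):
    """U_ij: p x q matrix with a single 1 in entry (i, j) (0-based)."""
    U = np.zeros((p, q))
    U[i, j] = 1.0
    return U

def K(p, q):
    """Permutation matrix of Equation (2): sum_ij U_ij (x) U_ij'."""
    return sum(np.kron(unit(p, q, i, j), unit(p, q, i, j).T)
               for i in range(p) for j in range(q))

def L(p, q):
    """Companion matrix: sum_ij U_ij (x) U_ij."""
    return sum(np.kron(unit(p, q, i, j), unit(p, q, i, j))
               for i in range(p) for j in range(q))

def vec(X):
    # column-stacking vec operator (Fortran order)
    return X.reshape(-1, order="F")

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Equation (1): vec(X') = K_pq vec X
assert np.allclose(K(3, 4) @ vec(X), vec(X.T))
# Equation (3): L_pp = vec(I_p) vec(I_p)'
v = vec(np.eye(3))
assert np.allclose(L(3, 3), np.outer(v, v))
# Equation (4): tr(A) tr(B) = (vec A)' L_pp vec B
assert np.isclose(vec(A) @ L(3, 3) @ vec(B), np.trace(A) * np.trace(B))
# tr(AB) = (vec A)' K_pp vec B
assert np.isclose(vec(A) @ K(3, 3) @ vec(B), np.trace(A @ B))
```

Column-major (`order="F"`) reshaping matches the column-stacking convention of $\operatorname{vec}$ used in the paper.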

Note that for an orthogonal matrix $Q$ and $X \sim N(0, L_{pp})$, the above lemma shows $Q'XQ \sim N(0, L_{pp})$.

## 3 Basic covariance structures

In this section, we will consider three specific covariance structures: $E[x_{ij} x_{kl}] = a_{ij} a_{kl}$ (Theorem 7), $E[x_{ij} x_{kl}] = a_{il} a_{jk}$ (Theorem 8), and $E[x_{ij} x_{kl}] = a_{ik} a_{jl}$ (Theorem 9). The results on these three basic types of covariance structures will be the basis for constructing more complex covariance structures.

**Theorem 7** For a $2m \times 2m$ Gaussian random matrix $X \sim N(0, \operatorname{vec} A\,(\operatorname{vec} A)')$,
$$E|X| = \frac{(2m)!}{m!\,2^m}\, |A|.$$

**Proof.** Let $a_i$ be the $i$th column of $A = (a_{ij})$ and $e_i$ the $i$th column of $I_{2m}$. Then
$$\operatorname{vec} A\,(\operatorname{vec} A)' = \Big(\sum_{j=1}^{2m} e_j \otimes a_j\Big)\Big(\sum_{l=1}^{2m} e_l \otimes a_l\Big)' = \sum_{j,l=1}^{2m} (e_j e_l') \otimes (a_j a_l'). \tag{7}$$
Substituting $a_j a_l' = \sum_{i,k=1}^{2m} a_{ij} a_{kl}\, U_{ik}$ into Equation (7) and applying Lemma 3, we get $E[x_{ij} x_{kl}] = a_{ij} a_{kl}$. Now apply Lemma 2 directly:
$$E|X| = \sum_{i \in Q_m} \sum_{\sigma \in S_{2m}} \operatorname{sgn}(\sigma)\, a_{i_1 \sigma(i_1)} \cdots a_{i_{2m} \sigma(i_{2m})}.$$
Note that $\{i_1, \ldots, i_{2m}\} = \{1, 2, \ldots, 2m\}$. Therefore the inner summation is the determinant of $A$, and there are $\frac{(2m)!}{m!\,2^m}$ such determinants.

When $A = I_{2m}$, we have $X \sim N(0, L_{2m\,2m})$ and $E|X| = \frac{(2m)!}{m!\,2^m}$.

One might try to generalize Theorem 7 to $E[x_{ij} x_{kl}] = a_{ij} b_{kl}$, or $\operatorname{cov} X = \operatorname{vec} A\,(\operatorname{vec} B)'$, but this case degenerates into $E[x_{ij} x_{kl}] = a_{ij} a_{kl}$. To see this, note that $E[x_{ij} x_{kl}] = E[x_{kl} x_{ij}]$ implies $a_{ij} b_{kl} = a_{kl} b_{ij}$. Then $a_{ij}$ and $b_{ij}$ must satisfy $a_{ij} = c\, b_{ij}$ for some constant $c$ and for all $i, j$.
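Lemma 2 makes Theorem 7 checkable by brute force. The sketch below (hypothetical helpers, assuming NumPy) evaluates the Lemma 2 double sum for a $4 \times 4$ matrix ($m = 2$) with $E[x_{ij} x_{kl}] = a_{ij} a_{kl}$ and compares it with $\frac{4!}{2!\,2^2}|A| = 3|A|$.

```python
import numpy as np
from itertools import permutations

# The three pairings in Q_2 of the row indices {0, 1, 2, 3}
PAIRINGS = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]

def sgn(p):
    """Sign of a permutation via inversion count."""
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def expected_det4(cov):
    """Lemma 2 for n = 4: cov(i, j, k, l) = E[x_ij x_kl] (0-based indices)."""
    total = 0.0
    for s in permutations(range(4)):
        for pairing in PAIRINGS:
            term = 1.0
            for a, b in pairing:
                term *= cov(a, s[a], b, s[b])
            total += sgn(s) * term
    return total

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0, 2.0],
              [2.0, 0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0, 2.0]])
# Theorem 7 covariance: E[x_ij x_kl] = a_ij a_kl
val = expected_det4(lambda i, j, k, l: A[i, j] * A[k, l])
assert np.isclose(val, 3 * np.linalg.det(A))
```

Each of the three pairings contributes one copy of $|A|$, which is exactly the counting argument in the proof.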

The case $E[x_{ij} x_{kl}] = \epsilon_{ijkl}\, \delta_{ij} \delta_{kl}$, where $\epsilon_{ijkl}$ is a symmetric function in $i, j, k, l$ and $\delta_{ij}$ is the Kronecker delta, is given in [1, Lemma 5.3.2].

**Theorem 8** For a $2m \times 2m$ matrix $X = (x_{ij})$ and symmetric $A = (a_{ij})$ with $E[x_{ij} x_{kl}] = a_{il} a_{jk}$ for all $i, j, k, l$,
$$E|X| = (-1)^m \frac{(2m)!}{m!\,2^m}\, |A|.$$

**Proof.** The condition $A = A'$ is necessary. To see this, note that $E[x_{ij} x_{kl}] = E[x_{kl} x_{ij}]$ implies $a_{il} a_{jk} = a_{kj} a_{li}$. By letting $j = k$, we get $a_{il} = a_{li}$ for all $i, l$. Then, by interchanging the order of the summations in Lemma 2,
$$E|X| = \sum_{i \in Q_m} \sum_{\sigma \in S_{2m}} \operatorname{sgn}(\sigma)\, a_{i_1 \sigma(i_2)} a_{i_2 \sigma(i_1)} \cdots a_{i_{2m-1} \sigma(i_{2m})} a_{i_{2m} \sigma(i_{2m-1})}. \tag{8}$$
There exists a permutation $\tau$ such that
$$\tau(i_1) = \sigma(i_2),\ \tau(i_2) = \sigma(i_1),\ \ldots,\ \tau(i_{2m-1}) = \sigma(i_{2m}),\ \tau(i_{2m}) = \sigma(i_{2m-1}).$$
Then
$$\sigma^{-1}\tau(i_1) = i_2,\ \sigma^{-1}\tau(i_2) = i_1,\ \ldots,\ \sigma^{-1}\tau(i_{2m-1}) = i_{2m},\ \sigma^{-1}\tau(i_{2m}) = i_{2m-1}.$$
Note that $\sigma^{-1}\tau$ is the product of $m$ odd permutations called transpositions, each of which interchanges two numbers and leaves the other numbers fixed. Hence $\operatorname{sgn}(\sigma^{-1}\tau) = (-1)^m$. Then, by changing the index from $\sigma$ to $\tau$ in Equation (8) with $\operatorname{sgn}(\sigma) = (-1)^m \operatorname{sgn}(\tau)$, we get
$$E|X| = (-1)^m \sum_{i \in Q_m} \sum_{\tau \in S_{2m}} \operatorname{sgn}(\tau)\, a_{i_1 \tau(i_1)} a_{i_2 \tau(i_2)} \cdots a_{i_{2m} \tau(i_{2m})}.$$
The inner summation is the determinant of $A$, and there are $\frac{(2m)!}{m!\,2^m}$ such determinants.

Suppose $X \sim N(0, A \otimes A)$ with $A = (a_{ij})$. Since covariance matrices are symmetric, $A \otimes A = (A \otimes A)' = A' \otimes A'$. Then $(a_{ij})$ must satisfy $a_{ij} a_{kl} = a_{ji} a_{lk}$ for all $i, j, k, l$. By letting $i = j$, we get $a_{kl} = a_{lk}$ for all $k, l$, so $A$ should be symmetric.
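For $m = 1$ the sign in Theorem 8 can be checked by hand from Lemma 2: $E|X| = E[x_{11}x_{22}] - E[x_{12}x_{21}] = a_{12}^2 - a_{11}a_{22} = -|A|$ for symmetric $A$. A minimal sketch of this check (hypothetical code, assuming NumPy):

```python
import numpy as np

def expected_det2(cov):
    """Lemma 2 for n = 2: E|X| = E[x11 x22] - E[x12 x21] (0-based cov)."""
    return cov(0, 0, 1, 1) - cov(0, 1, 1, 0)

A = np.array([[2.0, 0.7],
              [0.7, 1.5]])  # symmetric, as Theorem 8 requires

# Theorem 8 covariance: E[x_ij x_kl] = a_il a_jk
val = expected_det2(lambda i, j, k, l: A[i, l] * A[j, k])
assert np.isclose(val, -np.linalg.det(A))  # (-1)^1 2!/(1! 2^1) |A| = -|A|
```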

Now let us find the pairwise covariance $E[x_{ij} x_{kl}]$ when $\operatorname{cov} X = A \otimes A$ and $A = A'$. Note that
$$\operatorname{cov} X = A \otimes A = \Big(\sum_{j,l=1}^{n} a_{jl} U_{jl}\Big) \otimes \Big(\sum_{i,k=1}^{n} a_{ik} U_{ik}\Big) = \sum_{i,j,k,l=1}^{n} a_{ik} a_{jl}\, U_{jl} \otimes U_{ik}.$$
Following Lemma 3, the covariance structure $\operatorname{cov} X = A \otimes A$, $A = A'$, is equivalent to $E[x_{ij} x_{kl}] = a_{ik} a_{jl}$ with $a_{ij} = a_{ji}$ for all $i, j, k, l$. Then we have the following theorem for the case $E[x_{ij} x_{kl}] = a_{ik} a_{jl}$.

**Theorem 9** For a $2m \times 2m$ Gaussian random matrix $X \sim N(0, A \otimes A)$ and a symmetric positive definite $2m \times 2m$ matrix $A$, $E|X| = 0$.

**Proof.** Since $A$ is symmetric positive definite, $A^{-1/2}$ exists. Then, following the proof of Lemma 5,
$$\operatorname{cov}(A^{-1/2} X A^{-1/2}) = (A^{-1/2} \otimes A^{-1/2})(A \otimes A)(A^{-1/2} \otimes A^{-1/2}) = I_{n^2}.$$
Hence $Y = A^{-1/2} X A^{-1/2} \sim N(0, I_{n^2})$. Since the components of $Y$ are all independent, trivially $E|Y| = 0$. It then follows that $E|X| = |A|\, E|Y| = 0$.

## 4 The expected determinants of X + B and AX + X'B

The results developed in the previous sections can be applied to a wide range of Gaussian random matrices with more complex covariance structures. Since a linear combination of Gaussian random variables is again Gaussian, $X + B$ and $AX + X'B$ will also be Gaussian random matrices if $X$ is a Gaussian random matrix and $A$ and $B$ are constant matrices. In this section, we will examine the expected determinants of $X + B$ and $AX + X'B$.

**Theorem 10** Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, I_{n^2} + K_{nn})$,
$$E|X| = (-1)^m \frac{(2m)!}{m!\,2^m}.$$
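Both Theorem 9 and Theorem 10 can be checked by brute force with the Lemma 2 expansion. The sketch below (hypothetical helpers, assuming NumPy) evaluates the sum for $n = 4$, i.e. $m = 2$.

```python
import numpy as np
from itertools import permutations

PAIRINGS = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]  # Q_2

def sgn(p):
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def expected_det4(cov):
    """Lemma 2 for n = 4: cov(i, j, k, l) = E[x_ij x_kl] (0-based indices)."""
    total = 0.0
    for s in permutations(range(4)):
        for pairing in PAIRINGS:
            term = 1.0
            for a, b in pairing:
                term *= cov(a, s[a], b, s[b])
            total += sgn(s) * term
    return total

# Theorem 9: cov X = A (x) A, i.e. E[x_ij x_kl] = a_ik a_jl, symmetric p.d. A
M = np.array([[1.0, 0.2, 0.0, 0.1],
              [0.3, 1.0, 0.2, 0.0],
              [0.1, 0.0, 1.0, 0.4],
              [0.0, 0.1, 0.3, 1.0]])
A = M @ M.T + np.eye(4)  # symmetric positive definite
assert abs(expected_det4(lambda i, j, k, l: A[i, k] * A[j, l])) < 1e-9

# Theorem 10: E[x_ij x_kl] = d_ik d_jl + d_jk d_il  (cov X = I_{n^2} + K_nn)
d = lambda a, b: 1.0 if a == b else 0.0
val = expected_det4(lambda i, j, k, l: d(i, k) * d(j, l) + d(j, k) * d(i, l))
assert np.isclose(val, 3.0)  # (-1)^2 4!/(2! 2^2) = 3
```

The $\delta_{ik}\delta_{jl}$ term never survives the pairing product because the two row indices within a pair always differ, which is exactly the observation used in the proof of Theorem 10 below.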

**Proof.** Note that
$$I_{n^2} = I_n \otimes I_n = \sum_{i,j,k,l=1}^{n} \delta_{ik}\delta_{jl}\, U_{jl} \otimes U_{ik} \tag{9}$$
and, from Equation (2), $K_{nn} = \sum_{j,l=1}^{n} U_{jl} \otimes U_{lj} = \sum_{i,j,k,l=1}^{n} \delta_{jk}\delta_{il}\, U_{jl} \otimes U_{ik}$. Then, from Lemma 3, $E[x_{ij} x_{kl}] = \delta_{ik}\delta_{jl} + \delta_{jk}\delta_{il}$. Now apply Lemma 2 directly:
$$E[x_{i_j \sigma(i_j)} x_{i_{j+1} \sigma(i_{j+1})}] = \delta_{i_j, i_{j+1}}\delta_{\sigma(i_j), \sigma(i_{j+1})} + \delta_{i_j, \sigma(i_{j+1})}\delta_{i_{j+1}, \sigma(i_j)}.$$
Since $i_j \neq i_{j+1}$, the first term vanishes. Hence we can modify the covariance structure of $X$ to $E[x_{ij} x_{kl}] = \delta_{jk}\delta_{il}$ and still get the same expected determinant. Now apply Theorem 8 with $A = I_{2m}$.

Similarly, we have the following.

**Theorem 11** Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, A \otimes A + \operatorname{vec} B\,(\operatorname{vec} B)')$ and a symmetric positive definite $n \times n$ matrix $A$,
$$E|X| = \frac{(2m)!}{m!\,2^m}\, |B|.$$

**Proof.** Since $A$ is symmetric positive definite, $A^{-1/2}$ exists. Let $Y = (y_{ij}) = A^{-1/2} X A^{-1/2}$. Note that $E|X| = |A|\, E|Y|$. Now we find the pairwise covariance $E[y_{ij} y_{kl}]$ and apply Lemma 2. Following the proof of Theorem 9,
$$\operatorname{cov}(Y) = I_{n^2} + (A^{-1/2} \otimes A^{-1/2})\,\operatorname{vec} B\,(\operatorname{vec} B)'\,(A^{-1/2} \otimes A^{-1/2}).$$
Since $\operatorname{vec}(A^{-1/2} B A^{-1/2}) = (A^{-1/2} \otimes A^{-1/2})\operatorname{vec} B$,
$$\operatorname{cov}(Y) = I_{n^2} + \operatorname{vec}(A^{-1/2} B A^{-1/2})\,\big(\operatorname{vec}(A^{-1/2} B A^{-1/2})\big)'. \tag{10}$$
Then $E[y_{ij} y_{kl}] = \delta_{ik}\delta_{jl} + \cdots$, where $\delta_{ik}\delta_{jl}$ corresponds to the first term $I_{n^2}$ in Equation (10) and $\cdots$ indicates the second term, which we do not compute.

To apply Lemma 2, we need the pairwise covariance
$$E[y_{i_j \sigma(i_j)} y_{i_{j+1} \sigma(i_{j+1})}] = \delta_{i_j i_{j+1}} \delta_{\sigma(i_j) \sigma(i_{j+1})} + \cdots.$$
Since $i_j \neq i_{j+1}$, the first term vanishes. Therefore, the expectation $E|Y|$ will not change even if we modify the covariance matrix from Equation (10) to
$$\operatorname{cov}(Y) = \operatorname{vec}(A^{-1/2} B A^{-1/2})\,\big(\operatorname{vec}(A^{-1/2} B A^{-1/2})\big)'.$$
Then, by applying Theorem 7,
$$E|Y| = \frac{(2m)!}{m!\,2^m}\, |A^{-1/2} B A^{-1/2}|.$$

By letting $A = B = I_n$, we get:

**Corollary 12** Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, I_{n^2} + L_{nn})$,
$$E|X| = \frac{(2m)!}{m!\,2^m}.$$

The following theorem is due to [9], where the covariance structure is slightly different.

**Theorem 13** For an $n \times n$ matrix $X \sim N(0, L_{nn})$ and a constant symmetric $n \times n$ matrix $B$,
$$E|X + B| = \sum_{j=0}^{\lfloor n/2 \rfloor} \frac{(2j)!}{2^j j!}\, |B|_{n-2j},$$
where $\lfloor n/2 \rfloor$ is the largest integer not exceeding $n/2$ and $|B|_j$ is the sum of the $j \times j$ principal minors of $B$.

**Proof.** Let $Q$ be an orthogonal matrix such that $Q'BQ = D$, where $D = \operatorname{Diag}(\lambda_1, \ldots, \lambda_n)$ is the diagonal matrix of eigenvalues of $B$. Then $|X + B| = |Q'XQ + D|$. The determinant $|Q'XQ + D|$ can be expanded in the following way [3, p. 196]:
$$|Q'XQ + D| = \sum_{\{i_1, \ldots, i_r\}} \lambda_{i_1} \cdots \lambda_{i_r}\, |Q'XQ|_{\{i_1, \ldots, i_r\}}, \tag{11}$$
where the summation is taken over all $2^n$ subsets of $\{1, \ldots, n\}$ and $|Q'XQ|_{\{i_1, \ldots, i_r\}}$ is the determinant of the $(n-r) \times (n-r)$ principal submatrix of $Q'XQ$ obtained by striking out the $i_1, \ldots, i_r$th rows and columns. From Lemma 6, $Q'XQ \sim N(0, L_{nn})$. It then follows that the distribution of any $(n-r) \times (n-r)$ principal submatrix of $Q'XQ$

is $N(0, L_{(n-r)(n-r)})$. Using Theorem 7,
$$E|Q'XQ|_{\{i_1, \ldots, i_r\}} = \frac{(2j)!}{j!\,2^j}$$
for any $i_1, \ldots, i_r$ if $n - r = 2j$. If $n - r = 2j + 1$, the expected principal minor equals $0$. Therefore,
$$E|X + B| = \sum_{j=0}^{\lfloor n/2 \rfloor} \frac{(2j)!}{j!\,2^j} \sum_{\{i_1, \ldots, i_r\}} \lambda_{i_1} \cdots \lambda_{i_r},$$
where the inner summation is taken over subsets of $\{1, \ldots, n\}$ with $r = n - 2j$ fixed. The inner summation is the $r$th elementary symmetric function of the $n$ numbers $\lambda_1, \ldots, \lambda_n$, and it is identical to the sum of the $r \times r$ principal minors of $B$ [5].

**Theorem 14** For an $n \times n$ matrix $X \sim N(0, A \otimes A)$, an $n \times n$ symmetric positive definite matrix $A$, and a symmetric $n \times n$ matrix $B$,
$$E|X + B| = |B|.$$

**Proof.** Let $Y = A^{-1/2} X A^{-1/2}$. Then $Y \sim N(0, I_{n^2})$. Note that $E|X + B| = |A|\, E|Y + A^{-1/2} B A^{-1/2}|$. Following the proof of Theorem 13 closely,
$$E|Y + A^{-1/2} B A^{-1/2}| = \sum_{\{i_1, \ldots, i_r\}} \lambda_{i_1} \cdots \lambda_{i_r}\, E|Q'YQ|_{\{i_1, \ldots, i_r\}},$$
where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A^{-1/2} B A^{-1/2}$. Since $Q'YQ|_{\{i_1, \ldots, i_r\}} \sim N(0, I_{(n-r)^2})$, we have $E|Q'YQ|_{\{i_1, \ldots, i_r\}} = 0$ if $r < n$, while $E|Q'YQ|_{\{i_1, i_2, \ldots, i_n\}} = E|Q'YQ|_{\{1, 2, \ldots, n\}} = 1$. Hence
$$E|Y + A^{-1/2} B A^{-1/2}| = \lambda_1 \cdots \lambda_n = |A|^{-1} |B|.$$

**Theorem 15** Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, I_{n^2})$ and constant $n \times n$ matrices $A$ and $B$,
$$E|AX + X'B| = (-1)^m m!\, |AB|_m,$$
where $|AB|_m$ is the sum of the $m \times m$ principal minors of $AB$.
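Before the proof, the $m = 1$ case of Theorem 15 can be checked numerically. The sketch below (hypothetical code, assuming NumPy) builds $\operatorname{cov}(\operatorname{vec} Y)$ for $Y = AX + X'B$ from $\operatorname{vec}(AX + X'B) = \big((I \otimes A) + (B' \otimes I)K_{nn}\big)\operatorname{vec} X$ and evaluates Lemma 2 for $n = 2$, where the predicted value is $-|AB|_1 = -\operatorname{tr}(AB)$.

```python
import numpy as np

def commutation(n):
    """K_nn of Equation (1): K vec(X) = vec(X') for column-major vec."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            # entry (i, j) of X' reads entry (j, i) of X
            K[i * n + j, j * n + i] = 1.0
    return K

n = 2
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 2.0]])

# vec Y = M vec X with vec X ~ N(0, I), so cov(vec Y) = M M'
M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n)) @ commutation(n)
C = M @ M.T

v = lambda i, j: j * n + i  # column-major vec index of entry (i, j)
# Lemma 2 for m = 1: E|Y| = E[y11 y22] - E[y12 y21]
e_det = C[v(0, 0), v(1, 1)] - C[v(0, 1), v(1, 0)]
assert np.isclose(e_det, -np.trace(A @ B))  # (-1)^1 1! |AB|_1
```

This agrees with the covariance formula (12) derived in the proof, where only the cross terms involving both $A$ and $B$ survive the Lemma 2 pairing.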

**Proof.** Let $Y = (y_{ij}) = AX + X'B$. We need to find the pairwise covariance $E[y_{ij} y_{kl}]$ using $E[x_{ij} x_{kl}] = \delta_{ik}\delta_{jl}$. Note that $y_{ij} = \sum_u (a_{iu} x_{uj} + b_{uj} x_{ui})$. Let us use the Einstein convention of not writing down the summation over repeated indices, so that $y_{ij} \equiv a_{iu} x_{uj} + b_{uj} x_{ui}$. Then the pairwise covariances of $Y$ can be easily computed:
$$E[y_{ij} y_{kl}] = a_{iu} a_{kv} E[x_{uj} x_{vl}] + a_{iu} b_{vl} E[x_{uj} x_{vk}] + b_{uj} a_{kv} E[x_{ui} x_{vl}] + b_{uj} b_{vl} E[x_{ui} x_{vk}]
= a_{iu} a_{ku} \delta_{jl} + a_{iu} b_{ul} \delta_{jk} + b_{uj} a_{ku} \delta_{il} + b_{uj} b_{ul} \delta_{ik}.$$
Let $a_{(i)}'$ be the $i$th row of $A$ and $b_i$ the $i$th column of $B$, respectively. Then $a_{iu} a_{ku} \equiv \sum_{u=1}^{n} a_{iu} a_{ku} = a_{(i)}' a_{(k)}$, and the other terms can be expressed similarly:
$$E[y_{ij} y_{kl}] = a_{(i)}' a_{(k)}\, \delta_{jl} + a_{(i)}' b_l\, \delta_{jk} + a_{(k)}' b_j\, \delta_{il} + b_j' b_l\, \delta_{ik}. \tag{12}$$
When we apply Equation (12) to Lemma 2, the first and the last terms vanish:
$$E[y_{i_j \sigma(i_j)} y_{i_{j+1} \sigma(i_{j+1})}] = a_{(i_j)}' b_{\sigma(i_{j+1})}\, \delta_{i_{j+1} \sigma(i_j)} + a_{(i_{j+1})}' b_{\sigma(i_j)}\, \delta_{i_j \sigma(i_{j+1})}.$$
Let $\tau$ be a permutation satisfying
$$\tau(i_1) = \sigma(i_2),\ \tau(i_2) = \sigma(i_1),\ \ldots,\ \tau(i_{2m-1}) = \sigma(i_{2m}),\ \tau(i_{2m}) = \sigma(i_{2m-1}).$$
Then
$$E[y_{i_j \sigma(i_j)} y_{i_{j+1} \sigma(i_{j+1})}] = a_{(i_j)}' b_{\tau(i_j)}\, \delta_{i_{j+1} \tau(i_{j+1})} + a_{(i_{j+1})}' b_{\tau(i_{j+1})}\, \delta_{i_j \tau(i_j)}$$
and $\operatorname{sgn}(\sigma) = (-1)^m \operatorname{sgn}(\tau)$. By changing the summation index from $\sigma$ to $\tau$ in Lemma 2,
$$E|Y| = (-1)^m \sum_{\tau \in S_{2m}} \sum_{i \in Q_m} \operatorname{sgn}(\tau)
\big(a_{(i_1)}' b_{\tau(i_1)} \delta_{i_2 \tau(i_2)} + a_{(i_2)}' b_{\tau(i_2)} \delta_{i_1 \tau(i_1)}\big)
\cdots
\big(a_{(i_{2m-1})}' b_{\tau(i_{2m-1})} \delta_{i_{2m} \tau(i_{2m})} + a_{(i_{2m})}' b_{\tau(i_{2m})} \delta_{i_{2m-1} \tau(i_{2m-1})}\big).$$
The product inside the inner summation can be expanded as
$$\sum_{\substack{j_1, \ldots, j_m \\ k_1, \ldots, k_m}} \delta_{j_1 \tau(j_1)} \cdots \delta_{j_m \tau(j_m)}\; a_{(k_1)}' b_{\tau(k_1)} \cdots a_{(k_m)}' b_{\tau(k_m)}, \tag{13}$$
where the summation is taken over the $2^m$ possible ways of choosing $(j_l, k_l) \in \{(i_{2l-1}, i_{2l}), (i_{2l}, i_{2l-1})\}$ for all $l = 1, \ldots, m$. In order to have a non-vanishing

term in Equation (13), we need $\tau(j_1) = j_1, \ldots, \tau(j_m) = j_m$. Let $\rho \in S_m$ be a permutation of the $m$ numbers $\{k_1, \ldots, k_m\}$. Then, by changing the index from $\tau$ to $\rho$,
$$E|Y| = (-1)^m \sum_{i \in Q_m} \sum_{k_1, \ldots, k_m} \sum_{\rho \in S_m} \operatorname{sgn}(\rho)\, a_{(k_1)}' b_{\rho(k_1)} \cdots a_{(k_m)}' b_{\rho(k_m)}
= (-1)^m \sum_{i \in Q_m} \sum_{k_1, \ldots, k_m} |AB|_{\{k_1, \ldots, k_m\}},$$
where $|AB|_{\{k_1, \ldots, k_m\}}$ is the determinant of the $m \times m$ principal submatrix consisting of the $k_1, \ldots, k_m$th rows and columns of $AB$. Note that there are $\frac{(2m)!}{m!\,2^m} \cdot 2^m = \frac{(2m)!}{m!}$ principal-minor terms $|AB|_{\{k_1, \ldots, k_m\}}$ in the summation $\sum_{i \in Q_m} \sum_{k_1, \ldots, k_m}$, but there are only $\binom{2m}{m}$ unique principal minors of $AB$. Hence there must be repetitions of principal minors in the summation. Because of symmetry, the number of repetitions of each principal minor must be $\frac{(2m)!}{m!} \big/ \binom{2m}{m} = m!$. Therefore
$$\sum_{i \in Q_m} \sum_{k_1, \ldots, k_m} |AB|_{\{k_1, \ldots, k_m\}} = m!\, |AB|_m.$$

**Corollary 16** Let $n = 2m$. For an $n \times n$ matrix $X \sim N(0, I_{n^2})$,
$$E|X + X'| = (-1)^m \frac{(2m)!}{m!}.$$

**Proof.** Let $A = B = I_{2m}$ in Theorem 15 and use the fact that the sum of the $m \times m$ principal minors of $I_{2m}$ is $\binom{2m}{m}$.

Finally, we propose an open problem. The difficulty of this problem arises from the restriction $m > n$.

**Problem 17** Let $m > n$, and let the $m \times n$ random matrix $X \sim N(0, A \otimes I_n)$, where the $m \times m$ matrix $A$ is symmetric non-negative definite. For an $m \times m$ symmetric matrix $C$, determine $E|X'CX|$.

## References

[1] R.J. Adler, The Geometry of Random Fields, John Wiley and Sons.

[2] S.F. Arnold, The Theory of Linear Models and Multivariate Analysis, John Wiley & Sons.

[3] D.A. Harville, Matrix Algebra from a Statistician's Perspective, Springer-Verlag.

[4] H.V. Henderson, S.R. Searle, Vec and Vech Operators for Matrices, with Some Uses in Jacobians and Multivariate Statistics, The Canadian Journal of Statistics 7 (1979).

[5] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press.

[6] Y.K. Lin, Probabilistic Theory of Structural Dynamics, McGraw-Hill, 1967.

[7] J.R. Magnus, The Commutation Matrix: Some Properties and Applications, The Annals of Statistics 7 (1979).

[8] R.B. Muirhead, Aspects of Multivariate Statistical Theory, John Wiley & Sons.

[9] K.J. Worsley, Local Maxima and the Expected Euler Characteristic of Excursion Sets of χ², F and t Fields, Advances in Applied Probability 26 (1995).


Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization

### Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

### MATH 106 LINEAR ALGEBRA LECTURE NOTES

MATH 6 LINEAR ALGEBRA LECTURE NOTES FALL - These Lecture Notes are not in a final form being still subject of improvement Contents Systems of linear equations and matrices 5 Introduction to systems of

### Foundations of Matrix Analysis

1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

### DETERMINANTS. , x 2 = a 11b 2 a 21 b 1

DETERMINANTS 1 Solving linear equations The simplest type of equations are linear The equation (1) ax = b is a linear equation, in the sense that the function f(x) = ax is linear 1 and it is equated to

### Notes on Determinants and Matrix Inverse

Notes on Determinants and Matrix Inverse University of British Columbia, Vancouver Yue-Xian Li March 17, 2015 1 1 Definition of determinant Determinant is a scalar that measures the magnitude or size of

### DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

### Submatrices and Partitioned Matrices

2 Submatrices and Partitioned Matrices Two very important (and closely related) concepts are introduced in this chapter that of a submatrix and that of a partitioned matrix. These concepts arise very naturally

### THREE LECTURES ON QUASIDETERMINANTS

THREE LECTURES ON QUASIDETERMINANTS Robert Lee Wilson Department of Mathematics Rutgers University The determinant of a matrix with entries in a commutative ring is a main organizing tool in commutative

### Introduction Eigen Values and Eigen Vectors An Application Matrix Calculus Optimal Portfolio. Portfolios. Christopher Ting.

Portfolios Christopher Ting Christopher Ting http://www.mysmu.edu/faculty/christophert/ : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 November 4, 2016 Christopher Ting QF 101 Week 12 November 4,

### Diagonalization by a unitary similarity transformation

Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

### CS 246 Review of Linear Algebra 01/17/19

1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

### Linear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey

Copyright 2005, W.R. Winfrey Topics Preliminaries Systems of Linear Equations Matrices Algebraic Properties of Matrix Operations Special Types of Matrices and Partitioned Matrices Matrix Transformations

### Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

### Index Notation for Vector Calculus

Index Notation for Vector Calculus by Ilan Ben-Yaacov and Francesc Roig Copyright c 2006 Index notation, also commonly known as subscript notation or tensor notation, is an extremely useful tool for performing

### Matrix & Linear Algebra

Matrix & Linear Algebra Jamie Monogan University of Georgia For more information: http://monogan.myweb.uga.edu/teaching/mm/ Jamie Monogan (UGA) Matrix & Linear Algebra 1 / 84 Vectors Vectors Vector: A

### Phys 201. Matrices and Determinants

Phys 201 Matrices and Determinants 1 1.1 Matrices 1.2 Operations of matrices 1.3 Types of matrices 1.4 Properties of matrices 1.5 Determinants 1.6 Inverse of a 3 3 matrix 2 1.1 Matrices A 2 3 7 =! " 1

### Basic Concepts in Matrix Algebra

Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1

### Determinants. Beifang Chen

Determinants Beifang Chen 1 Motivation Determinant is a function that each square real matrix A is assigned a real number, denoted det A, satisfying certain properties If A is a 3 3 matrix, writing A [u,

### Vectors and Matrices Notes.

Vectors and Matrices Notes Jonathan Coulthard JonathanCoulthard@physicsoxacuk 1 Index Notation Index notation may seem quite intimidating at first, but once you get used to it, it will allow us to prove

### An unbiased estimator for the roughness of a multivariate Gaussian random field

An unbiased estimator for the roughness of a multivariate Gaussian random field K.J. Worsley July 28, 2000 Abstract Images from positron emission tomography (PET) and functional magnetic resonance imaging

### Definition 2.3. We define addition and multiplication of matrices as follows.

14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

### arxiv:math/ v5 [math.ac] 17 Sep 2009

On the elementary symmetric functions of a sum of matrices R. S. Costas-Santos arxiv:math/0612464v5 [math.ac] 17 Sep 2009 September 17, 2009 Abstract Often in mathematics it is useful to summarize a multivariate

### MATRICES. a m,1 a m,n A =

MATRICES Matrices are rectangular arrays of real or complex numbers With them, we define arithmetic operations that are generalizations of those for real and complex numbers The general form a matrix of

### HOMEWORK Graduate Abstract Algebra I May 2, 2004

Math 5331 Sec 121 Spring 2004, UT Arlington HOMEWORK Graduate Abstract Algebra I May 2, 2004 The required text is Algebra, by Thomas W. Hungerford, Graduate Texts in Mathematics, Vol 73, Springer. (it

### Invertible Matrices over Idempotent Semirings

Chamchuri Journal of Mathematics Volume 1(2009) Number 2, 55 61 http://www.math.sc.chula.ac.th/cjm Invertible Matrices over Idempotent Semirings W. Mora, A. Wasanawichit and Y. Kemprasit Received 28 Sep

### ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

### Linear Algebra. Solving Linear Systems. Copyright 2005, W.R. Winfrey

Copyright 2005, W.R. Winfrey Topics Preliminaries Echelon Form of a Matrix Elementary Matrices; Finding A -1 Equivalent Matrices LU-Factorization Topics Preliminaries Echelon Form of a Matrix Elementary

### Math 4377/6308 Advanced Linear Algebra

2.3 Composition Math 4377/6308 Advanced Linear Algebra 2.3 Composition of Linear Transformations Jiwen He Department of Mathematics, University of Houston jiwenhe@math.uh.edu math.uh.edu/ jiwenhe/math4377

### Matrices. Chapter Definitions and Notations

Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which

### Countable and uncountable sets. Matrices.

CS 441 Discrete Mathematics for CS Lecture 11 Countable and uncountable sets. Matrices. Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Arithmetic series Definition: The sum of the terms of the

### Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

### Solvability of Linear Matrix Equations in a Symmetric Matrix Variable

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Maurcio C. de Oliveira J. William Helton Abstract We study the solvability of generalized linear matrix equations of the Lyapunov type

### Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

### Linear Algebra (Review) Volker Tresp 2017

Linear Algebra (Review) Volker Tresp 2017 1 Vectors k is a scalar (a number) c is a column vector. Thus in two dimensions, c = ( c1 c 2 ) (Advanced: More precisely, a vector is defined in a vector space.

### arxiv: v1 [math.ra] 11 Aug 2014

Double B-tensors and quasi-double B-tensors Chaoqian Li, Yaotang Li arxiv:1408.2299v1 [math.ra] 11 Aug 2014 a School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P. R. China 650091

### a11 a A = : a 21 a 22

Matrices The study of linear systems is facilitated by introducing matrices. Matrix theory provides a convenient language and notation to express many of the ideas concisely, and complicated formulas are

### Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

### Elementary maths for GMT

Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

### Generalizations of Sylvester s determinantal identity

Generalizations of Sylvester s determinantal identity. Redivo Zaglia,.R. Russo Università degli Studi di Padova Dipartimento di atematica Pura ed Applicata Due Giorni di Algebra Lineare Numerica 6-7 arzo

### PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS

PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix

### High-dimensional asymptotic expansions for the distributions of canonical correlations

Journal of Multivariate Analysis 100 2009) 231 242 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva High-dimensional asymptotic