Convergence of empirical spectral distributions of large dimensional quaternion sample covariance matrices


Ann Inst Stat Math (2015). DOI 10.1007/s…

Convergence of empirical spectral distributions of large dimensional quaternion sample covariance matrices

Huiqin Li · Zhidong Bai · Jiang Hu

Received: 4 June 2014 / Revised: 27 December 2014 / Published online: 22 March 2015
© The Institute of Statistical Mathematics, Tokyo 2015

Abstract  In this paper, we establish the limit of the empirical spectral distributions of quaternion sample covariance matrices. Motivated by Bai and Silverstein (Spectral analysis of large dimensional random matrices, Springer, New York, 2010) and Marčenko and Pastur (Matematicheskii Sbornik, 1967), we extend the results for real or complex sample covariance matrices to the quaternion case. Suppose $X_n = (x_{jk}^{(n)})_{p\times n}$ is a quaternion random matrix. For each $n$, the entries $\{x_{jk}^{(n)}\}$ are independent random quaternion variables with a common mean $\mu$ and variance $\sigma^2 > 0$. It is shown that the empirical spectral distribution of the quaternion sample covariance matrix $S_n = \frac1n X_n X_n^*$ converges to the Marčenko–Pastur law as $p \to \infty$, $n \to \infty$ and $p/n \to y \in (0, +\infty)$.

Keywords  Empirical spectral distribution · LSD · Quaternion matrices · Sample covariance matrix

Z. D. Bai was partially supported by CNSF 7057, the Fundamental Research Funds for the Central Universities, and PCSIRT; J. Hu was partially supported by a CNSF grant.

H. Li · Z. D. Bai · J. Hu
KLASMOE and School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, People's Republic of China
e-mail: baizd@nenu.edu.cn; lihq8@nenu.edu.cn; huj56@nenu.edu.cn

1 Introduction

In 1843, Hamilton described the hyper-complex numbers of rank 4, to which he gave the name quaternions (see Kuipers 1999). Research on quaternion matrices can be traced back to Wolf (1936). After a long blank period, it was gradually discovered that quaternions and quaternion matrices play important roles in quantum physics, robot technology and artificial satellite attitude control, among other applications; see Adler (1995) and Finkelstein et al. (1962). Consequently, studies on quaternions have attracted considerable attention in recent years; see So et al. (1994), Zhang (1995), Kanzieper (2002), Akemann (2005), and Akemann and Phillips (2013), among others.

In the following, we introduce the quaternion notation. A quaternion can be represented as a $2\times2$ complex matrix
$$x = a\,\mathbf e + b\,\mathbf i + c\,\mathbf j + d\,\mathbf k = \begin{pmatrix} a+bi & c+di \\ -c+di & a-bi \end{pmatrix}, \qquad a,b,c,d \in \mathbb R, \tag{1}$$
where $i$ denotes the imaginary unit and the quaternion units can be represented as
$$\mathbf e = \begin{pmatrix}1&0\\0&1\end{pmatrix},\quad \mathbf i = \begin{pmatrix}i&0\\0&-i\end{pmatrix},\quad \mathbf j = \begin{pmatrix}0&1\\-1&0\end{pmatrix},\quad \mathbf k = \begin{pmatrix}0&i\\i&0\end{pmatrix}.$$
The conjugate of $x$ is defined as
$$\bar x = a\,\mathbf e - b\,\mathbf i - c\,\mathbf j - d\,\mathbf k = \begin{pmatrix} a-bi & -c-di \\ c-di & a+bi \end{pmatrix}$$
and its norm as $\|x\| = \sqrt{a^2+b^2+c^2+d^2}$. More details can be found in Kuipers (1999), Zhang (1997), and Mehta (2004). Using the matrix representation of quaternions, an $n\times n$ quaternion matrix $X$ can be rewritten as a $2n\times 2n$ complex matrix $\psi(X)$, and so for convenience we can deal with quaternion matrices as complex matrices. Denote $S = \frac1n XX^*$ and $\psi(S) = \frac1n \psi(X)\psi(X)^*$. It is known (see Zhang 1997) that the multiplicities of all the eigenvalues of $\psi(S)$ (obviously, they are all real) are even. Taking one from each of the $n$ pairs of eigenvalues of $\psi(S)$, the resulting values are defined to be the eigenvalues of $S$.

In addition, computing speed and storage capability have increased a thousandfold in recent decades, which makes it possible to collect very large data sets with high dimensions. Since many classical conclusions fail in this setting, a new theory is needed to analyze such data. Luckily, the theory of random matrices (RMT) offers a possible route for dealing with these problems. The sample covariance matrix is one of the most important random matrices in RMT, and its study can be traced back to Wishart (1928). Marčenko and Pastur (1967) proved that the empirical spectral distribution (ESD) of a large dimensional complex sample covariance matrix tends to the Marčenko–Pastur (M–P) law.
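The objects introduced above — the $2\times2$ representation, the map $\psi$, the even eigenvalue multiplicities of $\psi(S)$, and the M–P limit — can be illustrated with a short simulation. A minimal numpy sketch; the sizes and the Gaussian choice of quaternion coordinates are our own illustrative assumptions, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 150, 300                       # dimension p and sample size n, y_n = p/n = 1/2

# psi(X_n): a 2p x 2n complex matrix whose 2x2 blocks represent quaternion entries
# x = a*e + b*i + c*j + d*k, with coordinates a, b, c, d ~ N(0, 1/4) so E||x||^2 = 1
a, b, c, d = rng.normal(0.0, 0.5, size=(4, p, n))
X = np.empty((2 * p, 2 * n), dtype=complex)
X[0::2, 0::2] = a + 1j * b            # block [[a+bi, c+di], [-c+di, a-bi]]
X[0::2, 1::2] = c + 1j * d
X[1::2, 0::2] = -c + 1j * d
X[1::2, 1::2] = a - 1j * b

lam = np.sort(np.linalg.eigvalsh(X @ X.conj().T / n))  # spectrum of psi(S_n)

# eigenvalues of psi(S_n) come in pairs; one of each pair is an eigenvalue of S_n
assert np.allclose(lam[0::2], lam[1::2], atol=1e-8)

# the spectrum settles on the M-P support [(1-sqrt(y))^2, (1+sqrt(y))^2] for sigma^2 = 1
y = p / n
assert lam.max() < (1 + np.sqrt(y)) ** 2 + 0.2
# and the average eigenvalue estimates sigma^2 = 1
assert abs(lam.mean() - 1.0) < 0.05
```

A histogram of every second entry of `lam` plotted against the M–P density shows the convergence visually.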

Since then, several successive studies on large dimensional complex or real sample covariance matrices have been completed; the reader is referred to the books by Anderson et al. (2010), Bai and Silverstein (2010), and Mehta (2004) for more details. Under the normality assumption, there are three classic random matrix models: the Gaussian orthogonal ensemble (GOE), for which all entries of the matrix are real normal random variables; the Gaussian unitary ensemble (GUE), for which all entries are complex normal random variables; and the Gaussian symplectic ensemble (GSE), for which all entries are normal quaternion random variables. Benefiting from the density function of the ensemble and the joint density of the eigenvalues, the results for these models have a style of their own. If the normality assumption is removed, the first two models already have satisfactory analogues. For quaternion matrices, there are only a few references (see Yin and Bai 2014; Yin et al. 2013, 2014). In this paper, we prove that the ESD of the quaternion sample covariance matrix also converges to the M–P law. However, because the multiplication of quaternions is not commutative, when the entries of $X_n$ are quaternion random variables few works on their spectral properties are found in the literature unless the random variables are normally distributed, since only in that case is the joint density of the eigenvalues available. The tool provided by Yin et al. (2013) makes the general quaternion case tractable.

For the proof of this result, we first introduce two definitions, the ESD and the Stieltjes transform. Let $A$ be a $p\times p$ Hermitian matrix and denote its eigenvalues by $s_j$, $j=1,2,\dots,p$. The ESD of $A$ is defined by
$$F^A(x) = \frac1p \sum_{j=1}^p I(s_j \le x),$$
where $I_D$ is the indicator function of an event $D$, and the Stieltjes transform of $F^A(x)$ is given by
$$m(z) = \int \frac{1}{x-z}\,dF^A(x),$$
where $z = u + \upsilon i \in \mathbb C^+$. Let $g(x)$ and $m_g(z)$ denote the density function and the Stieltjes transform of the M–P law, which are
$$g(x) = \begin{cases} \dfrac{1}{2\pi x y \sigma^2}\sqrt{(b-x)(x-a)}, & a \le x \le b;\\[2pt] 0, & \text{otherwise,} \end{cases} \tag{2}$$
and
$$m_g(z) = \frac{\sigma^2(1-y) - z + \sqrt{(z-\sigma^2-y\sigma^2)^2 - 4y\sigma^4}}{2yz\sigma^2}, \tag{3}$$
respectively, where $a = \sigma^2(1-\sqrt y)^2$ and $b = \sigma^2(1+\sqrt y)^2$.
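Equations (2) and (3) can be sanity-checked numerically: $g$ should integrate to 1 (when $y \le 1$), and $m_g$ solves the quadratic $y\sigma^2zm^2 + (z-\sigma^2(1-y))m + 1 = 0$ implied by (3). A sketch, with the illustrative values $y = 1/2$, $\sigma^2 = 1$ of our own choosing:

```python
import numpy as np

y, s2 = 0.5, 1.0                       # aspect ratio y and scale sigma^2 (illustrative)
a = s2 * (1 - np.sqrt(y)) ** 2         # left edge of the M-P support
b = s2 * (1 + np.sqrt(y)) ** 2         # right edge

def trap(f, x):
    # simple trapezoid rule, to avoid depending on np.trapz/np.trapezoid naming
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

x = np.linspace(a, b, 1_000_001)
g = np.sqrt(np.maximum((b - x) * (x - a), 0.0)) / (2 * np.pi * x * y * s2)
assert abs(trap(g, x) - 1.0) < 1e-3    # total mass is 1 when y <= 1

# m_g(z) solves y*s2*z*m^2 + (z - s2*(1-y))*m + 1 = 0; for z in the upper half
# plane the Stieltjes transform is the root with positive imaginary part
z = 1.0 + 1.0j
roots = np.roots([y * s2 * z, z - s2 * (1.0 - y), 1.0])
m = roots[np.argmax(roots.imag)]
assert m.imag > 0
# cross-check against direct integration of 1/(x - z) against the density g
m_int = trap(g / (x - z), x)
assert abs(m - m_int) < 1e-3
```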

In (2) and (3), the constant $y$ is the limit of the ratio of the dimension $p$ to the sample size $n$, and $\sigma^2$ is the scale parameter. If $y > 1$, then $G(x)$, the distribution function of $g(x)$, has a point mass $1-1/y$ at the origin. Now, our main theorem can be described as follows.

**Theorem 1** Let $X_n = (x_{jk}^{(n)})$, $j = 1,\dots,p$, $k = 1,\dots,n$. Suppose that for each $n$ the $\{x_{jk}^{(n)}\}$ are independent quaternion random variables with a common mean $\mu$ and variance $\sigma^2$. Assume that $y_n = p/n \to y \in (0,\infty)$ and that for any constant $\eta > 0$,
$$\frac{1}{np}\sum_{j,k}\mathrm E\,\|x_{jk}^{(n)}\|^2\, I\bigl(\|x_{jk}^{(n)}\| \ge \eta\sqrt n\bigr) \to 0. \tag{4}$$
Then, with probability one, the ESD of the sample covariance matrix $S_n = \frac1n X_n X_n^*$ converges in distribution to the M–P law, which has density function (2) and a point mass $1-1/y$ at the origin when $y > 1$. Here, the superscript $*$ stands for the complex conjugate transpose.

**Remark 2** Without loss of generality, in the proof of Theorem 1 we assume that $\sigma^2 = 1$. Furthermore, one can see that removing the common mean of the entries of $X_n$ does not alter the LSD of sample covariance matrices. In fact, let $T_n = \frac1n (X_n - \mathrm EX_n)(X_n - \mathrm EX_n)^*$. By Lemma 17, we have, for all large $p$,
$$\bigl\|F^{S_n} - F^{T_n}\bigr\|_{KS} \le \frac{1}{2p}\operatorname{rank}(\mathrm EX_n) \to 0,$$
where $\|f\|_{KS} = \sup_x |f(x)|$. Consequently, we assume that $\mu = 0$.

The paper is organized as follows. In Sect. 2, the structure of the inverse of certain matrices related to quaternions is established, which is the key tool in proving Theorem 1. Section 3 gives the proof of the main theorem in two steps, and in Sect. 4 we collect some auxiliary lemmas that are used in the preceding section.

2 Preliminaries

We shall use Lemma 2.5 of Yin et al. (2013) to prove our main result in the next section. To keep this work self-contained, the lemma is now stated as follows.

**Definition 3** A matrix is of Type-I if it has the following structure:
$$\begin{pmatrix}
t_1 & 0 & a_2 & b_2 & \cdots & a_n & b_n\\
0 & t_1 & c_2 & d_2 & \cdots & c_n & d_n\\
-d_2 & b_2 & t_2 & 0 & \cdots & a_{2n} & b_{2n}\\
c_2 & -a_2 & 0 & t_2 & \cdots & c_{2n} & d_{2n}\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
-d_n & b_n & -d_{2n} & b_{2n} & \cdots & t_n & 0\\
c_n & -a_n & c_{2n} & -a_{2n} & \cdots & 0 & t_n
\end{pmatrix}.$$
Here, all the entries are complex.

**Definition 4** A matrix is of Type-II if it has the following structure:
$$\begin{pmatrix}
t_1 & 0 & a_2+c_2i & b_2+d_2i & \cdots & a_n+c_ni & b_n+d_ni\\
0 & t_1 & -\bar b_2-d_2i & \bar a_2+c_2i & \cdots & -\bar b_n-d_ni & \bar a_n+c_ni\\
\bar a_2+c_2i & -b_2-d_2i & t_2 & 0 & \cdots & a_{2n}+c_{2n}i & b_{2n}+d_{2n}i\\
\bar b_2+d_2i & a_2+c_2i & 0 & t_2 & \cdots & -\bar b_{2n}-d_{2n}i & \bar a_{2n}+c_{2n}i\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
\bar a_n+c_ni & -b_n-d_ni & \bar a_{2n}+c_{2n}i & -b_{2n}-d_{2n}i & \cdots & t_n & 0\\
\bar b_n+d_ni & a_n+c_ni & \bar b_{2n}+d_{2n}i & a_{2n}+c_{2n}i & \cdots & 0 & t_n
\end{pmatrix}.$$
Here, $i = \sqrt{-1}$ denotes the usual imaginary unit and all the other entries are complex numbers.

**Definition 5** A matrix is of Type-III if it has the following structure:
$$\begin{pmatrix}
t_1 & 0 & a_2 & b_2 & \cdots & a_n & b_n\\
0 & t_1 & -\bar b_2 & \bar a_2 & \cdots & -\bar b_n & \bar a_n\\
\bar a_2 & -b_2 & t_2 & 0 & \cdots & a_{2n} & b_{2n}\\
\bar b_2 & a_2 & 0 & t_2 & \cdots & -\bar b_{2n} & \bar a_{2n}\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
\bar a_n & -b_n & \bar a_{2n} & -b_{2n} & \cdots & t_n & 0\\
\bar b_n & a_n & \bar b_{2n} & a_{2n} & \cdots & 0 & t_n
\end{pmatrix}.$$
Here, all the entries are complex.

**Lemma 6** For all $n \ge 1$, if a complex matrix $\Omega_n$ is invertible and of Type-II, then $\Omega_n^{-1}$ is a Type-I matrix.

The following corollary is immediate.

**Corollary 7** For all $n \ge 1$, if a complex matrix $\Omega_n$ is invertible and of Type-III, then $\Omega_n^{-1}$ is a Type-I matrix.
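Corollary 7 can be probed numerically: take $\psi(H)$ for a quaternion Hermitian $H$, shift by $zI$ (a Type-III matrix), invert, and check that every diagonal $2\times2$ block of the inverse is a scalar matrix $t_kI_2$ — the Type-I feature exploited later in the proof. A sketch; the sizes, distributions and the helper `qblock` are our own illustrative constructions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                   # quaternion dimension (illustrative)
z = 0.3 + 0.7j                          # spectral parameter in the upper half plane

def qblock(a, b):
    # 2x2 complex representation of a quaternion: [[a, b], [-conj(b), conj(a)]]
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

# psi(H) for a quaternion Hermitian H: t_k*I_2 on the diagonal, quaternion blocks
# above the diagonal, their conjugate transposes below
M = np.zeros((2 * n, 2 * n), dtype=complex)
for k in range(n):
    M[2*k:2*k+2, 2*k:2*k+2] = rng.normal() * np.eye(2)
    for l in range(k + 1, n):
        Q = qblock(rng.normal() + 1j * rng.normal(), rng.normal() + 1j * rng.normal())
        M[2*k:2*k+2, 2*l:2*l+2] = Q
        M[2*l:2*l+2, 2*k:2*k+2] = Q.conj().T

Minv = np.linalg.inv(M - z * np.eye(2 * n))   # psi(H) - z*I is of Type-III
for k in range(n):
    B = Minv[2*k:2*k+2, 2*k:2*k+2]
    # Type-I property: each diagonal 2x2 block of the inverse is scalar, t_k*I_2
    assert abs(B[0, 1]) < 1e-10 and abs(B[1, 0]) < 1e-10
    assert abs(B[0, 0] - B[1, 1]) < 1e-10
```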

3 Proof of Theorem 1

In this section, we present the proof in two steps. The first is to truncate, centralize and rescale the random variables $\{x_{jk}^{(n)}\}$, after which we may assume the additional conditions given in Remark 2. The second is the proof of the theorem under these additional conditions. Throughout the remainder of this paper, $C$ denotes a constant that may take different values at different places.

3.1 Truncation, centralization and rescaling

3.1.1 Truncation

Note that condition (4) is equivalent to: for any $\eta>0$,
$$\lim_{n\to\infty}\frac{1}{\eta^2np}\sum_{j,k}\mathrm E\,\|x_{jk}^{(n)}\|^2 I\bigl(\|x_{jk}^{(n)}\|\ge\eta\sqrt n\bigr) = 0. \tag{5}$$
Applying Lemma 15, one can select a sequence $\eta_n\downarrow0$ such that (5) remains true when $\eta$ is replaced by $\eta_n$.

**Lemma 8** Suppose that the assumptions of Theorem 1 hold. Truncate the variables $x_{jk}^{(n)}$ at $\eta_n\sqrt n$ and denote the resulting variables by $\hat x_{jk}^{(n)}$, i.e., $\hat x_{jk}^{(n)} = x_{jk}^{(n)} I(\|x_{jk}^{(n)}\|\le\eta_n\sqrt n)$. Also denote $\hat X_n = (\hat x_{jk}^{(n)})$ and $\hat S_n = \frac1n\hat X_n\hat X_n^*$. Then, with probability 1,
$$\bigl\|F^{S_n}-F^{\hat S_n}\bigr\|_{KS}\to0.$$

*Proof* Using Lemma 17, one has
$$\bigl\|F^{S_n}-F^{\hat S_n}\bigr\|_{KS} \le \frac{1}{2p}\operatorname{rank}\bigl(X_n-\hat X_n\bigr) \le \frac1p\sum_{j,k}I\bigl(\|x_{jk}^{(n)}\|>\eta_n\sqrt n\bigr). \tag{6}$$
Taking condition (5) into consideration, we get
$$\mathrm E\Bigl(\frac{1}{2p}\sum_{j,k}I\bigl(\|x_{jk}^{(n)}\|>\eta_n\sqrt n\bigr)\Bigr) \le \frac{1}{2\eta_n^2pn}\sum_{j,k}\mathrm E\,\|x_{jk}^{(n)}\|^2 I\bigl(\|x_{jk}^{(n)}\|>\eta_n\sqrt n\bigr) = o(1)

and
$$\operatorname{Var}\Bigl(\frac{1}{2p}\sum_{j,k}I\bigl(\|x_{jk}^{(n)}\|>\eta_n\sqrt n\bigr)\Bigr) \le \frac{1}{4\eta_n^2p^2n}\sum_{j,k}\mathrm E\,\|x_{jk}^{(n)}\|^2I\bigl(\|x_{jk}^{(n)}\|>\eta_n\sqrt n\bigr) = o(1/p).$$
Then, by Bernstein's inequality (see Lemma 18), for all small $\varepsilon>0$ and large $n$, we obtain
$$\mathrm P\Bigl(\frac{1}{2p}\sum_{j,k}I\bigl(\|x_{jk}^{(n)}\|>\eta_n\sqrt n\bigr)\ge\varepsilon\Bigr)\le 2e^{-\varepsilon p/2},$$
which is summable. Combining (6) and the above inequality with the Borel–Cantelli lemma, it follows that
$$\bigl\|F^{S_n}-F^{\hat S_n}\bigr\|_{KS}\xrightarrow{a.s.}0.$$
This completes the proof of the lemma.

3.1.2 Centralization

**Lemma 9** Suppose that the assumptions of Lemma 8 hold. Denote $\tilde x_{jk}^{(n)} = \hat x_{jk}^{(n)}-\mathrm E\hat x_{jk}^{(n)}$, $\tilde X_n = (\tilde x_{jk}^{(n)})$ and $\tilde S_n = \frac1n\tilde X_n\tilde X_n^*$. Then we obtain
$$L\bigl(F^{\hat S_n},F^{\tilde S_n}\bigr)=o(1),$$
where $L(\cdot,\cdot)$ denotes the Lévy distance.

*Proof* Using Lemma 16 and condition (5), we have
$$L^4\bigl(F^{\hat S_n},F^{\tilde S_n}\bigr) \le \frac{2}{p^2}\bigl(\operatorname{tr}\hat S_n+\operatorname{tr}\tilde S_n\bigr)\operatorname{tr}\Bigl(\frac1n\bigl(\hat X_n-\tilde X_n\bigr)\bigl(\hat X_n-\tilde X_n\bigr)^*\Bigr)$$
$$= \frac{2}{n^2p^2}\Bigl(\sum_{j,k}\|\hat x_{jk}^{(n)}\|^2+\sum_{j,k}\|\hat x_{jk}^{(n)}-\mathrm E\hat x_{jk}^{(n)}\|^2\Bigr)\sum_{j,k}\|\mathrm E\hat x_{jk}^{(n)}\|^2. \tag{7}$$

To complete the proof of this lemma, we need to show that the first factor in parentheses on the right-hand side of (7) is almost surely bounded. Applying Lemma 22, one has
$$\mathrm E\Bigl|\frac{1}{np}\sum_{j,k}\bigl(\|\hat x_{jk}^{(n)}\|^2-\mathrm E\|\hat x_{jk}^{(n)}\|^2\bigr)\Bigr|^4 \le \frac{C}{n^4p^4}\Bigl[\sum_{j,k}\mathrm E\|\hat x_{jk}^{(n)}\|^8 + \Bigl(\sum_{j,k}\mathrm E\|\hat x_{jk}^{(n)}\|^4\Bigr)^2\Bigr] \le C\bigl(\eta_n^6n^{-3}y_n^{-3}+\eta_n^4n^{-2}y_n^{-2}\bigr),$$
which is summable. This indicates
$$\frac{1}{np}\sum_{j,k}\bigl(\|\hat x_{jk}^{(n)}\|^2-\mathrm E\|\hat x_{jk}^{(n)}\|^2\bigr)\xrightarrow{a.s.}0$$
by the Borel–Cantelli lemma. Moreover, we can similarly obtain
$$\frac{1}{np}\sum_{j,k}\bigl(\|\tilde x_{jk}^{(n)}\|^2-\mathrm E\|\tilde x_{jk}^{(n)}\|^2\bigr)\xrightarrow{a.s.}0. \tag{8}$$
Now, turning to (7), for all large $n$,
$$L^4\bigl(F^{\hat S_n},F^{\tilde S_n}\bigr)\le C\Bigl(\frac{1}{np}\sum_{j,k}\mathrm E\|\hat x_{jk}^{(n)}\|^2+o(1)\Bigr)\frac{1}{np}\sum_{j,k}\|\mathrm E\hat x_{jk}^{(n)}\|^2\quad a.s.$$
Since $\mathrm E\hat x_{jk}^{(n)} = -\mathrm E\,x_{jk}^{(n)}I(\|x_{jk}^{(n)}\|>\eta_n\sqrt n)$, Jensen's inequality gives
$$\frac{1}{np}\sum_{j,k}\|\mathrm E\hat x_{jk}^{(n)}\|^2 \le \frac{1}{np}\sum_{j,k}\mathrm E\|x_{jk}^{(n)}\|^2I\bigl(\|x_{jk}^{(n)}\|>\eta_n\sqrt n\bigr)\to0,$$
while the first factor is bounded. The proof of the lemma is complete.

3.1.3 Rescaling

Define
$$\hat\sigma_{jk}^2 = \mathrm E\|\tilde x_{jk}^{(n)}\|^2,\qquad \xi_{jk} = \begin{cases}\zeta_{jk}, & \hat\sigma_{jk}^2 < 1/2,\\ \tilde x_{jk}^{(n)}, & \hat\sigma_{jk}^2 \ge 1/2,\end{cases}\qquad \tilde\sigma_{jk}^2 = \mathrm E\|\xi_{jk}\|^2,$$
where the $\zeta_{jk}$ are bounded quaternion random variables with $\mathrm E\zeta_{jk} = 0$ and $\operatorname{Var}(\zeta_{jk}) = 1$, independent of all other variables.

**Lemma 10** Write $\breve x_{jk}^{(n)} = \xi_{jk}/\tilde\sigma_{jk}$, $\breve X_n = (\breve x_{jk}^{(n)})$ and $\breve S_n = \frac1n\breve X_n\breve X_n^*$. Under the conditions assumed in Lemma 9, we have
$$L\bigl(F^{\tilde S_n},F^{\breve S_n}\bigr)=o(1).$$

*Proof* (a) Our first goal is to show that $L(F^{\tilde S_n},F^{S_n^{(\xi)}})\xrightarrow{a.s.}0$, where $\Xi_n = (\xi_{jk})$ and $S_n^{(\xi)} = \frac1n\Xi_n\Xi_n^*$. Let $E_n$ be the set of pairs $(j,k)$ with $\hat\sigma_{jk}^2<1/2$ and let $N_n = \sum_{j,k}I(\hat\sigma_{jk}^2<1/2)$ be the number of such pairs. Owing to (5), we conclude that $N_n = o(np)$. By Lemma 16 and (8), we get
$$L^4\bigl(F^{\tilde S_n},F^{S_n^{(\xi)}}\bigr)\le\frac{2}{p^2}\bigl(\operatorname{tr}\tilde S_n+\operatorname{tr}S_n^{(\xi)}\bigr)\operatorname{tr}\Bigl(\frac1n\bigl(\tilde X_n-\Xi_n\bigr)\bigl(\tilde X_n-\Xi_n\bigr)^*\Bigr)$$
$$=\frac{2}{n^2p^2}\Bigl(\sum_{j,k}\|\tilde x_{jk}\|^2+\sum_{j,k}\|\xi_{jk}\|^2\Bigr)\sum_{(j,k)\in E_n}\|\xi_{jk}-\tilde x_{jk}\|^2 \le \frac{C}{np}\sum_{h=1}^{K}u_h\quad a.s., \tag{9}$$
where $K = N_n$ and the $u_h = \|\xi_{jk}-\tilde x_{jk}\|^2$, $(j,k)\in E_n$, are independent and bounded by $2\eta_n^2n$ for all large $n$. Expanding the $m$th power, grouping equal indices and using $\mathrm Eu_h^t\le(2\eta_n^2n)^{t-1}\mathrm Eu_h$ together with the fact that $\ell!\ge(\ell/3)^\ell$ for all $\ell\ge1$, one obtains
$$\mathrm E\Bigl(\frac{1}{np}\sum_{h=1}^{K}u_h\Bigr)^m \le \Bigl(\frac{6K+2\eta_n^2nm}{np}\Bigr)^m.$$

By selecting $m=[\log p]$, which implies $2\eta_n^2nm/(np) = 2\eta_n^2m/p\to0$, and noticing $6K/(np)\to0$, one obtains, for any fixed $t,\varepsilon>0$,
$$\mathrm E\Bigl(\frac1{np}\sum_{h=1}^{K}u_h\Bigr)^m = o\bigl(p^{-t}\varepsilon^m\bigr).$$
From the inequality above with $t=2$, Chebyshev's inequality, (9) and the Borel–Cantelli lemma, it follows that
$$L\bigl(F^{\tilde S_n},F^{S_n^{(\xi)}}\bigr)\xrightarrow{a.s.}0.$$

(b) Our next goal is to show that $L(F^{S_n^{(\xi)}},F^{\breve S_n})\xrightarrow{a.s.}0$. Applying Lemma 16, we have
$$L^4\bigl(F^{S_n^{(\xi)}},F^{\breve S_n}\bigr)\le\frac{2}{p^2}\bigl(\operatorname{tr}S_n^{(\xi)}+\operatorname{tr}\breve S_n\bigr)\operatorname{tr}\Bigl(\frac1n\bigl(\Xi_n-\breve X_n\bigr)\bigl(\Xi_n-\breve X_n\bigr)^*\Bigr)$$
$$=\frac{2}{n^2p^2}\Bigl(\sum_{j,k}\|\xi_{jk}\|^2+\sum_{j,k}\|\breve x_{jk}\|^2\Bigr)\sum_{j,k}\Bigl(\frac1{\tilde\sigma_{jk}}-1\Bigr)^2\|\xi_{jk}\|^2 \le \frac{C}{np}\sum_{j,k}\Bigl(\frac1{\tilde\sigma_{jk}}-1\Bigr)^2\|\xi_{jk}\|^2\quad a.s.,$$
using the fact that $\breve x_{jk} = \xi_{jk}/\tilde\sigma_{jk}$. For $(j,k)\in E_n$ the terms vanish since $\tilde\sigma_{jk}=1$ there, while for $(j,k)\notin E_n$ we have $\tilde\sigma_{jk}^2\ge1/2$ and
$$\Bigl(\frac1{\tilde\sigma_{jk}}-1\Bigr)^2\le C\bigl(1-\tilde\sigma_{jk}^2\bigr)^2\le C\Bigl[\mathrm E\|x_{jk}\|^2I\bigl(\|x_{jk}\|>\eta_n\sqrt n\bigr)+\bigl(\mathrm E\|x_{jk}\|I(\|x_{jk}\|>\eta_n\sqrt n)\bigr)^2\Bigr]^2,$$
whence, by the Cauchy–Schwarz inequality and (5),
$$\frac{C}{np}\sum_{j,k}\Bigl(\frac1{\tilde\sigma_{jk}}-1\Bigr)^2\mathrm E\|\xi_{jk}\|^2 \le \frac{C}{np}\sum_{j,k}\mathrm E\|x_{jk}\|^2I\bigl(\|x_{jk}\|>\eta_n\sqrt n\bigr)\to0.$$

Applying Lemma 22, one gets
$$\mathrm E\Bigl|\frac1{np}\sum_{j,k}\Bigl(\frac1{\tilde\sigma_{jk}}-1\Bigr)^2\bigl(\|\xi_{jk}\|^2-\mathrm E\|\xi_{jk}\|^2\bigr)\Bigr|^4 \le \frac{C}{n^4p^4}\Bigl[\sum_{j,k}\mathrm E\|x_{jk}\|^8I\bigl(\|x_{jk}\|\le\eta_n\sqrt n\bigr)+\Bigl(\sum_{j,k}\mathrm E\|x_{jk}\|^4I\bigl(\|x_{jk}\|\le\eta_n\sqrt n\bigr)\Bigr)^2\Bigr]$$
$$\le C\bigl(\eta_n^6n^{-3}y_n^{-3}+\eta_n^4n^{-2}y_n^{-2}\bigr),$$
which is summable. Together with the Borel–Cantelli lemma, it follows that $L(F^{S_n^{(\xi)}},F^{\breve S_n})\xrightarrow{a.s.}0$.

(c) Finally, from (a) and (b) we can easily get the lemma.

Combining the results of Lemmas 8, 9 and 10, we have the following remarks.

**Remark 11** For brevity, we shall drop the superscript $(n)$ from the variables. The truncated and renormalized variables are still denoted by $x_{jk}$.

**Remark 12** Under the conditions assumed in Theorem 1, we can further assume that (1) $\|x_{jk}\|\le\eta_n\sqrt n$, (2) $\mathrm Ex_{jk}=0$ and $\operatorname{Var}(x_{jk})=1$.

3.2 Completion of the proof

Denote
$$m_n(z) = \frac{1}{2p}\operatorname{tr}\bigl(\psi(S_n)-zI_{2p}\bigr)^{-1}, \tag{10}$$
where $z = u+\upsilon i\in\mathbb C^+$.

3.2.1 Random part

First, we show that
$$m_n(z)-\mathrm Em_n(z)\xrightarrow{a.s.}0. \tag{11}$$
Let $\pi_j$ denote the $j$th column of $\psi(X_n)$, let $\psi(S_n^{(k)}) = \psi(S_n)-\frac1n\pi_k\pi_k^*$, and let $\mathrm E_k$ denote the conditional expectation given $\{\pi_{k+1},\pi_{k+2},\dots,\pi_{2n}\}$. Then
$$m_n(z)-\mathrm Em_n(z) = \frac{1}{2p}\sum_{k=1}^{2n}\gamma_k,$$

where
$$\gamma_k = (\mathrm E_{k-1}-\mathrm E_k)\operatorname{tr}\bigl(\psi(S_n)-zI_{2p}\bigr)^{-1} = (\mathrm E_{k-1}-\mathrm E_k)\Bigl[\operatorname{tr}\bigl(\psi(S_n)-zI_{2p}\bigr)^{-1}-\operatorname{tr}\bigl(\psi(S_n^{(k)})-zI_{2p}\bigr)^{-1}\Bigr].$$
(1) When $k = 2t$ ($t = 1,2,\dots,n$), the two columns $\pi_{2t-1}$ and $\pi_{2t}$ of $\psi(X_n)$ determine each other, and we obtain
$$\gamma_k = \mathrm E_{k-1}\operatorname{tr}\bigl(\psi(S_n)-zI_{2p}\bigr)^{-1}-\mathrm E_k\operatorname{tr}\bigl(\psi(S_n)-zI_{2p}\bigr)^{-1} = 0.$$
(2) When $k = 2t+1$ ($t = 0,1,\dots,n-1$), together with the formula
$$(A+\alpha\beta^*)^{-1} = A^{-1}-\frac{A^{-1}\alpha\beta^*A^{-1}}{1+\beta^*A^{-1}\alpha},$$
one finds
$$\gamma_k = -(\mathrm E_{k-1}-\mathrm E_k)\,\frac{\frac1n\pi_k^*\bigl(\psi(S_n^{(k)})-zI_{2p}\bigr)^{-2}\pi_k}{1+\frac1n\pi_k^*\bigl(\psi(S_n^{(k)})-zI_{2p}\bigr)^{-1}\pi_k}.$$
Since
$$\Biggl|\frac{\frac1n\pi_k^*\bigl(\psi(S_n^{(k)})-zI_{2p}\bigr)^{-2}\pi_k}{1+\frac1n\pi_k^*\bigl(\psi(S_n^{(k)})-zI_{2p}\bigr)^{-1}\pi_k}\Biggr| \le \frac{\frac1n\pi_k^*\bigl[(\psi(S_n^{(k)})-uI_{2p})^2+\upsilon^2I_{2p}\bigr]^{-1}\pi_k}{\upsilon\,\frac1n\pi_k^*\bigl[(\psi(S_n^{(k)})-uI_{2p})^2+\upsilon^2I_{2p}\bigr]^{-1}\pi_k} = \frac1\upsilon,$$
we can easily get $|\gamma_k|\le 2/\upsilon$. Using Lemma 21, it follows that
$$\mathrm E\bigl|m_n(z)-\mathrm Em_n(z)\bigr|^4 \le \frac{K_4}{(2p)^4}\,\mathrm E\Bigl(\sum_{k=1}^{2n}|\gamma_k|^2\Bigr)^2 \le \frac{4K_4n^2}{p^4\upsilon^4} = O\bigl(n^{-2}\bigr).$$
Combining the Borel–Cantelli lemma with the Chebyshev inequality, we conclude that
$$m_n(z)-\mathrm Em_n(z)\xrightarrow{a.s.}0.$$
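The $O(n^{-2})$ variance bound above can be seen in a quick Monte Carlo experiment. The sketch below uses real Gaussian entries as a stand-in for the quaternion case (the martingale mechanism is identical) and only checks that the fluctuations of $m_n(z)$ shrink as the dimensions double; sizes and repetition counts are our own choices:

```python
import numpy as np

rng = np.random.default_rng(3)
z = 1.0 + 1.0j

def m_n(p, n):
    # Stieltjes transform of the ESD of (1/n) X X^T at z (real-entry stand-in)
    X = rng.normal(size=(p, n))
    lam = np.linalg.eigvalsh(X @ X.T / n)
    return np.mean(1.0 / (lam - z))

reps = 200
v1 = np.var([m_n(40, 80) for _ in range(reps)])
v2 = np.var([m_n(80, 160) for _ in range(reps)])
assert v2 < v1   # the variance shrinks as the dimension grows, as the bound predicts
```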

3.2.2 Mean convergence

When $\sigma^2 = 1$, (3) turns into
$$m(z) = \frac{1-y-z+\sqrt{(z-1-y)^2-4y}}{2yz}. \tag{12}$$
Next, we show that $\mathrm Em_n(z)\to m(z)$. By Lemma 20 and (10), one has
$$\mathrm Em_n(z) = \frac{1}{2p}\sum_{k=1}^p\mathrm E\operatorname{tr}\Bigl[\frac1n\phi_k\phi_k^*-zI_2-\frac1{n^2}\phi_kX_{nk}^*\Bigl(\frac1nX_{nk}X_{nk}^*-zI_{2p-2}\Bigr)^{-1}X_{nk}\phi_k^*\Bigr]^{-1}, \tag{13}$$
where $X_{nk}$ is the matrix resulting from deleting the $k$th quaternion row of $X_n$, and $\phi_k$ is the $k$th quaternion row of $X_n$; thus $\phi_k$ is a $1\times n$ quaternion matrix, represented by a $2\times2n$ complex matrix. Write
$$\varepsilon_k = \frac1n\phi_k\phi_k^*-zI_2-\frac1{n^2}\phi_kX_{nk}^*\Bigl(\frac1nX_{nk}X_{nk}^*-zI_{2p-2}\Bigr)^{-1}X_{nk}\phi_k^* - \bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr)I_2 \tag{14}$$
and
$$\delta_n = -\frac{1}{2p\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr)}\sum_{k=1}^p\mathrm E\operatorname{tr}\Bigl[\varepsilon_k\Bigl(\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr)I_2+\varepsilon_k\Bigr)^{-1}\Bigr], \tag{15}$$
where $y_n = p/n$. This implies that
$$\mathrm Em_n(z) = \frac{1}{1-z-y_n-y_nz\,\mathrm Em_n(z)}+\delta_n.$$
Solving $\mathrm Em_n(z)$ from the equation above, we get
$$\mathrm Em_n(z) = \frac{1-z-y_n+y_nz\delta_n\pm\sqrt{\bigl(1-z-y_n-y_nz\delta_n\bigr)^2-4y_nz}}{2y_nz}.$$

As proved in Eq. (3.7) of Bai (1993), we can assert that
$$\mathrm Em_n(z) = \frac{1-z-y_n+y_nz\delta_n+\sqrt{\bigl(1-z-y_n-y_nz\delta_n\bigr)^2-4y_nz}}{2y_nz}. \tag{16}$$
Comparing (12) with (16), it suffices to show that $\delta_n\to0$. For this purpose, we need the following two lemmas.

**Lemma 13** Under the conditions of Remark 12, for any $z = u+\upsilon i$ with $\upsilon>0$ and for any $k = 1,\dots,p$, we have
$$\mathrm E\operatorname{tr}\varepsilon_k\to0. \tag{17}$$

*Proof* By calculation, we have
$$\mathrm E\operatorname{tr}\varepsilon_k = -\frac1n\mathrm E\operatorname{tr}\Bigl[\Bigl(\frac1nX_{nk}X_{nk}^*-zI_{2p-2}\Bigr)^{-1}\frac1nX_{nk}X_{nk}^*\Bigr] + 2y_n + 2y_nz\,\mathrm Em_n(z)$$
$$= \frac2n + \frac zn\,\mathrm E\Bigl[\operatorname{tr}\bigl(\psi(S_n)-zI_{2p}\bigr)^{-1}-\operatorname{tr}\Bigl(\frac1nX_{nk}X_{nk}^*-zI_{2p-2}\Bigr)^{-1}\Bigr] \le \frac2n + \frac{2|z|}{n\upsilon}\to0,$$
where the last inequality has used Lemma 19 twice. Then, the proof is complete.

**Lemma 14** Under the conditions of Remark 12, for any $z = u+\upsilon i$ with $\upsilon>0$ and any $k = 1,\dots,p$, we have
$$\mathrm E\bigl|\operatorname{tr}\varepsilon_k\bigr|^2\to0.$$

*Proof* Write $\psi(S_n)-zI_{2p}$ in the form
$$\begin{pmatrix}
t_1 & 0 & a_2 & b_2 & \cdots\\
0 & t_1 & -\bar b_2 & \bar a_2 & \cdots\\
\bar a_2 & -b_2 & t_2 & 0 & \cdots\\
\bar b_2 & a_2 & 0 & t_2 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},$$
which is of Type-III. By Corollary 7 and (13), $\frac1n\phi_k\phi_k^*-zI_2-\frac1{n^2}\phi_kX_{nk}^*R_kX_{nk}\phi_k^*$ is a scalar matrix, where $R_k = \bigl(\frac1nX_{nk}X_{nk}^*-zI_{2p-2}\bigr)^{-1}$. Denote by $\alpha_k$ the first column of $\phi_k'$ and by $\beta_k$ the second column of $\phi_k'$ (the prime denoting the transpose); then, combining (14), we have

$$\varepsilon_k = \theta_kI_2, \tag{18}$$
where
$$\theta_k = \frac1n\alpha_k'\bar\alpha_k-z-\frac1{n^2}\alpha_k'X_{nk}^*R_kX_{nk}\bar\alpha_k-\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr) = \frac1n\beta_k'\bar\beta_k-z-\frac1{n^2}\beta_k'X_{nk}^*R_kX_{nk}\bar\beta_k-\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr).$$
Let $\tilde{\mathrm E}$ denote the conditional expectation given $\{x_{jl}:\ j=1,\dots,p,\ l=1,\dots,n,\ j\ne k\}$. Then we get
$$\mathrm E\bigl|\operatorname{tr}\varepsilon_k\bigr|^2 \le 2\,\mathrm E\bigl|\operatorname{tr}\varepsilon_k-\tilde{\mathrm E}\operatorname{tr}\varepsilon_k\bigr|^2+2\,\mathrm E\bigl|\tilde{\mathrm E}\operatorname{tr}\varepsilon_k-\mathrm E\operatorname{tr}\varepsilon_k\bigr|^2+\bigl|\mathrm E\operatorname{tr}\varepsilon_k\bigr|^2. \tag{19}$$
According to this inequality, we complete the estimation of $\mathrm E|\operatorname{tr}\varepsilon_k|^2$ in the following three steps.

(a) For the first term on the right-hand side of (19), denote
$$T = (t_{jl}) = I_{2n}-\frac1nX_{nk}^*R_kX_{nk},\qquad t_{jl} = \begin{pmatrix}e_{jl}&f_{jl}\\h_{jl}&g_{jl}\end{pmatrix},$$
where $t_{jl}$ is the $(j,l)$th $2\times2$ block of $T$. Then, rewrite
$$\operatorname{tr}\varepsilon_k-\tilde{\mathrm E}\operatorname{tr}\varepsilon_k = \frac1n\bigl[\operatorname{tr}\bigl(\phi_kT\phi_k^*\bigr)-\operatorname{tr}T\bigr] = \frac1n\sum_{j=1}^n\bigl[\operatorname{tr}\bigl(x_{kj}t_{jj}x_{kj}^*\bigr)-\operatorname{tr}t_{jj}\bigr]+\frac1n\sum_{j\ne l}\operatorname{tr}\bigl(x_{kl}^*x_{kj}t_{jl}\bigr),$$
where $x_{kj}$ stands for the $2\times2$ block representing the $j$th quaternion entry of $\phi_k$. By elementary calculation, we obtain
$$\tilde{\mathrm E}\bigl|\operatorname{tr}\varepsilon_k-\tilde{\mathrm E}\operatorname{tr}\varepsilon_k\bigr|^2 \le \frac{C\eta_n^2n}{n^2}\sum_{j=1}^n\bigl(|e_{jj}|^2+|g_{jj}|^2\bigr)+\frac{C}{n^2}\sum_{j\ne l}\bigl(|e_{jl}|^2+|f_{jl}|^2+|g_{jl}|^2+|h_{jl}|^2\bigr)$$

$$\le \frac{C\eta_n^2}{n}\operatorname{tr}TT^*+\frac{C}{n^2}\operatorname{tr}TT^*. \tag{20}$$
For $\frac1{\sqrt n}X_{nk}$, there exist a $(2p-2)\times q$ matrix $U$ and a $2n\times q$ matrix $V$, both with orthonormal columns, such that
$$\frac1{\sqrt n}X_{nk} = U\operatorname{diag}(s_1,\dots,s_q)V^*,$$
where $s_1,\dots,s_q$ are the singular values of $\frac1{\sqrt n}X_{nk}$ and $q = \min\{2p-2,2n\}$. Then, we get
$$I_{2n}-T = \frac1nX_{nk}^*R_kX_{nk} = V\operatorname{diag}\Bigl(\frac{s_1^2}{s_1^2-z},\dots,\frac{s_q^2}{s_q^2-z}\Bigr)V^*,$$
which implies that
$$T = V\operatorname{diag}\Bigl(\frac{-z}{s_1^2-z},\dots,\frac{-z}{s_q^2-z}\Bigr)V^*+\bigl(I_{2n}-VV^*\bigr).$$
Consequently, since $|-z/(s^2-z)|\le|z|/\upsilon$ for every $s\ge0$ and the part $I_{2n}-VV^*$ corresponds to zero singular values (for which $-z/(0-z)=1\le|z|/\upsilon$), it follows that
$$\operatorname{tr}TT^* = \sum_{j=1}^q\frac{|z|^2}{|s_j^2-z|^2}+(2n-q) \le \frac{2n|z|^2}{\upsilon^2}. \tag{21}$$
By (20) and (21), we obtain
$$\mathrm E\bigl|\operatorname{tr}\varepsilon_k-\tilde{\mathrm E}\operatorname{tr}\varepsilon_k\bigr|^2\to0. \tag{22}$$

(b) Next, the second term on the right-hand side of (19) is estimated. Note that
$$\tilde{\mathrm E}\operatorname{tr}\varepsilon_k-\mathrm E\operatorname{tr}\varepsilon_k = -\frac zn\bigl(\operatorname{tr}R_k-\mathrm E\operatorname{tr}R_k\bigr).$$
Using the martingale decomposition method, we have
$$\mathrm E\bigl|\tilde{\mathrm E}\operatorname{tr}\varepsilon_k-\mathrm E\operatorname{tr}\varepsilon_k\bigr|^2 = \frac{|z|^2}{n^2}\,\mathrm E\bigl|\operatorname{tr}R_k-\mathrm E\operatorname{tr}R_k\bigr|^2 \le \frac{4|z|^2}{n\upsilon^2}. \tag{23}$$

(c) Finally, combining (17), (19), (22) and (23), we conclude that
$$\mathrm E\bigl|\operatorname{tr}\varepsilon_k\bigr|^2\to0.$$
This completes the proof of the lemma.

Now, we are in a position to show that $\delta_n\to0$. By (15) and (18), we can write
$$\delta_n = -\frac{1}{2p\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr)^2}\sum_{k=1}^p\mathrm E\operatorname{tr}\varepsilon_k + \frac{1}{2p\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr)^2}\sum_{k=1}^p\mathrm E\,\frac{\operatorname{tr}\varepsilon_k^2}{1-z-y_n-y_nz\,\mathrm Em_n(z)+\theta_k}.$$
Note that
$$\Im\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr)\le-\upsilon,$$
since $\Im\bigl(z\,\mathrm Em_n(z)\bigr)\ge0$, which implies that
$$\bigl|1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr|\ge\upsilon. \tag{24}$$
By Lemma 13 and (24), we have
$$\Bigl|\frac{1}{2p\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr)^2}\sum_{k=1}^p\mathrm E\operatorname{tr}\varepsilon_k\Bigr| \le \frac{1}{2p\upsilon^2}\sum_{k=1}^p\bigl|\mathrm E\operatorname{tr}\varepsilon_k\bigr|\to0. \tag{25}$$
Together with Lemma 14, (24) and
$$\Im\Bigl(\theta_k+1-z-y_n-y_nz\,\mathrm Em_n(z)\Bigr) = \Im\Bigl(\frac1n\alpha_k'\bar\alpha_k-z-\frac1{n^2}\alpha_k'X_{nk}^*\bigl(\tfrac1nX_{nk}X_{nk}^*-zI_{2p-2}\bigr)^{-1}X_{nk}\bar\alpha_k\Bigr)$$
$$= -\upsilon\Bigl[1+\frac1{n^2}\alpha_k'X_{nk}^*\Bigl(\bigl(\tfrac1nX_{nk}X_{nk}^*-uI_{2p-2}\bigr)^2+\upsilon^2I_{2p-2}\Bigr)^{-1}X_{nk}\bar\alpha_k\Bigr]\le-\upsilon,$$

one finds that
$$\Bigl|\frac{1}{2p\bigl(1-z-y_n-y_nz\,\mathrm Em_n(z)\bigr)^2}\sum_{k=1}^p\mathrm E\,\frac{\operatorname{tr}\varepsilon_k^2}{1-z-y_n-y_nz\,\mathrm Em_n(z)+\theta_k}\Bigr| \le \frac{1}{2p\upsilon^3}\sum_{k=1}^p\mathrm E\bigl|\operatorname{tr}\varepsilon_k^2\bigr|\to0. \tag{26}$$
Combining (25) with (26), we get $\delta_n\to0$. So far, we have completed the proof of the mean convergence $\mathrm Em_n(z)\to m(z)$.

3.2.3 Completion of the proof of Theorem 1

By Sects. 3.2.1 and 3.2.2, for any fixed $z\in\mathbb C^+$, we have
$$m_n(z)\xrightarrow{a.s.}m(z).$$
To complete the proof of Theorem 1, we follow the last part of Chapter 2 of Bai and Silverstein (2010); for the reader's convenience, we repeat the argument here. For each $z\in\mathbb C^+$, there exists a null set $N_z$ (i.e., $\mathrm P(N_z)=0$) such that
$$m_n(z,w)\to m(z)\quad\text{for all }w\in N_z^c.$$
Now, let $\mathbb C_0^+$ be a dense subset of $\mathbb C^+$ (e.g., all $z$ with rational real and imaginary parts) and let $N = \bigcup_{z\in\mathbb C_0^+}N_z$. Then
$$m_n(z,w)\to m(z)\quad\text{for all }w\in N^c\text{ and }z\in\mathbb C_0^+.$$
Let $\mathbb C_m^+ = \{z\in\mathbb C^+:\ \Im z>1/m,\ |z|\le m\}$. When $z\in\mathbb C_m^+$, we have $|m_n(z)|\le m$. Applying Lemma 23, we have
$$m_n(z,w)\to m(z)\quad\text{for all }w\in N^c\text{ and }z\in\mathbb C_m^+.$$

Since the convergence above holds for every $m$, we conclude that
$$m_n(z,w)\to m(z)\quad\text{for all }w\in N^c\text{ and }z\in\mathbb C^+.$$
Applying Lemma 24, we conclude that $F^{S_n}(w)$ converges vaguely to $F$, almost surely.

4 Appendix

In this section, we list some results that are used in the proof of the main theorem.

**Lemma 15** Suppose that for any $\eta>0$, $\sum_{n=1}^\infty f(\eta,n)<\infty$, where $f$ is a nonnegative function. Then we can select a slowly decreasing sequence of constants $\eta_n\downarrow0$ such that
$$\sum_{n=1}^\infty f(\eta_n,n)<\infty.$$
Similarly, if $f(\eta,n)\to0$ for any fixed $\eta>0$, then there exists a decreasing sequence $\eta_n\downarrow0$ such that $f(\eta_n,n)\to0$.

*Proof* Letting $\eta = 1/m$, one has $\sum_{n=1}^\infty f(1/m,n)<\infty$ for every $m$. Moreover, there exists an increasing sequence $N_m$ such that $\sum_{n=N_m}^\infty f(1/m,n)\le2^{-m}$. Define the sequence $\eta_n = 1/m$ when $N_m\le n<N_{m+1}$ (and $\eta_n = 1$ for $n<N_1$). We get
$$\sum_{n=1}^\infty f(\eta_n,n) = \sum_{n=1}^{N_1-1}f(1,n)+\sum_{m=1}^\infty\sum_{n=N_m}^{N_{m+1}-1}f(1/m,n) \le \sum_{n=1}^{N_1-1}f(1,n)+\sum_{m=1}^\infty2^{-m}<\infty.$$
This completes the proof of this lemma.

**Lemma 16** (Corollary A.42 of Bai and Silverstein 2010) Let $A$ and $B$ be two $p\times n$ matrices and denote the ESDs of $S = AA^*$ and $\tilde S = BB^*$ by $F^S$ and $F^{\tilde S}$, respectively. Then,
$$L^4\bigl(F^S,F^{\tilde S}\bigr)\le\frac{2}{p^2}\operatorname{tr}\bigl(AA^*+BB^*\bigr)\operatorname{tr}\bigl[(A-B)(A-B)^*\bigr],$$
where $L(\cdot,\cdot)$ denotes the Lévy distance, that is,
$$L\bigl(F^S,F^{\tilde S}\bigr) = \inf\bigl\{\varepsilon:\ F^S(x-\varepsilon)-\varepsilon\le F^{\tilde S}(x)\le F^S(x+\varepsilon)+\varepsilon\bigr\}.$$
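The rank inequality of Bai and Silverstein (2010, Theorem A.44), stated next, is easy to verify numerically: perturbing $r$ columns of $A$ moves the ESD of $AA^*$ by at most $r/p$ in Kolmogorov distance. A sketch with illustrative sizes of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, r = 120, 240, 5
A = rng.normal(size=(p, n)) / np.sqrt(n)
B = A.copy()
B[:, :r] = rng.normal(size=(p, r)) / np.sqrt(n)   # perturb r columns: rank(A-B) <= r

ea = np.sort(np.linalg.eigvalsh(A @ A.T))
eb = np.sort(np.linalg.eigvalsh(B @ B.T))

# Kolmogorov distance between the two ESDs, evaluated on a common grid
grid = np.linspace(0.0, max(ea[-1], eb[-1]) + 1.0, 10_000)
Fa = np.searchsorted(ea, grid, side="right") / p
Fb = np.searchsorted(eb, grid, side="right") / p
ks = np.abs(Fa - Fb).max()
assert ks <= r / p + 1e-12    # the bound rank(A - B)/p of Theorem A.44
```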

**Lemma 17** (Theorem A.44 of Bai and Silverstein 2010) Let $A$ and $B$ be $p\times n$ complex matrices. Then,
$$\bigl\|F^{AA^*}-F^{BB^*}\bigr\|_{KS}\le\frac1p\operatorname{rank}(A-B).$$

**Lemma 18** (Bernstein's inequality) If $\tau_1,\dots,\tau_n$ are independent random variables with means zero and uniformly bounded by $b$, then, for any $\varepsilon>0$,
$$\mathrm P\Bigl(\Bigl|\sum_{j=1}^n\tau_j\Bigr|\ge\varepsilon\Bigr)\le2\exp\Bigl\{-\varepsilon^2\big/\bigl[2\bigl(B_n^2+b\varepsilon\bigr)\bigr]\Bigr\},$$
where $B_n^2 = \mathrm E(\tau_1+\dots+\tau_n)^2$.

**Lemma 19** (See A.2 of Bai and Silverstein 2010) Let $z = u+i\upsilon$, $\upsilon>0$, and let $A$ be an $n\times n$ Hermitian matrix. Denote by $A_k$ the $k$th major sub-matrix of $A$ of order $n-1$, i.e., the matrix resulting from deleting the $k$th row and column from $A$. Then
$$\bigl|\operatorname{tr}(A-zI_n)^{-1}-\operatorname{tr}(A_k-zI_{n-1})^{-1}\bigr|\le\frac1\upsilon.$$

**Lemma 20** Suppose that the matrix $\Sigma$ has the partition
$$\Sigma = \begin{pmatrix}\Sigma_{11}&\Sigma_{12}\\\Sigma_{21}&\Sigma_{22}\end{pmatrix}.$$
If $\Sigma_{22}$ and $\Sigma$ are nonsingular, then the inverse of $\Sigma$ has the form
$$\Sigma^{-1} = \begin{pmatrix}\Sigma_{11\cdot2}^{-1}&-\Sigma_{11\cdot2}^{-1}\Sigma_{12}\Sigma_{22}^{-1}\\-\Sigma_{22}^{-1}\Sigma_{21}\Sigma_{11\cdot2}^{-1}&\Sigma_{22}^{-1}+\Sigma_{22}^{-1}\Sigma_{21}\Sigma_{11\cdot2}^{-1}\Sigma_{12}\Sigma_{22}^{-1}\end{pmatrix},$$
where $\Sigma_{11\cdot2} = \Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$.

**Lemma 21** (Burkholder's inequality) Let $\{X_k\}$ be a complex martingale difference sequence with respect to an increasing $\sigma$-field. Then, for $p>1$,
$$\mathrm E\Bigl|\sum_kX_k\Bigr|^p\le K_p\,\mathrm E\Bigl(\sum_k|X_k|^2\Bigr)^{p/2}.$$

**Lemma 22** (Rosenthal's inequality) Let the $X_i$ be independent with zero means. Then, for some constant $C_k$,
$$\mathrm E\Bigl|\sum_iX_i\Bigr|^{2k}\le C_k\Bigl(\sum_i\mathrm E|X_i|^{2k}+\Bigl(\sum_i\mathrm E|X_i|^2\Bigr)^k\Bigr).$$

**Lemma 23** (Lemma 2.14 of Bai and Silverstein 2010) Let $f_1,f_2,\dots$ be analytic in $D$, a connected open set of $\mathbb C$, satisfying $|f_n(z)|\le M$ for every $n$ and $z$ in $D$, and

suppose $f_n(z)$ converges as $n\to\infty$ for each $z$ in a subset of $D$ having a limit point in $D$. Then there exists a function $f$, analytic in $D$, for which $f_n(z)\to f(z)$ and $f_n'(z)\to f'(z)$ for all $z\in D$. Moreover, on any set bounded by a contour interior to $D$, the convergence is uniform and $\{f_n'(z)\}$ is uniformly bounded.

**Lemma 24** (Theorem B.9 of Bai and Silverstein 2010) Assume that $\{G_n\}$ is a sequence of functions of bounded variation with $G_n(-\infty) = 0$ for all $n$. Then
$$\lim_{n\to\infty}m_{G_n}(z) = m(z)\quad\text{for all }z\in D,$$
where $D\subset\{z\in\mathbb C:\ \Im z>0\}$, if and only if there is a function of bounded variation $G$ with $G(-\infty)=0$ and Stieltjes transform $m(z)$ such that $G_n\to G$ vaguely.

References

Adler, S. L. (1995). Quaternionic quantum mechanics and quantum fields. Oxford: Oxford University Press.
Akemann, G. (2005). The complex Laguerre symplectic ensemble of non-Hermitian matrices. Nuclear Physics B, 730(3).
Akemann, G., Phillips, M. J. (2013). The interpolating Airy kernels for the β = 1 and β = 4 elliptic Ginibre ensembles. arXiv preprint.
Anderson, G. W., Guionnet, A., Zeitouni, O. (2010). An introduction to random matrices (Vol. 118). Cambridge: Cambridge University Press.
Bai, Z. D. (1993). Convergence rate of expected spectral distributions of large random matrices. Part II. Sample covariance matrices. The Annals of Probability, 21(2).
Bai, Z. D., Silverstein, J. W. (2010). Spectral analysis of large dimensional random matrices (2nd ed.). New York: Springer.
Finkelstein, D., Jauch, J. M., Schiminovich, S., Speiser, D. (1962). Foundations of quaternion quantum mechanics. Journal of Mathematical Physics, 3(2), 207.
Kanzieper, E. (2002). Eigenvalue correlations in non-Hermitean symplectic random matrices. Journal of Physics A: Mathematical and General, 35, 6631.
Kuipers, J. B. (1999). Quaternions and rotation sequences. Princeton: Princeton University Press.
Marčenko, V. A., Pastur, L. A. (1967). Distribution of eigenvalues for some sets of random matrices. Matematicheskii Sbornik, 114(4), 507–536.
Mehta, M. L. (2004). Random matrices (Vol. 142). Amsterdam: Elsevier.
So, W., Thompson, R. C., Zhang, F. (1994). The numerical range of normal matrices with quaternion entries. Linear and Multilinear Algebra, 37(1–3).
Wishart, J. (1928). The generalised product moment distribution in samples from a normal multivariate population. Biometrika, 20A(1/2).
Wolf, L. A. (1936). Similarity of matrices in which the elements are real quaternions. Bulletin of the American Mathematical Society, 42(10).
Yin, Y., Bai, Z. D. (2014). Convergence rates of the spectral distributions of large random quaternion self-dual Hermitian matrices. Journal of Statistical Physics, 157(6).
Yin, Y., Bai, Z. D., Hu, J. (2013). On the semicircular law of large dimensional random quaternion matrices. Journal of Theoretical Probability (to appear). arXiv preprint.
Yin, Y., Bai, Z. D., Hu, J. (2014). On the limit of extreme eigenvalues of large dimensional random quaternion matrices. Physics Letters A, 378.
Zhang, F. (1995). On numerical range of normal matrices of quaternions. Journal of Mathematical and Physical Sciences, 29(6).
Zhang, F. (1997). Quaternions and matrices of quaternions. Linear Algebra and Its Applications, 251, 21–57.


More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

A Generalization of Wigner s Law

A Generalization of Wigner s Law A Generalization of Wigner s Law Inna Zakharevich June 2, 2005 Abstract We present a generalization of Wigner s semicircle law: we consider a sequence of probability distributions (p, p 2,... ), with mean

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices S. Dallaporta University of Toulouse, France Abstract. This note presents some central limit theorems

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

A note on a Marčenko-Pastur type theorem for time series. Jianfeng. Yao

A note on a Marčenko-Pastur type theorem for time series. Jianfeng. Yao A note on a Marčenko-Pastur type theorem for time series Jianfeng Yao Workshop on High-dimensional statistics The University of Hong Kong, October 2011 Overview 1 High-dimensional data and the sample covariance

More information

Random regular digraphs: singularity and spectrum

Random regular digraphs: singularity and spectrum Random regular digraphs: singularity and spectrum Nick Cook, UCLA Probability Seminar, Stanford University November 2, 2015 Universality Circular law Singularity probability Talk outline 1 Universality

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Determinantal point processes and random matrix theory in a nutshell

Determinantal point processes and random matrix theory in a nutshell Determinantal point processes and random matrix theory in a nutshell part II Manuela Girotti based on M. Girotti s PhD thesis, A. Kuijlaars notes from Les Houches Winter School 202 and B. Eynard s notes

More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. XX, No. X, pp. XX XX c 005 Society for Industrial and Applied Mathematics DISTRIBUTIONS OF THE EXTREME EIGENVALUES OF THE COMPLEX JACOBI RANDOM MATRIX ENSEMBLE PLAMEN KOEV

More information

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013 Multivariate Gaussian Distribution Auxiliary notes for Time Series Analysis SF2943 Spring 203 Timo Koski Department of Mathematics KTH Royal Institute of Technology, Stockholm 2 Chapter Gaussian Vectors.

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Random Matrix Theory and its Applications to Econometrics

Random Matrix Theory and its Applications to Econometrics Random Matrix Theory and its Applications to Econometrics Hyungsik Roger Moon University of Southern California Conference to Celebrate Peter Phillips 40 Years at Yale, October 2018 Spectral Analysis of

More information

Notes on basis changes and matrix diagonalization

Notes on basis changes and matrix diagonalization Notes on basis changes and matrix diagonalization Howard E Haber Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064 April 17, 2017 1 Coordinates of vectors and matrix

More information

Jordan Canonical Form of A Partitioned Complex Matrix and Its Application to Real Quaternion Matrices

Jordan Canonical Form of A Partitioned Complex Matrix and Its Application to Real Quaternion Matrices COMMUNICATIONS IN ALGEBRA, 29(6, 2363-2375(200 Jordan Canonical Form of A Partitioned Complex Matrix and Its Application to Real Quaternion Matrices Fuzhen Zhang Department of Math Science and Technology

More information

arxiv: v1 [math.pr] 13 Aug 2007

arxiv: v1 [math.pr] 13 Aug 2007 The Annals of Probability 2007, Vol. 35, No. 4, 532 572 DOI: 0.24/009790600000079 c Institute of Mathematical Statistics, 2007 arxiv:0708.720v [math.pr] 3 Aug 2007 ON ASYMPTOTICS OF EIGENVECTORS OF LARGE

More information

Gaussian Models (9/9/13)

Gaussian Models (9/9/13) STA561: Probabilistic machine learning Gaussian Models (9/9/13) Lecturer: Barbara Engelhardt Scribes: Xi He, Jiangwei Pan, Ali Razeen, Animesh Srivastava 1 Multivariate Normal Distribution The multivariate

More information

Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium

Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium SEA 06@MIT, Workshop on Stochastic Eigen-Analysis and its Applications, MIT, Cambridge,

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

A new type of PT-symmetric random matrix ensembles

A new type of PT-symmetric random matrix ensembles A new type of PT-symmetric random matrix ensembles Eva-Maria Graefe Department of Mathematics, Imperial College London, UK joint work with Steve Mudute-Ndumbe and Matthew Taylor Department of Mathematics,

More information

Stein s Method and Characteristic Functions

Stein s Method and Characteristic Functions Stein s Method and Characteristic Functions Alexander Tikhomirov Komi Science Center of Ural Division of RAS, Syktyvkar, Russia; Singapore, NUS, 18-29 May 2015 Workshop New Directions in Stein s method

More information

Invertibility of random matrices

Invertibility of random matrices University of Michigan February 2011, Princeton University Origins of Random Matrix Theory Statistics (Wishart matrices) PCA of a multivariate Gaussian distribution. [Gaël Varoquaux s blog gael-varoquaux.info]

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

An Introduction to Large Sample covariance matrices and Their Applications

An Introduction to Large Sample covariance matrices and Their Applications An Introduction to Large Sample covariance matrices and Their Applications (June 208) Jianfeng Yao Department of Statistics and Actuarial Science The University of Hong Kong. These notes are designed for

More information

Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE

Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE Craig A. Tracy UC Davis RHPIA 2005 SISSA, Trieste 1 Figure 1: Paul Painlevé,

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Chemistry 5.76 Revised February, 1982 NOTES ON MATRIX METHODS

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Chemistry 5.76 Revised February, 1982 NOTES ON MATRIX METHODS MASSACHUSETTS INSTITUTE OF TECHNOLOGY Chemistry 5.76 Revised February, 198 NOTES ON MATRIX METHODS 1. Matrix Algebra Margenau and Murphy, The Mathematics of Physics and Chemistry, Chapter 10, give almost

More information

A determinantal formula for the GOE Tracy-Widom distribution

A determinantal formula for the GOE Tracy-Widom distribution A determinantal formula for the GOE Tracy-Widom distribution Patrik L. Ferrari and Herbert Spohn Technische Universität München Zentrum Mathematik and Physik Department e-mails: ferrari@ma.tum.de, spohn@ma.tum.de

More information

arxiv: v2 [math.pr] 2 Nov 2009

arxiv: v2 [math.pr] 2 Nov 2009 arxiv:0904.2958v2 [math.pr] 2 Nov 2009 Limit Distribution of Eigenvalues for Random Hankel and Toeplitz Band Matrices Dang-Zheng Liu and Zheng-Dong Wang School of Mathematical Sciences Peking University

More information

Exponential tail inequalities for eigenvalues of random matrices

Exponential tail inequalities for eigenvalues of random matrices Exponential tail inequalities for eigenvalues of random matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify

More information

Clarkson Inequalities With Several Operators

Clarkson Inequalities With Several Operators isid/ms/2003/23 August 14, 2003 http://www.isid.ac.in/ statmath/eprints Clarkson Inequalities With Several Operators Rajendra Bhatia Fuad Kittaneh Indian Statistical Institute, Delhi Centre 7, SJSS Marg,

More information

Some inequalities for sum and product of positive semide nite matrices

Some inequalities for sum and product of positive semide nite matrices Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Universality of local spectral statistics of random matrices

Universality of local spectral statistics of random matrices Universality of local spectral statistics of random matrices László Erdős Ludwig-Maximilians-Universität, Munich, Germany CRM, Montreal, Mar 19, 2012 Joint with P. Bourgade, B. Schlein, H.T. Yau, and J.

More information

Advanced Digital Signal Processing -Introduction

Advanced Digital Signal Processing -Introduction Advanced Digital Signal Processing -Introduction LECTURE-2 1 AP9211- ADVANCED DIGITAL SIGNAL PROCESSING UNIT I DISCRETE RANDOM SIGNAL PROCESSING Discrete Random Processes- Ensemble Averages, Stationary

More information

Diagonalization by a unitary similarity transformation

Diagonalization by a unitary similarity transformation Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

The Matrix Dyson Equation in random matrix theory

The Matrix Dyson Equation in random matrix theory The Matrix Dyson Equation in random matrix theory László Erdős IST, Austria Mathematical Physics seminar University of Bristol, Feb 3, 207 Joint work with O. Ajanki, T. Krüger Partially supported by ERC

More information

Numerical Range of non-hermitian random Ginibre matrices and the Dvoretzky theorem

Numerical Range of non-hermitian random Ginibre matrices and the Dvoretzky theorem Numerical Range of non-hermitian random Ginibre matrices and the Dvoretzky theorem Karol Życzkowski Institute of Physics, Jagiellonian University, Cracow and Center for Theoretical Physics, PAS, Warsaw

More information

Spectral inequalities and equalities involving products of matrices

Spectral inequalities and equalities involving products of matrices Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department

More information

Distribution for the Standard Eigenvalues of Quaternion Matrices

Distribution for the Standard Eigenvalues of Quaternion Matrices International Mathematical Forum, Vol. 7, 01, no. 17, 831-838 Distribution for the Standard Eigenvalues of Quaternion Matrices Shahid Qaisar College of Mathematics and Statistics, Chongqing University

More information

Stein s Method for Matrix Concentration

Stein s Method for Matrix Concentration Stein s Method for Matrix Concentration Lester Mackey Collaborators: Michael I. Jordan, Richard Y. Chen, Brendan Farrell, and Joel A. Tropp University of California, Berkeley California Institute of Technology

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

MINIMAL NORMAL AND COMMUTING COMPLETIONS

MINIMAL NORMAL AND COMMUTING COMPLETIONS INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 5 59 c 8 Institute for Scientific Computing and Information MINIMAL NORMAL AND COMMUTING COMPLETIONS DAVID P KIMSEY AND

More information

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES ADAM MASSEY, STEVEN J. MILLER, AND JOHN SINSHEIMER Abstract. Consider the ensemble of real symmetric Toeplitz

More information

arxiv: v1 [math.na] 1 Sep 2018

arxiv: v1 [math.na] 1 Sep 2018 On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing

More information

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data

Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data Journal of Multivariate Analysis 78, 6282 (2001) doi:10.1006jmva.2000.1939, available online at http:www.idealibrary.com on Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Operator norm convergence for sequence of matrices and application to QIT

Operator norm convergence for sequence of matrices and application to QIT Operator norm convergence for sequence of matrices and application to QIT Benoît Collins University of Ottawa & AIMR, Tohoku University Cambridge, INI, October 15, 2013 Overview Overview Plan: 1. Norm

More information

Non white sample covariance matrices.

Non white sample covariance matrices. Non white sample covariance matrices. S. Péché, Université Grenoble 1, joint work with O. Ledoit, Uni. Zurich 17-21/05/2010, Université Marne la Vallée Workshop Probability and Geometry in High Dimensions

More information

Central Limit Theorems for linear statistics for Biorthogonal Ensembles

Central Limit Theorems for linear statistics for Biorthogonal Ensembles Central Limit Theorems for linear statistics for Biorthogonal Ensembles Maurice Duits, Stockholm University Based on joint work with Jonathan Breuer (HUJI) Princeton, April 2, 2014 M. Duits (SU) CLT s

More information

Matrix Inequalities by Means of Block Matrices 1

Matrix Inequalities by Means of Block Matrices 1 Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,

More information

COMPLEX HERMITE POLYNOMIALS: FROM THE SEMI-CIRCULAR LAW TO THE CIRCULAR LAW

COMPLEX HERMITE POLYNOMIALS: FROM THE SEMI-CIRCULAR LAW TO THE CIRCULAR LAW Serials Publications www.serialspublications.com OMPLEX HERMITE POLYOMIALS: FROM THE SEMI-IRULAR LAW TO THE IRULAR LAW MIHEL LEDOUX Abstract. We study asymptotics of orthogonal polynomial measures of the

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

Trace Class Operators and Lidskii s Theorem

Trace Class Operators and Lidskii s Theorem Trace Class Operators and Lidskii s Theorem Tom Phelan Semester 2 2009 1 Introduction The purpose of this paper is to provide the reader with a self-contained derivation of the celebrated Lidskii Trace

More information

Lecture I: Asymptotics for large GUE random matrices

Lecture I: Asymptotics for large GUE random matrices Lecture I: Asymptotics for large GUE random matrices Steen Thorbjørnsen, University of Aarhus andom Matrices Definition. Let (Ω, F, P) be a probability space and let n be a positive integer. Then a random

More information

Submitted to the Brazilian Journal of Probability and Statistics

Submitted to the Brazilian Journal of Probability and Statistics Submitted to the Brazilian Journal of Probability and Statistics Multivariate normal approximation of the maximum likelihood estimator via the delta method Andreas Anastasiou a and Robert E. Gaunt b a

More information

Convergence rates of spectral distributions of large sample covariance matrices

Convergence rates of spectral distributions of large sample covariance matrices Title Convergence rates of spectral distributions of large sample covariance matrices Authors) Bai, ZD; Miao, B; Yao, JF Citation SIAM Journal On Matrix Analysis And Applications, 2004, v. 25 n., p. 05-27

More information

METHODOLOGIES IN SPECTRAL ANALYSIS OF LARGE DIMENSIONAL RANDOM MATRICES, A REVIEW

METHODOLOGIES IN SPECTRAL ANALYSIS OF LARGE DIMENSIONAL RANDOM MATRICES, A REVIEW Statistica Sinica 9(1999), 611-677 METHODOLOGIES IN SPECTRAL ANALYSIS OF LARGE DIMENSIONAL RANDOM MATRICES, A REVIEW Z. D. Bai National University of Singapore Abstract: In this paper, we give a brief

More information

Universality for random matrices and log-gases

Universality for random matrices and log-gases Universality for random matrices and log-gases László Erdős IST, Austria Ludwig-Maximilians-Universität, Munich, Germany Encounters Between Discrete and Continuous Mathematics Eötvös Loránd University,

More information

Review (Probability & Linear Algebra)

Review (Probability & Linear Algebra) Review (Probability & Linear Algebra) CE-725 : Statistical Pattern Recognition Sharif University of Technology Spring 2013 M. Soleymani Outline Axioms of probability theory Conditional probability, Joint

More information

Valerio Cappellini. References

Valerio Cappellini. References CETER FOR THEORETICAL PHYSICS OF THE POLISH ACADEMY OF SCIECES WARSAW, POLAD RADOM DESITY MATRICES AD THEIR DETERMIATS 4 30 SEPTEMBER 5 TH SFB TR 1 MEETIG OF 006 I PRZEGORZAłY KRAKÓW Valerio Cappellini

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

BULK SPECTRUM: WIGNER AND MARCHENKO-PASTUR THEOREMS

BULK SPECTRUM: WIGNER AND MARCHENKO-PASTUR THEOREMS BULK SPECTRUM: WIGNER AND MARCHENKO-PASTUR THEOREMS 1. GAUSSIAN ORTHOGONAL AND UNITARY ENSEMBLES Definition 1. A random symmetric, N N matrix X chosen from the Gaussian orthogonal ensemble (abbreviated

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

ORTHOGONAL POLYNOMIALS

ORTHOGONAL POLYNOMIALS ORTHOGONAL POLYNOMIALS 1. PRELUDE: THE VAN DER MONDE DETERMINANT The link between random matrix theory and the classical theory of orthogonal polynomials is van der Monde s determinant: 1 1 1 (1) n :=

More information

11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.

11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly. C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a

More information