LIMITING SPECTRAL DISTRIBUTION OF A SYMMETRIZED AUTO-CROSS COVARIANCE MATRIX


Submitted to the Annals of Applied Probability. arXiv: math.PR

LIMITING SPECTRAL DISTRIBUTION OF A SYMMETRIZED AUTO-CROSS COVARIANCE MATRIX

By Baisuo Jin, Chen Wang, Z. D. Bai, K. Krishnan Nair and Matthew Harding

University of Science and Technology of China, National University of Singapore, Northeast Normal University, and Stanford University

This paper studies the limiting spectral distribution (LSD) of a symmetrized auto-cross covariance matrix. The auto-cross covariance matrix is defined as

M_\tau = \frac{1}{2T}\sum_{j=1}^{T}\big(e_j e_{j+\tau}^* + e_{j+\tau} e_j^*\big),

where e_j is an N-dimensional vector of independent standard complex components with the properties stated in Theorem 1.1, and \tau is the lag. M_0 is well studied in the literature; its LSD is the Marčenko-Pastur (MP) law. The contribution of this paper is in determining the LSD of M_\tau for \tau \ge 1. It should be noted that the LSD of M_\tau does not depend on \tau. This study arose from, and plays a key role in, model selection for any large dimensional model with a lagged time series structure; such structures are central to large dimensional factor models and singular spectrum analysis.

Research of the first author was supported by NSF China Young Scientist Grant 0397. Research of the second author was supported by NSF China 7057 as well as by the Program for Changjiang Scholars and Innovative Research Team in University. Research of the last author was supported by the Stanford Presidential Fund for Innovation in International Studies.

AMS 2000 subject classifications: Primary 60F15, 15A52, 62H25; secondary 60F05, 60F17.

Keywords and phrases: Auto-cross covariance, Factor analysis, Marčenko-Pastur law, Limiting spectral distribution, Order detection, Random matrix theory, Stieltjes transform.

2 JIN, B. S. ET AL.

1. Introduction. Over the last decade, and as a result of new sources of large data, the analysis of high dimensional statistical models has received renewed attention. These models are currently being analyzed within the context of Random Matrix Theory (RMT) in many areas such as statistics (Bai & Silverstein, 2010), economics (Harding, 2012; Onatski, 2009, 2012) and engineering (Rao & Edelman, 2008; Tulino & Verdu, 2004). The asymptotic framework assumes that both the dimension corresponding to the number of individual units, N, and the number of samples, T, are large.

Suppose A_n is an n × n random Hermitian matrix with eigenvalues \lambda_j, j = 1, 2, ..., n. Define the one-dimensional distribution function of the eigenvalues

F^{A_n}(x) = \frac{1}{n}\,\#\{j \le n : \lambda_j \le x\};

F^{A_n}(x) is called the empirical spectral distribution (ESD) of the matrix A_n. Here \#E denotes the cardinality of the set E. The limit distribution of \{F^{A_n}\} for a given sequence of random matrices \{A_n\} is called the limiting spectral distribution (LSD). For any function of bounded variation G, the Stieltjes transform of G is defined as

m_G(z) = \int \frac{1}{\lambda - z}\, dG(\lambda), \qquad z \in \mathbb{C}^+ \equiv \{z \in \mathbb{C} : \operatorname{Im} z > 0\}.

For any n × n matrix A_n with real eigenvalues \lambda_1, ..., \lambda_n, the Stieltjes transform of F^{A_n} is

m_{F^{A_n}}(z) = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{\lambda_i - z} = \frac{1}{n}\operatorname{tr}(A_n - zI)^{-1}.

Similar to the Fourier transform in probability theory, there is a one-to-one correspondence between distributions and their Stieltjes transforms via the inversion formula: for any distribution function G,

G\{[a, b]\} = \frac{1}{\pi}\lim_{\eta \to 0^+}\int_a^b \operatorname{Im} m_G(\xi + i\eta)\, d\xi.
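As a brief numerical aside (not part of the paper), the inversion formula above says that a density can be recovered from the imaginary part of the Stieltjes transform just above the real axis. The following sketch checks this for a sample covariance matrix S = XX'/T, whose LSD is the Marčenko-Pastur law with ratio c = n/T; the sizes and the smoothing parameter eta are illustrative choices.

```python
# Recover the spectral density of S = XX'/T from Im m(x + i*eta), and
# compare against the Marcenko-Pastur density at the same point.
import numpy as np

rng = np.random.default_rng(0)
n, T = 400, 800                          # aspect ratio c = n/T = 0.5
X = rng.standard_normal((n, T))
eig = np.linalg.eigvalsh(X @ X.T / T)    # eigenvalues of the sample covariance

def m_esd(z):
    """Stieltjes transform of the ESD: (1/n) sum_i 1/(lambda_i - z)."""
    return np.mean(1.0 / (eig - z))

x, eta = 1.0, 0.05
approx_density = m_esd(x + 1j * eta).imag / np.pi   # inversion formula

# Marcenko-Pastur density at x, for comparison
c = n / T
a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
mp_density = np.sqrt((b - x) * (x - a)) / (2 * np.pi * c * x)
print(approx_density, mp_density)
```

The two numbers agree to within the smoothing bias of order eta and the finite-n fluctuation of the ESD.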

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 3

Besides, the continuity theorem holds; that is, a sequence of distributions tends to a weak limit if and only if their Stieltjes transforms tend to that of the limiting distribution. Therefore, to find the limiting distribution, one can first find the limiting Stieltjes transform and then use the inversion formula to obtain the limiting distribution.

Research on the LSD of large dimensional random matrices dates back to Wigner (1955, 1958). In these studies, he established that the ESD of a large dimensional Wigner matrix tends to the so-called semicircular law. The LSD of large dimensional sample covariance matrices was studied by Marčenko and Pastur (1967), and the limiting distribution is referred to as the MP law. Further research efforts were devoted to the LSD of a product of two random matrices. To this end, pioneering work was done by Wachter (1980), who considered the LSD of the multivariate F-matrix, the explicit form of which was derived by Bai, Yin and Krishnaiah (1986) and Silverstein (1995). The existence of the LSD of the matrix sequence \{S_n T_n\} was established by Yin and Krishnaiah (1983), where S_n is a standard Wishart matrix and T_n is a positive definite matrix. Bai, Miao and Jin (2007) proved the existence of the LSD of \{S_n T_n\}, where S_n is a sample covariance matrix and T_n is an arbitrary Hermitian matrix; in particular, they established the explicit form of the LSD of \{S_n T_n\} when T_n is a Wigner matrix. Random matrices of the form A_n + X_n^* T_n X_n, where A_n is Hermitian, T_n is diagonal and X_n consists of iid (independently and identically distributed) entries, were extensively investigated by many researchers, including Marčenko and Pastur (1967), Grenander and Silverstein (1977), Wachter (1978), Jonsson

4 JIN, B. S. ET AL.

(1982), and Silverstein and Bai (1995). Furthermore, the LSD of a circulant random matrix was derived by Bose and Mitra (2002), and the LSD of sample correlation matrices was studied by Jiang (2004). Bai and Zhou (2008) considered the LSD of a large dimensional sample covariance matrix where the assumption of column independence has been relaxed. Large dimensional vector autoregressive moving average (LDVARMA) models are a special case of the random matrix framework considered by Bai and Zhou (2008). Jin et al. (2009) established the explicit forms of the LSDs of covariance matrices of LDVAR(1) and LDVMA(1) models. Wang et al. (2011) established the relationship between the power spectral density function and the LSD of covariance matrices of LDVARMA(p, q) models. A detailed exposition of spectral properties of random matrices is presented in Bai and Silverstein (2004, 2010) and Zheng (2012).

1.1. Motivation and Main Result. In this paper, we focus our attention on the LSD of the symmetrized auto-cross covariance matrix

M_\tau = \sum_{k=1}^{T}\big(\gamma_k\gamma_{k+\tau}^* + \gamma_{k+\tau}\gamma_k^*\big),

where \gamma_k = \frac{1}{\sqrt{2T}} e_k, e_k = (\varepsilon_{1k}, ..., \varepsilon_{Nk})' and \{\varepsilon_{it}\} are independent random variables with mean 0 and variance \sigma^2. Here, \tau denotes the number of lags. The motivation of this paper comes from large dimensional models with a lagged time series structure, which are central to large dimensional dynamic factor models (Forni & Lippi, 2001) and singular spectrum analysis (Vautard et al., 1992; Zhigljavsky, 2012). To understand the underlying motivation of this work, consider the framework of a large dimensional dynamic k-factor model with lag q. This takes the following form:

R_t = \sum_{i=0}^{q}\Lambda_i F_{t-i} + e_t, \qquad t = 1, ..., T,

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 5

where the \Lambda_i's are N × k non-random matrices with full rank. For t = 1, ..., T, the F_t's are k-dimensional vectors of iid standard complex components with finite fourth moment, and the e_t's are N-dimensional vectors of iid standard complex components with finite second moment, independent of F_t. This model can be viewed as a large dimensional information-plus-noise type model (Dozier and Silverstein (2007a, 2007b); Bai and Silverstein (2012)), with the information contained in the summation part and the noise in the e_t's. Here large dimension refers to N and T, while the number of factors k and the number of lags q are small and fixed. Under this high dimensional setting, an important statistical problem is the estimation of k and q (Bai & Ng, 2002; Harding, 2012).

Let \tau be a nonnegative integer. For j = 1, ..., T, define

\Phi(\tau) = \frac{1}{2T}\sum_{j=1}^{T}\big(R_j R_{j+\tau}^* + R_{j+\tau} R_j^*\big)

and

M_\tau = \sum_{j=1}^{T}\big(\gamma_j\gamma_{j+\tau}^* + \gamma_{j+\tau}\gamma_j^*\big), \qquad \gamma_j = \frac{1}{\sqrt{2T}} e_j.

Note that, essentially, M_\tau and \Phi(\tau) are symmetrized auto-cross covariance matrices at lag \tau, and they generalize the usual sample covariance matrices M_0 and \Phi(0). The matrix M_0 is well studied in the literature, and it is well known that its limiting spectral distribution (LSD) follows the MP law (Marčenko and Pastur, 1967). Moreover, when \tau = 0 and assuming that Cov(F_t) = \Sigma_f, the population covariance matrix of R_t has the same eigenvalues as those of

\begin{pmatrix} \sigma^2 I + \Lambda\Sigma_f\Lambda^* & 0 \\ 0 & \sigma^2 I \end{pmatrix}

with the two diagonal blocks of size k(q+1) × k(q+1) and (N − k(q+1)) × (N − k(q+1)), respectively. Therefore, we have the spiked population

6 JIN, B. S. ET AL.

model framework of Johnstone (2001), Baik and Silverstein (2006) and Bai and Yao (2008). In fact, under certain conditions, we can estimate k(q+1) by counting the number of eigenvalues of \Phi(0) that are larger than \sigma^2(1+\sqrt{c})^2, where c is the limiting ratio of N/T. However, to estimate the values of k and q separately, we need to study the LSD of M_\tau for at least one \tau \ge 1. It is interesting to note that for \tau \ge 1 (\tau being a fixed integer), the LSD of M_\tau does not depend on \tau (see Theorem 1.1 for details). However, the number of eigenvalues of \Phi(\tau) that lie outside the support of the LSD of M_\tau at lags \tau \le q depends on the lag \tau, and is different from the number obtained at lags \tau > q. This is mainly because the contributions of the eigenvalues of the terms containing both factor and error components are non-zero for \tau \le q. Thus, we can separate the estimates of k and q by counting the number of eigenvalues of \Phi(\tau) that lie outside the support of the LSD of M_\tau for \tau = 0, 1, 2, ..., q, q+1, ....

Unlike the case \tau = 0, not much is known in the literature about M_\tau for \tau \ge 1. The goal of this paper is to derive the LSD of M_\tau, denoted by F_\tau, i.e. F_\tau(x) = \lim_{N\to\infty} F^{M_\tau}(x), where F^A denotes the ESD of A. In our derivation of the LSD of M_\tau, a recursive method is created to solve a perturbed difference equation of order 2.

The main result of this paper is the following theorem.

Theorem 1.1. Assume:

(a) \tau \ge 1 is a fixed integer.

(b) e_k = (\varepsilon_{1k}, ..., \varepsilon_{Nk})', k = 1, 2, ..., T+\tau, are N-dimensional vectors of independent standard complex components with \sup_{i \le N,\, t \le T+\tau} E|\varepsilon_{it}|^{2+\delta} \le

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 7

M < \infty for some \delta \in (0, 2], and for any \eta > 0,

(1.1)  \frac{1}{\eta^{2+\delta} NT}\sum_{i=1}^{N}\sum_{t=1}^{T+\tau} E\big(|\varepsilon_{it}|^{2+\delta} I(|\varepsilon_{it}| \ge \eta T^{1/(2+\delta)})\big) = o(1).

(c) N/(T+\tau) \to c > 0 as N, T \to \infty.

(d) M_\tau = \sum_{k=1}^{T}(\gamma_k\gamma_{k+\tau}^* + \gamma_{k+\tau}\gamma_k^*), where \gamma_k = \frac{1}{\sqrt{2T}} e_k.

Then, as N, T \to \infty, F^{M_\tau} \xrightarrow{D} F_\tau a.s., and F_\tau has a density function given by

\varphi_c(x) = \frac{1}{2c\pi}\sqrt{\frac{y_0^2}{1+y_0} - \Big(\frac{1-c}{x} + \frac{1}{\sqrt{1+y_0}}\Big)^2}, \qquad |x| \le a,

where a = \frac{(1-c)\sqrt{1+y_1}}{y_1 - 1} if c \ne 1 and a = 2 if c = 1; y_0 is the largest real root of the equation

y^3 - \frac{(1-c)^2 - x^2}{x^2}\,y^2 - \frac{4}{x^2}\,y - \frac{4}{x^2} = 0

and y_1 is the only real root of the equation

(1.2)  \big((1-c)^2 - 1\big)y^3 + y^2 + y - 1 = 0

such that y_1 > 1 if c < 1 and y_1 \in (0, 1) if c > 1. Further, if c > 1, then F_\tau has a point mass 1 − 1/c at the origin.

Remark 1.1. Notice that, as long as \tau \ge 1, F_\tau is the same for all values of \tau. However, F_\tau is different from, and cannot be reduced to, F_0, the distribution of the MP law.

Figure 1 displays the density functions \varphi_c(x) with c = 0.2, 0.5 and 0.7. Figure 2 displays the density functions \varphi_c(x) with c = 1.5, 2 and 2.5. These two figures show that as c increases the support of \varphi_c(x) becomes wider, and that \varphi_c(x) attains its maximum at x = 0, which becomes sharper as c gets closer to 1.

The rest of this paper is organized as follows. The truncation and centralization steps are provided in Section 2. Section 3 outlines the proof of

8 JIN, B. S. ET AL.

[Fig 1. Density functions \varphi_c(x) of the LSD of M_\tau with c = 0.2 (the solid line), c = 0.5 (the dashed line) and c = 0.7 (the dotted line).]

[Fig 2. Density functions \varphi_c(x) of the LSD of M_\tau with c = 1.5 (the solid line), c = 2 (the dashed line) and c = 2.5 (the dotted line). Note that the area under each density function curve is 1/c.]
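The densities in the figures can be checked by direct simulation. The sketch below (not from the paper) uses real N(0,1) entries in place of standard complex ones — a substitution that does not affect the LSD in the classical covariance setting and is used here only as an illustration. It verifies two statements of Theorem 1.1 for c = 0.5: the ESDs of M_1 and M_2 are close (the LSD does not depend on \tau \ge 1), and the spectrum concentrates on the symmetric interval [−a, a] with a computed from y_1 and equation (1.2) as reconstructed above.

```python
# Simulate M_tau for tau = 1, 2 and compare their spectra with the
# theoretical support edge a of Theorem 1.1 (reconstructed formula).
import numpy as np

rng = np.random.default_rng(1)
N, T = 300, 600                 # c = N / T = 0.5
tau_max = 2
E = rng.standard_normal((N, T + tau_max))   # columns are e_1, ..., e_{T+tau}

def M(tau):
    # M_tau = (1/(2T)) sum_{j=1}^T (e_j e_{j+tau}' + e_{j+tau} e_j')
    A = E[:, :T]                # e_1, ..., e_T
    B = E[:, tau:T + tau]       # e_{1+tau}, ..., e_{T+tau}
    return (A @ B.T + B @ A.T) / (2 * T)

eig1 = np.linalg.eigvalsh(M(1))   # ascending order
eig2 = np.linalg.eigvalsh(M(2))

# Kolmogorov distance between the two empirical spectral distributions
grid = np.linspace(-2, 2, 401)
F1 = np.searchsorted(eig1, grid) / N
F2 = np.searchsorted(eig2, grid) / N
kdist = np.abs(F1 - F2).max()

# Support edge from equation (1.2): ((1-c)^2 - 1) y^3 + y^2 + y - 1 = 0,
# y1 is the real root > 1 (since c < 1), a = (1-c) sqrt(1+y1) / (y1 - 1).
c = N / T
roots = np.roots([(1 - c) ** 2 - 1, 1.0, 1.0, -1.0])
y1 = max(r.real for r in roots if abs(r.imag) < 1e-9)
a_edge = (1 - c) * np.sqrt(1 + y1) / (y1 - 1)
print(kdist, eig1[-1], a_edge)
```

At these sizes the two ESDs agree to a few percent, the spectrum is nearly symmetric about 0, and the largest eigenvalue sits close to a ≈ 1.25.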

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 9

the main theorem, Theorem 1.1. Justification of the variable truncation, centralization and standardization is provided in Appendix A, and some technical lemmas used in the derivation of Theorem 1.1 are presented in Appendix B.

2. Truncation, Centralization and Standardization. First, we can select a sequence \eta_N \to 0 such that (1.1) remains true when \eta is replaced by \eta_N. After truncation at \eta_N T^{1/(2+\delta)}, centralization and standardization, in what follows we may assume that

|\varepsilon_{ij}| \le \eta_N T^{1/(2+\delta)}, \quad E\varepsilon_{ij} = 0, \quad E|\varepsilon_{ij}|^2 = 1, \quad E|\varepsilon_{ij}|^{2+\delta} \le M.

The details of the verification are provided in Appendix A.

3. Derivation of the LSD of M_\tau (\tau \ge 1). In this section, we provide the proof of Theorem 1.1. To this end, we start with a subsection on notation, followed by the proof.

3.1. Notation. Let the Stieltjes transform of M_\tau be denoted by m_N(z) = \frac{1}{N}\operatorname{tr}(M_\tau - zI_N)^{-1}, where z = u + iv, v > 0. We shall prove that m_N(z) \to m(z) for some m(z). It follows that the LSD of M_\tau exists and has a probability density function \lim_{v\to 0}\frac{1}{\pi}\operatorname{Im}(m(x + iv)). Define

A = M_\tau - zI_N,
A_k = A - \gamma_k(\gamma_{k+\tau} + \gamma_{k-\tau})^* - (\gamma_{k+\tau} + \gamma_{k-\tau})\gamma_k^*,
A_{k,k+\tau,\dots,k+n\tau} = A_{k,k+\tau,\dots,k+(n-1)\tau} - \gamma_{k+(n+1)\tau}\gamma_{k+n\tau}^* - \gamma_{k+n\tau}\gamma_{k+(n+1)\tau}^*, \qquad n \ge 1,

10 JIN, B. S. ET AL.

for k \in [\tau+1, T]. Note that A_{k,k+\tau,\dots,k+n\tau} is independent of \gamma_k, ..., \gamma_{k+n\tau}. For k \le \tau or k > T, we still use the definition of A_k with the convention that \gamma_l = 0 for l \le 0 or l > T + \tau.

3.2. Derivation. By

A = \sum_{k=1}^{T}\big(\gamma_k\gamma_{k+\tau}^* + \gamma_{k+\tau}\gamma_k^*\big) - zI_N

we have

I_N = \sum_{k=1}^{T}\big(A^{-1}\gamma_k\gamma_{k+\tau}^* + A^{-1}\gamma_{k+\tau}\gamma_k^*\big) - zA^{-1}.

Taking the trace and dividing by N, we obtain

(3.1)  1 + zm_N(z) = \frac{1}{N}\sum_{k=1}^{T}\big(\gamma_{k+\tau}^* A^{-1}\gamma_k + \gamma_k^* A^{-1}\gamma_{k+\tau}\big).

Taking expectations on both sides, we obtain

(3.2)  1 + zEm_N(z) = \frac{1}{N}\sum_{k=1}^{T}\big(E\gamma_{k+\tau}^* A^{-1}\gamma_k + E\gamma_k^* A^{-1}\gamma_{k+\tau}\big).

Applying the identity

(3.3)  (B + \alpha\gamma^*)^{-1} = B^{-1} - \frac{B^{-1}\alpha\gamma^* B^{-1}}{1 + \gamma^* B^{-1}\alpha},

valid for any nonsingular matrix B, we have

\gamma_k^* A^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau}) = \frac{\gamma_k^*\tilde A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau})}{1 + \gamma_k^*\tilde A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau})},

where \tilde A_k = A - (\gamma_{k+\tau} + \gamma_{k-\tau})\gamma_k^*, and we have used the previously made convention that \gamma_l = 0 for l \le 0 or l > T+\tau. Note that A_k = \tilde A_k - \gamma_k(\gamma_{k+\tau} + \gamma_{k-\tau})^*. Using (3.3) again, we have

\gamma_k^*\tilde A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau}) = \gamma_k^* A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau}) - \frac{\gamma_k^* A_k^{-1}\gamma_k\,(\gamma_{k+\tau} + \gamma_{k-\tau})^* A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau})}{1 + (\gamma_{k+\tau} + \gamma_{k-\tau})^* A_k^{-1}\gamma_k}.

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 11

By Lemmas B.1 and B.2, we have

\gamma_k^*\tilde A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau}) = -\frac{c}{2}m_N(z)\,(\gamma_{k+\tau} + \gamma_{k-\tau})^* A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau}) + o_{a.s.}(1).

Consequently,

\gamma_k^* A^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau}) = \frac{-\frac{c}{2}m_N(z)(\gamma_{k+\tau} + \gamma_{k-\tau})^* A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau})}{1 - \frac{c}{2}m_N(z)(\gamma_{k+\tau} + \gamma_{k-\tau})^* A_k^{-1}(\gamma_{k+\tau} + \gamma_{k-\tau})} + o_{a.s.}(1).

Write A_{k,k+\tau} = A_k - \gamma_{k+2\tau}\gamma_{k+\tau}^* - \gamma_{k+\tau}\gamma_{k+2\tau}^*, which is independent of \gamma_{k+\tau}. Then, using (3.3) again together with Lemmas B.1 and B.2, we obtain

(3.4)  \gamma_{k+\tau}^* A_k^{-1}\gamma_{k+\tau} = \frac{\frac{c}{2}m_N(z)}{1 - \frac{c}{2}m_N(z)\,\gamma_{k+2\tau}^* A_{k,k+\tau}^{-1}\gamma_{k+2\tau}} + o_{a.s.}(1).

By the same reasoning,

(3.5)  \gamma_{k-\tau}^* A_k^{-1}\gamma_{k-\tau} = \frac{\frac{c}{2}m_N(z)}{1 - \frac{c}{2}m_N(z)\,\gamma_{k-2\tau}^* A_{k,k-\tau}^{-1}\gamma_{k-2\tau}} + o_{a.s.}(1).

Next, we consider the cross terms. We have

(3.6)  \gamma_{k-\tau}^* A_k^{-1}\gamma_{k+\tau} = \frac{-\frac{c}{2}m_N(z)\,\gamma_{k-\tau}^* A_{k,k+\tau}^{-1}\gamma_{k+2\tau}}{1 - \frac{c}{2}m_N(z)\,\gamma_{k+2\tau}^* A_{k,k+\tau}^{-1}\gamma_{k+2\tau}} + o_{a.s.}(1).

Suppose that m_N(z) converges to m(z) along some subsequence. By Lemmas B.3 and B.4, (3.2) will then converge to

(3.7)  c + czm(z) - 1 = -\frac{1}{1 - \frac{c^2m^2(z)}{2x}},

12 JIN, B. S. ET AL.

where x is the root of the equation x^2 = x - \frac{c^2m^2(z)}{4} with the larger absolute value. Substituting the expression for x, we obtain

(3.8)  \big(1 - c^2m^2(z)\big)\big(c + czm(z) - 1\big)^2 = 1.

This can be further expanded to

(3.9)  (cm(z))^4 - \frac{2(1-c)}{z}(cm(z))^3 + \frac{(1-c)^2 - z^2}{z^2}(cm(z))^2 + \frac{2(1-c)}{z}\,cm(z) + \frac{1 - (1-c)^2}{z^2} = 0.

Now, we shall employ the method developed in Bai et al. (2007) to solve this 4th degree polynomial equation and identify the unique solution giving the limiting spectral distribution. Rewrite equation (3.9) as

(3.10)  \Big((cm(z))^2 - \frac{1-c}{z}\,cm(z) + \frac{y}{2}\Big)^2 = (1+y)(cm(z))^2 - \frac{(1-c)(y+2)}{z}\,cm(z) + \frac{y^2}{4} - \frac{1 - (1-c)^2}{z^2}.

Let y_0 be the largest real root of the equation

y^3 - \frac{(1-c)^2 - z^2}{z^2}\,y^2 - \frac{4}{z^2}\,y - \frac{4}{z^2} = 0.

Let f(y) = y^3 - \frac{(1-c)^2 - z^2}{z^2}y^2 - \frac{4}{z^2}y - \frac{4}{z^2}. Since f(+\infty) > 0 and f(0) < 0, we have y_0 > 0. Further, if z \to 0, then y_0 \to \infty and z^2 y_0 \to (1-c)^2. If we replace y by y_0 in equation (3.10), the solutions of (3.9) are those of the equations

(cm(z))^2 - \frac{1-c}{z}\,cm(z) + \frac{y_0}{2} = \pm\sqrt{1+y_0}\Big(cm(z) - \frac{(y_0+2)(1-c)}{2z(1+y_0)}\Big),

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 13

from which we get four roots:

m_1(z) = \frac{\Big(\frac{1-c}{z} + \sqrt{1+y_0}\Big) + \sqrt{\Big(\frac{1-c}{z} - \frac{1}{\sqrt{1+y_0}}\Big)^2 - \frac{y_0^2}{1+y_0}}}{2c},

m_2(z) = \frac{\Big(\frac{1-c}{z} + \sqrt{1+y_0}\Big) - \sqrt{\Big(\frac{1-c}{z} - \frac{1}{\sqrt{1+y_0}}\Big)^2 - \frac{y_0^2}{1+y_0}}}{2c},

m_3(z) = \frac{\Big(\frac{1-c}{z} - \sqrt{1+y_0}\Big) + \sqrt{\Big(\frac{1-c}{z} + \frac{1}{\sqrt{1+y_0}}\Big)^2 - \frac{y_0^2}{1+y_0}}}{2c},

m_4(z) = \frac{\Big(\frac{1-c}{z} - \sqrt{1+y_0}\Big) - \sqrt{\Big(\frac{1-c}{z} + \frac{1}{\sqrt{1+y_0}}\Big)^2 - \frac{y_0^2}{1+y_0}}}{2c}.

Now, we claim that \lim_{z\to 0} zm(z) satisfies

(3.11)  \lim_{z\to 0} zm(z) = \begin{cases} \dfrac{1}{c} - 1, & c > 1, \\[2pt] 0, & c \le 1. \end{cases}

To show our claim, first, by equation (3.8), we have

\big(z^2 - c^2z^2m^2(z)\big)\big(czm(z) - (1-c)\big)^2 = z^2.

This means zm(z) must be bounded as z \to 0. Otherwise, the LHS of the equation above is unbounded while the RHS tends to 0, which is a contradiction. Hence, letting z \to 0, the equation above simplifies to

(czm(z))^2\big(czm(z) - (1-c)\big)^2 = 0.

This means that, along any convergent subsequence \{z_k m(z_k)\}, the limit \lim_{z_k\to 0} z_k m(z_k) can only be either 0 or (1-c)/c. Notice that -\lim_{z\to 0} zm(z) is the point mass of F_\tau at 0, which is nonnegative. Therefore, when c < 1, we cannot have \lim_{z_k\to 0} z_k m(z_k) = (1-c)/c. Hence \lim_{z_k\to 0} z_k m(z_k) = 0 and the second

14 JIN, B. S. ET AL.

part of our claim is proved. When c > 1, assume that \lim_{z_k\to 0} z_k m(z_k) \ne (1-c)/c, i.e. \lim_{z_k\to 0} z_k m(z_k) = 0; then (3.7) becomes

c - 1 = -\frac{1}{1 - \frac{c^2m^2(z)}{2x}}.

Solving this for x, we have

x = \frac{c^2m^2(z)(c-1)}{2c} = \frac{c-2}{2(c-1)}.

Here the last equality is due to the fact that c^2m^2(z) \to 1 - \frac{1}{(1-c)^2}, which can be derived from (3.8) and our assumption that \lim zm(z) = 0. However, solving the equation x^2 = x - \frac{c^2m^2(z)}{4} and using c^2m^2(z) \to 1 - \frac{1}{(1-c)^2} again, we have

x = \frac{1}{2}\Big(1 + \frac{1}{c-1}\Big) = \frac{c}{2(c-1)}

as the root with the larger absolute value, which contradicts our last expression for x. Hence the first part of the claim is proved.

By (3.11), the corresponding one-sided limits as z \to 0^\pm, and the requirement that \varphi_c(x) = \lim_{v\to 0}\frac{1}{\pi}\operatorname{Im}(m(x+iv)) \ge 0, we get

m(z) = \begin{cases} m_1(z), & z < 0, \\ m_3(z), & z > 0. \end{cases}

Therefore, we have

\varphi_c(x) = \lim_{v\to 0}\frac{1}{\pi}\operatorname{Im}(m(x+iv)) = \frac{1}{2c\pi}\sqrt{\frac{y_0^2}{1+y_0} - \Big(\frac{1-c}{x} + \frac{1}{\sqrt{1+y_0}}\Big)^2}, \qquad |x| \le a,

where y_0 is the largest real root of the equation

y^3 - \frac{(1-c)^2 - x^2}{x^2}\,y^2 - \frac{4}{x^2}\,y - \frac{4}{x^2} = 0

and a satisfies the equations

\frac{y_1^2}{1+y_1} - \Big(\frac{1-c}{a} + \frac{1}{\sqrt{1+y_1}}\Big)^2 = 0, \qquad y_1^3 - \frac{(1-c)^2 - a^2}{a^2}\,y_1^2 - \frac{4}{a^2}\,y_1 - \frac{4}{a^2} = 0.

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 15

Solving these equations under the condition a > 0, we have

a = \begin{cases} \dfrac{(1-c)\sqrt{1+y_1}}{y_1 - 1}, & c \ne 1, \\[2pt] 2, & c = 1, \end{cases}

where y_1 can be chosen as a real root of the equation

(3.12)  \big((1-c)^2 - 1\big)y^3 + y^2 + y - 1 = 0

such that y_1 > 1 if c < 1 and y_1 \in (0, 1) if c > 1. To show the unique existence of y_1, let f(y) = ((1-c)^2 - 1)y^3 + y^2 + y - 1. If c < 1, then the leading coefficient is negative and, by f(-\infty) > 0, f(0) < 0, f(1) = (1-c)^2 > 0 and f(+\infty) < 0, there are three real roots y_1 > 1, y_2 \in (0, 1) and y_3 < 0 of (3.12). Similarly, if 1 < c < 2, there are three real roots y_1 \in (0, 1), y_2 > 1 and y_3 < 0 of (3.12). If c = 2, it is easy to see that there are exactly two real roots of (3.12): y_1 = (\sqrt{5} - 1)/2 \in (0, 1) and y_2 = -(\sqrt{5} + 1)/2 < 0. If c > 2, then by f(0) < 0 and f(1) > 0 there is a real root y_1 \in (0, 1), and since the number of real roots in (0, 1) must be odd, if there were more than one, then by the continuity of f(y) all three roots y_1, y_2, y_3 of f(y) would lie in (0, 1), which would contradict y_1 + y_2 + y_3 = -1/((1-c)^2 - 1) < 0. Thus there is only one real root y_1 \in (0, 1) if c > 2.

The proof of Theorem 1.1 is complete.

APPENDIX A: JUSTIFICATION OF TRUNCATION, CENTRALIZATION AND STANDARDIZATION

Note that rank(AB - CD) \le rank(A - C) + rank(B - D), because AB - CD = (A - C)B + C(B - D). Let \tilde\varepsilon_{it} = \varepsilon_{it} I(|\varepsilon_{it}| < \eta_N T^{1/(2+\delta)}), \tilde\gamma_k = \frac{1}{\sqrt{2T}}(\tilde\varepsilon_{1k}, ..., \tilde\varepsilon_{Nk})' and

\tilde M_\tau = \sum_{k=1}^{T}\big(\tilde\gamma_k\tilde\gamma_{k+\tau}^* + \tilde\gamma_{k+\tau}\tilde\gamma_k^*\big).
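Appendix A repeatedly compares ESDs of matrices that differ by a low-rank or small-norm perturbation. The key fact behind the first comparison — the rank inequality \|F^A - F^B\| \le \mathrm{rank}(A-B)/N for Hermitian A, B (a Theorem A.43-type bound) — can be checked numerically. The sketch below (not from the paper) applies a rank-1 update to a Wigner-type matrix; by eigenvalue interlacing, the eigenvalue counting functions can differ by at most 1 at any point.

```python
# Verify the rank inequality ||F^A - F^B|| <= rank(A - B)/N for a
# rank-1 Hermitian perturbation of a Wigner-type matrix.
import numpy as np

rng = np.random.default_rng(2)
N = 500
X = rng.standard_normal((N, N))
A0 = (X + X.T) / np.sqrt(2 * N)        # Wigner-type matrix, spectrum ~ [-2, 2]
u = rng.standard_normal(N)
B0 = A0 + 10 * np.outer(u, u) / N      # rank-1 perturbation, spike near 10

eigA = np.linalg.eigvalsh(A0)
eigB = np.linalg.eigvalsh(B0)

# Compare eigenvalue counting functions on a grid containing all eigenvalues
grid = np.sort(np.concatenate([eigA, eigB]))
cA = np.searchsorted(eigA, grid, side='left')   # #{eig(A0) < x}
cB = np.searchsorted(eigB, grid, side='left')   # #{eig(B0) < x}
gap_counts = np.abs(cA - cB).max()              # sup-norm difference, in counts
print(gap_counts)                               # at most rank(B0 - A0) = 1
```

Here the maximal count difference equals exactly 1: the bound rank/N is attained between the bulk spectrum and the detached spike, and interlacing prevents any larger discrepancy.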

16 JIN, B. S. ET AL.

By Theorem A.43 of Bai & Silverstein (2010),

\|F^{\tilde M_\tau} - F^{M_\tau}\| \le \frac{1}{N}\,\mathrm{rank}(\tilde M_\tau - M_\tau) \le \frac{2}{N}\,\mathrm{rank}\Big(\sum_{k=1}^{T}\big(\gamma_k\gamma_{k+\tau}^* - \tilde\gamma_k\tilde\gamma_{k+\tau}^*\big)\Big) \le \frac{4}{N}\sum_{i=1}^{N}\sum_{t=1}^{T+\tau} I\big(|\varepsilon_{it}| \ge \eta_N T^{1/(2+\delta)}\big).

By (1.1) we have

(A.1)  E\Big(\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T+\tau} I(|\varepsilon_{it}| \ge \eta T^{1/(2+\delta)})\Big) \le \frac{1}{\eta^{2+\delta}NT}\sum_{i=1}^{N}\sum_{t=1}^{T+\tau} E\big(|\varepsilon_{it}|^{2+\delta} I(|\varepsilon_{it}| \ge \eta T^{1/(2+\delta)})\big) = o(1)

and

\mathrm{Var}\Big(\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T+\tau} I(|\varepsilon_{it}| \ge \eta T^{1/(2+\delta)})\Big) \le \frac{1}{\eta^{2+\delta}N^2T}\sum_{i=1}^{N}\sum_{t=1}^{T+\tau} E\big(|\varepsilon_{it}|^{2+\delta} I(|\varepsilon_{it}| \ge \eta T^{1/(2+\delta)})\big) = o(1/N).

Applying Bernstein's inequality, for all small \varepsilon > 0 and large N, we have

P\Big(\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T+\tau} I(|\varepsilon_{it}| \ge \eta T^{1/(2+\delta)}) \ge \varepsilon\Big) \le 2e^{-\frac{1}{2}\varepsilon^2 N}.

By the Borel-Cantelli lemma, with probability 1 we have \|F^{\tilde M_\tau} - F^{M_\tau}\| \to 0.

Let \hat\varepsilon_{it} = \tilde\varepsilon_{it} - E\tilde\varepsilon_{it}, \hat\gamma_k = \frac{1}{\sqrt{2T}}(\hat\varepsilon_{1k}, ..., \hat\varepsilon_{Nk})', and \hat M_\tau = \sum_{k=1}^{T}(\hat\gamma_k\hat\gamma_{k+\tau}^* + \hat\gamma_{k+\tau}\hat\gamma_k^*). By Theorem A.46 of Bai & Silverstein (2010),

L(F^{\tilde M_\tau}, F^{\hat M_\tau}) \le \max_k \big|\lambda_k(\tilde M_\tau) - \lambda_k(\hat M_\tau)\big| \le \|\tilde M_\tau - \hat M_\tau\| \le \Big\|\sum_{k=1}^{T}\big(\hat\gamma_k(E\tilde\gamma_{k+\tau})^* + E\tilde\gamma_{k+\tau}\hat\gamma_k^* + \hat\gamma_{k+\tau}(E\tilde\gamma_k)^* + E\tilde\gamma_k\hat\gamma_{k+\tau}^*\big)\Big\| + \Big\|\sum_{k=1}^{T}\big(E\tilde\gamma_k(E\tilde\gamma_{k+\tau})^* + E\tilde\gamma_{k+\tau}(E\tilde\gamma_k)^*\big)\Big\|,

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 17

where L(\cdot, \cdot) is the Levy distance between two distribution functions. For the second part, we have

\Big\|\sum_{k=1}^{T}\big(E\tilde\gamma_k(E\tilde\gamma_{k+\tau})^* + E\tilde\gamma_{k+\tau}(E\tilde\gamma_k)^*\big)\Big\| \le \frac{C}{T}\sum_{k=1}^{T}\sum_{i=1}^{N}\big|E\big(\varepsilon_{ik}I(|\varepsilon_{ik}| \ge \eta T^{1/(2+\delta)})\big)\big|\;\big|E\big(\varepsilon_{i(k+\tau)}I(|\varepsilon_{i(k+\tau)}| \ge \eta T^{1/(2+\delta)})\big)\big| \le \frac{C}{T}\sum_{k=1}^{T+\tau}\sum_{i=1}^{N} E\big(|\varepsilon_{ik}|^{2+\delta}I(|\varepsilon_{ik}| \ge \eta T^{1/(2+\delta)})\big) = o(1).

For the first part, notice that

\Big\|\sum_{k=1}^{T}\big(\hat\gamma_k(E\tilde\gamma_{k+\tau})^* + \hat\gamma_{k+\tau}(E\tilde\gamma_k)^*\big)\Big\|^2 \le 2\Big(\Big\|\sum_{k=1}^{T}\hat\gamma_k(E\tilde\gamma_{k+\tau})^*\Big\|^2 + \Big\|\sum_{k=1}^{T}\hat\gamma_{k+\tau}(E\tilde\gamma_k)^*\Big\|^2\Big)

and

\Big\|\sum_{k=1}^{T}\hat\gamma_k(E\tilde\gamma_{k+\tau})^*\Big\|^2 \le \frac{1}{4T^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\Big(\sum_{k=1}^{T}\hat\varepsilon_{ki}E\tilde\varepsilon_{(k+\tau)j}\Big)^2 = \frac{1}{4T^2}\sum_{i,j}\Big(\sum_{k_1=1}^{T}\hat\varepsilon_{k_1 i}^2\big(E\tilde\varepsilon_{(k_1+\tau)j}\big)^2 + \sum_{k_1 \ne k_2}\hat\varepsilon_{k_1 i}\hat\varepsilon_{k_2 i}E\tilde\varepsilon_{(k_1+\tau)j}E\tilde\varepsilon_{(k_2+\tau)j}\Big) \equiv J_1 + J_2.

Since E\hat\varepsilon_{k_1 i}^2 \le 1 and E|\varepsilon_{(k_1+\tau)j}|^{2+\delta} \le M, there exist constants C_1, C_2 and C_3 such that

EJ_1 = \frac{1}{4T^2}\sum_{i,j}\sum_{k_1} E\hat\varepsilon_{k_1 i}^2\big(E\tilde\varepsilon_{(k_1+\tau)j}\big)^2 \le \frac{C_1}{4T^2}\sum_{i,j}\sum_{k_1}\Big(E\big(|\varepsilon_{jk_1}|I(|\varepsilon_{jk_1}| \ge \eta T^{1/(2+\delta)})\big)\Big)^2 \le \frac{C_1}{4T^2}\,\eta^{-2(1+\delta)}T^{-\frac{2(1+\delta)}{2+\delta}}\sum_{i,j}\sum_{k_1}\Big(E\big(|\varepsilon_{jk_1}|^{2+\delta}I(|\varepsilon_{jk_1}| \ge \eta T^{1/(2+\delta)})\big)\Big)^2 = O\big(T^{-\frac{\delta}{2+\delta}}\big)

18 JIN, B. S. ET AL.

and

\mathrm{Var}\,J_1 = \frac{1}{4^2T^4}\sum_{i}\sum_{k_1} E\big(\hat\varepsilon_{k_1 i}^2 - E\hat\varepsilon_{k_1 i}^2\big)^2\Big(\sum_{j}\big(E\tilde\varepsilon_{(k_1+\tau)j}\big)^2\Big)^2 \le \frac{C_2}{T^4}\sum_{i}\sum_{k_1} E\hat\varepsilon_{k_1 i}^4\Big(N\eta^{-2(1+\delta)}T^{-\frac{2(1+\delta)}{2+\delta}}\Big)^2 = O\big(T^{-\frac{4\delta}{2+\delta}}\big).

The previous two bounds imply that J_1 \to 0 a.s. Furthermore, we have

\mathrm{Var}\,J_2 = \frac{1}{4^2T^4}\sum_{i}\sum_{k_1\ne k_2} E\hat\varepsilon_{k_1 i}^2\,E\hat\varepsilon_{k_2 i}^2\Big(\sum_{j} E\tilde\varepsilon_{(k_1+\tau)j}E\tilde\varepsilon_{(k_2+\tau)j}\Big)^2 \le \frac{C_3}{T^4}\sum_{i}\sum_{k_1\ne k_2}\Big(N\eta^{-2(1+\delta)}T^{-\frac{2(1+\delta)}{2+\delta}}\Big)^2 = O\big(T^{-\frac{2\delta}{2+\delta}}\big),

which, since EJ_2 = 0, implies J_2 \to 0 a.s. Hence \|\sum_{k=1}^{T}\hat\gamma_k(E\tilde\gamma_{k+\tau})^*\|^2 \to 0 a.s.; similarly \|\sum_{k=1}^{T}\hat\gamma_{k+\tau}(E\tilde\gamma_k)^*\|^2 \to 0 a.s. Thus L(F^{\tilde M_\tau}, F^{\hat M_\tau}) \to 0 a.s.

Now, we want to rescale the variables. Let \sigma_{ij}^2 = E|\hat\varepsilon_{ij}|^2 = E|\tilde\varepsilon_{ij} - E\tilde\varepsilon_{ij}|^2. Define E_1 = \{(i,j) : \sigma_{ij}^2 < 1 - \delta_1\} and

\check\varepsilon_{it} = \begin{cases} \delta_1 X_{it}, & (i,t) \in E_1, \\ \hat\varepsilon_{it}/\sigma_{it}, & \text{otherwise}. \end{cases}

Here \delta_1 = T^{-\frac{\delta}{4+2\delta}} and the X_{it}'s are iid random variables taking the values 1 and −1, each with probability 1/2. Note that E\check\varepsilon_{it} = 0 and \mathrm{Var}(\check\varepsilon_{it}) \le 1. Let \check\gamma_k = \frac{1}{\sqrt{2T}}(\check\varepsilon_{1k}, ..., \check\varepsilon_{Nk})' and \check M_\tau = \sum_{k=1}^{T}(\check\gamma_k\check\gamma_{k+\tau}^* + \check\gamma_{k+\tau}\check\gamma_k^*). For simplicity, denote A = (\hat\gamma_1, \hat\gamma_2, ..., \hat\gamma_T), B = (\hat\gamma_{1+\tau}, \hat\gamma_{2+\tau}, ..., \hat\gamma_{T+\tau}), \check A = (\check\gamma_1, \check\gamma_2, ..., \check\gamma_T) and \check B = (\check\gamma_{1+\tau}, \check\gamma_{2+\tau}, ..., \check\gamma_{T+\tau}). Then by Corollary A.41 of Bai & Silverstein

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 19

(2010), we have

L^3(F^{\hat M_\tau}, F^{\check M_\tau}) = L^3\big(F^{AB^*+BA^*}, F^{\check A\check B^*+\check B\check A^*}\big) \le \frac{1}{N}\mathrm{tr}\big[(AB^*+BA^* - \check A\check B^* - \check B\check A^*)(AB^*+BA^* - \check A\check B^* - \check B\check A^*)^*\big] \le \frac{2}{N}\mathrm{tr}\big[(AB^* - \check A\check B^*)(AB^* - \check A\check B^*)^*\big] + \frac{2}{N}\mathrm{tr}\big[(BA^* - \check B\check A^*)(BA^* - \check B\check A^*)^*\big]

and

\frac{1}{N}\mathrm{tr}\big[(AB^* - \check A\check B^*)(AB^* - \check A\check B^*)^*\big] = \frac{1}{N}\mathrm{tr}\big[\big((A-\check A)B^* + \check A(B-\check B)^*\big)\big((A-\check A)B^* + \check A(B-\check B)^*\big)^*\big] \le \frac{2}{N}\mathrm{tr}\big[\big((A-\check A)B^*\big)\big((A-\check A)B^*\big)^* + \big(\check A(B-\check B)^*\big)\big(\check A(B-\check B)^*\big)^*\big].

Define J_2 = \frac{1}{N}\mathrm{tr}\big((A-\check A)B^*\big)\big((A-\check A)B^*\big)^*; then we have

J_2 = \frac{1}{N}\mathrm{tr}\big((A-\check A)^*(A-\check A)(B^*B)\big) = \frac{C}{T^3N}\sum_{i=1}^{N}\sum_{j=1}^{N}\Big(\sum_{k=1}^{T}(\check\varepsilon_{ik}-\hat\varepsilon_{ik})\hat\varepsilon_{j(k+\tau)}\Big)^2 = \frac{C}{T^3N}\Big[\sum_{i,j}\sum_{k}(\check\varepsilon_{ik}-\hat\varepsilon_{ik})^2\hat\varepsilon_{j(k+\tau)}^2 + \sum_{i,j}\sum_{k_1 > k_2}(\check\varepsilon_{ik_1}-\hat\varepsilon_{ik_1})(\check\varepsilon_{ik_2}-\hat\varepsilon_{ik_2})\hat\varepsilon_{j(k_1+\tau)}\hat\varepsilon_{j(k_2+\tau)} + \sum_{i,j}\sum_{k_1 < k_2}(\check\varepsilon_{ik_1}-\hat\varepsilon_{ik_1})(\check\varepsilon_{ik_2}-\hat\varepsilon_{ik_2})\hat\varepsilon_{j(k_1+\tau)}\hat\varepsilon_{j(k_2+\tau)}\Big] \equiv J_{21} + J_{22} + J_{23}.

20 JIN, B. S. ET AL.

Furthermore,

J_{21} = \frac{C}{T^3N}\Big[\sum_{j=1}^{N}\sum_{(i,k)\in E_1}|\check\varepsilon_{ik}-\hat\varepsilon_{ik}|^2\hat\varepsilon_{j(k+\tau)}^2 + \sum_{j=1}^{N}\sum_{(i,k)\notin E_1}|\check\varepsilon_{ik}-\hat\varepsilon_{ik}|^2\hat\varepsilon_{j(k+\tau)}^2\Big] \equiv J_{211} + J_{212}.

By the definition of E_1, we have 1 - \sigma_{ik}^2 > \delta_1 for any (i,k) \in E_1, and therefore

EJ_{211} \le \frac{C}{T^3N}\sum_{j=1}^{N}\sum_{(i,k)\in E_1} E|\check\varepsilon_{ik}-\hat\varepsilon_{ik}|^2\,E\hat\varepsilon_{j(k+\tau)}^2 \le \frac{C}{T^3N}\,\delta_1^{-1}\sum_{j=1}^{N}\sum_{i=1}^{N}\sum_{k=1}^{T}(1-\sigma_{ik}^2)\,E\hat\varepsilon_{j(k+\tau)}^2 = O\big(T^{-\frac{\delta}{4+2\delta}}\big).

For any (i,k) \notin E_1, we have

\Big(1 - \frac{1}{\sigma_{ik}}\Big)^2 = \frac{(\sigma_{ik}-1)^2}{\sigma_{ik}^2} = \frac{(1-\sigma_{ik}^2)^2}{\sigma_{ik}^2(1+\sigma_{ik})^2} \le C(1-\sigma_{ik}^2)^2 \le C\eta^{-2\delta}T^{-\frac{2\delta}{2+\delta}}.

Here and in what follows, we assume that \eta = \eta_N \to 0 slowly enough that the above upper bound tends to 0 as T \to \infty. This together with E\hat\varepsilon_{ij}^2 \le 1 implies

EJ_{212} = \frac{C}{T^3N}\sum_{j=1}^{N}\sum_{(i,k)\notin E_1}\Big(1-\frac{1}{\sigma_{ik}}\Big)^2 E\hat\varepsilon_{ik}^2\,E\hat\varepsilon_{j(k+\tau)}^2 \le \frac{C}{T^3N}\sum_{j=1}^{N}\sum_{(i,k)\notin E_1}\Big(1-\frac{1}{\sigma_{ik}}\Big)^2 = O\big(\eta^{-2\delta}T^{-\frac{2\delta}{2+\delta}}\big) = o(1).

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 21

Note that the summands in J_{22} and J_{23} are pairwise orthogonal; hence EJ_{22} = EJ_{23} = 0. Therefore, EJ_2 \to 0. Now, we want to compute \mathrm{Var}\,J_2. First, we have

\mathrm{Var}(J_{21}) \le \frac{C}{T^6N^2}\sum_{j=1}^{N}\sum_{i,k} E|\check\varepsilon_{ik}-\hat\varepsilon_{ik}|^4\,E\hat\varepsilon_{j(k+\tau)}^4 \le \frac{C}{T^6N^2}\sum_{j=1}^{N}\sum_{i,k}\big(E\check\varepsilon_{ik}^4 + E\hat\varepsilon_{ik}^4\big)E\hat\varepsilon_{j(k+\tau)}^4 = O\big(T^{-\frac{4\delta}{2+\delta}}\big).

For simplicity, write

J_{22} = \frac{C}{T^3N}\sum_{j}\sum_{(i,k)\notin E_1}\Big(1-\frac{1}{\sigma_{ik}}\Big)^2\Big[\big(\hat\varepsilon_{ik}^2-\sigma_{ik}^2\big)\big(\hat\varepsilon_{j(k+\tau)}^2-\sigma_{j(k+\tau)}^2\big) + \sigma_{ik}^2\big(\hat\varepsilon_{j(k+\tau)}^2-\sigma_{j(k+\tau)}^2\big) + \sigma_{j(k+\tau)}^2\big(\hat\varepsilon_{ik}^2-\sigma_{ik}^2\big) + \sigma_{ik}^2\sigma_{j(k+\tau)}^2\Big] \equiv J_{221} + J_{222} + J_{223} + J_{224}.

Note that in all of these expressions except J_{224} the components are orthogonal to each other. In addition, being a constant, J_{224} does not contribute to \mathrm{Var}\,J_2.

22 JIN, B. S. ET AL.

Bounding each remaining term in the same way — using E\hat\varepsilon_{ik}^4 \le C\eta^2 T^{\frac{2}{2+\delta}}, the bound (1 - 1/\sigma_{ik})^2 \le C\eta^{-2\delta}T^{-\frac{2\delta}{2+\delta}} and the orthogonality of the centered summands — similar computations show that

\mathrm{Var}\,J_{221},\ \mathrm{Var}\,J_{222},\ \mathrm{Var}\,J_{223} = O\big(T^{-\varepsilon_0}\big)

for some \varepsilon_0 > 0 depending only on \delta; the off-diagonal and cross terms (including those with k_1 = k_2 + \tau and with coincident indices i = j) admit bounds of the same kind, of orders O(T^{-2}), O(T^{-3}) and O(T^{-3+\frac{2\delta}{2+\delta}}).

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 23

Therefore, we have \mathrm{Var}\,J_2 = O(T^{-\varepsilon}) for some \varepsilon > 0. Thus, J_2 \to 0 a.s. Similarly, we have E\big[\frac{1}{N}\mathrm{tr}(\check A(B-\check B)^*)(\check A(B-\check B)^*)^*\big] \to 0 and \mathrm{Var}\big[\frac{1}{N}\mathrm{tr}(\check A(B-\check B)^*)(\check A(B-\check B)^*)^*\big] = O(T^{-\varepsilon}) for some \varepsilon > 0. Hence, we have \frac{1}{N}\mathrm{tr}\big[(AB^* - \check A\check B^*)(AB^* - \check A\check B^*)^*\big] \to 0 a.s. By interchanging A and B, we have \frac{1}{N}\mathrm{tr}\big[(BA^* - \check B\check A^*)(BA^* - \check B\check A^*)^*\big] \to 0 a.s. Therefore, L^3(F^{\hat M_\tau}, F^{\check M_\tau}) \to 0 a.s.

APPENDIX B: SOME TECHNICAL LEMMAS

Lemma B.1. Under the assumptions of Theorem 1.1,

\gamma_k^* A_k^{-1}\gamma_k - \frac{c}{2}m_N(z) \to 0

almost surely and uniformly in k \le T + \tau, where m_N(z) = \frac{1}{N}\mathrm{tr}\,A^{-1}.

The proof of Lemma B.1 is similar to the proof of Lemma 9.1 of Bai and Silverstein (2010).

Proof. Write A_k^{-1} = (a_{ij}). For any given r, we have

E\Big|\gamma_k^* A_k^{-1}\gamma_k - \frac{1}{2T}\mathrm{tr}\,A_k^{-1}\Big|^{2r} \le 2^{2r}\big(E|S_1|^{2r} + E|S_2|^{2r}\big),

where S_1 = \frac{1}{2T}\sum_{i=1}^{N} a_{ii}\big(|\varepsilon_{ki}|^2 - 1\big) and S_2 = \frac{1}{2T}\sum_{i \ne j \le N} a_{ij}\bar\varepsilon_{ki}\varepsilon_{kj}.

24 JIN, B. S. ET AL.

By noting |a_{ii}| \le \|A_k^{-1}\| \le 1/v and E\big||\varepsilon_{ki}|^2 - 1\big|^{(2+\delta)/2} \le M < \infty, we get

(B.1)  E|S_1|^{2r} = \frac{1}{4^rT^{2r}}E\Big|\sum_{i=1}^{N}\big(|\varepsilon_{ki}|^2-1\big)a_{ii}\Big|^{2r} \le \frac{C_r\,\eta^{4r}}{v^{2r}}\,T^{-\frac{2\delta r}{2+\delta}}\sum_{l=1}^{r}\eta^{-(2+\delta)l}T^{-l}N^l M^l l^{2r}.

Next, let us consider

E|S_2|^{2r} = \frac{1}{4^rT^{2r}}\sum a_{i_1j_1}\bar a_{t_1l_1}\cdots a_{i_rj_r}\bar a_{t_rl_r}\,E\big(\bar\varepsilon_{ki_1}\varepsilon_{kt_1}\varepsilon_{kj_1}\bar\varepsilon_{kl_1}\cdots\bar\varepsilon_{ki_r}\varepsilon_{kt_r}\varepsilon_{kj_r}\bar\varepsilon_{kl_r}\big).

Draw a directed graph G of 2r edges that links the i's to the j's and the l's to the t's, s = 1, ..., r. Note that if G has a vertex whose degree is 1, then the graph corresponds to a term with expectation 0. That is, for any nonzero term, the vertex degrees of the graph are not less than 2. Write the non-coincident vertices as v_1, ..., v_m with degrees p_1, ..., p_m greater than 1. We have m \le 2r. By assumption, we have

\big|E\big(\bar\varepsilon_{ki_1}\varepsilon_{kt_1}\cdots\varepsilon_{kj_r}\bar\varepsilon_{kl_r}\big)\big| \le \big(\eta^2 T^{2/(2+\delta)}\big)^{2r-m}.

Now, suppose that the graph consists of q connected components G_1, ..., G_q with m_1, ..., m_q non-coincident vertices, respectively. Let us consider the contribution of G_1 to E|S_2|^{2r}. Assume that G_1 has s_1 edges, e_1, ..., e_{s_1}. Choose a spanning tree G_1' from G_1, and assume its edges are e_1, ..., e_{m_1-1}, without loss of generality. Note that

\sum_{v_1,\dots,v_{m_1}\le N}\prod_{t=1}^{m_1-1}|a_{e_t}|^2 \le \|A_k^{-1}\|^{2m_1-2}N \le \frac{N}{v^{2m_1-2}}

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 25

and

\sum_{v_1,\dots,v_{m_1}\le N}\prod_{t=m_1}^{s_1}|a_{e_t}|^2 \le \frac{N^{m_1}}{v^{2s_1-2m_1+2}}.

Here, the first inequality follows from the fact that \sum_{v_2}|a_{v_1v_2}|^2 \le \|A_k^{-1}\|^2 \le 1/v^2, since this sum is a diagonal element of A_k^{-1}(A_k^{-1})^*. The second inequality follows from the fact that \sum_{v}|a_{v_1v}|^l \le 1/v^l for any l \ge 2 and that s_1 \ge m_1, since all vertices have degrees not less than 2. Therefore, the contribution of G_1 is bounded by

\sum_{v_1,\dots,v_{m_1}\le N}\prod_{t=1}^{s_1}|a_{e_t}| \le \Big(\sum_{v_1,\dots,v_{m_1}\le N}\prod_{t=1}^{m_1-1}|a_{e_t}|^2\Big)^{1/2}\Big(\sum_{v_1,\dots,v_{m_1}\le N}\prod_{t=m_1}^{s_1}|a_{e_t}|^2\Big)^{1/2} \le \frac{N^{(m_1+1)/2}}{v^{s_1}}.

Noting that m_1 + \cdots + m_q = m and s_1 + \cdots + s_q = 2r, eventually we obtain that the contribution of the isomorphism class of a given canonical graph is at most N^{m/2}/v^{2r} times a bounded factor. Because the two vertices of each edge cannot coincide, we have q \le m/2. The number of canonical graphs is less than (m^2)^{2r} \le m^{4r}. We finally obtain

(B.2)  E|S_2|^{2r} \le \frac{4^r}{v^{2r}T^{2r}}\sum_{m=2}^{2r}N^{m/2}\big(\eta^2T^{2/(2+\delta)}\big)^{2r-m}m^{4r} \le \frac{4^r\eta^{4r}}{v^{2r}}\,T^{-\frac{2\delta r}{2+\delta}}\sum_{m=2}^{2r}\big(\eta^{-2}N^{1/2}T^{-2/(2+\delta)}\big)^m m^{4r}.

Using (B.1) and (B.2), for any t > 0, there exists r > t/\delta + t/2 such that

E\Big|\gamma_k^* A_k^{-1}\gamma_k - \frac{1}{2T}\mathrm{tr}\,A_k^{-1}\Big|^{2r} = O(T^{-t}).

Therefore, by the Borel-Cantelli lemma,

(B.3)  \gamma_k^* A_k^{-1}\gamma_k - \frac{1}{2T}\mathrm{tr}\,A_k^{-1} \to 0

almost surely and uniformly in k \le T + \tau.

Let F_N denote the ESD of M_\tau and F_{Nk} the ESD of M_\tau - \gamma_k(\gamma_{k+\tau}+\gamma_{k-\tau})^* - (\gamma_{k+\tau}+\gamma_{k-\tau})\gamma_k^*.

26 JIN, B. S. ET AL.

By Theorem A.43 of Bai and Silverstein (2010), we have

\|F_N - F_{Nk}\| \le \frac{4}{N},

where \|f\| = \sup_x |f(x)|. Thus

\Big|\frac{1}{N}\big(\mathrm{tr}\,A^{-1} - \mathrm{tr}\,A_k^{-1}\big)\Big| = \Big|\int \frac{1}{x - u - iv}\,d(F_N - F_{Nk})(x)\Big| \le \frac{C\,\|F_N - F_{Nk}\|}{v} \le \frac{4C}{vN}.

This implies that \frac{1}{N}(\mathrm{tr}\,A^{-1} - \mathrm{tr}\,A_k^{-1}) \to 0 a.s., uniformly in k \le T + \tau. Substituting the above into (B.3), the proof of the lemma is complete.

Lemma B.2. Under the assumptions of Theorem 1.1, we have \gamma_k^* A_k^{-1}\gamma_l \to 0 almost surely and uniformly in k \ne l.

Proof. Let A_k^{-1}\gamma_l = b = (b_1, ..., b_N)'. Noting |\varepsilon_{lj}| < \eta_N T^{1/(2+\delta)} and E|\varepsilon_{lj}|^{2+\delta} = \nu_{2+\delta} < \infty, we have

E\big|\gamma_k^* A_k^{-1}\gamma_l\big|^{2r} = \frac{1}{2^rT^r}E\Big|\sum_{i=1}^{N}\bar\varepsilon_{ki}b_i\Big|^{2r} \le \frac{C_r\,\eta^{2r}}{2^r}\,T^{-\frac{\delta r}{2+\delta}}\,E\sum_{l'=1}^{r}\eta^{-(2+\delta)l'}T^{-l'}\nu_{2+\delta}^{l'}\sum_{j_1<\cdots<j_{l'}\le N}\ \sum_{\substack{i_1+\cdots+i_{l'}=2r \\ i_1,\dots,i_{l'}\ge 2}}\frac{(2r)!}{i_1!\cdots i_{l'}!}\,|b_{j_1}|^{i_1}\cdots|b_{j_{l'}}|^{i_{l'}}.

SPECTRAL ANALYSIS OF AUTO-CROSS COVARIANCE MATRICES 27

By \sum_{j=1}^{N}|b_j|^2 = \|A_k^{-1}\gamma_l\|^2 and the Cauchy-Schwarz inequality, we have

\sum_{j_1<\cdots<j_l\le N}\ \sum_{\substack{i_1+\cdots+i_l=2r \\ i_1,\dots,i_l\ge 2}}\frac{(2r)!}{i_1!\cdots i_l!}\,|b_{j_1}|^{i_1}\cdots|b_{j_l}|^{i_l} \le l^{2r}\Big(\sum_{j=1}^{N}|b_j|^2\Big)^r \le l^{2r}\|A_k^{-1}\|^{2r}\|\gamma_l\|^{2r} \le \frac{l^{2r}}{v^{2r}}\|\gamma_l\|^{2r}.

Noting |\varepsilon_{lj}| < \eta_N T^{1/(2+\delta)} and E|\varepsilon_{lj}|^{2+\delta} = \nu_{2+\delta} < \infty, we get

E\|\gamma_l\|^{2r} = \frac{1}{2^rT^r}E\Big(\sum_{j=1}^{N}|\varepsilon_{lj}|^2\Big)^r \le \frac{\eta^{2r}}{2^r}\,T^{-\frac{\delta r}{2+\delta}}\sum_{l'=1}^{r}\big(\eta^{-(2+\delta)}T^{-1}N\nu_{2+\delta}\big)^{l'}l'^{\,r}.

For any t > 0, there exists r > 2t/\delta + t such that E|\gamma_k^* A_k^{-1}\gamma_l|^{2r} = O(T^{-t}). Therefore, by the Borel-Cantelli lemma, \gamma_k^* A_k^{-1}\gamma_l \to 0 almost surely and uniformly in k \ne l. The proof of the lemma is complete.

In the next lemma, we find the limit of \gamma_{k+\tau}^* A_k^{-1}\gamma_{k+\tau} when T - k \to \infty, and that of \gamma_{k-\tau}^* A_k^{-1}\gamma_{k-\tau} when k \to \infty.

Lemma B.3. Assume that \frac{1}{N}\mathrm{tr}(M_\tau - zI)^{-1} \to m = m(z). When T - k \to \infty,

\gamma_{k+\tau}^* A_k^{-1}\gamma_{k+\tau} \to \frac{cm}{2x},

28 JIN, B. S. ET AL.

where x is the root of the quadratic equation x^2 - x + \frac{c^2m^2}{4} = 0 with the larger absolute value. When k \to \infty,

\gamma_{k-\tau}^* A_k^{-1}\gamma_{k-\tau} \to \frac{cm}{2x},

where x is the same as above.

Proof. Write a = \frac{cm}{2}, W_k = \gamma_{k+\tau}^* A_k^{-1}\gamma_{k+\tau} and W_{k,k+\tau,\dots,k+l\tau} = \gamma_{k+(l+1)\tau}^* A_{k,k+\tau,\dots,k+l\tau}^{-1}\gamma_{k+(l+1)\tau}. Then by (3.4), we have

W_k = \frac{a + r(k)}{1 - aW_{k,k+\tau}},

where r(k) = o_{a.s.}(1), uniformly in k \le T + \tau. Using this relation again, we obtain

(B.4)  W_k = \frac{a + r(k)}{1 - a\,\dfrac{a + r(k+\tau)}{1 - aW_{k,k+\tau,k+2\tau}}} = \frac{(a + r(k))\big(1 - aW_{k,k+\tau,k+2\tau}\big)}{1 - aW_{k,k+\tau,k+2\tau} - a(a + r(k+\tau))}.

Applying this relation l times, we may express W_k in the following form:

W_k = \frac{(a + r(k))\big(\alpha_{k,l-1} - a\alpha_{k,l-2}W_{k,k+\tau,\dots,k+l\tau}\big)}{\alpha_{k,l} - a\alpha_{k,l-1}W_{k,k+\tau,\dots,k+l\tau}},

where the coefficients satisfy the recursive relation

(B.5)  \alpha_{k,l} = \alpha_{k,l-1} - a\big(a + r(k+l\tau)\big)\alpha_{k,l-2}, \qquad \alpha_{k,-1} = 0,\ \alpha_{k,0} = 1.

Define x_1 and x_0 as the roots of the equation x^2 = x - a^2 with |x_1| > |x_0| (note that the equal sign can happen only when a = \pm\frac{1}{2}, which is impossible for \operatorname{Im}(z) > 0, because a is a multiple of the Stieltjes transform of a distribution function). Then we have

x_1 = \begin{cases}\frac{1}{2}\big(1 - \sqrt{1-4a^2}\big), & \operatorname{Im}(a^2) > 0,\\ \frac{1}{2}\big(1 + \sqrt{1-4a^2}\big), & \operatorname{Im}(a^2) < 0,\end{cases} \qquad x_0 = \begin{cases}\frac{1}{2}\big(1 + \sqrt{1-4a^2}\big), & \operatorname{Im}(a^2) > 0,\\ \frac{1}{2}\big(1 - \sqrt{1-4a^2}\big), & \operatorname{Im}(a^2) < 0.\end{cases}

Similarly, define $\nu_{k,1}$ and $\nu_{k,0}$ as the roots of the equation $x^2=x-a(a+r(k))$, with $|\nu_{k,1}|>|\nu_{k,0}|$. By this definition, we have
$$\nu_{k,1}=\begin{cases}\tfrac12\big(1-\sqrt{1-4a(a+r(k))}\big)&\text{if }\Im\big(a(a+r(k))\big)>0,\\[2pt]\tfrac12\big(1+\sqrt{1-4a(a+r(k))}\big)&\text{if }\Im\big(a(a+r(k))\big)<0.\end{cases}$$
Further, define $\alpha$ such that $\alpha\nu_{k,1}+(1-\alpha)\nu_{k,0}=1$. Then, define $\nu_{k,1,j}=\nu_{k,j}$ for $j=0,1$. For $t\ge 1$, define
$$\text{(B.6)}\qquad \nu_{k,t+1,j}=1-\frac{a\big(a+r(k+(t+1)\tau)\big)}{\nu_{k,t,j}},\qquad j=0,1.$$
From this, we have $\alpha\nu_{k,1,1}+(1-\alpha)\nu_{k,1,0}=1=\alpha_{k,1}$, and
\begin{align*}
\alpha\nu_{k,2,1}\nu_{k,1,1}+(1-\alpha)\nu_{k,2,0}\nu_{k,1,0}
&=\alpha\big(\nu_{k,1,1}-a(a+r(k+2\tau))\big)+(1-\alpha)\big(\nu_{k,1,0}-a(a+r(k+2\tau))\big)\\
&=\alpha_{k,1}-a(a+r(k+2\tau))=\alpha_{k,2}.
\end{align*}
Using (B.5) and (B.6), we may prove by induction that
$$\alpha\prod_{t=1}^{l}\nu_{k,t,1}+(1-\alpha)\prod_{t=1}^{l}\nu_{k,t,0}=\alpha_{k,l}.$$
Our next goal is to estimate the difference between $\nu_{k,t+1,j}$ and $x_j$ (where $x_1=x$). First, by the definition of $\nu_{k,j}$, we have
$$\nu_{k,j}-x_j=\nu_{k,1,j}-x_j=o_{a.s.}(1),$$

where, and in what follows, the remainder term $o_{a.s.}(1)$ is uniform in $k$ and $l$. Then, by $\nu_{k,1,j}=\nu_{k,1,j}^2+a(a+r(k))$, we have
\begin{align*}
\nu_{k,2,j}-x_j&=1-\frac{a(a+r(k+2\tau))}{\nu_{k,1,j}}-x_j
=\frac{a\big(r(k)-r(k+2\tau)\big)}{\nu_{k,1,j}}+\nu_{k,1,j}-x_j=o_{a.s.}(1).
\end{align*}
By induction, we can prove that $\nu_{k,t+1,j}-x_j=o_{a.s.}(1)$, provided that $t$ is bounded by a fixed number $M$. Therefore, for any given $\eta>0$, when $N$ is large, we have
$$\prod_{t=1}^{M}\Big|\frac{\nu_{k,t+1,0}}{\nu_{k,t+1,1}}\Big|\le\Big(\Big|\frac{x_0}{x}\Big|+\eta\Big)^{M}.$$
Note that $|x_0|<|x|$. Thus, for any given $\varepsilon>0$, we may choose $\eta>0$ and $l\to\infty$ such that
$$\prod_{t=1}^{l}\Big|\frac{\nu_{k,t+1,0}}{\nu_{k,t+1,1}}\Big|\le\varepsilon.$$
That means, when $l\to\infty$ slowly, we have $\alpha_{k,l+1}=x\,\alpha_{k,l}\,(1+o_{a.s.}(1))$. Consequently,
$$W_k=\frac{(a+r(k))\big(\alpha_{k,l}-a\alpha_{k,l-1}W_{k,k+\tau,\dots,k+l\tau}\big)}{\alpha_{k,l+1}-a\alpha_{k,l}W_{k,k+\tau,\dots,k+l\tau}}=\frac{a}{x}\,\big(1+o_{a.s.}(1)\big).$$
The first conclusion of the lemma is proved. By duality, the second conclusion follows.

Lemma B.4. Assume the conditions of Lemma B.3 hold. For all $k\in[1,T+\tau]$, we have
$$\gamma_{k-\tau}^*A_k\gamma_{k+\tau}\to 0,\quad a.s.,$$

where the convergence is uniform in $k$.

Proof. Obviously, when $\tau<k\le 2\tau$, the lemma is true because $\gamma_{k-\tau}$ is independent of $A_k$. Similarly, the lemma is true when $T-\tau<k\le T$. When $2\tau<k\le T-\tau$, by (3.6) and what was proved in the last lemma,
$$\gamma_{k-\tau}^*A_k\gamma_{k+\tau}=\Big(\frac{a}{1-a^2/x}+o_{a.s.}(1)\Big)\gamma_{k-\tau}^*A_{k,k+\tau}\gamma_{k+2\tau}
=\Big(\frac{a}{x}+o_{a.s.}(1)\Big)\gamma_{k-\tau}^*A_{k,k+\tau}\gamma_{k+2\tau}.$$
Since $x\,x_0=a^2$ and $|x|>|x_0|$, we have $|a|<|x|$. Thus
$$\big|\gamma_{k-\tau}^*A_k\gamma_{k+\tau}\big|\le\Big(\Big|\frac{a}{x}\Big|+\eta\Big)\big|\gamma_{k-\tau}^*A_{k,k+\tau}\gamma_{k+2\tau}\big|$$
for some $\eta>0$ such that $|a/x|+\eta<1$. Now, the lemma can be proved by induction.
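The inequality $|a/x|<1$ that closes the induction can be spot-checked numerically. The sketch below is illustrative only: it samples arbitrary complex values of $a$ with imaginary part bounded away from zero (stand-ins for $cm(z)/2$ at points with $\Im(z)>0$; the values are not computed from any model).

```python
import cmath
import random

# For roots x, x0 of x^2 = x - a^2 with |x| > |x0|, Vieta's formula
# gives x * x0 = a^2, so |x|^2 > |x * x0| = |a|^2 and hence |a/x| < 1:
# the contraction factor used in the induction of Lemma B.4.
rng = random.Random(1)

def contraction_factor(a):
    s = cmath.sqrt(1 - 4 * a**2)
    x, x0 = sorted([(1 + s) / 2, (1 - s) / 2], key=abs, reverse=True)
    assert abs(x * x0 - a**2) < 1e-9   # product of the roots
    return abs(a / x)

# Arbitrary test points with Im(a) >= 0.1: equal-modulus roots would
# require a^2 to be real and >= 1/4, impossible when Im(a) > 0.
factors = [contraction_factor(complex(rng.gauss(0, 1), 0.1 + abs(rng.gauss(0, 1))))
           for _ in range(100)]
print(max(factors))  # strictly below 1
```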

Department of Statistics and Finance, University of Science and Technology of China. E-mail: jbs@ustc.edu.cn

Department of Statistics and Applied Probability, National University of Singapore. E-mail: a @nus.edu.sg

KLASMOE and School of Math. and Stat., Northeast Normal University. E-mail: baizd@nenu.edu.cn

Department of Civil and Environmental Engineering, 439 Panama Mall, Stanford University, CA. E-mail: kknair@stanford.edu

Department of Economics, Stanford University, 579 Serra Mall, Stanford, CA. Phone: (650) ; Fax: (650) ; E-mail: mch@stanford.edu


More information

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION Annales Univ. Sci. Budapest., Sect. Comp. 33 (2010) 273-284 ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION L. László (Budapest, Hungary) Dedicated to Professor Ferenc Schipp on his 70th

More information

Linear and Multilinear Algebra. Linear maps preserving rank of tensor products of matrices

Linear and Multilinear Algebra. Linear maps preserving rank of tensor products of matrices Linear maps preserving rank of tensor products of matrices Journal: Manuscript ID: GLMA-0-0 Manuscript Type: Original Article Date Submitted by the Author: -Aug-0 Complete List of Authors: Zheng, Baodong;

More information

Week 15-16: Combinatorial Design

Week 15-16: Combinatorial Design Week 15-16: Combinatorial Design May 8, 2017 A combinatorial design, or simply a design, is an arrangement of the objects of a set into subsets satisfying certain prescribed properties. The area of combinatorial

More information

STAT200C: Review of Linear Algebra

STAT200C: Review of Linear Algebra Stat200C Instructor: Zhaoxia Yu STAT200C: Review of Linear Algebra 1 Review of Linear Algebra 1.1 Vector Spaces, Rank, Trace, and Linear Equations 1.1.1 Rank and Vector Spaces Definition A vector whose

More information

Some functional (Hölderian) limit theorems and their applications (II)

Some functional (Hölderian) limit theorems and their applications (II) Some functional (Hölderian) limit theorems and their applications (II) Alfredas Račkauskas Vilnius University Outils Statistiques et Probabilistes pour la Finance Université de Rouen June 1 5, Rouen (Rouen

More information

Random Matrices and Multivariate Statistical Analysis

Random Matrices and Multivariate Statistical Analysis Random Matrices and Multivariate Statistical Analysis Iain Johnstone, Statistics, Stanford imj@stanford.edu SEA 06@MIT p.1 Agenda Classical multivariate techniques Principal Component Analysis Canonical

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

arxiv: v1 [math.pr] 13 Aug 2007

arxiv: v1 [math.pr] 13 Aug 2007 The Annals of Probability 2007, Vol. 35, No. 4, 532 572 DOI: 0.24/009790600000079 c Institute of Mathematical Statistics, 2007 arxiv:0708.720v [math.pr] 3 Aug 2007 ON ASYMPTOTICS OF EIGENVECTORS OF LARGE

More information

Principal Component Analysis (PCA) Our starting point consists of T observations from N variables, which will be arranged in an T N matrix R,

Principal Component Analysis (PCA) Our starting point consists of T observations from N variables, which will be arranged in an T N matrix R, Principal Component Analysis (PCA) PCA is a widely used statistical tool for dimension reduction. The objective of PCA is to find common factors, the so called principal components, in form of linear combinations

More information

PHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review

PHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review 1 PHYS 705: Classical Mechanics Rigid Body Motion Introduction + Math Review 2 How to describe a rigid body? Rigid Body - a system of point particles fixed in space i r ij j subject to a holonomic constraint:

More information

A NUMERICAL ITERATIVE SCHEME FOR COMPUTING FINITE ORDER RANK-ONE CONVEX ENVELOPES

A NUMERICAL ITERATIVE SCHEME FOR COMPUTING FINITE ORDER RANK-ONE CONVEX ENVELOPES A NUMERICAL ITERATIVE SCHEME FOR COMPUTING FINITE ORDER RANK-ONE CONVEX ENVELOPES XIN WANG, ZHIPING LI LMAM & SCHOOL OF MATHEMATICAL SCIENCES, PEKING UNIVERSITY, BEIJING 87, P.R.CHINA Abstract. It is known

More information

STAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31

STAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31 STAT C26A / MATH C223A : Stein s method and applications Lecture 3 Lecture date: Nov. 7, 27 Scribe: Anand Sarwate Gaussian concentration recap If W, T ) is a pair of random variables such that for all

More information

Wittmann Type Strong Laws of Large Numbers for Blockwise m-negatively Associated Random Variables

Wittmann Type Strong Laws of Large Numbers for Blockwise m-negatively Associated Random Variables Journal of Mathematical Research with Applications Mar., 206, Vol. 36, No. 2, pp. 239 246 DOI:0.3770/j.issn:2095-265.206.02.03 Http://jmre.dlut.edu.cn Wittmann Type Strong Laws of Large Numbers for Blockwise

More information

arxiv: v1 [math.pr] 22 May 2008

arxiv: v1 [math.pr] 22 May 2008 THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered

More information

Eigenvectors of some large sample covariance matrix ensembles.

Eigenvectors of some large sample covariance matrix ensembles. oname manuscript o. (will be inserted by the editor) Eigenvectors of some large sample covariance matrix ensembles. Olivier Ledoit Sandrine Péché Received: date / Accepted: date AbstractWe consider sample

More information

ABSOLUTELY FLAT IDEMPOTENTS

ABSOLUTELY FLAT IDEMPOTENTS ABSOLUTELY FLAT IDEMPOTENTS JONATHAN M GROVES, YONATAN HAREL, CHRISTOPHER J HILLAR, CHARLES R JOHNSON, AND PATRICK X RAULT Abstract A real n-by-n idempotent matrix A with all entries having the same absolute

More information

An Introduction to Large Sample covariance matrices and Their Applications

An Introduction to Large Sample covariance matrices and Their Applications An Introduction to Large Sample covariance matrices and Their Applications (June 208) Jianfeng Yao Department of Statistics and Actuarial Science The University of Hong Kong. These notes are designed for

More information

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices S. Dallaporta University of Toulouse, France Abstract. This note presents some central limit theorems

More information

SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES

SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES Statistica Sinica 19 (2009), 71-81 SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES Song Xi Chen 1,2 and Chiu Min Wong 3 1 Iowa State University, 2 Peking University and

More information

On the adjacency matrix of a block graph

On the adjacency matrix of a block graph On the adjacency matrix of a block graph R. B. Bapat Stat-Math Unit Indian Statistical Institute, Delhi 7-SJSS Marg, New Delhi 110 016, India. email: rbb@isid.ac.in Souvik Roy Economics and Planning Unit

More information

An Even Order Symmetric B Tensor is Positive Definite

An Even Order Symmetric B Tensor is Positive Definite An Even Order Symmetric B Tensor is Positive Definite Liqun Qi, Yisheng Song arxiv:1404.0452v4 [math.sp] 14 May 2014 October 17, 2018 Abstract It is easily checkable if a given tensor is a B tensor, or

More information

Invertibility of random matrices

Invertibility of random matrices University of Michigan February 2011, Princeton University Origins of Random Matrix Theory Statistics (Wishart matrices) PCA of a multivariate Gaussian distribution. [Gaël Varoquaux s blog gael-varoquaux.info]

More information

Singular value decomposition (SVD) of large random matrices. India, 2010

Singular value decomposition (SVD) of large random matrices. India, 2010 Singular value decomposition (SVD) of large random matrices Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu India, 2010 Motivation New challenge of multivariate statistics:

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

A Conceptual Proof of the Kesten-Stigum Theorem for Multi-type Branching Processes

A Conceptual Proof of the Kesten-Stigum Theorem for Multi-type Branching Processes Classical and Modern Branching Processes, Springer, New Yor, 997, pp. 8 85. Version of 7 Sep. 2009 A Conceptual Proof of the Kesten-Stigum Theorem for Multi-type Branching Processes by Thomas G. Kurtz,

More information

A note on the growth rate in the Fazekas Klesov general law of large numbers and on the weak law of large numbers for tail series

A note on the growth rate in the Fazekas Klesov general law of large numbers and on the weak law of large numbers for tail series Publ. Math. Debrecen 73/1-2 2008), 1 10 A note on the growth rate in the Fazekas Klesov general law of large numbers and on the weak law of large numbers for tail series By SOO HAK SUNG Taejon), TIEN-CHUNG

More information