The largest eigenvalue of finite rank deformation of large Wigner matrices: convergence and non-universality of the fluctuations
M. Capitaine, C. Donati-Martin and D. Féral

Abstract. In this paper, we investigate the asymptotic spectrum of complex or real Deformed Wigner matrices $(M_N)$ defined by $M_N = W_N/\sqrt{N} + A_N$, where $W_N$ is a Hermitian (resp. symmetric) Wigner matrix whose entries have a symmetric law satisfying a Poincaré inequality. The matrix $A_N$ is Hermitian (resp. symmetric), deterministic, with all but finitely many eigenvalues equal to zero. We first show that, as soon as the first largest or last smallest eigenvalues of $A_N$ are sufficiently far from zero, the corresponding eigenvalues of $M_N$ almost surely exit the limiting semicircle compact support as the size $N$ becomes large. The corresponding limits are universal in the sense that they only involve the variance of the entries of $W_N$. On the other hand, when $A_N$ is diagonal with a sole simple non-null (fixed) eigenvalue large enough, we prove that the fluctuations of the largest eigenvalue are not universal and vary with the particular distribution of the entries of $W_N$.

1 Introduction

This paper lies in the lineage of recent works studying the influence of some perturbations on the asymptotic spectrum of classical random matrix models. Such questions come from Statistics (cf. [Jo]) and appeared in the framework of empirical covariance matrices, also called non-white Wishart matrices or spiked population models, considered by J. Baik, G. Ben Arous and S. Péché [Bk-B-P] and by J. Baik and J. Silverstein [Bk-S]. The work [Bk-B-P] deals with random sample covariance matrices $(S_N)$ defined by
$$S_N = \frac{1}{N} Y_N Y_N^* \qquad (1.1)$$
where $Y_N$ is a $p \times N$ complex matrix whose $N$ sample column vectors are i.i.d., centered, Gaussian and of covariance matrix a deterministic Hermitian matrix $\Sigma_p$ having all but finitely many eigenvalues equal to one. Besides, the size $N$ of the samples and the size of the population $p = p_N$ are assumed to be of the same order (as $N \to \infty$).
The authors of [Bk-B-P] first noticed that, as in the classical case (known as the Wishart model) where $\Sigma_p = I_p$ is the identity matrix, the global limiting behavior of the spectrum of $S_N$ is not affected by the matrix $\Sigma_p$. Thus, the limiting spectral measure is the well-known Marchenko-Pastur law. On the other hand, they pointed out a phase transition phenomenon for the fluctuations of the largest eigenvalue according to the value of the largest eigenvalue(s) of $\Sigma_p$. The approach of [Bk-B-P] does not extend to the real Gaussian setting and the whole analogue of their result is still an open question. Nevertheless, D. Paul was able to establish in [P] the Gaussian fluctuations of the largest eigenvalue of the real Gaussian matrix $S_N$ when the largest eigenvalue of $\Sigma_p$ is simple and sufficiently larger than one. More recently, J. Baik and J. Silverstein investigated in [Bk-S] the almost sure limiting behavior of the extremal eigenvalues of complex or real, not necessarily Gaussian,

Institut de Mathématiques de Toulouse, Equipe de Statistique et Probabilités, F-31062 Toulouse Cedex. capitain@math.ups-tlse.fr
Laboratoire de Probabilités et Modèles Aléatoires, Université Paris 6, Site Chevaleret, 13 rue Clisson, F-75013 Paris. donati@ccr.jussieu.fr
Institut de Mathématiques de Toulouse, Equipe de Statistique et Probabilités, F-31062 Toulouse Cedex. dferal@math.ups-tlse.fr
matrices. Under assumptions on the first four moments of the entries of $Y_N$, they showed in particular that when exactly $k$ eigenvalues of $\Sigma_p$ are far from one, the $k$ first eigenvalues of $S_N$ are almost surely outside the limiting Marchenko-Pastur support. The fluctuations of the eigenvalues that jump are universal and have been recently found by Z. Bai and J. F. Yao in [B-Ya2] (we refer the reader to [B-Ya2] for the precise restrictions made on the definition of the covariance matrix $\Sigma_p$). Note that the problem of the fluctuations in the very general setting of [Bk-S] is still open.

Our purpose here is to investigate the asymptotic behavior of the first extremal eigenvalues of some complex or real Deformed Wigner matrices. These models can be seen as the additive analogue of the spiked population models and are defined by a sequence $(M_N)$ given by
$$M_N = \frac{W_N}{\sqrt{N}} + A_N := X_N + A_N \qquad (1.2)$$
where $W_N$ is a Wigner matrix such that the common distribution of its entries satisfies some technical conditions (given in (i) below) and $A_N$ is a deterministic matrix of finite rank. We establish the analogue of the main result of [Bk-S], namely that, once $A_N$ has exactly $k$ (fixed) eigenvalues far enough from zero, the $k$ first eigenvalues of $M_N$ jump almost surely outside the limiting semicircle support. This result is universal (as the one of [Bk-S]) since the corresponding almost sure limits only involve the variance of the entries of $W_N$. On the other hand, at the level of the fluctuations, we exhibit a striking phenomenon in the particular case where $A_N$ is diagonal with a sole simple non-null eigenvalue large enough. Indeed, we find that in this case, the fluctuations of the largest eigenvalue of $M_N$ are not universal and strongly depend on the particular law of the entries of $W_N$. More precisely, we prove that the limiting distribution of the (properly rescaled) largest eigenvalue of $M_N$ is the convolution of the distribution of the entries of $W_N$ with a Gaussian law.
In particular, if the entries of $W_N$ are not Gaussian, the fluctuations of the largest eigenvalue of $M_N$ are not Gaussian. In the following section, we first give the precise definition of the Deformed Wigner matrices (1.2) considered in this paper and we recall the known results on their asymptotic spectrum. Then, we present our results and sketch their proof. We also outline the organization of the paper.

2 Model and results

Throughout this paper, we consider complex or real Deformed Wigner matrices $(M_N)$ of the form (1.2) where the matrices $W_N$ and $A_N$ are defined as follows:

(i) $W_N$ is a Wigner Hermitian (resp. symmetric) matrix such that the $N^2$ random variables $(W_N)_{ii}$, $\sqrt{2}\,{\rm Re}((W_N)_{ij})_{i<j}$, $\sqrt{2}\,{\rm Im}((W_N)_{ij})_{i<j}$ (resp. the $N(N+1)/2$ random variables $2^{-1/2}(W_N)_{ii}$, $(W_N)_{ij}$, $i < j$) are independent identically distributed with a symmetric distribution $\mu$ of variance $\sigma^2$ satisfying a Poincaré inequality (see Section 3).

(ii) $A_N$ is a deterministic Hermitian (resp. symmetric) matrix of fixed finite rank $r$, built from a family of $J$ fixed real numbers $\theta_1 > \cdots > \theta_J$ independent of $N$, with some $j_0$ such that $\theta_{j_0} = 0$. We assume that the non-null eigenvalues $\theta_j$ of $A_N$ are of fixed multiplicity $k_j$ (with $\sum_{j \neq j_0} k_j = r$), i.e. $A_N$ is similar to the diagonal matrix
$$D_N = {\rm diag}(\underbrace{\theta_1, \ldots, \theta_1}_{k_1}, \ldots, \underbrace{\theta_{j_0-1}, \ldots, \theta_{j_0-1}}_{k_{j_0-1}}, \underbrace{0, \ldots\ldots, 0}_{N-r}, \underbrace{\theta_{j_0+1}, \ldots, \theta_{j_0+1}}_{k_{j_0+1}}, \ldots, \underbrace{\theta_J, \ldots, \theta_J}_{k_J}). \qquad (2.1)$$

Before going into the details of the results, we want to point out that the condition made on $\mu$ (namely that $\mu$ satisfies a Poincaré inequality) is just a technical condition: we conjecture that our results still hold under weaker assumptions (see Remark 2.1 below). Nevertheless, a lot of measures satisfy a Poincaré inequality (we refer the reader to [B-G] for a characterization of such measures on $\mathbb{R}$, see also [A. et al]). For instance, consider $\mu(dx) \propto \exp(-|x|^\alpha)\,dx$ with $\alpha \geq 1$. Furthermore, note that this condition implies that $\mu$ has moments of any order (cf. Corollary 3.2 and Proposition 1.10 in [L]).
Let us now introduce some notations. When the entries of $W_N$ are further assumed to be Gaussian, that is, in the complex (resp. real) setting when $W_N$ is of the so-called GUE (resp. GOE), we will write $W_N^G$ instead of $W_N$. Then $X_N^G := W_N^G/\sqrt{N}$ will be said to be of the GU(O)E$(N, \sigma^2/N)$ and we will let $M_N^G = X_N^G + A_N$ be the corresponding Deformed GU(O)E model. In the following, given an arbitrary Hermitian matrix $B$ of order $N$, we will denote by $\lambda_1(B) \geq \cdots \geq \lambda_N(B)$ its ordered eigenvalues and by $\mu_B = \frac{1}{N}\sum_{i=1}^N \delta_{\lambda_i(B)}$ its empirical measure. For notational convenience, we will also define $\lambda_0(B) = +\infty$ and $\lambda_{N+1}(B) = -\infty$.

The Deformed Wigner model is built in such a way that the Wigner Theorem is still satisfied. Thus, as in the classical Wigner model ($A_N \equiv 0$), the spectral measure $(\mu_{M_N})$ converges a.s. to the semicircle law $\mu_{sc}$ whose density is given by
$$\frac{d\mu_{sc}}{dx}(x) = \frac{1}{2\pi\sigma^2}\sqrt{4\sigma^2 - x^2}\;\mathbf{1}_{[-2\sigma, 2\sigma]}(x). \qquad (2.2)$$
This result follows from Lemma 2.2 of [B]. Note that it only relies on the first two moment assumptions on the entries of $W_N$ and the fact that the $A_N$'s are of finite rank.

On the other hand, the asymptotic behavior of the extremal eigenvalues may be affected by the perturbation $A_N$. Recently, S. Péché studied in [Pe] the Deformed GUE under a finite rank perturbation $A_N$ defined by (ii). Following the method of [Bk-B-P], she highlighted the effects of the non-null eigenvalues of $A_N$ at the level of the fluctuations of the largest eigenvalue of $M_N^G$. To explain this in more detail, let us recall that when $A_N \equiv 0$, it was established in [T-W] that, as $N \to \infty$,
$$\frac{N^{2/3}}{\sigma}\big(\lambda_1(X_N^G) - 2\sigma\big) \overset{\mathcal L}{\longrightarrow} F_2 \qquad (2.3)$$
where $F_2$ is the well-known GUE Tracy-Widom distribution (see [T-W] for the precise definition). Dealing with the Deformed GUE $M_N^G$, it appears that this result is modified as soon as the first largest eigenvalue(s) of $A_N$ are quite far from zero.
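As a quick numerical illustration (ours, not from the paper) of the convergence of $\mu_{M_N}$ to the semicircle law (2.2): since $A_N$ has finite rank, it leaves the limiting moments unchanged, and for the semicircle law of variance $\sigma^2$ these are $E[s^2] = \sigma^2$ and $E[s^4] = 2\sigma^4$. The real symmetric convention below (off-diagonal variance $\sigma^2$, diagonal variance $2\sigma^2$) and all parameters are illustrative choices.

```python
import numpy as np

# Empirical check that the spectral measure of M_N = W_N/sqrt(N) + A_N is
# insensitive to a finite-rank perturbation: its moments approach those of
# the semicircle law of variance sigma^2.  Illustrative sketch, not the
# paper's construction.

def deformed_wigner(n, theta, sigma=1.0, seed=1):
    rng = np.random.default_rng(seed)
    u = np.triu(rng.normal(0.0, sigma, (n, n)), 1)
    # symmetric Wigner matrix: off-diagonal variance sigma^2, diagonal 2*sigma^2
    w = u + u.T + np.diag(rng.normal(0.0, sigma * np.sqrt(2), n))
    m = w / np.sqrt(n)
    m[0, 0] += theta            # A_N = diag(theta, 0, ..., 0), rank one
    return m

n, sigma = 2000, 1.0
eigs = np.linalg.eigvalsh(deformed_wigner(n, theta=3.0, sigma=sigma))
m2, m4 = np.mean(eigs**2), np.mean(eigs**4)
print(round(m2, 2), round(m4, 2), round(eigs[-1], 2))
```

Even with the rank-one spike $\theta = 3$, the empirical moments match the semicircle values up to small finite-$N$ corrections; only one of the $N$ eigenvalues leaves the bulk.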
In the particular case of a rank one perturbation $A_N$ having a fixed non-null eigenvalue $\theta > 0$, [Pe] proved that the fluctuations of the largest eigenvalue of $M_N^G$ are still given by (2.3) when $\theta$ is small enough, precisely when $\theta < \sigma$. The limiting law is changed when $\theta = \sigma$. As soon as $\theta > \sigma$, [Pe] established that the largest eigenvalue $\lambda_1(M_N^G)$ fluctuates around
$$\rho_\theta = \theta + \frac{\sigma^2}{\theta} \qquad (2.4)$$
(which is $> 2\sigma$ since $\theta > \sigma$) as $N \to \infty$, namely
$$\sqrt{N}\big(\lambda_1(M_N^G) - \rho_\theta\big) \overset{\mathcal L}{\longrightarrow} \mathcal N(0, \sigma_\theta^2) \qquad (2.5)$$
where
$$\sigma_\theta = \sigma\sqrt{1 - \frac{\sigma^2}{\theta^2}}. \qquad (2.6)$$
Similar results are conjectured for the Deformed GOE, but S. Péché emphasized that her approach fails in the real framework. Indeed, it is based on the explicit Fredholm determinantal representation for the distribution of the largest eigenvalue(s), which is specific to the complex setting. Nevertheless, M. Maïda [M] obtained a large deviation principle for the largest eigenvalue of the Deformed GOE $M_N^G$ under a rank one deformation $A_N$; from this result she could deduce the almost sure limit with respect to the non-null eigenvalue of $A_N$. Thus, under a rank one perturbation $A_N$ such that $D_N = {\rm diag}(\theta, 0, \ldots, 0)$ where $\theta > 0$, [M] showed that
$$\lambda_1(M_N^G) \overset{a.s.}{\longrightarrow} \rho_\theta, \quad \text{if } \theta > \sigma, \qquad (2.7)$$
and
$$\lambda_1(M_N^G) \overset{a.s.}{\longrightarrow} 2\sigma, \quad \text{if } \theta \leq \sigma. \qquad (2.8)$$
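The limits (2.7)-(2.8) are easy to observe numerically. The sketch below is ours (a real Wigner matrix with Gaussian entries; all parameter choices are illustrative): it compares the largest eigenvalue of $M_N$ for a supercritical spike $\theta > \sigma$, where it should sit near $\rho_\theta = \theta + \sigma^2/\theta$, and a subcritical one $\theta < \sigma$, where it should stick to the bulk edge $2\sigma$.

```python
import numpy as np

# Illustration of the phase transition for a rank-one deformation
# A_N = diag(theta, 0, ..., 0): the largest eigenvalue of M_N jumps to
# rho_theta = theta + sigma^2/theta iff theta > sigma.  Ours, not the paper's.

def largest_eigenvalue(n, theta, sigma=1.0, seed=2):
    rng = np.random.default_rng(seed)
    u = np.triu(rng.normal(0.0, sigma, (n, n)), 1)
    w = u + u.T + np.diag(rng.normal(0.0, sigma * np.sqrt(2), n))
    m = w / np.sqrt(n)
    m[0, 0] += theta
    return np.linalg.eigvalsh(m)[-1]

n, sigma = 2000, 1.0
lam_big = largest_eigenvalue(n, theta=2.0)     # supercritical: theta > sigma
lam_small = largest_eigenvalue(n, theta=0.5)   # subcritical: theta < sigma
rho = 2.0 + sigma**2 / 2.0                     # rho_theta = 2.5 for theta = 2
print(lam_big, lam_small)
```

For $\theta = 2$, $\sigma = 1$, the largest eigenvalue lands near $\rho_\theta = 2.5$; for $\theta = 0.5$ it stays near the edge $2\sigma = 2$, up to fluctuations that vanish with $N$.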
Note that the approach of [M] extends with minor modifications to the Deformed GUE. Following the investigations of [Bk-S] in the context of general spiked population models, one can conjecture that such a phenomenon holds in a more general, not necessarily Gaussian, setting. The first result of our paper, namely the following Theorem 2.1, is related to this question. Before being more explicit, let us recall that when $A_N \equiv 0$, the whole spectrum of the rescaled complex or real Wigner matrix $X_N = W_N/\sqrt{N}$ belongs almost surely to the semicircle support $[-2\sigma, 2\sigma]$ as $N$ goes to infinity and that (cf. [B-Yi] or Theorem 2.12 in [B])
$$\lambda_1(X_N) \overset{a.s.}{\longrightarrow} 2\sigma \quad \text{and} \quad \lambda_N(X_N) \overset{a.s.}{\longrightarrow} -2\sigma. \qquad (2.9)$$
Note that this last result holds true in a more general setting than the one considered here (see [B-Yi] for details) and in particular only requires the finiteness of the fourth moment of the law $\mu$. Moreover, one can readily extend the previous limits to the first extremal eigenvalues of $X_N$, i.e. for any fixed $k \geq 1$,
$$\lambda_k(X_N) \overset{a.s.}{\longrightarrow} 2\sigma \quad \text{and} \quad \lambda_{N-k+1}(X_N) \overset{a.s.}{\longrightarrow} -2\sigma. \qquad (2.10)$$
Here, we prove that, under the assumptions (i)-(ii), (2.10) fails when some of the $\theta_j$'s are sufficiently far from zero: as soon as some of the first largest (resp. last smallest) non-null eigenvalues $\theta_j$ of $A_N$ are taken strictly larger than $\sigma$ (resp. strictly smaller than $-\sigma$), the same part of the spectrum of $M_N$ almost surely exits the semicircle support $[-2\sigma, 2\sigma]$ as $N \to \infty$ and the new limits are the $\rho_{\theta_j}$'s defined by
$$\rho_{\theta_j} = \theta_j + \frac{\sigma^2}{\theta_j}. \qquad (2.11)$$
Observe that $\rho_{\theta_j} > 2\sigma$ (resp. $< -2\sigma$) when $\theta_j > \sigma$ (resp. $< -\sigma$) (and $\rho_{\theta_j} = \pm 2\sigma$ if $\theta_j = \pm\sigma$). Here is the precise formulation of our result. For definiteness, we set $k_1 + \cdots + k_{j-1} := 0$ if $j = 1$.

Theorem 2.1. Let $J_{+\sigma}$ (resp. $J_{-\sigma}$) be the number of $j$'s such that $\theta_j > \sigma$ (resp. $\theta_j < -\sigma$).
(a) For all $1 \leq j \leq J_{+\sigma}$ and $1 \leq i \leq k_j$, $\lambda_{k_1+\cdots+k_{j-1}+i}(M_N) \longrightarrow \rho_{\theta_j}$ a.s.
(b) $\lambda_{k_1+\cdots+k_{J_{+\sigma}}+1}(M_N) \longrightarrow 2\sigma$ a.s.
(c) $\lambda_{N-(k_{J-J_{-\sigma}+1}+\cdots+k_J)}(M_N) \longrightarrow -2\sigma$ a.s.
(d) For all $J - J_{-\sigma} + 1 \leq j \leq J$ and $1 \leq i \leq k_j$, $\lambda_{N-(k_j+\cdots+k_J)+i}(M_N) \longrightarrow \rho_{\theta_j}$ a.s.

Remark 2.1.
Let us notice that, following [Bk-S], one can expect that this theorem holds true in a more general setting than the one considered here, namely assuming only conditions on the first four moments of the law $\mu$ of the Wigner entries. As we will explain in the following, the assumption (i) that $\mu$ satisfies a Poincaré inequality is actually fundamental in our reasoning since we will need several variance estimates.

This theorem will be proved in Section 4. The second part of this work is devoted to the study of the particular rank one diagonal deformation $A_N = {\rm diag}(\theta, 0, \ldots, 0)$ with $\theta > \sigma$. We investigate the fluctuations of the largest eigenvalue of any real or complex Deformed model $M_N$ satisfying (i) around its limit $\rho_\theta$ (given by the previous theorem). We obtain the following result.

Theorem 2.2. Let $A_N = {\rm diag}(\theta, 0, \ldots, 0)$ and assume that $\theta > \sigma$. Define
$$v_\theta = \frac{t}{4}\Big(\frac{m_4 - 3\sigma^4}{\sigma^4}\Big)\frac{\sigma^4}{\theta^2} + \frac{t}{2}\,\frac{\sigma^2\,\sigma_\theta^2}{\theta^2} \qquad (2.12)$$
where $t = 4$ (resp. $t = 2$) when $W_N$ is real (resp. complex) and $m_4 := \int x^4\,d\mu(x)$. Then
$$\sqrt{N}\big(\lambda_1(M_N) - \rho_\theta\big) \overset{\mathcal L}{\longrightarrow} \mu_{\{1 - \frac{\sigma^2}{\theta^2}\}} * \mathcal N(0, v_\theta), \qquad (2.13)$$
where $\mu_{\{1 - \frac{\sigma^2}{\theta^2}\}}$ denotes the distribution of $\big(1 - \frac{\sigma^2}{\theta^2}\big)(W_N)_{11}$.
Note that when $m_4 = 3\sigma^4$, as in the Gaussian case, the variance of the limiting distribution in (2.13) is equal to $\sigma_\theta^2$ (resp. $2\sigma_\theta^2$) in the complex (resp. real) setting (with $\sigma_\theta$ given by (2.6)).

Remark 2.2. Since $\mu$ is symmetric, it readily follows from Theorem 2.2 that when $A_N = {\rm diag}(\theta, 0, \ldots, 0)$ and $\theta < -\sigma$, the smallest eigenvalue of $M_N$ fluctuates as
$$\sqrt{N}\big(\lambda_N(M_N) - \rho_\theta\big) \overset{\mathcal L}{\longrightarrow} \mu_{\{1 - \frac{\sigma^2}{\theta^2}\}} * \mathcal N(0, v_\theta).$$

In particular, one derives the analogue of (2.5) for the Deformed GOE, that is:

Theorem 2.3. Let $A_N$ be an arbitrary deterministic symmetric matrix of rank one having a non-null eigenvalue $\theta$ such that $\theta > \sigma$. Then the largest eigenvalue of the Deformed GOE fluctuates as
$$\sqrt{N}\big(\lambda_1(M_N^G) - \rho_\theta\big) \overset{\mathcal L}{\longrightarrow} \mathcal N(0, 2\sigma_\theta^2). \qquad (2.14)$$

Obviously, thanks to the orthogonal invariance of the GOE, this result is a direct consequence of Theorem 2.2. It is worth noticing that, according to the Cramér-Lévy Theorem (cf. [F], Theorem p. 525), the limiting distribution in (2.13) is not Gaussian if $\mu$ is not Gaussian. Thus, (2.13) depends on the particular law $\mu$ of the entries of the Wigner matrix $W_N$, which implies the non-universality of the fluctuations of the largest eigenvalue of rank one diagonal deformations of symmetric or Hermitian Wigner matrices (as conjectured in Remark 1.7 of [Fe-Pe]). The latter also shows that in the non-Gaussian setting, the fluctuations of the largest eigenvalue depend not only on the spectrum of the deformation $A_N$, but also on the particular definition of the matrix $A_N$. Indeed, in collaboration with S. Péché, the third author of the present article has recently stated in [Fe-Pe] the universality of the fluctuations of some Deformed Wigner models under a full deformation $A_N$ defined by $(A_N)_{ij} = \frac{\theta}{N}$ for all $i, j$ (see also [Fu-K]). Before giving some details on this work, we have to mention that [Fe-Pe] considered Deformed models such that the entries of the Wigner matrix $W_N$ have sub-Gaussian moments.
Nevertheless, thanks to the analysis made in [R], one can observe that the assumptions of [Fe-Pe] can be weakened: it is for example sufficient to assume that the $W_{i,j}$'s have a finite moment of order 19 (the precise condition of [R] is given by (2.15) below). Thus, the conclusions of [Fe-Pe] apply to the setting considered in our paper. The main result of [Fe-Pe] establishes the universality of the fluctuations of the largest eigenvalue of the complex Deformed model $M_N$ associated to a full deformation $A_N$ and for any value of the parameter $\theta$. In particular, when $\theta > \sigma$, the universality of the Gaussian fluctuations (2.5) is proved therein. Notice that the approach of [Fe-Pe] is completely different from the one developed below in Section 5 to derive Theorem 2.2. It is mainly based on a combinatorial method inspired by the work [So] (which handles the non-deformed Wigner model) and the known fluctuations for the Deformed GUE (given by [Pe]). The combinatorial arguments of [Fe-Pe] also work (with minor modifications) in the real framework and yield the universality of the fluctuations if $\theta < \sigma$. In the case where $\theta > \sigma$, which is of particular interest here, the analysis made in [Fe-Pe] reduced the universality problem in the real setting to the knowledge of the particular Deformed GOE model, which was unknown up to now (note that this remark is also valid in the case where $\theta = \sigma$). Thus, thanks to our previous Theorem 2.3 and the analysis of [Fe-Pe] and [R], we are now in a position to claim the following universality result.

Theorem 2.4. Let $A_N$ be a full perturbation given by $(A_N)_{ij} = \frac{\theta}{N}$ for all $(i, j)$. Assume that $\theta > \sigma$. Let $W_N$ be an arbitrary real Wigner matrix with the underlying measure $\mu$ being symmetric with variance $\sigma^2$ and such that there exists some $p > 18$ satisfying
$$\mu([x, +\infty[) \leq \frac{1}{x^p}. \qquad (2.15)$$
Then the largest eigenvalue of the Deformed model $M_N$ has the Gaussian fluctuations (2.14).

Remark 2.3.
To be complete, let us notice that the previous result still holds when we allow the distribution $\nu$ of the diagonal entries of $W_N$ to be different from $\mu$, provided that $\nu$ is symmetric and satisfies (2.15).
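To see the $\sqrt{N}$ scale of Theorem 2.2 at work, one can Monte Carlo the largest eigenvalue in the real Gaussian case, where Theorem 2.3 predicts a limiting variance $2\sigma_\theta^2$ for $\sqrt{N}\big(\lambda_1(M_N^G) - \rho_\theta\big)$. The sketch below is our illustration only (Gaussian entries, illustrative parameters), not the paper's proof mechanism.

```python
import numpy as np

# Monte Carlo look at the sqrt(N)-scale fluctuations of lambda_1(M_N) around
# rho_theta for a supercritical rank-one diagonal deformation (ours, not the
# paper's).  With Gaussian entries, Theorem 2.3 gives variance 2*sigma_theta^2.

def sample_lambda1(n, theta, sigma, rng):
    u = np.triu(rng.normal(0.0, sigma, (n, n)), 1)
    w = u + u.T + np.diag(rng.normal(0.0, sigma * np.sqrt(2), n))
    m = w / np.sqrt(n)
    m[0, 0] += theta
    return np.linalg.eigvalsh(m)[-1]

rng = np.random.default_rng(3)
n, theta, sigma, reps = 400, 2.0, 1.0, 60
rho = theta + sigma**2 / theta
lams = np.array([sample_lambda1(n, theta, sigma, rng) for _ in range(reps)])
centered = np.sqrt(n) * (lams - rho)
print(centered.mean(), centered.std())
```

For $\theta = 2$, $\sigma = 1$, the predicted standard deviation of the centered, rescaled quantity is $\sqrt{2\sigma_\theta^2} = \sqrt{1.5} \approx 1.22$, and the empirical spread over repetitions should be of that order.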
The fundamental tool of this paper is the Stieltjes transform. For $z \in \mathbb{C}\setminus\mathbb{R}$, we denote by $G_N(z) = (zI_N - M_N)^{-1}$ the resolvent of the matrix $M_N$ and by $g_N(z) = E({\rm tr}_N(G_N(z)))$ the Stieltjes transform of the expectation of the empirical measure of the eigenvalues of $M_N$, where ${\rm tr}_N$ is the normalized trace. We also denote by $g_\sigma(z) = E((z - s)^{-1})$ the Stieltjes transform of a random variable $s$ with semicircular distribution $\mu_{sc}$.

Theorem 2.1 is the analogue of the main statement of [Bk-S] established in the context of general spiked population models. The conclusion of [Bk-S] requires numerous results obtained previously by J. Silverstein and co-authors in [Ch-S], [B-S] and [B-S2] (a summary of all this literature can be found in [B]). From very clever and tedious manipulations of some Stieltjes transforms and the use of the matricial representation (1.1), these works highlight a very close link between the spectra of the Wishart matrices and of the covariance matrix (for quite general covariance matrices which include the spiked population model). Our approach mimics the one of [Bk-S]. Thus, using the fact that the Deformed Wigner model is the additive analogue of the spiked population model, several arguments can be quite easily adapted here (this point has been explained in Chapter 4 of the PhD Thesis [Fe]). Actually, the main point in the proof consists in establishing that for any $\varepsilon > 0$, almost surely, for all large $N$,
$${\rm Spect}(M_N) \subset K_\sigma^\varepsilon(\theta_1, \ldots, \theta_J) \qquad (2.16)$$
where we have defined
$$K_\sigma^\varepsilon(\theta_1, \ldots, \theta_J) = K_\sigma(\theta_1, \ldots, \theta_J) + [-\varepsilon; \varepsilon]$$
and
$$K_\sigma(\theta_1, \ldots, \theta_J) := \big\{\rho_{\theta_J}; \rho_{\theta_{J-1}}; \ldots; \rho_{\theta_{J-J_{-\sigma}+1}}\big\} \cup [-2\sigma; 2\sigma] \cup \big\{\rho_{\theta_{J_{+\sigma}}}; \rho_{\theta_{J_{+\sigma}-1}}; \ldots; \rho_{\theta_1}\big\}.$$
This point is the analogue of the main result of [B-S]. The analysis of [B-S] is based on numerous technical considerations of Stieltjes transforms strongly related to the Wishart context, which cannot be directly transposed here. Thus, our approach to prove such an inclusion of the spectrum of $M_N$ is very different from the one of [B-S]. Indeed, we use the methods developed by U.
Haagerup and S. Thorbjørnsen in [H-T], by H. Schultz [S] and by the first two authors of the present article [C-D]. The key point of this approach is to obtain, at any point $z \in \mathbb{C}\setminus\mathbb{R}$, a precise estimation of the following type:
$$g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N} = O\Big(\frac{1}{N^2}\Big), \qquad (2.17)$$
where $L_\sigma$ is the Stieltjes transform of a distribution $\Lambda_\sigma$ with compact support in $K_\sigma(\theta_1, \ldots, \theta_J)$.² Indeed, such an estimation allows us, through the inverse Stieltjes transform and some variance estimates, to deduce that ${\rm tr}_N\,\mathbf{1}_{{}^cK_\sigma^\varepsilon(\theta_1, \ldots, \theta_J)}(M_N) = O(1/N^{4/3})$ a.s. Thus the number of eigenvalues of $M_N$ in ${}^cK_\sigma^\varepsilon(\theta_1, \ldots, \theta_J)$ is almost surely a $O(1/N^{1/3})$ and, since for each $N$ this number has to be an integer, we deduce that it is actually equal to zero as $N$ goes to infinity.

Dealing with the particular diagonal perturbation $A_N = {\rm diag}(\theta, 0, \ldots, 0)$, we obtain the fluctuations of the largest eigenvalue $\lambda_1(M_N)$ by an approach close to the one of [P] and the ideas of [Bk-B-P]. The reasoning relies on the writing of $\lambda_1(M_N)$ in terms of the resolvent of a non-deformed Wigner matrix.

The paper is organized as follows. In Section 3, we introduce preliminary lemmas which will be of basic use later on. Section 4 is devoted to the proof of Theorem 2.1. We first establish an equation (called "Master equation") satisfied by $g_N$ up to some correction of order $1/N^2$ (see Section 4.1). Then

² Note that in some papers to which we refer, the Stieltjes transform is defined with the opposite sign.
we explain how this master equation gives rise to an estimation of type (2.17), and thus to the inclusion (2.16) of the spectrum of $M_N$ in $K_\sigma^\varepsilon(\theta_1, \ldots, \theta_J)$ (see Sections 4.2 and 4.3). In Section 4.4, we use this inclusion to relate the asymptotic spectra of $A_N$ and $M_N$ and then deduce Theorem 2.1. The last section establishes Theorem 2.2.

Acknowledgments. The authors are very grateful to Jack Silverstein and Jinho Baik for providing them their proof of Theorem 5.2 (which is a fundamental argument in the proof of Theorem 2.2) presented in the Appendix of the present article.

3 Basic lemmas

We assume that the distribution $\mu$ of the entries of the Wigner matrix $W_N$ satisfies a Poincaré inequality: there exists a positive constant $C$ such that for any $C^1$ function $f : \mathbb{R} \to \mathbb{C}$ such that $f$ and $f'$ are in $L^2(\mu)$,
$$\mathbf{V}(f) \leq C \int |f'|^2\,d\mu, \quad \text{with} \quad \mathbf{V}(f) = E(|f - E(f)|^2).$$
For any matrix $M$, define $\|M\|_2 = ({\rm Tr}(M^*M))^{1/2}$, the Hilbert-Schmidt norm. Let $\Psi : (M_N(\mathbb{C})_{sa}) \to \mathbb{R}^{N^2}$ (resp. $\Psi : (M_N(\mathbb{C})_{s}) \to \mathbb{R}^{N(N+1)/2}$) be the canonical isomorphism which maps a Hermitian (resp. symmetric) matrix to the real parts and the imaginary parts of its entries (resp. to the entries) $(M)_{ij}$, $i \leq j$.

Lemma 3.1. Let $M_N$ be the complex (resp. real) Deformed Wigner matrix introduced in Section 2. For any $C^1$ function $f : \mathbb{R}^{N^2}$ (resp. $\mathbb{R}^{N(N+1)/2}$) $\to \mathbb{C}$ such that $f$ and the gradient $\nabla f$ are both polynomially bounded,
$$\mathbf{V}[f \circ \Psi(M_N)] \leq \frac{C}{N}\, E\big\{\|\nabla[f \circ \Psi](M_N)\|_2^2\big\}. \qquad (3.1)$$

Proof: According to Lemma 3.2 in [C-D],
$$\mathbf{V}[f \circ \Psi(X_N)] \leq \frac{C}{N}\, E\big\{\|\nabla[f \circ \Psi](X_N)\|_2^2\big\}. \qquad (3.2)$$
Note that even if the result in [C-D] is stated in the Hermitian case, the proof is valid and the result still holds in the symmetric case. Now (3.1) follows by putting $g(x_{ij}; i \leq j) := f(x_{ij} + (A_N)_{ij}; i \leq j)$ in (3.2) and noticing that the $(A_N)_{ij}$ are uniformly bounded in $i$, $j$, $N$.

This lemma will be useful to estimate many variances. Now, we recall some useful properties of the resolvent (see [K-K-P], [C-D]). For any Hermitian matrix $M$, we denote its spectrum by ${\rm Spect}(M)$.

Lemma 3.2.
For a Hermitian or symmetric matrix $M$ and any $z \in \mathbb{C}\setminus{\rm Spect}(M)$, we denote by $G(z) := (zI_N - M)^{-1}$ the resolvent of $M$. Let $z \in \mathbb{C}\setminus\mathbb{R}$.
(i) $\|G(z)\| \leq |{\rm Im}(z)|^{-1}$, where $\|\cdot\|$ denotes the operator norm.
(ii) $|G(z)_{ij}| \leq |{\rm Im}(z)|^{-1}$ for all $i, j = 1, \ldots, N$.
(iii) For $p \geq 2$,
$$\sum_{i,j=1}^N |G(z)_{ij}|^p \leq N\Big(\frac{1}{|{\rm Im}(z)|}\Big)^p.$$
(iv) The derivative with respect to $M$ of the resolvent $G(z)$ satisfies: $G'_M(z).B = G(z)BG(z)$ for any matrix $B$.
(v) Let $z \in \mathbb{C}$ be such that $|z| > \|M\|$; then
$$\|G(z)\| \leq \frac{1}{|z| - \|M\|}.$$

Proof: We just mention that (v) comes readily by noticing that the eigenvalues of the normal matrix $G(z)$ are the $(z - \lambda_i(M))^{-1}$, $i = 1, \ldots, N$.

We will also need the following estimations on the Stieltjes transform $g_\sigma$ of the semicircular distribution $\mu_{sc}$.

Lemma 3.3. $g_\sigma$ is analytic on $\mathbb{C}\setminus[-2\sigma, 2\sigma]$. For all $z \in \{z \in \mathbb{C} : {\rm Im}\,z \neq 0\}$:
$$\sigma^2 g_\sigma^2(z) - zg_\sigma(z) + 1 = 0. \qquad (3.3)$$
$$|g_\sigma(z)| \leq \frac{1}{|{\rm Im}\,z|}. \qquad (3.4)$$
$$|g_\sigma(z)| \geq \frac{1}{|z| + \frac{\sigma^2}{|{\rm Im}\,z|}}. \qquad (3.5)$$
$$|g_\sigma'(z)| = \Big|\int \frac{1}{(z-t)^2}\,d\mu_\sigma(t)\Big| \leq \frac{1}{|{\rm Im}(z)|^2}. \qquad (3.6)$$
For $a > 0$, $\theta \in \mathbb{R}$,
$$|a\,g_\sigma(z) - z + \theta| \geq |{\rm Im}(z)|. \qquad (3.7)$$
For all $z \in \{z \in \mathbb{C} : |z| > 2\sigma\}$:
$$|g_\sigma(z)| \leq \frac{1}{|z| - 2\sigma}. \qquad (3.8)$$
$$|g_\sigma'(z)| = \Big|\int \frac{1}{(z-t)^2}\,d\mu_\sigma(t)\Big| \leq \frac{1}{(|z| - 2\sigma)^2}. \qquad (3.9)$$
$$|g_\sigma(z)| \geq \frac{1}{|z| + \frac{\sigma^2}{|z| - 2\sigma}}. \qquad (3.10)$$

Proof: For the equation (3.3), we refer the reader to Section 3.1 of [B]. (3.7) is a consequence of ${\rm Im}(g_\sigma(z))\,{\rm Im}(z) < 0$. The other inequalities derive from (3.3) and the definition of $g_\sigma$.

4 Almost sure convergence of the first extremal eigenvalues

Sections 4.1, 4.2 and 4.3 below describe the different steps of the proof of the inclusion (2.16). We choose to develop the case of the complex Deformed Wigner model and just to point out some differences with the real model case (at the end of Section 4.3), since the approach is basically the same. In these sections, we will often refer the reader to the paper [C-D], where the authors deal with several independent non-Deformed Wigner matrices. The reader needs to fix $r = 1$, $m = 1$, $a_0 = 0$, $a_1 = \sigma$ and to change the notations ($\lambda$ into $z$, $G_n$ into $g_N$, $G$ into $g_\sigma$) in [C-D] in order to use the different proofs we refer to in the present framework. We shall denote by $P_k$ any polynomial of degree $k$ with positive coefficients and by $C$, $K$ any constants; $P_k$, $C$, $K$ can depend on the fixed eigenvalues of $A_N$ and may vary from line to line. We also adopt the following convention to simplify the writing: we sometimes state in the proofs below that a quantity $\Delta_N(z)$, $z \in \mathbb{C}\setminus\mathbb{R}$, is $O(1/N^p)$, $p = 1, 2$.
This means precisely that $|\Delta_N(z)| \leq (|z| + K)^l\,\frac{P_k(\frac{1}{|{\rm Im}(z)|})}{N^p}$ for some $k$ and some $l$; we give the precise majoration in the statements of the theorems and propositions. Section 4.4 explains how to deduce Theorem 2.1 from the inclusion (2.16).
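Before entering the proof, the characterization (3.3) of $g_\sigma$ can be sanity-checked numerically: $g_\sigma(z)$ is the root of the quadratic $\sigma^2 g^2 - zg + 1 = 0$ with ${\rm Im}(g)\,{\rm Im}(z) < 0$. The helper below is our sketch (function name and branch-selection logic are ours, not the paper's):

```python
import numpy as np

# Sanity check of relation (3.3): solve sigma^2 g^2 - z g + 1 = 0 and pick
# the Stieltjes-branch root, the one with Im(g) * Im(z) < 0.

def g_sc(z, sigma=1.0):
    disc = np.sqrt(z * z - 4.0 * sigma**2 + 0j)   # complex square root
    r1 = (z - disc) / (2.0 * sigma**2)
    r2 = (z + disc) / (2.0 * sigma**2)
    return r1 if r1.imag * z.imag < 0 else r2

z = 1.0 + 1.0j
g = g_sc(z)
residual = g * g - z * g + 1.0                    # should vanish (sigma = 1)
far = g_sc(100.0 + 1.0j)                          # far from the support: g ~ 1/z
# At z = rho_theta = theta + sigma^2/theta (theta = 2, sigma = 1), the
# quadratic gives g_sigma(rho_theta) = 1/theta, i.e. approximately 0.5.
g_rho = g_sc(2.5 + 1e-9j)
print(abs(residual), g, g_rho)
```

The value $g_\sigma(\rho_\theta) = 1/\theta$ is a direct consequence of (3.3) at $z = \rho_\theta$ and is the algebraic reason the limits $\rho_{\theta_j}$ of Theorem 2.1 take the form $\theta_j + \sigma^2/\theta_j$.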
4.1 The master equation

4.1.1 A first master inequality

In order to obtain a master equation for $g_N(z)$, we first consider the Gaussian case, i.e. $X_N = X_N^G$ is distributed according to the GUE$(N, \frac{\sigma^2}{N})$ distribution.²

Let us recall the integration by parts formula for the Gaussian distribution.

Lemma 4.1. Let $\Phi$ be a complex-valued $C^1$ function on $(M_N(\mathbb{C})_{sa})$ and $X_N \sim$ GUE$(N, \frac{\sigma^2}{N})$. Then,
$$E[\Phi'(X_N).H] = \frac{N}{\sigma^2}\, E[\Phi(X_N)\,{\rm Tr}(X_N H)] \qquad (4.1)$$
for any Hermitian matrix $H$, or by linearity for $H = E_{jk}$, $1 \leq j, k \leq N$, where $(E_{jk})_{1 \leq j,k \leq N}$ is the canonical basis of the space of complex $N \times N$ matrices.

We apply the above lemma to the function $\Phi(X_N) = (G_N(z))_{ij} = ((zI_N - X_N - A_N)^{-1})_{ij}$, $z \in \mathbb{C}\setminus\mathbb{R}$, $1 \leq i, j \leq N$. In order to simplify the notation, we write $(G_N(z))_{ij} = G_{ij}$. We obtain, for $H = E_{ij}$:
$$E((GHG)_{ij}) = \frac{N}{\sigma^2}\, E(G_{ij}\,{\rm Tr}(X_N H)), \quad \text{i.e.} \quad E(G_{ii}G_{jj}) = \frac{N}{\sigma^2}\, E(G_{ij}(X_N)_{ji}).$$
Now, we consider the normalized sum $\frac{1}{N^2}\sum_{ij}$ of the previous identities to obtain:
$$E(({\rm tr}_N G)^2) = \frac{1}{\sigma^2}\, E({\rm tr}_N(GX_N)).$$
Then, since
$$GX_N = (zI_N - X_N - A_N)^{-1}(X_N + A_N - zI_N - A_N + zI_N) = -I_N - GA_N + zG,$$
we obtain the following master equation:
$$E(({\rm tr}_N G)^2) + \frac{1}{\sigma^2}\Big(-zE({\rm tr}_N G) + 1 + \frac{1}{N}E({\rm Tr}\,GA_N)\Big) = 0.$$
Now, it is well known (see [C-D], [H-T] and Lemma 3.1) that
$${\rm Var}({\rm tr}_N(G)) \leq \frac{C}{|{\rm Im}\,z|^4 N^2};$$
thus, we obtain:

Proposition 4.1. The Stieltjes transform $g_N$ satisfies the following inequality:
$$\Big|\sigma^2 g_N^2(z) - zg_N(z) + 1 + \frac{1}{N}E({\rm Tr}(G_N(z)A_N))\Big| \leq \frac{C}{|{\rm Im}\,z|^4 N^2}. \qquad (4.2)$$

Note that since $A_N$ is of finite rank, $|E({\rm Tr}(G_N(z)A_N))| \leq C$, where $C$ is a constant independent of $N$ (depending on the eigenvalues of $A_N$ and on $z$).

We now explain how to obtain the equation corresponding to (4.2) in the Wigner case. Since the computations are the same as in [C-D]³ and [K-K-P]⁴, we just give some hints of the proof.

Step 1: The integration by parts formula for the Gaussian distribution is replaced by the following tool:

² Throughout this section, we will drop the subscript G in the interest of clarity.
³ This paper treats the case of several independent non-Deformed Wigner matrices.
⁴ The authors considered a non-Deformed Wigner matrix in the symmetric real setting.
Lemma 4.2. Let $\xi$ be a real-valued random variable such that $E(|\xi|^{p+2}) < \infty$. Let $\phi$ be a function from $\mathbb{R}$ to $\mathbb{C}$ whose first $p+1$ derivatives are continuous and bounded. Then,
$$E(\xi\phi(\xi)) = \sum_{a=0}^{p} \frac{\kappa_{a+1}}{a!}\, E(\phi^{(a)}(\xi)) + \epsilon, \qquad (4.3)$$
where the $\kappa_a$ are the cumulants of $\xi$ and $|\epsilon| \leq C \sup_t |\phi^{(p+1)}(t)|\, E(|\xi|^{p+2})$, $C$ depending on $p$ only.

We apply this lemma with the function $\phi(\xi)$ given, as before, by $\phi(\xi) = G_{ij}$, where $\xi$ is now one of the variables ${\rm Re}((X_N)_{kl})$, ${\rm Im}((X_N)_{kl})$. Note that, since the above random variables are symmetric, only the odd derivatives in (4.3) give a non-null term. Moreover, as we are concerned with an estimation of $g_N$ of order $\frac{1}{N^2}$, we only need to consider (4.3) up to the third derivative (see [C-D]). The computation of the first derivative will provide the same term as in the Gaussian case.

Step 2: Study of the third derivative. We refer to [C-D] or [K-K-P] for a detailed study of the third derivative. Using some bounds on $G$ (see Lemma 3.2), we can prove that the only term arising from the third derivative in the master equation and giving a contribution of order $\frac{1}{N}$ is
$$\frac{\kappa_4}{2N}\, E\Big[\Big(\frac{1}{N}\sum_{k=1}^N G_{kk}^2\Big)^2\Big].$$
In conclusion, the first master equation in the Wigner case reads as follows:

Theorem 4.1. For $z \in \mathbb{C}\setminus\mathbb{R}$, $g_N(z)$ satisfies
$$\Big|\sigma^2 g_N(z)^2 - zg_N(z) + 1 + \frac{1}{N}E[{\rm Tr}(G_N(z)A_N)] + \frac{\kappa_4}{2N}\, E\Big[\Big(\frac{1}{N}\sum_{k=1}^N (G_N(z))_{kk}^2\Big)^2\Big]\Big| \leq \frac{P_6(\frac{1}{|{\rm Im}(z)|})}{N^2} \qquad (4.4)$$
where $\kappa_4$ is the fourth cumulant of the distribution $\mu$.

4.1.2 Estimation of $g_N - g_\sigma$

Since
$$\frac{1}{N}\big|E[{\rm Tr}(G_N(z)A_N)]\big| + \frac{\kappa_4}{2N}\, E\Big[\Big(\frac{1}{N}\sum_{k=1}^N (G_N(z))_{kk}^2\Big)^2\Big] \leq \frac{P_4(\frac{1}{|{\rm Im}(z)|})}{N},$$
Theorem 4.1 implies that for any $z \in \mathbb{C}\setminus\mathbb{R}$,
$$\big|\sigma^2 g_N(z)^2 - zg_N(z) + 1\big| \leq \frac{P_6(\frac{1}{|{\rm Im}(z)|})}{N}. \qquad (4.5)$$
To estimate $g_N - g_\sigma$ from the equation (3.3) satisfied by the Stieltjes transform $g_\sigma$ on the one hand, and from the equation (4.5) on the other hand, we follow the method initiated in [H-T] and [S]. We do not develop it here since it follows exactly the lines of Section 3.4 in [C-D], but we briefly recall the main arguments and results which will be useful later on. We define the open connected set
$$O_N = \Big\{z \in \mathbb{C},\ {\rm Im}(z) > 0,\ \frac{1}{N}\, P_6\Big(\frac{1}{{\rm Im}\,z}\Big)\Big(\frac{\sigma^2}{{\rm Im}(z)} + |z|\Big) < \frac{{\rm Im}\,z}{4}\Big\}.$$
One can prove that for any $z$ in $O_N$,
$$g_N(z) \neq 0 \quad \text{and} \quad \frac{1}{|g_N(z)|} \leq 2\Big(\frac{\sigma^2}{{\rm Im}(z)} + |z|\Big). \qquad (4.6)$$
Moreover,
$$\Lambda_N(z) := \sigma^2 g_N(z) + \frac{1}{g_N(z)}$$
is such that
$$|\Lambda_N(z) - z| \leq \frac{1}{N}\, P_6\Big(\frac{1}{{\rm Im}\,z}\Big)\, 2\Big(\frac{\sigma^2}{{\rm Im}\,z} + |z|\Big) \qquad (4.7)$$
and we have
$${\rm Im}(\Lambda_N(z)) \geq \frac{{\rm Im}(z)}{2} > 0. \qquad (4.8)$$
Writing the equation (3.3) at the point $\Lambda_N(z)$, we easily get that
$$g_N(z) = g_\sigma(\Lambda_N(z)) \qquad (4.9)$$
on the non-empty open subset $O'_N = \{z \in O_N, {\rm Im}\,z > 2\sigma\}$, and then on $O_N$ by the principle of uniqueness of continuation. This allows us to get an estimation of $g_N(z) - g_\sigma(z)$ on $O_N$ and then to deduce:

Proposition 4.2. For any $z \in \mathbb{C}$ such that ${\rm Im}(z) > 0$,
$$|g_N(z) - g_\sigma(z)| \leq (|z| + K)\,\frac{P_9(\frac{1}{{\rm Im}\,z})}{N}. \qquad (4.10)$$

4.1.3 Study of the additional term $E[{\rm Tr}(A_N G_N(z))]$

From now on and until the end of Section 4.1, we denote by $\gamma_1, \ldots, \gamma_r$ the non-null eigenvalues of $A_N$ ($\gamma_i = \theta_j$ for some $j \neq j_0$) in order to simplify the writing. Let $U := U_N$ be a unitary matrix such that $A_N = U^* \Delta U$, where $\Delta$ is the diagonal matrix with entries $\Delta_{ii} = \gamma_i$, $1 \leq i \leq r$; $\Delta_{ii} = 0$, $i > r$. We set
$$h_N(z) = E[{\rm Tr}(A_N G_N(z))] = \sum_{k=1}^r \gamma_k \sum_{i,j=1}^N U^*_{ik} U_{kj}\, E[G_{ji}]. \qquad (4.11)$$
Our aim is to express $h_N(z)$ in terms of the Stieltjes transform $g_N(z)$ for large $N$, using the integration by parts formula. Note that since we want an estimation of order $O(\frac{1}{N^2})$ in the master inequality (4.4), we only need an estimation of $h_N(z)$ of order $O(\frac{1}{N})$. As in the previous subsection, we first write the equation in the Gaussian case and then study the additional term (third derivative) in the Wigner case.

a) Gaussian case. Apply the formula (4.1) to $\Phi(X_N) = G_{jl}$ and $H = E_{il}$ to get
$$E[G_{ji} G_{ll}] = \frac{N}{\sigma^2}\, E[G_{jl}(X_N)_{li}]$$
and, summing over $l$,
$$\sum_{l=1}^N E[G_{ji} G_{ll}] = \frac{N}{\sigma^2}\, E[(GX_N)_{ji}].$$
Expressing $GX_N$ in terms of $GA_N$, we obtain:
$$I_{ji} := \sigma^2 E[G_{ji}\,{\rm tr}_N(G)] + \delta_{ij} - zE[G_{ji}] + E[(GA_N)_{ji}] = 0. \qquad (4.12)$$
Now, we consider the sum $\sum_{i,j} U^*_{ik} U_{kj} I_{ji}$, $k = 1, \ldots, r$ fixed, and we denote $\alpha_k = \sum_{i,j} U^*_{ik} U_{kj} G_{ji} = (UGU^*)_{kk}$. Then, we have the following equality, using that $U$ is unitary:
$$\sigma^2 E[\alpha_k\,{\rm tr}_N(G)] + 1 - zE[\alpha_k] + \sum_{i,j} U^*_{ik} U_{kj}\, E[(GA_N)_{ji}] = 0.$$
12 ow, Uik U kje[(ga ) ji ] = E[(UGA U ) kk ] = E[(UGU UU ) kk ] i,j = γ k E[(UGU ) kk ] = γ k E[α k ]. Therefore, σ 2 E[α k tr (G)] + + (γ k z)e[α k ] = 0. Since α k is bounded and Var(tr (G)) = O( 2 ), we obtain E[α k ](σ 2 g (z) + γ k z) + = O( ). (4.3) Then using (4.0) we deduce that E[α k ](σ 2 g σ (z) + γ k z) + = O( ) and using (3.7) h (z) = r γ k E[α k ] = k= r k= γ k z σ 2 + O( ). (4.4) g σ (z) γ k b) The general Wigner case We shall prove that (4.3) still holds. We now rely on Lemma 4.2 to obtain the analogue of (4.2) [ ] J ij := σ 2 E[G ji tr (G)] + δ ij ze[g ji ] + E[(GA ) ji ] + κ 4 E[A i,j,l ] = O( ). (4.5) 6 2 The term A i,j,l is a fixed linear combination of the third derivative of Φ := G jl with respect to Re(X ) il (i.e. in the direction e il = E il + E li ) and Im(X ) il (i.e. in the direction f il := (E il E li )). We don t need to write the exact form of this term since we just want to show that this term will give a contribution of order O( ) in the equation for h (z). Let us write the derivative in the direction e il : which is the sum of eight terms of the form: E[(Ge il Ge il Ge il G) jl ] where if i 2q+ = i (resp. l), then i 2q+2 = l (resp. i), q = 0,, 2. Lemma 4.3. Let k r fixed, then for a numerical constant C. F () := i,j= l= E[G ji G i2i 3 G i4i 5 G i6l] (4.6) U iku kj E[A i,j,l ] C Imz 4 (4.7) Proof: F () is the sum of eight terms corresponding to (4.6). Let us write for example the term corresponding to i = i, i 3 = i, i 5 = i: l= Uik U kje[g ji G li G li G ll ] i,j,l = E Uik (UG) kig li G li G ll i,l [ ] = E Uik (UG) ki(g T G D G T ) ii i where the superscript T denotes the transpose of the matrix and G D is the diagonal matrix with entries G ii. From the bounds G (z) Imz and U =, we get the bound given in the 2
13 lemma. We give the majoration for the term corresponding to i = l, i 3 = l, i 5 = l: Uik U kje[g jl G 3 il ] i,j,l = E Uik (UG) klg 3 il i,l [ ] Its absolute value is bounded by E i,l G il 3 Imz and thanks to lemma 3.2 by Imz 4. The other terms are treated in the same way. As in the Gaussian case, we now consider the sum i,j U ik U kjj ji. From Lemma 4.3 and the bound (using Cauchy-Schwarz inequality) Uik U kj, i,j= we still get (4.3) and thus (4.4). More precisely, we proved Proposition 4.3. Let h (z) = E[Tr(A G (z))], then r γ k h (z) z σ 2 P ( Im(z) ) (K + z ). g σ (z) γ k k= [ ( ) ] Convergence of E k= G2 kk We now study the last term in the master inequality of Theorem 4.. For the non Deformed Wigner matrices, it is shown in [K-K-P] that ( ) 2 R (z) := E G 2 kk g4 σ(z). k= Moreover, Proposition 3.2 in [C-D], in the more general setting of several independent Wigner matrices, gives an estimate of R (z) gσ 4 (z). The above convergence holds true in the Deformed case. We just give some hints of the proof of the estimate of R (z) gσ(z) 4 since the computations are almost the same as in the non Deformed case. Let us set d (z) = G 2 kk. We start from the resolvent identity and zd (z) = zg kk = + k= (M ) kl G lk l= = + (A ) kl G lk + (X ) kl G lk G kk + k= l= l= (A G) kk G kk + k= (X ) kl G lk G kk. For the last term, we apply an integration by part formula (Lemma 4.2) to obtain (see [K-K-P], [C-D]) [ ] E (X ) kl G lk G kk = σ 2 E ( G kk )d (z) + O( ). k,l= k= k,l= 3
It remains to see that the additional term due to $A_N$ is of order $O(1/N)$:
$$\frac1N\sum_{k=1}^N (A_N G)_{kk}\, G_{kk} = \frac1N\sum_{p=1}^r \gamma_p\, (U^* G G^D U)_{pp}$$
and
$$\Big|\frac1N\sum_{k=1}^N (A_N G)_{kk}\, G_{kk}\Big| \le \frac1N\Big(\sum_{p=1}^r |\gamma_p|\Big)\frac{1}{|\Im z|^2}.$$
We thus obtain (again with the help of a variance estimate)
$$z\, E[d_N(z)] = g_N(z) + \sigma^2 g_N(z)\, E[d_N(z)] + O\Big(\frac1N\Big).$$
Then, using (4.10) and since $d_N(z)$ is bounded, we deduce that
$$z\, E[d_N(z)] = g_\sigma(z) + \sigma^2 g_\sigma(z)\, E[d_N(z)] + O\Big(\frac1N\Big).$$
Thus (using (3.7))
$$E[d_N(z)] = \frac{g_\sigma(z)}{z - \sigma^2 g_\sigma(z)} + O\Big(\frac1N\Big), \quad\text{with}\quad \frac{g_\sigma(z)}{z - \sigma^2 g_\sigma(z)} = g_\sigma^2(z).$$
Now, using some variance estimate,
$$E[d_N^2(z)] = \big(E[d_N(z)]\big)^2 + O\Big(\frac1N\Big) = g_\sigma^4(z) + O\Big(\frac1N\Big).$$
We can now give our final master inequality for $g_N(z)$, following our previous estimates:

Theorem 4.2. For $z \in \mathbb{C}$ such that $\Im(z) > 0$, $g_N(z)$ satisfies
$$\Big|\sigma^2 g_N^2(z) - z g_N(z) + 1 + \frac{E_\sigma(z)}{N}\Big| \le \frac{P_4\big(|\Im z|^{-1}\big)}{N^2}\,(|z| + K)$$
where
$$E_\sigma(z) = \sum_{k=1}^r \frac{\gamma_k}{z - \sigma^2 g_\sigma(z) - \gamma_k} + \frac{\kappa_4}{2}\, g_\sigma^4(z),$$
$\kappa_4$ being the fourth cumulant of the distribution $\mu$.

Note that $E_\sigma(z)$ can be written in terms of the distinct eigenvalues $\theta_j$ of $A_N$ as
$$E_\sigma(z) = \sum_{j=1,\, j \neq j_0}^{J} k_j\,\frac{\theta_j}{z - \sigma^2 g_\sigma(z) - \theta_j} + \frac{\kappa_4}{2}\, g_\sigma^4(z).$$
Let us set
$$L_\sigma(z) = g_\sigma(z)\, E\big((z - s)^{-2}\big)\, E_\sigma(z) \qquad (4.18)$$
where $s$ is a centered semicircular random variable with variance $\sigma^2$.

4.2 Estimation of $g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N}$

The method is roughly the same as the one described in Section 3.6 of [C-D]. Nevertheless, we choose to develop it here for the reader's convenience. We have, for any $z$ in $O_N$, by using (4.9),
$$g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N} = g_\sigma(z) - g_\sigma(\Lambda_N(z)) + \frac{L_\sigma(z)}{N}$$
$$= E\Big[\frac{1}{(z-s)(\Lambda_N(z)-s)}\Big(\Lambda_N(z) - z + \frac{g_\sigma(z)\, E_\sigma(z)}{N}\Big)\Big] + E\Big[\frac{1}{z-s}\Big\{\frac{1}{z-s} - \frac{1}{\Lambda_N(z)-s}\Big\}\Big]\,\frac{g_\sigma(z)\, E_\sigma(z)}{N}$$
$$\le \frac{2}{|\Im(z)|^2}\,\Big|\Lambda_N(z) - z + \frac{E_\sigma(z)\, g_\sigma(z)}{N}\Big| + \frac{P_8\big(|\Im(z)|^{-1}\big)}{N}\,\big|\Lambda_N(z) - z\big|\,(|z| + K)$$
where we made use of the estimates (3.5), (4.8), the bound $|z - x| \ge |\Im(z)|$ for $z \in \mathbb{C}\setminus\mathbb{R}$, $x \in \mathbb{R}$, and
$$|E_\sigma(z)| \le P_4\big(|\Im(z)|^{-1}\big) \quad\text{(using (3.7))}. \qquad (4.19)$$
Let us write
$$\Lambda_N(z) - z + \frac{E_\sigma(z)\, g_\sigma(z)}{N} = \frac{1}{g_N(z)}\Big\{\sigma^2 g_N^2(z) - z g_N(z) + 1 + \frac{E_\sigma(z)}{N}\Big\} + \frac{E_\sigma(z)}{N}\Big(g_\sigma(z) - \frac{1}{g_N(z)}\Big).$$
We get from Theorem 4.2, (4.6), (4.10), (4.19), (3.5)
$$\Big|\Lambda_N(z) - z + \frac{E_\sigma(z)\, g_\sigma(z)}{N}\Big| \le \frac{(|z| + K)^3\, P_5\big(|\Im z|^{-1}\big)}{N^2}.$$
Finally, using also (4.7), we get, for any $z$ in $O_N$,
$$\Big|g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N}\Big| \le \frac{(|z| + K)^3\, P_7\big(|\Im(z)|^{-1}\big)}{N^2}.$$
Now, for $z \notin O_N$ such that $\Im(z) > 0$, one has, by the very definition of $O_N$,
$$1 \le \frac{4\, P_6\big(|\Im(z)|^{-1}\big)\,\big(|z| + \sigma^2 |\Im(z)|^{-1}\big)}{N\,|\Im(z)|} \le \frac{(|z|+K)\, P_9\big(|\Im(z)|^{-1}\big)}{N},$$
while the trivial bounds on $g_\sigma$, $g_N$ and $L_\sigma$ give
$$\Big|g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N}\Big| \le \frac{(|z|+K)\, P_8\big(|\Im(z)|^{-1}\big)}{N}.$$
We get
$$\Big|g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N}\Big| \le \frac{(|z|+K)\, P_8\big(|\Im(z)|^{-1}\big)}{N}\cdot\frac{(|z|+K)\, P_9\big(|\Im(z)|^{-1}\big)}{N} \le \frac{(|z|+K)^2\, P_7\big(|\Im(z)|^{-1}\big)}{N^2}.$$
Thus, for any $z$ such that $\Im(z) > 0$,
$$\Big|g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N}\Big| \le \frac{(|z| + K)^3\, P_7\big(|\Im(z)|^{-1}\big)}{N^2}. \qquad (4.20)$$
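As a numerical sanity check of the kind of estimate just obtained (our own illustration, not taken from the paper): even at moderate $N$, the empirical Stieltjes transform of a rank-one Deformed Wigner matrix is already close to $g_\sigma(z)$, the deviation being of order $1/N$.

```python
import numpy as np

# Empirical check (ours) that g_N(z) ~ g_sigma(z) for a deformed Wigner matrix
# M_N = W_N/sqrt(N) + A_N with a single rank-one spike theta.
rng = np.random.default_rng(1)
sigma, N, theta, z = 1.0, 400, 2.0, 2.0j

def g_sc(z, sigma):
    # Stieltjes transform of the semicircle law: root of sigma^2 g^2 - z g + 1 = 0
    # with g(z) -> 0 as |z| -> infinity (the principal sqrt branch is correct for
    # the purely imaginary z used here).
    return (z - np.sqrt(z * z - 4 * sigma**2)) / (2 * sigma**2)

W = rng.normal(0, sigma, (N, N))
W = (W + W.T) / np.sqrt(2)                 # real symmetric Wigner matrix
A = np.zeros((N, N)); A[0, 0] = theta      # finite rank (rank-one) perturbation
eigs = np.linalg.eigvalsh(W / np.sqrt(N) + A)
g_N = np.mean(1.0 / (z - eigs))            # tr_N of the resolvent at z

assert abs(sigma**2 * g_sc(z, sigma)**2 - z * g_sc(z, sigma) + 1) < 1e-12
assert abs(g_N - g_sc(z, sigma)) < 0.05
```

The first assertion checks that `g_sc` solves the quadratic equation (3.3); the second is the convergence $g_N(z) \to g_\sigma(z)$ that the estimates above quantify.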
Let us denote for a while $g_N = g_N^{A_N}$ and $L_\sigma = L_\sigma^{A_N}$ in (4.20). Note that we get exactly the same estimate (4.20) dealing with $-A_N$ instead of $A_N$. Hence, since $g_\sigma(z) = -g_\sigma(-z)$, $g_N^{-A_N}(z) = -g_N^{A_N}(-z)$ (using the symmetry assumption on $\mu$) and $L_\sigma^{-A_N}(z) = -L_\sigma^{A_N}(-z)$, it readily follows that (4.20) is also valid for any $z$ such that $\Im z < 0$. In conclusion:

Proposition 4.4. For any $z \in \mathbb{C}\setminus\mathbb{R}$,
$$\Big|g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N}\Big| \le \frac{(|z| + K)^3\, P_7\big(|\Im(z)|^{-1}\big)}{N^2}. \qquad (4.21)$$

4.3 Inclusion of the spectrum of $M_N$

The following step now consists in deducing Proposition 4.6 from Proposition 4.4 (from which we will easily deduce the appropriate inclusion of the spectrum of $M_N$). Since this transition is based on the inverse Stieltjes transform, we start with establishing the fundamental Proposition 4.5 below, concerning the nature of $L_\sigma$. Note that one can rewrite $L_\sigma$ as
$$L_\sigma(z) = g_\sigma(z)\, E\big((z-s)^{-2}\big)\Big(\sum_{j=1}^J k_j\, \frac{\theta_j}{\frac{1}{g_\sigma(z)} - \theta_j} + \frac{\kappa_4}{2}\, g_\sigma^4(z)\Big). \qquad (4.22)$$
We recall that $J_{+\sigma}$ (resp. $J_{-\sigma}$) denotes the number of $j$'s such that $\theta_j > \sigma$ (resp. $\theta_j < -\sigma$). As in the introduction, we define
$$\rho_{\theta_j} = \theta_j + \frac{\sigma^2}{\theta_j} \qquad (4.23)$$
which is $> 2\sigma$ (resp. $< -2\sigma$) when $\theta_j > \sigma$ (resp. $\theta_j < -\sigma$).

Proposition 4.5. $L_\sigma$ is the Stieltjes transform of a distribution $\Lambda_\sigma$ with compact support
$$K_\sigma(\theta_1, \ldots, \theta_J) := \big\{\rho_{\theta_J};\, \rho_{\theta_{J-1}};\, \ldots;\, \rho_{\theta_{J - J_{-\sigma} + 1}}\big\} \cup [-2\sigma; 2\sigma] \cup \big\{\rho_{\theta_{J_{+\sigma}}};\, \rho_{\theta_{J_{+\sigma}-1}};\, \ldots;\, \rho_{\theta_1}\big\}.$$
As in [S], we will use the following characterization:

Theorem 4.3 ([T]). Let $\Lambda$ be a distribution on $\mathbb{R}$ with compact support. Define the Stieltjes transform of $\Lambda$, $l : \mathbb{C}\setminus\mathbb{R} \to \mathbb{C}$, by
$$l(z) = \Lambda\Big(\frac{1}{z - x}\Big).$$
Then $l$ is analytic in $\mathbb{C}\setminus\mathbb{R}$ and has an analytic continuation to $\mathbb{C}\setminus\mathrm{supp}(\Lambda)$. Moreover,
$(c_1)$ $l(z) \to 0$ as $|z| \to \infty$,
$(c_2)$ there exist a constant $C > 0$, an integer $n$ and a compact set $K \subset \mathbb{R}$ containing $\mathrm{supp}(\Lambda)$ such that, for any $z \in \mathbb{C}\setminus\mathbb{R}$, $|l(z)| \le C\max\{\mathrm{dist}(z, K)^{-n}, 1\}$,
$(c_3)$ for any $\varphi \in C^\infty(\mathbb{R}, \mathbb{R})$ with compact support,
$$\Lambda(\varphi) = -\frac{1}{\pi}\,\lim_{y \to 0^+} \Im \int_{\mathbb{R}} \varphi(x)\, l(x + iy)\, dx.$$
Conversely, if $K$ is a compact subset of $\mathbb{R}$ and if $l : \mathbb{C}\setminus K \to \mathbb{C}$ is an analytic function satisfying $(c_1)$ and $(c_2)$ above, then $l$ is the Stieltjes transform of a compactly supported distribution $\Lambda$ on $\mathbb{R}$. Moreover, $\mathrm{supp}(\Lambda)$ is exactly the set of singular points of $l$ in $K$.

In our proof of Proposition 4.5, we will refer to the following lemma, which gives several properties of $g_\sigma$.
Lemma 4.4. $g_\sigma$ is analytic and invertible on $\mathbb{C}\setminus[-2\sigma, 2\sigma]$ and its inverse $z_\sigma$ satisfies
$$z_\sigma(g) = \frac{1}{g} + \sigma^2 g, \qquad g \in g_\sigma\big(\mathbb{C}\setminus[-2\sigma, 2\sigma]\big).$$
(a) The complement of the support of $\mu_\sigma$ is characterized as follows:
$$x \in \mathbb{R}\setminus[-2\sigma, 2\sigma] \iff \exists\, g \in \mathbb{R}^* \text{ such that } \frac{1}{|g|} > \sigma \text{ and } x = z_\sigma(g).$$
(b) Given $x \in \mathbb{R}\setminus[-2\sigma, 2\sigma]$ and $\theta \in \mathbb{R}$ such that $|\theta| > \sigma$, one has
$$\frac{1}{g_\sigma(x)} = \theta \iff x = \theta + \frac{\sigma^2}{\theta} := \rho_\theta.$$
This lemma can be easily proved using, for example, the explicit expression of $g_\sigma$ (derived from (3.3)), namely, for all $x \in \mathbb{R}\setminus[-2\sigma, 2\sigma]$,
$$g_\sigma(x) = \frac{x}{2\sigma^2}\Big(1 - \sqrt{1 - \frac{4\sigma^2}{x^2}}\Big).$$

Proof of Proposition 4.5: Using (4.22), one readily sees that the set of singular points of $L_\sigma$ is
$$[-2\sigma; 2\sigma] \cup \Big\{x \in \mathbb{R}\setminus[-2\sigma, 2\sigma] \text{ such that } \frac{1}{g_\sigma(x)} \in \mathrm{Spect}(A_N)\Big\}.$$
Hence (using point (b) of Lemma 4.4), the set of singular points of $L_\sigma$ is exactly $K_\sigma(\theta_1, \ldots, \theta_J)$. Now, we are going to show that $L_\sigma$ satisfies $(c_1)$ and $(c_2)$ of Theorem 4.3. We have obviously
$$\big|z - \sigma^2 g_\sigma(z) - \theta_j\big| \ge |z - \theta_j| - \sigma^2 |g_\sigma(z)|.$$
Now, let $\alpha > 0$ be such that $\alpha > 2\sigma$ and, for any $j = 1, \ldots, J$, $\alpha - |\theta_j| > \frac{\sigma^2}{\alpha - 2\sigma}$. For any $z \in \mathbb{C}$ such that $|z| > \alpha$, according to (3.8), $\sigma^2|g_\sigma(z)| \le \frac{\sigma^2}{|z| - 2\sigma} \le \frac{\sigma^2}{\alpha - 2\sigma}$, so that
$$\big|z - \sigma^2 g_\sigma(z) - \theta_j\big| \ge |z - \theta_j| - \frac{\sigma^2}{\alpha - 2\sigma} > 0.$$
Using also (3.8), (3.9), (3.10), we readily get that, for $|z| > \alpha$,
$$|L_\sigma(z)| \le \frac{1}{(|z| - 2\sigma)^2}\Big(\sum_{j=1}^J k_j\, \frac{|\theta_j|}{|z - \theta_j| - \frac{\sigma^2}{\alpha - 2\sigma}} + \frac{\kappa_4}{2\,(|z| - 2\sigma)^4}\Big).$$
Then it is clear that $L_\sigma(z) \to 0$ when $|z| \to +\infty$, and $(c_1)$ is satisfied.

Now we follow the approach of [S] (Lemma 5.5) to prove $(c_2)$. Denote by $E$ the convex envelope of $K_\sigma(\theta_1, \ldots, \theta_J)$ and define the interval
$$K := \{x \in \mathbb{R};\ \mathrm{dist}(x, E) \le 1\} = \big[\min\{x \in K_\sigma(\theta_1, \ldots, \theta_J)\} - 1;\ \max\{x \in K_\sigma(\theta_1, \ldots, \theta_J)\} + 1\big]$$
and $D = \{z \in \mathbb{C};\ 0 < \mathrm{dist}(z, K) \le 1\}$.
Let $z \in D \cap \mathbb{C}\setminus\mathbb{R}$ with $\Re(z) \in K$. We have $\mathrm{dist}(z, K) = |\Im z|$. Using the upper bounds (3.4), (3.5), (3.6) and (3.7), we easily deduce that there exists some constant $C_0$ such that, for any $z \in D \cap \mathbb{C}\setminus\mathbb{R}$ with $\Re(z) \in K$,
$$|L_\sigma(z)| \le \frac{C_0}{|\Im z|^7} = \frac{C_0}{\mathrm{dist}(z, K)^7} = C_0\max\big(\mathrm{dist}(z, K)^{-7};\, 1\big).$$
Let $z \in D \cap \mathbb{C}\setminus\mathbb{R}$ with $\Re(z) \notin K$. Then $\mathrm{dist}(z, K_\sigma(\theta_1, \ldots, \theta_J)) \ge 1$. Since $L_\sigma$ is bounded on compact subsets of $\mathbb{C}\setminus K_\sigma(\theta_1, \ldots, \theta_J)$, we easily deduce that there exists some constant $C_1$ such that, for any $z \in D$ with $\Re(z) \notin K$,
$$|L_\sigma(z)| \le C_1 \le \frac{C_1}{\mathrm{dist}(z, K)^7} = C_1\max\big(\mathrm{dist}(z, K)^{-7};\, 1\big).$$
Since $L_\sigma(z) \to 0$ when $|z| \to +\infty$, $L_\sigma$ is bounded on $\mathbb{C}\setminus D$. Thus, there exists some constant $C_2$ such that, for any $z \in \mathbb{C}\setminus D$, $|L_\sigma(z)| \le C_2 = C_2\max(\mathrm{dist}(z, K)^{-7}; 1)$. Hence $(c_2)$ is satisfied with $C = \max(C_0, C_1, C_2)$ and $n = 7$, and Proposition 4.5 follows from Theorem 4.3. $\square$

We are now in position to deduce the following proposition from the estimate (4.21).

Proposition 4.6. For any smooth function $\varphi$ with compact support,
$$E[\mathrm{tr}_N(\varphi(M_N))] = \int \varphi\, d\mu_{sc} + \frac{1}{N}\,\Lambda_\sigma(\varphi) + O\Big(\frac{1}{N^2}\Big). \qquad (4.24)$$
Consequently, for $\varphi$ smooth, constant outside a compact set and such that $\mathrm{supp}(\varphi) \cap K_\sigma(\theta_1, \ldots, \theta_J) = \emptyset$,
$$\mathrm{tr}_N(\varphi(M_N)) = O\Big(\frac{1}{N^{4/3}}\Big) \quad a.s. \qquad (4.25)$$

Proof: Using the inverse Stieltjes transform, we get that, for any $\varphi$ in $C^\infty(\mathbb{R}, \mathbb{R})$ with compact support,
$$E[\mathrm{tr}_N(\varphi(M_N))] - \int \varphi\, d\mu_{sc} - \frac{1}{N}\Lambda_\sigma(\varphi) = -\frac{1}{\pi}\,\lim_{y \to 0^+} \Im \int_{\mathbb{R}} \varphi(x)\, r_N(x + iy)\, dx \qquad (4.26)$$
where $r_N = g_\sigma(z) - g_N(z) + \frac{L_\sigma(z)}{N}$ satisfies, according to Proposition 4.4, for any $z \in \mathbb{C}\setminus\mathbb{R}$,
$$|r_N(z)| \le \frac{1}{N^2}\,(|z| + K)^\alpha\, P_k\big(|\Im(z)|^{-1}\big) \qquad (4.27)$$
where $\alpha = 3$ and $k = 7$. We refer the reader to the Appendix of [C-D], where it is proved, using the ideas of [H-T], that
$$\limsup_{y \to 0^+} \Big|\int_{\mathbb{R}} \varphi(x)\, h(x + iy)\, dx\Big| \le C < +\infty \qquad (4.28)$$
when $h$ is an analytic function on $\mathbb{C}\setminus\mathbb{R}$ which satisfies
$$|h(z)| \le (|z| + K)^\alpha\, P_k\big(|\Im(z)|^{-1}\big). \qquad (4.29)$$
Dealing with $h(z) = N^2\, r_N(z)$, we deduce that
$$\limsup_{y \to 0^+} \Big|\int_{\mathbb{R}} \varphi(x)\, r_N(x + iy)\, dx\Big| \le \frac{C}{N^2} \qquad (4.30)$$
and then (4.24). Following the proof of Lemma 5.6 in [S], one can show that $\Lambda_\sigma(1) = 0$. Then, the rest of the proof of (4.25) sticks to the proof of Lemma 6.3 in [H-T] (using Lemma 3.1). $\square$

Following [H-T] (Theorem 6.4), we set $K' = K_\sigma(\theta_1, \ldots, \theta_J) + (-\frac{\varepsilon}{2}, \frac{\varepsilon}{2})$, $F = \{t \in \mathbb{R};\ d(t, K_\sigma(\theta_1, \ldots, \theta_J)) \ge \varepsilon\}$ and take $\varphi \in C^\infty(\mathbb{R}, \mathbb{R})$ such that $0 \le \varphi \le 1$, $\varphi(t) = 0$ for $t \in K'$ and $\varphi(t) = 1$ for $t \in F$. Then, according to (4.25), $\mathrm{tr}_N(\varphi(M_N)) = O(N^{-4/3})$ a.s. Since $\varphi \ge 1_F$, it follows that $\mathrm{tr}_N(1_F(M_N)) = O(N^{-4/3})$ a.s., and thus the number of eigenvalues of $M_N$ in $F$ is almost surely a $O(N^{-1/3})$ as $N$ goes to infinity. Since for each $N$ this number has to be an integer, we deduce that the number of eigenvalues of $M_N$ in $F$ is zero almost surely as $N$ goes to infinity. The fundamental inclusion (2.6) follows, namely: for any $\varepsilon > 0$, almost surely,
$$\mathrm{Spect}(M_N) \subset K_\sigma(\theta_1, \ldots, \theta_J) + (-\varepsilon, \varepsilon)$$
when $N$ goes to infinity.

Such a method can be carried out in the case of Wigner real symmetric matrices; then the approximate master equation is the following:
$$\sigma^2 g_N(z)^2 - z g_N(z) + 1 + \frac{\kappa_4}{2N}\, E\Big[\Big(\frac1N\sum_{k=1}^N G_{kk}(z)^2\Big)^2\Big] + \frac{\sigma^2}{N}\, E\big(\mathrm{tr}_N\, G_N(z)^2\big) + \frac1N\, E\big(\mathrm{Tr}[A_N G_N(z)]\big) = O\Big(\frac{1}{N^2}\Big).$$
Note that the additional term $\frac{\sigma^2}{N}\, E(\mathrm{tr}_N\, G_N(z)^2)$ already appears in the non-deformed GOE case in [S]. One can establish in a similar way the analogue of (4.10) and then, following the proof of Corollary 3.3 in [S], deduce that
$$E\big(\mathrm{tr}_N\, G_N(z)^2\big) = E\big((z - s)^{-2}\big) + O\Big(\frac1N\Big),$$
where $s$ is a centered semicircular variable with variance $\sigma^2$. Hence, by similar arguments as in the complex case, one gets the master equation
$$\sigma^2 g_N(z)^2 - z g_N(z) + 1 + \frac{E_\sigma(z)}{N} = O\Big(\frac{1}{N^2}\Big)$$
where
$$E_\sigma(z) = \sum_{j=1,\, j \neq j_0}^{J} k_j\,\frac{\theta_j}{z - \sigma^2 g_\sigma(z) - \theta_j} + \frac{\kappa_4}{2}\, g_\sigma^4(z) + \sigma^2\, E\big((z - s)^{-2}\big).$$
It can be proved that $L_\sigma(z) := g_\sigma(z)\, E\big((z - s)^{-2}\big)\, E_\sigma(z)$ is the Stieltjes transform of a distribution $\Lambda_\sigma$ with compact support $K_\sigma(\theta_1, \ldots, \theta_J)$ too. The last arguments hold likewise in the real symmetric case. Hence we have established:

Theorem 4.4. Let $J_{+\sigma}$ (resp. $J_{-\sigma}$) be the number of $j$'s such that $\theta_j > \sigma$ (resp. $\theta_j < -\sigma$). Then, for any $\varepsilon > 0$, almost surely, there is no eigenvalue of $M_N$ in
$$(-\infty,\ \rho_{\theta_J} - \epsilon) \cup (\rho_{\theta_J} + \epsilon,\ \rho_{\theta_{J-1}} - \epsilon) \cup \cdots \cup (\rho_{\theta_{J - J_{-\sigma}+1}} + \epsilon,\ -2\sigma - \epsilon) \cup (2\sigma + \epsilon,\ \rho_{\theta_{J_{+\sigma}}} - \epsilon) \cup \cdots \cup (\rho_{\theta_2} + \epsilon,\ \rho_{\theta_1} - \epsilon) \cup (\rho_{\theta_1} + \epsilon,\ +\infty) \qquad (4.31)$$
when $N$ is large enough.

Remark 4.1. As soon as $\epsilon > 0$ is small enough, that is, when
$$2\epsilon < \min\Big(\rho_{\theta_{j-1}} - \rho_{\theta_j},\ J - J_{-\sigma} + 2 \le j \le J \text{ or } 2 \le j \le J_{+\sigma};\ -2\sigma - \rho_{\theta_{J - J_{-\sigma}+1}};\ \rho_{\theta_{J_{+\sigma}}} - 2\sigma\Big),$$
the union (4.31) is made of non-empty disjoint intervals.
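Theorem 4.4 can be observed numerically. The following sketch (ours; the parameters are arbitrary) takes a single spike $\theta = 2 > \sigma = 1$, for which $\rho_\theta = \theta + \sigma^2/\theta = 2.5$: the largest eigenvalue of $M_N$ sits near $\rho_\theta$, isolated from the bulk whose edge stays near $2\sigma$, and the interval $(2\sigma + \epsilon,\ \rho_\theta - \epsilon)$ is free of eigenvalues.

```python
import numpy as np

# Spiked deformed Wigner experiment (our illustration of Theorem 4.4).
rng = np.random.default_rng(2)
sigma, N, theta = 1.0, 2000, 2.0
rho = theta + sigma**2 / theta             # predicted spike location: 2.5

W = rng.normal(0, sigma, (N, N))
W = (W + W.T) / np.sqrt(2)                 # real symmetric Wigner matrix
A = np.zeros((N, N)); A[0, 0] = theta      # rank-one deformation
eigs = np.sort(np.linalg.eigvalsh(W / np.sqrt(N) + A))

eps = 0.2
assert abs(eigs[-1] - rho) < eps           # lambda_1(M_N) near rho_theta
assert abs(eigs[-2] - 2 * sigma) < eps     # bulk edge near 2*sigma
# the interval (2*sigma + eps, rho - eps) contains no eigenvalue of M_N
assert np.sum((eigs > 2 * sigma + eps) & (eigs < rho - eps)) == 0
```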
4.4 Almost sure convergence of the first extremal eigenvalues

As announced in the introduction, Theorem 2.1 is the analogue of the main statement of [Bk-S] established for general spiked population models (1.1). The previous Theorem 4.4 is the main step of the proof since, now, we can quite easily adapt the arguments needed for the conclusion of [Bk-S], viewing the Deformed Wigner model (1.2) as the additive analogue of the spiked population model (1.1). Let us consider one of the positive eigenvalues $\theta_j$ of the $A_N$'s. We recall that this implies that $\lambda_{k_1 + \cdots + k_{j-1} + i}(A_N) = \theta_j$ for all $1 \le i \le k_j$. We want to show that, if $\theta_j > \sigma$ (i.e., with our notation, if $j \in \{1, \ldots, J_{+\sigma}\}$), the corresponding eigenvalues of $M_N$ almost surely jump above the right endpoint $2\sigma$ of the semicircle support:
$$\forall\, 1 \le i \le k_j, \quad \lambda_{k_1 + \cdots + k_{j-1} + i}(M_N) \to \rho_{\theta_j} \quad a.s.,$$
whereas the rest of the asymptotic spectrum of $M_N$ lies below $2\sigma$, with
$$\lambda_{k_1 + \cdots + k_{J_{+\sigma}} + 1}(M_N) \to 2\sigma \quad a.s.$$
Analogous results hold for the negative eigenvalues $\theta_j$ (see points (c) and (d) of Theorem 2.1). To describe the phenomenon, one can say that, when $N$ is large enough, the (first extremal) eigenvalues of $M_N$ can be viewed as a smoothed deformation of the (first extremal) eigenvalues of $A_N$. So, our main purpose now is to establish the link between the spectra of the matrices $M_N$ and $A_N$. According to the analysis made in the previous section (Proposition 4.5), we already know that the $\theta_j$'s are related to the $\rho_{\theta_j}$'s through the Stieltjes transform $g_\sigma$; more precisely, one has, for all $j$ such that $|\theta_j| > \sigma$, $g_\sigma(\rho_{\theta_j}) = \frac{1}{\theta_j}$. Actually, one can refine this analysis and state the following important Lemma 4.5 on $g_\sigma$. As before, we denote (recall Lemma 4.4) by $z_\sigma$ its inverse, which is given by $z_\sigma(g) = \frac{1}{g} + \sigma^2 g$. Using Lemma 4.4, one readily sees that the set ${}^c K_\sigma(\theta_1, \ldots, \theta_J)$ can be characterized as follows:
$$x \in {}^c K_\sigma(\theta_1, \ldots, \theta_J) \iff \exists\, g \in G_\sigma \text{ such that } x = z_\sigma(g) \qquad (4.32)$$
where $G_\sigma := \{g \in \mathbb{R}^* :\ \frac{1}{|g|} > \sigma \text{ and } \frac{1}{g} \notin \mathrm{Spect}(A_N)\}$. Obviously, when $x \in {}^c K_\sigma(\theta_1, \ldots, \theta_J)$, one has $g = g_\sigma(x)$.

Lemma 4.5. Let $[a, b]$ be a compact set contained in ${}^c K_\sigma(\theta_1, \ldots, \theta_J)$. Then:
(i) $\big[\frac{1}{g_\sigma(a)}, \frac{1}{g_\sigma(b)}\big] \subset (\mathrm{Spect}(A_N))^c$.
(ii) For all $0 < \hat\sigma < \sigma$, the interval $[z_{\hat\sigma}(g_\sigma(a));\ z_{\hat\sigma}(g_\sigma(b))]$ is contained in ${}^c K_{\hat\sigma}(\theta_1, \ldots, \theta_J)$ and
$$z_{\hat\sigma}(g_\sigma(b)) - z_{\hat\sigma}(g_\sigma(a)) \ge b - a.$$

Proof: The function $\frac{1}{g_\sigma}$ being increasing, (i) readily follows from (4.32). Noticing that $G_\sigma \subset G_{\hat\sigma}$ for all $\hat\sigma < \sigma$ implies (recall also that $g_\sigma$ decreases on $[a, b]$) that $[g_\sigma(b); g_\sigma(a)] \subset G_{\hat\sigma}$. Relation (4.32), combined with the fact that the function $z_{\hat\sigma}$ is decreasing on $[g_\sigma(b); g_\sigma(a)]$, leads to $[z_{\hat\sigma}(g_\sigma(a));\ z_{\hat\sigma}(g_\sigma(b))] \subset {}^c K_{\hat\sigma}(\theta_1, \ldots, \theta_J)$, and the first part of (ii) is stated. Now, we have
$$l_\sigma(\hat\sigma) := z_{\hat\sigma}(g_\sigma(b)) - z_{\hat\sigma}(g_\sigma(a)) = \frac{1}{g_\sigma(b)} - \frac{1}{g_\sigma(a)} + \hat\sigma^2\big(g_\sigma(b) - g_\sigma(a)\big).$$
Since $g_\sigma$ decreases on $[a, b]$, we have $g_\sigma(b) - g_\sigma(a) \le 0$ and thus $l_\sigma$ is decreasing on $\mathbb{R}_+$. Then the last point of (ii) follows, since $l_\sigma(\sigma) = b - a$. $\square$

Thanks to this lemma and the previous Theorem 4.4, one can state the asymptotic relation between the spectrum of $A_N$ and the one of $M_N$. Let $[a, b]$ be an interval contained in ${}^c K_\sigma(\theta_1, \ldots, \theta_J)$. By Theorem 4.4, $[a, b]$ is asymptotically outside the spectrum of $M_N$. Moreover, from Lemma 4.5 (i), there corresponds an interval $[a', b']$ outside the spectrum of $A_N$, i.e., there is an integer $i_N \in \{0, \ldots, N\}$ such that
$$\lambda_{i_N + 1}(A_N) < \frac{1}{g_\sigma(a)} := a' \quad\text{and}\quad \lambda_{i_N}(A_N) > \frac{1}{g_\sigma(b)} := b'. \qquad (4.33)$$
$a$ and $a'$ (resp. $b$ and $b'$) are linked as follows:
$$a = \rho_{a'} := a' + \frac{\sigma^2}{a'} \quad \big(\text{resp. } b = \rho_{b'}\big).$$
Our aim now is to prove that $[a, b]$ splits the eigenvalues of $M_N$ exactly as $[a', b']$ splits the spectrum of $A_N$. In [B-S2], one talks about the exact separation phenomenon.

Theorem 4.5. With $i_N$ satisfying (4.33), one has
$$P\big[\lambda_{i_N + 1}(M_N) < a \text{ and } \lambda_{i_N}(M_N) > b,\ \text{for all large } N\big] = 1. \qquad (4.34)$$

Remark 4.2. This result is the analogue of the main statement of [B-S2] (cf. Theorem 1.2 of [B-S2]) established in the spiked population setting (and in fact for quite general sample covariance matrices). Intuitively, the statement of Theorem 4.5 seems rather natural when $\sigma$ is close to zero. Indeed, when $N$ goes to $\infty$, since the spectrum of $\frac{W_N}{\sqrt N}$ is concentrated in $[-2\sigma, 2\sigma]$ (recall (2.9)), the spectrum of $M_N$ would be close to the one of $A_N$ as soon as $\sigma$ is close to zero (in other words, the spectrum of $M_N$, viewed as a deformation of the one of $A_N$, is continuous in $\sigma$ in a neighborhood of zero). Actually, this can be justified regardless of the size of $\sigma$, thanks to the following classical result (due to Weyl).

Lemma 4.6 (cf. [H-J]). Let $B$ and $C$ be two $N \times N$ Hermitian matrices. For any pair of integers $j, k$ such that $1 \le j, k \le N$ and $j + k \le N + 1$, we have
$$\lambda_{j + k - 1}(B + C) \le \lambda_j(B) + \lambda_k(C).$$
For any pair of integers $j, k$ such that $1 \le j, k \le N$ and $j + k \ge N + 1$, we have
$$\lambda_j(B) + \lambda_k(C) \le \lambda_{j + k - N}(B + C).$$
Note that this lemma is the additive analogue of the lemma of [B-S2] needed for the investigation of the spiked population model.

Proof of Theorem 4.5: With our choice of $[a, b]$ and the very definition of the spectrum of the $A_N$'s, one can consider $\epsilon > 0$ small enough such that, for all large $N$,
$$\lambda_{i_N + 1}(A_N) < \frac{1}{g_\sigma(a)} - \epsilon \quad\text{and}\quad \lambda_{i_N}(A_N) > \frac{1}{g_\sigma(b)} + \epsilon.$$
Given $L > 0$ and $k \ge 0$ (their sizes will be determined later), we introduce the matrices
$$W_N^{k,L} := \frac{1}{\sqrt{1 + \frac{k}{L}}}\, \frac{W_N}{\sqrt N} \quad\text{and}\quad M_N^{k,L} = A_N + W_N^{k,L}.$$
We also define
$$\sigma_{k,L} = \frac{\sigma}{\sqrt{1 + \frac{k}{L}}}, \qquad a_{k,L} = z_{\sigma_{k,L}}(g_\sigma(a)) \quad\text{and}\quad b_{k,L} = z_{\sigma_{k,L}}(g_\sigma(b))$$
where we recall that $z_{\sigma_{k,L}}(g) = \frac{1}{g} + \sigma_{k,L}^2\, g$. Note that, for all $L > 0$, one has $M_N^{0,L} = M_N$, $a_{0,L} = a$ and $b_{0,L} = b$. We first choose the size of $L$ as follows. We take $L_0$ large enough such that, for all $L \ge L_0$,
$$\max\Big(\frac{\sigma^2}{L}\big(|g_\sigma(a)| + |g_\sigma(b)|\big);\ \frac{3\sigma}{L}\Big) < \frac{b - a}{4}. \qquad (4.35)$$
From the very definition of the $a_{k,L}$'s and $b_{k,L}$'s, one can easily see that $b_{k,L} - a_{k,L} \ge b - a$ (using the last point of (ii) in Lemma 4.5) and that this choice of $L_0$ ensures that, for all $L \ge L_0$ and for all $k \ge 0$,
$$|a_{k+1,L} - a_{k,L}| < \frac{b - a}{4} \quad\text{and}\quad |b_{k+1,L} - b_{k,L}| < \frac{b - a}{4}.$$
Now, we fix $L$ such that $L \ge L_0$ and we write $a_k = a_{k,L}$, $b_k = b_{k,L}$ and $\sigma_k = \sigma_{k,L}$. Lemma 4.6 first gives that
$$\lambda_{i_N + 1}(M_N^{k,L}) \le a_k - \epsilon - \sigma_k^2\, g_\sigma(a) + \frac{1}{\sqrt{1 + \frac{k}{L}}}\, \lambda_1\Big(\frac{W_N}{\sqrt N}\Big) \quad\text{for } i_N < N \qquad (4.36)$$
and
$$\lambda_{i_N}(M_N^{k,L}) \ge b_k + \epsilon - \sigma_k^2\, g_\sigma(b) + \frac{1}{\sqrt{1 + \frac{k}{L}}}\, \lambda_N\Big(\frac{W_N}{\sqrt N}\Big) \quad\text{for } i_N > 0.$$
Furthermore, according to (2.9), the two first extremal eigenvalues of $\frac{W_N}{\sqrt N}$ are such that, almost surely, at least for $N$ large enough,
$$0 < \max\Big(\Big|\lambda_1\Big(\frac{W_N}{\sqrt N}\Big)\Big|,\ \Big|\lambda_N\Big(\frac{W_N}{\sqrt N}\Big)\Big|\Big) < 3\sigma.$$
Thus, for all $k$, almost surely, at least for $N$ large enough ($N$ does not depend on $k$),
$$0 < \frac{1}{\sqrt{1 + \frac{k}{L}}}\, \max\Big(\Big|\lambda_1\Big(\frac{W_N}{\sqrt N}\Big)\Big|,\ \Big|\lambda_N\Big(\frac{W_N}{\sqrt N}\Big)\Big|\Big) < 3\sigma_k.$$
As $\sigma_k \to 0$ when $k \to +\infty$, there is $K$ large enough such that, for all $k \ge K$,
$$\max\big(3\sigma_k - \sigma_k^2\, g_\sigma(a),\ 3\sigma_k + \sigma_k^2\, g_\sigma(b)\big) < \epsilon,$$
and then, a.s. for $N$ large enough,
$$\lambda_{i_N + 1}(M_N^{k,L}) < a_k \quad\text{if } i_N < N, \qquad (4.37)$$
$$\lambda_{i_N}(M_N^{k,L}) > b_k \quad\text{if } i_N > 0. \qquad (4.38)$$
Since (4.37), respectively (4.38), is obviously satisfied too for $i_N = N$, resp. $i_N = 0$, we have established that, for any $i_N \in \{0, \ldots, N\}$ and for all $k \ge K$,
$$P\big[\lambda_{i_N + 1}(M_N^{k,L}) < a_k \text{ and } \lambda_{i_N}(M_N^{k,L}) > b_k \text{ for all large } N\big] = 1.$$
In particular,
$$P\big[\lambda_{i_N + 1}(M_N^{K,L}) < a_K \text{ and } \lambda_{i_N}(M_N^{K,L}) > b_K \text{ for all large } N\big] = 1. \qquad (4.39)$$
Now, we shall show that, with probability 1, for large $N$, $[a_K, b_K]$ and $[a, b]$ split the eigenvalues of, respectively, $M_N^{K,L}$ and $M_N$, with equal amounts of eigenvalues to the left sides of the intervals. To this aim, we will proceed by induction on $k$ and show that, for all $k \ge 0$, $[a_k, b_k]$ and $[a, b]$ split the eigenvalues of $M_N^{k,L}$ and $M_N$ (recall that $M_N = M_N^{0,L}$) in exactly the same way.
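Lemma 4.6 is elementary to verify numerically; the following sketch (ours) checks both families of Weyl inequalities exhaustively on a pair of random symmetric matrices, with eigenvalues indexed in decreasing order as in the statement.

```python
import numpy as np

# Direct check (our sketch) of Weyl's inequalities from Lemma 4.6:
#   lambda_{j+k-1}(B+C) <= lambda_j(B) + lambda_k(C)   when j + k <= n + 1,
#   lambda_j(B) + lambda_k(C) <= lambda_{j+k-n}(B+C)   when j + k >= n + 1,
# with eigenvalues sorted in decreasing order (lam(X)[0] is the largest).
rng = np.random.default_rng(3)
n = 30
B = rng.normal(size=(n, n)); B = B + B.T
C = rng.normal(size=(n, n)); C = C + C.T

lam = lambda X: np.sort(np.linalg.eigvalsh(X))[::-1]
lB, lC, lBC = lam(B), lam(C), lam(B + C)

ok = True
for j in range(1, n + 1):
    for k in range(1, n + 1):
        if j + k <= n + 1:
            ok &= lBC[j + k - 2] <= lB[j - 1] + lC[k - 1] + 1e-9
        if j + k >= n + 1:
            ok &= lB[j - 1] + lC[k - 1] <= lBC[j + k - n - 1] + 1e-9
assert ok
```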
To begin, let us consider, for all $k \ge 0$, the set
$$E_k = \{\text{no eigenvalues of } M_N^{k,L} \text{ in } [a_k, b_k],\ \text{for all large } N\}.$$
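The exact separation property that this induction establishes can also be seen numerically. In the sketch below (ours; the parameters are arbitrary), $A_N$ has non-null eigenvalues $\{3, 2\}$, the interval $(a', b') = (2.3, 2.7)$ is a gap of $\mathrm{Spect}(A_N)$ with $i_N = 1$ eigenvalue above it, and the image interval $[a, b] = [\rho_{2.3}, \rho_{2.7}]$ is a gap of $\mathrm{Spect}(M_N)$ with exactly one eigenvalue above it.

```python
import numpy as np

# Numerical illustration (ours) of the exact separation of Theorem 4.5.
rng = np.random.default_rng(4)
sigma, N = 1.0, 2000
A_diag = np.zeros(N)
A_diag[:2] = [3.0, 2.0]            # Spect(A_N) = {3, 2, 0}; (2.3, 2.7) is a gap, i_N = 1
a_p, b_p = 2.3, 2.7
a, b = a_p + sigma**2 / a_p, b_p + sigma**2 / b_p   # a = rho_{a'}, b = rho_{b'}

W = rng.normal(0, sigma, (N, N))
W = (W + W.T) / np.sqrt(2)         # real symmetric Wigner matrix
eigs = np.linalg.eigvalsh(W / np.sqrt(N) + np.diag(A_diag))

assert np.sum(eigs > b) == 1                 # same count above the gap as for A_N
assert np.sum((eigs > a) & (eigs < b)) == 0  # [a, b] is free of eigenvalues of M_N
```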
arXiv:0706.036v2 [math.pr] 23 Feb 2011
The Annals of Probability 2009, Vol. 37, No. 1, 1–47. DOI: 10.1214/08-AOP394. © Institute of Mathematical Statistics, 2009.
THE LARGEST EIGENVALUES OF FINITE RANK DEFORMATION OF LARGE WIGNER MATRICES: CONVERGENCE AND NON-UNIVERSALITY OF THE FLUCTUATIONS
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More information1 Math 241A-B Homework Problem List for F2015 and W2016
1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let
More informationWe are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero
Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.
More informationB. Appendix B. Topological vector spaces
B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function
More informationAssessing the dependence of high-dimensional time series via sample autocovariances and correlations
Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Johannes Heiny University of Aarhus Joint work with Thomas Mikosch (Copenhagen), Richard Davis (Columbia),
More informationDISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES
DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES ADAM MASSEY, STEVEN J. MILLER, AND JOHN SINSHEIMER Abstract. Consider the ensemble of real symmetric Toeplitz
More informationThe Kadison-Singer Conjecture
The Kadison-Singer Conjecture John E. McCarthy April 8, 2006 Set-up We are given a large integer N, and a fixed basis {e i : i N} for the space H = C N. We are also given an N-by-N matrix H that has all
More informationBulk scaling limits, open questions
Bulk scaling limits, open questions Based on: Continuum limits of random matrices and the Brownian carousel B. Valkó, B. Virág. Inventiones (2009). Eigenvalue statistics for CMV matrices: from Poisson
More information1 Directional Derivatives and Differentiability
Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=
More informationSmall Ball Probability, Arithmetic Structure and Random Matrices
Small Ball Probability, Arithmetic Structure and Random Matrices Roman Vershynin University of California, Davis April 23, 2008 Distance Problems How far is a random vector X from a given subspace H in
More informationFailure of the Raikov Theorem for Free Random Variables
Failure of the Raikov Theorem for Free Random Variables Florent Benaych-Georges DMA, École Normale Supérieure, 45 rue d Ulm, 75230 Paris Cedex 05 e-mail: benaych@dma.ens.fr http://www.dma.ens.fr/ benaych
More informationCommutative Banach algebras 79
8. Commutative Banach algebras In this chapter, we analyze commutative Banach algebras in greater detail. So we always assume that xy = yx for all x, y A here. Definition 8.1. Let A be a (commutative)
More informationFluctuations of Random Matrices and Second Order Freeness
Fluctuations of Random Matrices and Second Order Freeness james mingo with b. collins p. śniady r. speicher SEA 06 Workshop Massachusetts Institute of Technology July 9-14, 2006 1 0.4 0.2 0-2 -1 0 1 2-2
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationFunctional Analysis HW #3
Functional Analysis HW #3 Sangchul Lee October 26, 2015 1 Solutions Exercise 2.1. Let D = { f C([0, 1]) : f C([0, 1])} and define f d = f + f. Show that D is a Banach algebra and that the Gelfand transform
More informationConvergence of spectral measures and eigenvalue rigidity
Convergence of spectral measures and eigenvalue rigidity Elizabeth Meckes Case Western Reserve University ICERM, March 1, 2018 Macroscopic scale: the empirical spectral measure Macroscopic scale: the empirical
More informationRandom Matrix: From Wigner to Quantum Chaos
Random Matrix: From Wigner to Quantum Chaos Horng-Tzer Yau Harvard University Joint work with P. Bourgade, L. Erdős, B. Schlein and J. Yin 1 Perhaps I am now too courageous when I try to guess the distribution
More informationQuantum Chaos and Nonunitary Dynamics
Quantum Chaos and Nonunitary Dynamics Karol Życzkowski in collaboration with W. Bruzda, V. Cappellini, H.-J. Sommers, M. Smaczyński Phys. Lett. A 373, 320 (2009) Institute of Physics, Jagiellonian University,
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationOn singular values distribution of a matrix large auto-covariance in the ultra-dimensional regime. Title
itle On singular values distribution of a matrix large auto-covariance in the ultra-dimensional regime Authors Wang, Q; Yao, JJ Citation Random Matrices: heory and Applications, 205, v. 4, p. article no.
More informationBasic Properties of Metric and Normed Spaces
Basic Properties of Metric and Normed Spaces Computational and Metric Geometry Instructor: Yury Makarychev The second part of this course is about metric geometry. We will study metric spaces, low distortion
More informationare Banach algebras. f(x)g(x) max Example 7.4. Similarly, A = L and A = l with the pointwise multiplication
7. Banach algebras Definition 7.1. A is called a Banach algebra (with unit) if: (1) A is a Banach space; (2) There is a multiplication A A A that has the following properties: (xy)z = x(yz), (x + y)z =
More informationEigenvalue Statistics for Toeplitz and Circulant Ensembles
Eigenvalue Statistics for Toeplitz and Circulant Ensembles Murat Koloğlu 1, Gene Kopp 2, Steven J. Miller 1, and Karen Shen 3 1 Williams College 2 University of Michigan 3 Stanford University http://www.williams.edu/mathematics/sjmiller/
More informationThe deterministic Lasso
The deterministic Lasso Sara van de Geer Seminar für Statistik, ETH Zürich Abstract We study high-dimensional generalized linear models and empirical risk minimization using the Lasso An oracle inequality
More information2. The Concept of Convergence: Ultrafilters and Nets
2. The Concept of Convergence: Ultrafilters and Nets NOTE: AS OF 2008, SOME OF THIS STUFF IS A BIT OUT- DATED AND HAS A FEW TYPOS. I WILL REVISE THIS MATE- RIAL SOMETIME. In this lecture we discuss two
More informationSemicircle law on short scales and delocalization for Wigner random matrices
Semicircle law on short scales and delocalization for Wigner random matrices László Erdős University of Munich Weizmann Institute, December 2007 Joint work with H.T. Yau (Harvard), B. Schlein (Munich)
More informationTools from Lebesgue integration
Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given
More informationON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES
Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.
More informationAN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim
More informationRepresentations of moderate growth Paul Garrett 1. Constructing norms on groups
(December 31, 2004) Representations of moderate growth Paul Garrett Representations of reductive real Lie groups on Banach spaces, and on the smooth vectors in Banach space representations,
More informationRectangular Young tableaux and the Jacobi ensemble
Rectangular Young tableaux and the Jacobi ensemble Philippe Marchal October 20, 2015 Abstract It has been shown by Pittel and Romik that the random surface associated with a large rectangular Young tableau
More informationA NEW PROOF OF THE ATOMIC DECOMPOSITION OF HARDY SPACES
A NEW PROOF OF THE ATOMIC DECOMPOSITION OF HARDY SPACES S. DEKEL, G. KERKYACHARIAN, G. KYRIAZIS, AND P. PETRUSHEV Abstract. A new proof is given of the atomic decomposition of Hardy spaces H p, 0 < p 1,
More informationSpectral law of the sum of random matrices
Spectral law of the sum of random matrices Florent Benaych-Georges benaych@dma.ens.fr May 5, 2005 Abstract The spectral distribution of a matrix is the uniform distribution on its spectrum with multiplicity.
More informationInvertibility of random matrices
University of Michigan February 2011, Princeton University Origins of Random Matrix Theory Statistics (Wishart matrices) PCA of a multivariate Gaussian distribution. [Gaël Varoquaux s blog gael-varoquaux.info]
More informationMetric Spaces and Topology
Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies
More informationRandom Toeplitz Matrices
Arnab Sen University of Minnesota Conference on Limits Theorems in Probability, IISc January 11, 2013 Joint work with Bálint Virág What are Toeplitz matrices? a0 a 1 a 2... a1 a0 a 1... a2 a1 a0... a (n
More informationSecond Order Freeness and Random Orthogonal Matrices
Second Order Freeness and Random Orthogonal Matrices Jamie Mingo (Queen s University) (joint work with Mihai Popa and Emily Redelmeier) AMS San Diego Meeting, January 11, 2013 1 / 15 Random Matrices X
More informationMASTERS EXAMINATION IN MATHEMATICS SOLUTIONS
MASTERS EXAMINATION IN MATHEMATICS PURE MATHEMATICS OPTION SPRING 010 SOLUTIONS Algebra A1. Let F be a finite field. Prove that F [x] contains infinitely many prime ideals. Solution: The ring F [x] of
More informationPart V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory
Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite
More informationarxiv: v1 [math.pr] 22 May 2008
THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered
More informationFluctuations of random tilings and discrete Beta-ensembles
Fluctuations of random tilings and discrete Beta-ensembles Alice Guionnet CRS (E S Lyon) Workshop in geometric functional analysis, MSRI, nov. 13 2017 Joint work with A. Borodin, G. Borot, V. Gorin, J.Huang
More informationarxiv: v1 [math.pr] 22 Dec 2018
arxiv:1812.09618v1 [math.pr] 22 Dec 2018 Operator norm upper bound for sub-gaussian tailed random matrices Eric Benhamou Jamal Atif Rida Laraki December 27, 2018 Abstract This paper investigates an upper
More informationarxiv: v1 [math-ph] 19 Oct 2018
COMMENT ON FINITE SIZE EFFECTS IN THE AVERAGED EIGENVALUE DENSITY OF WIGNER RANDOM-SIGN REAL SYMMETRIC MATRICES BY G.S. DHESI AND M. AUSLOOS PETER J. FORRESTER AND ALLAN K. TRINH arxiv:1810.08703v1 [math-ph]
More informationANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2.
ANALYSIS QUALIFYING EXAM FALL 27: SOLUTIONS Problem. Determine, with justification, the it cos(nx) n 2 x 2 dx. Solution. For an integer n >, define g n : (, ) R by Also define g : (, ) R by g(x) = g n
More informationRandom Matrices: Invertibility, Structure, and Applications
Random Matrices: Invertibility, Structure, and Applications Roman Vershynin University of Michigan Colloquium, October 11, 2011 Roman Vershynin (University of Michigan) Random Matrices Colloquium 1 / 37
More information285K Homework #1. Sangchul Lee. April 28, 2017
285K Homework #1 Sangchul Lee April 28, 2017 Problem 1. Suppose that X is a Banach space with respect to two norms: 1 and 2. Prove that if there is c (0, such that x 1 c x 2 for each x X, then there is
More information