Local Semicircle Law and Complete Delocalization for Wigner Random Matrices


Citation: Erdős, László, Benjamin Schlein, and Horng-Tzer Yau. "Local Semicircle Law and Complete Delocalization for Wigner Random Matrices." Communications in Mathematical Physics 287 (2) (September 24). doi:10.1007/s… . Accessed January 2, 2018. This article was downloaded from Harvard University's DASH repository and is made available under the terms and conditions applicable to Open Access Policy Articles.

arXiv: v4 [math-ph] 23 Sep 2008

Local semicircle law and complete delocalization for Wigner random matrices

László Erdős¹, Benjamin Schlein¹ and Horng-Tzer Yau²

¹ Institute of Mathematics, University of Munich, Theresienstr. 39, Munich, Germany
² Department of Mathematics, Harvard University, Cambridge, MA 02138, USA

Jun 6, 2008

Abstract. We consider $N \times N$ Hermitian random matrices with independent, identically distributed entries. The matrix is normalized so that the average spacing between consecutive eigenvalues is of order $1/N$. Under suitable assumptions on the distribution of the single matrix element, we prove that, away from the spectral edges, the density of eigenvalues concentrates around the Wigner semicircle law on energy scales $\eta \geq N^{-1}(\log N)^8$. Up to the logarithmic factor, this is the smallest energy scale for which the semicircle law may be valid. We also prove that for all eigenvalues away from the spectral edges, the $\ell^\infty$-norm of the corresponding eigenvectors is of order $O(N^{-1/2})$, modulo logarithmic corrections. The upper bound $O(N^{-1/2})$ implies that every eigenvector is completely delocalized, i.e., the maximum size of the components of the eigenvector is of the same order as their average size. In the Appendix, we include a lemma by J. Bourgain which removes one of our assumptions on the distribution of the matrix elements.

AMS Subject Classification: 15A52, 82B44
Running title: Local semicircle law
Key words: Semicircle law, Wigner random matrix, random Schrödinger operator, density of states, localization, extended states.

Supported by the Sofja-Kovalevskaya Award of the Humboldt Foundation. On leave from Cambridge University, UK. Partially supported by NSF grant DMS.

1 Introduction

The Wigner semicircle law states that the empirical density of the eigenvalues of a random matrix is given by the universal semicircle distribution. This statement has been proved for many different ensembles, in particular for the case when the distributions of the entries of the matrix are independent, identically distributed (i.i.d.). To fix the scaling, we normalize the matrix so that the bulk of the spectrum lies in the energy interval $[-2, 2]$, i.e., the average spacing between consecutive eigenvalues is of order $1/N$. We now consider a window of size $\eta$ in the bulk so that the typical number of eigenvalues is of order $N\eta$. In the usual statement of the semicircle law, $\eta$ is a fixed number independent of $N$ and it is taken to zero only after the limit $N \to \infty$. This can be viewed as the largest scale on which the semicircle law is valid. On the other extreme, for the smallest scale, one may take $\eta = k/N$ and take the limit $N \to \infty$ followed by $k \to \infty$. If the semicircle law is valid in this sense, we shall say that the local semicircle law holds. Below this smallest scale, the eigenvalue distribution is expected to be governed by the Dyson statistics related to sine kernels. The Dyson statistics was proved for many ensembles (see [1, 4] for a review), including Wigner matrices with Gaussian convoluted distributions [7].

In this paper, we establish the local semicircle law up to logarithmic factors in the energy scale, i.e., for $\eta \geq N^{-1}(\log N)^8$. The result holds for any energy window in the bulk spectrum away from the spectral edges. In [5] we have proved the same statement for $\eta \gg N^{-2/3}$ (modulo logarithmic corrections). Prior to our work the best result was obtained in [2] for $\eta \gg N^{-1/2}$. See also [6] and [8] for related and earlier results. As a corollary, our result also proves that no gap between consecutive bulk eigenvalues can be bigger than $C(\log N)^8/N$, to be compared with the expected average $1/N$ behavior given by Dyson's law.
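As a quick illustration of the scales in play, the following numerical sketch (ours, not from the paper; the matrix size and window width are chosen for speed, far from the asymptotic regime) samples a Gaussian Wigner matrix normalized to have spectrum close to $[-2,2]$ and compares the eigenvalue count in a bulk window of width $2\eta$ with the semicircle prediction $2N\eta\,\varrho_{sc}(E)$:

```python
import numpy as np

def wigner_goe(N, rng):
    """Real symmetric Wigner matrix, normalized so the spectrum fills [-2, 2]."""
    A = rng.standard_normal((N, N))
    return (A + A.T) / np.sqrt(2 * N)

def rho_sc(E):
    """Density of the Wigner semicircle law on [-2, 2]."""
    return np.sqrt(max(4.0 - E * E, 0.0)) / (2.0 * np.pi)

rng = np.random.default_rng(0)
N, E, eta = 1000, 0.0, 0.1                  # window [E - eta, E + eta], well inside the bulk
mu = np.linalg.eigvalsh(wigner_goe(N, rng))

count = int(np.sum(np.abs(mu - E) <= eta))  # eigenvalues found in the window
predicted = 2 * N * eta * rho_sc(E)         # semicircle prediction: 200 / pi
ratio = count / predicted
```

Already at $N = 1000$ the ratio is close to 1; the theorem below quantifies how small $\eta$ may be taken (down to $(\log N)^8/N$) while this remains true.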
It is widely believed that the eigenvalue distribution of the Wigner random matrix and the random Schrödinger operator in the extended (or delocalized) state regime are the same up to normalizations. Although this conjecture is far from the reach of the current method, a natural question arises as to whether the eigenvectors of random matrices are extended. More precisely, if $\mathbf{v} = (v_1, \dots, v_N)$ is an $\ell^2$-normalized eigenvector, $\|\mathbf{v}\| = 1$, we say that $\mathbf{v}$ is completely delocalized if $\|\mathbf{v}\|_\infty = \max_j |v_j|$ is bounded from above by $CN^{-1/2}$, the average size of the $|v_j|$. In this paper, we shall prove that all eigenvectors with eigenvalues away from the spectral edges are completely delocalized (modulo logarithmic corrections) in probability. Similar results, but with $CN^{-1/2}$ replaced by $CN^{-1/3}$, were proved in [5]. Notice that our new result, in particular, answers (up to logarithmic factors) the question posed by T. Spencer whether $\|\mathbf{v}\|_4$ is of order $N^{-1/4}$.

Denote the $(i,j)$-th entry of an $N \times N$ matrix $H$ by $h_{i,j} = h_{ij}$. When there is no confusion, we omit the comma between the two subscripts. We shall assume that the matrix is Hermitian, i.e., $h_{ij} = \bar{h}_{ji}$. These matrices form a Hermitian Wigner ensemble if

$h_{ij} = N^{-1/2}[x_{ij} + i\, y_{ij}]$, $(i < j)$, and $h_{ii} = N^{-1/2} x_{ii}$, (1.1)

where $x_{ij}, y_{ij}$ ($i < j$) and $x_{ii}$ are independent real random variables with mean zero. We assume that $x_{ij}, y_{ij}$ ($i < j$) all have a common distribution $\nu$ with variance $1/2$ and with a strictly positive density function: $d\nu(x) = (\mathrm{const.})\, e^{-g(x)}\, dx$. The diagonal elements, $x_{ii}$, also have a common distribution, $d\tilde\nu(x) = (\mathrm{const.})\, e^{-\tilde g(x)}\, dx$, that may be different from $d\nu$. Let $P$ and $E$ denote the probability and the expectation value, respectively, w.r.t. the joint distribution of all matrix elements. We need to assume further conditions on the distributions of the matrix elements in addition to (1.1).

C1) The function $g$ is twice differentiable and it satisfies

$\sup_{x \in \mathbb{R}} |g''(x)| < \infty$. (1.2)

C2) There exists a $\delta > 0$ such that

$\int e^{\delta x^2}\, d\tilde\nu(x) < \infty$. (1.3)

C4) The measure $\nu$ satisfies the logarithmic Sobolev inequality, i.e., there exists a constant $C_{sob}$ such that for any density function $u > 0$ with $\int u\, d\nu = 1$,

$\int u \log u\, d\nu \leq C_{sob} \int |\nabla \sqrt{u}|^2\, d\nu$. (1.4)

Here we have followed the convention in [5] to use the label C4) for the logarithmic Sobolev bound and reserved C3) for a spectral gap condition in [5]. We will also need the decay condition (1.3) for the measure $d\nu$, i.e., for some small $\delta > 0$,

$\int e^{\delta x^2}\, d\nu(x) < \infty$. (1.5)

This condition was assumed in the earlier version of the manuscript, but J.-D. Deuschel and M. Ledoux kindly pointed out to us that (1.5) follows from C4), see [9].

Condition C1) is needed only because we will use Lemma 2.3 of [5] in the proof of the following Theorem 1.1. J. Bourgain has informed us that this lemma can also be proved without this condition. We include the precise statement and his proof in the Appendix.

Notation. We will use the notation $|A|$ both for the Lebesgue measure of a set $A \subset \mathbb{R}$ and for the cardinality of a discrete set $A \subset \mathbb{Z}$. The usual Hermitian scalar product for vectors $\mathbf{x}, \mathbf{y} \in \mathbb{C}^N$ will be denoted by $\mathbf{x} \cdot \mathbf{y}$ or by $(\mathbf{x}, \mathbf{y})$. We will use the convention that $C$ denotes generic large constants and $c$ denotes generic small positive constants whose values may change from line to line. Since we are interested in large matrices, we always assume that $N$ is sufficiently large.

Let $H$ be the $N \times N$ Wigner matrix with eigenvalues $\mu_1 \leq \mu_2 \leq \dots \leq \mu_N$. For any spectral parameter $z = E + i\eta \in \mathbb{C}$, $\eta > 0$, we denote the Green function by $G_z = (H - z)^{-1}$. Let $F(x) = F_N(x)$ be the empirical distribution function of the eigenvalues,

$F(x) = \frac{1}{N}\, \big| \{\alpha :\ \mu_\alpha \leq x\} \big|$. (1.6)

We define the Stieltjes transform of $F$ as

$m = m(z) = \frac{1}{N}\, \mathrm{Tr}\, G_z = \int_{\mathbb{R}} \frac{dF(x)}{x - z}$, (1.7)

and we let

$\varrho = \varrho_\eta(E) = \frac{\mathrm{Im}\, m(z)}{\pi} = \frac{1}{N\pi}\, \mathrm{Im}\, \mathrm{Tr}\, G_z = \frac{1}{N\pi} \sum_{\alpha=1}^{N} \frac{\eta}{(\mu_\alpha - E)^2 + \eta^2}$ (1.8)

be the normalized density of states of $H$ around energy $E$, regularized on scale $\eta$. The random variables $m$ and $\varrho$ also depend on $N$; when necessary, we will indicate this fact by writing $m_N$ and $\varrho_N$.
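The definitions of the Stieltjes transform and the regularized density of states are easy to check numerically; this small sketch (illustrative only, with an ad-hoc matrix size of our choosing) computes $m_N(z)$ from the eigenvalues and verifies the identity $\varrho_\eta(E) = \pi^{-1}\,\mathrm{Im}\, m(z)$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)          # Wigner matrix, spectrum ~ [-2, 2]
mu = np.linalg.eigvalsh(H)

E, eta = 0.5, 0.05
z = E + 1j * eta

# Stieltjes transform m(z) = (1/N) Tr (H - z)^{-1}, via the spectral representation
m = np.mean(1.0 / (mu - z))

# Regularized density of states: sum of Lorentzians of width eta around E
rho = np.mean(eta / ((mu - E) ** 2 + eta ** 2)) / np.pi

identity_gap = abs(rho - m.imag / np.pi)   # should vanish identically
```

The identity holds to machine precision because both sides are the same sum over eigenvalues, written once as an imaginary part and once as a Lorentzian average.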
For any $z = E + i\eta$ we let

$m_{sc} = m_{sc}(z) = \int_{\mathbb{R}} \frac{\varrho_{sc}(x)\, dx}{x - z}$

be the Stieltjes transform of the Wigner semicircle distribution function, whose density is given by

$\varrho_{sc}(x) = \frac{1}{2\pi}\, \sqrt{4 - x^2}\ \mathbf{1}(|x| \leq 2)$.

For $\kappa, \tilde\eta > 0$ we define the set

$S_{N,\kappa,\tilde\eta} := \big\{ z = E + i\eta \in \mathbb{C} :\ |E| \leq 2 - \kappa,\ \eta \geq \tilde\eta \big\}$

and for $\tilde\eta = N^{-1}(\log N)^8$ we write

$S_{N,\kappa} := \Big\{ z = E + i\eta \in \mathbb{C} :\ |E| \leq 2 - \kappa,\ \eta \geq \frac{(\log N)^8}{N} \Big\}$.

The following two theorems are the main results of this paper.

Theorem 1.1 Let $H$ be an $N \times N$ Wigner matrix as described in (1.1) and assume the conditions (1.2), (1.3) and (1.4). Then for any $\kappa > 0$ and $\varepsilon > 0$, the Stieltjes transform $m_N(z)$ (see (1.7)) of the empirical eigenvalue distribution of the $N \times N$ Wigner matrix satisfies

$P\Big\{ \sup_{z \in S_{N,\kappa}} |m_N(z) - m_{sc}(z)| \geq \varepsilon \Big\} \leq e^{-c(\log N)^2}$ (1.9)

where $c > 0$ depends on $\kappa$, $\varepsilon$. In particular, the density of states $\varrho_\eta(E)$ converges to the Wigner semicircle law in probability, uniformly for all energies away from the spectral edges and for all energy windows of size at least $N^{-1}(\log N)^8$. Furthermore, let $\eta^* = \eta^*(N)$ be such that $(\log N)^8/(N\eta^*) \to 0$ as $N \to \infty$; then we have the convergence of the counting function as well:

$P\Big\{ \sup_{|E| \leq 2 - \kappa} \Big| \frac{\mathcal{N}_{\eta^*}(E)}{2N\eta^*} - \varrho_{sc}(E) \Big| \geq \varepsilon \Big\} \leq e^{-c(\log N)^2}$ (1.10)

for any $\varepsilon > 0$, where $\mathcal{N}_{\eta^*}(E) = |\{\alpha :\ |\mu_\alpha - E| \leq \eta^*\}|$ denotes the number of eigenvalues in the interval $[E - \eta^*, E + \eta^*]$.

This result identifies the density of states away from the spectral edges in a window where the typical number of eigenvalues is of order bigger than $(\log N)^8$. Our scale is not sufficiently small to identify individual eigenvalues; in particular, we do not know whether the local eigenvalue spacing follows the expected Dyson statistics characterized by the sine kernel or some other local statistics, e.g., that of a Poisson point process.

Theorem 1.2 Let $H$ be an $N \times N$ Wigner matrix as described in (1.1) and satisfying the conditions (1.2), (1.3) and (1.4). Fix $\kappa > 0$, and assume that $C$ is large enough. Then there exists $c > 0$ such that

$P\Big\{ \exists\, \mathbf{v} \text{ with } H\mathbf{v} = \mu\mathbf{v},\ \|\mathbf{v}\| = 1,\ \mu \in [-2 + \kappa,\, 2 - \kappa] \text{ and } \|\mathbf{v}\|_\infty \geq \frac{C(\log N)^{9/2}}{N^{1/2}} \Big\} \leq e^{-c(\log N)^2}$.

We now sketch the key idea to prove Theorem 1.1; Theorem 1.2 can be proved following similar ideas used in [5].
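The content of the delocalization theorem above can be previewed numerically. This illustrative sketch (our own choice of ensemble and size, far below the asymptotic regime) measures $\sqrt{N}\,\|\mathbf{v}\|_\infty$ over all bulk eigenvectors of a Gaussian Wigner matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)           # spectrum ~ [-2, 2]

mu, V = np.linalg.eigh(H)                # columns of V are l2-normalized eigenvectors
bulk = np.abs(mu) <= 1.5                 # stay away from the spectral edges
sup_norms = np.sqrt(N) * np.abs(V[:, bulk]).max(axis=0)
worst = float(sup_norms.max())           # a localized vector would give ~ sqrt(N)
```

Here `worst` stays of order $\sqrt{\log N}$ (a few units), far below the $\sqrt{N} \approx 22$ that a localized eigenvector would produce; the theorem makes this quantitative with the $(\log N)^{9/2}$ factor.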

Let $B^{(k)}$ denote the $(N-1) \times (N-1)$ minor of $H$ after removing the $k$-th row and $k$-th column, and let $m^{(k)}(z)$ denote the Stieltjes transform of the eigenvalue distribution function associated with $B^{(k)}$. It is known that $m(z)$, defined in (1.7), satisfies a recurrence relation

$m(z) = \frac{1}{N} \sum_{k=1}^{N} \frac{1}{h_{kk} - z - \big(1 - \frac{1}{N}\big)\, m^{(k)}(z) - X_k}$, (1.11)

where $X_k$ (defined precisely in (2.4)) is an error term depending on $B^{(k)}$ and the $k$-th column and row elements of the random matrix $H$. If we neglect $X_k$ (and $h_{kk}$, which is of order $N^{-1/2}$ by definition) and replace $m^{(k)}$ by $m$, we obtain an equation for $m$, and this leads to the Stieltjes transform of the semicircle law. So our main task is to prove that $X_k$ is negligible. Unfortunately, $X_k$ depends crucially on the eigenvalues and eigenfunctions of $B^{(k)}$. In an earlier work [2], the estimate on $X_k$ was done via an involved bootstrap argument (and valid up to scale $N^{-1/2}$). The bootstrapping is needed in [2] since $X_k$ depends critically on properties of $B^{(k)}$ for which there was only limited a priori information. In our preceding paper [5], we split $m$ and $m^{(k)}$ into their means and variances; the variances were then shown to be negligible up to the scale $N^{-2/3}$ (the variance control of $m$ up to the scale $N^{-1/2}$ was already in [6]). On the other hand, the means of $m$ and $m^{(k)}$ are very close due to the fact that the eigenvalues of $H$ and $B^{(k)}$ are interlaced. Finally, $X_k$ was controlled via an estimate on its fourth moment. We thus arrived at a fixed point equation for the mean of $m$ whose unique solution is the Stieltjes transform of the semicircle law. In the current paper, we avoid the variance control by viewing $m$ and $m^{(k)}$ directly as random variables in the recurrence relation (1.11). Furthermore, the moment control on $X_k$ is now improved to an exponential moment estimate.
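Dropping $h_{kk}$ and $X_k$ in the recurrence above and identifying $m^{(k)}$ with $m$ gives the self-consistent equation $m = 1/(-z - m)$, i.e. $m^2 + zm + 1 = 0$, whose solution with positive imaginary part is $m_{sc}(z) = \frac{1}{2}\big({-z} + \sqrt{z^2 - 4}\big)$ for a suitable branch of the square root. A small sketch (all parameters are our own choices) of the resulting fixed-point picture:

```python
import numpy as np

def m_sc(z):
    """Stieltjes transform of the semicircle law: the root of m^2 + z*m + 1 = 0
    with positive imaginary part."""
    s = np.sqrt(z * z - 4.0 + 0j)
    m = (-z + s) / 2.0
    return m if m.imag > 0 else (-z - s) / 2.0

z = 0.5 + 0.5j
m_closed = m_sc(z)
residual = abs(m_closed ** 2 + z * m_closed + 1.0)   # defining quadratic

m = 1j                      # any starting point in the upper half plane
for _ in range(200):
    m = 1.0 / (-z - m)      # the recurrence with the error terms dropped
iteration_gap = abs(m - m_closed)
```

The iteration contracts at rate $|m_{sc}(z)|^2 < 1$ for $\mathrm{Im}\, z > 0$; the proof in Section 2 replaces this naive picture by a quantitative stability analysis of the perturbed equation.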
Since our estimate on the fourth moment of $X_k$ was done via a spectral gap argument, it is folklore that moment estimates can usually be lifted to an exponential moment estimate provided the spectral gap estimate is replaced by a logarithmic Sobolev inequality. In this paper, we use a concentration of measure inequality to avoid all bootstrap arguments appearing both in [2] and [8]. In the previous version of this paper we obtained the concentration inequality by using the logarithmic Sobolev inequality together with the Gibbs entropy inequality. We would like to thank the referee who pointed out to us that the concentration inequality of Bobkov and Götze [3] can be used directly to shortcut our original proof. The applicability of the Bobkov-Götze inequality, as well as some of the heuristic arguments presented here, depends crucially on an a priori upper bound on $m(z)$; this was obtained via a large deviation estimate on the eigenvalue concentration [5].

2 Proof of Theorem 1.1

The proof of (1.10) follows from (1.9) exactly as in Corollary 4.2 of [5], so we focus on proving (1.9). We first remove the supremum in (1.9). For any two points $z, z' \in S_{N,\kappa,\tilde\eta}$ we have $|m_N(z) - m_N(z')| \leq N^2 |z - z'|$, since the gradient of $m_N(z)$ is bounded by $|\mathrm{Im}\, z|^{-2} \leq N^2$ on $S_{N,\kappa}$. We can choose a set of at most $Q = C\varepsilon^{-2} N^4$ points, $z_1, z_2, \dots, z_Q$, in $S_{N,\kappa,\tilde\eta}$ such that for any $z \in S_{N,\kappa,\tilde\eta}$ there exists a point $z_j$ with $|z - z_j| \leq \frac{1}{4}\varepsilon N^{-2}$. In particular, $|m_N(z) - m_N(z_j)| \leq \varepsilon/4$ if $N$ is large enough, and $|m_{sc}(z) - m_{sc}(z_j)| \leq \varepsilon/4$. Since $\mathrm{Im}\, z_j \geq \tilde\eta$, under the condition that $\tilde\eta \geq N^{-1}(\log N)^8$ we have

$P\Big\{ \sup_{z \in S_{N,\kappa}} |m_N(z) - m_{sc}(z)| \geq \varepsilon \Big\} \leq \sum_{j=1}^{Q} P\Big\{ |m_N(z_j) - m_{sc}(z_j)| \geq \frac{\varepsilon}{2} \Big\}$

Therefore, in order to conclude (1.9), it suffices to prove that

$P\big\{ |m_N(z) - m_{sc}(z)| \geq \varepsilon \big\} \leq e^{-c(\log N)^2}$ (2.1)

for each fixed $z \in S_{N,\kappa}$.

Let $B^{(k)}$ denote the $(N-1) \times (N-1)$ minor of $H$ after removing the $k$-th row and $k$-th column. Note that $B^{(k)}$ is an $(N-1) \times (N-1)$ Hermitian Wigner matrix with a normalization factor off by $(1 - \frac{1}{N})^{1/2}$. Let $\lambda^{(k)}_1 \leq \lambda^{(k)}_2 \leq \dots \leq \lambda^{(k)}_{N-1}$ denote its eigenvalues and $\mathbf{u}^{(k)}_1, \dots, \mathbf{u}^{(k)}_{N-1}$ the corresponding normalized eigenvectors.

Let $\mathbf{a}^{(k)} = (h_{k,1}, h_{k,2}, \dots, h_{k,k-1}, h_{k,k+1}, \dots, h_{k,N}) \in \mathbb{C}^{N-1}$, i.e. the $k$-th column after removing the diagonal element $h_{k,k} = h_{kk}$. Computing the $(k,k)$ diagonal element of the resolvent $G_z$, we have

$G_z(k,k) = \frac{1}{h_{kk} - z - \mathbf{a}^{(k)} \cdot (B^{(k)} - z)^{-1}\, \mathbf{a}^{(k)}} = \Big[ h_{kk} - z - \frac{1}{N} \sum_{\alpha=1}^{N-1} \frac{\xi^{(k)}_\alpha}{\lambda^{(k)}_\alpha - z} \Big]^{-1}$ (2.2)

where we defined

$\xi^{(k)}_\alpha := N\, \big| \mathbf{a}^{(k)} \cdot \mathbf{u}^{(k)}_\alpha \big|^2$.

Similarly to the definition of $m(z)$ in (1.7), we also define the Stieltjes transform of the density of states of $B^{(k)}$,

$m^{(k)} = m^{(k)}(z) = \frac{1}{N-1}\, \mathrm{Tr}\, \frac{1}{B^{(k)} - z} = \int_{\mathbb{R}} \frac{dF^{(k)}(x)}{x - z}$,

with the empirical counting function

$F^{(k)}(x) = \frac{1}{N-1}\, \big| \{\alpha :\ \lambda^{(k)}_\alpha \leq x\} \big|$.

The spectral parameter $z$ is fixed throughout the proof and we will often omit it from the argument of the Stieltjes transforms. It follows from (2.2) that

$m = m(z) = \frac{1}{N} \sum_{k=1}^{N} G_z(k,k) = \frac{1}{N} \sum_{k=1}^{N} \frac{1}{h_{kk} - z - \mathbf{a}^{(k)} \cdot (B^{(k)} - z)^{-1}\, \mathbf{a}^{(k)}}$. (2.3)

Let $E_k$ denote the expectation value w.r.t. the random vector $\mathbf{a}^{(k)}$. Define the random variable

$X_k(z) = X_k := \mathbf{a}^{(k)} \cdot \frac{1}{B^{(k)} - z}\, \mathbf{a}^{(k)} - E_k\, \mathbf{a}^{(k)} \cdot \frac{1}{B^{(k)} - z}\, \mathbf{a}^{(k)} = \frac{1}{N} \sum_{\alpha=1}^{N-1} \frac{\xi^{(k)}_\alpha - 1}{\lambda^{(k)}_\alpha - z}$ (2.4)

where we used that $E_k\, \xi^{(k)}_\alpha = \|\mathbf{u}^{(k)}_\alpha\|^2 = 1$. We note that

$E_k\, \mathbf{a}^{(k)} \cdot \frac{1}{B^{(k)} - z}\, \mathbf{a}^{(k)} = \frac{1}{N} \sum_{\alpha=1}^{N-1} \frac{1}{\lambda^{(k)}_\alpha - z} = \Big(1 - \frac{1}{N}\Big)\, m^{(k)}$.

With this notation it follows from (2.2) that

$m = \frac{1}{N} \sum_{k=1}^{N} \frac{1}{h_{kk} - z - \big(1 - \frac{1}{N}\big)\, m^{(k)} - X_k}$. (2.5)
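Two deterministic linear-algebra facts used here — the Schur complement formula (2.2) for a diagonal resolvent entry, and the interlacing of the eigenvalues of $H$ with those of its minor — can be sanity-checked on a small matrix (the size and spectral parameter below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 8, 2
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (X + X.conj().T) / 2                    # Hermitian test matrix

z = 0.3 + 0.7j
G = np.linalg.inv(H - z * np.eye(N))        # resolvent G_z

idx = [i for i in range(N) if i != k]
a = H[idx, k]                               # k-th column without the diagonal entry
B = H[np.ix_(idx, idx)]                     # minor B^{(k)}

# Schur complement formula for the (k, k) resolvent entry
schur = 1.0 / (H[k, k] - z - a.conj() @ np.linalg.inv(B - z * np.eye(N - 1)) @ a)
schur_gap = abs(G[k, k] - schur)

# Cauchy interlacing: mu_1 <= lam_1 <= mu_2 <= ... <= lam_{N-1} <= mu_N
mu = np.linalg.eigvalsh(H)
lam = np.linalg.eigvalsh(B)
interlaced = bool(np.all(mu[:-1] <= lam + 1e-12) and np.all(lam <= mu[1:] + 1e-12))
```

Both checks hold exactly (up to floating-point error) for every Hermitian matrix, not just Wigner matrices.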

We use that

$m - \Big(1 - \frac{1}{N}\Big) m^{(k)} = \int \frac{dF(x)}{x - z} - \Big(1 - \frac{1}{N}\Big) \int \frac{dF^{(k)}(x)}{x - z} = \int \frac{NF(x) - (N-1)F^{(k)}(x)}{N\, (x - z)^2}\, dx$.

We recall that the eigenvalues of $H$ and $B^{(k)}$ are interlaced,

$\mu_1 \leq \lambda^{(k)}_1 \leq \mu_2 \leq \lambda^{(k)}_2 \leq \dots \leq \lambda^{(k)}_{N-1} \leq \mu_N$ (2.6)

(see e.g. Lemma 2.5 of [5]); therefore we have $\max_x |NF(x) - (N-1)F^{(k)}(x)| \leq 1$. Thus

$\Big| m - \Big(1 - \frac{1}{N}\Big) m^{(k)} \Big| \leq \frac{1}{N} \int \frac{dx}{|x - z|^2} \leq \frac{C}{N\eta}$. (2.7)

We postpone the proof of the following lemma:

Lemma 2.1 Suppose that $\mathbf{v}_\alpha$ and $\lambda_\alpha$ are eigenvectors and eigenvalues of an $N \times N$ random matrix with a law satisfying the assumptions of Theorem 1.1. Let

$X = \frac{1}{N} \sum_\alpha \frac{\xi_\alpha - 1}{\lambda_\alpha - z}$

with $z = E + i\eta$ and $\xi_\alpha = |\mathbf{b} \cdot \mathbf{v}_\alpha|^2$, where the components of $\mathbf{b}$ are i.i.d. random variables satisfying (1.4). Then there exist sufficiently small positive constants $\varepsilon_0$ and $c$ such that in the joint product probability space of $\mathbf{b}$ and the law of the random matrices we have

$P[\, |X| \geq \varepsilon\,] \leq e^{-c\varepsilon(\log N)^2}$

for any $\varepsilon \leq \varepsilon_0$ and $\eta \geq (\log N)^8/N$.

For a given $\varepsilon > 0$ and $z = E + i\eta \in S_{N,\kappa}$, we set $z_n = E + i 2^n \eta$ and we define the event

$\Omega = \bigcup_{k=1}^{N} \bigcup_{n=0}^{[\log_2(1/\eta)]} \big\{ |X_k(z_n)| \geq \varepsilon/3 \big\} \cup \bigcup_{k=1}^{N} \big\{ |h_{kk}| \geq \varepsilon/3 \big\}$,

where $[\,\cdot\,]$ denotes the integer part. Since $h_{kk} = N^{-1/2} b_{kk}$ with $b_{kk}$ satisfying (1.3), we have

$\sum_{k=1}^{N} P\big\{ |h_{kk}| \geq \varepsilon/3 \big\} \leq C\, e^{-\delta \varepsilon^2 N/9}$.

We now apply Lemma 2.1 for each $X_k(z_n)$ and conclude that

$P(\Omega) \leq e^{-c\varepsilon(\log N)^2}$

with a sufficiently small $c > 0$. On the complement $\Omega^c$ we have from (2.5)

$m(z_n) = \frac{1}{N} \sum_{k=1}^{N} \frac{1}{-m(z_n) - z_n + \delta_k}$

where

$\delta_k = \delta_k(z_n) = h_{kk} + \Big( m(z_n) - \Big(1 - \frac{1}{N}\Big) m^{(k)}(z_n) \Big) - X_k(z_n)$

are random variables satisfying $|\delta_k| \leq \varepsilon$ by (2.7). After expansion, the last equation implies that

$\Big| m(z_n) + \frac{1}{m(z_n) + z_n} \Big| \leq \frac{\varepsilon}{\big( \mathrm{Im}(m(z_n) + z_n) \big)\big( \mathrm{Im}(m(z_n) + z_n) - \varepsilon \big)}$, (2.8)

if $\mathrm{Im}(m(z_n) + z_n) > \varepsilon$, using that $|{-m(z_n)} - z_n + \delta_k| \geq \mathrm{Im}(m(z_n) + z_n) - \varepsilon$.

We note that for any $z \in S_{N,\kappa}$ the equation

$M + \frac{1}{M + z} = 0$ (2.9)

has a unique solution with $\mathrm{Im}\, M > 0$, namely $M = m_{sc}(z)$, the Stieltjes transform of the semicircle law. Note that there exists $c(\kappa) > 0$ such that $\mathrm{Im}\, m_{sc}(E + i\eta) \geq c(\kappa)$ for any $|E| \leq 2 - \kappa$, uniformly in $\eta$. The equation (2.9) is stable in the following sense. For any small $\delta$, let $M = M(z, \delta)$ be a solution to

$M + \frac{1}{M + z} = \delta$ (2.10)

with $\mathrm{Im}\, M > 0$. Explicitly, we have

$M = \frac{\delta - z + \sqrt{z^2 + 2z\delta + \delta^2 - 4}}{2}$,

where we have chosen the square root so that $\mathrm{Im}\, M > 0$ when $\delta = 0$ and $\mathrm{Im}\, z > 0$. On the compact set $z \in S_{N,\kappa}$, $|z^2 - 4|$ is bounded away from zero and thus

$|M - m_{sc}| \leq C_\kappa\, |\delta|$ (2.11)

for some constant $C_\kappa$ depending only on $\kappa$.

Now we perform a bootstrap argument in the imaginary part of $z$ to prove that

$|m(z) - m_{sc}(z)| \leq C\varepsilon$ (2.12)

uniformly in $z \in S_{N,\kappa}$, with a sufficiently large constant $C$. Fix $z = E + i\eta$ with $|E| \leq 2 - \kappa$ and let $z_n = E + i 2^n \eta$. For $n = [\log_2(1/\eta)]$ we have $\mathrm{Im}\, z_n \in [\frac{1}{2}, 1]$, and (2.12) follows from (2.8) with some small $\varepsilon$, since the right hand side of (2.8) is bounded by $C\varepsilon$. Suppose now that (2.12) has been proven for $z = z_n$, for some $n$ with $\eta_n = \mathrm{Im}\, z_n \in [2N^{-1}(\log N)^8,\, 1]$. We want to prove it for $z = z_{n-1}$, with $\mathrm{Im}\, z_{n-1} = \eta_n/2$. By integrating the inequality

$\frac{\eta_n/2}{(x - E)^2 + (\eta_n/2)^2} \geq \frac{1}{2} \cdot \frac{\eta_n}{(x - E)^2 + \eta_n^2}$

with respect to $dF(x)$ we obtain that

$\mathrm{Im}\, m(z_{n-1}) \geq \frac{1}{2}\, \mathrm{Im}\, m(z_n) \geq \frac{1}{2}\big( c(\kappa) - C\varepsilon \big) > \frac{c(\kappa)}{4}$

for sufficiently small $\varepsilon$, where (2.12) and $\mathrm{Im}\, m_{sc}(z_n) \geq c(\kappa)$ were used. Thus the right hand side of (2.8), with $z_n$ replaced by $z_{n-1}$, is bounded by $C\varepsilon$, the constant depending only on $\kappa$. Applying the stability bound (2.11), we get (2.12) for $z = z_{n-1}$. Continuing the induction argument, we finally obtain (2.12) for $z = z_0 = E + i\eta$.
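The stability statement can be probed numerically: the perturbed equation $M + 1/(M+z) = \delta$ is the quadratic $M^2 + (z - \delta)M + (1 - \delta z) = 0$, and its root with positive imaginary part should stay within a constant times $|\delta|$ of $m_{sc}$. A sketch with an arbitrary bulk point and perturbation of our choosing:

```python
import numpy as np

def herglotz_root(b, c):
    """Root of M^2 + b*M + c = 0 with positive imaginary part."""
    s = np.sqrt(b * b - 4.0 * c + 0j)
    M = (-b + s) / 2.0
    return M if M.imag > 0 else (-b - s) / 2.0

z = 0.5 + 0.05j                              # bulk spectral parameter
m_sc = herglotz_root(z, 1.0 + 0j)            # unperturbed equation: delta = 0

delta = 0.01 * np.exp(0.7j)                  # small right-hand side perturbation
M = herglotz_root(z - delta, 1.0 - delta * z)

ratio = abs(M - m_sc) / abs(delta)           # bounded by the stability constant
```

A first-order expansion gives $M - m_{sc} \approx \delta/(1 - m_{sc}^2)$, so `ratio` is roughly $|1 - m_{sc}^2|^{-1} \approx 0.5$ at this point; stability degrades only as $z$ approaches the spectral edges $\pm 2$, where $1 - m_{sc}^2 \to 0$.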

3 Proof of Lemma 2.1

Let $I_n = [n\eta, (n+1)\eta]$ and let $K_0$ be a sufficiently large number. We have $[-K_0, K_0] \subset \bigcup_{n=-m}^{m} I_n$ with $m \leq CK_0/\eta$. Denote by $\Omega$ the event

$\Omega := \Big\{ \max_n \mathcal{N}_{I_n} \geq N\eta (\log N)^2 \Big\} \cup \Big\{ \max_\alpha |\lambda_\alpha| \geq K_0 \Big\}$,

where $\mathcal{N}_{I_n} = |\{\alpha :\ \lambda_\alpha \in I_n\}|$ is the number of eigenvalues in the interval $I_n$. From Theorem 2.1 and Lemma 7.4 of [5], the probability of $\Omega$ is bounded by

$P(\Omega) \leq e^{-c(\log N)^2}$

for some sufficiently small $c > 0$. Therefore, if $P_{\mathbf{b}}$ denotes the probability w.r.t. the variable $\mathbf{b}$, we find

$P[\, |X| \geq \varepsilon\,] \leq e^{-c(\log N)^2} + E\big[ \mathbf{1}_{\Omega^c}\, P_{\mathbf{b}}[\, |X| \geq \varepsilon\,] \big] \leq e^{-c(\log N)^2} + E\big[ \mathbf{1}_{\Omega^c}\, P_{\mathbf{b}}[\mathrm{Re}\, X \geq \varepsilon/2] \big] + E\big[ \mathbf{1}_{\Omega^c}\, P_{\mathbf{b}}[\mathrm{Re}\, X \leq -\varepsilon/2] \big] + E\big[ \mathbf{1}_{\Omega^c}\, P_{\mathbf{b}}[\mathrm{Im}\, X \geq \varepsilon/2] \big] + E\big[ \mathbf{1}_{\Omega^c}\, P_{\mathbf{b}}[\mathrm{Im}\, X \leq -\varepsilon/2] \big]$. (3.1)

The last four terms on the r.h.s. of the last equation can all be handled with similar arguments; we show, for example, how to bound the last term. For any $T > 0$, we have

$E\big[ \mathbf{1}_{\Omega^c}\, P_{\mathbf{b}}[\mathrm{Im}\, X \leq -\varepsilon/2] \big] \leq e^{-T\varepsilon/2}\, E\big[ \mathbf{1}_{\Omega^c}\, E_{\mathbf{b}}\, e^{-T\, \mathrm{Im}\, X} \big]$ (3.2)

where $E_{\mathbf{b}}$ denotes the expectation w.r.t. the variable $\mathbf{b}$. Using the fact that the distribution of the components of $\mathbf{b}$ satisfies the log-Sobolev inequality (1.4), it follows from the concentration inequality (Theorem 2.1 from [3]) that

$E_{\mathbf{b}}\, e^{-T\, \mathrm{Im}\, X} \leq E_{\mathbf{b}} \exp\Big( \frac{C_{sob}\, T^2}{2}\, |\nabla (\mathrm{Im}\, X)|^2 \Big)$ (3.3)

where

$|\nabla(\mathrm{Im}\, X)|^2 = \sum_k \Big( \Big| \frac{\partial\, \mathrm{Im}\, X}{\partial\, \mathrm{Re}\, b_k} \Big|^2 + \Big| \frac{\partial\, \mathrm{Im}\, X}{\partial\, \mathrm{Im}\, b_k} \Big|^2 \Big) = \frac{4\eta^2}{N^2} \sum_\alpha \frac{\xi_\alpha}{|\lambda_\alpha - z|^4} \leq \frac{4}{N\eta}\, Y$.

Here we defined the random variable

$Y = \frac{\eta}{N} \sum_\alpha \frac{\xi_\alpha}{|\lambda_\alpha - z|^2}$. (3.4)
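A Monte Carlo illustration (entirely ours: Gaussian components and sizes are chosen for speed, and $N\eta$ is far smaller than the $(\log N)^8$ the lemma actually requires) of the two facts driving the proof — $\xi_\alpha = |\mathbf{b}\cdot\mathbf{v}_\alpha|^2$ has mean one, and the centered sum $X$ is small:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 300
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)
lam, V = np.linalg.eigh(H)               # eigenvalues and orthonormal eigenvectors

z = 0.2 + 0.5j
trials = 200
X_vals, xi_means = [], []
for _ in range(trials):
    b = rng.standard_normal(N)           # i.i.d. components, mean 0, variance 1
    xi = (V.T @ b) ** 2                  # xi_alpha = |b . v_alpha|^2; E xi_alpha = 1
    xi_means.append(xi.mean())
    X_vals.append(np.mean((xi - 1.0) / (lam - z)))

mean_xi_gap = abs(np.mean(xi_means) - 1.0)
typical_X = float(np.mean(np.abs(X_vals)))
```

Since the $\xi_\alpha$ are uncorrelated with mean one, $X$ averages roughly $N^{-1/2}$-sized fluctuations over the spectrum; the lemma upgrades this to a uniform exponential tail bound via log-Sobolev concentration.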

From (3.2), choosing $T = 2(\log N)^2$ and using that $N\eta \geq (\log N)^8$, we obtain

$E\big[ \mathbf{1}_{\Omega^c}\, P_{\mathbf{b}}[\mathrm{Im}\, X \leq -\varepsilon/2] \big] \leq e^{-\varepsilon(\log N)^2}\, E\Big[ \mathbf{1}_{\Omega^c}\, E_{\mathbf{b}} \exp\Big( \frac{8\, C_{sob}}{(\log N)^4}\, Y \Big) \Big]$. (3.5)

Let $\nu = 8C_{sob}/(\log N)^4$. By the Hölder inequality, we can estimate

$E_{\mathbf{b}}\, e^{\nu Y} = E_{\mathbf{b}} \exp\Big[ \frac{\nu\eta}{N} \sum_\alpha \frac{\xi_\alpha}{|\lambda_\alpha - z|^2} \Big] \leq \prod_\alpha \Big( E_{\mathbf{b}} \exp\Big[ \frac{\nu\eta\, c_\alpha}{N\, |\lambda_\alpha - z|^2}\, \xi_\alpha \Big] \Big)^{1/c_\alpha}$, (3.6)

where $\sum_\alpha c_\alpha^{-1} = 1$. We shall choose $c_\alpha = \frac{N |\lambda_\alpha - z|^2}{\nu\eta}\, \hat\nu$, where $\hat\nu$ is given by

$\hat\nu = \frac{\nu\eta}{N} \sum_\alpha \frac{1}{|\lambda_\alpha - z|^2} \leq \frac{\nu \log N}{N\eta}\, \max_n \mathcal{N}_{I_n} \leq \nu (\log N)^3 = \frac{8\, C_{sob}}{\log N}$.

Here we have used $\max_n \mathcal{N}_{I_n} \leq N\eta(\log N)^2$, due to the fact that we are in the set $\Omega^c$. Notice that with this choice,

$\frac{\nu\eta\, c_\alpha}{N\, |\lambda_\alpha - z|^2} = \hat\nu \leq \frac{8\, C_{sob}}{\log N}$

is a small number. In the proof of Lemma 7.4 of [5] (see equation (7.3) of [5]) we showed that $E_{\mathbf{b}}\, e^{\tau \xi_\alpha} < K$ with a universal constant $K$ if $\tau$ is sufficiently small, depending on $\delta$ in (1.3). From (3.5), it follows that

$E\big[ \mathbf{1}_{\Omega^c}\, P_{\mathbf{b}}[\mathrm{Im}\, X \leq -\varepsilon/2] \big] \leq K\, e^{-\varepsilon(\log N)^2}$. (3.7)

Since similar bounds hold for the other terms on the r.h.s. of (3.1) as well, this concludes the proof of the lemma.

4 Delocalization of eigenvectors

Here we prove Theorem 1.2; the argument follows the same line as in [5] (Proposition 5.3). Let $\eta^* = N^{-1}(\log N)^9$ and partition the interval $[-2 + \kappa, 2 - \kappa]$ into $n_0 = O(1/\eta^*) = O(N)$ intervals $I_1, I_2, \dots, I_{n_0}$ of length $\eta^*$. As before, let $\mathcal{N}_I = |\{\beta :\ \mu_\beta \in I\}|$ denote the number of eigenvalues in $I$. By using (1.10) in Theorem 1.1, we have

$P\Big\{ \min_n \mathcal{N}_{I_n} \leq \varepsilon N \eta^* \Big\} \leq e^{-c(\log N)^2}$

if $\varepsilon$ is sufficiently small (depending on $\kappa$). Suppose that $\mu \in I_n$, and that $H\mathbf{v} = \mu\mathbf{v}$. Consider the decomposition

$H = \begin{pmatrix} h & \mathbf{a}^* \\ \mathbf{a} & B \end{pmatrix}$ (4.1)

where $\mathbf{a} = (h_{1,2}, \dots, h_{1,N})$ and $B$ is the $(N-1) \times (N-1)$ matrix obtained by removing the first row and first column from $H$. Let $\lambda_\alpha$ and $\mathbf{u}_\alpha$ (for $\alpha = 1, 2, \dots, N-1$) denote the eigenvalues and the normalized eigenvectors of $B$. From the eigenvalue equation $H\mathbf{v} = \mu\mathbf{v}$ and from (4.1) we find that

$h v_1 + \mathbf{a} \cdot \mathbf{w} = \mu v_1$, and $\mathbf{a}\, v_1 + B\mathbf{w} = \mu\mathbf{w}$ (4.2)

with $\mathbf{w} = (v_2, \dots, v_N)^t$. From these equations we obtain $\mathbf{w} = (\mu - B)^{-1} \mathbf{a}\, v_1$, and thus, since $\|\mathbf{w}\|^2 = 1 - |v_1|^2$, we obtain

$1 - |v_1|^2 = \|\mathbf{w}\|^2 = |v_1|^2\ \mathbf{a} \cdot (\mu - B)^{-2}\, \mathbf{a}$,

hence

$|v_1|^2 = \frac{1}{1 + \mathbf{a} \cdot (\mu - B)^{-2}\, \mathbf{a}} = \frac{1}{1 + \frac{1}{N} \sum_\alpha \frac{\xi_\alpha}{(\mu - \lambda_\alpha)^2}} \leq \frac{4N[\eta^*]^2}{\sum_{\alpha:\, \lambda_\alpha \in I_n} \xi_\alpha}$, (4.3)

where in the second equality we set $\xi_\alpha = N |\mathbf{a} \cdot \mathbf{u}_\alpha|^2$ and used the spectral representation of $B$. By the interlacing property of the eigenvalues of $H$ and $B$, there exist at least $\mathcal{N}_{I_n} - 1$ eigenvalues $\lambda_\alpha$ in $I_n$. Therefore, using that the components of any eigenvector are identically distributed, we have

$P\Big( \exists\, \mathbf{v} \text{ with } H\mathbf{v} = \mu\mathbf{v},\ \|\mathbf{v}\| = 1,\ \mu \in [-2+\kappa, 2-\kappa] \text{ and } \|\mathbf{v}\|_\infty \geq \frac{C(\log N)^{9/2}}{N^{1/2}} \Big)$
$\leq N n_0\, \sup_n P\Big( \exists\, \mathbf{v} \text{ with } H\mathbf{v} = \mu\mathbf{v},\ \|\mathbf{v}\| = 1,\ \mu \in I_n \text{ and } |v_1|^2 \geq \frac{C(\log N)^9}{N} \Big)$
$\leq \mathrm{const}\, N^2\, \sup_n P\Big( \sum_{\lambda_\alpha \in I_n} \xi_\alpha \leq \frac{4 N^2 [\eta^*]^2}{C(\log N)^9} \Big)$
$\leq \mathrm{const}\, N^2\, \sup_n P\Big( \sum_{\lambda_\alpha \in I_n} \xi_\alpha \leq \frac{4}{C}(\log N)^9 \text{ and } \mathcal{N}_{I_n} \geq \varepsilon N\eta^* \Big) + \mathrm{const}\, N^2\, \sup_n P\big( \mathcal{N}_{I_n} \leq \varepsilon N \eta^* \big)$
$\leq \mathrm{const}\, N^2 e^{-c(\log N)^9} + \mathrm{const}\, N^2 e^{-c(\log N)^2} \leq e^{-c'(\log N)^2}$ (4.4)

by choosing $C$ sufficiently large, depending on $\kappa$ via $\varepsilon$. Here we used Corollary 2.4 of [5], which states that under condition C1) in (1.2) there exists a positive $c$ such that, for any $\delta$ small enough,

$P\Big( \sum_{\alpha \in A} \xi_\alpha \leq \delta m \Big) \leq e^{-cm}$ (4.5)

for all $A \subset \{1, \dots, N\}$ with cardinality $|A| = m$. We remark that by applying Lemma 4.1 from the Appendix instead of Lemma 2.3 in [5], the bound (4.5) also holds without condition C1) if the matrix elements are bounded random variables. It is clear that the boundedness assumption in Lemma 4.1 can be relaxed by performing an appropriate cutoff argument; we will not pursue this direction in this article.

Appendix: Removal of the assumption C1)

Jean Bourgain
School of Mathematics

Institute for Advanced Study
Princeton, NJ 08540, USA

The following lemma shows that the assumption C1) in Lemma 2.3 and its corollary in [5] can be removed.

Lemma 4.1 Suppose that $z_1, \dots, z_N$ are bounded, complex valued i.i.d. random variables with $E z_i = 0$ and $E |z_i|^2 = a > 0$. Let $P : \mathbb{C}^N \to \mathbb{C}^N$ be a rank-$m$ projection, and $\mathbf{z} = (z_1, \dots, z_N)$. Then, if $\delta$ is small enough, there exists $c > 0$ such that

$P\big( \|P\mathbf{z}\|^2 \leq \delta m \big) \leq e^{-cm}$.

Lemma 2.3 in [5] stated that the same conclusion holds under the condition C1), but it required no assumption on the boundedness of the random variables.

Proof. It is enough to prove that

$P\big( \big|\, \|P\mathbf{z}\|^2 - am\, \big| > \tau m \big) \leq e^{-c\tau^2 m}$ (4.6)

for all $\tau$ sufficiently small. Introduce the notation $\|X\|_q = [\, E|X|^q\, ]^{1/q}$. Since

$P\big( \big|\, \|P\mathbf{z}\|^2 - am\, \big| > \tau m \big) \leq \frac{\big\|\, \|P\mathbf{z}\|^2 - am\, \big\|_q^q}{(\tau m)^q}$, (4.7)

the bound (4.6) follows by showing that

$\big\|\, \|P\mathbf{z}\|^2 - am\, \big\|_q \leq C\sqrt{qm}$ for all $q \leq m$ (4.8)

(and then choosing $q = c_0 \tau^2 m$ with a small enough $c_0$). To prove (4.8), observe that (with the notation $e_i = (0, \dots, 0, 1, 0, \dots, 0)$ for the standard basis of $\mathbb{C}^N$)

$\|P\mathbf{z}\|^2 = \sum_{i=1}^{N} |z_i|^2\, \|Pe_i\|^2 + \sum_{i \neq j} \bar z_i z_j\, (Pe_i, Pe_j) = am + \sum_{i=1}^{N} \big( |z_i|^2 - E|z_i|^2 \big) \|Pe_i\|^2 + \sum_{i \neq j} \bar z_i z_j\, (Pe_i, Pe_j)$ (4.9)

and thus

$\big\|\, \|P\mathbf{z}\|^2 - am\, \big\|_q \leq \Big\| \sum_{i=1}^{N} \big( |z_i|^2 - E|z_i|^2 \big) \|Pe_i\|^2 \Big\|_q + \Big\| \sum_{i \neq j} \bar z_i z_j\, (Pe_i, Pe_j) \Big\|_q$. (4.10)

To bound the first term, we use that for arbitrary i.i.d. random variables $x_1, \dots, x_N$ with $E x_j = 0$ and $E e^{\delta x_j^2} < \infty$ for some $\delta > 0$, we have the bound

$\|X\|_q \leq C\sqrt{q}\, \|X\|_2$ (4.11)

for $X = \sum_{j=1}^{N} a_j x_j$, for arbitrary $a_j \in \mathbb{C}$. The bound (4.11) is an extension of Khintchine's inequality and it can be proven as follows, using the representation

$\|X\|_q^q = q \int_0^\infty dy\ y^{q-1}\, P(|X| \geq y)$. (4.12)

Writing $a_j = |a_j| e^{i\theta_j}$, $\theta_j \in \mathbb{R}$, and decomposing $e^{i\theta_j} x_j$ into real and imaginary parts, it is clearly sufficient to prove (4.11) for the case when $a_j, x_j \in \mathbb{R}$ are real and the $x_j$'s are independent with $E x_j = 0$ and $E e^{\delta x_j^2} < \infty$. To bound the probability $P(|X| \geq y)$ we observe that

$P(X \geq y) \leq e^{-ty}\, E e^{tX} = e^{-ty} \prod_{j=1}^{N} E e^{t a_j x_j} \leq e^{-ty}\, e^{\frac{C t^2}{2} \sum_{j=1}^{N} a_j^2}$,

because $E e^{\tau x_j} \leq e^{C\tau^2/2}$ from the moment assumptions on $x_j$, with a sufficiently large $C$ depending on $\delta$. Repeating this argument for $-X$, we find

$P(|X| \geq y) \leq 2\, e^{-ty}\, e^{\frac{C t^2}{2} \sum_{j=1}^{N} a_j^2} \leq 2\, e^{-y^2 / (2C \sum_{j=1}^{N} a_j^2)}$

after optimizing in $t$. The estimate (4.11) then follows by plugging the last bound into (4.12) and computing the integral.

Applying (4.11) with $x_i = |z_i|^2 - E|z_i|^2$ (where $E e^{\delta x_i^2} < \infty$ follows from the boundedness of the $z_i$), the first term on the r.h.s. of (4.10) can be controlled by

$\Big\| \sum_{i=1}^{N} \big( |z_i|^2 - E|z_i|^2 \big) \|Pe_i\|^2 \Big\|_q \leq C\sqrt{q}\, \Big( \sum_{i=1}^{N} \|Pe_i\|^4 \Big)^{1/2} \leq C\sqrt{q}\, \Big( \sum_{i=1}^{N} \|Pe_i\|^2 \Big)^{1/2} = C\sqrt{qm}$, (4.13)

since $\|Pe_i\| \leq 1$ and $\sum_i \|Pe_i\|^2 = \mathrm{Tr}\, P = m$. As for the second term on the r.h.s. of (4.10), we define the functions $\xi_j(s)$, $s \in [0,1]$, $j = 1, \dots, N$, by

$\xi_j(s) = 1$ if $s \in \bigcup_{k=0}^{2^{j-1}-1} \Big[ \frac{2k}{2^j},\, \frac{2k+1}{2^j} \Big)$, and $\xi_j(s) = 0$ otherwise.

Since

$\int_0^1 ds\ \xi_i(s)\big(1 - \xi_j(s)\big) = \frac{1}{4}$

for all $i \neq j$, the second term on the r.h.s. of (4.10) can be estimated by

$\Big\| \sum_{i \neq j} \bar z_i z_j\, (Pe_i, Pe_j) \Big\|_q \leq 4\, \Big\| \int_0^1 ds \sum_{i \neq j} \xi_i(s)\big(1 - \xi_j(s)\big)\, \bar z_i z_j\, (Pe_i, Pe_j) \Big\|_q$. (4.14)

For fixed $s \in [0,1]$, set

$I(s) = \{ 1 \leq i \leq N :\ \xi_i(s) = 1 \}$ and $J(s) = \{1, \dots, N\} \setminus I(s)$.

Then

$\sum_{i \neq j} \xi_i(s)\big(1 - \xi_j(s)\big)\, \bar z_i z_j\, (Pe_i, Pe_j) = \sum_{i \in I(s),\ j \in J(s)} \bar z_i z_j\, (Pe_i, Pe_j) = \sum_{j \in J(s)} z_j\, \Big( \sum_{i \in I(s)} \bar z_i\, Pe_i,\ e_j \Big)$.
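The functions $\xi_j$ are $\{0,1\}$-valued Rademacher-type functions, and the overlap value $\int_0^1 \xi_i(s)(1-\xi_j(s))\,ds = \frac14$ for $i \neq j$ is exact. A short self-contained check (ours) using exact dyadic arithmetic:

```python
from fractions import Fraction

def xi(j, s):
    """Indicator Rademacher function: 1 on the even dyadic cells at scale 2^-j."""
    return 1 if int(s * 2 ** j) % 2 == 0 else 0

def overlap(i, j, resolution=10):
    """Exact value of the integral of xi_i(s) * (1 - xi_j(s)) over [0, 1].
    Both functions are constant on dyadic cells of width 2^-resolution,
    so sampling cell midpoints gives the integral exactly."""
    n = 2 ** resolution
    s_vals = [Fraction(2 * k + 1, 2 * n) for k in range(n)]   # cell midpoints
    return Fraction(sum(xi(i, s) * (1 - xi(j, s)) for s in s_vals), n)

val = overlap(2, 5)     # equals 1/4 for any i != j (with resolution >= max(i, j))
```

The value $1/4$ reflects independence: each function equals 1 on exactly half of $[0,1]$, and for $i \neq j$ the cells of the finer function split the cells of the coarser one evenly.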

Since by definition $I \cap J = \emptyset$, the variables $\{z_i\}_{i \in I}$ and the variables $\{z_j\}_{j \in J}$ are independent. Therefore, we can apply Khintchine's inequality (4.11) in the variables $\{z_j\}_{j \in J}$ (separating the real and imaginary parts) to conclude that

$\Big\| \sum_{i \neq j} \xi_i(s)\big(1 - \xi_j(s)\big)\, \bar z_i z_j\, (Pe_i, Pe_j) \Big\|_q \leq C\sqrt{q}\, \bigg\| \Big( \sum_{j \in J(s)} \Big| \Big( \sum_{i \in I(s)} \bar z_i\, Pe_i,\ e_j \Big) \Big|^2 \Big)^{1/2} \bigg\|_q \leq C\sqrt{q}\, \Big\| \sum_{i \in I(s)} \bar z_i\, Pe_i \Big\|_q \leq Cq\, \big\|\, \|P\mathbf{z}\|\, \big\|_q$ (4.15)

for every $s \in [0,1]$. It follows from (4.14) that

$\Big\| \sum_{i \neq j} \bar z_i z_j\, (Pe_i, Pe_j) \Big\|_q \leq Cq\, \big\|\, \|P\mathbf{z}\|\, \big\|_q$.

Inserting the last equation and (4.13) into the r.h.s. of (4.10), it follows that

$\big\|\, \|P\mathbf{z}\|^2 - am\, \big\|_q \leq C\big( \sqrt{qm} + q\, \big\|\, \|P\mathbf{z}\|\, \big\|_q \big)$.

Since clearly

$\big\|\, \|P\mathbf{z}\|\, \big\|_q \leq \big\|\, \|P\mathbf{z}\|^2 - am\, \big\|_q^{1/2} + \sqrt{am}$,

the bound (4.8) follows immediately.

Acknowledgments: We thank the referee for very useful comments on earlier versions of this paper. We also thank J. Bourgain for the kind permission to include his result in the appendix.

References

[1] Anderson, G. W., Guionnet, A., Zeitouni, O.: Lecture notes on random matrices. Book in preparation.

[2] Bai, Z. D., Miao, B., Tsay, J.: Convergence rates of the spectral distributions of large Wigner matrices. Int. Math. J. 1 (2002), no. 1.

[3] Bobkov, S. G., Götze, F.: Exponential integrability and transportation cost related to logarithmic Sobolev inequalities. J. Funct. Anal. 163 (1999), no. 1, 1–28.

[4] Deift, P.: Orthogonal polynomials and random matrices: a Riemann-Hilbert approach. Courant Lecture Notes in Mathematics 3, American Mathematical Society, Providence, RI, 1999.

[5] Erdős, L., Schlein, B., Yau, H.-T.: Semicircle law on short scales and delocalization of eigenvectors for Wigner random matrices. Preprint, 2007. arXiv.org.

[6] Guionnet, A., Zeitouni, O.: Concentration of the spectral measure for large matrices. Electron. Comm. Probab. 5 (2000), Paper 14.

[7] Johansson, K.: Universality of the local spacing distribution in certain ensembles of Hermitian Wigner matrices. Comm. Math. Phys. 215 (2001).

[8] Khorunzhy, A.: On smoothed density of states for Wigner random matrices. Random Oper. Stochastic Equations 5 (1997), no. 2.

[9] Ledoux, M.: The concentration of measure phenomenon. Mathematical Surveys and Monographs 89, American Mathematical Society, Providence, RI, 2001.

[10] Quastel, J., Yau, H.-T.: Lattice gases, large deviations, and the incompressible Navier-Stokes equations. Ann. Math. 148, 51–108.

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices László Erdős University of Munich Oberwolfach, 2008 Dec Joint work with H.T. Yau (Harvard), B. Schlein (Cambrigde) Goal:

More information

Semicircle law on short scales and delocalization for Wigner random matrices

Semicircle law on short scales and delocalization for Wigner random matrices Semicircle law on short scales and delocalization for Wigner random matrices László Erdős University of Munich Weizmann Institute, December 2007 Joint work with H.T. Yau (Harvard), B. Schlein (Munich)

More information

arxiv: v3 [math-ph] 21 Jun 2012

arxiv: v3 [math-ph] 21 Jun 2012 LOCAL MARCHKO-PASTUR LAW AT TH HARD DG OF SAMPL COVARIAC MATRICS CLAUDIO CACCIAPUOTI, AA MALTSV, AD BJAMI SCHLI arxiv:206.730v3 [math-ph] 2 Jun 202 Abstract. Let X be a matrix whose entries are i.i.d.

More information

Exponential tail inequalities for eigenvalues of random matrices

Exponential tail inequalities for eigenvalues of random matrices Exponential tail inequalities for eigenvalues of random matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify

More information

Universality for random matrices and log-gases

Universality for random matrices and log-gases Universality for random matrices and log-gases László Erdős IST, Austria Ludwig-Maximilians-Universität, Munich, Germany Encounters Between Discrete and Continuous Mathematics Eötvös Loránd University,

More information

The Matrix Dyson Equation in random matrix theory

The Matrix Dyson Equation in random matrix theory The Matrix Dyson Equation in random matrix theory László Erdős IST, Austria Mathematical Physics seminar University of Bristol, Feb 3, 207 Joint work with O. Ajanki, T. Krüger Partially supported by ERC

More information

Isotropic local laws for random matrices

Isotropic local laws for random matrices Isotropic local laws for random matrices Antti Knowles University of Geneva With Y. He and R. Rosenthal Random matrices Let H C N N be a large Hermitian random matrix, normalized so that H. Some motivations:

More information

Comparison Method in Random Matrix Theory

Comparison Method in Random Matrix Theory Comparison Method in Random Matrix Theory Jun Yin UW-Madison Valparaíso, Chile, July - 2015 Joint work with A. Knowles. 1 Some random matrices Wigner Matrix: H is N N square matrix, H : H ij = H ji, EH

More information

Spectral Statistics of Erdős-Rényi Graphs II: Eigenvalue Spacing and the Extreme Eigenvalues

Spectral Statistics of Erdős-Rényi Graphs II: Eigenvalue Spacing and the Extreme Eigenvalues Spectral Statistics of Erdős-Rényi Graphs II: Eigenvalue Spacing and the Extreme Eigenvalues László Erdős Antti Knowles 2 Horng-Tzer Yau 2 Jun Yin 2 Institute of Mathematics, University of Munich, Theresienstrasse

More information

Local law of addition of random matrices

Local law of addition of random matrices Local law of addition of random matrices Kevin Schnelli IST Austria Joint work with Zhigang Bao and László Erdős Supported by ERC Advanced Grant RANMAT No. 338804 Spectrum of sum of random matrices Question:

More information

arxiv: v4 [math.pr] 10 Sep 2018

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices

Random Matrix: From Wigner to Quantum Chaos

Local Kesten McKay law for random regular graphs

Concentration Inequalities for Random Matrices

The least singular value of a random square matrix is O(n^{-1/2})

arxiv: v1 [math.pr] 22 May 2008 THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered

More information

Eigenvalue variance bounds for Wigner and covariance random matrices

Invertibility of symmetric random matrices

Universality of local spectral statistics of random matrices

Random Toeplitz Matrices

On the concentration of eigenvalues of random symmetric matrices

The circular law. Lewis Memorial Lecture / DIMACS minicourse, March 19, 2008. Terence Tao (UCLA)

Invertibility of random matrices

ON THE UNIQUENESS OF SOLUTIONS TO THE GROSS-PITAEVSKII HIERARCHY. 1. Introduction

Spectral Universality of Random Matrices

Random weighted projections, random quadratic forms and random eigenvectors

arxiv: v2 [math.pr] 16 Aug 2014 RANDOM WEIGHTED PROJECTIONS, RANDOM QUADRATIC FORMS AND RANDOM EIGENVECTORS VAN VU DEPARTMENT OF MATHEMATICS, YALE UNIVERSITY arxiv:306.3099v2 [math.pr] 6 Aug 204 KE WANG INSTITUTE FOR MATHEMATICS AND

More information

John-Nirenberg inequality, Part I

Universality of sine-kernel for Wigner matrices with a small Gaussian perturbation

DELOCALIZATION OF EIGENVECTORS OF RANDOM MATRICES

Random Matrices: Invertibility, Structure, and Applications

Determinantal point processes and random matrix theory in a nutshell

Rectangular Young tableaux and the Jacobi ensemble

Local Hilbert expansion for the Boltzmann equation. Yan Guo, Juhi Jang, and Ning Jiang

Comment on "Finite size effects in the averaged eigenvalue density of Wigner random-sign real symmetric matrices" by G.S. Dhesi and M. Ausloos

Universality for a class of random band matrices. 1 Introduction

Eigenvectors distribution and quantum unique ergodicity for deformed Wigner matrices

On the principal components of sample covariance matrices

Random regular digraphs: singularity and spectrum

Lectures 2-3: Wigner's semicircle law

Inhomogeneous circular laws for random matrices with non-identically distributed entries

Random perturbation of low rank matrices: improving classical bounds

Dynamical approach to random matrix theory

III. Quantum ergodicity on graphs, perspectives

Universality of Random Matrices and Dyson Brownian Motion

Spectral Gap and Concentration for Some Spherically Symmetric Probability Measures

A COUNTEREXAMPLE TO AN ENDPOINT BILINEAR STRICHARTZ INEQUALITY. TERENCE TAO

Lectures 6-7: Marchenko-Pastur Law

Lecture 3. Random Fourier measurements

RANDOM MATRICES: TAIL BOUNDS FOR GAPS BETWEEN EIGENVALUES. 1. Introduction

Delocalization of eigenvectors of random matrices Lecture notes

Free probabilities and the large N limit, IV. Loop equations and all-order asymptotic expansions... Gaëtan Borot

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES

Chapter 4: Hilbert Spaces

A new method to bound rate of convergence

Lecture Notes on the Matrix Dyson Equation and its Applications for Random Matrices

MATH 205C: STATIONARY PHASE LEMMA

On rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro

Mesoscopic eigenvalue statistics of Wigner matrices

Limiting behaviour of large Frobenius numbers, by J. Bourgain and Ya. G. Sinai

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1.

Quantum Diffusion and Delocalization for Random Band Matrices

Least singular value of random matrices. Lewis Memorial Lecture / DIMACS minicourse, March 18, 2008. Terence Tao (UCLA)

Lecture I: Asymptotics for large GUE random matrices

From the mesoscopic to microscopic scale in random matrix theory

Large Deviations, Linear Statistics, and Scaling Limits for Mahler Ensemble of Complex Random Polynomials

Kernel Method: Data Analysis with Positive Definite Kernels

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations

Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction

Concentration inequalities: basics and some new challenges

On a homogenization problem. J. Bourgain

Lecture 8 : Eigenvalues and Eigenvectors

1 Math 241A-B Homework Problem List for F2015 and W2016

On singular values distribution of a large auto-covariance matrix in the ultra-dimensional regime. Q. Wang and J. Yao

On singular values distribution of a matrix large auto-covariance in the ultra-dimensional regime. Title itle On singular values distribution of a matrix large auto-covariance in the ultra-dimensional regime Authors Wang, Q; Yao, JJ Citation Random Matrices: heory and Applications, 205, v. 4, p. article no.

More information

Microscopic behavior for β-ensembles: an energy approach

Markov operators, classical orthogonal polynomial ensembles, and random matrices

LARGE DEVIATIONS OF TYPICAL LINEAR FUNCTIONALS ON A CONVEX BODY WITH UNCONDITIONAL BASIS. S. G. Bobkov and F. L. Nazarov. September 25, 2011

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

1 Intro to RMT (Gene)

1 Intro to RMT (Gene) M705 Spring 2013 Summary for Week 2 1 Intro to RMT (Gene) (Also see the Anderson - Guionnet - Zeitouni book, pp.6-11(?) ) We start with two independent families of R.V.s, {Z i,j } 1 i

More information

Strong Convergence of the Empirical Distribution of Eigenvalues of Large Dimensional Random Matrices

Schauder Theory

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1. Sep. 1 9 Intuitively, the solution u to the Poisson equation S chauder Theory u = f 1 should have better regularity than the right hand side f. In particular one expects u to be twice more differentiable

More information

Central Limit Theorems for linear statistics for Biorthogonal Ensembles

Derivation of the cubic non-linear Schrödinger equation from quantum dynamics of many-body systems

Applications of random matrix theory to principal component analysis (PCA)

Operator-Valued Free Probability Theory and Block Random Matrices. Roland Speicher Queen s University Kingston

The norm of certain matrix operators on new difference sequence spaces. Jordan Journal of Mathematics and Statistics (JJMS) 8(3), 2015, pp. 223-237

Diffusion Profile and Delocalization for Random Band Matrices

Fourier series

be the set of complex valued 2π-periodic functions f on R such that . Fourier series. Definition.. Given a real number P, we say a complex valued function f on R is P -periodic if f(x + P ) f(x) for all x R. We let be the set of complex valued -periodic functions f on

More information

A note on the convex infimum convolution inequality

Groups of Prime Power Order with Derived Subgroup of Prime Order

Groups of Prime Power Order with Derived Subgroup of Prime Order Journal of Algebra 219, 625 657 (1999) Article ID jabr.1998.7909, available online at http://www.idealibrary.com on Groups of Prime Power Order with Derived Subgroup of Prime Order Simon R. Blackburn*

More information

THE SMALLEST SINGULAR VALUE OF A RANDOM RECTANGULAR MATRIX

Failure of the Raikov Theorem for Free Random Variables

Non white sample covariance matrices.

A Perron-type theorem on the principal eigenvalue of nonsymmetric elliptic operators

Weak and strong moments of ℓ_r-norms of log-concave vectors

The Isotropic Semicircle Law and Deformation of Wigner Matrices

OPTIMAL UPPER BOUND FOR THE INFINITY NORM OF EIGENVECTORS OF RANDOM MATRICES

Limit distribution of eigenvalues for random Hankel and Toeplitz band matrices

Eigenvector Statistics of Sparse Random Matrices. Jiaoyang Huang. Harvard University

RANDOM MATRICES: OVERCROWDING ESTIMATES FOR THE SPECTRUM. 1. introduction
