Local Semicircle Law and Complete Delocalization for Wigner Random Matrices


Local Semicircle Law and Complete Delocalization for Wigner Random Matrices

The Harvard community has made this article openly available.

Citation: Erdős, László, Benjamin Schlein, and Horng-Tzer Yau. 2008. Local Semicircle Law and Complete Delocalization for Wigner Random Matrices. Communications in Mathematical Physics 287 (2) (September 24): 641-655. doi:10.1007/s00220-008-0636-9.
Citable link: http://nrs.harvard.edu/urn-3:hul.instrepos:32706728
Terms of use: This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:hul.instrepos:dash.current.terms-of-use#oap

(Article begins on next page)

arXiv:0803.0542v4 [math-ph] 23 Sep 2008

Local semicircle law and complete delocalization for Wigner random matrices

László Erdős^1, Benjamin Schlein^1 and Horng-Tzer Yau^2

^1 Institute of Mathematics, University of Munich, Theresienstr. 39, D-80333 Munich, Germany
^2 Department of Mathematics, Harvard University, Cambridge MA 02138, USA

Jun 6, 2008

Abstract. We consider N×N Hermitian random matrices with independent, identically distributed entries. The matrix is normalized so that the average spacing between consecutive eigenvalues is of order 1/N. Under suitable assumptions on the distribution of the single matrix element, we prove that, away from the spectral edges, the density of eigenvalues concentrates around the Wigner semicircle law on energy scales η ≥ N^{-1}(log N)^8. Up to the logarithmic factor, this is the smallest energy scale for which the semicircle law may be valid. We also prove that for all eigenvalues away from the spectral edges, the ℓ^∞-norm of the corresponding eigenvectors is of order O(N^{-1/2}), modulo logarithmic corrections. The upper bound O(N^{-1/2}) implies that every eigenvector is completely delocalized, i.e., the maximum size of the components of the eigenvector is of the same order as their average size. In the Appendix, we include a lemma by J. Bourgain which removes one of our assumptions on the distribution of the matrix elements.

AMS Subject Classification: 15A52, 82B44
Running title: Local semicircle law
Key words: Semicircle law, Wigner random matrix, random Schrödinger operator, density of states, localization, extended states.

Supported by the Sofja-Kovalevskaya Award of the Humboldt Foundation. On leave from Cambridge University, UK. Partially supported by NSF grant DMS-0602038.

1 Introduction

The Wigner semicircle law states that the empirical density of the eigenvalues of a random matrix is given by the universal semicircle distribution. This statement has been proved for many different ensembles, in particular for the case when the distributions of the entries of the matrix are independent, identically distributed (i.i.d.). To fix the scaling, we normalize the matrix so that the bulk of the spectrum lies in the energy interval [-2, 2], i.e., the average spacing between consecutive eigenvalues is of order 1/N. We now consider a window of size η in the bulk, so that the typical number of eigenvalues is of order Nη. In the usual statement of the semicircle law, η is a fixed number independent of N and it is taken to zero only after the limit N → ∞. This can be viewed as the largest scale on which the semicircle law is valid. On the other extreme, for the smallest scale, one may take η = k/N and take the limit N → ∞ followed by k → ∞. If the semicircle law is valid in this sense, we shall say that the local semicircle law holds. Below this smallest scale, the eigenvalue distribution is expected to be governed by the Dyson statistics related to sine kernels. The Dyson statistics was proved for many ensembles (see [1, 4] for a review), including Wigner matrices with Gaussian convoluted distributions [7]. In this paper, we establish the local semicircle law up to logarithmic factors in the energy scale, i.e., for η ≥ N^{-1}(log N)^8. The result holds for any energy window in the bulk spectrum away from the spectral edges. In [5] we have proved the same statement for η ≥ N^{-2/3} (modulo logarithmic corrections). Prior to our work, the best result was obtained in [2] for η ≥ N^{-1/2}. See also [6] and [8] for related and earlier results. As a corollary, our result also proves that no gap between consecutive bulk eigenvalues can be bigger than C(log N)^8/N, to be compared with the expected average 1/N behavior given by Dyson's law.
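The normalization described above is easy to probe in simulation. The following sketch is not from the paper; it assumes numpy is available, samples a GUE-type Wigner matrix with the scaling of the introduction, and checks that the fraction of eigenvalues in [-1, 1] is close to the semicircle mass of that interval, 1/3 + √3/(2π) ≈ 0.609.

```python
import numpy as np

def wigner_gue(N, rng):
    """Hermitian Wigner matrix normalized so the spectrum fills [-2, 2]."""
    X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    # Off-diagonal entries have variance 1/N, diagonal variance 1/N as well.
    return (X + X.conj().T) / np.sqrt(2 * N)

rng = np.random.default_rng(0)
N = 1000
eigs = np.linalg.eigvalsh(wigner_gue(N, rng))

# Semicircle mass of [-1, 1]: (1/2pi) * integral of sqrt(4 - x^2) over [-1, 1]
# = 1/3 + sqrt(3)/(2*pi) ~ 0.609.
frac = np.mean(np.abs(eigs) <= 1.0)
print(round(float(frac), 3))
```

For a single sample at N = 1000 the empirical fraction already agrees with 0.609 to a couple of percent, and the spectrum stays inside [-2, 2] up to edge fluctuations.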
It is widely believed that the eigenvalue distribution of the Wigner random matrix and that of the random Schrödinger operator in the extended (or delocalized) state regime are the same up to normalizations. Although this conjecture is far from the reach of the current method, a natural question arises as to whether the eigenvectors of random matrices are extended. More precisely, if v = (v_1, ..., v_N) is an ℓ^2-normalized eigenvector, ‖v‖ = 1, we say that v is completely delocalized if ‖v‖_∞ = max_j |v_j| is bounded from above by CN^{-1/2}, the average size of the |v_j|. In this paper, we shall prove that all eigenvectors with eigenvalues away from the spectral edges are completely delocalized (modulo logarithmic corrections) in probability. Similar results, but with CN^{-1/2} replaced by CN^{-1/3}, were proved in [5]. Notice that our new result, in particular, answers (up to logarithmic factors) the question posed by T. Spencer whether ‖v‖_4 is of order N^{-1/4}.

Denote the (i, j)-th entry of an N×N matrix H by h_{i,j} = h_{ij}. When there is no confusion, we omit the comma between the two subscripts. We shall assume that the matrix is Hermitian, i.e., h_{ij} = conj(h_{ji}). These matrices form a Hermitian Wigner ensemble if

    h_{ij} = N^{-1/2} [x_{ij} + i y_{ij}], (i < j), and h_{ii} = N^{-1/2} x_{ii}, (1.1)

where x_{ij}, y_{ij} (i < j) and x_{ii} are independent real random variables with mean zero. We assume that x_{ij}, y_{ij} (i < j) all have a common distribution ν with variance 1/2 and with a strictly positive density function: dν(x) = (const.) e^{-g(x)} dx. The diagonal elements, x_{ii}, also have a common distribution, dν~(x) = (const.) e^{-g~(x)} dx, that may be different from dν. Let P and E denote the probability and the expectation value, respectively, w.r.t. the joint distribution of all matrix elements.

We need to assume further conditions on the distributions of the matrix elements in addition to (1.1).

C1) The function g is twice differentiable and it satisfies

    sup_{x ∈ R} g''(x) < ∞. (1.2)

C2) There exists a δ > 0 such that

    ∫ e^{δ x^2} dν~(x) < ∞. (1.3)

C4) The measure ν satisfies the logarithmic Sobolev inequality, i.e., there exists a constant C_sob such that for any density function u > 0 with ∫ u dν = 1,

    ∫ u log u dν ≤ C_sob ∫ |∇√u|^2 dν. (1.4)

Here we have followed the convention in [5] to use the label C4) for the logarithmic Sobolev bound and reserved C3) for a spectral gap condition in [5]. We will also need the decay condition (1.3) for the measure dν, i.e., for some small δ > 0,

    ∫ e^{δ x^2} dν(x) < ∞. (1.5)

This condition was assumed in the earlier version of the manuscript, but J.-D. Deuschel and M. Ledoux kindly pointed out to us that (1.5) follows from C4), see [9]. Condition C1) is needed only because we will use Lemma 2.3 of [5] in the proof of the following Theorem 1.1. J. Bourgain has informed us that this lemma can also be proved without this condition. We include the precise statement and his proof in the Appendix.

Notation. We will use the notation |A| both for the Lebesgue measure of a set A ⊂ R and for the cardinality of a discrete set A ⊂ Z. The usual Hermitian scalar product for vectors x, y ∈ C^N will be denoted by x·y or by (x, y). We will use the convention that C denotes generic large constants and c denotes generic small positive constants whose values may change from line to line. Since we are interested in large matrices, we always assume that N is sufficiently large.

Let H be the N×N Wigner matrix with eigenvalues µ_1 ≤ µ_2 ≤ ... ≤ µ_N. For any spectral parameter z = E + iη ∈ C, η > 0, we denote the Green function by G_z = (H - z)^{-1}. Let F(x) = F_N(x) be the empirical distribution function of the eigenvalues,

    F(x) = (1/N) |{α : µ_α ≤ x}|. (1.6)

We define the Stieltjes transform of F as

    m = m(z) = (1/N) Tr G_z = ∫_R dF(x)/(x - z), (1.7)

and we let

    ρ = ρ_η(E) = Im m(z)/π = (1/(Nπ)) Im Tr G_z = (1/(Nπ)) Σ_α η/((µ_α - E)^2 + η^2) (1.8)

be the normalized density of states of H around energy E, regularized on scale η. The random variables m and ρ also depend on N; when necessary, we will indicate this fact by writing m_N and ρ_N.
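For orientation, the Stieltjes transform m_N just defined is easy to compute for a sampled matrix. The sketch below (illustrative, assuming numpy; not part of the paper) evaluates m_N(z) at one bulk point and compares it with the semicircle value m_sc(z) = (-z + √(z^2 - 4))/2, taking the square root branch with Im m_sc > 0; the two agree up to an error of order 1/(Nη).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
H = (X + X.conj().T) / np.sqrt(2 * N)   # Wigner matrix, spectrum ~ [-2, 2]

z = 0.5 + 0.1j                           # bulk energy E = 0.5, scale eta = 0.1
eigs = np.linalg.eigvalsh(H)
m_N = np.mean(1.0 / (eigs - z))          # Stieltjes transform of the spectrum

# Semicircle Stieltjes transform, branch chosen so that Im m_sc > 0
s = np.sqrt(z * z - 4)
m_sc = (-z + s) / 2
if m_sc.imag < 0:
    m_sc = (-z - s) / 2

print(abs(m_N - m_sc))
```

At Nη = 100 the deviation |m_N(z) - m_sc(z)| is already of order a few times 10^{-2}, consistent with the 1/(Nη) heuristic.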
For any z = E + iη we let

    m_sc = m_sc(z) = ∫_R ρ_sc(x) dx/(x - z)

be the Stieltjes transform of the Wigner semicircle distribution function, whose density is given by

    ρ_sc(x) = (1/(2π)) √(4 - x^2) · 1(|x| ≤ 2).

For κ, η~ > 0 we define the set

    S_{N,κ,η~} := { z = E + iη ∈ C : |E| ≤ 2 - κ, η~ ≤ η ≤ 1 },

and for η~ = N^{-1}(log N)^8 we write

    S_{N,κ} := { z = E + iη ∈ C : |E| ≤ 2 - κ, (log N)^8/N ≤ η ≤ 1 }.

The following two theorems are the main results of this paper.

Theorem 1.1 Let H be an N×N Wigner matrix as described in (1.1) and assume the conditions (1.2), (1.3) and (1.4). Then for any κ > 0 and ε > 0, the Stieltjes transform m_N(z) (see (1.7)) of the empirical eigenvalue distribution of the N×N Wigner matrix satisfies

    P{ sup_{z ∈ S_{N,κ}} |m_N(z) - m_sc(z)| ≥ ε } ≤ e^{-c(log N)^2}, (1.9)

where c > 0 depends on κ, ε. In particular, the density of states ρ_η(E) converges to the Wigner semicircle law in probability, uniformly for all energies away from the spectral edges and for all energy windows of size at least N^{-1}(log N)^8. Furthermore, let η* = η*(N) be such that (log N)^8/N ≪ η* ≪ 1 as N → ∞; then we have the convergence of the counting function as well:

    P{ sup_{|E| ≤ 2-κ} | N_{η*}(E)/(2Nη*) - ρ_sc(E) | ≥ ε } ≤ e^{-c(log N)^2} (1.10)

for any ε > 0, where N_{η*}(E) = |{α : |µ_α - E| ≤ η*}| denotes the number of eigenvalues in the interval [E - η*, E + η*].

This result identifies the density of states away from the spectral edges in a window where the typical number of eigenvalues is of order bigger than (log N)^8. Our scale is not sufficiently small to identify individual eigenvalues; in particular, we do not know whether the local eigenvalue spacing follows the expected Dyson statistics characterized by the sine kernel or some other local statistics, e.g., that of a Poisson point process.

Theorem 1.2 Let H be an N×N Wigner matrix as described in (1.1) and satisfying the conditions (1.2), (1.3) and (1.4). Fix κ > 0, and assume that C is large enough. Then there exists c > 0 such that

    P{ ∃ v with Hv = µv, ‖v‖ = 1, µ ∈ [-2 + κ, 2 - κ] and ‖v‖_∞ ≥ C(log N)^{9/2}/N^{1/2} } ≤ e^{-c(log N)^2}.

We now sketch the key idea to prove Theorem 1.1; Theorem 1.2 can be proved following similar ideas used in [5].
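The delocalization statement of Theorem 1.2 can also be observed numerically. The sketch below (illustrative only, assuming numpy) computes √N·‖v‖_∞ over the bulk eigenvectors of one sample: for a completely delocalized vector this quantity stays of logarithmic size, far below the value √N that a vector concentrated on one coordinate would give.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
H = (X + X.conj().T) / np.sqrt(2 * N)

eigs, vecs = np.linalg.eigh(H)      # columns of vecs are l2-normalized eigenvectors
bulk = np.abs(eigs) <= 1.5          # stay away from the spectral edges
sup_norms = np.abs(vecs[:, bulk]).max(axis=0)

# Complete delocalization: max_j |v_j| = O(N^{-1/2}) up to logarithmic factors,
# so sqrt(N) * ||v||_inf should be of size (log N)^{O(1)}, not of size sqrt(N).
worst = float(np.sqrt(N) * sup_norms.max())
print(round(worst, 2))
```

For N = 500 the worst bulk value is a small single-digit number (roughly √(2 log N)), while a localized eigenvector would give √N ≈ 22.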

Let B^(k) denote the (N-1)×(N-1) minor of H after removing the k-th row and k-th column, and let m^(k)(z) denote the Stieltjes transform of the eigenvalue distribution function associated with B^(k). It is known that m(z), defined in (1.7), satisfies the recurrence relation

    m(z) = (1/N) Σ_{k=1}^N 1/( h_{kk} - z - (1 - 1/N) m^(k)(z) - X_k ), (1.11)

where X_k (defined precisely in (2.4)) is an error term depending on B^(k) and the k-th column and row elements of the random matrix H. If we neglect X_k (and h_{kk}, which is of order N^{-1/2} by definition) and replace m^(k) by m, we obtain an equation for m, and this leads to the Stieltjes transform of the semicircle law. So our main task is to prove that X_k is negligible. Unfortunately, X_k depends crucially on the eigenvalues and eigenfunctions of B^(k). In an earlier work [2], the estimate on X_k was done via an involved bootstrap argument (and valid up to order N^{-1/2}). The bootstrapping is needed in [2] since X_k depends critically on properties of B^(k) for which there was only limited a priori information. In our preceding paper [5], we split m and m^(k) into their means and variances; the variances were then shown to be negligible up to the scale N^{-2/3} (the variance control of m up to the scale N^{-1/2} was already in [6]). On the other hand, the means of m and m^(k) are very close due to the fact that the eigenvalues of H and B^(k) are interlaced. Finally, X_k was controlled via an estimate on its fourth moment. We have thus arrived at a fixed point equation for the mean of m whose unique solution is the Stieltjes transform of the semicircle law. In the current paper, we avoid the variance control by viewing m and m^(k) directly as random variables in the recurrence relation (1.11). Furthermore, the moment control on X_k is now improved to an exponential moment estimate.
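The smallness of the error term X_k in the recurrence relation above can be seen directly in simulation. The following sketch (illustrative, assuming numpy; not the paper's proof) evaluates the fluctuation X_k = (1/N) Σ_α (ξ_α - 1)/(λ_α - z) for one sampled matrix and one index k; its typical size is of order (Nη)^{-1/2}, small on the scales of interest.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1000
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
H = (X + X.conj().T) / np.sqrt(2 * N)

z = 0.2 + 0.05j                        # bulk point with eta = 0.05 >> 1/N
k = 0
idx = np.arange(1, N)
B = H[np.ix_(idx, idx)]                # minor B^(k): remove row/column k
a = H[idx, k]                          # k-th column without the diagonal entry

eigs, U = np.linalg.eigh(B)
xi = N * np.abs(U.conj().T @ a) ** 2   # xi_alpha = N |a . u_alpha|^2, with E xi_alpha = 1

# Fluctuation of the quadratic form a . (B - z)^{-1} a around its mean
X_k = np.sum((xi - 1) / (eigs - z)) / N
print(abs(X_k))
```

With Nη = 50 the observed |X_k| is of order 10^{-1}, consistent with the (Nη)^{-1/2} heuristic, and it shrinks as η grows.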
Our estimate on the fourth moment of X_k was done via a spectral gap argument, and it is folklore that such moment estimates can usually be lifted to exponential moment estimates provided the spectral gap estimate is replaced by a logarithmic Sobolev inequality. In this paper, we use a concentration of measure inequality to avoid all bootstrap arguments appearing both in [2] and [8]. In the previous version of this paper we obtained the concentration inequality by using the logarithmic Sobolev inequality together with the Gibbs entropy inequality. We would like to thank the referee who pointed out to us that the concentration inequality of Bobkov and Götze [3] can be used directly to shortcut our original proof. The applicability of the Bobkov-Götze inequality, as well as some of the heuristic arguments presented here, depends crucially on an a priori upper bound on |m(z)|; this was obtained via a large deviation estimate on the eigenvalue concentration [5].

2 Proof of Theorem 1.1

The proof of (1.10) follows from (1.9) exactly as in Corollary 4.2 of [5], so we focus on proving (1.9). We first remove the supremum in (1.9). For any two points z, z' ∈ S_{N,κ,η~} we have |m_N(z) - m_N(z')| ≤ N^2 |z - z'|, since the gradient of m_N(z) is bounded by |Im z|^{-2} ≤ N^2 on S_{N,κ}. We can choose a set of at most Q = Cε^{-2}N^4 points, z_1, z_2, ..., z_Q, in S_{N,κ,η~} such that for any z ∈ S_{N,κ,η~} there exists a point z_j with |z - z_j| ≤ (ε/4) N^{-2}. In particular, |m_N(z) - m_N(z_j)| ≤ ε/4 if N is large enough, and |m_sc(z) - m_sc(z_j)| ≤ ε/4. Since Im z_j ≥ η~, under the condition that η~ ≥ N^{-1}(log N)^8 we have

    P{ sup_{z ∈ S_{N,κ}} |m_N(z) - m_sc(z)| ≥ ε } ≤ Σ_{j=1}^Q P{ |m_N(z_j) - m_sc(z_j)| ≥ ε/2 }.

Therefore, in order to conclude (1.9), it suffices to prove that

    P{ |m_N(z) - m_sc(z)| ≥ ε } ≤ e^{-c(log N)^2} (2.1)

for each fixed z ∈ S_{N,κ}. Let B^(k) denote the (N-1)×(N-1) minor of H after removing the k-th row and k-th column. Note that B^(k) is an (N-1)×(N-1) Hermitian Wigner matrix with a normalization factor off by (1 - 1/N)^{1/2}. Let λ_1^(k) ≤ λ_2^(k) ≤ ... ≤ λ_{N-1}^(k) denote its eigenvalues and u_1^(k), ..., u_{N-1}^(k) the corresponding normalized eigenvectors.

Let a^(k) = (h_{k,1}, h_{k,2}, ..., h_{k,k-1}, h_{k,k+1}, ..., h_{k,N}) ∈ C^{N-1}, i.e., the k-th column after removing the diagonal element h_{k,k} = h_{kk}. Computing the (k, k) diagonal element of the resolvent G_z, we have

    G_z(k, k) = [ h_{kk} - z - a^(k)·(B^(k) - z)^{-1} a^(k) ]^{-1} = [ h_{kk} - z - (1/N) Σ_{α=1}^{N-1} ξ_α^(k)/(λ_α^(k) - z) ]^{-1}, (2.2)

where we defined

    ξ_α^(k) := N |a^(k)·u_α^(k)|^2.

Similarly to the definition of m(z) in (1.7), we also define the Stieltjes transform of the density of states of B^(k),

    m^(k) = m^(k)(z) = (1/(N-1)) Tr (B^(k) - z)^{-1} = ∫_R dF^(k)(x)/(x - z),

with the empirical counting function

    F^(k)(x) = (1/(N-1)) |{α : λ_α^(k) ≤ x}|.

The spectral parameter z is fixed throughout the proof and we will often omit it from the argument of the Stieltjes transforms. It follows from (2.2) that

    m = m(z) = (1/N) Σ_{k=1}^N G_z(k, k) = (1/N) Σ_{k=1}^N 1/( h_{kk} - z - a^(k)·(B^(k) - z)^{-1} a^(k) ). (2.3)

Let E_k denote the expectation value w.r.t. the random vector a^(k). Define the random variable

    X_k(z) = X_k := a^(k)·(B^(k) - z)^{-1} a^(k) - E_k a^(k)·(B^(k) - z)^{-1} a^(k) = (1/N) Σ_{α=1}^{N-1} (ξ_α^(k) - 1)/(λ_α^(k) - z), (2.4)

where we used that E_k ξ_α^(k) = ‖u_α^(k)‖^2 = 1. We note that

    E_k a^(k)·(B^(k) - z)^{-1} a^(k) = (1/N) Σ_α 1/(λ_α^(k) - z) = (1 - 1/N) m^(k).

With this notation it follows from (2.2) that

    m = (1/N) Σ_{k=1}^N 1/( h_{kk} - z - (1 - 1/N) m^(k) - X_k ). (2.5)
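Identity (2.2) is an instance of the Schur complement formula for the resolvent, and it can be sanity-checked numerically. A minimal sketch (assuming numpy; not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
H = (X + X.conj().T) / np.sqrt(2 * N)

z = 0.3 + 0.2j
k = 0
G_kk = np.linalg.inv(H - z * np.eye(N))[k, k]

# Minor B^(k) and the k-th column with the diagonal entry removed
idx = [i for i in range(N) if i != k]
B = H[np.ix_(idx, idx)]
a = H[idx, k]

# Schur complement: G_z(k, k) = 1 / (h_kk - z - a . (B - z)^{-1} a)
schur = 1.0 / (H[k, k] - z - a.conj() @ np.linalg.inv(B - z * np.eye(N - 1)) @ a)
print(abs(G_kk - schur))
```

The two expressions agree up to floating-point error, for any k and any z with Im z > 0.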

We use that

    m - (1 - 1/N) m^(k) = ∫ dF(x)/(x - z) - (1 - 1/N) ∫ dF^(k)(x)/(x - z) = ∫ [ N F(x) - (N-1) F^(k)(x) ] / ( N (x - z)^2 ) dx.

We recall that the eigenvalues of H and B^(k) are interlaced,

    µ_1 ≤ λ_1^(k) ≤ µ_2 ≤ λ_2^(k) ≤ ... ≤ λ_{N-1}^(k) ≤ µ_N, (2.6)

(see, e.g., Lemma 2.5 of [5]); therefore we have max_x |N F(x) - (N-1) F^(k)(x)| ≤ 1. Thus

    | m - (1 - 1/N) m^(k) | ≤ (1/N) ∫ dx/|x - z|^2 ≤ C/(Nη). (2.7)

We postpone the proof of the following lemma:

Lemma 2.1 Suppose that v_α and λ_α are the eigenvectors and eigenvalues of an N×N random matrix with a law satisfying the assumptions of Theorem 1.1. Let

    X = (1/N) Σ_α (ξ_α - 1)/(λ_α - z)

with z = E + iη, ξ_α = |b·v_α|^2, where the components of b are i.i.d. random variables satisfying (1.4). Then there exist sufficiently small positive constants ε_0 and c such that in the joint product probability space of b and the law of the random matrices we have

    P[ |X| ≥ ε ] ≤ e^{-cε(log N)^2}

for any ε ≤ ε_0 and η ≥ (log N)^8/N.

For a given ε > 0 and z = E + iη ∈ S_{N,κ}, we set z_n = E + i 2^n η and we define the event

    Ω = ∪_{k=1}^N ∪_{n=0}^{[log_2(1/η)]} { |X_k(z_n)| ≥ ε/3 } ∪ ∪_{k=1}^N { |h_{kk}| ≥ ε/3 },

where [ · ] denotes the integer part. Since h_{kk} = N^{-1/2} b_{kk} with b_{kk} satisfying (1.3), we have

    P{ |h_{kk}| ≥ ε/3 } ≤ C e^{-δε^2 N/9}.

We now apply Lemma 2.1 for each X_k(z_n) and conclude that

    P(Ω) ≤ e^{-cε(log N)^2}

with a sufficiently small c > 0. On the complement Ω^c we have from (2.5)

    m(z_n) = (1/N) Σ_{k=1}^N 1/( -m(z_n) - z_n + δ_k ),

where

    δ_k = δ_k(z_n) = h_{kk} + ( m(z_n) - (1 - 1/N) m^(k)(z_n) ) - X_k(z_n)

are random variables satisfying |δ_k| ≤ ε by (2.7). After expansion, the last equation implies that

    | m(z_n) + 1/( m(z_n) + z_n ) | ≤ ε / [ |Im(m(z_n) + z_n)| ( |Im(m(z_n) + z_n)| - ε ) ], (2.8)

if |Im(m(z_n) + z_n)| > ε, using that |-m(z_n) - z_n + δ_k| ≥ |Im(m(z_n) + z_n)| - ε. We note that for any z ∈ S_{N,κ} the equation

    M + 1/(M + z) = 0 (2.9)

has a unique solution with Im M > 0, namely M = m_sc(z), the Stieltjes transform of the semicircle law. Note that there exists c(κ) > 0 such that Im m_sc(E + iη) ≥ c(κ) for any |E| ≤ 2 - κ, uniformly in η. The equation (2.9) is stable in the following sense. For any small δ, let M = M(z, δ) be a solution to

    M + 1/(M + z) = δ (2.10)

with Im M > 0. Explicitly, we have

    M = ( -z + √(z^2 - 4 + 2zδ + δ^2) )/2 + δ/2,

where we have chosen the square root so that Im M > 0 when δ = 0 and Im z > 0. On the compact set z ∈ S_{N,κ}, |z^2 - 4| is bounded away from zero and thus

    |M - m_sc| ≤ C_κ |δ| (2.11)

for some constant C_κ depending only on κ. Now we perform a bootstrap argument in the imaginary part of z to prove that

    |m(z) - m_sc(z)| ≤ C_1 ε (2.12)

uniformly in z ∈ S_{N,κ}, with a sufficiently large constant C_1. Fix z = E + iη with |E| ≤ 2 - κ and let z_n = E + i 2^n η. For n = [log_2(1/η)], we have Im z_n ∈ [1/2, 1], and (2.12) follows from (2.8) with some small ε, since the right hand side of (2.8) is bounded by Cε. Suppose now that (2.12) has been proven for z = z_n, for some n with η_n = Im z_n ∈ [2N^{-1}(log N)^8, 1]. We want to prove it for z = z_{n-1}, with Im z_{n-1} = η_n/2. By integrating the inequality

    (η_n/2) / ( (x - E)^2 + (η_n/2)^2 ) ≥ (1/2) η_n / ( (x - E)^2 + η_n^2 )

with respect to dF(x), we obtain that

    Im m(z_{n-1}) ≥ (1/2) Im m(z_n) ≥ (1/2)( c(κ) - C_1 ε ) > c(κ)/4

for sufficiently small ε, where (2.12) and Im m_sc(z_n) ≥ c(κ) were used. Thus the right hand side of (2.8), with z_n replaced by z_{n-1}, is bounded by Cε, the constant depending only on κ. Applying the stability bound (2.11), we get (2.12) for z = z_{n-1}. Continuing the induction argument, we finally obtain (2.12) for z = z_0 = E + iη.
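The explicit formula for M(z, δ) used in the stability argument above follows from the quadratic equation hidden behind the perturbed relation M + 1/(M + z) = δ; a worked derivation (added here for the reader, not in the original text) reads:

```latex
% Solving M + 1/(M+z) = \delta for M with Im M > 0.
% Multiplying by (M+z):
%   M^2 + zM + 1 = \delta(M+z), i.e. M^2 + (z-\delta)M + (1-\delta z) = 0.
\[
M \;=\; \frac{-(z-\delta) \pm \sqrt{(z-\delta)^2 - 4(1-\delta z)}}{2}
  \;=\; \frac{-z \pm \sqrt{z^2 - 4 + 2z\delta + \delta^2}}{2} + \frac{\delta}{2},
\]
% since (z-\delta)^2 - 4(1-\delta z) = z^2 - 2z\delta + \delta^2 - 4 + 4\delta z
%                                    = z^2 - 4 + 2z\delta + \delta^2.
% The sign of the square root is fixed by requiring Im M > 0 at \delta = 0,
% which recovers M(z,0) = m_sc(z) = (-z + \sqrt{z^2 - 4})/2.
```

Setting δ = 0 term by term reproduces the unperturbed solution m_sc(z), so the perturbation enters only through the shift δ/2 and the argument of the square root, which is the source of the Lipschitz bound |M - m_sc| ≤ C_κ |δ|.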

3 Proof of Lemma 2.1

Let I_n = [nη, (n+1)η] and let K_0 be a sufficiently large number. We have [-K_0, K_0] ⊂ ∪_{n=-m}^m I_n with m ≤ CK_0/η. Denote by Ω the event

    Ω := { max_n N_{I_n} ≥ Nη(log N)^2 } ∪ { max_α |λ_α| ≥ K_0 },

where N_{I_n} = |{α : λ_α ∈ I_n}| is the number of eigenvalues in the interval I_n. From Theorem 2.1 and Lemma 7.4 of [5], the probability of Ω is bounded by

    P(Ω) ≤ e^{-c(log N)^2}

for some sufficiently small c > 0. Therefore, if P_b denotes the probability w.r.t. the variable b, we find

    P[ |X| ≥ ε ] ≤ e^{-c(log N)^2} + E[ 1_{Ω^c} P_b[ |X| ≥ ε ] ]
        ≤ e^{-c(log N)^2} + E[ 1_{Ω^c} P_b[ Re X ≥ ε/2 ] ] + E[ 1_{Ω^c} P_b[ Re X ≤ -ε/2 ] ] (3.1)
          + E[ 1_{Ω^c} P_b[ Im X ≥ ε/2 ] ] + E[ 1_{Ω^c} P_b[ Im X ≤ -ε/2 ] ].

The last four terms on the r.h.s. of the last equation can all be handled with similar arguments; we show, for example, how to bound the last term. For any T > 0, we have

    E[ 1_{Ω^c} P_b[ Im X ≤ -ε/2 ] ] ≤ e^{-Tε/2} E[ 1_{Ω^c} E_b e^{-T Im X} ], (3.2)

where E_b denotes the expectation w.r.t. the variable b. Using the fact that the distribution of the components of b satisfies the log-Sobolev inequality (1.4), it follows from the concentration inequality (Theorem 2.1 from [3]) that

    E_b e^{-T Im X} ≤ E_b exp( (C_sob T^2/2) |∇_b(Im X)|^2 ), (3.3)

where

    |∇_b(Im X)|^2 = Σ_k ( |∂_{Re b_k}(Im X)|^2 + |∂_{Im b_k}(Im X)|^2 ) = (4η^2/N^2) Σ_α ξ_α/|λ_α - z|^4 ≤ (4/(Nη)) Y.

Here we defined the random variable

    Y = (1/N) Σ_α ξ_α/|λ_α - z|^2. (3.4)

From (3.2), choosing T = 2(log N)^2 and using that Nη ≥ (log N)^8, we obtain

    E[ 1_{Ω^c} P_b[ Im X ≤ -ε/2 ] ] ≤ e^{-ε(log N)^2} E[ 1_{Ω^c} E_b exp( (8C_sob/(log N)^4) Y ) ]. (3.5)

Let ν = 8C_sob/(log N)^4. By the Hölder inequality, we can estimate

    E_b e^{νY} = E_b exp[ (ν/N) Σ_α ξ_α/|λ_α - z|^2 ] ≤ Π_α ( E_b exp[ (νc_α/(N|λ_α - z|^2)) ξ_α ] )^{1/c_α}, (3.6)

where Σ_α c_α^{-1} = 1. We choose the exponents c_α proportional to N|λ_α - z|^2; the normalization can be achieved using that max_n N_{I_n} ≤ Nη(log N)^2 on the set Ω^c, together with ν(log N)^3 = 8C_sob/log N. Notice that with this choice,

    νc_α/(N|λ_α - z|^2) ≤ 8C_sob/log N

is a small number. In the proof of Lemma 7.4 of [5] (see equation (7.3) of [5]) we showed that E_b e^{τξ_α} ≤ K with a universal constant K if τ is sufficiently small, depending on δ in (1.3). From (3.5), it follows that

    E[ 1_{Ω^c} P_b[ Im X ≤ -ε/2 ] ] ≤ K e^{-ε(log N)^2}. (3.7)

Since similar bounds hold for the other terms on the r.h.s. of (3.1) as well, this concludes the proof of the lemma.

4 Delocalization of eigenvectors

Here we prove Theorem 1.2; the argument follows the same line as in [5] (Proposition 5.3). Let η_1 = N^{-1}(log N)^9 and partition the interval [-2 + κ, 2 - κ] into n_0 = O(1/η_1) = O(N) intervals I_1, I_2, ..., I_{n_0} of length η_1. As before, let N_I = |{β : µ_β ∈ I}| denote the number of eigenvalues in I. By using (1.10) in Theorem 1.1, we have

    P{ min_n N_{I_n} ≤ εNη_1 } ≤ e^{-c(log N)^2}

if ε is sufficiently small (depending on κ). Suppose that µ ∈ I_n, and that Hv = µv. Consider the decomposition

    H = ( h   a* )
        ( a   B  ),   (4.1)

where a = (h_{1,2}, ..., h_{1,N}) and B is the (N-1)×(N-1) matrix obtained by removing the first row and first column from H. Let λ_α and u_α (for α = 1, 2, ..., N-1) denote the eigenvalues and the normalized eigenvectors of B. From the eigenvalue equation Hv = µv and from (4.1) we find that

    h v_1 + a·w = µ v_1, and a v_1 + B w = µ w, (4.2)

with w = (v_2, ..., v_N)^t. From these equations we obtain w = (µ - B)^{-1} a v_1, and thus

    ‖w‖^2 = |v_1|^2 a·(µ - B)^{-2} a.

Since ‖w‖^2 = 1 - |v_1|^2, we obtain

    1 = |v_1|^2 [ 1 + a·(µ - B)^{-2} a ] = |v_1|^2 [ 1 + (1/N) Σ_α ξ_α/(µ - λ_α)^2 ] ≥ |v_1|^2 (1/(4Nη_1^2)) Σ_{λ_α ∈ I_n} ξ_α, (4.3)

where in the second equality we set ξ_α = N|a·u_α|^2 and used the spectral representation of B. By the interlacing property of the eigenvalues of H and B, there exist at least N_{I_n} - 1 eigenvalues λ_α in I_n. Therefore, using that the components of any eigenvector are identically distributed, we have

    P( ∃ v with Hv = µv, ‖v‖ = 1, µ ∈ [-2 + κ, 2 - κ] and ‖v‖_∞ ≥ C(log N)^{9/2}/N^{1/2} )
        ≤ N n_0 sup_n P( ∃ v with Hv = µv, ‖v‖ = 1, µ ∈ I_n and |v_1|^2 ≥ C(log N)^9/N )
        ≤ const N^2 sup_n P( Σ_{λ_α ∈ I_n} ξ_α ≤ 4Nη_1/C )
        ≤ const N^2 sup_n P( Σ_{λ_α ∈ I_n} ξ_α ≤ 4Nη_1/C and N_{I_n} ≥ εNη_1 ) + const N^2 sup_n P( N_{I_n} ≤ εNη_1 ) (4.4)
        ≤ const N^2 e^{-c(log N)^9} + const N^2 e^{-c(log N)^2}
        ≤ e^{-c'(log N)^2},

by choosing C sufficiently large, depending on κ via ε. Here we used Corollary 2.4 of [5], which states that under condition C1) in (1.2) there exists a positive c such that for any δ small enough

    P( Σ_{α ∈ A} ξ_α ≤ δm ) ≤ e^{-cm} (4.5)

for all A ⊂ {1, ..., N} with cardinality |A| = m. We remark that by applying Lemma 4.1 from the Appendix instead of Lemma 2.3 in [5], the bound (4.5) also holds without condition C1) if the matrix elements are bounded random variables. It is clear that the boundedness assumption in Lemma 4.1 can be relaxed by performing an appropriate cutoff argument; we will not pursue this direction in this article.

Appendix: Removal of the assumption C1)

Jean Bourgain
School of Mathematics

Institute for Advanced Study, Princeton, NJ 08540, USA

The following lemma shows that the assumption C1) in Lemma 2.3 and its corollary in [5] can be removed.

Lemma 4.1 Suppose that z_1, ..., z_N are bounded, complex valued i.i.d. random variables with Ez_i = 0 and E|z_i|^2 = a > 0. Let P : C^N → C^N be a rank-m projection, and z = (z_1, ..., z_N). Then, if δ is small enough, there exists c > 0 such that

    P( ‖Pz‖^2 ≤ δm ) ≤ e^{-cm}.

Lemma 2.3 in [5] stated that the same conclusion holds under the condition C1), but it required no assumption on the boundedness of the random variables.

Proof. It is enough to prove that

    P( | ‖Pz‖^2 - am | > τm ) ≤ e^{-cτ^2 m} (4.6)

for all τ sufficiently small. Introduce the notation ‖X‖_q = [ E|X|^q ]^{1/q}. Since

    P( | ‖Pz‖^2 - am | > τm ) ≤ ‖ ‖Pz‖^2 - am ‖_q^q (τm)^{-q}, (4.7)

the bound (4.6) follows by showing that

    ‖ ‖Pz‖^2 - am ‖_q ≤ C √(qm) for all 1 ≤ q ≤ m (4.8)

(and then choosing q = c_0 τ^2 m with a small enough c_0). To prove (4.8), observe that (with the notation e_i = (0, ..., 0, 1, 0, ..., 0) for the standard basis of C^N)

    ‖Pz‖^2 = Σ_{i=1}^N |z_i|^2 ‖Pe_i‖^2 + Σ_{i≠j} z_i conj(z_j) (Pe_i · Pe_j)
           = am + Σ_{i=1}^N ( |z_i|^2 - E|z_i|^2 ) ‖Pe_i‖^2 + Σ_{i≠j} z_i conj(z_j) (Pe_i · Pe_j) (4.9)

and thus

    ‖ ‖Pz‖^2 - am ‖_q ≤ ‖ Σ_{i=1}^N ( |z_i|^2 - E|z_i|^2 ) ‖Pe_i‖^2 ‖_q + ‖ Σ_{i≠j} z_i conj(z_j) (Pe_i · Pe_j) ‖_q. (4.10)

To bound the first term, we use that for arbitrary i.i.d. random variables x_1, ..., x_N with Ex_j = 0 and E e^{δ|x_j|^2} < ∞ for some δ > 0, we have the bound

    ‖X‖_q ≤ C √q ‖X‖_2 (4.11)

for X = Σ_{j=1}^N a_j x_j, for arbitrary a_j ∈ C. The bound (4.11) is an extension of Khintchine's inequality and it can be proven as follows, using the representation

    ‖X‖_q^q = q ∫_0^∞ dy y^{q-1} P( |X| ≥ y ). (4.12)

Writing a_j = |a_j| e^{iθ_j}, θ_j ∈ R, and decomposing e^{iθ_j} x_j into real and imaginary parts, it is clearly sufficient to prove (4.11) for the case when a_j, x_j ∈ R are real and the x_j's are independent with Ex_j = 0 and E e^{δ|x_j|^2} < ∞. To bound the probability P(|X| ≥ y) we observe that

    P( X ≥ y ) ≤ e^{-ty} E e^{tX} = e^{-ty} Π_{j=1}^N E e^{t a_j x_j} ≤ e^{-ty} e^{C t^2 Σ_{j=1}^N a_j^2},

because E e^{τ x_j} ≤ e^{Cτ^2} from the moment assumptions on x_j, with a sufficiently large C depending on δ. Repeating this argument for -X, we find

    P( |X| ≥ y ) ≤ 2 e^{-ty} e^{C t^2 Σ_{j=1}^N a_j^2} ≤ 2 e^{-y^2/(2C Σ_{j=1}^N a_j^2)}

after optimizing in t. The estimate (4.11) follows then by plugging the last bound into (4.12) and computing the integral.

Applying (4.11) with x_i = |z_i|^2 - E|z_i|^2 (E e^{δ x_i^2} < ∞ follows from the assumption ‖z_i‖_∞ < ∞), the first term on the r.h.s. of (4.10) can be controlled by

    ‖ Σ_{i=1}^N ( |z_i|^2 - E|z_i|^2 ) ‖Pe_i‖^2 ‖_q ≤ C √q ( Σ_{i=1}^N ‖Pe_i‖^4 )^{1/2} ≤ C √q ( Σ_{i=1}^N ‖Pe_i‖^2 )^{1/2} = C √(qm). (4.13)

As for the second term on the r.h.s. of (4.10), we define the functions ξ_j(s), s ∈ [0, 1], j = 1, ..., N, by

    ξ_j(s) = 1 if s ∈ ∪_{k=0}^{2^{j-1}-1} [ 2k/2^j, (2k+1)/2^j ), and ξ_j(s) = 0 otherwise.

Since

    ∫_0^1 ds ξ_i(s)(1 - ξ_j(s)) = 1/4

for all i ≠ j, the second term on the r.h.s. of (4.10) can be estimated by

    ‖ Σ_{i≠j} z_i conj(z_j) (Pe_i · Pe_j) ‖_q ≤ 4 ∫_0^1 ds ‖ Σ_{i≠j} ξ_i(s)(1 - ξ_j(s)) z_i conj(z_j) (Pe_i · Pe_j) ‖_q. (4.14)

For fixed s ∈ [0, 1], set

    I(s) = { 1 ≤ i ≤ N : ξ_i(s) = 1 } and J(s) = {1, ..., N} \ I(s).

Then

    Σ_{i≠j} ξ_i(s)(1 - ξ_j(s)) z_i conj(z_j) (Pe_i · Pe_j) = Σ_{i ∈ I(s), j ∈ J(s)} z_i conj(z_j) (Pe_i · Pe_j) = Σ_{j ∈ J(s)} conj(z_j) ( Σ_{i ∈ I(s)} z_i Pe_i · e_j ).

Since by definition I(s) ∩ J(s) = ∅, the variables {z_i}_{i ∈ I(s)} and {z_j}_{j ∈ J(s)} are independent. Therefore, we can apply Khintchine's inequality (4.11) in the variables {z_j}_{j ∈ J(s)} (separating the real and imaginary parts) to conclude that

    ‖ Σ_{i≠j} ξ_i(s)(1 - ξ_j(s)) z_i conj(z_j) (Pe_i · Pe_j) ‖_q ≤ C √q ‖ ( Σ_{j ∈ J(s)} | Σ_{i ∈ I(s)} z_i (Pe_i · e_j) |^2 )^{1/2} ‖_q
        ≤ C √q ‖ ‖ Σ_{i ∈ I(s)} z_i Pe_i ‖ ‖_q ≤ C √q ‖ ‖Pz‖ ‖_q (4.15)

for every s ∈ [0, 1]. It follows from (4.14) that

    ‖ Σ_{i≠j} z_i conj(z_j) (Pe_i · Pe_j) ‖_q ≤ C √q ‖ ‖Pz‖ ‖_q.

Inserting the last equation and (4.13) into the r.h.s. of (4.10), it follows that

    ‖ ‖Pz‖^2 - am ‖_q ≤ C √q ( √m + ‖ ‖Pz‖ ‖_q ).

Since clearly

    ‖ ‖Pz‖ ‖_q ≤ ‖ ‖Pz‖^2 - am ‖_q^{1/2} + √(am),

the bound (4.8) follows immediately.

Acknowledgments: We thank the referee for very useful comments on earlier versions of this paper. We also thank J. Bourgain for the kind permission to include his result in the appendix.

References

[1] Anderson, G. W., Guionnet, A., Zeitouni, O.: Lecture notes on random matrices. Book in preparation.
[2] Bai, Z. D., Miao, B., Tsay, J.: Convergence rates of the spectral distributions of large Wigner matrices. Int. Math. J. 1 (2002), no. 1, 65-90.
[3] Bobkov, S. G., Götze, F.: Exponential integrability and transportation cost related to logarithmic Sobolev inequalities. J. Funct. Anal. 163 (1999), no. 1, 1-28.
[4] Deift, P.: Orthogonal polynomials and random matrices: a Riemann-Hilbert approach. Courant Lecture Notes in Mathematics 3, American Mathematical Society, Providence, RI, 1999.
[5] Erdős, L., Schlein, B., Yau, H.-T.: Semicircle law on short scales and delocalization of eigenvectors for Wigner random matrices. 2007, preprint. arXiv:0711.1730.
[6] Guionnet, A., Zeitouni, O.: Concentration of the spectral measure for large matrices. Electron. Comm. Probab. 5 (2000), Paper 14.

[7] Johansson, K.: Universality of the local spacing distribution in certain ensembles of Hermitian Wigner matrices. Comm. Math. Phys. 215 (2001), no. 3, 683-705.
[8] Khorunzhy, A.: On smoothed density of states for Wigner random matrices. Random Oper. Stoch. Equ. 5 (1997), no. 2, 147-162.
[9] Ledoux, M.: The concentration of measure phenomenon. Mathematical Surveys and Monographs 89, American Mathematical Society, Providence, RI, 2001.
[10] Quastel, J., Yau, H.-T.: Lattice gases, large deviations, and the incompressible Navier-Stokes equations. Ann. Math. 148 (1998), 51-108.