
Eigenvector Statistics of Sparse Random Matrices

Paul Bourgade (New York University), Jiaoyang Huang (Harvard University), Horng-Tzer Yau (Harvard University)

Abstract. We prove that the bulk eigenvectors of sparse random matrices, i.e. the adjacency matrices of Erdős-Rényi graphs or random regular graphs, are asymptotically jointly normal, provided the averaged degree increases with the size of the graphs. Our methodology follows [6], analyzing the eigenvector flow under Dyson Brownian motion, combined with an isotropic local law for the Green's function. As an auxiliary result, we prove that for the eigenvector flow of Dyson Brownian motion with general initial data, the eigenvectors are asymptotically jointly normal in the direction $q$ after time $\eta_* \ll t \ll r$, if in a window of size $r$ the initial density of states is bounded below and above down to the scale $\eta_*$, and the initial eigenvectors are delocalized in the direction $q$ down to the scale $\eta_*$.

1 Introduction

In this paper, we consider the following two models of sparse random matrices $H$ with sparsity $p = p(N)$:

1. (Erdős-Rényi graph model $G(N, p/N)$) $H := A/\sqrt{p(1 - p/N)}$, where $A$ is the adjacency matrix of the Erdős-Rényi graph on $N$ vertices, obtained by drawing an edge between each pair of vertices randomly and independently, with probability $p/N$.

2. ($p$-regular graph model $G_{N,p}$) $H := A/\sqrt{p}$, where $A$ is the adjacency matrix of the uniform random $p$-regular graph on $N$ vertices, i.e. a uniformly chosen symmetric matrix with entries in $\{0, 1\}$ such that all rows and columns sum to $p$ and all diagonal entries vanish.

Given a graph $G$ on $N$ vertices with adjacency matrix $A$, many interesting properties of the graph are revealed by the eigenvalues and eigenvectors of $A$. Such phenomena and their applications have been intensively investigated for over half a century.

The work of P. B. is partially supported by the NSF grant DMS. The work of H.-T. Y. is partially supported by the NSF grants DMS, DMS and a Simons Investigator award.
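For concreteness, the first model takes only a few lines to sample. The following sketch is our own illustration, not part of the paper; it assumes numpy, and the function name and parameters are our choices.

```python
# Illustrative sketch (not from the paper): sample G(N, p/N) and form
# H = A / sqrt(p(1 - p/N)) as in model 1 above.
import numpy as np

def erdos_renyi_H(N, p, rng):
    """Normalized adjacency matrix of the Erdős-Rényi graph G(N, p/N)."""
    upper = np.triu(rng.random((N, N)) < p / N, 1)   # i.i.d. edges, upper triangle
    A = (upper | upper.T).astype(float)              # symmetric 0/1 adjacency matrix
    return A / np.sqrt(p * (1 - p / N))

rng = np.random.default_rng(0)
H = erdos_renyi_H(2000, 25, rng)
eigvals, eigvecs = np.linalg.eigh(H)                 # columns of eigvecs are the u_i
```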

To mention some, we refer the readers to the books [7, 8] for a general discussion of spectral graph theory, the survey article [22] for the connection between eigenvalues and expansion properties of graphs, and the articles [9, 10, 28-31, 33-35] on the applications of eigenvalues and eigenvectors in various algorithms, e.g., combinatorial optimization, spectral partitioning and clustering.

We study the spectral properties of sparse random graphs from the random matrix theory point of view, i.e. the local eigenvalue statistics and the eigenvector statistics. It is expected that: i) the gap distribution for the bulk eigenvalues $\lambda_{i+1} - \lambda_i$ is universal, with density approximately given by the Wigner surmise; ii) the distribution of the second largest eigenvalue is given by the Tracy-Widom distribution (the distribution of the largest eigenvalue of the GOE); iii) the eigenvectors are asymptotically normal. For Wigner type random matrices, it was proved in a series of papers [5, 13-20, 25, 26, 37] for the bulk and [2, 32, 36] for the edge that the eigenvalue statistics are universal; it was proved in [6, 24, 38] that the eigenvectors are asymptotically normal. Sparser models are harder to analyze. The bulk universality for both Erdős-Rényi graphs and regular graphs in the regime $p \geq N^{\delta}$ was proved in [1, 2, 23]. The edge universality was only proved for Erdős-Rényi graphs, in the regime $p \geq N^{1/3}$, in [1, 2, 27]. Less was known about the distribution of eigenvectors. To our knowledge, only recently, in [1], Backhausz and Szegedy proved that the components of almost eigenvectors of $p$-regular graphs with fixed $p$ converge to a normal distribution in the weak topology. However, the proof depends heavily on the special structure of regular graphs and is hard to generalize to other models.

Let $H$ be the normalized adjacency matrix of $G(N, p/N)$ or $G_{N,p}$ in the sparse regime, i.e. $p = p(N) \ll N$. We denote its eigenvalues by $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_N$ and the corresponding normalized eigenvectors by $u_1, u_2, \ldots, u_N$. The main goal of this paper is to prove that the bulk eigenvectors of $H$ in the regime $p \geq N^{\delta}$ are asymptotically jointly normal. Compared with [1], our results give explicitly the variance of the limiting distribution, the asymptotic normality holds in any direction, and the argument does not depend on the special symmetry of the models.

Theorem 1.1. Fix arbitrarily small constants $\delta, \kappa > 0$. Let $H$ be the normalized adjacency matrix of the sparse Erdős-Rényi graph $G(N, p/N)$ with sparsity $N^{\delta} \leq p \leq N^{1/2}$, or the normalized adjacency matrix of the $p$-regular graph $G_{N,p}$ with sparsity $N^{\delta} \leq p \leq N^{2/3 - \delta}$. Fix a positive integer $n > 0$ and a polynomial $P$ of $n$ variables. Then for any unit vector $q \in \mathbb{R}^N$ such that $q \perp e$ (where $e = (1, 1, \ldots, 1)/\sqrt{N}$), and deterministic indices $i_1, i_2, \ldots, i_n \in [\kappa N, (1 - \kappa)N]$, there exists a constant $d > 0$ depending on $\delta$ such that

$\big| \mathbb{E}\big[P\big(N\langle q, u_{i_1}\rangle^2, N\langle q, u_{i_2}\rangle^2, \ldots, N\langle q, u_{i_n}\rangle^2\big)\big] - \mathbb{E}\big[P\big(\mathcal{N}_1^2, \mathcal{N}_2^2, \ldots, \mathcal{N}_n^2\big)\big] \big| \leq C N^{-d}$,  (1.1)

provided $N$ is large enough, where the $u_i$ are the eigenvectors of $H$ and the $\mathcal{N}_i$ are independent standard normal random variables.

In particular, Theorem 1.1 implies that the entries of the eigenvectors are asymptotically independent Gaussians. Indeed, for any fixed $l$ and deterministic $i \in [\kappa N, (1 - \kappa)N]$, $\alpha_1, \ldots, \alpha_l \in [1, N]$, possibly depending on $N$, we have $\sqrt{N}(u_i(\alpha_1), \ldots, u_i(\alpha_l)) \to (\mathcal{N}_1, \ldots, \mathcal{N}_l)$, a vector with independent normal entries (provided the sign of the first entry of $u_i$, say, is uniformly and independently chosen).
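A crude Monte Carlo experiment (our sketch, not part of the paper's argument; all parameters are arbitrary, and it reuses erdos_renyi_H from the sketch above) illustrates the content of Theorem 1.1: across independent samples, the rescaled projection $N\langle q, u_k\rangle^2$ of a bulk eigenvector should behave like $\mathcal{N}^2$, i.e. a $\chi^2_1$ variable with mean 1 and variance 2.

```python
# Monte Carlo illustration (ours) of Theorem 1.1 for the Erdős-Rényi model.
import numpy as np

rng = np.random.default_rng(2)
N, p, samples = 400, 20, 100
q = rng.normal(size=N)
q -= q.mean()                        # enforce q ⟂ e = (1,...,1)/sqrt(N)
q /= np.linalg.norm(q)

vals = []
for _ in range(samples):
    _, U = np.linalg.eigh(erdos_renyi_H(N, p, rng))
    vals.append(N * (q @ U[:, N // 2]) ** 2)    # bulk index k = N/2
vals = np.array(vals)
print(vals.mean(), vals.var())       # compare with 1 and 2 for chi^2_1
```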
The proof of Theorem 1.1 consists of three steps, analogous to the three-step strategy developed in a series of papers [6, 7, 20, 23] for proving bulk eigenvalue universality:

1. Establish the (isotropic) local semicircle law for sparse random matrices down to the optimal scale $(\log N)^C/N$.

2. Analyze the eigenvector flow of Dyson Brownian motion to derive the asymptotic normality of the eigenvectors of sparse random matrices with a small Gaussian component.

3. Prove by comparison that the eigenvector statistics of sparse random matrices are the same as those of matrices with a small Gaussian component.

For the first step, the local semicircle laws for sparse random matrices were established in [2] for Erdős-Rényi graphs and in [3] for $p$-regular graphs. For the third step, a robust comparison argument was developed in [23], and our case follows directly. The main content of this paper is the second step. We study the eigenvector flow of Dyson Brownian motion with general initial data. For any real deterministic matrix $H$, we define the following random matrix process, the Dyson Brownian motion:

$\mathrm{d}H_{ij}(t) = \mathrm{d}w_{ij}(t)/\sqrt{N}$,  (1.2)

where $W_t = (w_{ij}(t))_{1 \leq i,j \leq N}$ is symmetric, with $(w_{ij}(t))_{i \leq j}$ a family of independent Brownian motions of variance $(1 + \delta_{ij})t$. We denote $H_t = (h_{ij}(t))_{1 \leq i,j \leq N}$, so that $H_0 = H$ is our original matrix. We denote the eigenvalues of $H_t$ by $\lambda(t): \lambda_1(t) \leq \lambda_2(t) \leq \cdots \leq \lambda_N(t)$ and the corresponding eigenvectors by $u_1(t), u_2(t), \ldots, u_N(t)$, where we write the $j$-th entry of $u_i(t)$ as $u_{ij}(t)$.

Under some local regularity conditions (see Assumptions 1.3 and 1.4) on the initial matrix $H_0$, we first prove the isotropic local law for the Green's function of $H_t$. With it as input, combined with the rigidity estimates on eigenvalues from [26], we analyze the eigenvector moment flow of Dyson Brownian motion following the approach developed by the first and last author in [6]. We prove that the eigenvectors of $H_t$ corresponding to bulk eigenvalues are asymptotically normal after a short time. Our result can be viewed as a local version of [6, Theorem 7.1] with general initial data.

1.1 Preliminary notations

A fundamental quantity is the Stieltjes transform of the empirical eigenvalue distribution of $H_t$. We denote the resolvent of $H_t$ by $G(t; z) := (H_t - z)^{-1}$, and the Stieltjes transform by

$m_t(z) := \frac{1}{N}\mathrm{Tr}\, G(t, z) = \frac{1}{N}\sum_i \frac{1}{\lambda_i(t) - z}$,

for $z \in \mathbb{C}^+$, the upper half complex plane. Often we write $z$ as the sum of its real and imaginary parts, $z = E + \mathrm{i}\eta$ with $E = \mathrm{Re}[z]$, $\eta = \mathrm{Im}[z]$. We denote by $\rho_{fc,t}$ the free convolution of the empirical eigenvalue distribution of $H_0$, i.e. $\rho_0 = \frac{1}{N}\sum_i \delta_{\lambda_i(0)}$, with the semicircle law of variance $t$, and by $m_{fc,t}$ the Stieltjes transform of $\rho_{fc,t}$. The density $\rho_{fc,t}$ is analytic on its support for any $t > 0$, and the function $m_{fc,t}$ solves the self-consistent equation

$m_{fc,t}(z) = m_0(z + t m_{fc,t}(z)) = \frac{1}{N}\sum_i g_i(t, z), \quad g_i(t, z) := \frac{1}{\lambda_i(0) - z - t m_{fc,t}(z)}$,  (1.3)

where we refer to [4] for a detailed study of the free convolution with the semicircle law. For any $t \geq 0$, we denote the classical eigenvalues of $\rho_{fc,t}$ by $\gamma_i(t)$, given by

$\gamma_i(t) = \sup_x \Big\{ \int_x^{\infty} \rho_{fc,t}(x)\,\mathrm{d}x \geq \frac{i}{N} \Big\}, \quad i \in [1, N]$.  (1.4)
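Both (1.2) and the fixed-point equation (1.3) are easy to probe numerically. The sketch below is our illustration (all names are ours): it uses the equality in law $H_t = H_0 + \sqrt{t}\,W$ with $W$ a GOE matrix in place of running the dynamics, and solves (1.3) by naive fixed-point iteration, which converges for $\mathrm{Im}[z] > 0$, possibly slowly.

```python
# Hypothetical numerical companion to (1.2)-(1.3): compare the empirical
# Stieltjes transform m_t of H_t = H_0 + sqrt(t)*GOE with the free-convolution
# solution m_{fc,t} of the fixed-point equation m = m_0(z + t m).
import numpy as np

def m0(z, lam0):
    return np.mean(1.0 / (lam0 - z))

def m_fc(z, lam0, t, iters=500):
    """Naive fixed-point iteration for (1.3), started from m_0(z)."""
    m = m0(z, lam0)
    for _ in range(iters):
        m = m0(z + t * m, lam0)
    return m

rng = np.random.default_rng(1)
N, t = 1000, 0.1
lam0 = rng.uniform(-1, 1, size=N)            # spectrum of a diagonal H_0
W = rng.normal(size=(N, N)) / np.sqrt(N)
W = (W + W.T) / np.sqrt(2)                   # GOE: entry variance (1+delta_ij)/N
Ht = np.diag(lam0) + np.sqrt(t) * W

z = 0.0 + 0.05j                              # spectral parameter E + i*eta
lam_t = np.linalg.eigvalsh(Ht)
m_t = np.mean(1.0 / (lam_t - z))
print(abs(m_t - m_fc(z, lam0, t)))           # small when N*eta is large
```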

Throughout the paper we use the following notion of overwhelming probability.

Definition 1.2. We say that a family of events $F(u)$ indexed by some parameter(s) $u$ holds with overwhelming probability if, for any large $D > 0$ and $N \geq N_0(D, u)$ large enough,

$\mathbb{P}[F(u)] \geq 1 - N^{-D}$,  (1.5)

uniformly in $u$.

We use $C$ to denote a large universal constant and $c$ a small universal constant, which may depend on other universal constants, e.g., the constant $c$ in the control parameter $\psi$ defined in (1.6) and the constants $a$ and $b$ in Assumptions 1.3 and 1.4, and which may differ from line to line. We write $X \lesssim Y$, or $X = O(Y)$, if there exists a universal constant such that $X \leq CY$. We write $X \lesssim_k Y$, or $X = O_k(Y)$, if there exists a constant $C_k$, depending only on $k$ (and possibly on other universal constants), such that $X \leq C_k Y$. We write $X \asymp Y$ if there exists a small constant $c$ such that $cX \leq Y \leq X/c$.

Now we can state the assumptions on the initial matrix $H_0$. In Sections 2 and 3, we fix an arbitrarily small number $c > 0$ and define the control parameter

$\psi = N^c$.  (1.6)

We fix an energy level $E_0$, a radius $1/N \ll r \leq 1$, and a mesoscopic scale $1/N \ll \eta_* \ll r$, where $r$ and $\eta_*$ may depend on $N$. For example, the reader may keep $\eta_* = \psi/N$, $r = N^{-1/2}$ in mind. We will study the eigenvectors corresponding to bulk eigenvalues, meaning eigenvalues in the interval $[E_0 - r, E_0 + r]$. We show that after a short time, the projections of those bulk eigenvectors onto a unit vector $q$ are asymptotically normal.

The first assumption is the same as in [26]; it imposes regularity of the density of states of $H_0$ around $E_0$.

Assumption 1.3. We assume that there exists some large constant $a > 0$ such that:

1. The norm of $H_0$ is bounded, $\|H_0\| \leq N^a$.

2. The Stieltjes transform of $H_0$ is bounded below and above,

$a^{-1} \leq \mathrm{Im}[m_0(z)] \leq a$,  (1.7)

uniformly for any $z \in \{E + \mathrm{i}\eta : E \in [E_0 - r, E_0 + r], \eta_* \leq \eta \leq 1\}$.

Besides this information on the eigenvalues of the initial matrix $H_0$, we also need the following regularity assumption on its eigenvectors.

Assumption 1.4. We assume that for some unit vector $q$, there exists a small constant $b > 0$ such that

$|\langle q, G(0, z) q \rangle - m_0(z)| \leq N^{-b}$,  (1.8)

uniformly for any $z \in \{E + \mathrm{i}\eta : E \in [E_0 - r, E_0 + r], \eta_* \leq \eta \leq r\}$, where $m_0$ is the Stieltjes transform of $H_0$.
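Assumption 1.3(2) is straightforward to test for a concrete spectrum. A minimal check (ours; the window parameters and the sample spectrum are arbitrary stand-ins) scans $\mathrm{Im}[m_0(z)]$ over the relevant range of $E$ and $\eta$:

```python
# Quick numerical check (ours) of Assumption 1.3(2): scan Im[m_0(z)] over the
# window [E0 - r, E0 + r] and the scales eta in [eta_*, 1].
import numpy as np

rng = np.random.default_rng(6)
lam0 = rng.uniform(-1, 1, size=2000)       # stand-in for the spectrum of H_0
E0, r, eta_star = 0.0, 0.3, 1e-3
ims = [np.mean(1.0 / (lam0 - (E + 1j * eta))).imag
       for E in np.linspace(E0 - r, E0 + r, 9)
       for eta in np.geomspace(eta_star, 1.0, 7)]
print(min(ims), max(ims))                  # should stay within [1/a, a]
```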

1.2 Statement of Results

Let $E_0$ and $r$ be as in Assumption 1.3. For any $0 \leq \kappa < 1$, we denote

$I^r_\kappa(E_0) := [E_0 - (1 - \kappa)r, E_0 + (1 - \kappa)r]$,

and the spectral domain

$D_\kappa := \{ z = E + \mathrm{i}\eta : E \in I^r_\kappa(E_0), \psi^4/N \leq \eta \leq \kappa r \}$.  (1.9)

Theorem 1.5. Assume that the initial matrix $H_0$ satisfies Assumptions 1.3 and 1.4. Fix $\kappa > 0$, a positive integer $n > 0$ and a polynomial $P$ of $n$ variables. Then for any $\eta_* \ll t \ll r$ and unit vector $q \in \mathbb{R}^N$, there exists a constant $d > 0$ depending on $a, b, r, t$ such that

$\sup_{I: |I| = n,\ \forall k \in I:\ \lambda_k(t) \in I^r_{2\kappa}(E_0)} \big| \mathbb{E}\big[P\big((N\langle q, u_k(t)\rangle^2)_{k \in I}\big)\big] - \mathbb{E}\big[P\big((\mathcal{N}_j^2)_{j=1}^n\big)\big] \big| \leq C N^{-d}$,  (1.10)

provided $N$ is large enough, where the supremum is over all possible index sets $I$, and the $\mathcal{N}_j$ are independent standard normal random variables.

As a corollary, we have the following local quantum unique ergodicity statement for bulk eigenvectors.

Corollary 1.6. Assume that the initial matrix $H_0$ satisfies Assumption 1.3. We further assume that there exists a small constant $b$ such that

$|(H_0 - z)^{-1}_{ij} - m_0(z)\delta_{ij}| \leq N^{-b}, \quad m_0(z) = \frac{1}{N}\mathrm{Tr}(H_0 - z)^{-1}$,  (1.11)

uniformly for any $z \in \{E + \mathrm{i}\eta : E \in [E_0 - r, E_0 + r], \eta_* \leq \eta \leq r\}$. Then the following quantum unique ergodicity holds. Fix $\kappa > 0$. For any $\eta_* \ll t \ll r$ and $\varepsilon > 0$, there exists a constant $d > 0$ depending on $a, b, r, t$ such that

$\sup_{k: \lambda_k(t) \in I^r_{2\kappa}(E_0)} \mathbb{P}\Big( \frac{N}{\|a\|}\Big|\sum_i a_i u_{ki}^2\Big| > \varepsilon \Big) \leq \frac{C}{\varepsilon^2}\Big(N^{-d} + \frac{N}{\|a\|}\Big)$,  (1.12)

provided $N$ is large enough, where $a = (a_1, a_2, \ldots, a_N)$ is deterministic with $\sum_i a_i = 0$ and $\max_i |a_i| \leq 1$, and its norm is $\|a\| = \sum_i |a_i|$.

Acknowledgements. The authors thank Antti Knowles for pointing out an error in an early version of this paper.

2 Local Law

In this section, we prove the following isotropic local law for the resolvent of $H_t$. We write $H_0 = U_0 \Lambda_0 U_0^*$, where $\Lambda_0 = \mathrm{diag}\{\lambda_1(0), \ldots, \lambda_N(0)\}$ and $U_0$ is the orthogonal matrix of its eigenvectors. Theorem 2.1 states that $G(t, z)$ is well approximated by $U_0\,\mathrm{diag}\{g_1(t,z), g_2(t,z), \ldots, g_N(t,z)\}\,U_0^*$, where the $g_i$ are defined in (1.3). It implies that the Green's function becomes regular after adding a small Gaussian component.

Theorem 2.1. Under Assumption 1.3, fix $\kappa > 0$. Then for any $\eta_* \ll t \ll r$ and unit vector $q = (q_1, q_2, \ldots, q_N) \in \mathbb{R}^N$, uniformly for any $z \in D_\kappa$ (as in (1.9)), the following holds with overwhelming probability:

$\Big| \langle q, G(t, z) q \rangle - \sum_i \langle u_i(0), q \rangle^2 g_i(t, z) \Big| \leq \frac{\psi^2}{\sqrt{N\eta}}\, \mathrm{Im}\Big[\sum_i \langle u_i(0), q \rangle^2 g_i(t, z)\Big]$,  (2.1)

provided $N$ is large enough, where the $u_i(0)$ are the eigenvectors of $H_0$ and the $g_i$ are defined in (1.3).
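Theorem 2.1 can be sanity-checked numerically for a diagonal $H_0$ (so that $u_i(0) = e_i$). The sketch below is ours, with arbitrary parameters: it compares $\langle q, G(t,z)q\rangle$ against $\sum_i \langle u_i(0), q\rangle^2 g_i(t,z)$, with the $g_i$ computed from the fixed point of (1.3).

```python
# Numerical check (ours) of the isotropic local law (2.1) for H_t = H_0 + sqrt(t)*W.
import numpy as np

rng = np.random.default_rng(7)
N, t, z = 1000, 0.3, 0.2 + 0.05j
lam0 = np.sort(rng.uniform(-1, 1, N))      # H_0 diagonal, so u_i(0) = e_i
q = rng.normal(size=N); q /= np.linalg.norm(q)

m = np.mean(1 / (lam0 - z))                # iterate the fixed point of (1.3)
for _ in range(400):
    m = np.mean(1 / (lam0 - z - t * m))
g = 1 / (lam0 - z - t * m)

W = rng.normal(size=(N, N)) / np.sqrt(N); W = (W + W.T) / np.sqrt(2)
G = np.linalg.inv(np.diag(lam0) + np.sqrt(t) * W - z * np.eye(N))
print(q @ G @ q, np.sum(q ** 2 * g))       # close when N*eta is large
```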

2.1 Rigidity of Eigenvalues

In [26], the eigenvalues of $H_t$ were studied in detail under Assumption 1.3. In this section we recall some estimates on the locations of the eigenvalues from [26]. For the free convolved density $\rho_{fc,t}$, we have the following deterministic estimates on its Stieltjes transform and on the classical eigenvalue locations (as in (1.3) and (1.4)), from [26, Lemma 7.2].

Proposition 2.2. Under Assumption 1.3, fix $\kappa > 0$. Then for any $\eta_* \ll t \ll r$ and $N$ large enough, the following hold uniformly for $z \in \{E + \mathrm{i}\eta : E \in I^r_\kappa(E_0), 0 < \eta \leq \kappa r\}$. For the Stieltjes transform $m_{fc,t}$,

$C^{-1} \leq \mathrm{Im}[m_{fc,t}(z)] \leq C$,  (2.2)

$\frac{1}{N}\sum_i |g_i(t, z)| \leq C \log N$,  (2.3)

where $C$ is a constant depending on the constant $a$ in Assumption 1.3 and the $g_i(t, z)$ are as in (1.3). For the classical eigenvalue locations, uniformly for any index $i$ such that $\gamma_i(t) \in I^r_\kappa(E_0)$, we have

$|\partial_t \gamma_i(t)| \leq C \log N$.  (2.4)

Proof. (2.2) is the same as [26, (7.7), Lemma 7.2]. For (2.3), we denote $\tilde{E} + \mathrm{i}\tilde\eta := z + t m_{fc,t}(z)$ and divide the sum into the following dyadic regions:

$U_0 = \{i : |\lambda_i(0) - \tilde{E}| \leq \tilde\eta\}, \quad U_n = \{i : 2^{n-1}\tilde\eta < |\lambda_i(0) - \tilde{E}| \leq 2^n\tilde\eta\}, \quad 1 \leq n \leq \log_2(1/\tilde\eta)$.

For the eigenvalues which do not belong to $\cup_n U_n$, we have $|\lambda_i(0) - \tilde{E}| \gtrsim 1$. Since $\tilde\eta \asymp t + \eta \geq \eta_*$, we have

$|U_n| \leq \sum_{i \in U_n} \frac{2(2^n\tilde\eta)^2}{|\lambda_i(0) - \tilde{E} - \mathrm{i}2^n\tilde\eta|^2} \leq 2N\,\mathrm{Im}[m_0(\tilde{E} + \mathrm{i}2^n\tilde\eta)]\,2^n\tilde\eta \leq CN2^n\tilde\eta$.

Thus we can bound

$\frac{1}{N}\sum_i |g_i| \leq \frac{1}{N}\sum_{i \in U_0}\frac{1}{\tilde\eta} + \frac{1}{N}\sum_{n=1}^{\log_2(1/\tilde\eta)}\frac{|U_n|}{2^{n-1}\tilde\eta} + C \leq C\log N$.  (2.5)

Finally, for (2.4), we have $|\partial_t \gamma_i(t)| = |\mathrm{Re}[m_{fc,t}(\gamma_i(t))]| \leq C\log N$.

The following eigenvalue rigidity estimate for $H_t$ is from [26, Theorem 3.3].

Theorem 2.3. Under Assumption 1.3, fix $\kappa > 0$. Then for any $\eta_* \ll t \ll r$ and $N$ large enough, the following hold with overwhelming probability: for the Stieltjes transform,

$|m_t(z) - m_{fc,t}(z)| \leq \frac{\psi}{N\eta}$,  (2.6)

uniformly for $z \in D_\kappa$; and for the eigenvalues,

$|\lambda_i(t) - \gamma_i(t)| \leq \frac{\psi}{N}$,

uniformly for any index $i$ such that $\lambda_i(t) \in I^r_\kappa(E_0)$.
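Rigidity can be visualized numerically. The sketch below is ours and uses nothing from the paper beyond the definitions (1.3)-(1.4): it computes the classical locations $\gamma_i(t)$ as quantiles of $\rho_{fc,t}$, obtained from the fixed point of (1.3) on a grid, and compares them with the spectrum of one realization of $H_t = \mathrm{diag}(\lambda(0)) + \sqrt{t}\,W$.

```python
# Empirical illustration (ours) of the rigidity in Theorem 2.3.
import numpy as np

rng = np.random.default_rng(5)
N, t = 600, 0.2
lam0 = np.sort(rng.uniform(-1, 1, N))

E = np.linspace(-2.5, 2.5, 1500)
z = E + 1e-3j
m = (1.0 / (lam0[:, None] - z[None, :])).mean(axis=0)
for _ in range(200):                              # iterate (1.3) on the whole grid
    m = (1.0 / (lam0[:, None] - (z + t * m)[None, :])).mean(axis=0)

rho = np.maximum(m.imag / np.pi, 0)               # density of rho_{fc,t}
cdf = np.cumsum(rho + 1e-12) * (E[1] - E[0])
gamma = np.interp((np.arange(1, N + 1) - 0.5) / N, cdf / cdf[-1], E)  # cf. (1.4)

W = rng.normal(size=(N, N)) / np.sqrt(N); W = (W + W.T) / np.sqrt(2)
lam_t = np.linalg.eigvalsh(np.diag(lam0) + np.sqrt(t) * W)
print(np.max(np.abs(lam_t - gamma)[N // 4: 3 * N // 4]))  # small in the bulk
```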

2.2 Isotropic Local Law

Before we start proving Theorem 2.1, we need some reductions. We write $H_0 = U_0\Lambda_0 U_0^*$, where $\Lambda_0 = \mathrm{diag}\{\lambda_1(0), \ldots, \lambda_N(0)\}$ and $U_0$ is the orthogonal matrix of its eigenvectors. Since $H_t \stackrel{d}{=} H_0 + \sqrt{t}\,W$, where $W$ is a standard Gaussian orthogonal ensemble, i.e. $W = (w_{ij})_{1\leq i,j\leq N}$ is symmetric with $(w_{ij})_{i \leq j}$ a family of independent Gaussian random variables of variance $(1 + \delta_{ij})/N$, we have the following equality in law:

$\langle q, G(t, z)q \rangle = \langle q, (U_0\Lambda_0 U_0^* + \sqrt{t}W - z)^{-1} q \rangle = \langle q, U_0(\Lambda_0 + \sqrt{t}\,U_0^* W U_0 - z)^{-1}U_0^* q \rangle \stackrel{d}{=} \langle U_0^* q, (\Lambda_0 + \sqrt{t}W - z)^{-1} U_0^* q \rangle$.

Therefore, Theorem 2.1 can be reduced to the case $H_t = \Lambda_0 + \sqrt{t}W$:

$\Big| \langle q, G(t, z)q \rangle - \sum_i q_i^2 g_i(t, z) \Big| \leq \frac{\psi^2}{\sqrt{N\eta}}\,\mathrm{Im}\Big[\sum_i q_i^2 g_i(t, z)\Big]$.  (2.7)

The entrywise local law for the matrix ensemble $\Lambda_0 + \sqrt{t}W$ (the so-called deformed Gaussian orthogonal ensemble) was studied in [26]. In the following we recall some estimates on the entrywise local law from [26, Theorem 3.3]. To state them we need some notation. For any index set $T \subset [1, N]$, we denote by $[H_t]_{i,j \notin T}$ the minor of $H_t$ obtained by removing the columns and rows indexed by $T$, and its resolvent by $G^{(T)}(t, z) := ([H_t]_{i,j \notin T} - z)^{-1}$. Recall the definition of $g_i$ from (1.3):

$g_i(t, z) = \frac{1}{\lambda_i(0) - z - t m_{fc,t}(z)}$.  (2.8)

For simplicity of notation, when the context is clear we may write $g_i(t, z)$ as $g_i$. Roughly speaking, the following theorem states that the resolvent $G(t, z)$ is close to the diagonal matrix $\mathrm{diag}\{g_1, g_2, \ldots, g_N\}$.

Theorem 2.4. Suppose the initial matrix $H_0 = \mathrm{diag}\{\lambda_1(0), \lambda_2(0), \ldots, \lambda_N(0)\}$ satisfies Assumption 1.3, and fix $\kappa > 0$. Then for any $\eta_* \ll t \ll r$ and $N$ large enough, the following hold with overwhelming probability, uniformly for any $z \in D_\kappa$. For the diagonal resolvent entries,

$|G^{(T)}_{ii}(t, z) - g_i(t, z)| \leq \psi\sqrt{\frac{t}{N\eta}}\,|g_i(t, z)|^2$,  (2.9)

and for the off-diagonal resolvent entries,

$|G^{(T)}_{ij}(t, z)| \leq \frac{\psi}{\sqrt{N\eta}}\min\{|g_i(t, z)|, |g_j(t, z)|\} \leq \frac{\psi}{\sqrt{N\eta}}\big(|g_i(t, z)||g_j(t, z)|\big)^{1/2}$,  (2.10)

where $T$ is any index set of size $|T| \leq \log N$.

Proof of Theorem 2.1. By the discussion above, we can assume that $H_0 = \Lambda_0$ is diagonal and take $H_t = \Lambda_0 + \sqrt{t}W$, where $W$ is a standard Gaussian orthogonal ensemble. The quadratic form in (2.1) can be written as a sum of diagonal and off-diagonal terms:

$\langle q, G(t, z)q \rangle = \sum_i G_{ii}q_i^2 + \sum_{i \neq j} G_{ij}q_iq_j$,

where $q = (q_1, q_2, \ldots, q_N)$. The proof consists of two parts. The first part is easy: we prove that the leading-order term is the sum of the diagonal terms. The second part is more involved: we show that the sum of the off-diagonal terms is negligible, by the moment method.

For the diagonal terms, from (2.9) in Theorem 2.4 and (2.32) in Proposition 2.8, with overwhelming probability we have

$\Big|\sum_i G_{ii}q_i^2 - \sum_i g_iq_i^2\Big| \leq \psi\sqrt{\frac{t}{N\eta}}\sum_i |g_i|^2q_i^2 \lesssim \frac{\psi}{\sqrt{N\eta}}\,\mathrm{Im}\Big[\sum_i q_i^2g_i\Big]$.  (2.11)

For the second part, we prove that for any integer $k > 0$, uniformly for $z \in D_\kappa$,

$\mathbb{E}\big[|Z|^{2k}\big] \lesssim_k Y^{2k}, \quad Z = \sum_{i \neq j} G_{ij}q_iq_j, \quad Y = \frac{\psi\log N}{\sqrt{N\eta}}\,\mathrm{Im}\Big[\sum_i q_i^2g_i\Big]$,  (2.12)

where the implicit constant depends only on $k$. It then follows from the Markov inequality that $|Z| \leq \psi^2\,\mathrm{Im}[\sum_i q_i^2g_i]/\sqrt{N\eta}$ holds with overwhelming probability.

By Assumption 1.3, we have the following trivial lower bound for $\mathrm{Im}[\sum_i q_i^2g_i]$:

$\mathrm{Im}\Big[\sum_i q_i^2g_i\Big] = \sum_i \frac{(\eta + t\,\mathrm{Im}[m_{fc,t}(z)])\,q_i^2}{|\lambda_i(0) - z - tm_{fc,t}(z)|^2} \geq \frac{\eta}{2N^{2a}}$.  (2.13)

We expand $\mathbb{E}[|Z|^{2k}]$ and introduce the shorthand notation $X_{b_{2i-1}b_{2i}} := G_{b_{2i-1}b_{2i}}$ for $1 \leq i \leq k$, and $X_{b_{2i-1}b_{2i}} := \bar{G}_{b_{2i-1}b_{2i}}$ for $k+1 \leq i \leq 2k$:

$\mathbb{E}\big[|Z|^{2k}\big] = \sum_{\mathbf{b}} q_{b_1}q_{b_2}\cdots q_{b_{4k}}\,\mathbb{E}[X_{b_1b_2}X_{b_3b_4}\cdots X_{b_{4k-1}b_{4k}}]$,  (2.14)

where $\mathbf{b} = (b_1, b_2, \ldots, b_{4k})$ and the sum $\sum_{\mathbf{b}}$ is over all $\mathbf{b}$ such that $b_{2i-1} \neq b_{2i}$ for $1 \leq i \leq 2k$. To obtain efficient control on $\mathbb{E}[X_{b_1b_2}X_{b_3b_4}\cdots X_{b_{4k-1}b_{4k}}]$, we need to understand the correlations between the off-diagonal resolvent entries $G_{ij}$, $i \neq j$. Heuristically, $G_{ij}$ depends mainly on the matrix entry $h_{ij}$, depends weakly on the matrix entries in the same row and column, and its dependence on the rest of the matrix $H$ is negligible. Therefore the correlations of $G_{ij}$ and $G_{mn}$ are negligible if $\{i, j\} \cap \{m, n\} = \emptyset$. In the rest of this section, we make this heuristic rigorous.

We denote the index set $T = \{b_1, b_2, \ldots, b_{4k-1}, b_{4k}\}$. Recall the Schur complement formula: the upper-left block of

$\begin{pmatrix} A - z & B^* \\ B & C - z \end{pmatrix}^{-1}$ is $\big(A - z - B^*(C - z)^{-1}B\big)^{-1}$,

where $A$, $B$ and $C$ are block matrices.
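The Schur complement identity is concrete enough to verify on random input. The following sanity check is ours (arbitrary sizes and index set), using the same block conventions as above:

```python
# Numerical check (ours) of the Schur-complement resolvent identity: the T-block
# of (M - z)^{-1} equals (A - z - B^T (C - z)^{-1} B)^{-1}, with A = M[T,T],
# C the minor with T removed, and B the coupling block.
import numpy as np

rng = np.random.default_rng(3)
N = 50
M = rng.normal(size=(N, N)); M = (M + M.T) / 2
z = 0.1 + 0.2j
T = np.array([0, 3, 7]); S = np.setdiff1d(np.arange(N), T)

G = np.linalg.inv(M - z * np.eye(N))
A = M[np.ix_(T, T)]; B = M[np.ix_(S, T)]; C = M[np.ix_(S, S)]
lhs = G[np.ix_(T, T)]
rhs = np.linalg.inv(A - z * np.eye(len(T))
                    - B.T @ np.linalg.inv(C - z * np.eye(len(S))) @ B)
print(np.max(np.abs(lhs - rhs)))   # agreement to machine precision
```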

We take $A = [H_t]_{i,j \in T}$, $B = [H_t]_{i \notin T, j \in T}$ and $C = [H_t]_{i,j \notin T}$, where $[H_t]_{i,j \in T}$ is the submatrix of $H_t$ with row and column indices $i, j \in T$, and $[H_t]_{i \notin T, j \in T}$ and $[H_t]_{i,j \notin T}$ are defined analogously. Recall that $G^{(T)}(t, z)$ is the resolvent of the submatrix $[H_t]_{i,j \notin T}$ and $m^{(T)}_t(z) = \frac{1}{N}\mathrm{Tr}\,G^{(T)}$ is its Stieltjes transform. The Schur complement formula gives the resolvent identity

$[G]_{i,j \in T} = \big([H_t]_{i,j \in T} - z - [H_t]_{i \in T, j \notin T}\,G^{(T)}\,[H_t]_{i \notin T, j \in T}\big)^{-1} = \big([\Lambda_0]_{i,j \in T} + \sqrt{t}\,[W]_{i,j \in T} - z - t\,[W]_{i \in T, j \notin T}\,G^{(T)}\,[W]_{i \notin T, j \in T}\big)^{-1} =: (D(z) - E(z))^{-1}$,

where $D(z)$ and $E(z)$ are two $|T| \times |T|$ matrices, which depend on the index set $T$:

$D = [\Lambda_0]_{i,j \in T} - z - tm_{fc,t}, \quad E = E^{(1)} + E^{(2)} + E^{(3)}$,
$E^{(1)} = t\big(m^{(T)}_t - m_{fc,t}\big), \quad E^{(2)} = -\sqrt{t}\,[W]_{i,j \in T}, \quad E^{(3)} = t\big([W]_{i \in T, j \notin T}\,G^{(T)}\,[W]_{i \notin T, j \in T} - m^{(T)}_t\big)$.  (2.15)

With overwhelming probability, uniformly for any $z \in D_\kappa$, the error term $E(z)$ is much smaller than $D(z)$ in the sense of the matrix norm. Indeed, for $E^{(1)}$: by (2.6) and the deterministic estimate $|m_t - m^{(T)}_t| \lesssim |T|/(N\eta)$, which follows from the interlacing of eigenvalues, we have $t|m^{(T)}_t - m_{fc,t}| \lesssim \psi t/(N\eta)$ with overwhelming probability. For $E^{(2)}$: with overwhelming probability its entries are uniformly bounded by $\psi(t/N)^{1/2}$. For $E^{(3)}$: with overwhelming probability,

$|E^{(3)}_{mn}| = t\Big|\sum_{ij}\Big(w_{mi}w_{nj} - \frac{\delta_{ij}\delta_{mn}}{N}\Big)G^{(T)}_{ij}\Big| \leq \psi t\Big(\frac{1}{N^2}\sum_{ij}|G^{(T)}_{ij}|^2\Big)^{1/2} = \psi t\Big(\frac{\mathrm{Im}[m^{(T)}_t]}{N\eta}\Big)^{1/2} \lesssim \frac{\psi t}{\sqrt{N\eta}}$,

where the first inequality follows from the large deviation estimate [20, Appendix B], and the second bound uses (2.6). Since $E(z)$ is a $|T| \times |T|$ matrix with $|T| \leq 4k$ and, with overwhelming probability, uniformly bounded entries, its norm satisfies $\|E(z)\| \lesssim_k \psi(t + \eta)/\sqrt{N\eta}$. For $z \in D_\kappa$ we have $\mathrm{Im}[z + tm_{fc,t}(z)] \gtrsim \eta + t$, which implies $|D_{ii}(z)| \gtrsim \eta + t$ for all $i \in T$. As a result, there exists a constant $C_k$, depending only on $k$, such that uniformly for any $z \in D_\kappa$,

$\|E(z)\| \leq C_k\frac{\psi}{\sqrt{N\eta}}\min_{i \in T}|D_{ii}(z)|$  (2.16)

with overwhelming probability. We define the event $A$ on which (2.16) holds. Since it holds with overwhelming probability, for sufficiently large $N$ we can assume

$\mathbb{P}(A^c) \leq N^{-(4a+6)k}$.  (2.17)

By Taylor expansion, on the event $A$ we have

$[G]_{i,j \in T} = (D - E)^{-1} = \sum_{l=0}^{f-1} D^{-1}(ED^{-1})^l + (D - E)^{-1}(ED^{-1})^f$,

where $f$ is a large number which we will choose later. In the rest of the proof, we denote

$G^{(l)} := D^{-1}(ED^{-1})^l, \quad 0 \leq l \leq f-1, \qquad G^{(\infty)} := (D - E)^{-1}(ED^{-1})^f$.

For $1 \leq l \leq f-1$ or $l = \infty$, we define $X^{(l)}_{b_{2i-1}b_{2i}} := G^{(l)}_{b_{2i-1}b_{2i}}$ for $1 \leq i \leq k$, and $X^{(l)}_{b_{2i-1}b_{2i}} := \bar{G}^{(l)}_{b_{2i-1}b_{2i}}$ for $k+1 \leq i \leq 2k$. We remark that $G^{(l)}$ and $X^{(l)}$ implicitly depend on the index set $T$. With these notations, we have

$X_{b_{2i-1}b_{2i}} = \sum_{l=1}^{f-1} X^{(l)}_{b_{2i-1}b_{2i}} + X^{(\infty)}_{b_{2i-1}b_{2i}}, \quad 1 \leq i \leq 2k$,

where we used that $b_{2i-1} \neq b_{2i}$ and that $D^{-1} = \mathrm{diag}\{g_i\}_{i \in T}$ is diagonal, so the $l = 0$ term $X^{(0)}_{b_{2i-1}b_{2i}}$ vanishes.

On the event $A$, $\|(D - E)^{-1}\| \lesssim_k 1/\eta$ and $\|ED^{-1}\| \lesssim_k \psi/\sqrt{N\eta}$; together these imply

$|X^{(\infty)}_{b_{2i-1}b_{2i}}| \lesssim_k \frac{1}{\eta}\Big(\frac{\psi}{\sqrt{N\eta}}\Big)^f$.

In the following we show that once $f$ is taken sufficiently large, the terms $X^{(\infty)}_{b_{2i-1}b_{2i}}$ are negligible and do not contribute to (2.12). Since the $X_{b_{2i-1}b_{2i}}$ are all uniformly bounded by $1/\eta$, and the sum $\sum_i |q_i|$ is trivially bounded by $N^{1/2}$, we have

$\mathbb{E}\big[|Z|^{2k}\big] = \sum_{\mathbf{b}} q_{b_1}q_{b_2}\cdots q_{b_{4k}}\,\mathbb{E}[X_{b_1b_2}X_{b_3b_4}\cdots X_{b_{4k-1}b_{4k}}\mathbf{1}_A] + O\big(N^{2k}\eta^{-2k}\,\mathbb{P}(A^c)\big)$.  (2.18)

By our choice of the set $A$, i.e. (2.17), combined with the estimate (2.13), we have $N^{2k}\eta^{-2k}\,\mathbb{P}(A^c) \leq (N^{2a+2}/\eta)^{2k}N^{-(4a+6)k} \leq Y^{2k}$. Therefore,

$\mathbb{E}\big[|Z|^{2k}\big] = \sum_{\mathbf{b}} q_{b_1}q_{b_2}\cdots q_{b_{4k}}\,\mathbb{E}[X_{b_1b_2}X_{b_3b_4}\cdots X_{b_{4k-1}b_{4k}}\mathbf{1}_A] + O\big(Y^{2k}\big)$,  (2.19)

where $Y$ is as in (2.12). We separate the leading term of the product of the $X_{b_{2i-1}b_{2i}}$ as

$X_{b_1b_2}X_{b_3b_4}\cdots X_{b_{4k-1}b_{4k}} = \prod_{i=1}^{2k}\Big(\sum_{l=1}^{f-1}X^{(l)}_{b_{2i-1}b_{2i}}\Big) + \sum_{j=1}^{2k}\Big(\prod_{i=1}^{j-1}X_{b_{2i-1}b_{2i}}\Big)X^{(\infty)}_{b_{2j-1}b_{2j}}\Big(\prod_{i=j+1}^{2k}\sum_{l=1}^{f-1}X^{(l)}_{b_{2i-1}b_{2i}}\Big)$.  (2.20)

If we take $f = 4k(a+1)/c$, then on the event $A$ the second term on the right-hand side of (2.20) is bounded:

$\Big|\sum_{j=1}^{2k}\Big(\prod_{i=1}^{j-1}X_{b_{2i-1}b_{2i}}\Big)X^{(\infty)}_{b_{2j-1}b_{2j}}\Big(\prod_{i=j+1}^{2k}\sum_{l=1}^{f-1}X^{(l)}_{b_{2i-1}b_{2i}}\Big)\Big| \lesssim_k \frac{1}{\eta^{2k}}\Big(\frac{\psi}{\sqrt{N\eta}}\Big)^f \leq N^{-4k}Y^{2k}$,

where in the last inequality we used $\psi = N^c$ (as in (1.6)) and $\eta \geq \psi^4/N$, since $z \in D_\kappa$ (as in (1.9)). Combined with (2.19), this leads to

$\mathbb{E}\big[|Z|^{2k}\big] = \sum_{l_1,\ldots,l_{2k} \leq f-1}\sum_{\mathbf{b}} q_{b_1}q_{b_2}\cdots q_{b_{4k}}\,\mathbb{E}\big[X^{(l_1)}_{b_1b_2}X^{(l_2)}_{b_3b_4}\cdots X^{(l_{2k})}_{b_{4k-1}b_{4k}}\mathbf{1}_A\big] + O\big(N^{-4k}Y^{2k}\big)$.  (2.21)

By the Cauchy-Schwarz inequality, we have

$\big|\mathbb{E}\big[X^{(l_1)}_{b_1b_2}\cdots X^{(l_{2k})}_{b_{4k-1}b_{4k}}\mathbf{1}_A\big]\big| \leq \big|\mathbb{E}\big[X^{(l_1)}_{b_1b_2}\cdots X^{(l_{2k})}_{b_{4k-1}b_{4k}}\big]\big| + \mathbb{E}\big[\big|X^{(l_1)}_{b_1b_2}\cdots X^{(l_{2k})}_{b_{4k-1}b_{4k}}\big|^2\big]^{1/2}\,\mathbb{P}[A^c]^{1/2}$.  (2.22)
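The truncated resolvent expansion introduced above is itself easy to test numerically. The sketch below is our illustration with toy matrices: it sums the first $f$ terms $D^{-1}(ED^{-1})^l$ and checks that the remainder $(D-E)^{-1}(ED^{-1})^f$ is tiny once $\|ED^{-1}\| < 1$.

```python
# Sketch (ours) of the truncated expansion
# (D - E)^{-1} = sum_{l<f} D^{-1}(E D^{-1})^l + (D - E)^{-1}(E D^{-1})^f.
import numpy as np

rng = np.random.default_rng(4)
n, f = 6, 12
D = np.diag(1.0 + rng.random(n)) * (1 + 0.5j)   # diagonal, well-separated from 0
E = 0.05 * rng.normal(size=(n, n))              # small perturbation

Dinv = np.linalg.inv(D)
term, total = Dinv, np.zeros((n, n), dtype=complex)
for _ in range(f):
    total += term
    term = term @ E @ Dinv                      # next order D^{-1}(E D^{-1})^{l+1}
print(np.max(np.abs(total - np.linalg.inv(D - E))))   # remainder after f terms
```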

In the following we bound the first term on the right-hand side of (2.22); the second term can be treated in exactly the same way. By the definition of the $X^{(l_i)}_{b_{2i-1}b_{2i}}$, we have

$\mathbb{E}\big[X^{(l_1)}_{b_1b_2}\cdots X^{(l_{2k})}_{b_{4k-1}b_{4k}}\big] = \sum_{\mathbf{a}:\,\mathbf{b}\subset\mathbf{a}}\mathbb{E}\Big[\prod_{i=1}^{2k}\tilde{g}_{a^i_1}\tilde{E}_{a^i_1a^i_2}\tilde{g}_{a^i_2}\tilde{E}_{a^i_2a^i_3}\cdots\tilde{E}_{a^i_{l_i}a^i_{l_i+1}}\tilde{g}_{a^i_{l_i+1}}\Big]$,  (2.23)

where $\mathbf{a}$ represents arrays $a^i_j \in T = \{b_1, b_2, \ldots, b_{4k}\}$ with indices $1 \leq i \leq 2k$ and $1 \leq j \leq l_i+1$; the sum is over all possible arrays $\mathbf{a}$ containing $\mathbf{b}$, denoted by $\mathbf{b} \subset \mathbf{a}$, in the sense that $a^i_1 = b_{2i-1}$ and $a^i_{l_i+1} = b_{2i}$ for $1 \leq i \leq 2k$. For the tilde notation, $\tilde{g}_{a^i_j} := g_{a^i_j}$ and $\tilde{E}_{a^i_ja^i_{j+1}} := E_{a^i_ja^i_{j+1}}$ for $1 \leq i \leq k$, while $\tilde{g}_{a^i_j} := \bar{g}_{a^i_j}$ and $\tilde{E}_{a^i_ja^i_{j+1}} := \bar{E}_{a^i_ja^i_{j+1}}$ for $k+1 \leq i \leq 2k$.

Since by definition the $g_i$ are deterministic, we can separate the deterministic and random parts of (2.23):

$\Big|\mathbb{E}\Big[\prod_{i=1}^{2k}\tilde{g}_{a^i_1}\tilde{E}_{a^i_1a^i_2}\tilde{g}_{a^i_2}\cdots\tilde{E}_{a^i_{l_i}a^i_{l_i+1}}\tilde{g}_{a^i_{l_i+1}}\Big]\Big| \leq \prod_{i=1}^{2k}\prod_{j=1}^{l_i+1}|g_{a^i_j}|\;\Big|\mathbb{E}\Big[\prod_{i=1}^{2k}\prod_{j=1}^{l_i}\tilde{E}_{a^i_ja^i_{j+1}}\Big]\Big|$.  (2.24)

For the control of the expectation of the product of the $\tilde{E}_{ij}$, we have the following proposition, whose proof we postpone to the next section.

Proposition 2.5. For any indices $b_1, b_2, \ldots, b_{2l} \in T$, we have

$\Big|\mathbb{E}\big[\tilde{E}_{b_1b_2}\tilde{E}_{b_3b_4}\cdots\tilde{E}_{b_{2l-1}b_{2l}}\big]\Big| \lesssim_l \frac{\big(\psi(t+\eta)\log N\big)^l}{(N\eta)^{l/2}}\,\chi(b_1, b_2, \ldots, b_{2l})$,  (2.25)

where $\chi$ is the indicator function such that $\chi = 1$ if every number in the array $(b_1, b_2, \ldots, b_{2l})$ occurs an even number of times, and $\chi = 0$ otherwise.

Notice that $\chi\big((a^i_j, a^i_{j+1})_{1\leq i\leq 2k,\,1\leq j\leq l_i}\big) = \chi\big((a^i_1, a^i_{l_i+1})_{1\leq i\leq 2k}\big) = \chi(\mathbf{b})$. With Proposition 2.5, we can bound (2.24) as

$\Big|\mathbb{E}\Big[\prod_{i=1}^{2k}\tilde{g}_{a^i_1}\tilde{E}_{a^i_1a^i_2}\tilde{g}_{a^i_2}\cdots\tilde{E}_{a^i_{l_i}a^i_{l_i+1}}\tilde{g}_{a^i_{l_i+1}}\Big]\Big| \lesssim_k \prod_{i=1}^{2k}\Big(\frac{\psi(t+\eta)\log N}{(N\eta)^{1/2}}\Big)^{l_i}\chi(\mathbf{b})\prod_{i=1}^{2k}\prod_{j=1}^{l_i+1}|g_{a^i_j}|$,  (2.26)

where we used the fact that $\sum l_i \leq 2k(f-1) \leq 8k^2(a+1)/c$, so the implicit constant depends only on $k$. Combining (2.23), (2.24) and (2.26) together,

$\Big|\sum_{\mathbf{b}}q_{b_1}q_{b_2}\cdots q_{b_{4k}}\,\mathbb{E}\big[X^{(l_1)}_{b_1b_2}X^{(l_2)}_{b_3b_4}\cdots X^{(l_{2k})}_{b_{4k-1}b_{4k}}\big]\Big| \lesssim_k \prod_{i=1}^{2k}\Big(\frac{\psi(t+\eta)\log N}{(N\eta)^{1/2}}\Big)^{l_i}\sum_{\mathbf{b}}\sum_{\mathbf{a}:\,\mathbf{b}\subset\mathbf{a}}\chi(\mathbf{b})\prod_{i=1}^{2k}|q_{a^i_1}q_{a^i_{l_i+1}}|\prod_{i=1}^{2k}\prod_{j=1}^{l_i+1}|g_{a^i_j}|$.  (2.27)

Given $\mathbf{b}$, the sum $\sum_{\mathbf{a}:\,\mathbf{b}\subset\mathbf{a}}$ is over all possible arrays $\mathbf{a}$ such that $a^i_j \in T = \{b_1, b_2, \ldots, b_{4k}\}$ for $1 \leq i \leq 2k$, $1 \leq j \leq l_i+1$, with $a^i_1 = b_{2i-1}$ and $a^i_{l_i+1} = b_{2i}$ for $1 \leq i \leq 2k$. Any array $\{a^i_j\}_{1\leq i\leq 2k,\,1\leq j\leq l_i+1}$ induces a partition $\mathcal{P}$ of its index set $\{(i, j): 1 \leq i \leq 2k, 1 \leq j \leq l_i+1\}$, such that $(i, j)$ and $(i', j')$ are in the same block if and only if $a^i_j = a^{i'}_{j'}$. For any array $\mathbf{a}$ with $\mathbf{b} \subset \mathbf{a}$ and $\chi(\mathbf{b}) = 1$ (as in Proposition 2.5), we denote the frequency representation of the array $(b_1, b_2, \ldots, b_{4k}) = (a^1_1, a^1_{l_1+1}, \ldots, a^{2k}_1, a^{2k}_{l_{2k}+1})$ by $\gamma_1^{d_1}\gamma_2^{d_2}\cdots\gamma_n^{d_n}$, where $d_1, d_2, \ldots, d_n$ are all even and $n = |T|$. Notice that $\sum d_i = 4k$ counts the total number of entries. We also denote the frequency representation of $((a^i_j)_{1\leq j\leq l_i+1})_{1\leq i\leq 2k}$ by $\gamma_1^{d_1+r_1}\gamma_2^{d_2+r_2}\cdots\gamma_n^{d_n+r_n}$, where $r_i \geq 0$; similarly, $\sum(d_i + r_i)$ counts the total number of entries. We summarize here the relations between the $d_i$, $r_i$ and $l_i$, which will be used later:

$\sum d_i = 4k, \quad 2k + \sum r_i = \sum l_i$.

Example 2.6. Take $k = 3$, $\mathbf{b} = (1, 2, 2, 3, 4, 5, 3, 5, 2, 1, 2, 4)$ and $\mathbf{a} = ((1, 3, 2, 2); (2, 4, 1, 2, 3); (4, 1, 5); (3, 5); (2, 5, 1, 1); (2, 4, 3, 4, 4, 4, 4))$; then $\mathbf{b} \subset \mathbf{a}$. The partition $\mathcal{P}$ induced by $\mathbf{a}$ is $\{\{(1,1),(2,3),(3,2),(5,3),(5,4)\}, \{(1,3),(1,4),(2,1),(2,4),(5,1),(6,1)\}, \{(1,2),(2,5),(4,1),(6,3)\}, \{(2,2),(3,1),(6,2),(6,4),(6,5),(6,6),(6,7)\}, \{(3,3),(4,2),(5,2)\}\}$. The frequency representations of $\mathbf{b}$ and $\mathbf{a}$ are given by $1^2\,2^4\,3^2\,4^2\,5^2$ and $1^5\,2^6\,3^4\,4^7\,5^3$ respectively. The $d_i$ and $r_i$ are given by $d_1 = 2, d_2 = 4, d_3 = 2, d_4 = 2, d_5 = 2$ and $r_1 = 3, r_2 = 2, r_3 = 2, r_4 = 5, r_5 = 1$. Since $d_1, d_2, \ldots, d_5$ are all even, $\chi(\mathbf{b}) = 1$.

Notice that the frequencies $d_i$ and $d_i + r_i$ are uniquely determined by the partition $\mathcal{P}$; in fact the $d_i + r_i$ are the sizes of the blocks of $\mathcal{P}$. Moreover, since $\mathbf{b}$ is uniquely determined by $\mathbf{a}$, first adding up the terms corresponding to arrays $\mathbf{a}$ with $\mathbf{b} \subset \mathbf{a}$, and then summing over $\mathbf{b}$, is equivalent to first summing over the arrays $\mathbf{a}$ corresponding to the same partition $\mathcal{P}$, which we denote by $\mathbf{a} \sim \mathcal{P}$, and then summing over the partitions with each block of size at least two:

$\sum_{\mathbf{b}}\sum_{\mathbf{a}:\,\mathbf{b}\subset\mathbf{a}}\chi(\mathbf{b})\prod_{i=1}^{2k}|q_{a^i_1}q_{a^i_{l_i+1}}|\prod_{i=1}^{2k}\prod_{j=1}^{l_i+1}|g_{a^i_j}| = \sum_{\mathcal{P}}\sum_{\mathbf{a}\sim\mathcal{P}}\chi(\mathbf{b})\prod_{i=1}^{2k}|q_{a^i_1}q_{a^i_{l_i+1}}|\prod_{i=1}^{2k}\prod_{j=1}^{l_i+1}|g_{a^i_j}| \leq \sum_{\mathcal{P}}\sum_{\gamma_1,\ldots,\gamma_n}\prod_{i=1}^n q_{\gamma_i}^{d_i}|g_{\gamma_i}|^{d_i+r_i} \lesssim_k \sum_{\mathcal{P}}\prod_{i=1}^n\frac{\mathrm{Im}\big[\sum_i q_i^2g_i\big]^{d_i/2}}{(t+\eta)^{r_i+d_i/2}} = \sum_{\mathcal{P}}\frac{\mathrm{Im}\big[\sum_i q_i^2g_i\big]^{2k}}{(t+\eta)^{\sum l_i}}$,  (2.28)

where in the first inequality we used (2.32) in Proposition 2.8 and $d_i \geq 2$, and in the last equality we used $\sum d_i = 4k$ and $\sum l_i = 2k + \sum r_i$. Therefore, substituting (2.28) into (2.27), we have

$\Big|\sum_{\mathbf{b}}q_{b_1}q_{b_2}\cdots q_{b_{4k}}\,\mathbb{E}\big[X^{(l_1)}_{b_1b_2}X^{(l_2)}_{b_3b_4}\cdots X^{(l_{2k})}_{b_{4k-1}b_{4k}}\big]\Big| \lesssim_k \sum_{\mathcal{P}}\Big(\frac{\psi(t+\eta)\log N}{(N\eta)^{1/2}}\Big)^{\sum l_i}\frac{\mathrm{Im}\big[\sum_i q_i^2g_i\big]^{2k}}{(t+\eta)^{\sum l_i}} = \sum_{\mathcal{P}}\Big(\frac{\psi\,\mathrm{Im}\big[\sum_i q_i^2g_i\big]\log N}{\sqrt{N\eta}}\Big)^{2k}\Big(\frac{\psi\log N}{\sqrt{N\eta}}\Big)^{\sum r_i} \lesssim_k Y^{2k}$,  (2.29)

where in the last inequality we used $\psi\log N \leq \sqrt{N\eta}$ and that the total number of partitions is bounded by $(\sum l_i + 2k)!$, which is a constant depending only on $k$. Following the same argument, one can check that

$\sum_{\mathbf{b}}|q_{b_1}q_{b_2}\cdots q_{b_{4k}}|\,\mathbb{E}\big[\big|X^{(l_1)}_{b_1b_2}X^{(l_2)}_{b_3b_4}\cdots X^{(l_{2k})}_{b_{4k-1}b_{4k}}\big|^2\big]^{1/2} \lesssim_k \Big(\mathrm{Im}\Big[\sum_i q_i^2g_i\Big]\psi\log N\Big)^{2k}N^{2k} \lesssim_k N^{2k}Y^{2k}$.  (2.30)

Therefore, by combining (2.21), (2.22), (2.29) and (2.30), it follows that

$\mathbb{E}[|Z|^{2k}] \lesssim_k Y^{2k} + N^{2k}Y^{2k}\,\mathbb{P}(A^c)^{1/2} \lesssim_k Y^{2k}$.

This finishes the proof of the isotropic local law, Theorem 2.1.

The following is an easy corollary of Theorem 2.1.

Corollary 2.7. Under Assumptions 1.3 and 1.4, for any $\eta_* \ll t \ll r$ and $0 < \kappa < 1$, we have

$|\langle q, G(t, z)q \rangle - m_{fc,t}(z)| \lesssim N^{-b} + \frac{\psi^2}{\sqrt{N\eta}}$,  (2.31)

uniformly for any $z \in D_\kappa$, with overwhelming probability, provided $N$ is large enough.

Proof. By Assumption 1.4, we have

$\Big|\sum_i\frac{\langle u_i(0), q\rangle^2}{\lambda_i(0) - z} - \frac{1}{N}\sum_i\frac{1}{\lambda_i(0) - z}\Big| \leq N^{-b}$,

uniformly for any $z \in \{E + \mathrm{i}\eta: E \in [E_0 - r, E_0 + r], \eta_* \leq \eta \leq r\}$. We denote $\tilde{z} = \tilde{E} + \mathrm{i}\tilde\eta := z + tm_{fc,t}(z)$. From Proposition 2.2, we know that for any $z \in D_\kappa$, $\mathrm{Im}[z + tm_{fc,t}(z)] \asymp t + \eta \geq \eta_*$ and $|tm_{fc,t}(z)| \lesssim t\log N \leq \kappa r$, provided $N$ is large enough. Therefore $\tilde{z} \in \{E + \mathrm{i}\eta: E \in [E_0 - r, E_0 + r], \eta_* \leq \eta \leq 1\}$. As a consequence,

$\mathrm{Im}\Big[\sum_i\langle u_i(0), q\rangle^2 g_i(t, z)\Big] = \mathrm{Im}\Big[\sum_i\frac{\langle u_i(0), q\rangle^2}{\lambda_i(0) - \tilde{z}}\Big] \leq N^{-b} + \mathrm{Im}\Big[\frac{1}{N}\sum_i\frac{1}{\lambda_i(0) - \tilde{z}}\Big] \lesssim 1$.

Combining with Theorem 2.1, it follows that

$\Big|\langle q, G(t, z)q\rangle - \sum_i\frac{\langle u_i(0), q\rangle^2}{\lambda_i(0) - z - tm_{fc,t}(z)}\Big| \lesssim \frac{\psi^2}{\sqrt{N\eta}}$.

Since $m_{fc,t}(z) = m_0(\tilde{z}) = \frac{1}{N}\sum_i(\lambda_i(0) - \tilde{z})^{-1}$ by (1.3), with overwhelming probability we have

$|\langle q, G(t, z)q\rangle - m_{fc,t}(z)| \leq \Big|\langle q, G(t, z)q\rangle - \sum_i\frac{\langle u_i(0), q\rangle^2}{\lambda_i(0) - \tilde{z}}\Big| + \Big|\sum_i\frac{\langle u_i(0), q\rangle^2}{\lambda_i(0) - \tilde{z}} - \frac{1}{N}\sum_i\frac{1}{\lambda_i(0) - \tilde{z}}\Big| \lesssim N^{-b} + \frac{\psi^2}{\sqrt{N\eta}}$,

uniformly for any $z \in D_\kappa$.

We take the event $A$ of trajectories $(\lambda(t))_{0 \leq t \leq r}$ such that, with $t_0$ and the range parameter $l$ as chosen in Section 3:

1. Eigenvalue rigidity holds: $\sup_{t_0 \leq s \leq t_0+l/N}|m_s(z) - m_{fc,s}(z)| \leq \psi/(N\eta)$ uniformly for $z \in D_\kappa$; and $\sup_{t_0 \leq s \leq t_0+l/N}|\lambda_i(s) - \gamma_i(s)| \leq \psi/N$ uniformly for indices $i$ such that $\gamma_i(s) \in I^r_\kappa(E_0)$.

2. Conditioned on any trajectory $\lambda \in A$, with overwhelming probability the following holds uniformly for $z \in D_\kappa$:

$\sup_{t_0 \leq s \leq t_0+l/N}|\langle q, G(s, z)q\rangle - m_{fc,s}(z)| \lesssim N^{-b} + \frac{\psi^2}{\sqrt{N\eta}}$.

As a consequence of Theorems 2.3 and 2.1, and noticing that we can take the parameter $c$ (as in (1.6)) arbitrarily small, the event $A$ holds with overwhelming probability.

2.3 Auxiliary results

Proposition 2.8. Suppose the initial matrix $H_0 = \mathrm{diag}\{\lambda_1(0), \ldots, \lambda_N(0)\}$ satisfies Assumption 1.3, and fix $\kappa > 0$. Then for any $k \geq 2$ and $m \geq 0$, we have

$\sum_i q_i^k|g_i(t, z)|^{k+m} \lesssim_k \frac{\mathrm{Im}\big[\sum_i q_i^2g_i\big]^{k/2}}{(t+\eta)^{k/2+m}}$,  (2.32)

for any $m \geq 0$, we have

$\sum_i q_i|g_i(t, z)|^{2+m} \lesssim_m \frac{N^{1/2}\,\mathrm{Im}\big[\sum_i q_i^2g_i\big]^{1/2}}{(t+\eta)^{1+m}}$,  (2.33)

and

$\frac{1}{N}\sum_i|g_i(t, z)|^{1+m} \lesssim_m \frac{\log N}{(t+\eta)^m}$,  (2.34)

uniformly for any $z \in D_\kappa$, where the $g_i$ are as in (2.8).

Proof. We denote $\tilde{E} + \mathrm{i}\tilde\eta := z + tm_{fc,t}(z)$. From Proposition 2.2, $\tilde\eta = \mathrm{Im}[z + tm_{fc,t}(z)] \asymp \eta + t$, which gives the rough bound

$|g_i(t, z)| \lesssim \frac{1}{t+\eta}$.  (2.35)

With the trivial bound (2.35), (2.32) and (2.33) reduce to the case $m = 0$. For (2.32), we have the basic inequality $\sum x_i^k \leq (\sum x_i^2)^{k/2}$ for $k \geq 2$. Therefore,

$\sum_i q_i^k|g_i(t, z)|^k \leq \Big(\sum_i q_i^2|g_i(t, z)|^2\Big)^{k/2} = \Big(\frac{\mathrm{Im}\big[\sum_i q_i^2g_i\big]}{\mathrm{Im}[z + tm_{fc,t}(z)]}\Big)^{k/2} \lesssim_k \frac{\mathrm{Im}\big[\sum_i q_i^2g_i\big]^{k/2}}{(t+\eta)^{k/2}}$.

For (2.33), by the Cauchy-Schwarz inequality,

$\sum_i q_i|g_i(t, z)|^2 \leq \Big(\sum_i|g_i(t, z)|^2\Big)^{1/2}\Big(\sum_i q_i^2|g_i(t, z)|^2\Big)^{1/2} = \Big(\frac{N\,\mathrm{Im}[m_{fc,t}(z)]}{\mathrm{Im}[z + tm_{fc,t}(z)]}\Big)^{1/2}\Big(\frac{\mathrm{Im}\big[\sum_i q_i^2g_i\big]}{\mathrm{Im}[z + tm_{fc,t}(z)]}\Big)^{1/2} \lesssim \frac{N^{1/2}\,\mathrm{Im}\big[\sum_i q_i^2g_i\big]^{1/2}}{t+\eta}$,

where we used $\mathrm{Im}[m_{fc,t}(z)] \leq C$ from (2.2). Finally, (2.34) in the case $m = 0$ is the same as (2.3).

Proof of Proposition 2.5. Recall the decomposition $E = E^{(1)} + E^{(2)} + E^{(3)}$ from (2.15). If we condition on the submatrix $[W]_{i,j \notin T}$, then $E^{(1)}$ is diagonal and non-random, $E^{(2)}$ depends on $[W]_{i,j \in T}$, and $E^{(3)}$ depends on $[W]_{i \notin T, j \in T}$, so they are independent. (2.25) can be decomposed into the following three estimates: with overwhelming probability,

$\Big|\mathbb{E}_T\big[\tilde{E}^{(1)}_{b_1b_2}\tilde{E}^{(1)}_{b_3b_4}\cdots\tilde{E}^{(1)}_{b_{2l-1}b_{2l}}\big]\Big| \leq \Big(\frac{\psi t}{N\eta}\Big)^l\chi(b_1, b_2, \ldots, b_{2l})$,  (2.36)

$\Big|\mathbb{E}_T\big[\tilde{E}^{(2)}_{b_1b_2}\tilde{E}^{(2)}_{b_3b_4}\cdots\tilde{E}^{(2)}_{b_{2l-1}b_{2l}}\big]\Big| \lesssim_l \Big(\frac{t}{N}\Big)^{l/2}\chi(b_1, b_2, \ldots, b_{2l})$,  (2.37)

$\Big|\mathbb{E}_T\big[\tilde{E}^{(3)}_{b_1b_2}\tilde{E}^{(3)}_{b_3b_4}\cdots\tilde{E}^{(3)}_{b_{2l-1}b_{2l}}\big]\Big| \lesssim_l \Big(\frac{\psi t\log N}{\sqrt{N\eta}}\Big)^l\chi(b_1, b_2, \ldots, b_{2l})$,  (2.38)

where $\mathbb{E}_T$ is the expectation with respect to the rows and columns of $W$ indexed by $T$.

For (2.36): since $E^{(1)}$ is diagonal and, by (2.6) in Theorem 2.3, $t|m^{(T)}_t - m_{fc,t}| \lesssim \psi t/(N\eta)$ with overwhelming probability, we have

$\Big|\mathbb{E}_T\big[\tilde{E}^{(1)}_{b_1b_2}\cdots\tilde{E}^{(1)}_{b_{2l-1}b_{2l}}\big]\Big| \lesssim \Big(\frac{\psi t}{N\eta}\Big)^l\prod_{i=1}^l\delta_{b_{2i-1}b_{2i}} \leq \Big(\frac{\psi t}{N\eta}\Big)^l\chi(b_1, \ldots, b_{2l})$.

For (2.37): it is a product of centered Gaussian variables, which does not vanish only if each variable occurs an even number of times. Thus (2.37) follows; the implicit constant comes from the moments of normal random variables and can be bounded by $(2l-1)!!$.

In the following we prove (2.38). The entries of $E^{(3)}$ are given by

$E^{(3)}_{b_{2i-1}b_{2i}} = t\sum_{\beta_{2i-1}, \beta_{2i} \notin T}\Big(w_{b_{2i-1}\beta_{2i-1}}w_{b_{2i}\beta_{2i}} - \frac{\delta_{b_{2i-1}b_{2i}}\delta_{\beta_{2i-1}\beta_{2i}}}{N}\Big)G^{(T)}_{\beta_{2i-1}\beta_{2i}}$.

Therefore, the left-hand side of (2.38) is bounded by

$t^l\sum_{\beta_1, \beta_2, \ldots, \beta_{2l} \notin T}\Big|\mathbb{E}_T\Big[\prod_{i=1}^l\Big(w_{b_{2i-1}\beta_{2i-1}}w_{b_{2i}\beta_{2i}} - \frac{\delta_{b_{2i-1}b_{2i}}\delta_{\beta_{2i-1}\beta_{2i}}}{N}\Big)\Big]\Big|\,\big|G^{(T)}_{\beta_1\beta_2}\cdots G^{(T)}_{\beta_{2l-1}\beta_{2l}}\big|$.

For each monomial of resolvent entries $G^{(T)}_{\beta_1\beta_2}\cdots G^{(T)}_{\beta_{2l-1}\beta_{2l}}$, we associate a labeled graph $\mathcal{G}$ by the following procedure. We denote the frequency representation of the array $(\beta_1, \beta_2, \ldots, \beta_{2l})$ by $\gamma_1^{d_1}\gamma_2^{d_2}\cdots\gamma_v^{d_v}$, where $d_i$ is the multiplicity of $\gamma_i$ and $v = |\{\beta_1, \beta_2, \ldots, \beta_{2l}\}|$. We construct the labeled graph $\mathcal{G}$ with vertex set $\{\gamma_1, \gamma_2, \ldots, \gamma_v\}$ and $l$ edges $(\beta_{2i-1}, \beta_{2i})$ for $1 \leq i \leq l$ (if $\beta_{2i-1} = \beta_{2i}$, the edge $(\beta_{2i-1}, \beta_{2i})$ is a self-loop). We denote by $s$ the number of self-loops in $\mathcal{G}$. The degree of any vertex $\gamma_i \in \mathcal{G}$ is $d_i$, where a self-loop adds two to the degree.

It is easy to see that (2.38) follows from combining the following two estimates:

$\Big|\mathbb{E}_T\Big[\prod_{i=1}^l\Big(w_{b_{2i-1}\beta_{2i-1}}w_{b_{2i}\beta_{2i}} - \frac{\delta_{b_{2i-1}b_{2i}}\delta_{\beta_{2i-1}\beta_{2i}}}{N}\Big)\Big]\Big| \lesssim_l N^{-l}\rho(\mathcal{G})\chi(b_1, b_2, \ldots, b_{2l})$,  (2.39)

where the implicit constant comes from the moments of normal variables and can be bounded by $(2l-1)!!$; and, with overwhelming probability, uniformly for any $z \in D_\kappa$,

$\sum_{\beta_1, \beta_2, \ldots, \beta_{2l} \notin T}\big|G^{(T)}_{\beta_1\beta_2}\cdots G^{(T)}_{\beta_{2l-1}\beta_{2l}}\big|\rho(\mathcal{G}) \lesssim_l \frac{(\psi\log N)^lN^{l/2}}{\eta^{l/2}}$,  (2.40)

where $\rho(\mathcal{G})$ is the indicator function which equals one if each vertex of $\mathcal{G}$ is incident to two different edges, and zero otherwise.

For any graph $\mathcal{G}$ with $\rho(\mathcal{G}) = 1$, we count the total number of edge-vertex incidences: each self-loop contributes one, and each non-self-loop edge contributes two, so the total is $s + 2(l-s)$; since each vertex of $\mathcal{G}$ is incident to at least two different edges, the total is at least $2v$. Therefore, we have the following relation between $v$, $s$ and $l$:

$2v \leq 2(l-s) + s = 2l - s$.  (2.41)

For the first bound (2.39), we denote the set $B = \{(b_j, \beta_j)\}_{1\leq j\leq 2l}$. Then the product in (2.39) can be rewritten as

$\prod_{i=1}^l\Big(w_{b_{2i-1}\beta_{2i-1}}w_{b_{2i}\beta_{2i}} - \frac{\delta_{b_{2i-1}b_{2i}}\delta_{\beta_{2i-1}\beta_{2i}}}{N}\Big) = \prod_{(b,\beta)\in B}w_{b\beta}^{e_1(b,\beta)}\big(w_{b\beta}^2 - 1/N\big)^{e_2(b,\beta)}$,

where $e_1(b, \beta) = |\{1 \leq i \leq l : \text{exactly one of } (b_{2i-1}, \beta_{2i-1}), (b_{2i}, \beta_{2i}) \text{ is } (b, \beta)\}|$ and $e_2(b, \beta) = |\{1 \leq i \leq l : (b_{2i-1}, \beta_{2i-1}) = (b_{2i}, \beta_{2i}) = (b, \beta)\}|$. Since for $(b, \beta) \in B$ the $w_{b\beta}$ are independent centered Gaussian variables, (2.39) does not vanish only if $e_1(b, \beta)$ is even and $e_1(b, \beta) + e_2(b, \beta) \geq 2$ for every $(b, \beta) \in B$, which implies $\rho(\mathcal{G})\chi(b_1, b_2, \ldots, b_{2l}) = 1$. Therefore, we have

$\Big|\mathbb{E}_T\Big[\prod_{(b,\beta)\in B}w_{b\beta}^{e_1(b,\beta)}\big(w_{b\beta}^2 - 1/N\big)^{e_2(b,\beta)}\Big]\Big| \lesssim_l \prod_{(b,\beta)\in B}N^{-e_1(b,\beta)/2}N^{-e_2(b,\beta)}\rho(\mathcal{G})\chi(b_1, \ldots, b_{2l}) = N^{-l}\rho(\mathcal{G})\chi(b_1, \ldots, b_{2l})$,

and (2.39) follows.

For the second bound (2.40), by Theorem 2.4, with overwhelming probability we have

$|G^{(T)}_{\beta_{2i-1}\beta_{2i}}| \leq \begin{cases}\psi\big(|g_{\beta_{2i-1}}||g_{\beta_{2i}}|\big)^{1/2}, & \beta_{2i-1} = \beta_{2i},\\ \psi\big(|g_{\beta_{2i-1}}||g_{\beta_{2i}}|\big)^{1/2}/\sqrt{N\eta}, & \beta_{2i-1} \neq \beta_{2i}.\end{cases}$

In terms of the graph $\mathcal{G}$, the first bound corresponds to self-loops and the second to non-self-loop edges. In the graph $\mathcal{G}$ there are $s$ self-loops and $l - s$ non-self-loop edges. The product of resolvent entries can thus be bounded, with overwhelming probability, as

$\big|G^{(T)}_{\beta_1\beta_2}\cdots G^{(T)}_{\beta_{2l-1}\beta_{2l}}\big|\rho(\mathcal{G}) \leq \frac{\psi^l}{(N\eta)^{(l-s)/2}}\prod_{i=1}^v|g_{\gamma_i}|^{d_i/2}\rho(\mathcal{G})$.

Notice that $\rho(\mathcal{G}) = 1$ implies $d_i \geq 2$. The index set $(\beta_1, \beta_2, \ldots, \beta_{2l})$ induces a partition $\mathcal{P}$ of $\{1, 2, \ldots, 2l\}$ such that $i$ and $j$ are in the same block if and only if $\beta_i = \beta_j$. If two index sets induce the same partition, they correspond to isomorphic graphs (when we forget the labeling). Therefore, for (2.40), we can first sum over the index sets corresponding to the same partition and then over the partitions:

$\sum_{\beta_1,\ldots,\beta_{2l}\notin T}\big|G^{(T)}_{\beta_1\beta_2}\cdots G^{(T)}_{\beta_{2l-1}\beta_{2l}}\big|\rho(\mathcal{G}) = \sum_{\mathcal{P}}\sum_{(\beta_1,\ldots,\beta_{2l})\sim\mathcal{P}}\big|G^{(T)}_{\beta_1\beta_2}\cdots G^{(T)}_{\beta_{2l-1}\beta_{2l}}\big|\rho(\mathcal{G}) \leq \sum_{\mathcal{P}}\frac{\psi^l}{(N\eta)^{(l-s)/2}}\prod_{i=1}^v\sum_{\gamma_i\notin T}|g_{\gamma_i}|^{d_i/2} \lesssim_l \sum_{\mathcal{P}}\frac{\psi^l(N\eta\log N)^v}{(N\eta)^{(l-s)/2}\eta^l} \leq \sum_{\mathcal{P}}\frac{\psi^l(N\eta\log N)^{l-s/2}}{(N\eta)^{(l-s)/2}\eta^l} \lesssim_l \frac{(\psi\log N)^lN^{l/2}}{\eta^{l/2}}$,

where the second inequality follows from (2.34), in the third inequality we used $\sum_i d_i = 2l$, in the second-to-last inequality we used the bound $v \leq l - s/2$ from (2.41), and in the last inequality we bounded the total number of different partitions by $(2l)!$.

3 Short Time Relaxation

The Dyson Brownian motion (1.2) induces the following two dynamics on the eigenvalues and eigenvectors:

$\mathrm{d}\lambda_k(t) = \frac{\mathrm{d}B_{kk}(t)}{\sqrt{N}} + \frac{1}{N}\sum_{l\neq k}\frac{\mathrm{d}t}{\lambda_k(t) - \lambda_l(t)}$,  (3.1)

$\mathrm{d}u_k(t) = \frac{1}{\sqrt{N}}\sum_{l\neq k}\frac{\mathrm{d}B_{kl}(t)}{\lambda_k(t) - \lambda_l(t)}u_l(t) - \frac{1}{2N}\sum_{l\neq k}\frac{\mathrm{d}t}{(\lambda_k - \lambda_l)^2}u_k(t)$,  (3.2)

where $B_t = (b_{ij}(t))_{1\leq i,j\leq N}$ is symmetric with $(b_{ij}(t))_{i\leq j}$ a family of independent Brownian motions of variance $(1 + \delta_{ij})t$. Following the convention of [6, Definition 2.2], we call (3.1) the Dyson Brownian motion and (3.2) the Dyson vector flow. In order to study the Dyson vector flow, the eigenvector moment flow was introduced in [6, Section 3.1], where the observables are the moments of the projections of the eigenvectors onto a given direction. For any unit vector $q \in \mathbb{R}^N$ and any index $k$, define

$z_k(t) = \sqrt{N}\langle q, u_k(t)\rangle$,

where, with the normalization $\sqrt{N}$, the typical size of $z_k$ is of order one. The normalized test functions are

$Q^{t,\,j_1,\ldots,j_m}_{i_1,\ldots,i_m} = \frac{\prod_{l=1}^m z_{i_l}^{2j_l}}{\prod_{l=1}^m a(2j_l)}, \quad \text{where } a(2j) = (2j-1)!!$.  (3.3)

These indices $\{(i_1, j_1), \ldots, (i_m, j_m)\}$, with distinct $i_k$'s and positive $j_k$'s, can be encoded by a particle configuration $\eta = (\eta_1, \eta_2, \ldots, \eta_N)$ on $[1, N]$ such that $\eta_{i_k} = j_k$ for $1 \leq k \leq m$ and $\eta_p = 0$ if $p \notin \{i_1, i_2, \ldots, i_m\}$. The total number of particles is $N(\eta) := \sum_l \eta_l = \sum_k j_k$. We denote the locations of the particles in non-decreasing order by $x_1(\eta) \leq x_2(\eta) \leq \cdots \leq x_{N(\eta)}(\eta)$. If the context is clear we drop the dependence on $\eta$. We also say the support of $\eta$ is $\{i_1, i_2, \ldots, i_m\}$. It is easy to see that this gives a bijection between the test functions $Q^{t,\,j_1,\ldots,j_m}_{i_1,\ldots,i_m}$ and the particle configurations. We define $\eta^{ij}$ to be the configuration obtained by moving one particle from $i$ to $j$. For any pair of $n$-particle configurations $\eta: x_1 \leq x_2 \leq \cdots \leq x_n$ and $\xi: y_1 \leq y_2 \leq \cdots \leq y_n$, we define the distance

$d(\eta, \xi) = \sum_{\alpha=1}^n|x_\alpha - y_\alpha|$.  (3.4)

We condition on the trajectory of the eigenvalues and define

$f^{H_0}_{\lambda,t}(\eta) = \mathbb{E}^{H_0}\big(Q^{t,\,j_1,\ldots,j_m}_{i_1,\ldots,i_m}(t)\,\big|\,\lambda\big)$,  (3.5)

where $\eta$ is the configuration $\{(i_1, j_1), \ldots, (i_m, j_m)\}$. Here $\lambda$ denotes the whole path of eigenvalues for $0 \leq t \leq 1$. The dependence on the initial matrix $H_0$ will often be omitted, so that we write $f_t = f^{H_0}_{\lambda,t}$. We call $f_t$ the eigenvector moment flow; it is governed by the following generator $B(t)$ [6, Theorem 3.1].

Theorem 3.1 (Eigenvector moment flow). Let $q \in \mathbb{R}^N$ be a unit vector, $z_k = \sqrt{N}\langle q, u_k(t)\rangle$ and $c_{ij}(t) = \big(N(\lambda_i(t) - \lambda_j(t))^2\big)^{-1}$. Suppose that $f_t(\eta)$ is given by (3.5), where $\eta$ denotes the configuration $\{(i_1, j_1), \ldots, (i_m, j_m)\}$. Then $f_t$ satisfies the equation

$\partial_t f_t = B(t)f_t$,  (3.6)

$B(t)f_t(\eta) = \sum_{i\neq j}c_{ij}(t)\,2\eta_i(1 + 2\eta_j)\big(f_t(\eta^{i,j}) - f_t(\eta)\big)$.  (3.7)

An important property of the eigenvector moment flow is its reversibility with respect to a simple explicit equilibrium measure:

$\pi(\eta) = \prod_{p=1}^N\varphi(\eta_p), \quad \varphi(k) = \prod_{i=1}^k\Big(1 - \frac{1}{2i}\Big)$.  (3.8)

For any function $f$ on the configuration space, the Dirichlet form is given by

$-\sum_\eta\pi(\eta)f(\eta)B(t)f(\eta) = \sum_\eta\pi(\eta)\sum_{i\neq j}c_{ij}\,\eta_i(1 + 2\eta_j)\big(f(\eta^{ij}) - f(\eta)\big)^2$.

We are interested in the eigenvectors corresponding to the eigenvalues in the interval $[E_0 - r, E_0 + r]$, and we only have local information on the initial matrix $H_0$. However, the operator $B(t)$ has long-range interactions. We fix a short-range parameter $l$ and split $B(t)$ into a short-range part and a long-range part, $B(t) = S(t) + L(t)$, with

$(Sf_t)(\eta) = \sum_{0<|j-k|\leq l}c_{jk}(t)\,2\eta_j(1 + 2\eta_k)\big(f_t(\eta^{jk}) - f_t(\eta)\big)$,  (3.9)

$(Lf_t)(\eta) = \sum_{|j-k|>l}c_{jk}(t)\,2\eta_j(1 + 2\eta_k)\big(f_t(\eta^{jk}) - f_t(\eta)\big)$.
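The generator (3.7) acts on a finite configuration space, so it can be implemented directly for small $N$ and $n$. The following toy implementation is ours and is not used anywhere in the paper; it encodes configurations as count tuples and applies $B(t)$ for frozen eigenvalues. As a sanity check, constant functions are annihilated by the generator.

```python
# Toy implementation (ours) of the eigenvector moment flow generator (3.7):
# B f(eta) = sum_{i != j} c_ij * 2*eta_i*(1 + 2*eta_j) * (f(eta^{ij}) - f(eta)).
import numpy as np
from itertools import combinations_with_replacement

def configs(N, n):
    """All n-particle configurations on sites 0..N-1, as count tuples."""
    out = []
    for sites in combinations_with_replacement(range(N), n):
        eta = [0] * N
        for s in sites:
            eta[s] += 1
        out.append(tuple(eta))
    return out

def apply_B(f, lam, N):
    """One application of the generator, eigenvalues lam held fixed."""
    Bf = {}
    for eta, val in f.items():
        acc = 0.0
        for i in range(N):
            if eta[i] == 0:
                continue
            for j in range(N):
                if j == i:
                    continue
                c = 1.0 / (N * (lam[i] - lam[j]) ** 2)
                moved = list(eta); moved[i] -= 1; moved[j] += 1
                acc += c * 2 * eta[i] * (1 + 2 * eta[j]) * (f[tuple(moved)] - val)
        Bf[eta] = acc
    return Bf

N, n = 5, 2
lam = np.linspace(-1, 1, N)
ones = {eta: 1.0 for eta in configs(N, n)}
# Constants are harmonic: B applied to f ≡ 1 vanishes identically.
assert max(abs(v) for v in apply_B(ones, lam, N).values()) < 1e-12
```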

Notice that $S$ and $L$ are also reversible with respect to the measure $\pi$ (as in (3.8)). We denote by $U_B(s, t)$ (resp. $U_S(s, t)$ and $U_L(s, t)$) the semigroup associated with $B$ (resp. $S$ and $L$) from time $s$ to $t$, i.e.

$\partial_t U_B(s, t) = B(t)U_B(s, t)$.

For any $\eta_* \ll t \ll r$: in the rest of this section, we fix a time $t_0$ and the range parameter $l$ such that $\eta_* \ll t_0 \leq t \leq t_0 + l/N$, which we will choose later. We will show that the effect of the long-range operator $L(t)$ is negligible in the sense of the $L^\infty$ norm, i.e. $f_t(\eta) \approx U_S(t_0, t)f_{t_0}(\eta)$; and that the short-range operator $S(t)$ satisfies a certain finite speed of propagation estimate, while (3.9) converges to equilibrium exponentially fast. As a consequence, $f_t(\eta) \approx 1$ and Theorem 1.5 follows.

3.1 Finite Speed of Propagation

In this section, we fix some small parameter $0 < \kappa < 1$ and define the following efficient distance on $n$-particle configurations:

$d(\eta, \xi) = \max_{1\leq\alpha\leq n}\#\{i \in [1, N] : \gamma_i(t_0) \in I^r_\kappa(E_0),\ i \in [x_\alpha, y_\alpha]\}$,  (3.10)

where $\eta: x_1 \leq x_2 \leq \cdots \leq x_n$ and $\xi: y_1 \leq y_2 \leq \cdots \leq y_n$, and the $\gamma_i(t_0)$ are the classical eigenvalue locations at time $t_0$ (as in (1.4)).

In this section, we condition on $\lambda(t_0) = \lambda$ for some good eigenvalue configuration $\lambda$. We call an eigenvalue configuration $\lambda$ good if, conditioned on $\lambda(t_0) = \lambda$, for $N$ large enough the following hold with overwhelming probability:

1. $\sup_{t_0\leq s\leq t}|m_s(z) - m_{fc,s}(z)| \leq \psi/(N\eta)$, uniformly for any $z \in D_\kappa$;

2. $\sup_{t_0\leq s\leq t}|\lambda_i(s) - \gamma_i(s)| \leq \psi/N$, uniformly for indices $i$ such that $\gamma_i(t) \in I^r_\kappa(E_0)$.

By Theorem 2.3, combined with a simple continuity argument, $\lambda(t_0)$ is a good eigenvalue configuration with overwhelming probability.

Lemma 3.2. Under Assumption 1.3, for any $\eta_* \ll t \ll r$, we fix a time $t_0$ and the range parameter $l$ such that $\eta_* \ll t_0 \leq t \leq t_0 + l/N \ll r$. For any $n$-particle configurations $\eta: x_1 \leq x_2 \leq \cdots \leq x_n$ and $\xi: y_1 \leq y_2 \leq \cdots \leq y_n$ with $d(\eta, \xi) \geq \psi l/2$, there exists a universal constant $c$ such that, for $N$ large enough, the following holds with overwhelming probability: if we condition on $\lambda(t_0) = \lambda$ for any good eigenvalue configuration $\lambda$,

$\sup_{t_0\leq s\leq t}U_S(t_0, s)\delta_\eta(\xi) \leq e^{-c\psi}$.  (3.11)

Proof. Thanks to the Markov property of Dyson Brownian motion, the conditional law of $(\lambda(t))_{t\geq t_0}$ given $\lambda(t_0) = \lambda$ is the same as that of Dyson Brownian motion started at $\lambda$. In the proof, we will therefore neglect the conditioning in (3.11) and simply treat it as a Dyson eigenvalue flow started at $\lambda$.

We denote $\nu = N/l$ and $r_s(\eta, \xi) = U_S(t_0, s)\delta_\eta(\xi)$. We define a family of cut-off functions $g_w$, parametrized by $w \in \mathbb{R}$, by demanding that $\inf_x g_w(x) = 0$ and by defining the derivative $g'_w$ through the following three cases:

1. $w \leq E_0 - (1-2\kappa)r$. Define

$g'_w(x) = \begin{cases}1 & \text{if } x \in I^r_{2\kappa}(E_0),\\ 0 & \text{if } x \notin I^r_{2\kappa}(E_0).\end{cases}$

2. $w \in I^r_{2\kappa}(E_0)$. Define

$g'_w(x) = \begin{cases}1 & \text{if } x \geq w,\ x \in I^r_{2\kappa}(E_0),\\ -1 & \text{if } x < w,\ x \in I^r_{2\kappa}(E_0),\\ 0 & \text{if } x \notin I^r_{2\kappa}(E_0).\end{cases}$

3. $w \geq E_0 + (1-2\kappa)r$. Define

$g'_w(x) = \begin{cases}-1 & \text{if } x \in I^r_{2\kappa}(E_0),\\ 0 & \text{if } x \notin I^r_{2\kappa}(E_0).\end{cases}$

It is easy to see that for any fixed $x$, as a function of $w$, $g'_w(x)$ is non-increasing. We take $\chi$ a smooth, non-negative function, compactly supported on $[-1, 1]$, with $\int\chi(x)\,\mathrm{d}x = 1$. We also define the smoothed version of $g_w$:

$\phi_i(x) = \int g_{\gamma_i(t_0)}(x - y)\,\nu\chi(\nu y)\,\mathrm{d}y$.

Then $\phi_i$ is smooth, $|\phi'_i| \leq 1$ and $|\phi''_i| \leq \nu$. Moreover, $\phi_i(\gamma_i(t_0)) \lesssim 1/\nu$, and the $\phi'_i(x)$ all vanish for $x \leq E_0 - (1-2\kappa)r - l/N$ or $x \geq E_0 + (1-2\kappa)r + l/N$. From the monotonicity of $g'_w(x)$ in $w$: for any $a \leq b$, we have $\gamma_a(t_0) \leq \gamma_b(t_0)$, so

$\phi'_a(x) - \phi'_b(x) \geq 0$.  (3.12)

We define the stopping time $\tau$ as the first time $s \geq t_0$ such that either of the following fails: i) $|m_s(z) - m_{fc,s}(z)| \leq \psi/(N\eta)$ uniformly for $z \in D_\kappa$; ii) $|\lambda_i(s) - \gamma_i(s)| \leq \psi/N$ uniformly for indices $i$ such that $\gamma_i(s) \in I^r_\kappa(E_0)$. By our assumption that $\lambda(t_0)$ is a good configuration, we have $\tau \geq t$ with overwhelming probability.

Recall the inverse Stieltjes transform, $\rho_{fc,s}(E) = \lim_{\eta\to 0}\mathrm{Im}[m_{fc,s}(E + \mathrm{i}\eta)]/\pi$. By Proposition 2.2, the densities $\rho_{fc,s}(E)$ and $\frac{1}{N}\sum_i\delta_{\lambda_i(s)}$ are bounded below and above on $I^r_\kappa(E_0)$, down to the scale $\eta \geq \psi^4/N$. Thus there exists a universal constant $C$ such that for any $t_0 \leq s \leq t$ and any interval $I$ centered in $I^r_\kappa(E_0)$ with $|I| \geq \psi^4/N$,

$C^{-1}N|I| \leq \#\{i : \gamma_i(s\wedge\tau) \in I\},\ \#\{i : \lambda_i(s\wedge\tau) \in I\} \leq CN|I|$.  (3.13)

For any configuration $\xi$ with $n$ particles, we define

$\phi_s(\xi) := \sum_{\alpha=1}^n\phi_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau)), \quad \Phi_s(\xi) := e^{\nu\phi_s(\xi)}, \quad v_s(\xi) := \Phi_s(\xi)\,r_{s\wedge\tau}(\eta, \xi), \quad X_s := \sum_\xi\pi(\xi)v_s(\xi)^2$,

where $\pi$ is the reversible measure of the eigenvector moment flow (as in (3.8)). We denote $X^*_t := \sup_{t_0\leq s\leq t}X_s$ (by definition, $X_s$ is always positive). We claim that (3.11) follows from the estimate

$\mathbb{E}[X^*_t] \leq Ce^{C(t-t_0)\nu\log N}$,  (3.14)

where $C$ is a constant depending on $n$. In fact, (3.14) implies that

$\mathbb{E}\Big[\sup_{t_0\leq s\leq t}e^{2\nu\sum_{\alpha=1}^n\phi_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau))}\,r^2_{s\wedge\tau}(\eta, \xi)\Big] \leq Ce^{C(t-t_0)\nu\log N}$.  (3.15)

Under the assumption $d(\eta, \xi) \geq \psi l/2$, there exists some index $1 \leq \alpha \leq n$ (by symmetry, we can assume $x_\alpha \leq y_\alpha$) such that $\#\{i : \gamma_i(t_0) \in I^r_\kappa(E_0),\ i \in [x_\alpha, y_\alpha]\} \geq \psi l/2$; it then follows from (3.13) that $|[\gamma_{x_\alpha}(t_0), \gamma_{y_\alpha}(t_0)] \cap I^r_{2\kappa}(E_0)| \gtrsim \psi l/N$, and thus $\phi_{x_\alpha}(\gamma_{y_\alpha}(t_0)) - \phi_{x_\alpha}(\gamma_{x_\alpha}(t_0)) \gtrsim \psi l/N$. We can lower bound $\phi_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau))$ as

$\phi_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau)) \geq \phi_{x_\alpha}(\gamma_{y_\alpha}(t_0)) - \big|\phi_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau)) - \phi_{x_\alpha}(\gamma_{y_\alpha}(s\wedge\tau))\big| - \big|\phi_{x_\alpha}(\gamma_{y_\alpha}(s\wedge\tau)) - \phi_{x_\alpha}(\gamma_{y_\alpha}(t_0))\big|$.  (3.16)

For the second term in (3.16): either $\gamma_{y_\alpha}(s\wedge\tau) \in I^r_\kappa(E_0)$, and then $|\lambda_{y_\alpha}(s\wedge\tau) - \gamma_{y_\alpha}(s\wedge\tau)| \leq \psi/N$; or $\gamma_{y_\alpha}(s\wedge\tau) \notin I^r_\kappa(E_0)$, and then $\phi_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau)) = \phi_{x_\alpha}(\gamma_{y_\alpha}(s\wedge\tau))$. In both cases $|\phi_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau)) - \phi_{x_\alpha}(\gamma_{y_\alpha}(s\wedge\tau))| \leq \psi/N$. For the third term in (3.16), we have

$\big|\phi_{x_\alpha}(\gamma_{y_\alpha}(s\wedge\tau)) - \phi_{x_\alpha}(\gamma_{y_\alpha}(t_0))\big| \leq \int_{t_0}^s\big|\phi'_{x_\alpha}(\gamma_{y_\alpha}(\sigma\wedge\tau))\big|\,\big|\partial_\sigma\gamma_{y_\alpha}(\sigma\wedge\tau)\big|\,\mathrm{d}\sigma \lesssim (s - t_0)\log N \ll \psi l/N$,

where we used (2.4). As a consequence, $\phi_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau)) \gtrsim \psi l/N$ for any $t_0 \leq s \leq t$. Combining with (3.15) and $\nu = N/l$, it follows that

$\mathbb{E}\big[\sup_{t_0\leq s\leq t}r_{s\wedge\tau}(\eta, \xi)^2\big] \leq e^{-c\psi}$.

Since $\lambda(t_0)$ is a good eigenvalue configuration, with overwhelming probability we have $\tau \geq t$; therefore (3.11) follows by the Markov inequality.

In the following we prove (3.14). We decompose $X_s = M_s + A_s$, where $M_s$ is a continuous local martingale with $M_{t_0} = 0$ and $A_s$ is a continuous adapted process of finite variation. We denote $A^*_t := \sup_{t_0\leq s\leq t}|A_s|$ and $M^*_t := \sup_{t_0\leq s\leq t}|M_s|$. Then $X^*_t \leq M^*_t + A^*_t$. We bound $M^*_t$ by the Burkholder-Davis-Gundy inequality:

$\mathbb{E}\big[(M^*_t)^2\big] \leq C\,\mathbb{E}\Big[\int_{t_0}^t\mathrm{d}\langle M_s, M_s\rangle\Big]$.  (3.17)

For $A^*_t$, since $A_s$ is a finite variation process, we directly upper bound $\partial_sA_s$:

$\mathbb{E}[A^*_t] \leq \mathbb{E}\Big[A_{t_0} + \int_{t_0}^t(\partial_sA_s \vee 0)\,\mathrm{d}s\Big]$.  (3.18)

By Itô's formula we have

$\mathrm{d}X_s = \sum_\xi\pi(\xi)\sum_{0<|k-j|\leq l}c_{kj}\,2\xi_k(1+2\xi_j)\Big(\frac{\Phi_s(\xi^{kj})}{\Phi_s(\xi)} + \frac{\Phi_s(\xi)}{\Phi_s(\xi^{kj})} - 2\Big)v_s(\xi^{kj})v_s(\xi)\,\mathrm{d}(s\wedge\tau)$  (3.19)

$+ \sum_\xi\pi(\xi)r^2_{s\wedge\tau}(\eta, \xi)\,\mathrm{d}\langle\Phi_s(\xi), \Phi_s(\xi)\rangle$  (3.20)

$+ 2\sum_\xi\pi(\xi)v_s(\xi)r_{s\wedge\tau}(\eta, \xi)\,\mathrm{d}\Phi_s(\xi)$  (3.21)

$- \sum_\xi\pi(\xi)\sum_{0<|k-j|\leq l}c_{kj}\,2\xi_k(1+2\xi_j)\big(v_s(\xi^{kj}) - v_s(\xi)\big)^2\,\mathrm{d}(s\wedge\tau)$.  (3.22)

The martingale part comes from (3.21):

$\mathrm{d}M_s = 2\sum_\xi\pi(\xi)v_s(\xi)^2\,\nu\sum_{\alpha=1}^n\phi'_{x_\alpha}(\lambda_{y_\alpha}(s\wedge\tau))\,\frac{\mathrm{d}B_{y_\alpha y_\alpha}(s\wedge\tau)}{\sqrt{N}}$.

Since $|\phi'_i| \leq 1$, we have $\mathrm{d}\langle M_s, M_s\rangle \lesssim_n \frac{\nu^2}{N}X_s^2\,\mathrm{d}(s\wedge\tau)$. Therefore, combining with (3.17),

$\mathbb{E}\big[(M^*_t)^2\big] \lesssim_n \frac{\nu^2}{N}\,\mathbb{E}\Big[\int_{t_0}^tX_s^2\,\mathrm{d}s\Big] = \frac{\nu^2}{N}\int_{t_0}^t\mathbb{E}[X_s^2]\,\mathrm{d}s$.  (3.23)

To make use of (3.18) and (3.23), we need an upper bound on $\partial_sA_s$, the finite variation part of $\mathrm{d}X_s$. Thanks to the choice of the $\phi_i$'s, we can directly upper bound (3.19) and (3.20) in terms of $X_s$. We upper bound (3.21) by taking advantage of its cancellation with (3.22). Firstly, for (3.19), we need the following estimate: for $0 < |k-j| \leq l$,

$\frac{\Phi_s(\xi^{kj})}{\Phi_s(\xi)} + \frac{\Phi_s(\xi)}{\Phi_s(\xi^{kj})} - 2 \lesssim \nu^2|\lambda_k - \lambda_j|^2$.  (3.24)

We assume $j < k$. Then there exist $p \leq q$ in $[1, n]$ such that $y_{p-1} < j \leq y_p$ (we set $y_0 = 0$) and $y_q \leq k < y_{q+1}$ (recall $y_q = y_q(\xi)$), and

$|\phi_s(\xi^{kj}) - \phi_s(\xi)| \leq \sum_{\alpha=p}^q\big|\phi_{x_\alpha}(\lambda_{y_{\alpha-1}\vee j}) - \phi_{x_\alpha}(\lambda_{y_\alpha})\big|$.

Since $y_\alpha - (y_{\alpha-1}\vee j) \leq k - j \leq l$, by our choice of the stopping time $\tau$: if $\lambda_{y_{\alpha-1}\vee j} \leq E_0 - (1-\kappa)r$, then $\lambda_{y_\alpha} \leq E_0 - (1-\kappa)r + Cl/N$, where $C$ is from (3.13), and both points lie in the region where $\phi'_{x_\alpha}$ vanishes; in particular $\phi_{x_\alpha}(\lambda_{y_{\alpha-1}\vee j}) - \phi_{x_\alpha}(\lambda_{y_\alpha}) = 0$. Similarly, if $\lambda_{y_\alpha} \geq E_0 + (1-\kappa)r$, then $\lambda_{y_{\alpha-1}\vee j} \geq E_0 + (1-\kappa)r - Cl/N$, and $\phi_{x_\alpha}(\lambda_{y_{\alpha-1}\vee j}) - \phi_{x_\alpha}(\lambda_{y_\alpha}) = 0$. Therefore,

$|\phi_s(\xi^{kj}) - \phi_s(\xi)| \lesssim \mathbf{1}\big([\lambda_{y_{p-1}\vee j}(s\wedge\tau), \lambda_{y_q}(s\wedge\tau)] \cap I^r_\kappa(E_0) \neq \emptyset\big)\min\big\{|\lambda_j(s\wedge\tau) - \lambda_k(s\wedge\tau)|, 1/\nu\big\}$,

where we used (3.13) again. This estimate leads to (3.24):

$\frac{\Phi_s(\xi^{kj})}{\Phi_s(\xi)} + \frac{\Phi_s(\xi)}{\Phi_s(\xi^{kj})} - 2 = \Big(e^{\nu(\phi_s(\xi^{kj}) - \phi_s(\xi))/2} - e^{\nu(\phi_s(\xi) - \phi_s(\xi^{kj}))/2}\Big)^2 \lesssim \nu^2|\lambda_k - \lambda_j|^2$.

Combining with (3.24), and since $c_{kj}\nu^2|\lambda_k - \lambda_j|^2 = \nu^2/N$, it follows that

$(3.19) \lesssim \frac{\nu^2}{N}\sum_\xi\pi(\xi)\sum_{0<|k-j|\leq l}2\xi_k(1+2\xi_j)v_s(\xi^{kj})v_s(\xi)\,\mathrm{d}(s\wedge\tau) \lesssim_n \frac{\nu^2l}{N}X_s\,\mathrm{d}(s\wedge\tau)$.  (3.25)

For (3.20), we have the bound

$(3.20) = \frac{2\nu^2}{N}\sum_\xi\pi(\xi)v_s(\xi)^2\Big(\sum_{\alpha=1}^n\phi'_{x_\alpha}(\lambda_{y_\alpha}(s))^2\Big)\mathrm{d}(s\wedge\tau) \lesssim_n \frac{\nu^2}{N}X_s\,\mathrm{d}(s\wedge\tau)$.  (3.26)

For (3.21), the finite variation part is given by

$\frac{1}{N}\sum_\xi\pi(\xi)v_s(\xi)^2\sum_{\alpha=1}^n\big(\nu\phi''_{x_\alpha}(\lambda_{y_\alpha}) + \nu^2\phi'_{x_\alpha}(\lambda_{y_\alpha})^2\big)\,\mathrm{d}(s\wedge\tau)$  (3.27)

$+ 2\nu\sum_\xi\pi(\xi)v_s(\xi)^2\sum_{\alpha=1}^n\phi'_{x_\alpha}(\lambda_{y_\alpha})\,\frac{1}{N}\sum_{|k-y_\alpha|>l}\frac{\mathrm{d}(s\wedge\tau)}{\lambda_{y_\alpha} - \lambda_k}$  (3.28)

$+ 2\nu\sum_\xi\pi(\xi)v_s(\xi)^2\sum_{\alpha=1}^n\phi'_{x_\alpha}(\lambda_{y_\alpha})\,\frac{1}{N}\sum_{0<|k-y_\alpha|\leq l}\frac{\mathrm{d}(s\wedge\tau)}{\lambda_{y_\alpha} - \lambda_k}$.  (3.29)

By our choice of the cutoff functions, $|\nu\phi''_{x_\alpha}(\lambda_{y_\alpha})| + \nu^2\phi'_{x_\alpha}(\lambda_{y_\alpha})^2 \leq 2\nu^2$, so

$(3.27) \lesssim_n \frac{\nu^2}{N}X_s\,\mathrm{d}(s\wedge\tau)$.  (3.30)

For (3.28): either $\lambda_{y_\alpha} \notin I^r_{2\kappa}(E_0)$, and then $\phi'_{x_\alpha}(\lambda_{y_\alpha}) = 0$; or $\lambda_{y_\alpha} \in I^r_{2\kappa}(E_0)$, in which case, by a dyadic decomposition argument similar to (2.5), we have

$\frac{1}{N}\sum_{k:\,|k-y_\alpha|>l}\frac{1}{|\lambda_{y_\alpha}(s\wedge\tau) - \lambda_k(s\wedge\tau)|} \lesssim \log N$.

Therefore we always have

$(3.28) \lesssim \nu\log N\,X_s\,\mathrm{d}(s\wedge\tau)$.  (3.31)

Finally, to bound (3.29), we symmetrize its summands:

$2\nu\sum_\xi\pi(\xi)v_s(\xi)^2\sum_{\alpha=1}^n\phi'_{x_\alpha}(\lambda_{y_\alpha})\,\frac{1}{N}\sum_{0<|k-y_\alpha|\leq l}\frac{\mathrm{d}(s\wedge\tau)}{\lambda_{y_\alpha} - \lambda_k}$

$= \frac{2\nu}{N}\,\mathrm{d}(s\wedge\tau)\sum_{0<|k-i|\leq l}\sum_\xi\pi(\xi)v_s(\xi)^2\,\frac{1}{\lambda_i - \lambda_k}\sum_{\alpha:\,y_\alpha=i}\phi'_{x_\alpha}(\lambda_i)$

$= \frac{2\nu}{N}\,\mathrm{d}(s\wedge\tau)\sum_{0<k-i\leq l}\Big[\sum_\xi\pi(\xi)v_s(\xi)^2\,\frac{1}{\lambda_i - \lambda_k}\sum_{\alpha:\,y_\alpha=i}\phi'_{x_\alpha}(\lambda_i) + \sum_\xi\pi(\xi)v_s(\xi)^2\,\frac{1}{\lambda_k - \lambda_i}\sum_{\alpha:\,y_\alpha=k}\phi'_{x_\alpha}(\lambda_k)\Big]$

$\leq \frac{2\nu}{N}\,\mathrm{d}(s\wedge\tau)\sum_{0<k-i\leq l}\sum_\xi\pi(\xi)v_s(\xi)^2\,\frac{1}{\lambda_i - \lambda_k}\Big(\sum_{\alpha:\,y_\alpha=i}\phi'_{x_\alpha}(\lambda_i) - \sum_{\alpha:\,y_\alpha=k}\phi'_{x_\alpha}(\lambda_i)\Big) + O\big(n\nu X_s\,\mathrm{d}(s\wedge\tau)\big)$,  (3.32)

where in the last inequality we replaced $\phi'_{x_\alpha}(\lambda_k)$ by $\phi'_{x_\alpha}(\lambda_i)$. By our choice of the $\phi_i$, $|\phi'_{x_\alpha}(\lambda_i) - \phi'_{x_\alpha}(\lambda_k)| \leq \|\phi''\|_\infty|\lambda_i - \lambda_k| \leq \nu|\lambda_i - \lambda_k|$, and there are at most $2ln$ choices of the pairs $(k, i)$, so the error is at most $O(\nu^2lnX_s/N) = O(n\nu X_s)$.

In all the following bounds, we consider $i$ and $k$ as fixed indices. We introduce the following subsets of configurations with $n$ particles, for any $0 \leq q \leq p \leq n$:

$A_p = \{\xi : \xi_i + \xi_k = p\}, \quad A_{p,q} = \{\xi \in A_p : \xi_i = q\}$.

We denote by $\bar\xi = (\bar\xi_1, \bar\xi_2, \ldots, \bar\xi_N)$ the configuration obtained by exchanging all particles between the sites $i$ and $k$, i.e. $\bar\xi_i = \xi_k$, $\bar\xi_k = \xi_i$ and $\bar\xi_j = \xi_j$ if $j \neq i, k$. We denote the locations of the particles of the configuration $\bar\xi$ by $\bar y_1 \leq \bar y_2 \leq \cdots \leq \bar y_n$. Using $\pi(\xi) = \pi(\bar\xi)$, we can rewrite the sum over $\xi$ in (3.32) as

$2\sum_{p=0}^n\sum_{q=0}^p\sum_{\xi\in A_{p,q}}\pi(\xi)v_s(\xi)^2\,\frac{1}{\lambda_i - \lambda_k}\Big(\sum_{\alpha:\,y_\alpha=i}\phi'_{x_\alpha}(\lambda_i) - \sum_{\alpha:\,y_\alpha=k}\phi'_{x_\alpha}(\lambda_i)\Big)$

$= \sum_{p=0}^n\sum_{q=0}^p\sum_{\xi\in A_{p,q}}\frac{1}{\lambda_i - \lambda_k}\Big[\pi(\xi)v_s(\xi)^2\Big(\sum_{\alpha:\,y_\alpha=i}\phi'_{x_\alpha}(\lambda_i) - \sum_{\alpha:\,y_\alpha=k}\phi'_{x_\alpha}(\lambda_i)\Big) - \pi(\xi)v_s(\bar\xi)^2\Big(\sum_{\alpha:\,\bar y_\alpha=k}\phi'_{x_\alpha}(\lambda_i) - \sum_{\alpha:\,\bar y_\alpha=i}\phi'_{x_\alpha}(\lambda_i)\Big)\Big]$.  (3.33)

For $i < k$, both index sets $\{\alpha : y_\alpha = k\} \cup \{\alpha : \bar y_\alpha = k\}$ and $\{\alpha : y_\alpha = i\} \cup \{\alpha : \bar y_\alpha = i\}$ have cardinality $p$, and the $j$-th largest element of the first set is larger than its counterpart in the second set. By (3.12), for any $a \leq b$ we have $\phi'_{x_a}(x) \geq \phi'_{x_b}(x)$. This implies

$\sum_{\alpha:\,y_\alpha=i}\phi'_{x_\alpha}(\lambda_i) - \sum_{\alpha:\,y_\alpha=k}\phi'_{x_\alpha}(\lambda_i) \geq \sum_{\alpha:\,\bar y_\alpha=k}\phi'_{x_\alpha}(\lambda_i) - \sum_{\alpha:\,\bar y_\alpha=i}\phi'_{x_\alpha}(\lambda_i)$.  (3.34)


More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Exponential tail inequalities for eigenvalues of random matrices

Exponential tail inequalities for eigenvalues of random matrices Exponential tail inequalities for eigenvalues of random matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify

More information

From random matrices to free groups, through non-crossing partitions. Michael Anshelevich

From random matrices to free groups, through non-crossing partitions. Michael Anshelevich From random matrices to free groups, through non-crossing partitions Michael Anshelevich March 4, 22 RANDOM MATRICES For each N, A (N), B (N) = independent N N symmetric Gaussian random matrices, i.e.

More information

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES ADAM MASSEY, STEVEN J. MILLER, AND JOHN SINSHEIMER Abstract. Consider the ensemble of real symmetric Toeplitz

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Lectures 2 3 : Wigner s semicircle law

Lectures 2 3 : Wigner s semicircle law Fall 009 MATH 833 Random Matrices B. Való Lectures 3 : Wigner s semicircle law Notes prepared by: M. Koyama As we set up last wee, let M n = [X ij ] n i,j=1 be a symmetric n n matrix with Random entries

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS

L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS Rend. Sem. Mat. Univ. Pol. Torino Vol. 57, 1999) L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS Abstract. We use an abstract framework to obtain a multilevel decomposition of a variety

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Local Semicircle Law and Complete Delocalization for Wigner Random Matrices

Local Semicircle Law and Complete Delocalization for Wigner Random Matrices Local Semicircle Law and Complete Delocalization for Wigner Random Matrices The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters.

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems

Linear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems Linear ODEs p. 1 Linear ODEs Existence of solutions to linear IVPs Resolvent matrix Autonomous linear systems Linear ODEs Definition (Linear ODE) A linear ODE is a differential equation taking the form

More information

The Isotropic Semicircle Law and Deformation of Wigner Matrices

The Isotropic Semicircle Law and Deformation of Wigner Matrices The Isotropic Semicircle Law and Deformation of Wigner Matrices Antti Knowles 1 and Jun Yin 2 Department of Mathematics, Harvard University Cambridge MA 02138, USA knowles@math.harvard.edu 1 Department

More information

On the concentration of eigenvalues of random symmetric matrices

On the concentration of eigenvalues of random symmetric matrices On the concentration of eigenvalues of random symmetric matrices Noga Alon Michael Krivelevich Van H. Vu April 23, 2012 Abstract It is shown that for every 1 s n, the probability that the s-th largest

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

CLASSIFICATIONS OF THE FLOWS OF LINEAR ODE

CLASSIFICATIONS OF THE FLOWS OF LINEAR ODE CLASSIFICATIONS OF THE FLOWS OF LINEAR ODE PETER ROBICHEAUX Abstract. The goal of this paper is to examine characterizations of linear differential equations. We define the flow of an equation and examine

More information

Quivers of Period 2. Mariya Sardarli Max Wimberley Heyi Zhu. November 26, 2014

Quivers of Period 2. Mariya Sardarli Max Wimberley Heyi Zhu. November 26, 2014 Quivers of Period 2 Mariya Sardarli Max Wimberley Heyi Zhu ovember 26, 2014 Abstract A quiver with vertices labeled from 1,..., n is said to have period 2 if the quiver obtained by mutating at 1 and then

More information

Concentration Inequalities for Random Matrices

Concentration Inequalities for Random Matrices Concentration Inequalities for Random Matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify the asymptotic

More information

56 4 Integration against rough paths

56 4 Integration against rough paths 56 4 Integration against rough paths comes to the definition of a rough integral we typically take W = LV, W ; although other choices can be useful see e.g. remark 4.11. In the context of rough differential

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Dissertation Defense

Dissertation Defense Clustering Algorithms for Random and Pseudo-random Structures Dissertation Defense Pradipta Mitra 1 1 Department of Computer Science Yale University April 23, 2008 Mitra (Yale University) Dissertation

More information

Comparison Method in Random Matrix Theory

Comparison Method in Random Matrix Theory Comparison Method in Random Matrix Theory Jun Yin UW-Madison Valparaíso, Chile, July - 2015 Joint work with A. Knowles. 1 Some random matrices Wigner Matrix: H is N N square matrix, H : H ij = H ji, EH

More information

10.1. The spectrum of an operator. Lemma If A < 1 then I A is invertible with bounded inverse

10.1. The spectrum of an operator. Lemma If A < 1 then I A is invertible with bounded inverse 10. Spectral theory For operators on finite dimensional vectors spaces, we can often find a basis of eigenvectors (which we use to diagonalize the matrix). If the operator is symmetric, this is always

More information

From longest increasing subsequences to Whittaker functions and random polymers

From longest increasing subsequences to Whittaker functions and random polymers From longest increasing subsequences to Whittaker functions and random polymers Neil O Connell University of Warwick British Mathematical Colloquium, April 2, 2015 Collaborators: I. Corwin, T. Seppäläinen,

More information

F (z) =f(z). f(z) = a n (z z 0 ) n. F (z) = a n (z z 0 ) n

F (z) =f(z). f(z) = a n (z z 0 ) n. F (z) = a n (z z 0 ) n 6 Chapter 2. CAUCHY S THEOREM AND ITS APPLICATIONS Theorem 5.6 (Schwarz reflection principle) Suppose that f is a holomorphic function in Ω + that extends continuously to I and such that f is real-valued

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

MATRIX INTEGRALS AND MAP ENUMERATION 2

MATRIX INTEGRALS AND MAP ENUMERATION 2 MATRIX ITEGRALS AD MAP EUMERATIO 2 IVA CORWI Abstract. We prove the generating function formula for one face maps and for plane diagrams using techniques from Random Matrix Theory and orthogonal polynomials.

More information

11. Spectral theory For operators on finite dimensional vectors spaces, we can often find a basis of eigenvectors (which we use to diagonalize the

11. Spectral theory For operators on finite dimensional vectors spaces, we can often find a basis of eigenvectors (which we use to diagonalize the 11. Spectral theory For operators on finite dimensional vectors spaces, we can often find a basis of eigenvectors (which we use to diagonalize the matrix). If the operator is symmetric, this is always

More information

ERRATA: Probabilistic Techniques in Analysis

ERRATA: Probabilistic Techniques in Analysis ERRATA: Probabilistic Techniques in Analysis ERRATA 1 Updated April 25, 26 Page 3, line 13. A 1,..., A n are independent if P(A i1 A ij ) = P(A 1 ) P(A ij ) for every subset {i 1,..., i j } of {1,...,

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Eigenvalue Statistics for Toeplitz and Circulant Ensembles

Eigenvalue Statistics for Toeplitz and Circulant Ensembles Eigenvalue Statistics for Toeplitz and Circulant Ensembles Murat Koloğlu 1, Gene Kopp 2, Steven J. Miller 1, and Karen Shen 3 1 Williams College 2 University of Michigan 3 Stanford University http://www.williams.edu/mathematics/sjmiller/

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

RANDOM MATRICES: TAIL BOUNDS FOR GAPS BETWEEN EIGENVALUES. 1. Introduction

RANDOM MATRICES: TAIL BOUNDS FOR GAPS BETWEEN EIGENVALUES. 1. Introduction RANDOM MATRICES: TAIL BOUNDS FOR GAPS BETWEEN EIGENVALUES HOI NGUYEN, TERENCE TAO, AND VAN VU Abstract. Gaps (or spacings) between consecutive eigenvalues are a central topic in random matrix theory. The

More information

Stein s Method and Characteristic Functions

Stein s Method and Characteristic Functions Stein s Method and Characteristic Functions Alexander Tikhomirov Komi Science Center of Ural Division of RAS, Syktyvkar, Russia; Singapore, NUS, 18-29 May 2015 Workshop New Directions in Stein s method

More information

16 1 Basic Facts from Functional Analysis and Banach Lattices

16 1 Basic Facts from Functional Analysis and Banach Lattices 16 1 Basic Facts from Functional Analysis and Banach Lattices 1.2.3 Banach Steinhaus Theorem Another fundamental theorem of functional analysis is the Banach Steinhaus theorem, or the Uniform Boundedness

More information

Universality for a class of random band matrices. 1 Introduction

Universality for a class of random band matrices. 1 Introduction Universality for a class of random band matrices P. Bourgade New York University, Courant Institute bourgade@cims.nyu.edu H.-T. Yau Harvard University htyau@math.harvard.edu L. Erdős Institute of Science

More information

Higher-order Fourier analysis of F n p and the complexity of systems of linear forms

Higher-order Fourier analysis of F n p and the complexity of systems of linear forms Higher-order Fourier analysis of F n p and the complexity of systems of linear forms Hamed Hatami School of Computer Science, McGill University, Montréal, Canada hatami@cs.mcgill.ca Shachar Lovett School

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Limit Laws for Random Matrices from Traffic Probability

Limit Laws for Random Matrices from Traffic Probability Limit Laws for Random Matrices from Traffic Probability arxiv:1601.02188 Slides available at math.berkeley.edu/ bensonau Benson Au UC Berkeley May 9th, 2016 Benson Au (UC Berkeley) Random Matrices from

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Mesoscopic eigenvalue statistics of Wigner matrices

Mesoscopic eigenvalue statistics of Wigner matrices Mesoscopic eigenvalue statistics of Wigner matrices Yukun He Antti Knowles May 26, 207 We prove that the linear statistics of the eigenvalues of a Wigner matrix converge to a universal Gaussian process

More information

arxiv: v2 [math.pr] 16 Aug 2014

arxiv: v2 [math.pr] 16 Aug 2014 RANDOM WEIGHTED PROJECTIONS, RANDOM QUADRATIC FORMS AND RANDOM EIGENVECTORS VAN VU DEPARTMENT OF MATHEMATICS, YALE UNIVERSITY arxiv:306.3099v2 [math.pr] 6 Aug 204 KE WANG INSTITUTE FOR MATHEMATICS AND

More information

Spectral Clustering. Guokun Lai 2016/10

Spectral Clustering. Guokun Lai 2016/10 Spectral Clustering Guokun Lai 2016/10 1 / 37 Organization Graph Cut Fundamental Limitations of Spectral Clustering Ng 2002 paper (if we have time) 2 / 37 Notation We define a undirected weighted graph

More information

Math212a1413 The Lebesgue integral.

Math212a1413 The Lebesgue integral. Math212a1413 The Lebesgue integral. October 28, 2014 Simple functions. In what follows, (X, F, m) is a space with a σ-field of sets, and m a measure on F. The purpose of today s lecture is to develop the

More information

DIMENSION OF SLICES THROUGH THE SIERPINSKI CARPET

DIMENSION OF SLICES THROUGH THE SIERPINSKI CARPET DIMENSION OF SLICES THROUGH THE SIERPINSKI CARPET ANTHONY MANNING AND KÁROLY SIMON Abstract For Lebesgue typical (θ, a), the intersection of the Sierpinski carpet F with a line y = x tan θ + a has (if

More information

Free probabilities and the large N limit, IV. Loop equations and all-order asymptotic expansions... Gaëtan Borot

Free probabilities and the large N limit, IV. Loop equations and all-order asymptotic expansions... Gaëtan Borot Free probabilities and the large N limit, IV March 27th 2014 Loop equations and all-order asymptotic expansions... Gaëtan Borot MPIM Bonn & MIT based on joint works with Alice Guionnet, MIT Karol Kozlowski,

More information

NOTES ON LINEAR ODES

NOTES ON LINEAR ODES NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary

More information

Eigenvalue variance bounds for Wigner and covariance random matrices

Eigenvalue variance bounds for Wigner and covariance random matrices Eigenvalue variance bounds for Wigner and covariance random matrices S. Dallaporta University of Toulouse, France Abstract. This work is concerned with finite range bounds on the variance of individual

More information

The Altshuler-Shklovskii Formulas for Random Band Matrices I: the Unimodular Case

The Altshuler-Shklovskii Formulas for Random Band Matrices I: the Unimodular Case The Altshuler-Shklovskii Formulas for Random Band Matrices I: the Unimodular Case László Erdős Antti Knowles March 21, 2014 We consider the spectral statistics of large random band matrices on mesoscopic

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Spectra of adjacency matrices of random geometric graphs

Spectra of adjacency matrices of random geometric graphs Spectra of adjacency matrices of random geometric graphs Paul Blackwell, Mark Edmondson-Jones and Jonathan Jordan University of Sheffield 22nd December 2006 Abstract We investigate the spectral properties

More information

Wigner s semicircle law

Wigner s semicircle law CHAPTER 2 Wigner s semicircle law 1. Wigner matrices Definition 12. A Wigner matrix is a random matrix X =(X i, j ) i, j n where (1) X i, j, i < j are i.i.d (real or complex valued). (2) X i,i, i n are

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

On pathwise stochastic integration

On pathwise stochastic integration On pathwise stochastic integration Rafa l Marcin Lochowski Afican Institute for Mathematical Sciences, Warsaw School of Economics UWC seminar Rafa l Marcin Lochowski (AIMS, WSE) On pathwise stochastic

More information

Lectures 2 3 : Wigner s semicircle law

Lectures 2 3 : Wigner s semicircle law Fall 009 MATH 833 Random Matrices B. Való Lectures 3 : Wigner s semicircle law Notes prepared by: M. Koyama As we set up last wee, let M n = [X ij ] n i,j= be a symmetric n n matrix with Random entries

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

From the mesoscopic to microscopic scale in random matrix theory

From the mesoscopic to microscopic scale in random matrix theory From the mesoscopic to microscopic scale in random matrix theory (fixed energy universality for random spectra) With L. Erdős, H.-T. Yau, J. Yin Introduction A spacially confined quantum mechanical system

More information

FREE PROBABILITY THEORY

FREE PROBABILITY THEORY FREE PROBABILITY THEORY ROLAND SPEICHER Lecture 4 Applications of Freeness to Operator Algebras Now we want to see what kind of information the idea can yield that free group factors can be realized by

More information

Notes on Linear Algebra and Matrix Theory

Notes on Linear Algebra and Matrix Theory Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a

More information

LECTURE 5: THE METHOD OF STATIONARY PHASE

LECTURE 5: THE METHOD OF STATIONARY PHASE LECTURE 5: THE METHOD OF STATIONARY PHASE Some notions.. A crash course on Fourier transform For j =,, n, j = x j. D j = i j. For any multi-index α = (α,, α n ) N n. α = α + + α n. α! = α! α n!. x α =

More information

The expansion of random regular graphs

The expansion of random regular graphs The expansion of random regular graphs David Ellis Introduction Our aim is now to show that for any d 3, almost all d-regular graphs on {1, 2,..., n} have edge-expansion ratio at least c d d (if nd is

More information

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Johannes Heiny University of Aarhus Joint work with Thomas Mikosch (Copenhagen), Richard Davis (Columbia),

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

2 Lebesgue integration

2 Lebesgue integration 2 Lebesgue integration 1. Let (, A, µ) be a measure space. We will always assume that µ is complete, otherwise we first take its completion. The example to have in mind is the Lebesgue measure on R n,

More information