Mesoscopic eigenvalue statistics of Wigner matrices


Mesoscopic eigenvalue statistics of Wigner matrices

Yukun He    Antti Knowles

May 26, 2017

We prove that the linear statistics of the eigenvalues of a Wigner matrix converge to a universal Gaussian process on all mesoscopic spectral scales, i.e. scales larger than the typical eigenvalue spacing and smaller than the global extent of the spectrum.

1. Introduction

Let H be an N × N Wigner matrix, i.e. a Hermitian random matrix with independent upper-triangular entries with zero expectation and constant variance. We normalize H so that as N → ∞ its spectrum converges to the interval [−2, 2], and therefore its typical eigenvalue spacing is of order N^{−1}. In this paper we study linear eigenvalue statistics of H of the form

    Tr f((H − E)/η),    (1.1)

where f is a test function, E ∈ (−2, 2) a fixed reference energy inside the bulk spectrum, and η an N-dependent spectral scale. We distinguish the macroscopic regime η ≍ 1, the microscopic regime η ≍ N^{−1}, and the mesoscopic regime N^{−1} ≪ η ≪ 1.

The limiting distribution of (1.1) in the macroscopic regime is by now well understood; see [2, 8]. Conversely, in the microscopic regime the limiting distribution of (1.1) is governed by the distribution of individual eigenvalues of H. This question has recently been the focus of much attention, and the universality of the emerging Wigner-Dyson-Mehta (WDM) microscopic eigenvalue statistics for Wigner matrices has been established in great generality; we refer to the surveys [9, 4] for further details.

In this paper we focus on the mesoscopic regime. The study of linear eigenvalue statistics of Wigner matrices on mesoscopic scales was initiated in [4, 5]. In [4], the authors consider the case of Gaussian H (the Gaussian Orthogonal Ensemble) and take f(x) = (x − i)^{−1}, in which case (1.1) is η times the trace of the resolvent of H at E + iη. Under these assumptions, it is proved in [4] that, after a centring, the linear statistic (1.1) converges in distribution to a Gaussian random variable on all mesoscopic scales N^{−1} ≪ η ≪ 1.
In [5] this result was extended to a class of Wigner matrices on the range of mesoscopic scales N^{−1/8} ≪ η ≪ 1. Recently, the results of [5] were extended in [7] to arbitrary Wigner matrices, mesoscopic scales N^{−1/3} ≪ η ≪ 1, and general test functions f subject to mild regularity and decay conditions. Apart from the works [5, 7] on Wigner matrices, mesoscopic eigenvalue statistics have also been analysed for invariant ensembles; see [6, 8] and the references therein.

ETH Zürich, Departement Mathematik. yukun.he@math.ethz.ch.
ETH Zürich, Departement Mathematik. knowles@math.ethz.ch.
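As a numerical illustration (not part of the original paper), the linear statistic (1.1) is easy to compute for a sampled GOE matrix; the choices of N, E, η, and the Gaussian test function below are assumptions made only for this sketch.

```python
import numpy as np

def sample_goe(n, rng):
    """Sample a real symmetric Wigner matrix normalized so that the
    spectrum converges to [-2, 2] (off-diagonal variance 1/n)."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

def linear_statistic(h, f, energy, eta):
    """The linear eigenvalue statistic Tr f((H - E)/eta) of (1.1)."""
    eigs = np.linalg.eigvalsh(h)
    return np.sum(f((eigs - energy) / eta))

rng = np.random.default_rng(0)
n = 1000
h = sample_goe(n, rng)
eigs = np.linalg.eigvalsh(h)
print("spectral edges:", eigs.min(), eigs.max())  # close to -2 and 2

# mesoscopic scale eta = N^{-1/2} at the bulk energy E = 0
f = lambda x: np.exp(-x**2)
val = linear_statistic(h, f, energy=0.0, eta=n**-0.5)
print("Tr f((H - E)/eta) =", val)
```

Since f ≥ 0 is concentrated on an O(1) window, the statistic counts roughly N·η·ϱ(0)·∫f, i.e. order Nη eigenvalues, consistent with the mesoscopic picture described above.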

Let Z = (Z(f))_f denote the Gaussian process obtained as the mesoscopic limit of a centring of (1.1). From the works cited above, it is known that the variance of Z(f) is the square of the Sobolev H^{1/2}-norm of f:

    E Z(f)² = (1/(2π²)) ∫∫ ((f(x) − f(y))/(x − y))² dx dy = (1/π) ∫ |ξ| |f̂(ξ)|² dξ,    (1.2)

where f̂(ξ) := (2π)^{−1/2} ∫ f(x) e^{−iξx} dx. Hence, a remarkable property of Z is scale invariance: Z equals Z^λ in distribution, where Z^λ(f) := Z(f^λ) and f^λ(x) := f(λx). It may be shown that Z is obtained by extrapolating the microscopic WDM statistics to mesoscopic scales, and we therefore refer to its behaviour as the WDM mesoscopic statistics. In light of the microscopic universality results for Wigner matrices mentioned above, the emergence of WDM statistics on mesoscopic scales is therefore not surprising.

All of the models described above, including Wigner matrices, correspond to mean-field models without spatial structure. In [10, 11], linear eigenvalue statistics were analysed for band matrices, where matrix entries are set to be zero beyond a certain distance W ≤ N from the diagonal. Band matrices are a commonly used model of quantum transport in disordered media. Wigner matrices can be regarded as the special case W = N of band matrices. Unlike the mean-field Wigner matrices, band matrices possess a nontrivial spatial structure. An important motivation for the study of mesoscopic eigenvalue statistics of band matrices arises from the theory of conductance fluctuations; we refer to [10] for more details. The results of [10, 11] hold in the regime W^{−1/3} ≪ η ≪ 1, and hence for the special case of Wigner matrices they hold for N^{−1/3} ≪ η ≪ 1.
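The two expressions for the variance in (1.2) can be checked against each other numerically. The following sketch (not from the paper) uses the illustrative choice f(x) = e^{−x²/2}, which equals its own Fourier transform under the unitary convention above, so that the Fourier-side integral evaluates to 1/π exactly; the truncation length L and the analytic tail correction are assumptions of this sketch.

```python
import numpy as np

# f(x) = exp(-x^2/2) is its own Fourier transform under
# fhat(xi) = (2*pi)^{-1/2} int f(x) e^{-i xi x} dx, so the right-hand
# side of (1.2) is (1/pi) int |xi| e^{-xi^2} dxi = 1/pi.
f = lambda x: np.exp(-x**2 / 2)
fp = lambda x: -x * np.exp(-x**2 / 2)

L, npts = 8.0, 1601
x = np.linspace(-L, L, npts)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
den = X - Y
# difference quotient (f(x)-f(y))/(x-y), diagonal replaced by its limit f'(x)
quot = np.where(den == 0.0, fp(X), (f(X) - f(Y)) / np.where(den == 0.0, 1.0, den))
box = np.sum(quot**2) * dx * dx

# analytic tail: for |x| > L we may take f(x) ~ 0; the two regions
# (|x| > L, |y| <= L) and its mirror each contribute
# int f(y)^2 (1/(L-y) + 1/(L+y)) dy
tail = 2 * np.sum(f(x)**2 * (1.0/(L + dx/2 - x) + 1.0/(L + dx/2 + x))) * dx

lhs = (box + tail) / (2 * np.pi**2)
rhs = 1 / np.pi
print(lhs, rhs)  # both approximately 0.3183
```

The slow 1/|x − y|² decay of the integrand is why a tail correction is needed; it also reflects the nonlocality of the H^{1/2}-norm.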
A key conclusion of [10, 11] is that for band matrices there is a sharp transition in the mesoscopic spectral statistics, predicted in the physics literature [1]: above a certain critical spectral scale η_c the mesoscopic spectral statistics are no longer governed by the Wigner-Dyson-Mehta mesoscopic statistics (1.2), but by new limiting statistics, referred to as Altshuler-Shklovskii (AS) statistics in [10, 11], which, unlike (1.2), are not scale invariant. For instance, for the d-dimensional (d = 1, 2, 3) AS statistics, the variance of the limiting Gaussian process Z(f) is (1/π) ∫ |ξ|^{d/2} |f̂(ξ)|² dξ instead of the right-hand side of (1.2); see [10, 11]. In particular, there is a range of mesoscopic scales such that, although the microscopic eigenvalue statistics are expected to satisfy the WDM statistics, the mesoscopic statistics do not, and instead satisfy the AS statistics. Hence, the WDM statistics on microscopic and mesoscopic scales do not in general come hand in hand.

In this paper we establish the WDM mesoscopic statistics for Wigner matrices in full generality. Our results hold on all mesoscopic scales N^{−1} ≪ η ≪ 1 and all Wigner matrices whose entries have finite moments of order 4 + o(1). We require our test functions to have 1 + o(1) continuous derivatives and to satisfy mild decay assumptions, as in [7]. The precise statements are given in Section 2 below.

Our proof is based on two main ingredients: families of self-consistent equations for moments of linear statistics inspired by [5], and the local semicircle law for Wigner matrices from [2, 6]. Our analysis of the self-consistent equations departs significantly from that of [5], since repeating the steps there, even using the optimal bounds provided by the local semicircle law, requires the lower bound η ≫ N^{−1/2} on the spectral scale.
In addition, dealing with general test functions f instead of f(x) = (x − i)^{−1} requires a new family of self-consistent equations that is combined with the Helffer-Sjöstrand representation for general functions of H.

We perform the proof in two major steps. In the first step, performed in Section 4, we consider traces of resolvents G ≡ G(z) = (H − z)^{−1}, corresponding to taking f(x) = (x − i)^{−1} in (1.1). Denoting by ⟨G⟩ the normalized trace of G and setting X̲ := X − EX, we derive a family of self-consistent equations (see (4.22) below) for the

moments E⟨G̲⟩^n⟨G̲*⟩^m following [5], obtained by expanding one factor inside the expectation using the resolvent identity and then applying a standard cumulant expansion (see Lemma 3.1 below) to the resulting expression of the form E f(h)h. The main work of the first step is to estimate the error terms of the self-consistent equation. An important ingredient is a careful estimate of the remainder term in the cumulant expansion (see Lemma 4.6 (i) below), which allows us to remove the condition η ≫ N^{−1/2} on the spectral scale that would be required if one merely combined the local semicircle law with the approach of [5]. Other important tools behind these estimates are new precise high-probability bounds on the entries of the powers G^k of the resolvent (see Lemma 4.4 below) and a further family of self-consistent equations for E⟨G^k⟩ (see Lemma 4.8 below).

In the second step, performed in Section 5, we consider general test functions f. The starting point is the well-known Helffer-Sjöstrand representation of (1.1) as an integral of traces of resolvents. An important ingredient of the proof is a self-consistent equation (see (5.2) below) that is used on the integrand of the Helffer-Sjöstrand representation. Compared to the first step, we face the additional difficulty that the arguments z of the resolvents are now integrated over, and may in particular have very small imaginary parts. Handling such integrals for arbitrary mesoscopic scales η and comparatively rough test functions in C^{1+o(1)} requires some care, and we use two different truncation scales N^{−ω} ≤ σ ≤ η in the imaginary part of z, which allow us to extract the leading term. See Section 5 for a more detailed explanation of the truncation scales. The error terms are estimated by a generalization of the estimates established in the first step.
Finally, in Section 6 we give a simple truncation and comparison argument that allows us to consider, without loss of generality, Wigner matrices whose entries have finite moments of all order, instead of finite moments of order 4 + o(1).

Conventions. We regard N as our fundamental large parameter. Any quantities that are not explicitly constant or fixed may depend on N; we almost always omit the argument N from our notation. We use C to denote a generic large positive constant, which may depend on some fixed parameters and whose value may change from one expression to the next. Similarly, we use c to denote a generic small positive constant.

2. Results

We begin this section by defining the class of random matrices that we consider.

Definition 2.1 (Wigner matrix). A Wigner matrix is a Hermitian N × N matrix H = H* ∈ C^{N×N} whose entries H_ij satisfy the following conditions.

(i) The upper-triangular entries (H_ij : 1 ≤ i ≤ j ≤ N) are independent.

(ii) We have EH_ij = 0 for all i, j, and E|√N H_ij|² = 1 for i ≠ j.

(iii) There exist constants c, C > 0 such that E|√N H_ij|^{4+c−2δ_ij} ≤ C for all i, j.

We distinguish the real symmetric case, where H_ij ∈ R for all i, j, and the complex Hermitian case, where EH_ij² = 0 for i ≠ j. For conciseness, we state our results for the real symmetric case. Analogous results hold for the complex Hermitian case; see Remark 2.4 below.
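A concrete non-Gaussian example of Definition 2.1 (a sketch, not from the paper) is a symmetric sign matrix: with √N H_ij = ±1 the entries are centred with unit variance and all moments finite, and the spectrum indeed concentrates on [−2, 2].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Symmetrized Bernoulli entries: sqrt(n) * H_ij = ±1, so E H_ij = 0 and
# E|sqrt(n) H_ij|^2 = 1 for i != j; all moments are finite, so the
# conditions of Definition 2.1 hold.
signs = rng.choice([-1.0, 1.0], size=(n, n))
h = np.triu(signs) / np.sqrt(n)
h = h + np.triu(h, 1).T  # real symmetric

eigs = np.linalg.eigvalsh(h)
print(eigs.min(), eigs.max())  # spectrum concentrates on [-2, 2]
```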

Our first result is on the convergence of the trace of the resolvent G(z) := (H − z)^{−1}, where Im z ≠ 0. The Stieltjes transform of the empirical spectral measure of H is

    ⟨G(z)⟩ := (1/N) Tr G(z).    (2.1)

For x ∈ R and z ∈ C with Im z ≠ 0, the Wigner semicircle law ϱ and its Stieltjes transform m are defined by

    ϱ(x) := (1/(2π)) √((4 − x²)₊),    m(z) := ∫ ϱ(x)/(x − z) dx.    (2.2)

Denote by H := {z ∈ C : Im z > 0} the complex upper half-plane. Let (Y(b))_{b∈H} denote the complex-valued Gaussian process with mean zero and covariance

    E( Y(b₁) \overline{Y(b₂)} ) = −2/(b₁ − \bar b₂)²,    E( Y(b₁) Y(b₂) ) = 0    (2.3)

for all b₁, b₂ ∈ H. For instance, we can set

    Y(b) = (1/√2) (2/(b + i))² Σ_{k=0}^∞ √(k+1) ((b − i)/(b + i))^k ϑ_k,    (2.4)

where (ϑ_k)_{k=0}^∞ is a family of independent standard complex Gaussians; by definition, a standard complex Gaussian is a mean-zero Gaussian random variable X satisfying EX² = 0 and E|X|² = 1. Finally, for E ∈ R and η > 0, we define the process (Ŷ(b))_{b∈H} through

    Ŷ(b) := Nη ( ⟨G(E + bη)⟩ − m(E + bη) )

for all b ∈ H. We may now state our first result.

Theorem 2.2 (Convergence of the resolvent). Let H be a real symmetric Wigner matrix. Fix α ∈ (0, 1) and set η := N^{−α}. Fix E ∈ (−2, 2). Then the process (Ŷ(b))_{b∈H} converges in the sense of finite-dimensional distributions to (Y(b))_{b∈H} as N → ∞. That is, for any fixed p and b₁, b₂, ..., b_p ∈ H, we have

    (Ŷ(b₁), ..., Ŷ(b_p)) →d (Y(b₁), ..., Y(b_p))    (2.5)

as N → ∞.

Our second result is on the convergence of traces of general functions of H. For fixed r, s > 0, denote by C^{1,r,s}(R) the space of all real-valued C¹-functions f such that f′ is r-Hölder continuous uniformly in x, and |f(x)| + |f′(x)| = O((1 + |x|)^{−s}). Let (Z(f))_{f∈C^{1,r,s}(R)} denote the real-valued Gaussian process with mean zero and covariance

    E( Z(f₁) Z(f₂) ) = (1/(2π²)) ∫∫ (f₁(x) − f₁(y))(f₂(x) − f₂(y))/(x − y)² dx dy    (2.6)

for all f₁, f₂ ∈ C^{1,r,s}(R) (see also (1.2)). Our next result is the weak convergence of the process

    Ẑ(f) := Tr f((H − E)/η) − N ∫_{−2}^{2} ϱ(x) f((x − E)/η) dx,    (2.7)

where f ∈ C^{1,r,s}(R). We may now state our second result.
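A Monte Carlo sketch (not from the paper) can cross-check the series representation (2.4) against the covariance (2.3) as reconstructed above: at b = 2i the covariance gives E|Y(b)|² = −2/(b − \bar b)² = 1/8, and E Y(b)² should vanish. The truncation level and sample size are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
m_samples = 50_000
kmax = 60  # series truncation; |(b - i)/(b + i)| < 1 ensures fast convergence

def sample_Y(b, size):
    """Monte Carlo samples of the Gaussian process (2.4) at a fixed b in H."""
    k = np.arange(kmax)
    w = (b - 1j) / (b + 1j)
    coef = (1 / np.sqrt(2)) * (2 / (b + 1j))**2 * np.sqrt(k + 1) * w**k
    # standard complex Gaussians: E theta^2 = 0, E|theta|^2 = 1
    theta = (rng.standard_normal((size, kmax))
             + 1j * rng.standard_normal((size, kmax))) / np.sqrt(2)
    return theta @ coef

b = 2j
Y = sample_Y(b, m_samples)
print(np.mean(np.abs(Y)**2))  # -2/(b - conj(b))^2 = 1/8
print(np.mean(Y**2))          # approximately 0
```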

Theorem 2.3 (Convergence of general test functions). Let H be a real symmetric Wigner matrix. Fix α ∈ (0, 1) and set η := N^{−α}. Fix E ∈ (−2, 2). Then the process (Ẑ(f))_{f∈C^{1,r,s}(R)} converges in the sense of finite-dimensional distributions to (Z(f))_{f∈C^{1,r,s}(R)} as N → ∞. That is, for any fixed p and f₁, f₂, ..., f_p ∈ C^{1,r,s}(R), we have

    (Ẑ(f₁), ..., Ẑ(f_p)) →d (Z(f₁), ..., Z(f_p))    (2.8)

as N → ∞.

Remark 2.4. In the complex Hermitian case, Theorems 2.2 and 2.3 remain true up to an additional factor 1/2 in the covariances. More precisely, if H is a complex Hermitian Wigner matrix then (2.5) is replaced by

    (Ŷ(b₁), ..., Ŷ(b_p)) →d (1/√2) (Y(b₁), ..., Y(b_p))    (2.9)

and (2.8) by

    (Ẑ(f₁), ..., Ẑ(f_p)) →d (1/√2) (Z(f₁), ..., Z(f_p)).    (2.10)

The minor modifications to the proof in the complex Hermitian case are given in Section 7 below.

3. Tools

The rest of this paper is devoted to the proofs of Theorems 2.2 and 2.3. In this section we collect notations and tools that are used throughout the paper.

Let M be an N × N matrix. We use the notations M^n_{ij} := (M_{ij})^n, M*^n := (M*)^n, and M*_{ij} := (M*)_{ij}, which equals the complex conjugate of M_{ji}. We denote by ‖M‖ the operator norm of M, and abbreviate ⟨M⟩ := (1/N) Tr M. It is easy to see that ‖G(E + iη)‖ ≤ η^{−1}. For σ > 0, we use N(0, σ²) to denote the real Gaussian random variable with mean zero and variance σ², and N_C(0, σ²), equal in distribution to σ N_C(0, 1), the complex Gaussian random variable with mean zero and variance σ². We abbreviate X̲ := X − EX for any random variable X with finite expectation. Finally, if h is a real-valued random variable with finite moments of all order, we denote by C_k(h) the kth cumulant of h, i.e.

    C_k(h) := (−i)^k ( ∂_λ^k log E e^{iλh} )|_{λ=0}.    (3.1)

We now state the cumulant expansion formula that is a central ingredient of the proof. The formula is analogous to the corresponding formula in [5], and its proof is obtained as a minor modification whose details we omit.

Lemma 3.1 (Cumulant expansion). Let h be a real-valued random variable with finite moments of all order, and f a complex-valued smooth function on R.
Then for any fixed l ∈ N we have

    E f(h)h = Σ_{k=0}^{l} (1/k!) C_{k+1}(h) E f^{(k)}(h) + R_{l+1},    (3.2)

provided all expectations in (3.2) exist. For any fixed τ > 0, the remainder term R_{l+1} satisfies

    R_{l+1} = O(1) · E[ |h|^{l+2} 1_{|h| > N^{τ−1/2}} ] · ‖f^{(l+1)}‖_∞ + O(1) · E|h|^{l+2} · sup_{|x| ≤ N^{τ−1/2}} |f^{(l+1)}(x)|.    (3.3)
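For Gaussian h all cumulants beyond the second vanish, so (3.2) with l = 1 and standard h reduces to Stein's identity E f(h)h = E f′(h) (since C₁ = 0 and C₂ = 1). The following quadrature check (an illustration, not from the paper) verifies this for f = sin, where both sides equal e^{−1/2}.

```python
import numpy as np

# Gauss-Hermite quadrature for the standard Gaussian: numpy's
# "probabilists'" Hermite nodes integrate against the weight e^{-x^2/2}.
nodes, weights = np.polynomial.hermite_e.hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)  # normalize to a probability measure

def expect(g):
    return np.sum(weights * g(nodes))

f = np.sin
fp = np.cos

lhs = expect(lambda x: f(x) * x)  # E[f(h) h]
rhs = expect(fp)                  # C_2(h) * E[f'(h)] with C_2 = 1
print(lhs, rhs)  # both equal e^{-1/2}, approximately 0.6065
```

For non-Gaussian h the higher-cumulant terms in (3.2) are exactly what produces the correction terms K and L in Section 4.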

The bulk of the proof is performed on Wigner matrices satisfying a stronger condition than Definition 2.1 (iii), namely having entries with finite moments of all order.

Definition 3.2. We consider the subset of Wigner matrices obtained from Definition 2.1 by replacing (iii) with

(iii′) For each p ∈ N there exists a constant C_p such that E|√N H_ij|^p ≤ C_p for all N, i, j.

We focus on Wigner matrices satisfying Definition 3.2 until Section 6, where we explain how to relax the condition (iii′) to (iii) using a Green function comparison argument; see Section 6 for more details.

We shall deduce Theorem 2.3 from Theorem 2.2 using the Helffer-Sjöstrand formula [7], which is summarized in the following result whose standard proof we omit.

Lemma 3.3 (Helffer-Sjöstrand formula). Let f ∈ C^{1,r,s}(R) with some r, s > 0. Let f̃ be the almost analytic extension of f defined by

    f̃(x + iy) := f(x) + i( f(x + y) − f(x) ).    (3.4)

If f is further in C²(R), we can also set

    f̃(x + iy) := f(x) + iy f′(x).    (3.5)

Let χ ∈ C_c^∞(R) be a cutoff function satisfying χ(0) = 1, and by a slight abuse of notation write χ(z) ≡ χ(Im z). Then for any λ ∈ R we have

    f(λ) = (1/π) ∫_C ∂_z̄( f̃(z)χ(z) ) / (λ − z) d²z,    (3.6)

where ∂_z̄ := (1/2)(∂_x + i∂_y) is the antiholomorphic derivative and d²z the Lebesgue measure on C.

The following definition introduces a notion of a high-probability bound that is suited for our purposes. It was introduced (in a more general form) in [2].

Definition 3.4 (Stochastic domination). Let

    X = ( X^{(N)}(u) : N ∈ N, u ∈ U^{(N)} ),    Y = ( Y^{(N)}(u) : N ∈ N, u ∈ U^{(N)} )

be two families of nonnegative random variables, where U^{(N)} is a possibly N-dependent parameter set. We say that X is stochastically dominated by Y, uniformly in u, if for all (small) ε > 0 and (large) D > 0 we have

    sup_{u ∈ U^{(N)}} P[ X^{(N)}(u) > N^ε Y^{(N)}(u) ] ≤ N^{−D}    (3.7)

for large enough N ≥ N₀(ε, D). If X is stochastically dominated by Y, we use the notation X ≺ Y.
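The representation (3.6) with the C² extension (3.5) can be verified numerically for a scalar λ. In the sketch below (not from the paper; the Gaussian f, the cosine cutoff, and the grid are illustrative assumptions), one uses ∂_z̄(f̃χ) = (i/2) y f″(x) χ(y) + (i/2) f̃(x + iy) χ′(y), which vanishes linearly in y near the real axis, so the integrand stays bounded at z = λ.

```python
import numpy as np

f = lambda x: np.exp(-x**2)
f1 = lambda x: -2 * x * np.exp(-x**2)
f2 = lambda x: (4 * x**2 - 2) * np.exp(-x**2)

def chi(y):
    """Smooth even cutoff: 1 for |y| <= 1, 0 for |y| >= 2."""
    a = np.abs(y)
    return np.where(a <= 1.0, 1.0,
                    np.where(a >= 2.0, 0.0, np.cos(np.pi * (a - 1) / 2) ** 2))

def chi_prime(y):
    a = np.abs(y)
    mid = (a > 1.0) & (a < 2.0)
    out = np.zeros_like(y)
    th = np.pi * (a[mid] - 1) / 2
    out[mid] = -np.pi * np.cos(th) * np.sin(th) * np.sign(y[mid])
    return out

lam = 0.5
x = np.linspace(-6, 6, 1201)
y = np.linspace(-2.0, 2.0, 800)  # even count: the grid avoids y = 0 exactly
X, Yg = np.meshgrid(x, y, indexing="ij")
Z = X + 1j * Yg

# dbar(ftilde * chi) with ftilde = f + i*y*f'
dbar = 0.5j * Yg * f2(X) * chi(Yg) + 0.5j * (f(X) + 1j * Yg * f1(X)) * chi_prime(Yg)

integrand = dbar / (lam - Z)
dx = x[1] - x[0]
dy = y[1] - y[0]
approx = (integrand.sum() * dx * dy / np.pi).real
print(approx, f(lam))  # both approximately exp(-0.25) = 0.7788
```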
The stochastic domination will always be uniform in all parameters, such as z and matrix indices, that are not explicitly constant. We conclude this section with the local semicircle law for Wigner matrices from [2, 6]. For a recent survey of the local semicircle law, see [3], where the following version of the local semicircle law is stated.

Theorem 3.5 (Local semicircle law). Let H be a Wigner matrix satisfying Definition 3.2, and define the spectral domain

    S := { E + iη : |E| ≤ 10, 0 < η ≤ 10 }.

Then we have the bounds

    max_{i,j} |G_ij(z) − δ_ij m(z)| ≺ √( Im m(z) / (Nη) ) + 1/(Nη)    (3.8)

and

    |⟨G(z)⟩ − m(z)| ≺ 1/(Nη),    (3.9)

uniformly in z = E + iη ∈ S. Moreover, outside the spectral domain we have the stronger estimates

    max_{i,j} |G_ij(z) − δ_ij m(z)| ≺ 1/√N    (3.10)

and

    |⟨G(z)⟩ − m(z)| ≺ 1/N,    (3.11)

uniformly in z ∈ H \ S.

4. Convergence of the resolvent

In this section we prove the following weaker form of Theorem 2.2.

Theorem 4.1. Theorem 2.2 holds for Wigner matrices H satisfying Definition 3.2, and the convergence also holds in the sense of moments.

For the statements of the following results we abbreviate G ≡ G(E + iη) and [G] := ⟨G⟩ − m(E + iη). The following result is a special case of Theorem 4.1.

Proposition 4.2. Under the assumptions of Theorem 4.1 we have

    Nη [G] →d N_C(0, 1/2)    (4.1)

as N → ∞. The convergence also holds in the sense of moments.

The main work in this section is to prove the one-dimensional case, Proposition 4.2, whose proof can easily be extended to the general case of Theorem 4.1 (see Section 4.4 below). Recall the notation X̲ := X − EX. Proposition 4.2 is a direct consequence of the following lemma.

Lemma 4.3. Under the assumptions of Theorem 4.1 the following holds.

(i) For fixed m, n ∈ N satisfying m + n ≥ 2 we have

    E⟨G̲⟩^n⟨G̲*⟩^m = (n!/2^n) N^{2n(α−1)} + O(N^{2n(α−1)−c₀}) if m = n,
    E⟨G̲⟩^n⟨G̲*⟩^m = O(N^{(m+n)(α−1)−c₀}) if m ≠ n,    (4.2)

where

    c₀ ≡ c₀(α) := (1/3) min{α, 1 − α}.    (4.3)
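The averaged bound (3.9) is easy to probe numerically. In this sketch (not from the paper; N, E, and η are illustrative choices), m(z) is computed as the root of m² + zm + 1 = 0 with Im m > 0, and |⟨G(z)⟩ − m(z)| is compared with 1/(Nη).

```python
import numpy as np

def m_sc(z):
    """Stieltjes transform of the semicircle law: the root of
    m^2 + z*m + 1 = 0 with Im m(z) > 0 for Im z > 0."""
    r = np.sqrt(z * z - 4)  # principal branch
    m = (-z + r) / 2
    if m.imag < 0:
        m = (-z - r) / 2
    return m

rng = np.random.default_rng(3)
n = 2000
a = rng.standard_normal((n, n))
h = (a + a.T) / np.sqrt(2 * n)
eigs = np.linalg.eigvalsh(h)

energy, eta = 0.3, n ** -0.6  # a mesoscopic scale inside the bulk
z = energy + 1j * eta
g_trace = np.mean(1.0 / (eigs - z))  # <G(z)>
m = m_sc(z)

assert abs(m * m + z * m + 1) < 1e-10  # self-consistency of m(z)
err = abs(g_trace - m)
print(err, 1.0 / (n * eta))  # err is at most of order 1/(N*eta)
```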

(ii) We have

    [G] − ⟨G̲⟩ = E⟨G⟩ − m(E + iη) = O(N^{α−1−c₀}),    (4.4)

with c₀ defined in (4.3).

As advertised, Proposition 4.2 follows immediately from Lemma 4.3. Indeed, suppose that Lemma 4.3 holds. From (4.2) we find that N^{1−α}⟨G̲⟩ converges to N_C(0, 1/2) in the sense of moments, and hence also in distribution. Proposition 4.2 therefore follows from (4.4). The bulk of this section, Sections 4.1-4.3, is devoted to the proof of Lemma 4.3.

4.1. Preliminary estimates on G. We begin with estimates on the entries of G^k. For k ≥ 2, the bounds provided by the following lemma are significantly better than those obtained for G^k by applying the estimate (3.8) to each entry of the matrix product. For instance, a straightforward application of (3.8) yields |(G^k)_ij| ≺ N^{k(1+α)/2−1}, which is not enough to conclude the proof of Lemma 4.3. The following result yields bounds that grow more slowly with k and in addition provide extra smallness for the off-diagonal entries of G^k. Both of these features are necessary for the proof of Lemma 4.3.

Lemma 4.4. Under the conditions of Theorem 4.1, for any fixed k ≥ 1 we have

    ⟨G̲^k⟩ ≺ N^{kα−1},    (4.5)

where ⟨G̲^k⟩ := ⟨G^k⟩ − E⟨G^k⟩, as well as

    |(G^k)_ij| ≺ N^{(k−1)α} if i = j,    |(G^k)_ij| ≺ N^{(k−1/2)α−1/2} if i ≠ j,    (4.6)

uniformly in i, j.

Proof. We first prove (4.5). The case k = 1 is easy. Indeed, from (3.9) we get

    ⟨G̲⟩ ≺ |[G]| + |E[G]| ≺ N^{α−1}    (4.7)

as desired, where we used the definition of ≺ combined with the trivial bound |⟨G⟩| ≤ η^{−1} = N^α to estimate E[G]. Next, for k ≥ 2 we write

    G^k = ( ((H − E)/η + i) / ((H − E)²/η² + 1) )^k η^{−k} = f( (H − E)/η ) η^{−k},    (4.8)

where we defined f(x) := ( (x + i)/(x² + 1) )^k = (x − i)^{−k}. Note that f : R → C is smooth, and for any n ∈ N we have f^{(n)}(x) = O((1 + |x|)^{−2}). We define f̃ as in (3.5) and let χ be as in Lemma 3.3 with χ(y) = 1 for |y| ≤ 1. Writing f_η(x) := f((x − E)/η), we obtain from Lemma 3.3 that

    f_η(H) = (1/π) ∫_C ∂_z̄( f̃_η(z) χ(z/η) ) (H − z)^{−1} d²z,

so that

    ⟨G̲^k⟩ = (1/(2πη^k)) ∫_{R²} ( iy f_η″(x) χ(y/η) + (i/η) f_η(x) χ′(y/η) − (y/η) f_η′(x) χ′(y/η) ) ⟨G̲(x + iy)⟩ dx dy.    (4.9)

In order to estimate the right-hand side, we use (3.9) and (3.11) to obtain

    ⟨G̲(x + iy)⟩ ≺ 1/(Ny)    (4.10)

uniformly for y ∈ (0, 1) and x ∈ R. Hence,

    | (1/(2πη^k)) ∫_{R²} iy f_η″(x) χ(y/η) ⟨G̲(x + iy)⟩ dx dy | ≺ (1/(2πη^k)) ∫_{R²} (1/N) |f_η″(x)| χ(y/η) dx dy = O(1) · 1/(Nη^k).    (4.11)

(Note that the use of stochastic domination inside the integral requires some justification. In fact, we use that a high-probability bound of the form (4.10) holds simultaneously for all x and y; we refer to [3, Remark 2.7 and Lemma 10.2] for further details.) Similarly, by our choice of χ, we find

    | (1/(2πη^k)) ∫_{R²} (i/η) f_η(x) χ′(y/η) ⟨G̲(x + iy)⟩ dx dy | ≺ (1/(2πη^k)) ∫_{R²} (1/(Nη)) |f_η(x)| |χ′(y/η)| dx dy = O(1) · 1/(Nη^k).

An analogous estimate yields

    | (1/(2πη^k)) ∫_{R²} (y/η) f_η′(x) χ′(y/η) ⟨G̲(x + iy)⟩ dx dy | ≺ 1/(Nη^k).

Altogether we have ⟨G̲^k⟩ ≺ N^{kα−1}, which is (4.5).

Next, we prove (4.6). For k = 1, we use the well-known bound |m(z)| ≤ 1, which follows by an elementary estimate from the fact that m is the unique solution of

    m(z) + 1/m(z) + z = 0    (4.12)

satisfying Im m(z) Im z > 0. Thus by (3.8) we have

    |G_ij| ≺ 1 if i = j,    |G_ij| ≺ N^{(α−1)/2} if i ≠ j,    (4.13)

which is (4.6) for k = 1. The extension to k ≥ 2 follows again using Lemma 3.1, and we omit the details.

Lemma 4.4 is very useful for estimating expectations involving entries of G, in combination with the following elementary result about stochastic domination.

Lemma 4.5. (i) If X₁ ≺ Y₁ and X₂ ≺ Y₂ then X₁X₂ ≺ Y₁Y₂. (ii) Suppose that X is a nonnegative random variable satisfying X ≤ N^C and X ≺ Φ for some deterministic Φ ≥ N^{−C}. Then EX ≺ Φ.

4.2. Proof of Lemma 4.3 (i). Abbreviating ζ_i := E|√N H_ii|², we find from Definition 3.2 (iii′) that ζ_i = O(1). We write z := E + iη and often omit the argument z from our notation. Note that G = (H − z)^{−1} and G* = (H − z̄)^{−1}. In particular, Theorem 3.5 also holds for G* with obvious modifications accounting for the different sign of η. For m, n ≥ 1, we need to compute

    E⟨G̲⟩^n⟨G̲*⟩^m = E⟨G̲*⟩^{m−1}⟨G̲⟩^n⟨G̲*⟩.    (4.14)

By the resolvent identity we have z̄G* = G*H − I, so that z̄⟨G̲*⟩ = \underline{⟨G*H⟩} and

    E⟨G̲⟩^n⟨G̲*⟩^m = (1/z̄) E⟨G̲⟩^n⟨G̲*⟩^{m−1}\underline{⟨G*H⟩} = (1/(z̄N)) Σ_{i,j} E⟨G̲⟩^n⟨G̲*⟩^{m−1}\underline{G*_{ij} H_{ji}}.    (4.15)

Since H is symmetric, for any differentiable f = f(H) we set

    (∂/∂H_{ij}) f(H) := (d/dt) f( H + tΔ^{(ij)} ) |_{t=0},    (4.16)

where Δ^{(ij)} denotes the symmetric matrix whose entries vanish everywhere except at the sites (i, j) and (j, i):

    Δ^{(ij)}_{kl} := (δ_{ik}δ_{jl} + δ_{jk}δ_{il})(1 + δ_{ij})^{−1}.

We then compute the last average in (4.15) using the formula (3.2) with f = f_{ij}(H) := ⟨G̲⟩^n⟨G̲*⟩^{m−1} G*_{ij} and h = H_{ji}, and obtain

    z̄ E⟨G̲⟩^n⟨G̲*⟩^m = (1/N²) Σ_{i,j} E[ ∂f_{ij}/∂H_{ji} ] (1 + δ_{ji}(ζ_i − 1)) + L =: (a) + (b) + K + L,    (4.17)

where

    (a) := (1/N²) Σ_{i,j} E[ ⟨G̲⟩^n⟨G̲*⟩^{m−1} ∂G*_{ij}/∂H_{ji} ] (1 + δ_{ji}),
    (b) := (1/N²) Σ_{i,j} E[ ∂(⟨G̲⟩^n⟨G̲*⟩^{m−1})/∂H_{ji} · G*_{ij} ] (1 + δ_{ji}),

    K := (1/N²) Σ_i E[ ∂(⟨G̲⟩^n⟨G̲*⟩^{m−1} G*_{ii})/∂H_{ii} ] (ζ_i − 2),    (4.18)

and

    L := (1/N) Σ_{i,j} [ Σ_{k=2}^{l} (1/k!) C_{k+1}(H_{ji}) E[ ∂^k f_{ij}/∂H_{ji}^k ] + R^{(ji)}_{l+1} ].    (4.19)

Here l is a fixed positive integer to be chosen later, and R^{(ji)}_{l+1} is a remainder term defined analogously to R_{l+1} in (3.2). More precisely, we have the bound

    R^{(ji)}_{l+1} = O(1) E[ |H_{ji}|^{l+2} 1_{|H_{ji}| > N^{τ−1/2}} ] ‖∂^{l+1}_{ji} f_{ij}‖_∞ + O(1) E|H_{ji}|^{l+2} · E sup_{|x| ≤ N^{τ−1/2}} |∂^{l+1}_{ji} f_{ij}( H^{(ij)} + xΔ^{(ij)} )|,    (4.20)

where we define H^{(ij)} := H − H_{ij}Δ^{(ij)}, so that the matrix H^{(ij)} has zero entries at the positions (i, j) and (j, i), and abbreviate ∂_{ij} := ∂/∂H_{ij}. Note that for G = (H − z)^{−1} we have

    ∂G_{ij}/∂H_{kl} = −( G_{ik}G_{lj} + G_{il}G_{kj} )(1 + δ_{kl})^{−1},    (4.21)

which gives

    (a) = −(1/N) E⟨G̲⟩^n⟨G̲*⟩^{m−1}⟨G*²⟩ − E⟨G̲⟩^n⟨G̲*⟩^{m−1}⟨G*⟩².

Similarly, a straightforward calculation gives

    (b) = −(2/N²) [ n E⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1}⟨G²G*⟩ + (m − 1) E⟨G̲⟩^n⟨G̲*⟩^{m−2}⟨G*³⟩ ].

Altogether we obtain

    E⟨G̲⟩^n⟨G̲*⟩^m = −(1/T) E⟨G̲⟩^n⟨G̲*⟩^{m+1} − (1/T) E⟨G̲⟩^n⟨G̲*⟩^{m−1} E⟨G̲*⟩² + (1/(TN)) E⟨G̲⟩^n⟨G̲*⟩^{m−1}⟨G*²⟩
        + ((2m − 2)/(N²T)) E⟨G̲⟩^n⟨G̲*⟩^{m−2}⟨G*³⟩ − K/T − L/T + (2n/(N²T)) E⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1}⟨G²G*⟩,    (4.22)

where T := z + 2E⟨G⟩. From (3.9), (4.12), and Lemma 4.5 it is easy to see that

    1/T = O(1),    (4.23)

and the implicit constant depends only on the distance to the spectral edge

    κ := 2 − |E|.    (4.24)

In (4.22), the last term is the leading one. The analysis of (4.22) thus consists of computing the leading term and estimating the subleading terms. We aim to show that the subleading terms are of order N^{(m+n)(α−1)−c₀}. We begin with L. For k ≥ 2 define

    J_k := N^{−(k+3)/2} Σ_{i,j} | E[ ∂^k( ⟨G̲⟩^n⟨G̲*⟩^{m−1} G*_{ij} )/∂H_{ji}^k ] |.    (4.25)

Lemma 4.6. Let R^{(ji)}_{l+1} be as in (4.19).

(i) For any fixed D₀ > 0 there exists some l₀ = l₀(D₀) ≥ 2 such that

    R^{(ji)}_{l₀+1} = O(N^{−D₀}).    (4.26)

(ii) For all fixed k ≥ 2 we have

    J_k = O(N^{(m+n)(α−1)−c₀}),    (4.27)

where c₀ is defined in (4.3).
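The differentiation rule for resolvent entries of a symmetric matrix, ∂G_ij/∂H_kl = −(G_ik G_lj + G_il G_kj)(1 + δ_kl)^{−1} with the symmetric perturbation convention above, is easy to check by finite differences. The following sketch (not from the paper; the matrix size, z, and index choices are illustrative) does exactly that.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
a = rng.standard_normal((n, n))
H = (a + a.T) / 2
z = 0.2 + 0.5j

def G_of(Hm):
    return np.linalg.inv(Hm - z * np.eye(n))

def Delta(k, l):
    """Symmetric direction matrix with entries (1+delta_kl)^{-1} at
    the sites (k, l) and (l, k); for k = l the (k, k) entry is 1."""
    D = np.zeros((n, n))
    D[k, l] += 1.0 / (1.0 + (k == l))
    D[l, k] += 1.0 / (1.0 + (k == l))
    return D

G = G_of(H)
t = 1e-6
for (i, j, k, l) in [(0, 3, 2, 5), (1, 1, 4, 4), (0, 2, 2, 0)]:
    numeric = (G_of(H + t * Delta(k, l))[i, j] - G[i, j]) / t
    exact = -(G[i, k] * G[l, j] + G[i, l] * G[k, j]) / (1.0 + (k == l))
    assert abs(numeric - exact) < 1e-4
print("resolvent derivative formula verified")
```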

Before proving Lemma 4.6, we show how to use it to estimate L. By setting D₀ = (m + n)(1 − α) + 2 in (4.26), we obtain

    R^{(ji)}_{l₀+1} = O(N^{(m+n)(α−1)−2})    (4.28)

for some l₀ ≥ 2. From Definition 3.2 (iii′) we get max_{i,j} |C_k(H_{ji})| = O(N^{−k/2}) for all k ≥ 2. Thus (4.27) and (4.28) together imply

    L = O(N^{(m+n)(α−1)−c₀}),    (4.29)

as desired.

Proof of Lemma 4.6 (i). Let D₀ > 0 be given. Fix i, j, and choose τ = min{α/2, (1 − α)/4} in (4.20). Define W := H_{ij}Δ^{(ij)} and Ĥ := H^{(ij)} = H − W, and let Ĝ := (Ĥ − E − iη)^{−1}. We have the resolvent expansions

    Ĝ = G + (GW)G + (GW)²Ĝ    (4.30)

and

    G = Ĝ − (ĜW)Ĝ + (ĜW)²G.    (4.31)

Note that only two entries of W are nonzero, and they are stochastically dominated by N^{−1/2}. Then the trivial bound max_{a,b} |Ĝ_{ab}| ≤ N^α together with (4.31) and (4.30) shows that max_{a≠b} |Ĝ_{ab}| ≺ N^{(α−1)/2} and max_a |Ĝ_{aa}| ≺ 1. Combining this with (4.31), the trivial bound max_{a,b} |G_{ab}| ≤ N^α, and the fact that Ĝ is independent of W, we get

    max_{a≠b} sup_{|x| ≤ N^{τ−1/2}} |G_{ab}(Ĥ + xΔ^{(ij)})| ≺ N^{(α−1)/2}    (4.32)

and

    max_a sup_{|x| ≤ N^{τ−1/2}} |G_{aa}(Ĥ + xΔ^{(ij)})| ≺ 1.    (4.33)

Now let us estimate the last term in (4.20). We have the derivatives

    ∂_{ji} G_{ab} = −( G_{aj}G_{ib} + G_{ai}G_{jb} )(1 + δ_{ji})^{−1},
    ∂_{ji} ⟨G⟩ = −(2/N)(G²)_{ji}(1 + δ_{ji})^{−1},
    ∂_{ji} (G²)_{ab} = −( (G²)_{aj}G_{ib} + (G²)_{ai}G_{jb} + (G²)_{bj}G_{ia} + (G²)_{bi}G_{ja} )(1 + δ_{ji})^{−1},

where ∂_{ij} := ∂/∂H_{ij}. Hence for any fixed l ∈ N, ∂^{l+1}_{ji} f_{ij} is a polynomial in the variables ⟨G̲⟩, ⟨G̲*⟩, N^{−1}(G²)_{ab}, N^{−1}(G*²)_{ab}, G_{ab}, and G*_{ab}, with a, b ∈ {i, j}. Note that in each term of the polynomial, the sum of the degrees of ⟨G̲⟩, ⟨G̲*⟩, N^{−1}(G²)_{ab}, and N^{−1}(G*²)_{ab} is m + n − 1, so that the product of the factors other than G_{ab} and G*_{ab} is trivially bounded by O(N^{(m+n−1)α}) for all H. Together with (4.32) and (4.33) we therefore know, for any fixed l ∈ N,

    sup_{|x| ≤ N^{τ−1/2}} |∂^{l+1}_{ji} f_{ij}(Ĥ + xΔ^{(ij)})| ≺ N^{(m+n−1)α}.

Note that E|H_{ji}|^{l+2} = O(N^{−(l+2)/2}), and we can find l₀ = l₀(D₀, m, n) ≥ 2 such that

    E|H_{ji}|^{l₀+2} · E sup_{|x| ≤ N^{τ−1/2}} |∂^{l₀+1}_{ji} f_{ij}(Ĥ + xΔ^{(ij)})| = O(N^{−(D₀+2)}).    (4.34)

Finally, we estimate the first term of (4.20). Note that by the trivial bound |G_{ij}| ≤ N^α we have ‖∂^{l₀+1}_{ji} f_{ij}(H)‖_∞ = O(N^{C(m,n,l₀)}). From Definition 3.2 (iii′) we find max_{i,j} |H_{ij}| ≺ N^{−1/2}, and then by Hölder's inequality we have

    E[ |H_{ji}|^{l₀+2} 1_{|H_{ji}| > N^{τ−1/2}} ] ‖∂^{l₀+1}_{ji} f_{ij}(H)‖_∞ = O(N^{−(D₀+2)}).    (4.35)

Combining (4.34) and (4.35), we obtain from (4.20) that R^{(ji)}_{l₀+1} = O(N^{−(D₀+2)}), from which (4.26) follows.

Proof of Lemma 4.6 (ii). We begin with the case k = 2, which gives rise to terms of three types depending on how many derivatives act on G*_{ij}. We deal with each type separately.

Step 1. The first type is

    J_{2,1} := N^{−5/2} Σ_{i,j} | E[ ⟨G̲⟩^n⟨G̲*⟩^{m−1} ∂²G*_{ij}/∂H_{ji}² ] |.

Note that

    ∂²G_{ij}/∂H_{ji}² = a₁ G_{ii}G_{jj}G_{ij} + a₂ G_{ij}³,

where a₁, a₂ are constants depending on the value of δ_{ji}. Together with (4.6) and Lemma 4.5, we find

    J_{2,1} ≤ N^{−5/2} · N² · E|⟨G̲⟩^n⟨G̲*⟩^{m−1}| · O(N^{(α−1)/2+ε}) = O(N^{α/2−1+ε}) E|⟨G̲⟩^n⟨G̲*⟩^{m−1}|

for any fixed ε > 0. Together with (4.5) and Lemma 4.5, we find

    J_{2,1} = O(N^{α/2−1+ε+(m+n−1)(α−1)}) = O(N^{(m+n)(α−1)−c₀}),    (4.36)

where in the last step we chose ε small enough depending on α.

Step 2. The second type is

    J_{2,2} := N^{−5/2} Σ_{i,j} | E[ ∂²(⟨G̲⟩^n⟨G̲*⟩^{m−1})/∂H_{ji}² · G*_{ij} ] |.    (4.37)

Since

    ∂⟨G⟩/∂H_{ji} = a₃ N^{−1}(G²)_{ij},    ∂²⟨G⟩/∂H_{ji}² = a₄ N^{−1}(G²)_{ii}G_{jj} + a₅ N^{−1}(G²)_{jj}G_{ii} + a₆ N^{−1}(G²)_{ij}G_{ij}

for some constants a₃, ..., a₆, we see that the most dangerous term of J_{2,2} is of the form

    P_{2,2} := N^{−7/2} Σ_{i,j} | E[ ⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1} (G²)_{ii} G_{jj} G*_{ij} ] |.    (4.38)

By (4.5), (4.6), and Lemma 4.5 we have

    P_{2,2} = O(N^{−7/2+2+(m+n−2)(α−1)+α+(α−1)/2+ε}) = O(N^{(m+n)(α−1)−α/2+ε})

for any fixed ε > 0. The other terms of J_{2,2} are estimated similarly. By choosing ε small enough, we obtain

    J_{2,2} = O(N^{(m+n)(α−1)−c₀}).    (4.39)

Step 3. The third type is

    J_{2,3} := N^{−5/2} Σ_{i,j} | E[ ∂(⟨G̲⟩^n⟨G̲*⟩^{m−1})/∂H_{ji} · ∂G*_{ij}/∂H_{ji} ] |.    (4.40)

The most dangerous term in J_{2,3} is of the form

    P_{2,3} := N^{−7/2} Σ_{i,j} | E[ ⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1} (G²)_{ij} G*_{ii} G*_{jj} ] |.    (4.41)

Again by (4.5), (4.6), and Lemma 4.5 we have

    P_{2,3} = O(N^{−7/2+2+(m+n−2)(α−1)+3α/2−1/2+ε}) = O(N^{(m+n)(α−1)−c₀})

for any fixed ε > 0. The other terms of J_{2,3} are estimated similarly. Thus we get

    J_{2,3} = O(N^{(m+n)(α−1)−c₀}).

Step 4. Putting the estimates of Steps 1-3 together, we find J₂ = O(N^{(m+n)(α−1)−c₀}), which concludes the proof of (4.27) for k = 2.

For k ≥ 3, the estimates are easier than for k = 2 because of the small prefactor N^{−(k+3)/2} in the definition of J_k. Analogously to the case k = 2, we obtain, for any fixed k ≥ 3 and ε > 0,

    J_k = O(N^{(m+n)(α−1)+1−k/2+ε}),

from which (4.27) follows. We omit further details.

Now we look at the term K defined in (4.18), whose estimate is contained in the next lemma.

Lemma 4.7. We have

    K = O(N^{(m+n)(α−1)−α/2}).    (4.42)

Proof. Let us first consider

    B := N^{−2} Σ_i E[ ∂(⟨G̲⟩^n⟨G̲*⟩^{m−1} G*_{ii})/∂H_{ii} ].    (4.43)

The estimate of B is similar to that of J₂: we have terms of two types, depending on whether the derivative acts on G*_{ii} or not. Estimating these terms by Lemmas 4.4 and 4.5 easily yields B = O(N^{(m+n)(α−1)−α+ε}) = O(N^{(m+n)(α−1)−α/2}). From Definition 3.2 (iii′) we get max_i |ζ_i − 2| = O(1), hence K = O(B). This finishes the proof.

In order to conclude the proof, we need to use that the expectation of ⟨G^k⟩ is typically much smaller than ⟨G^k⟩ itself. Lemma 4.4 implies that E⟨G^k⟩ ≺ N^{(k−1)α}, which is not enough to conclude the proof. We need some extra decay of the expectation, which is provided by the following result.

Lemma 4.8. Let c₀ be defined as in (4.3). We have

    E⟨G^k⟩ = O(N^{(k−1)α−c₀})    (4.44)

for k = 2, 3.

Proof. The proof is analogous to that of Lemma 4.6. Let us first consider E⟨G²⟩. Again by the resolvent identity and the cumulant expansion, we arrive at

    E⟨G²⟩ = −(1/T) E⟨G⟩ + (2/T) E⟨G̲⟩⟨G̲²⟩ + (1/(TN)) E⟨G³⟩ − K^{(2)}/T − L^{(2)}/T,    (4.45)

where

    K^{(2)} = (1/N²) Σ_i E[ ∂(G²)_{ii}/∂H_{ii} ] (ζ_i − 2)    (4.46)

and

    L^{(2)} = (1/N) Σ_{i,j} [ Σ_{k=2}^{l} (1/k!) C_{k+1}(H_{ji}) E[ ∂^k(G²)_{ij}/∂H_{ji}^k ] + R^{(2,ji)}_{l+1} ],    (4.47)

and we recall the definition (4.23) of T. Here R^{(2,ji)}_{l+1} is a remainder term defined analogously to R^{(ji)}_{l+1} in (4.20). We can argue as in the proof of Lemma 4.6 (i) and show that R^{(2,ji)}_{l₀+1} = O(N^{−1}) for some l₀ ∈ N. Thus we have L^{(2)} ≺ Σ_{k=2}^{l₀} O(J_k^{(2)}) + O(N^{−1}), where

    J_k^{(2)} := N^{−(k+3)/2} Σ_{i,j} | E[ ∂^k(G²)_{ij}/∂H_{ji}^k ] |.    (4.48)

Analogously to the proof of Lemma 4.6 (ii), we find

    J_2^{(2)} = O(N^{−5/2} · N² · N^{α+ε}) = O(N^{α−1/2+ε})

for any fixed ε > 0, and J_k^{(2)} = O(N^{α−1/2}) for k ≥ 3. This shows L^{(2)} = O(N^{α−1/2+ε}) for any fixed ε > 0. Similarly as in Lemma 4.7, one can show K^{(2)} = O(N^{−α/2}). By Lemmas 4.4 and 4.5, we have

    E⟨G̲⟩⟨G̲²⟩ = O(N^{(α−1)+(2α−1)+ε}) = O(N^{3α−2+ε}),    (1/N) E⟨G³⟩ = O(N^{−1+2α+ε})    (4.49)

for any fixed ε > 0. Hence, using (4.23) and choosing ε small enough, we obtain

    E⟨G²⟩ = O(N^{2α−1+ε}) + O(N^{−α/2}) + O(N^{α−1/2+ε}) = O(N^{α−c₀}).

The proof for k = 3 is similar, and we omit the details.

Armed with Lemmas 4.6 and 4.8, we may now conclude the proof of Lemma 4.3 (i). We still have to estimate the subleading terms on the right-hand side of (4.22). From (4.5), Lemma 4.5, and Lemma 4.8 we have

    E⟨G̲⟩^n⟨G̲*⟩^{m−2}⟨G*³⟩ = E⟨G̲⟩^n⟨G̲*⟩^{m−2}( E⟨G*³⟩ + ⟨G̲*³⟩ ) = O(N^{(m+n)(α−1)+2−c₀}).    (4.50)

Moreover, (4.5) and Lemma 4.5 imply

    E⟨G̲⟩^n⟨G̲*⟩^{m−1}⟨G*²⟩ = O(N^{(m+n)(α−1)+1−c₀}),    (4.51)

as well as

    E⟨G̲⟩^n⟨G̲*⟩^{m+1} = O(N^{(m+n)(α−1)−c₀}),    (4.52)

    E⟨G̲⟩^n⟨G̲*⟩^{m−1} E⟨G̲*⟩² = O(N^{(m+n)(α−1)−c₀}).    (4.53)

Applying (4.29), Lemma 4.8, and (4.50)-(4.53) to (4.22), together with (4.23), we obtain

    E⟨G̲⟩^n⟨G̲*⟩^m = (2n/(N²T)) E⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1}⟨G²G*⟩ + O(N^{(m+n)(α−1)−c₀}).

By the resolvent identity,

    ⟨G²G*⟩ = (1/(z̄ − z)) ( ⟨GG*⟩ − ⟨G²⟩ ) = (N^{2α}/4) ( ⟨G⟩ − ⟨G*⟩ ) + (N^α/(2i)) ⟨G²⟩.    (4.54)

Moreover, (4.5) and Lemmas 4.5 and 4.8 give E⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1}⟨G²⟩ = O(N^{(m+n−2)(α−1)+α−c₀}). We therefore conclude that

    E⟨G̲⟩^n⟨G̲*⟩^m = (n/(2T)) N^{2α−2} E⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1}( ⟨G⟩ − ⟨G*⟩ ) + O(N^{(m+n)(α−1)−c₀}),

which yields

    E⟨G̲⟩^n⟨G̲*⟩^m = (n/(2T)) N^{2α−2} ( E⟨G⟩ − E⟨G*⟩ ) E⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1} + O(N^{(m+n)(α−1)−c₀})

by (4.5) and Lemma 4.5. Writing ξ := Im⟨G⟩, we have E⟨G⟩ − E⟨G*⟩ = 2iEξ. Moreover, (3.9) and (4.12) imply that T = z + 2E⟨G⟩ = 2iEξ + O(N^{−c₀}). Together with (4.5) and Lemma 4.5 we have

    E⟨G̲⟩^n⟨G̲*⟩^m = (n/2) N^{2α−2} E⟨G̲⟩^{n−1}⟨G̲*⟩^{m−1} + O(N^{(m+n)(α−1)−c₀})    (4.55)

for m, n ≥ 1. The preceding argument can also be used to show

    E⟨G̲*⟩^m = O(N^{m(α−1)−c₀})    (4.56)

for all m ≥ 2. In fact, one can start with

    E⟨G̲*⟩^m = (1/(z̄N)) Σ_{i,j} E⟨G̲*⟩^{m−1}\underline{G*_{ij} H_{ji}}

and apply Lemma 3.1 to get an analogue of (4.22), namely

    E⟨G̲*⟩^m = −(1/T) E⟨G̲*⟩^{m+1} − (1/T) E⟨G̲*⟩^{m−1} E⟨G̲*⟩² + (1/(TN)) E⟨G̲*⟩^{m−1}⟨G*²⟩ + ((2m − 2)/(N²T)) E⟨G̲*⟩^{m−2}⟨G*³⟩ − K^{(3)}/T − L^{(3)}/T.    (4.57)

Here $T = -z - 2\mathbb{E}G$, and $K^{(3)}$, $L^{(3)}$ are defined analogously to $K$ and $L$ in (4.22). Due to the absence of $\bar G$, there is no leading term in (4.57) like the last term in (4.22). One can easily apply our previous techniques and show that every term on the right-hand side of (4.57) is bounded by $O(N^{m(\alpha-1)-c_0})$. By taking complex conjugates in (4.56) we also have
$$\mathbb{E}\langle\bar G\rangle^n = O\big(N^{n(\alpha-1)-c_0}\big) \tag{4.58}$$
for all $n \ge 2$. Now (4.2) follows from (4.55), (4.56), and (4.58) combined with induction. This concludes the proof of Lemma 4.3 (i).

4.3. Proof of Lemma 4.3 (ii). Again by the resolvent identity and the cumulant expansion, we have
$$\mathbb{E}G = \frac{1}{U}\Big(1 + \mathbb{E}\langle G\rangle^2 + \frac{1}{N}\,\mathbb{E}G_2 - K^{(4)} - L^{(4)}\Big), \tag{4.59}$$
where $U := -z - \mathbb{E}G$, $z = E + i\eta$,
$$K^{(4)} = \frac{1}{N^2}\sum_i \mathbb{E}\,\frac{\partial G_{ii}}{\partial H_{ii}}\,(\zeta_i - 2), \tag{4.60}$$
and
$$L^{(4)} = \frac{1}{N}\sum_{i,j}\bigg[\sum_{k=2}^{l}\frac{1}{k!}\,\mathcal{C}_{k+1}(H_{ji})\,\mathbb{E}\,\frac{\partial^k G_{ij}}{\partial H_{ji}^k} + R^{(4,ji)}_{l+1}\bigg]. \tag{4.61}$$
Here $R^{(4,ji)}_{l+1}$ is a remainder term defined analogously to $R^{(ji)}_{l+1}$ in (4.20). We can argue similarly to the proof of Lemma 4.6 (i) and show that $R^{(4,ji)}_{l_0+1} = O(N^{-2})$ for some $l_0 \in \mathbb{N}$. Thus we have $L^{(4)} \le \sum_{k=2}^{l_0} O(J^{(4)}_k) + O(N^{-1})$, where
$$J^{(4)}_k := N^{-(k+3)/2}\sum_{i,j}\bigg|\mathbb{E}\,\frac{\partial^k G_{ij}}{\partial H_{ji}^k}\bigg|. \tag{4.62}$$
Analogously to the proof of Lemma 4.6 (ii), we find
$$J^{(4)}_2 = O(N^{-5/2}\,N^2\,N^{(\alpha-1)/2+\varepsilon}) = O(N^{\alpha/2-1+\varepsilon})$$
for any fixed $\varepsilon > 0$, and $J^{(4)}_k = O(N^{\alpha/2-1})$ for $k \ge 3$. This shows that $L^{(4)} = O(N^{\alpha/2-1+\varepsilon})$ for any fixed $\varepsilon > 0$. As in Lemma 4.7, one can show that $K^{(4)} = O(N^{-1})$. By (4.56) and Lemma 4.8, we have
$$\mathbb{E}\langle G\rangle^2 = O(N^{2\alpha-2-c_0}) \qquad\text{and}\qquad \frac{1}{N}\,\mathbb{E}G_2 = O(N^{\alpha-1-c_0}).$$
Altogether we have
$$\mathbb{E}G\,(z + \mathbb{E}G) + 1 = O(N^{\alpha-1-c_0}). \tag{4.63}$$
Recall that $m(z)$ is the unique solution of $x^2 + zx + 1 = 0$ satisfying $\operatorname{sgn}(\operatorname{Im} m(z)) = \operatorname{sgn}(\operatorname{Im} z) = \operatorname{sgn}(\eta) = 1$. Let $\widetilde m(z)$ be the other solution of $x^2 + zx + 1 = 0$. An application of Lemma 5.5 in [3] gives
$$\min\big\{|\mathbb{E}G - m(z)|,\ |\mathbb{E}G - \widetilde m(z)|\big\} = O\Big(\frac{N^{\alpha-1-c_0}}{\sqrt{\kappa}}\Big) = O\big(N^{\alpha-1-c_0}\big), \tag{4.64}$$

where we recall the definition (4.24) of $\kappa$. Since $G = \frac{1}{N}\operatorname{Tr}(H-z)^{-1}$, we know that $\operatorname{sgn}(\operatorname{Im} G) = \operatorname{sgn}(\operatorname{Im} z) > 0$. Also, we have $\operatorname{Im}\widetilde m(z) \le -c$ for some $c = c(\kappa) > 0$. This shows $|\mathbb{E}G - \widetilde m(z)| \ge c$. Thus from (4.64) we have $\mathbb{E}G - m(z) = O(N^{\alpha-1-c_0})$, which completes the proof.

4.4. Proof of Theorem 4.1. Let $\widetilde Y(b) := N^{1-\alpha}\,\langle G(E+b\eta)\rangle$. As in the one-dimensional case, Proposition 4.2, Theorem 4.1 follows from the following lemma, which generalizes Lemma 4.3.

Lemma 4.9. Let $c_0$ be defined as in (4.3). Under the assumptions of Theorem 4.1 the following holds.

(i) For fixed $m$, $n$ and $i_1, \dots, i_m, j_1, \dots, j_n \in \{1, 2, \dots, p\}$, we have
$$\mathbb{E}\big[\widetilde Y(b_{i_1})\cdots\widetilde Y(b_{i_m})\,\widetilde Y(\bar b_{j_1})\cdots\widetilde Y(\bar b_{j_n})\big] = \begin{cases} \displaystyle\sum\nolimits^{*}\prod\frac{-2}{(b_{i_l}-\bar b_{j_k})^2} + O(N^{-c_0}) & \text{if } m = n, \\[4pt] O(N^{-c_0}) & \text{if } m \ne n, \end{cases} \tag{4.65}$$
where the notation $\sum^{*}$ means summing over all distinct ways of partitioning $b_{i_1}, \dots, b_{i_m}, \bar b_{j_1}, \dots, \bar b_{j_n}$ into pairs $(b_{i_l}, \bar b_{j_k})$, and each summand is the product of the $n$ pairs.

(ii) For any fixed $b \in \mathbb{H}$, we have
$$\widehat Y(b) - \widetilde Y(b) = \mathbb{E}\widehat Y(b) = O(N^{-c_0}). \tag{4.66}$$

Suppose Lemma 4.9 holds. Result (4.65) implies
$$\big(\widetilde Y(b_1), \dots, \widetilde Y(b_p)\big) \xrightarrow{\ d\ } \big(Y(b_1), \dots, Y(b_p)\big). \tag{4.67}$$
Theorem 4.1 then follows from (4.66).

Proof of Lemma 4.9. The proof is similar to that of Lemma 4.3. Indeed, we see that
$$\mathbb{E}\big[\widetilde Y(b_{i_1})\cdots\widetilde Y(b_{i_m})\,\widetilde Y(\bar b_{j_1})\cdots\widetilde Y(\bar b_{j_n})\big] = N^{(m+n)(1-\alpha)}\,\mathbb{E}\big[\langle G(E+b_{i_1}\eta)\rangle\cdots\langle G(E+b_{i_m}\eta)\rangle\,\langle G(E+\bar b_{j_1}\eta)\rangle\cdots\langle G(E+\bar b_{j_n}\eta)\rangle\big], \tag{4.68}$$
which can be computed in the same way as $\mathbb{E}\langle G\rangle^m\langle\bar G\rangle^n = \mathbb{E}\langle G(E+i\eta)\rangle^m\langle G(E-i\eta)\rangle^n$ in Section 4.2. Most of our previous techniques and estimates can be applied to the new computation, and the only difference is that when using the resolvent identity (for example in (4.54)), we now have
$$G(E+b_{i_k}\eta)\,G(E+\bar b_{j_l}\eta) = \frac{1}{(b_{i_k}-\bar b_{j_l})\eta}\,\big(G(E+b_{i_k}\eta) - G(E+\bar b_{j_l}\eta)\big)$$
instead of $G\bar G = \frac{N^{\alpha}}{2i}(G-\bar G)$. This will give us different constants in the leading terms, and lead to the induction step
$$\mathbb{E}\big[\widetilde Y(b_{i_1})\cdots\widetilde Y(b_{i_m})\,\widetilde Y(\bar b_{j_1})\cdots\widetilde Y(\bar b_{j_n})\big] = \sum_{k=1}^{n}\frac{-2}{(b_{i_m}-\bar b_{j_k})^2}\,\mathbb{E}\big[\widetilde Y(b_{i_1})\cdots\widetilde Y(b_{i_{m-1}})\,\widetilde Y(\bar b_{j_1})\cdots\widetilde Y(\bar b_{j_n})\big/\widetilde Y(\bar b_{j_k})\big] + O(N^{-c_0})$$

19 for m, n. One can also show that E [ Ỹ (b i ) Ỹ (b i m ) ] = O(N c 0 ) and E [ Ỹ ( b j ) Ỹ ( b jn ) ] = O(N c 0 ) for m, n 2. These results together imply (4.65). Moreover, (4.66) says nothing but N α E ( G(E + bη) m(e +bη) ) = O(N c 0 ), and this can be shown using the steps in Section 4.3, in which we proved N α E ( G(E + iη) m(e+iη) ) = O(N c 0 ). 5. Convergence of general functions Similar as in the resolvent case, in Section 5 we prove the following analogue of Theorem 2.3. Theorem 5.. Theorem 2.3 holds for Wigner matrices H satisfying Definition 3.2, and the convergence also holds in the sense of moments. Let us abbreviate f η (x).= f ( ) x E η, and denote [Tr fη (H)].= Tr f η (H) N 2 2 ϱ(x)f η(x) dx. Our next result is a particular case of Theorem 5.. Proposition 5.2. Let η, E, H be as in Theorem 5.. Then ( ( ) d f(x) f(y) 2 ) [Tr f η (H)] N 0, 2π 2 dx dy x y (5.) as N. The convergence also holds in the sense of moments. Our main work in is section will be to show the above -dimensional case, since the proof can easily be extended to the general case (see Section 5.5). Recall that for a random variable X, X.= X EX. Proposition 5.2 is a direct consequence of the following lemma. Lemma 5.3. Under the conditions of Theorem 5., we have the following results. (i) For any n 2 E Tr f η (H) n = n 2π 2 where c = c (r, s, α) > 0. ( ) f(x) f(y) 2 dx dy E Tr f η (H) n 2 + O(N c/n2 ), (5.2) x y (ii) The random variables [Tr f η (H)] and Tr f η (H) are close in the sense that with c 0 defined in (4.3). [Tr f η (H)] Tr f η (H) = E[Tr f η (H)] = O(N rs2 c 0 /6 ), (5.3) Assume Lemma 5.3 holds. Then (5.2) and Wick s theorem imply Tr f η (H) ( d N 0, 2π 2 ( ) f(x) f(y) 2 ) dx dy x y (5.4) as N. Note that the above result is proved in a stronger sense that we have convergence in moments. Proposition 5.2 then follows from (5.3). 9

Sections 5.1 to 5.3 are devoted to proving Lemma 5.3 (i). Before starting the proof, we give some explanations of the ideas, especially the choice of truncations in the proof. We use Lemma 3.3 to write $f_\eta(H)$ in the form (5.6) below, where we scale the cutoff function $\chi$ to be supported in an interval of size $O(\sigma)$, with $N^{-1} \ll \sigma \ll \eta$. This scaling ensures that when we integrate $\phi_f$, the integral of the last term in (5.7) below dominates over the others. We then write $\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^n$ as an integral over $\mathbb{C}^n$, written $\int F$ in (5.8) below. The leading contribution to $\int F$ arises from the region $\{|y_1|, \dots, |y_n| \ge \omega\}$, where $\omega \gg N^{-1}$ is a second truncation scale. In order to ensure that $\int F$ is small in the complementary region, we require that $\omega \ll \sigma$. Then, when estimating $\int_{|y_1|<\omega}F$, the integral over $z_1$ yields a factor that is small enough to compensate the integrals from the other variables. We use the notations $\sigma = N^{-(\alpha+\beta)}$ and $\omega = N^{-(\alpha+\gamma)}$, so that $0 < \beta < \gamma$. In addition, for all steps of the analysis to work, we have further requirements on the exponents $\gamma$ and $\beta$; for instance, the last step in (5.18) below requires $n\beta \le rs\gamma/4$. Combining all requirements, we are led to set $\beta$ as in (5.5) below.

5.1. Transformation by the Helffer–Sjöstrand formula. Let $f \in C^{1,r,s}(\mathbb{R})$ with $r, s > 0$, and without loss of generality we assume $s \le 1$. Fix $n \ge 2$, and define
$$\sigma := N^{-(\alpha+\beta)}, \qquad\text{where}\quad \beta := \frac{rs^2c_0}{24n^2}, \tag{5.5}$$
and $c_0$ is defined in (4.3). We define $\tilde f$ as in (3.4). Let $\chi$ be as in Lemma 3.3 satisfying $\chi(y) = 1$ for $|y| \le 1$, and $\chi(y) = 0$ for $|y| \ge 2$. An application of Lemma 3.3 gives
$$f_\eta(H) = \frac{1}{\pi}\int_{\mathbb{C}}\frac{\partial_{\bar z}\big(\tilde f_\eta(z)\,\chi(y/\sigma)\big)}{H - z}\,d^2z = \int_{\mathbb{C}}\phi_f(z)\,G(z)\,d^2z, \tag{5.6}$$
where
$$\phi_f(x+iy) = \frac{1}{2\pi}\bigg(i\big(f_\eta'(x+y) - f_\eta'(x)\big)\chi(y/\sigma) - \big(f_\eta(x+y) - f_\eta(x)\big)\frac{\chi'(y/\sigma)}{\sigma} + \frac{i}{\sigma}\,f_\eta(x)\,\chi'(y/\sigma)\bigg). \tag{5.7}$$
Thus
$$\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^n = N^n\int\!\cdots\!\int \phi_f(z_1)\cdots\phi_f(z_n)\,\mathbb{E}\,\langle{}^1G\rangle\cdots\langle{}^nG\rangle\,d^2z_1\cdots d^2z_n =: \int F, \tag{5.8}$$
where ${}^kG := \frac{1}{N}\operatorname{Tr}(H-z_k)^{-1}$ for $k \in \{1, 2, \dots, n\}$. Note that $\chi(y/\sigma) = 0$ for $|y| \ge 2\sigma$, and we only need to consider the integral for $|y_1|, \dots, |y_n| \le 2\sigma$.

5.2. The subleading terms.
Let $\omega := N^{-(\alpha+\gamma)}$ with $\gamma := 4n\beta/(rs)$, and by (5.5) we have $\alpha+\beta < \alpha+\gamma < 1$. We define
$$X := \big\{|x_1|, \dots, |x_n| \le 2 - \tfrac{\kappa}{2}\big\} \qquad\text{and}\qquad Y := \big\{|y_1|, \dots, |y_n| \in [\omega, 2\sigma]\big\},$$
where we recall the definition (4.24) of $\kappa$. We have a lemma about $\int F$ outside the region $X\times Y$.

Lemma 5.4. For $F$ as in (5.8) we have
$$\int_{(X\times Y)^c} F = O(N^{-\beta/2}). \tag{5.9}$$

Proof. We first estimate $\int_{\mathbb{R}^n\times Y^c}F$. By the estimates (3.9) and (3.11) we know
$$\big|\langle{}^1G\rangle\cdots\langle{}^nG\rangle\big| \prec \frac{1}{N^n\,|y_1|\cdots|y_n|} \tag{5.10}$$
uniformly in $\{|y_1|, \dots, |y_n| \le 2\sigma\}$.

21 uniformly in { y,..., y n 2σ}. Since χ (y/σ) = 0 for y < σ, we have y <ω ϕ f (z) d 2 z = O() y = O() y <ω (f η(x + y) f η(x)) dx dy y (f (a + bn β ) f (a)) da db, b b <N β γ (5.) where in the second step we used the change of variables By the Hölder continuity and decay of the function f, we know for all q [0, ]. Choose q = q 0 (s).= y <ω a.= (x E)/η and b.= y/σ. (5.2) f (a + bn β ) f (a) { } C min ( b N β ) r, + a +s ( ) C( b N β ) rq q (5.3) + a +s s 2(+s) s 4, so that ( + s)( q 0) = + s 2 >. Thus we have ϕ f (z) y d2 z = O(N rq0β ) b rq 0 b <N β γ + a +s/2 da db = O(N rq0γ ). (5.4) Similarly, one can show that y [ω,σ) ϕ f (z) y d2 z = O(N rq0β ). (5.5) We also have y σ ϕ f (z) y d2 z = N β b 2 ψ f (a, b) b da db = O(N β ), (5.6) where we used the change of variables (5.2), and abbreviate ψ f (a, b).= ( N β (i ) ( f (a + bn β ) f (a) ) χ(b) ( f(a + bn β ) f(a) ) χ (b) 2π ) + if(a)χ (b). Using lemma 4.5 we have R n Y c F n = y <ω y <ω F ϕ f (z ) ϕ f (z n ) y <ω y y n d2 z d 2 z n ϕ f (z ) dz ϕ f (z 2 ) ϕ f (z n ) y d2 z 2 d 2 z n y = O ( N rq 0γ N (n )β) O ( N rsγ/4 N (n )β) O ( N β). y n (5.7) (5.8) 2

Next, we estimate $\int_{X^c\times Y}F$. By the decay of the functions $f$ and $f'$, we have
$$\int_{\{|x|>2-\frac{\kappa}{2}\}}\frac{|\phi_f(z)|}{|y|}\,d^2z \le N^{\beta}\int_{|a|\ge\frac{\kappa}{2\eta}}\int\frac{|\psi_f(a,b)|}{|b|}\,da\,db = O(N^{\beta})\bigg(\int_{|a|\ge\frac{\kappa}{2\eta}}\frac{da}{1+|a|^{1+s/2}} + \int_{|a|\ge\frac{\kappa}{2\eta}}\frac{da}{1+|a|^{1+s}} + \int_{|a|\ge\frac{\kappa}{2\eta}}|f(a)|\,da\bigg) = O(N^{\beta-s\alpha/2}), \tag{5.19}$$
where $a$, $b$ are defined as in (5.12). Hence
$$\int_{X^c\times Y}|F| \le \cdots = O\big(N^{(n-1)\beta}\big)\int_{\{|x_1|>2-\frac{\kappa}{2},\,|y_1|\ge\omega\}}\frac{|\phi_f(z_1)|}{|y_1|}\,d^2z_1 = O\big(N^{n\beta-s\alpha/2}\big) \le O(N^{-\beta}). \tag{5.20}$$
Combining (5.18) and (5.20), we get $\int F = \int_{X\times Y}F + O(N^{-\beta/2})$.

5.3. The main computation. Now let us focus on the integral $\int_{X\times Y}F$. Note that now we are in the good region where it is effective to apply the cumulant expansion to the resolvent. In $X\times Y$, we want to compute the quantity $\mathbb{E}\langle{}^1G\rangle\cdots\langle{}^nG\rangle$. Note that this is very close to the expression we had in (4.4). Let us abbreviate $Q_m := \langle{}^1G\rangle\cdots\langle{}^mG\rangle$ and $Q^{(k)}_m := Q_m/\langle{}^kG\rangle$ for all $k \le m \le n$, and $\zeta_i := \mathbb{E}\big|\sqrt N H_{ii}\big|^2$. We proceed with the computation as in Section 4.2, and get an analogue of (4.22):
$$\mathbb{E}Q_n = \frac{1}{T_n}\,\mathbb{E}Q_{n-1}\langle{}^nG\rangle^2 + \frac{1}{T_n}\,\mathbb{E}Q_{n-1}\,\mathbb{E}\langle{}^nG\rangle^2 + \frac{1}{NT_n}\,\mathbb{E}Q_{n-1}\,{}^nG_2 + \frac{2}{N^2T_n}\sum_{k=1}^{n-1}\mathbb{E}\,Q^{(k)}_{n-1}\,{}^kG_2\,{}^nG - \frac{K}{T_n} - \frac{L}{T_n}, \tag{5.21}$$
where $T_n := -z_n - 2\,\mathbb{E}\,{}^nG$,
$$K = \frac{1}{N^2}\sum_i \mathbb{E}\,\frac{\partial\big(\langle{}^1G\rangle\cdots\langle{}^{n-1}G\rangle\,({}^nG)_{ii}\big)}{\partial H_{ii}}\,(\zeta_i-2),$$
and
$$L = \frac{1}{N}\sum_{i,j}\bigg[\sum_{k=2}^{l}\frac{1}{k!}\,\mathcal{C}_{k+1}(H_{ji})\,\mathbb{E}\,\frac{\partial^k\big(\langle{}^1G\rangle\cdots\langle{}^{n-1}G\rangle\,({}^nG)_{ij}\big)}{\partial H_{ji}^k} + R^{(ji)}_{l+1}\bigg].$$
Here $R^{(ji)}_{l+1}$ is a remainder term defined analogously to the remainder in (4.20). Note that in $Y$ we have $|y_1|, \dots, |y_n| \ge \omega$, and we have estimates analogous to those in Section 4.2. We state these estimates in the next lemma and omit the proof.

Lemma 5.5. Let us extend the definition of $c_0$ in (4.3) to a function $c_0(\cdot) : (0,1) \to \mathbb{R}$ such that
$$c_0(x) := \frac{\min\{x,\ 1-x\}}{3}. \tag{5.22}$$
The following results hold uniformly in $X\times Y$.

(i) Analogously to Lemma 4.4, for any fixed $m \in \mathbb{N}$ and $k = 1, 2, \dots, n$, we have
$${}^kG_m \prec N^{(m-1)(\alpha+\gamma)} \tag{5.23}$$
as well as
$$\big({}^kG_m\big)_{ij} \prec \begin{cases} N^{(m-1)(\alpha+\gamma)} & \text{if } i = j, \\ N^{(m-1/2)(\alpha+\gamma)-1/2} & \text{if } i \ne j. \end{cases} \tag{5.24}$$

(ii) Analogously to (4.23), we have
$$\frac{1}{|T_n|} = O(1). \tag{5.25}$$

(iii) Analogously to Lemma 4.6, we have
$$L = O\big(N^{n(\alpha+\gamma-1)-c_0(\alpha+\gamma)}\big). \tag{5.26}$$

(iv) Analogously to Lemma 4.7, we have
$$K = O\big(N^{n(\alpha+\gamma-1)-(\alpha+\gamma)/2}\big). \tag{5.27}$$

(v) Analogously to Lemma 4.8, we have
$$\mathbb{E}\,{}^kG_2 = O\big(N^{\alpha+\gamma-c_0(\alpha+\gamma)}\big) \tag{5.28}$$
for $k = 1, 2, \dots, n$.

Applying Lemma 5.5 to (5.21) yields
$$\mathbb{E}Q_n = \frac{2}{N^2T_n}\sum_{k=1}^{n-1}\mathbb{E}\,Q^{(k)}_{n-1}\,{}^kG_2\,{}^nG + O\big(N^{n(\alpha+\gamma-1)-c_0(\alpha+\gamma)}\big) \tag{5.29}$$
uniformly in $X\times Y$. Note that by the definition of $\beta$ and $\gamma$ we have $\gamma = \frac{sc_0(\alpha)}{6n} < \frac{c_0(\alpha)}{2}$, which gives $c_0(\alpha+\gamma) > \frac{5c_0(\alpha)}{6} > 0$. Since we have the simple estimate
$$\int|\phi_f(z)|\,d^2z = O(\eta), \tag{5.30}$$
we know
$$\int_{X\times Y} F = N^n\int_{X\times Y}\phi_f(z_1)\cdots\phi_f(z_n)\,\mathbb{E}Q_n\,d^2z_1\cdots d^2z_n = 2N^{n-2}\sum_{k=1}^{n-1}\int_{X\times Y}\frac{\phi_f(z_1)\cdots\phi_f(z_n)}{T_n}\,\mathbb{E}\,Q^{(k)}_{n-1}\,{}^kG_2\,{}^nG\,d^2z_1\cdots d^2z_n + O\big(N^{-2c_0(\alpha)/3}\big), \tag{5.31}$$
where in the estimate of the error term we implicitly used $n\gamma = sc_0(\alpha)/6 \le c_0(\alpha)/6$. By symmetry, it suffices to fix $k \in \{1, 2, \dots, n-1\}$, and consider the integral over $X\times Y$ of
$$F_{kn} := \frac{2N^{n-2}}{T_n}\,\phi_f(z_1)\cdots\phi_f(z_n)\,\mathbb{E}\,Q^{(k)}_{n-1}\,{}^kG_2\,{}^nG. \tag{5.32}$$
As before, we summarize the necessary estimates into a lemma.

Lemma 5.6. Let $F_{kn}$ be as in (5.32). Then we have the following estimates.

(i) Let $A_1 := \big\{(x,y) \in X\times Y : y_ky_n > 0,\ |x_k - x_n| \le \eta N^{-(n+1)\gamma}\big\}$. Then
$$\int_{A_1}|F_{kn}| \prec N^{-\gamma}. \tag{5.33}$$

(ii) Let $A_2 := \big\{(x,y) \in X\times Y : y_ky_n > 0,\ |x_k - x_n| \in \big(\eta N^{-(n+1)\gamma},\ \eta N^{(n+1)\gamma/s}\big]\big\}$. Then
$$\int_{A_2}F_{kn} = O\big(N^{-c_0(\alpha)/4}\big), \tag{5.34}$$
where the function $c_0(\cdot)$ is defined in (5.22).

(iii) For $A_3 := \big\{(x,y) \in X\times Y : |x_k - x_n| > \eta N^{(n+1)\gamma/s}\big\}$, we have
$$\int_{A_3}|F_{kn}| \prec N^{-\gamma}. \tag{5.35}$$

Proof. (i) By Lemma 5.5 (i)-(ii) we have
$$\bigg|\frac{2N^{n-2}}{T_n}\,\mathbb{E}\,Q^{(k)}_{n-1}\,{}^kG_2\,{}^nG\bigg| \prec \frac{1}{\omega^n}$$
uniformly in $A_1$. Thus by (5.30) and the decay of $f$ and $f'$ we know
$$\int_{A_1}|F_{kn}| \prec \omega^{-n}\eta^{n-2}\iint\big|\phi_f(z_k)\,\phi_f(z_n)\big|\,\mathbf 1\big\{|x_k-x_n| \le \eta N^{-(n+1)\gamma}\big\}\,d^2z_k\,d^2z_n = O(\omega^{-n}\eta^{n})\iint\big|\psi_f(a_k,b_k)\,\psi_f(a_n,b_n)\big|\,\mathbf 1\big\{|a_k-a_n| \le N^{-(n+1)\gamma}\big\}\,da_k\,db_k\,da_n\,db_n = O\big(N^{n\gamma}\,N^{-(n+1)\gamma}\big) = O(N^{-\gamma}), \tag{5.36}$$
where we use the change of variables
$$a_i = (x_i-E)/\eta \qquad\text{and}\qquad b_i = y_i/\sigma, \qquad i = k, n, \tag{5.37}$$
and $\psi_f$ is defined as in (5.17).

(ii) Note that our assumption (5.5) on $\beta$ shows $\eta N^{(n+1)\gamma/s} = O(N^{-\alpha/2})$. By the resolvent identity, the semicircle law (3.9), and Lemma 5.5 we know
$$\frac{N^{n-2}}{|T_n|}\,\big|\mathbb{E}\,Q^{(k)}_{n-1}\,{}^kG_2\,{}^nG\big| = N^{n-2}\,\bigg|\mathbb{E}\,Q^{(k)}_{n-1}\bigg(\frac{\langle{}^kG_2\rangle + \mathbb{E}\,{}^kG_2}{T_n(z_k-z_n)} - \frac{\langle{}^nG - {}^kG\rangle + \mathbb{E}\big({}^nG - {}^kG\big)}{T_n(z_k-z_n)^2}\bigg)\bigg| \prec \omega^{-(n-2)}N^{-c_0(\alpha+\gamma)}\bigg(\frac{\omega^{-1} + (N\omega)^{-1}}{\eta N^{-(n+1)\gamma}} + \frac{(N\omega)^{-1} + N^{-c_0(\alpha+\gamma)}}{\eta^2 N^{-2(n+1)\gamma}}\bigg) = O\big(\eta^{-n}N^{3n\gamma-c_0(\alpha+\gamma)}\big) \le O\big(\eta^{-n}N^{-c_0(\alpha)/3}\big) \tag{5.38}$$

uniformly in $A_2$. Hence (5.30) yields
$$\int_{A_2}F_{kn} = O\big(N^{-c_0(\alpha)/4}\big). \tag{5.39}$$

(iii) Similarly to (5.36), we know
$$\int_{A_3}|F_{kn}| \prec \omega^{-n}\eta^{n-2}\iint\big|\phi_f(z_k)\,\phi_f(z_n)\big|\,\mathbf 1\big\{|x_k-x_n| > \eta N^{(n+1)\gamma/s}\big\}\,d^2z_k\,d^2z_n = O(\omega^{-n}\eta^{n})\iint\big|\psi_f(a_k,b_k)\,\psi_f(a_n,b_n)\big|\,\mathbf 1\big\{|a_k-a_n| > N^{(n+1)\gamma/s}\big\}\,da_k\,db_k\,da_n\,db_n. \tag{5.40}$$
Note that in $\{|a_k-a_n| > N^{(n+1)\gamma/s}\}$, either $|a_k| > \frac12 N^{(n+1)\gamma/s}$ or $|a_n| > \frac12 N^{(n+1)\gamma/s}$. Hence by the decay conditions of $f$ and $f'$, we have
$$\int_{A_3}|F_{kn}| \prec \omega^{-n}\eta^{n}\,N^{-(n+1)\gamma} = N^{-\gamma}.$$

Let $A_4 := \big\{(x,y) \in X\times Y : y_ky_n < 0,\ |x_k-x_n| \le \eta N^{(n+1)\gamma/s}\big\}$. Note that $X\times Y$ is the disjoint union of $A_1, \dots, A_4$. The next result is about the integral of $F_{kn}$ over $A_4$, which gives the leading contribution.

Lemma 5.7. We have
$$\int_{A_4}F_{kn} = \frac{1}{2\pi^2}\bigg(\iint\Big(\frac{f(x)-f(y)}{x-y}\Big)^2\,dx\,dy\bigg)\,\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^{n-2} + O(N^{-rs\beta/9}), \tag{5.41}$$
where $\beta$ is defined in (5.5).

Proof. Step 1. By symmetry, let us consider $A_5 := \{(x,y) \in A_4 : y_k \ge \omega,\ y_n \le -\omega,\ |x_k-x_n| \le \eta N^{(n+1)\gamma/s}\}$. Similarly to (5.38), we have
$$\frac{N^{n-2}}{T_n}\,\mathbb{E}\,Q^{(k)}_{n-1}\,{}^kG_2\,{}^nG = N^{n-2}\,\mathbb{E}\,Q^{(k)}_{n-1}\,\frac{\mathbb{E}\,{}^nG\;\mathbb{E}\,{}^kG}{T_n\,(z_k-z_n)^2} + O\big(\eta^{-n}N^{-c_0(\alpha)/3}\big) \tag{5.42}$$
uniformly in $A_5$. Note the semicircle law (3.9) now gives
$$\frac{\mathbb{E}\,{}^nG\;\mathbb{E}\,{}^kG}{T_n} = \frac{\mathbb{E}\,{}^nG\;\mathbb{E}\,{}^kG}{-z_n - 2\,\mathbb{E}\,{}^nG} = 1 + O\big(N^{-c_0(\alpha+\gamma)}\big) \tag{5.43}$$
uniformly in $A_5$. By (5.30) we know
$$\int_{A_5}F_{kn} = \int_{A_5}\frac{2N^{n-2}}{(z_k-z_n)^2}\,\phi_f(z_1)\cdots\phi_f(z_n)\,\mathbb{E}\,Q^{(k)}_{n-1}\,d^2z_1\cdots d^2z_n + O\big(N^{-c_0(\alpha)/3}\big) = \int_{A_5'}\frac{2}{(z_k-z_n)^2}\,\phi_f(z_k)\,\phi_f(z_n)\,d^2z_k\,d^2z_n\,\int_{A_5''}\hat F_{kn} + O\big(N^{-c_0(\alpha)/3}\big),$$
where we decompose $A_5 = A_5'\times A_5''$, with $A_5'$ depending on $(x_k, y_k, x_n, y_n)$. Here $\hat F_{kn}$ is defined as
$$\hat F_{kn} := N^{n-2}\,\frac{\phi_f(z_1)\cdots\phi_f(z_n)}{\phi_f(z_k)\,\phi_f(z_n)}\,\mathbb{E}\,Q^{(k)}_{n-1}.$$

Let $X^{(k,n)} := \{|x_1|, \dots, |x_{k-1}|, |x_{k+1}|, \dots, |x_{n-1}| \le 2-\frac{\kappa}{2}\}$ and $Y^{(k,n)} := \{|y_1|, \dots, |y_{k-1}|, |y_{k+1}|, \dots, |y_{n-1}| \ge \omega\}$, and note that $A_5'' = X^{(k,n)}\times Y^{(k,n)}$. Applying Lemma 5.4 with $n$ replaced by $n-2$, we get
$$\int_{A_5''}\hat F_{kn} = \mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^{n-2} + O(N^{-\beta/2}). \tag{5.44}$$
By the decay conditions of $f$ and $f'$,
$$\int_{A_5'}\frac{1}{(z_k-z_n)^2}\,\phi_f(z_k)\,\phi_f(z_n)\,d^2z_k\,d^2z_n = \iint\frac{\phi_f(z_k)\,\phi_f(z_n)}{(z_k-z_n)^2}\,\mathbf 1\{y_k\ge\omega,\ y_n\le-\omega\}\,d^2z_k\,d^2z_n + O(N^{-\gamma}) = \iint\frac{\psi_f(a_k,b_k)\,\psi_f(a_n,b_n)}{\big(a_k-a_n+i(b_k-b_n)N^{-\beta}\big)^2}\,\mathbf 1\{b_k\ge N^{\beta-\gamma},\ b_n\le-N^{\beta-\gamma}\}\,da_k\,db_k\,da_n\,db_n + O(N^{-\gamma}) =: \Psi + O(N^{-\gamma}),$$
where in the second last step we use the change of variables in (5.37), and $\psi_f$ is as in (5.17). Note that one can repeat the steps in the proof of Lemma 4.4 for any $f \in C^{1,r,s}(\mathbb{R})$ instead of $f(x) = \big(\frac{x+i}{x^2+1}\big)^k$, and get
$$\langle\operatorname{Tr} f_\eta(H)\rangle \prec 1. \tag{5.45}$$
Together with Lemma 4.5 we know
$$\int_{A_5}F_{kn} = 2\,\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^{n-2}\,\Psi + |\Psi|\,O\big(N^{-\beta/2}\big) + O\big(N^{-\gamma/2}\big). \tag{5.46}$$

Step 2. We now compute $\Psi$. Let us set
$$\psi_{f,1}(a,b) := \frac{i}{2\pi}\,N^{-\beta}\big(f'(a+bN^{-\beta}) - f'(a)\big)\chi(b), \qquad \psi_{f,2}(a,b) := -\frac{1}{2\pi}\big(f(a+bN^{-\beta}) - f(a)\big)\chi'(b),$$
and
$$\psi_{f,3}(a,b) := \frac{i}{2\pi}\,f(a)\,\chi'(b),$$
which gives $\psi_f(a,b) = \psi_{f,1}(a,b) + \psi_{f,2}(a,b) + \psi_{f,3}(a,b)$. Let
$$\Psi_{i,j} := \iint\frac{\psi_{f,i}(a_k,b_k)\,\psi_{f,j}(a_n,b_n)}{\big(a_k-a_n+i(b_k-b_n)N^{-\beta}\big)^2}\,\mathbf 1\{b_k\ge N^{\beta-\gamma},\ b_n\le-N^{\beta-\gamma}\}\,da_k\,db_k\,da_n\,db_n \tag{5.47}$$
with $1 \le i, j \le 3$. We will calculate $\Psi$ by calculating 6 different integrals $\Psi_{i,j}$, subject to symmetry.

Let us first consider $\Psi_{1,1}$. Note that by (5.13),
$$\big|f'(a+bN^{-\beta}) - f'(a)\big| \le C\,\frac{\big(|b|N^{-\beta}\big)^{rq_0}}{1+|a|^{1+s/2}}, \tag{5.48}$$
where $q_0 = q_0(s) = \frac{s}{2(s+1)} \ge \frac{s}{4} > 0$. Thus
$$|\Psi_{1,1}| \le C\,N^{-2\beta}\,N^{2\beta}\,N^{-2rq_0\beta}\iint\frac{|b_kb_n|^{rq_0}\,\chi(b_k)\chi(b_n)}{(b_k-b_n)^2}\,\mathbf 1\{b_k\ge N^{\beta-\gamma},\ b_n\le-N^{\beta-\gamma}\}\,db_k\,db_n = O(N^{-2rq_0\beta}).$$

Now we consider $\Psi_{2,1}$. Note that $\chi'(b) = 0$ for $|b| < 1$. Using integration by parts in the variable $a_k$, we know
$$\Psi_{2,1} = \iint\frac{\psi_{f,2}(a_k,b_k)\,\psi_{f,1}(a_n,b_n)}{\big(a_k-a_n+i(b_k-b_n)N^{-\beta}\big)^2}\,\mathbf 1\{b_k\ge1,\ b_n\le-N^{\beta-\gamma}\}\,da_k\,db_k\,da_n\,db_n = -\iint\frac{\widetilde\psi_{f,2}(a_k,b_k)\,\psi_{f,1}(a_n,b_n)}{a_k-a_n+i(b_k-b_n)N^{-\beta}}\,\mathbf 1\{b_k\ge1,\ b_n\le-N^{\beta-\gamma}\}\,da_k\,db_k\,da_n\,db_n,$$
where $\widetilde\psi_{f,2}(a,b) := -\frac{1}{2\pi}\big(f'(a+bN^{-\beta}) - f'(a)\big)\chi'(b)$. Then by (5.48) we know
$$\Psi_{2,1} = O\big(N^{\beta}\,N^{-rq_0\beta}\,N^{-\beta-rq_0\beta}\big) = O(N^{-2rq_0\beta}). \tag{5.49}$$
Similarly, $\Psi_{3,1} = O(N^{-rq_0\beta})$. Now we move to $\Psi_{2,2}$. Using integration by parts in the variables $a_k$ and $a_n$, we know
$$\Psi_{2,2} = \iint\log\big(a_k-a_n+i(b_k-b_n)N^{-\beta}\big)\,\widetilde\psi_{f,2}(a_k,b_k)\,\widetilde\psi_{f,2}(a_n,b_n)\,\mathbf 1\{b_k\ge1,\ b_n\le-1\}\,da_k\,db_k\,da_n\,db_n = O\big(\log N\cdot N^{-2rq_0\beta}\big) = O(N^{-rq_0\beta}).$$
Similarly, $\Psi_{3,2} = O(N^{-rq_0\beta/2}) \le O(N^{-rs\beta/8})$. The leading contribution comes from $\Psi_{3,3}$. Note that
$$\Psi_{3,3} = -\frac{1}{4\pi^2}\iint\frac{f(a_k)\,f(a_n)}{\big(a_k-a_n+i(b_k-b_n)N^{-\beta}\big)^2}\,\chi'(b_k)\chi'(b_n)\,\mathbf 1\{b_k\ge1,\ b_n\le-1\}\,da_k\,db_k\,da_n\,db_n$$
$$= \frac{1}{8\pi^2}\iint\bigg(\frac{f(a_k)-f(a_n)}{a_k-a_n+i(b_k-b_n)N^{-\beta}}\bigg)^2\chi'(b_k)\chi'(b_n)\,\mathbf 1\{b_k\ge1,\ b_n\le-1\}\,da_k\,db_k\,da_n\,db_n$$
$$= \frac{1}{8\pi^2}\iint\bigg(\frac{f(a_k)-f(a_n)}{a_k-a_n}\bigg)^2\chi'(b_k)\chi'(b_n)\,\mathbf 1\{b_k\ge1,\ b_n\le-1\}\,da_k\,db_k\,da_n\,db_n + O(N^{-\beta/3})$$
$$= \frac{1}{8\pi^2}\iint\bigg(\frac{f(a_k)-f(a_n)}{a_k-a_n}\bigg)^2\,da_k\,da_n + O(N^{-\beta/3}),$$
where the second last step is an elementary estimate whose details we omit. Hence by (5.45) and Lemma 4.5 we have
$$\int_{A_5}F_{kn} = \frac{1}{4\pi^2}\bigg(\iint\Big(\frac{f(x)-f(y)}{x-y}\Big)^2\,dx\,dy\bigg)\,\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^{n-2} + O(N^{-rs\beta/9}). \tag{5.50}$$
Similarly, let $A_6 := \{(x,y) \in A_4 : y_k \le -\omega,\ y_n \ge \omega,\ |x_k-x_n| \le \eta N^{(n+1)\gamma/s}\}$, and we have
$$\int_{A_6}F_{kn} = \frac{1}{4\pi^2}\bigg(\iint\Big(\frac{f(x)-f(y)}{x-y}\Big)^2\,dx\,dy\bigg)\,\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^{n-2} + O(N^{-rs\beta/9}). \tag{5.51}$$
Thus by (5.50) and (5.51) we conclude the proof.

Note that Lemmas 5.6 and 5.7 imply
$$\sum_{k=1}^{n-1}\int_{X\times Y}F_{kn} = \frac{n-1}{2\pi^2}\bigg(\iint\Big(\frac{f(x)-f(y)}{x-y}\Big)^2\,dx\,dy\bigg)\,\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^{n-2} + O(N^{-rs\beta/9}).$$

Together with Lemma 5.4 and (5.31), we have
$$\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^n = \frac{n-1}{2\pi^2}\bigg(\iint\Big(\frac{f(x)-f(y)}{x-y}\Big)^2\,dx\,dy\bigg)\,\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^{n-2} + O(N^{-rs\beta/9}), \tag{5.52}$$
which finishes the proof of Lemma 5.3 (i).

5.4. Proof of Lemma 5.3 (ii). Let $f \in C^{1,r,s}(\mathbb{R})$ with $r, s > 0$, and without loss of generality we assume $s \le 1$. We define $\tilde f$ as in (3.4). Let $\sigma = N^{-(\alpha+\beta)}$, where we now define $\beta = sc_0/4$ instead of as in (5.5). Let $\chi$ be as in Lemma 3.3 satisfying $\chi(y) = 1$ for $|y| \le 1$, and $\chi(y) = 0$ for $|y| \ge 2$. An application of Lemma 3.3 gives
$$\mathbb{E}[\operatorname{Tr} f_\eta(H)] = N\int\phi_f(x+iy)\,\big(\mathbb{E}G(x+iy) - m(x+iy)\big)\,dx\,dy =: \int F, \tag{5.53}$$
where $\phi_f$ is defined as in (5.7). Note that (3.9) and (3.11) imply
$$\big|\mathbb{E}G(x+iy) - m(x+iy)\big| \prec \frac{1}{N|y|}$$
uniformly in $x \in \mathbb{R}$, $|y| \le 1$. Then we have
$$\int_{|y|\le\sigma}|F| \prec \int_{|y|\le\sigma}\frac{|\phi_f(z)|}{|y|}\,d^2z = O(N^{-rq_0\beta}), \tag{5.54}$$
where $q_0 = q_0(s) = \frac{s}{2(1+s)} \ge \frac{s}{4}$. Also, we have
$$\int_{\{|y|\ge\sigma,\,|x|>2-\frac{\kappa}{2}\}}|F| \prec \int_{\{|y|\ge\sigma,\,|x|>2-\frac{\kappa}{2}\}}\frac{|\phi_f(z)|}{|y|}\,d^2z = O(N^{\beta-s\alpha/2}) = O(N^{-sc_0/2}). \tag{5.55}$$
An analogue of (4.64) yields
$$\mathbb{E}G(x+iy) - m(x+iy) = O\big(N^{(\alpha+\beta)-1-c_0(\alpha+\beta)}\big) \tag{5.56}$$
uniformly in $|x| \le 2-\frac{\kappa}{2}$, $|y| \ge \sigma$, where the function $c_0(\cdot)$ is defined as in (5.22). Thus
$$\int_{\{|y|\ge\sigma,\,|x|\le2-\frac{\kappa}{2}\}}|F| = O\big(N\cdot\eta\cdot N^{(\alpha+\beta)-1-c_0(\alpha+\beta)}\big) = O\big(N^{\beta-c_0(\alpha+\beta)}\big) = O(N^{-c_0/2}). \tag{5.57}$$
Altogether we have (5.3).

5.5. Remark on the general case. Let us turn to Theorem 5.1. As in the one-dimensional case, we first show that
$$\big(\widetilde Z(f_1), \dots, \widetilde Z(f_p)\big) \xrightarrow{\ d\ } \big(Z(f_1), \dots, Z(f_p)\big) \tag{5.58}$$
as $N \to \infty$, where $\widetilde Z(f_i) := \big\langle\operatorname{Tr} f_i\big(\frac{H-E}{\eta}\big)\big\rangle$ for $1 \le i \le p$. In order to show (5.58), it suffices to compute $\mathbb{E}\big[\widetilde Z(f_{i_1})\cdots\widetilde Z(f_{i_n})\big]$, $i_1, \dots, i_n \in \{1, 2, \dots, p\}$, and this follows exactly the same way as we compute $\mathbb{E}\langle\operatorname{Tr} f_\eta(H)\rangle^n$. Theorem 5.1 then follows from the estimate $\mathbb{E}\widehat Z(f_i) = O(N^{-rs^2c_0/16})$ for all $i \in \{1, 2, \dots, p\}$, which is Lemma 5.3 (ii).
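The limiting variance in (5.1) and (5.4) is the squared $H^{1/2}$-type seminorm $V(f) = \frac{1}{2\pi^2}\iint\big(\frac{f(x)-f(y)}{x-y}\big)^2\,dx\,dy$. The following sketch evaluates it by midpoint quadrature on offset grids, so that $x \ne y$ always holds and the difference quotient stays finite; the cutoff $L$, the grid size, and the Gaussian test bump are arbitrary illustrative choices of ours, not part of the proof.

```python
import numpy as np

def limiting_variance(f, L=12.0, n=1200):
    # V(f) = (1 / (2 pi^2)) * iint ((f(x) - f(y)) / (x - y))^2 dx dy,
    # approximated by a Riemann sum; the x- and y-grids are offset by
    # half a step so the denominator x - y never vanishes
    h = 2 * L / n
    x = -L + (np.arange(n) + 0.25) * h
    y = -L + (np.arange(n) + 0.75) * h
    quot = (f(x)[:, None] - f(y)[None, :]) / (x[:, None] - y[None, :])
    return float((quot ** 2).sum()) * h * h / (2 * np.pi ** 2)

bump = lambda x: np.exp(-x ** 2)
V = limiting_variance(bump)
assert V > 0
# V is a quadratic functional: scaling f by 2 scales the variance by 4
assert np.isclose(limiting_variance(lambda x: 2 * bump(x)), 4 * V, rtol=1e-12)
# constant test functions give zero variance
assert limiting_variance(lambda x: np.ones_like(x)) == 0.0
```

Under the central limit theorem (5.4), $\langle\operatorname{Tr} f_\eta(H)\rangle$ is asymptotically $\mathcal{N}(0, V(f))$ for every bulk energy $E$ and every mesoscopic scale $\eta$, which is the universality content of Theorem 5.1.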

Isotropic local laws for random matrices

Isotropic local laws for random matrices Isotropic local laws for random matrices Antti Knowles University of Geneva With Y. He and R. Rosenthal Random matrices Let H C N N be a large Hermitian random matrix, normalized so that H. Some motivations:

More information

arxiv: v4 [math.pr] 10 Sep 2018

arxiv: v4 [math.pr] 10 Sep 2018 Lectures on the local semicircle law for Wigner matrices arxiv:60.04055v4 [math.pr] 0 Sep 208 Florent Benaych-Georges Antti Knowles September, 208 These notes provide an introduction to the local semicircle

More information

Semicircle law on short scales and delocalization for Wigner random matrices

Semicircle law on short scales and delocalization for Wigner random matrices Semicircle law on short scales and delocalization for Wigner random matrices László Erdős University of Munich Weizmann Institute, December 2007 Joint work with H.T. Yau (Harvard), B. Schlein (Munich)

More information

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices László Erdős University of Munich Oberwolfach, 2008 Dec Joint work with H.T. Yau (Harvard), B. Schlein (Cambrigde) Goal:

More information

Wigner s semicircle law

Wigner s semicircle law CHAPTER 2 Wigner s semicircle law 1. Wigner matrices Definition 12. A Wigner matrix is a random matrix X =(X i, j ) i, j n where (1) X i, j, i < j are i.i.d (real or complex valued). (2) X i,i, i n are

More information

2. Function spaces and approximation

2. Function spaces and approximation 2.1 2. Function spaces and approximation 2.1. The space of test functions. Notation and prerequisites are collected in Appendix A. Let Ω be an open subset of R n. The space C0 (Ω), consisting of the C

More information

From the mesoscopic to microscopic scale in random matrix theory

From the mesoscopic to microscopic scale in random matrix theory From the mesoscopic to microscopic scale in random matrix theory (fixed energy universality for random spectra) With L. Erdős, H.-T. Yau, J. Yin Introduction A spacially confined quantum mechanical system

More information

Spectral Statistics of Erdős-Rényi Graphs II: Eigenvalue Spacing and the Extreme Eigenvalues

Spectral Statistics of Erdős-Rényi Graphs II: Eigenvalue Spacing and the Extreme Eigenvalues Spectral Statistics of Erdős-Rényi Graphs II: Eigenvalue Spacing and the Extreme Eigenvalues László Erdős Antti Knowles 2 Horng-Tzer Yau 2 Jun Yin 2 Institute of Mathematics, University of Munich, Theresienstrasse

More information

On the principal components of sample covariance matrices

On the principal components of sample covariance matrices On the principal components of sample covariance matrices Alex Bloemendal Antti Knowles Horng-Tzer Yau Jun Yin February 4, 205 We introduce a class of M M sample covariance matrices Q which subsumes and

More information

The Altshuler-Shklovskii Formulas for Random Band Matrices I: the Unimodular Case

The Altshuler-Shklovskii Formulas for Random Band Matrices I: the Unimodular Case The Altshuler-Shklovskii Formulas for Random Band Matrices I: the Unimodular Case László Erdős Antti Knowles March 21, 2014 We consider the spectral statistics of large random band matrices on mesoscopic

More information

An introduction to Birkhoff normal form

An introduction to Birkhoff normal form An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Lecture I: Asymptotics for large GUE random matrices

Lecture I: Asymptotics for large GUE random matrices Lecture I: Asymptotics for large GUE random matrices Steen Thorbjørnsen, University of Aarhus andom Matrices Definition. Let (Ω, F, P) be a probability space and let n be a positive integer. Then a random

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

ON THE UNIQUENESS OF SOLUTIONS TO THE GROSS-PITAEVSKII HIERARCHY. 1. Introduction

ON THE UNIQUENESS OF SOLUTIONS TO THE GROSS-PITAEVSKII HIERARCHY. 1. Introduction ON THE UNIQUENESS OF SOLUTIONS TO THE GROSS-PITAEVSKII HIERARCHY SERGIU KLAINERMAN AND MATEI MACHEDON Abstract. The purpose of this note is to give a new proof of uniqueness of the Gross- Pitaevskii hierarchy,

More information

The Matrix Dyson Equation in random matrix theory

The Matrix Dyson Equation in random matrix theory The Matrix Dyson Equation in random matrix theory László Erdős IST, Austria Mathematical Physics seminar University of Bristol, Feb 3, 207 Joint work with O. Ajanki, T. Krüger Partially supported by ERC

More information

SCALE INVARIANT FOURIER RESTRICTION TO A HYPERBOLIC SURFACE

SCALE INVARIANT FOURIER RESTRICTION TO A HYPERBOLIC SURFACE SCALE INVARIANT FOURIER RESTRICTION TO A HYPERBOLIC SURFACE BETSY STOVALL Abstract. This result sharpens the bilinear to linear deduction of Lee and Vargas for extension estimates on the hyperbolic paraboloid

More information

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2.

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2. ANALYSIS QUALIFYING EXAM FALL 27: SOLUTIONS Problem. Determine, with justification, the it cos(nx) n 2 x 2 dx. Solution. For an integer n >, define g n : (, ) R by Also define g : (, ) R by g(x) = g n

More information

JUHA KINNUNEN. Harmonic Analysis

JUHA KINNUNEN. Harmonic Analysis JUHA KINNUNEN Harmonic Analysis Department of Mathematics and Systems Analysis, Aalto University 27 Contents Calderón-Zygmund decomposition. Dyadic subcubes of a cube.........................2 Dyadic cubes

More information

Local Kesten McKay law for random regular graphs

Local Kesten McKay law for random regular graphs Local Kesten McKay law for random regular graphs Roland Bauerschmidt (with Jiaoyang Huang and Horng-Tzer Yau) University of Cambridge Weizmann Institute, January 2017 Random regular graph G N,d is the

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

The circular law. Lewis Memorial Lecture / DIMACS minicourse March 19, Terence Tao (UCLA)

The circular law. Lewis Memorial Lecture / DIMACS minicourse March 19, Terence Tao (UCLA) The circular law Lewis Memorial Lecture / DIMACS minicourse March 19, 2008 Terence Tao (UCLA) 1 Eigenvalue distributions Let M = (a ij ) 1 i n;1 j n be a square matrix. Then one has n (generalised) eigenvalues

More information

MAT 257, Handout 13: December 5-7, 2011.

MAT 257, Handout 13: December 5-7, 2011. MAT 257, Handout 13: December 5-7, 2011. The Change of Variables Theorem. In these notes, I try to make more explicit some parts of Spivak s proof of the Change of Variable Theorem, and to supply most

More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

Homogenization of the Dyson Brownian Motion

Homogenization of the Dyson Brownian Motion Homogenization of the Dyson Brownian Motion P. Bourgade, joint work with L. Erdős, J. Yin, H.-T. Yau Cincinnati symposium on probability theory and applications, September 2014 Introduction...........

More information

NORMAL NUMBERS AND UNIFORM DISTRIBUTION (WEEKS 1-3) OPEN PROBLEMS IN NUMBER THEORY SPRING 2018, TEL AVIV UNIVERSITY

NORMAL NUMBERS AND UNIFORM DISTRIBUTION (WEEKS 1-3) OPEN PROBLEMS IN NUMBER THEORY SPRING 2018, TEL AVIV UNIVERSITY ORMAL UMBERS AD UIFORM DISTRIBUTIO WEEKS -3 OPE PROBLEMS I UMBER THEORY SPRIG 28, TEL AVIV UIVERSITY Contents.. ormal numbers.2. Un-natural examples 2.3. ormality and uniform distribution 2.4. Weyl s criterion

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Problem Set. Problem Set #1. Math 5322, Fall March 4, 2002 ANSWERS

Problem Set. Problem Set #1. Math 5322, Fall March 4, 2002 ANSWERS Problem Set Problem Set #1 Math 5322, Fall 2001 March 4, 2002 ANSWRS i All of the problems are from Chapter 3 of the text. Problem 1. [Problem 2, page 88] If ν is a signed measure, is ν-null iff ν () 0.

More information

4th Preparation Sheet - Solutions

4th Preparation Sheet - Solutions Prof. Dr. Rainer Dahlhaus Probability Theory Summer term 017 4th Preparation Sheet - Solutions Remark: Throughout the exercise sheet we use the two equivalent definitions of separability of a metric space

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Local law of addition of random matrices

Local law of addition of random matrices Local law of addition of random matrices Kevin Schnelli IST Austria Joint work with Zhigang Bao and László Erdős Supported by ERC Advanced Grant RANMAT No. 338804 Spectrum of sum of random matrices Question:

More information

Random Matrix: From Wigner to Quantum Chaos

Random Matrix: From Wigner to Quantum Chaos Random Matrix: From Wigner to Quantum Chaos Horng-Tzer Yau Harvard University Joint work with P. Bourgade, L. Erdős, B. Schlein and J. Yin 1 Perhaps I am now too courageous when I try to guess the distribution

More information

CHAPTER 10 Shape Preserving Properties of B-splines

CHAPTER 10 Shape Preserving Properties of B-splines CHAPTER 10 Shape Preserving Properties of B-splines In earlier chapters we have seen a number of examples of the close relationship between a spline function and its B-spline coefficients This is especially

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

= ϕ r cos θ. 0 cos ξ sin ξ and sin ξ cos ξ. sin ξ 0 cos ξ

= ϕ r cos θ. 0 cos ξ sin ξ and sin ξ cos ξ. sin ξ 0 cos ξ 8. The Banach-Tarski paradox May, 2012 The Banach-Tarski paradox is that a unit ball in Euclidean -space can be decomposed into finitely many parts which can then be reassembled to form two unit balls

More information

1 Tridiagonal matrices

1 Tridiagonal matrices Lecture Notes: β-ensembles Bálint Virág Notes with Diane Holcomb 1 Tridiagonal matrices Definition 1. Suppose you have a symmetric matrix A, we can define its spectral measure (at the first coordinate

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Comparison Method in Random Matrix Theory

Comparison Method in Random Matrix Theory Comparison Method in Random Matrix Theory Jun Yin UW-Madison Valparaíso, Chile, July - 2015 Joint work with A. Knowles. 1 Some random matrices Wigner Matrix: H is N N square matrix, H : H ij = H ji, EH

More information

Universality for random matrices and log-gases

Universality for random matrices and log-gases Universality for random matrices and log-gases László Erdős IST, Austria Ludwig-Maximilians-Universität, Munich, Germany Encounters Between Discrete and Continuous Mathematics Eötvös Loránd University,

More information

EXPOSITORY NOTES ON DISTRIBUTION THEORY, FALL 2018

EXPOSITORY NOTES ON DISTRIBUTION THEORY, FALL 2018 EXPOSITORY NOTES ON DISTRIBUTION THEORY, FALL 2018 While these notes are under construction, I expect there will be many typos. The main reference for this is volume 1 of Hörmander, The analysis of liner

More information

The Isotropic Semicircle Law and Deformation of Wigner Matrices

The Isotropic Semicircle Law and Deformation of Wigner Matrices The Isotropic Semicircle Law and Deformation of Wigner Matrices Antti Knowles 1 and Jun Yin 2 Department of Mathematics, Harvard University Cambridge MA 02138, USA knowles@math.harvard.edu 1 Department

More information

CORRIGENDUM: THE SYMPLECTIC SUM FORMULA FOR GROMOV-WITTEN INVARIANTS

CORRIGENDUM: THE SYMPLECTIC SUM FORMULA FOR GROMOV-WITTEN INVARIANTS CORRIGENDUM: THE SYMPLECTIC SUM FORMULA FOR GROMOV-WITTEN INVARIANTS ELENY-NICOLETA IONEL AND THOMAS H. PARKER Abstract. We correct an error and an oversight in [IP]. The sign of the curvature in (8.7)

More information

We denote the derivative at x by DF (x) = L. With respect to the standard bases of R n and R m, DF (x) is simply the matrix of partial derivatives,

We denote the derivative at x by DF (x) = L. With respect to the standard bases of R n and R m, DF (x) is simply the matrix of partial derivatives, The derivative Let O be an open subset of R n, and F : O R m a continuous function We say F is differentiable at a point x O, with derivative L, if L : R n R m is a linear transformation such that, for

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

On the concentration of eigenvalues of random symmetric matrices

On the concentration of eigenvalues of random symmetric matrices On the concentration of eigenvalues of random symmetric matrices Noga Alon Michael Krivelevich Van H. Vu April 23, 2012 Abstract It is shown that for every 1 s n, the probability that the s-th largest

More information

Gaussian integers. 1 = a 2 + b 2 = c 2 + d 2.

Gaussian integers. 1 = a 2 + b 2 = c 2 + d 2. Gaussian integers 1 Units in Z[i] An element x = a + bi Z[i], a, b Z is a unit if there exists y = c + di Z[i] such that xy = 1. This implies 1 = x 2 y 2 = (a 2 + b 2 )(c 2 + d 2 ) But a 2, b 2, c 2, d

More information

The deterministic Lasso

The deterministic Lasso The deterministic Lasso Sara van de Geer Seminar für Statistik, ETH Zürich Abstract We study high-dimensional generalized linear models and empirical risk minimization using the Lasso An oracle inequality

More information

CLASSIFICATIONS OF THE FLOWS OF LINEAR ODE

CLASSIFICATIONS OF THE FLOWS OF LINEAR ODE CLASSIFICATIONS OF THE FLOWS OF LINEAR ODE PETER ROBICHEAUX Abstract. The goal of this paper is to examine characterizations of linear differential equations. We define the flow of an equation and examine

More information

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.

More information

be the set of complex valued 2π-periodic functions f on R such that

be the set of complex valued 2π-periodic functions f on R such that . Fourier series. Definition.. Given a real number P, we say a complex valued function f on R is P -periodic if f(x + P ) f(x) for all x R. We let be the set of complex valued -periodic functions f on

More information

VISCOSITY SOLUTIONS. We follow Han and Lin, Elliptic Partial Differential Equations, 5.

VISCOSITY SOLUTIONS. We follow Han and Lin, Elliptic Partial Differential Equations, 5. VISCOSITY SOLUTIONS PETER HINTZ We follow Han and Lin, Elliptic Partial Differential Equations, 5. 1. Motivation Throughout, we will assume that Ω R n is a bounded and connected domain and that a ij C(Ω)

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Tools from Lebesgue integration

Tools from Lebesgue integration Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given

More information

CHAPTER 3 Further properties of splines and B-splines

CHAPTER 3 Further properties of splines and B-splines CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions

More information

arxiv: v3 [math-ph] 21 Jun 2012

arxiv: v3 [math-ph] 21 Jun 2012 LOCAL MARCHKO-PASTUR LAW AT TH HARD DG OF SAMPL COVARIAC MATRICS CLAUDIO CACCIAPUOTI, AA MALTSV, AD BJAMI SCHLI arxiv:206.730v3 [math-ph] 2 Jun 202 Abstract. Let X be a matrix whose entries are i.i.d.

More information

Imaginaries in Boolean algebras.

Imaginaries in Boolean algebras. Imaginaries in Boolean algebras. Roman Wencel 1 Instytut Matematyczny Uniwersytetu Wroc lawskiego ABSTRACT Given an infinite Boolean algebra B, we find a natural class of -definable equivalence relations

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

PRODUCT THEOREMS IN SL 2 AND SL 3

PRODUCT THEOREMS IN SL 2 AND SL 3 PRODUCT THEOREMS IN SL 2 AND SL 3 1 Mei-Chu Chang Abstract We study product theorems for matrix spaces. In particular, we prove the following theorems. Theorem 1. For all ε >, there is δ > such that if

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Chapter One. The Calderón-Zygmund Theory I: Ellipticity

Chapter One. The Calderón-Zygmund Theory I: Ellipticity Chapter One The Calderón-Zygmund Theory I: Ellipticity Our story begins with a classical situation: convolution with homogeneous, Calderón- Zygmund ( kernels on R n. Let S n 1 R n denote the unit sphere

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 12 Luca Trevisan October 3, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 12 Luca Trevisan October 3, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analysis Handout 1 Luca Trevisan October 3, 017 Scribed by Maxim Rabinovich Lecture 1 In which we begin to prove that the SDP relaxation exactly recovers communities

More information

RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES

RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES P. M. Bleher (1) and P. Major (2) (1) Keldysh Institute of Applied Mathematics of the Soviet Academy of Sciences Moscow (2)

More information

ON KRONECKER PRODUCTS OF CHARACTERS OF THE SYMMETRIC GROUPS WITH FEW COMPONENTS

ON KRONECKER PRODUCTS OF CHARACTERS OF THE SYMMETRIC GROUPS WITH FEW COMPONENTS ON KRONECKER PRODUCTS OF CHARACTERS OF THE SYMMETRIC GROUPS WITH FEW COMPONENTS C. BESSENRODT AND S. VAN WILLIGENBURG Abstract. Confirming a conjecture made by Bessenrodt and Kleshchev in 1999, we classify

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

56 4 Integration against rough paths

56 4 Integration against rough paths 56 4 Integration against rough paths comes to the definition of a rough integral we typically take W = LV, W ; although other choices can be useful see e.g. remark 4.11. In the context of rough differential

More information

Stein s Method and Characteristic Functions

Stein s Method and Characteristic Functions Stein s Method and Characteristic Functions Alexander Tikhomirov Komi Science Center of Ural Division of RAS, Syktyvkar, Russia; Singapore, NUS, 18-29 May 2015 Workshop New Directions in Stein s method

More information

2 A Model, Harmonic Map, Problem

2 A Model, Harmonic Map, Problem ELLIPTIC SYSTEMS JOHN E. HUTCHINSON Department of Mathematics School of Mathematical Sciences, A.N.U. 1 Introduction Elliptic equations model the behaviour of scalar quantities u, such as temperature or

More information

The Hadamard product and the free convolutions

The Hadamard product and the free convolutions isid/ms/205/20 November 2, 205 http://www.isid.ac.in/ statmath/index.php?module=preprint The Hadamard product and the free convolutions Arijit Chakrabarty Indian Statistical Institute, Delhi Centre 7,

More information

Local Semicircle Law and Complete Delocalization for Wigner Random Matrices

Local Semicircle Law and Complete Delocalization for Wigner Random Matrices Local Semicircle Law and Complete Delocalization for Wigner Random Matrices The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters.

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1.

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1. Sep. 1 9 Intuitively, the solution u to the Poisson equation S chauder Theory u = f 1 should have better regularity than the right hand side f. In particular one expects u to be twice more differentiable

More information

MORE NOTES FOR MATH 823, FALL 2007

MORE NOTES FOR MATH 823, FALL 2007 MORE NOTES FOR MATH 83, FALL 007 Prop 1.1 Prop 1. Lemma 1.3 1. The Siegel upper half space 1.1. The Siegel upper half space and its Bergman kernel. The Siegel upper half space is the domain { U n+1 z C

More information

Eigenvector Statistics of Sparse Random Matrices. Jiaoyang Huang. Harvard University

Eigenvector Statistics of Sparse Random Matrices. Jiaoyang Huang. Harvard University Eigenvector Statistics of Sparse Random Matrices Paul Bourgade ew York University E-mail: bourgade@cims.nyu.edu Jiaoyang Huang Harvard University E-mail: jiaoyang@math.harvard.edu Horng-Tzer Yau Harvard

More information

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer. Lecture 5, January 27, 2006

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer. Lecture 5, January 27, 2006 Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer Lecture 5, January 27, 2006 Solved Homework (Homework for grading is also due today) We are told

More information

International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994

International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994 International Competition in Mathematics for Universtiy Students in Plovdiv, Bulgaria 1994 1 PROBLEMS AND SOLUTIONS First day July 29, 1994 Problem 1. 13 points a Let A be a n n, n 2, symmetric, invertible

More information

Density of States for Random Band Matrices in d = 2

Density of States for Random Band Matrices in d = 2 Density of States for Random Band Matrices in d = 2 via the supersymmetric approach Mareike Lager Institute for applied mathematics University of Bonn Joint work with Margherita Disertori ZiF Summer School

More information

Some analysis problems 1. x x 2 +yn2, y > 0. g(y) := lim

Some analysis problems 1. x x 2 +yn2, y > 0. g(y) := lim Some analysis problems. Let f be a continuous function on R and let for n =,2,..., F n (x) = x (x t) n f(t)dt. Prove that F n is n times differentiable, and prove a simple formula for its n-th derivative.

More information

SPECTRAL GAP FOR ZERO-RANGE DYNAMICS. By C. Landim, S. Sethuraman and S. Varadhan 1 IMPA and CNRS, Courant Institute and Courant Institute

SPECTRAL GAP FOR ZERO-RANGE DYNAMICS. By C. Landim, S. Sethuraman and S. Varadhan 1 IMPA and CNRS, Courant Institute and Courant Institute The Annals of Probability 996, Vol. 24, No. 4, 87 902 SPECTRAL GAP FOR ZERO-RANGE DYNAMICS By C. Landim, S. Sethuraman and S. Varadhan IMPA and CNRS, Courant Institute and Courant Institute We give a lower

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Refined Inertia of Matrix Patterns

Refined Inertia of Matrix Patterns Electronic Journal of Linear Algebra Volume 32 Volume 32 (2017) Article 24 2017 Refined Inertia of Matrix Patterns Kevin N. Vander Meulen Redeemer University College, kvanderm@redeemer.ca Jonathan Earl

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

Decoupling Lecture 1

Decoupling Lecture 1 18.118 Decoupling Lecture 1 Instructor: Larry Guth Trans. : Sarah Tammen September 7, 2017 Decoupling theory is a branch of Fourier analysis that is recent in origin and that has many applications to problems

More information

The L p -dissipativity of first order partial differential operators

The L p -dissipativity of first order partial differential operators The L p -dissipativity of first order partial differential operators A. Cialdea V. Maz ya n Memory of Vladimir. Smirnov Abstract. We find necessary and sufficient conditions for the L p -dissipativity

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information

On Expected Gaussian Random Determinants

On Expected Gaussian Random Determinants On Expected Gaussian Random Determinants Moo K. Chung 1 Department of Statistics University of Wisconsin-Madison 1210 West Dayton St. Madison, WI 53706 Abstract The expectation of random determinants whose

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

Average theorem, Restriction theorem and Strichartz estimates

Average theorem, Restriction theorem and Strichartz estimates Average theorem, Restriction theorem and trichartz estimates 2 August 27 Abstract We provide the details of the proof of the average theorem and the restriction theorem. Emphasis has been placed on the

More information

2 Two-Point Boundary Value Problems

2 Two-Point Boundary Value Problems 2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x

More information

INTRODUCTION TO REAL ANALYTIC GEOMETRY

INTRODUCTION TO REAL ANALYTIC GEOMETRY INTRODUCTION TO REAL ANALYTIC GEOMETRY KRZYSZTOF KURDYKA 1. Analytic functions in several variables 1.1. Summable families. Let (E, ) be a normed space over the field R or C, dim E

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

4.7 The Levi-Civita connection and parallel transport

4.7 The Levi-Civita connection and parallel transport Classnotes: Geometry & Control of Dynamical Systems, M. Kawski. April 21, 2009 138 4.7 The Levi-Civita connection and parallel transport In the earlier investigation, characterizing the shortest curves

More information

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES

DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES DISTRIBUTION OF EIGENVALUES OF REAL SYMMETRIC PALINDROMIC TOEPLITZ MATRICES AND CIRCULANT MATRICES ADAM MASSEY, STEVEN J. MILLER, AND JOHN SINSHEIMER Abstract. Consider the ensemble of real symmetric Toeplitz

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Universality of local spectral statistics of random matrices

Universality of local spectral statistics of random matrices Universality of local spectral statistics of random matrices László Erdős Ludwig-Maximilians-Universität, Munich, Germany CRM, Montreal, Mar 19, 2012 Joint with P. Bourgade, B. Schlein, H.T. Yau, and J.

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018

Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March 22, 2018 1 Linear Systems Math 5630: Iterative Methods for Systems of Equations Hung Phan, UMass Lowell March, 018 Consider the system 4x y + z = 7 4x 8y + z = 1 x + y + 5z = 15. We then obtain x = 1 4 (7 + y z)

More information

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION Annales Univ. Sci. Budapest., Sect. Comp. 33 (2010) 273-284 ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION L. László (Budapest, Hungary) Dedicated to Professor Ferenc Schipp on his 70th

More information

arxiv: v2 [math.pr] 16 Aug 2014

arxiv: v2 [math.pr] 16 Aug 2014 RANDOM WEIGHTED PROJECTIONS, RANDOM QUADRATIC FORMS AND RANDOM EIGENVECTORS VAN VU DEPARTMENT OF MATHEMATICS, YALE UNIVERSITY arxiv:306.3099v2 [math.pr] 6 Aug 204 KE WANG INSTITUTE FOR MATHEMATICS AND

More information