Gaussian Fluctuation for the Number of Particles in Airy, Bessel, Sine, and Other Determinantal Random Point Fields

Size: px
Start display at page:

Download "Gaussian Fluctuation for the Number of Particles in Airy, Bessel, Sine, and Other Determinantal Random Point Fields"

Transcription

1 Journal of Statistical Physics, Vol. 00, Nos. 34, 000 Gaussian Fluctuation for the Number of Particles in Airy, Bessel, Sine, and Other Determinantal Random Point Fields Alexander B. Soshnikov Received December 3, 999 We prove the Central Limit Theorem (CLT) for the number of eigenvalues near the spectrum edge for certain Hermitian ensembles of random matrices. To derive our results, we use a general theorem, essentially due to Costin and Lebowitz, concerning the Gaussian fluctuation of the number of particles in random point fields with determinantal correlation functions. As another corollary of the CostinLebowitz Theorem we prove the CLT for the empirical distribution function of the eigenvalues of random matrices from classical compact groups. KEY WORDS: Determinantal random point fields; central limit theorem; random matrices; Airy and Bessel kernels; classical compact groups.. INTRODUCTION AND FORMULATION OF RESULTS Random Hermitian matrices were introduced in mathematical physics by Wigner in the fifties [Wig, Wig]. The main motivation of pioneers in this field was to obtain a better understanding of the statistical behavior of energy levels of heavy nuclei. An archetypical example of random matrices is the Gaussian Unitary Ensemble (G.U.E.) which can be defined by the probability distribution on a space of n-dimensional Hermitian matrices as P(dA)=const n } e &n Trace A da (.) Here da is the Lebesgue measure on n -parameters set [Re a ij,i< jn; Ima ij,i< jn, a ii,in] (.) Department of Mathematics, California Institute of Technology, Pasadena, California 95, and Department of Mathematics, University of California, Davis, Davis, California Plenum Publishing Corporation

2 49 Soshnikov and const n =(?n) &n } n(n&) is a normalization constant. (.) implies that matrix entries (.) area independent Gaussian random variables N(0, (+$ ij )8n). It is well known that the G.U.E. is the only ensemble of Hermitian random matrices (up to a trivial rescaling) that satisfies both of the following properties: () probability distribution P(dA) is invariant under unitary transformation A U & AU, U # U(n) () matrix entries up from the diagonal are independent random variables (see [Me, Chap. ]). The n eigenvalues, all real, of a Hermitian matrix A will be denoted by *, *,..., * n. For the formulas for their joint distribution density p n (*,..., * n ) and k-point correlation functions \ n, k (*,..., * k ) we refer to [Me]. One has p n (*,..., * n ) =const$ n } ` i< jn * i &* j } exp \&n } : n \ n, k (*,..., * k ):= n! (n&k)! p n (*,..., * n ) d* k+ }}}d* n R n&k i= * i (.3) + =det(k n (* i, * j )) k i, j= (.4) where K n (x, y) is a projection kernel, and n& K n (x, y)=- n } : l=0 l (x)= (&)l? 4 }( l } l!) } exp \ x l (- n x)} l (- n } y) (.5) + } d l dx l exp(&x ) (.6) l=0,,..., are WeberHermite functions. The global behavior of eigenvalues is governed by the celebrated semicircle law, which states that the empirical distribution function of the eigenvalues weakly converges to a nonrandom (Wigner) distribution: F n (*)= n *[* i*] ww w F(*)= n * \(x) dx (.7) &

3 Determinantal Random Point Fields 493 with probability one ([Wig, Wig]) where the spectral density \ is given by \(t)={? - &t, 0, t t > (.8) To study the local behavior of eigenvalues near an arbitrary point in the spectrum x # [&, ], one has to consider rescaling * i =x+ y i, i=,..., k (.9) \ n, (x) and study the rescaled k-point correlation functions R n.k ( y,..., y k ):=(\ n, (x)) &k } \ n, k (*,..., * k ) (.0) The biggest interest is paid to the asymptotics of rescaled correlation functions when n goes to infinity. For G.U.E. the answer can be obtained from the PlancherelRotach asymptotic formulas for Hermite polynomials [PR]: lim R n, k ( y,..., y k )=\ k ( y,..., y k )=det(k( y i, y j )) k i, j= (.) n The kernel K actually also depends on x but in a very simple way. It can be represented as A( y)}a$(z)&a(z) A$( y) K( y, z)= y&z (.) where for all x < the function A is just sin(?y)?, and for x=\ it is Ai(\y)=? cos 0 \ 3 t3 \yt dt (.3) + The function defined by (.3) is known as the Airy function and the kernel (.)(.3) is known as the Airy kernel (see [Me, TW, F]). The limiting correlation functions (.)(.3) determine a random point field on the real line, i.e., probability measure on the Borel _-algebra of the space of locally finite configurations, 0=[ =(y i ) i # Z : \T>0*[y i : y i <T]<] (.4)

4 494 Soshnikov The distribution of random point field is uniquely defined by the generating function k (z,..., z k ; I,..., I k )=E ` where I j, j=,..., k, are disjoint intervals on the real line, & j =*[y i # I j ]= *(I j ), the number of particles in I j, and k # Z +. It follows from the general theory of existence and uniqueness for random point fields [L, L], that if K( y, z) is locally bounded than determinantal correlation functions uniquelly determine random point field assuming that such random point field exists. The generating function (z,..., z k ) is given by Fredholm determinant of the intergal operator in L (R ): k (z,..., z k )=det($(x& y)+ : j= j= z & j j (z j &) } K(x, y)}/ Ij ( y)) (.5) where / Ij is an indicator of I j. In particular these results are applicable to the Airy kernel (.)(.3). We shall call the corresponding random point field the Airy random point field. For one-level density formulas (.)(.3) produce \ ( y)=&y }(Ai) ( y)+(ai$( y)) (.6) The asymptotic expansion of the Airy function is well known (see [Ol]). One can deduce from it? \ ( y)t{ y &cos(4 } y 3 3) +0( y &5 ) 4? } y 7 96 }?y } exp(&4y3 3) as as y & y +. (.7) \ satisfies the third order differential equation \ $$$( y)=&\ ( y)+4y } \$ ( y) (.8) One can think about the one-point correlation function as a level density, since for any interval I/R we have E*(I)= I \ ( y) dy. It follows from (.7) that E*((&T, +)) is finite for any T and E*((&T, +))t (T 3 3?)+0 () when T goes to +. The last formula means that E*((&T, +))&(T 3 3?) stays bounded for large positive T. Let us denote & (T ):=*[y i >&T]=*((&T, +)), & k (T ):=*((&kt, &(k&) T ])), k=, 3,... Theorem establishes the Central Limit Theorem for & k (T ), k.

5 Determinantal Random Point Fields 495 Theorem. The variance of & k (T ) grows logarithmically Var & k (T )t? } log T+0 () and the sequence of normalized random variables (& k (T )&E& k (T )) - Var & k (T ) converges in distribution to the centalized gaussian random sequence [! k ] with the covariance function E! k! l =$ k, l & $ k, l+& k, l&. Remark. The first result about Gaussian fluctuation of the number of particles in random matrix model was established by Costin and Lebowitz [CL] for the kernel (sin?(x& y))?(x& y). See Section for a more detailed discussion. Remark. Basor and Widom [BaW] recently proved the Central Limit Theorem for a large class of smooth linear statistics + f( y i=& it ) where f satisfies some decay and differentiality conditions. Similar results for smooth linear statistics in other random matrix ensembles were proven in [Sp], [DS], [Jo], [SiSo], [KKP], [SiSo], [Ba], [BF], [BdMK], [Br]; see also [So] for the results about global distribution of spacings. Another class of random Hermitian matrices, called Laguerre ensemble, was introduced by Bronk in [Br]. This one is the ensemble of positive n_n Hermitian matrices. Any positive Hermitian matrix H can be represented as H=AA*, where A is some complex valued n_n matrix and A* is its conjugate. The distribution on such matrices is defined as P(dH)=const" n } exp(&n } Trace A } A*) } [det(aa*)] : da (.9) where :>& and da is Lebesgue measure on n -dimensional space of complex matrices. The joint distribution of n (positive) eigenvalues of H is given by p n (*,..., * n )=const n $$$ } exp \&n } : n i= * i+ } ǹ i= * : i } ` i< jn (* i &* j ) (.0) The Vandermonde factor in (.0) implies that correlation functions still have the determinantal form (.4) with the kernel n& K n (x, y)=n } : l=0, l (nx)}, l (ny) (.)

6 496 Soshnikov where the sequence [, l (x)] is obtained by orthonormalizing the sequence [x k } x : } e &x ] on (0, +). The limiting level density is supported on [0, ] and given by the formula \ (x)=? x& }(&x) (.) (It is not surprising to see that (.) is the density of a square of the Wigner random variable!) PlancherelRotach type asymptotics for Laguerre polynomials [E], [PR] imply that by scaling the kernel K n (x, y) in the bulk of the spectrum, we obtain the sine kernel (sin?(x& y))?(x& y), and by scaling at x= (``soft edge''), we obtain the Airy kernel. As we already know, the same kernels appear after rescaling in G.U.E. This feature, called universality of local correlations, has been established recently for a variety of ensembles of Hermitian random matrices (see [PS], [BI], [DKMVZ], [Jo], [So], [BZ]). To scale the kernel at the ``hard edge'' x=0, we need an asymptotic formula of Hilb's type (see [E]), which leads to lim n 4n n\ K x 4n, y 4n+ =J :(- x)}-y } J$ : (- y)&- xj$ : (- x)}j : (- y) (x& y) (.3) where J : is the Bessel function of order : [F, TW, Ba]. The kernel (.3) is also known to appear at hard edges in the Jacobi ensemble [NW]. For a quick reference, we note that in the Jacobi case, a sequence [, l (x)] from (.) is obtained by orthonormalizing [x k (&x) : (+x) ; ]. The random point field on [0, +] with the determinantal correlation functions defined by (.3) will be referred to as the Bessel random point field. There is a general belief among people working in random matrix theory that in the same way as the sine kernel appears to be a universal limit in the bulk of the spectrum for random Hermitian matrices, Airy and Bessel kernels are universal limits at the soft and hard edge of the spectrum. The next theorem establishes the CLT for & k (T )=*(((k&) T, kt ]), k=,,... Theorem. Let &(T ) be the number of particles in (0, T ) for the Bessel random point field. Then E& k (T )t? T (k &(k&) )+0 () Var & k (T )t 4? log T+0 ()

7 Determinantal Random Point Fields 497 and the sequence of the the normalized random variable (& k (T )&E& k (T )) - Var & k (T ) converges in distribution to the gaussian random sequence from the Theorem. Theorems and, as well as similar results for the random fields arising from the classical compact groups (see Section 4) are the corollaries of the general result about determinantal random point fields, which is essentially due to Costin and Lebowitz. Recently a number of discrete determinantal random point fields appeared in two-dimensional growth models [Jo3, Jo5], asymptotics of Plancherel measures on symmetric groups and the representation theory of the infinite symmetric group [BO, BO, BOO, Jo4, Ok, Ok]. If one can show the infinite growth of the variance of the number of particles in these models (the goal which is probably attainable since the asymptotics of the discrete orthogonal polynomials arising in some of these problems are known) the CostinLebowitz theorem should work there as well. The rest of the paper is organized as follows. We discuss the general (CostinLebowitz) theorem in Section. Theorems and will be proven in Sections 3 and 4. In the Bessel case, we will see that the kernel (sin?(x& y)?(x& y))\(sin?(x+ y)?(x+ y)) naturally appears in our considerations. We recall in Section 4 that the sine kernel also appears in the limiting distribution of eigenvalues in unitary group and the even and odd sine kernels appear in the distribution of eigenvalues in orthogonal and symplectic groups and then prove the Gaussian fluctuation for the number of eigenvalues in these models in Theorem 36.. THE CENTRAL LIMIT THEOREM FOR DETERMINANTAL RANDOM POINT FIELDS Let [P t ] t # R + be a family of random point fields on Rd such that their correlation functions have determinantal form at the r.h.s. of (.) with Hermitian kernels K t ( y, z), and [I t ] a collection of Borel subsets t # R Rd. + We denote by A t an integral operator on I t with the kernel K t ( y, z), A t : L (I t ) L (I t ), by & t the number of particles in I t, & t =*(I t ), and by E t, Var t the mathematical expectation and variance with respect to the probability distribution of the random field P t. In many applications the random point field P t, and therefore the kernel K t will be the same for all t. In such situations I t will be expanding. Theorem (O. Costin, J. Lebowitz). Let A t =K t } / It be a family of trace class Hermitian operators associated with determinantal random

8 498 Soshnikov point fields [P t ] such that Var t & t =Trace(A t &A t ) goes to infinity as t +. Then the distribution of the normalized random variable (& t &E t & t )- Var t & t with respect to the random point field P t weakly converges to the normal law N(0, ). Remark 3. The result has been proven by Costin and Lebowitz t when d=, K t (x, y)=(sin?(x& y))?(x& y) for any t and I t ww (see [CL]). The original paper contains a remark, due to Widom, that the result holds for more general kernels. Remark 4. There is a general result that a (locally) trace class Hermitian operator K defines a determinantal random point field iff 0K (see [So4] or, for a slightly weaker version, [Ma]). The idea of the proof is very clear and consists of two parts. Let us denote the lth cumulant of & t by C l (& t ). We remind that by definition : l= C l (iz) l l!=log(e t exp(iz& t )) Lemma. The following recursive relation holds for any l: l& C l (& t )=(&) l }(l&)! Trace(A t &A l )+ : t s= : sl C s (& t ) (.) where : sl,sl&, are some combinatorial coefficients (irrelevant for our purposes). The proof can be found in [CL] or [So, Section ]; (of course one has to replace everywhere (sin?(x& y))?(x& y) by K t (x, y)). For the convinience of the reader we sketch the main ideas here. We start by introducing the Ursell (cluster) functions: and, in general, r (x )=\(x ), r (x, x )=\ (x, x )&\ (x ) \(x ) k r k (x,..., x k )= : m= : G m (&) m& (m&)! ` j= r Gj (x (G j )) (.) where G is a partition of indices [,,..., k] into m subgroups G,..., G m, and x (G j ) stands for the collection of x i with indices in G j.

9 Determinantal Random Point Fields 499 It appears that the integral of k-point Ursell function r k (x,..., x k ) over dk-dimensional cube I t _}}}I t is equal to the linear combination of C j (& t ), j=,..., k. Namely, let us denote T k (& t )= It }}} It r k (x,..., x k ) dx }}}dx k Then : k C k (iz) k k!= : k= (exp(z)&) k T k (& t )k! (.3) Taking into account that for the determinantal random point fields T k (& t )=(&) k }(k&)! Trace(A t ) k the last two equations imply (.). The next lemma allows us to estimate Trace(A t &A l t ). Lemma. 0Trace(A t &A l t )(l&) } Trace(A t &A t ). The proof is elementary: 0Trace(A t &A l t )=l& Trace(A j j= t j& &At & } Trace(A t &A )(l&) } Trace(A t t&a ). K t l& j= &A j+ t ) As a corollary of the lemmas we have C l (& t )=0(C (& t )) for any l. Since C (& t )=Trace(A t &A ) ww t t +, we conclude that for l>, C l ((& t &E& t )- Var t & t )=C l (& t )((C (& t )) l t ) ww 0. At the same time the first two cumulants of the normalized random variable are 0 and, respectively. The convergence of cumulants implies the convergence of moments to the moments of N(0, ). The theorem is proven. K To generalize the CostinLebowitz theorem to the case of several intervals we consider I (m) t, m=,..., s, and define & (m) t =*(I (m) t ). The equation : C k,..., k s (iz ) k k!}}}(iz s ) k s ks!=log(e t exp(i(z & () t +}}}+z s & (s) t ))) (.4) defines the joint cumulants of & (m) t 's. Proposition. C k,..., k s (& () t,..., & t (s)) is equal to a linear combination of the traces Trace K t } / (}}}) I t } K t } / (}}}) I t }}}K t } / (}}}) I t

10 500 Soshnikov with some combinatorial coefficients (irrelevant for our purposes), such that for any nonzero k j at least one indicator in each term of the linear combination is the indicator of I ( j) t. The proof immediately follows from the analogue of (.4) for the case of several intervals. In the next section we will apply these results to prove Theorem. 3. PROOF OF THEOREM For the most part of the section we will study the case of one interval (&T, +). We start by recalling the asymptotic expansion of Airy function for large positive and negative y (see [Ol]). A i ( y )t e&?z? } y } : 4 A$ i ( y )t y 4 } e &?z A i (& y )t s=0 } :? s=0? } y 4 } { cos \?z+? 4+ } : +sin \?z+? 4+ } : (&) s } u s z s (3.) (&) s } v s z s (3.) s=0 s=0 (&) s } u s z s (&) s } u s+ z s+= (3.3) 4 y A$ i (& y )t? } { sin \?z+? 4+ } : (&) s+ } v s z s s=0 &cos \?z+? 4+ } : s=0 (&) s+ } v s+ z s+= (3.4) where z=(3?) y } y ; u 0 =v 0 =, and u s =((s+) } (s+3) } }}} } (6s&))(6 }?) } s!, v s =&((6s+)(6s&)) u s, s. In particular, as a consequence of (3.)(3.4) one has (.7). It follows from (3.)(3.4) together with the boundedness of A i ( y), A$ i ( y) on any compact set that for any fixed a # R all moments of *((a, +)) are finite. Therefore it is enough to establish the CLT for *((&T, a)). We choose a=&(3?) 3 ( y=&(3?) 3 corresponds to z=&). We are going to show that the conditions of the theorem from Section are satisfied by / (&T, a) K } / (&T, a), where, as above, this notation is reserved for the integral operator with the kernel / (&T, a) (x)k(x, y)}/ (&T, a) ( y). Lemma 3. class. K 0/ (&T, a) K } / (&T, a) and / (&T, a) K } / (&T, a) is trace

11 Determinantal Random Point Fields 50 The kernel K( y, y )=(A i ( y )}A$ i ( y )&A$ i ( y )}A i ( y ))( y & y ) was obtained from K n (x, x )=- n } n& l=0 l (- nx )} l (- n } x ) after rescaling x i =+(y i n 3 ), i=,, and taking the limit n. The convergence is uniform on compact sets. As a projection operation K n satisfies 0K n. We immediately conclude that / (&T, a) K } / (&T, a) satisfies the same inequalities and since the kernel is continuous and non-negative definite the operator is trace class (see, e.g., [GK] or [RS, Section XI.4]). Now the main step of the proof consists of Proposition. Var \* \ &T, & \ 3? t? log T+0 () Proof. We introduce the chana5 of variables z i = 3? y i } y i (3.5) and agree to use the notations Q(z, z ), q k (z,..., z k ), k=,,..., for the kernel and k-point correlation function of the new random point field obtained after the change of coordinates. It follows from (.7) that q (z)t +(cos?z6?z)+0 (z & ) for z &, so we see that the configuration (z i ) is equally spaced at &. The kernel Q(z, z ) is defined by? Q(z, z )= y 4 } y } K( y, y 4 )? = y 4 } y } A i( y )}A$ i ( y )&A$ i ( y )}A i ( y ) 4 y & y Formulas (3.3)(3.4) allow us to represent Q as the sum of six kernels Q (i), i=,..., 6, with the known asymptotic expansions: Q(z, z )= 6 i= Q(i) (z, z ), where Q () (z, z )t 3? } } sin?(z z 3 &z ) &z3 } { : m, n=0 (&) m+n } u m } v n }(z &m&3 } z &n +z &n } z &m&3 ) = (3.6)

12 50 Soshnikov Q () (z, z )t 3? } } cos?(z z 3 &z 3 +z ) } { : m, n=0 (&) m+n } u m } v n }(z &n } z &m&3 &z &m&3 } z &n ) = (3.7) Q (3) (z, z )t 3? } } cos z 3 &z 3 \?z +? 4+ } cos \?z +? 4+ } { : m, n=0 (&) m+n+ } u m } v n+ }(z &m&3 } z &n& &z &n& } z &m&3 ) = (3.8) Q (4) (z, z )t 3? } } sin z 3 &z3 \?z +? 4+ } sin \?z +? 4+ } { : m, n=0 (&) m+n } u m+ } v n }(z &m&43 } z &n &z &n } z &m&43 ) = (3.9) Q (5) (z, z )t 3? } } sin?(z z 3 &z ) &z3 } { : m, n=0 (&) m+n+ } u m+ } v n+ }(z &m&43 } z &n& +z &n& } z &m&43 ) = (3.0) Q (6) (z, z )t 3? } } cos?(z z 3 +z ) &z3 } { : m, n=0 (&) m+n } u m+ } v n+ }(z &n& } z &m&43 &z &m&43 } z &n& ) = (3.)

13 Determinantal Random Point Fields 503 We denote by Q (i) m, n (z, z ) the (m, n)th term in the asymptotic expansion of Q (i) (z, z ). Then 0, 0(z, z )= sin?(z &z ) } Q () z 3 &z3 = sin?(z &z )?(z &z ) 3? }(z&3 +z &3 ) } z3 +z 3 } z 3 +z 3 3}z 3 } z 3 We note that near the diagonal Q () 0, 0 is essentially the sine kernel. We also will need Q () 0, 0(z, z )= cos?(z +z )?(z +z ) } z3 &z3 } z 3 +z3 3}z 3 } z 3 Let us define S(z, z )=Q () 0, 0 (z, z )+Q () 0, 0 (z, z ), U(z, z )=Q(z, z )& S(z, z ). Lemma 4. & & (Q () (z 0, 0, z )) dz dz =L& &L &L 3? log L+0 () (3.) Proof. The integral can be written as 9 L L \ sin?(z &z ) }?(z &z ) + \z3 +z3 } z 3 +z3 dz z 3 } z + 3 dz = 9 } L& sin?u 0 \?u + } L&u _\ z+u 3 ++ z + \ z+u+ z 3 We represent the inner integral as L&u _\ & dz du z+u 3+ } 3+ z + \z+u z + \ z+u+ z 3+ } \ z+u+ z 3 +3 & dz =I (u)+i (u)+3 } (L&u&) (3.3) with I (u)= L&u \ I (u)= L&u \ z+u } z + \z+u dz z + z+u+ z 3 + } \ z+u+ z 3 dz

14 504 Soshnikov To calculate I (u) we introduce the change of variables t=((z+u)z) 3. Then We have I (u)= (+u)3 (t +t)} \ u 3+ $ dt (L(L&u)) 3 &t =((+u) 3 + } (+u) 3 )}(&) & \\ L&u+ L 3 + } \ L&u+ L 3 + }(&L+u) +u (+u)3 (t+) } dt (3.4) (L(L&u)) 3 t 3 & and (t+) } (L(L&u)) 3 t 3 & dt u } (+u)3 =u } (+u)3 (L(L&u)) } \ t& & t+ t +t++ dt = 4u 3 } log[(+u)3 &]& 4u 3 log _\ L&u+ L 3 & & & u 3 log[(+u)3 +(+u) 3 +] + u 3 log _\ L&u+ L 3+ \ L&u+ L 3 + & (3.5) 3 3 I (u)=&(+u) 3 &(+u) 3 +L } L+ \&u +L } L+ \&u + 4u 3 log[(+u)3 &]& 4u 3 log _\ L&u+ L 3 & & + u 3 log _\ L&u+ L 3+ \ L&u+ L 3 + & & u 3 log[(+u)3 +(+u) 3 +] (3.6)

15 Determinantal Random Point Fields 505 In a similar way in order to calculate I (u)= +u\ L z&u } z + \z&u dz z + we consider the change of variables t=((z&u)z) 3. Then I (u)= (L(L&u))&3 (+u) &3 (t +t)} \ u &t 3+ $ dt 3 3 = \\L&u + } L + \L&u L + + } L &((+u) &3 + } (+u) &3 )}(+u) +u } (L(L&u))&3 (t+) } (+u) &3 t 3 & dt 3 3 =&(+u) 3 &(+u) 3 +L } L+ \&u +L } L+ \&u u } log _ & \ L&u & 3 u } log _\ L&u 3 + L + \L&u 3 L + & &4 3 u } log[&(+u)&3 ] 3 L + + & + 3 u } log[(+u)&3 +(+u) &3 +]. (3.7) It follows from (3.3) that & & (Q () (z 0, 0, z )) dz dz &L &L We note that = 9 } L& 0 \ L& 0 \ sin?u }(I?u + (u)+i (u)+3l&3u&3) du. (3.8) sin?u } Ldu=?u + L +0 (), (3.9) L& 0 (sin?u)?u du=? log L+0 (), (3.0)

16 506 Soshnikov L& 3 sin?u } L } 0 \?u + L+ \&u du L& 0 \ =L } L& 0 \ sin?u?u + u } \& 3 L +0 \ u L ++ du = L & 6? log L+0 (), (3.) 3 sin?u } L }?u + L+ \&u du = L & 3? log L+0 (). (3.) The combined contribution of all other terms to (3.8) is 0 (). Indeed, L& 0 \ L& 0 \ L& 0 \ L& 0 \ sin?u }(+u)?u + 3 =0(), sin?u }(+u)?u + 3 =0(), sin?u }?u + 4u 3 log[(+u)3 &] du& L& sin?u 0 \?u + } u 3 } log[(+u)3 +(+u) 3 +] du = 3? L& sin?u 0? u } log ((+u) _ 3 &) (+u) 3 +(+u) +& du 3 = u 3? L& 0 L& 0 \ sin?u }0(u &3 ) du=0(), u sin?u }?u + 4u 3 log[&(+u)&3 ] du=0(), sin?u }?u + u 3 log[(+u)&3 +(+u) &3 +] du=0().

17 Determinantal Random Point Fields 507 The last expression to consider is & L& 0 \ sin?u }?u u } log _\ L&u+ L 3& & } 3 u } log _\ L&u+ L 3 + \ L&u+ L } log \ L&u _& L + 3& du& L& 0 \ } log 3+ _\L&u L + \L&u L + = 3? L& 0 3+ & du+ L& 0 \ sin?u?u & du sin?u _(&((L&u)L)3 ) } log u ((L(L&u)) 3 &) _ ((L(L&u))3 +(L(L&u)) 3 +) ((L&u)L) 3 +((L&u)L) 3 +)& du = 3? L& 0 sin?u } log \ L u L&u+ du = 3? } L& u } \ &log \ &u L++ du+0 () = 3? } L& = 3? } : k= u } : k } \ u k du+0() L+ k= k +0 ()=0(). du+ L& 0 \ } 3 u sin?u?u + sin?u?u + } 4 3 u Combining all the above integrals and looking specifically for the contributions from (3.9)(3.) we obtain & & (Q () (z 0, 0, z )) dz dz &L &L = 9\ 3}L &3 }? log L+L & 6? } log L+ } L & } 3? } log L + L & 3? log L+ } L & } 6? log L+0 () + =L& 3? log L+0 ().

18 508 Soshnikov In the next lemma we evaluate the integrals involving Q () 0, 0. Lemma 5. (a) & &L & &L (Q() 0, 0, z )) dz dz =(8? ) log L+0(). (b) & &L & &L Q() 0, 0, z )}Q () 0, 0, z ) dz dz =0(). Proof. The integral in part (a) can be written as 9 L L \ cos?(z +z ) }?(z +z ) + \z3 = 9 L+ \ We denote the inner integral by I 3 (u) := u _\ =} u &z3 } z 3 z 3 } z 3 +z3 + dz dz cos?u?u + } u _\ u&z+ z 3 3 &+ \u&z dz du+0(). z + & u&z+ z 3 & \ u&z+ z \u&z & z + \u&z + z + & dz \ u&z 3 3 & z + \u&z dz+(u&) z + =} 0 (t &t)} \ u (u&) 3 t ++ $ dt+(u&) 3 =6u } (u&)3 0 = u \+6 0 (t &t)}t (t 3 +) dt+(u&) t 4 &t 3 (t 3 +) dt + +0 (u 3 ). Taking into account 0 ((t4 &t 3 )(t 3 +) ) dt=0, we obtain I 3 (u)= u+0 (u 3 ), and 9 L+ cos?u }(u+0(u \?u + 3 )) du= 8? log L+0 ()

19 Determinantal Random Point Fields 509 Now let us consider the integral in part (b): 9 L L cos?(z +z ) } sin?(z &z )?(z +z )? }(z &z ) \z3 _ +z3 } z 3 +z3 = 9 } L z 3 } z + 3 cos?u?u } u& 0 } \ z 3 &z3 } z 3 z 3 } z + 3 +z3 dz dz 3 sin?v } 3+?v u&v+ u+v+ _\u+v \u&v + dv du & It is not difficult to see that because of the oscillations of trigonometric functions this integral is of order of constant. To show this we write the inner integral as I 4 (u)+i 5 (u)+i 6 (u), where I 4 (u)= u& 0 I 5 (u)= u& 0 I 6 (u)= u& 0 3 sin?v }?v u&v+ \u+v dv 3 sin?v }?v u&v+ \u+v dv sin?v?v Integration by parts gives I 6 (u)= +0 (u & ). Consider now I 4 (u)= u&u4 0 \ + u& u&u dv sin?v?v + } \ u+v 3 dv u&v+ 4\ sin?v 3 u&v+?v + } \ u+v dv (3.3) The second integral in (3.3) is 0 (u 4 } u & } u 3 )=0 (u & ). As for the first one we introduce /([v] even), an indicator of the set where the integer part of v is even, and write the integral in the following form u&u4 0 sin?v } /([v] even)? } \ v } \ u+v 3 & u&v+ v+\ u+v+ 3 u&v&+ + dv+0 (u & )

20 50 Soshnikov The absolute value of the last expression is estimated from above by u&u4 0 sin?v? = u&u4 0 + u&u4 0 =0 (u &6 ) } }(u&v)3 &(u&v&) 3 }(v+) \v dv v }(v+) } (u&v) 3 }(u&v&) 3+ sin?v } \ (u&v) 3 &(u&v&) 3 dv? (v+) } (u&v) 3 }(u&v&) 3+ sin?v } \? v }(v+) } (u&v) 3) dv Therefore I 4 (u)=0 (u &6 ), and similarly I 5 (u)=0 (u & ). As a result 9 L sin?u?u (I 4(u)+I 5 (u)+i 6 (u)) du = 8 L sin?u?u du+ 9 L sin?u?u }0 (u & ) du=0() Lemma 5 is proven. K As a result of the last two lemmas we have & & &L & S (z, z ) dz dz =L& log L+ 3? 8? log L+0 () =L& 8? log L+0 () Remember that S was defined as S(z, z )=Q () 0, 0 (z, z )+Q () 0, 0 (z, z ). To finish the proof of the CLT for *(&T, +) we just need to show that the remainder term U(z, z )=Q(z, z )&S(z, z ) is negligible in the following sense: Lemma 6. (a) & &L & &L (z, z ) dz dz =0() (b) & &L & &L, z )}S(z, z ) dz dz =0() (c) & &L () Proof. We shall establish part (a). The estimates (b) and (c) can be obtained in a similar manner. Repeating the calculations of the last two

21 Determinantal Random Point Fields 5 lemmas, it is easy to see that for any fixed indices (i, m, n) such that (i&)(i&)+m+n>0, we have & & (Q (i) (z m, n, z )) dz dz =0() &L &L Let us now choose N to be sufficiently large and write U(z, z )= U N (z, z )+V N (z, z ), where U N (z, z )= : i= : 0<m+nN 6 Q (i) (z m, n, z )+ : i=3 : 0m+N Q (i) m, n (z, z ) We see that & &L & &L (U N(z, z )) dz dz =0 (). Asymptotic formulas (3.6)(3.) imply that V N (z, z ) const N z 3 &z3 }(z&n +z &N n ) const N z &z } 3 }(z3 +z3 )}(z&n +z &N ) (3.4) It follows from (3.4) that if we choose N then z &z (z ) (V n (z, z )) dz dz =0 () (3.5) (The integration in (3.5) is over the subset of [, L]_[, L]). Indeed, to estimate the integral over z we write z &z (z )\ z (z &z ) } z N dz + L L z 4 (z &z ) + } z3 } z 4N dz z 3 (z &z ) + } z&4 dz =0 (z &43 ) Integrating over z we arrive at (3.5). To integrate V N near the diagonal we observe that kernels Q(z, z ), Q (i) (z m, n, z ) are bounded in [, +)_ [, +); therefore there exists some const$ N such that V N (z, z ) const$ N and z &z (z ) (V N (z, z )) dz dz + (( } const$ N )z ) dz =0(). Lemma 6 is proven. K

22 5 Soshnikov Taking L=(3?) T 3 we deduce from Lemmas 46 and (3.5) that Var \ \* y i # \&T, & \ 3? = &(3?)3 &T K( y, y) dy& &(3?)3 &T &(3?)3 K ( y, y ) dy dy &T = 3? T 3 +0()& 3? T 3 + 8? log \ 3+ 3? T +0 () =? log T+0 () This finishes the proof of Proposition as well as the proof of the CLT for *(&T, +). In a very similar way one proves the CLT for arbitrary & k (T )= *((&kt, &(k&) T )), k>. To prove the result for the joint distribution of [& k (T )] we note that the decay of K(x, y) off the diagonal implies C, (& k (T ), & l (T ))=Cov(& k (T ), & l (T )) This together with implies that Var \ : k l= = &(k&) T &kt (l&) T &lt K (x, x ) dx dx =&Trace / [&lt, &(l&) T ) } K } / [&kt, &(k&) T ) } K =0 () if k&l > & l (T ) +=(? ) log T+0 () Var(& k (T ))=(? ) log T+0 (), k=,,... C, (& k (T ), & l (T ))=Cov(& k (T ), & l (T )) for k&l =. Therefore as T. E & k(t )&E& k (T ) - Var & k (T ) =&(4? ) log T+0 () } & l(t )&E& l (T ) $ k, l & $ k, l& & $ k, l+ - Var & l (T )

23 Determinantal Random Point Fields 53 To take care of the joint cumulants of higher order it is enough to prove Lemma 7. Let at least two indices in (k,..., k s ) are non-zero. Then C k,..., k s (& (T ),..., & s (T ))=0 (log T ) Proof. According to Proposition Lemma 7 follows from Lemma 8. Trace / l K/ l K }}}K/ ls K/ l =0 (log T ) where / lj are the indicators of the intervals (&l j T,&(l j &) T ], l j # Z + and at least two intervals are disjoint. Proof. This has been already established for s=. Let s>. Since not all indices coincide by cyclicity of the trace we may assume l {l. Now if l =l 3 we can use the positivity of / l K/ l K/ l to write Trace(/ l K/ l K/ l K/ l4 }}}K/ ls K/ l ) Trace(/ l K/ l K/ l )}&K/ l4 }}}K/ ls K/ l & Trace(/ l K/ l K/ l ) where we used &K&, &/ lj &. Since Trace(/ l K/ l K/ l )=0 (log T ) Lemma 8 is proven when l =l 3 {l. If l {l 3 one more trick is needed. Let us denote D =/ l K/ l, D =K/ l3 K }}}/ ls K/ l Then Trace(/ l K/ l K/ l3 }}}/ ls K/ l ) =Trace(D D )(Trace(B B *)) (Trace(D D*)) (see [RS, Vol. I, Section VI.6]). As before Trace(D D *)=Trace(/ l K/ l K/ l )=0 (log T ) To obtain a similar bound for Trace(D D *) we define <ps as the maximal index such that l p {l. Since we assume in Lemma 8 that there are at least two different indices, such p always exists. Then Trace(K/ l3 K }}}/ lp K/ l }}}K/ l }}}K/ l )}(K/ l3 K }}}/ lp K/ l }}}K/ l }}}K/ l )* =Trace(K/ l3 K }}}/ lp K/ l }}}K/ l }}}K/ l ) }(/ l K }}}/ l K }}}K/ l K/ lp }}}K/ l3 K )

24 54 Soshnikov Using the identity Trace(D D )=Trace(D D ) where D =K/ l3 K }}}/ lp& K D =/ lp K/ l K }}}/ l K/ l }}}/ l K/ lp K/ lp& K }}}K/ l3 K we can rewrite and estimate the r.h.s. as Trace(/ lp K/ l }}}/ l K }}}/ l K/ lp )}(K/ lp& }}}K/ l3 KK/ l3 K }}}/ lp& K ) Trace(/ lp K/ l }}}K/ l K }}}/ l K/ lp )}&K/ lp& }}}K/ l3 KK/ l3 K }}}/ lp& K& Here we used the positivity of / lp K/ l K/ l }}}K/ l }}}K/ l. The norm of the last factor again is not greater than. Finally, Trace(/ lp K/ l K/ l }}}K/ l }}}/ l K/ lp )=Trace(/ l K/ lp K/ l }}}K/ l }}}K/ l ) Trace(/ l K/ lp K/ l )}=0(log T ) Here we also used cyclicity of the trace. Combining the estimates for Trace D D * and Trace D D* we finish the proof of the Lemmas 7 and 8. K It follows from Lemma 7 that the higher joint cumulants of the normalized random variables go to zero which implies that the limiting distribution function is Gaussian with the known covariance function. Theorem is proven. K 4. PROOF OF THEOREM AND SIMILAR RESULTS FOR THE CLASSICAL COMPACT GROUPS The Bessel kernel has the form (see Section ) K( y, y )= J :(- y )}- y } J$ : (- y )&- y } J$ : (- y )}J : (- y ) ( y & y ) The level density is given by y, y #(0,+), :>& (4.) \ ( y)=k( y, y)= 4 J :(- y) & 4 J :+(- y)}j :& (- y) (4.) The asymptotic formula for large y is well known in the case of Bessel functions (see asymptotic expansion in (4.5) below). In particular, one can see that \ ( y)t for y + (4.3)? - y

25 Determinantal Random Point Fields 55 The last formula suggests to make the (unfolding) change of variables z i =- y i?, i=,. The kernel Q corresponding to the new evenly spaced random point field is given by Q(z, z )=?y 4 } y 4 } K( y, y ) =z z } J :(?z )}?z } J$ : (?z )&?z } J$ : (?z )}J : (?z ) z &z (4.4) The asymptotic expansion of the Bessel function at infinity is given by (see, for example, [Ol]) where J : (z)t \?z+ } \ _cos z& :?& 4? + } : (&) s } A s(:) z s s=0 &sin \z& :?& 4? + } : s=0 (&) s } A s+(:) (4.5) z & s+ A 0 (:)=, A s (:)= (4: & )}(4: &3 )}}}(4: &(s&) ) s!}8 s (4.6) Similarly, J$ : (z)t \?z+ } _ &sin \ z& :?& 4? + } : (&) s } B s(:) z s s=0 &cos \z& :?& 4? + } : s=0 (&) s } B s+(:) (4.7) z & s+ The coefficients B s (:) can be obtained from (4.5)(4.6); for example, B 0 (:)=. It follows from (4.5)(4.7) that for :=\ Q(z, z )= sin?(z &z ) + sin?(z +z &:&)?(z &z )?(z +z ) (4.8) which can be further simplified as sin?(z &z ) sin?(z +z )?(z &z )?(z +z )

26 56 Soshnikov For general values of : a small remainder term appears at the r.h.s. of (4.8). To write the asymptotic expansion, we represent Q(z, z ) as the sum of six kernels: Q(z, z )= 6 i= Q(i) (z, z ), where Q () (z, z )t sin?(z &z )?(z &z) } : (&) n+m } A n (:) B m (:) n, m=0 }(z &n } z &m +z &m } z &n ) (4.9) Q () (z, z )t sin?(z +z &:&)?(z &z) } : (&) n+m } A n (:) n, m=0 } B m (:)}(z &n } z &m &z &m } z &n ) (4.0) Q (3) (z, z )t cos(?z &(?:)&(?4)) } cos(?z &(?:)&(?4))?(z &z ) } : n, m=0 (&) n+m+ } A n (:)}B m+ (:) }(z &n } z &m +z &m } z &n ) (4.) Q (4) (z, z )t sin(?z &(?:)&(?4)) } sin(?z &(?:)&(?4))?(z &z ) } : n, m=0 (&) n+m } A n+ (:)}B m (:) }(z &&n } z &m +z &m } z &&n ) (4.) Q (5) (z, z )t sin?(z &z )?(z &z) } : (&) n+m+ } A n+ (:)}B m+ (:) n, m=0 }(z &&n } z &m +z &m } z &&n ) (4.3) Q (6) (z, z )t sin?(z +z &:&)?(z &z) } : (&) n+m+ } A n+ (:) n, m=0 } B m+ (:)}(z &&n } z &m &z &m } z &&n ) (4.4) The analysis of (4.9)(4.4) is very similar to Section 3. One can see that the only contribution to the leading term of the variance comes from

27 Determinantal Random Point Fields 57 Q () 0, 0 (z, z )+Q () 0, 0 (z, z ) which is exactly the r.h.s. of (4.8). It can be shown by a straightforward calculation that L 0 L 0 L 0 (Q () 0, 0 (z, z)+q() (z, z)) dztl+0 0, 0 () (Q () (z 0, 0, z )+Q () (z 0, 0, z ))) dz dz tl&? log L+0 () Taking into account that L=T? we finish the proof. K The kernels (sin?(x& y)?(x& y))\(sin?(x+ y)?(x+ y)) are well known in Random Matrix Theory. For one, they are the kernels of restrictions of the sine-kernel integral operator to the subspaces of even and odd functions and play an important role in spacings distribution in G.O.E. and G.S.E. [Me]. They also appear as the kernels of limiting correlation functions in orthogonal and symplectic groups near *= [So]. Let us start with the even case. Consider the normalized Haar measure on SO(n). The eigenvalues of matrix M can be arranged in pairs: exp(i% ), exp(&i% ),..., exp(i% n ), exp(&i% n ), 0%, %,..., % n? (4.5) In the rescaled coordinates near the origin x i =(n&) } (% i?), i=,..., n, the k-point correlation functions are equal to R n, k (x,..., x k )=det \ sin?(x i &x j ) (n&) } sin(? }(x i &x j )(n&)) sin?(x) i+x j ) + (n&) } sin(?(x i +x j )(n&))+ i, j=,..., k (4.6) In the limit n the kernel in (4.6) becomes (sin?(x i &x j )?(x i &x j ))+ (sin?(x i +x j )?(x i +x j )). If we consider resealing near arbitrary 0<%<? the limiting kernel will be just the sine kernel. Let us now consider the SO(n+) case. The first n eigenvalues of M # SO(n+) can be arranged in pairs as in (4.5). The last one equals. In the resealed coordinates near %=0 x i = n% i?, i=,..., n

28 58 Soshnikov the k-point correlation functions are given by the formula R n, k (x,..., x k ) =det \ sin?(x i &x j ) n } sin(? }(x i &x j )n) & sin?(x i +x j ) n } sin(? }(x i +x j )n)+i, j=,..., k (4.7) In the limit n the kernel in (4.7) becomes (sin?(x i &x j )? }(x i &x j )) &(sin?(x i +x j )? }(x i +x j )). If we again consider rescaling near 0<%<?, the limiting kernel appears to be the sine kernel. The case of symplectic group is very similar. Let M # Sp(n). Rescaled k-point correlation functions are given by R n, k (x,..., x k )=det \ sin?(x i &x j ) (n+) } sin(?(x i &x j )(n+) sin?(x i +x j ) & (n+) } sin(?(x i +x j )(n+))+i, j=,..., k (4.8) One can then deduce the following result from the CostinLebowitz Theorem. Theorem 3. Consider the normalized Haar measure on SO(n) or Sp(n). Let % #[0,?), $ n be such that 0<$ n <?&%&= for some =>0 and n } $ n +. Denote by & n the number of eigenvalues in [%, %+$ n ]. We have E& n =(n?)}$ n +0(), Var & n ={? log(n } $ n)+0 ()? log(n } $ n)+0 () if %>0 if %=0 and the normalized random variable (& n &E& n )- Var & n converges in distribution to the normal law N(0, ). Proof. Let K n (x, y) be the kernel in (4.6), (4.7), or (4.8). We observe that 0/ I K n / I Id as a composition of projection, Fourier transform, another projection and inverse Fourier transform. To check the asymptotics of Var & n is an excercise which is left to the reader. K The case of several intervals is treated in a similar fasion.

29 Determinantal Random Point Fields 59 Theorem 4. Let $ n >0 be such that $ n 0, n$ n and & k, n = *((k&) $ n, k$ n ]). Then a sequence of normalized random variables (& k, n &E& k, n )- Var & k, n converges in distribution to the centalized gaussian sequence [! k ] k= ] with the covariance function E! k! l = {$ k, l& $ k, l+ & $ k, l&, $ 0, l &- $, l, if k>0, l>0 if %=0. Finally we discuss the unitary group U(n). The eigenvalues of matrix M can be written as: exp(i% ), exp(i% ),..., exp(i% n )}d, 0%, %,..., % n? (4.9) In the rescaled coordinates x i =n }(% i?), i=,..., n, the k-point correlation functions are equal to R n, k (x,..., x k )=det \ sin?(x i &x j ) n } sin(? }(x i &x j )n)+i, j=,..., k (4.0) In the limit n the kernel in (4.0) becomes the sine kernel. We finish with the analoques of the last two theorems for U(n). Theorem 5. Consider the normalized Haar measure on U(n). Let % #[0,?), $ n be such that 0<$ n <?&%&= for some =>0 and n } $ n +. Denote by & n the number of eigenvalues in [%, %+$ n ]. We have E& n =(n?)}$ n, Var & n =? log(n } $ n)+0 () and the normalized random variable (& n &E& n )- Var & n distribution to the normal law N(0, ). converges in Theorem 6. Let $ n >0 be such that $ n 0, n$ n and & k, n = *((k&) $ n, k$ n ]). Then a sequence of normalized random variables (& k, n &E& k, n )- Var & k, n converges in distribution to the centalized gaussian sequence [! k ] k= ] with the covariance function E! k! l =$ k, l & $ k, l+ & $ k, l& Remark 5. Results similar to Theorems 5 and 6 in the regime $ n =$>0 have been also established by K. Wieand [W].

30 50 Soshnikov Remark 6. For the results about smooth linear statistics we refer the reader to [DS, Jo, So3]. ACKNOWLEDGMENTS It is a pleasure to thank Prof. Ya. Sinai, who drew my attention to the preprint of CostinLebowitz paper some time ago, and the organizers of the Special Session on Integrable Systems and Random Matrix Theory (AMS Meeting at Tucson, November 35, 998) and the Introductory Workshop on Random Matrix Models (MSRI, Berkeley, January 93, 999) for the opportunity to attend the meetings, where the idea of this paper has been finalized. The work was partially supported by the Euler stipend from the German Mathematical Society. REFERENCES [BF] T. H. Baker and P. J. Forrester, Finite N fluctuation formulas for random matrices, J. Stat. Phys. 88:37385 (997). [Ba] E. Basor, Distribution functions for random variables for ensembles of positive Hermitian matrices, Commun. Math. Phys. 88:37350 (997). [BaW] E. Basor and H. Widom, Determinants of Airy operators and applications to random matrices, J. Stat. Phys. 96():0 (999). [BI] P. Bleher and A. Its, Semiclassical asymptotics of orthogonal polynomials, RiemannHilbert problem, and universality in the matrix model, Ann. of Math. 50:8566 (999). [BO] A. Borodin and G. Olshanski, Point processes and the infinite symmetric group, Math. Res. Lett. 5:79986 (998). [BO] A. Borodin and G. Olshanski, Z-Measures on partitions, RobinsonSchensted Knuth correspondence and ;= Random Matrix ensembles, available via http: xxx.lanl.govabsmath [BOO] A. Borodin, A. Okounkov, and G. Olshanski, Asymptotics of Plancherel measures for symmetric groups, available via [BdMK] A. Boutet de Monvel and A. Khorunzhy, Asymptotic distribution of smoothed eigenvalue density. I. Gaussian random matrices. II. Wigner random matrices, to appear in Rand. Oper. Stoch. Equ. [BZ] E. Bre zin and A. Zee, Universality of the correlations between eigenvalues of large random matrices, Nucl. Phys. B 40:6367 (993). [Br] B. V. Bronk, Exponential ensemble for random matrices, J. Math. Phys. 6: 837 (965). [CL] O. Costin and J. Lebowitz, Gaussian fluctuations in random matrices, Phys. Rev. Lett. 75():697 (995). [DKMVZ] P. Deift, T. Kriecherbauer, K. T-R McLaughlin, S. Venakides, and X. Zhou, Uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory, Commun. Pure Appl. Math. 5:33545 (999).

31 Determinantal Random Point Fields 5 [DS] P. Diaconis and M. Shahshahani, On the eigenvalues of random matrices, Studies in Appl. Prob., Essays in honour of Lajos Takacs, J. Appl. Probab. A 3:496 (994). [E] A. Erdelyi (ed.), Higher Transcendental Functions, Vol. (New York, McGraw Hill, 953). [F] P. J. Forrester, The spectrum edge of random matrix ensembles, Nucl. Phys. B 40:70978 (994). [GK] I. Gohberg and M. G. Krein, Introduction to the Theory of Linear Non-selfadjoint Operators in Hilbert Space (Amer. Math. Soc., Providence, R.I., 969). [Jo] K. Johansson, On fluctuation of eigenvalues of random Hermitian matrices, Duke Math. J. 9:504 (998). [Jo] K. Johansson, Universality of local spacing distribution in certain Hermitian Wigner matrices, available via [Jo3] K. Johansson, Shape fluctuations and random matrices, available via lanl.govabsmath [Jo4] K. Johansson, Discrete orthogonal polynomials and the Plancherel measure, available via [Jo5] K. Johansson, Transversal fluctuations for increasing subsequences on the plane, available via [KKP] A. M. Khorunzhy, B. A. Khoruzhenko, and L. A. Pastur, Asymptotic properties of large random matrices with independent entries, J. Math. Phys. 37: (996). [L] A. Lenard, Correlation functions and the uniqueness of the state in classical statistical mechanics, Commun. Math. Phys. 30:3544 (973). [L] A. Lenard, States of classical statistical mechanical system of infinitely many particles, I, II, Arch. Rational Mech. Anal. 59:956 (975). [Ma] O. Macchi, The coincidence approach to stochastic point processes, Advances in Appl. Probability 7:83 (975). [Me] M. L. Mehta, Random Matrices, nd ed. (Academic Press, New York, 99). [NW] T. Nagao and M. Wadati, Correlation functions of random matrix ensembles related to classical orthogonal polynomials, J. Phys. Soc. Japan 60:39833 (99). [Ok] A. Okounkov, Random matrices and random permutations, available via http: xxx.lanl.govabsmath [Ok] A. Okounkov, Infinite wedge and measures on partitions, available via http: xxx.lanl.govabsmath [Ol] F. W. J. Oliver, Asymptotics and Special Functions (Wellesley, Massachusetts, A.K. Peters, 997). [PS] L. A. Pastur and M. Shcherbina, Universality of the local eigenvalue statistics for a class of unitary invariant random matrices, J. Stat. Phys. 86:0947 (997). [PR] M. Plancherel and W. Rotach, Sur les valeurs asymptotiques des polynomes d'hermite, Comm. Math. Helv. :754 (99). [RS] M. Reed and B. Simon, Methods of Modern Mathematical Physics, Vols. IIV (Academic Press, New York, 980, 975, 979, 978). [SiSo] Ya. Sinai and A. Soshnikov, A refinement of Wigner's semicircle law in a neighborhood of the spectrum edge for random symmetric matrices, Funct. Anal. Appl. 3():43 (998). [SiSo] Ya. Sinai and A. Soshnikov, Central limit theorem for traces of large random symmetric matrices with independent entries, Bol. Soc. Brasil. Mat. 9():4 (998).

32 5 Soshnikov [So] [So] [So3] [So4] [Sp] [TW] [TW] [Wig] [Wig] [W] A. Soshnikov, Level spacing distribution for large random matrices: Gaussian fluctuations, Ann. of Math. 48:57367 (998). A. Soshnikov, Universality at the edge of the spectrum in Wigner random matrices, Commun. Math. Phys. 07: (999). A. Soshnikov, Central limit theorem for local statistics in the classical compact groups and related combinatorial identities, available via math , to appear in Ann. Prob. A. Soshnikov, Determinantal random point fields, available via govabsmath000099, to appear in Russian Math. Surv. H. Spohn, Interacting Brownian particles: A study of Dyson's model, in Hydrodynamic Behavior and Interacting Particle Systems, G. Papanicolau, ed. (Springer-Verlag, New York, 987). C. A. Tracy and H. Widom, Level-spacing distribution and the Airy kernel, Commun. Math. Phys. 59:574 (994). C. A. Tracy and H. Widom, Level spacing distribution and the Bessel kernel, Commun. Math. Phys. 6:89309 (994). E. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Ann. of Math. 6: (955). E. Wigner, On the distribution of the roots of certain symmetric matrices, Ann. of Math. 67:3538 (958). K. Wieand, Eigenvalue distribution of random matrices in the permutation group and compact Lie groups (Ph.D. thesis, Dept. Math. Harvard, 998).

Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium

Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium SEA 06@MIT, Workshop on Stochastic Eigen-Analysis and its Applications, MIT, Cambridge,

More information

Determinantal point processes and random matrix theory in a nutshell

Determinantal point processes and random matrix theory in a nutshell Determinantal point processes and random matrix theory in a nutshell part I Manuela Girotti based on M. Girotti s PhD thesis and A. Kuijlaars notes from Les Houches Winter School 202 Contents Point Processes

More information

Central Limit Theorem for Local Linear Statistics in Classical Compact Groups and Related Combinatorial Identities

Central Limit Theorem for Local Linear Statistics in Classical Compact Groups and Related Combinatorial Identities arxiv:math/9908063v [math.pr] 3 Aug 999 Central Limit Theorem for Local Linear Statistics in Classical Compact Groups and Related Combinatorial Identities Alexander Soshnikov California Institute of Technology

More information

Markov operators, classical orthogonal polynomial ensembles, and random matrices

Markov operators, classical orthogonal polynomial ensembles, and random matrices Markov operators, classical orthogonal polynomial ensembles, and random matrices M. Ledoux, Institut de Mathématiques de Toulouse, France 5ecm Amsterdam, July 2008 recent study of random matrix and random

More information

Determinantal point processes and random matrix theory in a nutshell

Determinantal point processes and random matrix theory in a nutshell Determinantal point processes and random matrix theory in a nutshell part II Manuela Girotti based on M. Girotti s PhD thesis, A. Kuijlaars notes from Les Houches Winter School 202 and B. Eynard s notes

More information

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices Michel Ledoux Institut de Mathématiques, Université Paul Sabatier, 31062 Toulouse, France E-mail: ledoux@math.ups-tlse.fr

More information

JINHO BAIK Department of Mathematics University of Michigan Ann Arbor, MI USA

JINHO BAIK Department of Mathematics University of Michigan Ann Arbor, MI USA LIMITING DISTRIBUTION OF LAST PASSAGE PERCOLATION MODELS JINHO BAIK Department of Mathematics University of Michigan Ann Arbor, MI 48109 USA E-mail: baik@umich.edu We survey some results and applications

More information

A Generalization of Wigner s Law

A Generalization of Wigner s Law A Generalization of Wigner s Law Inna Zakharevich June 2, 2005 Abstract We present a generalization of Wigner s semicircle law: we consider a sequence of probability distributions (p, p 2,... ), with mean

More information

Airy and Pearcey Processes

Airy and Pearcey Processes Airy and Pearcey Processes Craig A. Tracy UC Davis Probability, Geometry and Integrable Systems MSRI December 2005 1 Probability Space: (Ω, Pr, F): Random Matrix Models Gaussian Orthogonal Ensemble (GOE,

More information

Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE

Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE Craig A. Tracy UC Davis RHPIA 2005 SISSA, Trieste 1 Figure 1: Paul Painlevé,

More information

Universal Fluctuation Formulae for one-cut β-ensembles

Universal Fluctuation Formulae for one-cut β-ensembles Universal Fluctuation Formulae for one-cut β-ensembles with a combinatorial touch Pierpaolo Vivo with F. D. Cunden Phys. Rev. Lett. 113, 070202 (2014) with F.D. Cunden and F. Mezzadri J. Phys. A 48, 315204

More information

Extreme eigenvalue fluctutations for GUE

Extreme eigenvalue fluctutations for GUE Extreme eigenvalue fluctutations for GUE C. Donati-Martin 204 Program Women and Mathematics, IAS Introduction andom matrices were introduced in multivariate statistics, in the thirties by Wishart [Wis]

More information

Fredholm determinant with the confluent hypergeometric kernel

Fredholm determinant with the confluent hypergeometric kernel Fredholm determinant with the confluent hypergeometric kernel J. Vasylevska joint work with I. Krasovsky Brunel University Dec 19, 2008 Problem Statement Let K s be the integral operator acting on L 2

More information

Rectangular Young tableaux and the Jacobi ensemble

Rectangular Young tableaux and the Jacobi ensemble Rectangular Young tableaux and the Jacobi ensemble Philippe Marchal October 20, 2015 Abstract It has been shown by Pittel and Romik that the random surface associated with a large rectangular Young tableau

More information

A determinantal formula for the GOE Tracy-Widom distribution

A determinantal formula for the GOE Tracy-Widom distribution A determinantal formula for the GOE Tracy-Widom distribution Patrik L. Ferrari and Herbert Spohn Technische Universität München Zentrum Mathematik and Physik Department e-mails: ferrari@ma.tum.de, spohn@ma.tum.de

More information

STAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31

STAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31 STAT C26A / MATH C223A : Stein s method and applications Lecture 3 Lecture date: Nov. 7, 27 Scribe: Anand Sarwate Gaussian concentration recap If W, T ) is a pair of random variables such that for all

More information

RANDOM MATRIX THEORY AND TOEPLITZ DETERMINANTS

RANDOM MATRIX THEORY AND TOEPLITZ DETERMINANTS RANDOM MATRIX THEORY AND TOEPLITZ DETERMINANTS David García-García May 13, 2016 Faculdade de Ciências da Universidade de Lisboa OVERVIEW Random Matrix Theory Introduction Matrix ensembles A sample computation:

More information

MATRIX KERNELS FOR THE GAUSSIAN ORTHOGONAL AND SYMPLECTIC ENSEMBLES

MATRIX KERNELS FOR THE GAUSSIAN ORTHOGONAL AND SYMPLECTIC ENSEMBLES Ann. Inst. Fourier, Grenoble 55, 6 (5), 197 7 MATRIX KERNELS FOR THE GAUSSIAN ORTHOGONAL AND SYMPLECTIC ENSEMBLES b Craig A. TRACY & Harold WIDOM I. Introduction. For a large class of finite N determinantal

More information

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices S. Dallaporta University of Toulouse, France Abstract. This note presents some central limit theorems

More information

Stochastic Differential Equations Related to Soft-Edge Scaling Limit

Stochastic Differential Equations Related to Soft-Edge Scaling Limit Stochastic Differential Equations Related to Soft-Edge Scaling Limit Hideki Tanemura Chiba univ. (Japan) joint work with Hirofumi Osada (Kyushu Unv.) 2012 March 29 Hideki Tanemura (Chiba univ.) () SDEs

More information

A Note on Universality of the Distribution of the Largest Eigenvalues in Certain Sample Covariance Matrices

A Note on Universality of the Distribution of the Largest Eigenvalues in Certain Sample Covariance Matrices Journal of Statistical Physics, Vol. 18, Nos. 5/6, September ( ) A Note on Universality of the Distribution of the Largest Eigenvalues in Certain Sample Covariance Matrices Alexander Soshnikov 1 Received

More information

Random Matrix: From Wigner to Quantum Chaos

Random Matrix: From Wigner to Quantum Chaos Random Matrix: From Wigner to Quantum Chaos Horng-Tzer Yau Harvard University Joint work with P. Bourgade, L. Erdős, B. Schlein and J. Yin 1 Perhaps I am now too courageous when I try to guess the distribution

More information

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices László Erdős University of Munich Oberwolfach, 2008 Dec Joint work with H.T. Yau (Harvard), B. Schlein (Cambrigde) Goal:

More information

arxiv:hep-th/ v1 14 Oct 1992
