Central Limit Theorems for linear statistics for Biorthogonal Ensembles


Central Limit Theorems for linear statistics for Biorthogonal Ensembles
Maurice Duits, Stockholm University
Based on joint work with Jonathan Breuer (HUJI)
Princeton, April 2, 2014

Linear statistic
Given a function $f : \mathbb{R} \to \mathbb{R}$ and random points $x_1, \ldots, x_n \in \mathbb{R}$, the linear statistic $X_n(f)$ is defined as
$$X_n(f) = \sum_{j=1}^n f(x_j).$$
In this talk we discuss the asymptotic behavior, as $n \to \infty$, of the fluctuations of $X_n(f)$, where
1. $x_1, \ldots, x_n$ are random points from a biorthogonal ensemble with a recurrence (so $\beta = 2$),
2. $f \in C^1(\mathbb{R})$ with at most polynomial growth at $\pm\infty$.

Central Limit Theorems
Typically, there exists a limiting distribution $\nu$ of the points as $n \to \infty$, and we have, almost surely,
$$\frac{1}{n} X_n(f) = \frac{1}{n}\sum_{j=1}^n f(x_j) \to \int f(x)\, d\nu(x).$$
In several important examples it has been proved that the fluctuations $X_n(f) - \mathbb{E}X_n(f)$ satisfy a Central Limit Theorem,
$$X_n(f) - \mathbb{E}X_n(f) \to N(0, \sigma(f)^2), \qquad \text{as } n \to \infty,$$
for some $\sigma(f)^2$. Note that there is no normalization, since $f \in C^1(\mathbb{R})$; in particular, the variance has a limit.
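A minimal numerical sketch (not from the talk) of this point: sample GUE eigenvalues, which form an orthogonal polynomial ensemble, and observe that the fluctuations of $X_n(f)$ stay of order one without any normalization. The test function $f(x) = x^2$ and the scaling are illustrative choices.

```python
# Sketch (assumed example): GUE scaled so the spectrum fills [-2, 2], f(x) = x^2.
# It illustrates that X_n(f) - E X_n(f) stays O(1) as n grows.
import numpy as np

rng = np.random.default_rng(0)

def gue_eigenvalues(n):
    """Eigenvalues of an n x n GUE matrix, scaled so the limiting density lives on [-2, 2]."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2              # Hermitian, off-diagonal entries of variance 1
    return np.linalg.eigvalsh(H) / np.sqrt(n)

def linear_statistic(f, x):
    return np.sum(f(x))

f = lambda x: x**2
for n in (50, 100, 200):
    samples = np.array([linear_statistic(f, gue_eigenvalues(n)) for _ in range(300)])
    print(n, "mean:", round(samples.mean(), 3), "variance:", round(samples.var(), 3))
# The variance of the fluctuations should stay bounded as n grows
# (for this f it comes out close to 2, independent of n).
```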

Example: Unitary Ensembles
Consider the probability measure on $\mathbb{R}^n$ defined by
$$\frac{1}{Z_n} \prod_{1 \le i < j \le n} (x_i - x_j)^2 \; e^{-n \sum_{j=1}^n V(x_j)}\, dx_1 \cdots dx_n.$$
As $n \to \infty$ there is a limiting density of points $\mu_V$, which is the unique minimizer of
$$\iint \log \frac{1}{|x - y|}\, d\nu(x)\, d\nu(y) + \int V(x)\, d\nu(x)$$
among all probability measures $\nu$ on $\mathbb{R}$. Hence
$$\frac{1}{n} X_n(f) \to \int f(x)\, d\mu_V(x), \qquad \text{almost surely as } n \to \infty.$$

Example: Unitary Ensembles
Theorem (Johansson '98). Under certain assumptions on $V$, in particular $S(\mu_V) = [\gamma, \delta]$, we have that for sufficiently smooth $f$,
$$X_n(f) - \mathbb{E}X_n(f) \to N\Big(0, \sum_{k=1}^\infty k\, |\hat f_k|^2\Big)$$
in distribution as $n \to \infty$, where
$$\hat f_k = \frac{1}{2\pi} \int_0^{2\pi} f\Big(\frac{\delta - \gamma}{2}\cos\theta + \frac{\delta + \gamma}{2}\Big)\, e^{ik\theta}\, d\theta.$$
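For concreteness, here is a small sketch (an editorial illustration, not from the talk) that evaluates the limiting variance $\sum_{k \ge 1} k |\hat f_k|^2$ from the theorem via the FFT; the interval $[-2, 2]$ and $f(x) = x^2$ are assumed example inputs.

```python
# Sketch: numerically evaluate sum_{k>=1} k |f_k|^2, where f_k is the k-th Fourier
# coefficient of theta -> f((delta-gamma)/2 cos(theta) + (delta+gamma)/2).
import numpy as np

def limiting_variance(f, gamma, delta, num_modes=64, grid=4096):
    theta = 2 * np.pi * np.arange(grid) / grid
    values = f((delta - gamma) / 2 * np.cos(theta) + (delta + gamma) / 2)
    coeffs = np.fft.fft(values) / grid       # Fourier coefficients of the sampled function
    k = np.arange(1, num_modes + 1)
    return np.sum(k * np.abs(coeffs[1:num_modes + 1]) ** 2)

print(limiting_variance(lambda x: x**2, -2.0, 2.0))   # gives 2.0 for f(x)=x^2 on [-2,2]
```

For this choice the answer matches the empirical variance seen in the GUE simulation above.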

Example: Unitary Ensembles. Some history:
- Polynomial $V$, one-cut case and $f \in H^{2+\varepsilon}$ for some $\varepsilon > 0$ (plus additional assumptions): Johansson '98.
- The techniques were extended in Shcherbina-Kriecherbauer '10 and Borot-Guionnet '12 to allow real analytic $V$.
- The one-cut assumption is essential: in the multi-cut case there is not necessarily a Gaussian limit, Pastur '06.
- In the multi-cut case, full asymptotic expansions of the partition function were recently obtained in Shcherbina '13 and Borot-Guionnet '13.
- These results extend (with appropriate modifications) to $\beta$-ensembles, but in this talk we assume $\beta = 2$.
- Other classical ensembles have also been studied, e.g. $\beta$-Jacobi ensembles: Dumitriu-Paquette '12, Borodin-Gorin '13.

Example: Lozenge tilings
(Figure: the abc-hexagon, with sides labeled a, b, c.) Tilings of the abc-hexagon with lozenges.
Random tilings: take the uniform measure on all possible tilings.

Example: Lozenge tilings
(Figure.)

Gaussian Free Field
To each tiling one associates a height function, whose graph is a stepped surface. The pairing of the random height function with a test function gives a linear statistic.
In many recent works these two-dimensional fluctuations of the height function turn out to be described by the Gaussian Free Field (plus a complex structure): Borodin, Borodin-Bufetov, Borodin-Gorin, Borodin-Ferrari, D., Kenyon, Kuan, Petrov, ... (2008-).
The example of lozenge tilings of a hexagon appears as a special case in Petrov '13.
For the purpose of this talk, the main interest is in the one-dimensional fluctuations.

Example: Lozenge tilings
(Figure: a vertical line at distance $s$ from the left side of the hexagon.)
The centers $x_1, \ldots, x_n$ of the red tiles on a vertical line at distance $s$ from the left side have joint probability density function on $\mathbb{N}^n$ proportional to
$$\prod_{1 \le i < j \le n} (x_i - x_j)^2 \prod_{j=1}^n w(x_j),$$
where $w$ is the Hahn weight (with an appropriate choice of parameters). Johansson '02.
Fluctuations: $X_n(f) - \mathbb{E}X_n(f) \to N(0, \sigma_f^2)$ in distribution as $n \to \infty$ (a one-dimensional section of the GFF plus complex structure).

The previous two models (the Unitary Ensembles and vertical sections of lozenge tilings of a hexagon) are both examples of orthogonal polynomial ensembles. We will show that the CLTs for these models are special cases of a more general CLT for such ensembles.

Orthogonal Polynomial Ensembles

Orthogonal polynomial ensembles
Let $\mu$ be a Borel measure on $\mathbb{R}$ with finite moments, $\int |x|^k\, d\mu(x) < \infty$. The Orthogonal Polynomial Ensemble of size $n$ associated to $\mu$ is the probability measure on $\mathbb{R}^n$ proportional to
$$\prod_{1 \le i < j \le n} (x_i - x_j)^2\, d\mu(x_1) \cdots d\mu(x_n).$$
We allow for varying measures $\mu = \mu_n$ (both the varying and the non-varying case will be discussed).
We want to find minimal conditions on the measure $\mu$ such that, for $f \in C^1(\mathbb{R})$,
$$X_n(f) - \mathbb{E}X_n(f) \to N(0, \sigma(f)^2) \qquad \text{as } n \to \infty.$$

Orthogonal polynomials
We let $p_{k,n}$ be the orthogonal polynomial of degree $k$ with respect to $\mu (= \mu_n)$, i.e.
$$\int p_{k,n}(x)\, p_{l,n}(x)\, d\mu(x) = \delta_{kl}.$$
Three-term recurrence:
$$x\, p_{k,n}(x) = a_{k,n}\, p_{k+1,n}(x) + b_{k,n}\, p_{k,n}(x) + a_{k-1,n}\, p_{k-1,n}(x).$$
Let $J$ be the Jacobi operator corresponding to $\mu$:
$$J = J^{(n)} = \begin{pmatrix} b_{0,n} & a_{0,n} & & \\ a_{0,n} & b_{1,n} & a_{1,n} & \\ & a_{1,n} & b_{2,n} & \ddots \\ & & \ddots & \ddots \end{pmatrix}.$$

Reproducing kernel
The reproducing kernel is defined as
$$K_n(x, y) = \sum_{k=0}^{n-1} p_{k,n}(x)\, p_{k,n}(y).$$
It satisfies the reproducing property
$$\int K_n(x, y)\, K_n(y, z)\, d\mu(y) = K_n(x, z).$$
In other words, $K_n$ as an operator on $L^2(\mu)$ is an orthogonal projection of rank $n$, i.e. $K_n^* = K_n$ and $K_n^2 = K_n$.
Christoffel-Darboux formula:
$$K_n(x, y) = a_{n,n}\, \frac{p_{n,n}(x)\, p_{n-1,n}(y) - p_{n,n}(y)\, p_{n-1,n}(x)}{x - y}.$$
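A small sketch (not from the talk) that checks the Christoffel-Darboux formula against the direct sum, using the orthonormal Hermite recurrence coefficients $a_k = \sqrt{(k+1)/2}$, $b_k = 0$ (weight $e^{-x^2}$) as an assumed example.

```python
# Sketch: evaluate K_n both as the sum of products of orthonormal polynomials and
# via the Christoffel-Darboux formula, for the Hermite case (assumed example).
import numpy as np

def orthonormal_polys(x, n, a, b):
    """Values p_0(x), ..., p_n(x) from x p_k = a[k] p_{k+1} + b[k] p_k + a[k-1] p_{k-1}."""
    p = np.zeros(n + 1)
    p[0] = np.pi ** (-0.25)                    # normalization for the weight e^{-x^2}
    if n >= 1:
        p[1] = (x - b[0]) * p[0] / a[0]
    for k in range(1, n):
        p[k + 1] = ((x - b[k]) * p[k] - a[k - 1] * p[k - 1]) / a[k]
    return p

n = 8
a = np.sqrt((np.arange(n + 1) + 1) / 2.0)      # Hermite: a_k = sqrt((k+1)/2)
b = np.zeros(n + 1)
x, y = 0.3, -1.1

px, py = orthonormal_polys(x, n, a, b), orthonormal_polys(y, n, a, b)
K_sum = np.dot(px[:n], py[:n])                 # sum_{k < n} p_k(x) p_k(y)
K_cd = a[n - 1] * (px[n] * py[n - 1] - py[n] * px[n - 1]) / (x - y)
print(K_sum, K_cd)                             # the two values agree
# (In this zero-based indexing the Christoffel-Darboux prefactor is a[n-1].)
```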

Determinantal point process
The Orthogonal Polynomial Ensemble of size $n$ associated to a Borel measure $\mu$ is a determinantal point process on $\mathbb{R}$ with kernel $K_n$ and reference measure $\mu$. This means that
$$\mathbb{E}\left[\exp t X_n(f)\right] = \det\left(I + (e^{tf} - 1) K_n\right)_{L^2(\mu)},$$
where the right-hand side is the Fredholm determinant of the operator with integral kernel $(e^{tf(x)} - 1) K_n(x, y)$ acting on $L^2(\mu)$.
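For a finitely supported $\mu$ the Fredholm determinant is an ordinary matrix determinant, so the identity above can be checked directly against brute-force enumeration. The sketch below is an editorial illustration; the support points, weights, $f$ and $t$ are assumptions.

```python
# Sketch: check E[exp(t X_n(f))] = det(I + (e^{tf} - 1) K_n) on L^2(mu) for a small
# discrete measure mu, against a brute-force sum over configurations.
import itertools
import numpy as np

pts = np.array([-2.0, -1.0, 0.0, 0.5, 1.5, 2.5])   # support of mu (assumed example)
w = np.array([0.1, 0.2, 0.15, 0.25, 0.2, 0.1])     # masses of mu
n = 3                                              # size of the ensemble
f = lambda x: x**2
t = 0.3

# Orthonormal polynomials on L^2(mu) via QR of the weighted Vandermonde matrix;
# K_n(x_i, x_j) = sum_{k < n} p_k(x_i) p_k(x_j).
V = np.vander(pts, len(pts), increasing=True) * np.sqrt(w)[:, None]
Q, _ = np.linalg.qr(V)
K = (Q[:, :n] @ Q[:, :n].T) / np.sqrt(np.outer(w, w))

# Fredholm-determinant side: kernel (e^{tf(x)} - 1) K_n(x, y) on L^2(mu).
A = (np.exp(t * f(pts)) - 1)[:, None] * K * w[None, :]
det_side = np.linalg.det(np.eye(len(pts)) + A)

# Brute force: each n-subset S has weight Vandermonde(S)^2 * product of masses.
num = den = 0.0
for S in itertools.combinations(range(len(pts)), n):
    xs = pts[list(S)]
    weight = np.prod([(xs[j] - xs[i])**2 for i in range(n) for j in range(i + 1, n)]) * np.prod(w[list(S)])
    num += weight * np.exp(t * np.sum(f(xs)))
    den += weight
print(det_side, num / den)    # the two numbers agree
```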

A first question
Recall that we ask whether we have a Central Limit Theorem
$$X_n(f) - \mathbb{E}X_n(f) \overset{?}{\to} N(0, \sigma(f)^2), \qquad X_n(f) = \sum_{j=1}^n f(x_j),$$
where $f$ is sufficiently smooth and $x_1, \ldots, x_n$ are drawn from the OPE of size $n$ for a Borel measure $\mu (= \mu_n)$ on $\mathbb{R}$.
Question: can we expect this to hold for general measures $\mu$? Do we have the correct normalization, or does the normalization depend on regularity properties of $\mu$?

Concentration inequality
Theorem (Breuer-D '13). For a bounded function $f$ we have
$$\mathbb{E}\left[e^{t(X_n(f) - \mathbb{E}X_n(f))}\right] \le \exp\left(A\, t^2\, \operatorname{Var} X_n(f)\right), \qquad |t| \le \frac{1}{3\|f\|_\infty},$$
where $A$ is a universal constant. And hence
$$\mathbb{P}\left(|X_n(f) - \mathbb{E}X_n(f)| \ge \varepsilon\right) \le 2 \exp\left(-\min\left(\frac{\varepsilon^2}{4A \operatorname{Var} X_n(f)}, \frac{\varepsilon}{6\|f\|_\infty}\right)\right).$$
The theorem holds for any determinantal process whose kernel $K_n$ is an orthogonal projection, i.e. $K_n = K_n^2$ and $K_n^* = K_n$.

Concentration inequality
There is always the crude bound $\operatorname{Var} X_n(f) \le n \|f\|_\infty^2$. With the concentration inequality this implies
$$\frac{1}{n^{1/2 + \varepsilon}}\left(X_n(f) - \mathbb{E}X_n(f)\right) \to 0,$$
almost surely as $n \to \infty$, for any bounded $f$ and $\varepsilon > 0$.
In particular, if $\frac{1}{n}\mathbb{E}X_n(f)$ has a limit, then $\frac{1}{n}X_n(f)$ converges to the same limit almost surely.
No conditions on $\mu$! This holds for any determinantal point process whose kernel is an orthogonal projection.
The bound on the variance can be improved significantly for $C^1$ functions.

Computing the variance
Compute the variance of a linear statistic:
$$\operatorname{Var} X_n(f) = \int f(x)^2 K_n(x, x)\, d\mu(x) - \iint f(x) f(y) K_n(x, y)^2\, d\mu(x)\, d\mu(y)$$
$$= \frac{1}{2} \iint \big(f(x) - f(y)\big)^2 K_n(x, y)^2\, d\mu(x)\, d\mu(y)$$
$$= \frac{1}{2} \iint \left(\frac{f(x) - f(y)}{x - y}\right)^2 (x - y)^2\, K_n(x, y)^2\, d\mu(x)\, d\mu(y)$$
$$= \frac{a_{n,n}^2}{2} \iint \left(\frac{f(x) - f(y)}{x - y}\right)^2 \big(p_n(x) p_{n-1}(y) - p_n(y) p_{n-1}(x)\big)^2\, d\mu(x)\, d\mu(y),$$
using the Christoffel-Darboux formula in the last step.
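The symmetrization step above relies on the reproducing property of $K_n$; the following sketch (not from the talk; the measure and test function are illustrative) checks the first two expressions against each other on a small discrete measure.

```python
# Sketch: check
#   Var X_n(f) = int f^2 K_n dmu - int int f(x) f(y) K_n(x,y)^2 dmu dmu
#              = (1/2) int int (f(x) - f(y))^2 K_n(x,y)^2 dmu dmu
# for a discrete mu, where all integrals are finite sums.
import numpy as np

pts = np.linspace(-1.0, 1.0, 7)            # support of mu (assumed example)
w = np.full(7, 1.0 / 7)                    # masses of mu
n = 4
f = pts**3 - pts                           # values of a test function on the support

V = np.vander(pts, len(pts), increasing=True) * np.sqrt(w)[:, None]
Q, _ = np.linalg.qr(V)
K = (Q[:, :n] @ Q[:, :n].T) / np.sqrt(np.outer(w, w))     # K_n(x_i, x_j)

var1 = np.sum(f**2 * np.diag(K) * w) - np.einsum('i,j,ij,i,j->', f, f, K**2, w, w)
var2 = 0.5 * np.einsum('ij,ij,i,j->', (f[:, None] - f[None, :])**2, K**2, w, w)
print(var1, var2)                          # the two expressions agree
```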

Lipschitz functions
If $f$ is a Lipschitz function, i.e. $\|f\|_L = \sup_{x \ne y} \frac{|f(x) - f(y)|}{|x - y|} < \infty$, then
$$\operatorname{Var} X_n(f) = \frac{a_{n,n}^2}{2} \iint \left(\frac{f(x) - f(y)}{x - y}\right)^2 \big(p_n(x) p_{n-1}(y) - p_n(y) p_{n-1}(x)\big)^2\, d\mu(x)\, d\mu(y)$$
$$\le \frac{a_{n,n}^2\, \|f\|_L^2}{2} \iint \big(p_n(x) p_{n-1}(y) - p_n(y) p_{n-1}(x)\big)^2\, d\mu(x)\, d\mu(y) = \|f\|_L^2\, a_{n,n}^2.$$
Hence if $a_{n,n} = \mathcal{O}(1)$ as $n \to \infty$, then the variance remains bounded. Pastur '06.
Conclusion: if $f$ is Lipschitz and $a_{n,n} = \mathcal{O}(1)$ as $n \to \infty$, then $X_n(f) - \mathbb{E}X_n(f)$ remains of order $1$, and we have exponential tails uniformly in $n$.

Polynomial test functions
Recall the formula for the variance:
$$\operatorname{Var} X_n(f) = \frac{a_{n,n}^2}{2} \iint \left(\frac{f(x) - f(y)}{x - y}\right)^2 \big(p_n(x) p_{n-1}(y) - p_n(y) p_{n-1}(x)\big)^2\, d\mu(x)\, d\mu(y).$$
If $f$ is a polynomial, the variance can be written in terms of finitely many recurrence coefficients $a_{n+k,n}$ and $b_{n+k,n}$.
In particular: if for every $k \in \mathbb{Z}$ the coefficients $a_{n+k,n}$ and $b_{n+k,n}$ have a limit as $n \to \infty$, then $\operatorname{Var} X_n(f)$ has a limit for any polynomial $f$.

CLT for Orthogonal Polynomial Ensembles
Theorem (Breuer-D '13). Assume that for every $k \in \mathbb{Z}$ we have
$$a_{n+k,n} \to a, \qquad b_{n+k,n} \to b, \qquad \text{as } n \to \infty.$$
Then, if $f$ is a polynomial with real coefficients,
$$X_n(f) - \mathbb{E}X_n(f) \to N\Big(0, \sum_{k=1}^\infty k\, |\hat f_k|^2\Big),$$
where
$$\hat f_k = \frac{1}{2\pi} \int_0^{2\pi} f(2a\cos\theta + b)\, e^{ik\theta}\, d\theta.$$

CLT for Orthogonal Polynomial Ensembles
Theorem (Breuer-D '13). Assume that there exists a subsequence $\{n_j\}_j$ such that for every $k \in \mathbb{Z}$ we have
$$a_{n_j + k, n_j} \to a, \qquad b_{n_j + k, n_j} \to b, \qquad \text{as } j \to \infty.$$
Then, if $f$ is a polynomial with real coefficients,
$$X_{n_j}(f) - \mathbb{E}X_{n_j}(f) \to N\Big(0, \sum_{k=1}^\infty k\, |\hat f_k|^2\Big),$$
where
$$\hat f_k = \frac{1}{2\pi} \int_0^{2\pi} f(2a\cos\theta + b)\, e^{ik\theta}\, d\theta.$$

Central Limit Theorems for $C^1$ functions
Theorem (Breuer-D '13). Suppose there is a compact $E \subset \mathbb{R}$ such that for every $k \in \mathbb{N}$ we have
$$\int_{\mathbb{R} \setminus E} |x|^k\, K_n(x, x)\, d\mu(x) = o(1/n), \qquad \text{as } n \to \infty.$$
Then the CLT holds for any $f \in C^1(\mathbb{R})$ such that $|f(x)| \le C(1 + |x|^k)$ for some $C > 0$ and $k \in \mathbb{N}$.
Corollary. If $\mu$ is non-varying and has compact support $S(\mu)$, then the CLT holds for any $f \in C^1(S(\mu))$.

Examples 1a and 1b
Unitary Ensembles: absolutely continuous measure with $S(\mu_V) = [\gamma, \delta]$,
$$d\mu_n(x) = e^{-nV(x)}\, dx.$$
Lozenge tilings of the hexagon: discrete measure,
$$d\mu = \sum_{x=0}^{N} \binom{\alpha + x}{x} \binom{\beta + N - x}{N - x}\, \delta_x.$$
This is called the Hahn weight, and the orthogonal polynomials are the Hahn polynomials.
In both cases one can prove that, for the given parameters, the recurrence coefficients have limits, and the CLT follows from the general CLT.

Example 2
The following result for the non-varying situation follows by combining the CLT with a theorem of Denisov and Rakhmanov.
Theorem. Fix $d\mu = w(x)\, dx + d\mu_{\mathrm{sing}}$, and assume that $S_{\mathrm{ess}}(\mu) = [\gamma, \delta]$ and $w(x) > 0$ Lebesgue-a.e. on $[\gamma, \delta]$. Then for any $f \in C^1(\mathbb{R})$ we have
$$X_n(f) - \mathbb{E}X_n(f) \to N\Big(0, \sum_{k=1}^\infty k\, |\hat f_k|^2\Big),$$
where
$$\hat f_k = \frac{1}{2\pi} \int_0^{2\pi} f\Big(\frac{\delta - \gamma}{2}\cos\theta + \frac{\delta + \gamma}{2}\Big)\, e^{ik\theta}\, d\theta.$$

Example 3: different CLTs for different subsequences!
Partition $\mathbb{N}$ into successive blocks $A_1, B_1, A_2, B_2, \ldots$ with
$$|A_j| = 2^{j^2}, \qquad |B_j| = 3^{j^2},$$
and set $b_n = 0$ and
$$a_n = \begin{cases} 1, & n \in A_j, \\ \tfrac{1}{2}, & n \in B_j. \end{cases}$$
Then the spectral measure $\mu$ for $J$ is supported on $[-2, 2]$ and is singular with respect to Lebesgue measure. For the OPE for $\mu$ one can show that
$$\frac{1}{n} X_n(f) \to \int f(x)\, d\mu_{\mathrm{eq}}(x),$$
where $\mu_{\mathrm{eq}}$ is the equilibrium measure of the interval $[-2, 2]$.

Example 3: different CLTs for different subsequences!
Hence there exists a subsequence $\{n_j\}$ such that
$$X_{n_j}(f) - \mathbb{E}X_{n_j}(f) \to N\Big(0, \sum_{k=1}^\infty k\, |\hat f_k|^2\Big), \qquad \hat f_k = \frac{1}{2\pi}\int_0^{2\pi} f(\cos\theta)\, e^{ik\theta}\, d\theta.$$
There also exist subsequences $\{n_j'\}$ such that
$$X_{n_j'}(f) - \mathbb{E}X_{n_j'}(f) \to N\Big(0, \sum_{k=1}^\infty k\, |\tilde f_k|^2\Big), \qquad \tilde f_k = \frac{1}{2\pi}\int_0^{2\pi} f(2\cos\theta)\, e^{ik\theta}\, d\theta.$$
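A short sketch (editorial; the number of blocks and the window size are illustrative) that builds the coefficient sequence of this example and shows the two constant windows seen along the two types of subsequences.

```python
# Sketch: recurrence coefficients of Example 3 (a_n = 1 on blocks A_j of size 2^{j^2},
# a_n = 1/2 on blocks B_j of size 3^{j^2}, b_n = 0).  Indices deep inside an A-block see
# the constant value 1 in any fixed window, indices deep inside a B-block see 1/2:
# two different right limits, hence two different CLTs.
import numpy as np

def coefficients(j_max):
    a = []
    for j in range(1, j_max + 1):
        a += [1.0] * (2 ** (j * j))      # block A_j, a_n = 1
        a += [0.5] * (3 ** (j * j))      # block B_j, a_n = 1/2
    return np.array(a)

a = coefficients(3)
window = 10

def window_around(n):
    """The coefficients a_{n-window}, ..., a_{n+window}: what a right limit sees."""
    return a[n - window : n + window + 1]

start_A3 = (2**1 + 3**1) + (2**4 + 3**4)      # lengths of blocks A_1, B_1, A_2, B_2
n_A = start_A3 + 2**9 // 2                    # deep inside A_3
n_B = start_A3 + 2**9 + 3**9 // 2             # deep inside B_3
print(window_around(n_A))    # all 1.0  -> Laurent right limit L(z + 1/z)
print(window_around(n_B))    # all 0.5  -> Laurent right limit L(z/2 + 1/(2z))
```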

Example 4
Example 3 can be modified such that the spectral measure is absolutely continuous on $[-1, 1]$ and purely singular on $[-2, 2] \setminus (-1, 1)$, with $b_n = 0$ and the $a_n$ such that for each $a \in [\tfrac{1}{2}, 1]$ there exists a subsequence $\{n_j(a)\}_j$ with
$$a_{n_j(a) + k} \to a, \qquad \text{as } j \to \infty, \text{ for all } k \in \mathbb{Z}.$$
Hence for each $a$ there is a CLT along a special subsequence, and each CLT is different. We still have
$$\lim_{n \to \infty} \frac{1}{n} X_n(f) = \int f(x)\, d\mu_{\mathrm{eq}}(x),$$
where $\mu_{\mathrm{eq}}$ is the equilibrium measure of the interval $[-2, 2]$.

Right Limits

Some notation
Let $J$ be the Jacobi operator corresponding to $\mu$:
$$J = J^{(n)} = \begin{pmatrix} b_{0,n} & a_{0,n} & & \\ a_{0,n} & b_{1,n} & a_{1,n} & \\ & a_{1,n} & b_{2,n} & \ddots \\ & & \ddots & \ddots \end{pmatrix},$$
a tridiagonal, semi-infinite matrix. Since we allow varying measures, we allow $J$ to depend on $n$ and write $J = J^{(n)}$.
Given a Laurent polynomial $s(z) = \sum_{j=-q}^{p} s_j z^j$, the Laurent matrix/operator $L(s)$ and the Toeplitz matrix/operator $T(s)$ are defined by
$$\big(L(s)\big)_{k,l} = s_{k-l}, \quad k, l \in \mathbb{Z}, \qquad \big(T(s)\big)_{k,l} = s_{k-l}, \quad k, l \in \mathbb{N}.$$
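As a concrete aid (not from the talk), here is a sketch that builds finite sections of $T(s)$ for a Laurent polynomial symbol; the symbol $s(z) = z + 1/z$ is an assumed example, corresponding to the free Jacobi matrix.

```python
# Sketch: a size x size section of the Toeplitz matrix T(s), (T(s))_{k,l} = s_{k-l}.
import numpy as np

def toeplitz_section(symbol, size):
    """Finite section of T(s), where `symbol` maps j to the coefficient s_j."""
    T = np.zeros((size, size))
    for k in range(size):
        for l in range(size):
            T[k, l] = symbol.get(k - l, 0.0)
    return T

s = {1: 1.0, -1: 1.0}            # coefficients s_j of s(z) = z + 1/z
print(toeplitz_section(s, 5))
# A finite section of the Laurent matrix L(s) has the same entries; L(s) and T(s)
# differ only in that L(s) is indexed by all of Z rather than by N.
```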

Right limits
Definition. A two-sided infinite matrix $J^R$ is called a right limit of $\{J^{(n)}\}_n$ if there exists a subsequence $\{n_j\}_j$ such that
$$\big(J^R\big)_{k,l} = \lim_{j \to \infty} \big(J^{(n_j)}\big)_{n_j + k,\, n_j + l}, \qquad k, l \in \mathbb{Z}.$$
Example: there exists a subsequence $\{n_j\}$ such that for all $k$ we have $a_{n_j + k, n_j} \to a$ and $b_{n_j + k, n_j} \to b$ if and only if $L(az + b + a/z)$ is a right limit of $\{J^{(n)}\}_n$.
Right limits have proved to be very useful in the study of the spectra of Jacobi operators: Last-Simon '99, Last-Simon '06, Remling '11.

Laurent matrices as right limits
In this way, we can reformulate the CLT for OPEs:
Theorem (Breuer-D '13). Suppose that $L(s)$ is a right limit of $J^{(n)}$ along the subsequence $\{n_j\}_j$. Then, for any polynomial $f$ with real coefficients,
$$X_{n_j}(f) - \mathbb{E}X_{n_j}(f) \to N\Big(0, \sum_{k=1}^\infty k\, |\hat f_k|^2\Big),$$
where
$$\hat f_k = \frac{1}{2\pi i} \oint_{|z| = 1} \frac{f(s(z))}{z^{k+1}}\, dz.$$
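Since $\hat f_k$ is just the $k$-th Fourier coefficient of $\theta \mapsto f(s(e^{i\theta}))$, the limiting variance can again be evaluated with an FFT. The sketch below is editorial; the symbol and $f$ are assumed examples, and for $s(z) = az + b + a/z$ it reduces to the earlier computation.

```python
# Sketch: limiting variance sum_{k>=1} k |f_k|^2 directly from the symbol s(z) of the
# Laurent right limit, with f_k = (1/2 pi i) oint f(s(z)) z^{-k-1} dz read off via FFT.
import numpy as np

def variance_from_symbol(f, symbol_coeffs, num_modes=64, grid=4096):
    theta = 2 * np.pi * np.arange(grid) / grid
    z = np.exp(1j * theta)
    s = sum(c * z**j for j, c in symbol_coeffs.items())     # s(e^{i theta})
    coeffs = np.fft.fft(f(s)) / grid                        # Fourier coefficients of f(s(.))
    k = np.arange(1, num_modes + 1)
    return np.sum(k * np.abs(coeffs[1:num_modes + 1]) ** 2).real

print(variance_from_symbol(lambda x: x**2, {1: 1.0, -1: 1.0}))   # = 2.0, as before
```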

A general limit theorem
Theorem. Let $J^R$ be a right limit of $J$ along the subsequence $\{n_j\}_j$. Define
$$\big(J^R_M\big)_{kl} = \begin{cases} (J^R)_{kl}, & k, l = -M, \ldots, M, \\ 0, & \text{otherwise}, \end{cases}$$
and let $P$ be the projection onto the negative coordinates,
$$(P x)_k = \begin{cases} x_k, & k < 0, \\ 0, & k \ge 0. \end{cases}$$
Then for any polynomial $f$ we have
$$\lim_{j \to \infty} \mathbb{E}\left[\exp t\big(X_{n_j}(f) - \mathbb{E}X_{n_j}(f)\big)\right] = \lim_{M \to \infty} e^{-t \operatorname{Tr} P f(J^R_M) P}\, \det\left(I + P\big(e^{t f(J^R_M)} - I\big)P\right).$$
In particular, both limits exist.

A general limit theorem
If $J^R$ is a Laurent matrix, the right-hand side can be computed, giving the previous CLT.
It is an interesting open problem to find the limiting value of the right-hand side if $J^R$ is, for example, two-periodic. This should of course match the formulas for the multi-cut case of Shcherbina '13 and Borot-Guionnet '13.

Biorthogonal ensembles

Biorthogonal ensembles
Let $\mu$ be a Borel measure on $\mathbb{R}$ and $\{\phi_j\}$, $\{\psi_j\}$ two families of linearly independent functions. The corresponding biorthogonal ensemble of size $n$ is the probability measure on $\mathbb{R}^n$ proportional to
$$\det\big(\phi_{k-1}(x_j)\big)_{j,k=1}^n\, \det\big(\psi_{k-1}(x_j)\big)_{j,k=1}^n\, d\mu(x_1) \cdots d\mu(x_n).$$
By taking linear combinations of the rows we can assume that
$$\int_{\mathbb{R}} \phi_k(x)\, \psi_l(x)\, d\mu(x) = \delta_{kl}.$$
The biorthogonal ensemble is a determinantal point process on $\mathbb{R}$ with kernel
$$K_n(x, y) = \sum_{j=0}^{n-1} \phi_j(x)\, \psi_j(y)$$
and reference measure $\mu$.

Biorthogonal ensembles with a recurrence
We will assume that there exists a banded one-sided infinite matrix $J$ such that
$$x \begin{pmatrix} \phi_0(x) \\ \phi_1(x) \\ \phi_2(x) \\ \vdots \end{pmatrix} = J \begin{pmatrix} \phi_0(x) \\ \phi_1(x) \\ \phi_2(x) \\ \vdots \end{pmatrix}.$$
Since $J$ is banded, this is a recurrence.
We allow $J$ to vary with $n$, i.e. $J = J^{(n)}$, in which case we assume that the number of bands is independent of $n$.
There are many interesting biorthogonal ensembles that admit such a recurrence, in particular Multiple Orthogonal Polynomial Ensembles.

Example: unitary ensembles with external source
Consider the eigenvalues of an $n \times n$ Hermitian matrix randomly chosen from
$$\frac{1}{Z_n} e^{-c \operatorname{Tr}(M^2 - AM)}\, dM,$$
where $c > 0$ and $A$ is a diagonal matrix with eigenvalues $a_1, \ldots, a_m$ of multiplicities $k_1, \ldots, k_m$. Brezin-Hikami '97.
Alternatively, consider $n$ non-intersecting Brownian paths, with $k_j$ paths conditioned to start at $0$ at time $t = 0$ and to end at $b_j$ at time $t = 1$. The location of the paths at time $t$ has the same distribution as the eigenvalues of $M$ with $a_j = 2t b_j$ and $c = 1/(t(1-t))$.

The eigenvalues form a biorthogonal ensemble constructed out of multiple Hermite polynomials with $m$ weights. These polynomials satisfy an $(m+2)$-term recurrence. Bleher-Kuijlaars '04.

Example: two matrix model
Consider the eigenvalues of two $n \times n$ Hermitian matrices $(M_1, M_2)$ randomly chosen from
$$\frac{1}{Z_n} e^{-\operatorname{Tr}\left(V(M_1) + W(M_2) - \tau M_1 M_2\right)}\, dM_1\, dM_2,$$
where $V$ and $W$ are polynomials such that the integral converges.
In Eynard-Mehta '98 it was shown that the eigenvalue correlations have a determinantal structure with kernels constructed out of the biorthogonal polynomials $p_j$ and $q_k$, of degrees $j$ and $k$, such that
$$\iint p_j(x)\, q_k(y)\, e^{-V(x) - W(y) + \tau x y}\, dx\, dy = \delta_{jk}.$$

Example: two matrix model
The eigenvalues of $M_1$, when averaged over $M_2$, form a biorthogonal ensemble with
$$\phi_j(x) = p_{j-1}(x), \qquad \psi_j(x) = \int q_{j-1}(y)\, e^{-(V(x) + W(y) - \tau x y)}\, dy.$$
The polynomials $p_j$ satisfy a $(d_W + 1)$-term recurrence, where $d_W$ is the degree of $W$. Bertola-Eynard-Harnad '02.
In D-Kuijlaars '08, D-Kuijlaars-Mo '12, D-Geudens-Kuijlaars '11 and D-Geudens '13 the asymptotics have been computed for $W(y) = y^4/4 + \alpha y^2/2$ and $V$ even, using Riemann-Hilbert methods, from which we can compute the recurrence coefficients.

CLT for biorthogonal ensembles
Theorem (Breuer-D '13). Assume that $L(s)$ is a right limit of $J^{(n)}$ along the subsequence $\{n_j\}_j$ for some $s(z) = \sum_{j=-q}^{p} s_j z^j$. Then, if $f$ is a polynomial with real coefficients,
$$X_{n_j}(f) - \mathbb{E}X_{n_j}(f) \to N\Big(0, \sum_{k=1}^\infty k\, |\hat f_k|^2\Big), \qquad \text{as } j \to \infty,$$
where
$$\hat f_k = \frac{1}{2\pi i} \oint_{|z| = 1} \frac{f(s(z))}{z^{k+1}}\, dz.$$
Note: for orthogonal polynomial ensembles $p = q = 1$.
In case $K_n^* \ne K_n$ we have no concentration inequality, and hence no extension to $f \in C^1(\mathbb{R})$ without more a priori knowledge.

A general limit theorem
Theorem. Let $J^R$ be a right limit of $J$ along the subsequence $\{n_j\}_j$. Define
$$\big(J^R_M\big)_{kl} = \begin{cases} (J^R)_{kl}, & k, l = -M, \ldots, M, \\ 0, & \text{otherwise}, \end{cases}$$
and let $P$ be the projection onto the negative coordinates,
$$(P x)_k = \begin{cases} x_k, & k < 0, \\ 0, & k \ge 0. \end{cases}$$
Then for any polynomial $f$ we have
$$\lim_{j \to \infty} \mathbb{E}\left[\exp t\big(X_{n_j}(f) - \mathbb{E}X_{n_j}(f)\big)\right] = \lim_{M \to \infty} e^{-t \operatorname{Tr} P f(J^R_M) P}\, \det\left(I + P\big(e^{t f(J^R_M)} - I\big)P\right).$$
In particular, both limits exist.

Some words about the proofs

Cumulant expansion
Expand the moment-generating function:
$$\mathbb{E}\left[\exp t X_n(f)\right] = \det\left(1 + (e^{tf} - 1) K_n\right) = \exp \operatorname{Tr} \log\left(1 + (e^{tf} - 1) K_n\right) = \exp\left(\sum_{m=1}^\infty t^m\, C_m^{(n)}(f)\right),$$
where
$$C_m^{(n)}(f) = \sum_{j=1}^m \frac{(-1)^{j+1}}{j} \sum_{\substack{l_1 + \cdots + l_j = m \\ l_i \ge 1}} \frac{\operatorname{Tr} f^{l_1} K_n \cdots f^{l_j} K_n}{l_1! \cdots l_j!}.$$
The $C_m^{(n)}(f)$ are called the cumulants.

For a Central Limit Theorem one needs to show that
$$\lim_{n \to \infty} C_m^{(n)}(f) = \begin{cases} \tfrac{1}{2}\sigma(f)^2, & m = 2, \\ 0, & m \ge 3. \end{cases}$$
A first problem: each term in the sum
$$C_m^{(n)}(f) = \sum_{j=1}^m \frac{(-1)^{j+1}}{j} \sum_{\substack{l_1 + \cdots + l_j = m \\ l_i \ge 1}} \frac{\operatorname{Tr} f^{l_1} K_n \cdots f^{l_j} K_n}{l_1! \cdots l_j!}$$
grows linearly with $n$. Clearly, some very effective cancellation must come into play!

By using the identity $x = \log\left(1 + (e^x - 1)\right)$ and expanding the right-hand side, we obtain
$$\sum_{j=1}^m \frac{(-1)^{j+1}}{j} \sum_{\substack{l_1 + \cdots + l_j = m \\ l_i \ge 1}} \frac{1}{l_1! \cdots l_j!} = 0, \qquad m \ge 2.$$
Hence
$$C_m^{(n)}(f) = \sum_{j=1}^m \frac{(-1)^{j+1}}{j} \sum_{\substack{l_1 + \cdots + l_j = m \\ l_i \ge 1}} \frac{\operatorname{Tr} f^{l_1} K_n \cdots f^{l_j} K_n - \operatorname{Tr} f^m K_n}{l_1! \cdots l_j!}$$
for $m \ge 2$. Now each term in the double sum can be proved to be bounded, and this captures a first cancellation.
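A quick check (editorial) of this combinatorial identity for small $m$:

```python
# Sketch: verify  sum_{j=1}^m ((-1)^{j+1}/j) sum_{l_1+...+l_j=m, l_i>=1} 1/(l_1!...l_j!) = 0
# for m >= 2 (it says that log(1 + (e^x - 1)) = x has no higher-order terms).
from fractions import Fraction
from itertools import product
from math import factorial

def identity_sum(m):
    total = Fraction(0)
    for j in range(1, m + 1):
        for ls in product(range(1, m + 1), repeat=j):
            if sum(ls) == m:
                term = Fraction(1)
                for l in ls:
                    term /= factorial(l)
                total += Fraction((-1) ** (j + 1), j) * term
    return total

for m in range(1, 8):
    print(m, identity_sum(m))   # prints 1 for m = 1, and 0 for every m >= 2
```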

Concentration inequality
Using $K_n^2 = K_n$ and $K_n^* = K_n$ one can show that
$$\left|\operatorname{Tr} f^{l_1} K_n \cdots f^{l_j} K_n - \operatorname{Tr} f^m K_n\right| \le j m^2\, \|f\|_\infty^{m-2}\, \|[f, K_n]\|_2^2, \qquad m \ge 2,$$
where $\|[f, K_n]\|_2$ is the Hilbert-Schmidt norm of the commutator of $K_n$ and $f$ (viewed as a multiplication operator).
If $K_n^* = K_n$, then $\|[f, K_n]\|_2^2 = 2 \operatorname{Var} X_n(f)$, and the concentration inequality of Breuer-D '13 mentioned above follows after a further rearranging of terms.
For a CLT we need to capture another cancellation.

Use orthogonality to rewrite
$$C_m^{(n)}(f) = \sum_{j=1}^m \frac{(-1)^{j+1}}{j} \sum_{\substack{l_1 + \cdots + l_j = m \\ l_i \ge 1}} \frac{\operatorname{Tr} f^{l_1} K_n \cdots f^{l_j} K_n - \operatorname{Tr} f^m K_n}{l_1! \cdots l_j!}$$
$$= \sum_{j=1}^m \frac{(-1)^{j+1}}{j} \sum_{\substack{l_1 + \cdots + l_j = m \\ l_i \ge 1}} \frac{\operatorname{Tr} f(J)^{l_1} P_n \cdots f(J)^{l_j} P_n - \operatorname{Tr} f(J)^m P_n}{l_1! \cdots l_j!}.$$
By using the band structure of $J$ (and hence of $f(J)$ for polynomial $f$), it is not hard to prove that
$$\operatorname{Tr} f(J)^{l_1} P_n \cdots f(J)^{l_j} P_n - \operatorname{Tr} f(J)^m P_n$$
only depends on a finite and fixed number of recurrence coefficients.
This does not yet prove the desired asymptotic behavior, but it does show that only a relatively small part of $J$ matters for the fluctuations.

Comparison/Universality principle
If $J_1$ and $J_2$ have the same right limit $J^R$ along the same subsequence $\{n_j\}$, then the leading term in the asymptotic behavior of the fluctuations of the linear statistics, for polynomial $f$ and along that subsequence, is the same.
Hence if $J$ has a Laurent matrix $L(s)$ as a right limit, then we may assume that $J = T(s)$ from the start! Here $T(s)$ is the Toeplitz operator with symbol $s$. So we only need to compute the CLT in the special Toeplitz case; the rest follows by the comparison principle.

Lemma (Breuer-D '13). Let $s(z) = \sum_{j=-q}^{p} s_j z^j$ be a Laurent polynomial. Then
$$\lim_{n \to \infty} \det\left(I + P_n\big(e^{T(s)} - I\big)P_n\right) e^{-\operatorname{Tr} P_n T(s)} = \exp\left(\frac{1}{2} \sum_{k=1}^\infty k\, s_k s_{-k}\right).$$
The proof is based on Ehrhardt's generalization of the Helton-Howe-Pincus formula: if $[A, B]$ is trace class, then
$$\det\left(e^A e^B e^{-A-B}\right) = \exp\left(\frac{1}{2} \operatorname{Tr}[A, B]\right).$$
(This beautiful formula essentially captures the final cancellations in the cumulant expansion.)
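A finite-size numerical check of the lemma (editorial; the coefficients and truncation sizes are assumptions, and the semi-infinite $T(s)$ is approximated by a large finite section, which is accurate near the upper-left corner):

```python
# Sketch: check the lemma for s(z) = s1 z + s_{-1}/z.  Compare
#   det(I + P_n (e^{T(s)} - I) P_n) * exp(-Tr P_n T(s))   with   exp((1/2) * s1 * s_{-1}).
import numpy as np
from scipy.linalg import expm

s1, sm1 = 0.7, 0.4
N, n = 80, 30                       # N: size of the finite section of T(s); n: rank of P_n

# (T(s))_{k,l} = s_{k-l}: s1 on the subdiagonal, s_{-1} on the superdiagonal.
T = np.diag(np.full(N - 1, sm1), k=1) + np.diag(np.full(N - 1, s1), k=-1)
E = expm(T)                         # e^{T(s)}; corner entries are insensitive to the cutoff N

lhs = np.linalg.det(np.eye(n) + (E - np.eye(N))[:n, :n]) * np.exp(-np.trace(T[:n, :n]))
rhs = np.exp(0.5 * s1 * sm1)
print(lhs, rhs)                     # close already for moderate n << N
```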

Thank you for your attention!