Lecture I: Asymptotics for large GUE random matrices


Steen Thorbjørnsen, University of Aarhus

Random matrices

Definition. Let $(\Omega, \mathcal F, P)$ be a probability space and let $n$ be a positive integer. A random $n \times n$ matrix $A$ on $(\Omega, \mathcal F, P)$ is an $n \times n$ matrix $A = (a_{ij})_{1 \le i,j \le n}$ whose entries are complex-valued random variables on $(\Omega, \mathcal F, P)$. In other words, $A$ is a measurable mapping $A \colon (\Omega, \mathcal F, P) \to (M_n(\mathbb C), \mathcal B(M_n(\mathbb C)))$, where $M_n(\mathbb C)$ is equipped with its Borel $\sigma$-algebra $\mathcal B(M_n(\mathbb C))$.

The spectral distribution of a selfadjoint random matrix

Let $A \colon \Omega \to M_n(\mathbb C)$ be a selfadjoint random matrix, i.e., $A(\omega)^* = A(\omega)$ for all $\omega$. Then for each $\omega$ we consider the ordered eigenvalues $\lambda_1(\omega) \le \lambda_2(\omega) \le \cdots \le \lambda_n(\omega)$ of $A(\omega)$. For each fixed $\omega$ the empirical eigenvalue distribution of $A(\omega)$ is the probability measure

$$\mu_{A(\omega)} = \frac{1}{n} \sum_{j=1}^{n} \delta_{\lambda_j(\omega)},$$

where $\delta_c$ denotes the Dirac measure at the point $c$. Then the spectral distribution of $A$ is the mixture of the family $(\mu_{A(\omega)})_{\omega \in \Omega}$ with respect to $P$, i.e.,

The spectral distribution (continued)

$$\mu_A(B) = \int_\Omega \mu_{A(\omega)}(B) \, dP(\omega), \quad \text{for any Borel subset } B \text{ of } \mathbb R.$$

Thus, by a standard extension argument, for any bounded Borel function $f \colon \mathbb R \to \mathbb R$ we have

$$\int_{\mathbb R} f(t) \, \mu_A(dt) = \int_\Omega \Big( \int_{\mathbb R} f(t) \, \mu_{A(\omega)}(dt) \Big) \, dP(\omega) = \int_\Omega \frac{1}{n} \sum_{j=1}^{n} f(\lambda_j(\omega)) \, dP(\omega) = \int_\Omega \mathrm{tr}_n\big( f(A(\omega)) \big) \, dP(\omega) = E\big\{ \mathrm{tr}_n\big( f(A) \big) \big\},$$

where $\mathrm{tr}_n = \frac{1}{n}\mathrm{Tr}$ denotes the normalized trace on $M_n(\mathbb C)$.
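The last identity can be illustrated numerically for a fixed selfadjoint matrix (a minimal numpy sketch; the $2 \times 2$ matrix is a hypothetical example, not from the lecture): integrating $f$ against the empirical eigenvalue distribution $\mu_A$ agrees with $\mathrm{tr}_n(f(A))$ computed by spectral calculus.

```python
import numpy as np

# A fixed selfadjoint matrix with eigenvalues 1 and 3 (hypothetical example).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)

# f(A) by spectral calculus, here with f = exp.
f_A = vecs @ np.diag(np.exp(vals)) @ vecs.T

lhs = np.mean(np.exp(vals))        # integral of f against the empirical measure mu_A
rhs = np.trace(f_A) / A.shape[0]   # tr_n(f(A)), the normalized trace
assert np.isclose(lhs, rhs)
```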

The Gaussian Unitary Ensemble

Definition. By $\mathrm{GUE}(n, \sigma^2)$ we denote the set of random $n \times n$ matrices $X = (x_{ij})_{1 \le i,j \le n}$, defined on $(\Omega, \mathcal F, P)$, which satisfy the following conditions:
- for all $i, j$: $x_{ij} = \overline{x_{ji}}$;
- the random variables $x_{ij}$, $1 \le i \le j \le n$, are independent;
- for $i < j$: $\mathrm{Re}(x_{ij})$ and $\mathrm{Im}(x_{ij})$ are i.i.d. $N(0, \tfrac{1}{2}\sigma^2)$;
- for all $i$: $x_{ii} \sim N(0, \sigma^2)$.
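The four conditions translate directly into a sampler; the following is a plain-numpy sketch (`sample_gue` is our own helper name, not from the lecture).

```python
import numpy as np

def sample_gue(n, sigma2, rng):
    """Draw one matrix satisfying the four GUE(n, sigma2) conditions above."""
    s = np.sqrt(sigma2 / 2.0)
    # Independent entries above the diagonal: Re and Im parts i.i.d. N(0, sigma2/2).
    z = rng.normal(0.0, s, (n, n)) + 1j * rng.normal(0.0, s, (n, n))
    x = np.triu(z, k=1)
    x = x + x.conj().T                                  # enforces x_ij = conj(x_ji)
    x += np.diag(rng.normal(0.0, np.sqrt(sigma2), n))   # real N(0, sigma2) diagonal
    return x

rng = np.random.default_rng(0)
X = sample_gue(4, 0.25, rng)
```

By construction $X$ is selfadjoint, so its eigenvalues are real and the empirical eigenvalue distribution of the previous slides makes sense.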

The spectral distribution of a GUE random matrix

Let $X_n$ be a random matrix in $\mathrm{GUE}(n, \frac{1}{n})$. For any bounded Borel function $f \colon \mathbb R \to \mathbb R$ we then have

$$E\big\{ \mathrm{tr}_n(f(X_n)) \big\} = \int_{\mathbb R} f(x) h_n(x) \, dx,$$

where the function $h_n \colon \mathbb R \to [0, \infty)$ is given by

$$h_n(x) = \frac{1}{\sqrt{2n}} \sum_{j=0}^{n-1} \varphi_j\Big( \sqrt{\tfrac{n}{2}}\, x \Big)^2,$$

and where $\varphi_0, \varphi_1, \varphi_2, \ldots$ is the sequence of Hermite functions:

$$\varphi_k(x) = \frac{1}{(2^k k! \sqrt{\pi})^{1/2}} H_k(x) \exp\Big( -\frac{x^2}{2} \Big), \quad (k \in \mathbb N_0),$$

and $H_0, H_1, H_2, \ldots$ are the Hermite polynomials:

$$H_k(x) = (-1)^k \exp(x^2) \frac{d^k}{dx^k} \exp(-x^2).$$
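A numerical sanity check of this formula (a sketch; `phi` and `h` are our own helper names, built on numpy's physicists' Hermite module): $h_n$ should be a probability density, and for $n = 1$ it reduces to the standard Gaussian density $\frac{1}{\sqrt{2\pi}} e^{-x^2/2}$.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def phi(k, x):
    """Hermite function phi_k (physicists' convention, as on the slide)."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0                                   # picks out H_k
    return hermval(x, coeffs) * np.exp(-x * x / 2) / sqrt(2.0**k * factorial(k) * sqrt(pi))

def h(n, x):
    """Spectral density h_n of a GUE(n, 1/n) random matrix."""
    return sum(phi(j, sqrt(n / 2.0) * x) ** 2 for j in range(n)) / sqrt(2.0 * n)

# h_n integrates to 1 (Riemann sum on a wide grid; h_n has Gaussian tails).
x = np.linspace(-6.0, 6.0, 40001)
dx = x[1] - x[0]
for n in (1, 2, 5):
    assert abs(np.sum(h(n, x)) * dx - 1.0) < 1e-4
```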

The moment generating function of a GUE random matrix

Theorem. Let $X_n$ be a random matrix in $\mathrm{GUE}(n, \frac{1}{n})$. Then for any complex number $z$ we have

$$E\big\{ \mathrm{tr}_n(\exp(z X_n)) \big\} = \exp\Big( \frac{z^2}{2n} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} \Big( \frac{z^2}{n} \Big)^{j}.$$
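The series is easy to evaluate with a stable term-by-term recursion; the sketch below (`gue_mgf` and `semicircle_mgf` are our own names) checks it against the one case that is elementary, $n = 1$, where $X_1$ is a single standard Gaussian variable with mgf $e^{z^2/2}$.

```python
from math import exp, factorial

def gue_mgf(n, z):
    """E{tr_n(exp(z X_n))} for X_n in GUE(n, 1/n), via the series in the theorem."""
    total, term = 0.0, 1.0
    for j in range(n):
        total += term
        # ratio of consecutive terms of the series (avoids huge factorials)
        term *= (n - j - 1) * z * z / (n * (j + 1) * (j + 2))
    return exp(z * z / (2 * n)) * total

def semicircle_mgf(z, terms=25):
    """The n -> infinity limit sum_j z^(2j) / (j!(j+1)!) appearing in later slides."""
    return sum(z ** (2 * j) / (factorial(j) * factorial(j + 1)) for j in range(terms))

# GUE(1, 1) is a single N(0,1) variable, whose mgf is exp(z^2/2).
assert abs(gue_mgf(1, 0.7) - exp(0.7 ** 2 / 2)) < 1e-12
```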

Sketch of Proof. By analytic continuation it suffices to consider the case $z \in \mathbb R$.

Step I. We know that

$$E\big\{ \mathrm{tr}_n(\exp(z X_n)) \big\} = \frac{1}{\sqrt{2n}} \int_{\mathbb R} \exp(zt) \sum_{k=0}^{n-1} \varphi_k\Big( \sqrt{\tfrac{n}{2}}\, t \Big)^2 \, dt,$$

so by a substitution, it follows that we have to show that

$$F(s) := \int_{\mathbb R} \exp(st) \Big( \sum_{k=0}^{n-1} \varphi_k(t)^2 \Big) \, dt = n \exp\Big( \frac{s^2}{4} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} \Big( \frac{s^2}{2} \Big)^{j}$$

for any real number $s$ (here $s = \sqrt{2/n}\, z$).

Sketch of Proof (continued). Step II. By partial integration we find that

$$F(s) = \int_{\mathbb R} \exp(st) \Big( \sum_{k=0}^{n-1} \varphi_k(t)^2 \Big) \, dt = -\frac{1}{s} \int_{\mathbb R} \exp(st) \frac{d}{dt} \Big( \sum_{k=0}^{n-1} \varphi_k(t)^2 \Big) \, dt,$$

and using properties of the Hermite polynomials, one may verify that

$$\frac{d}{dt} \Big( \sum_{k=0}^{n-1} \varphi_k(t)^2 \Big) = -\sqrt{2n}\, \varphi_n(t) \varphi_{n-1}(t),$$

so that

$$F(s) = \frac{\sqrt{2n}}{s} \int_{\mathbb R} \exp(st) \varphi_n(t) \varphi_{n-1}(t) \, dt = \frac{\sqrt{2n}}{s} \big( 2^{2n-1} n! (n-1)! \pi \big)^{-1/2} \int_{\mathbb R} \exp(st - t^2) H_n(t) H_{n-1}(t) \, dt.$$

Sketch of Proof (continued). Step III. Using the substitution $y = t - \tfrac{1}{2}s$ we have that $\exp(st - t^2) = \exp(-y^2) \exp(\tfrac{1}{4}s^2)$, and hence

$$F(s) = \frac{\sqrt{2n}}{s} \big( 2^{2n-1} n! (n-1)! \pi \big)^{-1/2} \int_{\mathbb R} \exp(st - t^2) H_n(t) H_{n-1}(t) \, dt = \frac{1}{s} \big( 2^{2(n-1)} ((n-1)!)^2 \pi \big)^{-1/2} \exp\Big( \frac{1}{4}s^2 \Big) \int_{\mathbb R} \exp(-y^2) H_n\big( y + \tfrac{1}{2}s \big) H_{n-1}\big( y + \tfrac{1}{2}s \big) \, dy.$$

Using then the following properties of the Hermite polynomials:

$$H_k(x + a) = \sum_{j=0}^{k} \binom{k}{j} (2a)^{k-j} H_j(x), \quad (x, a \in \mathbb R, \; k \in \mathbb N_0),$$

and

$$\int_{\mathbb R} H_k(y) H_l(y) \exp(-y^2) \, dy = \delta_{k,l}\, 2^k k! \sqrt{\pi}, \quad (k, l \in \mathbb N_0),$$

Sketch of Proof (continued). we find that

$$F(s) = \frac{1}{s} \big( 2^{2(n-1)} ((n-1)!)^2 \pi \big)^{-1/2} \exp\Big( \frac{1}{4}s^2 \Big) \sum_{j=0}^{n-1} \binom{n}{j} \binom{n-1}{j} s^{n-j} s^{n-1-j} \big( 2^j j! \sqrt{\pi} \big) = n \exp\Big( \frac{s^2}{4} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} \Big( \frac{s^2}{2} \Big)^{j},$$

where the last equality follows by reindexing $j \mapsto n-1-j$ and simplifying the binomial coefficients. This is the claimed formula, as desired.

Wigner's semi-circle law

Theorem. For each $n$ in $\mathbb N$, let $X_n$ be a random matrix in $\mathrm{GUE}(n, \frac{1}{n})$ and consider its spectral distribution $\mu_{X_n}$. Then

$$\mu_{X_n} \xrightarrow{\;\mathrm w\;} \frac{1}{2\pi} \sqrt{4 - t^2}\, 1_{[-2,2]}(t) \, dt, \quad \text{as } n \to \infty,$$

i.e., for any continuous bounded function $f \colon \mathbb R \to \mathbb R$ we have

$$E\big\{ \mathrm{tr}_n(f(X_n)) \big\} = \int_{\mathbb R} f(x) \, \mu_{X_n}(dx) \longrightarrow \frac{1}{2\pi} \int_{-2}^{2} f(x) \sqrt{4 - x^2} \, dx$$

as $n \to \infty$.
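One can watch the theorem at work by sampling a single large GUE matrix and comparing the empirical eigenvalue CDF with the semicircle CDF $F(x) = \tfrac12 + \tfrac{x\sqrt{4-x^2}}{4\pi} + \tfrac{1}{\pi}\arcsin(\tfrac{x}{2})$. A sketch (seed, matrix size, and tolerances are ad hoc choices):

```python
import numpy as np
from math import asin, pi, sqrt

rng = np.random.default_rng(1)
n = 500
# One GUE(n, 1/n) sample, following the earlier definition:
# Re/Im parts N(0, 1/(2n)) off the diagonal, real N(0, 1/n) on the diagonal.
z = rng.normal(0, sqrt(1 / (2 * n)), (n, n)) + 1j * rng.normal(0, sqrt(1 / (2 * n)), (n, n))
X = np.triu(z, 1)
X = X + X.conj().T + np.diag(rng.normal(0, sqrt(1 / n), n))
eigs = np.linalg.eigvalsh(X)

def semicircle_cdf(x):
    """CDF of the semicircle law, supported on [-2, 2]."""
    x = max(-2.0, min(2.0, x))
    return 0.5 + x * sqrt(4 - x * x) / (4 * pi) + asin(x / 2) / pi

# The empirical CDF should already be close to the semicircle CDF at n = 500.
for xv in (-1.0, 0.0, 0.5, 1.5):
    assert abs(np.mean(eigs <= xv) - semicircle_cdf(xv)) < 0.05
```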

Proof of Wigner's semi-circle law. By the continuity theorem for characteristic functions of probability measures, it suffices to show that

$$E\big\{ \mathrm{tr}_n(\exp(z X_n)) \big\} \longrightarrow \frac{1}{2\pi} \int_{-2}^{2} \exp(zt) \sqrt{4 - t^2} \, dt,$$

as $n \to \infty$, for any complex number $z$. Given such a $z$, it follows by the previous theorem and dominated convergence (for series) that

$$E\big\{ \mathrm{tr}_n(\exp(z X_n)) \big\} = \exp\Big( \frac{z^2}{2n} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{n^j\, j!(j+1)!}\, z^{2j} \longrightarrow \sum_{j=0}^{\infty} \frac{1}{j!(j+1)!}\, z^{2j}.$$

Proof of Wigner's semi-circle law (continued). On the other hand we have

$$\frac{1}{2\pi} \int_{-2}^{2} \exp(zt) \sqrt{4 - t^2} \, dt = \frac{1}{2\pi} \int_{-2}^{2} \Big( \sum_{j=0}^{\infty} \frac{z^j t^j}{j!} \Big) \sqrt{4 - t^2} \, dt = \sum_{j=0}^{\infty} \frac{z^j}{j!} \Big( \frac{1}{2\pi} \int_{-2}^{2} t^j \sqrt{4 - t^2} \, dt \Big) = \sum_{j=0}^{\infty} \frac{z^{2j}}{(2j)!} \Big( \frac{1}{j+1} \binom{2j}{j} \Big) = \sum_{j=0}^{\infty} \frac{1}{j!(j+1)!}\, z^{2j},$$

as desired; here the odd moments vanish by symmetry, and the even moments of the semi-circle distribution are the Catalan numbers $\frac{1}{j+1}\binom{2j}{j}$.
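The moment computation just used (odd moments vanish; even moments are the Catalan numbers $1, 1, 2, 5, 14, \ldots$) is easy to confirm by direct numerical integration; a small sketch:

```python
import numpy as np
from math import comb, pi

# Moments of the semicircle density sqrt(4 - t^2) / (2 pi) on [-2, 2].
t = np.linspace(-2.0, 2.0, 400001)
dt = t[1] - t[0]
density = np.sqrt(4.0 - t * t) / (2.0 * pi)

for j in range(5):
    even = np.sum(t ** (2 * j) * density) * dt       # 2j-th moment
    odd = np.sum(t ** (2 * j + 1) * density) * dt    # (2j+1)-th moment
    catalan = comb(2 * j, j) // (j + 1)              # Catalan number C_j
    assert abs(even - catalan) < 1e-3
    assert abs(odd) < 1e-10
```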

The strong version of Wigner's semi-circle law

Theorem. For each $n$ in $\mathbb N$, let $X_n$ be a random matrix in $\mathrm{GUE}(n, \frac{1}{n})$. Then there is a measurable set $S \subseteq \Omega$ with probability one, such that

$$\mu_{X_n(\omega)} \xrightarrow{\;\mathrm w\;} \frac{1}{2\pi} \sqrt{4 - t^2}\, 1_{[-2,2]}(t) \, dt, \quad \text{as } n \to \infty,$$

for all $\omega$ in $S$. In other words, for any interval $I$ in $\mathbb R$, we have

$$\frac{1}{n} \#\big\{ j \in \{1, 2, \ldots, n\} \;\big|\; \lambda_j(X_n(\omega)) \in I \big\} \longrightarrow \frac{1}{2\pi} \int_{I \cap [-2,2]} \sqrt{4 - t^2} \, dt,$$

for all $\omega$ in $S$.

Sketch of Proof. Step 1 (Concentration inequality). Let $G_{N,\sigma}$ denote the Gaussian distribution on $\mathbb R^N$ with Lebesgue density

$$\frac{dG_{N,\sigma}(x)}{dx} = (2\pi\sigma^2)^{-N/2} \exp\Big( -\frac{\|x\|^2}{2\sigma^2} \Big),$$

where $\|x\|$ is the Euclidean norm of $x$. Furthermore, let $F \colon \mathbb R^N \to \mathbb R$ be a function that satisfies the Lipschitz condition

$$|F(x) - F(y)| \le c \|x - y\|, \quad (x, y \in \mathbb R^N), \tag{1}$$

for some positive constant $c$. Then for any positive number $\epsilon$, we have that

$$G_{N,\sigma}\big( \big\{ x \in \mathbb R^N \;\big|\; |F(x) - E(F)| > \epsilon \big\} \big) \le 2 \exp\Big( -\frac{K\epsilon^2}{c^2\sigma^2} \Big),$$

where $E(F) = \int_{\mathbb R^N} F(x) \, dG_{N,\sigma}(x)$, and $K = \frac{2}{\pi^2}$.

Sketch of Proof (continued). Step 2. Apply Step 1 to the function $F(A) = \mathrm{tr}_n\big( f(A) \big)$, $(A \in M_n(\mathbb C)_{\mathrm{sa}})$, where $f \colon \mathbb R \to \mathbb R$ satisfies a Lipschitz condition as in (1): $|f(x) - f(y)| \le c |x - y|$, $(x, y \in \mathbb R)$. This yields the estimate

$$P\big( \big| \mathrm{tr}_n(f(X_n)) - E\{ \mathrm{tr}_n(f(X_n)) \} \big| > \epsilon \big) \le 2 \exp\Big( -\frac{n^2 K \epsilon^2}{c^2} \Big).$$

Step 3. Use the Borel-Cantelli lemma!

A differential equation for the spectral density h_n

Proposition. For any positive integer $n$, the spectral density $h_n$ of a GUE random matrix $X_n$ satisfies the differential equation

$$\frac{1}{n^2} h_n'''(x) + (4 - x^2) h_n'(x) + x h_n(x) = 0.$$
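The proposition can be probed numerically: build $h_n$ from the Hermite functions and evaluate the left-hand side with central finite differences. In the sketch below, `phi`, `h`, and `ode_residual` are our own helper names, and the step size and tolerance are ad hoc choices.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def phi(k, x):
    """Hermite function phi_k (physicists' convention)."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return hermval(x, c) * np.exp(-x * x / 2) / sqrt(2.0**k * factorial(k) * sqrt(pi))

def h(n, x):
    """Spectral density h_n of GUE(n, 1/n)."""
    return sum(phi(j, sqrt(n / 2.0) * x) ** 2 for j in range(n)) / sqrt(2.0 * n)

def ode_residual(n, x, d=1e-3):
    """(1/n^2) h''' + (4 - x^2) h' + x h at the point x, via central differences."""
    f = h(n, np.array([x - 2 * d, x - d, x, x + d, x + 2 * d]))
    h1 = (f[3] - f[1]) / (2 * d)                       # first derivative
    h3 = (f[4] - 2 * f[3] + 2 * f[1] - f[0]) / (2 * d**3)  # third derivative
    return h3 / n**2 + (4 - x * x) * h1 + x * f[2]

# The residual should vanish up to discretization error.
for n in (1, 2, 3):
    for xv in (0.5, 1.0, 1.7):
        assert abs(ode_residual(n, xv)) < 1e-2
```

For $n = 1$ the equation can even be checked by hand: $h_1(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ gives $h_1''' = (3x - x^3) h_1$ and $h_1' = -x h_1$, and the three terms cancel exactly.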

Sketch of proof. Step 1. We have seen that

$$\psi_n(z) := \int_{\mathbb R} \exp(zt) h_n(t) \, dt = E\big\{ \mathrm{tr}_n(\exp(z X_n)) \big\} = e^{z^2/2n}\, \eta_n(z^2/n),$$

with $\eta_n \colon \mathbb R \to \mathbb R$ the function given by

$$\eta_n(s) = \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!}\, s^j.$$

It is a classical result that $\eta_n$ satisfies the differential equation

$$s \eta_n''(s) + (2 + s) \eta_n'(s) - (n-1) \eta_n(s) = 0. \tag{2}$$

(Indeed, $\eta_n(s) = \frac{1}{n} L_{n-1}^{(1)}(-s)$, where $L_{n-1}^{(1)}$ is an associated Laguerre polynomial.)

Sketch of proof (continued). Step II. Using the substitution $s = z^2/n$ it follows from (2) that $\psi_n$ satisfies the differential equation

$$n^2 z \psi_n''(z) + 3n^2 \psi_n'(z) - (4n^2 z + z^3) \psi_n(z) = 0.$$

Setting $z = -iy$ for $y$ in $\mathbb R$, it follows that the Fourier transform $\hat h_n(y) = \psi_n(-iy)$ satisfies the differential equation

$$n^2 i y \hat h_n''(y) + 3n^2 i \hat h_n'(y) + (4n^2 i y - i y^3) \hat h_n(y) = 0. \tag{3}$$

Sketch of proof (continued). Step III. Consider the co-Fourier transform $\mathcal F$ defined on $L^1(\mathbb R)$ by

$$[\mathcal F f](y) = \int_{\mathbb R} \exp(iyt) f(t) \, dt, \quad (y \in \mathbb R),$$

and recall that by Fourier inversion $\mathcal F \hat h_n = 2\pi h_n$. Applying now $\mathcal F$ to (3), we obtain that $h_n$ satisfies

$$\frac{1}{n^2} h_n'''(x) + (4 - x^2) h_n'(x) + x h_n(x) = 0,$$

as desired.

The Harer-Zagier Recursion Formula

Theorem. For each $n$ in $\mathbb N$ and $p$ in $\mathbb N_0$, put

$$\gamma(p, n) = \int_{\mathbb R} t^{2p} h_n(t) \, dt = E\big\{ \mathrm{tr}_n(X_n^{2p}) \big\},$$

where $X_n \in \mathrm{GUE}(n, \frac{1}{n})$. These moments satisfy the recursion formula

$$(p+2)\, \gamma(p+1, n) = \frac{(4p^2 - 1)p}{n^2}\, \gamma(p-1, n) + (4p+2)\, \gamma(p, n), \tag{4}$$

for any positive integer $p$.

Proof of the Harer-Zagier recursion formula. Since $h_n$ satisfies the differential equation

$$\frac{1}{n^2} h_n'''(x) + (4 - x^2) h_n'(x) + x h_n(x) = 0,$$

it follows by multiplication by $x^{2p+1}$ and partial integration that

$$0 = \int_{\mathbb R} x^{2p+1} \Big( \frac{1}{n^2} h_n'''(x) + (4 - x^2) h_n'(x) + x h_n(x) \Big) \, dx = \int_{\mathbb R} \Big( -\frac{(2p+1)(2p)(2p-1)}{n^2}\, x^{2p-2} - 4(2p+1)\, x^{2p} + (2p+3)\, x^{2p+2} + x^{2p+2} \Big) h_n(x) \, dx = -\frac{2(4p^2-1)p}{n^2}\, \gamma(p-1, n) - 4(2p+1)\, \gamma(p, n) + 2(p+2)\, \gamma(p+1, n),$$

from which (4) follows readily.
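The recursion produces exact rational moments once seeded with $\gamma(0,n) = 1$ and $\gamma(1,n) = E\{\mathrm{tr}_n(X_n^2)\} = 1$. A sketch in exact arithmetic (`gue_moments` is our own name), which also exhibits the familiar fourth moment $E\{\mathrm{tr}_n(X_n^4)\} = 2 + 1/n^2$ and the Catalan-number limit:

```python
from fractions import Fraction

def gue_moments(n, pmax):
    """gamma(p, n) for p = 0..pmax via the Harer-Zagier recursion (4),
    seeded with gamma(0, n) = 1 and gamma(1, n) = 1."""
    g = [Fraction(1), Fraction(1)]
    for p in range(1, pmax):
        nxt = (Fraction((4 * p * p - 1) * p, n * n) * g[p - 1] + (4 * p + 2) * g[p]) / (p + 2)
        g.append(nxt)
    return g

# Fourth moment of GUE(2, 1/2): 2 + 1/n^2 = 9/4.
assert gue_moments(2, 2)[2] == Fraction(9, 4)
```

As $n \to \infty$ the $1/n^2$ term drops out and the recursion collapses to $(p+2) C_{p+1} = (4p+2) C_p$, which is exactly the recursion for the Catalan numbers, consistent with Wigner's semi-circle law.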

Convergence of largest and smallest eigenvalue for a GUE matrix

Theorem. For each positive integer $n$, let $X_n$ be a random matrix from $\mathrm{GUE}(n, \frac{1}{n})$, and let $\lambda_{\max}(X_n)$ and $\lambda_{\min}(X_n)$ denote, respectively, the largest and smallest eigenvalues of $X_n$. Then

$$\lim_{n \to \infty} \lambda_{\max}(X_n) = 2, \quad \text{almost surely},$$

and

$$\lim_{n \to \infty} \lambda_{\min}(X_n) = -2, \quad \text{almost surely}.$$
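A Monte Carlo glimpse of the theorem (the sampler follows the $\mathrm{GUE}(n, \frac{1}{n})$ definition from earlier; seed, sizes, and bounds are ad hoc choices): already at moderate $n$ the largest eigenvalue sits close to 2, with fluctuations of order $n^{-2/3}$.

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(7)

def sample_gue(n):
    """One matrix from GUE(n, 1/n), same conventions as in the definition."""
    s = sqrt(1 / (2 * n))
    z = rng.normal(0, s, (n, n)) + 1j * rng.normal(0, s, (n, n))
    x = np.triu(z, 1)
    return x + x.conj().T + np.diag(rng.normal(0, sqrt(1 / n), n))

# Largest eigenvalue for two moderate matrix sizes.
lam_max = [np.linalg.eigvalsh(sample_gue(n))[-1] for n in (100, 400)]
assert all(1.6 < lm < 2.3 for lm in lam_max)
```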

Proof of $\limsup_n \lambda_{\max}(X_n) \le 2$ almost surely. It suffices to show that

$$\forall \epsilon > 0 \colon \quad P\Big( \limsup_{n \to \infty} \lambda_{\max}(X_n) \le 2 + \epsilon \Big) = 1,$$

which will follow if we show that

$$\forall \epsilon > 0 \colon \quad P\big( \lambda_{\max}(X_n) \le 2 + \epsilon, \text{ for all sufficiently large } n \big) = 1,$$

which in turn follows from the Borel-Cantelli Lemma if we show that

$$\forall \epsilon > 0 \colon \quad \sum_{n=1}^{\infty} P\big( \lambda_{\max}(X_n) \ge 2 + \epsilon \big) < \infty. \tag{5}$$

Proof of $\limsup_n \lambda_{\max}(X_n) \le 2$ (continued). So let $\epsilon > 0$ be given, and note that by Chebychev's inequality we have, for any $t > 0$,

$$P\big( \lambda_{\max}(X_n) \ge 2 + \epsilon \big) = P\big( \exp(t \lambda_{\max}(X_n)) \ge \exp(t(2 + \epsilon)) \big) \le \exp(-(2 + \epsilon)t)\, E\big\{ \exp(t \lambda_{\max}(X_n)) \big\}. \tag{6}$$

Note here that

$$\exp(t \lambda_{\max}(X_n)) = \lambda_{\max}\big( \exp(t X_n) \big) \le \sum_{j=1}^{n} \lambda_j\big( \exp(t X_n) \big) = n\, \mathrm{tr}_n\big( \exp(t X_n) \big),$$

since all the eigenvalues of $\exp(t X_n)$ are positive. Thus...

$$E\big\{ \exp(t \lambda_{\max}(X_n)) \big\} \le n\, E\big\{ \mathrm{tr}_n\big( \exp(t X_n) \big) \big\} = n \exp\Big( \frac{t^2}{2n} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} \Big( \frac{t^2}{n} \Big)^{j} \le n \exp\Big( \frac{t^2}{2n} \Big) \sum_{j=0}^{n-1} \frac{n^j}{j!(j+1)!} \Big( \frac{t^2}{n} \Big)^{j} \le n \exp\Big( \frac{t^2}{2n} \Big) \sum_{j=0}^{\infty} \Big( \frac{t^j}{j!} \Big)^2 \le n \exp\Big( \frac{t^2}{2n} \Big) \Big( \sum_{j=0}^{\infty} \frac{t^j}{j!} \Big)^2 = n \exp\Big( \frac{t^2}{2n} + 2t \Big).$$

Proof of $\limsup_n \lambda_{\max}(X_n) \le 2$ (continued). Comparing with (6) we conclude that

$$P\big( \lambda_{\max}(X_n) \ge 2 + \epsilon \big) \le \exp(-(2 + \epsilon)t)\, E\big\{ \exp(t \lambda_{\max}(X_n)) \big\} \le n \exp(-(2 + \epsilon)t) \exp\Big( \frac{t^2}{2n} + 2t \Big) = n \exp\Big( -\epsilon t + \frac{t^2}{2n} \Big),$$

which holds for all $t > 0$. Putting $t = n\epsilon$ we obtain

$$P\big( \lambda_{\max}(X_n) \ge 2 + \epsilon \big) \le n \exp\Big( -\frac{n\epsilon^2}{2} \Big),$$

from which (5) follows immediately.

Proof of $\liminf_n \lambda_{\max}(X_n) \ge 2$ almost surely. Let $\epsilon$ be a positive number. Then for almost all $\omega$ we have

$$\#\big\{ j \in \{1, \ldots, n\} \;\big|\; \lambda_j(X_n(\omega)) \ge 2 - \epsilon \big\} = \sum_{j=1}^{n} \delta_{\lambda_j(X_n(\omega))}\big( [2 - \epsilon, \infty) \big) = n\, \mu_{X_n(\omega)}\big( [2 - \epsilon, \infty) \big) \longrightarrow \infty, \quad \text{as } n \to \infty,$$

since, according to the strong version of Wigner's semi-circle law,

$$\mu_{X_n(\omega)}\big( [2 - \epsilon, \infty) \big) \longrightarrow \frac{1}{2\pi} \int_{2 - \epsilon}^{2} \sqrt{4 - t^2} \, dt > 0,$$

for almost all $\omega$. It follows in particular that

$$\liminf_{n \to \infty} \lambda_{\max}(X_n(\omega)) \ge 2 - \epsilon,$$

for almost all $\omega$.