Lecture I: Asymptotics for large GUE random matrices Steen Thorbjørnsen, University of Aarhus
Random Matrices

Definition. Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $n$ be a positive integer. Then a random $n \times n$ matrix $A$ on $(\Omega, \mathcal{F}, P)$ is an $n \times n$ matrix $A = (a_{ij})_{1 \le i,j \le n}$, where all the entries are complex-valued random variables on $(\Omega, \mathcal{F}, P)$. In other words, $A$ is a measurable mapping $A \colon (\Omega, \mathcal{F}, P) \to (M_n(\mathbb{C}), \mathcal{B}(M_n(\mathbb{C})))$, where $M_n(\mathbb{C})$ is equipped with its Borel $\sigma$-algebra $\mathcal{B}(M_n(\mathbb{C}))$.
The spectral distribution of a selfadjoint random matrix

Let $A \colon \Omega \to M_n(\mathbb{C})$ be a selfadjoint random matrix, i.e., $A(\omega)^* = A(\omega)$ for all $\omega$. Then for each $\omega$ we consider the ordered eigenvalues $\lambda_1(\omega) \le \lambda_2(\omega) \le \cdots \le \lambda_n(\omega)$ of $A(\omega)$. For each fixed $\omega$ the empirical eigenvalue distribution of $A(\omega)$ is the probability measure
\[
\mu_{A(\omega)} = \frac{1}{n} \sum_{j=1}^{n} \delta_{\lambda_j(\omega)},
\]
where $\delta_c$ denotes the Dirac measure at the point $c$. Then the spectral distribution of $A$ is the mixture of the family $(\mu_{A(\omega)})_{\omega \in \Omega}$ with respect to $P$, i.e...
The spectral distribution (continued)

\[
\mu_A(B) = \int_{\Omega} \mu_{A(\omega)}(B) \, dP(\omega),
\]
for any Borel subset $B$ of $\mathbb{R}$. Thus, by a standard extension argument, for any bounded Borel function $f \colon \mathbb{R} \to \mathbb{R}$ we have
\[
\int_{\mathbb{R}} f(t) \, \mu_A(dt)
= \int_{\Omega} \int_{\mathbb{R}} f(t) \, \mu_{A(\omega)}(dt) \, dP(\omega)
= \int_{\Omega} \frac{1}{n} \sum_{j=1}^{n} f(\lambda_j(\omega)) \, dP(\omega)
= \int_{\Omega} \mathrm{tr}_n\big( f(A(\omega)) \big) \, dP(\omega)
= E\big\{ \mathrm{tr}_n\big( f(A) \big) \big\},
\]
where $\mathrm{tr}_n = \frac{1}{n}\mathrm{Tr}$ denotes the normalized trace on $M_n(\mathbb{C})$.
The Gaussian Unitary Ensemble

Definition. By $\mathrm{GUE}(n, \sigma^2)$ we denote the set of random $n \times n$ matrices $X = (x_{ij})_{1 \le i,j \le n}$, defined on $(\Omega, \mathcal{F}, P)$, which satisfy the following conditions:
- $\forall i, j$: $x_{ij} = \overline{x_{ji}}$.
- The random variables $x_{ij}$, $1 \le i \le j \le n$, are independent.
- $\forall i < j$: $\mathrm{Re}(x_{ij})$, $\mathrm{Im}(x_{ij})$ are i.i.d. $\sim N(0, \tfrac{1}{2}\sigma^2)$.
- $\forall i$: $x_{ii} \sim N(0, \sigma^2)$.
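The conditions of the definition translate directly into a sampler. The following is a minimal numerical sketch in Python with NumPy; the helper name `sample_gue` is ours, not part of the lecture.

```python
import numpy as np

def sample_gue(n, sigma2, rng=None):
    """Draw one matrix from GUE(n, sigma^2) as in the definition above."""
    rng = np.random.default_rng(rng)
    s = np.sqrt(sigma2 / 2)
    # strictly upper-triangular entries: Re and Im i.i.d. N(0, sigma^2/2)
    z = rng.normal(0, s, (n, n)) + 1j * rng.normal(0, s, (n, n))
    x = np.triu(z, 1)
    x = x + x.conj().T                                  # enforce x_ji = conj(x_ij)
    # real diagonal entries ~ N(0, sigma^2)
    x[np.diag_indices(n)] = rng.normal(0, np.sqrt(sigma2), n)
    return x

n = 50
X = sample_gue(n, 1.0 / n, rng=0)
print(np.allclose(X, X.conj().T))   # True: the sample is selfadjoint
```

For $\mathrm{GUE}(n, \frac{1}{n})$ one has $E\{\mathrm{tr}_n(X^2)\} = \frac{1}{n^2}\sum_{i,j} E|x_{ij}|^2 = 1$, which gives a quick sanity check on the normalization.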
The spectral distribution of a GUE random matrix

Let $X_n$ be a random matrix in $\mathrm{GUE}(n, \frac{1}{n})$. For any bounded Borel function $f \colon \mathbb{R} \to \mathbb{R}$ we then have
\[
E\big\{ \mathrm{tr}_n(f(X_n)) \big\} = \int_{\mathbb{R}} f(x) h_n(x) \, dx,
\]
where the function $h_n \colon \mathbb{R} \to [0, \infty)$ is given by
\[
h_n(x) = \frac{1}{\sqrt{2n}} \sum_{j=0}^{n-1} \varphi_j\Big( \sqrt{\tfrac{n}{2}}\, x \Big)^2,
\]
and where $\varphi_0, \varphi_1, \varphi_2, \ldots$ is the sequence of Hermite functions:
\[
\varphi_k(x) = \frac{1}{(2^k k! \sqrt{\pi})^{1/2}} H_k(x) \exp\Big( -\frac{x^2}{2} \Big), \quad (k \in \mathbb{N}_0),
\]
and $H_0, H_1, H_2, \ldots$ are the Hermite polynomials:
\[
H_k(x) = (-1)^k \exp(x^2) \frac{d^k}{dx^k} \exp(-x^2).
\]
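The density $h_n$ can be evaluated numerically. A sketch (the helper name `h_n` is ours), using the standard three-term recurrence for the orthonormal Hermite functions, $\varphi_{k+1}(t) = \sqrt{2/(k+1)}\, t\, \varphi_k(t) - \sqrt{k/(k+1)}\, \varphi_{k-1}(t)$, which avoids the overflow that direct evaluation of $H_k$ and $k!$ would cause:

```python
import numpy as np

def h_n(x, n):
    """Spectral density of GUE(n, 1/n):
    h_n(x) = (2n)^(-1/2) * sum_{j<n} phi_j(sqrt(n/2) x)^2,
    with phi_j the orthonormal Hermite functions."""
    t = np.sqrt(n / 2.0) * np.asarray(x, dtype=float)
    phi_prev = np.zeros_like(t)
    phi = np.pi ** (-0.25) * np.exp(-t ** 2 / 2)        # phi_0
    total = phi ** 2
    for k in range(n - 1):
        # phi_{k+1} = sqrt(2/(k+1)) t phi_k - sqrt(k/(k+1)) phi_{k-1}
        phi_next = np.sqrt(2.0 / (k + 1)) * t * phi - np.sqrt(k / (k + 1.0)) * phi_prev
        phi_prev, phi = phi, phi_next
        total += phi ** 2
    return total / np.sqrt(2.0 * n)

# h_n is a probability density: a Riemann sum over [-4, 4] is ~ 1
xs = np.linspace(-4, 4, 4001)
dx = xs[1] - xs[0]
print(float(np.sum(h_n(xs, 10)) * dx))   # ≈ 1.0
```

The second moment $\int x^2 h_n(x)\,dx = E\{\mathrm{tr}_n(X_n^2)\} = 1$ gives a further check.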
The moment generating function of a GUE random matrix

Theorem. Let $X_n$ be a random matrix in $\mathrm{GUE}(n, \frac{1}{n})$. Then for any complex number $z$ we have
\[
E\big\{ \mathrm{tr}_n(\exp(zX_n)) \big\}
= \exp\Big( \frac{z^2}{2n} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} \Big( \frac{z^2}{n} \Big)^j.
\]
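The closed form can be checked against a Monte Carlo average of $\mathrm{tr}_n(\exp(tX_n))$ over sampled GUE matrices. A sketch with our own helper names (`mgf_formula`, `sample_gue`); the trace is computed from the eigenvalues of each sample:

```python
import numpy as np

def mgf_formula(z, n):
    """exp(z^2/2n) * sum_{j<n} (n-1)...(n-j) / (j!(j+1)!) * (z^2/n)^j."""
    total, term = 0.0, 1.0            # term starts at the j = 0 summand, which is 1
    for j in range(n):
        total += term
        # ratio between the (j+1)-st and j-th summand
        term *= (n - 1 - j) / ((j + 1) * (j + 2)) * (z ** 2 / n)
    return np.exp(z ** 2 / (2 * n)) * total

def sample_gue(n, rng):
    # GUE(n, 1/n) sampler as in the definition
    s = np.sqrt(1.0 / (2 * n))
    z = rng.normal(0, s, (n, n)) + 1j * rng.normal(0, s, (n, n))
    x = np.triu(z, 1)
    x = x + x.conj().T
    x[np.diag_indices(n)] = rng.normal(0, np.sqrt(1.0 / n), n)
    return x

rng = np.random.default_rng(0)
n, t = 30, 1.0
mc = np.mean([np.mean(np.exp(t * np.linalg.eigvalsh(sample_gue(n, rng))))
              for _ in range(400)])
print(mgf_formula(t, n), mc)   # the two values agree to a few decimals
```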
Sketch of Proof. By analytic continuation it suffices to consider the case $z \in \mathbb{R}$.

Step I. We know that
\[
E\big\{ \mathrm{tr}_n(\exp(zX_n)) \big\}
= \frac{1}{\sqrt{2n}} \int_{\mathbb{R}} \exp(zt) \sum_{k=0}^{n-1} \varphi_k\Big( \sqrt{\tfrac{n}{2}}\, t \Big)^2 dt,
\]
so by the substitution $t \mapsto \sqrt{2/n}\, t$ (with $s = \sqrt{2/n}\, z$), it follows that we have to show that
\[
F(s) := \int_{\mathbb{R}} \exp(st) \Big( \sum_{k=0}^{n-1} \varphi_k(t)^2 \Big) dt
= n \exp\Big( \frac{s^2}{4} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} \Big( \frac{s^2}{2} \Big)^j,
\]
for any real number $s$.
Sketch of Proof (continued).

Step II. By partial integration we find that
\[
F(s) = \int_{\mathbb{R}} \exp(st) \Big( \sum_{k=0}^{n-1} \varphi_k(t)^2 \Big) dt
= -\frac{1}{s} \int_{\mathbb{R}} \exp(st) \, \frac{d}{dt}\Big( \sum_{k=0}^{n-1} \varphi_k(t)^2 \Big) dt,
\]
and using properties of the Hermite polynomials, one may verify that
\[
\frac{d}{dt}\Big( \sum_{k=0}^{n-1} \varphi_k(t)^2 \Big) = -\sqrt{2n}\, \varphi_n(t) \varphi_{n-1}(t),
\]
so that
\[
F(s) = \frac{\sqrt{2n}}{s} \int_{\mathbb{R}} \exp(st) \varphi_n(t) \varphi_{n-1}(t) \, dt
= \frac{\sqrt{2n}}{s} \big( 2^{2n-1} n!(n-1)!\pi \big)^{-1/2} \int_{\mathbb{R}} \exp(st - t^2) H_n(t) H_{n-1}(t) \, dt.
\]
Sketch of Proof (continued).

Step III. Using the substitution $y = t - \frac{1}{2}s$ we have $\exp(st - t^2) = \exp(-y^2) \exp(\frac{1}{4}s^2)$, and hence
\[
F(s) = \frac{\sqrt{2n}}{s} \big( 2^{2n-1} n!(n-1)!\pi \big)^{-1/2} \int_{\mathbb{R}} \exp(st - t^2) H_n(t) H_{n-1}(t) \, dt
= \frac{1}{s} \big( 2^{2(n-1)} ((n-1)!)^2 \pi \big)^{-1/2} \exp\Big( \frac{s^2}{4} \Big) \int_{\mathbb{R}} \exp(-y^2) H_n\big( y + \tfrac{1}{2}s \big) H_{n-1}\big( y + \tfrac{1}{2}s \big) \, dy.
\]
Using then the following properties of the Hermite polynomials:
\[
H_k(x + a) = \sum_{j=0}^{k} \binom{k}{j} (2a)^{k-j} H_j(x), \quad (x, a \in \mathbb{R},\ k \in \mathbb{N}_0),
\]
and
\[
\int_{\mathbb{R}} H_k(y) H_l(y) \exp(-y^2) \, dy = \delta_{k,l}\, 2^k k! \sqrt{\pi}, \quad (k, l \in \mathbb{N}_0),
\]
Sketch of Proof (continued).

we find that
\[
F(s) = \frac{1}{s} \big( 2^{2(n-1)} ((n-1)!)^2 \pi \big)^{-1/2} \exp\Big( \frac{s^2}{4} \Big)
\sum_{j=0}^{n-1} \binom{n}{j} \binom{n-1}{j} s^{n-j} s^{n-1-j} \big( 2^j j! \sqrt{\pi} \big)
= n \exp\Big( \frac{s^2}{4} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} \Big( \frac{s^2}{2} \Big)^j,
\]
as desired.
Wigner's semi-circle law

Theorem. For each $n$ in $\mathbb{N}$, let $X_n$ be a random matrix in $\mathrm{GUE}(n, \frac{1}{n})$ and consider its spectral distribution $\mu_{X_n}$. Then
\[
\mu_{X_n} \overset{w}{\longrightarrow} \frac{1}{2\pi} \sqrt{4 - t^2} \, 1_{[-2,2]}(t) \, dt, \quad \text{as } n \to \infty,
\]
i.e., for any continuous bounded function $f \colon \mathbb{R} \to \mathbb{R}$ we have
\[
E\big\{ \mathrm{tr}_n(f(X_n)) \big\} = \int_{\mathbb{R}} f(x) \, \mu_{X_n}(dx)
\longrightarrow \frac{1}{2\pi} \int_{-2}^{2} f(x) \sqrt{4 - x^2} \, dx,
\]
as $n \to \infty$.
Proof of Wigner's semi-circle law. By the continuity theorem for characteristic functions of probability measures, it suffices to show that
\[
E\big\{ \mathrm{tr}_n(\exp(zX_n)) \big\} \longrightarrow \frac{1}{2\pi} \int_{-2}^{2} \exp(zt) \sqrt{4 - t^2} \, dt,
\]
as $n \to \infty$, for any complex number $z$. Given such a $z$ it follows by the previous theorem and dominated convergence (for series) that
\[
E\big\{ \mathrm{tr}_n(\exp(zX_n)) \big\}
= \exp\Big( \frac{z^2}{2n} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{n^j} \cdot \frac{z^{2j}}{j!(j+1)!}
\longrightarrow \sum_{j=0}^{\infty} \frac{1}{j!(j+1)!} z^{2j}.
\]
Proof of Wigner's semi-circle law (continued). On the other hand we have
\[
\frac{1}{2\pi} \int_{-2}^{2} \exp(zt) \sqrt{4 - t^2} \, dt
= \frac{1}{2\pi} \int_{-2}^{2} \Big( \sum_{j=0}^{\infty} \frac{z^j t^j}{j!} \Big) \sqrt{4 - t^2} \, dt
= \sum_{j=0}^{\infty} \frac{z^j}{j!} \Big( \frac{1}{2\pi} \int_{-2}^{2} t^j \sqrt{4 - t^2} \, dt \Big)
= \sum_{j=0}^{\infty} \frac{z^{2j}}{(2j)!} \Big( \frac{1}{j+1} \binom{2j}{j} \Big)
= \sum_{j=0}^{\infty} \frac{1}{j!(j+1)!} z^{2j},
\]
as desired. (The odd moments vanish by symmetry, and the even moments of the semi-circle law are the Catalan numbers $\frac{1}{j+1}\binom{2j}{j}$.)
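The moment computation above can be verified numerically: the even moments of the semi-circle density are the Catalan numbers, and the exponential integral matches the series $\sum_j z^{2j}/(j!(j+1)!)$. A small sketch (the variable names are ours):

```python
import numpy as np
from math import comb, factorial

# semicircle density (1/2pi) sqrt(4 - t^2) on [-2, 2]
ts = np.linspace(-2, 2, 200001)
dt = ts[1] - ts[0]
semi = np.sqrt(4 - ts ** 2) / (2 * np.pi)

# even moments vs Catalan numbers C_j = binom(2j, j) / (j+1)
for j in range(5):
    moment = float(np.sum(ts ** (2 * j) * semi) * dt)
    catalan = comb(2 * j, j) // (j + 1)
    print(j, round(moment, 4), catalan)

# hence (1/2pi) * integral of e^{zt} sqrt(4-t^2) equals sum_j z^{2j}/(j!(j+1)!)
z = 1.5
lhs = float(np.sum(np.exp(z * ts) * semi) * dt)
rhs = sum(z ** (2 * j) / (factorial(j) * factorial(j + 1)) for j in range(40))
print(lhs, rhs)   # both sides agree closely
```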
The strong version of Wigner's semi-circle law

Theorem. For each $n$ in $\mathbb{N}$, let $X_n$ be a random matrix in $\mathrm{GUE}(n, \frac{1}{n})$. Then there is a measurable set $S \subseteq \Omega$ with probability one, such that
\[
\mu_{X_n(\omega)} \overset{w}{\longrightarrow} \frac{1}{2\pi} \sqrt{4 - t^2} \, 1_{[-2,2]}(t) \, dt, \quad \text{as } n \to \infty,
\]
for all $\omega$ in $S$. In other words, for any interval $I$ in $\mathbb{R}$, we have
\[
\frac{1}{n} \#\big\{ j \in \{1, 2, \ldots, n\} \,\big|\, \lambda_j(X_n(\omega)) \in I \big\}
\longrightarrow \frac{1}{2\pi} \int_{I \cap [-2,2]} \sqrt{4 - t^2} \, dt,
\]
for all $\omega$ in $S$.
Sketch of Proof. Step 1 (Concentration inequality). Let $G_{N,\sigma}$ denote the Gaussian distribution on $\mathbb{R}^N$ with Lebesgue density
\[
\frac{dG_{N,\sigma}(x)}{dx} = (2\pi\sigma^2)^{-N/2} \exp\Big( -\frac{\|x\|^2}{2\sigma^2} \Big),
\]
where $\|x\|$ is the Euclidean norm of $x$. Furthermore, let $F \colon \mathbb{R}^N \to \mathbb{R}$ be a function that satisfies the Lipschitz condition
\[
|F(x) - F(y)| \le c \|x - y\|, \quad (x, y \in \mathbb{R}^N), \tag{1}
\]
for some positive constant $c$. Then for any positive number $\epsilon$, we have that
\[
G_{N,\sigma}\big( \big\{ x \in \mathbb{R}^N \,\big|\, |F(x) - E(F)| > \epsilon \big\} \big)
\le 2 \exp\Big( -\frac{K \epsilon^2}{c^2 \sigma^2} \Big),
\]
where $E(F) = \int_{\mathbb{R}^N} F(x) \, dG_{N,\sigma}(x)$, and $K = \frac{2}{\pi^2}$.
Sketch of Proof (continued). Step 2. Use (1) on the function
\[
F(A) = \mathrm{tr}_n\big( f(A) \big), \quad (A \in M_n(\mathbb{C})_{sa}),
\]
where $f \colon \mathbb{R} \to \mathbb{R}$ satisfies a Lipschitz condition:
\[
|f(y) - f(x)| \le c |x - y|, \quad (x, y \in \mathbb{R}).
\]
This yields the estimate
\[
P\big( \big| \mathrm{tr}_n(f(X_n)) - E\{ \mathrm{tr}_n(f(X_n)) \} \big| > \epsilon \big)
\le 2 \exp\Big( -\frac{n^2 K \epsilon^2}{c^2} \Big).
\]
Step 3. Use the Borel-Cantelli lemma!
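The factor $n^2$ in the exponent reflects that fluctuations of the linear statistic $\mathrm{tr}_n(f(X_n))$ are of order $1/n$. A quick simulation (our own sketch, with the 1-Lipschitz function $f(x) = |x|$ and our helper names) makes this visible:

```python
import numpy as np

def sample_gue(n, rng):
    # GUE(n, 1/n) sampler as in the definition
    s = np.sqrt(1.0 / (2 * n))
    z = rng.normal(0, s, (n, n)) + 1j * rng.normal(0, s, (n, n))
    x = np.triu(z, 1)
    x = x + x.conj().T
    x[np.diag_indices(n)] = rng.normal(0, np.sqrt(1.0 / n), n)
    return x

def trace_stat_std(n, reps, rng):
    # empirical std of tr_n(f(X_n)) = (1/n) sum_j |lambda_j| over reps samples
    vals = [np.mean(np.abs(np.linalg.eigvalsh(sample_gue(n, rng))))
            for _ in range(reps)]
    return float(np.std(vals))

rng = np.random.default_rng(0)
for n in (10, 20, 40):
    print(n, trace_stat_std(n, 300, rng))
# the printed standard deviation shrinks roughly like 1/n,
# consistent with the sub-Gaussian tail exp(-n^2 K eps^2 / c^2)
```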
A differential equation for the spectral density h_n

Proposition. For any positive integer $n$, the spectral density $h_n$ of a GUE random matrix $X_n$ satisfies the differential equation:
\[
\frac{1}{n^2} h_n'''(x) + (4 - x^2) h_n'(x) + x h_n(x) = 0.
\]
Sketch of proof. Step I. We have seen that
\[
\psi_n(z) := \int_{\mathbb{R}} \exp(zt) h_n(t) \, dt = E\big\{ \mathrm{tr}_n(\exp(zX_n)) \big\} = e^{z^2/2n} \eta_n(z^2/n),
\]
with $\eta_n \colon \mathbb{R} \to \mathbb{R}$ the function given by
\[
\eta_n(s) = \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} s^j.
\]
It is a classical result that $\eta_n$ satisfies the differential equation:
\[
s \eta_n''(s) + (2 + s) \eta_n'(s) - (n-1) \eta_n(s) = 0. \tag{2}
\]
Sketch of proof (continued). Step II. Using the substitution $s = z^2/n$ it follows from (2) that $\psi_n$ satisfies the differential equation
\[
n^2 z \psi_n''(z) + 3n^2 \psi_n'(z) - (4n^2 z + z^3) \psi_n(z) = 0.
\]
Setting $z = -iy$ for $y$ in $\mathbb{R}$, it follows that the Fourier transform $\hat{h}_n(y) = \psi_n(-iy)$ satisfies the differential equation:
\[
n^2 i y \hat{h}_n''(y) + 3n^2 i \hat{h}_n'(y) + (4n^2 i y - i y^3) \hat{h}_n(y) = 0. \tag{3}
\]
Sketch of proof (continued). Step III. Consider the co-Fourier transform $F$ on $L^1(\mathbb{R})$ defined by
\[
[Ff](y) = \int_{\mathbb{R}} \exp(iyt) f(t) \, dt, \quad (y \in \mathbb{R}),
\]
and recall that by Fourier inversion $F\hat{h}_n = (2\pi) h_n$. Applying now $F$ to (3), we obtain that $h_n$ satisfies
\[
\frac{1}{n^2} h_n'''(x) + (4 - x^2) h_n'(x) + x h_n(x) = 0,
\]
as desired.
The Harer-Zagier Recursion Formulae

Theorem. For each $n$ in $\mathbb{N}$ and $p$ in $\mathbb{N}_0$, put
\[
\gamma(p, n) = \int_{\mathbb{R}} t^{2p} h_n(t) \, dt = E\big\{ \mathrm{tr}_n(X_n^{2p}) \big\},
\]
where $X_n \in \mathrm{GUE}(n, \frac{1}{n})$. These moments satisfy the recursion formula
\[
(p + 2) \gamma(p + 1, n) = \frac{(4p^2 - 1)p}{n^2} \gamma(p - 1, n) + (4p + 2) \gamma(p, n), \tag{4}
\]
for any positive integer $p$.
Proof of the Harer-Zagier recursion formulae. Since $h_n$ satisfies the differential equation
\[
\frac{1}{n^2} h_n'''(x) + (4 - x^2) h_n'(x) + x h_n(x) = 0,
\]
it follows by multiplication by $x^{2p+1}$ and partial integration that
\[
0 = \int_{\mathbb{R}} x^{2p+1} \Big( \frac{1}{n^2} h_n'''(x) + (4 - x^2) h_n'(x) + x h_n(x) \Big) dx
= \int_{\mathbb{R}} \Big( -\frac{1}{n^2} (2p+1)(2p)(2p-1) x^{2p-2} - 4(2p+1) x^{2p} + (2p+3) x^{2p+2} + x^{2p+2} \Big) h_n(x) \, dx
= -\frac{2(4p^2 - 1)p}{n^2} \gamma(p-1, n) - 4(2p+1) \gamma(p, n) + 2(p+2) \gamma(p+1, n),
\]
from which (4) follows readily.
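The recursion determines all moments from $\gamma(0, n) = 1$ and $\gamma(1, n) = 1$ (the latter since $E\{\mathrm{tr}_n(X_n^2)\} = 1$). A sketch of the computation in exact arithmetic (the helper name `gue_moments` is ours), checked against the known closed forms for the fourth and sixth moments:

```python
from fractions import Fraction

def gue_moments(n, pmax):
    """gamma(p, n) = E{tr_n(X_n^{2p})} for GUE(n, 1/n), via the Harer-Zagier
    recursion (p+2) g(p+1) = (4p^2-1)p/n^2 g(p-1) + (4p+2) g(p)."""
    g = [Fraction(1), Fraction(1)]                    # gamma(0, n), gamma(1, n)
    for p in range(1, pmax):
        nxt = (Fraction((4 * p * p - 1) * p, n * n) * g[p - 1]
               + (4 * p + 2) * g[p]) / (p + 2)
        g.append(nxt)
    return g

n = 5
g = gue_moments(n, 3)
print(g[2], g[3])
# gamma(2, n) = 2 + 1/n^2 and gamma(3, n) = 5 + 10/n^2: the Catalan numbers
# 2 and 5 from the semi-circle law, plus corrections of order 1/n^2
```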
Convergence of largest and smallest eigenvalue for a GUE matrix

Theorem. For each positive integer $n$, let $X_n$ be a random matrix from $\mathrm{GUE}(n, \frac{1}{n})$, and let $\lambda_{\max}(X_n)$ and $\lambda_{\min}(X_n)$ denote, respectively, the largest and smallest eigenvalues of $X_n$. Then
\[
\lim_{n \to \infty} \lambda_{\max}(X_n) = 2, \quad \text{almost surely},
\]
and
\[
\lim_{n \to \infty} \lambda_{\min}(X_n) = -2, \quad \text{almost surely}.
\]
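A quick numerical illustration of the theorem (our own sketch, reusing a GUE sampler as before): already for moderately large $n$, the extreme eigenvalues of a sampled $\mathrm{GUE}(n, \frac{1}{n})$ matrix sit close to $\pm 2$.

```python
import numpy as np

def sample_gue(n, rng):
    # GUE(n, 1/n) sampler as in the definition
    s = np.sqrt(1.0 / (2 * n))
    z = rng.normal(0, s, (n, n)) + 1j * rng.normal(0, s, (n, n))
    x = np.triu(z, 1)
    x = x + x.conj().T
    x[np.diag_indices(n)] = rng.normal(0, np.sqrt(1.0 / n), n)
    return x

rng = np.random.default_rng(0)
for n in (50, 200, 800):
    lam = np.linalg.eigvalsh(sample_gue(n, rng))
    print(n, round(float(lam[0]), 3), round(float(lam[-1]), 3))
# lam[0] -> -2 and lam[-1] -> +2 as n grows; the approach is from inside
# at the known rate n^(-2/3) (Tracy-Widom), which is beyond the present theorem
```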
Proof of $\limsup_{n \to \infty} \lambda_{\max}(X_n) \le 2$ almost surely. It suffices to show that
\[
\forall \epsilon > 0 \colon \quad P\Big( \limsup_{n \to \infty} \lambda_{\max}(X_n) \le 2 + \epsilon \Big) = 1,
\]
which will follow if we show that
\[
\forall \epsilon > 0 \colon \quad P\big( \lambda_{\max}(X_n) \le 2 + \epsilon, \text{ for all sufficiently large } n \big) = 1,
\]
which in turn follows from the Borel-Cantelli Lemma if we show that
\[
\forall \epsilon > 0 \colon \quad \sum_{n=1}^{\infty} P\big( \lambda_{\max}(X_n) \ge 2 + \epsilon \big) < \infty. \tag{5}
\]
Proof of $\limsup_n \lambda_{\max}(X_n) \le 2$ (continued). So let $\epsilon > 0$ be given, and note that by Chebyshev's inequality we have, for any $t > 0$,
\[
P\big( \lambda_{\max}(X_n) \ge 2 + \epsilon \big)
= P\big( \exp(t \lambda_{\max}(X_n)) \ge \exp(t(2 + \epsilon)) \big)
\le \exp(-(2 + \epsilon)t) \, E\big\{ \exp(t \lambda_{\max}(X_n)) \big\}. \tag{6}
\]
Note here that
\[
\exp(t \lambda_{\max}(X_n)) = \lambda_{\max}(\exp(t X_n))
\le \sum_{j=1}^{n} \lambda_j(\exp(t X_n)) = n \, \mathrm{tr}_n\big( \exp(t X_n) \big),
\]
since all the eigenvalues of $\exp(t X_n)$ are positive. Thus...
\[
E\big\{ \exp(t \lambda_{\max}(X_n)) \big\}
\le n E\big\{ \mathrm{tr}_n\big( \exp(t X_n) \big) \big\}
= n \exp\Big( \frac{t^2}{2n} \Big) \sum_{j=0}^{n-1} \frac{(n-1)(n-2)\cdots(n-j)}{j!(j+1)!} \Big( \frac{t^2}{n} \Big)^j
\le n \exp\Big( \frac{t^2}{2n} \Big) \sum_{j=0}^{n-1} \frac{n^j}{j!(j+1)!} \Big( \frac{t^2}{n} \Big)^j
\le n \exp\Big( \frac{t^2}{2n} \Big) \sum_{j=0}^{\infty} \Big( \frac{t^j}{j!} \Big)^2
\le n \exp\Big( \frac{t^2}{2n} \Big) \Big( \sum_{j=0}^{\infty} \frac{t^j}{j!} \Big)^2
= n \exp\Big( \frac{t^2}{2n} + 2t \Big).
\]
Proof of $\limsup_n \lambda_{\max}(X_n) \le 2$ (continued). Comparing with (6) we conclude that
\[
P\big( \lambda_{\max}(X_n) \ge 2 + \epsilon \big)
\le \exp(-(2 + \epsilon)t) \, E\big\{ \exp(t \lambda_{\max}(X_n)) \big\}
\le n \exp(-(2 + \epsilon)t) \exp\Big( \frac{t^2}{2n} + 2t \Big)
= n \exp\Big( -\epsilon t + \frac{t^2}{2n} \Big),
\]
which holds for all $t > 0$. Putting $t = n\epsilon$ we obtain
\[
P\big( \lambda_{\max}(X_n) \ge 2 + \epsilon \big) \le n \exp\Big( -\frac{n \epsilon^2}{2} \Big),
\]
from which (5) follows immediately.
Proof of $\liminf_{n \to \infty} \lambda_{\max}(X_n) \ge 2$ almost surely. Let $\epsilon$ be a positive number. Then for almost all $\omega$ we have
\[
\#\big\{ j \in \{1, \ldots, n\} \,\big|\, \lambda_j(X_n(\omega)) \ge 2 - \epsilon \big\}
= \sum_{j=1}^{n} \delta_{\lambda_j(X_n(\omega))}\big( [2 - \epsilon, \infty) \big)
= n \, \mu_{X_n(\omega)}\big( [2 - \epsilon, \infty) \big) \underset{n \to \infty}{\longrightarrow} \infty,
\]
since, according to the strong version of Wigner's semi-circle law,
\[
\mu_{X_n(\omega)}\big( [2 - \epsilon, \infty) \big)
\underset{n \to \infty}{\longrightarrow} \frac{1}{2\pi} \int_{2-\epsilon}^{2} \sqrt{4 - t^2} \, dt > 0,
\]
for almost all $\omega$. It follows in particular that
\[
\liminf_{n \to \infty} \lambda_{\max}(X_n(\omega)) \ge 2 - \epsilon,
\]
for almost all $\omega$. Since $\epsilon > 0$ was arbitrary, $\liminf_{n \to \infty} \lambda_{\max}(X_n) \ge 2$ almost surely.