NUMERICAL CALCULATION OF RANDOM MATRIX DISTRIBUTIONS AND ORTHOGONAL POLYNOMIALS
Sheehan Olver, NA Group, Oxford
We are interested in numerically computing eigenvalue statistics of unitary ensembles such as the GUE, i.e., Hermitian matrices with the distribution (given a particular potential V)

$$\frac{1}{Z_n}\, e^{-n\,\mathrm{tr}\, V(M)}\, dM$$

First question: can we automatically calculate the global mean distribution of eigenvalues, i.e., the equilibrium measure?
Second question: how do statistics which satisfy universality laws differ when n is finite?
In short, we carry out the Riemann–Hilbert approach numerically.
Many other physical and mathematical objects have since been found to also be described by random matrix theory, including:
Quantum billiards
Random Ising model for glass
Trees in the Scandinavian forest
Resonance frequencies of structural materials
Even the nontrivial zeros of the Riemann zeta function!
OUTLINE
1. Relationship of random matrix theory and orthogonal polynomials
2. Equilibrium measures supported on a single interval: equivalence to a simple Newton iteration
3. Equilibrium measures supported on multiple intervals
4. Computation of large order orthogonal polynomials, through Riemann–Hilbert problems
5. Calculation of gap eigenvalue statistics
RELATIONSHIP OF RANDOM MATRIX THEORY WITH ORTHOGONAL POLYNOMIALS
For the Gaussian unitary ensemble (i.e., large random Hermitian matrices), gap eigenvalue statistics can be reduced to the Fredholm determinant

$$\det\left[I - K_n\big|_\Omega\right]$$

where the kernel K_n is constructed from $p_n$ and $p_{n+1}$, the orthogonal polynomials with respect to the weight $e^{-nV(x)}\,dx$.
Fredholm determinants can be easily computed numerically (Bornemann 2010), as long as we can evaluate the kernel; a quadrature sketch follows below.
Therefore, computing distributions depends only on evaluating the orthogonal polynomials.
We will construct a numerical method for computing $p_n$ and $p_{n+1}$ whose computational cost is independent of n, which requires first calculating the equilibrium measure.
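(Not from the slides: a minimal quadrature sketch in the style of Bornemann's method. All function names are my own; the rank-one test kernel is chosen only because its determinant is known in closed form.)

```python
# Nystrom approximation: det(I - K) on (a,b) is approximated by the matrix
# determinant det(I - sqrt(w_i) K(x_i,x_j) sqrt(w_j)) on Gauss-Legendre
# nodes; convergence is spectral for smooth kernels.
import numpy as np

def fredholm_det(kernel, a, b, m=40):
    t, w = np.polynomial.legendre.leggauss(m)   # nodes/weights on (-1,1)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)       # map to (a,b)
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    K = kernel(x[:, None], x[None, :])
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

# Sanity check on the rank-one kernel K(x,y) = x*y on (0,1), where
# det(I - K) = 1 - int_0^1 x^2 dx = 2/3 exactly:
print(fredholm_det(lambda x, y: x * y, 0.0, 1.0))   # 0.6666...
```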
EQUILIBRIUM MEASURES
Suppose the real line is a conductor, on which n discrete charges are placed, with total unit charge. Suppose further that an external field V is present. The equilibrium measure dµ = ψ(x)dx is the limiting distribution (weak* limit) of the charges.
[Figure: sketch of the limiting charge distribution for V(x) = x²]
(see e.g. Saff and Totik 1997)
APPLICATIONS OF THE EQUILIBRIUM MEASURE
Global mean distribution of the eigenvalues of a GUE-type matrix with distribution $\frac{1}{Z_n}\, e^{-n\,\mathrm{tr}\, V(M)}\, dM$
Distribution of near-optimal interpolation points (Fekete points)
Distribution of zeros of orthogonal polynomials with the weight $e^{-nV(x)}\,dx$; for V(x) = x² these are scaled Hermite polynomials, and in the last slide I cheated and plotted these roots
Computing orthogonal polynomials
Best rational approximation
FORMAL DEFINITION
Given an external field V : ℝ → ℝ, the equilibrium measure is the unique Borel measure dµ = ψ(x)dx such that

$$\iint \log\frac{1}{|t-s|}\,d\mu(t)\,d\mu(s) + \int V(s)\,d\mu(s)$$

is minimal.
This definition can be reduced to the following Euler–Lagrange formulation (with a Lagrange constant that we suppress throughout):

$$2\int \log\frac{1}{|x-z|}\,d\mu(x) + V(z) = \ell \quad\text{for } z \in \operatorname{supp}\mu,$$
$$2\int \log\frac{1}{|x-z|}\,d\mu(x) + V(z) \ge \ell \quad\text{for all real } z.$$

We let $g(z) = \int \log(z-x)\,d\mu(x)$, so that (where ± denote the limits from above and below)

$$g^+ + g^- = V \quad\text{and}\quad g \sim \log z.$$

Differentiating, we get

$$\phi^+ + \phi^- = V' \quad\text{and}\quad \phi \sim \frac{1}{z} \quad\text{for}\quad \phi(z) = g'(z) = \int \frac{d\mu(x)}{z-x}.$$
Now suppose we manage to find an analytic function φ such that

$$\phi^+ + \phi^- = V' \quad\text{and}\quad \phi \sim \frac{1}{z}$$

on some subset Γ of the real line. In certain cases (for example, if V is convex), there is only one Γ for which this is possible, hence Γ must be supp µ. Plemelj's lemma then tells us that we can recover dµ = ψ(x)dx via

$$\psi(x) = \frac{i}{2\pi}\left[\phi^+(x) - \phi^-(x)\right].$$
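(Illustration, not from the slides: for V(x) = x² the solution is $\phi(z) = z - \sqrt{z-\sqrt2}\,\sqrt{z+\sqrt2}$, and Plemelj's formula recovers the semicircle density. A quick numerical check:)

```python
# Check Plemelj's formula for V(x) = x^2: phi(z) = z - sqrt(z^2 - 2), with
# the branch chosen so phi ~ 1/z; psi should be the semicircle sqrt(2-x^2)/pi.
import numpy as np

def phi(z):
    # writing sqrt(z-a)*sqrt(z+a) puts the branch cut on [-a, a]
    return z - np.sqrt(z - np.sqrt(2)) * np.sqrt(z + np.sqrt(2))

x = np.linspace(-1.4, 1.4, 5)
eps = 1e-8
psi = 1j / (2 * np.pi) * (phi(x + 1j * eps) - phi(x - 1j * eps))
print(np.real(psi))
print(np.sqrt(2 - x**2) / np.pi)   # agrees closely away from the endpoints
```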
For a given Γ, we can find all solutions to

$$\phi^+ + \phi^- = f \quad\text{and}\quad \phi(z) \sim \frac{c_\Gamma}{z}.$$

The goal, then, is to choose Γ so that:
c_Γ is precisely 1
φ is bounded
This is the inverse Cauchy transform, which we denote by $P_\Gamma f$.
PROBLEM ON THE CIRCLE

$$\phi^+(z) + \phi^-(z) = f(z) \quad\text{and}\quad \phi(\infty) = 0$$

Expand f in its Fourier series:

$$f(z) = \sum_{k=-\infty}^{\infty} \hat f_k z^k$$

Then the solution splits by Fourier modes:

$$\phi^+ = \sum_{k=0}^{\infty} \hat f_k z^k \quad\text{(analytic inside the circle)}, \qquad \phi^- = \sum_{k=1}^{\infty} \hat f_{-k} z^{-k} \quad\text{(analytic outside, vanishing at }\infty\text{)}.$$
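(Sketch: this splitting is just a Fourier-mode projection, so the circle problem costs a single FFT. The test function below is my own choice.)

```python
# Solve phi+ + phi- = f on the unit circle by splitting Fourier modes.
import numpy as np

N = 64
z = np.exp(2j * np.pi * np.arange(N) / N)      # equispaced points on the circle
F = np.exp(z) + np.exp(1.0 / z)                # a test f with modes of both signs
fhat = np.fft.fft(F) / N                       # fhat[k] ~ k-th Fourier coefficient

# phi+ keeps the k >= 0 modes (analytic inside), phi- the k < 0 modes (outside):
phi_plus = lambda w: sum(fhat[k] * w**k for k in range(N // 2))
phi_minus = lambda w: sum(fhat[N - k] * w**(-k) for k in range(1, N // 2))

w0 = np.exp(0.3j)                              # a point on the circle
print(phi_plus(w0) + phi_minus(w0))            # matches f(w0):
print(np.exp(w0) + np.exp(1.0 / w0))
```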
PROBLEM ON THE UNIT INTERVAL
Consider the Joukowski map from the unit circle to the unit interval:

$$J(z) = \frac{1}{2}\left(z + \frac{1}{z}\right)$$

Functions analytic inside and outside the unit circle are mapped to functions analytic off the unit interval.
We define four inverses to the Joukowski map:

$$J_+(x) = x - \sqrt{x-1}\,\sqrt{x+1}, \qquad J_-(x) = x + \sqrt{x-1}\,\sqrt{x+1},$$
$$J_\uparrow(x) = x + i\sqrt{1-x}\,\sqrt{1+x}, \qquad J_\downarrow(x) = x - i\sqrt{1-x}\,\sqrt{1+x},$$

where $J_+$ maps off the interval into the unit disk, $J_-$ maps outside the disk, and $J_\uparrow$, $J_\downarrow$ are the limits on the interval from above and below.
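(A sketch of these branches in code, under my reading that $J_+$ is the branch mapping into the unit disk:)

```python
# The Joukowski map and its inverse branches; writing sqrt(z-1)*sqrt(z+1)
# (rather than sqrt(z^2-1)) places the branch cut exactly on [-1,1].
import numpy as np

J = lambda w: 0.5 * (w + 1.0 / w)
J_plus = lambda z: z - np.sqrt(z - 1) * np.sqrt(z + 1)    # into the unit disk
J_minus = lambda z: z + np.sqrt(z - 1) * np.sqrt(z + 1)   # outside the disk

z = 2.0 + 1.5j
print(J(J_plus(z)), J(J_minus(z)))              # both reproduce z
print(abs(J_plus(z)) < 1 < abs(J_minus(z)))     # True
```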
Suppose we compute the inverse Cauchy transform of the mapped function on the unit circle:

$$\phi = P\left[f(J(\,\cdot\,))\right]$$

Then the following function satisfies the necessary jump:

$$\Phi(z) = \frac{1}{2}\left[\phi(J_+(z)) + \phi(J_-(z))\right],$$

since on the interval the boundary values pair up into the circle jump:

$$\Phi^+(x) + \Phi^-(x) = \frac{1}{2}\left[\phi^+ + \phi^-\right](J_\uparrow(x)) + \frac{1}{2}\left[\phi^+ + \phi^-\right](J_\downarrow(x)) = f(x).$$
The problem:

$$\Phi(\infty) = \frac{1}{2}\left[\phi(0) + \phi(\infty)\right] = \frac{\hat f_0 + 0}{2} = \frac{\hat f_0}{2} \neq 0$$

Fortunately, $\kappa(z) = \dfrac{1}{\sqrt{z-1}\,\sqrt{z+1}}$ has no jump on the contour:

$$\kappa^+ + \kappa^- = 0 \quad\text{and}\quad \kappa(z) \sim \frac{1}{z}$$

Therefore, we find that

$$P_\xi f(z) = \frac{1}{2}\left[\phi(J_+(z)) + \phi(J_-(z))\right] - \frac{\hat f_0}{2}\, z\,\kappa(z) + \xi\,\kappa(z),$$

where ξ is now a free parameter.
Now suppose f is sufficiently smooth, so that it can be expanded in a Chebyshev series (in O(n log n) time using the DCT):

$$f(x) = \sum_{k=0}^{\infty} \check f_k\, T_k(x)$$

Then

$$f(J(z)) = \check f_0 + \frac{1}{2}\sum_{k=1}^{\infty} \check f_k \left(z^k + z^{-k}\right)$$

It follows that (a numerically stable, uniformly convergent expression)

$$P_\xi f(z) = \frac{1}{2}\sum_{k=0}^{\infty} \check f_k\, J_+(z)^k - \frac{\check f_0}{2}\, z\,\kappa(z) + \xi\,\kappa(z)$$
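(A sketch of evaluating $P_\xi f$ this way; the direct cosine sum here stands in for a DCT, and all names are mine:)

```python
# Chebyshev coefficients by a cosine transform, then evaluate
#   P_xi f(z) = 1/2 sum_k c_k J_+(z)^k - c_0/2 z kappa(z) + xi kappa(z).
import numpy as np

def cheb_coeffs(f, N):
    """First N Chebyshev coefficients of f on [-1,1] (first-kind points);
    in practice one would use a DCT for O(N log N)."""
    th = (np.arange(N) + 0.5) * np.pi / N
    c = 2.0 / N * np.cos(np.outer(np.arange(N), th)) @ f(np.cos(th))
    c[0] *= 0.5
    return c

J_plus = lambda z: z - np.sqrt(z - 1) * np.sqrt(z + 1)
kappa = lambda z: 1.0 / (np.sqrt(z - 1) * np.sqrt(z + 1))

def P(f, z, xi=0.0, N=32):
    c = cheb_coeffs(f, N)
    s = 0.5 * sum(c[k] * J_plus(z) ** k for k in range(N))
    return s - 0.5 * c[0] * z * kappa(z) + xi * kappa(z)

# Check the jump phi+ + phi- = f at x = 0.3:
f = lambda x: np.exp(x)
eps = 1e-7
print(P(f, 0.3 + 1j * eps) + P(f, 0.3 - 1j * eps), f(0.3))
```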
OTHER INTERVALS

$$M_{(a,b)}(z) = \frac{2z - a - b}{b - a}$$

maps the interval (a, b) to the unit interval. Let $\check f_{(a,b),k}$ be the Chebyshev coefficients over (a, b), so that

$$f(z) = \sum_{k=0}^{\infty} \check f_{(a,b),k}\, T_k(M_{(a,b)}(z))$$

Then

$$P_{(a,b),\xi} f(z) = \frac{1}{2}\sum_{k=0}^{\infty} \check f_{(a,b),k}\, J_+(M_{(a,b)}(z))^k - \frac{\check f_{(a,b),0}}{2}\, M_{(a,b)}(z)\,\kappa(M_{(a,b)}(z)) + \xi\,\kappa(M_{(a,b)}(z))$$
Let f = V′. The only way the solution is bounded is if ξ = 0 and $\check f_{(a,b),0} = 0$.
Since we require $P_{(a,b)} f(z) \sim \frac{1}{z}$, and

$$J_+(z) = \frac{1}{2z} + O\!\left(z^{-2}\right) \quad\text{and}\quad M_{(a,b)}(z) = \frac{2z}{b-a} + O(1),$$

we obtain the additional condition

$$\frac{b-a}{8}\,\check f_{(a,b),1} = 1$$

(equivalent to the standard conditions in terms of moments).
We thus have two unknowns, a and b, with two conditions:

$$\check f_{(a,b),0} = 0 \quad\text{and}\quad \frac{b-a}{8}\,\check f_{(a,b),1} = 1$$

The Chebyshev coefficients can be computed efficiently using the FFT. Moreover, each of these equations is differentiable with respect to a and b. Thus we can set up a trivial Newton iteration to determine a and b (see the sketch below). This iteration is guaranteed to converge to supp µ whenever V is convex or, if supp µ is a single interval, whenever the initial guesses for a and b are sufficiently accurate. We then recover the equilibrium measure using the formula

$$d\mu = \frac{\sqrt{1 - M_{(a,b)}(x)^2}}{2\pi} \sum_{k=1}^{\infty} \check f_k\, U_{k-1}(M_{(a,b)}(x))\, dx$$
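(A minimal sketch of this Newton iteration, assuming the two conditions above; for V(x) = x² the exact support is $(-\sqrt2, \sqrt2)$, which the iteration reproduces. All names are mine.)

```python
# Newton iteration for the support (a,b): solve
#   c0(a,b) = 0  and  (b-a)/8 * c1(a,b) - 1 = 0,
# where c_k are Chebyshev coefficients of V'(x) mapped to (a,b).
import numpy as np

def cheb_coeffs(f, a, b, N=32):
    th = (np.arange(N) + 0.5) * np.pi / N
    x = 0.5 * (b - a) * np.cos(th) + 0.5 * (a + b)
    c = 2.0 / N * np.cos(np.outer(np.arange(N), th)) @ f(x)
    c[0] *= 0.5
    return c

def conditions(ab, dV):
    a, b = ab
    c = cheb_coeffs(dV, a, b)
    return np.array([c[0], (b - a) / 8 * c[1] - 1])

def newton(dV, ab, tol=1e-12):
    for _ in range(50):
        F = conditions(ab, dV)
        if np.linalg.norm(F) < tol:
            break
        h = 1e-7   # finite-difference 2x2 Jacobian
        Jac = np.column_stack([(conditions(ab + h * e, dV) - F) / h
                               for e in np.eye(2)])
        ab = ab - np.linalg.solve(Jac, F)
    return ab

dV = lambda x: 2 * x                       # V(x) = x^2
print(newton(dV, np.array([-1.0, 1.0])))   # -> [-sqrt(2), sqrt(2)]
```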
[Figure: computed equilibrium measures dµ for V(x) = x² and for an exponential-type potential]
[Figure: computed equilibrium measures dµ for a quartic potential and for an oscillatory potential involving sin 3x]
[Diagram: g satisfies g⁺ + g⁻ = V on supp µ, while g⁺ + g⁻ − V ≠ 0 off the support; g′ = Pf]
To compute g itself, we integrate the series for φ term-by-term, using the antiderivatives

$$\int \kappa_\Sigma(z)\,dz = \frac{a-b}{2}\,\log J_+(M_\Sigma(z)),$$
$$\int M_\Sigma(z)\,\kappa_\Sigma(z)\,dz = \frac{b-a}{2}\left[M_\Sigma(z) - J_+(M_\Sigma(z))\right],$$
$$\int J_+(M_\Sigma(z))\,dz = \frac{b-a}{4}\left[\frac{J_+(M_\Sigma(z))^2}{2} - \log J_+(M_\Sigma(z))\right],$$
$$\int J_+(M_\Sigma(z))^k\,dz = \frac{b-a}{4}\left[\frac{J_+(M_\Sigma(z))^{k+1}}{k+1} - \frac{J_+(M_\Sigma(z))^{k-1}}{k-1}\right] \quad (k \ge 2),$$

with the branches on $(-\infty, b)$ and $(a, b)$ chosen via $J_\uparrow(M_\Sigma(x))$ and $J_\downarrow(M_\Sigma(x))$ so that

$$g(z) - \log z = O\!\left(z^{-1}\right).$$
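(A finite-difference spot check, mine, of the k ≥ 2 antiderivative above:)

```python
# Verify d/dz (b-a)/4 [ w^{k+1}/(k+1) - w^{k-1}/(k-1) ] = w^k,
# where w = J_plus(M(z)), by a central difference.
import numpy as np

a, b, k = -1.5, 2.0, 3
M = lambda z: (2 * z - a - b) / (b - a)
J_plus = lambda z: z - np.sqrt(z - 1) * np.sqrt(z + 1)

def antideriv(z):
    w = J_plus(M(z))
    return (b - a) / 4 * (w ** (k + 1) / (k + 1) - w ** (k - 1) / (k - 1))

z, h = 3.0 + 1.0j, 1e-6
print((antideriv(z + h) - antideriv(z - h)) / (2 * h))   # central difference
print(J_plus(M(z)) ** k)                                 # agree to ~h^2
```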
[Figure: g⁺ + g⁻ − V plotted for the quartic and oscillatory potentials above]
MULTIPLE INTERVALS
For Γ = Γ₁ ∪ Γ₂, decompose the inverse Cauchy transform as

$$P_\Gamma f = P_{\Gamma_1} r_1 + P_{\Gamma_2} r_2$$

for unknown functions r₁ and r₂ supported on Γ₁ and Γ₂.
Expand each $r_\lambda$ in a Chebyshev series,

$$P_{\Gamma_\lambda} r_\lambda(z) = \sum_k \check r_{\lambda,k}\, P_{\Gamma_\lambda}\!\left[T_k(M_{\Gamma_\lambda}(\,\cdot\,))\right](z),$$

and collocate at Chebyshev points $x^{\Gamma_\lambda}_1, \dots, x^{\Gamma_\lambda}_N$ on each interval:

$$r_1(x^{\Gamma_1}_j) + \left[P^+_{\Gamma_2} + P^-_{\Gamma_2}\right] r_2(x^{\Gamma_1}_j) = f(x^{\Gamma_1}_j),$$
$$\left[P^+_{\Gamma_1} + P^-_{\Gamma_1}\right] r_1(x^{\Gamma_2}_j) + r_2(x^{\Gamma_2}_j) = f(x^{\Gamma_2}_j).$$

This yields a linear system for the coefficients $\check r_{\lambda,k}$.
As before, boundedness at the endpoints requires $\check r_{1,0} = \check r_{2,0} = 0$, and the decay $\phi \sim \frac{1}{z}$ requires, for $\Gamma_1 = (a_1, b_1)$ and $\Gamma_2 = (a_2, b_2)$,

$$\frac{b_1 - a_1}{8}\,\check r_{1,1} + \frac{b_2 - a_2}{8}\,\check r_{2,1} = 1.$$

One more condition is needed to determine the four endpoints: with $g' = Pf$, the condition $g^+ + g^- = V$ must hold with the same constant on both Γ₁ and Γ₂, i.e.

$$g^+(b_1) + g^-(b_1) - V(b_1) = g^+(a_2) + g^-(a_2) - V(a_2).$$
[Figure: a multiple-interval example with potential V(x) = α(−3+x)(−2+x)(−1+x)(2+x)(3+x)(−1+2x), showing the computed equilibrium measure as α varies]
ORTHOGONAL POLYNOMIALS
Once we have computed the equilibrium measure, we can set up the Riemann–Hilbert problem for the orthogonal polynomials.
RH problems can be solved numerically (as described last week).
We need to write the RH problem so that the solution is smooth (no singularities) along each contour, and localized around stationary points.
This will be accurate in the asymptotic regime: arbitrarily large n.
The key result: we can systematically compute orthogonal polynomials of arbitrarily large order with respect to general weights.
We could also use this to compute the nodes and weights of Gaussian quadrature rules with respect to general weights in O(n) time.
The unaltered Riemann–Hilbert problem for orthogonal polynomials is

$$\Phi^+ = \Phi^- \begin{pmatrix} 1 & e^{-nV} \\ 0 & 1 \end{pmatrix} \quad\text{and}\quad \Phi \sim \left(I + O\!\left(\tfrac{1}{z}\right)\right) \begin{pmatrix} z^n & \\ & z^{-n} \end{pmatrix}.$$

We know how to compute g such that $g(z) \sim \log z$ and $g^+ + g^- = V$ on supp dµ. We thus let

$$\Phi = \Psi \begin{pmatrix} e^{ng} & \\ & e^{-ng} \end{pmatrix}, \quad\text{where}\quad \Psi \to I$$

and

$$\Psi^+ = \Psi^- \begin{pmatrix} e^{ng^-} & \\ & e^{-ng^-} \end{pmatrix} \begin{pmatrix} 1 & e^{-nV} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} e^{-ng^+} & \\ & e^{ng^+} \end{pmatrix} = \Psi^- \begin{pmatrix} e^{n(g^- - g^+)} & e^{n(g^+ + g^- - V)} \\ 0 & e^{n(g^+ - g^-)} \end{pmatrix}$$

(based on Deift 1999)
From the construction of g, the jump matrix

$$\begin{pmatrix} e^{n(g^- - g^+)} & e^{n(g^+ + g^- - V)} \\ 0 & e^{n(g^+ - g^-)} \end{pmatrix}$$

has the following nice properties:
Off of supp µ, it has the form $\begin{pmatrix} 1 & e^{n(g^+ + g^- - V)} \\ 0 & 1 \end{pmatrix}$, which decays exponentially to the identity (g grows only logarithmically at infinity, so V dominates).
On supp µ, it has the form

$$\begin{pmatrix} e^{n(g^- - g^+)} & 1 \\ 0 & e^{n(g^+ - g^-)} \end{pmatrix} = L^- \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} L^+ \quad\text{for}\quad L^\pm = \begin{pmatrix} 1 & 0 \\ e^{n(V - 2g^\pm)} & 1 \end{pmatrix}$$

(based on Deift 1999)
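(A quick numerical verification, mine, that this factorization holds whenever $g^+ + g^- = V$:)

```python
# Check of the lens factorization on supp(mu):
#   [[e^{n(g- - g+)}, 1], [0, e^{n(g+ - g-)}]]
#     = [[1,0],[e^{n(V-2g-)},1]] @ [[0,1],[-1,0]] @ [[1,0],[e^{n(V-2g+)},1]]
import numpy as np

rng = np.random.default_rng(0)
n = 4
gp = rng.standard_normal() + 1j * rng.standard_normal()   # g+ (arbitrary)
gm = rng.standard_normal() + 1j * rng.standard_normal()   # g-
V = gp + gm                                               # enforce g+ + g- = V

jump = np.array([[np.exp(n * (gm - gp)), 1], [0, np.exp(n * (gp - gm))]])
Lm = np.array([[1, 0], [np.exp(n * (V - 2 * gm)), 1]])
Lp = np.array([[1, 0], [np.exp(n * (V - 2 * gp)), 1]])
mid = np.array([[0, 1], [-1, 0]])
print(np.allclose(jump, Lm @ mid @ Lp))   # True
```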
We thus want to solve the RH problem (where all the jumps are computable numerically!) with jump matrices

$$\begin{pmatrix} 1 & 0 \\ e^{n(V - 2g)} & 1 \end{pmatrix} \text{ on the lens boundaries above and below supp µ,} \qquad \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \text{ on supp µ,}$$
$$\begin{pmatrix} 1 & e^{n(g^+ + g^- - V)} \\ 0 & 1 \end{pmatrix} \text{ on the real axis off supp µ}$$

(based on Deift 1999)
[Diagrams: the deformed contours and jump matrices for the numerically solved RH problem, with the lens jumps $e^{n(V-2g)}$ decaying away from supp µ, and P⁺ = P⁻ G on each contour with P(∞) = I]
RANDOM MATRIX DISTRIBUTIONS
We want to compute gap probabilities. These are expressible as

$$\det\left[I - K_n\big|_\Omega\right]$$

where (Φ is still the solution of the RH problem)

$$K_n(x,y) = \frac{e^{-\frac{n}{2}\left[V(x) + V(y)\right]}}{2\pi i\,(x-y)} \left[\Phi_1(x)\,\Phi_2(y) - \Phi_1(y)\,\Phi_2(x)\right].$$
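(Sketch of assembling $K_n$ from the RH solution. Φ₁ and Φ₂ below are placeholder stand-ins, since the true entries come from the numerical RH solve, and the diagonal limit is handled by nudging off x = y:)

```python
# Forming K_n(x,y) from (stand-in) RH solution entries; with the true Phi
# the combination below is real-valued.
import numpy as np

n = 4
V = lambda x: x**2
p_n = np.polynomial.hermite.Hermite.basis(n)        # stand-in for Phi_1
p_nm1 = np.polynomial.hermite.Hermite.basis(n - 1)  # stand-in for Phi_2
Phi1 = lambda x: p_n(np.sqrt(n) * x)
Phi2 = lambda x: p_nm1(np.sqrt(n) * x)

def K(x, y, h=1e-6):
    """K_n(x,y); the removable singularity at x = y is avoided by a nudge."""
    if abs(x - y) < h:
        y = x + h
    pref = np.exp(-0.5 * n * (V(x) + V(y))) / (2j * np.pi * (x - y))
    return pref * (Phi1(x) * Phi2(y) - Phi1(y) * Phi2(x))

print(K(0.1, 0.3), K(0.2, 0.2))
```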
ALGORITHM
Input: potential V, dimension of random matrix n, gap interval Ω
Output: probability that there are no eigenvalues in Ω
Step 1: Compute the equilibrium measure using the Newton iteration
Step 2: Construct and solve the orthogonal polynomial RH problem numerically
Step 3: Use Bornemann's Fredholm determinant solver
UNIVERSALITY
Let

$$\Omega = a + \frac{(-s, s)}{K_n(a,a)}.$$

Then it is known that, for a inside the support of the equilibrium measure and any polynomial potential, the Fredholm determinant converges as n → ∞ to the Fredholm determinant over (−s, s) with the sine kernel

$$K(x,y) = \frac{\sin \pi(x-y)}{\pi(x-y)}.$$

However, for finite n, the distributions vary (see the computation below).
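(The limiting sine-kernel determinant is easy to tabulate with the same quadrature discretization as before; a self-contained sketch, names mine:)

```python
# Sine-kernel gap probability det(I - K_sine) on (-s, s): the universal
# bulk limit that the finite-n determinants converge to.
import numpy as np

def sine_gap(s, m=40):
    t, w = np.polynomial.legendre.leggauss(m)
    x, w = s * t, s * w
    sw = np.sqrt(w)
    K = np.sinc(x[:, None] - x[None, :])   # np.sinc(t) = sin(pi t)/(pi t)
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

for s in (0.25, 0.5, 1.0):
    print(s, sine_gap(s))
```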
[Figure: computed gap statistics for n = 30, 50, … compared with the sine-kernel limit, for V(x) = x², V(x) = x⁴, and the oscillatory potential above]
Potential of the numerical RH approach for random matrix theory:
Discovery of unknown universality laws?
Inverse problem: determining information about the potential V from observed data?
Can we determine which random matrix ensemble generates the zeros of the Riemann zeta function?
Modelling physical systems?