NUMERICAL CALCULATION OF RANDOM MATRIX DISTRIBUTIONS AND ORTHOGONAL POLYNOMIALS. Sheehan Olver, NA Group, Oxford

We are interested in numerically computing eigenvalue statistics of GUE-type (unitary invariant) ensembles, i.e., Hermitian matrices with the distribution (given a particular potential V)

$$\frac{1}{Z_n} e^{-n \operatorname{tr} V(M)}\, dM.$$

First question: can we automatically calculate the global mean distribution of the eigenvalues, i.e., the equilibrium measure? Second question: how do statistics which satisfy universality laws differ when n is finite? In short, we carry out the Riemann-Hilbert approach numerically.

Many other physical and mathematical objects have since been found to also be described by random matrix theory, including: quantum billiards, the random Ising model for glass, trees in the Scandinavian forest, resonance frequencies of structural materials, and even the nontrivial zeros of the Riemann zeta function!

OUTLINE
1. Relationship of random matrix theory and orthogonal polynomials
2. Equilibrium measures supported on a single interval: equivalence to a simple Newton iteration
3. Equilibrium measures supported on multiple intervals
4. Computation of large order orthogonal polynomials, through Riemann-Hilbert problems
5. Calculation of gap eigenvalue statistics

Relationship of random matrix theory with orthogonal polynomials. For the Gaussian unitary ensemble (i.e., large random Hermitian matrices), gap eigenvalue statistics can be reduced to the Fredholm determinant $\det[I + K_n|_\Omega]$, where the kernel $K_n$ is constructed from $p_n$ and $p_{n+1}$, the orthogonal polynomials with respect to the weight $e^{-nV(x)}\,dx$. Fredholm determinants can be easily computed numerically (Bornemann 2010), as long as we can evaluate the kernel. Therefore, computing distributions depends only on evaluating the orthogonal polynomials. We will construct a numerical method for computing $p_n$ and $p_{n+1}$ whose computational cost is independent of n, which requires first calculating the equilibrium measure.
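Bornemann's method reduces a Fredholm determinant with a smooth kernel to an ordinary determinant over quadrature nodes. Below is a minimal numpy sketch of that idea; the function name fredholm_det is mine, the sine kernel from the universality discussion later in the talk is used as a stand-in, and the sketch follows the standard det(I - K) sign convention for the gap probability (the talk's det[I + K_n] refers to its own kernel normalization).

```python
import numpy as np

def fredholm_det(kernel, a, b, m=100):
    """Approximate det(I - K) on (a, b): discretize the operator as the
    m x m matrix sqrt(w_i) K(x_i, x_j) sqrt(w_j) over Gauss-Legendre nodes."""
    x, w = np.polynomial.legendre.leggauss(m)   # nodes and weights on (-1, 1)
    x = 0.5 * (b - a) * x + 0.5 * (a + b)       # map nodes to (a, b)
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    K = kernel(x[:, None], x[None, :])
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

# Example: gap probability for the sine kernel on (-0.5, 0.5)
sine = lambda x, y: np.sinc(x - y)              # sin(pi t) / (pi t)
print(fredholm_det(sine, -0.5, 0.5))
```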

EQUILIBRIUM MEASURES. Suppose the real line is a conductor, on which n discrete charges are placed, with total unit charge. Suppose further that an external field V is present. The equilibrium measure $d\mu = \psi(x)\,dx$ is the limiting distribution (weak* limit) of the charges as $n \to \infty$. On the right is a sketch for $V(x) = x^2$. (See, e.g., Saff and Totik 1997.)

Applications of the equilibrium measure:
- Global mean distribution of the eigenvalues of a matrix with distribution $\frac{1}{Z_n} e^{-n \operatorname{tr} V(M)}\, dM$
- Distribution of near-optimal interpolation points (Fekete points)
- Distribution of zeros of orthogonal polynomials with the weight $e^{-nV(x)}\,dx$; for $V(x) = x^2$ these are scaled Hermite polynomials (in the last slide, I cheated and plotted these roots)
- Computing orthogonal polynomials
- Best rational approximation

FORMAL DEFINITION. Given an external field $V : \mathbb{R} \to \mathbb{R}$, the equilibrium measure is the unique Borel probability measure $d\mu = \psi(x)\,dx$ such that

$$\iint \log\frac{1}{|t - s|}\,d\mu(t)\,d\mu(s) + \int V(s)\,d\mu(s)$$

is minimal.

This definition can be reduced to the following Euler-Lagrange formulation: for some constant $\ell$ (the Lagrange multiplier),

$$2\int \log\frac{1}{|x - z|}\,d\mu(x) + V(z) = \ell \quad \text{for } z \in \operatorname{supp}\mu,$$

$$2\int \log\frac{1}{|x - z|}\,d\mu(x) + V(z) \geq \ell \quad \text{for all real } z.$$

We let $g(z) = \int \log(z - x)\,d\mu(x)$, so that (where $\pm$ denote the limits from above and below) $g^+ + g^- = V - \ell$ on $\operatorname{supp}\mu$ and $g(z) \sim \log z$. Differentiating removes the constant:

$$\phi^+ + \phi^- = V' \quad \text{and} \quad \phi(z) \sim \frac{1}{z}, \qquad \text{for } \phi(z) = g'(z) = \int \frac{d\mu(x)}{z - x}.$$

Now suppose we manage to find an analytic function $\phi$ such that $\phi^+ + \phi^- = V'$ and $\phi(z) \sim 1/z$, with the jump on some subset $\Gamma$ of the real line. In certain cases (for example, if V is convex), there is only one $\Gamma$ for which this is possible, hence $\Gamma$ must be $\operatorname{supp}\mu$. Plemelj's lemma then tells us that we can recover $d\mu = \psi(x)\,dx$ via

$$\psi(x) = \frac{i}{2\pi}\left(\phi^+(x) - \phi^-(x)\right).$$
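As a concrete check (a standard worked example, not from the slide): for the GUE potential $V(x) = x^2$, take $\phi(z) = z - \sqrt{z^2 - 2}$ on $\Gamma = (-\sqrt{2}, \sqrt{2})$, and Plemelj recovers Wigner's semicircle law:

```latex
\phi^\pm(x) = x \mp i\sqrt{2 - x^2}
\;\Longrightarrow\;
\phi^+ + \phi^- = 2x = V'(x), \qquad
\psi(x) = \frac{i}{2\pi}\bigl(\phi^+ - \phi^-\bigr) = \frac{\sqrt{2 - x^2}}{\pi}.
```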

For a given $\Gamma$, we can find all solutions to $\phi^+ + \phi^- = f$ with $\phi(z) \sim c_\Gamma/z$ (in our application, $f = V'$). The goal, then, is to choose $\Gamma$ so that: $c_\Gamma$ is precisely 1, and $\phi$ is bounded. This is the inverse Cauchy transform, which we denote by $\mathcal{P}_\Gamma f$.

PROBLEM ON THE CIRCLE. Find $\phi$, analytic off the unit circle, such that

$$\phi^+(z) + \phi^-(z) = f(z) \quad \text{and} \quad \phi(\infty) = 0, \qquad \text{where } f(z) = \sum_{k=-\infty}^{\infty} \hat f_k z^k.$$

PROBLEM ON THE CIRCLE. The solution splits the Laurent series of f:

$$\phi^+ = \sum_{k=0}^{\infty} \hat f_k z^k \quad \text{(analytic inside)}, \qquad \phi^- = \sum_{k=-\infty}^{-1} \hat f_k z^k \quad \text{(analytic outside, vanishing at } \infty).$$
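A quick numerical illustration of this splitting (an illustrative sketch; the test function and all names are mine): sample f on the circle, read off the Laurent coefficients with the FFT, and sum each half of the series.

```python
import numpy as np

m = 64
z_circle = np.exp(2j * np.pi * np.arange(m) / m)
f = lambda z: 1/(z - 2) + 1/(z - 0.5)          # analytic in an annulus around |z| = 1
fhat = np.fft.fft(f(z_circle)) / m             # Laurent coefficients (indices mod m)

phi_plus  = lambda z: sum(fhat[k] * z**k for k in range(m // 2))         # k >= 0
phi_minus = lambda z: sum(fhat[-k] * z**(-k) for k in range(1, m // 2))  # k <= -1

z = np.exp(0.3j)                               # a point on the circle
print(phi_plus(z) + phi_minus(z), f(z))        # the two values agree
```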

PROBLEM ON THE UNIT INTERVAL. Consider the Joukowski map from the unit circle to the unit interval:

$$J(z) = \frac{1}{2}\left(z + \frac{1}{z}\right).$$

Functions analytic inside and outside the unit circle are mapped to functions analytic off the unit interval.

We define four inverses to the Joukowski map:

$$J_+(x) = x + \sqrt{x - 1}\,\sqrt{x + 1}, \qquad J_-(x) = x - \sqrt{x - 1}\,\sqrt{x + 1},$$

$$J_\uparrow(x) = x + i\sqrt{1 - x}\,\sqrt{1 + x}, \qquad J_\downarrow(x) = x - i\sqrt{1 - x}\,\sqrt{1 + x},$$

mapping to the exterior of the disk, the interior of the disk, and the upper and lower halves of the circle, respectively.
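A short numerical check (mine, not from the slides) that these branches really invert J:

```python
import numpy as np

J = lambda z: 0.5 * (z + 1/z)

z = 0.7 + 0.4j                                 # a point off the unit interval
r = np.sqrt(z - 1) * np.sqrt(z + 1)
print(J(z + r), J(z - r))                      # J_+ and J_-: both map back to z

x = 0.3                                        # a point on the unit interval
s = 1j * np.sqrt(1 - x) * np.sqrt(1 + x)
print(J(x + s), J(x - s))                      # J_up and J_down: both map back to x
```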

Suppose we compute the inverse Cauchy transform of the mapped function on the unit circle, $\phi = \mathcal{P}[f(J(\cdot))]$. Then

$$\frac{1}{2}\left[\phi(J_+(z)) + \phi(J_-(z))\right]$$

satisfies the necessary jump: as z approaches $x \in (-1, 1)$ from above and below, $J_+(z)$ and $J_-(z)$ approach $J_\uparrow(x)$ and $J_\downarrow(x)$ from opposite sides of the circle, so the boundary values sum to $\frac{1}{2}\left[f(J(J_\uparrow(x))) + f(J(J_\downarrow(x)))\right] = f(x)$.

The problem: at infinity this candidate does not vanish, since

$$\frac{1}{2}\left[\phi(J_+(\infty)) + \phi(J_-(\infty))\right] = \frac{1}{2}\left[\phi(\infty) + \phi(0)\right] = \frac{0 + \hat f_0}{2} = \frac{\hat f_0}{2} \neq 0.$$

Fortunately, $\kappa(z) = \frac{1}{\sqrt{z - 1}\,\sqrt{z + 1}}$ has no jump on the contour: $\kappa^+ + \kappa^- = 0$, and $\kappa(z) \sim \frac{1}{z}$. Therefore, we find that

$$\mathcal{P}_\xi f(z) = \frac{1}{2}\left[\phi(J_+(z)) + \phi(J_-(z))\right] - \frac{\hat f_0}{2}\, z\kappa(z) + \xi\kappa(z),$$

where $\xi$ is now a free parameter.

Now suppose f is sufficiently smooth, so that it can be expanded into a Chebyshev series (in O(n log n) time using the DCT):

$$f(x) = \sum_{k=0}^{\infty} \check f_k T_k(x).$$

Then $f(J(z)) = \check f_0 + \frac{1}{2}\sum_{k=1}^{\infty} \check f_k \left(z^k + z^{-k}\right)$. It follows that (a numerically stable, uniformly convergent expression)

$$\mathcal{P}_\xi f(z) = \frac{1}{2}\sum_{k=0}^{\infty} \check f_k\, J_+(z)^{-k} - \frac{\check f_0}{2}\, z\kappa(z) + \xi\kappa(z).$$
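Putting the pieces together, here is a minimal numpy sketch of $\mathcal{P}_\xi$ on the unit interval (the helper names cheb_coeffs and inverse_cauchy are mine; the DCT is spelled out as an FFT of the even extension of the samples):

```python
import numpy as np

def cheb_coeffs(f, n):
    """First n Chebyshev coefficients of f on [-1, 1], from samples
    at the Chebyshev-Lobatto points via an FFT of the even extension."""
    x = np.cos(np.pi * np.arange(n) / (n - 1))
    v = f(x)
    c = np.fft.fft(np.r_[v, v[-2:0:-1]]).real / (n - 1)
    c[0] /= 2
    c[n - 1] /= 2
    return c[:n]

J_plus = lambda z: z + np.sqrt(z - 1) * np.sqrt(z + 1)   # exterior inverse of J
kappa  = lambda z: 1 / (np.sqrt(z - 1) * np.sqrt(z + 1))

def inverse_cauchy(f, z, xi=0.0, n=64):
    """Evaluate P_xi f(z) for z off [-1, 1] using the series above."""
    c = cheb_coeffs(f, n)
    Jp = J_plus(z)
    series = 0.5 * sum(ck * Jp**(-k) for k, ck in enumerate(c))
    return series - 0.5 * c[0] * z * kappa(z) + xi * kappa(z)

# Example: f(x) = 2x, so the solution is bounded with xi = 0, and
# P f(z) = J_+(z)^{-1} = z - sqrt(z^2 - 1); at z = 3 this is 3 - sqrt(8).
print(inverse_cauchy(lambda x: 2 * x, 3.0 + 0j))
```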

OTHER INTERVALS.

$$M_{(a,b)}(z) = \frac{2z - a - b}{b - a}$$

maps the interval (a, b) to the unit interval. Let $\check f_{(a,b),k}$ be the Chebyshev coefficients over (a, b), so that $f(z) = \sum_{k=0}^{\infty} \check f_{(a,b),k}\, T_k(M_{(a,b)}(z))$. Then

$$\mathcal{P}_{(a,b),\xi} f(z) = \frac{1}{2}\sum_{k=0}^{\infty} \check f_{(a,b),k}\, J_+(M_{(a,b)}(z))^{-k} - \frac{\check f_{(a,b),0}}{2}\, M_{(a,b)}(z)\,\kappa(M_{(a,b)}(z)) + \xi\,\kappa(M_{(a,b)}(z)).$$

Let $f = V'$. The only way the solution is bounded is if $\xi = 0$ and $\check f_{(a,b),0} = 0$. Since we require $\mathcal{P}_{(a,b)} f(z) \sim \frac{1}{z}$, and

$$J_+(z) \sim 2z + O(z^{-1}) \quad \text{and} \quad M_{(a,b)}(z) \sim \frac{2z}{b - a} + O(1),$$

we also need

$$\frac{b - a}{8}\,\check f_{(a,b),1} = 1$$

(equivalent to the standard conditions in terms of moments).
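Sanity check (mine): for $V(x) = x^2$ on a symmetric candidate interval $(-b, b)$, we have $V'(x) = 2x = 2b\,T_1(M_{(-b,b)}(x))$, so the two conditions become

```latex
\check f_{(-b,b),0} = 0, \qquad
\frac{b - (-b)}{8}\,\check f_{(-b,b),1} = \frac{2b}{8}\cdot 2b = \frac{b^2}{2} = 1
\;\Longrightarrow\; b = \sqrt{2},
```

recovering the semicircle support $[-\sqrt{2}, \sqrt{2}]$ found earlier.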

We thus have two unknowns, a and b, with two conditions:

$$\check f_{(a,b),0} = 0 \qquad \text{and} \qquad \frac{b - a}{8}\,\check f_{(a,b),1} = 1.$$

The Chebyshev coefficients can be computed efficiently using the FFT. Moreover, each of these equations is differentiable with respect to a and b. Thus we can set up a trivial Newton iteration to determine a and b. This iteration is guaranteed to converge to $\operatorname{supp}\mu$ whenever V is convex, or if $\operatorname{supp}\mu$ is a single interval and the initial guesses for a and b are sufficiently accurate. We then recover the equilibrium measure using the formula

$$d\mu = \frac{\sqrt{1 - M_{(a,b)}(x)^2}}{2\pi} \sum_{k=1}^{\infty} \check f_{(a,b),k}\, U_{k-1}(M_{(a,b)}(x))\, dx.$$
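A sketch of the resulting root-finding problem for $V(x) = x^2$ (reusing cheb_coeffs from the earlier sketch; scipy's fsolve, with its finite-difference Jacobian, stands in for the hand-rolled Newton iteration with analytic derivatives described on the slide):

```python
import numpy as np
from scipy.optimize import fsolve
# reuses cheb_coeffs from the earlier sketch

def support_conditions(ab, dV, n=32):
    """Residuals of c_0 = 0 and (b - a)/8 * c_1 - 1 = 0, where c_k are
    the Chebyshev coefficients of V' transplanted from (a, b)."""
    a, b = ab
    c = cheb_coeffs(lambda t: dV(0.5 * (b - a) * t + 0.5 * (a + b)), n)
    return [c[0], (b - a) / 8 * c[1] - 1]

dV = lambda x: 2 * x                       # V(x) = x^2
a, b = fsolve(support_conditions, [-1.0, 1.0], args=(dV,))
print(a, b)                                # approx -1.41421 and 1.41421
```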

[Plots: equilibrium measures $d\mu$ for the potentials $V(x) = x^{100}$ and $V(x) = e^x - x$.]

[Plots: equilibrium measures $d\mu$ for the potentials $V(x) = \frac{x^2}{5} - \frac{4x^3}{5} + \frac{x^4}{20} + \frac{8x}{5}$ and $V(x) = \sin 3x + \frac{10x^2}{20}$.]

Integrating $\phi = \mathcal{P}f$ yields g: we have $g' = \mathcal{P}f$, with $g^+ + g^- = V$ (up to the constant $\ell$) on $\operatorname{supp}\mu$, while off the support $g^+ + g^- - V$ is nonzero.

Integrating the series for $\mathcal{P}f$ term by term uses the antiderivatives (writing $\Sigma = (a, b)$, $M_\Sigma = M_{(a,b)}$ and $\kappa_\Sigma = \kappa \circ M_\Sigma$):

$$\int \kappa_\Sigma(z)\,dz = \frac{b - a}{2}\,\log J_+(M_\Sigma(z)),$$

$$\int M_\Sigma(z)\,\kappa_\Sigma(z)\,dz = \frac{b - a}{2}\left(J_+(M_\Sigma(z)) - M_\Sigma(z)\right),$$

$$\int J_+(M_\Sigma(z))^{-1}\,dz = \frac{b - a}{4}\left(\log J_+(M_\Sigma(z)) + \frac{J_+(M_\Sigma(z))^{-2}}{2}\right),$$

$$\int J_+(M_\Sigma(z))^{-k}\,dz = \frac{b - a}{4}\left(\frac{J_+(M_\Sigma(z))^{-(k+1)}}{k + 1} - \frac{J_+(M_\Sigma(z))^{-(k-1)}}{k - 1}\right), \quad k \geq 2,$$

with the boundary values on (a, b) obtained through $J_\uparrow(M_\Sigma(x))$ and $J_\downarrow(M_\Sigma(x))$, and the integration constant fixed so that $g(z) - \log z = O(z^{-1})$.

[Plots: $g^+ + g^- - V$ for the potentials $V(x) = \frac{x^2}{5} - \frac{4x^3}{5} + \frac{x^4}{20} + \frac{8x}{5}$ and $V(x) = \sin 3x + \frac{10x^2}{20}$; the quantity vanishes precisely on the support of the equilibrium measure.]

MULTIPLE INTERVALS. For $\Gamma = \Gamma_1 \cup \Gamma_2$, we write

$$\mathcal{P}_\Gamma f = \mathcal{P}_{\Gamma_1} r_1 + \mathcal{P}_{\Gamma_2} r_2$$

for unknown functions $r_1$ and $r_2$ supported on $\Gamma_1$ and $\Gamma_2$.

We expand each $r_\lambda$ in Chebyshev coefficients, $\mathcal{P}_{\Gamma_\lambda} r_\lambda = \sum_k \check r_{\lambda,k}\, \mathcal{P}_{\Gamma_\lambda}\!\left[T_k(M_{\Gamma_\lambda}(\cdot))\right]$, and collocate at Chebyshev points $x_1^{\Gamma_\lambda}, \ldots, x_N^{\Gamma_\lambda}$. Since the jump condition on an interval's own contour returns $r_\lambda$ exactly, this gives the linear system

$$r_1(x_j^{\Gamma_1}) + \left[\mathcal{P}^+_{\Gamma_2} + \mathcal{P}^-_{\Gamma_2}\right] r_2(x_j^{\Gamma_1}) = f(x_j^{\Gamma_1}),$$

$$\left[\mathcal{P}^+_{\Gamma_1} + \mathcal{P}^-_{\Gamma_1}\right] r_1(x_j^{\Gamma_2}) + r_2(x_j^{\Gamma_2}) = f(x_j^{\Gamma_2}),$$

to be solved for the coefficients $\check r_{\lambda,k}$.

As before, boundedness requires $\check r_{1,0} = \check r_{2,0} = 0$, and the decay condition $\phi(z) \sim \frac{1}{z}$ becomes, for $\Gamma_1 = (a_1, b_1)$ and $\Gamma_2 = (a_2, b_2)$,

$$\frac{b_1 - a_1}{8}\,\check r_{1,1} + \frac{b_2 - a_2}{8}\,\check r_{2,1} = 1.$$

Moreover, $g = \int \mathcal{P}f$ must satisfy $g^+ + g^- = V$ with the same constant on both $\Gamma_1$ and $\Gamma_2$, which imposes the additional condition

$$g^+(b_1) + g^-(b_1) - V(b_1) = g^+(a_2) + g^-(a_2) - V(a_2).$$

Example: $V(x) = \alpha\,(-3 + x)(-2 + x)(-1 + x)(2 + x)(3 + x)(1 + 2x)$. [Plot: the potential over $(-4, 4)$.]

ORTHOGONAL POLYNOMIALS

Once we have computed the equilibrium measure, we can set up the Riemann-Hilbert problem for the orthogonal polynomials. RH problems can be solved numerically (as described last week). We need to write the RH problem so that the solution is smooth (no singularities) along each contour, and localized around stationary points. This will be accurate in the asymptotic regime: arbitrarily large n. The key result: we can systematically compute arbitrarily large order orthogonal polynomials with respect to general weights. We could also use this to compute the nodes and weights of Gaussian quadrature rules with respect to general weights in O(n) time.

The unaltered Riemann-Hilbert problem for orthogonal polynomials is

$$\Phi^+ = \Phi^- \begin{pmatrix} 1 & e^{-nV} \\ 0 & 1 \end{pmatrix}, \qquad \Phi(z) = \left(I + O(z^{-1})\right) \begin{pmatrix} z^n & 0 \\ 0 & z^{-n} \end{pmatrix} \text{ as } z \to \infty.$$

We know how to compute g such that $g(z) \sim \log z$ and $g^+ + g^- = V$ on $\operatorname{supp}\mu$. We thus let

$$\Phi = \Psi \begin{pmatrix} e^{ng} & 0 \\ 0 & e^{-ng} \end{pmatrix},$$

where $\Psi \to I$ and

$$\Psi^+ = \Psi^- \begin{pmatrix} e^{ng^-} & 0 \\ 0 & e^{-ng^-} \end{pmatrix} \begin{pmatrix} 1 & e^{-nV} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} e^{-ng^+} & 0 \\ 0 & e^{ng^+} \end{pmatrix} = \Psi^- \begin{pmatrix} e^{n(g^- - g^+)} & e^{n(g^+ + g^- - V)} \\ 0 & e^{n(g^+ - g^-)} \end{pmatrix}.$$

(Based on Deift 1999.)

From the construction of g, the jump matrix

$$\begin{pmatrix} e^{n(g^- - g^+)} & e^{n(g^+ + g^- - V)} \\ 0 & e^{n(g^+ - g^-)} \end{pmatrix}$$

has the following nice properties. Off of $\operatorname{supp}\mu$, where g has no jump, it has the form

$$\begin{pmatrix} 1 & e^{n(g^+ + g^- - V)} \\ 0 & 1 \end{pmatrix},$$

which decays exponentially to the identity (g grows only logarithmically, so V dominates). On $\operatorname{supp}\mu$, where $g^+ + g^- = V$, it has the form

$$\begin{pmatrix} e^{n(g^- - g^+)} & 1 \\ 0 & e^{n(g^+ - g^-)} \end{pmatrix} = L^- \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} L^+ \qquad \text{for } L^\pm = \begin{pmatrix} 1 & 0 \\ e^{n(V - 2g^\pm)} & 1 \end{pmatrix}.$$

(Based on Deift 1999.)
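Multiplying out confirms the factorization (a quick check, using $g^+ + g^- = V$ on $\operatorname{supp}\mu$ so that the lower-left entry vanishes):

```latex
\begin{pmatrix} 1 & 0 \\ e^{n(V - 2g^-)} & 1 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ e^{n(V - 2g^+)} & 1 \end{pmatrix}
=
\begin{pmatrix} e^{n(V - 2g^+)} & 1 \\ e^{n(2V - 2g^+ - 2g^-)} - 1 & e^{n(V - 2g^-)} \end{pmatrix}
=
\begin{pmatrix} e^{n(g^- - g^+)} & 1 \\ 0 & e^{n(g^+ - g^-)} \end{pmatrix}.
```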

We thus want to solve the deformed (lensed) RH problem, where all the jumps are computable numerically: $\begin{pmatrix} 1 & 0 \\ e^{n(V - 2g)} & 1 \end{pmatrix}$ on the upper and lower lips of a lens around $\operatorname{supp}\mu$, $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ on $\operatorname{supp}\mu$ itself, and $\begin{pmatrix} 1 & e^{n(g^+ + g^- - V)} \\ 0 & 1 \end{pmatrix}$ on the rest of the real line. (Based on Deift 1999.)

[Diagrams: the contours of the deformed RH problem, with the jump $\begin{pmatrix} 1 & 0 \\ e^{n(V - 2g)} & 1 \end{pmatrix}$ on the lips of the lens and local pieces labelled $P$, $U$, $Q_1$, $Q_2$ near the endpoints.]

RANDOM MATRIX DISTRIBUTIONS

We want to compute gap probabilities. These are expressible as $\det[I + K_n|_\Omega]$ for

$$K_n(x, y) = \frac{e^{-\frac{n}{2}[V(x) + V(y)]}}{2\pi i}\; \frac{\Phi_1(x)\Phi_2(y) - \Phi_1(y)\Phi_2(x)}{x - y}$$

(where $\Phi$ is still the solution of the RH problem, with $\Phi_1$ and $\Phi_2$ the entries of its first column).

ALGORITHM. Input: potential V, dimension of random matrix n, gap interval Ω. Output: probability that there are no eigenvalues in Ω.
Step 1: Compute the equilibrium measure using the Newton iteration.
Step 2: Construct and solve the orthogonal polynomial RH problem numerically.
Step 3: Use Bornemann's Fredholm determinant solver.

UNIVERSALITY. Let $\Omega = a + \frac{(-s, s)}{K_n(a, a)}$. Then it is known that, for a inside the support of the equilibrium measure and any polynomial potential, the Fredholm determinant converges to the Fredholm determinant over $(-s, s)$ with the sine kernel

$$K_\infty(x, y) = \frac{\sin \pi(x - y)}{\pi(x - y)}.$$

However, for finite n, the distributions vary.

[Plots: gap statistics for n = 30, 50, ... for the potentials $V(x) = x^2$, $V(x) = x^4$, and $V(x) = \sin 3x + \frac{10x^2}{20}$, compared with the universal sine-kernel limit.]

Potential of the numerical RH approach for random matrix theory: discovery of unknown universality laws? Inverse problems: determining information about the potential V from observed data? Can we determine which random matrix ensemble generates the zeros of the Riemann zeta function? Modelling physical systems?