Numerical Evaluation of Standard Distributions in Random Matrix Theory


A Review of Folkmar Bornemann's MATLAB Package and Paper
Matt Redmond
Department of Mathematics
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
May 15, 2013

Level Spacing Function

Definition (Gaussian Ensemble Spacing Function)
Let J ⊂ R be an open interval.
E_β^(n)(k; J) := P(k eigenvalues of the n×n Gaussian β-ensemble lie in J)

Let J = (0, s). Then E_2(0; J) = probability that no eigenvalues lie in (0, s).

Fredholm Determinant

It is well known that E_2(0; J) can be represented as a Fredholm determinant:

Theorem (Gaudin 1961)
Given K_sin(x, y) = sinc(π(x − y)),
E_2(0; J) = det(I − K_sin)|_{L²(J)}

Note the operator's restriction to square-integrable functions over J. In general we will choose J = (0, s), and will notate E_2(0, (0, s)) as E_2(0, s), as per Bornemann's conventions.

Integral Formulation

Theorem (Jimbo, Miwa, Mori, Sato 1980)
E_2(0; s) = exp(−∫₀^{πs} σ(x)/x dx)
where σ(x) solves a particular form of the Painlevé V equation:
(xσ'')² = 4(σ − xσ')(xσ' − σ − (σ')²),  σ(x) ≃ x/π + x²/π² (x → 0)
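As a quick numerical sanity check (a Python sketch, not part of the slides), one can confirm that the stated boundary behavior σ(x) ≃ x/π + x²/π² is consistent with this σ-form of Painlevé V: both sides of the ODE are O(x²) as x → 0, so the residual of the two-term expansion should be one order smaller.

```python
import numpy as np

# Two-term boundary expansion sigma(x) ~ x/pi + x^2/pi^2 (x -> 0),
# with its derivatives taken by hand.
def sigma(x):   return x/np.pi + x**2/np.pi**2
def dsigma(x):  return 1/np.pi + 2*x/np.pi**2
def d2sigma(x): return 2/np.pi**2

def residual(x):
    # (x sigma'')^2 - 4 (sigma - x sigma') (x sigma' - sigma - (sigma')^2)
    s, s1, s2 = sigma(x), dsigma(x), d2sigma(x)
    return (x*s2)**2 - 4*(s - x*s1)*(x*s1 - s - s1**2)

# Both sides of the ODE are O(x^2) near 0; the residual of the two-term
# expansion is O(x^3), so residual(x)/x^2 -> 0 as x -> 0.
ratios = [abs(residual(x))/x**2 for x in (1e-1, 1e-2, 1e-3)]
```

The ratios shrink linearly in x, confirming the leading-order match between the expansion and the ODE.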

Tracy-Widom Distribution

Definition (Tracy-Widom Distribution)
F_2(s) := P(no eigenvalues of the large-matrix-limit GUE lie in (s, ∞))

Determinantal Representation

Theorem (Bronk 1964)
Given
K_Ai(x, y) = (Ai(x)Ai'(y) − Ai'(x)Ai(y)) / (x − y)
we have
F_2(s) = det(I − K_Ai)|_{L²(s, ∞)}

Integral Formulation

Theorem (Tracy, Widom 1993)
F_2(s) = exp(−∫_s^∞ (x − s) u(x)² dx)
where u(x) is the Hastings-McLeod (1980) solution to the Painlevé II equation
u'' = 2u³ + xu,  u(x) ≃ Ai(x) (x → ∞)
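A small numerical check (a Python sketch, not from the slides) shows why the condition u(x) ≃ Ai(x) makes sense: since Ai''(x) = x·Ai(x) exactly, substituting u = Ai into Painlevé II leaves the residual u'' − 2u³ − xu = −2·Ai(x)³, which decays superexponentially as x → ∞.

```python
import numpy as np
from scipy.special import airy  # airy(x) returns (Ai, Ai', Bi, Bi')

def pii_residual(x, h=1e-4):
    # Residual of Painlevé II, u'' - 2u^3 - xu, evaluated on u = Ai,
    # with u'' approximated by a central difference.
    ai = lambda t: airy(t)[0]
    u2 = (ai(x - h) - 2.0*ai(x) + ai(x + h)) / h**2
    return u2 - 2.0*ai(x)**3 - x*ai(x)

res2, res8 = pii_residual(2.0), pii_residual(8.0)
```

At x = 2 the residual is still visible (≈ −2·Ai(2)³), while by x = 8 it has dropped below finite-difference noise: the Airy function is an excellent approximate solution on the right tail.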

The common point of view, and why it's wrong.

Common point of view:
- The Painlevé formulation is somehow numerically better behaved than the Fredholm determinant.
- Solving an initial value problem by numerical integration is easier to implement than a Fredholm determinant.

Bornemann's view:
- Numerical evaluation of Painlevé transcendents is actually fairly involved. Stability is a major concern.
- There exists a simple, fast, accurate numerical method for evaluating Fredholm determinants.
- Many multivariate functions (joint probability distributions) have a nice representation as a Fredholm determinant, but no representation in terms of a nonlinear PDE.

Straightforward Approach: Solving the IVP for Painlevé

All of the examples we are interested in take asymptotic IVP form:

Given an interval (a, b), we seek u(x) that solves
u''(x) = f(x, u(x), u'(x))
subject to either of the asymptotic one-sided conditions
u(x) ≃ u_a(x) (x → a)  or  u(x) ≃ u_b(x) (x → b)

Straightforward Approach (cont.)

Problem 1: We must identify the asymptotic expansion of u(x), which is not always easy.

Even with the expansion, we must choose an initial point and an asymptotic order of approximation: choose a₊ > a (or b₋ < b) close to the boundary and compute the solution to the (standard) IVP
v''(x) = f(x, v(x), v'(x))
v(a₊) = u_a(a₊), v'(a₊) = u_a'(a₊)  or  v(b₋) = u_b(b₋), v'(b₋) = u_b'(b₋)

Straightforward Approach (cont.)

Problem 2: Standard solution methods exhibit numerical instability.

Example: Computing F_2(s) = exp(−∫_s^∞ (x − s) u(x)² dx):
v''(x) = 2v(x)³ + xv(x),  v(b₋) = Ai(b₋),  v'(b₋) = Ai'(b₋)

Choosing b₋ ≥ 8 gives initial values accurate to machine precision (about 10⁻¹⁶ for IEEE doubles). Choosing b₋ = 12 yields these results:
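The instability can be reproduced directly (a Python/SciPy sketch; the slides use MATLAB): integrate Painlevé II downward from b₋ = 8 twice, once with exact Airy initial data and once with a 10⁻⁸ relative perturbation. Because the Hastings-McLeod solution is unstable in the direction of decreasing x, the tiny perturbation is amplified enormously by the time the integration reaches x = −4.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import airy

def painleve_ii(x, y):
    # y = [u, u']; Painlevé II: u'' = 2u^3 + x u
    return [y[1], 2.0*y[0]**3 + x*y[0]]

b = 8.0
ai_b, aip_b, _, _ = airy(b)  # Airy initial data at b_minus

def integrate_down(rel_perturbation):
    # integrate from b down to -4 with (optionally perturbed) initial value
    sol = solve_ivp(painleve_ii, [b, -4.0],
                    [ai_b*(1.0 + rel_perturbation), aip_b],
                    rtol=1e-12, atol=1e-14, dense_output=True)
    return sol.sol(-4.0)[0]

u_ref  = integrate_down(0.0)
u_pert = integrate_down(1e-8)
# how much the absolute perturbation grew over the integration
amplification = abs(u_pert - u_ref) / (1e-8*abs(ai_b))
```

The amplification factor comes out many orders of magnitude above 1: errors seeded at the level of machine precision in the initial data dominate the solution well before the region where F_2 varies.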

Stability Issues

Less Straightforward Approach: Solving the BVP for Painlevé

Stability issues, described in depth in Bornemann's paper, lead to a BVP approach. We use the asymptotic expression u_a(x) at (x → a) to infer an asymptotic expression u_b(x) at (x → b), or vice versa. We then approximate u(x) by solving the BVP:
v''(x) = f(x, v(x), v'(x)),  v(a₊) = u_a(a₊),  v(b₋) = u_b(b₋)

This requires four choices: the values of a₊ and b₋, and the order of asymptotic accuracy for u_a(x) and u_b(x).

Less Straightforward Approach: An Example

Computing F_2(s) via computation of u(x) by BVP methods:

By definition, u(x) ≃ Ai(x) (x → ∞), so we take u_b(x) = Ai(x). Choose a₊ = −10, b₋ = 6 (Dieng, 2005).

We need to choose a sufficiently accurate asymptotic expansion for u_a(x). Tracy and Widom show
u(x) = √(−x/2) (1 + (1/8)x⁻³ − (73/128)x⁻⁶ + (10657/1024)x⁻⁹ + O(x⁻¹²)),  (x → −∞)
so we'll use that for u_a(x).
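This setup can be sketched with SciPy's collocation BVP solver (Python here; the actual toolbox is MATLAB — the mesh, initial guess, and tolerances below are illustrative choices, not the paper's). As a cross-check, the computed u can be fed back through F_2(0) = exp(−∫₀^∞ x·u(x)² dx), truncated at b₋ = 6 where the Ai² tail is negligible.

```python
import numpy as np
from scipy.integrate import solve_bvp, simpson
from scipy.special import airy

a, b = -10.0, 6.0  # a_plus and b_minus from the slide

def rhs(x, y):
    # y[0] = v, y[1] = v'; Painlevé II: v'' = 2 v^3 + x v
    return np.vstack([y[1], 2.0*y[0]**3 + x*y[0]])

def u_left(x):
    # Tracy-Widom expansion of the Hastings-McLeod solution as x -> -infinity
    return np.sqrt(-x/2.0)*(1.0 + x**-3/8.0 - 73.0*x**-6/128.0
                            + 10657.0*x**-9/1024.0)

def bc(ya, yb):
    # v(a_plus) = u_a(a_plus), v(b_minus) = Ai(b_minus)
    return np.array([ya[0] - u_left(a), yb[0] - airy(b)[0]])

# crude initial guess: sqrt(-x/2) on the left, Ai(x) on the right
xs = np.linspace(a, b, 201)
u_guess = np.where(xs < 0.0, np.sqrt(np.maximum(-xs, 0.0)/2.0), airy(xs)[0])
sol = solve_bvp(rhs, bc, xs, np.vstack([u_guess, np.gradient(u_guess, xs)]),
                tol=1e-8, max_nodes=20000)

# cross-check: F_2(0) = exp(-int_0^inf x u(x)^2 dx), truncated at b
xf = np.linspace(0.0, b, 4001)
uf = sol.sol(xf)[0]
F2_0 = np.exp(-simpson(xf*uf**2, x=xf))
```

Note how much machinery this takes compared with the determinant-based code later in the talk: asymptotic boundary data, an initial guess good enough for Newton to converge, and tolerance/mesh choices all had to be supplied by hand.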

Stability Issues

Pitfalls of BVP approach

- Requires turning the asymptotic expansion at one endpoint into an asymptotic expansion at the other endpoint. Not easy!
- Selecting appropriate a₊ and b₋, along with the truncation indices, is a bit of a black art.
- Actually solving the BVP requires choosing starting values for the Newton iteration, discretizing the DE, choosing a good step size, etc.

Punchline: the BVP approach is insufficiently black-box for us.

Better Approach: Numerical Evaluation of Fredholm Determinants

Choose your favorite quadrature rule (Clenshaw-Curtis is good) with nodes x_j ∈ (a, b) and positive weights w_j:
∑_{j=1}^m w_j f(x_j) ≈ ∫_a^b f(x) dx

The Fredholm determinant
d(z) = det(I − zK)|_{L²(a,b)}
has the approximation
d_m(z) = det(δ_ij − z w_i^{1/2} K(x_i, x_j) w_j^{1/2})_{i,j=1}^m
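The recipe above fits in a few lines. A Python/NumPy sketch (the paper's toolbox is MATLAB; Gauss-Legendre quadrature stands in for Clenshaw-Curtis, since both have positive weights and converge spectrally for analytic kernels), applied to the sine kernel from earlier in the talk:

```python
import numpy as np

def fredholm_det(kernel, a, b, m=40, z=1.0):
    t, w = np.polynomial.legendre.leggauss(m)  # nodes/weights on [-1, 1]
    x = 0.5*(b - a)*t + 0.5*(b + a)            # map nodes to (a, b)
    w = 0.5*(b - a)*w
    sw = np.sqrt(w)
    # d_m(z) = det(delta_ij - z * w_i^{1/2} K(x_i, x_j) w_j^{1/2})
    K = kernel(x[:, None], x[None, :])
    return np.linalg.det(np.eye(m) - z*sw[:, None]*K*sw[None, :])

# sine kernel K_sin(x, y) = sinc(pi(x - y)); note np.sinc(t) = sin(pi t)/(pi t)
sine_kernel = lambda x, y: np.sinc(x - y)

def E2(s, m=40):
    # E_2(0; s) = det(I - K_sin) restricted to L^2(0, s)
    return fredholm_det(sine_kernel, 0.0, s, m)
```

For small s the eigenvalue density near the origin is 1, so E_2(0; s) ≈ 1 − s, which the discretization reproduces; doubling m changes the answer only at the level of rounding error, which is the spectral convergence Bornemann exploits.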

Evaluating Finite-Dimensional Determinants

This is a standard numerical linear algebra problem.

If we need the value at a single point z ∈ C:
- Compute the LU factorization of (I − zA_m); get the determinant from ∏_{j=1}^m U_jj.
- Computing d_m(z) for a single z takes O(m³) time.

If we need the value at many points, we want d_m(z) as a polynomial:
- Compute the eigenvalues λ_j of A_m via QR (a one-time cost of O(m³) time, but with a worse constant factor than LU in practice), then form
  d_m(z) = ∏_{j=1}^m (1 − zλ_j)
- Each subsequent evaluation of d_m(z) takes O(m) time.
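A quick check (Python sketch, with a small random symmetric matrix standing in for A_m) that the two evaluation strategies agree:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
B = rng.standard_normal((m, m))
A = (B + B.T)/8.0                 # small symmetric stand-in for A_m
lam = np.linalg.eigvalsh(A)       # O(m^3) eigendecomposition, paid once

def d_lu(z):
    # determinant via LU factorization: O(m^3) per evaluation
    return np.linalg.det(np.eye(m) - z*A)

def d_eig(z):
    # product over precomputed eigenvalues: O(m) per evaluation
    return np.prod(1.0 - z*lam)
```

Both give the same polynomial in z; the eigenvalue form simply amortizes the O(m³) work across all subsequent evaluations.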

Sample Matlab Code

The following code computes F_2(0) to one unit of precision in the last decimal place:

>> m = 64; [w, x] = ClenshawCurtis(0, inf, m); w2 = sqrt(w);
>> [xi, xj] = ndgrid(x, x);
>> KAi = @AiryKernel;
>> F20 = det(eye(m) - (w2' * w2) .* KAi(xi, xj))

F20 =

   0.969372828355262
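The same computation translates directly to Python/NumPy (Bornemann's ClenshawCurtis and AiryKernel are MATLAB toolbox functions; here Gauss-Legendre quadrature on the truncated interval (0, 12) plays their role, since Ai decays superexponentially and the truncation error is far below the quadrature accuracy):

```python
import numpy as np
from scipy.special import airy  # airy(x) returns (Ai, Ai', Bi, Bi')

def airy_kernel(x, y):
    ai_x, aip_x, _, _ = airy(x)
    ai_y, aip_y, _, _ = airy(y)
    num = ai_x*aip_y - aip_x*ai_y
    d = x - y
    off = np.abs(d) > 1e-12
    out = np.empty_like(num)
    out[off] = num[off]/d[off]
    # diagonal limit by l'Hopital: K(x, x) = Ai'(x)^2 - x Ai(x)^2
    out[~off] = (aip_x**2 - x*ai_x**2)[~off]
    return out

m = 64
t, w = np.polynomial.legendre.leggauss(m)
x = 6.0*(t + 1.0)                 # nodes mapped to (0, 12)
w = 6.0*w
sw = np.sqrt(w)
xi, xj = np.meshgrid(x, x, indexing='ij')
F20 = np.linalg.det(np.eye(m) - sw[:, None]*airy_kernel(xi, xj)*sw[None, :])
print(F20)  # ≈ 0.969372828355262, matching the MATLAB result
```

The only wrinkle relative to the MATLAB snippet is the Airy kernel's removable singularity on the diagonal, handled explicitly via its limit value.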

Wrapup

- Computing Fredholm determinants is faster, easier, and more stable than integrating the Painlevé IVP or BVP.
- Being able to handle quantities expressed in non-PDE form is useful.
- Bornemann uses the toolset to identify (and subsequently prove) several new results (omitted here for brevity) about distributions of the k-th largest eigenvalue in the soft-edge scaling limit of the GOE and GSE. The numerical code generates immediate insights!