Progress in the method of Ghosts and Shadows for Beta Ensembles

Progress in the method of Ghosts and Shadows for Beta Ensembles. Alan Edelman (MIT), Alex Dubbs (MIT), and Plamen Koev (SJSU). Oct 8, 2012. 1/47

Wishart Matrices (arbitrary covariance). G = m×n matrix of Gaussians, Σ = n×n positive semidefinite matrix. G′GΣ is similar to A = Σ^½ G′G Σ^½. For β=1,2,4, the joint eigenvalue density of A has a formula, known for β=2 in some circles as Harish-Chandra/Itzykson-Zuber. 2/47
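
For the β = 1 case the similarity claim is easy to check numerically; a minimal MATLAB sketch (the covariance gallery('lehmer', n) is just an arbitrary positive definite choice, not one from the talk):

```matlab
% Sketch (beta = 1): G'*G*Sigma is similar to A = Sigma^(1/2)*G'*G*Sigma^(1/2),
% so both have the same real, nonnegative eigenvalues even though G'*G*Sigma
% is not symmetric.
m = 7; n = 4;
G = randn(m, n);                          % m-by-n real Gaussian matrix
Sigma = gallery('lehmer', n);             % arbitrary n-by-n positive definite covariance
A = sqrtm(Sigma) * (G'*G) * sqrtm(Sigma);
[sort(real(eig(G'*G*Sigma))), sort(real(eig(A)))]   % the two columns should agree
```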

Main Purpose of this Talk. Eigenvalue density of G′GΣ (similar to A = Σ^½ G′G Σ^½). Present an algorithm for sampling from this density. Show how the method of Ghosts and Shadows can be used to derive this algorithm. Further evidence that β=1,2,4 need not be special. 3/47

Eigenvalues of GOE (β=1), the naïve way:
MATLAB: A=randn(n); S=(A+A')/sqrt(2*n); eig(S)
R: A=matrix(rnorm(n*n),ncol=n); S=(A+t(A))/sqrt(2*n); eigen(S,symmetric=T,only.values=T)$values
Mathematica: A=RandomArray[NormalDistribution[],{n,n}]; S=(A+Transpose[A])/Sqrt[2 n]; Eigenvalues[S] 4/47

Eigenvalues of GOE/GUE/GSE and beyond. Real symmetric tridiagonal models (β=1: real, β=2: complex, β=4: quaternion): diagonals N(0,2), off-diagonals chi random variables. Works for general β>0, and computations are significantly faster. 5/47
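
A minimal sketch of sampling from such a tridiagonal model in MATLAB (chi2rnd is from the Statistics Toolbox; n and beta are illustrative choices, and no particular rescaling of the spectrum is applied):

```matlab
% Sketch: general-beta tridiagonal model with N(0,2) diagonal and
% chi_{beta*k} off-diagonals; eig on a tridiagonal matrix is fast.
n = 500; beta = 2.5;
d  = sqrt(2)*randn(n, 1);                  % diagonal entries ~ N(0,2)
od = sqrt(chi2rnd(beta*(n-1:-1:1)))';      % off-diagonals ~ chi_{beta*(n-k)}
T  = diag(d) + diag(od, 1) + diag(od, -1); % real symmetric tridiagonal
lambda = eig(T);                           % a sample of the beta-ensemble eigenvalues
```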

Introduction to Ghosts. G_1 is a standard normal N(0,1). G_2 is a complex normal (G_1 + iG_1). G_4 is a quaternion normal (G_1 + iG_1 + jG_1 + kG_1). G_β (β>0) seems to often work just fine: a "ghost" Gaussian. 6/47

Chi-squared. Defn: χ_β² is the sum of β iid squares of standard normals if β = 1, 2, …. It generalizes to non-integer β, as the gamma function interpolates the factorial. χ_β is the square root of the sum of squares (which generalizes) (Wikipedia: chi distribution). ||G_1|| is χ_1, ||G_2|| is χ_2, ||G_4|| is χ_4. So why not: ||G_β|| is χ_β? I call χ_β the shadow of G_β. 7/47
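
A one-line way to draw the shadow χ_β for non-integer β, sketched in MATLAB (chi2rnd from the Statistics Toolbox accepts non-integer degrees of freedom):

```matlab
% Sketch: chi_beta = sqrt(chi-squared with beta degrees of freedom),
% which makes sense for any beta > 0 via the gamma function.
beta = 2.5;
x = sqrt(chi2rnd(beta, 1e5, 1));   % 1e5 samples of chi_beta
[mean(x.^2), beta]                 % E[chi_beta^2] = beta, so these should be close
```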

Linear Algebra Example. Given a ghost Gaussian G_β, ||G_β|| is a real χ_β variable. Gram-Schmidt (as a product of Householder steps H_1, H_2, H_3):

            [ G_β  G_β  G_β ]   [ χ_{3β}  G_β     G_β ]
H_3 H_2 H_1 [ G_β  G_β  G_β ] = [   0     χ_{2β}  G_β ]
            [ G_β  G_β  G_β ]   [   0      0      χ_β ]

or

[ G_β  G_β  G_β ]         [ χ_{3β}  G_β     G_β ]
[ G_β  G_β  G_β ] = [Q] * [   0     χ_{2β}  G_β ]
[ G_β  G_β  G_β ]         [   0      0      χ_β ]

8/47

Linear Algebra Example, continued (same reduction as on the previous slide): Q has β-Haar measure! We have computed moments! 9/47
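
For β = 1 the ghosts are ordinary real Gaussians, so the pattern above can be checked directly; a minimal MATLAB sketch (n = 3 to match the display):

```matlab
% Sketch (beta = 1 shadow): QR of a square real Gaussian matrix gives an R
% whose diagonal entries are chi variables: |R(1,1)| ~ chi_3, |R(2,2)| ~ chi_2,
% |R(3,3)| ~ chi_1 when n = 3.
n = 3; trials = 1e4; r11 = zeros(trials, 1);
for k = 1:trials
    [~, R] = qr(randn(n));
    r11(k) = abs(R(1,1));          % equals the norm of the first column
end
mean(r11.^2)                       % should be near 3, the mean of chi_3^2
```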

Tridiagonalizing Example. Take the symmetric part of a square matrix of ghost Gaussians G_β; orthogonal (Householder) tridiagonalization produces exactly the real symmetric tridiagonal model above, with normal diagonal entries and χ off-diagonals (Silverstein, Trotter, etc.). 10/47
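
Again the β = 1 shadow can be checked numerically; a sketch using MATLAB's hess (for a symmetric input the Hessenberg form is tridiagonal):

```matlab
% Sketch (beta = 1 shadow): tridiagonalizing a GOE matrix produces chi
% off-diagonal entries; here we check the first one, |T(1,2)| ~ chi_{n-1}.
n = 6; trials = 1e4; t12 = zeros(trials, 1);
for k = 1:trials
    A = randn(n);
    S = (A + A')/sqrt(2);          % GOE: N(0,2) diagonal, N(0,1) off-diagonal
    T = hess(S);                   % orthogonal reduction to tridiagonal form
    t12(k) = T(1,2);
end
mean(t12.^2)                       % should be near n-1, the mean of chi_{n-1}^2
```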

Histogram without Histogramming: Sturm Sequences. Count #eigs < 0.5: count the sign changes in det((A − 0.5·I)[1:k,1:k]) as k runs from 1 to n. Count #eigs in [x, x+h]: take the difference in the number of sign changes at x+h and at x. Mentioned in Dumitriu and Edelman 2006; used theoretically in Albrecht, Chan, and Edelman 2008. 11/47
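
A minimal sketch of the Sturm-sequence count for a symmetric tridiagonal matrix, written here with the equivalent LDLᵀ pivot recursion (d, e, and x are assumed inputs; a zero pivot is ignored in this sketch):

```matlab
% Sketch: count eigenvalues of the symmetric tridiagonal matrix with diagonal
% d and off-diagonal e that lie below x, without computing any eigenvalues.
% By Sylvester's law of inertia, the count equals the number of negative
% pivots in the LDL' factorization of T - x*I.
n = length(d);
count = 0;
p = d(1) - x;                      % first pivot
count = count + (p < 0);
for k = 2:n
    p = d(k) - x - e(k-1)^2 / p;   % next pivot of T - x*I
    count = count + (p < 0);
end
% #eigs in [x, x+h]: run this twice and take the difference of the counts.
```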

A good computational trick is a good theoretical trick! 12/47

Stochastic Differential Eigen-Equations. Tridiagonal models suggest SDEs with Brownian motion as the infinite limit: Edelman 2002; Edelman and Sutton 2005, 2007; Brian Rider and coauthors (Ramirez, Cambronero, Virag, etc.). (Lots of beautiful results!) (Not today's talk.) 13/47

Tracy-Widom Computations: eigenvalues without the whole matrix! Never construct the entire tridiagonal matrix; just, say, the upper 10n^(1/3)-by-10n^(1/3) corner. Compute the largest eigenvalue of that, perhaps using Lanczos with a shift-and-invert strategy. Can compute for amazingly large n, and any beta. Edelman 2003; Persson and Edelman 2005. 14/47
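
A sketch of that computation in MATLAB (chi2rnd from the Statistics Toolbox; the scaling by sqrt(beta*n), which puts the spectral edge near 2, is my normalization assumption rather than a formula quoted from the slides):

```matlab
% Sketch: largest eigenvalue for huge n and any beta, using only the top
% k = 10*n^(1/3) corner of the tridiagonal model.
n = 1e9; beta = 2;
k  = ceil(10*n^(1/3));
d  = sqrt(2)*randn(1, k);                        % top k diagonal entries
od = sqrt(chi2rnd(beta*(n-1:-1:n-k+1)));         % top k-1 off-diagonals
T  = sparse(1:k, 1:k, d, k, k) ...
   + sparse(1:k-1, 2:k, od, k, k) + sparse(2:k, 1:k-1, od, k, k);
lmax = eigs(T, 1, 'largestreal');                % Lanczos on the corner only
lmax / sqrt(beta*n)                              % spectral edge sits near 2
```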

Other Computational Results.
Infinite random matrix theory: the Free Probability Calculator (Raj).
Finite random matrix theory: MOPS (Ioana Dumitriu) (β: orthogonal polynomials); Hypergeometrics of Matrix Argument (Plamen Koev) (β: distribution functions for finite statistics, such as the finite Tracy-Widom laws for Laguerre).
Good stuff, but not today. 15/47

Scary Ideas in Mathematics: Zero, Negative, Radical, Irrational, Imaginary. Ghosts: something like a commutative algebra of random variables that generalizes random reals, complexes, and quaternions, and inspires theoretical results and numerical computation. 16/47

Did you say commutative?? Quaternions don't commute. Yes, but random quaternions do! If x and y are G_4, then x*y and y*x are identically distributed! 17/47

RMT Densities.
Hermite: c ∏_{i<j} |λ_i − λ_j|^β ∏_i e^{−λ_i²/2}  (Gaussian ensembles)
Laguerre: c ∏_{i<j} |λ_i − λ_j|^β ∏_i λ_i^m e^{−λ_i}  (Wishart matrices)
Jacobi: c ∏_{i<j} |λ_i − λ_j|^β ∏_i λ_i^{m_1} (1 − λ_i)^{m_2}  (MANOVA matrices)
Fourier: c ∏_{i<j} |λ_i − λ_j|^β  (on the complex unit circle)
Jack Polynomials.
Traditional story: count the real parameters; β=1,2,4 for real, complex, quaternion.
Applications: β=1: all of multivariate statistics; β=2: tons; β=4: almost nobody cares.
Dyson 1962 "Threefold Way": three associative division rings. 18/47

β Ghosts. β=1 has one real Gaussian (G). β=2 has two real Gaussians (G+iG). β=4 has four real Gaussians (G+iG+jG+kG). β ≥ 1 has one real part and (β − 1) ghost parts. 19/47

Introductory Theory. There is an advanced theory emerging. A β-ghost is a spherically symmetric random variable defined on R^β. A shadow is a derived real or complex quantity. 20/47

Wishart Matrices (arbitrary covariance). G = m×n matrix of Gaussians, Σ = n×n positive semidefinite matrix. G′GΣ is similar to A = Σ^½ G′G Σ^½. For β=1,2,4, the joint eigenvalue density of A has a formula: 21/47

Joint eigenvalue density of G′GΣ. The 0F0 function is a hypergeometric function of two matrix arguments that depends only on the eigenvalues of the matrices. Formulas and software exist. 22/47

Generalization of Laguerre. Laguerre versus Wishart: 23/47

General β? The joint density above is a probability density for all β>0. Goals: an algorithm for sampling from this density; get a feel for the density's ghost meaning. 24/47

Main Result. An algorithm derived from ghosts that samples the eigenvalues, and a MATLAB implementation that is consistent with other beta-ized formulas (largest eigenvalue, smallest eigenvalue). 25/47

Working with Ghosts: real quantities. 26/47

More practice with Ghosts 27/47

Bidiagonalizing, Σ=I. Z′Z has the Σ=I density, giving a special case of the general joint density. 28/47
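
A sketch of the Σ = I special case using the bidiagonal beta-Laguerre model (diagonal and subdiagonal χ's, in the spirit of the QR example earlier); m, n, beta are illustrative values with m ≥ n, and chi2rnd is from the Statistics Toolbox:

```matlab
% Sketch: bidiagonal model for the Sigma = I Wishart(beta) eigenvalues.
m = 8; n = 5; beta = 2.5;
bd  = sqrt(chi2rnd(beta*(m:-1:m-n+1)));    % diagonal:    chi_{beta*(m-i+1)}
bsd = sqrt(chi2rnd(beta*(n-1:-1:1)));      % subdiagonal: chi_{beta*(n-i)}
B = diag(bd) + diag(bsd, -1);              % n-by-n lower bidiagonal
lambda = eig(B*B');                        % a sample from the Sigma = I density
```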

The Algorithm for Z = GΣ^½. 29/47

The Algorithm for Z = GΣ^½, cont. 30/47

Removing U and V 31/47

Algorithm cont. 32/47

Completion of Recursion 33/47

Numerical Experiments: Largest Eigenvalue. Analytic formula for the largest eigenvalue distribution; Edelman and Koev: software to compute it. 34/47

[Figure m3n3beta5.000m150.stag.a.fig: cdf F(x) of the largest eigenvalue, m=3, n=3, β=5.0.] 35/47

[Figure m4n4beta2.500m130.stag.a.fig: cdf F(x) of the largest eigenvalue, m=4, n=4, β=2.5.] 36/47

[Figure m5n4beta0.750m120.1234.a.fig: cdf F(x) of the largest eigenvalue, m=5, n=4, β=0.75.] 37/47

Smallest Eigenvalue as Well. The cdf of the smallest eigenvalue: 38/47

[Figure m5n4beta3.000.stag.a.least.fig: cdf F(x) of the smallest eigenvalue, m=5, n=4, β=3.0.] 39/47

Goals.
Continuum of Haar measures generalizing orthogonal, unitary, symplectic.
New definition of Jack polynomials generalizing the zonals.
Computations! E.g. moments of the Haar measures.
Place finite random matrix theory β into the same framework as infinite random matrix theory: specifically, β as a knob to turn down the randomness, e.g. the Airy kernel d²/dx² + x + (2/β^½) dW, with dW white noise. 40/47

Formally. Let S_n = 2π^{n/2}/Γ(n/2) = surface area of the unit sphere in R^n, defined for any n = β > 0. A β-ghost x is formally defined by a function f_x(r) such that ∫_{r=0}^∞ f_x(r) r^{β−1} S_β dr = 1. Note: for integer β, x can be realized as a spherically symmetric random variable in β dimensions. Example: a β-normal ghost is defined by f(r) = (2π)^{−β/2} e^{−r²/2}. Example: zero is defined with constant·δ(r). Can we do algebra? Can we do linear algebra? Can we add? Can we multiply? 41/47
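
A quick numerical sanity check of the β-normal example (a sketch; S_β = 2π^{β/2}/Γ(β/2) is the surface-area constant defined above):

```matlab
% Sketch: the beta-normal ghost density integrates to 1 for non-integer beta.
beta  = 2.5;
Sbeta = 2*pi^(beta/2)/gamma(beta/2);               % surface area of the unit sphere in R^beta
f     = @(r) (2*pi).^(-beta/2) .* exp(-r.^2/2);    % beta-normal radial density
integral(@(r) f(r) .* r.^(beta-1) * Sbeta, 0, Inf) % should return 1
```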

A few more operations. ||x|| is a real random variable whose density is given by f_x(r). (x + x̄)/2 is a real random variable, given by multiplying ||x|| by a Beta-distributed random variable representing a coordinate on the sphere. 42/47
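
A sketch of that real-part shadow for the β-normal ghost: multiply ||x|| ~ χ_β by a symmetric Beta-distributed sphere coordinate. For β = 1, 2, 4 this is literally the real part, and for general β > 1 it should still behave like a standard normal (betarnd and chi2rnd are from the Statistics Toolbox; the Beta((β−1)/2,(β−1)/2) parametrization is my reading of "a coordinate on the sphere"):

```matlab
% Sketch: shadow of the real part of a beta-normal ghost.
beta = 3.7; trials = 1e5;
r = sqrt(chi2rnd(beta, trials, 1));                     % ||x|| ~ chi_beta
t = 2*betarnd((beta-1)/2, (beta-1)/2, trials, 1) - 1;   % coordinate of a uniform point on the sphere in R^beta
x = r .* t;                                             % the real-part shadow
[mean(x), var(x), mean(x.^4)]                           % should be near 0, 1, 3 (standard normal moments)
```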

Addition of Independent Ghosts. Addition returns a spherically symmetric object; we have an integral formula. Prefer: add the real parts, with the imaginary part completed so as to keep spherical symmetry. 43/47

Multiplication of Independent Ghosts. Just multiply the ||z||'s and plug in spherical symmetry. Multiplication is commutative. (Important example: quaternions don't commute, but spherically symmetric random variables do!) 44/47

Understanding ∏_{i<j} |λ_i − λ_j|^β. Define the volume element (dx)^∧ by (r dx)^∧ = r^β (dx)^∧ (β-dimensional volume, like fractals, but we don't really see any fractal theory here). Jacobians: A = QΛQ′ (symmetric eigendecomposition), so Q′ dA Q = dΛ + (Q′dQ)Λ − Λ(Q′dQ). Then (dA)^∧ = (Q′ dA Q)^∧ = (diagonal)^∧ (strictly upper)^∧: the diagonal part gives ∏ dλ_i = (dΛ)^∧, and the off-diagonal part gives ((Q′dQ)_{ij}(λ_i − λ_j))^∧ = (Q′dQ)^∧ ∏_{i<j} |λ_i − λ_j|^β. 45/47
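
The same bookkeeping in display math (a sketch; nothing here beyond the slide's own notation):

```latex
% Jacobian of the symmetric eigendecomposition A = Q \Lambda Q', written with
% the wedge (volume-element) notation from the slide.
\[
  Q' \, dA \, Q \;=\; d\Lambda + (Q'\,dQ)\,\Lambda - \Lambda\,(Q'\,dQ),
\]
\[
  (dA)^{\wedge} \;=\; (Q'\,dA\,Q)^{\wedge}
  \;=\; \Big(\prod_i d\lambda_i\Big)\,
        \prod_{i<j} |\lambda_i - \lambda_j|^{\beta}\; (Q'\,dQ)^{\wedge}.
\]
```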

Haar Measure. β=1: E_Q[trace(AQBQ′)^k] = Σ_κ C_κ(A) C_κ(B)/C_κ(I), summing over partitions κ of k. Forward method: suppose you know the C_κ's a priori (Jack polynomials!). Let A and B be diagonal indeterminates (think generating functions); then one can formally obtain moments of Q. Example: E(q_11² q_22²) = (n + α − 1)/(n(n − 1)(n + α)), α := 2/β. Can Gram-Schmidt the ghosts. Same answers coming up! 46/47
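
A Monte Carlo sketch of the moment formula in the β = 1 (real orthogonal, α = 2) case, with Q obtained by Gram-Schmidt (QR) applied to a real Gaussian matrix:

```matlab
% Sketch: check E(q11^2 * q22^2) = (n + alpha - 1)/(n*(n-1)*(n+alpha)),
% alpha = 2/beta, for beta = 1 using Haar orthogonal matrices.
n = 5; trials = 2e5; s = 0;
for k = 1:trials
    [Q, R] = qr(randn(n));
    Q = Q * diag(sign(diag(R)));   % sign fix so that Q is Haar distributed
    s = s + Q(1,1)^2 * Q(2,2)^2;
end
alpha = 2/1;                       % beta = 1
[s/trials, (n + alpha - 1)/(n*(n-1)*(n+alpha))]   % should roughly agree
```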

Further Uses of Ghosts: multivariate orthogonal polynomials; Tracy-Widom laws; largest/smallest eigenvalues. Expect lots of uses to be discovered. 47/47