Orthogonal Polynomial Ensembles


Chapter 11
Orthogonal Polynomial Ensembles

11.1 Orthogonal Polynomials of Scalar Argument

Let w(x) be a weight function on a real interval, or the unit circle, or generally on some curve in the complex plane. Without being too technical, we think of w(x) as piecewise continuous, and we allow for delta functions. We will let I denote the support of w. We require that w(x) have finite mass, i.e., that ∫_I w(x) dx is finite.

Every famous mathematician seems to have a set of orthogonal polynomials:

Example 1: Legendre polynomials. Let w(x) = 1 on [−1, 1]. The polynomials are named for Legendre and denoted P_n(x).

Example 2: Chebyshev polynomials. w(x) = (1 − x^2)^{−1/2}. The polynomials are denoted T_n(x) and satisfy T_n(cos θ) = cos(nθ). For example, cos 3θ = 4 cos^3 θ − 3 cos θ, and T_3(x) = 4x^3 − 3x.

Example 3: Hermite polynomials. w(x) = e^{−x^2/2} on the whole real line. The H_n(x) have a special place in random eigenvalue history because of the Gaussian ensembles.

Example 4: Laguerre polynomials. w(x) = x^γ e^{−x/2}. For every parameter γ there is a set of Laguerre polynomials, denoted L_n^γ(x). These polynomials play a vital role in the Wishart matrices of multivariate statistics.

Example 5: Jacobi polynomials. w(x) = (1 − x)^{γ_1} (1 + x)^{γ_2}. This is the granddaddy of all the above. When γ_1 = γ_2 = 0, we obtain the Legendre polynomials. If γ_1 = γ_2 = −1/2, we obtain Chebyshev. Taking limits appropriately, we can obtain the Hermite and Laguerre polynomials.

There is a beautiful relationship among all of the following quantities.

Quantity 1: The moments s_k = ∫_I x^k w(x) dx.

Quantity 2: The sequence of orthogonal polynomials p_0(x), p_1(x), ..., where ∫_I p_j(x) p_k(x) w(x) dx = δ_{jk}. It is assumed that p_j(x) has degree j.

Quantity 3: Infinite symmetric tridiagonal matrices (also known as Jacobi matrices). The three-term recurrence for the orthogonal polynomials is often best seen as a tridiagonal matrix. The characteristic polynomial of the top j-by-j section is p_j(x). If one has a finite symmetric

tridiagonal matrix, then one obtains a finite sequence of orthogonal polynomials corresponding to a finite measure.

Quantity 4: The eigenvalues and the first row of the eigenvectors of the above tridiagonal matrix, as captured in w_i(x) = q_i^2 δ(x − λ_i).

Quantity 5: Gaussian quadrature. The abscissas are the eigenvalues of T_n, and the weights are the q_i^2.

Quantity 6: Continued fractions. If we evaluate the continued fraction given by the diagonals and the off-diagonals squared, we obtain a sequence of numerators and denominators. The numerators are the orthogonal polynomials, and the denominators are closely related.

Quantity 7: Random variables. If we normalize w(x) to have integral 1, then we have random variables with that density. For example, the Hermite density corresponds to the normal distribution, the Laguerre density corresponds to χ^2 distributions, and the Jacobi density corresponds to the beta distribution.

Quantity 8: Jump condition. One can seek functions Y(x) in the complex plane that are defined off of the support of w(x) and satisfy a jump condition:

  lim_{ε→0} { Y(x + iε) − Y(x − iε) } = π_n(x) w(x),

where all we specify about π_n(x) is that it is a polynomial of degree n. We also require the growth condition Y(x) = O(x^{−n}) as x → ∞. The solution is unique: π_n turns out to be the nth orthogonal polynomial, and Y(x) is its moment generating function. For more on this, see the section on the Riemann–Hilbert problem.

Quantity 9: The Lanczos tridiagonalization algorithm.

11.2 Orthogonal Polynomial Ensembles and Matrix Models

Let ω(x) be a weight function. For a parameter β we define the multivariate weight function

  ω_n(x; β) = Δ^β ∏_{i=1}^n ω(x_i),  where Δ = ∏_{i<j} |x_i − x_j|.

Thus Δ is the absolute value of the Vandermonde determinant, i.e., of the determinant of the matrix (x_i^{j−1})_{i,j=1,...,n}. When β is understood from context, we may suppress it in the notation.
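Quantities 3–5 can be seen concretely. The sketch below is a minimal illustration (not from the text), assuming the probabilists' Hermite weight w(x) = e^{−x^2/2}/√(2π), whose monic three-term recurrence p_{k+1} = x p_k − k p_{k−1} gives a Jacobi matrix with zero diagonal and off-diagonals √k. The Gaussian quadrature abscissas are the eigenvalues λ_i of T_n, and the weights are the squared first components q_i^2 of the eigenvectors:

```python
import numpy as np

# Jacobi matrix for the probabilists' Hermite weight w(x) = exp(-x^2/2)/sqrt(2*pi):
# zero diagonal, off-diagonals sqrt(1), sqrt(2), ..., sqrt(n-1).
n = 10
off = np.sqrt(np.arange(1, n))
T = np.diag(off, 1) + np.diag(off, -1)

lam, V = np.linalg.eigh(T)   # abscissas lambda_i = eigenvalues of T_n
q2 = V[0, :] ** 2            # weights q_i^2 = squared first eigenvector components

# The weights sum to the total mass of w (here 1, since w is normalized),
# and the rule integrates polynomials of degree <= 2n - 1 exactly.
assert abs(q2.sum() - 1.0) < 1e-12
assert abs((q2 * lam**2).sum() - 1.0) < 1e-10   # second moment of N(0, 1)
assert abs((q2 * lam**4).sum() - 3.0) < 1e-10   # fourth moment of N(0, 1)
```

This is the Golub–Welsch construction: the spectral data of the finite tridiagonal matrix encode the discrete measure Σ q_i^2 δ(x − λ_i) of Quantity 4.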
Definition: Random Matrix Models. We say that we have a random matrix model for the multivariate weight function ω_n(x; β) if we have a random n × n matrix A_n whose eigenvalues have ω_n(x; β) as a joint density. Constructing nice random matrix models is a central problem in random matrix theory. It would be cheating to say: let x be a random variable with multivariate density ω_n(x; β), and take A_n to be the diagonal matrix with x on the diagonal.

11.2.1 The Cauchy–Binet Theorem and its Continuous Limit

Let C = AB be a matrix product of any kind. Let M(i_1, ..., i_m; j_1, ..., j_m) denote the m × m minor det(M_{i_k, j_l})_{1≤k≤m, 1≤l≤m}.

In other words, it is the determinant of the submatrix of M formed from rows i_1, ..., i_m and columns j_1, ..., j_m. The Cauchy–Binet theorem states that

  C(i_1, ..., i_m; k_1, ..., k_m) = Σ_{j_1 < j_2 < ... < j_m} A(i_1, ..., i_m; j_1, ..., j_m) B(j_1, ..., j_m; k_1, ..., k_m).

Notice that when m = 1 this is the familiar formula for matrix multiplication. When all matrices are m × m, the formula states that det C = det A det B.

Cauchy–Binet extends to matrices with infinitely many columns. If the columns are indexed by a continuous variable, we now have a vector of functions. Replacing A_{ij} with φ_i(x_j) and B_{jk} with ψ_k(x_j), we see that Cauchy–Binet becomes

  det C = (1/n!) ∫ det(φ_i(x_j))_{i,j=1,...,n} det(ψ_k(x_j))_{k,j=1,...,n} dx_1 dx_2 ... dx_n,

where C_{ik} = ∫ φ_i(x) ψ_k(x) dx for i, k = 1, ..., n. (The factor 1/n! plays the role of the ordering j_1 < ... < j_m in the discrete sum; equivalently, one may integrate over the ordered region x_1 < ... < x_n without it.) Apparently, this continuous version of Cauchy–Binet may be traced back to an 1883 paper by Andréief [9].

11.2.2 Cauchy–Binet Examples

Corollary 11.1. Let A ∈ R^{n×p} and D = diag(d_1, ..., d_n) ∈ R^{n×n}. We have

  det(A^T D A) = Σ_{i_1 < ... < i_p} d_{i_1} ··· d_{i_p} A(i_1, ..., i_p; 1, ..., p)^2.

Example 11.1. Let d_i = 1 if i ≤ l and d_i = 0 if i > l, and let A_l = A(1:l, :). Then

  det(A^T D A) = det(A_l^T A_l) = Σ_{i_1 < ... < i_p ≤ l} A(i_1, ..., i_p; 1, ..., p)^2.

Example 11.2. Let d_i = 1 + z if i = 5 and d_i = 1 if i ≠ 5, and also let A^T A = I. Then

  det(A^T D A) = 1 + z Σ_{some i_k = 5} A(i_1, ..., i_p; 1, ..., p)^2 = 1 + z ‖A(5, :)‖^2.
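Both the discrete Cauchy–Binet theorem and Corollary 11.1 are easy to check numerically. A small sketch (an illustration, not from the text) with random matrices, brute-forcing the subset sums:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Cauchy-Binet: det(AB) = sum over m-subsets S of columns of A (= rows of B)
# of det(A[:, S]) * det(B[S, :]).
m, N = 2, 4
A = rng.standard_normal((m, N))
B = rng.standard_normal((N, m))
lhs = np.linalg.det(A @ B)
rhs = sum(np.linalg.det(A[:, S]) * np.linalg.det(B[S, :])
          for S in map(list, combinations(range(N), m)))
assert abs(lhs - rhs) < 1e-10

# Corollary 11.1: det(A^T D A) = sum over p-subsets S of rows of
# (product of d_i over S) times the squared p-by-p minor of A on rows S.
p, n = 2, 5
M = rng.standard_normal((n, p))
d = rng.standard_normal(n)
lhs2 = np.linalg.det(M.T @ np.diag(d) @ M)
rhs2 = sum(np.prod(d[S]) * np.linalg.det(M[S, :]) ** 2
           for S in map(list, combinations(range(n), p)))
assert abs(lhs2 - rhs2) < 1e-10
```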

Example 11.3. Let d_i = Z_i (symbolic). Then

  det(A^T D A) = Σ_{i_1 < ... < i_p} Z_{i_1} ··· Z_{i_p} A(i_1, ..., i_p; 1, ..., p)^2

displays the squares of the p × p minors of A with symbolic labels.

Example 11.4. Let d_i = 1 + Z_i. Then

  det(A^T D A) = Σ_S A(S; 1, ..., p)^2 + Σ_i Z_i Σ_{S ∋ i} A(S; 1, ..., p)^2 + Σ_{i<j} Z_i Z_j Σ_{S ∋ i,j} A(S; 1, ..., p)^2 + ···,

where S = {i_1 < ... < i_p} ranges over the p-subsets of the rows.

11.3 Orthogonal Polynomial Ensembles when β = 2

In this section we assume that β = 2, so that ω_n(x) = Δ^2 ∏_{i=1}^n ω(x_i). For classical weight functions ω(x), Hermitian matrix models have been constructed. We have already seen the GUE corresponding to Hermite matrix models, and complex Wishart matrices for Laguerre. We also get the complex MANOVA matrices corresponding to Jacobi.

Notation: We define φ_n(x) = p_n(x) ω(x)^{1/2}. Thus the φ_i(x) are not generally polynomials, but they do form an orthonormal set of functions on the support of ω.

It is a general fact that the level density is

  ρ_n(x) = Σ_{i=0}^{n−1} φ_i(x)^2,

normalized so that its integral over the support is n. Given any function f(x), one can ask for

  E(f) ≡ E_{ω_n} ∏_{i=1}^n f(x_i).

When we have a matrix model, this is E(det f(X)). It is a simple result that

  E(f) = (1/n!) ∫ det(φ_i(x_j))^2_{i=0,...,n−1; j=1,...,n} ∏_{j=1}^n f(x_j) dx_1 ··· dx_n.

This implies, by the continuous version of the Cauchy–Binet theorem, that E(f) = det C_n, where

  (C_n)_{ij} = ∫ φ_i(x) φ_j(x) f(x) dx,  i, j = 0, ..., n − 1.

Some important functions to use are f(x) = 1 + Σ_i z_i δ(x − y_i); the coefficients of the resulting polynomial in the z_i give the marginal densities of k eigenvalues. Another important one is f(x) = 1 − χ_{[a,b]}, where χ_{[a,b]} is the indicator function of [a, b]. Then we obtain the probability that no eigenvalue is in the interval [a, b]. If b is infinite, we obtain the probability that all eigenvalues are less than a, that is, the distribution function for the largest eigenvalue.
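As a sketch of the gap-probability computation E(f) = det C_n with f = 1 − χ_{[a,b]} (an illustration under stated assumptions, not from the text): for the 2 × 2 GUE with matrix density ∝ e^{−tr H²/2}, matching ω(x) = e^{−x²/2}, the first two Hermite functions are φ_0(x) = e^{−x²/4}/(2π)^{1/4} and φ_1(x) = x e^{−x²/4}/(2π)^{1/4}, and P(no eigenvalue in [a, b]) = det(I − B) with B_{ij} = ∫_a^b φ_i φ_j dx. The code below evaluates B by a simple midpoint rule and compares the determinant to a Monte Carlo estimate over sampled GUE matrices:

```python
import numpy as np

# Determinantal gap probability for the 2x2 GUE on [a, b].
a, b, N = 0.0, 1.0, 4000
x = a + (np.arange(N) + 0.5) * (b - a) / N      # midpoint grid on [a, b]
dx = (b - a) / N
c = (2 * np.pi) ** (-0.25)
phi0 = c * np.exp(-x**2 / 4)                    # normalized Hermite function, He_0
phi1 = c * x * np.exp(-x**2 / 4)                # normalized Hermite function, He_1
B = np.array([[np.sum(phi0 * phi0), np.sum(phi0 * phi1)],
              [np.sum(phi1 * phi0), np.sum(phi1 * phi1)]]) * dx
gap_det = np.linalg.det(np.eye(2) - B)          # P(no eigenvalue in [a, b])

# Monte Carlo: sample 2x2 Hermitian H with density ~ exp(-tr H^2 / 2);
# eigenvalues of [[d0, z], [conj(z), d1]] in closed form.
rng = np.random.default_rng(0)
trials = 200_000
d0, d1 = rng.standard_normal(trials), rng.standard_normal(trials)
z = (rng.standard_normal(trials) + 1j * rng.standard_normal(trials)) / np.sqrt(2)
disc = np.sqrt((d0 - d1) ** 2 + 4 * np.abs(z) ** 2)
lo, hi = (d0 + d1 - disc) / 2, (d0 + d1 + disc) / 2
outside = ((lo < a) | (lo > b)) & ((hi < a) | (hi > b))
gap_mc = outside.mean()

assert abs(gap_det - gap_mc) < 0.02
```

With b = ∞ the same determinant gives the distribution function of the largest eigenvalue, as noted above.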

11.4 Orthogonal Polynomials of Matrix Argument

Let x = (x_1, ..., x_n). For any β we can define the multivariate orthogonal polynomials p_κ(x) by the condition that

  ∫ p_κ(x) p_ν(x) dω_n(x; β) = δ_{κν}.

Here κ is a finite non-increasing sequence (a partition), and the leading term of p_κ(x) is x^κ. These orthogonal polynomials are symmetric. Indeed, the set of polynomials p_κ of degree at most k forms a basis for the symmetric polynomials of degree at most k.

Conjecture: Let ω(x) = 1 on the unit circle in the complex plane. Then the orthogonal polynomials are the Jack polynomials. These polynomials are homogeneous.

Given any n × n matrix X, we can define p_κ(X) as p_κ(λ_1, ..., λ_n). Since p_κ is symmetric, we see that p_κ(X) is a polynomial in the elements of X. (For example, the determinant, the trace, or any coefficient of the characteristic polynomial can be expressed as a polynomial in the entries of the matrix.)

When β = 1, 2, and 4, we can define a measure ω(X) on real symmetric, complex Hermitian, or symplectic self-dual matrices, respectively, such that

  ∫ p_κ(X) p_ν(X) ω(X) dX = δ_{κν}.

Here ω(X) = ∏_i ω(λ_i(X)) = det ω(X). Alternatively, there is a measure on real tridiagonal matrices for any such β. Similarly, we can construct a random tridiagonal matrix whose eigenvalue probability density is ω_n(x; β).
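The parenthetical remark above, that symmetric functions of the eigenvalues are polynomials in the matrix entries, can be illustrated with a toy check (not from the text) using the elementary symmetric polynomials e_1 = Σ λ_i = tr X and e_n = ∏ λ_i = det X:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3))
lam = np.linalg.eigvals(X)      # eigenvalues, possibly complex

# Symmetric functions of the eigenvalues are polynomials in the entries of X:
# the sum is the trace, the product is the determinant (imaginary parts cancel).
assert np.allclose(lam.sum(), np.trace(X))
assert np.allclose(np.prod(lam), np.linalg.det(X))
```

The intermediate coefficients of the characteristic polynomial give the remaining elementary symmetric polynomials in the same way.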