Chapter 11: Orthogonal Polynomial Ensembles

11.1 Orthogonal Polynomials of Scalar Argument

Let w(x) be a weight function on a real interval, or the unit circle, or generally on some curve in the complex plane. Without being too technical, we think of w(x) as piecewise continuous, and we allow for delta functions. We will let I denote the support of w. We require that w(x) have finite mass, i.e., ∫_I w(x) dx is finite.

Every famous mathematician seems to have a set of orthogonal polynomials:

Example 1: Legendre Polynomials: Let w(x) = 1 on [−1, 1]. The polynomials are named for Legendre and denoted P_n(x).

Example 2: Chebyshev Polynomials: w(x) = (1 − x²)^{−1/2} on [−1, 1]. The polynomials are denoted by T_n(x) and satisfy T_n(cos θ) = cos(nθ). For example, cos 3θ = 4 cos³θ − 3 cos θ and T_3(x) = 4x³ − 3x.

Example 3: Hermite Polynomials: w(x) = e^{−x²/2} on the whole real line. The H_n(x) have a special place in random eigenvalue history because of the Gaussian ensembles.

Example 4: Laguerre Polynomials: w(x) = x^γ e^{−x/2} on [0, ∞). For every parameter γ there is a set of Laguerre polynomials denoted L_n^γ(x). These polynomials play a vital role in the Wishart matrices of multivariate statistics.

Example 5: Jacobi Polynomials: w(x) = (1 − x)^{γ₁} (1 + x)^{γ₂} on [−1, 1]. This is the granddaddy of all the above. When γ₁ = γ₂ = 0, we obtain the Legendre polynomials. If γ₁ = γ₂ = −1/2, we obtain Chebyshev. Taking limits appropriately, we can obtain the Hermite and Laguerre polynomials.

There is a beautiful relationship among all of the following quantities.

Quantity 1: The moments s_k = ∫_I x^k w(x) dx.

Quantity 2: The sequence of orthogonal polynomials p_0(x), p_1(x), ..., where ∫_I p_j(x) p_k(x) w(x) dx = δ_{jk}. It is assumed that p_j(x) has degree j.

Quantity 3: Infinite symmetric tridiagonal matrices (also known as Jacobi matrices). The three-term recurrence for the orthogonal polynomials is often best seen as a tridiagonal matrix: the characteristic polynomial of the top j × j section is proportional to p_j(x). If one has a finite symmetric
tridiagonal matrix, then one obtains a finite sequence of orthogonal polynomials corresponding to a finite measure.

Quantity 4: The eigenvalues λ_i and the first components q_i of the normalized eigenvectors of the above tridiagonal matrix, as captured in w(x) = Σ_i q_i² δ(x − λ_i).

Quantity 5: Gaussian Quadrature: The abscissas are the eigenvalues of T_n, and the weights are the q_i².

Quantity 6: Continued Fractions: If we evaluate the continued fraction given by the diagonals and the off-diagonals squared, we obtain a sequence of numerators and denominators. The numerators are the orthogonal polynomials, and the denominators are closely related.

Quantity 7: Random Variables: If we normalize w(x) to have integral 1, then we have random variables with that density. For example, the Hermite density corresponds to the normal distribution, the Laguerre density corresponds to χ² distributions, and the Jacobi density corresponds to the beta distribution (and, after a change of variables, the F distribution).

Quantity 8: Jump Condition: One can seek functions Y(x) in the complex plane that are defined off the support of w(x) and satisfy a jump condition:

    lim_{ε→0⁺} { Y(x + iε) − Y(x − iε) } = π_n(x) w(x),

where all we specify about π_n(x) is that it is a polynomial of degree n. We also require the growth condition Y(x) = O(x^n) as x → ∞. The solution is unique: π_n turns out to be the n-th orthogonal polynomial, and Y(x) is its moment generating function. For more on this, see the section on the Riemann–Hilbert problem.

Quantity 9: The Lanczos Tridiagonalization Algorithm.

11.2 Orthogonal Polynomial Ensembles and Matrix Models

Let ω(x) be a weight function. For a parameter β we define the multivariate weight function

    ω_n(x; β) = Δ^β ∏_{i=1}^n ω(x_i),

where Δ = ∏_{i<j} |x_i − x_j|. Thus Δ is the absolute value of the Vandermonde determinant, i.e., of the determinant of the matrix (x_i^{j−1})_{i,j=1,...,n}. When β is understood from context, we may suppress it in the notation.
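As a quick numerical sanity check of the definition above, here is a short sketch (our own construction, using NumPy) verifying that Δ = ∏_{i<j} |x_i − x_j| equals the absolute value of the Vandermonde determinant:

```python
import numpy as np
from itertools import combinations

# Check that the pairwise-difference product Δ equals |det V| for the
# Vandermonde matrix V with entries V[i, j] = x_i**j.
rng = np.random.default_rng(3)
x = rng.standard_normal(5)

V = np.vander(x, increasing=True)   # columns 1, x, x², ...
pairwise = np.prod([abs(x[j] - x[i])
                    for i, j in combinations(range(5), 2)])
print(abs(np.linalg.det(V)), pairwise)
```

The two printed numbers agree to floating-point rounding, for any choice of distinct points x_i.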
Definition: Random Matrix Models. We say that we have a random matrix model for the multivariate weight function ω_n(x; β) if we have a random n × n matrix A_n whose eigenvalues have ω_n(x; β) as a joint density. Constructing nice random matrix models is a central problem in random matrix theory. It would be cheating to say: let x be a random variable with multivariate density ω_n(x; β) and take A_n to be the diagonal matrix with x on the diagonal.

11.2.1 The Cauchy–Binet Theorem and its Continuous Limit

Let C = AB be a matrix product of any kind. Let M(i_1, ..., i_m; j_1, ..., j_m) denote the m × m minor det(M_{i_k, j_l})_{1≤k,l≤m}.
In other words, it is the determinant of the submatrix of M formed from rows i_1, ..., i_m and columns j_1, ..., j_m. The Cauchy–Binet theorem states that

    C(i_1, ..., i_m; k_1, ..., k_m) = Σ_{j_1 < j_2 < ⋯ < j_m} A(i_1, ..., i_m; j_1, ..., j_m) B(j_1, ..., j_m; k_1, ..., k_m).

Notice that when m = 1 this is the familiar formula for matrix multiplication. When all matrices are m × m, the formula states that det C = det A det B.

Cauchy–Binet extends to matrices with infinitely many columns. If the columns are indexed by a continuous variable, we now have a vector of functions. Replacing A_{ij} with ϕ_i(x_j) and B_{jk} with ψ_k(x_j), we see that Cauchy–Binet becomes

    det C = (1/n!) ∫ ⋯ ∫ det(ϕ_i(x_j))_{i,j=1,...,n} det(ψ_k(x_j))_{k,j=1,...,n} dx_1 dx_2 ⋯ dx_n,

where C_{ik} = ∫ ϕ_i(x) ψ_k(x) dx, for i, k = 1, ..., n. Apparently, this continuous version of Cauchy–Binet may be traced back to an 1883 paper by Andréief [9].

11.2.2 Cauchy–Binet Examples

Corollary 11.1. Let A ∈ ℝ^{n×p} and let D = diag(d_1, ..., d_n) ∈ ℝ^{n×n}. We have

    det(Aᵀ D A) = Σ_{i_1 < ⋯ < i_p} d_{i_1} ⋯ d_{i_p} A(i_1, ..., i_p; 1, ..., p)².

Example 11.1. Let d_i = 1 if i ≤ l and d_i = 0 if i > l. Then with A_l = A(1:l, :) we have

    det(Aᵀ D A) = det(A_lᵀ A_l).

Example 11.2. Let d_i = 1 + z if i = 5 and d_i = 1 if i ≠ 5, so that D = diag(1, ..., 1, 1+z, 1, ..., 1). Also let Aᵀ A = I. We have

    det(Aᵀ D A) = 1 + z Σ_{i_1 < ⋯ < i_p, some i_k = 5} A(i_1, ..., i_p; 1, ..., p)² = 1 + z ‖A(5, :)‖².
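Corollary 11.1 is easy to check numerically. A minimal sketch (our own, with NumPy; the matrix sizes are arbitrary choices) compares det(AᵀDA) against the sum of d-weighted squared p × p minors:

```python
import numpy as np
from itertools import combinations

# Check Corollary 11.1: det(A^T D A) equals the sum over p-subsets
# I = {i_1 < ... < i_p} of d_{i_1}...d_{i_p} * det(A[I, :])^2.
rng = np.random.default_rng(0)
n, p = 5, 3
A = rng.standard_normal((n, p))
d = rng.standard_normal(n)

lhs = np.linalg.det(A.T @ np.diag(d) @ A)
rhs = sum(np.prod(d[list(I)]) * np.linalg.det(A[list(I), :]) ** 2
          for I in combinations(range(n), p))
print(lhs, rhs)
```

This is exactly Cauchy–Binet applied to the product (AᵀD)·A, since the p × p minors of AᵀD are the minors of Aᵀ scaled by the corresponding d_i.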
Example 11.3. Let d_i = Z_i (symbolic). Then

    det(Aᵀ D A) = Σ_{i_1 < ⋯ < i_p} Z_{i_1} ⋯ Z_{i_p} A(i_1, ..., i_p; 1, ..., p)².

This displays the squares of the p × p minors of A with symbolic labels.

Example 11.4. Let d_i = 1 + Z_i. Then

    det(Aᵀ D A) = Σ_I A(I; 1, ..., p)² + Σ_i Z_i Σ_{I ∋ i} A(I; 1, ..., p)² + Σ_{i<j} Z_i Z_j Σ_{I ⊇ {i,j}} A(I; 1, ..., p)² + ⋯,

where I ranges over the index sets {i_1 < ⋯ < i_p}.

11.3 Orthogonal Polynomial Ensembles when β = 2

In this section we assume that β = 2, so that ω_n(x) = Δ² ∏_{i=1}^n ω(x_i). For the classical weight functions ω(x), Hermitian matrix models have been constructed. We have already seen the GUE corresponding to the Hermite matrix model, and complex Wishart matrices for Laguerre. We also get the complex MANOVA matrices corresponding to Jacobi.

Notation: We define φ_n(x) = p_n(x) ω(x)^{1/2}. Thus the φ_i(x) are not generally polynomials, but they do form an orthonormal set of functions on the support of ω. It is a general fact that the level density is

    f_ω(x) = Σ_{i=0}^{n−1} φ_i(x)².

Given any function f(x), one can ask for

    E(f) ≡ E_{ω_n}( ∏_{i=1}^n f(x_i) ).

When we have a matrix model, this is E(det f(X)). It is a simple result that

    E(f) = (1/n!) ∫ ⋯ ∫ det(φ_i(x_j))²_{i=0,...,n−1; j=1,...,n} ∏_{j=1}^n f(x_j) dx_1 ⋯ dx_n.

This implies, by the continuous version of the Cauchy–Binet theorem, that

    E(f) = det C_n, where (C_n)_{ij} = ∫ φ_i(x) φ_j(x) f(x) dx.

Some important functions to use are f(x) = 1 + Σ_i z_i δ(x − y_i): the coefficients of the resulting polynomial in the z_i give the marginal (correlation) densities of k eigenvalues at the points y_i. Another important one is f(x) = 1 − χ_{[a,b]}(x), where χ_{[a,b]} is the indicator function of [a, b]. Then we obtain the probability that no eigenvalue is in the interval [a, b]. If b is infinite, we obtain the probability that all eigenvalues are less than a, that is, the distribution function for the largest eigenvalue.
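The identity E(f) = det C_n can be checked directly for small n. Below is a sketch (our own construction; the grid and the cutoff [−10, 10] are our choices) for n = 2 with the Hermite weight ω(x) = e^{−x²/2} and f = 1 − χ_{[−1,1]}, so that E(f) is the probability that neither eigenvalue lies in [−1, 1]:

```python
import numpy as np

# Orthonormal Hermite setup for n = 2: p_0 = (2π)^{-1/4}, p_1(x) = (2π)^{-1/4} x,
# and φ_i = p_i ω^{1/2}.  Integrals are simple Riemann sums on a fine grid.
x = np.linspace(-10, 10, 1001)
dx = x[1] - x[0]
w = np.exp(-x**2 / 2)
c = (2 * np.pi) ** (-0.25)
phi = np.array([c * w**0.5,
                c * x * w**0.5])
f = 1.0 - (np.abs(x) <= 1)          # f = 1 − χ_[−1,1]

# Right-hand side: det C_2 with (C_2)_{ij} = ∫ φ_i φ_j f dx.
C = np.array([[np.sum(phi[i] * phi[j] * f) * dx for j in range(2)]
              for i in range(2)])
rhs = np.linalg.det(C)

# Left-hand side: (1/2!) ∫∫ det(φ_i(x_j))² f(x) f(y) dx dy.
M = phi[0][:, None] * phi[1][None, :] - phi[1][:, None] * phi[0][None, :]
lhs = 0.5 * np.sum(M**2 * f[:, None] * f[None, :]) * dx * dx
print(lhs, rhs)
```

The two sides agree to rounding (with the same quadrature rule on both sides, the agreement is exact up to floating point), and the common value is a probability strictly between 0 and 1.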
11.4 Orthogonal Polynomials of Matrix Argument

Let x = (x_1, ..., x_n). For any β we can define the multivariate orthogonal polynomials p_κ(x) by the condition that

    ∫ p_κ(x) p_ν(x) dω_n(x; β) = δ_{κν}.

Here κ is a finite non-increasing sequence (a partition), and the leading term in p_κ(x) is x^κ. These orthogonal polynomials are symmetric. Indeed, the set of such polynomials of degree k forms a basis for the symmetric polynomials of degree k.

Conjecture: Let ω(x) = 1 on the unit circle in the complex plane. Then the orthogonal polynomials are the Jack polynomials. These polynomials are homogeneous.

Given any n × n matrix X, we can define p_κ(X) as p_κ(λ_1, ..., λ_n). Since p_κ is symmetric, we see that p_κ(X) is a polynomial in the elements of X. (For example, the determinant, the trace, or any coefficient of the characteristic polynomial can be expressed as a polynomial in the entries of the matrix.)

When β = 1, 2, and 4 we can define a measure ω(X) on real symmetric, complex Hermitian, or symplectic self-dual matrices such that

    ∫ p_κ(X) p_ν(X) ω(X) dX = δ_{κν}.

Here ω(X) = ∏_i ω(λ_i(X)) = det ω(X). Alternatively, there is a measure on real tridiagonal matrices for any such β. Similarly, we can construct a random tridiagonal matrix whose eigenvalue probability density is ω_n(x; β).
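To illustrate the claim that a symmetric polynomial of the eigenvalues is automatically a polynomial in the matrix entries, here is a small sketch (our example, not from the text) using the elementary symmetric polynomial e_2:

```python
import numpy as np
from itertools import combinations

# e_2(λ) = Σ_{i<j} λ_i λ_j is symmetric in the eigenvalues; in terms of
# matrix entries it equals ((tr X)² − tr(X²)) / 2, a polynomial in X.
rng = np.random.default_rng(1)
n = 4
X = rng.standard_normal((n, n))
lam = np.linalg.eigvals(X)

e2_eigs = sum(lam[i] * lam[j] for i, j in combinations(range(n), 2))
e2_entries = (np.trace(X) ** 2 - np.trace(X @ X)) / 2
print(e2_eigs.real, e2_entries)
```

Even though the individual eigenvalues of a real nonsymmetric X may be complex, any symmetric polynomial of them, such as e_2, is real and is recovered exactly from traces of powers of X.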