Double contour integral formulas for the sum of GUE and one matrix model. Based on arXiv:1608.05870, with Tom Claeys, Arno Kuijlaars, and Karl Liechty. Dong Wang, National University of Singapore. Workshop on Random Product Matrices, Bielefeld University, Germany, 3 August, 2016.
Gaussian Unitary Ensemble, everyone knows it. Let M be an n × n Hermitian matrix, with diagonal entries i.i.d. in the normal distribution N(0, 1), and upper-triangular entries i.i.d. in the complex normal distribution, with real and imaginary parts in N(0, 1/2). Then the random matrix M is in the Gaussian Unitary Ensemble. Its eigenvalues form a determinantal process, described by the correlation functions
R_k(x_1, \dots, x_k) = \det\big( K_n(x_i, x_j) \big)_{i,j=1}^k,
where K_n is the correlation kernel
K_n(x, y) = \sum_{k=0}^{n-1} \frac{1}{\sqrt{2\pi}\, k!} H_k(x) e^{-x^2/4} H_k(y) e^{-y^2/4}.
It is well known (see [Abramowitz Stegun]) that
H_k(x) = \frac{k!}{2\pi i} \oint e^{xt - t^2/2} t^{-k-1}\, dt = \frac{1}{i\sqrt{2\pi}} \int_{x-i\infty}^{x+i\infty} e^{(s-x)^2/2} s^k\, ds.
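These contour representations are easy to sanity-check numerically. The slide's H_k are the probabilists' Hermite polynomials He_k; substituting s = x + it turns the vertical-line integral into the real integral He_k(x) = (2π)^{-1/2} ∫ (x + it)^k e^{-t²/2} dt. A minimal sketch (ours, not from the talk; the grid parameters are arbitrary) compares it against the three-term recurrence He_{k+1}(x) = x He_k(x) − k He_{k−1}(x):

```python
import numpy as np

# Vertical-line representation: substituting s = x + i t turns
#   H_k(x) = (1/(i sqrt(2 pi))) ∫_{x-i∞}^{x+i∞} e^{(s-x)^2/2} s^k ds
# into the real integral (1/sqrt(2 pi)) ∫_R (x + i t)^k e^{-t^2/2} dt.
def hermite_via_integral(k, x, tmax=12.0, npts=48001):
    t = np.linspace(-tmax, tmax, npts)
    vals = (x + 1j*t)**k * np.exp(-t**2/2)
    return np.trapz(vals, t).real / np.sqrt(2*np.pi)

# Probabilists' Hermite polynomials via He_{k+1} = x He_k - k He_{k-1}.
def hermite_via_recurrence(k, x):
    h_prev, h_curr = 1.0, x
    if k == 0:
        return 1.0
    for j in range(1, k):
        h_prev, h_curr = h_curr, x*h_curr - j*h_prev
    return h_curr

# He_3(x) = x^3 - 3x, so He_3(1.5) = -1.125
print(hermite_via_integral(3, 1.5))
```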
Double contour integral formula for GUE. Taking the sum with the help of the telescoping trick, we have
K_n(x, y) = \frac{1}{(2\pi i)^2} \int_{-i\infty}^{i\infty} ds \oint_{\Sigma_0} dt\, \frac{e^{(s-x)^2/2}}{e^{(t-y)^2/2}} \left( \frac{s}{t} \right)^n \frac{1}{s-t},
where \Sigma_0 is a small circle around 0 and the s-contour is a vertical line not intersecting it. The local statistics of the GUE, namely the limiting Airy distribution and the limiting Sine distribution, can be derived by saddle point analysis of the double contour integral formula.
Figure: At the edge, the spacing between consecutive eigenvalues is O(n^{-2/3}); Airy distribution.
Figure: In the bulk, the spacing between consecutive eigenvalues is O(n^{-1}); Sine distribution.
Deformation of contours: Airy.
Figure: The deformed contours for the Airy limit.
Figure: The limiting local shape of the contours.
The integral converges to
K_{Airy}(x, y) = \frac{1}{(2\pi i)^2} \int ds \int dt\, \frac{e^{s^3/3 - xs}}{e^{t^3/3 - yt}} \frac{1}{s-t},
where the s-contour goes from e^{-i\pi/3}\infty to e^{i\pi/3}\infty and the t-contour from e^{-2i\pi/3}\infty to e^{2i\pi/3}\infty.
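As a numerical sanity check (our own sketch, not from the talk; the contours through ±0.5 and the grid parameters are arbitrary choices), one can evaluate the double contour integral on the deformed contours and compare it with the standard form of the Airy kernel, (Ai(x)Ai'(y) − Ai'(x)Ai(y))/(x − y), with Ai and Ai' computed from their own single contour integrals:

```python
import numpy as np

h, R = 0.01, 6.0

def contour(x0, ang):
    """Upward contour through x0 with rays at angles ±ang (midpoint rule;
    the offset grid avoids placing a node on the corner at x0)."""
    u = np.arange(-R + h/2, R, h)
    phase = np.where(u >= 0, np.exp(1j*ang), np.exp(-1j*ang))
    pts = x0 + np.abs(u)*phase
    dz = np.where(u >= 0, np.exp(1j*ang), -np.exp(-1j*ang)) * h
    return pts, dz

s, ds = contour(+0.5, np.pi/3)       # s-contour, to the right of the origin
t, dt = contour(-0.5, 2*np.pi/3)     # t-contour, to the left

def ai(x):    # Ai(x)  = (1/2 pi i) ∫ e^{s^3/3 - xs} ds
    return ((np.exp(s**3/3 - x*s) * ds).sum() / (2j*np.pi)).real

def aip(x):   # Ai'(x) = -(1/2 pi i) ∫ s e^{s^3/3 - xs} ds
    return ((-s*np.exp(s**3/3 - x*s) * ds).sum() / (2j*np.pi)).real

def k_airy(x, y):   # the double contour integral for the Airy kernel
    S, T = s[:, None], t[None, :]
    F = np.exp(S**3/3 - x*S - T**3/3 + y*T) / (S - T)
    return ((ds[:, None] * dt[None, :] * F).sum() / (2j*np.pi)**2).real

x, y = 0.1, 0.6
lhs = k_airy(x, y)
rhs = (ai(x)*aip(y) - aip(x)*ai(y)) / (x - y)
```

Shifting the contours through ±0.5 keeps them disjoint, so 1/(s − t) is never singular; the rays at angles ±π/3 and ±2π/3 are the directions of steepest decay of e^{s³/3} and e^{−t³/3}.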
Deformation of contours: Sine. We need to deform the contours to a greater degree, and allow the vertical contour to cut through the circular one.
Figure: The deformed contours for the Sine limit, plus
Figure: the contour for the residue integral (with respect to s).
The two intersection points are the saddle points. It turns out that the integral on the right is the bigger one, and gives the Sine kernel.
Other models solved by double contour integrals
1. GUE/Wishart with external source (GUE + diag(a_1, …, a_n)) [Brézin Hikami], [Zinn-Justin], [Tracy Widom], [Baik Ben Arous Péché], [El Karoui], [Bleher Kuijlaars]
2. GUE/Wishart minor process (all upper-left corners together) [Johansson Nordenstam], [Dieker Warren]
3. Upper-triangular ensemble (X*X, where the upper-triangular entries of X are i.i.d. complex normal and the diagonal ones are in independent gamma distributions) and the related Muttalib–Borodin model [Adler van Moerbeke W], [Cheliotis], [Forrester W], [Zhang]
4. Determinantal particle systems (TASEP, polynuclear growth model, Schur process, etc.) Johansson, Spohn, Borodin, Ferrari, and too many others
Pros and cons
Pro: When a double contour integral formula is obtained, the computation of asymptotics is a straightforward application of saddle point analysis.
Con: All models are related to special functions which have their own contour integral representations. If the model is defined by functions that are not special, or not special enough, then there is little hope to find a double contour formula.
Pro: Thanks to the recent development of models related to products of random matrices, we have an abundance of models associated with special functions, namely Meijer G-functions. [all participants here, and many more]
Proof of Airy and Sine universality for the product of Ginibre matrices and the Muttalib–Borodin model.
Figure: The deformed contours for the Airy limit.
Figure: For the Sine limit (the residue integral is omitted).
The schematic figures apply to both models [Liu W Zhang], [Forrester W]. The computation of the hard edge limit, which is more interesting, can be done in a technically easier way by the double contour integral formula. [Kuijlaars Zhang]
Review of one matrix model. Consider the n-dimensional random Hermitian matrix M with pdf
\frac{1}{C} \exp(-n \operatorname{Tr} V(M)),
where V is a potential. The distribution of eigenvalues is a determinantal process. To express the kernels, we consider orthogonal polynomials with weight e^{-nV(x)}:
\int p_j(x) p_k(x) e^{-nV(x)}\, dx = \delta_{jk} h_k, \qquad p_k(x) = x^k + \cdots,
and then we have two equivalent kernels:
K_n^{alg}(x, y) = \frac{1}{h_{n-1}} \frac{p_n(x) p_{n-1}(y) - p_{n-1}(x) p_n(y)}{x - y}\, e^{-nV(y)},
K_n(x, y) = \frac{1}{h_{n-1}} \frac{p_n(x) p_{n-1}(y) - p_{n-1}(x) p_n(y)}{x - y}\, e^{-\frac{n}{2} V(x)} e^{-\frac{n}{2} V(y)}.
Cases to be considered. We are interested in a particular potential function V(x) = x^4/4 - p x^2, especially when p = 1 or p is very close to 1, or more precisely, p - 1 = O(n^{-2/3}). The density of eigenvalues is shown in the figure (from [Claeys Kuijlaars]). Note that at x = 0 the density vanishes quadratically, in contrast with the square-root vanishing of the density at the two edges. We say that x = 0 is an (interior) singular point of the potential V(x) = x^4/4 - x^2.
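A quick numerical sketch (ours; the grid, n = 10, and p = 1 are illustrative choices) builds the monic orthogonal polynomials for the weight e^{−nV} with this quartic V by the Stieltjes three-term recurrence, and checks the determinantal normalization that the one-point function K_n(x, x) integrates to n:

```python
import numpy as np

n = 10                                    # number of eigenvalues / polynomials
x = np.arange(-3.0, 3.0 + 5e-4, 1e-3)     # quadrature grid
V = x**4/4 - x**2                         # p = 1, the critical case
w = np.exp(-n*V)                          # weight e^{-nV}

def ip(f, g):                             # inner product ∫ f g e^{-nV} dx
    return np.trapz(f*g*w, x)

# Stieltjes procedure: monic OPs via p_{k+1} = (x - a_k) p_k - b_k p_{k-1}
polys = [np.ones_like(x)]
h = [ip(polys[0], polys[0])]
for k in range(n - 1):
    a_k = ip(x*polys[k], polys[k]) / h[k]
    b_k = h[k]/h[k-1] if k >= 1 else 0.0
    p_prev = polys[k-1] if k >= 1 else 0.0
    polys.append((x - a_k)*polys[k] - b_k*p_prev)
    h.append(ip(polys[-1], polys[-1]))

# one-point function K_n(x, x) with the symmetric weight splitting
Kdiag = sum(p**2 * w / hk for p, hk in zip(polys, h))
total = np.trapz(Kdiag, x)                # should equal n
```

Since the same quadrature defines the h_k, the total is a consistency check of the construction rather than an independent fact; the orthogonality of the generated polynomials, however, is a genuine output of the recurrence.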
Solution to 1MM. We need to compute the limit of K_n(x, y), which reduces to the asymptotics of p_n(x) and p_{n-1}(x). The trick is that they satisfy the following Riemann–Hilbert problem for
Y(z) = \begin{pmatrix} p_n(z) & Cp_n(z) \\ -\frac{2\pi i}{h_{n-1}} p_{n-1}(z) & -\frac{2\pi i}{h_{n-1}} Cp_{n-1}(z) \end{pmatrix},
where
Cp_k(z) = \frac{1}{2\pi i} \int_{\mathbb{R}} \frac{p_k(s) e^{-nV(s)}}{s - z}\, ds, \quad k = n-1, n,
such that
1. Y_+(x) = Y_-(x) \begin{pmatrix} 1 & e^{-nV(x)} \\ 0 & 1 \end{pmatrix} for x \in \mathbb{R}.
2. Y(z) = (I + O(z^{-1})) \begin{pmatrix} z^n & 0 \\ 0 & z^{-n} \end{pmatrix} as z \to \infty.
The asymptotics of Y is obtained by the Deift–Zhou nonlinear steepest-descent method [Bleher Its], [DKMVZ]. Around the singular point 0, up to scaling constants,
n^{-1/3} K_n(n^{-1/3} x, n^{-1/3} y) \to K_{PII}(x, y) = \frac{\Phi_1(x; \sigma) \Phi_2(y; \sigma) - \Phi_2(x; \sigma) \Phi_1(y; \sigma)}{\pi (x - y)},
where \sigma is proportional to n^{2/3}(p - 1), and (\psi^{(1)} and \psi^{(2)} are defined on the next slide)
\Phi_1(x; \sigma) = \psi_1^{(1)}(x; \sigma) + \psi_1^{(2)}(x; \sigma), \qquad \Phi_2(x; \sigma) = \psi_2^{(1)}(x; \sigma) + \psi_2^{(2)}(x; \sigma).
2 × 2 Riemann–Hilbert problem associated to the Hastings–McLeod solution of Painlevé II. Let \Psi be a 2 × 2 matrix-valued function such that
1. \Psi is analytic on \mathbb{C} minus the four rays, and continuous up to the boundary.
2. \Psi_+ = \Psi_- A_j on each ray, where the jump matrix A_j is given in the figure.
3. \Psi(\zeta) = \Psi(\zeta; \sigma) = (I + O(\zeta^{-1})) e^{-i(\frac{4}{3}\zeta^3 + \sigma\zeta)\sigma_3} as \zeta \to \infty, where \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
Figure: the four rays and the triangular jump matrices A_j, with entries 0 and ±1.
Now denote the \Psi(\zeta; \sigma) in the left and right sectors by (\psi^{(1)}(\zeta; \sigma), \psi^{(2)}(\zeta; \sigma)), where \psi^{(1)} and \psi^{(2)} are 2-vectors. (Yes, \Psi in these two sectors is identical.)
Review of two matrix model. In its general form, the two matrix model has two n × n random Hermitian matrices M_1, M_2 with pdf
\frac{1}{C} \exp[-n \operatorname{Tr}(V(M_1) + W(M_2) - \tau M_1 M_2)],
where V, W are potentials and \tau is the interaction factor. We are interested in the case V(x) = x^2/2 and W(y) = y^4/4 + (\alpha/2) y^2, and in the distribution of the eigenvalues of M_1 (the distribution of the eigenvalues of M_2 is much easier); we are going to explain it below. Then the eigenvalues of M_1 form a determinantal process, with the correlation kernel
K_n(x, y) = \frac{1}{2\pi i (x - y)} \big(0,\, w_{0,n}(y),\, w_{1,n}(y),\, w_{2,n}(y)\big) Y_+^{-1}(y) Y_+(x) (1, 0, 0, 0)^T,
where Y is defined by a RHP on the next slide.
4 × 4 Riemann–Hilbert problem. Consider the following Riemann–Hilbert problem:
1. Y_+(x) = Y_-(x) \begin{pmatrix} 1 & w_{0,n}(x) & w_{1,n}(x) & w_{2,n}(x) \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} for x \in \mathbb{R}, where the exact formulas of the w_{i,n}(x) are omitted.
2. As z \to \infty, Y(z) = (I + O(z^{-1})) \operatorname{diag}(z^n, z^{-n/3}, z^{-n/3}, z^{-n/3}).
The RHP seems not too bad, but how hard it is depends on the values of \tau and \alpha in the following phase diagram (Duits, Geudens, Kuijlaars, Mo, Delvaux, Zhang, …; figure from [Duits]):
Figure: the phase diagram; the critical curves separate Cases I–IV, which are distinguished by whether 0 is in the support.
2MM with quadratic potential. Consider the 2MM with pdf
\frac{1}{C} \exp[-n \operatorname{Tr}(M_1^2/2 + W(M_2) - \tau M_1 M_2)],
that is, the potential V is quadratic. Then let
\widetilde{W}(x) = W(x) - \frac{\tau^2}{2} x^2, \qquad \widetilde{M}_1 = M_1 - \tau M_2.
Then the joint pdf for \widetilde{M}_1 and M_2 is
\frac{1}{C} \exp\big[ -n \operatorname{Tr}(\widetilde{M}_1^2/2 + \widetilde{W}(M_2)) \big],
and then \widetilde{M}_1 and M_2 are independent. We can think of M_1 as \widetilde{M}_1 + \tau M_2. So the two matrix model with one quadratic potential is equivalent to the sum of a GUE matrix (i.e., a random matrix in the 1MM with quadratic potential) and a random matrix in a 1MM [Duits].
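The completing-the-square step behind this equivalence can be checked directly on random Hermitian matrices; a minimal numpy sketch (the size, τ, and the stand-in quartic W are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 6, 0.8

def rand_herm(n):
    """A random n x n Hermitian matrix (distribution is irrelevant here)."""
    A = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

M1, M2 = rand_herm(n), rand_herm(n)
W = lambda B: np.trace(B @ B @ B @ B).real / 4      # stand-in quartic potential

# Tr(M1^2/2 + W(M2) - tau M1 M2)
lhs = np.trace(M1 @ M1).real/2 + W(M2) - tau*np.trace(M1 @ M2).real

# Tr(M1~^2/2 + W~(M2)) with M1~ = M1 - tau M2, W~(x) = W(x) - tau^2 x^2/2
M1t = M1 - tau*M2
rhs = np.trace(M1t @ M1t).real/2 + W(M2) - tau**2*np.trace(M2 @ M2).real/2
```

The identity holds for every pair of Hermitian matrices (it only uses Tr(M₁M₂) = Tr(M₂M₁)), which is exactly why the change of variables factorizes the joint pdf.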
Correlation functions for GUE + (fixed) external source. Let H be a GUE matrix and A a fixed Hermitian matrix with eigenvalues a_1, \dots, a_n. Then the correlation kernel of the eigenvalues of A + H is
\frac{1}{(2\pi i)^2} \int_{-i\infty}^{i\infty} ds \oint_{\Gamma} dt\, \frac{e^{\frac{n}{2}(s-x)^2}}{e^{\frac{n}{2}(t-y)^2}} \prod_{k=1}^n \frac{s - a_k}{t - a_k}\, \frac{1}{s - t},
where \Gamma encloses a_1, \dots, a_n. Here we can allow a_1, \dots, a_n to be random, and then we need to integrate over the distribution of a_1, \dots, a_n. What if they are the eigenvalues of a matrix model too?
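For n = 1 this formula can be evaluated numerically and compared with the exact answer: with this normalization the single eigenvalue of a + H is N(a, 1), so the kernel at x = y should equal e^{−(x−a)²/2}/√(2π). A sketch (the contour placements, grid, and the values a, x are our arbitrary choices):

```python
import numpy as np

a, x = 0.7, 1.2            # one source eigenvalue a_1 = a; evaluate density at x
du, dth = 5e-3, 2*np.pi/400

u  = np.arange(-8.0, 8.0 + du/2, du)
s  = 2.0 + 1j*u            # vertical line Re s = 2 (disjoint from the circle)
th = np.arange(400) * dth  # periodic trapezoid rule on the circle
t  = a + np.exp(1j*th)     # circle |t - a| = 1, enclosing a

S, T = s[:, None], t[None, :]
F = np.exp((S - x)**2/2 - (T - x)**2/2) * (S - a)/(T - a) / (S - T)
# ds = i du, dt = i e^{i th} dth
val = (F * np.exp(1j*th)[None, :]).sum() * (1j*du) * (1j*dth) / (2j*np.pi)**2
density = val.real

print(density)   # should match exp(-(x-a)^2/2)/sqrt(2*pi)
```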
Correlation function for GUE + 1MM. Theorem [Claeys Kuijlaars W]. Let M be a random matrix in a 1MM, with random eigenvalues a_1, \dots, a_n. Then the correlation kernel of the eigenvalues of M + H is
K_{n,2MM}(x, y) = \frac{1}{(2\pi i)^2} \int_{-i\infty}^{i\infty} ds \oint dt\, \frac{e^{\frac{n}{2}(s-x)^2}}{e^{\frac{n}{2}(t-y)^2}}\, \mathbb{E}\left[ \prod_{k=1}^n \frac{s - a_k}{t - a_k} \right] \frac{1}{s - t}
= \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} ds \int_{\mathbb{R}} dt\, \frac{e^{\frac{n}{2}(s-x)^2}}{e^{\frac{n}{2}(t-y)^2}} \underbrace{e^{-nV(t)} \frac{1}{h_{n-1}} \frac{p_n(s) p_{n-1}(t) - p_{n-1}(s) p_n(t)}{s - t}}_{K_n^{alg}(s,t)}
= \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} ds \int_{\mathbb{R}} dt\, \frac{e^{\frac{n}{2}[(s-x)^2 + V(s)]}}{e^{\frac{n}{2}[(t-y)^2 + V(t)]}} K_n(s, t).
Result in figure. Suppose the distribution of eigenvalues for M is given in the upper-left figure. Then as \tau becomes larger, the distribution of eigenvalues of M + \tau H evolves, shown in the figures clockwise, into the subcritical, critical, and then supercritical phases. Below we consider M + rH, whose kernel is given by
K^r_{n,2MM}(x, y) = \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} ds \int_{\mathbb{R}} dt\, \frac{e^{\frac{n}{2}[(s-x)^2/r + V(s)]}}{e^{\frac{n}{2}[(t-y)^2/r + V(t)]}} K_n(s, t).
Derivation: subcritical. Suppose the critical value for r is r_{cr} \in (0, \infty). Then for r \in (0, r_{cr}) there are constants c_r, c'_r > 0, depending on r in such a way that as r runs from 0 to r_{cr}, c_r, c'_r \to 0, such that for any \xi, \eta \in \mathbb{R}, if x = c_r n^{-1/3} \xi, y = c_r n^{-1/3} \eta, then the functions (s-x)^2/r + V(s) and (t-y)^2/r + V(t) have the saddle point approximations
(s-x)^2/r + V(s) \approx c'_r n^{-2/3} (u - \xi)^2, \qquad (t-y)^2/r + V(t) \approx c'_r n^{-2/3} (v - \eta)^2,
where u = n^{1/3} s, v = n^{1/3} t. Thus we have
K^r_{n,2MM}(x, y) \approx \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} du \int_{\mathbb{R}} dv\, \frac{e^{n^{1/3} c'_r (u-\xi)^2}}{e^{n^{1/3} c'_r (v-\eta)^2}} K_n(n^{-1/3} u, n^{-1/3} v)
\approx \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} du \int_{\mathbb{R}} dv\, \frac{e^{n^{1/3} c'_r (u-\xi)^2}}{e^{n^{1/3} c'_r (v-\eta)^2}} K_{PII}(u, v)
\approx K_{PII}(\xi, \eta).
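The last step is nothing more than Gaussian concentration (Laplace's method): a sharply peaked Gaussian factor pins the integration variable to η. A toy numpy illustration of this one step, with cos as a stand-in for the smooth function being sampled (our choices of ε and η):

```python
import numpy as np

eps, eta = 1e-4, 0.3
f = np.cos                     # stand-in smooth function

v = np.linspace(eta - 0.06, eta + 0.06, 24001)   # ± 6 standard deviations
gauss = np.exp(-(v - eta)**2 / (2*eps)) / np.sqrt(2*np.pi*eps)
val = np.trapz(gauss * f(v), v)

print(val, np.cos(eta))        # the two numbers agree up to O(eps)
```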
Derivation: critical. When r = r_{cr}, both c_r and c'_r become 0, and the argument on the last slide breaks down. However, we can still take r = r_{cr} + n^{-1/3} \tau, x = n^{-2/3} \xi, y = n^{-2/3} \eta, and have that
(s-x)^2/r + V(s) \approx n^{-1} (b u^2 + \xi u), \qquad (t-y)^2/r + V(t) \approx n^{-1} (b v^2 + \eta v),
where u = n^{1/3} s, v = n^{1/3} t and b depends on \tau. So we have
K^r_{n,2MM}(x, y) \approx \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} du \int_{\mathbb{R}} dv\, \frac{e^{b u^2 + \xi u}}{e^{b v^2 + \eta v}} K_{PII}(u, v).
The problem is that the integral may not be well defined, even if we consider it formally. The reason is that the sign of b depends on the sign of \tau, and can be either positive or negative, while as v \to \pm\infty, K_{PII}(u, v) does not vanish. (The correct form can be written down with the help of longer formulas, and we omit them.)
Result in formula. Here we note that the PII singularity is quite robust: if M has the PII singularity, then M + rH has it too, if r < r_{cr}. The critical kernel, the most interesting one, is [Claeys Kuijlaars Liechty W] (formally)
K^r_{n,2MM}(x, y) = \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} du \int_{\mathbb{R}} dv\, \frac{e^{a u^3 + b u^2 + \xi u}}{e^{a v^3 + b v^2 + \eta v}} K_{PII}(u, v).
If the potential is symmetric, then the parameter a vanishes, as we discussed on the previous slide. But our method allows us to consider asymmetric potentials, and generally there is a cubic term in the exponents. We can also deal with higher singularities, or singularities at the edge. The equivalence to the previous result by the 4 × 4 RHP is obtained in [Liechty W].
Tacnode Riemann–Hilbert problem. Let M be a 4 × 4 matrix-valued function, and suppose it satisfies the following Riemann–Hilbert problem:
Figure: the six sectors \Omega_0, \dots, \Omega_5 and, on their boundary rays, the constant jump matrices J_0, \dots, J_5; each J_j is a 4 × 4 triangular-type matrix with entries 0 and ±1.
1. M is analytic in each of the sectors \Omega_j, continuous up to their boundaries, and M(z) = O(1) as z \to 0.
2. On the boundaries of the sectors \Omega_j, M = M^{(j)} satisfies the jump conditions
M^{(j)}(z) = M^{(j-1)}(z) J_j, \quad j = 0, \dots, 5, \qquad M^{(-1)} \equiv M^{(5)},
for the jump matrices J_0, \dots, J_5 with constant entries specified in the figure on the last page.
3. As z \to \infty, M(z) satisfies the asymptotics
M(z) = \big( I + O(z^{-1}) \big) (v_1(z), v_2(z), v_3(z), v_4(z)),
where v_1, v_2, v_3, and v_4 are defined as
v_1(z) = \tfrac{1}{\sqrt{2}}\, e^{-\theta_1(z) + \tau z} \big( (-z)^{-1/4}, 0, i(-z)^{1/4}, 0 \big)^T,
v_2(z) = \tfrac{1}{\sqrt{2}}\, e^{-\theta_2(z) - \tau z} \big( 0, z^{-1/4}, 0, i z^{1/4} \big)^T,
v_3(z) = \tfrac{1}{\sqrt{2}}\, e^{\theta_1(z) + \tau z} \big( i(-z)^{-1/4}, 0, (-z)^{1/4}, 0 \big)^T,
v_4(z) = \tfrac{1}{\sqrt{2}}\, e^{\theta_2(z) - \tau z} \big( 0, i z^{-1/4}, 0, z^{1/4} \big)^T,
where
\theta_1(z) = \tfrac{2}{3} r_1 (-z)^{3/2} + 2 s_1 (-z)^{1/2}, \quad z \in \mathbb{C} \setminus [0, \infty),
\theta_2(z) = \tfrac{2}{3} r_2 z^{3/2} + 2 s_2 z^{1/2}, \quad z \in \mathbb{C} \setminus (-\infty, 0].
Integral representation. Then define the 4-vectors
n^{(k)}(z) = n^{(k)}(z; r_1, r_2, s_1, s_2, \tau) = Q_{\Gamma^{(k)}}(f^{(k)}, g^{(k)}), \quad k = 0, \dots, 5,
where Q_\Gamma(f, g)(z) is the 4-vector whose entries are linear combinations, with coefficients C_1, C_2, C_3, of contour integrals of the forms
\int_\Gamma e^{iz\zeta} f_a(\zeta) g_b(\zeta)\, d\zeta, \qquad \int_\Gamma e^{iz\zeta} g_a(\zeta) g_b(\zeta)\, d\zeta, \qquad \int_\Gamma e^{iz\zeta} \big( f_a(\zeta) + g_a(\zeta) \big) g_b(\zeta)\, d\zeta,
multiplied by a 4 × 4 matrix M_\Gamma whose entries are explicit elementary expressions in r_1, r_2, s_1, s_2, \tau, the constant C, and the Painlevé II functions q and u (we omit the lengthy formulas; they are given in [Liechty W]).
Here a, b, c, \gamma_1, \gamma_2 and C are explicit constants built from r_1, r_2, s_1, s_2, \tau (we omit the lengthy formulas); with them we define the function
G(\zeta) = \exp\big( i a \zeta^3 + b \zeta^2 + i c \zeta \big),
the related functions G_1, G_2 (constant multiples of G involving \gamma_1, \gamma_2, r_1, r_2 and C), and
G_3(\zeta) = \tfrac{i}{C}\, \zeta\, G_1(\zeta), \qquad G_4(\zeta) = \tfrac{i}{C}\, \zeta\, G_2(\zeta).
Finally, q and u are functions of an explicit variable \sigma built from s_1/r_1, s_2/r_2 and \tau.
Furthermore, q = q(\sigma) satisfies the Painlevé II equation with the Hastings–McLeod boundary condition (Ai is the Airy function):
q''(\sigma) = \sigma q + 2 q^3, \qquad q(\sigma) \sim \operatorname{Ai}(\sigma) \text{ as } \sigma \to +\infty,
where ' denotes the derivative with respect to \sigma, and u is the PII Hamiltonian
u(\sigma) := q'(\sigma)^2 - \sigma q(\sigma)^2 - q(\sigma)^4.
At last, we can specify the contours \Gamma^{(k)} and the functions f^{(k)} and g^{(k)} in the integrands as in the figure.
Figure: the contours \Gamma^{(k)} for n^{(0)}, \dots, n^{(5)}, with the functions f^{(k)}, g^{(k)} given in terms of \psi^{(1)}, \psi^{(2)} and \psi^{(1)} + \psi^{(2)}.
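Both q and u are numerically accessible: since the Hastings–McLeod solution decays like Ai on the right, one can integrate the PII equation backward from a large σ with Airy initial data (backward is the stable direction) and then verify the relation u'(σ) = −q(σ)², which follows from the definition of u and the PII equation. A sketch with our own discretization choices (the contour through the saddle, step sizes, and endpoints are arbitrary):

```python
import numpy as np

def airy_pair(x):
    """Ai(x) and Ai'(x) from the contour integral (1/2 pi i) ∫ e^{s^3/3 - xs} ds,
    on rays at angles ±pi/3 through the saddle point sqrt(x) (midpoint rule)."""
    x0, h, R = np.sqrt(x), 0.002, 7.0
    u = np.arange(-R + h/2, R, h)
    phase = np.where(u >= 0, np.exp(1j*np.pi/3), np.exp(-1j*np.pi/3))
    s = x0 + np.abs(u)*phase
    dz = np.where(u >= 0, np.exp(1j*np.pi/3), -np.exp(-1j*np.pi/3)) * h
    w = np.exp(s**3/3 - x*s) * dz
    return (w.sum()/(2j*np.pi)).real, (-(s*w).sum()/(2j*np.pi)).real

def pii_rhs(sig, y):
    q, p = y
    return np.array([p, sig*q + 2*q**3])      # q'' = sigma q + 2 q^3

# Backward RK4 from sigma = 8 with Airy initial data.
h = -0.001
nsteps = 7002                                 # down to sigma = 0.998
traj = np.empty((nsteps + 1, 2))
traj[0] = airy_pair(8.0)
for i in range(nsteps):
    sig = 8.0 + h*i
    y = traj[i]
    k1 = pii_rhs(sig, y)
    k2 = pii_rhs(sig + h/2, y + h*k1/2)
    k3 = pii_rhs(sig + h/2, y + h*k2/2)
    k4 = pii_rhs(sig + h, y + h*k3)
    traj[i + 1] = y + h*(k1 + 2*k2 + 2*k3 + k4)/6

ham = lambda sig, q, p: p**2 - sig*q**2 - q**4   # u(sigma) as in the text
q1 = traj[7000, 0]                               # q at sigma = 1
# central difference of u at sigma = 1, to compare with u'(sigma) = -q^2
du = (ham(0.998, *traj[7002]) - ham(1.002, *traj[6998])) / (-0.004)
```

The identity u' = −q² holds for any PII solution (differentiate u and substitute q'' = σq + 2q³), so it is a robust check of the computed trajectory.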
Theorem [Liechty W]. The solution M of the 4 × 4 RHP can be expressed by the n^{(k)}, i.e., by the integrals involving entries of \Psi:
M^{(0)} = \big( n^{(5)} - n^{(0)}, n^{(0)}, n^{(1)}, n^{(2)} \big),
M^{(1)} = \big( n^{(3)}, n^{(0)}, n^{(1)}, n^{(2)} \big),
M^{(2)} = \big( n^{(3)}, n^{(4)}, n^{(1)} + n^{(2)}, n^{(2)} \big),
M^{(3)} = \big( n^{(3)}, n^{(2)} - n^{(3)}, n^{(5)}, n^{(4)} \big),
M^{(4)} = \big( n^{(3)}, n^{(0)}, n^{(5)}, n^{(4)} \big),
M^{(5)} = \big( n^{(1)}, n^{(0)}, n^{(5)}, n^{(4)} + n^{(5)} \big).