Selfadjoint Polynomials in Independent Random Matrices. Roland Speicher Universität des Saarlandes Saarbrücken


We are interested in the limiting eigenvalue distribution of an N × N random matrix as N → ∞. Typical phenomena for basic random matrix ensembles:
- almost sure convergence to a deterministic limit eigenvalue distribution
- this limit distribution can be effectively calculated

The Cauchy (or Stieltjes) Transform
For any probability measure µ on R we define its Cauchy transform
G(z) := ∫_R 1/(z − t) dµ(t).
This is an analytic function G : C⁺ → C⁻, and we can recover µ from G by the Stieltjes inversion formula
dµ(t) = −(1/π) lim_{ε→0} Im G(t + iε) dt.
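As a quick numerical illustration of the inversion formula, here is a sketch using the standard semicircle transform G(z) = (z − √(z² − 4))/2 as the test case (the branch choice and the finite ε are implementation details of mine, not part of the slides):

```python
import numpy as np

def G_semicircle(z):
    """Cauchy transform of the standard semicircle law; the product
    sqrt(z - 2) * sqrt(z + 2) picks the branch with G(z) ~ 1/z on C+."""
    return (z - np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

def density_from_G(G, t, eps=1e-6):
    """Stieltjes inversion with a small finite eps:
    dmu/dt = -(1/pi) * lim_{eps -> 0} Im G(t + i*eps)."""
    return -np.imag(G(t + 1j * eps)) / np.pi

# recovers sqrt(4 - t^2) / (2*pi) on [-2, 2] and ~0 outside
t = np.linspace(-3, 3, 7)
recovered = density_from_G(G_semicircle, t)
```

Evaluating `density_from_G` on a grid reproduces the semicircle density up to O(ε).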

Wigner random matrix and Wigner's semicircle:
G(z) = (z − √(z² − 4)) / 2
Wishart random matrix and the Marchenko-Pastur distribution:
G(z) = (z + 1 − λ − √((z − (1 + λ))² − 4λ)) / (2z)
[Plots: the semicircle density and the Marchenko-Pastur density.]
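These limit laws can be seen in a simulation; a minimal sketch (the sizes, seed, and GOE/Wishart normalizations are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Wigner matrix: real symmetric, off-diagonal entries of variance 1/N
A = rng.standard_normal((N, N)) / np.sqrt(2 * N)
X = A + A.T
wigner_evals = np.linalg.eigvalsh(X)

# Wishart matrix with ratio lambda = 1: Y = B B^T / N
B = rng.standard_normal((N, N))
Y = B @ B.T / N
wishart_evals = np.linalg.eigvalsh(Y)

# the empirical eigenvalues concentrate on [-2, 2] and [0, 4] respectively
```

Histogramming `wigner_evals` and `wishart_evals` reproduces the two densities above.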

We are now interested in the limiting eigenvalue distribution of general selfadjoint polynomials p(X_1, ..., X_k) of several independent N × N random matrices X_1, ..., X_k. Typical phenomena:
- almost sure convergence to a deterministic limit eigenvalue distribution
- this limit distribution can be effectively calculated only in very simple situations

For X Wigner, Y Wishart:
p(X, Y) = X + Y:  G(z) = G_Wishart(z − G(z))
p(X, Y) = XY + YX + X²:  ????
[Plots: eigenvalue histograms of X + Y and of XY + YX + X².]
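Even without a formula, the second distribution can at least be sampled; a small sketch (sizes, seed, and normalizations are my choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500

A = rng.standard_normal((N, N)) / np.sqrt(2 * N)
X = A + A.T                      # Wigner
B = rng.standard_normal((N, N))
Y = B @ B.T / N                  # Wishart, lambda = 1

# eigenvalues of the two polynomials from the slide
sum_evals = np.linalg.eigvalsh(X + Y)
poly_evals = np.linalg.eigvalsh(X @ Y + Y @ X + X @ X)
```

Both spectra have mean close to 1 (since tr(XY)/N ≈ 0 and tr(X²)/N ≈ tr(Y)/N ≈ 1), but only the first has a known closed-form Cauchy transform.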

Existing Results for Calculations of the Limit Eigenvalue Distribution
- Marchenko, Pastur 1967: general Wishart matrices A D A*
- Pastur 1972: deterministic + Wigner (deformed semicircle)
- Speicher, Nica 1998; Vasilchuk 2003: commutator or anticommutator X_1 X_2 ± X_2 X_1
- more general models in wireless communications (Tulino, Verdú 2004; Couillet, Debbah, Silverstein 2011): R A D A* R* or Σ_i R_i A_i D_i A_i* R_i*

Asymptotic Freeness of Random Matrices
Basic result of Voiculescu (1991): large classes of independent random matrices (like Wigner or Wishart matrices) become asymptotically freely independent with respect to ϕ = (1/N) Tr, as N → ∞.

Consequence: Reduction of Our Random Matrix Problem to the Problem of Polynomials in Freely Independent Variables
If the random matrices X_1, ..., X_k are asymptotically freely independent, then the distribution of a polynomial p(X_1, ..., X_k) is asymptotically given by the distribution of p(x_1, ..., x_k), where x_1, ..., x_k are freely independent variables, and the distribution of x_i is the asymptotic distribution of X_i.

Can We Actually Calculate Polynomials in Freely Independent Variables?
Free probability can deal effectively with simple polynomials:
- the sum of variables (Voiculescu 1986, R-transform): p(x, y) = x + y
- the product of variables (Voiculescu 1987, S-transform): p(x, y) = xy (= √y x √y)
- the commutator of variables (Nica, Speicher 1998): p(x, y) = xy − yx

There is no hope of effectively calculating more complicated or general polynomials in freely independent variables with usual free probability theory... but there is a possible way around this: linearize the problem!


The Linearization Philosophy: in order to understand polynomials in non-commuting variables, it suffices to understand matrices of linear polynomials in those variables.
- Voiculescu 1987: motivation
- Haagerup, Thorbjørnsen 2005: largest eigenvalue
- Anderson 2012: the selfadjoint version (based on the Schur complement)

Consider a polynomial p in non-commuting variables x and y. A linearization of p is an N × N matrix (with N ∈ ℕ) of the form
p̂ = [[0, u], [v, Q]],
where
- u, v, Q are matrices of the following sizes: u is 1 × (N−1); v is (N−1) × 1; Q is (N−1) × (N−1)
- each entry of u, v, Q is a polynomial in x and y of degree ≤ 1
- Q is invertible and we have p = −u Q⁻¹ v

Let p̂ = [[0, u], [v, Q]] be a linearization of p. For z ∈ C put b = [[z, 0], [0, 0]]. Then we have
b − p̂ = [[z, −u], [−v, −Q]] = [[1, uQ⁻¹], [0, 1]] · [[z − p, 0], [0, −Q]] · [[1, 0], [Q⁻¹v, 1]],
hence: z − p invertible ⇔ b − p̂ invertible.

Actually,
(b − p̂)⁻¹ = [[1, 0], [−Q⁻¹v, 1]] · [[(z − p)⁻¹, 0], [0, −Q⁻¹]] · [[1, −uQ⁻¹], [0, 1]] = [[(z − p)⁻¹, ∗], [∗, ∗]],
and we can get G_p(z) = ϕ((z − p)⁻¹) as the (1,1)-entry of the operator-valued Cauchy transform
G_p̂(b) = id ⊗ ϕ((b − p̂)⁻¹) = [[ϕ((z − p)⁻¹), ϕ(∗)], [ϕ(∗), ϕ(∗)]].

Theorem (Anderson 2012): one has
- for each p there exists a linearization p̂ (with an explicit algorithm for finding it)
- if p is selfadjoint, then this p̂ is also selfadjoint

The selfadjoint linearization of p = xy + yx + x² is
p̂ = [[0, x, (1/2)x + y], [x, 0, −1], [(1/2)x + y, −1, 0]],
because with u = (x, (1/2)x + y), v = uᵀ and Q = [[0, −1], [−1, 0]] we have
−u Q⁻¹ v = x((1/2)x + y) + ((1/2)x + y)x = xy + yx + x².

This means: the Cauchy transform G_p(z) of p = xy + yx + x² is given as the (1,1)-entry of the operator-valued (3 × 3 matrix) Cauchy transform of p̂:
G_p̂(b) = id ⊗ ϕ[(b − p̂)⁻¹], with G_p(z) = [G_p̂(b)]₁₁ for b = [[z, 0, 0], [0, 0, 0], [0, 0, 0]],
where
p̂ = [[0, 0, 0], [0, 0, −1], [0, −1, 0]] + [[0, 1, 1/2], [1, 0, 0], [1/2, 0, 0]] x + [[0, 0, 1], [0, 0, 0], [1, 0, 0]] y
is a linear polynomial, but with matrix-valued coefficients.
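The Schur-complement identity behind this holds for any substitution of selfadjoint matrices for x and y, so it can be checked directly; a small sketch (sizes, seed, and the spectral parameter z are my choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
S = rng.standard_normal((n, n)); x = S + S.T   # any symmetric x
S = rng.standard_normal((n, n)); y = S + S.T   # any symmetric y
I = np.eye(n)

p = x @ y + y @ x + x @ x

# phat = [[0, x, x/2 + y], [x, 0, -1], [x/2 + y, -1, 0]]  (block form)
phat = np.block([
    [0 * I,     x,     x / 2 + y],
    [x,         0 * I, -I       ],
    [x / 2 + y, -I,    0 * I    ],
])

z = 1.5 + 0.7j
b = np.zeros((3 * n, 3 * n), dtype=complex)
b[:n, :n] = z * I

# the (1,1) block of (b - phat)^{-1} is exactly (z - p)^{-1}
top_left = np.linalg.inv(b - phat)[:n, :n]
assert np.allclose(top_left, np.linalg.inv(z * I - p))
```

The same identity, applied inside the noncommutative probability space instead of to matrices, is what makes the (1,1)-entry of G_p̂(b) equal to G_p(z).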

p̂ = [[0, 0, 0], [0, 0, −1], [0, −1, 0]] + [[0, 1, 1/2], [1, 0, 0], [1/2, 0, 0]] x + [[0, 0, 1], [0, 0, 0], [1, 0, 0]] y
In order to understand p̂, we have to calculate the free convolution of
x̂ = [[0, x, (1/2)x], [x, 0, 0], [(1/2)x, 0, 0]]  and  ŷ = [[0, 0, y], [0, 0, −1], [y, −1, 0]]
with respect to E = id ⊗ ϕ.
E is not a scalar expectation, but a conditional expectation (e.g., a partial trace).

Let B ⊂ A. A linear map E : A → B is a conditional expectation if
E[b] = b for all b ∈ B
and
E[b₁ a b₂] = b₁ E[a] b₂ for all a ∈ A and b₁, b₂ ∈ B.
An operator-valued probability space consists of B ⊂ A and a conditional expectation E : A → B.
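The partial trace mentioned above is the basic example of such an E; the two defining properties can be verified numerically (dimensions and seed are my choices):

```python
import numpy as np

def partial_trace(a, dB, n):
    """Conditional expectation E = id (x) tr : M_dB (x) M_n -> M_dB,
    i.e. the normalized partial trace over the second tensor factor."""
    return np.einsum('ikjk->ij', a.reshape(dB, n, dB, n)) / n

rng = np.random.default_rng(3)
dB, n = 3, 5
b1 = rng.standard_normal((dB, dB))
b2 = rng.standard_normal((dB, dB))
a = rng.standard_normal((dB * n, dB * n))
B1, B2 = np.kron(b1, np.eye(n)), np.kron(b2, np.eye(n))

# E restricted to B = M_dB (x) 1 is the identity ...
assert np.allclose(partial_trace(B1, dB, n), b1)
# ... and E is a B-bimodule map: E[b1 a b2] = b1 E[a] b2
assert np.allclose(partial_trace(B1 @ a @ B2, dB, n), b1 @ partial_trace(a, dB, n) @ b2)
```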

Consider an operator-valued probability space (A, E : A → B). Random variables x_i ∈ A (i ∈ I) are free with respect to E (or free with amalgamation over B) if E[a₁ ⋯ a_n] = 0 whenever
- the a_i ∈ B⟨x_{j(i)}⟩ are polynomials in some x_{j(i)} with coefficients from B,
- E[a_i] = 0 for all i, and
- j(1) ≠ j(2) ≠ ⋯ ≠ j(n).

Consider E : A → B. Define the free cumulants κ_n^B : Aⁿ → B by
E[a₁ ⋯ a_n] = Σ_{π ∈ NC(n)} κ_π^B[a₁, ..., a_n]
- the arguments of κ_π^B are distributed according to the blocks of π
- but now: the cumulants are nested inside each other according to the nesting of the blocks of π

Example: π = {{1, 10}, {2, 5, 9}, {3, 4}, {6}, {7, 8}} ∈ NC(10),
κ_π^B[a₁, ..., a₁₀] = κ₂^B( a₁ · κ₃^B( a₂ · κ₂^B(a₃, a₄), a₅ · κ₁^B(a₆) · κ₂^B(a₇, a₈), a₉ ), a₁₀ ).
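In the scalar case B = C the nesting collapses to plain multiplication, and the moment-cumulant formula can be checked directly: for a standard semicircular element only κ₂ = 1 is nonzero, so the 2k-th moment counts the non-crossing pairings of 2k points (the Catalan numbers). A small sketch:

```python
def nc_pairings(n):
    """Number of non-crossing pair partitions of {1, ..., n}."""
    if n == 0:
        return 1
    if n % 2:
        return 0
    # element 1 must pair with an even position 2k, splitting {2,...,2k-1}
    # and {2k+1,...,n} into independent non-crossing pairing problems
    return sum(nc_pairings(2 * k - 2) * nc_pairings(n - 2 * k)
               for k in range(1, n // 2 + 1))

# moments of the standard semicircle law via the moment-cumulant formula
moments = [nc_pairings(m) for m in range(1, 9)]   # [0, 1, 0, 2, 0, 5, 0, 14]
```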

For a ∈ A define its operator-valued Cauchy transform
G_a(b) := E[(b − a)⁻¹] = Σ_{n≥0} E[b⁻¹ (a b⁻¹)ⁿ]
and its operator-valued R-transform
R_a(b) := Σ_{n≥0} κ_{n+1}^B(ab, ab, ..., ab, a) = κ₁^B(a) + κ₂^B(ab, a) + κ₃^B(ab, ab, a) + ⋯
Then
b G(b) = 1 + R(G(b)) · G(b), i.e. G(b) = (b − R(G(b)))⁻¹.
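In the scalar case this functional equation is easy to verify numerically: for the standard semicircular element the R-transform is R(z) = z, so the equation reads z G(z) = 1 + G(z)². A quick check (the branch choice in the transform is mine):

```python
import numpy as np

def G_semicircle(z):
    # Cauchy transform of the standard semicircle law, branch with G ~ 1/z on C+
    return (z - np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

# b*G(b) = 1 + R(G(b))*G(b) with R(z) = z becomes z*G = 1 + G^2
for z in (2j, 1 + 1j, -0.5 + 0.25j):
    g = G_semicircle(z)
    assert abs(z * g - (1 + g * g)) < 1e-12
```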

If x and y are free over B, then
- mixed B-valued cumulants in x and y vanish
- R_{x+y}(b) = R_x(b) + R_y(b)
- we have the subordination G_{x+y}(z) = G_x(ω(z))

Theorem (Belinschi, Mai, Speicher 2013): Let x and y be selfadjoint operator-valued random variables which are free over B. Then there exists a Fréchet analytic map ω : H⁺(B) → H⁺(B) so that G_{x+y}(b) = G_x(ω(b)) for all b ∈ H⁺(B).
Moreover, if b ∈ H⁺(B), then ω(b) is the unique fixed point of the map
f_b : H⁺(B) → H⁺(B),  f_b(w) = h_y(h_x(w) + b) + b,
and ω(b) = lim_{n→∞} f_b^{∘n}(w) for any w ∈ H⁺(B). Here
H⁺(B) := {b ∈ B : (b − b*)/(2i) > 0},  h(b) := G(b)⁻¹ − b.
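The fixed-point iteration is easy to run in the scalar case B = C, where the answer is known: the free convolution of two standard semicirculars is semicircular of variance 2. A sketch (the iteration count is my choice):

```python
import numpy as np

def G_sc(z, var=1.0):
    # Cauchy transform of a semicircular element of variance `var`
    r = 2 * np.sqrt(var)
    return (z - np.sqrt(z - r) * np.sqrt(z + r)) / (2 * var)

def h(G, w):
    return 1.0 / G(w) - w

def omega(z, Gx, Gy, iters=500):
    """Fixed point of f_z(w) = h_y(h_x(w) + z) + z from the theorem above."""
    w = z
    for _ in range(iters):
        w = h(Gy, h(Gx, w) + z) + z
    return w

# subordination: G_{x+y}(z) = G_x(omega(z)) should equal the variance-2 transform
z = 1.0 + 1.0j
G_sum = G_sc(omega(z, G_sc, G_sc))
```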

If the random matrices X_1, ..., X_k are asymptotically freely independent, then the distribution of a polynomial p(X_1, ..., X_k) is asymptotically given by the distribution of p(x_1, ..., x_k), where x_1, ..., x_k are freely independent variables, and the distribution of x_i is the asymptotic distribution of X_i.
Problem: how do we deal with a polynomial p in free variables?
Idea: linearize the polynomial and use operator-valued free convolution for its linearization p̂!

The linearization of p = xy + yx + x² is given by
p̂ = [[0, x, (1/2)x + y], [x, 0, −1], [(1/2)x + y, −1, 0]].
This means that the Cauchy transform G_p(z) is given as the (1,1)-entry of the operator-valued (3 × 3 matrix) Cauchy transform of p̂:
G_p̂(b) = id ⊗ ϕ[(b − p̂)⁻¹], with G_p(z) = [G_p̂(b)]₁₁ for b = [[z, 0, 0], [0, 0, 0], [0, 0, 0]].

But
p̂ = [[0, x, (1/2)x + y], [x, 0, −1], [(1/2)x + y, −1, 0]] = x̂ + ŷ
with
x̂ = [[0, x, (1/2)x], [x, 0, 0], [(1/2)x, 0, 0]]  and  ŷ = [[0, 0, y], [0, 0, −1], [y, −1, 0]].

So p̂ is just the sum of two operator-valued variables,
p̂ = [[0, x, (1/2)x], [x, 0, 0], [(1/2)x, 0, 0]] + [[0, 0, y], [0, 0, −1], [y, −1, 0]],
where
- we understand the operator-valued distributions of x̂ and of ŷ
- x̂ and ŷ are operator-valued freely independent!
So we can use operator-valued free convolution to calculate the operator-valued Cauchy transform of x̂ + ŷ.

So we can use operator-valued free convolution to calculate the operator-valued Cauchy transform of p̂ = x̂ + ŷ in the subordination form
G_p̂(b) = G_x̂(ω(b)),
where ω(b) is the unique fixed point in the upper half-plane H⁺(M₃(C)) of the iteration
w ↦ G_ŷ(b + G_x̂(w)⁻¹ − w)⁻¹ − (G_x̂(w)⁻¹ − w).

Input: p(x, y), G_x(z), G_y(z)
- linearize p(x, y) to p̂ = x̂ + ŷ
- get G_x̂(b) out of G_x(z) and G_ŷ(b) out of G_y(z)
- get ω(b) as the fixed point of the iteration w ↦ G_ŷ(b + G_x̂(w)⁻¹ − w)⁻¹ − (G_x̂(w)⁻¹ − w)
- G_p̂(b) = G_x̂(ω(b))
- recover G_p(z) as one entry of G_p̂(b)
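The whole pipeline can be sketched numerically for p = xy + yx + x² with x semicircular and y Marchenko-Pastur (λ = 1). Since x̂ = a·x and ŷ = c + d·y involve only one scalar variable each, their matrix-valued Cauchy transforms are plain integrals against the scalar spectral measures; the quadrature discretization and the ε-regularization of b below are my implementation choices, not part of the slides:

```python
import numpy as np

# coefficient matrices of the linearization phat = a*x + (c + d*y)
a = np.array([[0.0, 1.0, 0.5], [1.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
c = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, -1.0, 0.0]])
d = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])

# discretize the two scalar spectral measures
tx = np.linspace(-2, 2, 2001)[1:-1]        # semicircle on [-2, 2]
wx = np.sqrt(4 - tx**2); wx /= wx.sum()
s = np.linspace(0, 2, 2001)[1:-1]          # MP (lambda=1) via t = s^2, which
ty = s**2                                  # turns the density into the smooth
wy = np.sqrt(4 - s**2); wy /= wy.sum()     # sqrt(4 - s^2)/pi on [0, 2]

def G_xhat(b):
    # G_xhat(b) = int (b - a*t)^{-1} dmu_x(t), a 3x3 matrix
    res = np.linalg.inv(b[None] - a[None] * tx[:, None, None])
    return np.einsum('k,kij->ij', wx, res)

def G_yhat(b):
    # G_yhat(b) = int (b - c - d*t)^{-1} dmu_y(t)
    res = np.linalg.inv((b - c)[None] - d[None] * ty[:, None, None])
    return np.einsum('k,kij->ij', wy, res)

def G_p(z, eps=1e-3, tol=1e-10, max_iter=2000):
    """Cauchy transform of p = xy + yx + x^2 by the fixed-point iteration;
    eps keeps b strictly inside H+(M_3(C))."""
    b = np.diag([z, 0, 0]).astype(complex) + 1j * eps * np.eye(3)
    w = b.copy()
    for _ in range(max_iter):
        hx = np.linalg.inv(G_xhat(w)) - w
        w_new = np.linalg.inv(G_yhat(b + hx)) - hx
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return G_xhat(w)[0, 0]   # (1,1)-entry of G_phat(b) = G_xhat(omega(b))
```

Evaluating G_p along z = t + iε and applying Stieltjes inversion then reproduces the density shown on the next slides.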

P(X, Y) = XY + YX + X² for independent X, Y; X is Wigner and Y is Wishart
p(x, y) = xy + yx + x² for free x, y; x is semicircular and y is Marchenko-Pastur
[Plot: eigenvalue histogram of P(X, Y) versus the distribution of p(x, y) computed by the algorithm.]

P(X_1, X_2, X_3) = X_1 X_2 X_1 + X_2 X_3 X_2 + X_3 X_1 X_3 for independent X_1, X_2, X_3; X_1, X_2 Wigner, X_3 Wishart
p(x_1, x_2, x_3) = x_1 x_2 x_1 + x_2 x_3 x_2 + x_3 x_1 x_3 for free x_1, x_2, x_3; x_1, x_2 semicircular, x_3 Marchenko-Pastur
[Plot: eigenvalue histogram versus the computed distribution.]

Theorem (Belinschi, Mai, Speicher 2012): Combining the selfadjoint linearization trick with our new analysis of operator-valued free convolution we can provide an efficient and analytically controllable algorithm for calculating the asymptotic eigenvalue distribution of any selfadjoint polynomial in asymptotically free random matrices.

Outlook: How about the case of non-selfadjoint polynomials? Drop in for Belinschi's talk on Friday!