Free Probability Theory and Random Matrices. Roland Speicher, Queen's University, Kingston, Canada


What is Operator-Valued Free Probability and Why Should Engineers Care About it? Roland Speicher, Queen's University, Kingston, Canada

Many approximations for calculating eigenvalue distributions of matrices consist in replacing independent Gaussian random variables by free (semi)circular variables. Reasons for doing so: free (semi)circular variables arise in the limit N → ∞, so we have this transition asymptotically; even for finite N, the approximation is usually quite close to the original problem; and the approximation is usually exactly calculable for each N.

Example: a selfadjoint Gaussian N × N random matrix X_N = (x_ij)_{i,j=1}^N, where the x_ij (1 ≤ i ≤ j ≤ N) are independent complex (i < j) or real (i = j) Gaussian random variables with x_ij = x̄_ji, φ(x_ij) = 0, φ(x_ij x̄_ij) = 1/N.

Replacing the independent Gaussian variables by free (semi)circular variables gives:

a selfadjoint noncommutative N × N random matrix S_N = (c_ij)_{i,j=1}^N, where the c_ij (1 ≤ i ≤ j ≤ N) are free circular (i < j) or semicircular (i = j) random variables with c_ij = c*_ji, φ(c_ij) = 0, φ(c_ij c*_ij) = 1/N.

X_N = (x_ij)_{i,j=1}^N has a complicated averaged eigenvalue distribution (i.e., with respect to tr ⊗ φ); S_N = (c_ij)_{i,j=1}^N has a very simple distribution with respect to tr ⊗ φ: for each N, S_N is a semicircular variable.

Moral: S_N is an approximation for X_N; the approximation gets better for large N, and distr(S_N) can be calculated exactly.
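The contrast above can be checked numerically: a sampled Gaussian matrix X_N has a random eigenvalue distribution that only approaches the semicircle, whose even moments are the Catalan numbers (m_2 = 1, m_4 = 2). A minimal sketch with numpy (size and seed are illustrative choices, not from the slides):

```python
import numpy as np

# Sample a selfadjoint Gaussian N x N random matrix X_N with
# phi(x_ij) = 0 and phi(x_ij conj(x_ij)) = 1/N, as on the slide.
rng = np.random.default_rng(0)
N = 1000

W = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
X = (W + W.conj().T) / np.sqrt(2 * N)   # Hermitian; each entry has variance 1/N

eigs = np.linalg.eigvalsh(X)

# For large N the eigenvalue distribution approaches the semicircle on [-2, 2],
# whose even moments are the Catalan numbers: m_2 = 1, m_4 = 2.
m2 = np.mean(eigs ** 2)
m4 = np.mean(eigs ** 4)
print(m2, m4)
```

For N = 1000 the sampled moments already match the semicircle values to a few percent.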

Why is S_N easier to treat than X_N? Taking matrices does not fit well with independence and Gaussianity; freeness and (semi)circular variables, on the other hand, go very nicely with matrices.

Freeness and (semi)circular variables go very nicely with matrices. Caveat: this is true for free (semi)circulars which are centered and all have the same variance. However, we might be interested in more general situations: the x_ij might have different variances for different i, j; the x_ij might not be centered (Ricean model); there might even be correlations between different x_ij and x_kl.

Example: non-zero mean, independent Gaussian variables with constant variance: Y_N = A_N + X_N, where A_N is a deterministic matrix of means and X_N consists of independent centered Gaussians of constant variance.

We replace this by A_N + S_N, with the same deterministic matrix A_N as before and free centered (semi)circulars of constant variance. We have: A_N is free from S_N, thus distr(A_N + S_N) = distr(A_N) ⊞ distr(S_N).
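The free convolution distr(A_N) ⊞ distr(S_N) can be computed from the scalar subordination fixed point g(z) = G_A(z − g(z)), since R_s(z) = z for a standard semicircular. A sketch comparing this with sampled eigenvalues of A_N + X_N, for the illustrative choice of A_N with eigenvalues ±1 (choice of A_N, size, and evaluation point are assumptions, not from the slides):

```python
import numpy as np

# distr(A_N) ⊞ semicircle via the subordination fixed point
# g(z) = G_A(z - g(z)), valid since R_s(z) = z for a standard semicircular.
rng = np.random.default_rng(1)
N = 1000

def G_A(w):
    # Cauchy transform of (delta_{-1} + delta_{+1}) / 2
    return 0.5 / (w - 1.0) + 0.5 / (w + 1.0)

z = 0.7 + 0.5j
g = 1.0 / z
for _ in range(300):                       # damped fixed point iteration
    g = 0.5 * g + 0.5 * G_A(z - g)
g_theory = g

# Monte Carlo: eigenvalues of A_N + X_N for one sampled Gaussian X_N
W = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
X = (W + W.conj().T) / np.sqrt(2 * N)
A = np.diag(np.concatenate([np.ones(N // 2), -np.ones(N // 2)]))
eigs = np.linalg.eigvalsh(A + X)
g_mc = np.mean(1.0 / (z - eigs))
print(g_theory, g_mc)
```

The two Stieltjes transforms agree up to finite-N fluctuations.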

Example: non-zero mean, independent Gaussian variables with varying variance: Y_N = A_N + X_N, where A_N is a deterministic matrix of means and X_N consists of independent centered Gaussians with φ(x_ij x̄_ij) = σ_ij/N.

We replace this by A_N + S_N, with the same deterministic matrix A_N as before and free centered (semi)circulars with φ(c_ij c*_ij) = σ_ij/N. Now we have a problem: S_N is not semicircular in general, and A_N and S_N are not free in general.

So what do we gain by replacing independent Gaussians by free (semi)circulars in such a case?

X = (x_ij)_{i,j=1}^N and Y = (y_kl)_{k,l=1}^N with {x_ij} and {y_kl} independent: X and Y are not independent; actually, the relation between X and Y is untreatable.

X = (x_ij)_{i,j=1}^N and Y = (y_kl)_{k,l=1}^N with {x_ij} and {y_kl} free: X and Y are not free; however, the relation between X and Y, while more complicated, is still treatable in terms of operator-valued freeness.

Let (C, φ) be a non-commutative probability space. Consider N × N matrices over C: M_N(C) := {(a_ij)_{i,j=1}^N | a_ij ∈ C}. M_N(C) = M_N(ℂ) ⊗ C is a non-commutative probability space with respect to tr ⊗ φ, but there is also an intermediate level.

Instead of mapping M_N(C) directly to ℂ via tr ⊗ φ, consider...

M_N(C) = M_N(ℂ) ⊗ C =: A → M_N(ℂ) =: B (via id ⊗ φ =: E) → ℂ (via tr), with tr ⊗ φ their composition.

Let B ⊂ A. A linear map E : A → B is a conditional expectation if E[b] = b for all b ∈ B, and E[b_1 a b_2] = b_1 E[a] b_2 for all a ∈ A and b_1, b_2 ∈ B. An operator-valued probability space consists of B ⊂ A and a conditional expectation E : A → B.
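A toy numerical model of such a conditional expectation: represent an element of A = M_2(ℂ) ⊗ C as an array of samples, let φ be the sample mean, and let E = id ⊗ φ apply the mean entrywise; the bimodule property E[b_1 a b_2] = b_1 E[a] b_2 then holds by linearity of the mean. (The sample-based model and all matrices below are illustrative, not from the slides.)

```python
import numpy as np

# Toy model of E = id ⊗ phi : M_2(C) ⊗ C -> M_2(C): a "random matrix" is an
# array of samples, phi is the sample mean, E takes the mean entrywise.
rng = np.random.default_rng(2)
S = 500  # number of samples standing in for the probability space

a = rng.standard_normal((S, 2, 2))          # a in A (a stack of sampled matrices)
b1 = np.array([[1.0, 2.0], [0.0, 1.0]])     # b1, b2 in B = M_2(C), deterministic
b2 = np.array([[0.5, 0.0], [3.0, 1.0]])

def E(x):
    # conditional expectation onto M_2(C): apply phi entrywise
    return x.mean(axis=0) if x.ndim == 3 else x

# bimodule property E[b1 a b2] = b1 E[a] b2 (exact, by linearity of the mean)
lhs = E(b1 @ a @ b2)
rhs = b1 @ E(a) @ b2
print(np.allclose(lhs, rhs))
```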

Consider an operator-valued probability space (A, E : A → B). The operator-valued distribution of a ∈ A is given by all operator-valued moments E[a b_1 a b_2 ··· b_{n−1} a] ∈ B (n ∈ ℕ, b_1, ..., b_{n−1} ∈ B). Random variables x, y ∈ A are free with respect to E (or free with amalgamation over B) if E[p_1(x) q_1(y) p_2(x) q_2(y) ···] = 0 whenever the p_i, q_j are polynomials with coefficients from B such that E[p_i(x)] = 0 for all i and E[q_j(y)] = 0 for all j.

Note: polynomials in x with coefficients from B are of the form x² b_0 x², b_1 x b_2 x b_3, b_1 x b_2 x b_3 + b_4 x b_5 x b_6, etc.; the b's and x do not commute in general!

Operator-valued freeness works mostly like ordinary freeness; one only has to take care of the order of the variables: in all expressions they have to appear in their original order! Example: if x and {y_1, y_2} are free, then one has E[y_1 x y_2] = E[y_1 E[x] y_2]; and more generally E[y_1 b_1 x b_2 y_2] = E[y_1 b_1 E[x] b_2 y_2].

Consider E : A → B. Define the free cumulants κ^B_n : A^n → B by E[a_1 ··· a_n] = Σ_{π ∈ NC(n)} κ^B_π[a_1, ..., a_n]. The arguments of κ^B_π are distributed according to the blocks of π, but now the cumulants are nested inside each other according to the nesting of the blocks of π.

Example: for π = {{1, 10}, {2, 5, 9}, {3, 4}, {6}, {7, 8}} ∈ NC(10), κ^B_π[a_1, ..., a_10] = κ^B_2(a_1 · κ^B_3(a_2 · κ^B_2(a_3, a_4), a_5 · κ^B_1(a_6) · κ^B_2(a_7, a_8), a_9), a_10).

For a ∈ A define its operator-valued Cauchy transform G_a(b) := E[(b − a)^{-1}] = Σ_{n≥0} E[b^{-1} (a b^{-1})^n] and its operator-valued R-transform R_a(b) := Σ_{n≥0} κ^B_{n+1}(ab, ab, ..., ab, a) = κ^B_1(a) + κ^B_2(ab, a) + κ^B_3(ab, ab, a) + ··· Then b G(b) = 1 + R(G(b)) · G(b), or equivalently G(b) = (b − R(G(b)))^{-1}.
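In the scalar case B = ℂ this relation can be checked in closed form: for a standard semicircular element, R(z) = z and G(z) = (z − √(z² − 4))/2, so G(z) = 1/(z − G(z)). A quick numerical sanity check (the evaluation point is an arbitrary choice):

```python
import numpy as np

# Scalar sanity check (B = C) of G(b) = 1/(b - R(G(b))): for a standard
# semicircular element, R(z) = z and G(z) = (z - sqrt(z^2 - 4))/2.
z = 2.0 + 1.0j
G = (z - np.sqrt(z * z - 4)) / 2          # branch with Im G < 0 for Im z > 0
print(G, 1.0 / (z - G))
```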

If x and y are free over B, then mixed B-valued cumulants in x and y vanish, so R_{x+y}(b) = R_x(b) + R_y(b), and G_{x+y}(b) = G_x[b − R_y(G_{x+y}(b))] (subordination). If s is a semicircular element over B, then R_s(b) = η(b), where η : B → B is the linear map given by η(b) = E[s b s].

Back to random matrices. What can we say about A_N + S_N, where A_N is a deterministic matrix of means and S_N consists of free centered (semi)circulars with φ(c_ij c*_ij) = σ_ij/N?

Consider T_N := A_N + S_N. We want its Cauchy transform g_T(z) = tr ⊗ φ[(z − T)^{-1}]. Recall the intermediate level: M_N(C) = M_N(ℂ) ⊗ C → M_N(ℂ) (via id ⊗ φ) → ℂ (via tr), with tr ⊗ φ their composition.

For T := A + S we compute g_T(z) = tr ⊗ φ[(z − T)^{-1}] = tr[id ⊗ φ((z − T)^{-1})] = tr[G_T(z)], where G_T(z) := id ⊗ φ((z − T)^{-1}).

We have nice behavior on the level of M_N(ℂ)-valued objects: A_N and S_N are free over M_N(ℂ), i.e., G_T(z) = G_A[z − R_S(G_T(z))]; and S_N is semicircular over M_N(ℂ), i.e., R_S(B) = η(B), with η : M_N(ℂ) → M_N(ℂ) linear, given by η(B) = E[S B S].

Thus the distribution of T_N = A_N + S_N is determined via its Cauchy transform g_T according to g_T(z) = tr[G_T(z)] and G_T(z) = G_A[z − η(G_T(z))] = E[(z − η(G_T(z)) − A)^{-1}]. Note: by [Helton, Far, Speicher, IMRN 2007], there exists exactly one solution of the above fixed point equation with the right positivity property!
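The fixed point equation can be iterated numerically. The sketch below uses an illustrative variance profile σ_ij and matrix of means A_N (neither taken from the slides) and compares g_T(z) = tr[G_T(z)] from the iteration with a Monte Carlo average over Gaussian matrices of matching means and variances:

```python
import numpy as np

# Iterate G_T(z) = E[(z - eta(G_T(z)) - A)^(-1)] with
# eta(B)_ii = (1/N) sum_k sigma_ik B_kk (diagonal output).
rng = np.random.default_rng(3)
N = 100
z = 0.5 + 0.8j

A = np.diag(np.linspace(-1.0, 1.0, N))     # illustrative deterministic means
s = np.linspace(0.5, 1.5, N)
sigma = np.outer(s, s)                     # illustrative symmetric variance profile

def eta(B):
    return np.diag(sigma @ np.diag(B) / N)

G = np.eye(N, dtype=complex) / z
for _ in range(400):                       # damped fixed point iteration
    G = 0.5 * G + 0.5 * np.linalg.inv(z * np.eye(N) - eta(G) - A)
g_fp = np.trace(G) / N

# Monte Carlo over Hermitian Gaussian matrices with entry variances sigma_ij/N
trials, acc = 40, 0.0 + 0.0j
for _ in range(trials):
    W = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    X = (W + W.conj().T) / np.sqrt(2) * np.sqrt(sigma / N)
    acc += np.trace(np.linalg.inv(z * np.eye(N) - A - X)) / N
g_mc = acc / trials
print(g_fp, g_mc)
```

Already at N = 100 the deterministic fixed point value is close to the averaged resolvent trace.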

Note: the more symmetries we have among the entries of S_N, the better the freeness between A_N and S_N behaves! For the considered situation, where the different c_ij are free, we have for η : M_N(ℂ) → M_N(ℂ) that actually [η(B)]_ij = E[S B S]_ij = Σ_{k,l} φ(c_ik b_kl c_lj) = Σ_{k,l} φ(c_ik c_lj) b_kl = δ_ij (1/N) Σ_k σ_ik b_kk, since φ(c_ik c_lj) = δ_kl δ_ij σ_ik/N; thus η is effectively a mapping on diagonal matrices.

Consider, in addition to E_M : M_N(C) → M_N(ℂ), (a_ij)_{i,j=1}^N ↦ (φ(a_ij))_{i,j=1}^N, also the subalgebra D_N(ℂ) := {diagonal matrices} ⊂ M_N(ℂ) and the corresponding conditional expectation E_D : M_N(C) → D_N(ℂ), (a_ij)_{i,j=1}^N ↦ diag(φ(a_11), ..., φ(a_NN)).

Then we have in our situation — A_N + S_N with A_N a deterministic matrix of means and S_N free centered (semi)circulars with φ(c_ij c*_ij) = σ_ij/N — that actually, by [Nica, Shlyakhtenko, Speicher, JFA 2002], A_N and S_N are free over D_N(ℂ), and S_N is semicircular over D_N(ℂ).

The needed level of operator-valuedness matches the structure of the entries: M_N(C) → M_N(ℂ) (via E_M) for entries with correlation; M_N(C) → D_N(ℂ) (via E_D) for free entries with varying variance; M_N(C) → ℂ (via tr ⊗ φ) for free entries with constant variance.

Let us now treat the more relevant non-selfadjoint situation H_N = A_N + C_N, with A_N a deterministic matrix of means and C_N consisting of free circulars with no symmetry condition and φ(c_ij c*_ij) = σ_ij/N. Calculate the distribution of HH*!

HH* has the same distribution as the square of T := (0 H; H* 0) = (0 A; A* 0) + (0 C; C* 0). These are 2N × 2N selfadjoint matrices of the type considered before.
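The hermitization trick rests on the fact that the 2N × 2N matrix T = (0 H; H* 0) has eigenvalues ± the singular values of H, so T² carries the eigenvalues of HH* (each with multiplicity two). A small numerical check (the sampled H is an arbitrary illustration):

```python
import numpy as np

# Eigenvalues of [[0, H], [H*, 0]] are +/- the singular values of H.
rng = np.random.default_rng(4)
N = 6
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

T = np.block([[np.zeros((N, N)), H], [H.conj().T, np.zeros((N, N))]])
eigs = np.sort(np.linalg.eigvalsh(T))           # ascending
svals = np.linalg.svd(H, compute_uv=False)      # descending
print(eigs, svals)
```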

We have that (0 A; A* 0) and (0 C; C* 0) are free over D_2N(ℂ), and (0 C; C* 0) is semicircular over D_2N(ℂ) with η((B_1 0; 0 B_2)) = (η_1(B_2) 0; 0 η_2(B_1)), where η_1(B_2) = E_{D_N}[C B_2 C*] and η_2(B_1) = E_{D_N}[C* B_1 C].

We have G_T(z) = z G_{T²}(z²) and G_{T²}(z) = (G_1(z) 0; 0 G_2(z)), so that G_T(z) = (z G_1(z²) 0; 0 z G_2(z²)). Thus G_T(z) = G_{(0 A; A* 0)}[z − R_{(0 C; C* 0)}(G_T(z))] = E_{D_2N}[(z − η(G_T(z)) − (0 A; A* 0))^{-1}] = E_{D_2N}[(z − z η_1(G_2(z²)), −A; −A*, z − z η_2(G_1(z²)))^{-1}].

This yields z G_1(z) = E_{D_N}[(1 − η_1(G_2(z)) − A_N (z − z η_2(G_1(z)))^{-1} A_N*)^{-1}] and z G_2(z) = E_{D_N}[(1 − η_2(G_1(z)) − A_N* (z − z η_1(G_2(z)))^{-1} A_N)^{-1}]. These are actually the fixed point equations of [Hachem, Loubaton, Najim, Ann. Appl. Prob. 2007] for a deterministic equivalent (à la Girko) of the square of a random matrix with non-centered, independent Gaussian entries of non-constant variance.
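The coupled fixed point equations can be iterated with the diagonal matrices G_1, G_2 stored as vectors. The sketch below uses an illustrative variance profile and matrix of means (assumptions, not from the slides) and compares the result with sampled eigenvalues of HH*:

```python
import numpy as np

# Coupled D_N-valued fixed point equations for HH*, H = A + C, with the
# diagonal matrices G_1, G_2 stored as vectors.
rng = np.random.default_rng(5)
N = 60
z = 1.5 + 1.5j

A = np.diag(np.linspace(0.0, 1.0, N))      # illustrative means
s = np.linspace(0.5, 1.5, N)
sigma = np.outer(s, s)                     # illustrative profile; variances sigma_ij/N

def eta1(g2):   # eta_1(B_2)_ii = (1/N) sum_k sigma_ik (B_2)_kk
    return sigma @ g2 / N

def eta2(g1):   # eta_2(B_1)_jj = (1/N) sum_k sigma_kj (B_1)_kk
    return sigma.T @ g1 / N

g1 = np.full(N, 1.0 / z)
g2 = np.full(N, 1.0 / z)
for _ in range(500):                       # damped iteration of both equations
    M1 = np.diag(1 - eta1(g2)) - A @ np.diag(1 / (z - z * eta2(g1))) @ A.conj().T
    M2 = np.diag(1 - eta2(g1)) - A.conj().T @ np.diag(1 / (z - z * eta1(g2))) @ A
    g1 = 0.5 * g1 + 0.5 * np.diag(np.linalg.inv(M1)) / z
    g2 = 0.5 * g2 + 0.5 * np.diag(np.linalg.inv(M2)) / z
g_fp = g1.mean()

# Monte Carlo: eigenvalues of HH* for sampled non-Hermitian Gaussians
trials, acc = 40, 0.0 + 0.0j
for _ in range(trials):
    C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = A + C * np.sqrt(sigma / N)
    lam = np.linalg.eigvalsh(H @ H.conj().T)
    acc += np.mean(1.0 / (z - lam))
g_mc = acc / trials
print(g_fp, g_mc)
```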

Conclusion: many approximations (like deterministic equivalents à la Girko) consist in replacing independent Gaussians by free (semi)circulars; operator-valued free probability allows a conceptual and streamlined treatment of those; convergence questions might also be treated more uniformly by relying on asymptotic freeness results; and this approach also allows one to treat classes of random matrices with correlations between entries.