Polynomials in Free Variables


Polynomials in Free Variables. Roland Speicher, Universität des Saarlandes, Saarbrücken. Joint work with Serban Belinschi, Tobias Mai, and Piotr Śniady.

Goal: calculation of the distribution or Brown measure of polynomials in free variables. Tools: linearization, subordination, hermitization.

We want to understand the distribution of polynomials in free variables. What we understand quite well are sums of free selfadjoint variables. So we should reduce an arbitrary polynomial to a sum of selfadjoint variables. This can be done at the expense of passing to an operator-valued framework.

Let $B \subseteq A$. A linear map $E \colon A \to B$ is a conditional expectation if
$$E[b] = b \quad \text{for all } b \in B$$
and
$$E[b_1 a b_2] = b_1 E[a] b_2 \quad \text{for all } a \in A,\ b_1, b_2 \in B.$$
An operator-valued probability space consists of $B \subseteq A$ and a conditional expectation $E \colon A \to B$.

Consider an operator-valued probability space $E \colon A \to B$. Random variables $x_i \in A$ ($i \in I$) are free with respect to $E$ (or free with amalgamation over $B$) if
$$E[a_1 \cdots a_n] = 0$$
whenever the $a_i \in B\langle x_{j(i)} \rangle$ are polynomials in some $x_{j(i)}$ with coefficients from $B$, $E[a_i] = 0$ for all $i$, and $j(1) \neq j(2) \neq \cdots \neq j(n)$.

Consider an operator-valued probability space $E \colon A \to B$. For a random variable $x \in A$, we define the operator-valued Cauchy transform
$$G(b) := E[(b - x)^{-1}] \qquad (b \in B).$$
For $x = x^*$, this is well-defined and a nice analytic map on the operator-valued upper half-plane
$$H^+(B) := \{ b \in B \mid (b - b^*)/(2i) > 0 \}.$$

Theorem (Belinschi, Mai, Speicher 2013): Let $x$ and $y$ be selfadjoint operator-valued random variables which are free over $B$. Then there exists a Fréchet-analytic map $\omega \colon H^+(B) \to H^+(B)$ such that
$$G_{x+y}(b) = G_x(\omega(b)) \qquad \text{for all } b \in H^+(B).$$
Moreover, if $b \in H^+(B)$, then $\omega(b)$ is the unique fixed point of the map
$$f_b \colon H^+(B) \to H^+(B), \qquad f_b(w) = h_y(h_x(w) + b) + b,$$
and $\omega(b) = \lim_{n \to \infty} f_b^{\circ n}(w)$ for any $w \in H^+(B)$. Here $h(b) := G(b)^{-1} - b$.
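To make the fixed-point iteration concrete, here is a minimal scalar sketch (the case $B = \mathbb{C}$), assuming $x$ and $y$ are standard semicircular elements; the function names, the branch-selection heuristic, and the test point are our own choices, not from the talk. Since $x + y$ is then semicircular of variance 2, we can compare against its explicit Cauchy transform.

```python
# Scalar sketch (B = C) of the subordination fixed-point iteration,
# assuming x and y are standard semicircular elements; names are ours.
import cmath

def g_semicircle(z):
    # Cauchy transform of the standard semicircle law,
    # G(z) = (z - sqrt(z^2 - 4)) / 2, with the branch decaying like 1/z.
    s = cmath.sqrt(z * z - 4)
    if (s / z).real < 0:   # pick the branch with sqrt(z^2-4) ~ z at infinity
        s = -s
    return (z - s) / 2

def h(g, b):
    # h(b) = G(b)^{-1} - b
    return 1 / g(b) - b

def subordination(gx, gy, b, n_iter=200):
    # omega(b) = lim f_b^n(w), where f_b(w) = h_y(h_x(w) + b) + b
    w = b
    for _ in range(n_iter):
        w = h(gy, h(gx, w) + b) + b
    return w

z = 1.0 + 1.0j
omega = subordination(g_semicircle, g_semicircle, z)
g_sum = g_semicircle(omega)          # G_{x+y}(z) = G_x(omega(z))
# x + y is semicircular of variance 2; compare with that transform:
g_exact = (z - cmath.sqrt(z * z - 8)) / 4
print(abs(g_sum - g_exact))          # should be ~ 0
```

The iteration is a strict contraction at this test point, so a couple of hundred iterations is far more than needed.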

The Linearization Philosophy: In order to understand polynomials in non-commuting variables, it suffices to understand matrices of linear polynomials in those variables.
Voiculescu 1987: motivation; Haagerup, Thorbjørnsen 2005: largest eigenvalue; Anderson 2012: the selfadjoint version (based on the Schur complement).

Consider a polynomial $p$ in non-commuting variables $x$ and $y$. A linearization of $p$ is an $N \times N$ matrix (with $N \in \mathbb{N}$) of the form
$$\hat p = \begin{pmatrix} 0 & u \\ v & Q \end{pmatrix},$$
where $u$, $v$, $Q$ are matrices of the following sizes: $u$ is $1 \times (N-1)$; $v$ is $(N-1) \times 1$; $Q$ is $(N-1) \times (N-1)$; each entry of $u$, $v$, $Q$ is a polynomial in $x$ and $y$ of degree $\leq 1$; and $Q$ is invertible with $p = -u Q^{-1} v$.

Consider a linearization $\hat p$ of $p$, so that $p = -u Q^{-1} v$, and put
$$b = \begin{pmatrix} z & 0 \\ 0 & 0 \end{pmatrix} \qquad (z \in \mathbb{C}).$$
Then we have
$$(b - \hat p)^{-1} = \begin{pmatrix} 1 & 0 \\ -Q^{-1}v & 1 \end{pmatrix} \begin{pmatrix} (z-p)^{-1} & 0 \\ 0 & -Q^{-1} \end{pmatrix} \begin{pmatrix} 1 & -uQ^{-1} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} (z-p)^{-1} & * \\ * & * \end{pmatrix}$$
and thus
$$G_{\hat p}(b) = (\mathrm{id} \otimes \varphi)\big[(b - \hat p)^{-1}\big] = \begin{pmatrix} \varphi\big((z-p)^{-1}\big) & \varphi(*) \\ \varphi(*) & \varphi(*) \end{pmatrix}.$$

Note: $\hat p$ is a sum of operator-valued free variables!
Theorem (Anderson 2012): For each $p$ there exists a linearization $\hat p$ (with an explicit algorithm for finding it); if $p$ is selfadjoint, then $\hat p$ can be chosen selfadjoint.
Conclusion: The combination of linearization and operator-valued subordination allows us to deal with the case of selfadjoint polynomials.

Input: $p(x, y)$, $G_x(z)$, $G_y(z)$.
1. Linearize $p(x, y)$ to $\hat p = \hat x + \hat y$.
2. Compute $G_{\hat x}(b)$ from $G_x(z)$ and $G_{\hat y}(b)$ from $G_y(z)$.
3. Get $\omega(b)$ as the fixed point of the iteration $w \mapsto G_{\hat y}\big(b + G_{\hat x}(w)^{-1} - w\big)^{-1} - \big(G_{\hat x}(w)^{-1} - w\big)$.
4. $G_{\hat p}(b) = G_{\hat x}(\omega(b))$.
5. Recover $G_p(z)$ as the $(1,1)$ entry of $G_{\hat p}(b)$.

Example: $p(x, y) = xy + yx + x^2$ has the linearization
$$\hat p = \begin{pmatrix} 0 & x & \frac{x}{2} + y \\ x & 0 & -1 \\ \frac{x}{2} + y & -1 & 0 \end{pmatrix}.$$
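The defining property $p = -u Q^{-1} v$ can be verified symbolically for this linearization; the following sketch uses sympy's noncommutative symbols (the verification script is ours, not from the talk).

```python
# Sketch: verify the linearization of p = xy + yx + x^2 symbolically,
# using the property p = -u Q^{-1} v (sympy, noncommutative symbols).
import sympy as sp

x, y = sp.symbols('x y', commutative=False)

u = sp.Matrix([[x, y + x / 2]])       # 1 x (N-1) row
v = sp.Matrix([[x], [y + x / 2]])     # (N-1) x 1 column
Q = sp.Matrix([[0, -1], [-1, 0]])     # invertible constant block

p = sp.expand(-(u * Q.inv() * v)[0, 0])
print(p)   # equals xy + yx + x^2 up to term ordering
```

Since $Q$ is a constant matrix here, its inverse is unproblematic even with noncommutative entries elsewhere.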

[Plot: eigenvalue histogram of $P(X, Y) = XY + YX + X^2$ for independent $X$, $Y$, where $X$ is Wigner and $Y$ is Wishart, compared with the distribution of $p(x, y) = xy + yx + x^2$ for free $x$, $y$, where $x$ is semicircular and $y$ is Marchenko–Pastur.]

Example: $p(x_1, x_2, x_3) = x_1 x_2 x_1 + x_2 x_3 x_2 + x_3 x_1 x_3$ has the linearization
$$\hat p = \begin{pmatrix}
0 & 0 & x_1 & 0 & x_2 & 0 & x_3 \\
0 & x_2 & -1 & 0 & 0 & 0 & 0 \\
x_1 & -1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & x_3 & -1 & 0 & 0 \\
x_2 & 0 & 0 & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & x_1 & -1 \\
x_3 & 0 & 0 & 0 & 0 & -1 & 0
\end{pmatrix}.$$
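For this $7 \times 7$ linearization, $Q$ contains the variables themselves, so we write its blockwise inverse by hand instead of asking a computer algebra system to invert a noncommutative matrix; the check below is our own, not from the talk.

```python
# Sketch: check the 7x7 linearization of p = x1x2x1 + x2x3x2 + x3x1x3,
# using p = -u Q^{-1} v with Q^{-1} written down blockwise by hand
# (sympy noncommutative symbols).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', commutative=False)

def block(w):
    # 2x2 block [[w, -1], [-1, 0]] together with its inverse [[0, -1], [-1, -w]]
    return sp.Matrix([[w, -1], [-1, 0]]), sp.Matrix([[0, -1], [-1, -w]])

(Q1, J1), (Q2, J2), (Q3, J3) = block(x2), block(x3), block(x1)
Q    = sp.diag(Q1, Q2, Q3)
Qinv = sp.diag(J1, J2, J3)

u = sp.Matrix([[0, x1, 0, x2, 0, x3]])
v = u.T            # v equals u transposed since all x_i are selfadjoint

assert (Q * Qinv).expand() == sp.eye(6)   # Qinv really inverts Q
p = sp.expand(-(u * Qinv * v)[0, 0])
print(p)   # equals x1x2x1 + x2x3x2 + x3x1x3 up to term ordering
```

Each $2 \times 2$ block handles one monomial of $p$, which is why the linearization of a sum of monomials is block diagonal below the first row and column.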

[Plot: eigenvalue histogram of $P(X_1, X_2, X_3) = X_1X_2X_1 + X_2X_3X_2 + X_3X_1X_3$ for independent $X_1, X_2, X_3$, where $X_1, X_2$ are Wigner and $X_3$ is Wishart, compared with the distribution of $p(x_1, x_2, x_3) = x_1x_2x_1 + x_2x_3x_2 + x_3x_1x_3$ for free $x_1, x_2, x_3$, where $x_1, x_2$ are semicircular and $x_3$ is Marchenko–Pastur.]

What about non-selfadjoint polynomials? For a measure $\mu$ on $\mathbb{C}$, its Cauchy transform
$$G_\mu(\lambda) = \int_{\mathbb{C}} \frac{1}{\lambda - z}\, d\mu(z)$$
is well-defined everywhere outside a set of $\mathbb{R}^2$-Lebesgue measure zero; however, it is analytic only outside the support of $\mu$. The measure $\mu$ can be extracted from its Cauchy transform by the formula (understood in the distributional sense)
$$\mu = \frac{1}{\pi} \frac{\partial}{\partial \bar\lambda} G_\mu(\lambda).$$

A better approach is via regularization:
$$G_{\epsilon,\mu}(\lambda) = \int_{\mathbb{C}} \frac{\bar\lambda - \bar z}{\epsilon^2 + |\lambda - z|^2}\, d\mu(z)$$
is well-defined for every $\lambda \in \mathbb{C}$. By subharmonicity arguments,
$$\mu_\epsilon = \frac{1}{\pi} \frac{\partial}{\partial \bar\lambda} G_{\epsilon,\mu}(\lambda)$$
is a positive measure on the complex plane, and one has
$$\lim_{\epsilon \to 0} \mu_\epsilon = \mu \qquad \text{(weak convergence)}.$$
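As a toy illustration of the regularization, take $\mu = \delta_0$, for which $G_{\epsilon,\mu}(\lambda) = \bar\lambda/(\epsilon^2 + |\lambda|^2)$ in closed form; the density of $\mu_\epsilon$ is then $\epsilon^2 / \big(\pi(\epsilon^2 + |\lambda|^2)^2\big)$, a bump of total mass one. The following numerical check (our own, not from the talk) confirms this via finite differences, using $\partial/\partial\bar\lambda = \tfrac{1}{2}(\partial_x + i\,\partial_y)$.

```python
# Sketch: for mu = delta_0 the regularized transform is
# G_eps(lam) = conj(lam) / (eps^2 + |lam|^2); check numerically that
# (1/pi) d/d(conj lam) G_eps gives the density eps^2 / (pi (eps^2+|lam|^2)^2).
import numpy as np

eps = 0.5
G = lambda lam: np.conj(lam) / (eps**2 + abs(lam)**2)

lam0, h = 0.7 + 0.3j, 1e-5
# d/d(conj lam) = (d/dx + i d/dy) / 2, by central differences
dGx = (G(lam0 + h) - G(lam0 - h)) / (2 * h)
dGy = (G(lam0 + 1j * h) - G(lam0 - 1j * h)) / (2 * h)
mu_eps = ((dGx + 1j * dGy) / 2 / np.pi).real

density = eps**2 / (np.pi * (eps**2 + abs(lam0)**2)**2)
print(abs(mu_eps - density))   # should be ~ 0
```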

This can be copied for general (not necessarily normal) operators $x$ in a tracial non-commutative probability space $(A, \varphi)$. Put
$$G_{\epsilon,x}(\lambda) := \varphi\Big( (\lambda - x)^* \big( (\lambda - x)(\lambda - x)^* + \epsilon^2 \big)^{-1} \Big).$$
Then
$$\mu_{\epsilon,x} = \frac{1}{\pi} \frac{\partial}{\partial \bar\lambda} G_{\epsilon,x}(\lambda)$$
is a positive measure on the complex plane, which converges weakly for $\epsilon \to 0$:
$$\mu_x := \lim_{\epsilon \to 0} \mu_{\epsilon,x} \qquad \text{(the Brown measure of } x\text{)}.$$

Hermitization Method: For given $x$ we need to calculate
$$G_{\epsilon,x}(\lambda) = \varphi\Big( (\lambda - x)^* \big( (\lambda - x)(\lambda - x)^* + \epsilon^2 \big)^{-1} \Big).$$
Let
$$X = \begin{pmatrix} 0 & x \\ x^* & 0 \end{pmatrix} \in M_2(A); \qquad \text{note } X = X^*.$$
Consider $X$ in the $M_2(\mathbb{C})$-valued probability space with respect to $E = \mathrm{id} \otimes \varphi \colon M_2(A) \to M_2(\mathbb{C})$, given by
$$E\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} \varphi(a_{11}) & \varphi(a_{12}) \\ \varphi(a_{21}) & \varphi(a_{22}) \end{pmatrix}.$$

For the argument
$$\Lambda_\epsilon = \begin{pmatrix} i\epsilon & \lambda \\ \bar\lambda & i\epsilon \end{pmatrix} \in M_2(\mathbb{C}) \qquad \text{and} \qquad X = \begin{pmatrix} 0 & x \\ x^* & 0 \end{pmatrix},$$
consider now the $M_2(\mathbb{C})$-valued Cauchy transform of $X$:
$$G_X(\Lambda_\epsilon) = E\big[ (\Lambda_\epsilon - X)^{-1} \big] = \begin{pmatrix} g_{\epsilon,\lambda,11} & g_{\epsilon,\lambda,12} \\ g_{\epsilon,\lambda,21} & g_{\epsilon,\lambda,22} \end{pmatrix}.$$
One can easily check that
$$(\Lambda_\epsilon - X)^{-1} = \begin{pmatrix} -i\epsilon \big( (\lambda - x)(\lambda - x)^* + \epsilon^2 \big)^{-1} & (\lambda - x)\big( (\lambda - x)^*(\lambda - x) + \epsilon^2 \big)^{-1} \\ (\lambda - x)^*\big( (\lambda - x)(\lambda - x)^* + \epsilon^2 \big)^{-1} & -i\epsilon \big( (\lambda - x)^*(\lambda - x) + \epsilon^2 \big)^{-1} \end{pmatrix},$$
thus $g_{\epsilon,\lambda,21} = G_{\epsilon,x}(\lambda)$.
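The relation between the $M_2$-valued resolvent and $G_{\epsilon,x}$ can be sanity-checked numerically, with $\varphi$ replaced by the normalized trace on $N \times N$ matrices (a toy stand-in for the tracial state; the setup and names are ours, not from the talk).

```python
# Numeric sanity check of the hermitization identity, with phi = normalized
# trace on N x N matrices standing in for the tracial state.
import numpy as np

rng = np.random.default_rng(0)
N, lam, eps = 4, 0.3 + 0.2j, 0.5

x = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

A = lam * np.eye(N) - x
# Direct regularized Cauchy transform G_{eps,x}(lambda)
G_direct = np.trace(A.conj().T @ np.linalg.inv(A @ A.conj().T + eps**2 * np.eye(N))) / N

# Hermitization: X = [[0, x], [x*, 0]], Lambda_eps = [[i eps, lam], [conj(lam), i eps]]
X = np.block([[np.zeros((N, N)), x], [x.conj().T, np.zeros((N, N))]])
Lam = np.block([[1j * eps * np.eye(N), lam * np.eye(N)],
                [np.conj(lam) * np.eye(N), 1j * eps * np.eye(N)]])
R = np.linalg.inv(Lam - X)
g21 = np.trace(R[N:, :N]) / N     # normalized trace of the (2,1) block

print(abs(G_direct - g21))        # should be ~ 0
```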

So for a general polynomial we should: 1. hermitize, 2. linearize, 3. subordinate. But: do (1) and (2) fit together?

Consider $p = xy$ with $x = x^*$, $y = y^*$. For this we have to calculate the operator-valued Cauchy transform of
$$P = \begin{pmatrix} 0 & xy \\ yx & 0 \end{pmatrix}.$$
Linearization means we should split this into sums of matrices in $x$ and matrices in $y$. Write
$$P = \begin{pmatrix} 0 & xy \\ yx & 0 \end{pmatrix} = \begin{pmatrix} x & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & y \\ y & 0 \end{pmatrix} \begin{pmatrix} x & 0 \\ 0 & 1 \end{pmatrix} = XYX.$$
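The factorization $P = XYX$ is easy to confirm with noncommutative symbols (a quick check of our own):

```python
# Sketch: check that P = [[0, xy], [yx, 0]] factors as X Y X, with
# X = [[x, 0], [0, 1]], Y = [[0, y], [y, 0]] (sympy, noncommutative symbols).
import sympy as sp

x, y = sp.symbols('x y', commutative=False)
X = sp.Matrix([[x, 0], [0, 1]])
Y = sp.Matrix([[0, y], [y, 0]])
P = X * Y * X
print(P)   # the off-diagonal entries are x*y and y*x, in that order
```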

$P = XYX$ is now a selfadjoint polynomial in the selfadjoint variables
$$X = \begin{pmatrix} x & 0 \\ 0 & 1 \end{pmatrix} \qquad \text{and} \qquad Y = \begin{pmatrix} 0 & y \\ y & 0 \end{pmatrix},$$
and $XYX$ has the linearization
$$\begin{pmatrix} 0 & 0 & X \\ 0 & Y & -1 \\ X & -1 & 0 \end{pmatrix}.$$

Thus
$$P = \begin{pmatrix} 0 & xy \\ yx & 0 \end{pmatrix}$$
has the linearization
$$\hat P = \begin{pmatrix}
0 & 0 & 0 & 0 & x & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & y & -1 & 0 \\
0 & 0 & y & 0 & 0 & -1 \\
x & 0 & -1 & 0 & 0 & 0 \\
0 & 1 & 0 & -1 & 0 & 0
\end{pmatrix}
= \begin{pmatrix}
0 & 0 & 0 & 0 & x & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 \\
x & 0 & -1 & 0 & 0 & 0 \\
0 & 1 & 0 & -1 & 0 & 0
\end{pmatrix}
+ \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & y & 0 & 0 \\
0 & 0 & y & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix},$$
and we can now calculate the operator-valued Cauchy transform of this via subordination.

Does the eigenvalue distribution of a polynomial in independent random matrices converge to the Brown measure of the corresponding polynomial in free variables?
Conjecture: Consider $m$ independent selfadjoint Gaussian (or, more generally, Wigner) random matrices $X_N^{(1)}, \ldots, X_N^{(m)}$ and put
$$A_N := p\big(X_N^{(1)}, \ldots, X_N^{(m)}\big), \qquad x := p(s_1, \ldots, s_m).$$
We conjecture that the eigenvalue distributions $\mu_{A_N}$ of the random matrices $A_N$ converge to the Brown measure $\mu_x$ of the limit operator $x$.

Brown measure of $xyz - 2yzx + zxy$ with $x$, $y$, $z$ free semicircles

Brown measure of $x + iy$ with $x$, $y$ free Poissons

Brown measure of $x_1x_2 + x_2x_3 + x_3x_4 + x_4x_1$