Multivariate trace inequalities. David Sutter, Mario Berta, Marco Tomamichel


What are trace inequalities and why we should care. The main differences between the classical and the quantum world are complementarity and entanglement. Quantum mechanical observables may not be simultaneously measurable (complementarity); mathematically this means that operators need not commute: A and B commute if [A, B] := AB - BA = 0. To understand QM we need to comprehend the behavior of functions of matrices that do not commute, and trace inequalities allow us to do that. Trace inequalities are powerful (mathematical) tools in proofs.

Golden-Thompson (GT) inequality (1965). Golden-Thompson: Let H_1 and H_2 be Hermitian. Then

tr e^{H_1 + H_2} ≤ tr e^{H_1} e^{H_2}

Not so easy to prove. If [H_1, H_2] = 0 then equality holds (trivial). Incredibly useful wherever matrix exponentials occur: statistical physics (bounding the partition function) [Golden-65 & Thompson-65], random matrix theory (tail bounds via the Laplace method) [Ahlswede-Winter-02], information theory (entropy inequalities) [Lieb-Ruskai-73], control theory, dynamical systems, ... Does not extend to n matrices (at least not in an obvious way).
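The inequality above can be sanity-checked numerically. A minimal sketch in numpy; the helpers `random_hermitian` and `expm_h` are mine (the eigendecomposition replaces `scipy.linalg.expm`, so only numpy is needed):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(d):
    """Random d x d Hermitian matrix."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

def expm_h(H):
    """exp(H) for Hermitian H via eigendecomposition."""
    w, U = np.linalg.eigh(H)
    return (U * np.exp(w)) @ U.conj().T

H1, H2 = random_hermitian(4), random_hermitian(4)
lhs = np.trace(expm_h(H1 + H2)).real          # tr e^{H1+H2}
rhs = np.trace(expm_h(H1) @ expm_h(H2)).real  # tr e^{H1} e^{H2}
print(lhs <= rhs + 1e-9)
```

For generic (non-commuting) H_1, H_2 the inequality is strict, so the check passes with room to spare.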

GT inequality for more than two matrices.

tr e^{H_1+H_2} ≤ tr e^{H_1} e^{H_2}

Extensions to three matrices are not immediate; the naive candidates fail:

tr e^{H_1+H_2+H_3} ≰ tr e^{H_1} e^{H_2} e^{H_3}   (the right-hand side need not even be real)
tr e^{H_1+H_2+H_3} ≰ tr e^{H_1} e^{H_2/2} e^{H_3} e^{H_2/2}

Lieb's triple matrix inequality (1973):

tr e^{H_1+H_2+H_3} ≤ ∫_0^∞ dλ tr e^{H_1} (e^{-H_2} + λ)^{-1} e^{H_3} (e^{-H_2} + λ)^{-1}

Equivalent to many other interesting statements:
Lieb's concavity theorem: A ↦ tr exp(H + log A) is concave
Strong subadditivity of quantum entropy (SSA): H(AB) + H(BC) - H(ABC) - H(B) ≥ 0

Open problem: extensions of GT for more than 3 matrices?
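Lieb's triple matrix bound can be checked by quadrature. A sketch, assuming the λ-integral form above; the substitution λ = u/(1-u) maps the half-line onto (0, 1), and all grid sizes and helper names are mine:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(d):
    """Random d x d Hermitian matrix."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

def expm_h(H):
    """exp(H) for Hermitian H via eigendecomposition."""
    w, U = np.linalg.eigh(H)
    return (U * np.exp(w)) @ U.conj().T

d = 3
H1, H2, H3 = (random_hermitian(d) for _ in range(3))
E1, E3, Em2 = expm_h(H1), expm_h(H3), expm_h(-H2)
Id = np.eye(d)

# Substitute lambda = u/(1-u): [0, inf) becomes u in (0, 1), dlambda = du/(1-u)^2.
u = np.linspace(1e-7, 1 - 1e-7, 4001)
vals = np.empty(u.size)
for i, ui in enumerate(u):
    lam = ui / (1 - ui)
    R = np.linalg.inv(Em2 + lam * Id)
    vals[i] = np.trace(E1 @ R @ E3 @ R).real / (1 - ui) ** 2
rhs = np.sum((vals[1:] + vals[:-1]) * np.diff(u)) / 2   # trapezoid rule
lhs = np.trace(expm_h(H1 + H2 + H3)).real
print(lhs <= rhs * (1 + 1e-3))
```

The integrand decays like 1/λ², so after the substitution it stays bounded and a uniform trapezoid grid suffices.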

Outline for the rest of the talk. 1. Understanding GT better (intuitive proof based on pinching) 2. Extending GT to n matrices 3. Tightening the result (using interpolation theory) 4. Application: entropy inequalities via extended GT

The spectral pinching method. Question: How do we force matrices to commute, while changing them as little as possible? Any positive definite matrix A can be written (spectral decomposition) as

A = Σ_{λ ∈ spec(A)} λ P_λ

The pinching map with respect to A is

P_A : X ↦ Σ_{λ ∈ spec(A)} P_λ X P_λ

Properties of pinching maps:
1. [P_A(X), A] = 0 for all X ≥ 0
2. tr P_A(X) A = tr X A for all X ≥ 0 (the trace is cyclic, i.e., tr AB = tr BA)
3. P_A(X) ≥ X / |spec(A)| for all X ≥ 0

Operator inequality: A ≥ B means A - B ≥ 0.
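A sketch implementation of the pinching map, checking the three properties above on a small example; the function and variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

def pinch(A, X, decimals=8):
    """P_A(X) = sum over distinct eigenvalues lambda of A of P_lambda X P_lambda."""
    w, U = np.linalg.eigh(A)
    key = np.round(w, decimals)            # group numerically equal eigenvalues
    out = np.zeros_like(X, dtype=complex)
    for lam in np.unique(key):
        V = U[:, key == lam]               # basis of the lambda-eigenspace
        P = V @ V.conj().T                 # spectral projector P_lambda
        out += P @ X @ P
    return out

# A positive definite with a degenerate eigenvalue, X positive semi-definite
A = np.diag([1.0, 1.0, 2.0, 3.0]).astype(complex)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
X = M @ M.conj().T

PX = pinch(A, X)
n_spec = len(np.unique(np.round(np.linalg.eigvalsh(A), 8)))   # |spec(A)| = 3

print(np.allclose(PX @ A - A @ PX, 0))                  # property 1
print(np.isclose(np.trace(PX @ A), np.trace(X @ A)))    # property 2
print(np.linalg.eigvalsh(PX - X / n_spec).min() >= -1e-9)  # property 3
```

Properties 1 and 2 are exact identities (up to float error); property 3 is the pinching inequality with |spec(A)| = 3 here.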

An intuitive proof of the GT inequality. Golden-Thompson: Let H_1 and H_2 be Hermitian. Then tr e^{H_1+H_2} ≤ tr e^{H_1} e^{H_2}. Any Hermitian matrix H can be written as log A for some positive definite matrix A, so let H_k := log A_k, i.e., A_k = e^{H_k} for k ∈ {1, 2}. Equivalent formulation: Let A_1 and A_2 be positive definite matrices. Then

tr exp(log A_1 + log A_2) ≤ tr A_1 A_2

An intuitive proof of the GT inequality (con't). To show: tr exp(log A_1 + log A_2) ≤ tr A_1 A_2.

log tr exp(log A_1 + log A_2)
= (1/m) log tr exp(log A_1^{⊗m} + log A_2^{⊗m})
  [the trace is multiplicative under tensor products, i.e., tr B^{⊗m} = (tr B)^m]
≤ (1/m) log tr exp( log A_1^{⊗m} + log P_{A_1^{⊗m}}(A_2^{⊗m}) ) + (log poly(m))/m
  [pinching property 3: P_A(X) ≥ X/|spec(A)|; |spec(A^{⊗m})| ≤ binom(m+d-1, d-1) = poly(m); log(·) is operator monotone, i.e., X ≤ Y implies log X ≤ log Y; tr exp(·) is monotone; e.g., if spec(A) = {λ_1, λ_2} then spec(A^{⊗2}) = {λ_1^2, λ_1 λ_2, λ_2 λ_1, λ_2^2}]
= (1/m) log tr A_1^{⊗m} P_{A_1^{⊗m}}(A_2^{⊗m}) + (log poly(m))/m
  [pinching property 1: [P_A(X), A] = 0, and log A + log B = log AB if [A, B] = 0]
= (1/m) log tr A_1^{⊗m} A_2^{⊗m} + (log poly(m))/m
  [pinching property 2: tr P_A(X) A = tr X A]
= log tr A_1 A_2 + (log poly(m))/m
  [the trace is multiplicative under tensor products]
= log tr A_1 A_2, letting m → ∞, since lim_{m→∞} (log poly(m))/m = 0.

Why should this be intuitive?

Extension of GT to n matrices. The same proof technique can be applied (pinch iteratively). Fact: For any A > 0 there exists a probability measure μ on R such that

P_A(X) = ∫ μ(dt) A^{it} X A^{-it}

Note that A^{it} is a unitary that commutes with A. For three matrices we find

tr e^{H_1+H_2+H_3} ≤ sup_{t ∈ R} tr e^{H_1} e^{(1+it)H_2/2} e^{H_3} e^{(1-it)H_2/2}

The same is true for n matrices (each additional matrix gives an additional pair of unitaries). Example: n = 4:

tr e^{H_1+H_2+H_3+H_4} ≤ sup_{t_1,t_2 ∈ R} tr e^{H_1} e^{(1+it_1)H_2/2} e^{(1+it_2)H_3/2} e^{H_4} e^{(1-it_2)H_3/2} e^{(1-it_1)H_2/2}

Can we replace the supremum by something independent of the H_k?

Extension of GT to n matrices (con't). n-matrix extension of GT: Let p ≥ 1, n ∈ N and consider a collection {H_k}_{k=1}^n of Hermitian matrices. Then

log ‖ exp( Σ_{k=1}^n H_k ) ‖_p ≤ ∫_{-∞}^{∞} dt β_0(t) log ‖ Π_{k=1}^n exp( (1+it) H_k ) ‖_p

where β_0(t) := (π/2) (cosh(πt) + 1)^{-1}.

[Plot of β_0(t) for t ∈ [-3, 3]]

Let n = 3 and p = 2:

tr e^{H_1+H_2+H_3} ≤ ∫_{-∞}^{∞} dt β_0(t) tr e^{H_1} e^{(1+it)H_2/2} e^{H_3} e^{(1-it)H_2/2} = ∫_0^∞ dλ tr e^{H_1} (e^{-H_2} + λ)^{-1} e^{H_3} (e^{-H_2} + λ)^{-1}

This reproduces Lieb's triple matrix inequality. The proof uses complex interpolation theory (Stein-Hirschman; see [Junge-Renner-S-Wilde-Winter-15]). Complex interpolation theory has been used in QIT recently, e.g., [Beigi-13], [Dupuis-14], [Wilde-15].
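Both the normalization of β_0 and the three-matrix bound (n = 3, p = 2 form) can be checked numerically. A sketch; the grid sizes, tolerances and helper names are mine:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_hermitian(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

def expm_z(H, z):
    """exp(z*H) for Hermitian H and a complex scalar z."""
    w, U = np.linalg.eigh(H)
    return (U * np.exp(z * w)) @ U.conj().T

def beta0(t):
    return (np.pi / 2) / (np.cosh(np.pi * t) + 1)

# beta_0 integrates to 1 (its tails decay like e^{-pi|t|}, so [-10, 10] suffices)
t = np.linspace(-10, 10, 40001)
b = beta0(t)
norm = np.sum((b[1:] + b[:-1]) * np.diff(t)) / 2
print(norm)

# Three-matrix GT via the rotated-product integral
d = 3
H1, H2, H3 = (random_hermitian(d) for _ in range(3))
E1, E3 = expm_z(H1, 1), expm_z(H3, 1)
ts = np.linspace(-10, 10, 2001)
vals = np.empty(ts.size)
for i, tt in enumerate(ts):
    M = expm_z(H2, (1 + 1j * tt) / 2)   # e^{(1+it)H2/2}; its adjoint is e^{(1-it)H2/2}
    vals[i] = beta0(tt) * np.trace(E1 @ M @ E3 @ M.conj().T).real
rhs = np.sum((vals[1:] + vals[:-1]) * np.diff(ts)) / 2
lhs = np.trace(expm_z(H1 + H2 + H3, 1)).real
print(lhs <= rhs * (1 + 1e-3))
```

Each integrand term tr e^{H_1} M e^{H_3} M† is nonnegative (it is tr Y Y† for Y = e^{H_1/2} M e^{H_3/2}), so the truncation of the t-range only makes the check more stringent.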

Applications. Approximate quantum Markov chains. Strengthened strong subadditivity of entropy.

Approximate quantum Markov chains (A - B - C). Definition: A density matrix ρ_ABC is a quantum Markov chain (QMC) if there exists a recovery map R_{B→BC} such that ρ_ABC = (I_A ⊗ R_{B→BC})(ρ_AB).

Theorem [Petz-88]: ρ_ABC is a QMC iff I(A : C|B) = 0, with

R_{B→BC} : X_B ↦ ρ_BC^{1/2} ( ρ_B^{-1/2} X_B ρ_B^{-1/2} ⊗ id_C ) ρ_BC^{1/2}

Question: What about states such that I(A : C|B) ≤ ε?

Theorem [Fawzi-Renner-14]: For any ρ_ABC there exists R_{B→BC} such that

I(A : C|B)_ρ ≥ -2 log F( ρ_ABC, R_{B→BC}(ρ_AB) ) ≥ 0
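The exactness of the Petz recovery map on a QMC can be verified numerically. A minimal sketch for the simplest QMC, ρ_ABC = ρ_AB ⊗ ρ_C (for which I(A : C|B) = 0); the helper names and the eigenvalue clamp in `powm` are mine:

```python
import numpy as np

rng = np.random.default_rng(6)

def rand_density(d):
    """Random density matrix (positive definite, unit trace)."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho).real

def powm(A, p):
    """A^p for positive semi-definite A (tiny eigenvalues clamped)."""
    w, U = np.linalg.eigh(A)
    return (U * np.maximum(w, 1e-15) ** p) @ U.conj().T

dA, dB, dC = 2, 2, 2
rho_AB = rand_density(dA * dB)
rho_C = rand_density(dC)
rho_ABC = np.kron(rho_AB, rho_C)   # tensor ordering A (x) B (x) C: a simple QMC

# Marginals
rho_B = np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=0, axis2=2)
rho_BC = np.kron(rho_B, rho_C)

# Petz recovery map R_{B->BC}, applied as I_A (x) R to rho_AB
IA, IC = np.eye(dA), np.eye(dC)
Y = np.kron(IA, powm(rho_B, -0.5)) @ rho_AB @ np.kron(IA, powm(rho_B, -0.5))
S = np.kron(IA, powm(rho_BC, 0.5))
recovered = S @ np.kron(Y, IC) @ S

print(np.allclose(recovered, rho_ABC))
```

For this product state the recovery reproduces ρ_ABC exactly; for a state with I(A : C|B) > 0 the same code would exhibit the recovery error that the Fawzi-Renner bound controls.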

Why the classical case is easy. Theorem [Fawzi-Renner-14]: For any ρ_ABC there exists R_{B→BC} such that I(A : C|B)_ρ ≥ -2 log F(ρ_ABC, R_{B→BC}(ρ_AB)) ≥ 0. Suppose A, B, and C are classical (i.e., ρ_AB, ρ_BC, and ρ_B are diagonal). Then

I(A : C|B)_ρ = D( ρ_ABC ‖ exp(log ρ_AB + log ρ_BC - log ρ_B) )
= D( ρ_ABC ‖ ρ_BC^{1/2} ( ρ_B^{-1/2} ρ_AB ρ_B^{-1/2} ⊗ id_C ) ρ_BC^{1/2} )
= D( ρ_ABC ‖ (I_A ⊗ R_{B→BC})(ρ_AB) )

using that log X + log Y = log XY if [X, Y] := XY - YX = 0, and the claim follows from D(ρ‖σ) ≥ -2 log F(ρ, σ). If A, B, and C are quantum, the density matrices ρ_AB, ρ_BC, and ρ_B are not diagonal and do not commute.
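In the classical case everything reduces to joint probability tables, so the exactness of the (classical) Petz recovery p_rec(a,b,c) = p(a,b) p(b,c) / p(b) on a Markov chain can be verified directly. A sketch; the distributions and names are mine:

```python
import numpy as np

rng = np.random.default_rng(4)

# Build a classical Markov chain A - B - C: p(a,b,c) = p(a) p(b|a) p(c|b)
nA, nB, nC = 2, 3, 2
pA = rng.dirichlet(np.ones(nA))
pBgA = rng.dirichlet(np.ones(nB), size=nA)     # rows: p(b|a)
pCgB = rng.dirichlet(np.ones(nC), size=nB)     # rows: p(c|b)
p = np.einsum('a,ab,bc->abc', pA, pBgA, pCgB)  # joint p(a,b,c)

pAB = p.sum(axis=2)
pBC = p.sum(axis=0)
pB = p.sum(axis=(0, 2))

# Classical Petz recovery: p_rec(a,b,c) = p(a,b) p(b,c) / p(b)
p_rec = np.einsum('ab,bc,b->abc', pAB, pBC, 1 / pB)
print(np.allclose(p, p_rec))   # exact recovery for a Markov chain

# I(A:C|B) = sum_{a,b,c} p(a,b,c) log[ p(a,b,c) p(b) / (p(a,b) p(b,c)) ] = 0 here
Icond = np.sum(p * np.log(p * pB[None, :, None]
                          / (pAB[:, :, None] * pBC[None, :, :])))
print(abs(Icond) < 1e-12)
```

The commutation step in the slide is exactly what makes `p_rec` factor this way: for diagonal matrices the operator products become entrywise products of probabilities.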

Details about Fawzi-Renner-14. Theorem [Fawzi-Renner-14]: For any ρ_ABC there exists R_{B→BC} such that I(A : C|B)_ρ ≥ -2 log F(ρ_ABC, R_{B→BC}(ρ_AB)) ≥ 0. Measured relative entropy: D_M(ρ‖σ) := sup_M D( M(ρ) ‖ M(σ) ).
1. D_M(ρ‖σ) ≥ -2 log F(ρ, σ)
2. D_M(ρ‖σ) = D(ρ‖σ) iff [ρ, σ] = 0
There are several generalizations and improvements of the Fawzi-Renner bound (see QIP 2016). Open question: a bound that is tight in the classical case with an explicit and universal recovery map?

Application: Strengthened strong subadditivity.

Variational formula for relative entropy [Petz-88]:

D(ρ‖σ) = sup_{ω>0} tr ρ log ω + 1 - tr exp(log σ + log ω)

Variational formula for measured relative entropy [Berta-Fawzi-Tomamichel-15]:

D_M(ρ‖σ) = sup_{ω>0} tr ρ log ω + 1 - tr σω

I(A : C|B)_ρ
= D( ρ_ABC ‖ exp(log ρ_AB + log ρ_BC - log ρ_B) )
  [follows by the definition D(ρ‖σ) := tr ρ log ρ - tr ρ log σ]
= sup_{ω>0} tr ρ_ABC log ω + 1 - tr exp(log ρ_AB + log ρ_BC - log ρ_B + log ω)
  [variational formula for relative entropy]
≥ sup_{ω>0} tr ρ_ABC log ω + 1 - ∫_{-∞}^{∞} dt β_0(t) tr ρ_BC^{(1+it)/2} ( ρ_B^{-(1+it)/2} ρ_AB ρ_B^{-(1-it)/2} ⊗ id_C ) ρ_BC^{(1-it)/2} ω
  [4-matrix extension of GT (n = 4 and p = 2)]
= D_M( ρ_ABC ‖ R_{B→BC}(ρ_AB) )
  [variational formula for measured relative entropy]

with R_{B→BC}(·) = ∫_{-∞}^{∞} dt β_0(t) ρ_BC^{(1+it)/2} ( ρ_B^{-(1+it)/2} (·) ρ_B^{-(1-it)/2} ⊗ id_C ) ρ_BC^{(1-it)/2}.

Strengthened strong subadditivity (con't). We just saw that

Theorem: I(A : C|B)_ρ ≥ D_M( ρ_ABC ‖ R_{B→BC}(ρ_AB) ) for

R_{B→BC}(·) = ∫_{-∞}^{∞} dt β_0(t) ρ_BC^{(1+it)/2} ( ρ_B^{-(1+it)/2} (·) ρ_B^{-(1-it)/2} ⊗ id_C ) ρ_BC^{(1-it)/2}

Tight in the commutative case. Explicit recovery map that is universal (it depends only on ρ_BC). Proof based (only) on the 4-matrix extension of GT. Can be generalized to monotonicity of relative entropy. Improves Fawzi-Renner and its follow-up papers.

Conclusions (arXiv:1604.03023, Commun. Math. Phys. 2016). If matrices do not commute, things get complicated. Trace inequalities are powerful tools expressing relations between matrices that do not commute. The spectral pinching method is an intuitive approach to prove matrix (trace) inequalities. Applications: strengthening of strong subadditivity (the FR bound); hopefully many more (random matrix theory? other entropy inequalities? ...).

Thank you

More trace inequalities. Let A and B be positive definite matrices and q ∈ R_+. Then A^q := exp(q log A) is well-defined.

Araki-Lieb-Thirring: Let r ∈ (0, 1]. Then

tr (B^{r/2} A^r B^{r/2})^{q/r} ≤ tr (B^{1/2} A B^{1/2})^q

If r ≥ 1 the inequality holds in the opposite direction. Implies the GT inequality via the Lie-Trotter formula

lim_{r→0} ( Π_{k=1}^n C_k^r )^{1/r} = exp( Σ_{k=1}^n log C_k )

For q = 1 this gives tr exp(log A + log B) ≤ tr AB.

Exercise: Prove ALT via the spectral pinching method. We can prove extensions to n matrices via pinching and/or interpolation theory.
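A numerical sketch of ALT and of its q = 1 consequence; the parameter choices and helper names are mine:

```python
import numpy as np

rng = np.random.default_rng(5)

def powm(A, p):
    """A^p := exp(p log A) for a positive definite Hermitian matrix A."""
    w, U = np.linalg.eigh(A)
    return (U * w ** p) @ U.conj().T

def logm(A):
    """log A for a positive definite Hermitian matrix A."""
    w, U = np.linalg.eigh(A)
    return (U * np.log(w)) @ U.conj().T

def rand_pd(d):
    """Random positive definite matrix."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return M @ M.conj().T + 0.1 * np.eye(d)

d, q, r = 4, 2.0, 0.5
A, B = rand_pd(d), rand_pd(d)

# ALT for r in (0,1]: tr (B^{r/2} A^r B^{r/2})^{q/r} <= tr (B^{1/2} A B^{1/2})^q
lhs = np.trace(powm(powm(B, r / 2) @ powm(A, r) @ powm(B, r / 2), q / r)).real
rhs = np.trace(powm(powm(B, 0.5) @ A @ powm(B, 0.5), q)).real
print(lhs <= rhs + 1e-9)

# q = 1, r -> 0 recovers GT in the form tr exp(log A + log B) <= tr A B
gt_lhs = np.sum(np.exp(np.linalg.eigvalsh(logm(A) + logm(B))))
gt_rhs = np.trace(A @ B).real
print(gt_lhs <= gt_rhs + 1e-9)
```

Note that B^{r/2} A^r B^{r/2} is Hermitian and positive definite, so the outer power q/r is again computed by eigendecomposition.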

Summary of results.

n-matrix extension of ALT: Let p ≥ 1, r ∈ (0, 1], n ∈ N, and consider a collection {A_k}_{k=1}^n of positive semi-definite matrices. Then

(1/r) log ‖ Π_{k=1}^n A_k^r ‖_{p/r} ≤ ∫_{-∞}^{∞} dt β_r(t) log ‖ Π_{k=1}^n A_k^{1+it} ‖_p

where β_r(t) := sin(πr) / (2r (cosh(πt) + cos(πr))) is a probability distribution on R.

[Plot of β_r(t) for r ∈ {0, 1/4, 1/2, 3/4}]

The proof uses the Stein-Hirschman interpolation theorem. Using Lie-Trotter (i.e., r → 0) we get as a corollary the

n-matrix extension of GT: Let p ≥ 1, n ∈ N and consider a collection {H_k}_{k=1}^n of Hermitian matrices. Then

log ‖ exp( Σ_{k=1}^n H_k ) ‖_p ≤ ∫_{-∞}^{∞} dt β_0(t) log ‖ Π_{k=1}^n exp( (1+it) H_k ) ‖_p

Stein-Hirschman operator interpolation theorem. Strengthening of the Hadamard three-lines theorem; see [Junge-Renner-S-Wilde-Winter-15]. Let S := {z ∈ C : 0 < Re(z) < 1} and let L(H) be the space of bounded linear operators acting on H. Let G be a map from the closure of S to L(H) that is uniformly bounded, holomorphic on S, and continuous on the boundary ∂S. Let θ ∈ (0, 1) and define p_θ by 1/p_θ = (1-θ)/p_0 + θ/p_1, where p_0, p_1 ∈ [1, ∞]. Then

log ‖G(θ)‖_{p_θ} ≤ ∫_{-∞}^{∞} dt ( β_{1-θ}(t) log ‖G(it)‖_{p_0}^{1-θ} + β_θ(t) log ‖G(1+it)‖_{p_1}^{θ} )

with β_θ(t) := sin(πθ) / (2θ [cosh(πt) + cos(πθ)]).

Proof of the n-matrix extension of ALT. Choose G(z) = Π_{k=1}^n A_k^z, which is bounded on the closure of S, holomorphic on S, and continuous on ∂S. Let θ = r, p_0 = ∞ and p_1 = p. Then

log ‖G(1+it)‖_{p_1}^θ = r log ‖ Π_{k=1}^n A_k^{1+it} ‖_p
log ‖G(it)‖_{p_0}^{1-θ} = (1-r) log ‖ Π_{k=1}^n A_k^{it} ‖_∞ = 0
log ‖G(θ)‖_{p_θ} = log ‖ Π_{k=1}^n A_k^r ‖_{p/r}

Now we apply Stein-Hirschman:

log ‖ Π_{k=1}^n A_k^r ‖_{p/r} ≤ ∫_{-∞}^{∞} dt β_r(t) r log ‖ Π_{k=1}^n A_k^{1+it} ‖_p

Dividing by r gives the claimed n-matrix extension of ALT.