Quantum Achievability Proof via Collision Relative Entropy

Quantum Achievability Proof via Collision Relative Entropy
Salman Beigi, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
September 8, 2014
Based on joint work with Amin Gohari, arXiv:1312.3822

Technique of Yassaee et al. (ISIT 2013)

Setting: a message $m$ is encoded by $E$ into a codeword $x_m$, sent through the channel $N_{X\to Y}$, and decoded by $D$ into an estimate $\hat m$.

Random encoding: choose some $p_X$ and pick i.i.d. samples $x_1, \dots, x_M$.

Decoder: instead of maximum-likelihood decoding, use the stochastic likelihood decoder, which, given $y$, samples $\hat m$ from
$$p(m\mid y) = \frac{p(y\mid x_m)}{\sum_{m'} p(y\mid x_{m'})}, \qquad p(m, y) = \frac{1}{M}\, p(y\mid x_m).$$

For a random codebook $\mathcal{C} = \{x_1, \dots, x_M\}$,
$$\begin{aligned}
\mathbb{E}_{\mathcal{C}} \Pr[\mathrm{succ}]
&= \mathbb{E}_{\mathcal{C}}\, \frac{1}{M} \sum_{m, y} p(y\mid x_m)\, \frac{p(y\mid x_m)}{\sum_{m'} p(y\mid x_{m'})} \\
&= \mathbb{E}_{\mathcal{C}} \sum_{y} p(y\mid x_1)\, \frac{p(y\mid x_1)}{\sum_{m'} p(y\mid x_{m'})} \\
&\ge \sum_{y} \mathbb{E}_{x_1}\, p(y\mid x_1)\, \frac{p(y\mid x_1)}{\mathbb{E}_{\mathcal{C}\setminus x_1} \sum_{m'} p(y\mid x_{m'})} \qquad \text{(Jensen)} \\
&= \sum_{y} \mathbb{E}_{x_1}\, p(y\mid x_1)\, \frac{p(y\mid x_1)}{p(y\mid x_1) + (M-1)\, p(y)} \\
&= \mathbb{E}_{xy}\, \frac{p(y\mid x)}{p(y\mid x) + (M-1)\, p(y)}
= \mathbb{E}_{xy}\, \frac{1}{1 + (M-1)\, \frac{p(y)}{p(y\mid x)}}.
\end{aligned}$$
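The closed-form lower bound above is easy to check numerically. The following sketch is not from the talk; the 3-symbol channel W, the uniform input distribution, and M = 4 are illustrative choices. It averages the stochastic-likelihood success probability over random codebooks and compares it with $\mathbb{E}_{xy}[1/(1+(M-1)p(y)/p(y\mid x))]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classical channel p(y|x): rows indexed by x, columns by y (illustrative values).
W = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
pX = np.array([1/3, 1/3, 1/3])   # input distribution p(x)
M = 4                            # number of messages

def avg_success(trials=20000):
    """Average over random codebooks of the stochastic-likelihood success probability."""
    total = 0.0
    for _ in range(trials):
        code = rng.choice(len(pX), size=M, p=pX)   # i.i.d. codewords x_1, ..., x_M
        lik = W[code]                              # lik[m, y] = p(y | x_m)
        # Pr[succ | C] = (1/M) sum_{m,y} p(y|x_m) * p(y|x_m) / sum_{m'} p(y|x_{m'})
        total += (lik ** 2 / lik.sum(axis=0, keepdims=True)).sum() / M
    return total / trials

# Closed-form lower bound E_{xy}[ 1 / (1 + (M-1) p(y)/p(y|x)) ].
pY = pX @ W
bound = sum(pX[x] * W[x, y] / (1 + (M - 1) * pY[y] / W[x, y])
            for x in range(3) for y in range(3))

print(avg_success(), ">=", bound)
```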

Plan for the talk
- Better understanding and a quantum generalization of: the stochastic likelihood decoder; the convexity / Jensen's inequality step
- Asymptotic analysis of the resulting bound

Pretty good measurement

The pretty good measurement for signal states $\{\rho_1, \dots, \rho_M\}$:
$$E_m = \Big(\sum_i \rho_i\Big)^{-1/2} \rho_m \Big(\sum_i \rho_i\Big)^{-1/2}.$$

If the signal states are classical, $\rho_m \equiv p(y\mid x_m)$, then
$$E_m \equiv \frac{p(y\mid x_m)}{\sum_i p(y\mid x_i)}.$$

The probability of successful decoding is
$$\Pr[\mathrm{succ}] = \frac{1}{M} \sum_m \mathrm{tr}\Big[\rho_m \Big(\sum_i \rho_i\Big)^{-1/2} \rho_m \Big(\sum_i \rho_i\Big)^{-1/2}\Big].$$
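As a quick illustration (my own sketch, with two non-orthogonal qubit states chosen arbitrarily), the pretty good measurement and the resulting success probability can be computed directly:

```python
import numpy as np

def pgm(states):
    """Pretty good measurement E_m = S^{-1/2} rho_m S^{-1/2}, with S = sum_i rho_i.
    The inverse square root is taken on the support of S."""
    S = sum(states)
    vals, vecs = np.linalg.eigh(S)
    inv_sqrt = np.zeros_like(vals)
    pos = vals > 1e-12
    inv_sqrt[pos] = vals[pos] ** -0.5
    S_inv_sqrt = (vecs * inv_sqrt) @ vecs.conj().T
    return [S_inv_sqrt @ rho @ S_inv_sqrt for rho in states]

def ket(v):
    v = np.array(v, dtype=complex)
    return np.outer(v, v.conj()) / (v.conj() @ v)

# Two non-orthogonal pure qubit signal states (illustrative choice).
rhos = [ket([1, 0]), ket([np.cos(0.4), np.sin(0.4)])]
E = pgm(rhos)

succ = sum(np.trace(r @ e).real for r, e in zip(rhos, E)) / len(rhos)
print("PGM completeness check:", np.allclose(sum(E), np.eye(2)))
print("Pr[succ] =", succ)
```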

Collision relative entropy

$$D_2(\rho\|\sigma) = \log \mathrm{tr}\big[\rho\, \sigma^{-1/2} \rho\, \sigma^{-1/2}\big].$$

This is the $\alpha = 2$ member of the sandwiched Rényi relative entropies
$$D_\alpha(\rho\|\sigma) = \frac{1}{\alpha - 1} \log \mathrm{tr}\Big[\big(\sigma^{\frac{1-\alpha}{2\alpha}} \rho\, \sigma^{\frac{1-\alpha}{2\alpha}}\big)^{\alpha}\Big],$$
introduced in Müller-Lennert et al., arXiv:1306.3142; Wilde, Winter, Yang, arXiv:1306.1586; Frank, Lieb, arXiv:1306.5358; SB, arXiv:1306.5920.

Properties:
- (Positivity) $D_2(\rho\|\sigma) \ge 0$.
- (Data processing) $D_2(\rho\|\sigma) \ge D_2(\mathcal{N}(\rho)\|\mathcal{N}(\sigma))$.
- (Joint convexity) $\exp D_2(\cdot\|\cdot)$ is jointly convex.
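A minimal numerical sketch of $D_2$ (assuming base-2 logarithms and random two-level states; the depolarizing channel is just one convenient choice of $\mathcal{N}$) that spot-checks positivity and data processing:

```python
import numpy as np

def inv_sqrt_psd(A, tol=1e-12):
    """(Pseudo)inverse square root of a positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(A)
    inv = np.where(vals > tol, vals, np.inf) ** -0.5
    return (vecs * inv) @ vecs.conj().T

def D2(rho, sigma):
    """Collision relative entropy D_2(rho||sigma) = log tr[rho sigma^{-1/2} rho sigma^{-1/2}]."""
    s = inv_sqrt_psd(sigma)
    return np.log2(np.trace(rho @ s @ rho @ s).real)

def rand_state(d, rng):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(1)
rho, sigma = rand_state(2, rng), rand_state(2, rng)

# Data processing is checked for a depolarizing channel N(X) = (1-p) X + p tr(X) I/2.
def depolarize(X, p=0.3):
    return (1 - p) * X + p * np.trace(X) * np.eye(2) / 2

print("positivity      :", D2(rho, sigma) >= -1e-9)
print("data processing :", D2(rho, sigma) >= D2(depolarize(rho), depolarize(sigma)) - 1e-9)
```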

Classical-quantum channels

A c-q channel $N_{X\to B}$ maps an input $x$ to a quantum state $\rho_x$. A message $m$ is encoded into a codeword $x_m$, sent through $N_{X\to B}$, and the decoder measures the output state $\rho_{x_m}$ to produce $\hat m$.

Coding scheme:
- input distribution $p(x)$; pick a random codebook $x_1, \dots, x_M$
- pretty good measurement at the decoder

Associated state:
$$\sigma_{UXB} = \frac{1}{M} \sum_{m=1}^{M} |m\rangle\langle m| \otimes |x_m\rangle\langle x_m| \otimes \rho_{x_m}.$$

c-q channel coding

With the random codebook and the pretty good measurement for the signal states $\{\rho_{x_1}, \dots, \rho_{x_M}\}$, the probability of successful decoding is
$$\Pr[\mathrm{succ}] = \frac{1}{M} \sum_{m=1}^{M} \mathrm{tr}\Big[\rho_{x_m} \Big(\sum_i \rho_{x_i}\Big)^{-1/2} \rho_{x_m} \Big(\sum_i \rho_{x_i}\Big)^{-1/2}\Big]
= \frac{1}{M} \exp D_2(\sigma_{UXB}\,\|\,\sigma_{UX}\otimes\sigma_B)
\ge \frac{1}{M} \exp D_2(\sigma_{XB}\,\|\,\sigma_X\otimes\sigma_B),$$
where the last step is data processing (tracing out $U$).
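The identity $\Pr[\mathrm{succ}] = \frac{1}{M}\exp D_2(\sigma_{UXB}\|\sigma_{UX}\otimes\sigma_B)$ can be verified numerically. The sketch below (a toy c-q channel with two inputs and a qubit output, and a hypothetical length-3 codebook, all chosen only for illustration) builds both sides explicitly:

```python
import numpy as np

def inv_sqrt_psd(A, tol=1e-12):
    vals, vecs = np.linalg.eigh(A)
    inv = np.where(vals > tol, vals, np.inf) ** -0.5
    return (vecs * inv) @ vecs.conj().T

def exp_D2(rho, sigma):
    """exp D_2(rho||sigma) = tr[rho sigma^{-1/2} rho sigma^{-1/2}]."""
    s = inv_sqrt_psd(sigma)
    return np.trace(rho @ s @ rho @ s).real

rng = np.random.default_rng(2)

def rand_state(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

# c-q channel x -> rho_x with two inputs and a qubit output; a fixed codebook of M codewords.
rho_x = [rand_state(2), rand_state(2)]
code = [0, 1, 0]                       # x_1, x_2, x_3
M, X = len(code), len(rho_x)

def basis(i, n):
    e = np.zeros((n, n)); e[i, i] = 1.0
    return e

sigma_UXB = sum(np.kron(np.kron(basis(m, M), basis(code[m], X)), rho_x[code[m]])
                for m in range(M)) / M
sigma_UX = sum(np.kron(basis(m, M), basis(code[m], X)) for m in range(M)) / M
sigma_B = sum(rho_x[code[m]] for m in range(M)) / M

# Left-hand side: success probability of the pretty good measurement.
S_inv_sqrt = inv_sqrt_psd(sum(rho_x[x] for x in code))
pgm_succ = sum(np.trace(rho_x[x] @ S_inv_sqrt @ rho_x[x] @ S_inv_sqrt).real
               for x in code) / M

print(pgm_succ, "==", exp_D2(sigma_UXB, np.kron(sigma_UX, sigma_B)) / M)
```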

Error analysis

Use the joint convexity of $\exp D_2(\cdot\|\cdot)$:
$$\mathbb{E}_{\mathcal{C}} \Pr[\mathrm{succ}]
\ge \frac{1}{M}\, \mathbb{E}_{\mathcal{C}} \exp D_2(\sigma_{XB}\,\|\,\sigma_X\otimes\sigma_B)
\ge \frac{1}{M} \exp D_2\big(\mathbb{E}_{\mathcal{C}}\,\sigma_{XB}\,\big\|\,\mathbb{E}_{\mathcal{C}}\,[\sigma_X\otimes\sigma_B]\big)
= \frac{1}{M} \exp D_2\Big(\rho_{XB}\,\Big\|\, \frac{1}{M}\rho_{XB} + \Big(1 - \frac{1}{M}\Big)\rho_X\otimes\rho_B\Big),$$
where $\rho_{XB} = \sum_x p(x)\, |x\rangle\langle x|\otimes\rho_x$.
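A quick numerical spot-check (random qubit state pairs, chosen only for illustration) of the joint-convexity step used here:

```python
import numpy as np

def inv_sqrt_psd(A, tol=1e-12):
    vals, vecs = np.linalg.eigh(A)
    inv = np.where(vals > tol, vals, np.inf) ** -0.5
    return (vecs * inv) @ vecs.conj().T

def exp_D2(rho, sigma):
    s = inv_sqrt_psd(sigma)
    return np.trace(rho @ s @ rho @ s).real

rng = np.random.default_rng(4)

def rand_state(d=2):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

# Joint convexity: the average of exp D_2(rho_i||sigma_i) dominates exp D_2 of the averages.
pairs = [(rand_state(), rand_state()) for _ in range(50)]
lhs = np.mean([exp_D2(r, s) for r, s in pairs])
rhs = exp_D2(np.mean([r for r, _ in pairs], axis=0), np.mean([s for _, s in pairs], axis=0))
print(lhs, ">=", rhs)
```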

Information spectrum relative entropy

Notation: $\Pi_{U\ge 0}$ is the projection onto the eigenspaces of $U$ with non-negative eigenvalues, and $\Pi_{U\le V} := \Pi_{V-U\ge 0}$.

$$D_s^\epsilon(\rho\|\sigma) := \sup\big\{ R : \mathrm{tr}\big(\rho\, \Pi_{\rho\le 2^R\sigma}\big) \le \epsilon \big\}.$$

For classical distributions this reduces to
$$D_s^\epsilon(p\|q) = \sup\Big\{ R : \sum_{x:\, p(x)\le 2^R q(x)} p(x) \le \epsilon \Big\}.$$
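In the classical case $D_s^\epsilon(p\|q)$ can be computed exactly by sweeping the finitely many log-likelihood-ratio thresholds. The sketch below (toy three-symbol distributions, my own choice) implements the definition directly:

```python
import numpy as np

def Ds_classical(p, q, eps):
    """D_s^eps(p||q) = sup{ R : sum_{x: p(x) <= 2^R q(x)} p(x) <= eps }, base-2 logs."""
    ratios = np.log2(p / q)            # per-symbol log-likelihood ratios
    # The mass of {x : p(x) <= 2^R q(x)} is a non-decreasing step function of R;
    # the sup is the smallest threshold at which this mass first exceeds eps.
    for r in np.unique(ratios):        # np.unique returns the thresholds in sorted order
        if p[ratios <= r].sum() > eps:
            return r
    return np.inf                      # only if the total mass is <= eps (i.e. eps >= 1)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
print([Ds_classical(p, q, e) for e in (0.1, 0.25, 0.6)])   # non-decreasing in eps
```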

Theorem. For every $0 < \epsilon, \lambda < 1$ and density matrices $\rho, \sigma$ we have
$$\exp D_2\big(\rho\,\|\,\lambda\rho + (1-\lambda)\sigma\big) \ge (1-\epsilon)\big[\lambda + (1-\lambda)\exp\big(-D_s^\epsilon(\rho\|\sigma)\big)\big]^{-1}.$$

A one-shot achievable bound

Combining the previous two steps,
$$\mathbb{E}_{\mathcal{C}} \Pr[\mathrm{succ}]
\ge \frac{1}{M} \exp D_2\Big(\rho_{XB}\,\Big\|\, \frac{1}{M}\rho_{XB} + \Big(1-\frac{1}{M}\Big)\rho_X\otimes\rho_B\Big)
\ge (1-\epsilon)\big[1 + (M-1)\exp\big(-D_s^\epsilon(\rho_{XB}\|\rho_X\otimes\rho_B)\big)\big]^{-1}.$$

Theorem. Let $M^*(1,\epsilon)$ be the maximum number of messages that can be sent with error at most $\epsilon$. Then
$$M^*(1,\epsilon) \ge \max_{p(x),\, 0<\delta<\epsilon}\; \frac{\epsilon-\delta}{1-\epsilon}\, \exp\big(D_s^\delta(\rho_{XB}\|\rho_X\otimes\rho_B)\big) + 1.$$

Asymptotic analysis

Relative entropy: $D(\rho\|\sigma) := \mathrm{tr}\big(\rho(\log\rho - \log\sigma)\big)$.
Information variance (relative entropy variance): $V(\rho\|\sigma) := \mathrm{tr}\big(\rho(\log\rho - \log\sigma - D(\rho\|\sigma))^2\big)$.

Theorem [Tomamichel & Hayashi '12]. For fixed $0<\epsilon<1$ and $\delta$ proportional to $1/\sqrt{n}$ we have
$$D_s^{\epsilon\pm\delta}(\rho^{\otimes n}\|\sigma^{\otimes n}) = n D(\rho\|\sigma) + \sqrt{n V(\rho\|\sigma)}\, \Phi^{-1}(\epsilon) + O(\log n),$$
where $\Phi(u) := \int_{-\infty}^{u} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt$ and $\Phi^{-1}(\epsilon) = \sup\{u : \Phi(u)\le\epsilon\}$.
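For commuting (classical) states the expansion can be compared against an exact computation. The sketch below (illustrative binary distributions with p = 0.6, q = 0.4, n = 200, ε = 0.1) evaluates $D_s^\epsilon(p^{\otimes n}\|q^{\otimes n})$ via the binomial structure and compares it with $nD + \sqrt{nV}\,\Phi^{-1}(\epsilon)$:

```python
import numpy as np
from scipy.stats import norm, binom

p, q, n, eps = 0.6, 0.4, 200, 0.1

llr = np.array([np.log2((1 - p) / (1 - q)), np.log2(p / q)])   # per-symbol log-likelihood ratios
D = (1 - p) * llr[0] + p * llr[1]                               # relative entropy D(p||q)
V = (1 - p) * (llr[0] - D) ** 2 + p * (llr[1] - D) ** 2         # relative entropy variance

# Exact D_s^eps for product distributions: the LLR of p^n vs q^n equals
# k*llr[1] + (n-k)*llr[0] with k ~ Binomial(n, p); D_s^eps is the smallest
# LLR value whose (inclusive) lower-tail mass exceeds eps.
k = np.arange(n + 1)
R = k * llr[1] + (n - k) * llr[0]
mass = binom.pmf(k, n, p)
order = np.argsort(R)
cum = np.cumsum(mass[order])
Ds_exact = R[order][np.searchsorted(cum, eps, side="right")]

print("exact  :", Ds_exact)
print("approx :", n * D + np.sqrt(n * V) * norm.ppf(eps))
```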

Asymptotic analysis (cont.)

Theorem. For $0<\epsilon<1/2$ we have
$$\log M^*(n,\epsilon) \ge nC + \sqrt{nV}\, \Phi^{-1}(\epsilon) + O(\log n),$$
where $C$ is the channel capacity and $V$ is the channel dispersion:
$$C := \max_{p(x)} I(X;B), \qquad V := \min_{p(x)\,\in\, \operatorname{argmax} I(X;B)} V(\rho_{XB}\|\rho_X\otimes\rho_B).$$

[Tan & Tomamichel '13]: this achievability bound is tight.
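As a purely classical sanity check (a BSC with crossover 0.11 and ε = 10⁻³, a standard illustrative example; for commuting states the quantum dispersion reduces to the classical one), the normal approximation of the achievable rate reads:

```python
import numpy as np
from scipy.stats import norm

delta, eps, n = 0.11, 1e-3, 1000
h2 = lambda x: -x * np.log2(x) - (1 - x) * np.log2(1 - x)   # binary entropy

C = 1 - h2(delta)                                           # BSC capacity (bits/use)
# BSC dispersion: variance of the per-symbol information density.
V = delta * (1 - delta) * np.log2((1 - delta) / delta) ** 2

rate = C + np.sqrt(V / n) * norm.ppf(eps)                   # nC + sqrt(nV) Phi^{-1}(eps), per use
print(f"C = {C:.4f} bits, V = {V:.4f}, normal approximation at n={n}: {rate:.4f} bits/use")
```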

Summary

$m \to E$ (random encoding) $\to N^{\otimes n} \to D$ (pretty good measurement) $\to \hat m$

- Write the probability of successful decoding in terms of the collision relative entropy.
- Use the joint convexity of the collision relative entropy.
- Lower bound the collision relative entropy by the information spectrum relative entropy.
- Compute the asymptotics of the information spectrum relative entropy:
$$\log M^*(n,\epsilon) \ge nC + \sqrt{nV}\, \Phi^{-1}(\epsilon) + O(\log n).$$

Hypothesis testing

- Given either $\rho$ or $\sigma$; we want to decide which one.
- Apply a POVM measurement $\{F_\rho, F_\sigma\}$.
- $\Pr[\text{type I error}] = \Pr[\text{error}\mid\rho] = \mathrm{tr}(\rho F_\sigma)$, $\quad \Pr[\text{type II error}] = \Pr[\text{error}\mid\sigma] = \mathrm{tr}(\sigma F_\rho)$.
- $\beta_\epsilon(\rho\|\sigma) = \min\big\{\Pr[\text{type II error}] : \Pr[\text{type I error}]\le\epsilon\big\}$.
- Pick some $M$ and define
$$F_\rho = (\rho + M\sigma)^{-1/2}\rho\,(\rho + M\sigma)^{-1/2}, \qquad F_\sigma = (M^{-1}\rho + \sigma)^{-1/2}\sigma\,(M^{-1}\rho + \sigma)^{-1/2}.$$
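The following sketch (toy qubit states and M = 8, chosen arbitrarily) constructs this two-outcome POVM and reports both error probabilities; note that $F_\rho + F_\sigma = I$ since $M^{-1}\rho + \sigma = M^{-1}(\rho + M\sigma)$:

```python
import numpy as np

def inv_sqrt_psd(A, tol=1e-12):
    vals, vecs = np.linalg.eigh(A)
    inv = np.where(vals > tol, vals, np.inf) ** -0.5
    return (vecs * inv) @ vecs.conj().T

def ket(v):
    v = np.array(v, dtype=complex)
    return np.outer(v, v.conj()) / (v.conj() @ v)

rho = ket([1, 0])
sigma = ket([np.cos(0.5), np.sin(0.5)])
M = 8.0

A = inv_sqrt_psd(rho + M * sigma)
B = inv_sqrt_psd(rho / M + sigma)
F_rho = A @ rho @ A
F_sigma = B @ sigma @ B

print("POVM (F_rho + F_sigma = I):", np.allclose(F_rho + F_sigma, np.eye(2)))
print("type I  error tr(rho F_sigma) :", np.trace(rho @ F_sigma).real)
print("type II error tr(sigma F_rho) :", np.trace(sigma @ F_rho).real)
```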

A one-shot hypothesis testing bound

$$1 - \Pr[\text{type I error}] = \exp D_2(\rho\,\|\,\rho + M\sigma) \ge (1-\epsilon)\big[1 + M\exp\big(-D_s^\epsilon(\rho\|\sigma)\big)\big]^{-1},$$
$$1 - \Pr[\text{type II error}] = \exp D_2(\sigma\,\|\,M^{-1}\rho + \sigma) \ge (1 + M^{-1})^{-1}.$$

Theorem.
$$\log\beta_{2\epsilon}(\rho\|\sigma) \le -D_s^\epsilon(\rho\|\sigma) - \log\epsilon.$$

Corollary.
$$\log\beta_\epsilon(\rho^{\otimes n}\|\sigma^{\otimes n}) \le -nD(\rho\|\sigma) - \sqrt{nV(\rho\|\sigma)}\,\Phi^{-1}(\epsilon) + o(\sqrt{n}).$$

This upper bound is tight [Tomamichel & Hayashi '12, Li '12].
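A numerical spot-check of the first bound (same toy qubit states as before; $D_s^\epsilon$ is approximated from below by a grid sweep over $R$, which only makes the displayed inequality easier to satisfy):

```python
import numpy as np

def inv_sqrt_psd(A, tol=1e-12):
    vals, vecs = np.linalg.eigh(A)
    inv = np.where(vals > tol, vals, np.inf) ** -0.5
    return (vecs * inv) @ vecs.conj().T

def exp_D2(rho, sigma):
    s = inv_sqrt_psd(sigma)
    return np.trace(rho @ s @ rho @ s).real

def Ds_quantum(rho, sigma, eps, Rgrid):
    """Grid approximation (from below) of D_s^eps(rho||sigma), 2^R convention."""
    best = -np.inf
    for R in Rgrid:
        vals, vecs = np.linalg.eigh(2.0 ** R * sigma - rho)
        P = vecs[:, vals >= 0] @ vecs[:, vals >= 0].conj().T   # projector onto {rho <= 2^R sigma}
        if np.trace(rho @ P).real <= eps:
            best = max(best, R)
    return best

def ket(v):
    v = np.array(v, dtype=complex)
    return np.outer(v, v.conj()) / (v.conj() @ v)

rho, sigma, M, eps = ket([1, 0]), ket([np.cos(0.5), np.sin(0.5)]), 8.0, 0.1
Ds = Ds_quantum(rho, sigma, eps, np.linspace(-10, 10, 2001))

# exp D_2(rho || rho + M sigma) >= (1 - eps) / (1 + M * 2^{-D_s^eps(rho||sigma)})
print(exp_D2(rho, rho + M * sigma), ">=", (1 - eps) / (1 + M * 2.0 ** (-Ds)))
```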

Source compression with side information

Alice holds $x$, distributed according to $p_X$; Bob holds the quantum side information $\rho_x$, so the joint state is
$$\rho_{XB} = \sum_x p(x)\, |x\rangle\langle x| \otimes \rho_x.$$
Alice sends the message $m = h(x)$; Bob applies a measurement $\{F^m_{x'} : x'\in\mathcal{X}\}$ and outputs a guess of $x$. Then
$$\Pr[\mathrm{succ}] = \sum_{x,m} p_X(x)\, \mathbf{1}_{\{m = h(x)\}}\, \mathrm{tr}(\rho_x F^m_x).$$

Protocol:
- Pick a random hash function $h : \mathcal{X} \to \{1, \dots, M\}$.
- Let $\{F^m_x : x\in\mathcal{X}\}$ be the pretty good measurement for the sub-ensemble $\{p(x)\rho_x : x\in\mathcal{X},\ h(x)=m\}$.

Source compression with side information (cont.)

For a fixed $h$, restricting attention to the bin $h(x)=1$, define
$$\sigma_{XB} = \sum_x p(x)\, \mathbf{1}_{\{h(x)=1\}}\, |x\rangle\langle x|\otimes\rho_x, \qquad
\tau_{XB} = \Big(\sum_x \mathbf{1}_{\{h(x)=1\}}\, |x\rangle\langle x|\Big) \otimes \Big(\sum_{x'} p(x')\, \mathbf{1}_{\{h(x')=1\}}\, \rho_{x'}\Big).$$
Then
$$\mathbb{E}_h \Pr[\mathrm{succ}] = M\, \mathbb{E}_h \exp D_2(\sigma_{XB}\,\|\,\tau_{XB}).$$

Theorem. There exists some $h : \mathcal{X}\to\{1,\dots,M\}$ such that
$$\Pr[\mathrm{succ}] \ge \exp D_2\Big(\rho_{XB}\,\Big\|\,\Big(1-\frac{1}{M}\Big)\rho_{XB} + \frac{1}{M}\, I_X\otimes\rho_B\Big)
\ge (1-\epsilon)\big[1 + M^{-1}\exp\big(-D_s^\epsilon(\rho_{XB}\,\|\,I_X\otimes\rho_B)\big)\big]^{-1}.$$

This also gives a second-order asymptotic expansion, which again is tight [Tomamichel & Hayashi '12].
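The relation $\mathbb{E}_h \Pr[\mathrm{succ}] = M\, \mathbb{E}_h \exp D_2(\sigma_{XB}\|\tau_{XB})$ rests on a bin-by-bin identity that holds for each fixed $h$. The sketch below (a toy four-symbol source, qubit side information, and a hypothetical hash $h$ into two bins, with the first bin playing the role of the bin "1" above) checks it for one bin:

```python
import numpy as np

def inv_sqrt_psd(A, tol=1e-12):
    vals, vecs = np.linalg.eigh(A)
    inv = np.where(vals > tol, vals, np.inf) ** -0.5
    return (vecs * inv) @ vecs.conj().T

def exp_D2(rho, sigma):
    s = inv_sqrt_psd(sigma)
    return np.trace(rho @ s @ rho @ s).real

rng = np.random.default_rng(3)

def rand_state(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

X, d = 4, 2
p = np.array([0.4, 0.3, 0.2, 0.1])
rho_x = [rand_state(d) for _ in range(X)]
h = np.array([0, 1, 0, 1])                    # a fixed hash X -> {0, 1}; bin 0 is the distinguished bin

# Bin-0 contribution to Pr[succ]: pretty good measurement for {p(x) rho_x : h(x) = 0}.
Sbar = sum(p[x] * rho_x[x] for x in range(X) if h[x] == 0)
S = inv_sqrt_psd(Sbar)
bin0 = sum(p[x] * np.trace(rho_x[x] @ S @ (p[x] * rho_x[x]) @ S).real
           for x in range(X) if h[x] == 0)

# The same quantity written as a collision relative entropy.
def basis(i, n):
    e = np.zeros((n, n)); e[i, i] = 1.0
    return e

sigma_XB = sum(p[x] * np.kron(basis(x, X), rho_x[x]) for x in range(X) if h[x] == 0)
tau_XB = np.kron(sum(basis(x, X) for x in range(X) if h[x] == 0), Sbar)

print(bin0, "==", exp_D2(sigma_XB, tau_XB))
```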

Our contribution

- Reformulation of the technique of Yassaee et al. (2013): the stochastic likelihood decoder becomes the pretty good measurement.
- Tools: collision relative entropy $D_2(\cdot\|\cdot)$ and information spectrum relative entropy $D_s^\epsilon(\cdot\|\cdot)$.
- Applications: c-q channel coding, quantum hypothesis testing, source compression with quantum side information.
- We avoided the Hayashi-Nagaoka operator inequality.

What else?

- Quantum capacity: achievability of the coherent information?
- Entanglement-assisted capacity [Datta, Tomamichel, Wilde '14]
- Yassaee et al. apply this method to: Gelfand-Pinsker coding (coding over a state-dependent channel), the broadcast channel (Marton bound), and the multiple access channel (MAC).
- More convexity properties are needed in the quantum case; Yassaee et al. use the joint convexity of $(x,y)\mapsto \frac{1}{xy}$.

Thanks for your attention!