FRAMES IN QUANTUM AND CLASSICAL INFORMATION THEORY

Emina Soljanin
Mathematical Sciences Research Center, Bell Labs
April 16, 2003

A FRAME

A sequence $\{x_i\}$ of vectors in a Hilbert space with the property that there are constants $A, B > 0$ such that
$$A\,\|x\|^2 \;\le\; \sum_i |\langle x, x_i \rangle|^2 \;\le\; B\,\|x\|^2$$
for all $x$ in the Hilbert space. Examples?
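As a numerical illustration (a sketch, not part of the original talk): for a finite family of vectors, the optimal frame constants $A$ and $B$ are the smallest and largest eigenvalues of the frame operator $\sum_i x_i x_i^*$. All names below are illustrative.

```python
import numpy as np

def frame_bounds(vectors):
    """Optimal frame constants A, B for a finite family of vectors in C^d.

    sum_i |<x, x_i>|^2 = x* S x with S = sum_i x_i x_i^* (the frame
    operator), so the tightest A and B are the extreme eigenvalues of S.
    """
    S = sum(np.outer(x, x.conj()) for x in vectors)
    eigvals = np.linalg.eigvalsh(S)
    return eigvals[0], eigvals[-1]

# An orthonormal basis of C^2 is a tight frame with A = B = 1.
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(frame_bounds(basis))          # (1.0, 1.0)

# Three unit vectors at 120 degrees (the frame used later in the talk)
# form a tight frame with A = B = 3/2.
trine = [np.array([np.cos(t), np.sin(t)]) for t in (0, 2*np.pi/3, 4*np.pi/3)]
print(frame_bounds(trine))          # approx (1.5, 1.5)
```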

A SOURCE OF INFORMATION: Classical

Discrete: produces sequences of letters. Letters belong to a finite alphabet $X$.
Memoryless: each letter is produced independently. The probability of letter $a$ is $P_a$.
Example: coin tossing with $X = \{H, T\}$.

A SOURCE OF INFORMATION: Quantum

Letters are transmitted as $d$-dimensional unit-length vectors.
$e_0$ and $e_1$ are the basis vectors of the 2D space $H_2$:
$$e_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad e_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
A qubit is a vector in $H_2$: $\psi = \alpha e_0 + \beta e_1$.
Example: $X = \{0, 1, 2, 3\}$,
$$\psi_0 = \alpha_0 e_0 + \beta_0 e_1, \quad \psi_1 = \alpha_1 e_0 + \beta_1 e_1, \quad \psi_2 = \alpha_2 e_0 + \beta_2 e_1, \quad \psi_3 = \alpha_3 e_0 + \beta_3 e_1.$$

QUANTUM DISCRETE MEMORYLESS SOURCE: The Density Matrix and Von Neumann Entropy

Source density matrix:
$$\rho = \sum_{a \in X} P_a\, |\psi_a\rangle\langle\psi_a|.$$
Von Neumann entropy of the source:
$$S(\rho) = -\operatorname{Tr} \rho \log \rho = -\sum_i \lambda_i \log \lambda_i,$$
where $\lambda_i$ are the eigenvalues of $\rho$.

QUANTUM DISCRETE MEMORYLESS SOURCE: MB Example

$X = \{1, 2, 3\}$, $P_1 = P_2 = P_3 = 1/3$, $d = 2$:
$$\psi_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \psi_2 = \begin{bmatrix} -1/2 \\ \sqrt{3}/2 \end{bmatrix}, \qquad \psi_3 = \begin{bmatrix} -1/2 \\ -\sqrt{3}/2 \end{bmatrix}$$
$$\rho = \tfrac{1}{3} |\psi_1\rangle\langle\psi_1| + \tfrac{1}{3} |\psi_2\rangle\langle\psi_2| + \tfrac{1}{3} |\psi_3\rangle\langle\psi_3| = \tfrac{1}{2} I$$
$$S(\rho) = 1$$
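A quick numerical check of this slide (a sketch, not from the original talk): the three "Mercedes-Benz" vectors mix, with equal weights, to the maximally mixed state, whose entropy is one bit.

```python
import numpy as np

# The three Mercedes-Benz unit vectors at 120-degree spacing.
psi = [np.array([1.0, 0.0]),
       np.array([-0.5,  np.sqrt(3)/2]),
       np.array([-0.5, -np.sqrt(3)/2])]

# Equal-weight mixture: rho = (1/3) sum_i |psi_i><psi_i|.
rho = sum(np.outer(v, v) for v in psi) / 3
print(np.allclose(rho, np.eye(2) / 2))    # True: rho = I/2

# Von Neumann entropy in bits: S(rho) = -sum_i lambda_i log2 lambda_i.
lam = np.linalg.eigvalsh(rho)
print(-sum(l * np.log2(l) for l in lam if l > 0))   # 1.0
```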

QUANTUM DISCRETE MEMORYLESS SOURCE: Vector Sequences

Sequences of length $n$ are $d^n$-dimensional vectors. Source vector-sequence (state):
$$\Psi_x = \psi_{x_1} \otimes \cdots \otimes \psi_{x_n}, \qquad x_i \in X.$$
Among all states that come from the source, we can distinguish $2^{n(S(\rho) - \varepsilon_n)}$ reliably.

SENDING PACKETS OVER LOSSY NETWORKS: MB Example

$d = 2$,
$$\psi_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \psi_2 = \begin{bmatrix} -1/2 \\ \sqrt{3}/2 \end{bmatrix}, \quad \psi_3 = \begin{bmatrix} -1/2 \\ -\sqrt{3}/2 \end{bmatrix}, \quad \varphi = \begin{bmatrix} \varphi_0 \\ \varphi_1 \end{bmatrix}$$
Send $\varphi$ by sending $\varphi_0$ and $\varphi_1$.
Send $\varphi$ by sending $\langle \psi_1, \varphi \rangle$, $\langle \psi_2, \varphi \rangle$, $\langle \psi_3, \varphi \rangle$:
$$\varphi = \tfrac{2}{3} \left( \langle \psi_1, \varphi \rangle \psi_1 + \langle \psi_2, \varphi \rangle \psi_2 + \langle \psi_3, \varphi \rangle \psi_3 \right)$$
$$\varphi = \tfrac{2}{3} \left( \psi_1 \psi_1^* + \psi_2 \psi_2^* + \psi_3 \psi_3^* \right) \varphi
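The identity on the last line holds because the MB vectors form a tight frame with frame operator $\tfrac{3}{2} I$, so the three correlations are a redundant representation of the two packet coordinates. A small sketch (illustrative, not from the talk) checking the reconstruction formula:

```python
import numpy as np

psi = [np.array([1.0, 0.0]),
       np.array([-0.5,  np.sqrt(3)/2]),
       np.array([-0.5, -np.sqrt(3)/2])]

rng = np.random.default_rng(0)
phi = rng.standard_normal(2)            # an arbitrary "packet" vector

# Transmit the three frame coefficients <psi_i, phi> instead of phi itself.
coeffs = [v @ phi for v in psi]

# Reconstruction: phi = (2/3) * sum_i <psi_i, phi> psi_i.
phi_hat = (2/3) * sum(c * v for c, v in zip(coeffs, psi))
print(np.allclose(phi_hat, phi))        # True
```

Since any two of the three vectors are linearly independent, $\varphi$ can also be recovered from any two of the coefficients (using a different dual frame), which is the point of sending a redundant representation over a lossy network.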

SYNCHRONOUS CDMA SYSTEMS: K users and processing gain N

Each user has a signature: an $N \times 1$ complex vector. Let $s_i$ be the signature and $p_i$ the power of user $i$, for $1 \le i \le K$. The received vector is
$$r = \sum_{i=1}^K \sqrt{p_i}\, b_i s_i + n$$
where $b_i$ is the information symbol of user $i$, with $E[b_i] = 0$, $E[b_i^2] = 1$; $n$ is the (Gaussian) noise vector, with $E[n] = 0$, $E[nn^*] = \sigma^2 I_N$.

SYNCHRONOUS CDMA SYSTEMS: The Sum Capacity

Let the user signatures and powers be given:
$$S = [s_1, \ldots, s_K] \quad \text{and} \quad P = \operatorname{diag}\{p_1, \ldots, p_K\}$$
The sum capacity:
$$C_{\text{sum}} = \tfrac{1}{2} \log\!\left[ \det\!\left( I_N + \sigma^{-2} S P S^* \right) \right]$$
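A short sketch (illustrative) computing the sum capacity for a random signature set; the function name and the parameter values are made up for the example.

```python
import numpy as np

def sum_capacity(S, powers, sigma2):
    """C_sum = (1/2) log2 det(I_N + S P S* / sigma^2), S is N x K."""
    N = S.shape[0]
    P = np.diag(powers)
    M = np.eye(N) + (S @ P @ S.conj().T) / sigma2
    _, logdet = np.linalg.slogdet(M)
    return 0.5 * logdet / np.log(2)      # in bits

rng = np.random.default_rng(1)
N, K = 4, 6                              # processing gain, number of users
S = rng.standard_normal((N, K))
S /= np.linalg.norm(S, axis=0)           # unit-norm signatures
print(sum_capacity(S, np.ones(K), sigma2=1.0))
```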

A COMMON MODEL: The Object of Interest

An ensemble:
$K$ $d$-dimensional unit-length vectors $\psi_i$;
$K$ real numbers $p_i$ such that $p_1 + \cdots + p_K = 1$.
Two matrices: $F$ is the $d \times K$ matrix whose columns are $\sqrt{p_i}\, \psi_i$. Thus
$$F F^* = \sum_{i=1}^K p_i \psi_i \psi_i^*.$$
$F F^*$ is the density matrix (frame operator); $F^* F$ is the Gram matrix.

A FRAME

An ensemble $\{\sqrt{p_i}\, \psi_i\}$ of vectors in a Hilbert space with the property that there are constants $A, B$ such that
$$A \langle \varphi, \varphi \rangle \;\le\; \sum_i p_i\, |\langle \varphi, \psi_i \rangle|^2 \;\le\; B \langle \varphi, \varphi \rangle$$
for all $\varphi$ in the Hilbert space. Equivalently,
$$A I_d \le F F^* \le B I_d$$

A COMMON MODEL: Information Measures

The von Neumann entropy:
$$S = -\operatorname{Tr} F F^* \log F F^*$$
The sum capacity:
$$C_{\text{sum}} = \tfrac{1}{2} \log\!\left[ \det\!\left( I_d + d\,\sigma^{-2} F F^* \right) \right]$$
Both are maximized by $F F^* = \tfrac{1}{d} I_d$.
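A numerical illustration of this slide (a sketch in the slide's notation, with $\sigma^2 = 1$ assumed): both measures, computed from the eigenvalues of $FF^*$, peak when the ensemble is a tight frame, i.e. $FF^* = I_d/d$.

```python
import numpy as np

def measures(psi, p, sigma2=1.0):
    """Von Neumann entropy and sum capacity of the ensemble {sqrt(p_i) psi_i}."""
    d = len(psi[0])
    FFs = sum(pi * np.outer(v, v.conj()) for pi, v in zip(p, psi))
    lam = np.linalg.eigvalsh(FFs)
    S = -sum(l * np.log2(l) for l in lam if l > 1e-12)
    C = 0.5 * sum(np.log2(1 + d * l / sigma2) for l in lam)
    return S, C

trine = [np.array([np.cos(t), np.sin(t)]) for t in (0, 2*np.pi/3, 4*np.pi/3)]
aligned = [np.array([1.0, 0.0])] * 3      # degenerate ensemble for comparison
p = [1/3, 1/3, 1/3]
print(measures(trine, p))     # tight frame: S = 1 bit, C maximal
print(measures(aligned, p))   # S = 0, C strictly smaller
```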

HOW TO MIX A DENSITY MATRIX: $F F^* = \sum_{i=1}^K p_i \psi_i \psi_i^*$

Two problems:
classification of ensembles having a given density matrix,
characterization of PDs consistent with a given density matrix.
Let $\{p_i\}$ be a PD with $p_1 \ge p_2 \ge \cdots \ge p_K$. Let $\rho$ be a density matrix with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d$. There exist vectors $\psi_i$ such that $\rho = \sum_{i=1}^K p_i \psi_i \psi_i^*$ iff
$$\sum_{i=1}^n p_i \le \sum_{i=1}^n \lambda_i \quad \text{for all } n < d.$$
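A sketch of the majorization test above (an illustrative helper, not from the talk):

```python
import numpy as np

def mixable(p, rho):
    """Can the PD p be the weights of pure states mixing to rho?

    Tests sum_{i<=n} p_i <= sum_{i<=n} lambda_i for all n < d, with
    both sequences sorted in decreasing order.
    """
    p = np.sort(p)[::-1]
    lam = np.sort(np.linalg.eigvalsh(rho))[::-1]
    d = len(lam)
    return all(p[:n].sum() <= lam[:n].sum() + 1e-12 for n in range(1, d))

rho = np.eye(2) / 2
print(mixable([1/3, 1/3, 1/3], rho))   # True: the MB ensemble exists
print(mixable([0.6, 0.2, 0.2], rho))   # False: p_1 > 1/2 is too peaked
```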

HOW TO MIX A DENSITY MATRIX: $F F^* = \tfrac{1}{d} I_d$

For $\rho = \tfrac{1}{d} I_d$, the condition
$$\sum_{i=1}^n p_i \le \sum_{i=1}^n \lambda_i \quad \text{for all } n < d$$
becomes $p_1 \le 1/d$. In CDMA, user $i$ is said to be oversized if
$$p_i > \frac{1}{d - i} \sum_{j=i+1}^K p_j$$

INTERFERENCE MEASURE IN CDMA: Total Square Correlation (TSC)

Welch's lower bound on the TSC (the frame potential):
$$\sum_{i=1}^K \sum_{j=1}^K p_i p_j |\langle \psi_i, \psi_j \rangle|^2 \ge \frac{1}{d}$$
with equality iff $\sum_{i=1}^K p_i \psi_i \psi_i^* = \tfrac{1}{d} I_d$.
WBE sequences minimize the TSC, and thus maximize the sum capacity and the von Neumann entropy.
What does reducing TSC mean?
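A numeric check of the Welch bound on the MB example (a sketch):

```python
import numpy as np

def tsc(psi, p):
    """Total square correlation: sum_ij p_i p_j |<psi_i, psi_j>|^2."""
    return sum(pi * pj * abs(np.vdot(u, v))**2
               for pi, u in zip(p, psi) for pj, v in zip(p, psi))

trine = [np.array([np.cos(t), np.sin(t)]) for t in (0, 2*np.pi/3, 4*np.pi/3)]
p = [1/3, 1/3, 1/3]
print(tsc(trine, p))        # 0.5 = 1/d: the Welch bound is met (a WBE set)

aligned = [np.array([1.0, 0.0])] * 3
print(tsc(aligned, p))      # 1.0 > 1/d
```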

SOME COMMUNICATION CHANNELS

Binary Symmetric Channel: inputs 0 and 1; each bit is received correctly with probability $1 - w$ and flipped with probability $w$.
[channel transition diagram]
Noisy Typewriter: each input letter (B, H, N, ...) is received either as itself or as a neighboring letter.
[channel transition diagram]

A CQ COMMUNICATION CHANNEL: A Probabilistic Device

Inputs: vectors $\psi_i$, $i \in X$.
Outputs: vectors $\varphi_j$ determined by the chosen measurement.
Transition probabilities determined by the chosen measurement.
[diagram: input $\psi_i$ goes to output $\varphi_j$ with probability $P(j|i)$; input $\psi_k$ goes to output $\varphi_l$ with probability $P(l|k)$]

QUANTUM MEASUREMENT: Von Neumann's Measurement

A set of pairwise orthogonal projection operators $\{\Pi_i\}$. They form a complete resolution of the identity: $\sum_i \Pi_i = I$.
For input $\psi_j$, output $\Pi_i \psi_j$ happens with probability $\langle \psi_j | \Pi_i | \psi_j \rangle$.
Example: $\Pi_0 = \psi_0 \psi_0^*$, $\Pi_1 = \psi_1 \psi_1^*$; input $\psi$ produces output $\psi_0$ with probability $|\langle \psi_0 | \psi \rangle|^2$ and output $\psi_1$ with probability $|\langle \psi_1 | \psi \rangle|^2$.
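A minimal simulation sketch of a von Neumann measurement in an orthonormal basis (illustrative; the helper name is made up):

```python
import numpy as np

def measure(psi, projectors, rng):
    """Sample an outcome: P(i) = <psi| Pi_i |psi>; post-state = Pi_i psi / norm."""
    probs = [np.real(psi.conj() @ Pi @ psi) for Pi in projectors]
    i = rng.choice(len(projectors), p=np.array(probs) / sum(probs))
    post = projectors[i] @ psi
    return i, post / np.linalg.norm(post)

e0, e1 = np.eye(2)
projectors = [np.outer(e0, e0), np.outer(e1, e1)]   # Pi_0 + Pi_1 = I
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)])        # |alpha|^2 = 0.8

rng = np.random.default_rng(2)
outcomes = [measure(psi, projectors, rng)[0] for _ in range(10000)]
print(np.mean(np.array(outcomes) == 0))             # approx 0.8
```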

QUANTUM MEASUREMENT: Positive Operator-Valued Measure

Any set of positive-semidefinite operators $\{E_i\}$. They form a complete resolution of the identity: $\sum_i E_i = I$.
For input $\psi_j$, output $E_i \psi_j$ happens with probability $\langle \psi_j | E_i | \psi_j \rangle$.
Example: [diagram: a three-outcome POVM on the MB vectors $\psi_1, \psi_2, \psi_3$ with outcome vectors $\varphi_1, \varphi_2, \varphi_3$; e.g. input $\psi_1$ produces output $\varphi_3$ with probability $|\langle \psi_1 | \varphi_3 \rangle|^2$ and output $\varphi_1$ with probability $|\langle \psi_1 | \varphi_1 \rangle|^2$]
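As an illustration (a sketch, with details assumed rather than taken from the slide's figure): the three MB vectors define a valid three-outcome POVM via $E_i = \tfrac{2}{3} \psi_i \psi_i^*$, since the tight-frame identity gives $\sum_i E_i = I$.

```python
import numpy as np

psi = [np.array([1.0, 0.0]),
       np.array([-0.5,  np.sqrt(3)/2]),
       np.array([-0.5, -np.sqrt(3)/2])]

# Trine POVM: E_i = (2/3) |psi_i><psi_i| sums to the identity.
E = [(2/3) * np.outer(v, v) for v in psi]
print(np.allclose(sum(E), np.eye(2)))        # True

# Outcome probabilities for input psi_1: <psi_1| E_i |psi_1>.
probs = [psi[0] @ Ei @ psi[0] for Ei in E]
print(probs, sum(probs))                     # [2/3, 1/6, 1/6], total 1
```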

OPTIMAL QUANTUM MEASUREMENTS: Some Open Problems

POVMs minimizing the detection error probability.
POVMs attaining the accessible information (number of elements).
Example: the MB vectors
$$\psi_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \psi_2 = \begin{bmatrix} -1/2 \\ \sqrt{3}/2 \end{bmatrix}, \quad \psi_3 = \begin{bmatrix} -1/2 \\ -\sqrt{3}/2 \end{bmatrix}$$
and their lifted versions
$$\psi_1' = \begin{bmatrix} \sqrt{1-\alpha} \\ 0 \\ \sqrt{\alpha} \end{bmatrix}, \quad \psi_2' = \begin{bmatrix} -\sqrt{1-\alpha}/2 \\ \sqrt{3(1-\alpha)}/2 \\ \sqrt{\alpha} \end{bmatrix}, \quad \psi_3' = \begin{bmatrix} -\sqrt{1-\alpha}/2 \\ -\sqrt{3(1-\alpha)}/2 \\ \sqrt{\alpha} \end{bmatrix}$$

A SOURCE OF INFORMATION: Classical

Discrete: produces sequences of letters. Letters belong to a finite alphabet $X$.
Memoryless: each letter is produced independently. The probability of letter $a$ is $P_a$.
Example: coin tossing with $X = \{H, T\}$.

CLASSICAL DISCRETE MEMORYLESS SOURCE: Sequences and Large Sets

Source sequence $x = x_1, x_2, \ldots, x_n$ is in $X^n$. $N(a|x)$ denotes the number of occurrences of $a$ in $x$.
Consider all sequences $x$ for which
$$\left| \tfrac{1}{n} N(a|x) - P_a \right| \le \delta \quad \text{for every } a \in X.$$
They form the set of typical sequences $T^n_{P,\delta}$.
The set $T^n_{P,\delta}$ is probabilistically large: $P^n(T^n_{P,\delta}) \ge 1 - \epsilon_n$.

DISCRETE MEMORYLESS SOURCE: Shannon Entropy

$$H(P) = -\sum_{a \in X} P_a \log P_a.$$
The set $T^n_{P,\delta}$ contains approximately $2^{nH(P)}$ sequences:
$$2^{n[H(P) - \epsilon_n]} \le |T^n_{P,\delta}| \le 2^{n[H(P) + \epsilon_n]}$$
The probability of a typical sequence $x$ is approximately $2^{-nH(P)}$:
$$2^{-n[H(P) + \epsilon_n]} \le P_x \le 2^{-n[H(P) - \epsilon_n]}$$
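An empirical sketch of these typicality statements for a biased coin (illustrative; the parameter values are made up):

```python
import numpy as np

p, n, delta = 0.3, 200, 0.05          # biased coin, block length, tolerance
H = -(p*np.log2(p) + (1-p)*np.log2(1-p))

rng = np.random.default_rng(3)
x = rng.random((100000, n)) < p       # 100000 source sequences
freq = x.mean(axis=1)                 # fraction of ones in each sequence

typical = np.abs(freq - p) <= delta
print(typical.mean())                 # close to 1: the typical set is large

# Log-probability of each typical sequence, compared with -nH(P).
k = x.sum(axis=1)[typical]
log2_prob = k*np.log2(p) + (n - k)*np.log2(1 - p)
print(log2_prob.mean() / n, -H)       # both approx -H(P)
```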

QUANTUM DISCRETE MEMORYLESS SOURCE: Vector Sequences

Sequences of length $n$ are $d^n$-dimensional vectors, e.g. for $n = 2$:
$$e_0 \otimes e_0 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad e_0 \otimes e_1 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad e_1 \otimes e_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \quad e_1 \otimes e_1 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$
Source vector-sequence (state)
$$\Psi_x = \psi_{x_1} \otimes \psi_{x_2} \otimes \cdots \otimes \psi_{x_n}, \qquad x_i \in X,$$
appears with probability $P_x = P_{x_1} P_{x_2} \cdots P_{x_n}$.

QUANTUM DISCRETE MEMORYLESS SOURCE: Vector Sequences

Source vector-sequence (state) $\Psi_x = \psi_{x_1} \otimes \psi_{x_2} \otimes \cdots \otimes \psi_{x_n}$, $x_i \in X$, appears with probability $P_x = P_{x_1} P_{x_2} \cdots P_{x_n}$.
Typical states $\Psi_x \in H_2^{\otimes n}$ correspond to typical sequences $x$. There are approximately $2^{nH(P)}$ typical states.

QUANTUM DISCRETE MEMORYLESS SOURCE: Typical Subspace

Typical states $\Psi_x \in H_2^{\otimes n}$ live in the typical subspace.
Typical subspace $\Lambda_n$ of $H_2^{\otimes n}$: every state decomposes as
$$\Psi_x = \Psi_x^{\Lambda_n} + \Psi_x^{\Lambda_n^\perp}$$
The dimension of $\Lambda_n$ is approximately $2^{nS(\rho)}$.
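A sketch computing the typical-subspace dimension directly from the eigenvalues of $\rho^{\otimes n}$ (illustrative; the qubit source $\rho = \operatorname{diag}(0.7, 0.3)$ is an assumed example, not from the talk):

```python
import numpy as np
from math import comb

lam = np.array([0.7, 0.3])                 # eigenvalues of a qubit source rho
S = float(-(lam * np.log2(lam)).sum())     # S(rho), approx 0.881 bits

n, delta = 60, 0.08
dim_typical = 0
for k in range(n + 1):                     # k factors take eigenvalue lam[0]
    ev = lam[0]**k * lam[1]**(n - k)       # an eigenvalue of rho^{tensor n}
    # Typical eigenvalues: 2^{-n(S+delta)} <= ev <= 2^{-n(S-delta)}.
    if 2**(-n*(S + delta)) <= ev <= 2**(-n*(S - delta)):
        dim_typical += comb(n, k)          # its multiplicity
print(np.log2(dim_typical) / n, S)         # close to S(rho): dim approx 2^{nS}
```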

Code $C = \{x_1, \ldots, x_M\} \subset X^n$

$F$ is the $d^n \times M$ matrix whose columns are $\Psi_{x_i}/\sqrt{M}$. Thus
$$F F^* = \frac{1}{M} \sum_{i=1}^M \Psi_{x_i} \Psi_{x_i}^*.$$
$\Psi_{x_1}, \ldots, \Psi_{x_M}$ span $U$, an $r$-dimensional subspace of $H_{d^n}$.
Perform the SVD of $F$ and define a scaled projection onto $U$:
$$F = \sum_{k=1}^r \sqrt{\lambda_k}\, u_k v_k^*, \qquad F F^* = \sum_{k=1}^r \lambda_k u_k u_k^*, \qquad P_U = \sum_{k=1}^r \frac{1}{r}\, u_k u_k^*$$
How far is $F F^*$ from $P_U$?
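A small sketch of this construction (illustrative: random codewords over the MB source, with made-up parameters):

```python
import numpy as np
from functools import reduce

psi = [np.array([1.0, 0.0]),
       np.array([-0.5,  np.sqrt(3)/2]),
       np.array([-0.5, -np.sqrt(3)/2])]

rng = np.random.default_rng(4)
n, M = 4, 6
codewords = rng.integers(0, 3, size=(M, n))

# Columns Psi_{x_i}/sqrt(M), with Psi_x = psi_{x_1} tensor ... tensor psi_{x_n}.
cols = [reduce(np.kron, [psi[a] for a in x]) / np.sqrt(M) for x in codewords]
F = np.stack(cols, axis=1)                      # d^n x M

U_, s, _ = np.linalg.svd(F, full_matrices=False)
lam = s**2                                      # eigenvalues of FF*
r = int((lam > 1e-12).sum())                    # dim of the span of codewords
P_U = (U_[:, :r] @ U_[:, :r].T) / r             # scaled projection onto U
print(r, np.linalg.norm(F @ F.T - P_U))         # "how far", Frobenius norm
```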

DISTANCES BETWEEN DENSITY MATRICES $\sigma$ AND $\omega$

Trace distance:
$$D(\sigma, \omega) = \tfrac{1}{2} \operatorname{Tr} |\sigma - \omega|,$$
where $|A|$ denotes the positive square root of $A^* A$.
Uhlmann fidelity:
$$F(\sigma, \omega) = \left\{ \operatorname{Tr}\!\left[ \left( \sqrt{\sigma}\, \omega \sqrt{\sigma} \right)^{1/2} \right] \right\}^2.$$
$$1 - \sqrt{F(\sigma, \omega)} \;\le\; D(\sigma, \omega) \;\le\; \sqrt{1 - F(\sigma, \omega)}$$
Frobenius (Hilbert-Schmidt)?
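A sketch computing both distances for two qubit states and checking the inequalities quoted above (illustrative; uses SciPy's matrix square root, and the example states are made up):

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(s, w):
    """D = (1/2) Tr|s - w|; for Hermitian A, |A| has eigenvalues |eig(A)|."""
    return 0.5 * np.abs(np.linalg.eigvalsh(s - w)).sum()

def fidelity(s, w):
    """F = (Tr sqrt(sqrt(s) w sqrt(s)))^2."""
    rs = sqrtm(s)
    return np.real(np.trace(sqrtm(rs @ w @ rs)))**2

sigma = np.diag([0.8, 0.2])
omega = np.array([[0.6, 0.2], [0.2, 0.4]])
D, F = trace_distance(sigma, omega), fidelity(sigma, omega)
print(D, F)
print(1 - np.sqrt(F) <= D <= np.sqrt(1 - F))   # True
```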

DISTANCES BETWEEN PD'S: An Example

$A_N = \{a_1, \ldots, a_N\}$, $N = 2^K$ and $n = 2^k$, with $k/K = c < 1$.
Distributions $P$ and $Q$:
$$P(a_i) = \frac{1}{N} \quad \text{and} \quad Q(a_i) = \begin{cases} 1/n, & 1 \le i \le n \\ 0, & n + 1 \le i \le N \end{cases}$$
$P(\{a_{n+1}, \ldots, a_N\}) \to 1$ as $k, K \to \infty$.
$$\tfrac{1}{2} \sum_i |P(a_i) - Q(a_i)| \to 1 \quad \text{and} \quad \sum_i |P(a_i) - Q(a_i)|^2 \to 0.$$
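A numeric illustration of the two distances pulling in opposite directions (a sketch; $c = 1/2$ is an assumed value):

```python
import numpy as np

for K in (10, 16, 22):
    k = K // 2                          # c = k/K = 1/2
    N, n = 2**K, 2**k
    P = np.full(N, 1/N)
    Q = np.zeros(N); Q[:n] = 1/n
    l1 = 0.5 * np.abs(P - Q).sum()      # total variation: tends to 1
    l2 = ((P - Q)**2).sum()             # squared L2 distance: tends to 0
    print(K, l1, l2)
```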

A BASIS FOR $\Lambda_n$

How far is $F F^*$ from $P_U$?
$$\sum_{k=1}^r \left[ \sqrt{\lambda_k} - \frac{1}{\sqrt{r}} \right]^2 \;\le\; r \sum_{k=1}^r \left( \lambda_k - \frac{1}{r} \right)^2 \;=\; r \sum_{k=1}^r \lambda_k^2 - 1$$
$$= \frac{r}{M^2} \sum_{i=1}^M \sum_{j=1}^M |\langle \Psi_{x_i} | \Psi_{x_j} \rangle|^2 - 1 \;\le\; \frac{1}{M} \sum_{i=1}^M \sum_{\substack{j=1 \\ j \ne i}}^M |\langle \Psi_{x_i} | \Psi_{x_j} \rangle|^2$$
Use the random coding argument!

A BASIS FOR $\Lambda_n$: The Random Coding Argument

Averaging over all codes:
$$E\left\{ \sum_{i=1}^M \sum_{\substack{j=1 \\ j \ne i}}^M |\langle \Psi_{x_i} | \Psi_{x_j} \rangle|^2 \right\} = M(M - 1) \operatorname{Tr}\!\left( \rho^{\otimes n} \rho^{\otimes n} \right)$$
There exists a code $C$ with $M$ codewords such that
$$\sum_{k=1}^r \left[ \sqrt{\lambda_k} - \frac{1}{\sqrt{r}} \right]^2 \le M \operatorname{Tr}\!\left( \rho^{\otimes n} \rho^{\otimes n} \right)$$
There exists a code $C$, $|C| = 2^{nR}$, such that on $\Lambda_n$
$$\sum_{k=1}^r \left[ \sqrt{\lambda_k} - \frac{1}{\sqrt{r}} \right]^2 \le 2^{-n(S(\rho) - \varepsilon_n - R)}$$
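A Monte Carlo sketch of the averaging identity (illustrative, MB source): for independently drawn codewords, $E\,|\langle \Psi_{x_i} | \Psi_{x_j} \rangle|^2 = \operatorname{Tr}(\rho^{\otimes n} \rho^{\otimes n}) = (\operatorname{Tr} \rho^2)^n$ whenever $i \ne j$.

```python
import numpy as np
from functools import reduce

psi = [np.array([1.0, 0.0]),
       np.array([-0.5,  np.sqrt(3)/2]),
       np.array([-0.5, -np.sqrt(3)/2])]

n = 3
rng = np.random.default_rng(5)

def random_state():
    x = rng.integers(0, 3, size=n)                 # letters drawn with P_a = 1/3
    return reduce(np.kron, [psi[a] for a in x])

samples = [abs(random_state() @ random_state())**2 for _ in range(20000)]
print(np.mean(samples), 0.5**n)                    # both approx (Tr rho^2)^n = 1/8
```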

COMBINATORICS AND GEOMETRY: The MB Example

$X = \{1, 2, 3\}$, $P_1 = P_2 = P_3 = 1/3$, $d = 2$:
$$\psi_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \psi_2 = \begin{bmatrix} -1/2 \\ \sqrt{3}/2 \end{bmatrix}, \quad \psi_3 = \begin{bmatrix} -1/2 \\ -\sqrt{3}/2 \end{bmatrix}$$
$$\rho = \tfrac{1}{3} |\psi_1\rangle\langle\psi_1| + \tfrac{1}{3} |\psi_2\rangle\langle\psi_2| + \tfrac{1}{3} |\psi_3\rangle\langle\psi_3| = \tfrac{1}{2} I, \qquad S(\rho) = 1$$
There are $3^n$ typical vectors, forming a frame in $H_2^{\otimes n}$.
About $2^n$ of those vectors form a basis of $H_2^{\otimes n}$. Which ones?