FRAMES IN QUANTUM AND CLASSICAL INFORMATION THEORY

Emina Soljanin
Mathematical Sciences Research Center, Bell Labs
April 16, 2003
A FRAME

A sequence {x_i} of vectors in a Hilbert space with the property that there are constants A, B > 0 such that

    A \|x\|^2 \le \sum_i |\langle x, x_i \rangle|^2 \le B \|x\|^2

for all x in the Hilbert space. Examples?
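As a quick numerical illustration (mine, not from the slides): for a finite frame in R^d, the tightest constants A and B are the extreme eigenvalues of the frame operator \sum_i x_i x_i^*. A sketch, using the three "Mercedes-Benz" unit vectors that appear later in the talk:

```python
import numpy as np

# The three MB unit vectors in R^2, 120 degrees apart.
vectors = [np.array([1.0, 0.0]),
           np.array([-0.5, np.sqrt(3) / 2]),
           np.array([-0.5, -np.sqrt(3) / 2])]

# Frame operator S = sum_i x_i x_i^T; its smallest/largest eigenvalues
# are the optimal frame bounds A and B.
S = sum(np.outer(x, x) for x in vectors)
A, B = np.linalg.eigvalsh(S)[[0, -1]]

print(A, B)  # both equal 3/2: the frame is tight (A = B)
```

Because A = B, every x in R^2 satisfies \sum_i |⟨x, x_i⟩|^2 = (3/2)\|x\|^2 exactly.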
A SOURCE OF INFORMATION: Classical

Discrete: produces sequences of letters. Letters belong to a finite alphabet X.
Memoryless: each letter is produced independently. The probability of letter a is P_a.
Example: coin tossing with X = {H, T}.
A SOURCE OF INFORMATION: Quantum

Letters are transmitted as d-dimensional unit-length vectors.
e_0 and e_1 are the basis vectors of the 2D space H_2:

    e_0 = [1, 0]^T,   e_1 = [0, 1]^T

A qubit is a vector in H_2: ψ = α e_0 + β e_1.
Example: X = {0, 1, 2, 3},

    ψ_0 = α_0 e_0 + β_0 e_1
    ψ_1 = α_1 e_0 + β_1 e_1
    ψ_2 = α_2 e_0 + β_2 e_1
    ψ_3 = α_3 e_0 + β_3 e_1.
QUANTUM DISCRETE MEMORYLESS SOURCE: The Density Matrix and Von Neumann Entropy

Source density matrix:

    ρ = \sum_{a \in X} P_a ψ_a ψ_a^*.

Von Neumann entropy of the source:

    S(ρ) = -\Tr(ρ \log ρ) = -\sum_i λ_i \log λ_i,

where λ_i are the eigenvalues of ρ.
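A short numerical sketch (my own illustration): S(ρ) computed from the eigenvalues of ρ, with log base 2 and the convention 0 log 0 = 0.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lam_i log2(lam_i) over the nonzero eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # drop numerical zeros (0 log 0 = 0)
    return float(-np.sum(lam * np.log2(lam)))

# Maximally mixed qubit: 1 bit of entropy; pure state: 0 bits.
print(von_neumann_entropy(np.eye(2) / 2))        # 1.0
print(von_neumann_entropy(np.diag([1.0, 0.0])))  # 0.0
```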
QUANTUM DISCRETE MEMORYLESS SOURCE: MB Example

X = {1, 2, 3},   P_1 = P_2 = P_3 = 1/3,   d = 2

    ψ_1 = [1, 0]^T,   ψ_2 = [-1/2, √3/2]^T,   ψ_3 = [-1/2, -√3/2]^T

    ρ = (1/3) ψ_1 ψ_1^* + (1/3) ψ_2 ψ_2^* + (1/3) ψ_3 ψ_3^* = (1/2) I

    S(ρ) = 1
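Verifying the example numerically (assuming the sign pattern of the standard Mercedes-Benz frame, which the slide images do not preserve):

```python
import numpy as np

psi = [np.array([1.0, 0.0]),
       np.array([-0.5, np.sqrt(3) / 2]),
       np.array([-0.5, -np.sqrt(3) / 2])]

# Equal-weight mixture of the three MB states.
rho = sum(np.outer(p, p) / 3 for p in psi)

lam = np.linalg.eigvalsh(rho)
entropy = float(-np.sum(lam * np.log2(lam)))

print(rho)      # (1/2) I
print(entropy)  # 1.0
```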
QUANTUM DISCRETE MEMORYLESS SOURCE: Vector Sequences

Sequences of length n are d^n-dimensional vectors.
Source vector-sequence (state):

    Ψ_x = ψ_{x_1} ⊗ ... ⊗ ψ_{x_n},   x_i ∈ X.

Among all states that come from the source, we can reliably distinguish 2^{n(S(ρ) - ε_n)}.
SENDING PACKETS OVER LOSSY NETWORKS: MB Example

d = 2,   φ = [φ_0, φ_1]^T,   ψ_1 = [1, 0]^T,   ψ_2 = [-1/2, √3/2]^T,   ψ_3 = [-1/2, -√3/2]^T

Send φ by sending φ_0 and φ_1.
Send φ by sending ⟨ψ_1, φ⟩, ⟨ψ_2, φ⟩, ⟨ψ_3, φ⟩.

    φ = (2/3) ( ⟨ψ_1, φ⟩ ψ_1 + ⟨ψ_2, φ⟩ ψ_2 + ⟨ψ_3, φ⟩ ψ_3 )
      = (2/3) ( ψ_1 ψ_1^* + ψ_2 ψ_2^* + ψ_3 ψ_3^* ) φ
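The reconstruction formula can be checked directly (my sketch; the packet content is a made-up example vector):

```python
import numpy as np

psi = [np.array([1.0, 0.0]),
       np.array([-0.5, np.sqrt(3) / 2]),
       np.array([-0.5, -np.sqrt(3) / 2])]

phi = np.array([0.3, -0.7])        # hypothetical packet content

# Send the frame coefficients <psi_i, phi> instead of phi itself,
# then reconstruct via phi = (2/3) sum_i <psi_i, phi> psi_i.
coeffs = [p @ phi for p in psi]
phi_hat = (2 / 3) * sum(c * p for c, p in zip(coeffs, psi))

print(phi_hat)  # recovers phi exactly
```

Note (my reading of the slide's point): if any one of the three coefficients is lost, the remaining two vectors still span R^2, so φ can be recovered from the surviving packets by solving a 2x2 linear system. That redundancy is what the frame buys over sending the raw coordinates φ_0, φ_1.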
SYNCHRONOUS CDMA SYSTEMS: K users and processing gain N

Each user has a signature: an N × 1 (length-N) complex vector. Let s_i be the signature and p_i the power of user i, for 1 ≤ i ≤ K.
The received vector is given by

    r = \sum_{i=1}^K \sqrt{p_i} b_i s_i + n

where b_i is the information symbol for user i, E[b_i] = 0, E[b_i^2] = 1; n is the (Gaussian) noise vector, E[n] = 0, E[n n^*] = σ^2 I_N.
SYNCHRONOUS CDMA SYSTEMS: The Sum Capacity

Let the user signatures and powers be given: S = [s_1, ..., s_K] and P = diag{p_1, ..., p_K}.
The sum capacity:

    C_sum = (1/2) \log \det( I_N + σ^{-2} S P S^* )
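A small computational sketch of the formula (my illustration; the signature choice and noise level are assumptions, reusing the MB vectors as signatures):

```python
import numpy as np

def sum_capacity(S, p, sigma2):
    """C_sum = (1/2) log2 det(I_N + S P S^* / sigma^2), S an N x K matrix."""
    N = S.shape[0]
    SPS = S @ np.diag(p) @ S.conj().T
    return 0.5 * np.log2(np.linalg.det(np.eye(N) + SPS / sigma2))

# Three equal-power users with the MB vectors as signatures (N = 2, K = 3).
S = np.array([[1.0, -0.5, -0.5],
              [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
C = sum_capacity(S, [1 / 3, 1 / 3, 1 / 3], sigma2=0.1)
print(C)
```

For this tight ensemble S P S^* = (1/2) I_2, so C_sum = (1/2) log2 det(6 I_2) = log2(6).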
A COMMON MODEL: The Object of Interest

An ensemble:
K d-dimensional unit-length vectors ψ_i,
K real numbers p_i ≥ 0 such that p_1 + ... + p_K = 1.

Two matrices: F is the d × K matrix whose columns are \sqrt{p_i} ψ_i. Thus

    F F^* = \sum_{i=1}^K p_i ψ_i ψ_i^*.

F F^* is the density matrix (frame operator); F^* F is the Gram matrix.
A FRAME

An ensemble {\sqrt{p_i} ψ_i} of vectors in a Hilbert space with the property that there are constants A, B > 0 such that

    A ⟨φ, φ⟩ ≤ \sum_i p_i |⟨φ, ψ_i⟩|^2 ≤ B ⟨φ, φ⟩

for all φ in the Hilbert space. Equivalently,

    A I_d ≤ F F^* ≤ B I_d.
A COMMON MODEL: Information Measures

The Von Neumann entropy:

    S = -\Tr( F F^* \log F F^* )

The sum capacity:

    C_sum = (1/2) \log \det( I_d + d σ^{-2} F F^* )

Both are maximized by F F^* = (1/d) I_d.
HOW TO MIX A DENSITY MATRIX: F F^* = \sum_{i=1}^K p_i ψ_i ψ_i^*

Two problems:
classification of ensembles having a given density matrix,
characterization of PDs consistent with a given density matrix.

Let {p_i} be a PD with p_1 ≥ p_2 ≥ ... ≥ p_K. Let ρ be a density matrix with eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_d. There exist vectors ψ_i such that ρ = \sum_{i=1}^K p_i ψ_i ψ_i^* iff

    \sum_{i=1}^n p_i ≤ \sum_{i=1}^n λ_i   for all n < d.
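A small sketch of the condition (my illustration): for the maximally mixed qubit ρ = I/2 the eigenvalues are (1/2, 1/2), so a PD is realizable iff its largest mass does not exceed 1/2.

```python
import numpy as np

def mixable(p, lam):
    """Check sum_{i<=n} p_i <= sum_{i<=n} lam_i for all n < d (both sorted descending)."""
    p = np.sort(np.asarray(p))[::-1]
    lam = np.sort(np.asarray(lam))[::-1]
    d = len(lam)
    return all(p[:n + 1].sum() <= lam[:n + 1].sum() + 1e-12 for n in range(d - 1))

lam = [0.5, 0.5]                       # eigenvalues of rho = I/2
ok = mixable([1 / 3, 1 / 3, 1 / 3], lam)   # True: the MB ensemble exists
bad = mixable([0.6, 0.2, 0.2], lam)        # False: p_1 > 1/2
print(ok, bad)
```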
HOW TO MIX A DENSITY MATRIX: F F^* = (1/d) I_d

For ρ = (1/d) I_d, the condition

    \sum_{i=1}^n p_i ≤ \sum_{i=1}^n λ_i   for all n < d

becomes p_1 ≤ 1/d.
In CDMA, user i is said to be oversized if

    p_i > (1/(d - i)) \sum_{j=i+1}^K p_j
INTERFERENCE MEASURE IN CDMA: Total Square Correlation (TSC)

Welch's lower bound on the TSC (the frame potential):

    \sum_{i=1}^K \sum_{j=1}^K p_i p_j |⟨ψ_i, ψ_j⟩|^2 ≥ 1/d

with equality iff \sum_{i=1}^K p_i ψ_i ψ_i^* = (1/d) I_d.
WBE sequences minimize the TSC, and maximize the sum capacity and the Von Neumann entropy.
What does reducing the TSC mean?
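The MB ensemble meets the Welch bound with equality, which can be checked directly (my sketch):

```python
import numpy as np

def frame_potential(psi, p):
    """TSC: sum_{i,j} p_i p_j |<psi_i, psi_j>|^2; Welch's bound is 1/d."""
    return sum(pi * pj * abs(np.vdot(u, v)) ** 2
               for pi, u in zip(p, psi) for pj, v in zip(p, psi))

psi = [np.array([1.0, 0.0]),
       np.array([-0.5, np.sqrt(3) / 2]),
       np.array([-0.5, -np.sqrt(3) / 2])]

tsc = frame_potential(psi, [1 / 3, 1 / 3, 1 / 3])
print(tsc)  # 0.5 = 1/d for d = 2: the Welch bound is met
```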
SOME COMMUNICATION CHANNELS

Binary Symmetric Channel: inputs 0 and 1; each input is received correctly with probability 1 - w and flipped with probability w. [transition diagram]
Noisy Typewriter: each input letter (B, H, N, ...) is received either as itself or as a neighboring letter. [transition diagram]
A CQ COMMUNICATION CHANNEL: A Probabilistic Device

Inputs: vectors ψ_i, i ∈ X.
Outputs: vectors φ_j determined by the chosen measurement.
Transition probabilities P(j|i) determined by the chosen measurement. [channel diagram: input ψ_i goes to output φ_j with probability P(j|i)]
QUANTUM MEASUREMENT: Von Neumann's Measurement

A set of pairwise orthogonal projection operators {Π_i}. They form a complete resolution of the identity: \sum_i Π_i = I.
For input ψ_j, output Π_i ψ_j happens with probability ⟨ψ_j, Π_i ψ_j⟩.
Example: Π_0 = ψ_0 ψ_0^*, Π_1 = ψ_1 ψ_1^*; input ψ produces output ψ_0 with probability |⟨ψ_0, ψ⟩|^2 and output ψ_1 with probability |⟨ψ_1, ψ⟩|^2.
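The outcome probabilities of such a measurement can be computed directly (my sketch; the measured state is an assumed example):

```python
import numpy as np

def measure_probs(psi, projectors):
    """Outcome probabilities <psi, Pi_i psi> of a von Neumann measurement."""
    return [float(np.real(psi.conj() @ P @ psi)) for P in projectors]

# Measure the qubit (e0 + e1)/sqrt(2) in the computational basis.
e0, e1 = np.eye(2)
projectors = [np.outer(e0, e0), np.outer(e1, e1)]
psi = (e0 + e1) / np.sqrt(2)

probs = measure_probs(psi, projectors)
print(probs)  # [0.5, 0.5]; the probabilities sum to 1 since sum_i Pi_i = I
```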
QUANTUM MEASUREMENT: Positive Operator-Valued Measure

Any set of positive-semidefinite operators {E_i}. They form a complete resolution of the identity: \sum_i E_i = I.
For input ψ_j, output E_i ψ_j happens with probability ⟨ψ_j, E_i ψ_j⟩.
Example: a three-outcome POVM with elements E_i proportional to φ_i φ_i^*, acting on the two inputs ψ_0 and ψ_1; input ψ_j produces output φ_i with probability proportional to |⟨φ_i, ψ_j⟩|^2. [measurement diagram]
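A concrete sketch (my choice of POVM, assuming nothing beyond the definition): the MB vectors scaled by 2/3 form a valid three-outcome POVM on a qubit, with more elements than the dimension allows for orthogonal projectors.

```python
import numpy as np

phi = [np.array([1.0, 0.0]),
       np.array([-0.5, np.sqrt(3) / 2]),
       np.array([-0.5, -np.sqrt(3) / 2])]

# E_i = (2/3) phi_i phi_i^T: each is positive semidefinite, and they sum to I.
E = [(2 / 3) * np.outer(v, v) for v in phi]
print(sum(E))   # the 2 x 2 identity

psi = np.array([0.0, 1.0])                     # an assumed input state
probs = [float(psi @ Ei @ psi) for Ei in E]
print(probs)    # outcome probabilities <psi, E_i psi>, summing to 1
```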
OPTIMAL QUANTUM MEASUREMENTS: Some Open Problems

POVMs minimizing the detection error-probability.
POVMs attaining the accessible information (number of elements).
Example: the MB vectors ψ_1 = [1, 0]^T, ψ_2 = [-1/2, √3/2]^T, ψ_3 = [-1/2, -√3/2]^T, and their lifted versions

    ψ̃_1 = [ √(1-α), 0, √α ]^T
    ψ̃_2 = [ -√(1-α)/2, √(3(1-α))/2, √α ]^T
    ψ̃_3 = [ -√(1-α)/2, -√(3(1-α))/2, √α ]^T
A SOURCE OF INFORMATION: Classical

Discrete: produces sequences of letters. Letters belong to a finite alphabet X.
Memoryless: each letter is produced independently. The probability of letter a is P_a.
Example: coin tossing with X = {H, T}.
CLASSICAL DISCRETE MEMORYLESS SOURCE: Sequences and Large Sets

Source sequence x = x_1, x_2, ..., x_n is in X^n. N(a|x) denotes the number of occurrences of a in x.
Consider all sequences x for which

    | (1/n) N(a|x) - P_a | ≤ δ   for every a ∈ X.

They form the set of typical sequences T^n_{P,δ}.
The set T^n_{P,δ} is probabilistically large: P^n(T^n_{P,δ}) ≥ 1 - ε_n.
DISCRETE MEMORYLESS SOURCE: Shannon Entropy

    H(P) = -\sum_{a \in X} P_a \log P_a.

The set T^n_{P,δ} contains approximately 2^{nH(P)} sequences:

    2^{n[H(P) - ε_n]} ≤ |T^n_{P,δ}| ≤ 2^{n[H(P) + ε_n]}

The probability of a typical sequence x is approximately 2^{-nH(P)}:

    2^{-n[H(P) + ε_n]} ≤ P(x) ≤ 2^{-n[H(P) - ε_n]}
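An empirical sketch of these counts (my illustration; the bias, block length, and δ are assumptions, and at such a small n the bounds are loose):

```python
import numpy as np
from itertools import product

# Biased coin P = (0.3, 0.7); count length-n sequences whose letter
# frequencies are within delta of P, and the probability mass they carry.
P = {0: 0.3, 1: 0.7}
n, delta = 16, 0.1

H = -sum(p * np.log2(p) for p in P.values())

typical = 0
prob_typical = 0.0
for x in product((0, 1), repeat=n):
    c0 = x.count(0)
    if all(abs(x.count(a) / n - P[a]) <= delta for a in P):
        typical += 1
        prob_typical += P[0] ** c0 * P[1] ** (n - c0)

print(typical, 2 ** (n * H))  # |T| is of the same order as 2^(nH)
print(prob_typical)           # a sizeable fraction of the mass; -> 1 as n grows
```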
QUANTUM DISCRETE MEMORYLESS SOURCE: Vector Sequences

Sequences of length n are d^n-dimensional vectors; for d = 2, n = 2, the products e_0⊗e_0, e_0⊗e_1, e_1⊗e_0, e_1⊗e_1 are the four standard basis vectors of H_4.
Source vector-sequence (state)

    Ψ_x = ψ_{x_1} ⊗ ψ_{x_2} ⊗ ... ⊗ ψ_{x_n},   x_i ∈ X,

appears with probability P_x = P_{x_1} P_{x_2} ... P_{x_n}.
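These tensor products are Kronecker products of the component vectors; a minimal sketch (mine):

```python
import numpy as np
from functools import reduce

e0, e1 = np.eye(2)

# Tensor products of basis vectors give the standard basis of the 4-dim space.
print(np.kron(e0, e1))   # [0, 1, 0, 0]

def state(psi_list):
    """Psi_x = psi_{x1} (x) ... (x) psi_{xn} as a d^n-dimensional vector."""
    return reduce(np.kron, psi_list)

psi2 = np.array([-0.5, np.sqrt(3) / 2])
Psi = state([psi2, psi2, psi2])
print(Psi.shape, np.linalg.norm(Psi))   # (8,) and unit norm
```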
QUANTUM DISCRETE MEMORYLESS SOURCE: Vector Sequences

Source vector-sequence (state) Ψ_x = ψ_{x_1} ⊗ ψ_{x_2} ⊗ ... ⊗ ψ_{x_n}, x_i ∈ X, appears with probability P_x = P_{x_1} P_{x_2} ... P_{x_n}.
Typical states Ψ_x ∈ H_{2^n} correspond to typical sequences x. There are approximately 2^{nH(P)} typical states.
QUANTUM DISCRETE MEMORYLESS SOURCE: Typical Subspace

Typical states Ψ_x ∈ H_{2^n} live in the typical subspace Λ_n of H_{2^n}:

    Ψ_x = Ψ_x^{Λ_n} + Ψ_x^{Λ_n^⊥}

The dimension of Λ_n is approximately 2^{nS(ρ)}.
A CODE C = {x_1, ..., x_M} ⊂ X^n

F is the d^n × M matrix whose columns are Ψ_{x_i}/√M. Thus

    F F^* = (1/M) \sum_{i=1}^M Ψ_{x_i} Ψ_{x_i}^*.

Ψ_{x_1}, ..., Ψ_{x_M} span U, an r-dimensional subspace of H_{d^n}.
Perform the SVD of F and define a scaled projection onto U:

    F = \sum_{k=1}^r √λ_k u_k v_k^*,   F F^* = \sum_{k=1}^r λ_k u_k u_k^*,   P_U = \sum_{k=1}^r (1/r) u_k u_k^*

How far is F F^* from P_U?
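A numerical sketch of this construction (my illustration; the codeword states are random unit vectors, an assumption in place of actual typical states):

```python
import numpy as np

rng = np.random.default_rng(0)

# M random unit-vector "codeword states" in an ambient dimension D.
D, M = 8, 5
Psi = rng.standard_normal((D, M))
Psi /= np.linalg.norm(Psi, axis=0)

F = Psi / np.sqrt(M)                         # columns Psi_i / sqrt(M)
U_, s, _ = np.linalg.svd(F, full_matrices=False)
r = int(np.sum(s > 1e-12))                   # dim of U = span of the states

FFs = F @ F.conj().T                         # density matrix, eigenvalues s^2
P_U = (U_[:, :r] @ U_[:, :r].conj().T) / r   # scaled projection onto U

print(np.trace(FFs).real, np.trace(P_U).real)  # both traces equal 1
print(np.linalg.norm(FFs - P_U))               # Frobenius distance between them
```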
DISTANCES BETWEEN DENSITY MATRICES σ AND ω

Trace distance:

    D(σ, ω) = (1/2) \Tr |σ - ω|,

where |A| denotes the positive square root of A^* A.
Uhlmann fidelity:

    F(σ, ω) = { \Tr[ (√σ ω √σ)^{1/2} ] }^2.

    1 - √F(σ, ω) ≤ D(σ, ω) ≤ √(1 - F(σ, ω))

Frobenius (Hilbert-Schmidt)?
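Both distances, and the bound relating them, can be evaluated for a simple qubit pair (my sketch; the two states are assumed examples):

```python
import numpy as np

def psd_sqrt(A):
    """Positive square root of a positive-semidefinite matrix."""
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ V.conj().T

def trace_distance(s, w):
    """D = (1/2) Tr|s - w|: half the sum of |eigenvalues| of s - w."""
    return 0.5 * np.abs(np.linalg.eigvalsh(s - w)).sum()

def fidelity(s, w):
    """Uhlmann fidelity {Tr[(sqrt(s) w sqrt(s))^(1/2)]}^2."""
    rs = psd_sqrt(s)
    return np.trace(psd_sqrt(rs @ w @ rs)).real ** 2

sigma = np.diag([1.0, 0.0])   # pure state
omega = np.eye(2) / 2         # maximally mixed state
D, Fi = trace_distance(sigma, omega), fidelity(sigma, omega)

print(D, Fi)                                      # 0.5 and 0.5
print(1 - np.sqrt(Fi) <= D <= np.sqrt(1 - Fi))    # the bound holds: True
```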
DISTANCES BETWEEN PDs: An Example

A_N = {a_1, ..., a_N}, with N = 2^K and n = 2^k, where k/K = c < 1.
Distributions P and Q:

    P(a_i) = 1/N   and   Q(a_i) = { 1/n,  1 ≤ i ≤ n
                                  { 0,    n + 1 ≤ i ≤ N

P({a_{n+1}, ..., a_N}) → 1 as k, K → ∞.
(1/2) \sum_i |P(a_i) - Q(a_i)| → 1 while \sum_i |P(a_i) - Q(a_i)|^2 → 0.
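Checking the two limits numerically for one choice of k and K (my sketch):

```python
import numpy as np

k, K = 10, 20
n, N = 2 ** k, 2 ** K

P = np.full(N, 1 / N)        # uniform on all N points
Q = np.zeros(N)
Q[:n] = 1 / n                # uniform on the first n points

tv = 0.5 * np.abs(P - Q).sum()    # total variation: already close to 1
l2sq = ((P - Q) ** 2).sum()       # squared L2 distance: already close to 0
print(tv, l2sq)
```

The same pair of distributions is far apart in total variation yet nearly identical in squared L2 distance, which is the slide's point about choosing the right metric.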
A BASIS FOR Λ_n

How far is F F^* from P_U?

    \sum_{k=1}^r [ √λ_k - 1/√r ]^2 ≤ r \sum_{k=1}^r ( λ_k - 1/r )^2 = r \sum_{k=1}^r λ_k^2 - 1
                                    = (r/M^2) \sum_{i=1}^M \sum_{j=1}^M |⟨Ψ_{x_i}, Ψ_{x_j}⟩|^2 - 1
                                    ≤ (1/M) \sum_{i=1}^M \sum_{j≠i} |⟨Ψ_{x_i}, Ψ_{x_j}⟩|^2

Use the random coding argument!
A BASIS FOR Λ_n: The Random Coding Argument

Averaging over all codes:

    E{ \sum_{i=1}^M \sum_{j≠i} |⟨Ψ_{x_i}, Ψ_{x_j}⟩|^2 } = M(M - 1) \Tr( ρ^{⊗n} ρ^{⊗n} )

There exists a code C with M codewords such that

    \sum_{k=1}^r [ √λ_k - 1/√r ]^2 ≤ M \Tr( ρ^{⊗n} ρ^{⊗n} )

There exists a code C, |C| = 2^{nR}, such that on Λ_n

    \sum_{k=1}^r [ √λ_k - 1/√r ]^2 ≤ 2^{-n(S(ρ) - ε_n - R)}
COMBINATORICS AND GEOMETRY: The MB Example

X = {1, 2, 3},   P_1 = P_2 = P_3 = 1/3,   d = 2

    ψ_1 = [1, 0]^T,   ψ_2 = [-1/2, √3/2]^T,   ψ_3 = [-1/2, -√3/2]^T

    ρ = (1/3) ψ_1 ψ_1^* + (1/3) ψ_2 ψ_2^* + (1/3) ψ_3 ψ_3^* = (1/2) I,   S(ρ) = 1

There are 3^n typical vectors, forming a frame in H_{2^n}.
About 2^n of those vectors form a basis of H_{2^n}. Which ones?
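As a closing numerical sketch (mine, not the talk's): for n = 2 the 3^2 = 9 tensor products ψ_i ⊗ ψ_j form a tight frame of the 4-dimensional space, and one can count which quadruples of them are linearly independent, i.e. form a basis.

```python
import numpy as np
from itertools import product, combinations

psi = [np.array([1.0, 0.0]),
       np.array([-0.5, np.sqrt(3) / 2]),
       np.array([-0.5, -np.sqrt(3) / 2])]

# All 3^2 = 9 tensor-product states of length n = 2.
states = [np.kron(a, b) for a, b in product(psi, repeat=2)]

# Frame operator: (3/2 I_2) tensor (3/2 I_2) = (9/4) I_4, so the frame is tight.
S = sum(np.outer(v, v) for v in states)
print(np.allclose(S, (9 / 4) * np.eye(4)))   # True

# Count the 4-subsets with nonzero determinant, i.e. the bases of H_4.
bases = sum(1 for c in combinations(states, 4)
            if abs(np.linalg.det(np.column_stack(c))) > 1e-9)
print(bases, "of the 126 quadruples form a basis of H_4")
```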