Unitary Process Discrimination with Error Margin
DEX-SMI Workshop on Quantum Statistical Inference
March 2-4, 2009, National Institute of Informatics (NII), Tokyo
A. Hayashi (Fukui), T. Hashimoto (Fukui), M. Horibe (Fukui), and M. Hayashi (Tohoku)
Unitary process discrimination with error margin

Unitary processes U_1, U_2, U_3, ...: input |φ⟩, output |φ_i⟩ = U_i|φ⟩.
This is state discrimination among {|φ_i⟩ = U_i|φ⟩}.
The inconclusive result ("I don't know") is allowed.
Maximize the success probability P_success under a margin on the error probability: P_error ≤ m.
  m = 1: minimum-error discrimination
  m = 0: unambiguous discrimination
Two solvable cases:
  {U_1, U_2} (two unitary processes)
  {T_g}_{g∈G} (T_g a projective representation of a finite group G)
Two unitary processes {U_1, U_2}

First, fix the input |φ⟩. The problem becomes two-state discrimination:
  ρ_1 = |φ_1⟩⟨φ_1|, |φ_1⟩ = U_1|φ⟩;  ρ_2 = |φ_2⟩⟨φ_2|, |φ_2⟩ = U_2|φ⟩.
We assume equal occurrence probabilities.
POVM {E_μ}_{μ=1,2,3}: E_1 for ρ_1, E_2 for ρ_2, E_3 = E_? for "I don't know".
Joint probabilities (the state is ρ_a and the measurement outcome is μ): P_{ρ_a,E_μ} = tr[E_μ ρ_a].
Phys. Rev. A 78, 012333 (2008) by A. Hayashi, T. Hashimoto and M. Horibe.
Strong error-margin conditions

Success probability to be optimized: p = (1/2)(P_{ρ_1,E_1} + P_{ρ_2,E_2}).
Margin m on the conditional error probabilities:
  P_{ρ_2|E_1} = tr[E_1 ρ_2] / (tr[E_1 ρ_1] + tr[E_1 ρ_2]) ≤ m,
  P_{ρ_1|E_2} = tr[E_2 ρ_1] / (tr[E_2 ρ_1] + tr[E_2 ρ_2]) ≤ m.
m = 1: minimum-error discrimination, p = (1/2)(1 + √(1 − |⟨φ_1|φ_2⟩|²)).
m = 0: unambiguous discrimination, p = 1 − |⟨φ_1|φ_2⟩|.
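As a quick numerical illustration of these quantities (an added sketch, not part of the original slides), the snippet below takes two real qubit states with overlap s, applies the minimum-error (Helstrom) projective measurement for equal priors, and evaluates the success probability together with the two conditional error probabilities; the specific angle b is an arbitrary choice.

```python
import math

# Two pure qubit states at angles +/- b from the x-axis: overlap s = cos(2b).
b = 0.3
phi1 = (math.cos(b),  math.sin(b))
phi2 = (math.cos(b), -math.sin(b))
s = abs(phi1[0] * phi2[0] + phi1[1] * phi2[1])   # |<phi1|phi2>|

# Helstrom measurement for equal priors: orthonormal basis at +/- 45 degrees.
e1 = (math.cos(math.pi / 4),  math.sin(math.pi / 4))
e2 = (math.cos(math.pi / 4), -math.sin(math.pi / 4))

def prob(e, phi):          # tr[E rho] = |<e|phi>|^2 for E = |e><e|, rho = |phi><phi|
    return (e[0] * phi[0] + e[1] * phi[1]) ** 2

# success probability p = (1/2)(P_{rho1,E1} + P_{rho2,E2})
p = 0.5 * (prob(e1, phi1) + prob(e2, phi2))
assert abs(p - 0.5 * (1 + math.sqrt(1 - s * s))) < 1e-12

# conditional error probabilities P_{rho2|E1} and P_{rho1|E2}
err1 = prob(e1, phi2) / (prob(e1, phi1) + prob(e1, phi2))
err2 = prob(e2, phi1) / (prob(e2, phi1) + prob(e2, phi2))
# for the minimum-error measurement both conditional errors equal 1 - p
assert abs(err1 - (1 - p)) < 1e-12 and abs(err2 - (1 - p)) < 1e-12
```

The minimum-error measurement thus satisfies the strong conditions exactly when m ≥ 1 − p, which is the m = 1 end of the trade-off described above.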
Optimization problem

Semidefinite programming (equal occurrence probabilities):
  maximize: p = (1/2)(tr[E_1 ρ_1] + tr[E_2 ρ_2]),
  subject to: E_1 ≥ 0, E_2 ≥ 0, E_1 + E_2 ≤ 1,
    tr[E_1 ρ_2] ≤ m (tr[E_1 ρ_1] + tr[E_1 ρ_2]),
    tr[E_2 ρ_1] ≤ m (tr[E_2 ρ_1] + tr[E_2 ρ_2]).
Bloch vector representation

In the 2-dimensional subspace span{|φ_1⟩, |φ_2⟩}, use the Bloch vector representation:
  ρ_a = (1 + n_a·σ)/2,  E_μ = α_μ + β_μ·σ.
Optimization in terms of the parameters {α_μ, β_μ}:
  maximize: p = (1/2)(α_1 + β_1·n_1 + α_2 + β_2·n_2),
  subject to: α_1 ≥ |β_1|, α_2 ≥ |β_2|, α_1 + α_2 + |β_1 + β_2| ≤ 1,
    α_1 + β_1·n_2 ≤ m (2α_1 + β_1·(n_1 + n_2)),
    α_2 + β_2·n_1 ≤ m (2α_2 + β_2·(n_1 + n_2)).
Optimal success probability

  p = A_m (1 − |⟨φ_1|φ_2⟩|),              (0 ≤ m ≤ m_c),
  p = (1/2)(1 + √(1 − |⟨φ_1|φ_2⟩|²)),     (m_c ≤ m ≤ 1),
where
  m_c = (1/2)(1 − √(1 − |⟨φ_1|φ_2⟩|²)),
and A_m is an increasing function of the error margin, defined by
  A_m = (1 − m)/(1 − 2m)² · (1 + 2√(m(1 − m))).
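A small consistency check of this piecewise formula (an added sketch; s = 0.6 is an arbitrary overlap): at m = 0 it reproduces the unambiguous value 1 − s, the two branches join continuously at m = m_c, and A_m is increasing on [0, m_c].

```python
import math

def A(m):
    # A_m = (1 - m)/(1 - 2m)^2 * (1 + 2 sqrt(m (1 - m)))
    return (1 - m) / (1 - 2 * m) ** 2 * (1 + 2 * math.sqrt(m * (1 - m)))

s = 0.6                                    # overlap |<phi1|phi2>|
mc = 0.5 * (1 - math.sqrt(1 - s * s))      # critical margin m_c
p_me = 0.5 * (1 + math.sqrt(1 - s * s))    # minimum-error value

# m = 0 reproduces unambiguous discrimination, p = 1 - s
assert abs(A(0.0) * (1 - s) - (1 - s)) < 1e-12
# the two branches join continuously at m = m_c
assert abs(A(mc) * (1 - s) - p_me) < 1e-12
# A_m is increasing on [0, m_c]
vals = [A(mc * k / 100) for k in range(101)]
assert all(x <= y + 1e-12 for x, y in zip(vals, vals[1:]))
```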
Success probability p(m)

[Figure: optimal success probability p versus error margin m; p rises from p_u (unambiguous discrimination) at m = 0 to p_m (minimum-error discrimination) at m = m_c and stays constant beyond.]
Strong and weak error-margin conditions I

Strong conditions: P_{ρ_2|E_1} ≤ m, P_{ρ_1|E_2} ≤ m
  p = A_m (1 − |⟨φ_1|φ_2⟩|), (0 ≤ m ≤ m_c), with A_m = (1 − m)/(1 − 2m)² · (1 + 2√(m(1 − m))).
Weak condition: p_error = P_{E_1,ρ_2} + P_{E_2,ρ_1} ≤ m
  p = (√m + √(1 − |⟨φ_1|φ_2⟩|))², (0 ≤ m ≤ m_c).
In both cases the optimal POVM elements E_1, E_2, E_3 are of rank 0 or 1.
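The two optima can be compared numerically (an added sketch; s = 0.6 is an arbitrary overlap): both coincide with unambiguous discrimination at m = 0 and with minimum-error discrimination at m = m_c, while in between the weak condition, being the weaker constraint, allows the larger success probability.

```python
import math

s = 0.6                                    # overlap |<phi1|phi2>|
mc = 0.5 * (1 - math.sqrt(1 - s * s))      # critical margin m_c

def p_strong(m):
    Am = (1 - m) / (1 - 2 * m) ** 2 * (1 + 2 * math.sqrt(m * (1 - m)))
    return Am * (1 - s)

def p_weak(m):
    return (math.sqrt(m) + math.sqrt(1 - s)) ** 2

# both reduce to unambiguous discrimination at m = 0 ...
assert abs(p_strong(0) - (1 - s)) < 1e-12 and abs(p_weak(0) - (1 - s)) < 1e-12
# ... and meet the minimum-error value at m = m_c
p_me = 0.5 * (1 + math.sqrt(1 - s * s))
assert abs(p_strong(mc) - p_me) < 1e-9 and abs(p_weak(mc) - p_me) < 1e-9
# in between, the weaker condition allows a larger success probability
for k in range(1, 100):
    m = mc * k / 100
    assert p_weak(m) >= p_strong(m) - 1e-12
```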
Strong and weak error-margin conditions II

[Figure: success probability versus error margin m under the weak and strong conditions; for 0 < m < m_c the weak condition gives the larger success probability, and both curves run from p_u (unambiguous discrimination) at m = 0 to p_m (minimum-error discrimination) at m = m_c.]
Optimal discrimination by LOCC

Local Operations and Classical Communication (LOCC): Alice and Bob share a bipartite state, apply local operations, and exchange classical communication.
Two orthogonal pure states

Two orthogonal pure states can be perfectly discriminated by LOCC (Local Operations and Classical Communication) (Walgate et al., 2000).
Example:
  |φ_1⟩ = (1/√2)(|0⟩|0⟩ + |1⟩|1⟩) = (1/√2)(|+⟩|+⟩ + |−⟩|−⟩),
  |φ_2⟩ = (1/√2)(|0⟩|0⟩ − |1⟩|1⟩) = (1/√2)(|+⟩|−⟩ + |−⟩|+⟩),
where |±⟩ = (1/√2)(|0⟩ ± |1⟩).
In general, if ⟨φ_1|φ_2⟩ = 0:
  |φ_1⟩ = Σ_i |i⟩|ξ_i⟩,  |φ_2⟩ = Σ_i |i⟩|η_i⟩,
where {|i⟩} is an orthonormal basis and ⟨ξ_i|η_i⟩ = 0 for every i.
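The example can be verified directly (an illustrative sketch; the helper names are ours): rewriting both states in the |±⟩ basis and checking that, when Alice and Bob each measure in that basis, |φ_1⟩ produces only correlated outcomes and |φ_2⟩ only anti-correlated ones, so one round of classical communication identifies the state.

```python
import math

r = 1 / math.sqrt(2)
zero, one = (1, 0), (0, 1)
plus, minus = (r, r), (r, -r)

def kron(u, v):                  # two-qubit product state |u>|v>
    return [a * b for a in u for b in v]

def add(u, v, sign=1):
    return [a + sign * b for a, b in zip(u, v)]

def close(u, v, tol=1e-12):
    return all(abs(a - b) < tol for a, b in zip(u, v))

phi1 = [r * x for x in add(kron(zero, zero), kron(one, one))]        # (|00>+|11>)/sqrt(2)
phi2 = [r * x for x in add(kron(zero, zero), kron(one, one), -1)]    # (|00>-|11>)/sqrt(2)

# the same states rewritten in the |+/-> basis
assert close(phi1, [r * x for x in add(kron(plus, plus), kron(minus, minus))])
assert close(phi2, [r * x for x in add(kron(plus, minus), kron(minus, plus))])

def amp(u, v, state):            # overlap <u|<v| state> (real vectors)
    return sum(a * b for a, b in zip(kron(u, v), state))

# local |+/-> measurements: phi1 yields only correlated outcomes,
# phi2 only anti-correlated ones, so LOCC discriminates them perfectly
assert abs(amp(plus, minus, phi1)) < 1e-12 and abs(amp(minus, plus, phi1)) < 1e-12
assert abs(amp(plus, plus, phi2)) < 1e-12 and abs(amp(minus, minus, phi2)) < 1e-12
```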
Local discrimination of nonorthogonal states

Two non-orthogonal pure states (in general entangled) can be optimally discriminated by LOCC:
  Discrimination with minimum error (error margin m_c ≤ m ≤ 1, m_c = (1/2)(1 − √(1 − |⟨φ_1|φ_2⟩|²))): Virmani et al. (2001).
  Unambiguous discrimination (error margin m = 0): Chen et al. (2001, 2002), Ji et al. (2005).
We can show: for any error margin, two pure states can be optimally discriminated by LOCC.
Three-element POVM of rank 0 or 1

The optimal POVM for discrimination with error margin, {E_1, E_2, E_3}, has every element of rank 0 or 1.
Theorem: Let V be a two-dimensional subspace of a multipartite tensor-product space H, and let P be the projector onto the subspace V. Then, for any three-element POVM {E_1, E_2, E_3} of V with every element of rank 0 or 1, there exists a one-way LOCC POVM {E^L_1, E^L_2, E^L_3} of H such that E_μ = P E^L_μ P (μ = 1, 2, 3).
Unitary process {U_1, U_2} discrimination I

Remark: the optimal success probability is attained by a pure-state input, since P_pure(m) is concave with respect to m.
Discrimination between states {|φ_1⟩, |φ_2⟩}:
  P_max(m, φ_1, φ_2) = f(m, |⟨φ_1|φ_2⟩|),
  f(m, s) = (√m + √(1 − s))²,         0 ≤ m < (1/2)(1 − √(1 − s²)),
  f(m, s) = (1/2)(1 + √(1 − s²)),     (1/2)(1 − √(1 − s²)) ≤ m ≤ 1.
f(m, s) is decreasing with respect to s.
Discrimination between processes {U_1, U_2}:
  P^pure_max(m) = max_φ P_max(m, φ_1, φ_2) = f(m, μ),  μ = min_φ |⟨φ|U_1†U_2|φ⟩|.
Discrimination between processes {U_1, U_2} II

[Figure: P_max(m) versus the error margin m, plotted for several values of μ = min_φ |⟨φ|U_1†U_2|φ⟩|.]
Minimum fidelity μ

  μ = min_φ |⟨φ|U_1†U_2|φ⟩| = min_{q_a ≥ 0, Σ_a q_a = 1} |Σ_{a=1}^d q_a e^{iθ_a}|,
where {e^{iθ_1}, e^{iθ_2}, ..., e^{iθ_d}} are the eigenvalues of U_1†U_2.
Geometrically, μ is the distance from the origin to the convex hull of the eigenvalues on the unit circle: μ = 0 if the hull contains the origin, and μ > 0 otherwise.
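For two eigenvalues the minimization can be done by brute force over q (an added sketch; the grid resolution is an arbitrary choice): the minimum is the distance from the origin to the chord joining the eigenvalues, cos(Δθ/2), and it drops to 0 once the convex hull of the eigenvalues contains the origin.

```python
import cmath, math

def mu_two(theta1, theta2, steps=20000):
    # brute-force min over q of |q e^{i th1} + (1 - q) e^{i th2}|
    z1, z2 = cmath.exp(1j * theta1), cmath.exp(1j * theta2)
    return min(abs(k / steps * z1 + (1 - k / steps) * z2)
               for k in range(steps + 1))

# angular separation dtheta < pi: mu = cos(dtheta / 2) > 0
dtheta = 1.0
assert abs(mu_two(0.0, dtheta) - math.cos(dtheta / 2)) < 1e-4

# antipodal eigenvalues: the chord passes through the origin, mu = 0
assert mu_two(0.0, math.pi) < 1e-6

# three equally spaced eigenvalues: the hull contains the origin, so mu = 0
w = [cmath.exp(2j * math.pi * k / 3) for k in range(3)]
assert abs(sum(w) / 3) < 1e-12
```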
Unitary processes {T_g}_{g∈G}: T_g is a unitary projective representation of a finite group G

Unitary projective representation:
  T_g T_h = c_{g,h} T_{gh}   (T_g† = T_{g⁻¹}, |c_{g,h}| = 1, g, h ∈ G),
where {c_{g,h}} is a factor set.
State discrimination:
  { p_{φ_g} = 1/|G|, |φ_g⟩ = T_g|φ⟩ }_{g∈G},
where |G| is the order of G.
Covariant measurement

The optimal POVM {E_g, E_?} can be assumed covariant:
  T_g E_? T_g† = E_?,  T_g E_h T_g† = E_{gh}.
Optimization with a covariant POVM:
  maximize: P = ⟨φ|E_1|φ⟩
  subject to: E_1 ≥ 0, Σ_{g∈G} T_g E_1 T_g† ≤ 1,
  weak error-margin condition: Σ_{h(≠1)} ⟨φ|T_h† E_1 T_h|φ⟩ ≤ m.
If {T_g} is irreducible (I)

By Schur's lemma:
  Σ_{g∈G} T_g E_1 T_g† = (|G|/d) tr[E_1] · 1   (d = dimension).
Completeness of the POVM, Σ_{g∈G} T_g E_1 T_g† ≤ 1, gives
  P = tr[E_1 ρ] ≤ tr[E_1] ≤ d/|G|.
The error-margin condition, Σ_{g(≠1)} tr[T_g† E_1 T_g ρ] ≤ m, gives
  P = tr[E_1 ρ] ≤ tr[E_1] ≤ m/(|G|/d − 1).
If {T_g} is irreducible (II)

Maximal success probability:
  P_max(m) = m/(|G|/d − 1),   (0 ≤ m ≤ m_c),
  P_max(m) = d/|G|,           (m_c ≤ m ≤ 1),
with m_c = 1 − d/|G|.

[Figure: P_max(m) rises linearly from 0 at m = 0 to d/|G| at m = m_c and is constant beyond.]
General case

For any E_1 ≥ 0:
  κ Σ_{g∈G} T_g E_1 T_g† ≥ E_1,  κ = Σ_r min(m_r, d_r) d_r/|G|,
where |G| = order of G, r = irreducible representation, d_r = dimension of r, m_r = multiplicity of r.
(Proof: orthogonality of representation matrices and the Schwarz inequality.)
Note: Σ_r d_r · d_r/|G| = 1 (Plancherel measure).
Completeness of the POVM, Σ_{g∈G} T_g E_1 T_g† ≤ 1, gives
  P = tr[E_1 ρ] ≤ κ.
The error-margin condition, Σ_{g(≠1)} tr[T_g† E_1 T_g ρ] ≤ m, gives
  P = tr[E_1 ρ] ≤ (κ/(1 − κ)) m.
Maximal success probability

  P_max(m) = (κ/(1 − κ)) m,   (0 ≤ m ≤ m_c),
  P_max(m) = κ,               (m_c ≤ m ≤ 1),
with m_c = 1 − κ and κ = Σ_{r (m_r ≥ 1)} min(m_r, d_r) d_r/|G|.
Note: with a sufficiently large ancilla,
  κ → Σ_r d_r · d_r/|G| = 1   (Plancherel measure).
Optimal input and POVM:
  |φ⟩ = (1/√κ) Σ_r Σ_{a=1}^{min(m_r, d_r)} √(d_r/|G|) |r, a, a⟩,
  E_1 = P_max(m) |φ⟩⟨φ|.
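As a concrete instance of these formulas (an added sketch, not from the slides): the permutation representation of S_3 on C^3 decomposes into the trivial representation (m_r = d_r = 1) plus the two-dimensional standard representation (m_r = 1, d_r = 2), giving κ = 1/2 and m_c = 1/2. Exact fractions are used to avoid rounding.

```python
from fractions import Fraction as F

def kappa(reps, order):
    # reps: list of (multiplicity m_r, dimension d_r) of the irreps that appear
    return sum(min(m, d) * F(d, order) for m, d in reps)

def p_max(m, k):
    # weak-condition optimum: linear up to m_c = 1 - kappa, then constant kappa
    mc = 1 - k
    return k / (1 - k) * m if m <= mc else k

# permutation representation of S_3 on C^3: trivial (1,1) + standard (1,2)
k = kappa([(1, 1), (1, 2)], 6)
assert k == F(1, 2) and 1 - k == F(1, 2)        # kappa = 1/2, m_c = 1/2

# Plancherel check: with all irreps of S_3 fully multiplied (m_r >= d_r), kappa = 1
assert kappa([(1, 1), (1, 1), (2, 2)], 6) == 1

# the two branches join continuously at m = m_c
assert p_max(1 - k, k) == k
```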
Example I: superdense coding in d dimensions

Define unitaries T_{mn} on C^d:
  T_{mn} = X^m Z^n   (m, n = 0, 1, ..., d − 1),
  X = Σ_{a=0}^{d−1} |a⟩⟨a+1|   (analogue of σ_x),
  Z = Σ_{a=0}^{d−1} e^{i2πa/d} |a⟩⟨a|   (analogue of σ_z),
  XZ = e^{i2π/d} ZX.
{T_{mn}} is an irreducible projective representation of G = Z_d × Z_d:
  T_{mn} T_{m'n'} = e^{−i(2π/d)nm'} T_{m+m', n+n'}.
For C^d ⊗ C^{d'} (ancilla): d_r = d, m_r = d', |G| = d².
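The commutation and multiplication rules above can be checked numerically (an added sketch for d = 3; with the X and Z conventions on this slide the factor-set phase comes out as e^{−i(2π/d)nm′}).

```python
import cmath

d = 3
w = cmath.exp(2j * cmath.pi / d)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)] for i in range(d)]

def scal(c, A):
    return [[c * A[i][j] for j in range(d)] for i in range(d)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(d) for j in range(d))

def mpow(A, n):
    R = [[1 if i == j else 0 for j in range(d)] for i in range(d)]
    for _ in range(n):
        R = matmul(R, A)
    return R

# X = sum_a |a><a+1|  (cyclic shift),  Z = sum_a w^a |a><a|
X = [[1 if (j - i) % d == 1 else 0 for j in range(d)] for i in range(d)]
Z = [[w ** i if i == j else 0 for j in range(d)] for i in range(d)]

def T(m, n):
    return matmul(mpow(X, m), mpow(Z, n))

# Weyl commutation relation: X Z = e^{i 2 pi / d} Z X
assert close(matmul(X, Z), scal(w, matmul(Z, X)))

# projective multiplication law T_{mn} T_{m'n'} = w^{-n m'} T_{m+m', n+n'}
for (m, n, mp, np_) in [(1, 2, 2, 1), (2, 2, 1, 1)]:
    lhs = matmul(T(m, n), T(mp, np_))
    rhs = scal(w ** (-n * mp), T((m + mp) % d, (n + np_) % d))
    assert close(lhs, rhs)
```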
Maximum success probability of {T_{mn}}

  P_max(m) = (d̄/(d − d̄)) m,   (0 ≤ m ≤ m_c),
  P_max(m) = d̄/d,             (m_c ≤ m ≤ 1),
with d̄ = min(d, d') and m_c = 1 − d̄/d.
Example II: color coding (symmetric group S_N)

Consider (C^d)^⊗N. Permutations of the N subsystems act as T_σ (σ ∈ S_N) on (C^d)^⊗N; {T_σ}_{σ∈S_N} is a representation of S_N.
Korff and Kempe (PRL 2005); A. Hayashi, T. Hashimoto, and M. Horibe (PRA 2005).
Alice prepares N = 3 boxes with d = 2 quantum colors (0 or 1); sloppy Bob permutes the boxes; Alice guesses which box contains which object.
N: number of boxes, d: number of colors

N = 3, d = 2: P_classical = 1/2, P = P_ancilla = 5/6.
N = 4, d = 2: P_classical = ?, P = 13/24, P_ancilla = 14/24.
When N → ∞:
  P → 1 if d ≳ N/e (Korff and Kempe),
  P_ancilla → 1 if d ≳ √N (HHH).
Strong error-margin condition

Error-margin condition:
  Σ_{g(≠1)} tr[ρ T_g E_1 T_g†] ≤ m Σ_g tr[ρ T_g E_1 T_g†].
Combining with κ Σ_g T_g E_1 T_g† ≥ E_1:
  tr[ρ E_1] ≤ κ Σ_g tr[ρ T_g E_1 T_g†] ≤ (κ/(1 − m)) tr[ρ E_1],
so a nonzero success probability requires m ≥ 1 − κ.
Maximum success probability:
  P_max(m) = 0,   (0 ≤ m < m_c),
  P_max(m) = κ,   (m_c ≤ m ≤ 1),
with m_c = 1 − κ, κ = Σ_r min(m_r, d_r) d_r/|G|.
Summary

Unitary process discrimination with error margin:
  Two-unitary case, {U_1, U_2}: solved.
  Group representation case, {T_g}_{g∈G}: solved.
  Unambiguous discrimination (m = 0): P_max(0) = 0 or 1.
  Weak error margin: P_max(m) is linear in m for m ≤ m_c.
  Strong error margin: P_max(m) = 0 for m < m_c.
Many applications (superdense coding, color coding, ...).
Ancilla: entangled input state for multiple uses of the process.
Appendix 1 (the strong condition is stronger than the weak condition)

  P_error = P_{E_1,ρ_2} + P_{E_2,ρ_1} = P_{ρ_2|E_1} P_{E_1} + P_{ρ_1|E_2} P_{E_2} ≤ m (P_{E_1} + P_{E_2}) ≤ m.