Quantum state discrimination with post-measurement information
Deepthi Gopal, Caltech
Stephanie Wehner, National University of Singapore
Quantum states
A state is a mathematical object describing the parameters of a system: if some experimental procedure prepares a system, the state describes the initial conditions. Classically, a state is a collection of observable dynamical variables; quantum-mechanically, it is a normalised vector. We cannot directly determine the state of a quantum system, but it is often possible to gain partial information. Guessing the state of a system is thus the problem of state discrimination.
Measurement
In the context of quantum information, we consider the positive operator-valued measure (POVM), a general formulation of a measurement. Effectively: a set of Hermitian, positive semidefinite operators that together sum to the identity. This is a generalisation of the more familiar von Neumann measurement scheme (recall the decomposition of a Hilbert space into orthogonal projectors). It can be shown that there are cases for which the simpler projective measurement is insufficient.
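As a small illustration (not from the slides), the symmetric "trine" POVM on a qubit has three outcomes in a two-dimensional space, which no projective measurement on the qubit alone can achieve; we can check the defining properties numerically:

```python
import math

# Trine states: three qubit pure states whose Bloch vectors sit at 120
# degrees in a plane.  Each POVM element E_k = (2/3)|psi_k><psi_k| is
# positive semidefinite (rank one, eigenvalue 2/3), and the three elements
# sum to the identity, so they form a valid 3-outcome POVM in dimension 2.
def trine_vector(k):
    angle = 2 * math.pi * k / 3          # Bloch angle
    return (math.cos(angle / 2), math.sin(angle / 2))  # real amplitudes

def povm_element(k):
    a, b = trine_vector(k)
    return [[2 / 3 * a * a, 2 / 3 * a * b],
            [2 / 3 * a * b, 2 / 3 * b * b]]

elements = [povm_element(k) for k in range(3)]

# Completeness check: the elements sum to the 2x2 identity.
total = [[sum(E[i][j] for E in elements) for j in range(2)] for i in range(2)]
print(total)  # ≈ [[1, 0], [0, 1]]
```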
Quantum state discrimination
As a simple game between two people: Alice chooses a state for the system from a finite set of states, then gives the system to Bob, who has to guess its state. Bob knows the possible states with their associated probabilities. Bob performs some measurement, then makes a guess.
[Diagram: Alice picks x and sends the state; Bob measures and outputs a guess for x.]
Quantum state discrimination problems
A more formal description of the problem follows. We are given a state ρ_i chosen from a finite set of states {ρ_1, ρ_2, …, ρ_n} with probabilities {p_1, p_2, …, p_n}. To guess i, we measure the state; if the outcome is j, our guess is j. Consider a positive operator-valued measurement M, whose measurement operators M_j (corresponding to guess j) satisfy M_j ⪰ 0 and Σ_j M_j = I. The probability that, for input i, the outcome is k, is Tr[M_k ρ_i]. The averaged probability of success is Σ_i p_i Tr[M_i ρ_i]. We would like to maximise this!
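A minimal numerical sketch (the states and measurement are illustrative, not from the slides) of the success-probability formula p = Σ_i p_i Tr[M_i ρ_i], for equiprobable states |0⟩⟨0| and |+⟩⟨+| with the computational-basis measurement, a simple but suboptimal choice here:

```python
# 2x2 real matrices stored as nested lists; stdlib only.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

rho0 = [[1, 0], [0, 0]]          # |0><0|
rho1 = [[0.5, 0.5], [0.5, 0.5]]  # |+><+|
M0 = [[1, 0], [0, 0]]            # outcome 0: guess rho0
M1 = [[0, 0], [0, 1]]            # outcome 1: guess rho1

# p = sum_i p_i Tr[M_i rho_i] with p_0 = p_1 = 1/2.
p_success = 0.5 * trace(mat_mul(M0, rho0)) + 0.5 * trace(mat_mul(M1, rho1))
print(p_success)  # 0.75
```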
Semidefinite programming
This is a special case of convex optimisation. In terms of some variable X ∈ S^n (where S^n is the set of symmetric n × n matrices):
maximise Tr[CX]
subject to Tr[A_i X] = b_i, i = 1, …, p, and X ⪰ 0.
We solve this using Lagrangian duality: intuitively, we want to extend the objective function with a weighted sum of the constraints. The weights then become dual variables, and it is often simple to optimise over the dual. This is useful later! (And semidefinite programs can generally be solved in polynomial time.)
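The primal-dual pair referred to above can be written out explicitly; a standard-form sketch (same C, A_i, b_i notation as the slide):

```latex
% Standard-form SDP and its Lagrangian dual.  Weak duality gives
% Tr[CX] <= b^T y for any feasible pair; under mild regularity
% (Slater's condition) the two optimal values coincide.
\begin{align*}
\text{(P)}\quad & \max_{X \in S^n} \ \operatorname{Tr}[CX]
  \quad \text{s.t.}\quad \operatorname{Tr}[A_i X] = b_i \ (i = 1,\dots,p),
  \ X \succeq 0,\\
\text{(D)}\quad & \min_{y \in \mathbb{R}^p} \ b^{\mathsf{T}} y
  \quad \text{s.t.}\quad \sum_{i=1}^{p} y_i A_i - C \succeq 0.
\end{align*}
```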
Semidefinite programming
It is worth noting that in semidefinite program form, the usual case of state discrimination described above can schematically be written as (with N being the probability normalisation):
Maximise (1/N) Σ_x Tr[ρ_x M_x]
Subject to M_x ⪰ 0, Σ_x M_x = I
Post-measurement information
We address a state discrimination problem in which, after the measurement, we are given extra information about the state. Extending the game described earlier: Bob is sent a state ρ_xy; after his measurement, Bob is given y. Here x is taken from some set X, which we call the input, and y is taken from Y, the encoding.
[Diagram: Alice picks x and y and sends ρ_xy; Bob measures, then receives y, then outputs a guess for x.]
Post-measurement information
A (simple!) classical example in which post-measurement information is relevant: x ∈ {0,1} is a classical bit, and we have a one-bit encoding b ∈ {0,1}. Alice chooses x and b at random, sending Bob the bit x ⊕ b = (x + b) mod 2. Bob therefore has a randomly chosen bit in his possession; without any information on the value of b, his probability of success is only ½. If he is additionally given the value of b, he can always correctly determine the value of x. In quantum situations, as before, we expect Bob to use a measurement, and we would like to determine the optimal one.
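The classical example above can be checked exhaustively; this small script (not from the slides) enumerates all equally likely (x, b) pairs and all of Bob's deterministic strategies:

```python
from itertools import product

# Alice picks x and b uniformly and sends s = x XOR b.  Without b, the bit s
# is uniform and carries no information about x; once b is announced, Bob
# recovers x = s XOR b with certainty.

cases = list(product([0, 1], [0, 1]))  # all (x, b) pairs, equally likely

# With post-measurement information: guess x as s XOR b.
p_with = sum(((x ^ b) ^ b) == x for x, b in cases) / len(cases)

# Without b, Bob's best deterministic strategy is some function of s alone;
# try all four functions {0,1} -> {0,1}, encoded as (guess_on_0, guess_on_1).
p_without = max(
    sum(guess[x ^ b] == x for x, b in cases) / len(cases)
    for guess in product([0, 1], repeat=2)
)

print(p_with, p_without)  # 1.0 0.5
```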
Post-measurement information
It is possible for post-measurement information to be useless: being given y may not increase the probability that we guess x correctly. When is it useful? The obvious way to quantify this is the difference in success probability with and without post-measurement information. It is possible to derive an exact condition in two dimensions, and partial information in higher dimensions. Classically, by contrast, post-measurement information is always useful (the previous example makes this intuitive).
Post-measurement information
We would like more information about possible optimal measurements. (For now, we'll work with both x and y taking values 0 or 1.) Let's quickly derive an optimality condition. For convenience, write b_00 = ρ_00 + ρ_01, b_01 = ρ_00 + ρ_11, and so on: b_x0x1 collects the states for which the strategy "output x_y when told y" succeeds. Then our problem can be written as the semidefinite program
Maximise Σ_xy Tr[b_xy M_xy] (this is the success probability, up to normalisation!)
Subject to M_xy ⪰ 0, Σ_xy M_xy = I (simply from the definition of a POVM).
We solve by setting the optimal value d* of the dual equal to the optimal value p* of the primal (strong duality). Solving, we obtain optimality conditions for a candidate measurement with only M_00 and M_11 nonzero: b_00 M_00 + b_11 M_11 is Hermitian, and b_00 M_00 + b_11 M_11 ⪰ b_ij for all i, j.
Two-dimensional case
With two basis states, it is possible to cleanly represent states as vectors on the Bloch sphere.
Two-dimensional case
It is in fact helpful here to briefly reduce our case to one without post-measurement information, so that we can write down the optimal measurement; knowing this will tell us whether post-measurement information is useful! We can use σ*_0 = ½(ρ_00 + ρ_01) and σ*_1 = ½(ρ_10 + ρ_11); this effectively removes the encoding from consideration. Previous results in the field (Helstrom 1976) tell us that the projectors onto the positive and negative eigenspaces of σ*_0 − σ*_1 form the optimal measurement in this case.
Two-dimensional case
If σ is the vector of Pauli matrices, then we can write the states in the form ρ_00 = ½(I + v_0 · σ), ρ_01 = ½(I + v_1 · σ), and so on, with ρ_10 and ρ_11 antipodal to ρ_00 and ρ_01 (as in the figures later). Note that v_0 also represents the state on the Bloch sphere! In this form, we can show that the optimal measurement to distinguish between σ*_0 and σ*_1 is given by:
M_0 = ½(I + m_0 · σ)
M_1 = ½(I + m_1 · σ)
m_0 = −m_1 = (v_0 + v_1) / |v_0 + v_1|
This allows us to compare any result we derive for post-measurement information against the success probability of these measurements.
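The success probability of this optimal measurement has a clean Bloch-vector form; a sketch, assuming the symmetric (antipodal) arrangement above, so that σ*_0 and σ*_1 have Bloch vectors ±½(v_0 + v_1) and Helstrom's bound ½ + ¼|r_0 − r_1| for two equiprobable qubit states with Bloch vectors r_0, r_1 becomes ½ + ¼|v_0 + v_1|:

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def helstrom_success(v0, v1):
    # 1/2 + |v_0 + v_1| / 4 under the symmetric arrangement assumed above.
    s = [a + b for a, b in zip(v0, v1)]
    return 0.5 + norm(s) / 4

# Example: v_0, v_1 orthogonal unit vectors, i.e. the angle between the
# states is pi/2.
p = helstrom_success((1, 0, 0), (0, 1, 0))
print(p)  # 1/2 + sqrt(2)/4 ≈ 0.8536
```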
Two-dimensional case
To derive information about the post-measurement case, we recall the optimality condition for a measurement and retain the Pauli-matrix notation: b_00 = I + ½(v_0 + v_1) · σ, and so on. Let's now suggest the measurement M_00 = M_0, M_11 = M_1, M_10 = M_01 = 0, where M_0, M_1 are the optimal measurements for the case without post-measurement information. Then Σ_xy b_xy M_xy (measurements × states) = I(1 + ½|v_0 + v_1|). Now recall the measurement optimality condition and check it against b_01 = I + ½(v_0 − v_1) · σ: for b_00 M_00 + b_11 M_11 ⪰ b_ij to hold we need |v_0 + v_1| ≥ |v_0 − v_1|, which is true exactly when the angle between the states is at most π/2.
Two-dimensional case
We have therefore shown that when the states form an angle θ ≤ π/2, the optimal measurement is simply the optimal measurement for the case without post-measurement information. This implies that post-measurement information is useless for θ ≤ π/2. We can in fact fairly simply derive the optimal measurement for θ > π/2:
M_0 = ½(I + m_0 · σ)
M_1 = ½(I + m_1 · σ)
m_0 = −m_1 = (v_0 − v_1) / |v_0 − v_1|
M_01 = M_0, M_10 = M_1, M_11 = M_00 = 0
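The crossover can be seen numerically; a sketch under the symmetric arrangement with v_0, v_1 unit Bloch vectors at angle θ, using |v_0 ± v_1| = 2cos(θ/2), 2sin(θ/2):

```python
import math

# Success probabilities of the two strategies:
#   without PI: 1/2 + |v_0 + v_1|/4 = 1/2 + cos(theta/2)/2
#   with PI:    1/2 + |v_0 - v_1|/4 = 1/2 + sin(theta/2)/2
#               (this strategy is the optimal one only for theta > pi/2)

def p_without_pi(theta):
    return 0.5 + 0.5 * math.cos(theta / 2)

def p_with_pi(theta):
    return 0.5 + 0.5 * math.sin(theta / 2)

# The two curves cross exactly at theta = pi/2; beyond it the PI strategy
# wins, and at theta = pi (pairwise orthogonal states) it succeeds with
# certainty.
for theta in (math.pi / 3, math.pi / 2, 2 * math.pi / 3, math.pi):
    print(round(theta, 3), round(p_without_pi(theta), 4), round(p_with_pi(theta), 4))
```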
Two-dimensional case
[Bloch-circle figure: ρ_00, ρ_01 on one side and ρ_11, ρ_10 on the other, separated by angle θ, with a dashed measurement line.]
Post-measurement information is useless if the angle is less than π/2. The dashed line corresponds to the measurement we would expect from standard state discrimination: it represents the attempt to distinguish ρ_00 + ρ_01 from ρ_11 + ρ_10, and thus we output the same bit irrespective of the encoding information.
Two-dimensional case
[Bloch-circle figure: ρ_00, ρ_01 on one side and ρ_11, ρ_10 on the other, separated by angle θ, with a dashed measurement line.]
Post-measurement information is useful if the angle is greater than π/2. The dashed line corresponds to the measurement we derived using post-measurement information: it represents the attempt to distinguish ρ_00 + ρ_10 from ρ_01 + ρ_11, and our output depends on the post-measurement information we receive.
Three or more states
What about larger problems? Unfortunately, our Bloch sphere simplification is no longer so simple. Let's look at an example in which there are three possible input states with two encodings. (This is mathematically simpler! We will attempt to generalise.) Considering the case of state discrimination without post-measurement information, we can slightly modify the semidefinite program: instead of requiring Σ_x M_x = I, we require only Σ_x M_x = cI, for some c.
Three or more states
The optimal measurement operators in our new case can quite simply be shown to be c times the optimal measurement operators in the previous case. Then, we add in post-measurement information, and write:
Maximise (1/6) Σ_x1,x2 Tr[b_x1x2 M_x1x2] = p_PI
Subject to M_x1x2 ⪰ 0, Σ_x1,x2 M_x1x2 = I
Three or more states
In order to solve this, we simply split our problem into three separate subcases (a partition chosen at random), with addition taken mod 3:
Maximise (1/6) Σ_x Tr[b_xx M(1)_x] = p_1, subject to M(1)_x ⪰ 0, Σ_x M(1)_x = I
Maximise (1/6) Σ_x Tr[b_x(x+1) M(2)_x] = p_2, subject to M(2)_x ⪰ 0, Σ_x M(2)_x = I
Maximise (1/6) Σ_x Tr[b_(x+1)x M(3)_x] = p_3, subject to M(3)_x ⪰ 0, Σ_x M(3)_x = I
Three or more states
Using our generalisation of the state-discrimination SDP, and noting that
Σ_x M_xx ⪯ c_1 I, Σ_x M_x(x+1) ⪯ c_2 I, Σ_x M_(x+1)x ⪯ c_3 I,
it is quite simple to show that the success probability with post-measurement information is bounded by the largest of the success probabilities of our three subcases. Thus, the optimal measurement with post-measurement information is the optimal measurement for some case selected from an even partition of the states in the problem.
Three or more states
We can in fact show that the previous case extends to an arbitrary number of states, with certain (fairly intuitive!) restrictions on the partitions chosen. It would be exciting to have a sharper bound! It would also be interesting to know how well certain generic measurements perform in this case. An example is the pretty good measurement, or square-root measurement: each weighted state p_i ρ_i is conjugated by the inverse square root of the average state ρ = Σ_i p_i ρ_i, giving operators M_i = ρ^(−1/2) p_i ρ_i ρ^(−1/2). It is, in fact, generally pretty good.
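A sketch of the pretty good measurement on a small example not taken from the slides: two equiprobable pure qubit states |0⟩ and |+⟩, with M_i = ρ^(−1/2)(½ρ_i)ρ^(−1/2) computed by hand-rolled 2×2 linear algebra (stdlib only):

```python
import math

# All matrices here are real symmetric 2x2, stored as nested lists.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def inv_sqrt(A):
    # Inverse square root of a positive definite symmetric 2x2 matrix,
    # via its spectral decomposition.
    a, b, d = A[0][0], A[0][1], A[1][1]
    disc = math.sqrt((a - d) ** 2 + 4 * b * b)
    lam1, lam2 = (a + d + disc) / 2, (a + d - disc) / 2
    theta = 0.5 * math.atan2(2 * b, a - d)  # eigenvector angle for lam1
    c, s = math.cos(theta), math.sin(theta)
    f1, f2 = lam1 ** -0.5, lam2 ** -0.5
    return [[f1 * c * c + f2 * s * s, (f1 - f2) * c * s],
            [(f1 - f2) * c * s, f1 * s * s + f2 * c * c]]

rho0 = [[1, 0], [0, 0]]             # |0><0|
rho1 = [[0.5, 0.5], [0.5, 0.5]]     # |+><+|
rho = [[0.75, 0.25], [0.25, 0.25]]  # average state (rho0 + rho1) / 2

S = inv_sqrt(rho)
# PGM elements M_i = S (p_i rho_i) S with p_i = 1/2.
M = [mul(mul(S, [[e / 2 for e in row] for row in r]), S) for r in (rho0, rho1)]

p = 0.5 * trace(mul(M[0], rho0)) + 0.5 * trace(mul(M[1], rho1))
print(p)  # ≈ 0.8536
```

For this pair the computed value equals ½ + √2/4, the Helstrom optimum, illustrating that the pretty good measurement is indeed pretty good here.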