Quantum state discrimination with post-measurement information
Deepthi Gopal, Caltech
Stephanie Wehner, National University of Singapore

Quantum states
A state is a mathematical object describing the parameters of a system. If some experimental procedure prepares a system, the state describes the initial conditions. Classically: observable dynamical variables. Quantum mechanically: a normalised vector. We cannot directly determine the state of a quantum system, but it is often possible to gain partial information. Guessing the state of a system is the problem of state discrimination.

Measurement
In the context of quantum information, we consider the positive operator-valued measure (POVM), a general formulation of a measurement: effectively, a set of Hermitian, positive semidefinite operators that together sum to the identity. This is a generalisation of the more familiar von Neumann measurement scheme (recall the decomposition of a Hilbert space into orthogonal projectors). It can be shown that there are cases for which the simpler projective measurement is insufficient.
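
As a concrete illustration (not from the talk), here is a minimal Python sketch, assuming numpy, that checks the three defining POVM properties for a candidate set of operators:

```python
import numpy as np

def is_povm(elements, atol=1e-9):
    """True if the operators are Hermitian, PSD, and sum to the identity."""
    dim = elements[0].shape[0]
    total = np.zeros((dim, dim), dtype=complex)
    for E in elements:
        if not np.allclose(E, E.conj().T, atol=atol):  # Hermitian?
            return False
        if np.linalg.eigvalsh(E).min() < -atol:        # positive semidefinite?
            return False
        total += E
    return np.allclose(total, np.eye(dim), atol=atol)  # completeness

# A projective (von Neumann) measurement is a special case of a POVM:
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)
print(is_povm([P0, P1]))  # True
```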

Quantum state discrimination
As a simple game between two people: Alice chooses a state for the system from a finite set of states. She gives the system to Bob, who has to guess its state. Bob knows the possible states and their associated probabilities. Bob performs some measurement, then makes a guess.
[Diagram: Alice picks x and sends the corresponding state to Bob; Bob measures and outputs a guess for x.]

Quantum state discrimination problems
A more formal description of the problem: we are given a state ρ_i chosen from a finite set of states {ρ_1, ρ_2, …, ρ_n} with probabilities {p_1, p_2, …, p_n}. To guess i, we measure the state; if the outcome is j, our guess is j. Consider a positive operator-valued measurement M whose measurement operators M_j (corresponding to outcome j) satisfy M_j ≥ 0 and Σ_j M_j = I. The probability p(k|i) that, for input i, the outcome is k is Tr[M_k ρ_i]. The averaged probability of success is Σ_i p_i Tr[M_i ρ_i]. We would like to maximise this!
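
A small sketch of this success probability, assuming Python with numpy; the equiprobable |0⟩-versus-|+⟩ ensemble is an illustrative example, not one from the talk:

```python
import numpy as np

def success_probability(probs, states, povm):
    """Average success probability sum_i p_i Tr[M_i rho_i]."""
    return sum(p * np.trace(M @ rho).real
               for p, rho, M in zip(probs, states, povm))

# Two equiprobable states |0> and |+>, measured in the computational basis:
rho0 = np.diag([1.0, 0.0]).astype(complex)
plus = np.ones(2, dtype=complex) / np.sqrt(2)
rho1 = np.outer(plus, plus.conj())
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)
print(success_probability([0.5, 0.5], [rho0, rho1], [P0, P1]))  # 0.75
```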

Semidefinite programming
This is a special case of convex optimisation. In terms of some variable X ∈ S^n (where S^n is the set of symmetric n × n matrices):

maximise Tr[CX]
subject to Tr[A_i X] = b_i, i = 1, …, p, and X ≥ 0.

We solve this using Lagrangian duality: intuitively, we extend the objective function with a weighted sum of the constraints. The weights are then the dual variables, and it is often simple to optimise over the dual. This is useful later! (Semidefinite programs can generally be solved in polynomial time.)

Semidefinite programming
It is worth noting that in semidefinite program form, the usual case of state discrimination described above can schematically be written as (with N being a probability/normalisation factor):

maximise N Σ_x Tr[ρ_x M_x]
subject to M_x ≥ 0, Σ_x M_x = I.
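
This SDP is small enough to hand directly to a solver. A hedged sketch using cvxpy (assumed installed; the states and priors are illustrative, with N folded into the priors p_x):

```python
import numpy as np
import cvxpy as cp

def optimal_discrimination(probs, states):
    """Solve max sum_x p_x Tr[rho_x M_x] s.t. M_x >= 0, sum_x M_x = I."""
    dim = states[0].shape[0]
    M = [cp.Variable((dim, dim), hermitian=True) for _ in states]
    objective = cp.Maximize(cp.real(
        sum(p * cp.trace(Mx @ rho) for p, rho, Mx in zip(probs, states, M))))
    constraints = [Mx >> 0 for Mx in M] + [sum(M) == np.eye(dim)]
    return cp.Problem(objective, constraints).solve()

# |0> vs |+> with equal priors; the optimal value is 1/2 + 1/(2*sqrt(2)) ~ 0.854
rho0 = np.diag([1.0, 0.0]).astype(complex)
plus = np.ones(2, dtype=complex) / np.sqrt(2)
print(optimal_discrimination([0.5, 0.5], [rho0, np.outer(plus, plus.conj())]))
```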

Post-measurement information
We address a state discrimination problem in which, after the measurement, we are given extra information about the state. Extending the game described earlier: Bob is sent a state ρ_xy; after his measurement, Bob is given y. Here x is taken from some set X, which we call the input, and y is taken from Y, the encoding.
[Diagram: Alice picks x and y and sends ρ_xy to Bob; Bob measures, receives y, then outputs a guess for x.]

Post-measurement information
A (simple!) classical example in which post-measurement information is relevant: x ∈ {0, 1} is a classical bit, and we have only one encoding bit, b ∈ {0, 1}. Alice chooses x and b at random and sends Bob the bit x ⊕ b = (x + b) mod 2. Bob therefore has a randomly chosen bit in his possession; without any information on the value of b, his probability of success is only ½. If he is additionally given the value of b, he can always correctly determine the value of x. In quantum situations, as before, we expect Bob to use a measurement, and it would be nice to determine the optimal one.
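
A toy simulation of this classical example (Python standard library only):

```python
import random

def trial(know_b):
    """One round: Alice picks x, b at random and sends x XOR b."""
    x, b = random.randint(0, 1), random.randint(0, 1)
    received = x ^ b
    guess = received ^ b if know_b else random.randint(0, 1)
    return guess == x

n = 100_000
print(sum(trial(False) for _ in range(n)) / n)  # ~0.5 without b
print(sum(trial(True) for _ in range(n)) / n)   # 1.0 once b is revealed
```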

Post-measurement information
It is possible for post-measurement information to be useless: being given y may not increase the probability that we guess x correctly. When is it useful? It is possible to derive a complete condition in two dimensions, and some information in higher dimensions. The obvious way to quantify usefulness is the difference in success probability with and without post-measurement information (PI). Classically, in fact, it is always useful (the previous example makes this intuitive).

Post-measurement information
We would like more information about possible optimal measurements. (For now, we will work with both x and y taking values 0 or 1.) Let us quickly derive an optimality condition. For convenience, write b_00 = ρ_00 + ρ_01, b_01 = ρ_00 + ρ_11, and so on. Then our problem can be written as the semidefinite program

maximise Σ_xy Tr[b_xy M_xy] (this is the success probability!)
subject to M_xy ≥ 0, Σ_xy M_xy = I (simply from the definition).

We solve by setting the optimal value d* of the dual equal to the optimal value p* of the primal. Solving, we obtain the optimality conditions: b_00 M_0 + b_11 M_1 is Hermitian, and b_00 M_0 + b_11 M_1 ≥ b_ij for all i, j.

Two-dimensional case
With two basis states, it is possible to cleanly represent states as vectors on the Bloch sphere.

Two-dimensional case
It is in fact helpful here to briefly reduce our case to one without post-measurement information, so that we can write down the optimal measurement; knowing this will tell us whether post-measurement information is useful. We can use ρ*_0 = ½(ρ_00 + ρ_01) and ρ*_1 = ½(ρ_10 + ρ_11); this effectively removes the encoding from consideration. Previous results in the field (Helstrom 1976) tell us that the projectors onto the positive and negative eigenspaces of ρ*_0 − ρ*_1 form the optimal measurement in this case.
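
The Helstrom construction cited here is easy to reproduce numerically. A sketch assuming numpy; the example states are illustrative:

```python
import numpy as np

def helstrom_measurement(rho0, rho1):
    """Projectors onto the positive / non-positive eigenspaces of rho0 - rho1."""
    vals, vecs = np.linalg.eigh(rho0 - rho1)
    pos = vecs[:, vals > 1e-12]        # eigenvectors with positive eigenvalue
    M0 = pos @ pos.conj().T
    return M0, np.eye(rho0.shape[0]) - M0

# Equal priors: p_succ = (Tr[M0 rho0] + Tr[M1 rho1]) / 2
rho0 = np.diag([1.0, 0.0]).astype(complex)
plus = np.ones(2, dtype=complex) / np.sqrt(2)
rho1 = np.outer(plus, plus.conj())
M0, M1 = helstrom_measurement(rho0, rho1)
print(0.5 * (np.trace(M0 @ rho0) + np.trace(M1 @ rho1)).real)  # ~0.854
```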

Two-dimensional case
If σ is the vector of the Pauli matrices, then we can write the states in the form ρ_00 = ½(I + v_0 · σ), and so on. Note that v_0 also represents the state as a point on the Bloch sphere! In this form, we can show that the optimal measurement to distinguish ρ*_0 and ρ*_1 is given by

M_0 = ½(I + m_0 · σ), M_1 = ½(I + m_1 · σ), with m_0 = −m_1 = (v_0 + v_1)/‖v_0 + v_1‖.

This allows us to compare any result we derive for post-measurement information against the success probability of these measurements.
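
A sketch of this Bloch-vector construction, assuming numpy; the helper names `from_bloch` and `slide_measurement` are ours, not the talk's:

```python
import numpy as np

SIGMA = np.array([[[0, 1], [1, 0]],     # sigma_x
                  [[0, -1j], [1j, 0]],  # sigma_y
                  [[1, 0], [0, -1]]])   # sigma_z

def from_bloch(v):
    """Density matrix (I + v . sigma)/2 for a Bloch vector v."""
    return 0.5 * (np.eye(2) + np.tensordot(v, SIGMA, axes=1))

def slide_measurement(v0, v1):
    """M_0 = (I + m0 . sigma)/2 with m0 = -m1 = (v0 + v1)/||v0 + v1||."""
    m0 = (v0 + v1) / np.linalg.norm(v0 + v1)
    M0 = 0.5 * (np.eye(2) + np.tensordot(m0, SIGMA, axes=1))
    return M0, np.eye(2) - M0  # m1 = -m0 gives M1 = I - M0
```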

Two-dimensional case
To derive information about the post-measurement case, we recall the optimality condition for a measurement and retain the Pauli-matrix notation: b_00 = I + ½(v_0 + v_1) · σ, and so on. Now consider the measurement M_00 = M_0, M_11 = M_1, M_10 = M_01 = 0, where M_0 and M_1 are the optimal measurements for the case without post-measurement information. Then Σ_xy b_xy M_xy (measurements × states) = (1 + ½‖v_0 + v_1‖) I. Now recall the measurement optimality condition: we check b_01 = I + ½(v_0 − v_1) · σ, and note that for b_00 M_0 + b_11 M_1 ≥ b_ij to hold we need ‖v_0 + v_1‖ ≥ ‖v_0 − v_1‖, which is true when the angle between the states is at most π/2.
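
The resulting criterion is a one-line check; a small sketch assuming numpy:

```python
import numpy as np

def pi_useless(v0, v1):
    """True when ||v0 + v1|| >= ||v0 - v1|| holds, i.e. when the angle
    between the Bloch vectors is at most pi/2 (equivalently v0 . v1 >= 0)."""
    return np.linalg.norm(v0 + v1) >= np.linalg.norm(v0 - v1)

print(pi_useless(np.array([1, 0, 0]), np.array([0, 1, 0])))  # True (angle pi/2)
print(pi_useless(np.array([1, 0, 0]),
                 np.array([-1, 1, 0]) / np.sqrt(2)))         # False (angle 3pi/4)
```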

Two-dimensional case
We have therefore shown that when the states form an angle of at most π/2, the optimal measurement is simply the optimal measurement for the case without post-measurement information. This implies that post-measurement information is useless for angles up to π/2. We can in fact fairly simply derive the optimal measurement for angles θ > π/2:

M_0 = ½(I + m_0 · σ), M_1 = ½(I + m_1 · σ), with m_0 = −m_1 = (v_0 − v_1)/‖v_0 − v_1‖,
and M_01 = M_0, M_10 = M_1, M_11 = M_00 = 0.

Two-dimensional case
[Figure: Bloch-sphere diagram of ρ_00, ρ_01, ρ_10, ρ_11, with an angle θ < π/2 between the states.]
Post-measurement information is useless if the angle is less than π/2. The dashed line corresponds to the measurement we would expect from standard state discrimination: it represents the attempt to distinguish ρ_00 + ρ_01 from ρ_11 + ρ_10, so we output the same bit irrespective of the encoding information.

Two-dimensional case
[Figure: Bloch-sphere diagram of ρ_00, ρ_01, ρ_10, ρ_11, with an angle θ > π/2 between the states.]
Post-measurement information is useful if the angle is greater than π/2. The dashed line corresponds to the measurement we derived using post-measurement information: it represents the attempt to distinguish ρ_00 + ρ_10 from ρ_01 + ρ_11, and our output depends on the post-measurement information we receive.

Three or more states
What about larger problems? Unfortunately, our Bloch-sphere simplification is no longer so simple. Let us look at an example in which there are three possible input states with two encodings. (This is mathematically simpler! We will attempt to generalise.) Considering the case of state discrimination without post-measurement information, we can slightly modify the semidefinite program: instead of requiring Σ_x M_x = I, we require only Σ_x M_x = cI, for some c.

Three or more states
The optimal measurement operators in our new case can quite simply be shown to be c times the optimal measurement operators in the previous case. Then we add in post-measurement information and write:

maximise (1/6) Σ_{x1,x2} Tr[b_{x1x2} M_{x1x2}] = p^PI
subject to M_{x1x2} ≥ 0, Σ_{x1,x2} M_{x1x2} = I.

Three or more states
In order to solve this, we simply split our problem into three separate subcases (a partition chosen at random), with addition taken mod 3:

maximise (1/6) Σ_x Tr[b_xx M_x^(1)] = p_1, subject to M_x^(1) ≥ 0, Σ_x M_x^(1) = I;
maximise (1/6) Σ_x Tr[b_{x(x+1)} M_x^(2)] = p_2, subject to M_x^(2) ≥ 0, Σ_x M_x^(2) = I;
maximise (1/6) Σ_x Tr[b_{(x+1)x} M_x^(3)] = p_3, subject to M_x^(3) ≥ 0, Σ_x M_x^(3) = I.
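
Each subcase is again a small SDP. A hedged cvxpy sketch of one of them; the b operators are left as inputs, since they are built from the states of the specific problem:

```python
import numpy as np
import cvxpy as cp

def subcase_value(b_ops):
    """Solve max (1/6) sum_x Tr[b_x M_x] s.t. M_x >= 0, sum_x M_x = I."""
    dim = b_ops[0].shape[0]
    M = [cp.Variable((dim, dim), hermitian=True) for _ in b_ops]
    objective = cp.Maximize(cp.real(
        sum(cp.trace(b @ Mx) for b, Mx in zip(b_ops, M))) / 6)
    constraints = [Mx >> 0 for Mx in M] + [sum(M) == np.eye(dim)]
    return cp.Problem(objective, constraints).solve()
```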

Three or more states
Using our generalisation of the state-discrimination SDP, and noting that

Σ_x M_xx ≤ c_1 I, Σ_x M_{x(x+1)} ≤ c_2 I, Σ_x M_{(x+1)x} ≤ c_3 I,

it is quite simple to show that the success probability with post-measurement information is bounded by the largest of the success probabilities of our three subcases. Thus, the optimal measurement with post-measurement information is the optimal measurement for some case selected from an even partition of the states in the problem.

Three or more states
We can in fact show that the previous case extends to an arbitrary number of states, with certain (fairly intuitive!) restrictions on the partitions chosen. It would be exciting to have a sharper bound! It would also be interesting to know how well certain generic measurements perform in this setting. An example is the pretty good measurement, or square-root measurement, in which each measurement operator is weighted by the square root of the probability associated with its state. It is, in fact, generally pretty good.
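
For reference, a sketch of the standard square-root construction, M_i = S^(−1/2) p_i ρ_i S^(−1/2) with S = Σ_j p_j ρ_j, assuming numpy and scipy (the talk names the measurement but does not spell out the formula):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def pretty_good_measurement(probs, states):
    """Square-root measurement; S is assumed to have full rank."""
    S = sum(p * rho for p, rho in zip(probs, states))
    S_inv_sqrt = fractional_matrix_power(S, -0.5)
    return [S_inv_sqrt @ (p * rho) @ S_inv_sqrt for p, rho in zip(probs, states)]

rho0 = np.diag([1.0, 0.0]).astype(complex)
plus = np.ones(2, dtype=complex) / np.sqrt(2)
pgm = pretty_good_measurement([0.5, 0.5], [rho0, np.outer(plus, plus.conj())])
print(np.allclose(sum(pgm), np.eye(2)))  # True: the elements form a POVM
```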