Fidelity and trace norm
CS 867/QIC 890 Semidefinite Programming in Quantum Information

In this lecture we will see two simple examples of semidefinite programs for quantities that are interesting in the theory of quantum information: the first is the trace norm of a given operator and the second is the fidelity between two positive semidefinite operators. When considering both of these semidefinite programming formulations, we will make use of a lemma that will come up several more times throughout the course; it gives a precise characterization of the relationship between the blocks of any 2-by-2 positive semidefinite block operator.

4.1 Trace norm

Suppose $\mathcal{X}$ and $\mathcal{Y}$ are complex Euclidean spaces and $A \in \mathrm{L}(\mathcal{Y}, \mathcal{X})$ is an arbitrary operator. By the singular value decomposition theorem, one may always write

$$A = \sum_{k=1}^{r} s_k x_k y_k^{\ast} \tag{4.1}$$

for $r = \operatorname{rank}(A)$, for some choice of orthonormal sets $\{x_1, \ldots, x_r\} \subset \mathcal{X}$ and $\{y_1, \ldots, y_r\} \subset \mathcal{Y}$, and for positive real numbers $s_1, \ldots, s_r$, which we always assume are ordered from largest to smallest, so $s_1 \geq s_2 \geq \cdots \geq s_r > 0$. The values $s_1, \ldots, s_r$ are the singular values of $A$, and up to their ordering they are uniquely determined by $A$. The trace norm of $A$, which is denoted $\|A\|_1$, is defined as the sum of the singular values of $A$:

$$\|A\|_1 = s_1 + \cdots + s_r. \tag{4.2}$$

There are other, equivalent expressions of the trace norm, such as

$$\|A\|_1 = \operatorname{Tr}\sqrt{A A^{\ast}} = \operatorname{Tr}\sqrt{A^{\ast} A} \tag{4.3}$$

and

$$\|A\|_1 = \max\bigl\{ |\langle B, A\rangle| : B \in \mathrm{L}(\mathcal{Y}, \mathcal{X}),\ \|B\| \leq 1 \bigr\}. \tag{4.4}$$
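As a quick numerical sanity check of the expressions above, here is a minimal sketch, assuming the numpy and scipy packages are available (the operator and all names are illustrative): it computes the trace norm of a random operator as the sum of its singular values, via $\operatorname{Tr}\sqrt{A A^{\ast}}$, and via numpy's built-in nuclear norm.

```python
import numpy as np
from scipy.linalg import sqrtm

# Illustrative check of (4.2) and (4.3) for a random A in L(Y, X),
# with dim(X) = 3 and dim(Y) = 4.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

s = np.linalg.svd(A, compute_uv=False)             # singular values, largest first
via_svd = s.sum()                                  # (4.2): sum of the singular values
via_sqrt = np.trace(sqrtm(A @ A.conj().T)).real    # (4.3): Tr sqrt(A A*)
via_nuc = np.linalg.norm(A, "nuc")                 # numpy's nuclear norm

print(via_svd, via_sqrt, via_nuc)                  # all three agree up to rounding
```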

The second expression may alternatively be written as

$$\|A\|_1 = \max\bigl\{ \operatorname{Re}\langle B, A\rangle : B \in \mathrm{L}(\mathcal{Y}, \mathcal{X}),\ \|B\| \leq 1 \bigr\}; \tag{4.5}$$

it always holds that $\operatorname{Re}\langle B, A\rangle \leq |\langle B, A\rangle|$, and for an arbitrary choice of $B$ one can choose a complex number $\gamma$ with $|\gamma| = 1$, and therefore $\|\gamma B\| = \|B\|$, such that $\operatorname{Re}\langle \gamma B, A\rangle = |\langle B, A\rangle|$.

We'll write down an SDP for the trace norm of a given operator $A$ momentarily, but first let us state and prove the lemma about block operators that was suggested above.

Lemma 4.1. Let $\mathcal{X}$ and $\mathcal{Y}$ be complex Euclidean spaces, let $P \in \mathrm{Pos}(\mathcal{X})$ and $Q \in \mathrm{Pos}(\mathcal{Y})$ be positive semidefinite operators, and let $X \in \mathrm{L}(\mathcal{Y}, \mathcal{X})$ be an operator. It holds that

$$\begin{pmatrix} P & X \\ X^{\ast} & Q \end{pmatrix} \in \mathrm{Pos}(\mathcal{X} \oplus \mathcal{Y}) \tag{4.6}$$

if and only if $X = \sqrt{P}\, K \sqrt{Q}$ for some $K \in \mathrm{L}(\mathcal{Y}, \mathcal{X})$ satisfying $\|K\| \leq 1$.

Proof. Suppose first that $X = \sqrt{P}\, K \sqrt{Q}$ for $K \in \mathrm{L}(\mathcal{Y}, \mathcal{X})$ being an operator for which $\|K\| \leq 1$. It follows that $K K^{\ast} \leq \mathbb{1}_{\mathcal{X}}$, and therefore

$$0 \leq \begin{pmatrix} \sqrt{P}\, K \\ \sqrt{Q} \end{pmatrix} \begin{pmatrix} \sqrt{P}\, K \\ \sqrt{Q} \end{pmatrix}^{\ast} = \begin{pmatrix} \sqrt{P}\, K K^{\ast} \sqrt{P} & X \\ X^{\ast} & Q \end{pmatrix} \leq \begin{pmatrix} P & X \\ X^{\ast} & Q \end{pmatrix}. \tag{4.7}$$

For the reverse implication, assume

$$\begin{pmatrix} P & X \\ X^{\ast} & Q \end{pmatrix} \in \mathrm{Pos}(\mathcal{X} \oplus \mathcal{Y}), \tag{4.8}$$

and define

$$K = \sqrt{P}^{\,+} X \sqrt{Q}^{\,+}. \tag{4.9}$$

Here $\sqrt{P}^{\,+}$ and $\sqrt{Q}^{\,+}$ refer to the Moore-Penrose pseudo-inverses of $\sqrt{P}$ and $\sqrt{Q}$, respectively. For a case such as this, where the operator being inverted is positive semidefinite, the pseudo-inverse may be obtained by taking a spectral decomposition and replacing each nonzero eigenvalue by its reciprocal. It will be proved that $X = \sqrt{P}\, K \sqrt{Q}$ and $\|K\| \leq 1$.

Observe first that, for every Hermitian operator $H \in \mathrm{Herm}(\mathcal{X})$, the block operator

$$\begin{pmatrix} H & 0 \\ 0 & \mathbb{1}_{\mathcal{Y}} \end{pmatrix} \begin{pmatrix} P & X \\ X^{\ast} & Q \end{pmatrix} \begin{pmatrix} H & 0 \\ 0 & \mathbb{1}_{\mathcal{Y}} \end{pmatrix} = \begin{pmatrix} HPH & HX \\ X^{\ast}H & Q \end{pmatrix} \tag{4.10}$$

is positive semidefinite.

In particular, for $H = \Pi_{\ker(P)}$ being the projection onto the kernel of $P$, one has that the operator

$$\begin{pmatrix} 0 & \Pi_{\ker(P)} X \\ X^{\ast} \Pi_{\ker(P)} & Q \end{pmatrix} \tag{4.11}$$

is positive semidefinite, which implies that $\Pi_{\ker(P)} X = 0$ (a positive semidefinite block operator with a zero diagonal block must have zero off-diagonal blocks in the corresponding row and column), and therefore $\Pi_{\operatorname{im}(P)} X = X$. Through a similar argument, one finds that $X \Pi_{\operatorname{im}(Q)} = X$. It therefore follows that

$$\sqrt{P}\, K \sqrt{Q} = \Pi_{\operatorname{im}(P)}\, X\, \Pi_{\operatorname{im}(Q)} = X. \tag{4.12}$$

Next, note that

$$\begin{pmatrix} x & 0 \\ 0 & y \end{pmatrix}^{\ast} \begin{pmatrix} P & X \\ X^{\ast} & Q \end{pmatrix} \begin{pmatrix} x & 0 \\ 0 & y \end{pmatrix} = \begin{pmatrix} x^{\ast} P x & x^{\ast} X y \\ y^{\ast} X^{\ast} x & y^{\ast} Q y \end{pmatrix} \geq 0 \tag{4.13}$$

for every choice of vectors $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. Setting

$$x = \sqrt{P}^{\,+} u \quad \text{and} \quad y = \sqrt{Q}^{\,+} v \tag{4.14}$$

for arbitrarily chosen unit vectors $u \in \mathcal{X}$ and $v \in \mathcal{Y}$, one finds that

$$\begin{pmatrix} u^{\ast} \Pi_{\operatorname{im}(P)} u & u^{\ast} K v \\ v^{\ast} K^{\ast} u & v^{\ast} \Pi_{\operatorname{im}(Q)} v \end{pmatrix} \geq 0, \tag{4.15}$$

and therefore $|u^{\ast} K v| \leq 1$ (the off-diagonal entry of a 2-by-2 positive semidefinite matrix is at most the geometric mean of its diagonal entries, each of which is at most 1 here). As this inequality holds for all unit vectors $u$ and $v$, it follows that $\|K\| \leq 1$, as required.

Now we can devise an SDP whose optimal value is $\|A\|_1$ for a given operator $A \in \mathrm{L}(\mathcal{Y}, \mathcal{X})$. The primal problem, in a slightly simplified form, will be the following:

Primal problem

$$\begin{aligned}
\text{maximize:} \quad & \tfrac{1}{2}\langle K, A\rangle + \tfrac{1}{2}\overline{\langle K, A\rangle}\\
\text{subject to:} \quad & \begin{pmatrix} \mathbb{1}_{\mathcal{X}} & K \\ K^{\ast} & \mathbb{1}_{\mathcal{Y}} \end{pmatrix} \geq 0,\\
& K \in \mathrm{L}(\mathcal{Y}, \mathcal{X}).
\end{aligned}$$

The optimal value of this primal form is evidently $\|A\|_1$, because the objective function can alternatively be expressed as $\operatorname{Re}\langle K, A\rangle$, and by the lemma above $K$ is free to range over all elements of $\mathrm{L}(\mathcal{Y}, \mathcal{X})$ with spectral norm at most 1.
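The primal problem just described is easy to hand to a numerical solver. The following is a minimal sketch, assuming the cvxpy package together with an SDP-capable solver such as SCS is installed; it encodes the entire block operator as a single Hermitian variable whose diagonal blocks are pinned to identities, so that the off-diagonal block plays the role of $K$.

```python
import numpy as np
import cvxpy as cp

# Sketch of the primal trace-norm SDP.  The whole 2x2 block is one Hermitian
# variable W with its diagonal blocks pinned to identities, so its off-diagonal
# block plays the role of K in the problem stated above.
rng = np.random.default_rng(1)
n, m = 3, 4
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

W = cp.Variable((n + m, n + m), hermitian=True)
K = W[:n, n:]
problem = cp.Problem(
    cp.Maximize(cp.real(cp.trace(A.conj().T @ K))),   # Re<K, A>
    [W >> 0, W[:n, :n] == np.eye(n), W[n:, n:] == np.eye(m)],
)
problem.solve()

print(problem.value)                   # optimal value of the SDP
print(np.linalg.norm(A, "nuc"))        # trace norm of A, for comparison
```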

The primal problem above can be matched to the rigid formal definition of semidefinite programs we have adopted by first defining $\Phi \in \mathrm{T}(\mathcal{X} \oplus \mathcal{Y})$ as

$$\Phi\begin{pmatrix} X & \cdot \\ \cdot & Y \end{pmatrix} = \begin{pmatrix} X & 0 \\ 0 & Y \end{pmatrix} \tag{4.16}$$

for all $X \in \mathrm{L}(\mathcal{X})$ and $Y \in \mathrm{L}(\mathcal{Y})$, with the dots representing unnamed elements of $\mathrm{L}(\mathcal{Y}, \mathcal{X})$ and $\mathrm{L}(\mathcal{X}, \mathcal{Y})$ that get zeroed out by the map. The primal problem above is then equivalent to the primal problem of the semidefinite program

$$\left( \Phi,\ \frac{1}{2}\begin{pmatrix} 0 & A \\ A^{\ast} & 0 \end{pmatrix},\ \begin{pmatrix} \mathbb{1}_{\mathcal{X}} & 0 \\ 0 & \mathbb{1}_{\mathcal{Y}} \end{pmatrix} \right). \tag{4.17}$$

The dual problem is particularly easy to calculate for this semidefinite program because $\Phi$ is its own adjoint. After just a bit of simplification we obtain the following problem:

Dual problem

$$\begin{aligned}
\text{minimize:} \quad & \operatorname{Tr}(X) + \operatorname{Tr}(Y)\\
\text{subject to:} \quad & \begin{pmatrix} X & 0 \\ 0 & Y \end{pmatrix} \geq \frac{1}{2}\begin{pmatrix} 0 & A \\ A^{\ast} & 0 \end{pmatrix},\\
& X \in \mathrm{Herm}(\mathcal{X}),\ Y \in \mathrm{Herm}(\mathcal{Y}).
\end{aligned}$$

We can further simplify this problem by first noting that we must have $X \in \mathrm{Pos}(\mathcal{X})$ and $Y \in \mathrm{Pos}(\mathcal{Y})$ for every feasible $X$ and $Y$, and by setting $P = 2X$ and $Q = 2Y$ we get the following problem statement:

Dual problem

$$\begin{aligned}
\text{minimize:} \quad & \tfrac{1}{2}\operatorname{Tr}(P) + \tfrac{1}{2}\operatorname{Tr}(Q)\\
\text{subject to:} \quad & \begin{pmatrix} P & -A \\ -A^{\ast} & Q \end{pmatrix} \geq 0,\\
& P \in \mathrm{Pos}(\mathcal{X}),\ Q \in \mathrm{Pos}(\mathcal{Y}).
\end{aligned}$$

If you think about a singular value decomposition

$$A = \sum_{k=1}^{r} s_k x_k y_k^{\ast} \tag{4.18}$$

of $A$, then the dual problem makes perfect sense: we can take

$$P = \sum_{k=1}^{r} s_k x_k x_k^{\ast} \quad \text{and} \quad Q = \sum_{k=1}^{r} s_k y_k y_k^{\ast} \tag{4.19}$$

to obtain a feasible solution whose objective value is $\|A\|_1$, and by weak duality this is the best we can do.
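One can also check the dual reasoning above directly: the following sketch (again assuming numpy; the names are illustrative) builds the feasible pair (4.19) from a singular value decomposition and verifies both feasibility and the value of the objective.

```python
import numpy as np

# Build the dual-feasible pair (4.19) from an SVD of A and verify that it
# certifies the trace norm.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

U, s, Vh = np.linalg.svd(A, full_matrices=False)
P = (U * s) @ U.conj().T               # sum_k s_k x_k x_k*
Q = (Vh.conj().T * s) @ Vh             # sum_k s_k y_k y_k*

block = np.block([[P, -A], [-A.conj().T, Q]])
print(np.linalg.eigvalsh(block).min())                   # >= 0 up to rounding: feasibility
print(0.5 * (np.trace(P) + np.trace(Q)).real, s.sum())   # dual objective vs. trace norm
```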

4.2 Fidelity

The fidelity between two positive semidefinite operators $P, Q \in \mathrm{Pos}(\mathcal{X})$ is defined as

$$\mathrm{F}(P, Q) = \bigl\| \sqrt{P} \sqrt{Q} \bigr\|_1. \tag{4.20}$$

Sometimes the fidelity is defined as this quantity squared, but we'll use the definition above, without the square. The alternative expression

$$\mathrm{F}(P, Q) = \operatorname{Tr}\sqrt{\sqrt{P}\, Q \sqrt{P}} \tag{4.21}$$

is obtained by expanding the trace norm using the expression (4.3).

We will now describe a semidefinite program whose optimal value coincides with $\mathrm{F}(P, Q)$. While this task is easily accomplished by applying the semidefinite program for the trace norm to the operator $\sqrt{P}\sqrt{Q}$, we will aim for something simpler that does not require the computation of operator square roots. The primal problem of our semidefinite program will be as follows:

Primal problem

$$\begin{aligned}
\text{maximize:} \quad & \tfrac{1}{2}\operatorname{Tr}(X) + \tfrac{1}{2}\operatorname{Tr}(X^{\ast})\\
\text{subject to:} \quad & \begin{pmatrix} P & X \\ X^{\ast} & Q \end{pmatrix} \geq 0,\\
& X \in \mathrm{L}(\mathcal{X}).
\end{aligned}$$

Notice that the objective function can alternatively be expressed as $\operatorname{Re}(\operatorname{Tr}(X))$. The lemma proved in the previous section reveals that the optimal value of the primal problem is the fidelity between $P$ and $Q$: the operator $X$ ranges precisely over those operators equal to $\sqrt{P}\, K \sqrt{Q}$ for $K \in \mathrm{L}(\mathcal{X})$ satisfying $\|K\| \leq 1$, and taking the maximum over all such $K$ yields

$$\begin{aligned}
\max\bigl\{ \operatorname{Re}\operatorname{Tr}\bigl(\sqrt{P}\, K \sqrt{Q}\bigr) : K \in \mathrm{L}(\mathcal{X}),\ \|K\| \leq 1 \bigr\}
&= \max\bigl\{ \operatorname{Re}\bigl\langle \sqrt{P}\sqrt{Q},\, K \bigr\rangle : K \in \mathrm{L}(\mathcal{X}),\ \|K\| \leq 1 \bigr\}\\
&= \max\bigl\{ \operatorname{Re}\bigl\langle K,\, \sqrt{P}\sqrt{Q} \bigr\rangle : K \in \mathrm{L}(\mathcal{X}),\ \|K\| \leq 1 \bigr\}\\
&= \bigl\| \sqrt{P}\sqrt{Q} \bigr\|_1.
\end{aligned} \tag{4.22}$$

The primal problem above can be matched to the primal problem of a formally specified semidefinite program in a way that is quite similar to the trace-norm semidefinite program from earlier in the lecture.

More specifically, we may define a map $\Phi \in \mathrm{T}(\mathcal{X} \oplus \mathcal{X})$ exactly as in (4.16), taking $\mathcal{Y} = \mathcal{X}$ so that each block is an element of $\mathrm{L}(\mathcal{X})$, and then observe that the primal problem above is in agreement with the primal problem for the semidefinite program

$$\left( \Phi,\ \frac{1}{2}\begin{pmatrix} 0 & \mathbb{1} \\ \mathbb{1} & 0 \end{pmatrix},\ \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} \right). \tag{4.23}$$

The dual problem, after just a bit of simplification similar to the dual problem in the trace norm semidefinite program, is as follows:

Dual problem

$$\begin{aligned}
\text{minimize:} \quad & \tfrac{1}{2}\langle Y, P\rangle + \tfrac{1}{2}\langle Z, Q\rangle\\
\text{subject to:} \quad & \begin{pmatrix} Y & -\mathbb{1} \\ -\mathbb{1} & Z \end{pmatrix} \geq 0,\\
& Y, Z \in \mathrm{Herm}(\mathcal{X}).
\end{aligned}$$

The primal problem is feasible (although not strictly feasible in case $P$ or $Q$ is singular) and the dual is strictly feasible, so strong duality holds by Slater's theorem. This dual problem can be simplified further using the following proposition.

Proposition 4.2. Let $Y, Z \in \mathrm{Herm}(\mathcal{X})$. It holds that

$$\begin{pmatrix} Y & -\mathbb{1} \\ -\mathbb{1} & Z \end{pmatrix} \in \mathrm{Pos}(\mathcal{X} \oplus \mathcal{X}) \tag{4.24}$$

if and only if $Y, Z > 0$ and $Z \geq Y^{-1}$.

Proof. Suppose $Y, Z > 0$ and $Z \geq Y^{-1}$. It holds that

$$\begin{pmatrix} Y & -\mathbb{1} \\ -\mathbb{1} & Z \end{pmatrix} = \begin{pmatrix} \sqrt{Y} \\ -\sqrt{Y}^{-1} \end{pmatrix} \begin{pmatrix} \sqrt{Y} \\ -\sqrt{Y}^{-1} \end{pmatrix}^{\ast} + \begin{pmatrix} 0 & 0 \\ 0 & Z - Y^{-1} \end{pmatrix}, \tag{4.25}$$

where both terms on the right-hand side are positive semidefinite, and therefore

$$\begin{pmatrix} Y & -\mathbb{1} \\ -\mathbb{1} & Z \end{pmatrix} \in \mathrm{Pos}(\mathcal{X} \oplus \mathcal{X}). \tag{4.26}$$

Conversely, suppose that

$$\begin{pmatrix} Y & -\mathbb{1} \\ -\mathbb{1} & Z \end{pmatrix} \in \mathrm{Pos}(\mathcal{X} \oplus \mathcal{X}). \tag{4.27}$$

It holds that

$$0 \leq \begin{pmatrix} u \\ v \end{pmatrix}^{\ast} \begin{pmatrix} Y & -\mathbb{1} \\ -\mathbb{1} & Z \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} = u^{\ast} Y u - u^{\ast} v - v^{\ast} u + v^{\ast} Z v \tag{4.28}$$

for all $u, v \in \mathcal{X}$. If $Y$ were not positive definite, there would exist a unit vector $v$ for which $v^{\ast} Y v = 0$, and one could then set

$$u = \bigl( \|Z\| + 1 \bigr) v \tag{4.29}$$

to obtain

$$u^{\ast} Y u - u^{\ast} v - v^{\ast} u + v^{\ast} Z v = v^{\ast} Z v - 2\bigl( \|Z\| + 1 \bigr) \leq -\|Z\| - 2 < 0, \tag{4.30}$$

which is absurd. Therefore $Y > 0$, and by conjugating the block operator by $Y^{-1} \oplus \mathbb{1}$ we have

$$\begin{pmatrix} Y^{-1} & 0 \\ 0 & \mathbb{1} \end{pmatrix} \begin{pmatrix} Y & -\mathbb{1} \\ -\mathbb{1} & Z \end{pmatrix} \begin{pmatrix} Y^{-1} & 0 \\ 0 & \mathbb{1} \end{pmatrix} = \begin{pmatrix} Y^{-1} & -Y^{-1} \\ -Y^{-1} & Z \end{pmatrix} \geq 0. \tag{4.31}$$

This implies $Z \geq Y^{-1}$ (evaluate the quadratic form of this operator at vectors of the form $v \oplus v$), and therefore $Z > 0$, as required.

Now, given that $Q$ is positive semidefinite, it holds that $\langle Z, Q\rangle \geq \langle Y^{-1}, Q\rangle$ whenever $Z \geq Y^{-1}$, so there would be no point in choosing any $Z$ other than $Y^{-1}$ when aiming to minimize the dual objective function subject to that constraint. The dual problem above can therefore be phrased as follows:

Dual problem

$$\begin{aligned}
\text{minimize:} \quad & \tfrac{1}{2}\langle Y, P\rangle + \tfrac{1}{2}\langle Y^{-1}, Q\rangle\\
\text{subject to:} \quad & Y > 0,\ Y \in \mathrm{Pos}(\mathcal{X}).
\end{aligned}$$

Because we have strong duality for our semidefinite program, we obtain the following alternative characterization of the fidelity:

$$\mathrm{F}(P, Q) = \inf_{Y > 0} \Bigl( \tfrac{1}{2}\langle Y, P\rangle + \tfrac{1}{2}\langle Y^{-1}, Q\rangle \Bigr). \tag{4.32}$$

This characterization is equivalent to Alberti's theorem, which states that

$$\mathrm{F}(P, Q)^2 = \inf_{Y > 0} \langle Y, P\rangle \langle Y^{-1}, Q\rangle. \tag{4.33}$$

The equivalence can be established through the use of the arithmetic-geometric mean inequality.
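As with the trace norm, the fidelity semidefinite program is straightforward to solve numerically. The sketch below assumes cvxpy (with an SDP-capable solver) and scipy; it solves the primal problem with the diagonal blocks pinned to $P$ and $Q$, compares the optimal value against the closed-form expression (4.21), and also evaluates the dual objective at the simple choice $Y = \mathbb{1}$, which by (4.32) must give an upper bound.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

# Sketch of the fidelity SDP (primal form) for random positive definite P and Q.
rng = np.random.default_rng(3)
n = 4

def random_pd(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return B @ B.conj().T

P, Q = random_pd(n), random_pd(n)

W = cp.Variable((2 * n, 2 * n), hermitian=True)
problem = cp.Problem(
    cp.Maximize(cp.real(cp.trace(W[:n, n:]))),        # Re(Tr(X))
    [W >> 0, W[:n, :n] == P, W[n:, n:] == Q],
)
problem.solve()

rP = sqrtm(P)
closed_form = np.trace(sqrtm(rP @ Q @ rP)).real             # (4.21)
dual_at_identity = 0.5 * (np.trace(P) + np.trace(Q)).real   # (4.32) with Y = 1
print(problem.value, closed_form, dual_at_identity)   # SDP value = closed form <= dual bound
```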

The following two theorems establish well-known properties of the fidelity function, but through proofs based on the semidefinite programming characterization we have obtained.

Theorem 4.3. Let $P_0, P_1, Q_0, Q_1 \in \mathrm{Pos}(\mathcal{X})$ be positive semidefinite operators, for $\mathcal{X}$ a complex Euclidean space. It holds that

$$\mathrm{F}(P_0 + P_1, Q_0 + Q_1) \geq \mathrm{F}(P_0, Q_0) + \mathrm{F}(P_1, Q_1). \tag{4.34}$$

Proof. By considering the primal problem associated with our semidefinite program, and noting that Slater's theorem implies that an optimal primal solution is always obtained, we see that we may choose operators $X_0, X_1 \in \mathrm{L}(\mathcal{X})$ such that the block operators

$$\begin{pmatrix} P_0 & X_0 \\ X_0^{\ast} & Q_0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} P_1 & X_1 \\ X_1^{\ast} & Q_1 \end{pmatrix} \tag{4.35}$$

are both positive semidefinite, and such that

$$\operatorname{Tr}(X_0) = \mathrm{F}(P_0, Q_0) \quad \text{and} \quad \operatorname{Tr}(X_1) = \mathrm{F}(P_1, Q_1). \tag{4.36}$$

The sum of two positive semidefinite operators is positive semidefinite, and therefore

$$\begin{pmatrix} P_0 + P_1 & X_0 + X_1 \\ X_0^{\ast} + X_1^{\ast} & Q_0 + Q_1 \end{pmatrix} = \begin{pmatrix} P_0 & X_0 \\ X_0^{\ast} & Q_0 \end{pmatrix} + \begin{pmatrix} P_1 & X_1 \\ X_1^{\ast} & Q_1 \end{pmatrix} \tag{4.37}$$

is positive semidefinite. By again considering the primal problem of our semidefinite program, this time for $\mathrm{F}(P_0 + P_1, Q_0 + Q_1)$, we find that

$$\mathrm{F}(P_0 + P_1, Q_0 + Q_1) \geq \operatorname{Tr}(X_0 + X_1) = \mathrm{F}(P_0, Q_0) + \mathrm{F}(P_1, Q_1), \tag{4.38}$$

as required.

Note that by combining this theorem with the fact that $\mathrm{F}(\lambda P, \lambda Q) = \lambda\, \mathrm{F}(P, Q)$ for $\lambda \geq 0$, we find that the fidelity is jointly concave: for every $\lambda \in [0, 1]$,

$$\mathrm{F}\bigl(\lambda P_0 + (1 - \lambda) P_1,\ \lambda Q_0 + (1 - \lambda) Q_1\bigr) \geq \lambda\, \mathrm{F}(P_0, Q_0) + (1 - \lambda)\, \mathrm{F}(P_1, Q_1). \tag{4.39}$$

Theorem 4.4. Let $\mathcal{X}$ and $\mathcal{Y}$ be complex Euclidean spaces, let $P, Q \in \mathrm{Pos}(\mathcal{X})$, and let $\Phi \in \mathrm{C}(\mathcal{X}, \mathcal{Y})$ be a channel. It holds that

$$\mathrm{F}(P, Q) \leq \mathrm{F}(\Phi(P), \Phi(Q)). \tag{4.40}$$

Proof. One may choose $X \in \mathrm{L}(\mathcal{X})$ so that

$$\begin{pmatrix} P & X \\ X^{\ast} & Q \end{pmatrix} \tag{4.41}$$

is positive semidefinite and satisfies $\operatorname{Re}(\operatorname{Tr}(X)) = \mathrm{F}(P, Q)$.

By the complete positivity of $\Phi$, the block operator

$$\begin{pmatrix} \Phi(P) & \Phi(X) \\ \Phi(X^{\ast}) & \Phi(Q) \end{pmatrix}, \tag{4.42}$$

which is obtained by applying $\Phi$ to each block of the operator (4.41) (equivalently, by applying $\mathbb{1}_{\mathrm{L}(\mathbb{C}^2)} \otimes \Phi$ under the natural identification of $\mathcal{X} \oplus \mathcal{X}$ with $\mathbb{C}^2 \otimes \mathcal{X}$), is positive semidefinite as well. By again considering the primal problem of the semidefinite program, this time for $\mathrm{F}(\Phi(P), \Phi(Q))$, along with the fact that $\Phi$ is trace-preserving, we find that

$$\mathrm{F}(\Phi(P), \Phi(Q)) \geq \operatorname{Re}\bigl(\operatorname{Tr}(\Phi(X))\bigr) = \operatorname{Re}(\operatorname{Tr}(X)) = \mathrm{F}(P, Q), \tag{4.43}$$

as required.
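To close, here is a small numerical illustration of Theorem 4.4, assuming numpy and scipy (the construction of the channel and all names are illustrative): a channel is built from Kraus operators obtained by slicing a random isometry, and the fidelity computed via (4.21) does not decrease when the channel is applied to both arguments.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(P, Q):
    # F(P, Q) = Tr sqrt( sqrt(P) Q sqrt(P) ), as in (4.21)
    rP = sqrtm(P)
    return np.trace(sqrtm(rP @ Q @ rP)).real

rng = np.random.default_rng(7)
n, k = 4, 3                            # dimension and number of Kraus operators

def random_pd(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return B @ B.conj().T

P, Q = random_pd(n), random_pd(n)

# A channel Phi(X) = sum_i K_i X K_i*, with Kraus operators sliced from an
# isometry V (the QR factor of a random (k*n) x n matrix), so sum_i K_i* K_i = 1.
G = rng.standard_normal((k * n, n)) + 1j * rng.standard_normal((k * n, n))
V, _ = np.linalg.qr(G)                 # V has orthonormal columns
kraus = [V[i * n:(i + 1) * n, :] for i in range(k)]

def channel(X):
    return sum(K @ X @ K.conj().T for K in kraus)

print(fidelity(P, Q))                              # Theorem 4.4: this is at most...
print(fidelity(channel(P), channel(Q)))            # ...this value
```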