Symmetry and Quantum Information
February 6, 2018

Lecture 7: Representation theory of SU(2), density operators, purification
Michael Walter, University of Amsterdam

Last week, we learned the basic concepts of group representation theory (Lecture 5) and we proved that the symmetric subspaces are irreducible representations of SU(2) (Lecture 6). Today, we will discuss how the symmetric subspaces fit into the representation theory of SU(2) more generally, and we will discuss how to decompose an arbitrary representation of SU(2) into irreducibles. In the second half of the lecture, we will switch gears and introduce density operators, which generalize the notion of a quantum state.

7.1 Representation theory of SU(2)

We start by introducing some notation. For reasons that will become clear soon, it will be convenient to use k instead of n. So we will write Sym^k(C^2) for the symmetric subspace of the k-th tensor power. Let us also denote by T_U^(k) the restriction of T_U = U^⊗k to the symmetric subspace. That is, T_U^(k) is given by the same formula U^⊗k, but we only plug in vectors in the symmetric subspace and remember that the result automatically lies in the symmetric subspace. For k = 0, we define Sym^0(C^2) = C as the trivial representation, with T_U^(0) = I. Thus, the Hilbert space Sym^k(C^2) together with the operators {T_U^(k)}_{U ∈ SU(2)} defines a representation of SU(2), and it is this representation that we proved to be irreducible in Lecture 6.

A basic question in the representation theory of any group is to ask about the possible irreducible representations, up to equivalence. For the group SU(2), one can show that every irreducible representation is equivalent to a symmetric subspace (we will not prove this fact). That is, if H is an arbitrary irreducible representation of SU(2), with corresponding operators {R_U}, then there exists k ≥ 0 and a unitary intertwiner J: H → Sym^k(C^2) such that J R_U J† = T_U^(k) for all U ∈ SU(2). We will abbreviate this situation by the notation H ≅ Sym^k(C^2) and R_U ≅ T_U^(k) introduced last lecture.
Moreover, the symmetric subspaces are inequivalent for k ≠ l, i.e., Sym^k(C^2) ≇ Sym^l(C^2). This follows directly from the fact that dim Sym^k(C^2) = k + 1, so there cannot be a unitary map between different symmetric subspaces. To summarize, any irreducible representation H of SU(2) is equivalent to exactly one of the symmetric subspaces Sym^k(C^2), and can therefore be labeled by an integer k ≥ 0. We can determine k directly from the dimension formula as k = dim H − 1. You may know from your quantum mechanics class that the irreducible representations can also be labeled by their spin j, which is a half-integer. As you might expect, the connection is precisely that j = k/2.

Let us discuss some examples. A good source of SU(2)-representations are the various tensor powers of C^2, i.e., (C^2)^⊗n, so this is what we shall consider. For n = 0, we have the trivial representation, and for n = 1 we have C^2 itself:

(C^2)^⊗0 = Sym^0(C^2),  (C^2)^⊗1 = Sym^1(C^2) = C^2,
so this is again irreducible (and not very interesting). The first interesting example is n = 2, since here we know that (C^2)^⊗2 is not irreducible. In fact,

(C^2)^⊗2 = C^2 ⊗ C^2 = Sym^2(C^2) ⊕ C|Ψ⁻⟩,

where |Ψ⁻⟩ = (1/√2)(|10⟩ − |01⟩) is the singlet state. Both summands are irreducible: the former because it is a symmetric subspace, and the latter since it is a one-dimensional invariant subspace. Which symmetric subspace is the latter isomorphic to? Clearly, this must be the one-dimensional Sym^0(C^2). To see this more concretely, recall that in Problem 2.1 you showed that

(U ⊗ U)|Ψ⁻⟩ = det(U)|Ψ⁻⟩

for all unitaries U. If U ∈ SU(2) then det(U) = 1, so |Ψ⁻⟩ indeed spans a trivial representation. We can summarize this as follows:

C^2 ⊗ C^2 ≅ Sym^2(C^2) ⊕ Sym^0(C^2). (7.1)

Is there a systematic way of decomposing higher tensor powers (C^2)^⊗n for n > 2? We will discuss this next.

7.2 Decomposing representations of SU(2)

In fact, let us consider a more general question: Suppose we are given an arbitrary SU(2)-representation H, with operators {R_U}_{U ∈ SU(2)}. We know that we can always decompose a representation into irreducibles, so that

H ≅ Sym^{k_1}(C^2) ⊕ Sym^{k_2}(C^2) ⊕ ... ⊕ Sym^{k_m}(C^2),

but how can we determine the numbers k_1, ..., k_m that appear? In other words, how can we figure out how many times a certain irreducible representation Sym^k(C^2) appears in H? We can solve this by a similar procedure as we used last time in class. Start by defining the operator

r = −i ∂_s|_{s=0} [R_{U_s}],  where U_s = diag(e^{is}, e^{−is}). (7.2)

Note that U_s ∈ SU(2), so this definition makes sense, assuming R_{U_s} is differentiable as a function of s. In general, the operator r will always be Hermitian. (As mentioned in the previous lecture, this definition can be understood more conceptually in terms of the action of the Lie algebra of SU(2).) For example, if H = (C^2)^⊗n with R_U = U^⊗n, then

r = Z ⊗ I ⊗ ... ⊗ I + ... + I ⊗ ... ⊗ I ⊗ Z

in the notation of yesterday's lecture, which was one of the ingredients for proving that the symmetric subspaces are irreducible. In particular, we proved that this operator preserves the symmetric subspace.
Let us denote its restriction by t^(k). Yesterday, we proved that each of the basis vectors ω_{m,k−m} for m = 0, ..., k is an eigenvector of t^(k), with associated eigenvalue 2m − k. Thus, the operator t^(k) has eigenvalues {−k, −k + 2, ..., k − 2, k}, each with multiplicity one.

Now assume that H is irreducible and equivalent to some Sym^k(C^2) by a unitary intertwiner J: H → Sym^k(C^2). Then,

J r J† = −i ∂_s|_{s=0} [J R_{U_s} J†] = −i ∂_s|_{s=0} [T_{U_s}^(k)] = t^(k),
and so we see that r likewise has eigenvalues {−k, −k + 2, ..., k − 2, k}, each with multiplicity one.

How about the general case, where H ≅ Sym^{k_1}(C^2) ⊕ Sym^{k_2}(C^2) ⊕ ... ⊕ Sym^{k_m}(C^2)? Here we have a unitary intertwiner J such that

J R_U J† = T_U^{(k_1)} ⊕ T_U^{(k_2)} ⊕ ... ⊕ T_U^{(k_m)}

and hence

J r J† = t^{(k_1)} ⊕ t^{(k_2)} ⊕ ... ⊕ t^{(k_m)}

for the same reason as above. It follows that the eigenvalue spectrum of r is given by the multiset

{−k_1, −k_1 + 2, ..., k_1 − 2, k_1} ∪ {−k_2, −k_2 + 2, ..., k_2 − 2, k_2} ∪ ... ∪ {−k_m, −k_m + 2, ..., k_m − 2, k_m}.

It is not hard to see that one can inductively reverse-engineer the numbers k_1, k_2, ..., k_m from this multiset: Start by taking the largest number; it must be one of the k_i's. Remove the corresponding {−k_i, −k_i + 2, ..., k_i − 2, k_i} from the multiset, and repeat the procedure.

Let us discuss some examples. First, we can use this to reprove the decomposition in Eq. (7.1). Here, H = C^2 ⊗ C^2 and r = Z ⊗ I + I ⊗ Z as explained above. Thus, r is diagonal in the computational basis, and its eigenvalues are

{2, 0, 0, −2} = {−2, 0, 2} ∪ {0}.

This decomposition makes it clear that

(C^2)^⊗2 = C^2 ⊗ C^2 ≅ Sym^2(C^2) ⊕ Sym^0(C^2), (7.3)

which confirms our previous decomposition. Next, let us consider H = C^2 ⊗ C^2 ⊗ C^2, where r = Z ⊗ I ⊗ I + I ⊗ Z ⊗ I + I ⊗ I ⊗ Z. Here the eigenvalues are

{3, 1, 1, 1, −1, −1, −1, −3} = {−3, −1, 1, 3} ∪ {−1, 1} ∪ {−1, 1},

which implies that

(C^2)^⊗3 = C^2 ⊗ C^2 ⊗ C^2 ≅ Sym^3(C^2) ⊕ Sym^1(C^2) ⊕ Sym^1(C^2). (7.4)

At least in principle it is now clear how to proceed for arbitrary tensor powers (C^2)^⊗n. However, the counting gets more involved the larger n is, so it is desirable to figure out an inductive way of computing this decomposition; the basic problem that we have to solve is described next.
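As an aside, the peeling procedure above is easy to automate. Here is a short Python sketch (not part of the lecture; the function names are our own) that computes the spectrum of r = Z ⊗ I ⊗ ... ⊗ I + ... + I ⊗ ... ⊗ I ⊗ Z on (C^2)^⊗n directly in the computational basis, and then reverse-engineers the numbers k_1, ..., k_m:

```python
from collections import Counter
from itertools import product

def r_spectrum(n):
    """Eigenvalues of r on (C^2)^{(x)n}: each computational basis vector
    |x_1 ... x_n> is an eigenvector of the sum of Z's, with eigenvalue
    (# of 0s) - (# of 1s)."""
    return Counter(sum(1 if x == 0 else -1 for x in bits)
                   for bits in product([0, 1], repeat=n))

def decompose(spectrum):
    """Peel off irreducible blocks: repeatedly take the largest remaining
    eigenvalue k and remove {-k, -k+2, ..., k-2, k} from the multiset."""
    spectrum = Counter(spectrum)
    ks = []
    while any(spectrum.values()):
        k = max(m for m, c in spectrum.items() if c > 0)
        ks.append(k)
        for m in range(-k, k + 1, 2):
            spectrum[m] -= 1
    return ks

print(decompose(r_spectrum(2)))  # [2, 0]
print(decompose(r_spectrum(3)))  # [3, 1, 1]
```

For n = 2 and n = 3 this reproduces Eqs. (7.3) and (7.4); for n = 4 it yields [4, 2, 2, 2, 0, 0].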
Suppose that we have an irreducible representation Sym^k(C^2) and we tensor it with an additional qubit C^2, i.e., we consider the representation

H = Sym^k(C^2) ⊗ C^2,  R_U = T_U^(k) ⊗ U.

How does it decompose into irreducibles? The answer is the following:

H = Sym^k(C^2) ⊗ C^2 ≅ Sym^{k+1}(C^2) ⊕ Sym^{k−1}(C^2) if k > 0, and H ≅ C^2 if k = 0. (7.5)

To confirm this formula, note that r = t^(k) ⊗ I + I ⊗ Z, so that the eigenvalues are

{−k ± 1, −k + 2 ± 1, ..., k − 2 ± 1, k ± 1} = {−(k+1), −(k−1), ..., k − 1, k + 1} ∪ {−(k−1), ..., k − 1};

the second set is empty if k = 0. See Fig. 14 for an illustration. Equation (7.5) is a special case of the so-called Clebsch-Gordan rule that you might know from a quantum mechanics class. It tells you more generally how to decompose Sym^k(C^2) ⊗ Sym^l(C^2). We will not need the general result, but it can be proved just like above.

Let's quickly check that Eq. (7.5) reproduces the results that we derived above. We start with

(C^2)^⊗2 = Sym^1(C^2) ⊗ C^2 ≅ Sym^2(C^2) ⊕ Sym^0(C^2).

The last step uses the Clebsch-Gordan rule, and the result is in agreement with Eqs. (7.1) and (7.3). Next, we decompose the third tensor power by tensoring with an additional qubit:

(C^2)^⊗3 = (C^2)^⊗2 ⊗ C^2
 ≅ (Sym^2(C^2) ⊕ Sym^0(C^2)) ⊗ C^2
 ≅ (Sym^2(C^2) ⊗ C^2) ⊕ (Sym^0(C^2) ⊗ C^2)
 ≅ (Sym^3(C^2) ⊕ Sym^1(C^2)) ⊕ Sym^1(C^2)
 = Sym^3(C^2) ⊕ Sym^1(C^2) ⊕ Sym^1(C^2),

which confirms Eq. (7.4). Here we first used the two-qubit result, then the distributivity law, and finally the Clebsch-Gordan rule. Similarly,

(C^2)^⊗4 ≅ (Sym^3(C^2) ⊕ Sym^1(C^2) ⊕ Sym^1(C^2)) ⊗ C^2 ≅ Sym^4(C^2) ⊕ Sym^2(C^2) ⊕ Sym^2(C^2) ⊕ Sym^2(C^2) ⊕ Sym^0(C^2) ⊕ Sym^0(C^2).

It is now clear how to decompose (C^2)^⊗n for arbitrary n in an inductive fashion. We will use this to great effect in two weeks, in Lectures 11 and 12. There, we will also learn how to extend our considerations from SU(2) to U(2).

7.3 Density operators

Before we proceed with entanglement and symmetries, we need to introduce another bit of formalism to our toolbox that allows us to talk about ensembles of quantum states.
Suppose that we have a device (let's call it a quantum information source) that emits different quantum states |ψ_i⟩ with probability p_i each, where i ranges over some index set, as in the following picture:
We call {p_i, |ψ_i⟩} an ensemble of quantum states on some Hilbert space H. Importantly, the states |ψ_i⟩ need not be orthogonal. What are the statistics that we obtain when we measure a POVM {Q_x}_{x ∈ Ω}? Clearly this is given by

Pr(outcome x) = Σ_i p_i Pr_{ψ_i}(outcome x) = Σ_i p_i ⟨ψ_i|Q_x|ψ_i⟩ = Σ_i p_i tr[|ψ_i⟩⟨ψ_i| Q_x] = tr[(Σ_i p_i |ψ_i⟩⟨ψ_i|) Q_x],

where we first used the fact that state |ψ_i⟩ is emitted with probability p_i and then the usual Born rule. The operator ρ := Σ_i p_i |ψ_i⟩⟨ψ_i| defined above is called a density operator (or a density matrix), or simply a quantum state on H. It satisfies ρ ≥ 0 and tr ρ = 1, and any such operator arises from some ensemble of quantum states (think of the spectral decomposition!). Thus, the Born rule for density operators reads

Pr_ρ(outcome x) = tr[ρ Q_x],

as we just calculated. Similarly, if X = Σ_x x P_x is an observable then its expectation value can likewise be computed in terms of the density operator:

E_ρ[outcome] = tr[ρ X],

as is easily verified. In Problem 3.4 you will verify that if we perform a projective measurement {P_x}_{x ∈ Ω} on an ensemble with density operator ρ and we obtain the outcome x, then the post-measurement state corresponds to an ensemble with density operator

ρ' = P_x ρ P_x / tr[ρ P_x].

If ρ = |ψ⟩⟨ψ| then we say that it is a pure state, and it is not uncommon to also write ρ = ψ in this case (in agreement with our previous definition). Note that ρ is pure iff rk ρ = 1 iff the eigenvalues of ρ are {1, 0, ..., 0} iff ρ² = ρ. If ρ is not pure then it is called a mixed state (but this term is also often used synonymously with "density operator").

Example 7.1 (Warning!). In general, the ensemble that determines a density operator is not unique. E.g., the density operator τ = I/2 can be written in an infinite number of ways:

τ = (1/2)(|0⟩⟨0| + |1⟩⟨1|) = (1/2)(|+⟩⟨+| + |−⟩⟨−|) = (1/4)(|0⟩⟨0| + |1⟩⟨1| + |+⟩⟨+| + |−⟩⟨−|) = ...

The first two expressions are two different spectral decompositions, which is possible only because the operator has a degenerate eigenspace.
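As an aside, both the Born rule calculation and the ensemble non-uniqueness of τ = I/2 are easy to check numerically. A short NumPy sketch (not part of the lecture; function and variable names are our own):

```python
import numpy as np

def density_operator(ensemble):
    """Build rho = sum_i p_i |psi_i><psi_i| from a list of pairs (p_i, psi_i)."""
    return sum(p * np.outer(psi, np.conj(psi)) for p, psi in ensemble)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# Born rule for the ensemble {(1/2, |0>), (1/2, |+>)} and the projective
# measurement {|0><0|, |1><1|}: Pr(outcome 0) = tr[rho Q_0]
rho = density_operator([(0.5, ket0), (0.5, plus)])
Q0 = np.outer(ket0, ket0)
print(np.trace(rho @ Q0).real)  # 0.5 * 1 + 0.5 * |<0|+>|^2, i.e. about 0.75

# Ensemble non-uniqueness: all three ensembles from Example 7.1 give tau = I/2
tau = np.eye(2) / 2
for ensemble in [
    [(0.5, ket0), (0.5, ket1)],
    [(0.5, plus), (0.5, minus)],
    [(0.25, ket0), (0.25, ket1), (0.25, plus), (0.25, minus)],
]:
    assert np.allclose(density_operator(ensemble), tau)
```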
The last expression, however, is not a spectral decomposition, since the states used are not all pairwise orthogonal and the probability 1/4 is not an eigenvalue of τ. There are infinitely many other ensembles that give rise to τ.

More generally, if H is a Hilbert space then τ_H = I_H / dim H is known as the maximally mixed state on H. It is the analog of the uniform distribution in probability theory.

Density operators arise in a number of places. For example, they describe quantum information sources (as we saw above) and ensembles in statistical physics (e.g., Gibbs states). They also
allow us to embed classical probability distributions into quantum theory: E.g., if {p_x}_{x=1}^d is an ordinary probability distribution then it makes sense to associate with it the ensemble {p_x, |x⟩} on C^d (since classical states should be perfectly distinguishable and hence orthogonal), and this ensemble in turn gives rise to the density operator

ρ_X = Σ_x p_x |x⟩⟨x| = diag(p_1, p_2, ..., p_d). (7.6)

More generally, if p(x_1, ..., x_n) is a joint probability distribution then we may consider the ensemble {p(x_1, ..., x_n), |x_1⟩ ⊗ ... ⊗ |x_n⟩}. The corresponding density operator is

ρ_{X_1...X_n} = Σ_{x_1,...,x_n} p(x_1, ..., x_n) |x_1⟩⟨x_1| ⊗ ... ⊗ |x_n⟩⟨x_n|. (7.7)

We call quantum states as in Eqs. (7.6) and (7.7) classical states. Note that if all probabilities p(x_1, ..., x_n) are the same then ρ_{X_1...X_n} is the maximally mixed state, ρ = τ. Importantly, density operators also arise when describing the state of quantum subsystems, as we will discuss in the following section.

7.4 Reduced density operators and partial trace

Suppose that ρ_AB is a quantum state on H_A ⊗ H_B. We would like to find the mathematical object (hopefully, a density operator) that describes the state of subsystem A, as illustrated below:

As before, we consider a POVM measurement {Q_{A,x}}_{x ∈ Ω} on H_A. According to our postulates, we know that we need to consider the POVM {Q_{A,x} ⊗ I_B} when we want to perform this measurement on the joint system ρ_AB. Thus, if {|a⟩} and {|b⟩} denote orthonormal bases of H_A and H_B,

Pr_{ρ_AB}(outcome x) = tr[ρ_AB (Q_{A,x} ⊗ I_B)]
 = Σ_{a,b} (⟨a| ⊗ ⟨b|) ρ_AB (Q_{A,x} ⊗ I_B) (|a⟩ ⊗ |b⟩)
 = Σ_{a,b} ⟨a| (I_A ⊗ ⟨b|) ρ_AB (I_A ⊗ |b⟩) Q_{A,x} |a⟩
 = tr[ (Σ_b (I_A ⊗ ⟨b|) ρ_AB (I_A ⊗ |b⟩)) Q_{A,x} ]
 = tr[ tr_B[ρ_AB] Q_{A,x} ].

The operation tr_B just introduced is called the partial trace over B. If ρ_AB is a quantum state then tr_B[ρ_AB] is called the reduced density operator (or reduced density matrix) of ρ_AB. We will often denote it by ρ_A = tr_B[ρ_AB] (even though this notation can at times seem ambiguous). Conversely, ρ_AB is said to be an extension of ρ_A. By construction,

tr[ρ_AB (X_A ⊗ I_B)] = tr[ρ_A X_A], (7.8)
and so the reduced density operator ρ_A is the appropriate object when computing probabilities and expectation values for measurements on A. E.g., as we derived above, for every POVM measurement {Q_{A,x}} on H_A we have

Pr_{ρ_AB}(outcome x) = Pr_{ρ_A}(outcome x) = tr[ρ_A Q_{A,x}]

and, similarly, for every observable X_A on H_A,

E_{ρ_AB}[outcome] = E_{ρ_A}[outcome] = tr[ρ_A X_A].

Thus, the reduced density operator faithfully describes the state of the subsystem A if the overall system is in state ρ_AB.

We can also compute partial traces of operators that are not quantum states: If M_AB is an arbitrary operator on H_A ⊗ H_B then its partial trace over B is defined just as before by the formula

tr_B[M_AB] = Σ_b (I_A ⊗ ⟨b|) M_AB (I_A ⊗ |b⟩).

However, if M_AB is not a state then we will never denote this partial trace by M_A. The following useful rule tells us how to compute partial traces of tensor product operators M_A ⊗ N_B and justifies the term "partial trace":

tr_B[M_A ⊗ N_B] = M_A tr[N_B]. (7.9)

It follows directly from the definition:

tr_B[M_A ⊗ N_B] = Σ_b (I_A ⊗ ⟨b|) (M_A ⊗ N_B) (I_A ⊗ |b⟩) = Σ_b M_A ⟨b|N_B|b⟩ = M_A tr[N_B].

Other useful properties are

tr_B[(M_A ⊗ I_B) O_AB (N_A ⊗ I_B)] = M_A tr_B[O_AB] N_A  (we can pull out operators on A),
tr_B[(I_A ⊗ M_B) O_AB] = tr_B[O_AB (I_A ⊗ M_B)]  (the partial trace is cyclic for operators on B).

Remark. A useful convention that you will often find in the literature is that tensor products with the identity operator are omitted. E.g., instead of X_A ⊗ I_B we would write X_A, since the subscripts already convey the necessary information. Thus, instead of Eqs. (7.8) and (7.9) we would write

tr[ρ_AB X_A] = tr[ρ_A X_A],  tr_B[M_A N_B] = M_A tr[N_B],

which is arguably easier to read.

Let us close today's lecture with an example in which we explicitly compute the reduced density operator of the ebit.

Example (Warning!). Even if ρ_AB is a pure state, ρ_A can be mixed! Consider the ebit state

|ψ⟩_AB = (1/√2)(|00⟩ + |11⟩).
The corresponding density operator is

ρ_AB = |ψ⟩⟨ψ|_AB = (1/2)(|00⟩ + |11⟩)(⟨00| + ⟨11|)
 = (1/2)(|00⟩⟨00| + |11⟩⟨00| + |00⟩⟨11| + |11⟩⟨11|)
 = (1/2)(|0⟩⟨0| ⊗ |0⟩⟨0| + |1⟩⟨0| ⊗ |1⟩⟨0| + |0⟩⟨1| ⊗ |0⟩⟨1| + |1⟩⟨1| ⊗ |1⟩⟨1|),
and so the reduced density operator ρ_A is given by

ρ_A = tr_B[|ψ⟩⟨ψ|_AB] = (1/2)(|0⟩⟨0| + |1⟩⟨1|) = ( 1/2  0
                                                    0  1/2 ),

where we used Eq. (7.9). Thus, ρ_A is a mixed state. In fact, ρ_A is the maximally mixed state τ_A introduced in/below Example 7.1. Note that this matches precisely our calculation in Eq. (2.2) in Lecture 2.
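As an aside, this example, and the consistency condition in Eq. (7.8), can be checked numerically. In the NumPy sketch below (not part of the lecture; names are our own), the partial trace is computed by reshaping ρ_AB into a four-index tensor and contracting the two B indices, which agrees with the definition Σ_b (I_A ⊗ ⟨b|) ρ_AB (I_A ⊗ |b⟩):

```python
import numpy as np

def partial_trace_B(rho_AB, dim_A, dim_B):
    """tr_B: reshape rho_AB into a 4-index tensor T[a,b,c,d] = rho[(a,b),(c,d)]
    and contract the two B indices (b = d)."""
    return np.einsum('abcb->ac', rho_AB.reshape(dim_A, dim_B, dim_A, dim_B))

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi_AB = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # the ebit
rho_AB = np.outer(psi_AB, psi_AB)            # a pure state: rho^2 = rho

rho_A = partial_trace_B(rho_AB, 2, 2)
print(rho_A)                                 # the maximally mixed state I/2
assert np.allclose(rho_A, np.eye(2) / 2)
assert not np.allclose(rho_A @ rho_A, rho_A)  # rho_A is mixed

# Eq. (7.8): tr[rho_AB (X_A (x) I_B)] = tr[rho_A X_A] for an observable X_A
X_A = np.array([[1.0, 2.0], [2.0, -1.0]])
assert np.isclose(np.trace(rho_AB @ np.kron(X_A, np.eye(2))),
                  np.trace(rho_A @ X_A))
```

The same reshape-and-contract trick works for any dimensions dim_A and dim_B, not just two qubits.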