Quantum Stein's Lemma. July 2, 2015
Contents

1 Introduction
  1.1 Motivation
  1.2 Historical Remark
2 Quantum Stein's Lemma
  2.1 Mathematical Setup
  2.2 Quantum Hypothesis Testing
  2.3 Quantum Stein's Lemma
3 Proof of Quantum Stein's Lemma
  3.1 Direct Part
  3.2 Converse Part
4 Appendix
  4.1 Classical Stein's Lemma
  4.2 The Neyman-Pearson Lemma
  4.3 Properties of von Neumann/Quantum Relative Entropy
  4.4 The Properties of the Functions
  4.5 The Quantum f-divergence
1 Introduction

In this paper we introduce a profound relation between the quantum relative entropy* and quantum two-valued hypothesis testing, called quantum Stein's lemma. This theorem (lemma) was proved partly by Hiai and Petz [1] in 1991, and completed by Ogawa and Nagaoka [2] in 2000. In quantum information theory we frequently make use of the quantum relative entropy for solving many problems. Quantum Stein's lemma gives the quantum relative entropy an operational interpretation in terms of hypothesis testing. In this chapter we give an overview of the contents of this paper; the precise mathematical definitions and details will be given later.

1.1 Motivation

In classical information theory it is known that there are many kinds of entropies which represent amounts of information. The classical relative entropy is one of them and plays an important role in information theory. It is written as

S(p‖q) = Σ_{x∈Ω} p(x)(ln p(x) − ln q(x)),  (1.1)

where p = {p(x)}_{x∈Ω} and q = {q(x)}_{x∈Ω} are two probability distributions on the same sample space Ω. The relative entropy S(p‖q) is non-negative, and S(p‖q) = 0 if and only if p = q. Moreover, the Chernoff-Stein lemma asserts that there is an asymptotic relation between relative entropy and hypothesis testing. Roughly speaking, the larger S(p‖q) becomes, the easier it is to distinguish the two probability distributions p and q by hypothesis testing. Chernoff-Stein's lemma is written as an asymptotic limit in the number of systems n:

lim_{n→∞} (1/n) ln β*_n(ε) = −S(p‖q),  (1.2)

where β*_n is the optimized type II error probability under the restriction that the type I error probability be less than ε. From these facts we attain a physical interpretation of the classical relative entropy: S(p‖q) represents something like a distance between {p(x)}_{x∈Ω} and {q(x)}_{x∈Ω}. Note, however, that S(p‖q) = S(q‖p) is not always satisfied, and hence we cannot interpret S(p‖q) as a distance in the mathematically strict sense.
* It is sometimes called Umegaki's entropy.
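As a concrete illustration of Eq. (1.1), here is a small Python sketch (our own illustration, not part of the original paper; the function name is ours):

```python
import numpy as np

def relative_entropy(p, q):
    """S(p||q) = sum_x p(x)(ln p(x) - ln q(x)), with 0 ln 0 = 0;
    returns +inf when supp(p) is not contained in supp(q)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.any((q == 0) & (p > 0)):
        return np.inf
    m = p > 0
    return float(np.sum(p[m] * (np.log(p[m]) - np.log(q[m]))))

p, q = [1/3, 2/3], [1/2, 1/2]
print(relative_entropy(p, q))   # ~0.0566
print(relative_entropy(q, p))   # ~0.0589
```

Running it on p = {1/3, 2/3} and q = {1/2, 1/2} shows S(p‖q) ≠ S(q‖p), the asymmetry noted above.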
These results concern the classical setup. Our purpose is to find the quantum counterparts of these facts. It turns out that the quantum form of the relative entropy is given by

S(ρ̂‖σ̂) = Tr ρ̂(ln ρ̂ − ln σ̂),  (1.3)

where ρ̂ and σ̂ are density matrices on the same Hilbert space H, and that Stein's lemma is also justified in quantum hypothesis testing. Roughly speaking, it holds that

lim_{n→∞} (1/n) ln β*_n(ε) = −S(ρ̂‖σ̂),  (1.4)
β*_n(ε) ≈ e^{−nS(ρ̂‖σ̂)},  (1.5)

for sufficiently large n. Hence we also reach a physical viewpoint in the quantum system: the quantum relative entropy S(ρ̂‖σ̂) represents something like a distance between the two states ρ̂ and σ̂. The main goal of this paper is to prove quantum Stein's lemma.
1.2 Historical Remark

By the 1990s it had long been known that the quantum counterpart of the Shannon entropy

S(p) = −Σ_{x∈Ω} p(x) ln p(x)  (1.6)

is given by the von Neumann entropy,

S(ρ̂) = −Tr ρ̂ ln ρ̂,  (1.7)

where ρ̂ is a density matrix on some Hilbert space H. However, the quantum version of the relative entropy did not have a commonly accepted form at the time, and many kinds of forms had been suggested. Umegaki's entropy [3], which we now call the quantum relative entropy, was introduced in 1962. Hiai and Petz [1] also showed that the other forms of relative entropy coincide with each other in the asymptotic situation (when the number of systems n goes to infinity), and Umegaki's form naturally comes out:

lim_{n→∞} (1/n) S_other(ρ̂^⊗n‖σ̂^⊗n) = S_Umegaki(ρ̂‖σ̂).  (1.8)

This fact and quantum Stein's lemma made Umegaki's entropy commonly understood as the counterpart of the classical relative entropy.
2 Quantum Stein's Lemma

In this chapter we explain the setup of quantum hypothesis testing and state quantum Stein's lemma.

2.1 Mathematical Setup

In quantum information theory we treat a Hilbert space H as a physical system. Denote by L(H) the set of linear operators on H and by L_sa(H) the set of self-adjoint operators. We write operators with a hat, Â ∈ L(H). Let H be finite, d-dimensional, for simplicity; then linear operators on H are written as d×d matrices.

Definition 2.1. For a self-adjoint operator Â ∈ L_sa(H), the following equivalent conditions define positive semidefiniteness, Â ≥ 0:

σ(Â) ⊆ R₊,  (2.1)
⟨v, Âv⟩ ≥ 0 for all v ∈ H,  (2.2)
∃ B̂ ∈ L(H) such that Â = B̂†B̂,  (2.3)

where σ(Â) is the spectrum of Â. We denote by L₊(H) the set of positive semidefinite matrices.

In the matrix case, the spectrum σ(Â) is the set of eigenvalues of Â, so L₊(H) is the set of matrices whose eigenvalues are all non-negative. From now on we call matrices Â ∈ L₊(H) positive for convenience. Note that when we say a matrix Â is positive, this automatically implies that Â is self-adjoint, or Hermitian. We also define an inequality between self-adjoint matrices:

Â ≤ B̂ ⇔ B̂ − Â ∈ L₊(H).  (2.4)

Definition 2.2. The physical state of a finite Hilbert space H is represented by a density matrix ρ̂ ∈ S(H), where the set of density matrices is defined by

S(H) := {ρ̂ ∈ L(H) | ρ̂ ≥ 0, Tr ρ̂ = 1}.  (2.5)

A quantum measurement is described by a positive-operator valued measure (POVM), M = {M̂_λ}_{λ∈Λ}, where Λ is the set of indices which label measurement outcomes. A POVM must satisfy

Σ_{λ∈Λ} M̂_λ = Î, and M̂_λ ≥ 0 for all λ ∈ Λ.  (2.6)
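The definitions above are easy to check numerically; the following Python sketch (ours, not from the paper) tests positive semidefiniteness via Eqs. (2.1)-(2.2) and the POVM conditions of Eq. (2.6):

```python
import numpy as np

def is_positive_semidefinite(A, tol=1e-12):
    """Check A = A-dagger and sigma(A) in R+ (Definition 2.1)."""
    if not np.allclose(A, A.conj().T):
        return False
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

def is_povm(elements, tol=1e-10):
    """Check M_lambda >= 0 and sum_lambda M_lambda = I (Eq. 2.6)."""
    d = elements[0].shape[0]
    return all(is_positive_semidefinite(M, tol) for M in elements) \
        and np.allclose(sum(elements), np.eye(d), atol=tol)

# a simple two-outcome projective test {A, I - A} on C^2
A = np.array([[1.0, 0.0], [0.0, 0.0]])
print(is_povm([A, np.eye(2) - A]))   # True
```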
If the state of a system H is ρ̂, the probability of observing the outcome λ ∈ Λ is given by

p_ρ(λ) := Tr ρ̂ M̂_λ.  (2.7)

In particular, if all the elements of M are projection operators, M is called a projection-valued measure (PVM).

Definition 2.3. We define the positive projection of a Hermitian matrix Ĥ ∈ L_sa(H) by

P̂_H := Σ_{i: μ_i > 0} Ê_i,  (2.8)

where Ê_i and μ_i are given by the spectral decomposition of Ĥ,

Ĥ = Σ_i μ_i Ê_i.  (2.9)

The positive projection P̂_A projects any vector v ∈ H onto the eigenspaces of Â whose eigenvalues are positive. For convenience, we sometimes denote the positive projection of Â by

{Â > 0} := P̂_A.  (2.10)

Notice that the left-hand side of Eq. (2.10) is an operator on H.

2.1.1 Quantum Relative Entropy

Definition 2.4. We define the von Neumann entropy of a state ρ̂ ∈ S(H) by

S(ρ̂) = −Tr ρ̂ ln ρ̂.  (2.11)

If the state is diagonal, ρ̂ = diag(p_1, ..., p_d), then the von Neumann entropy equals the Shannon entropy, S(ρ̂) = S(p). So we can think of the von Neumann entropy as a generalization of the classical entropy.

Definition 2.5. We define the quantum relative entropy of states ρ̂ ∈ S(H) and σ̂ ∈ S(H) by

S(ρ̂‖σ̂) := { Tr ρ̂(ln ρ̂ − ln σ̂)  (supp ρ̂ ⊆ supp σ̂);  +∞  (otherwise),  (2.12)

where supp ρ̂ = {v ∈ H | ρ̂v ≠ 0}. We define 0 ln 0 := 0. This is the quantum counterpart of the classical relative entropy in quantum information theory. Note that S(ρ̂‖σ̂) is not necessarily equal to S(σ̂‖ρ̂).
The important properties of the von Neumann entropy and the quantum relative entropy that we use are given below.

Classical relation:
ρ̂ = Σ_{i=1}^d p_i Ê_i  ⟹  S(ρ̂) = −Σ_i p_i ln p_i.  (2.13)
Rank:
S(ρ̂) ≤ ln rank ρ̂.  (2.14)
Non-negativity:
S(ρ̂‖σ̂) ≥ 0, for all ρ̂, σ̂ ∈ S(H).  (2.15)
Nondegeneracy:
S(ρ̂‖σ̂) = 0 ⟺ ρ̂ = σ̂.  (2.16)
Additivity:
S(ρ̂^⊗n‖σ̂^⊗n) = nS(ρ̂‖σ̂), ρ̂^⊗n, σ̂^⊗n ∈ S(H^⊗n).  (2.17)
Joint convexity:
S(Σ_i p_i ρ̂_i ‖ Σ_i p_i σ̂_i) ≤ Σ_i p_i S(ρ̂_i‖σ̂_i), Σ_i p_i = 1, p_i ≥ 0.  (2.18)
Monotonicity:
S(ρ̂‖σ̂) ≥ S(E(ρ̂)‖E(σ̂)), E a TPCP map.  (2.19)

Example 2.6. For the density matrices ρ̂, σ̂ on H = C²:

ρ̂ = (1/3) [ 2 0 ; 0 1 ], σ̂ = (1/6) [ 3 1 ; 1 3 ] = Û ρ̂ Û†, Û = (1/√2) [ 1 1 ; 1 −1 ] = Û†,  (2.20)
S(ρ̂‖σ̂) = (1/6) ln 2,  (2.21)
S(σ̂‖ρ̂) = (1/6) ln 2.  (2.22)

ρ̂ = (1/2) [ 1 0 ; 0 1 ], σ̂ = (1/2) [ 1 1 ; 1 1 ] = Û [ 1 0 ; 0 0 ] Û†,  (2.23)
S(ρ̂‖σ̂) = +∞,  (2.24)
S(σ̂‖ρ̂) = ln 2.  (2.25)
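The two pairs of states in Example 2.6 can be verified numerically. The sketch below is our own illustration, not from the paper; it evaluates Eq. (2.12) through the standard eigenbasis-overlap formula Tr ρ̂ ln σ̂ = Σ_{i,j} p_i |⟨u_i|v_j⟩|² ln q_j and reproduces the values (2.21)-(2.25):

```python
import numpy as np

def quantum_relative_entropy(rho, sigma, tol=1e-12):
    """S(rho||sigma) = Tr rho(ln rho - ln sigma) via eigendecompositions;
    returns inf when supp(rho) is not contained in supp(sigma) (Eq. 2.12)."""
    p, U = np.linalg.eigh(rho)
    q, V = np.linalg.eigh(sigma)
    W = np.abs(U.conj().T @ V) ** 2          # overlaps |<u_i|v_j>|^2
    S = 0.0
    for i in range(len(p)):
        if p[i] < tol:
            continue                          # convention 0 ln 0 = 0
        S += p[i] * np.log(p[i])              # Tr rho ln rho
        for j in range(len(q)):
            if W[i, j] < tol:
                continue
            if q[j] < tol:
                return np.inf                 # support condition violated
            S -= p[i] * W[i, j] * np.log(q[j])   # -Tr rho ln sigma
    return S

# first pair of Example 2.6
rho = np.array([[2/3, 0], [0, 1/3]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
sigma = H @ rho @ H                           # = (1/6) [[3, 1], [1, 3]]
print(quantum_relative_entropy(rho, sigma))   # ln(2)/6 ~ 0.1155
print(quantum_relative_entropy(sigma, rho))   # same value here

# second pair: S(rho||sigma) = inf, S(sigma||rho) = ln 2
rho2 = 0.5 * np.eye(2)
sigma2 = 0.5 * np.ones((2, 2))                # pure state |+><+|
print(quantum_relative_entropy(rho2, sigma2))
print(quantum_relative_entropy(sigma2, rho2))
```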
The commutative case [ρ̂, σ̂] = 0 plays an important role in the proof of quantum Stein's lemma. If ρ̂ commutes with σ̂, then we can diagonalize them simultaneously as

ρ̂ = Σ_{i=1}^d p_i Ê_i = diag(p_1, ..., p_d),  (2.26)
σ̂ = Σ_{i=1}^d q_i Ê_i = diag(q_1, ..., q_d).  (2.27)

In this case the quantum relative entropy S^q(ρ̂‖σ̂) equals the classical relative entropy S^c(p‖q) by direct calculation (see Appendix 4.3.3); the superscripts q and c denote quantum and classical, respectively:

S^q(ρ̂‖σ̂) = S^c(p‖q).  (2.28)

2.2 Quantum Hypothesis Testing

Here we deal with two-valued quantum hypothesis testing. In the previous section we noted that quantum states are represented by density matrices and measurements are written as POVMs. Suppose that we only know that the state of a system H is either ρ̂ or σ̂. We denote by H₀ the null hypothesis "the state is ρ̂" and by H₁ the alternative "the state is σ̂".

Definition 2.7. We define a null hypothesis and an alternative hypothesis by

Null hypothesis H₀: ρ̂ is the true state of H,  (2.29)
Alternative hypothesis H₁: σ̂ is the true state of H.  (2.30)

We prepare n identical systems H, so the total system is H^⊗n. The whole state of the system is the independent n-fold tensor product ρ̂^⊗n or σ̂^⊗n, corresponding to H₀ or H₁ respectively. As in the classical case, we want to determine by some measurement which hypothesis is likely true. Such a measurement can be written as a two-valued POVM.

Definition 2.8. We define a two-valued measurement POVM by

A := {Â₀, Â₁} = {Â₀, Î − Â₀}.  (2.31)

A test on the total space H^⊗n is represented by

A_n := {Â_n, Î − Â_n}, 0 ≤ Â_n ≤ Î,  (2.32)

where Î is the identity in L(H^⊗n).
We accept the null hypothesis H₀ if we obtain the measured value corresponding to Â₀, rejecting the alternative H₁; otherwise we reject H₀ and accept H₁. We then write the two-valued POVM simply as {Â, Î − Â} and call it a test. If H₀ is true, the probabilities of correct decision a_n and wrong decision α_n are given by

a_n := Tr ρ̂^⊗n Â_n,  (2.33)
α_n := Tr ρ̂^⊗n (Î − Â_n),  (2.34)

respectively, with a_n + α_n = 1. If H₁ is true, the probabilities of correct decision b_n and wrong decision β_n are given by

b_n := Tr σ̂^⊗n (Î − Â_n),  (2.35)
β_n := Tr σ̂^⊗n Â_n,  (2.36)

respectively, with b_n + β_n = 1.

Definition 2.9. We define the error probability of the first kind α_n and the error probability of the second kind β_n for a test Â_n on H^⊗n by

α_n(Â_n) := Tr ρ̂^⊗n (Î − Â_n),  (2.37)
β_n(Â_n) := Tr σ̂^⊗n Â_n.  (2.38)

As in the classical case, we call α_n the type I error probability and β_n the type II error probability. These probabilities depend on which test we choose. In actual hypothesis testing it is usual that the type I error probability is restricted by some number ε > 0, α_n(Â_n) ≤ ε; this ε is called the level of significance or critical p-value. We often set up hypothesis testing so that it is more important to avoid the type I error than the type II error, so we want to control the type I error probability. It is then natural to consider the minimum type II error probability under the restriction α_n ≤ ε.

Definition 2.10. The minimum type II error probability β_n under the restriction α_n ≤ ε is

β*_n(ε) := min_{0 ≤ Â_n ≤ Î, α_n(Â_n) ≤ ε} β_n(Â_n).  (2.39)

In the commutative case [ρ̂^⊗n, σ̂^⊗n] = 0, where ρ̂^⊗n, σ̂^⊗n ∈ S(H^⊗n), there is a one-to-one correspondence
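A minimal numeric sketch of Definition 2.9 (ours, not from the paper), for a single copy n = 1:

```python
import numpy as np

def error_probabilities(rho_n, sigma_n, A_n):
    """Type I and type II error probabilities of a test 0 <= A_n <= I
    (Eqs. 2.37-2.38): alpha = Tr rho_n (I - A_n), beta = Tr sigma_n A_n."""
    I = np.eye(A_n.shape[0])
    alpha = float(np.real(np.trace(rho_n @ (I - A_n))))
    beta = float(np.real(np.trace(sigma_n @ A_n)))
    return alpha, beta

# single-copy example: distinguish rho = |0><0| from sigma = I/2
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
sigma = 0.5 * np.eye(2)
A = np.array([[1.0, 0.0], [0.0, 0.0]])   # accept H0 on outcome |0>
print(error_probabilities(rho, sigma, A))   # (0.0, 0.5)
```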
between classical hypothesis testing and quantum hypothesis testing. Then we can write

ρ̂^⊗n = Σ_i p_i Ê_i = diag(p_1, ..., p_{d^n}),  (2.40)
σ̂^⊗n = Σ_i q_i Ê_i = diag(q_1, ..., q_{d^n}).  (2.41)

A classical test A_n is a subset of Ω^n = {1, 2, ..., d}^n. The corresponding quantum test Â_n is given by

Â_n = Σ_{i ∈ A_n} Ê_i.  (2.42)

Then all the probabilities of hypothesis testing coincide in the classical and the quantum case. In particular, the quantum type II error probability β^q(Â_n) equals the classical one β^c(A_n),

β^q(Â_n) = β^c(A_n).  (2.43)

2.3 Quantum Stein's Lemma

This is the main theorem of this paper. Let us consider the asymptotic behavior of the minimum type II error probability with an upper limit on the type I error probability, i.e., we assume for ε ∈ (0, 1)

α_n(Â_n) ≤ ε,  (2.44)

and β*_n(ε) is given by Eq. (2.39). We can now state quantum Stein's lemma, the quantum counterpart of the classical Chernoff-Stein lemma.

Theorem 2.11 (Quantum Stein's lemma). Suppose S(ρ̂‖σ̂) < ∞. For any ε ∈ (0, 1), it holds that

lim_{n→∞} (1/n) ln β*_n(ε) = −S(ρ̂‖σ̂).  (2.45)

From this theorem we can regard the relative entropy defined by Eq. (2.12) as the counterpart of the classical relative entropy (see also Appendix 4.1 and the Historical Remark), and also as a kind of distance on the set of states S(H) in the sense of hypothesis testing. Note that Eq. (2.45) has an asymmetric form in ρ̂ and σ̂; recall that S(ρ̂‖σ̂) = S(σ̂‖ρ̂) is not necessarily satisfied. This is because we imposed the restriction on the type I error, α_n ≤ ε, and not on the type II error. If instead we assume β_n ≤ ε and consider the minimum type I error probability (or, equivalently, if we exchange the hypotheses H₀ and H₁), then the right-hand side of Eq. (2.45) becomes −S(σ̂‖ρ̂).
3 Proof of Quantum Stein's Lemma

In this chapter we prove quantum Stein's lemma. Many kinds of proofs have been suggested after [1, 2], but here we follow the original way, somewhat rearranged for mathematical simplification. We will make use of many techniques of quantum information theory, so the proof itself may be instructive. The proof is separated into two sections, the direct part and the converse part:

lim sup_{n→∞} (1/n) ln β*_n(ε) ≤ −S(ρ̂‖σ̂),  (3.1)
lim inf_{n→∞} (1/n) ln β*_n(ε) ≥ −S(ρ̂‖σ̂).  (3.2)

The direct part was first proved by Hiai and Petz in 1991, and the converse part by Ogawa and Nagaoka in 2000. From these inequalities, quantum Stein's lemma follows immediately.

3.1 Direct Part

In this section we prove the direct part:

lim sup_{n→∞} (1/n) ln β*_n(ε) ≤ −S(ρ̂‖σ̂).  (3.3)

The physical meaning of this inequality is that for any δ > 0 there exists, for n large enough, at least one good test Â_n which makes β_n(Â_n) less than e^{−n(S(ρ̂‖σ̂)−δ)}. In other words, roughly speaking, this part implies that there exist hypothesis tests {A_n}_{n∈N} by which we can make β_n decrease at least as fast as e^{−nS(ρ̂‖σ̂)} as n → ∞. A natural next question is: is there any test that allows β_n to decrease strictly faster than e^{−nS(ρ̂‖σ̂)}? This problem was resolved by Ogawa and Nagaoka, as we will see in Section 3.2, and the answer is no. In the original paper, the theorem was proved in a mathematically more general treatment, but here we consider only simplified cases; this simplified setup is sufficient for usual quantum information theory.

3.1.1 Pinching Map

Let H be a d-dimensional Hilbert space H = C^d and ρ̂, σ̂ density matrices on H. We denote the set of indices by D := {1, 2, ..., d}. We consider quantum hypothesis testing between ρ̂ and σ̂, with the null hypothesis H₀ "ρ̂ is the true state". We prepare n identical independent systems, so the whole space is H^⊗n and the state becomes the i.i.d. n-fold tensor product ρ̂^⊗n or σ̂^⊗n. We
consider a two-valued test {Â_n, Î − Â_n}. The type I and type II errors are given by

α_n(Â_n) = Tr ρ̂^⊗n (Î − Â_n),  (3.4)
β_n(Â_n) = Tr σ̂^⊗n Â_n,  (3.5)

and β*_n(ε) is given by

β*_n(ε) = min_{0 ≤ Â_n ≤ Î, α_n(Â_n) ≤ ε} β_n(Â_n).  (3.6)

Since the density matrices are Hermitian, we can decompose them by the spectral theorem,

ρ̂ = Σ_{i=1}^d λ_i Ê_i,  (3.7)
σ̂ = Σ_{i=1}^d ω_i F̂_i,  (3.8)

where {Ê_i}_{i∈D}, {F̂_i}_{i∈D} are the one-dimensional projections on H and {λ_i}_{i∈D}, {ω_i}_{i∈D} are the eigenvalues of ρ̂ and σ̂, respectively. We define a set of indices D_n := {1, 2, ..., d}^n and an index string i_n := (i_1, i_2, ..., i_n) ∈ D_n, 1 ≤ i_k ≤ d for each k. Then the n-fold tensor product states can be written as

ρ̂^⊗n = (Σ_{i=1}^d λ_i Ê_i)^⊗n = Σ_{i_n ∈ D_n} (Π_{k=1}^n λ_{i_k}) Ê_{i_1} ⊗ Ê_{i_2} ⊗ ... ⊗ Ê_{i_n},  (3.9)
σ̂^⊗n = (Σ_{i=1}^d ω_i F̂_i)^⊗n = Σ_{i_n ∈ D_n} (Π_{k=1}^n ω_{i_k}) F̂_{i_1} ⊗ F̂_{i_2} ⊗ ... ⊗ F̂_{i_n}.  (3.10)

In particular, we rewrite σ̂^⊗n as

σ̂^⊗n = Σ_{(n)∈I} Ω_(n) P̂_(n),  (3.11)

where Ω_(n) is the n-fold product of eigenvalues and P̂_(n) is the sum of the tensor products of projections onto the eigenspaces corresponding to Ω_(n). I is a new set of indices defined by

I := {(n_1, ..., n_d) ∈ N^d | Σ_{i=1}^d n_i = n}.  (3.12)

An element of I is written (n) = (n_1, n_2, ..., n_d), where the n_i are non-negative integers. Each n_i indicates how many times Ω_(n) contains the eigenvalue ω_i, so Σ_{i=1}^d n_i = n must be satisfied. For example, if the number of distinct eigenvalues is d = 4 and the number of tensor factors is n = 3, the element (n) = (1, 2, 0, 0) indicates

Ω_(n) = (ω_1)^1 (ω_2)^2 (ω_3)^0 (ω_4)^0 = ω_1 ω_2²,  (3.13)
P̂_(n) = F̂_1 ⊗ F̂_2 ⊗ F̂_2 + F̂_2 ⊗ F̂_1 ⊗ F̂_2 + F̂_2 ⊗ F̂_2 ⊗ F̂_1.  (3.14)

Note that P̂_(n) is also a projection on H^⊗n, but not necessarily of rank one.
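The index set I of Eq. (3.12) is the set of "types" of index strings; a quick Python check (our own illustration) of the d = 4, n = 3 example above, and of the bound |I| ≤ (n + 1)^d used later in the Hiai-Petz theorem:

```python
from itertools import product

def num_type_classes(d, n):
    """Count the type classes (n_1, ..., n_d) with sum n_i = n that
    index the projections P_(n) in Eq. (3.11)."""
    types = {tuple(i_n.count(i) for i in range(d))
             for i_n in product(range(d), repeat=n)}   # all index strings
    return len(types)

d, n = 4, 3                       # the d = 4, n = 3 example in the text
print(num_type_classes(d, n))     # 20 = C(n + d - 1, d - 1)
print((n + 1) ** d)               # 256: the rough bound |I| <= (n + 1)^d
```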
Using the spectral decomposition of σ̂^⊗n, we define a TPCP map, called a pinching, by

κ_{σ̂^⊗n}(X̂) := Σ_{(n)∈I} P̂_(n) X̂ P̂_(n), X̂ ∈ L(H^⊗n).  (3.15)

We can regard Eq. (3.15) as a Kraus representation of the pinching map, so it is trace-preserving and completely positive (TPCP). We write κ_{σ̂^⊗n} as κ for simplicity. If we take X̂ = ρ̂^⊗n, we get

κ(ρ̂^⊗n) = Σ_{(n)∈I} P̂_(n) ρ̂^⊗n P̂_(n).  (3.16)

The characteristic properties of the pinching map are the following:

it does not change σ̂^⊗n:
κ(σ̂^⊗n) = σ̂^⊗n,  (3.17)
commutativity:
κ(X̂) σ̂^⊗n = σ̂^⊗n κ(X̂),  (3.18)
trace preservation:
Tr κ(X̂) = Tr X̂,  (3.19)
"cut-and-cover" in the trace:
Tr κ(X̂)Ŷ = Tr X̂ κ(Ŷ) = Tr κ(X̂)κ(Ŷ);  (3.20)
in particular,
Tr κ(X̂) f(κ(Ŷ)) = Tr X̂ f(κ(Ŷ)), f ∈ C(R),  (3.21)

for any X̂, Ŷ ∈ L(H^⊗n). The second property is important: the pinching makes any operator X̂ commute with σ̂^⊗n.
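The pinching map and the properties (3.17)-(3.19) can be verified numerically for small n. The sketch below is our own illustration (the two qubit states are arbitrary test data, not from the paper); it builds the spectral projectors of σ̂^⊗2 and applies Eq. (3.15):

```python
import numpy as np

def pinching_projectors(sigma_n, tol=1e-10):
    """Spectral projectors of sigma_n, one per distinct eigenvalue."""
    w, U = np.linalg.eigh(sigma_n)
    projs, used = [], np.zeros(len(w), dtype=bool)
    for i in range(len(w)):
        if used[i]:
            continue
        idx = np.abs(w - w[i]) < tol      # eigenvalue multiplet
        used |= idx
        V = U[:, idx]
        projs.append(V @ V.conj().T)
    return projs

def pinch(X, projs):
    """kappa(X) = sum_P  P X P  (Eq. 3.15)."""
    return sum(P @ X @ P for P in projs)

# two-copy example on C^2 x C^2 with arbitrary test states
rho = np.array([[2/3, 0], [0, 1/3]])
sigma = np.array([[0.5, 0.25], [0.25, 0.5]])       # eigenvalues 3/4, 1/4
rho2, sigma2 = np.kron(rho, rho), np.kron(sigma, sigma)
projs = pinching_projectors(sigma2)
k_rho2 = pinch(rho2, projs)
print(np.allclose(pinch(sigma2, projs), sigma2))        # kappa(sigma^n) = sigma^n
print(np.allclose(k_rho2 @ sigma2, sigma2 @ k_rho2))    # commutativity
print(np.isclose(np.trace(k_rho2), 1.0))                # trace preservation
```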
3.1.2 The Hiai-Petz Theorem

First, we prove the following theorem.

Theorem 3.1 (Hiai-Petz).
lim_{n→∞} (1/n) S(κ(ρ̂^⊗n)‖σ̂^⊗n) = S(ρ̂‖σ̂).  (3.22)

Proof. Because of the monotonicity of the quantum relative entropy, Eq. (2.19), and since κ is a TPCP map, it is clear that

nS(ρ̂‖σ̂) = S(ρ̂^⊗n‖σ̂^⊗n) ≥ S(κ(ρ̂^⊗n)‖κ(σ̂^⊗n)) = S(κ(ρ̂^⊗n)‖σ̂^⊗n).  (3.23)

On the other hand, using the properties of the pinching, it follows by direct calculation that

ΔS := nS(ρ̂‖σ̂) − S(κ(ρ̂^⊗n)‖σ̂^⊗n)  (3.24)
= S(ρ̂^⊗n‖σ̂^⊗n) − S(κ(ρ̂^⊗n)‖σ̂^⊗n)  (3.25)
= Tr ρ̂^⊗n(ln ρ̂^⊗n − ln σ̂^⊗n) − Tr κ(ρ̂^⊗n)(ln κ(ρ̂^⊗n) − ln σ̂^⊗n)  (3.26)
= Tr ρ̂^⊗n ln ρ̂^⊗n − Tr ρ̂^⊗n ln σ̂^⊗n − Tr κ(ρ̂^⊗n) ln κ(ρ̂^⊗n) + Tr κ(ρ̂^⊗n) ln σ̂^⊗n  (3.27)
= Tr ρ̂^⊗n ln ρ̂^⊗n − Tr κ(ρ̂^⊗n) ln κ(ρ̂^⊗n)  (3.28)
= S(κ(ρ̂^⊗n)) − S(ρ̂^⊗n)  (3.29)
= Tr ρ̂^⊗n {ln ρ̂^⊗n − ln κ(ρ̂^⊗n)}  (3.30)
= S(ρ̂^⊗n‖κ(ρ̂^⊗n)),  (3.31)

where we used Eq. (3.21) with f(x) = ln x and Eq. (3.17) in the fifth line, and Eq. (3.20) in the seventh line. We then prove the inequality

ΔS ≤ d ln(n + 1).  (3.32)

We can assume ρ̂^⊗n is a pure state without loss of generality: indeed, if ρ̂^⊗n is a mixed state, we can expand it using some pure states {ρ̂_k}_k and a probability distribution {η_k}_k into

ρ̂^⊗n = Σ_k η_k ρ̂_k,  (3.33)
κ(ρ̂^⊗n) = Σ_k η_k κ(ρ̂_k),  (3.34)

where we used the linearity of κ. Then the joint convexity of the quantum relative entropy, Eq. (2.18), gives

ΔS = S(ρ̂^⊗n‖κ(ρ̂^⊗n)) ≤ Σ_k η_k S(ρ̂_k‖κ(ρ̂_k)).  (3.35)

Therefore, if we prove Eq. (3.32) for pure states, it automatically holds for any state, since Σ_k η_k = 1.
In the case of a pure state, S(ρ̂^⊗n) = 0 holds, and hence

ΔS = S(κ(ρ̂^⊗n)) − S(ρ̂^⊗n) = S(κ(ρ̂^⊗n)),  (3.36)

and rank ρ̂^⊗n = 1. We recall the von Neumann entropy property

S(κ(ρ̂^⊗n)) ≤ ln rank κ(ρ̂^⊗n),  (3.37)

so we estimate the rank of κ(ρ̂^⊗n):

κ(ρ̂^⊗n) = Σ_{(n)∈I} P̂_(n) ρ̂^⊗n P̂_(n).  (3.38)

Since the rank of ρ̂^⊗n is one, the rank of each matrix P̂_(n) ρ̂^⊗n P̂_(n) is at most one. Therefore, from Eq. (3.38), we can estimate

rank κ(ρ̂^⊗n) ≤ |I|.  (3.39)

The right-hand side is determined by how many values Ω_(n) = Π_k ω_{i_k} can take. As already mentioned, Ω_(n) is determined by the multiplicities of the eigenvalues ω_i it contains, i.e., by the elements of I. An element of I is written

(n) = (n_1, n_2, ..., n_d), 0 ≤ n_i ≤ n, Σ_i n_i = n.  (3.40)

Even if we ignore the last condition, the number of elements of I does not exceed (n + 1)^d. Therefore we can safely say

rank κ(ρ̂^⊗n) ≤ (n + 1)^d.  (3.41)

It is a rough estimate, but sufficient for our purpose. From Eq. (3.41) we obtain

ΔS = S(κ(ρ̂^⊗n)) ≤ ln rank κ(ρ̂^⊗n) ≤ ln(n + 1)^d = d ln(n + 1),  (3.42)

so ΔS ≤ d ln(n + 1) is proved for any state ρ̂^⊗n. Going back to Eq. (3.24), we get

ΔS = nS(ρ̂‖σ̂) − S(κ(ρ̂^⊗n)‖σ̂^⊗n) ≤ d ln(n + 1).  (3.43)

Therefore, it holds for any integer n that

S(κ(ρ̂^⊗n)‖σ̂^⊗n) ≥ nS(ρ̂‖σ̂) − d ln(n + 1).  (3.44)

Dividing by n and taking the lim inf, we obtain

lim inf_{n→∞} (1/n) S(κ(ρ̂^⊗n)‖σ̂^⊗n) ≥ S(ρ̂‖σ̂).  (3.45)

From Eq. (3.23) and Eq. (3.45), the theorem is proved:

lim_{n→∞} (1/n) S(κ(ρ̂^⊗n)‖σ̂^⊗n) = S(ρ̂‖σ̂).  (3.46)
3.1.3 Proof of the Direct Part

(Strictly we should use the ε-δ definition of the limit, but it is complicated, so here we simply say "for sufficiently large n".)

We proved in the theorem above that

lim inf_{n→∞} (1/n) S(κ(ρ̂^⊗n)‖σ̂^⊗n) ≥ S(ρ̂‖σ̂).  (3.47)

Multiplying both sides by minus one, we obtain

lim sup_{n→∞} −(1/n) S(κ(ρ̂^⊗n)‖σ̂^⊗n) ≤ −S(ρ̂‖σ̂).  (3.48)

Therefore, for any δ > 0, if we choose a sufficiently large n, then it holds that

−(1/n) S(κ(ρ̂^⊗n)‖σ̂^⊗n) ≤ −S(ρ̂‖σ̂) + δ.  (3.49)

From now on we consider quantum hypothesis testing between ρ̂_n := κ(ρ̂^⊗n) and σ̂_n := σ̂^⊗n. We regard the n systems as an initial system H_n := H^⊗n and prepare m copies for hypothesis testing: H_n^⊗m. The total Hilbert space is H_n^⊗m = H^⊗nm. The crucial point is that ρ̂_n and σ̂_n commute, so this quantum hypothesis testing comes down to a classical one. As we have seen in the previous chapter, we can replace the quantum relative entropy by the classical one in the commutative case. The states are written (rewriting Ω_(n) as q_(n))

ρ̂_n = Σ_{(n)∈I} p_(n) P̂_(n),  (3.50)
σ̂_n = Σ_{(n)∈I} q_(n) P̂_(n),  (3.51)

where {p_(n)}_{(n)∈I} and {q_(n)}_{(n)∈I} are classical probability distributions. Then we know

S^q(ρ̂_n‖σ̂_n) = S^c(p_(n)‖q_(n)).  (3.52)

Moreover, the quantum hypothesis testing is equivalent to the classical hypothesis testing between p_(n) and q_(n). The classical total m-sample space is Ω_n^m := I^m, and a classical test is a subset A_{n,m} ⊆ Ω_n^m. A classical test is related to a quantum test by

Â_{n,m} = Σ_{((n)_1, ..., (n)_m) ∈ A_{n,m}} P̂_{(n)_1} ⊗ P̂_{(n)_2} ⊗ ... ⊗ P̂_{(n)_m}.  (3.53)
By the classical Chernoff-Stein lemma, for any δ_n > 0 and any ε ∈ (0, 1), there exists a classical test A_{n,m} ⊆ Ω_n^m which satisfies α^c(A_{n,m}) ≤ ε and

(1/m) ln β^c(A_{n,m}) ≤ −S^c(p_(n)‖q_(n)) + δ_n = −S^q(ρ̂_n‖σ̂_n) + δ_n,  (3.54)

for sufficiently large m. The corresponding quantum test Â_{n,m} ∈ L(H^⊗nm) also exists and satisfies

(1/nm) ln β^q(Â_{n,m}) = (1/nm) ln β^c(A_{n,m}) ≤ −(1/n) S^q(ρ̂_n‖σ̂_n) + δ_n/n,  (3.55)

for sufficiently large n and m. Here we used

β^q(Â_{n,m}) = β^c(A_{n,m}).  (3.56)

In short, we have proved that for any δ > 0 and any ε ∈ (0, 1) there exists at least one quantum test Â_{n,m} such that

(1/nm) ln β^q(Â_{n,m}) ≤ −(1/n) S(κ(ρ̂^⊗n)‖σ̂^⊗n) + δ,  (3.57)

for sufficiently large n, m.

By Eq. (3.49) and Eq. (3.57), we have almost finished the proof. The final task is to check that the test Â_{n,m} derived in the commutative testing also satisfies α(Â_{n,m}) ≤ ε with respect to ρ̂ and σ̂. There are two kinds of quantum type I error probability, one for ρ̂^⊗nm, the other for κ(ρ̂^⊗n)^⊗m (we write α̃ for the latter):

α(Â_{n,m}) = Tr ρ̂^⊗nm (Î − Â_{n,m}),  (3.58)
α̃(Â_{n,m}) = Tr κ(ρ̂^⊗n)^⊗m (Î − Â_{n,m}).  (3.59)

The former is the original one; the latter is used in the commutative case and is related to the classical type I error probability:

α̃(Â_{n,m}) = α^c(A_{n,m}).  (3.60)

On the other hand, the type II error probability is automatically the same for the two quantum hypothesis tests:

β(Â_{n,m}) = Tr σ̂^⊗nm Â_{n,m} = Tr σ̂_n^⊗m Â_{n,m} = β̃(Â_{n,m}).  (3.61)

In the previous discussion, we used the classical Stein's lemma and imposed α^c(A_{n,m}) ≤ ε. Therefore, we can safely say that for any ε ∈ (0, 1) there exists a test Â_{n,m} which satisfies Eq. (3.57) and α̃(Â_{n,m}) ≤ ε; however, it is not yet clear that α(Â_{n,m}) ≤ ε is satisfied.
So we have to check that the quantities in Eqs. (3.58) and (3.59) coincide, and hence that α(Â_{n,m}) ≤ ε is satisfied (here α̃ again denotes the type I error with respect to the pinched state κ(ρ̂^⊗n)^⊗m). The point is that Â_{n,m} has the form of Eq. (3.53). Using this fact, we can calculate

Tr κ(ρ̂^⊗n)^⊗m Â_{n,m} = Tr [(κ(ρ̂^⊗n) ⊗ ... ⊗ κ(ρ̂^⊗n)) Σ_{((n)_1,...,(n)_m)∈A_{n,m}} P̂_{(n)_1} ⊗ ... ⊗ P̂_{(n)_m}]  (3.62)
= Σ_{((n)_1,...,(n)_m)∈A_{n,m}} Tr[κ(ρ̂^⊗n) P̂_{(n)_1}] ⋯ Tr[κ(ρ̂^⊗n) P̂_{(n)_m}]  (3.63)
= Σ_{((n)_1,...,(n)_m)∈A_{n,m}} Tr[ρ̂^⊗n κ(P̂_{(n)_1})] ⋯ Tr[ρ̂^⊗n κ(P̂_{(n)_m})]  (3.64)
= Σ_{((n)_1,...,(n)_m)∈A_{n,m}} Tr[ρ̂^⊗n P̂_{(n)_1}] ⋯ Tr[ρ̂^⊗n P̂_{(n)_m}]  (3.65)
= Tr ρ̂^⊗nm Â_{n,m},  (3.66)

where we used the cut-and-cover property Eq. (3.20) and κ(P̂_(n)) = P̂_(n). Hence we confirm that

α(Â_{n,m}) = α̃(Â_{n,m})  (3.67)

for the test given by Eq. (3.53). From the above argument, using Eq. (3.49), Eq. (3.54) and Eq. (3.67), for any δ > 0 and any ε ∈ (0, 1) there exists a quantum test Â_{n,m} which satisfies α(Â_{n,m}) ≤ ε and

(1/nm) ln β_{nm}(Â_{n,m}) ≤ −S(ρ̂‖σ̂) + δ,  (3.68)

for sufficiently large n and m.

The precise statement of Eq. (3.68): for any δ > 0 and any ε > 0, there exist N, M ∈ N such that for all n ≥ N and all m ≥ M there is a test Â_{n,m} which satisfies α(Â_{n,m}) ≤ ε and

(1/nm) ln β_{nm}(Â_{n,m}) ≤ −S(ρ̂‖σ̂) + δ/2.  (3.69)

We define an integer Ñ by

Ñ := max{NM, 2NS(ρ̂‖σ̂)/δ}.  (3.70)

Suppose an integer n is greater than Ñ; then we can write n = Nm + l, where m ≥ M and 0 ≤ l < N. We define a test Â_n by

Â_n := Â_{N,m} ⊗ Î^⊗l.  (3.71)

With this setup, considering hypothesis testing on H^⊗n,

(1/n) ln β*_n(ε) ≤ (1/n) ln β_n(Â_n) = (1/n) ln β_{Nm}(Â_{N,m}) ≤ (Nm/n)(−S(ρ̂‖σ̂) + δ/2).  (3.72)
It follows by direct calculation that

(Nm/n)(−S(ρ̂‖σ̂) + δ/2) ≤ −S(ρ̂‖σ̂) + (l/n) S(ρ̂‖σ̂) + δ/2  (3.73)
< −S(ρ̂‖σ̂) + (N/n) S(ρ̂‖σ̂) + δ/2  (3.74)
≤ −S(ρ̂‖σ̂) + (N/Ñ) S(ρ̂‖σ̂) + δ/2  (3.75)
≤ −S(ρ̂‖σ̂) + δ/2 + δ/2  (3.76)
= −S(ρ̂‖σ̂) + δ.  (3.77)

Hence we finally obtain

(1/n) ln β*_n(ε) ≤ −S(ρ̂‖σ̂) + δ.  (3.78)

This proves the direct part of quantum Stein's lemma.

Theorem 3.2. For any ε ∈ (0, 1), it holds that

lim sup_{n→∞} (1/n) ln β*_n(ε) ≤ −S(ρ̂‖σ̂).  (3.79)
3.2 Converse Part

In this section, we prove the converse part:

lim inf_{n→∞} (1/n) ln β*_n(ε) ≥ −S(ρ̂‖σ̂).  (3.80)

This inequality implies the nonexistence of too-good tests. In other words, there is no test which allows β_n(Â_n) to go to zero strictly faster than, roughly speaking, e^{−nS(ρ̂‖σ̂)} in the asymptotic limit n → ∞. This result was first proved by Ogawa and Nagaoka [2]. It complements the direct part, and together they yield quantum Stein's lemma.

3.2.1 Lemma and Corollary

Lemma 3.3. For any test Â and any Hermitian matrix Ĥ ∈ L_sa(H), it holds that

Tr ĤÂ ≤ Tr Ĥ P̂_H.  (3.81)

Proof. Recall that a test satisfies 0 ≤ Â ≤ Î and that P̂_H is the positive projection defined by Eq. (2.8). We write them explicitly as

Ĥ = Σ_{i=1}^d μ_i Ê_i,  (3.82)
P̂_H = Σ_{i: μ_i > 0} Ê_i.  (3.83)

From the definition of a positive operator, for any projection Ê_i = Σ_n |i_n⟩⟨i_n| and any test Â, we get

Tr Ê_i Â = Σ_n ⟨i_n|Â|i_n⟩ ≥ 0.  (3.84)

In addition, Tr Ê_i(Î − Â) is non-negative for any i ∈ D because Î − Â is also a test. Hence we obtain

0 ≤ Tr Ê_i Â ≤ Tr Ê_i.  (3.85)

Now the lemma follows by direct calculation:

Tr ĤÂ = Σ_i μ_i Tr Ê_i Â  (3.86)
≤ Σ_{i: μ_i > 0} μ_i Tr Ê_i Â  (3.87)
≤ Σ_{i: μ_i > 0} μ_i Tr Ê_i  (3.88)
= Tr Ĥ P̂_H.  (3.89)
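Lemma 3.3 can be sanity-checked numerically with a random Hermitian Ĥ and a random test 0 ≤ Â ≤ Î (a sketch of ours; the sigmoid squashing is just one way to generate a valid test):

```python
import numpy as np

def positive_projection(H, tol=1e-12):
    """P_H = sum of eigenprojectors of H with positive eigenvalue (Eq. 2.8)."""
    w, U = np.linalg.eigh(H)
    V = U[:, w > tol]
    return V @ V.conj().T

rng = np.random.default_rng(0)
d = 4
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = B + B.conj().T                       # random Hermitian matrix
C = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
w, U = np.linalg.eigh(C + C.conj().T)
A = U @ np.diag(1 / (1 + np.exp(-w))) @ U.conj().T   # eigenvalues in (0, 1)
P = positive_projection(H)
lhs = np.real(np.trace(H @ A))
rhs = np.real(np.trace(H @ P))
print(lhs <= rhs)   # Tr H A <= Tr H P_H  (Lemma 3.3)
```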
Corollary 3.4. Putting Ĥ = ρ̂^⊗n − e^{nλ} σ̂^⊗n, where λ ∈ R, and taking a test Â_n on H^⊗n,

Tr(ρ̂^⊗n − e^{nλ} σ̂^⊗n) Â_n ≤ Tr(ρ̂^⊗n − e^{nλ} σ̂^⊗n) P̂_n,  (3.90)

where we define P̂_n as the positive projection of ρ̂^⊗n − e^{nλ} σ̂^⊗n. Therefore, it holds that

α_n(Â_n) ≥ 1 − (e^{nλ} β_n(Â_n) + Tr ρ̂^⊗n P̂_n).  (3.91)

Proof. Eq. (3.90) indicates that

Tr ρ̂^⊗n Â_n − e^{nλ} Tr σ̂^⊗n Â_n ≤ Tr ρ̂^⊗n P̂_n − e^{nλ} Tr σ̂^⊗n P̂_n.  (3.92)

By the definitions of the error probabilities, we get

α_n(Â_n) = 1 − Tr ρ̂^⊗n Â_n ≥ 1 − e^{nλ} Tr σ̂^⊗n Â_n − Tr ρ̂^⊗n P̂_n + e^{nλ} Tr σ̂^⊗n P̂_n  (3.93)
≥ 1 − e^{nλ} Tr σ̂^⊗n Â_n − Tr ρ̂^⊗n P̂_n  (3.94)
= 1 − (e^{nλ} β_n(Â_n) + Tr ρ̂^⊗n P̂_n).  (3.95)

Next, we introduce real functions.

Definition 3.5. We define ψ(s) for s ≥ 0 and its Legendre transform φ(λ) for λ ≥ 0 as follows:

ψ(s) := ln Tr ρ̂^{1+s} σ̂^{−s},  (3.96)
φ(λ) := max_{0 ≤ s ≤ 1} {λs − ψ(s)}.  (3.97)
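The functions of Definition 3.5 are easy to evaluate numerically for full-rank states. The sketch below is our own illustration (the two qubit states are arbitrary assumptions); it also checks ψ(0) = 0, ψ′(0) = S(ρ̂‖σ̂), and φ(λ) > 0 for λ > S(ρ̂‖σ̂):

```python
import numpy as np

def mat_fn(M, f):
    """Apply f to the spectrum of a Hermitian matrix M."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(f(w)) @ U.conj().T

def psi(s, rho, sigma):
    """psi(s) = ln Tr rho^{1+s} sigma^{-s}  (Eq. 3.96); full-rank sigma assumed."""
    X = mat_fn(rho, lambda w: w ** (1 + s)) @ mat_fn(sigma, lambda w: w ** (-s))
    return float(np.log(np.real(np.trace(X))))

def phi(lam, rho, sigma):
    """phi(lambda) = max_{0 <= s <= 1} (lambda s - psi(s)), on a coarse grid."""
    return max(lam * s - psi(s, rho, sigma) for s in np.linspace(0, 1, 1001))

rho = np.array([[2/3, 0], [0, 1/3]])
sigma = np.array([[0.5, 0.25], [0.25, 0.5]])      # an assumed full-rank pair
S = float(np.real(np.trace(rho @ (mat_fn(rho, np.log) - mat_fn(sigma, np.log)))))

print(psi(0.0, rho, sigma))                       # 0: psi(0) = ln Tr rho = 0
h = 1e-5
print((psi(h, rho, sigma) - psi(-h, rho, sigma)) / (2 * h), S)  # psi'(0) = S
print(phi(S + 0.2, rho, sigma) > 0)               # phi > 0 above S(rho||sigma)
```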
The important properties of these functions are given below:

At the origin:
ψ(0) = 0 and ψ′(0) = S(ρ̂‖σ̂).  (3.98)
Convexity:
ψ″(s) ≥ 0.  (3.99)
Strict positivity of φ for λ > S(ρ̂‖σ̂):
φ(λ) > 0 for λ > S(ρ̂‖σ̂).  (3.100)

The proofs are found in Appendix 4.4. The third property follows from the convexity of ψ(s) for s ≥ 0, the definition of φ(λ), and ψ′(0) = S(ρ̂‖σ̂).

Lemma 3.6.
Tr ρ̂^⊗n P̂_n ≤ e^{−nφ(λ)} for any λ ≥ 0.  (3.101)

Proof. We explicitly write the spectral decomposition

ρ̂^⊗n − e^{nλ} σ̂^⊗n = Σ_i μ_i Ê_{n,i},  (3.102)
P̂_n = Σ_{i: μ_i > 0} Ê_{n,i}.  (3.103)

Using it, we introduce the classical probability distributions {p_{n,i}}_i and {q_{n,i}}_i defined by

p_{n,i} = Tr ρ̂^⊗n Ê_{n,i},  (3.104)
q_{n,i} = Tr σ̂^⊗n Ê_{n,i}.  (3.105)

Multiplying Eq. (3.102) by Ê_{n,j} and taking the trace, for any j we get

p_{n,j} − e^{nλ} q_{n,j} = μ_j Tr Ê_{n,j}.  (3.106)

Therefore,

p_{n,j} − e^{nλ} q_{n,j} ≥ 0 for j: μ_j > 0,  (3.107)
e^{nλ} q_{n,j} ≤ p_{n,j} for j: μ_j > 0,  (3.108)
1 ≤ e^{−nλs} p_{n,j}^s q_{n,j}^{−s} for j: μ_j > 0, s ∈ [0, 1].  (3.109)
Hence, we obtain

Tr ρ̂^⊗n P̂_n = Σ_{i: μ_i > 0} Tr ρ̂^⊗n Ê_{n,i}  (3.110)
= Σ_{i: μ_i > 0} p_{n,i}  (3.111)
≤ Σ_{i: μ_i > 0} e^{−nλs} p_{n,i}^{1+s} q_{n,i}^{−s}  (3.112)
≤ e^{−nλs} Σ_i p_{n,i}^{1+s} q_{n,i}^{−s}  (3.113)
≤ e^{−nλs} Tr[(ρ̂^⊗n)^{1+s} (σ̂^⊗n)^{−s}]  (3.114)
= e^{−nλs} Tr[(ρ̂^{1+s})^⊗n (σ̂^{−s})^⊗n]  (3.115)
= e^{−nλs} (e^{ψ(s)})^n  (3.116)
= e^{−n(λs − ψ(s))},  (3.117)

for any s ∈ [0, 1]. Here we used the monotonicity of the quantum f-divergence to obtain the fifth line (see Appendix 4.5). By taking the maximum over s, the lemma is proved:

Tr ρ̂^⊗n P̂_n ≤ e^{−nφ(λ)}.  (3.118)

3.2.2 Proof of the Converse Part

From Eq. (3.91) and Eq. (3.101), for any test Â_n we get

α_n(Â_n) ≥ 1 − (e^{nλ} β_n(Â_n) + Tr ρ̂^⊗n P̂_n) ≥ 1 − (e^{nλ} β_n(Â_n) + e^{−nφ(λ)}).  (3.119)

In quantum hypothesis testing, we assume α_n(Â_n) ≤ ε for some ε ∈ (0, 1); then it holds that

1 − ε ≤ 1 − α_n(Â_n) ≤ e^{nλ} β_n(Â_n) + e^{−nφ(λ)},  (3.120)

and hence

β_n(Â_n) ≥ e^{−nλ} (1 − ε − e^{−nφ(λ)}),  (3.121)

for any λ ≥ 0. Since Eq. (3.121) holds for any test Â_n that satisfies α_n(Â_n) ≤ ε, it follows that

β*_n(ε) ≥ e^{−nλ} (1 − ε − e^{−nφ(λ)}).  (3.122)

Then we substitute λ = S(ρ̂‖σ̂) + δ, where δ is an arbitrary positive real number. Recall that if λ > S(ρ̂‖σ̂) then φ(λ) > 0, and 0 < ε < 1, so the content of the parentheses is positive for sufficiently large n. Therefore we can take the logarithm of both sides,

ln β*_n(ε) ≥ −n(S(ρ̂‖σ̂) + δ) + ln(1 − ε − e^{−nφ(λ)}),  (3.123)
for sufficiently large n. Dividing both sides by n and taking the lim inf as n → ∞,

(1/n) ln β*_n(ε) ≥ −S(ρ̂‖σ̂) − δ + (1/n) ln(1 − ε − e^{−nφ(λ)}),  (3.124)
lim inf_{n→∞} (1/n) ln β*_n(ε) ≥ −S(ρ̂‖σ̂) − δ,  (3.125)

where δ is an arbitrary positive number, so the converse part is proved.

Theorem 3.7. For any ε ∈ (0, 1), it holds that

lim inf_{n→∞} (1/n) ln β*_n(ε) ≥ −S(ρ̂‖σ̂).  (3.126)

Theorem 3.8 (Quantum Stein's Lemma). For any ε ∈ (0, 1), it holds that

lim_{n→∞} (1/n) ln β*_n(ε) = −S(ρ̂‖σ̂).  (3.127)

Proof. It follows immediately from the direct part and the converse part:

−S(ρ̂‖σ̂) ≤ lim inf_{n→∞} (1/n) ln β*_n(ε) ≤ lim sup_{n→∞} (1/n) ln β*_n(ε) ≤ −S(ρ̂‖σ̂).  (3.128)
4 Appendix

4.1 Classical Stein's Lemma

Here we briefly review related topics in classical information theory: classical hypothesis testing, the Neyman-Pearson lemma, and the Chernoff-Stein lemma. These lemmas are results about classical two-valued hypothesis testing, and we make effective use of them in proving their quantum counterparts.

4.1.1 Mathematical Setup

In classical information theory, we consider a sample space Ω as a physical system and deal with probability distributions on Ω. Denote by P(Ω) the set of probability distributions on Ω and by p = {p(x)}_{x∈Ω} a probability distribution, where x ∈ Ω is an element of the sample space. Here we assume Ω is finite for simplicity.

Example 4.1. For example, a coin-toss trial can be written as

Ω = {0, 1}, p = {p(0), p(1)},  (4.1)

with p(0), p(1) ≥ 0 and p(0) + p(1) = 1. Here we define heads as 0 and tails as 1. If p(0) = p(1) = 1/2, the coin is unbiased.

Definition 4.2. We define the classical entropy by

S(p) := −Σ_{x∈Ω} p(x) ln p(x),  (4.2)

with the usual convention 0 ln 0 = 0.

Definition 4.3. We define the classical relative entropy (or Kullback-Leibler divergence) of probability distributions p and q by

S(p‖q) := { Σ_{x∈Ω} p(x)(ln p(x) − ln q(x))  (p ≪ q);  +∞  (otherwise),  (4.3)

where p ≪ q means that q(x) = 0 ⟹ p(x) = 0 for all x ∈ Ω. Here, too, we define 0 ln 0 = 0. Note that S(p‖q) is not necessarily equal to S(q‖p).
Properties of the relative entropy:

Non-negativity:
S(p‖q) ≥ 0, for all p, q ∈ P(Ω).  (4.4)
Nondegeneracy:
S(p‖q) = 0 ⟺ p(x) = q(x) for all x ∈ Ω.  (4.5)
Additivity:
S(p^n‖q^n) = nS(p‖q), where p^n, q^n are the i.i.d.ᵃ products of p, q on Ω^n.  (4.6)

ᵃ i.i.d.: independent and identically distributed.

Example 4.4. For the probability distributions p, q on Ω = {0, 1}:

p = {1/3, 2/3}, q = {1/2, 1/2}:  (4.7)
S(p‖q) = (1/3)(5 ln 2 − 3 ln 3),  (4.8)
S(q‖p) = (1/2)(2 ln 3 − 3 ln 2);  (4.9)

p = {1, 0}, q = {1/2, 1/2}:  (4.10)
S(p‖q) = ln 2,  (4.11)
S(q‖p) = +∞.  (4.12)

These properties lead us to expect that the relative entropy plays a role like a distance between p and q in P(Ω). This expectation is justified by the classical Stein's lemma. Note that we cannot regard S(p‖q) as a distance in the strict sense, because the relative entropy satisfies neither symmetry nor the triangle inequality.

4.1.2 Classical Hypothesis Testing

Suppose that we only know that the true probability distribution on a sample space Ω is either p ∈ P(Ω) or q ∈ P(Ω). We want to decide which probability distribution is true from the trial data.

Definition 4.5. We define a null hypothesis and an alternative hypothesis by

Null hypothesis H₀: p is the true probability distribution on Ω,  (4.13)
Alternative hypothesis H₁: q is the true probability distribution on Ω.  (4.14)

We prepare n identical sample spaces independently, so the total sample space is the direct product Ω^n. An element of Ω^n is x_n = (x_1, x_2, ..., x_n) ∈ Ω^n.
The probability distributions $p^n, q^n \in \mathcal{P}(\Omega^n)$ are the i.i.d. extensions of $p, q$, respectively, so the probability of an element $x^n \in \Omega^n$ under the hypotheses $H_0$ and $H_1$ is
$$H_0:\quad p^n(x^n) = p(x_1)p(x_2)\cdots p(x_n), \quad p \in \mathcal{P}(\Omega), \tag{4.15}$$
or
$$H_1:\quad q^n(x^n) = q(x_1)q(x_2)\cdots q(x_n), \quad q \in \mathcal{P}(\Omega), \tag{4.16}$$
respectively. For the hypothesis testing, we have to fix an acceptance domain $A_n \subseteq \Omega^n$ before the trial. If the trial results in $x^n \in A_n$, we accept the null hypothesis $H_0$ and reject the alternative hypothesis $H_1$. If not, we reject $H_0$ and accept $H_1$. From now on we call an acceptance domain $A_n \subseteq \Omega^n$ a test for convenience; a test is simply a subset of the total sample space $\Omega^n$. If $H_0$ is true, the probabilities of the correct decision $a_n$ and the wrong decision $\alpha_n$ are given by
$$a_n := p^n(A_n) = \sum_{x^n \in A_n} p^n(x^n), \tag{4.17}$$
$$\alpha_n := p^n(A_n^c) = 1 - a_n, \tag{4.18}$$
respectively, with $a_n + \alpha_n = 1$. Otherwise, if $H_1$ is true, the probabilities of the correct decision $b_n$ and the wrong decision $\beta_n$ are given by
$$b_n := q^n(A_n^c) = \sum_{x^n \in A_n^c} q^n(x^n), \tag{4.19}$$
$$\beta_n := q^n(A_n) = 1 - b_n, \tag{4.20}$$
with $b_n + \beta_n = 1$.

Definition 4.6. We define the error probability of the first kind $\alpha_n$ and the error probability of the second kind $\beta_n$ for a test $A_n \subseteq \Omega^n$ by
$$\alpha_n(A_n) := p^n(A_n^c) = \sum_{x^n \in A_n^c} p^n(x^n), \tag{4.21}$$
$$\beta_n(A_n) := q^n(A_n) = \sum_{x^n \in A_n} q^n(x^n). \tag{4.22}$$
We also call $\alpha_n$ the type I error probability and $\beta_n$ the type II error probability. Note that these probabilities depend on the acceptance domain $A_n$.

In actual hypothesis testing, it is usual to restrict the type I error probability by some number $\epsilon > 0$ as $\alpha_n(A_n) \le \epsilon$. This $\epsilon$ is called the level of significance (or critical p-value). We often encounter a situation where it is more important to avoid the type I error than to avoid the type II error, so we want to control the type I error probability.
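As a toy illustration of Definition 4.6 (ours, with an arbitrarily chosen acceptance domain), the two error probabilities for $n = 2$ and $\Omega = \{0, 1\}$ can be enumerated explicitly:

```python
from itertools import product

p = {0: 1/3, 1: 2/3}          # distribution under H_0
q = {0: 1/2, 1: 1/2}          # distribution under H_1

def prob(dist, xs):           # i.i.d. probability p^n(x^n) = prod_i p(x_i)
    out = 1.0
    for x in xs:
        out *= dist[x]
    return out

n = 2
omega_n = list(product([0, 1], repeat=n))
A_n = {xs for xs in omega_n if sum(xs) >= 1}      # an example test

alpha_n = sum(prob(p, xs) for xs in omega_n if xs not in A_n)   # Eq. (4.21)
beta_n  = sum(prob(q, xs) for xs in A_n)                        # Eq. (4.22)
assert abs(alpha_n - 1/9) < 1e-12   # only (0,0) is rejected: p^2((0,0)) = 1/9
assert abs(beta_n - 3/4) < 1e-12    # q^2 of the three accepted outcomes
```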
Then it is natural to ask what the minimum probability of the type II error is under the restriction $\alpha_n \le \epsilon$.

Definition 4.7. The minimum type II error probability under the restriction $\alpha_n \le \epsilon$ is
$$\beta_n^*(\epsilon) := \min_{\substack{A_n \subseteq \Omega^n \\ \alpha_n(A_n) \le \epsilon}} \beta_n(A_n). \tag{4.23}$$

4.1.3 Classical Chernoff-Stein's Lemma

We want to analyze the asymptotic behavior of $\beta_n^*(\epsilon)$ under the condition $\alpha_n \le \epsilon$. The word asymptotic means that the number of systems goes to infinity, $n \to \infty$. There is a classical lemma which answers this question.

Theorem 4.8 (Classical Chernoff-Stein's lemma). Suppose $S(p\|q) < \infty$. For any $\epsilon \in (0, 1)$, it holds that
$$\lim_{n\to\infty} \frac{1}{n} \ln \beta_n^*(\epsilon) = -S(p\|q). \tag{4.24}$$
The proof of this theorem can be found in [4].

Roughly speaking, this theorem indicates that $\beta_n^* \approx e^{-nS(p\|q)}$ for sufficiently large $n$. Hence, the greater the relative entropy is, the faster the type II error probability decreases. This is the reason why we interpret the relative entropy as a kind of distance on $\mathcal{P}(\Omega)$.
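The convergence in Theorem 4.8 can also be seen numerically. The sketch below (ours) computes $\beta_n^*(\epsilon)$ for the Bernoulli pair of Example 4.4 by exploiting the Neyman-Pearson structure of the optimal test: outcomes are added to the acceptance domain in order of decreasing likelihood ratio $p^n/q^n$, grouped by type (the number of zeros), until the retained $p$-mass reaches $1 - \epsilon$. The convergence of $-\frac{1}{n}\ln\beta_n^*(\epsilon)$ to $S(p\|q)$ is quite slow, of order $1/\sqrt{n}$.

```python
import math
from math import lgamma, log, exp

p0, q0, eps = 1/3, 1/2, 0.05    # p = {1/3, 2/3}, q = {1/2, 1/2}, level eps

def beta_star(n):
    cells = []
    for k in range(n + 1):                      # k = number of zeros in x^n
        lc = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
        lp = lc + k*log(p0) + (n - k)*log(1 - p0)   # ln p^n of the type class
        lq = lc + k*log(q0) + (n - k)*log(1 - q0)   # ln q^n of the type class
        cells.append((lp - lq, lp, lq))
    cells.sort(reverse=True)                    # decreasing likelihood ratio
    acc_p = acc_q = 0.0
    for _, lp, lq in cells:
        acc_p += exp(lp)
        acc_q += exp(lq)
        if acc_p >= 1 - eps:                    # alpha_n <= eps is reached
            break
    return acc_q                                # beta_n of this test

S = (5*log(2) - 3*log(3)) / 3                   # S(p||q) from Example 4.4
rate = -log(beta_star(1000)) / 1000
assert abs(rate - S) < 0.04                     # rate -> S(p||q), slowly
```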
4.2 The Neyman-Pearson Lemma

4.2.1 Classical Neyman-Pearson Lemma

The first question of hypothesis testing is to find the most efficient test $A_n \subseteq \Omega^n$, which keeps the two types of error probabilities as small as possible under some restriction. This problem was solved by Neyman and Pearson (1933).

Theorem 4.9 (The Classical Neyman-Pearson Lemma). For any real number $T > 0$, let us define a test $A_n^*(T) \subseteq \Omega^n$ by
$$A_n^*(T) := \left\{ x^n \in \Omega^n \;\middle|\; \frac{p^n(x^n)}{q^n(x^n)} > T \right\}. \tag{4.25}$$
We write the two types of error probability of this test as $\alpha_n^*(T) = p^n(A_n^*(T)^c)$ and $\beta_n^*(T) = q^n(A_n^*(T))$. Then it holds for any test $A_n \subseteq \Omega^n$ that
$$\alpha_n(A_n) \le \alpha_n^*(T) \;\Longrightarrow\; \beta_n(A_n) \ge \beta_n^*(T). \tag{4.26}$$

From this theorem, the type I error and the type II error cannot be made arbitrarily small simultaneously. In particular, for any test, they can never both reach zero at the same time. Moreover, if we consider restricted testing $\alpha_n \le \epsilon$, it suffices to consider tests $A_n$ of the form (4.25).

Proof. Let us consider a weighted average of the error probabilities for $T > 0$:
$$M_T(A_n) := \alpha_n(A_n) + T\beta_n(A_n) \tag{4.27}$$
$$= \sum_{x^n \in A_n^c} p^n(x^n) + T \sum_{x^n \in A_n} q^n(x^n) \tag{4.28}$$
$$= 1 - \sum_{x^n \in A_n} \big( p^n(x^n) - Tq^n(x^n) \big). \tag{4.29}$$
Since $A_n^*(T)$ collects exactly the points with $p^n(x^n) - Tq^n(x^n) > 0$, it is obvious that $M_T(A_n)$ attains its minimum at $A_n = A_n^*(T)$. Therefore, for any test $A_n \subseteq \Omega^n$,
$$M_T(A_n) = \alpha_n(A_n) + T\beta_n(A_n) \ge M_T(A_n^*) = \alpha_n^* + T\beta_n^*, \tag{4.30}$$
that is,
$$\alpha_n - \alpha_n^* \ge T(\beta_n^* - \beta_n) \tag{4.31}$$
for any $T > 0$. From this, the lemma follows immediately. $\square$
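Theorem 4.9 can be verified by brute force on a small example (ours): for $\Omega^2$ with $|\Omega| = 2$ there are only $2^4 = 16$ tests, so the weighted error $M_T(A_n)$ of Eq. (4.27) can be minimized exhaustively and compared with the likelihood-ratio test (4.25).

```python
from itertools import product, combinations

p = {0: 1/3, 1: 2/3}
q = {0: 1/2, 1: 1/2}
omega2 = list(product([0, 1], repeat=2))
pn = {xs: p[xs[0]] * p[xs[1]] for xs in omega2}
qn = {xs: q[xs[0]] * q[xs[1]] for xs in omega2}

def M(A, T):                  # weighted error alpha(A) + T * beta(A)
    alpha = sum(pn[xs] for xs in omega2 if xs not in A)
    beta = sum(qn[xs] for xs in A)
    return alpha + T * beta

for T in (0.5, 1.0, 2.0):
    A_star = {xs for xs in omega2 if pn[xs] / qn[xs] > T}   # Eq. (4.25)
    best = min(M(set(A), T) for r in range(len(omega2) + 1)
                            for A in combinations(omega2, r))
    assert abs(M(A_star, T) - best) < 1e-12   # A*(T) attains the minimum
```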
4.2.2 Quantum Neyman-Pearson Lemma

Suppose that we want to find the most effective POVM minimizing the error probabilities in a hypothesis test between $\hat\rho \in \mathcal{S}(\mathcal{H})$ and $\hat\sigma \in \mathcal{S}(\mathcal{H})$ on the $n$-fold tensor product system $\mathcal{H}^{\otimes n}$. The next theorem is the quantum counterpart of the classical Neyman-Pearson theorem.

Theorem 4.10 (The Quantum Neyman-Pearson Lemma). For any $T > 0$, we define the test $\{\hat A_n^*, \hat I - \hat A_n^*\}$, a POVM, by the projection onto the positive part:
$$\hat A_n^*(T) := \left\{ \hat\rho^{\otimes n} - T\hat\sigma^{\otimes n} > 0 \right\}. \tag{4.32}$$
We write the error probabilities of this POVM as $\alpha_n^* := \mathrm{Tr}\,\hat\rho^{\otimes n}(\hat I - \hat A_n^*)$ and $\beta_n^* := \mathrm{Tr}\,\hat\sigma^{\otimes n}\hat A_n^*$. Then, for any test $\{\hat A_n, \hat I - \hat A_n\}$, it holds that
$$\alpha_n(\hat A_n) \le \alpha_n^*(T) \;\Longrightarrow\; \beta_n(\hat A_n) \ge \beta_n^*(T). \tag{4.33}$$

As in the classical case, this confirms that we cannot make $\alpha_n$ and $\beta_n$ arbitrarily small simultaneously, and the efficient tests are of the form (4.32). The number $T$ is a parameter of these tests.

Proof. Let us consider a weighted average of the error probabilities for $T > 0$:
$$M_T(\hat A_n) := \alpha_n(\hat A_n) + T\beta_n(\hat A_n) \tag{4.34}$$
$$= \mathrm{Tr}\,\hat\rho^{\otimes n}(\hat I - \hat A_n) + T\,\mathrm{Tr}\,\hat\sigma^{\otimes n}\hat A_n \tag{4.35}$$
$$= 1 - \mathrm{Tr}\,(\hat\rho^{\otimes n} - T\hat\sigma^{\otimes n})\hat A_n. \tag{4.36}$$
It is clear that the trace term attains its maximum, and hence $M_T$ its minimum, at $\hat A_n = \hat A_n^*$. Therefore, for any test $\hat A_n$,
$$M_T(\hat A_n) = \alpha_n(\hat A_n) + T\beta_n(\hat A_n) \ge M_T(\hat A_n^*) = \alpha_n^* + T\beta_n^* \tag{4.37}$$
for any $T > 0$, and the lemma is proved. $\square$
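Here is a 2×2 sketch (ours) of the quantum Neyman-Pearson test for $n = 1$, using real symmetric matrices so that the spectral projection of $\hat\rho - T\hat\sigma$ has a closed form; the specific states are arbitrary. The projector onto the positive part minimizes $M_T(\hat A) = 1 - \mathrm{Tr}[(\hat\rho - T\hat\sigma)\hat A]$, which we check against a dense family of rank-one projectors together with $0$ and $\hat I$.

```python
import math

def mat(a, b, c):             # real symmetric 2x2 [[a, b], [b, c]]
    return [[a, b], [b, c]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(X):
    return X[0][0] + X[1][1]

def proj(theta):              # rank-one projector onto (cos t, sin t)
    c, s = math.cos(theta), math.sin(theta)
    return mat(c * c, c * s, s * s)

rho   = mat(0.7, 0.2, 0.3)    # an arbitrary state (symmetric, trace 1, PSD)
sigma = mat(0.5, 0.0, 0.5)    # the maximally mixed state
T = 1.0
D = mat(rho[0][0] - T * sigma[0][0], rho[0][1], rho[1][1] - T * sigma[1][1])

# spectral data of the symmetric 2x2 matrix D = rho - T*sigma
half_sum  = (D[0][0] + D[1][1]) / 2
half_diff = math.hypot((D[0][0] - D[1][1]) / 2, D[0][1])
theta_plus = 0.5 * math.atan2(2 * D[0][1], D[0][0] - D[1][1])

# A* = projection onto the positive part of D, Eq. (4.32)
if half_sum - half_diff > 0:          # both eigenvalues positive
    A_star = mat(1.0, 0.0, 1.0)
elif half_sum + half_diff > 0:        # only the larger eigenvalue positive
    A_star = proj(theta_plus)
else:
    A_star = mat(0.0, 0.0, 0.0)

def M(A):                     # alpha(A) + T*beta(A) = 1 - Tr[(rho - T sigma) A]
    return 1.0 - tr(mul(D, A))

candidates = [proj(k * math.pi / 500) for k in range(500)]
candidates += [mat(0.0, 0.0, 0.0), mat(1.0, 0.0, 1.0)]
assert all(M(A) >= M(A_star) - 1e-9 for A in candidates)
```

For these particular states $M(\hat A^*) = 1 - 0.2\sqrt{2} \approx 0.717$, and no projector in the family does better.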
4.3 Properties of von Neumann/Quantum Relative Entropy

4.3.1 Additivity of $S(\hat\rho\|\hat\sigma)$

Proof. Let $\hat\rho \otimes \hat\sigma$ be a 2-fold tensor product state for $\hat\rho \in \mathcal{S}(\mathcal{H}_A)$, $\hat\sigma \in \mathcal{S}(\mathcal{H}_B)$. Then we can calculate
$$\ln(\hat\rho \otimes \hat\sigma) = \ln\Big( \sum_{i,j} \lambda_i \eta_j \, \hat P_i \otimes \hat Q_j \Big) \tag{4.38}$$
$$= \sum_{i,j} (\ln \lambda_i \eta_j) \, \hat P_i \otimes \hat Q_j \tag{4.39}$$
$$= \sum_i (\ln \lambda_i) \hat P_i \otimes \sum_j \hat Q_j + \sum_i \hat P_i \otimes \sum_j (\ln \eta_j) \hat Q_j \tag{4.40}$$
$$= \ln\hat\rho \otimes \hat I_B + \hat I_A \otimes \ln\hat\sigma, \tag{4.41}$$
where $\hat\rho = \sum_i \lambda_i \hat P_i$ and $\hat\sigma = \sum_j \eta_j \hat Q_j$ are the spectral decompositions of $\hat\rho$ and $\hat\sigma$, respectively. Therefore, the additivity of the quantum relative entropy follows:
$$S(\hat\rho_A \otimes \hat\rho_B \| \hat\sigma_A \otimes \hat\sigma_B) = \mathrm{Tr}_{AB}\,\hat\rho_A \otimes \hat\rho_B \big( \ln(\hat\rho_A \otimes \hat\rho_B) - \ln(\hat\sigma_A \otimes \hat\sigma_B) \big) \tag{4.42}$$
$$= \mathrm{Tr}_{AB}\,\hat\rho_A \otimes \hat\rho_B \big( \ln\hat\rho_A \otimes \hat I_B + \hat I_A \otimes \ln\hat\rho_B - \ln\hat\sigma_A \otimes \hat I_B - \hat I_A \otimes \ln\hat\sigma_B \big) \tag{4.43}$$
$$= \mathrm{Tr}_A\,\hat\rho_A(\ln\hat\rho_A - \ln\hat\sigma_A)\,\mathrm{Tr}_B\,\hat\rho_B + \mathrm{Tr}_A\,\hat\rho_A\,\mathrm{Tr}_B\,\hat\rho_B(\ln\hat\rho_B - \ln\hat\sigma_B) \tag{4.44}$$
$$= S(\hat\rho_A\|\hat\sigma_A) + S(\hat\rho_B\|\hat\sigma_B). \tag{4.45}$$
$\square$

4.3.2 Rank

Proof. From the non-negativity of the quantum relative entropy, it holds that
$$0 \le S(\hat\rho\|\hat\sigma) = \mathrm{Tr}\,\hat\rho\ln\hat\rho - \mathrm{Tr}\,\hat\rho\ln\hat\sigma \tag{4.46}$$
$$= -S(\hat\rho) - \mathrm{Tr}\,\hat\rho\ln\hat\sigma \tag{4.47}$$
for any states $\hat\rho$ and $\hat\sigma$, i.e. $S(\hat\rho) \le -\mathrm{Tr}\,\hat\rho\ln\hat\sigma$. Let $\hat\rho$ be diagonalized, and substitute $\hat\sigma = \frac{1}{\mathrm{rank}\,\hat\rho}\,\mathrm{diag}(1, \ldots, 1, 0, \ldots, 0)$, the maximally mixed state on the support of $\hat\rho$; then we obtain
$$S(\hat\rho) \le -\mathrm{Tr}\,\hat\rho\ln\hat\sigma = (\ln \mathrm{rank}\,\hat\rho)\,\mathrm{Tr}\,\hat\rho = \ln \mathrm{rank}\,\hat\rho. \tag{4.48}$$
$\square$
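The key step (4.41), $\ln(\hat\rho\otimes\hat\sigma) = \ln\hat\rho\otimes\hat I + \hat I\otimes\ln\hat\sigma$, can be checked numerically. The sketch below (ours, with arbitrary 2×2 real symmetric states) exponentiates the right-hand side with a plain Taylor series and compares the result entrywise with the Kronecker product $\hat\rho\otimes\hat\sigma$.

```python
import math

def eig2(M):                  # spectral decomposition of a symmetric 2x2
    hs = (M[0][0] + M[1][1]) / 2
    hd = math.hypot((M[0][0] - M[1][1]) / 2, M[0][1])
    th = 0.5 * math.atan2(2 * M[0][1], M[0][0] - M[1][1])
    c, s = math.cos(th), math.sin(th)
    return [(hs + hd, (c, s)), (hs - hd, (-s, c))]

def log2x2(M):                # matrix log of a positive definite symmetric 2x2
    out = [[0.0, 0.0], [0.0, 0.0]]
    for lam, (vx, vy) in eig2(M):
        l = math.log(lam)
        out[0][0] += l * vx * vx; out[0][1] += l * vx * vy
        out[1][0] += l * vy * vx; out[1][1] += l * vy * vy
    return out

def kron(X, Y):
    n = len(X) * len(Y)
    return [[X[i // len(Y)][j // len(Y)] * Y[i % len(Y)][j % len(Y)]
             for j in range(n)] for i in range(n)]

def expm(M, terms=40):        # Taylor-series exponential, fine for small M
    n = len(M)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for k in range(1, terms):
        term = [[sum(term[i][l] * M[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

rho   = [[0.7, 0.2], [0.2, 0.3]]
sigma = [[0.6, -0.1], [-0.1, 0.4]]
I2 = [[1.0, 0.0], [0.0, 1.0]]
K1, K2 = kron(log2x2(rho), I2), kron(I2, log2x2(sigma))
K = [[K1[i][j] + K2[i][j] for j in range(4)] for i in range(4)]
L, R = expm(K), kron(rho, sigma)
assert all(abs(L[i][j] - R[i][j]) < 1e-9 for i in range(4) for j in range(4))
```

Since the two summands commute, $\exp(\ln\hat\rho\otimes\hat I + \hat I\otimes\ln\hat\sigma) = (\hat\rho\otimes\hat I)(\hat I\otimes\hat\sigma) = \hat\rho\otimes\hat\sigma$, which is exactly what the assertion confirms; additivity (4.45) then follows by taking traces as in the proof above.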
4.3.3 Q/C Relative Entropy in the Commutative Case

Proof. It follows by direct calculation. Let $\hat\rho = \sum_i p_i \hat E_i$ and $\hat\sigma = \sum_j q_j \hat E_j$ be commuting states with common spectral projections $\{\hat E_i\}$, $\mathrm{Tr}\,\hat E_i\hat E_j = \delta_{i,j}$. Then
$$S(\hat\rho\|\hat\sigma) = \mathrm{Tr}\,\hat\rho(\ln\hat\rho - \ln\hat\sigma) \tag{4.49}$$
$$= \mathrm{Tr}\Big( \sum_i p_i \hat E_i \Big)\Big( \sum_j (\ln p_j - \ln q_j)\hat E_j \Big) \tag{4.50}$$
$$= \sum_{i,j} p_i (\ln p_j - \ln q_j)\,\mathrm{Tr}\,\hat E_i\hat E_j \tag{4.51}$$
$$= \sum_{i,j} p_i (\ln p_j - \ln q_j)\,\delta_{i,j} \tag{4.52}$$
$$= S_c(p\|q). \tag{4.53}$$
$\square$

For the other properties, please see [7, 8].
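A numerical check (ours) of Eq. (4.53): commuting $\hat\rho$ and $\hat\sigma$ are built by rotating two diagonal matrices with the same orthogonal matrix, and the quantum relative entropy then coincides with the classical relative entropy of the spectra.

```python
import math

def log_sym2(M):              # matrix log of a positive definite symmetric 2x2
    hs = (M[0][0] + M[1][1]) / 2
    hd = math.hypot((M[0][0] - M[1][1]) / 2, M[0][1])
    th = 0.5 * math.atan2(2 * M[0][1], M[0][0] - M[1][1])
    c, s = math.cos(th), math.sin(th)
    l1, l2 = math.log(hs + hd), math.log(hs - hd)
    return [[l1*c*c + l2*s*s, (l1 - l2)*c*s],
            [(l1 - l2)*c*s, l1*s*s + l2*c*c]]

def rotate_diag(d, t):        # U diag(d) U^T for a rotation by angle t
    c, s = math.cos(t), math.sin(t)
    U = [[c, -s], [s, c]]
    return [[sum(U[i][k] * d[k] * U[j][k] for k in range(2))
             for j in range(2)] for i in range(2)]

p, q = [0.3, 0.7], [0.2, 0.8]            # spectra of rho and sigma
rho, sigma = rotate_diag(p, 0.7), rotate_diag(q, 0.7)

lr, ls = log_sym2(rho), log_sym2(sigma)
X = [[lr[i][j] - ls[i][j] for j in range(2)] for i in range(2)]
S_quantum = sum(rho[i][j] * X[j][i] for i in range(2) for j in range(2))
S_classical = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
assert abs(S_quantum - S_classical) < 1e-12
```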
4.4 The Properties of the Functions

(The function treated here is $\psi(s)$, defined through $e^{\psi(s)} = \mathrm{Tr}\,\hat\rho^{1+s}\hat\sigma^{-s}$.)

Proof. By the definition, $\psi(0) = 0$ is obvious, and it follows that
$$e^{\psi(s)} = \mathrm{Tr}\,\hat\rho^{1+s}\hat\sigma^{-s} = \mathrm{Tr}\,e^{(1+s)\ln\hat\rho}e^{-s\ln\hat\sigma}. \tag{4.54}$$
Then we differentiate both sides in $s$:
$$\psi'(s)e^{\psi(s)} = \mathrm{Tr}\left[ \ln\hat\rho\, e^{(1+s)\ln\hat\rho}e^{-s\ln\hat\sigma} - e^{(1+s)\ln\hat\rho}\ln\hat\sigma\, e^{-s\ln\hat\sigma} \right] \tag{4.55}$$
$$= \mathrm{Tr}\left[ \hat\rho^{1+s}\hat\sigma^{-s}(\ln\hat\rho - \ln\hat\sigma) \right]. \tag{4.56}$$
Substituting $s = 0$, we obtain
$$\psi'(0) = S(\hat\rho\|\hat\sigma). \tag{4.57}$$
Again we differentiate both sides in $s$:
$$\psi''(s)e^{\psi(s)} + (\psi'(s))^2 e^{\psi(s)} = \mathrm{Tr}\left[ e^{(1+s)\ln\hat\rho}\ln\hat\rho\, e^{-s\ln\hat\sigma}(\ln\hat\rho - \ln\hat\sigma) - e^{(1+s)\ln\hat\rho}\ln\hat\sigma\, e^{-s\ln\hat\sigma}(\ln\hat\rho - \ln\hat\sigma) \right] \tag{4.58}$$
$$= \mathrm{Tr}\left[ \hat\rho^{1+s}(\ln\hat\rho - \ln\hat\sigma)\,\hat\sigma^{-s}(\ln\hat\rho - \ln\hat\sigma) \right]. \tag{4.59}$$
Moreover, by direct calculation,
$$(\psi'(s))^2 e^{\psi(s)} = 2\psi'(s)\{\psi'(s)e^{\psi(s)}\} - (\psi'(s))^2 e^{\psi(s)} \tag{4.60}$$
$$= 2\,\mathrm{Tr}\left[ \psi'(s)\hat\rho^{1+s}\hat\sigma^{-s}(\ln\hat\rho - \ln\hat\sigma) \right] - \mathrm{Tr}\left[ (\psi'(s))^2\hat\rho^{1+s}\hat\sigma^{-s} \right]. \tag{4.61}$$
Hence we obtain
$$\psi''(s) = e^{-\psi(s)}\,\mathrm{Tr}\left[ \hat\rho^{1+s}\big(\ln\hat\rho - \ln\hat\sigma - \psi'(s)\big)\,\hat\sigma^{-s}\big(\ln\hat\rho - \ln\hat\sigma - \psi'(s)\big) \right] \tag{4.62}$$
$$= e^{-\psi(s)}\,\mathrm{Tr}\left[ \hat\rho^{1+s}\hat A\,\hat\sigma^{-s}\hat A \right] \qquad \big( \hat A := \ln\hat\rho - \ln\hat\sigma - \psi'(s) \big) \tag{4.63, 4.64}$$
$$= e^{-\psi(s)}\,\mathrm{Tr}\left[ \big( \hat\rho^{\frac{1+s}{2}}\hat A\,\hat\sigma^{-\frac{s}{2}} \big)\big( \hat\rho^{\frac{1+s}{2}}\hat A\,\hat\sigma^{-\frac{s}{2}} \big)^\dagger \right] \tag{4.65}$$
$$\ge 0 \tag{4.66}$$
for $s \ge 0$. $\square$
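The three properties proved above ($\psi(0) = 0$, $\psi'(0) = S(\hat\rho\|\hat\sigma)$, and convexity $\psi''(s) \ge 0$) can be checked by finite differences on a 2×2 example (ours; the states are arbitrary):

```python
import math

def sym2_fun(M, f):           # f(M) via the spectral decomposition, 2x2 symmetric
    hs = (M[0][0] + M[1][1]) / 2
    hd = math.hypot((M[0][0] - M[1][1]) / 2, M[0][1])
    th = 0.5 * math.atan2(2 * M[0][1], M[0][0] - M[1][1])
    c, s = math.cos(th), math.sin(th)
    l1, l2 = f(hs + hd), f(hs - hd)
    return [[l1*c*c + l2*s*s, (l1 - l2)*c*s],
            [(l1 - l2)*c*s, l1*s*s + l2*c*c]]

def trprod(A, B):             # Tr[A B] for 2x2 matrices
    return sum(A[i][j] * B[j][i] for i in range(2) for j in range(2))

rho   = [[0.7, 0.2], [0.2, 0.3]]
sigma = [[0.6, -0.1], [-0.1, 0.4]]

def psi(s):                   # psi(s) = ln Tr[rho^{1+s} sigma^{-s}]
    return math.log(trprod(sym2_fun(rho, lambda x: x**(1 + s)),
                           sym2_fun(sigma, lambda x: x**(-s))))

lr, ls = sym2_fun(rho, math.log), sym2_fun(sigma, math.log)
X = [[lr[i][j] - ls[i][j] for j in range(2)] for i in range(2)]
S_rel = trprod(rho, X)        # S(rho||sigma) = Tr rho (ln rho - ln sigma)

h = 1e-5
assert abs(psi(0.0)) < 1e-12                              # psi(0) = 0
assert abs((psi(h) - psi(-h)) / (2 * h) - S_rel) < 1e-6   # psi'(0) = S(rho||sigma)
h2 = 1e-4
for s in (0.0, 0.3, 0.7):                                 # convexity: psi'' >= 0
    assert (psi(s + h2) - 2 * psi(s) + psi(s - h2)) / h2**2 > -1e-6
```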
4.5 The Quantum f-Divergence

The quantum f-divergence is defined by
$$D_f(\hat\rho\|\hat\sigma) = \mathrm{Tr}\left[ \hat\rho\, f(L_\rho R_\sigma^{-1})(\hat I) \right] \qquad (\mathrm{supp}\,\hat\rho \subseteq \mathrm{supp}\,\hat\sigma), \tag{4.67}$$
where $f$ is a real continuous function on $\mathbb{R}_+$, and $L_\rho$ and $R_\sigma$ are the superoperators on $\mathcal{L}(\mathcal{H})$ defined by $L_\rho(X) = \hat\rho X$ and $R_\sigma(X) = X\hat\sigma$, respectively. If $f$ is an operator convex function, the quantum f-divergence satisfies the monotonicity under TPCP maps, i.e.,
$$D_f(\hat\rho\|\hat\sigma) \ge D_f(\mathcal{E}(\hat\rho)\|\mathcal{E}(\hat\sigma)), \tag{4.68}$$
where $\mathcal{E}$ is an arbitrary TPCP map.

In particular, we used this monotonicity for the power function $f(x) = x^s$, for which
$$D_f(\hat\rho\|\hat\sigma) = \mathrm{Tr}\,\hat\rho^{1+s}\hat\sigma^{-s}, \tag{4.69}$$
and the TPCP map defined by
$$\mathcal{E}(\hat X_n) = \mathrm{diag}\big(\mathrm{Tr}\,\hat X_n\hat E_{n,1},\ \mathrm{Tr}\,\hat X_n\hat E_{n,2},\ \ldots,\ \mathrm{Tr}\,\hat X_n\hat E_{n,n}\big), \tag{4.70}$$
for $\hat X_n \in \mathcal{L}(\mathcal{H}^{\otimes n})$. Hence we obtain
$$\mathcal{E}(\hat\rho^{\otimes n}) = \mathrm{diag}(p_{n,1}, p_{n,2}, \ldots, p_{n,n}), \tag{4.71}$$
$$\mathcal{E}(\hat\sigma^{\otimes n}) = \mathrm{diag}(q_{n,1}, q_{n,2}, \ldots, q_{n,n}). \tag{4.72}$$
We used these facts in Eq. (3.4). See also [10].
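A numerical check (ours) of the monotonicity (4.68) for a measurement map of the form (4.70), in the smallest case $n = 1$ with a computational-basis measurement of 2×2 states: measuring can only decrease $\mathrm{Tr}\,\hat\rho^{1+s}\hat\sigma^{-s}$.

```python
import math

def sym2_fun(M, f):           # f(M) via the spectral decomposition, 2x2 symmetric
    hs = (M[0][0] + M[1][1]) / 2
    hd = math.hypot((M[0][0] - M[1][1]) / 2, M[0][1])
    th = 0.5 * math.atan2(2 * M[0][1], M[0][0] - M[1][1])
    c, s = math.cos(th), math.sin(th)
    l1, l2 = f(hs + hd), f(hs - hd)
    return [[l1*c*c + l2*s*s, (l1 - l2)*c*s],
            [(l1 - l2)*c*s, l1*s*s + l2*c*c]]

rho   = [[0.7, 0.2], [0.2, 0.3]]
sigma = [[0.6, -0.1], [-0.1, 0.4]]
p = [rho[0][0], rho[1][1]]        # p_i = Tr[rho E_i], computational basis
q = [sigma[0][0], sigma[1][1]]    # q_i = Tr[sigma E_i]

for s in (0.2, 0.5, 1.0):
    A = sym2_fun(rho, lambda x: x**(1 + s))
    B = sym2_fun(sigma, lambda x: x**(-s))
    quantum = sum(A[i][j] * B[j][i] for i in range(2) for j in range(2))
    classical = sum(pi**(1 + s) * qi**(-s) for pi, qi in zip(p, q))
    assert quantum >= classical - 1e-12    # Eq. (4.68) for the map (4.70)
```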
References

[1] Fumio Hiai and Dénes Petz, "The Proper Formula for Relative Entropy and its Asymptotics in Quantum Probability," Communications in Mathematical Physics, vol. 143, no. 1, pp. 99-114, 1991.

[2] Tomohiro Ogawa and Hiroshi Nagaoka, "Strong Converse and Stein's Lemma in Quantum Hypothesis Testing," IEEE Trans. Inform. Theory, vol. 46, no. 7, pp. 2428-2433, 2000.

[3] H. Umegaki, "Conditional expectation in an operator algebra IV. Entropy and Information," Kodai Math. Sem. Rep., vol. 14, pp. 59-85, 1962.

[4] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, 1991.

[5] R. Bhatia, Matrix Analysis, Springer-Verlag, New York, 1997.

[6] Gerard J. Murphy, C*-Algebras and Operator Theory, Academic Press, 1st edition, September 1990.

[7] (in Japanese), SGC 32.

[8] (in Japanese), 2012.

[9] Tomohiro Ogawa and Masahito Hayashi, "A New Proof of the Direct Part of the Quantum Stein's Lemma," Erato Workshop on Quantum Information Science 2001 (EQIS2001), Tokyo, Japan, p. 57, September 2001 (Poster Session).

[10] F. Hiai, M. Mosonyi, D. Petz, and C. Bény, "Quantum f-divergences and error correction," Reviews in Mathematical Physics, vol. 23, no. 7, pp. 691-747, 2011.
More informationMathematical Methods wk 2: Linear Operators
John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm
More informationOn asymmetric quantum hypothesis testing
On asymmetric quantum hypothesis testing JMP, Vol 57, 6, 10.1063/1.4953582 arxiv:1612.01464 Cambyse Rouzé (Cambridge) Joint with Nilanjana Datta (University of Cambridge) and Yan Pautrat (Paris-Saclay)
More information4.3 Lecture 18: Quantum Mechanics
CHAPTER 4. QUANTUM SYSTEMS 73 4.3 Lecture 18: Quantum Mechanics 4.3.1 Basics Now that we have mathematical tools of linear algebra we are ready to develop a framework of quantum mechanics. The framework
More informationChapter 7. Hypothesis Testing
Chapter 7. Hypothesis Testing Joonpyo Kim June 24, 2017 Joonpyo Kim Ch7 June 24, 2017 1 / 63 Basic Concepts of Testing Suppose that our interest centers on a random variable X which has density function
More informationIntroduction to Group Theory
Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationQuantum Symmetric States
Quantum Symmetric States Ken Dykema Department of Mathematics Texas A&M University College Station, TX, USA. Free Probability and the Large N limit IV, Berkeley, March 2014 [DK] K. Dykema, C. Köstler,
More informationGeneralized Hypothesis Testing and Maximizing the Success Probability in Financial Markets
Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City
More informationEntropy in Classical and Quantum Information Theory
Entropy in Classical and Quantum Information Theory William Fedus Physics Department, University of California, San Diego. Entropy is a central concept in both classical and quantum information theory,
More informationBanach Journal of Mathematical Analysis ISSN: (electronic)
Banach J. Math. Anal. 6 (2012), no. 1, 139 146 Banach Journal of Mathematical Analysis ISSN: 1735-8787 (electronic) www.emis.de/journals/bjma/ AN EXTENSION OF KY FAN S DOMINANCE THEOREM RAHIM ALIZADEH
More informationABELIAN SELF-COMMUTATORS IN FINITE FACTORS
ABELIAN SELF-COMMUTATORS IN FINITE FACTORS GABRIEL NAGY Abstract. An abelian self-commutator in a C*-algebra A is an element of the form A = X X XX, with X A, such that X X and XX commute. It is shown
More informationMathematical Foundations of Quantum Mechanics
Mathematical Foundations of Quantum Mechanics 2016-17 Dr Judith A. McGovern Maths of Vector Spaces This section is designed to be read in conjunction with chapter 1 of Shankar s Principles of Quantum Mechanics,
More informationThe Free Central Limit Theorem: A Combinatorial Approach
The Free Central Limit Theorem: A Combinatorial Approach by Dennis Stauffer A project submitted to the Department of Mathematical Sciences in conformity with the requirements for Math 4301 (Honour s Seminar)
More information