Matrix limit theorems of Kato type related to positive linear maps and operator means
Matrix limit theorems of Kato type related to positive linear maps and operator means

Fumio Hiai 1

arXiv: v1 [math.FA] 1 Oct

Tohoku University (Emeritus), Hakusan, Abiko, Japan

Abstract

We obtain limit theorems for $\Phi(A^p)^{1/p}$ and $(A^p\,\sigma\,B)^{1/p}$ as $p\to\infty$ for positive matrices $A,B$, where $\Phi$ is a positive linear map between matrix algebras (in particular, $\Phi(A)=KAK^*$) and $\sigma$ is an operator mean (in particular, the weighted geometric mean). These are considered as certain reciprocal Lie-Trotter formulas and also as a generalization of Kato's limit to the supremum $A\vee B$ with respect to the spectral order.

2010 Mathematics Subject Classification: Primary 15A45, 15A42, 47A64

Key words and phrases: positive semidefinite matrix, Lie-Trotter formula, positive linear map, operator mean, operator monotone function, geometric mean, antisymmetric tensor power, Rényi relative entropy

1 Introduction

For any matrices $X$ and $Y$, the well-known Lie-Trotter formula is the convergence
    $\lim_{n\to\infty}(e^{X/n}e^{Y/n})^n=e^{X+Y}$.
The symmetric form with a continuous parameter is also well known for positive semidefinite matrices $A,B\ge0$ as
    $\lim_{p\searrow0}(A^{p/2}B^pA^{p/2})^{1/p}=P_0\exp(\log A+\log B)$,    (1.1)
where $P_0$ is the orthogonal projection onto the intersection of the supports of $A,B$ and $\log A+\log B$ is defined as $P_0(\log A)P_0+P_0(\log B)P_0$. When $\sigma$ is an operator mean [13] corresponding to an operator monotone function $f$ on $(0,\infty)$ such that $\alpha:=f'(1)$ is in $(0,1)$, the operator mean version of the Lie-Trotter formula is also known to hold for matrices $A,B\ge0$:
    $\lim_{p\searrow0}(A^p\,\sigma\,B^p)^{1/p}=P_0\exp((1-\alpha)\log A+\alpha\log B)$.    (1.2)

1 E-mail address: hiai.fumio@gmail.com
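The symmetric formula (1.1) is easy to probe numerically. The following is a minimal sketch (not from the paper) using numpy, with matrix powers, logarithms, and exponentials computed by eigendecomposition; the matrices are arbitrary random choices, and for positive definite $A,B$ the support projection $P_0$ is simply the identity:

```python
import numpy as np

def powm(H, t):
    """t-th power of a positive semidefinite Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.maximum(w, 0.0) ** t) @ V.conj().T

def logm(H):
    w, V = np.linalg.eigh(H)
    return (V * np.log(w)) @ V.conj().T

def expm(H):
    w, V = np.linalg.eigh(H)
    return (V * np.exp(w)) @ V.conj().T

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((2, 3, 3))
A = X @ X.T + np.eye(3)   # positive definite, so P_0 = I
B = Y @ Y.T + np.eye(3)

target = expm(logm(A) + logm(B))
errs = []
for p in (1.0, 0.1, 0.01):
    lhs = powm(powm(A, p / 2) @ powm(B, p) @ powm(A, p / 2), 1 / p)
    errs.append(np.linalg.norm(lhs - target))
print(errs)  # shrinks as p decreases toward 0
```

The error decreases as $p\searrow0$, consistent with (1.1).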
In particular, when $\sigma$ is the geometric mean $A\,\#\,B$ (introduced first in [15] and further discussed in [13]), corresponding to the operator monotone function $f(x)=x^{1/2}$, (1.2) yields
    $\lim_{p\searrow0}(A^p\,\#\,B^p)^{2/p}=P_0\exp(\log A+\log B)$,    (1.3)
which has the same right-hand side as (1.1). Due to the Araki-Lieb-Thirring inequality and the Ando-Hiai log-majorization [4, 1], it is worthwhile to note that $(A^{p/2}B^pA^{p/2})^{1/p}$ and $(A^p\,\#\,B^p)^{2/p}$ both tend to $P_0\exp(\log A+\log B)$ as $p\searrow0$, with the former decreasing and the latter increasing in the log-majorization order (see [1] for details on log-majorization for matrices).

In the previous paper [6], under the name reciprocal Lie-Trotter formula, we considered the question complementary to (1.1) and (1.3), that is, what happens to the limits of $(A^{p/2}B^pA^{p/2})^{1/p}$ and $(A^p\,\#\,B^p)^{2/p}$ as $p$ tends to $\infty$ instead of $0$. If $A$ and $B$ commute, then $(A^{p/2}B^pA^{p/2})^{1/p}=(A^p\,\#\,B^p)^{2/p}=AB$, independently of $p>0$. However, if $A$ and $B$ do not commute, then the question is rather complicated. Indeed, although we can prove the existence of the limit $\lim_{p\to\infty}(A^{p/2}B^pA^{p/2})^{1/p}$, the description of the limit has a rather complicated combinatorial nature. Moreover, it is unknown so far whether the limit of $(A^p\,\#\,B^p)^{2/p}$ as $p\to\infty$ exists or not.

In the present paper, we consider a similar (but seemingly a bit simpler) question: what happens to the limits of $(BA^pB)^{1/p}$ and $(A^p\,\#\,B)^{1/p}$ as $p$ tends to $\infty$, the case where $B$ is fixed without the $p$-power, in certain more general settings. The rest of the paper is organized as follows. In Section 2, we first prove the existence of the limit of $(KA^pK^*)^{1/p}$ as $p\to\infty$ and give a description of the limit in terms of the diagonalization (eigenvalues and eigenvectors) data of $A$ and the images of the eigenvectors under $K$. We then extend the result to the limit of $\Phi(A^p)^{1/p}$ as $p\to\infty$ for a positive linear map $\Phi$ between matrix algebras.
For instance, this limit is applied to the map $\Phi(A\oplus B):=(A+B)/2$ to reformulate Kato's limit theorem $((A^p+B^p)/2)^{1/p}\to A\vee B$ in [12]. Another application is given to find the limit formula as $\alpha\searrow0$ of the sandwiched $\alpha$-Rényi divergence [14, 18], a new relative entropy relevant to quantum information theory. In Section 3, we discuss the limit behavior of $(A^p\,\sigma\,B)^{1/p}$ as $p\to\infty$ for operator means $\sigma$. To do this, we may assume without loss of generality that $B$ is an orthogonal projection $E$. Under a certain condition on $\sigma$, we prove that $(A^p\,\sigma\,E)^{1/p}$ is decreasing as $1\le p\nearrow\infty$, so that the limit as $p\to\infty$ exists. Furthermore, when $\sigma$ is the weighted geometric mean, we obtain an explicit description of the limit in terms of $E$ and the spectral projections of $A$. It is worth noting that a limit formula in the same vein as those in [12] and this paper was formerly given in [2, 3] for the spectral shorting operation.
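Kato's limit theorem quoted above, $((A^p+B^p)/2)^{1/p}\to A\vee B$, can already be explored numerically. A hedged numpy sketch (the matrices and exponents are my own choices, not from the paper); in the commuting case $A\vee B$ is just the entrywise supremum of the joint eigenvalue lists, while in general one can at least check that the large-$p$ iterate nearly dominates both $A$ and $B$:

```python
import numpy as np

def powm(H, t):
    """t-th power of a positive semidefinite Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.maximum(w, 0.0) ** t) @ V.conj().T

# Commuting case: for simultaneously diagonal A, B the spectral-order
# supremum A v B is the entrywise maximum of the eigenvalue lists.
A = np.diag([3.0, 1.0])
B = np.diag([2.0, 4.0])
sup_AB = np.diag([3.0, 4.0])
C = powm((powm(A, 400) + powm(B, 400)) / 2, 1 / 400)
err = np.linalg.norm(C - sup_AB)

# Noncommuting case: ((A^p + B^p)/2)^(1/p) >= 2^(-1/p) A (and likewise for B),
# since (A^p + B^p)/2 >= A^p/2 and x^(1/p) is operator monotone; so for large p
# the iterate dominates A and B up to a small, explicitly bounded defect.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
B2 = U @ B @ U.T
C2 = powm((powm(A, 60) + powm(B2, 60)) / 2, 1 / 60)
dA = np.linalg.eigvalsh(C2 - A).min()
dB = np.linalg.eigvalsh(C2 - B2).min()
print(err, dA, dB)
```

The commuting-case error is of order $\log 2/p$, and the Loewner defects in the noncommuting case are bounded below by $-(1-2^{-1/p})\|A\|$ and $-(1-2^{-1/p})\|B\|$ respectively.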
2 $\Phi(A^p)^{1/p}$ for positive linear maps $\Phi$

For each $n\in\mathbb N$ we write $M_n$ for the $n\times n$ complex matrix algebra and $M_n^+$ for the set of positive semidefinite matrices in $M_n$. When $A\in M_n$ is positive definite, we write $A>0$. We denote by $\mathrm{Tr}$ the usual trace functional on $M_n$. For $A\in M_n^+$, $\lambda_1(A)\ge\dots\ge\lambda_n(A)$ are the eigenvalues of $A$ in decreasing order with multiplicities, and $\mathrm{ran}\,A$ is the range of $A$.

Let $A\in M_n^+$ be given, whose diagonalization is
    $A=V\,\mathrm{diag}(a_1,\dots,a_n)V^*=\sum_{i=1}^n a_i\,v_iv_i^*$    (2.1)
with the eigenvalues $a_1\ge\dots\ge a_n\ge0$ and a unitary matrix $V=[v_1\ \dots\ v_n]$, so that $Av_i=a_iv_i$ for $1\le i\le n$. Let $K\in M_n$ and assume that $K\ne0$ (our problem below is trivial when $K=0$). Consider the sequence of vectors $Kv_1,\dots,Kv_n$ in $\mathbb C^n$. Let $1\le l_1<l_2<\dots<l_m\le n$ be chosen so that if $l_{k-1}<i<l_k$ for $1\le k\le m+1$ (where $l_0:=0$ and $l_{m+1}:=n+1$), then $Kv_i$ is in $\mathrm{span}\{Kv_{l_1},\dots,Kv_{l_{k-1}}\}$ (this means, in particular, that $Kv_i=0$ if $i<l_1$). Then $\{Kv_{l_1},\dots,Kv_{l_m}\}$ is a linearly independent subset of $\{Kv_1,\dots,Kv_n\}$, so we perform the Gram-Schmidt orthogonalization to obtain orthonormal vectors $u_1,\dots,u_m$ from $Kv_{l_1},\dots,Kv_{l_m}$. In particular, $u_1=Kv_{l_1}/\|Kv_{l_1}\|$.

The next theorem is our first limit theorem. It implicitly says that the right-hand side of (2.2) is independent of the expression (2.1) (note that the $v_i$'s are not unique for degenerate eigenvalues $a_i$).

Theorem 2.1. We have
    $\lim_{p\to\infty}(KA^pK^*)^{1/p}=\sum_{k=1}^m a_{l_k}\,u_ku_k^*$,    (2.2)
and in particular,
    $\lim_{p\to\infty}\lambda_k\big((KA^pK^*)^{1/p}\big)=\begin{cases}a_{l_k}, & 1\le k\le m,\\ 0, & m<k\le n.\end{cases}$    (2.3)

Proof. Write $Z_p:=(KA^pK^*)^{1/p}$ and $\lambda_i(p):=\lambda_i(Z_p)$ for $p>0$ and $1\le i\le n$. First we prove (2.3). Note that
    $Z_p^p=KA^pK^*=KV\,\mathrm{diag}(a_1^p,\dots,a_n^p)V^*K^*=[a_1^pKv_1\ \ a_2^pKv_2\ \cdots\ a_n^pKv_n]\,[Kv_1\ Kv_2\ \cdots\ Kv_n]^*$.    (2.4)
Since
    $\lambda_1(p)^p\le\mathrm{Tr}\,Z_p^p=\sum_{i=1}^n a_i^p\langle Kv_i,Kv_i\rangle\le a_{l_1}^p\sum_{i=1}^n\|Kv_i\|^2$,
we have
    $\limsup_{p\to\infty}\lambda_1(p)\le a_{l_1}$.
Moreover, since
    $n\lambda_1(p)^p\ge\mathrm{Tr}\,Z_p^p=\sum_{i=1}^n a_i^p\langle Kv_i,Kv_i\rangle\ge a_{l_1}^p\|Kv_{l_1}\|^2$,
we have
    $\liminf_{p\to\infty}\lambda_1(p)\ge a_{l_1}$.
Therefore, (2.3) holds for $k=1$.

To prove (2.3) for $k>1$, we consider the antisymmetric tensor powers $A^{\wedge k}$ and $K^{\wedge k}$ for each $k=1,\dots,n$. Note that
    $Z_p^{\wedge k}=\big((K^{\wedge k})(A^{\wedge k})^p(K^{\wedge k})^*\big)^{1/p}$    (2.5)
and
    $A^{\wedge k}=\sum_{1\le i_1<\dots<i_k\le n}a_{i_1}\cdots a_{i_k}\,(v_{i_1}\wedge\dots\wedge v_{i_k})(v_{i_1}\wedge\dots\wedge v_{i_k})^*$.
The above case applied to $A^{\wedge k}$ and $K^{\wedge k}$ yields
    $\lim_{p\to\infty}\lambda_1(p)\lambda_2(p)\cdots\lambda_k(p)=\lim_{p\to\infty}\lambda_1\big(Z_p^{\wedge k}\big)=\max\big\{a_{i_1}\cdots a_{i_k}:1\le i_1<\dots<i_k\le n,\ K^{\wedge k}(v_{i_1}\wedge\dots\wedge v_{i_k})\ne0\big\}$.    (2.6)
Since $K^{\wedge k}(v_{i_1}\wedge\dots\wedge v_{i_k})=Kv_{i_1}\wedge\dots\wedge Kv_{i_k}$ is non-zero if and only if $\{Kv_{i_1},\dots,Kv_{i_k}\}$ is linearly independent, it is easy to see that (2.6) is equal to $a_{l_1}\cdots a_{l_k}$ if $k\le m$ and to $0$ if $k>m$. Therefore,
    $\lim_{p\to\infty}\lambda_1(p)\lambda_2(p)\cdots\lambda_k(p)=\begin{cases}a_{l_1}\cdots a_{l_k}, & 1\le k\le m,\\ 0, & m<k\le n,\end{cases}$
which implies (2.3).

Now, for $p>0$ choose an orthonormal basis $\{u_1(p),\dots,u_n(p)\}$ of $\mathbb C^n$ for which $Z_pu_i(p)=\lambda_i(p)u_i(p)$ for $1\le i\le n$. To prove (2.2), write $\tilde a_k:=a_{l_k}$ for $1\le k\le m$. If $\tilde a_1=0$ then it is obvious that $\lim_{p\to\infty}Z_p=0$. So assume that $\tilde a_1>0$, and furthermore $\tilde a_1>\tilde a_2$ (so that $\lambda_1(p)>\lambda_2(p)$ for all large $p$) at the moment. From (2.4) we have
    $Z_p^p=\sum_{i=1}^n a_i^p\,(Kv_i)(Kv_i)^*=\sum_{i=l_1}^{l_2-1}a_i^p\,(Kv_i)(Kv_i)^*+\sum_{i=l_2}^{n}a_i^p\,(Kv_i)(Kv_i)^*$
so that
    $\Big(\frac{Z_p}{\tilde a_1}\Big)^p=\sum_{i=l_1}^{l_2-1}\Big(\frac{a_i}{\tilde a_1}\Big)^p\|Kv_i\|^2\,u_1u_1^*+\sum_{i=l_2}^{n}\Big(\frac{a_i}{\tilde a_1}\Big)^p(Kv_i)(Kv_i)^*\longrightarrow\alpha\,u_1u_1^*$ as $p\to\infty$
for some $\alpha>0$, since $Kv_i\in\mathrm{span}\{u_1\}$ for $l_1\le i<l_2$ and $a_i/\tilde a_1\le\tilde a_2/\tilde a_1<1$ for $i\ge l_2$. Hence, for any sufficiently large $p>0$, the largest eigenvalue of $(Z_p/\tilde a_1)^p$ is simple and the corresponding eigenprojection converges to $u_1u_1^*$ as $p\to\infty$. Since the eigenprojection $E_1(p)$ of $Z_p$ corresponding to the largest eigenvalue $\lambda_1(p)$ (simple for any large $p>0$) is the same as that of $(Z_p/\tilde a_1)^p$, we have
    $E_1(p)=u_1(p)u_1(p)^*\longrightarrow u_1u_1^*$ as $p\to\infty$.

In the general situation, assume that $\tilde a_1\ge\dots\ge\tilde a_k>\tilde a_{k+1}$ with $1\le k\le m$, where $\tilde a_{k+1}:=0$ if $k=m$. From (2.3) note that
    $\lim_{p\to\infty}\lambda_1\big(Z_p^{\wedge k}\big)=\tilde a_1\cdots\tilde a_{k-1}\tilde a_k>\tilde a_1\cdots\tilde a_{k-1}\tilde a_{k+1}=\lim_{p\to\infty}\lambda_2\big(Z_p^{\wedge k}\big)$.
Hence, for any sufficiently large $p>0$, the largest eigenvalue $\lambda_1(Z_p^{\wedge k})=\lambda_1(p)\cdots\lambda_k(p)$ of $Z_p^{\wedge k}$ is simple, and from the above case applied to (2.5) it follows that
    $(u_1(p)\wedge\dots\wedge u_k(p))(u_1(p)\wedge\dots\wedge u_k(p))^*\longrightarrow(u_1\wedge\dots\wedge u_k)(u_1\wedge\dots\wedge u_k)^*$ as $p\to\infty$,
since the vector in the present situation corresponding to $u_1=Kv_{l_1}/\|Kv_{l_1}\|$ is
    $\frac{K^{\wedge k}(v_{l_1}\wedge\dots\wedge v_{l_k})}{\|K^{\wedge k}(v_{l_1}\wedge\dots\wedge v_{l_k})\|}=\frac{Kv_{l_1}\wedge\dots\wedge Kv_{l_k}}{\|Kv_{l_1}\wedge\dots\wedge Kv_{l_k}\|}=u_1\wedge\dots\wedge u_k$.
(The last identity follows from the fact that, for linearly independent $w_1,\dots,w_k$, one has $w_1\wedge\dots\wedge w_k/\|w_1\wedge\dots\wedge w_k\|=w_1'\wedge\dots\wedge w_k'$ if $w_1',\dots,w_k'$ are the Gram-Schmidt orthogonalization of $w_1,\dots,w_k$.) By [6, Lemma 2.4] we see that the orthogonal projection $E_k(p)$ onto $\mathrm{span}\{u_1(p),\dots,u_k(p)\}$ converges to the orthogonal projection $E_k$ onto $\mathrm{span}\{u_1,\dots,u_k\}$.

Finally, let $0=k_0<k_1<\dots<k_{s-1}<k_s=m$ be such that
    $\tilde a_1=\dots=\tilde a_{k_1}>\tilde a_{k_1+1}=\dots=\tilde a_{k_2}>\dots>\tilde a_{k_{s-1}+1}=\dots=\tilde a_{k_s}$.
The above argument says that, for every $r=1,\dots,s-1$, the orthogonal projection $E_{k_r}(p)$ onto $\mathrm{span}\{u_1(p),\dots,u_{k_r}(p)\}$ converges to the orthogonal projection $E_{k_r}$ onto $\mathrm{span}\{u_1,\dots,u_{k_r}\}$. When $\tilde a_{k_s}>0$, this holds for $r=s$ as well. Therefore, when $\tilde a_{k_s}>0$, we have
    $Z_p=\sum_{i=1}^{n}\lambda_i(p)\,u_i(p)u_i(p)^*$
    $=\sum_{r=1}^{s}\sum_{i=k_{r-1}+1}^{k_r}\lambda_i(p)\,u_i(p)u_i(p)^*+\sum_{i=k_s+1}^{n}\lambda_i(p)\,u_i(p)u_i(p)^*$
    $=\sum_{r=1}^{s}\sum_{i=k_{r-1}+1}^{k_r}\big(\lambda_i(p)-\tilde a_i\big)\,u_i(p)u_i(p)^*+\sum_{i=k_s+1}^{n}\lambda_i(p)\,u_i(p)u_i(p)^*+\sum_{r=1}^{s}\tilde a_{k_r}\big(E_{k_r}(p)-E_{k_{r-1}}(p)\big)$
    $\longrightarrow\sum_{r=1}^{s}\tilde a_{k_r}\big(E_{k_r}-E_{k_{r-1}}\big)=\sum_{i=1}^{m}\tilde a_i\,u_iu_i^*$,
where $E_0(p)=E_0:=0$. When $\tilde a_{k_s}=0$, we may modify the above estimate as
    $Z_p=\sum_{r=1}^{s-1}\sum_{i=k_{r-1}+1}^{k_r}\lambda_i(p)\,u_i(p)u_i(p)^*+\sum_{i=k_{s-1}+1}^{n}\lambda_i(p)\,u_i(p)u_i(p)^*\longrightarrow\sum_{r=1}^{s-1}\tilde a_{k_r}\big(E_{k_r}-E_{k_{r-1}}\big)=\sum_{i=1}^{k_{s-1}}\tilde a_i\,u_iu_i^*=\sum_{i=1}^{m}\tilde a_i\,u_iu_i^*$.

The following corollary of Theorem 2.1 is an improvement of [7, Theorem 1.2].

Corollary 2.2. Let $A\in M_n$ be positive definite. We have $\lim_{p\to\infty}\lambda_i\big((KA^pK^*)^{1/p}\big)=a_i$ for all $i=1,\dots,n$ if and only if $\{Kv_1,\dots,Kv_n\}$ is linearly independent.

Remark 2.3. Theorem 2.1 easily extends to the case where $K$ is a rectangular $n_2\times n$ matrix. In fact, when $n_2<n$ we may apply Theorem 2.1 to the $n\times n$ matrices $\begin{bmatrix}K\\O\end{bmatrix}$ and $A$, and when $n_2>n$ we may apply it to the $n_2\times n_2$ matrices $[K\ \ O]$ and $A\oplus O_{n_2-n}$.

A linear map $\Phi:M_n\to M_{n_2}$ is said to be positive if $\Phi(A)\in M_{n_2}^+$ for all $A\in M_n^+$; it is further said to be strictly positive if $\Phi(I_n)>0$, that is, $\Phi(A)>0$ for all $A\in M_n$ with $A>0$. The following is an extended and refined version of Theorem 2.1.

Theorem 2.4. Let $\Phi:M_n\to M_{n_2}$ be a positive linear map. Let $A\in M_n^+$ be given as $A=\sum_{i=1}^n a_i\,v_iv_i^*$ with $a_1\ge\dots\ge a_n$ and an orthonormal basis $\{v_1,\dots,v_n\}$ of $\mathbb C^n$. Then $\lim_{p\to\infty}\Phi(A^p)^{1/p}$ exists and
    $\lim_{p\to\infty}\Phi(A^p)^{1/p}=\sum_{i=1}^n a_i\,P_{M_i}$,
where $M_1:=\mathrm{ran}\,\Phi(v_1v_1^*)$,
    $M_i:=\mathrm{ran}\,\Phi\Big(\sum_{j=1}^{i}v_jv_j^*\Big)\ominus\mathrm{ran}\,\Phi\Big(\sum_{j=1}^{i-1}v_jv_j^*\Big)$, $\quad 2\le i\le n$,
and $P_{M_i}$ is the orthogonal projection onto $M_i$ for $1\le i\le n$.

Proof. Let $C^*(I,A)$ be the commutative $C^*$-subalgebra of $M_n$ generated by $I,A$. We can consider the composition of the conditional expectation from $M_n$ onto $C^*(I,A)$ with respect to $\mathrm{Tr}$ with $\Phi|_{C^*(I,A)}:C^*(I,A)\to M_{n_2}$ instead of $\Phi$, so we may assume that $\Phi$ is completely positive. By the Stinespring representation there are a $\nu\in\mathbb N$, a $*$-homomorphism $\pi:M_n\to M_{n\nu}$ and a linear map $K:\mathbb C^{n\nu}\to\mathbb C^{n_2}$ such that $\Phi(X)=K\pi(X)K^*$ for all $X\in M_n$. Moreover, since $\pi:M_n\to M_{n\nu}$ is represented, under a suitable change of an orthonormal basis of $\mathbb C^{n\nu}$, as $\pi(X)=I_\nu\otimes X$ for all $X\in M_n$ under the identification $M_{n\nu}=M_\nu\otimes M_n$, we may assume that $\Phi$ is given (with a change of $K$) as
    $\Phi(X)=K(I_\nu\otimes X)K^*$, $\quad X\in M_n$.
We then write
    $I_\nu\otimes A=(I_\nu\otimes V)\,\mathrm{diag}(\underbrace{a_1,\dots,a_1}_{\nu},\underbrace{a_2,\dots,a_2}_{\nu},\dots,\underbrace{a_n,\dots,a_n}_{\nu})\,(I_\nu\otimes V)^*=\sum_{i=1}^n a_i\big((e_1\otimes v_i)(e_1\otimes v_i)^*+\dots+(e_\nu\otimes v_i)(e_\nu\otimes v_i)^*\big)$.
Now, we consider the following sequence of $n\nu$ vectors in $\mathbb C^{n_2}$:
    $K(e_1\otimes v_1),\dots,K(e_\nu\otimes v_1),\ K(e_1\otimes v_2),\dots,K(e_\nu\otimes v_2),\ \dots,\ K(e_1\otimes v_n),\dots,K(e_\nu\otimes v_n)$,
and if $K(e_j\otimes v_i)$ is a linear combination of the vectors preceding it in the sequence, then we remove it from the sequence. We write the resulting linearly independent subsequence as
    $K(e_j\otimes v_{l_1})\ (j\in J_1),\quad K(e_j\otimes v_{l_2})\ (j\in J_2),\quad\dots,\quad K(e_j\otimes v_{l_m})\ (j\in J_m)$,
where $1\le l_1<l_2<\dots<l_m\le n$ and $J_1,\dots,J_m\subset\{1,\dots,\nu\}$. Furthermore, by performing the Gram-Schmidt orthogonalization on this subsequence, we end up with an orthonormal sequence of vectors in $\mathbb C^{n_2}$ as follows:
    $u^{(l_1)}_j\ (j\in J_1),\quad u^{(l_2)}_j\ (j\in J_2),\quad\dots,\quad u^{(l_m)}_j\ (j\in J_m)$.
Since $\Phi(A^p)^{1/p}=\big(K(I_\nu\otimes A)^pK^*\big)^{1/p}$, Theorem 2.1 and Remark 2.3 imply that $\lim_{p\to\infty}\Phi(A^p)^{1/p}$ exists and
    $\lim_{p\to\infty}\Phi(A^p)^{1/p}=\sum_{k=1}^m a_{l_k}\Big(\sum_{j\in J_k}u^{(l_k)}_j\big(u^{(l_k)}_j\big)^*\Big)$,
where $\sum_{j\in J_k}u^{(l_k)}_j\big(u^{(l_k)}_j\big)^*:=0$ if $J_k=\emptyset$.

The next step of the proof is to find what $\sum_{j\in J_k}u^{(l_k)}_j\big(u^{(l_k)}_j\big)^*$ is for $1\le k\le m$. For this we first note that
    $\sum_{j=1}^{\nu}K(e_j\otimes v_i)\big(K(e_j\otimes v_i)\big)^*=K\Big(\sum_{j=1}^{\nu}(e_j\otimes v_i)(e_j\otimes v_i)^*\Big)K^*=K(I_\nu\otimes v_iv_i^*)K^*=\Phi(v_iv_i^*)$.
From Lemma 2.5 below this implies that
    $R_i:=\mathrm{ran}\,\Phi(v_iv_i^*)=\mathrm{span}\{K(e_j\otimes v_i):1\le j\le\nu\}$.
Through the procedure of the Gram-Schmidt orthogonalization we see that
    $R_i=\{0\}$, $\quad 1\le i<l_1$,
    $R_{l_1}=\mathrm{span}\{u^{(l_1)}_j:j\in J_1\}$,
    $R_i\subset R_{l_1}$, $\quad l_1<i<l_2$,
    $(R_{l_1}\vee R_{l_2})\ominus R_{l_1}=\mathrm{span}\{u^{(l_2)}_j:j\in J_2\}$,
    $R_i\subset R_{l_1}\vee R_{l_2}$, $\quad l_2<i<l_3$,
    $(R_{l_1}\vee R_{l_2}\vee R_{l_3})\ominus(R_{l_1}\vee R_{l_2})=\mathrm{span}\{u^{(l_3)}_j:j\in J_3\}$,
    $\ \vdots$
    $R_i\subset R_{l_1}\vee\dots\vee R_{l_{m-1}}$, $\quad l_{m-1}<i<l_m$,
    $(R_{l_1}\vee\dots\vee R_{l_m})\ominus(R_{l_1}\vee\dots\vee R_{l_{m-1}})=\mathrm{span}\{u^{(l_m)}_j:j\in J_m\}$,
    $R_i\subset R_{l_1}\vee\dots\vee R_{l_m}$, $\quad l_m<i\le n$.
Now, let $P_{M_i}$ be the orthogonal projections, respectively, onto the subspaces
    $M_1:=R_1$, $\quad M_i:=(R_1\vee\dots\vee R_i)\ominus(R_1\vee\dots\vee R_{i-1})$, $\quad 2\le i\le n$,
so that $P_{M_i}=0$ if $i\notin\{l_1,\dots,l_m\}$ and $P_{M_{l_k}}$ is the orthogonal projection onto $\mathrm{span}\{u^{(l_k)}_j:j\in J_k\}$ for $1\le k\le m$. Therefore, we have
    $\lim_{p\to\infty}\Phi(A^p)^{1/p}=\sum_{k=1}^m a_{l_k}\Big(\sum_{j\in J_k}u^{(l_k)}_j\big(u^{(l_k)}_j\big)^*\Big)=\sum_{k=1}^m a_{l_k}P_{M_{l_k}}=\sum_{i=1}^n a_i\,P_{M_i}$.

Lemma 2.5. For any finite set $\{w_1,\dots,w_k\}$ in $\mathbb C^n$, $\mathrm{span}\{w_1,\dots,w_k\}$ is equal to the range of $w_1w_1^*+w_2w_2^*+\dots+w_kw_k^*$. More generally, for every $B_1,\dots,B_k\in M_n^+$, $\bigvee_{j=1}^k\mathrm{ran}\,B_j$ is equal to the range of $B_1+\dots+B_k$.
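Before turning to the proof, Lemma 2.5 is easy to illustrate numerically: the rank of $B_1+B_2$ matches the dimension of the span of the two ranges, and the range of the sum contains each summand's range. A small numpy sketch (the sizes and ranks are my own arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((6, 2))   # ran B1 = column span of X (rank 2)
Y = rng.standard_normal((6, 3))   # ran B2 = column span of Y (rank 3)
B1, B2 = X @ X.T, Y @ Y.T         # positive semidefinite

r_sum = np.linalg.matrix_rank(B1 + B2)
r_join = np.linalg.matrix_rank(np.hstack([X, Y]))  # dim of (ran B1) v (ran B2)

# Orthogonal projection onto ran(B1 + B2); it must fix both ranges.
w, V = np.linalg.eigh(B1 + B2)
U = V[:, w > 1e-9 * w.max()]
proj = U @ U.conj().T
print(r_sum, r_join)
```

Generically the two ranks agree (here both equal $2+3=5<6$), and `proj` reproduces the columns of `X` and `Y` exactly.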
Proof. Let $Q:=w_1w_1^*+w_2w_2^*+\dots+w_kw_k^*$. Since $Qx=\langle w_1,x\rangle w_1+\dots+\langle w_k,x\rangle w_k\in\mathrm{span}\{w_1,\dots,w_k\}$ for all $x\in\mathbb C^n$, we have $\mathrm{ran}\,Q\subset\mathrm{span}\{w_1,\dots,w_k\}$. Since $w_iw_i^*\le Q$, we have $w_i\in\mathrm{ran}\,w_iw_i^*\subset\mathrm{ran}\,Q$ for $1\le i\le k$. Hence $\mathrm{span}\{w_1,\dots,w_k\}\subset\mathrm{ran}\,Q$. The proof of the latter assertion is similar.

Thanks to the lemma we can restate Theorem 2.4 as follows:

Theorem 2.6. Let $\Phi:M_n\to M_{n_2}$ be a positive linear map. Let $A\in M_n^+$ be given with the spectral decomposition $A=\sum_{k=1}^m a_kP_k$, where $a_1>a_2>\dots>a_m>0$. Define
    $M_1:=\mathrm{ran}\,\Phi(P_1)$, $\quad M_k:=\mathrm{ran}\,\Phi(P_1+\dots+P_k)\ominus\mathrm{ran}\,\Phi(P_1+\dots+P_{k-1})$, $\quad 2\le k\le m$.
Then
    $\lim_{p\to\infty}\Phi(A^p)^{1/p}=\sum_{k=1}^m a_k\,P_{M_k}$.

Example 2.7. Consider the linear map $\Phi:M_{2n}\to M_n$ given by
    $\Phi\Big(\begin{bmatrix}X_{11}&X_{12}\\X_{21}&X_{22}\end{bmatrix}\Big):=\frac{X_{11}+X_{22}}{2}$, $\quad X_{ij}\in M_n$.
Clearly, $\Phi$ is completely positive. For any $A,B\in M_n^+$ and $p>0$ we have
    $\Phi\Big(\begin{bmatrix}A&0\\0&B\end{bmatrix}^p\Big)^{1/p}=\Big(\frac{A^p+B^p}{2}\Big)^{1/p}$.
Thus, it is well known [12] that
    $\lim_{p\to\infty}\Phi\Big(\begin{bmatrix}A&0\\0&B\end{bmatrix}^p\Big)^{1/p}=\lim_{p\to\infty}\Big(\frac{A^p+B^p}{2}\Big)^{1/p}=A\vee B$,    (2.7)
where $A\vee B$ is the supremum of $A,B$ in the spectral order. Here let us show (2.7) from Theorem 2.6. The spectral decompositions of $A,B$ are given as
    $A=\sum_{i=1}^{m_1}a_iP_i$, $\quad B=\sum_{j=1}^{m_2}b_jQ_j$,
where $a_1>\dots>a_{m_1}\ge0$, $b_1>\dots>b_{m_2}\ge0$ and $\sum_{i=1}^{m_1}P_i=\sum_{j=1}^{m_2}Q_j=I$. Then
    $A\oplus B=\sum_{k=1}^{l}c_kR_k$,
where $\{c_k\}_{k=1}^l=\{a_i\}_{i=1}^{m_1}\cup\{b_j\}_{j=1}^{m_2}$ with $c_1>\dots>c_l$ and
    $R_k=\begin{cases}P_i\oplus Q_j & \text{if }a_i=b_j=c_k,\\ P_i\oplus 0 & \text{if }a_i=c_k\text{ and }b_j\ne c_k\text{ for all }j,\\ 0\oplus Q_j & \text{if }b_j=c_k\text{ and }a_i\ne c_k\text{ for all }i.\end{cases}$
Note that
    $\Phi(R_1+\dots+R_k)=\frac12\Big(\sum_{i:a_i\ge c_k}P_i+\sum_{j:b_j\ge c_k}Q_j\Big)$,
so that by Lemma 2.5 the support projection $F_k$ (i.e., the orthogonal projection onto the range) of $\Phi(R_1+\dots+R_k)$ is
    $F_k=\Big(\sum_{i:a_i\ge c_k}P_i\Big)\vee\Big(\sum_{j:b_j\ge c_k}Q_j\Big)$.
Theorem 2.6 implies that
    $\lim_{p\to\infty}\Phi\Big(\begin{bmatrix}A&0\\0&B\end{bmatrix}^p\Big)^{1/p}=C:=\sum_{k=1}^{l}c_k(F_k-F_{k-1})$,
where $F_0:=0$. For every $x\in\mathbb R$ we denote by $E_{[x,\infty)}(A)$ the spectral projection of $A$ corresponding to the interval $[x,\infty)$, i.e.,
    $E_{[x,\infty)}(A):=\sum_{i:a_i\ge x}P_i$,
and similarly for $E_{[x,\infty)}(B)$ and $E_{[x,\infty)}(C)$. If $c_k\ge x>c_{k+1}$ for some $1\le k<l$, then we have
    $E_{[x,\infty)}(C)=F_k=E_{[x,\infty)}(A)\vee E_{[x,\infty)}(B)$.
This holds also when $x>c_1$ and when $x\le c_l$. Indeed, when $x>c_1$, $E_{[x,\infty)}(C)=0=E_{[x,\infty)}(A)\vee E_{[x,\infty)}(B)$; when $x\le c_l$, $E_{[x,\infty)}(C)=I=E_{[x,\infty)}(A)\vee E_{[x,\infty)}(B)$. This description of $C$ is the same as that of $A\vee B$ in [12], so we have $C=A\vee B$.

Example 2.8. The example here is relevant to quantum information. For density matrices $\rho,\sigma\in M_n$ (i.e., $\rho,\sigma\in M_n^+$ with $\mathrm{Tr}\,\rho=\mathrm{Tr}\,\sigma=1$) and for a parameter $\alpha\in(0,\infty)\setminus\{1\}$, the traditional Rényi relative entropy is
    $D_\alpha(\rho\|\sigma):=\begin{cases}\dfrac{1}{\alpha-1}\log\big[\mathrm{Tr}\,\rho^\alpha\sigma^{1-\alpha}\big] & \text{if }\rho^0\le\sigma^0\text{ or }0<\alpha<1,\\ +\infty & \text{otherwise},\end{cases}$
where $\rho^0$ denotes the support projection of $\rho$. On the other hand, the new concept recently introduced and called the sandwiched Rényi relative entropy [14, 18] is
    $\widetilde D_\alpha(\rho\|\sigma):=\begin{cases}\dfrac{1}{\alpha-1}\log\Big[\mathrm{Tr}\,\big(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\big)^\alpha\Big] & \text{if }\rho^0\le\sigma^0\text{ or }0<\alpha<1,\\ +\infty & \text{otherwise}.\end{cases}$
By taking limits we also consider
    $D_0(\rho\|\sigma):=\lim_{\alpha\searrow0}D_\alpha(\rho\|\sigma)=-\log\mathrm{Tr}(\rho^0\sigma)$,
    $\widetilde D_0(\rho\|\sigma):=\lim_{\alpha\searrow0}\widetilde D_\alpha(\rho\|\sigma)=-\log\Big[\lim_{\alpha\searrow0}\mathrm{Tr}\,\big(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\big)^\alpha\Big]$.
(We remark that the notations $D_\alpha$ and $\widetilde D_\alpha$ are interchanged from those in [8].) Here, note that
    $\lim_{\alpha\searrow0}\mathrm{Tr}\,\big(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\big)^\alpha=\lim_{p\to\infty}\mathrm{Tr}\,(\sigma^{p/2}\rho\,\sigma^{p/2})^{1/p}=\lim_{p\to\infty}\mathrm{Tr}\,(\rho^0\sigma^p\rho^0)^{1/p}$,
where the existence of $\lim_{p\to\infty}\mathrm{Tr}\,(\rho^0\sigma^p\rho^0)^{1/p}$ follows from the Araki-Lieb-Thirring inequality [4] (see also [1]), and the latter equality above follows since $\lambda\rho^0\le\rho\le\mu\rho^0$ for some $\lambda,\mu>0$ and
    $\lambda^{1/p}\,\mathrm{Tr}\,(\sigma^{p/2}\rho^0\sigma^{p/2})^{1/p}\le\mathrm{Tr}\,(\sigma^{p/2}\rho\,\sigma^{p/2})^{1/p}\le\mu^{1/p}\,\mathrm{Tr}\,(\sigma^{p/2}\rho^0\sigma^{p/2})^{1/p}$.
It was proved in [8] that
    $\widetilde D_0(\rho\|\sigma)\le D_0(\rho\|\sigma)$    (2.8)
and equality holds in (2.8) if $\rho^0=\sigma^0$. Let us here prove the following:

(1) $\widetilde D_0(\rho\|\sigma)=-\log Q_0(\rho\|\sigma)$, where
    $Q_0(\rho\|\sigma):=\max\big\{\mathrm{Tr}(P\sigma):P\text{ an orthogonal projection},\ [P,\sigma]=0,\ (P\rho^0P)^0=P\big\}$.

(2) $\widetilde D_0(\rho\|\sigma)=D_0(\rho\|\sigma)$ holds if and only if $[\rho^0,\sigma]=0$. (Obviously, $[\rho^0,\sigma]=0$ if $\rho^0=\sigma^0$.)

Indeed, to prove (1), first note that $(P\rho^0P)^0=P$ means that the dimension of $\mathrm{ran}\,\rho^0P$ is equal to that of $P$, that is, $\rho^0v_1,\dots,\rho^0v_d$ are linearly independent whenever $\{v_1,\dots,v_d\}$ is an orthonormal basis of $\mathrm{ran}\,P$. Choose $1\le l_1<l_2<\dots<l_m\le n$ as in the first paragraph of this section (before Theorem 2.1) for $A=\sigma$ and $K=\rho^0$. Let $P_0$ be the orthogonal projection onto $\mathrm{span}\{v_{l_1},\dots,v_{l_m}\}$. Then $[P_0,\sigma]=0$, $(P_0\rho^0P_0)^0=P_0$, and Theorem 2.1 gives
    $\lim_{p\to\infty}\mathrm{Tr}\,(\rho^0\sigma^p\rho^0)^{1/p}=\sum_{k=1}^m a_{l_k}=\mathrm{Tr}(P_0\sigma)$.
On the other hand, let $P$ be an orthogonal projection with $[P,\sigma]=0$ and $(P\rho^0P)^0=P$. From $[P,\sigma]=0$ we may assume that $P=\sum_{k=1}^d v_{i_k}v_{i_k}^*$ for some $1\le i_1<\dots<i_d\le n$
(after, if necessary, changing the $v_i$'s for degenerate eigenvalues $a_i$). Since $(P\rho^0P)^0=P$ implies that $\rho^0v_{i_1},\dots,\rho^0v_{i_d}$ are linearly independent, we have $d\le m$ and
    $\mathrm{Tr}(P\sigma)=\sum_{k=1}^{d}a_{i_k}\le\sum_{k=1}^{m}a_{l_k}=\mathrm{Tr}(P_0\sigma)$.

Next, to prove (2), note that $\mathrm{Tr}\,(\rho^0\sigma^p\rho^0)^{1/p}$ is increasing in $p>0$ by the Araki-Lieb-Thirring inequality mentioned above, which shows that
    $\mathrm{Tr}(\rho^0\sigma)\le\lim_{p\to\infty}\mathrm{Tr}\,(\rho^0\sigma^p\rho^0)^{1/p}$.
This means inequality (2.8), and equality holds in (2.8) if and only if $\mathrm{Tr}\,(\rho^0\sigma^p\rho^0)^{1/p}$ is constant for $p\ge1$. By [10, Theorem 2.1] this is equivalent to the commutativity $\rho^0\sigma=\sigma\rho^0$.

Finally, we consider the complementary convergence of $\Phi(A^{-p})^{-1/p}$ as $p\to\infty$. Here, the expression $\Phi(A^{-p})^{-1/p}$ for $p>0$ is defined in such a way that the $(-p)$-power of $A$ is restricted to the support of $A$, i.e., defined in the sense of the generalized inverse, and the $(-1/p)$-power of $\Phi(A^{-p})$ is also in this sense. The next theorem is the complementary counterpart of Theorem 2.6.

Theorem 2.9. Let $\Phi:M_n\to M_{n_2}$ be a positive linear map. Let $A\in M_n^+$ be given with the spectral decomposition $A=\sum_{k=1}^m a_kP_k$, where $a_1>a_2>\dots>a_m>0$. Define
    $M_k:=\mathrm{ran}\,\Phi(P_k+\dots+P_m)\ominus\mathrm{ran}\,\Phi(P_{k+1}+\dots+P_m)$, $\quad 1\le k\le m-1$,
    $M_m:=\mathrm{ran}\,\Phi(P_m)$.
Then
    $\lim_{p\to\infty}\Phi(A^{-p})^{-1/p}=\sum_{k=1}^m a_k\,P_{M_k}$.

Proof. The proof is just a simple adaptation of Theorem 2.6. We can write for any $p>0$
    $\Phi(A^{-p})^{-1/p}=\big\{\Phi((A^{-1})^p)^{1/p}\big\}^{-1}$,
where $A^{-1}$ and $\{\cdot\}^{-1}$ are defined in the sense of the generalized inverse, so that
    $A^{-1}=\sum_{k=1}^m a_k^{-1}P_k=\sum_{k=1}^m a_{m+1-k}^{-1}P_{m+1-k}$
with $a_m^{-1}>\dots>a_1^{-1}>0$. By Theorem 2.6 we have
    $\lim_{p\to\infty}\Phi((A^{-1})^p)^{1/p}=\sum_{k=1}^m a_{m+1-k}^{-1}P_{M_{m+1-k}}=\sum_{k=1}^m a_k^{-1}P_{M_k}$,
where
    $M_m=\mathrm{ran}\,\Phi(P_m)$,
    $M_{m+1-k}=\mathrm{ran}\,\Phi(P_{m+1-k}+\dots+P_m)\ominus\mathrm{ran}\,\Phi(P_{m+2-k}+\dots+P_m)$, $\quad 2\le k\le m$.
According to the proofs of Theorems 2.1 and 2.4, the $i$th eigenvalue $\lambda_i(p)$ of $\Phi((A^{-1})^p)^{1/p}$ either converges to a positive real as $p\to\infty$ or satisfies $\lambda_i(p)=0$ for all $p>0$; that is, $\lambda_i(p)\to0$ as $p\to\infty$ occurs only when $\lambda_i(p)=0$ for all $p>0$. This implies that
    $\lim_{p\to\infty}\Phi(A^{-p})^{-1/p}=\Big\{\lim_{p\to\infty}\Phi((A^{-1})^p)^{1/p}\Big\}^{-1}=\sum_{k=1}^m a_k\,P_{M_k}$.

Remark 2.10. Assume that $\Phi:M_n\to M_{n_2}$ is a unital positive linear map. Let $A\in M_n$ be positive definite and $1\le p<q$. Since $x^{p/q}$ and $x^{1/p}$ are operator monotone on $[0,\infty)$, we have $\Phi(A^q)^{p/q}\ge\Phi(A^p)$ and so $\Phi(A^q)^{1/q}\ge\Phi(A^p)^{1/p}$. Hence $\Phi(A^p)^{1/p}$ increases as $1\le p\nearrow\infty$. Similarly, $\Phi(A^{-q})^{p/q}\ge\Phi(A^{-p})$ and so $\Phi(A^{-q})^{-1/q}\le\Phi(A^{-p})^{-1/p}$ since $x^{-1/p}$ is operator monotone decreasing on $(0,\infty)$. Hence $\Phi(A^{-p})^{-1/p}$ decreases as $1\le p\nearrow\infty$. Moreover, since $x^{-1}$ is operator convex on $(0,\infty)$, we have $\Phi(A^{-1})^{-1}\le\Phi(A)$. (See [5, Theorem 2.1] for more details.) Combining altogether, when $A$ is positive definite, we have
    $\Phi(A^{-p})^{-1/p}\le\Phi(A^q)^{1/q}$, $\quad p,q\ge1$,    (2.9)
and in particular,
    $\lim_{p\to\infty}\Phi(A^{-p})^{-1/p}\le\lim_{p\to\infty}\Phi(A^p)^{1/p}$.
However, the latter inequality does not hold unless $\Phi$ is unital. For example, let
    $P_1:=\begin{bmatrix}1&0\\0&0\end{bmatrix}$, $\quad P_2:=\begin{bmatrix}0&0\\0&1\end{bmatrix}$, $\quad Q_1:=\begin{bmatrix}1/2&1/2\\1/2&1/2\end{bmatrix}$, $\quad Q_2:=\begin{bmatrix}1/2&-1/2\\-1/2&1/2\end{bmatrix}$,
and consider $\Phi:M_2\to M_2$ given by
    $\Phi\Big(\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}\Big):=a_{11}P_1+a_{22}Q_1$,
and $A:=aP_1+bP_2$ where $a>b>0$. Since $P_{\mathrm{ran}\,\Phi(P_1+P_2)}=I$, $P_{\mathrm{ran}\,\Phi(P_1)}=P_1$ and $P_{\mathrm{ran}\,\Phi(P_2)}=Q_1$, Theorems 2.6 and 2.9 give
    $\lim_{p\to\infty}\Phi(A^p)^{1/p}=aP_1+b(I-P_1)=aP_1+bP_2$,
    $\lim_{p\to\infty}\Phi(A^{-p})^{-1/p}=a(I-Q_1)+bQ_1=aQ_2+bQ_1$.
We compute
    $(aP_1+bP_2)-(aQ_2+bQ_1)=\frac{a-b}{2}\begin{bmatrix}1&1\\1&-1\end{bmatrix}$,
which is not positive semidefinite.

Remark 2.11. We may always assume that $\Phi:M_n\to M_{n_2}$ is strictly positive. Indeed, we may consider $\Phi$ as $\Phi:M_n\to Q_0M_{n_2}Q_0$, where $Q_0$ is the support projection of $\Phi(I_n)$. Under this convention, another reasonable definition of $\Phi(A^{-p})^{-1/p}$ for $p\ge1$ is
    $\Phi(A^{-p})^{-1/p}:=\lim_{\varepsilon\searrow0}\Phi\big((A+\varepsilon I_n)^{-p}\big)^{-1/p}$,
which is well defined since $\Phi((A+\varepsilon I)^{-p})$ is increasing, so that $\Phi((A+\varepsilon I)^{-p})^{-1/p}$ is decreasing, as $\varepsilon\searrow0$. But this definition is different from the above definition of $\Phi(A^{-p})^{-1/p}$. For example, let $\Phi:M_2\to M_2$ be given by $\Phi(A):=KAK^*$ with an invertible $K=\begin{bmatrix}a&b\\c&d\end{bmatrix}$, and let $A=P=\begin{bmatrix}1&0\\0&0\end{bmatrix}$. Then $A^{-p}=P$ (in the generalized inverse), so that
    $KA^{-p}K^*=\begin{bmatrix}|a|^2&a\bar c\\\bar ac&|c|^2\end{bmatrix}$
and so
    $(KA^{-p}K^*)^{-1/p}=\big(|a|^2+|c|^2\big)^{-1-\frac1p}\begin{bmatrix}|a|^2&a\bar c\\\bar ac&|c|^2\end{bmatrix}$.    (2.10)
On the other hand,
    $\lim_{\varepsilon\searrow0}\big(K(A+\varepsilon I)^{-p}K^*\big)^{-1/p}=\lim_{\varepsilon\searrow0}\big((K^*)^{-1}(A+\varepsilon I)^pK^{-1}\big)^{1/p}=\big((K^*)^{-1}A^pK^{-1}\big)^{1/p}=\big((K^*)^{-1}PK^{-1}\big)^{1/p}$
is equal to
    $\frac{1}{|ad-bc|^{2/p}}\big(|b|^2+|d|^2\big)^{\frac1p-1}\begin{bmatrix}|d|^2&-b\bar d\\-\bar bd&|b|^2\end{bmatrix}$.    (2.11)
Hence we find that (2.10) and (2.11) are very different, even after taking the limits as $p\to\infty$.

Here is a simpler example. Let $\varphi:M_2\to\mathbb C=M_1$ be a state (hence, unital) with density matrix $\begin{bmatrix}1/2&1/2\\1/2&1/2\end{bmatrix}$, and let $A=\begin{bmatrix}1&0\\0&0\end{bmatrix}$. For the first definition we have
    $\varphi(A^{-p})^{-1/p}=(1/2)^{-1/p}=2^{1/p}\longrightarrow1$.
For the second definition,
    $\lim_{\varepsilon\searrow0}\varphi\big((A+\varepsilon I)^{-p}\big)^{-1/p}=\lim_{\varepsilon\searrow0}\Big\{\frac{(1+\varepsilon)^{-p}+\varepsilon^{-p}}{2}\Big\}^{-1/p}=0$
for all $p>0$. Moreover, since $\varphi(A^p)^{1/p}=(1/2)^{1/p}$ for $p>0$, this example also shows that (2.9) does not hold for general positive semidefinite $A$.

Problem 2.12. It is also interesting to consider the limit of $(A^pBA^p)^{1/p}$ as $p\to\infty$ for $A,B\in M_n^+$, a version different from the limit treated in Theorem 2.1. To consider $(A^pBA^p)^{1/p}$, we may assume without loss of generality that $B$ is an orthogonal projection $E$ (see the argument around (3.2) below). Since $(A^pEA^p)^{1/p}=(A^pE^pA^p)^{1/p}$ converges as $p\to\infty$ by [6, Theorem 2.5], the existence of the limit $\lim_{p\to\infty}(A^pBA^p)^{1/p}$ follows. But it seems that the description of the limit is a combinatorial problem much more complicated than that in Theorem 2.1.

3 $(A^p\,\sigma\,B)^{1/p}$ for operator means $\sigma$

In the theory of operator means due to Kubo and Ando [13], a main result says that each operator mean $\sigma$ is associated with a non-negative operator monotone function $f$ on $[0,\infty)$ with $f(1)=1$ in such a way that
    $A\,\sigma\,B=A^{1/2}f\big(A^{-1/2}BA^{-1/2}\big)A^{1/2}$
for $A,B\in M_n^+$ with $A>0$, which is further extended to general $A,B\in M_n^+$ as
    $A\,\sigma\,B=\lim_{\varepsilon\searrow0}(A+\varepsilon I)\,\sigma\,(B+\varepsilon I)$.
We write $\sigma_f$ for the operator mean associated with $f$ as above. For $0\le\alpha\le1$, the operator mean corresponding to the function $x^\alpha$ ($x\ge0$) is the weighted geometric mean $\#_\alpha$, i.e.,
    $A\,\#_\alpha B=A^{1/2}\big(A^{-1/2}BA^{-1/2}\big)^\alpha A^{1/2}$
for $A,B\in M_n^+$ with $A>0$. In particular, $\#=\#_{1/2}$ is the so-called geometric mean first introduced by Pusz and Woronowicz [15]. The transpose of $f$ above is given by
    $\tilde f(x):=xf(x^{-1})$, $\quad x>0$,
which is again an operator monotone function on $[0,\infty)$ (after extension to $[0,\infty)$ by continuity) and corresponds to the transposed operator mean of $\sigma_f$, i.e., $A\,\sigma_{\tilde f}B=B\,\sigma_fA$. We also write
    $\hat f(x):=\begin{cases}\tilde f(x^{-1})=f(x)/x & \text{if }x>0,\\ 0 & \text{if }x=0.\end{cases}$    (3.1)

In the rest of the section, let $f$ be such an operator monotone function as above and $\sigma_f$ the corresponding operator mean. We are concerned with the existence and
the description of the limit $\lim_{p\to\infty}(A^p\,\sigma_fB)^{1/p}$, in particular $\lim_{p\to\infty}(A^p\,\#_\alpha B)^{1/p}$, for $A,B\in M_n^+$. For this, we may assume without loss of generality that $B$ is an orthogonal projection. Indeed, let $E$ be the support projection of $B$. Then we can choose $\lambda,\mu>0$ with $\lambda<1<\mu$ such that $\lambda E\le B\le\mu E$. Thanks to the monotonicity and positive homogeneity of $\sigma_f$, we have
    $\lambda(A^p\,\sigma_fE)=(\lambda A^p)\,\sigma_f(\lambda E)\le A^p\,\sigma_fB\le(\mu A^p)\,\sigma_f(\mu E)=\mu(A^p\,\sigma_fE)$.
Hence, for every $p\ge1$, since $x^{1/p}$ ($x\ge0$) is operator monotone,
    $\lambda^{1/p}(A^p\,\sigma_fE)^{1/p}\le(A^p\,\sigma_fB)^{1/p}\le\mu^{1/p}(A^p\,\sigma_fE)^{1/p}$,    (3.2)
so that $\lim_{p\to\infty}(A^p\,\sigma_fB)^{1/p}$ exists if and only if $\lim_{p\to\infty}(A^p\,\sigma_fE)^{1/p}$ does, and in this case both limits are equal.

In particular, when $B>0$, since $(A^p\,\sigma_fI)^{1/p}=\tilde f(A^p)^{1/p}$, we note that
    $\lim_{p\to\infty}(A^p\,\sigma_fB)^{1/p}=f^{(\infty)}(A)$ whenever $f^{(\infty)}(x):=\lim_{p\to\infty}\tilde f(x^p)^{1/p}$ exists for all $x\ge0$.
For instance,
- if $f(x)=1-\alpha+\alpha x$ where $0<\alpha<1$, then $\sigma_f=\nabla_\alpha$, the $\alpha$-arithmetic mean $A\,\nabla_\alpha B:=(1-\alpha)A+\alpha B$, and $f^{(\infty)}(x)=\max\{x,1\}$;
- if $f(x)=x^\alpha$ where $0\le\alpha\le1$, then $\sigma_f=\#_\alpha$ and $f^{(\infty)}(x)=\tilde f(x)=x^{1-\alpha}$;
- if $f(x)=x/((1-\alpha)x+\alpha)$ where $0<\alpha<1$, then $\sigma_f=\,!_\alpha$, the $\alpha$-harmonic mean $A\,!_\alpha B:=(A^{-1}\,\nabla_\alpha B^{-1})^{-1}$, and $f^{(\infty)}(x)=\min\{x,1\}$.
But it is unknown to us whether, for any operator monotone function $f$ on $[0,\infty)$, the limit $\lim_{p\to\infty}\tilde f(x^p)^{1/p}$ exists for all $x\ge0$, though it seems so.

When $E$ is an orthogonal projection, the next lemma gives a nice expression for $A\,\sigma_fE$. This was shown in [11, Lemma 4.7], while the proof is given here for the convenience of the reader.

Lemma 3.1. Assume that $f(0)=0$. If $A\in M_n$ is positive definite and $E\in M_n$ is an orthogonal projection, then
    $A\,\sigma_fE=\hat f(EA^{-1}E)$,    (3.3)
where $\hat f$ is given in (3.1).

Proof. For every $m=1,2,\dots$ we have
    $A^{-1/2}(EA^{-1}E)^mA^{-1/2}=(A^{-1/2}EA^{-1/2})^{m+1}$.    (3.4)
Note that the eigenvalues of $EA^{-1}E$ and those of $A^{-1/2}EA^{-1/2}$ are the same, including multiplicities. Choose a $\delta>0$ such that the positive eigenvalues of $EA^{-1}E$ and
$A^{-1/2}EA^{-1/2}$ are included in $[\delta,\delta^{-1}]$. Then, since $\hat f(x)$ is continuous on $[\delta,\delta^{-1}]$, one can choose a sequence of polynomials $p_k(x)$ with $p_k(0)=0$ such that $p_k(x)\to\hat f(x)$ uniformly on $[\delta,\delta^{-1}]$ as $k\to\infty$. By (3.4) we have
    $A^{-1/2}p_k(EA^{-1}E)A^{-1/2}=A^{-1/2}EA^{-1/2}\,p_k\big(A^{-1/2}EA^{-1/2}\big)$
for every $k$. Since $\hat f(0)=0$ by definition, we have
    $p_k(EA^{-1}E)\longrightarrow\hat f(EA^{-1}E)$
and
    $A^{-1/2}EA^{-1/2}\,p_k\big(A^{-1/2}EA^{-1/2}\big)\longrightarrow A^{-1/2}EA^{-1/2}\,\hat f\big(A^{-1/2}EA^{-1/2}\big)$
as $k\to\infty$. Since $f(0)=0$ by assumption, we have $f(x)=x\hat f(x)$ for all $x\in[0,\infty)$. This implies that
    $A^{-1/2}EA^{-1/2}\,\hat f\big(A^{-1/2}EA^{-1/2}\big)=f\big(A^{-1/2}EA^{-1/2}\big)$.
Therefore,
    $A^{-1/2}\hat f(EA^{-1}E)A^{-1/2}=f\big(A^{-1/2}EA^{-1/2}\big)$,
so that
    $\hat f(EA^{-1}E)=A^{1/2}f\big(A^{-1/2}EA^{-1/2}\big)A^{1/2}=A\,\sigma_fE$,
as asserted.

Formula (3.3) can equivalently be written as
    $A\,\sigma_fE=\tilde f\big((EA^{-1}E)^{-1}\big)$,    (3.5)
where $(EA^{-1}E)^{-1}$ is the inverse restricted to $\mathrm{ran}\,E$ (in the sense of the generalized inverse) and $\tilde f\big((EA^{-1}E)^{-1}\big)$ is also restricted to $\mathrm{ran}\,E$. In particular, if $f$ is symmetric (i.e., $f=\tilde f$) with $f(0)=0$, then
    $A\,\sigma_fE=f\big((EA^{-1}E)^{-1}\big)$.

Example 3.2. Assume that $0<\alpha\le1$ and $A,E$ are as in Lemma 3.1.

(1) When $f(x)=x^\alpha$ and $\sigma_f=\#_\alpha$, we have $\hat f(x)=x^{\alpha-1}$ for $x>0$, so that
    $A\,\#_\alpha E=(EA^{-1}E)^{\alpha-1}$,
where the $(\alpha-1)$-power in the right-hand side is defined with restriction to $\mathrm{ran}\,E$.

(2) When $f(x)=x/((1-\alpha)x+\alpha)$ and $\sigma_f=\,!_\alpha$, we have $\hat f(x)=\big((1-\alpha)x+\alpha\big)^{-1}$ for $x>0$, so that
    $A\,!_\alpha E=\big\{(1-\alpha)EA^{-1}E+\alpha E\big\}^{-1}=\big\{E\big((1-\alpha)A^{-1}+\alpha I\big)E\big\}^{-1}$,
where the inverse of $E\big((1-\alpha)A^{-1}+\alpha I\big)E$ in the right-hand side is restricted to $\mathrm{ran}\,E$.
(3) When $f(x)=(x-1)/\log x$, so that $\sigma_f$ is the logarithmic mean, we have $\hat f(x)=(1-x^{-1})/\log x$ for $x>0$, so that
    $A\,\sigma_fE=\big(E-(EA^{-1}E)^{-1}\big)\big(\log EA^{-1}E\big)^{-1}$,
where the right-hand side is defined with restriction to $\mathrm{ran}\,E$.

Theorem 3.3. Assume that $f(0)=0$ and $f(x^r)\ge f(x)^r$ for all $x>0$ and all $r\in(0,1)$. Let $A\in M_n^+$ and let $E\in M_n$ be an orthogonal projection. Then
    $(A^p\,\sigma_fE)^{1/p}\ge(A^q\,\sigma_fE)^{1/q}$ if $1\le p<q$.    (3.6)

Proof. First, note that
    $\tilde f(x^r)=x^rf(x^{-r})\ge x^rf(x^{-1})^r=\tilde f(x)^r$ for all $x>0$, $r\in(0,1)$.
By replacing $A$ with $A+\varepsilon I$ and taking the limit as $\varepsilon\searrow0$, we may assume that $A$ is positive definite. Let $1\le p<q$ and $r:=p/q\in(0,1)$. By (3.5) we have
    $(A^q\,\sigma_fE)^r=\tilde f\big((EA^{-q}E)^{-1}\big)^r\le\tilde f\big(\big((EA^{-q}E)^{-1}\big)^r\big)$.    (3.7)
Since $x^r$ is operator monotone on $[0,\infty)$, we have by Hansen's inequality [9]
    $(EA^{-q}E)^r\ge EA^{-qr}E=EA^{-p}E$,
so that $\big((EA^{-q}E)^{-1}\big)^r\le(EA^{-p}E)^{-1}$ (with inverses restricted to $\mathrm{ran}\,E$). Since $\tilde f(x)$ is operator monotone on $[0,\infty)$, we have
    $\tilde f\big(\big((EA^{-q}E)^{-1}\big)^r\big)\le\tilde f\big((EA^{-p}E)^{-1}\big)=A^p\,\sigma_fE$.    (3.8)
Combining (3.7) and (3.8) gives
    $(A^q\,\sigma_fE)^r\le A^p\,\sigma_fE$.
Since $x^{1/p}$ is operator monotone on $[0,\infty)$, we finally have $(A^q\,\sigma_fE)^{1/q}\le(A^p\,\sigma_fE)^{1/p}$.

Corollary 3.4. Assume that $f(0)=0$ and $f(x^r)\ge f(x)^r$ for all $x>0$, $r\in(0,1)$. Then for every $A,B\in M_n^+$, the limit
    $\lim_{p\to\infty}(A^p\,\sigma_fB)^{1/p}$
exists.

Proof. By the argument around (3.2) we may assume that $B$ is an orthogonal projection $E$. Then Theorem 3.3 implies that $(A^p\,\sigma_fE)^{1/p}$ converges decreasingly as $p\to\infty$.
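The monotone decrease in Theorem 3.3, and hence the convergence in Corollary 3.4, can be probed numerically for the weighted geometric mean, computing $A^p\,\#_\alpha E$ on $\mathrm{ran}\,E$ via the formula of Example 3.2(1). A hedged numpy sketch (the matrix data, tolerances, and helper names are my own choices):

```python
import numpy as np

def powm_pd(H, t):
    """t-th (possibly negative) power of a positive definite Hermitian matrix."""
    w, V = np.linalg.eigh(H)
    return (V * w ** t) @ V.conj().T

def powm_psd(H, t):
    """t-th power (t > 0) of a positive semidefinite Hermitian matrix; tiny
    eigenvalues are treated as exact zeros (and 0**t = 0)."""
    w, V = np.linalg.eigh(H)
    w = np.where(w > 1e-8 * w.max(), w, 0.0)
    return (V * w ** t) @ V.conj().T

alpha = 0.5
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([2.0, 1.7, 1.3, 1.0]) @ Q.T   # positive definite
U = np.eye(4)[:, :2]                          # ran E = first two coordinates

def geo_mean_root(p):
    """X_p = (A^p #_alpha E)^(1/p), via A^p #_alpha E = (E A^-p E)^(alpha-1)
    computed on ran E (Example 3.2(1))."""
    M = U.T @ powm_pd(A, -p) @ U              # E A^-p E compressed to ran E
    S = U @ powm_pd(M, alpha - 1.0) @ U.T     # A^p #_alpha E (rank of E)
    return powm_psd(S, 1.0 / p)

ps = [1, 2, 4, 8, 16, 32]
X = [geo_mean_root(p) for p in ps]
# Theorem 3.3: X_p decreases in the Loewner order as p increases.
mins = [np.linalg.eigvalsh(X[i] - X[i + 1]).min() for i in range(len(ps) - 1)]
print(mins)
```

Each entry of `mins` should be nonnegative up to round-off, confirming the Loewner-monotone decrease; the sequence $X_p$ then visibly stabilizes, consistent with Corollary 3.4.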
Remark 3.5. Following [17], an operator monotone function $f$ on $[0,\infty)$ is said to be power monotone increasing (p.m.i. for short) if $f(x^r)\ge f(x)^r$ for all $x>0$, $r>1$ (equivalently, $f(x^r)\le f(x)^r$ for all $x>0$, $r\in(0,1)$), and power monotone decreasing (p.m.d.) if $f(x^r)\le f(x)^r$ for all $x>0$, $r>1$ (equivalently, $f(x^r)\ge f(x)^r$ for all $x>0$, $r\in(0,1)$); thus the assumption of Theorem 3.3 is the p.m.d. condition. These conditions play a role in characterizing the operator means $\sigma_f$ satisfying Ando-Hiai's inequality [1]; see [17, Lemmas 2.1, 2.2]. For instance, the p.m.d. condition is satisfied by $f$ in (1) and (2) of Example 3.2, while $f$ in Example 3.2(3) satisfies the p.m.i. condition. Hence, for any $\alpha\in[0,1]$, $(A^p\,\#_\alpha E)^{1/p}$ and $(A^p\,!_\alpha E)^{1/p}$ converge decreasingly as $1\le p\nearrow\infty$. In fact, for the harmonic mean, we have the limit
    $A\wedge B:=\lim_{p\to\infty}(A^p\,!\,B^p)^{1/p}$,
the decreasing limit as $1\le p\nearrow\infty$ for any $A,B\ge0$, which is the infimum counterpart of $A\vee B$ in [12] (see also Example 2.7).

The reader might wonder whether the opposite inequality to (3.6) holds (i.e., whether $(A^p\,\sigma_fE)^{1/p}$ is increasing as $1\le p\nearrow\infty$) when $f$ satisfies the p.m.i. condition. Although this is the case when $\sigma=\nabla_\alpha$, the weighted arithmetic mean, it is not the case in general. In fact, if it were true, then $(A^p\,\#_\alpha E)^{1/p}$ would have to be constant for $p\ge1$, since $x^\alpha$ satisfies both the p.m.i. and p.m.d. conditions; that is impossible.

Finally, for the weighted geometric mean $\#_\alpha$ we obtain an explicit description of $\lim_{p\to\infty}(A^p\,\#_\alpha E)^{1/p}$ for any $A\in M_n^+$. For the trivial cases $\alpha=0,1$ note that $(A^p\,\#_0E)^{1/p}=A$ and $(A^p\,\#_1E)^{1/p}=E$ for all $p>0$.

Theorem 3.6. Assume that $0<\alpha<1$. Let $A\in M_n^+$ be given with the spectral decomposition $A=\sum_{k=1}^m a_kP_k$, where $a_1>\dots>a_m>0$, and let $E\in M_n$ be an orthogonal projection. Then
    $\lim_{p\to\infty}(A^p\,\#_\alpha E)^{1/p}=\sum_{k=1}^m a_k^{1-\alpha}Q_k$,    (3.9)
where
    $Q_1:=P_1\wedge E$, $\quad Q_k:=(P_1+\dots+P_k)\wedge E-(P_1+\dots+P_{k-1})\wedge E$, $\quad 2\le k\le m$.

Proof. First, assume that $A$ is positive definite, so that $P_1+\dots+P_m=I$. When $f(x)=x^\alpha$ with $0<\alpha<1$, formula (3.5) is given as
    $A\,\#_\alpha E=(EA^{-1}E)^{-(1-\alpha)}$.
Since
    $(A^p\,\#_\alpha E)^{1/p}=(EA^{-p}E)^{-(1-\alpha)/p}=\big(E(A^{1-\alpha})^{-p'}E\big)^{-1/p'}$, where $p':=p/(1-\alpha)$,
it follows from Theorem 2.9, applied to the compression $X\mapsto EXE$ of $M_n$ to $\mathrm{ran}\,E$ and to $A^{1-\alpha}=\sum_{k=1}^m a_k^{1-\alpha}P_k$, that
    $\lim_{p\to\infty}(A^p\,\#_\alpha E)^{1/p}=\sum_{k=1}^m a_k^{1-\alpha}P_{M_k}$,
where
    $M_k:=\mathrm{ran}\,E(P_k+\dots+P_m)E\ominus\mathrm{ran}\,E(P_{k+1}+\dots+P_m)E$, $\quad 1\le k\le m-1$,
    $M_m:=\mathrm{ran}\,EP_mE$.
From Lemma 3.7 below we have
    $M_1=\mathrm{ran}\,E\ominus\mathrm{ran}\,EP_1^\perp E=\mathrm{ran}\,(P_1\wedge E)$
and, for $2\le k\le m$,
    $M_k=\mathrm{ran}\,E(P_1+\dots+P_{k-1})^\perp E\ominus\mathrm{ran}\,E(P_1+\dots+P_k)^\perp E$
    $=\big[\mathrm{ran}\,E\ominus\mathrm{ran}\,E(P_1+\dots+P_k)^\perp E\big]\ominus\big[\mathrm{ran}\,E\ominus\mathrm{ran}\,E(P_1+\dots+P_{k-1})^\perp E\big]$
    $=\mathrm{ran}\,\big((P_1+\dots+P_k)\wedge E\big)\ominus\mathrm{ran}\,\big((P_1+\dots+P_{k-1})\wedge E\big)$
    $=\mathrm{ran}\,\big[(P_1+\dots+P_k)\wedge E-(P_1+\dots+P_{k-1})\wedge E\big]$.
Therefore, (3.9) is established when $A$ is positive definite.

Next, when $A$ is not positive definite, let $P_{m+1}:=(P_1+\dots+P_m)^\perp$. For any $\varepsilon\in(0,a_m)$ define $A_\varepsilon:=A+\varepsilon P_{m+1}$. Then the above case implies that
    $\lim_{p\to\infty}(A_\varepsilon^p\,\#_\alpha E)^{1/p}=\sum_{k=1}^m a_k^{1-\alpha}Q_k+\varepsilon^{1-\alpha}Q_{m+1}$,
where
    $Q_{m+1}:=E-(P_1+\dots+P_m)\wedge E$.
Assume that $0<\varepsilon<\varepsilon'<a_m$. For every $p\ge1$, since $A_\varepsilon^p\le A_{\varepsilon'}^p$, we have $A_\varepsilon^p\,\#_\alpha E\le A_{\varepsilon'}^p\,\#_\alpha E$ and hence $(A_\varepsilon^p\,\#_\alpha E)^{1/p}\le(A_{\varepsilon'}^p\,\#_\alpha E)^{1/p}$. Furthermore, since $A_\varepsilon^p\,\#_\alpha E\searrow A^p\,\#_\alpha E$ as $a_m>\varepsilon\searrow0$, we have
    $(A^p\,\#_\alpha E)^{1/p}=\lim_{a_m>\varepsilon\searrow0}(A_\varepsilon^p\,\#_\alpha E)^{1/p}$ decreasingly.    (3.10)
Now, we can perform the calculation of limits as follows:
    $\lim_{p\to\infty}(A^p\,\#_\alpha E)^{1/p}=\lim_{p\to\infty}\lim_{a_m>\varepsilon\searrow0}(A_\varepsilon^p\,\#_\alpha E)^{1/p}=\lim_{a_m>\varepsilon\searrow0}\lim_{p\to\infty}(A_\varepsilon^p\,\#_\alpha E)^{1/p}=\lim_{a_m>\varepsilon\searrow0}\Big(\sum_{k=1}^m a_k^{1-\alpha}Q_k+\varepsilon^{1-\alpha}Q_{m+1}\Big)=\sum_{k=1}^m a_k^{1-\alpha}Q_k$.
In the above, the second equality (the exchange of the two limits) is confirmed as follows. Let $X_{p,\varepsilon}:=(A_\varepsilon^p\,\#_\alpha E)^{1/p}$ for $p\ge1$ and $0<\varepsilon<a_m$. Then $X_{p,\varepsilon}$ is decreasing as
21 1 p ր by Theorem 3.3 and also decreasing as a m > ε ց 0 as seen in (3.10). Let X p := ε X p,ε (= (A p # α E) 1/p ), X ε := p X p,ε, and X := p X p. Since X p,ε X p, we have X ε X and hence ε X ε X. On the other hand, since X p,ε X ε, we have X p ε X ε and hence X ε X ε. Therefore, X = ε X ε, which gives the assertion. Inparticular, whena = P isanorthogonalprojection, we have(a p # α E) 1/p = P E for all p > 0 (see [13, Theorem 3.7]) so that both sides of (3.9) are certainly equal to P E. Lemma 3.7. For every orthogonal projections E and P, ranep E = ran(e P E), or equivalently, ranp E = rane ranep E. Proof. According to the well-known representation of two projections (see [16, pp ]), we write [ ] I 0 E = I I 0 0, 0 0 [ ] C CS P = I 0 I CS S 0, where 0 < C,S < I with C +S = I. We have P E = I Since we also have whose range is that of P = 0 I 0 EP E = 0 I 0 0 I 0 [ ] S CS CS C I, [ ] S 0 0, 0 0 [ ] I 0 0 = E P E, 0 0 which yields the conclusion. Acknowledgments This work was supported by JSPS KAKENHI Grant Number JP17K
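The limit formula (3.9) of Theorem 3.6 lends itself to a quick numerical sanity check. The sketch below (a numpy illustration of my own; the helper names `psd_power` and `weighted_geometric_mean`, and the eigenvalue clipping, are numerical devices, not part of the paper) evaluates $(A^p \#_\alpha E)^{1/p}$ for a $3\times 3$ example in which the meets $Q_k$ can be written down by hand, and compares it with the right-hand side of (3.9). The clipping treats eigenvalues below a relative threshold as exact zeros, since rounding noise in the kernel would otherwise be blown up by the fractional powers.

```python
import numpy as np

def psd_power(X, t, cut=1e-13):
    # Spectral power X^t of a positive semidefinite matrix.
    # Eigenvalues below cut * lambda_max are treated as exact zeros, so that
    # rounding noise in the kernel is not amplified by a fractional power.
    w, U = np.linalg.eigh((X + X.T) / 2)
    w = np.where(w > cut * w.max(), w, 0.0)
    pw = np.zeros_like(w)
    pw[w > 0] = w[w > 0] ** t
    return (U * pw) @ U.T

def weighted_geometric_mean(A, B, alpha):
    # A #_alpha B = A^{1/2} (A^{-1/2} B A^{-1/2})^alpha A^{1/2}, for A > 0.
    R, Ri = psd_power(A, 0.5), psd_power(A, -0.5)
    return R @ psd_power(Ri @ B @ Ri, alpha) @ R

alpha = 0.5
# A has spectral projections P_1 = diag(1,1,0) with a_1 = 4
# and P_2 = diag(0,0,1) with a_2 = 1.
A = np.diag([4.0, 4.0, 1.0])
v = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
E = np.diag([1.0, 0.0, 0.0]) + np.outer(v, v)   # projection onto span{e_1, v}

# Predicted limit (3.9): Q_1 = P_1 ∧ E = e_1 e_1^T and
# Q_2 = (P_1 + P_2) ∧ E - Q_1 = E - Q_1.
Q1 = np.diag([1.0, 0.0, 0.0])
L = 4.0 ** (1 - alpha) * Q1 + 1.0 ** (1 - alpha) * (E - Q1)

errs = []
for p in [5, 10, 20]:
    Xp = psd_power(weighted_geometric_mean(psd_power(A, p), E, alpha), 1.0 / p)
    errs.append(np.linalg.norm(Xp - L))
    print(f"p = {p:2d}   ||(A^p #_a E)^(1/p) - limit||_F = {errs[-1]:.4f}")
```

In this example the error decays roughly like $1/p$ (it is dominated by the scalar term $2^{1/(2p)} - 1$ in the $Q_2$ direction), so only moderate values of $p$ are needed; very large $p$ would push the relevant eigenvalues of $E A^{-p} E$ below machine precision in double-precision arithmetic.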
References

[1] T. Ando and F. Hiai, Log majorization and complementary Golden-Thompson type inequalities, Linear Algebra Appl. 197/198 (1994).

[2] J. Antezana, G. Corach and D. Stojanoff, Spectral shorted matrices, Linear Algebra Appl. 381 (2004).

[3] J. Antezana, G. Corach and D. Stojanoff, Spectral shorted operators, Integral Equations Operator Theory 55 (2006).

[4] H. Araki, On an inequality of Lieb and Thirring, Lett. Math. Phys. 19 (1990).

[5] K. M. R. Audenaert and F. Hiai, On matrix inequalities between the power means: counterexamples, Linear Algebra Appl. 439 (2013).

[6] K. M. R. Audenaert and F. Hiai, Reciprocal Lie-Trotter formula, Linear and Multilinear Algebra 64 (2016).

[7] J.-C. Bourin, Convexity or concavity inequalities for Hermitian operators, Math. Ineq. Appl. 7 (2004).

[8] N. Datta and F. Leditzky, A limit of the quantum Rényi divergence, J. Phys. A: Math. Theor. 47 (2014).

[9] F. Hansen, An operator inequality, Math. Ann. 246 (1980).

[10] F. Hiai, Equality cases in matrix norm inequalities of Golden-Thompson type, Linear and Multilinear Algebra 36 (1994).

[11] F. Hiai, A generalization of Araki's log-majorization, Linear Algebra Appl. 501 (2016).

[12] T. Kato, Spectral order and a matrix limit theorem, Linear and Multilinear Algebra 8 (1979).

[13] F. Kubo and T. Ando, Means of positive linear operators, Math. Ann. 246 (1980).

[14] M. Müller-Lennert, F. Dupuis, O. Szehr, S. Fehr and M. Tomamichel, On quantum Rényi entropies: A new generalization and some properties, J. Math. Phys. 54 (2013).

[15] W. Pusz and S. L. Woronowicz, Functional calculus for sesquilinear forms and the purification map, Rep. Math. Phys. 8 (1975).

[16] M. Takesaki, Theory of Operator Algebras I, Encyclopaedia of Mathematical Sciences, Vol. 124, Springer-Verlag, Berlin, 2002.

[17] S. Wada, Some ways of constructing Furuta-type inequalities, Linear Algebra Appl. 457 (2014).

[18] M. M. Wilde, A. Winter and D. Yang, Strong converse for the classical capacity of entanglement-breaking and Hadamard channels via a sandwiched Rényi relative entropy, Comm. Math. Phys. 331 (2014).