MAJORIZATION AND THE SCHUR-HORN THEOREM


MAJORIZATION AND THE SCHUR-HORN THEOREM

A Thesis Submitted to the Faculty of Graduate Studies and Research In Partial Fulfillment of the Requirements for the Degree of Master of Science in Mathematics, University of Regina

By Maram Albayyadhi
Regina, Saskatchewan
January 2013

© Copyright 2013: Maram Albayyadhi

UNIVERSITY OF REGINA
FACULTY OF GRADUATE STUDIES AND RESEARCH
SUPERVISORY AND EXAMINING COMMITTEE

Maram Albayyadhi, candidate for the degree of Master of Science in Mathematics, has presented a thesis titled, Majorization and the Schur-Horn Theorem, in an oral examination held on December 18. The following committee members have found the thesis acceptable in form and content, and that the candidate demonstrated satisfactory knowledge of the subject material.

External Examiner: Dr. Daryl Hepting, Department of Computer Science
Supervisor: Dr. Martin Argerami, Department of Mathematics and Statistics
Committee Member: Dr. Douglas Farenick, Department of Mathematics and Statistics
Chair of Defense: Dr. Sandra Zilles, Department of Computer Science

Abstract

We study majorization in $\mathbb{R}^n$ and some of its properties. The concept of majorization plays an important role in matrix analysis by producing several useful relationships. We find out that there is a strong relationship between majorization and doubly stochastic matrices; this relation has been perfectly described in Birkhoff's Theorem. On the other hand, majorization characterizes the connection between the eigenvalues and the diagonal elements of self-adjoint matrices. This relation is summarized in the Schur-Horn Theorem. Using this theorem, we prove versions of Kadison's Carpenter's Theorem. We discuss A. Neumann's extension of the concept of majorization to infinite dimension, which provides a Schur-Horn Theorem in this context. Finally, we detail the work of W. Arveson and R.V. Kadison in proving a strict Schur-Horn Theorem for positive trace-class operators.

Acknowledgments

Throughout my studies, I could not have done my project without the support of my professors, whom I insist on thanking even though my words cannot adequately express my gratitude. Dr. Martin Argerami, I would like to thank you from the bottom of my heart for all the support and the guidance that you provided for me. Dr. Shaun Fallat and Dr. Remus Floricel, if it were not for your classes, I would not have learned as much about my field. I'm also honored to thank all my amazing colleagues in the math department, especially my friend Angshuman Bhattacharya. To my father Ibrahim Albayyadhi and my mother Suad Bakkari, words cannot express my love for you. Your prayers, belief in me and encouragement are the main reasons for my success. If I kept on thanking you all of my life, I could not pay you back. To my husband, Dr. Hadi Mufti, you always make it easier for me whenever I face obstacles; you have always been the wind beneath my wings. Finally, I would like to thank the one who kept wiping my tears on the hard days and saying, "Mom... don't give up": my son Yazan.

Contents

Abstract
Acknowledgments
Table of Contents

1 Preliminaries
1.1 Majorization
1.2 Doubly Stochastic Matrices
1.3 Doubly Stochastic Matrices and Majorization

2 The Schur-Horn Theorem in the Finite Dimensional Case
2.1 The Pythagorean Theorem in Finite Dimension
2.2 The Schur-Horn Theorem in the Finite Dimensional Case
2.3 A Pythagorean Theorem for Finite Doubly Stochastic Matrices

3 The Carpenter Theorem in the Infinite Dimensional Case
3.1 The Subspaces K and K⊥ both have Infinite Dimension
3.2 One of the Subspaces K, K⊥ has Finite Dimension
3.3 A Pythagorean Theorem for Infinite Doubly Stochastic Matrices

4 A Schur-Horn Theorem in the Infinite Dimensional Case
4.1 Majorization in Infinite Dimension
4.2 Neumann's Schur-Horn Theorem
4.3 A Strict Schur-Horn Theorem for Positive Trace-Class Operators

5 Conclusion and Future Work

Chapter 1

Preliminaries

In this chapter we provide some basic information about majorization and some of its properties, which we will use later. The material in this chapter is basic and can be found in many matrix analysis books [4, 7].

1.1 Majorization

Let $x = (x_1, \dots, x_n) \in \mathbb{R}^n$. Let $x^\uparrow = (x_1^\uparrow, \dots, x_n^\uparrow)$ and $x^\downarrow = (x_1^\downarrow, \dots, x_n^\downarrow)$ denote the vector $x$ with its coordinates rearranged in increasing and decreasing order respectively. Then $x_i^\uparrow = x_{n-i+1}^\downarrow$, $1 \le i \le n$.

Definition 1.1. Given $x, y$ in $\mathbb{R}^n$, we say $x$ is majorized by $y$, denoted by $x \prec y$, if

$\sum_{i=1}^k x_i^\downarrow \le \sum_{i=1}^k y_i^\downarrow, \quad 1 \le k \le n, \qquad (1.1)$

and

$\sum_{i=1}^n x_i = \sum_{i=1}^n y_i. \qquad (1.2)$

Example 1.2. If $x_i \in [0,1]$ and $\sum_{i=1}^n x_i = 1$, then we have $(\frac{1}{n}, \dots, \frac{1}{n}) \prec (x_1, \dots, x_n) \prec (1, 0, \dots, 0)$.

Proposition 1.3. If $x \prec y$ and $y \prec x$, then there exists a permutation matrix $P$ such that $y = Px$.

Proof. Assume first that $x, y$ are arranged in decreasing order, i.e. $x_1 \ge x_2 \ge \dots \ge x_n$, $y_1 \ge y_2 \ge \dots \ge y_n$. We proceed by induction on $k$. For $k = 1$ we get $x_1 \le y_1$ and $y_1 \le x_1$ from the majorization inequality (1.1), which implies $x_1 = y_1$. By the induction hypothesis we assume that $x_i = y_i$ for $i = 1, \dots, k$, and we will show that the equality holds up to $k+1$. From (1.1), $x_1 + \dots + x_k + x_{k+1} \le y_1 + \dots + y_k + y_{k+1}$, and by our assumption $x_1 + \dots + x_k = y_1 + \dots + y_k$, so if we cancel them we get $x_{k+1} \le y_{k+1}$; as the roles of $x$ and $y$ can be reversed, $y_{k+1} \le x_{k+1}$, and this implies $x_{k+1} = y_{k+1}$.

In general, if we write $x^\downarrow$ and $y^\downarrow$ for the non-increasing rearrangements, there exist permutations $P_1$ and $P_2$ such that $x^\downarrow = P_1 x$, $y^\downarrow = P_2 y$. By the first part of the proof, $x^\downarrow = y^\downarrow$, i.e. $x = P_1^{-1} P_2 y$.

Although majorization is defined through non-increasing rearrangements, it can also be done by non-decreasing ones:

Proposition 1.4. If $x, y \in \mathbb{R}^n$, then $x \prec y$ if and only if

$\sum_{i=1}^k x_i^\uparrow \ge \sum_{i=1}^k y_i^\uparrow, \quad 1 \le k \le n, \qquad (1.3)$

and

$\sum_{i=1}^n x_i = \sum_{i=1}^n y_i. \qquad (1.4)$

Proof. For equation (1.4): we have $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$ since $x \prec y$, but $\sum_{i=1}^n x_i^\uparrow = \sum_{i=1}^n x_i$, so $\sum_{i=1}^n x_i^\uparrow = \sum_{i=1}^n x_i = \sum_{i=1}^n y_i = \sum_{i=1}^n y_i^\uparrow$.

For equation (1.3): we know that $x_i^\uparrow = x_{n-i+1}^\downarrow$, $1 \le i \le n$. Writing $\operatorname{tr}(x) = \sum_{l=1}^n x_l$, we get

$\sum_{i=1}^k x_i^\uparrow = \sum_{l=n-k+1}^n x_l^\downarrow = \operatorname{tr}(x) - \sum_{l=1}^{n-k} x_l^\downarrow \ge \operatorname{tr}(y) - \sum_{l=1}^{n-k} y_l^\downarrow = \sum_{l=n-k+1}^n y_l^\downarrow = \sum_{i=1}^k y_i^\uparrow.$
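Conditions (1.1)-(1.2) are easy to test numerically. The following Python sketch is our own illustration (not part of the thesis; the helper name majorizes is hypothetical): it checks Definition 1.1 for vectors of random data and reproduces Example 1.2.

import numpy as np

def majorizes(y, x, tol=1e-12):
    # True if x is majorized by y (x ≺ y) in the sense of Definition 1.1
    xd, yd = np.sort(x)[::-1], np.sort(y)[::-1]                # non-increasing rearrangements
    partial = np.all(np.cumsum(xd) <= np.cumsum(yd) + tol)     # condition (1.1)
    total = abs(np.sum(x) - np.sum(y)) <= tol                  # condition (1.2)
    return bool(partial and total)

n = 5
x = np.random.dirichlet(np.ones(n))            # x_i in [0, 1] with sum 1
uniform = np.full(n, 1.0 / n)
e1 = np.zeros(n); e1[0] = 1.0
print(majorizes(x, uniform))                   # (1/n, ..., 1/n) ≺ x : True
print(majorizes(e1, x))                        # x ≺ (1, 0, ..., 0)  : True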

1.2 Doubly Stochastic Matrices

There is a deep relation between majorization and doubly stochastic matrices; we will discuss that relation in this section.

Definition 1.5. Let $B = (b_{ij})$ be a square matrix. We say $B$ is doubly stochastic if: $b_{ij} \ge 0$ for all $i, j$; $\sum_{i=1}^n b_{ij} = 1$ for all $j$; and $\sum_{j=1}^n b_{ij} = 1$ for all $i$.

Proposition 1.6. The set of square doubly stochastic matrices is a convex set and it is closed under multiplication and the adjoint operation. But it is not a group.

Proof. If $t_1, \dots, t_r \in [0,1]$ with $\sum_{j=1}^r t_j = 1$ and $P_1, \dots, P_r$ are permutations, let $A = \sum_{j=1}^r t_j P_j$. Clearly every entry of $A$ is non-negative. Also,

$\sum_{k=1}^n A_{kl} = \sum_{k=1}^n \sum_{j=1}^r t_j (P_j)_{kl} = \sum_{j=1}^r t_j \sum_{k=1}^n (P_j)_{kl} = \sum_{j=1}^r t_j = 1$ for all $l$.

A similar computation shows that $\sum_{l=1}^n A_{kl} = 1$ for all $k$.

If $A, B$ are doubly stochastic matrices then $AB$ is also doubly stochastic. Indeed, the sum over the rows is

$\sum_{j=1}^n (AB)_{ij} = \sum_{j=1}^n \sum_{k=1}^n A_{ik} B_{kj} = \sum_{k=1}^n A_{ik} \sum_{j=1}^n B_{kj} = \sum_{k=1}^n A_{ik} = 1.$

And the same thing can be done for the columns, which shows that $AB$ is also doubly stochastic. Also it is clear that if $A$ is doubly stochastic, then so is its adjoint $A^*$. The class of doubly stochastic matrices is not a group since not every doubly stochastic matrix is invertible; for example, the $2 \times 2$ matrix with all its entries equal to $\frac{1}{2}$ is doubly stochastic and its determinant is zero.

Proposition 1.7. Every permutation matrix is doubly stochastic and is an extreme point of the convex set of all doubly stochastic matrices.

Proof. A permutation matrix has exactly one entry $+1$ in each row and in each column and all other entries are zero. So it is doubly stochastic. Now let $A = \alpha_1 B + \alpha_2 C$, with $A$ a permutation matrix, $\alpha_1, \alpha_2 \in (0,1)$, $\alpha_1 + \alpha_2 = 1$, and $B, C$ doubly stochastic matrices. Then every entry of $B, C$ that corresponds to a zero element $a_{ij} = 0$ of $A$ must be zero; indeed, $0 = a_{ij} = \alpha_1 b_{ij} + \alpha_2 c_{ij}$, and as $\alpha_1, \alpha_2$ are both non-zero and $B, C$ are both nonnegative, we have $b_{ij} = c_{ij} = 0$. Hence the nonzero entries must all be $+1$, and $A = B = C$. This shows that every permutation matrix is an extreme point of the set of doubly stochastic matrices.
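As a quick numerical illustration of Definition 1.5 and Proposition 1.6 (a sketch of our own, with random data, not from the thesis): convex combinations and products of doubly stochastic matrices are again doubly stochastic, and the 2 × 2 matrix with all entries 1/2 is doubly stochastic but not invertible.

import numpy as np

def is_doubly_stochastic(A, tol=1e-12):
    # the three conditions of Definition 1.5
    return bool(np.all(A >= -tol)
                and np.allclose(A.sum(axis=0), 1.0)
                and np.allclose(A.sum(axis=1), 1.0))

rng = np.random.default_rng(0)
n = 4
perms = [np.eye(n)[rng.permutation(n)] for _ in range(3)]   # permutation matrices
A = sum(t * P for t, P in zip(rng.dirichlet(np.ones(3)), perms))
B = sum(t * P for t, P in zip(rng.dirichlet(np.ones(3)), perms))

print(is_doubly_stochastic(A), is_doubly_stochastic(A @ B), is_doubly_stochastic(A.T))
half = np.full((2, 2), 0.5)
print(is_doubly_stochastic(half), np.linalg.det(half))      # True 0.0: not invertible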

1.3 Doubly Stochastic Matrices and Majorization

Theorem 1.8. A matrix $A \in M_n(\mathbb{R})$ is doubly stochastic if and only if $Ax \prec x$ for all $x \in \mathbb{R}^n$.

Proof. For the implication ($\Leftarrow$), assume $Ax \prec x$ for all vectors $x$. Then $\sum_i (Ax)_i = \sum_i x_i$, from the definition of majorization. If we choose $x$ to be $e_j$, where $e_j$ is the vector $e_j = (0, \dots, 0, 1, 0, \dots, 0)$, $1 \le j \le n$, then $Ae_j = (a_{1j}, \dots, a_{nj})$ is the $j$-th column of $A$, and $Ae_j \prec e_j$. Then $\min\{a_{1j}, \dots, a_{nj}\} \ge \min\{1, 0, \dots, 0\} = 0$ (if $u \prec v$ then, comparing the sums of the $n-1$ largest entries, $\min_i u_i \ge \min_i v_i$), so $a_{kj} \ge 0$ for all $k$. Also this implies, as $j$ was arbitrary, that the sum over each column is 1. To show the sum over the rows is also 1 we use the vector $e = \sum_{j=1}^n e_j$; we get $Ae = (\sum_{j=1}^n a_{1j}, \dots, \sum_{j=1}^n a_{nj}) \prec e = (1, \dots, 1)$. Then $\sum_{j=1}^n a_{ij} = 1$ for all $i$, since $\max_i \sum_{j=1}^n a_{ij} \le 1$ and $\min_i \sum_{j=1}^n a_{ij} \ge 1$.

For the other direction ($\Rightarrow$), let $A$ be doubly stochastic, and let $y = Ax$. To prove $y \prec x$, we first show that we can assume $x$ and $y$ have their entries in non-increasing order; this is because $x = P\tilde{x}$ and $y = Q\tilde{y}$ for some permutation matrices $P$ and $Q$, where $\tilde{x}, \tilde{y}$ are the non-increasing rearrangements, so $Q\tilde{y} = AP\tilde{x}$, hence $\tilde{y} = Q^{-1}AP\tilde{x} = B\tilde{x}$, where $B = Q^{-1}AP$ is doubly stochastic, since the permutation matrices are doubly stochastic and the product of doubly stochastic matrices is doubly stochastic by Proposition 1.6. So we may assume $y = Bx$ with $x, y$ non-increasing. For any $k \in \{1, \dots, n\}$ we have

$\sum_{j=1}^k y_j = \sum_{j=1}^k \sum_{i=1}^n b_{ji} x_i. \qquad (1.5)$

Let $s_i = \sum_{j=1}^k b_{ji}$; then $0 \le s_i \le 1$, $\sum_{i=1}^n s_i = k$, and $\sum_{j=1}^k y_j = \sum_{i=1}^n s_i x_i$. Then

$\sum_{j=1}^k y_j - \sum_{j=1}^k x_j = \sum_{i=1}^n s_i x_i - \sum_{i=1}^k x_i. \qquad (1.6)$

By adding and subtracting multiples of $x_k$ (using that $\sum_{i=1}^n s_i = k$), equation (1.6) becomes

$\sum_{j=1}^k y_j - \sum_{j=1}^k x_j = \sum_{i=1}^k (s_i - 1)(x_i - x_k) + \sum_{i=k+1}^n s_i (x_i - x_k) \le 0,$

since for $i \le k$ we have $s_i \le 1$ and $x_i \ge x_k$, while for $i > k$ we have $s_i \ge 0$ and $x_i \le x_k$. So $\sum_{j=1}^k y_j \le \sum_{j=1}^k x_j$ for all $k$. When $k = n$,

$\sum_{j=1}^n y_j = \sum_{j=1}^n \sum_{i=1}^n b_{ji} x_i = \Big(\sum_j b_{j1}\Big)x_1 + \dots + \Big(\sum_j b_{jn}\Big)x_n = \sum_{j=1}^n x_j.$

Definition 1.9. Let $A : \mathbb{R}^n \to \mathbb{R}^n$ be a linear map. We say that $A$ is a T-transform if there exist $a \in [0,1]$ and indices $j < k$ such that

$Ay = (y_1, \dots, y_{j-1},\ ay_j + (1-a)y_k,\ y_{j+1}, \dots,\ (1-a)y_j + ay_k,\ y_{k+1}, \dots, y_n),$

i.e. $A = aI + (1-a)P$, where $P$ is the transposition $(j\,k)$.

Theorem 1.10. Given $x, y \in \mathbb{R}^n$, the following statements are equivalent:

1. $x \prec y$.
2. $x = Ty$, where $T$ is a product of T-transforms.
3. $x \in \operatorname{conv} S_n y = \operatorname{conv}\{Py : P \text{ is a permutation}\}$.
4. $x = Ay$, where $A$ is a doubly stochastic matrix.

Proof. (1) $\Rightarrow$ (2): We want to show that if $x \prec y$, then $x = (T_r \cdots T_1)y$ for some T-transforms $T_1, \dots, T_r$. We will show this is true for any $n$ by induction. Let $x, y \in \mathbb{R}^n$. We can assume that $x, y$ have their coordinates in decreasing order by permuting them, and each of these permutations is a product of T-transforms (because transpositions are T-transforms, and permutations are products of transpositions). So when $x \prec y$, we have $y_n \le x_1 \le y_1$. If we take $k \le n$ such that $y_k \le x_1 \le y_{k-1}$, then $x_1 = ty_1 + (1-t)y_k$ for some $t \in [0,1]$, and we define

$T_1 y = (ty_1 + (1-t)y_k,\ y_2, \dots, y_{k-1},\ (1-t)y_1 + ty_k,\ y_{k+1}, \dots, y_n).$

Let $x' = (x_2, \dots, x_n)$, $y' = (y_2, \dots, y_{k-1},\ (1-t)y_1 + ty_k,\ y_{k+1}, \dots, y_n) \in \mathbb{R}^{n-1}$. Note that the first coordinate of $T_1 y$ is $x_1$. If we take off $x_1$ from $x$ and from $T_1 y$, we are left with $x'$ and $y'$. We will show that $x' \prec y'$. For $m$ such that $2 \le m \le k-1$,

$\sum_{j=2}^m y_j \ge \sum_{j=2}^m x_1 \ge \sum_{j=2}^m x_j.$

And for $k \le m \le n$ we have

$\sum_{j=1}^{m-1} y'_j = \sum_{j=2}^{k-1} y_j + [(1-t)y_1 + ty_k] + \sum_{j=k+1}^m y_j = \sum_{j=1}^m y_j - ty_1 - (1-t)y_k = \sum_{j=1}^m y_j - x_1 \ge \sum_{j=1}^m x_j - x_1 = \sum_{j=1}^{m-1} x'_j,$

where the second equality is obtained by adding and subtracting $y_k$. The last inequality is an equality when $m = n$, since $x \prec y$; and so $x' \prec y'$. By the induction hypothesis there exist a finite number of T-transforms $T_2, \dots, T_r$ on $\mathbb{R}^{n-1}$ such that $x' = (T_r \cdots T_2)y'$. We can regard each of them as a T-transform on $\mathbb{R}^n$ that does not touch the first coordinate of any vector. Then we will have $(T_r \cdots T_1)y = (T_r \cdots T_2)(x_1, y') = (x_1, x') = x$.

(2) $\Rightarrow$ (3): It is clear that each T-transform is a convex combination of permutations. Now we have to show that a product of two convex combinations of permutations is a convex combination of permutations. For permutations $P_j, Q_k$ and coefficients $t_j, s_k \ge 0$ with $\sum_j t_j = \sum_k s_k = 1$, we have

$\Big(\sum_{j=1}^l t_j P_j\Big)\Big(\sum_{k=1}^m s_k Q_k\Big) = \sum_{j=1}^l \sum_{k=1}^m t_j s_k P_j Q_k,$

where $\sum_{j=1}^l \sum_{k=1}^m t_j s_k = \big(\sum_{j=1}^l t_j\big)\big(\sum_{k=1}^m s_k\big) = \sum_{j=1}^l t_j = 1$.

(3) $\Rightarrow$ (4): This is trivial, since we have $x \in \operatorname{conv}\{Py\}$, and from Proposition 1.6 we know that a convex combination of permutation matrices is doubly stochastic.

(4) $\Rightarrow$ (1): This is Theorem 1.8.

Definition 1.11. If $B$ is a square matrix and $P$ is a permutation, then we call the set $\{b_{1P(1)}, b_{2P(2)}, \dots, b_{nP(n)}\}$ a diagonal of $B$. Each diagonal of $B$ has one entry from each column and each row.

Theorem 1.12 (The König-Frobenius Theorem). Given a square matrix $B$, every diagonal of $B$ contains a zero element if and only if $B$ has an $i \times j$ submatrix with all entries zero for some $i, j$ such that $i + j > n$.

Proof. This is equivalent to Hall's Theorem.

Theorem 1.13 (Birkhoff's Theorem). The set of $n \times n$ doubly stochastic matrices is a convex set whose extreme points are the permutation matrices.

Proof. To prove this we have to show two things. First, that a convex combination of doubly stochastic matrices is doubly stochastic; this was proven in Proposition 1.7. Second, we have to show that every extreme point is a permutation matrix, and for this we will show that each doubly stochastic matrix is a convex combination of permutation matrices. This can be proved by induction on the number of non-zero entries of the matrix. When $A$ has $n$ positive entries, if $A$ is doubly stochastic, then $A$ is a permutation matrix.

Let $A$ be doubly stochastic. Then $A$ has at least one diagonal with no zero entry; indeed, let $[0_{k \times l}]$ be a submatrix of zeros that $A$ might have. In such a case we can find permutation matrices $Q_1, Q_2$ such that

$Q_1 A Q_2 = \begin{pmatrix} 0 & B \\ C & D \end{pmatrix},$

where $0$ is a $k \times l$ submatrix with all entries zero. $Q_1 A Q_2$ is doubly stochastic, which means the sum of each row of $B$ is 1 and the sum of each column of $C$ is 1, i.e. $\sum_{i=1}^{n-l} b_{hi} = 1$, $h = 1, \dots, k$, and $\sum_{i=1}^{n-k} c_{ih} = 1$, $h = 1, \dots, l$. Also, looking at the rows and columns that intersect $D$,

$\sum_{h=1}^{l} c_{ih} + \sum_{h=1}^{n-l} d_{ih} = 1, \quad i = 1, \dots, n-k, \qquad \sum_{h=1}^{k} b_{hi} + \sum_{h=1}^{n-k} d_{hi} = 1, \quad i = 1, \dots, n-l.$

Summing the rows of $B$ gives $k = \sum_{h=1}^{k}\sum_{i=1}^{n-l} b_{hi}$, and summing the columns of $C$ gives $l = \sum_{h=1}^{l}\sum_{i=1}^{n-k} c_{ih}$.

Then

$k = \sum_{h=1}^{k}\sum_{i=1}^{n-l} b_{hi} = \sum_{i=1}^{n-l}\Big(1 - \sum_{h=1}^{n-k} d_{hi}\Big) = n - l - d,$

where $d$, the sum of the entries of $D$, is nonnegative. Hence $k + l \le n$. So by the König-Frobenius Theorem 1.12 at least one diagonal of $A$ must have all its entries positive.

Now suppose $A$ is a doubly stochastic matrix with $n + k$ non-zero entries. By the previous paragraph $A$ has a diagonal with no zero entries, given by some permutation $Q$. Let $a$ be the minimum element of this diagonal. Clearly $a < 1$, because otherwise $A$ would have a 1 in each row and column, making it a permutation, with only $n$ non-zero entries. Let $B = \frac{A - aQ}{1 - a}$. Then $B$ is doubly stochastic, and the entry in $B$ corresponding to the location of $a$ is zero, so $B$ has at most $n + k - 1$ non-zero entries. By the induction hypothesis $B$ is a convex combination of permutation matrices. Since $A = (1-a)B + aQ$, it is clear that $A$ is a convex combination of permutation matrices too.
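The proof of Birkhoff's Theorem is constructive, and for small matrices it can be carried out directly. The sketch below is our own illustration: instead of invoking the König-Frobenius Theorem, it simply searches all n! permutations for a diagonal with no zero entries, and then peels off one permutation matrix at a time, exactly as in the induction step.

import numpy as np
from itertools import permutations

def birkhoff_decomposition(A, tol=1e-12):
    # write a doubly stochastic A as sum_j t_j P_j, as in the proof of Theorem 1.13
    R = A.astype(float).copy()      # residual; always a multiple of a doubly stochastic matrix
    n = R.shape[0]
    terms = []
    while R.max() > tol:
        # a diagonal of R with no zero entries (brute force instead of König-Frobenius)
        sigma = next(s for s in permutations(range(n))
                     if all(R[i, s[i]] > tol for i in range(n)))
        a = min(R[i, sigma[i]] for i in range(n))    # smallest entry on that diagonal
        P = np.zeros((n, n))
        P[range(n), sigma] = 1.0
        terms.append((a, P))
        R -= a * P                                   # the remainder has one more zero entry
    return terms

A = np.array([[0.5, 0.3, 0.2],
              [0.25, 0.45, 0.3],
              [0.25, 0.25, 0.5]])
terms = birkhoff_decomposition(A)
print(round(sum(t for t, _ in terms), 12))                  # weights sum to 1
print(np.allclose(A, sum(t * P for t, P in terms)))         # A is recovered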

Chapter 2

The Schur-Horn Theorem in the Finite Dimensional Case

In this chapter we study some variants of the Pythagorean Theorem. The Pythagorean Theorem plays an important role in describing the relation between the three sides of a right triangle in Euclidean geometry. Among the variations of the Pythagorean Theorem that we will consider, some are trivial while others are not. We will find that these can be solved by using the Schur-Horn Theorem.

2.1 The Pythagorean Theorem in Finite Dimension

In the following we will present the Pythagorean Theorem (PT) in different dimensions, beginning with the classical variant. Also we will formulate the converse of the Pythagorean Theorem, which we call the Carpenter Theorem (CT) [8].

Theorem 2.1 (PT-1). If we have a right triangle with sides $x, y, z$ such that the angle between $x$ and $y$ is $\theta = \frac{\pi}{2}$, then $x^2 + y^2 = z^2$.

Although less known, the converse of (PT-1) holds. We call this the Carpenter Theorem (CT).

Theorem 2.2 (CT-1). If we have a triangle with sides $x, y, z$ such that $x^2 + y^2 = z^2$, then $\theta = \frac{\pi}{2}$, i.e. we have a right triangle.

Let $\{e_1, e_2\}$ be an orthonormal basis for $\mathbb{R}^2$. Then for $x \in \mathbb{R}^2$, we can write $x$ as a linear combination of $\{e_1, e_2\}$, and in this case we can re-write Theorem 2.1 as

Theorem 2.3 (PT-2). If $x = t_1 e_1 + t_2 e_2$ and $\|x\| = 1$, then $t_1^2 + t_2^2 = 1$.

Proof. Since the norm of $x$ is one we have $1 = \|x\|^2 = \|t_1 e_1 + t_2 e_2\|^2 = |t_1|^2 \|e_1\|^2 + |t_2|^2 \|e_2\|^2 = t_1^2 + t_2^2$.

Note that (PT-2) is Parseval's equality, which says that if $\{e_j : j \in J\}$ is an orthonormal basis of $H$, then for every $x \in H$ the following equality holds: $\|x\|^2 = \sum_{j \in J} |\langle x, e_j\rangle|^2$. In what follows, we denote by $P_K x$ the orthogonal projection of $x$ onto the subspace $K$.

Theorem 2.4 (CT-2). If $t_1, t_2 \in \mathbb{R}_+$ and $t_1^2 + t_2^2 = 1$, then there exists $x \in \mathbb{R}^2$ such that $\|x\| = 1$ and $\|P_{\mathbb{R}e_1} x\| = t_1$, $\|P_{\mathbb{R}e_2} x\| = t_2$.

Proof. Let $x = t_1 e_1 + t_2 e_2$. Then $\|x\|^2 = \|t_1 e_1 + t_2 e_2\|^2 = t_1^2\|e_1\|^2 + t_2^2\|e_2\|^2 = t_1^2 + t_2^2 = 1$. As $P_{\mathbb{R}e_1} x = \langle x, e_1\rangle e_1$, we get $\|P_{\mathbb{R}e_1} x\|^2 = |\langle x, e_1\rangle|^2 = |\langle \sum_{i=1}^2 t_i e_i, e_1\rangle|^2 = t_1^2$, and the same computation gives $\|P_{\mathbb{R}e_2} x\|^2 = t_2^2$.

Since $\|x\| = \|e_1\| = 1$, we have $\|P_{\mathbb{R}e_1} x\|^2 = |\langle x, e_1\rangle|^2 = |\langle e_1, x\rangle|^2 = \|P_{\mathbb{R}x} e_1\|^2$. From this point of view, we can rephrase (PT-2) and (CT-2) as:

Theorem 2.5 (PT-3). If $K$ is a one-dimensional subspace of $\mathbb{R}^2$, then $\|P_K e_1\|^2 + \|P_K e_2\|^2 = 1$.

Theorem 2.6 (CT-3). If $t_1, t_2 \in \mathbb{R}_+$ and $t_1 + t_2 = 1$, then there exists a one-dimensional subspace $K \subset \mathbb{R}^2$ such that $\|P_K e_1\|^2 = t_1$, $\|P_K e_2\|^2 = t_2$.

Next we see that the same results hold in $\mathbb{R}^n$:

Theorem 2.7 (PT-4). If $K$ is a one-dimensional subspace of $\mathbb{R}^n$, and $\{e_j\}_{j=1}^n$ an orthonormal basis, then $\sum_{j=1}^n \|P_K e_j\|^2 = 1$.

Proof. We choose $x = \sum_j t_j e_j$ to be a unit vector in $K \subset \mathbb{R}^n$; then it spans $K$ and $\|P_K e_i\|^2 = \|\langle e_i, x\rangle x\|^2 = |\langle e_i, x\rangle|^2 \|x\|^2 = |\langle e_i, x\rangle|^2$. Then $\sum_j \|P_K e_j\|^2 = \sum_j |\langle e_j, x\rangle|^2 = \|x\|^2 = 1$ by Parseval's equality.

Theorem 2.8 (CT-4). If $t_1, \dots, t_n \in [0,1]$ and $\sum_{j=1}^n t_j = 1$, then there exists a one-dimensional subspace $K \subset \mathbb{R}^n$ such that $\|P_K e_j\|^2 = t_j$, $j = 1, \dots, n$.

Proof. Let $x = \sum_{j=1}^n t_j^{1/2} e_j$ and put $K = \operatorname{span}\{x\}$. Then

$P_K e_i = \langle e_i, x\rangle x = \Big\langle e_i, \sum_{j=1}^n t_j^{1/2} e_j \Big\rangle x = \sum_{j=1}^n t_j^{1/2} \langle e_i, e_j\rangle x = t_i^{1/2} x.$

So $\|P_K e_i\|^2 = t_i \|x\|^2 = t_i$.

In the following we are going to generalize the Pythagorean Theorem in $\mathbb{R}^n$, by allowing $K$ to have different dimensions.
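The proof of Theorem 2.8 is already an algorithm: take $x = \sum_j t_j^{1/2} e_j$ and project onto its span. A short numerical check (our own sketch, using the standard basis of $\mathbb{R}^n$):

import numpy as np

rng = np.random.default_rng(1)
n = 6
t = rng.dirichlet(np.ones(n))            # t_1, ..., t_n in [0, 1] with sum 1
x = np.sqrt(t)                            # x = sum_j t_j^(1/2) e_j, a unit vector
P = np.outer(x, x)                        # orthogonal projection onto K = span{x}
print(np.allclose(np.diag(P), t))                    # ||P_K e_j||^2 = t_j
print(np.allclose(P @ P, P), np.allclose(P, P.T))    # P is an orthogonal projection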

Theorem 2.9 (PT-5). If $K$ is an $m$-dimensional subspace of $\mathbb{R}^n$, then $\sum_{i=1}^n \|P_K e_i\|^2 = m$.

Proof. If we choose $f_1, \dots, f_m$ to be an orthonormal basis for $K \subset \mathbb{R}^n$, then the projection of $e_i$ onto $K$ is $P_K e_i = \sum_{j=1}^m \langle e_i, f_j\rangle f_j$. So

$\sum_{i=1}^n \|P_K e_i\|^2 = \sum_{i=1}^n \Big\| \sum_{j=1}^m \langle e_i, f_j\rangle f_j \Big\|^2 = \sum_{i=1}^n \sum_{j=1}^m |\langle e_i, f_j\rangle|^2 = \sum_{j=1}^m \sum_{i=1}^n |\langle e_i, f_j\rangle|^2 = \sum_{j=1}^m \|f_j\|^2 = \sum_{j=1}^m 1 = m.$

The converse of (PT-5) would be

Theorem 2.10 (CT-5). If $\{t_i\}_{i=1}^n \subset [0,1]$ and $\sum_{i=1}^n t_i = m$, then there exists an $m$-dimensional subspace $K$ of $\mathbb{R}^n$ such that $\|P_K e_i\|^2 = t_i$, $i = 1, \dots, n$.

Suddenly, it is not so obvious how to construct $K$. So first we will attempt to reformulate the theorem. If $K \subset \mathbb{R}^n$, $P_K$ is the orthogonal projection of $\mathbb{R}^n$ onto $K$, and $e_1, \dots, e_n$ is an orthonormal basis with $(t_{ij})$ the matrix of $P_K$, then $\|P_K e_j\|^2 = \langle P_K e_j, P_K e_j\rangle = \langle P_K e_j, e_j\rangle = t_{jj}$, since $P_K = P_K^2 = P_K^*$. Then $\sum_{i=1}^n \|P_K e_i\|^2 = \sum_{i=1}^n t_{ii}$, which we can write as $\sum_{i=1}^n \|P_K e_i\|^2 = \operatorname{tr}(P_K)$. With this in mind, we can rewrite (PT-5) and (CT-5) as:

Theorem 2.11 (PT-6). If $K$ is an $m$-dimensional subspace of $\mathbb{R}^n$, then $\operatorname{tr}(P_K) = m$.

Theorem 2.12 (CT-6). If $t_1, \dots, t_n \in [0,1]$ and $\sum_{j=1}^n t_j = m$, then there is $K \subset \mathbb{R}^n$ such that the diagonal of $P_K$ is $(t_1, \dots, t_n)$.

This formulation of (CT-6) makes it clear that its proof is not going to be as trivial as the previous (PT-CT) results. If we have numbers $t_1, \dots, t_n \in [0,1]$ such that their sum is $m \in \mathbb{N}$ and we want to look for $K \subset \mathbb{R}^n$ with $\|P_K e_i\|^2 = t_i$ for all $i$, then, in short, we want to form a matrix $P_K$ with diagonal $(t_i)$ such that $P_K = P_K^* = P_K^2$. It is not obvious that such a thing is even possible. If we try to find a projection in that way we will get $\frac{n(n+1)}{2}$ equations with $\frac{n(n-1)}{2}$ variables, as we see in the next example.

Example 2.13. Take $P_K$ to be a $2 \times 2$ matrix such that $P_K = P_K^* = P_K^2$, and such that the diagonal of $P_K$ is $(t, 1-t)$ for a fixed $t \in [0,1]$. So

$P_K = \begin{pmatrix} t & x \\ x & 1-t \end{pmatrix}, \qquad P_K^2 = \begin{pmatrix} t^2 + x^2 & x \\ x & x^2 + (1-t)^2 \end{pmatrix}.$

As these two should be equal, we get two equations, $t = t^2 + x^2$ and $1 - t = x^2 + (1-t)^2$, with the single unknown $x$. In this particular case one can check that $x = \sqrt{t(1-t)}$ gives a solution. But for a $10 \times 10$ matrix we would have 55 equations in 45 unknowns. The bigger the projection, the more equations we have to deal with, and the systems will always be over-determined. This is an issue, because such systems may have no solution. We will soon see, however, that this problem can be solved in general and that the Schur-Horn Theorem is the way to go.

2.2 The Schur-Horn Theorem in the Finite Dimensional Case

The Schur-Horn theorem characterizes the relation between the eigenvalues and the diagonal elements of a self-adjoint matrix by using majorization.

Theorem 2.14 (Schur 1923 [14]). If $A \in M_n(\mathbb{C})_{sa}$, then $\operatorname{diag}(A) \prec \lambda(A)$, where $\operatorname{diag}(A)$ is the diagonal of $A$ and $\lambda(A)$ is the eigenvalue list of $A$.

Proof. Let $A = UDU^*$, where $D$ is a diagonal matrix and $U$ is a unitary. Then the diagonal of $A$ is given by

$a_{kk} = \sum_{h,l} U_{kh} D_{hl} \overline{U_{kl}} = \sum_l \lambda_l U_{kl}\overline{U_{kl}} = \sum_l \lambda_l |U_{kl}|^2. \qquad (2.7)$

Define a matrix $T$ by $T_{kl} = |U_{kl}|^2$. The fact that $U^*U = UU^* = I$ implies that $T$ is doubly stochastic. Equation (2.7) shows that $\operatorname{diag}(A) = T\lambda(A)$. By Theorem 1.10, $\operatorname{diag}(A) \prec \lambda(A)$.

Horn [6] proved in 1954 the converse of Schur's Theorem 2.14. We offer a proof following ideas of Kadison [8, Theorem 6]. A very similar proof appears in Arveson-Kadison [3, Theorem 2.1], but using only results from Kadison with no acknowledgement whatsoever of the well-known results in majorization theory that we outlined in Chapter 1. The following lemma contains Kadison's key idea.

Lemma 2.15. Let $A \in M_n(\mathbb{C})$ with diagonal $y$. Let $T$ be a T-transform. Then there exists a unitary $U \in M_n(\mathbb{C})$ such that $UAU^*$ has diagonal $Ty$.

Proof. Let $A$ be an $n \times n$ matrix. Define a unitary $U$ which agrees with the identity except in the $2 \times 2$ block determined by the rows and columns $i$ and $j$, where it equals

$\begin{pmatrix} \xi\sin\theta & \cos\theta \\ -\xi\cos\theta & \sin\theta \end{pmatrix},$

where $\xi \in \mathbb{C}$, with $|\xi| = 1$, is chosen so that $\xi a_{ij} = |a_{ij}|$. Then a straightforward computation shows that

$\operatorname{diag}(UAU^*) = t\,y + (1-t)\,y\circ\sigma,$

where $t = \sin^2\theta$, $\sigma = (i\,j) \in S_n$, and $y\circ\sigma$ denotes $y$ with its $i$-th and $j$-th coordinates interchanged.

Theorem 2.16 (Horn 1954 [6]). If $x, y \in \mathbb{R}^n$ and $x \prec y$, then there exists $A \in M_n(\mathbb{C})_{sa}$ such that $\operatorname{diag}(A) = x$, $\lambda(A) = y$.

Proof. Let $x = (x_1, \dots, x_n)$, $y = (y_1, \dots, y_n)$ be such that $x \prec y$. By Theorem 1.10, $x \prec y$ implies $x = (T_r \cdots T_1)y$, where $T_1, \dots, T_r$ are T-transforms. Let $A_1 \in M_n(\mathbb{R})$ be the matrix with diagonal $y$ and zeroes elsewhere. By Lemma 2.15, there exists a unitary $V_1$ such that $A_2 = V_1 A_1 V_1^*$ has diagonal $T_1 y$. Similarly, there exists a unitary $V_2$ such that $A_3 = V_2 A_2 V_2^*$ has diagonal $T_2(T_1 y) = T_2 T_1 y$. Repeating this, after $r$ steps we will have unitaries $V_1, \dots, V_r$ such that $A = V_r \cdots V_1 A_1 V_1^* \cdots V_r^*$ has diagonal $T_r \cdots T_1 y = x$. As unitary conjugation preserves the spectrum, $A$ has spectrum $y$ and diagonal $x$.

We can rephrase Schur's result by saying that for every $x \in \mathbb{R}^n$,

$\{M_x : x \prec y\} \supseteq D\{UM_yU^* : U \in \mathcal{U}(n)\},$

where $M_x, M_y$ are the diagonal matrices that have $x, y$ on the diagonal, and $D$ denotes compression to the diagonal; note that if we conjugate $M_y$ with a unitary matrix $U$ we still have a self-adjoint matrix with eigenvalue list $y$. And Horn proved the other inclusion, i.e. for every $x \in \mathbb{R}^n$,

$\{M_x : x \prec y\} \subseteq D\{UM_yU^* : U \in \mathcal{U}(n)\}.$

So we can rephrase both theorems together as follows:

Theorem 2.17 (Schur-Horn Theorem). For every $x \in \mathbb{R}^n$,

$\{M_x : x \prec y\} = D\{UM_yU^* : U \in \mathcal{U}(n)\}.$

Now we can prove (CT-6) as follows:

Proof of (CT-6). If $a = (a_1, \dots, a_n) \in [0,1]^n$ and $\sum_{j=1}^n a_j = m$, then $a \prec (\underbrace{1, \dots, 1}_{m}, 0, \dots, 0)$. By the Schur-Horn Theorem, there exists a self-adjoint matrix $P \in M_n(\mathbb{C})$ with diagonal $a$ and eigenvalues $(\underbrace{1, \dots, 1}_{m}, 0, \dots, 0)$. The minimal polynomial of $P$ is $f(t) = t(1-t)$. So $P(I - P) = 0$, i.e. $P = P^2$. Thus, $P$ is a projection.

2.3 A Pythagorean Theorem for Finite Doubly Stochastic Matrices

In this section we consider, following Kadison [8, 9], certain differences of sums of entries of doubly stochastic matrices. We will use the results here to generalize Theorem 2.18 below to the infinite dimensional case (Chapter 3).

Theorem 2.18. Let $K$ be an $m$-dimensional subspace of $H$, and $e_1, \dots, e_n$ an orthonormal basis for $H$. If $a = \sum_{i=1}^r \|P_K e_i\|^2$, $b = \sum_{i=r+1}^n \|P_{K^\perp} e_i\|^2$, then $a - b = m - n + r$.

Proof. As $P_{K^\perp} = I - P_K$, we have $\|P_{K^\perp} e_i\|^2 = \langle P_{K^\perp} e_i, e_i\rangle = \langle (I - P_K)e_i, e_i\rangle = 1 - \langle P_K e_i, e_i\rangle = 1 - \|P_K e_i\|^2$. So $a = \sum_{i=1}^r a_i$, $b = \sum_{i=r+1}^n (1 - a_i)$, where $a_i = \|P_K e_i\|^2$, and thus (using Theorem 2.9)

$a - b = \sum_{i=1}^r a_i - \sum_{i=r+1}^n (1 - a_i) = \sum_{i=1}^n a_i - (n - r) = \operatorname{tr}(P_K) - (n - r) = m - n + r.$

Definition 2.19. Let $A \in M_{m,n}(\mathbb{R})$. Fix subsets $K \subset \{1, \dots, m\}$, $L \subset \{1, \dots, n\}$. Then we can construct the block or submatrix $B$ by taking only the rows in $K$ and the columns in $L$ of $A$. The complement of the block $B$ is the block $B'$ with the remaining rows and columns of $A$. The sum of all entries of the block $B$ is the weight of the block $B$, and we write it as $w(B)$.

Definition 2.20. A doubly stochastic matrix $A = (a_{ij})$ is said to be Pythagorean if there is a Hilbert space $H$ with orthonormal bases $\{e_i\}$, $\{f_j\}$ such that $a_{ij} = |\langle e_i, f_j\rangle|^2$ for all $i, j$.

The following result can be seen as a Pythagorean Theorem for doubly stochastic matrices.

Theorem 2.21. If $A$ is a doubly stochastic matrix and $B$ is a block in $A$ with $p$ rows and $q$ columns, then $w(B) - w(B') = p + q - n$.

Proof.

$w(B) - w(B') = \sum_{j \in K}\sum_{l \in L} a_{jl} - \sum_{j \notin K}\sum_{l \notin L} a_{jl} = \sum_{j \in K}\sum_{l \in L} a_{jl} - \sum_{l \notin L}\Big(1 - \sum_{j \in K} a_{jl}\Big) = \sum_{j \in K}\Big(\sum_{l \in L} a_{jl} + \sum_{l \notin L} a_{jl}\Big) - (n - |L|) = |K| - (n - |L|) = |K| + |L| - n.$

We can use Theorem 2.21 to give another proof of Theorem 2.18:

Proof. Let $K$ be an $m$-dimensional subspace of $H$, and $K^\perp$ its orthogonal complement. Choose $\{e_1, \dots, e_n\}$, $\{f_1, \dots, f_m\}$, $\{f_{m+1}, \dots, f_n\}$ to be orthonormal bases for $H$, $K$ and $K^\perp$. Let $A = (a_{jk})$ be the $n \times n$ doubly stochastic matrix given by $a_{jk} = |\langle e_j, f_k\rangle|^2$. Let $B$ be the submatrix of $A$ given by the first $r$ rows and first $m$ columns of $A$. If $P_K$ is the projection onto $K$, then $P_K e_i = \sum_{j=1}^m \langle e_i, f_j\rangle f_j$, $(I - P_K)e_i = \sum_{j=m+1}^n \langle e_i, f_j\rangle f_j$, and $\|P_K e_i\|^2 = \sum_{j=1}^m |\langle e_i, f_j\rangle|^2 = \sum_{j=1}^m a_{ij}$, so $\sum_{i=1}^r \|P_K e_i\|^2 = \sum_{i=1}^r \sum_{j=1}^m a_{ij} = w(B)$. Similarly, $\|P_{K^\perp} e_i\|^2 = \sum_{j=m+1}^n |\langle e_i, f_j\rangle|^2 = \sum_{j=m+1}^n a_{ij}$, so $\sum_{i=r+1}^n \|P_{K^\perp} e_i\|^2 = \sum_{i=r+1}^n \sum_{j=m+1}^n a_{ij} = w(B')$. By Theorem 2.21,

$\sum_{i=1}^r \|P_K e_i\|^2 - \sum_{i=r+1}^n \|P_{K^\perp} e_i\|^2 = w(B) - w(B') = m - n + r.$
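Theorem 2.21 is easy to test numerically. The following sketch (our own illustration with random data) builds a doubly stochastic matrix as a convex combination of permutation matrices, picks a block B, and verifies w(B) − w(B′) = p + q − n.

import numpy as np

rng = np.random.default_rng(2)
n = 6
perms = [np.eye(n)[rng.permutation(n)] for _ in range(4)]
A = sum(t * P for t, P in zip(rng.dirichlet(np.ones(4)), perms))   # doubly stochastic

rows, cols = [0, 1, 4], [2, 3]                      # p = 3 rows, q = 2 columns
crows = [i for i in range(n) if i not in rows]
ccols = [j for j in range(n) if j not in cols]
wB = A[np.ix_(rows, cols)].sum()                    # w(B)
wBc = A[np.ix_(crows, ccols)].sum()                 # w(B')
print(np.isclose(wB - wBc, len(rows) + len(cols) - n))   # True: p + q - n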

Chapter 3

The Carpenter Theorem in the Infinite Dimensional Case

In the second chapter we dealt with the finite dimensional space $\mathbb{R}^n$, and we showed many cases of the Pythagorean Theorem. Here we will deal with infinite dimensional Hilbert spaces and we will discuss two cases of the Carpenter Theorem: the first, when the subspace $K \subset H$ and its orthogonal complement $K^\perp$ both have infinite dimension, and the second, when one of the subspaces $K, K^\perp$ has finite dimension [9]. We include here several definitions that we will need to refer to operators on an infinite-dimensional Hilbert space.

Definition 3.1. For an operator $A \in B(H)$ we say $A$ is positive if $A = A^*$ and the spectrum of $A$ consists of non-negative real numbers, i.e. if $\langle Ax, x\rangle \ge 0$ for all $x \in H$.

Definition 3.2. Let $\{e_i\}$ be an orthonormal basis of $H$ and let $A \in B(H)$ be a positive operator. Then we say $A$ is a trace-class operator if $\operatorname{tr}(A) = \sum_i \langle Ae_i, e_i\rangle < \infty$. If the sum is finite for one orthonormal basis, then it is finite and has the same value for any other orthonormal basis. For an arbitrary $A$, we say it is trace-class if $(A^*A)^{1/2}$ is trace-class.

3.1 The Subspaces K and K⊥ both have Infinite Dimension

We will start with some facts that we are going to use later.

Lemma 3.3. If $P \in B(H)$ is a projection that has $p_1, p_2, \dots$ as diagonal elements, and $\sigma : \mathbb{N} \to \mathbb{N}$ is any bijection, then there exists a projection $P' \in B(H)$ with $p_{\sigma(1)}, p_{\sigma(2)}, \dots$ as diagonal elements.

Proof. We have that $P \in B(H)$ is a projection with diagonal $\{p_i\}_{i \in \{1,2,\dots\}}$, i.e. $p_i = \langle Pe_i, e_i\rangle$, where $\{e_i\}$ is an orthonormal basis. We define a unitary $U$ by $Ue_i = e_{\sigma(i)}$. Then we define the projection $P' = U^*PU$. The $i$-th diagonal element of this projection is given by

$\langle P'e_i, e_i\rangle = \langle U^*PUe_i, e_i\rangle = \langle PUe_i, Ue_i\rangle = \langle Pe_{\sigma(i)}, e_{\sigma(i)}\rangle = p_{\sigma(i)}.$

Lemma 3.4. Let $\alpha_1, \alpha_2, \dots, \alpha_n, \beta \in [0,1]$ be such that $\alpha_1 + \alpha_2 + \dots + \alpha_n = \beta + m$, $m \in \mathbb{N}$. Then

$(\alpha_1, \alpha_2, \dots, \alpha_n) \prec (\beta, \underbrace{1, \dots, 1}_{m}, 0, \dots, 0).$

Proof. Since $\alpha_j \in [0,1]$ for all $j$, we have $\alpha_1 \le 1$, $\alpha_1 + \alpha_2 \le 2$, ..., $\alpha_1 + \alpha_2 + \dots + \alpha_m \le m$. As $\alpha_1 + \alpha_2 + \dots + \alpha_n = \beta + m$, we have for any $k \in \{m+1, \dots, n\}$, $\alpha_1 + \dots + \alpha_k \le \alpha_1 + \dots + \alpha_n = m + \beta$.

The following lemma is proven in [5]. While the result certainly looks obvious, as expected from the finite-dimensional analogue, and its proof is not very hard, it is not elementary either. We will generalize it in the proof of Theorem 3.8.

Lemma 3.5. Let $P, Q \in B(H)$ be two orthogonal projections such that $P - Q$ is trace-class. Then $\operatorname{tr}(P - Q) \in \mathbb{Z}$.

Lemma 3.6. Let $T \in B(H)$ be a trace-class operator. Let $R_1, R_2, \dots$ be pairwise orthogonal finite-rank projections with $\sum_k R_k = I$. Then $\operatorname{tr}(T) = \sum_k \operatorname{tr}(TR_k)$.

Proof. $\sum_k (TR_k) = T\sum_k R_k = TI$. This implies $T = \sum_k TR_k$. So, if we construct an orthonormal basis $\{e_i\}$ by joining orthonormal bases corresponding to each $R_k H$, we get

$\operatorname{tr}(T) = \sum_i \langle Te_i, e_i\rangle = \sum_{i,k} \langle TR_k e_i, e_i\rangle = \sum_k \sum_{e_i \in \operatorname{ran} R_k} \langle TR_k e_i, e_i\rangle = \sum_k \operatorname{tr}(TR_k).$

Let $K$ be a subspace of $H$ and consider the orthogonal projection $Q_K$ onto $K$. Let $q_1, q_2, \dots$ be the diagonal of $Q_K$. If $Q_K$ has finite rank (i.e. if $\dim K < \infty$), then $\sum_j q_j = \operatorname{tr}(Q_K) \in \mathbb{N}$. But what happens if $\sum_j q_j = \infty$ (i.e. $\dim K = \infty$)? Since $Q_K$ is a projection, $I - Q_K$ is going to be a projection too, with diagonal $1 - q_1, 1 - q_2, \dots$. Therefore $\operatorname{tr}(I - Q_K) = \sum_j (1 - q_j)$ will have to be an integer if finite (this is an elementary exercise in Functional Analysis, or it can be obtained from Effros' Lemma 3.5). For example, if $q_j = 1 - \frac{1}{j^2}$, then the diagonal of $I - Q_K$ would add up to $\sum_j \frac{1}{j^2} = \frac{\pi^2}{6}$, not an integer. So no projection with diagonal $\{1 - \frac{1}{j^2}\}$ exists. That is, when $K$ is finite-dimensional or its orthogonal complement $K^\perp$ is, we obtain an obstruction to what the possible diagonals of $Q_K$ are. We will address this case in Section 3.2. What about when $K, K^\perp$ are both infinite-dimensional (i.e. $\sum_j q_j = \sum_j (1 - q_j) = \infty$)? The next Theorem (3.8) will illustrate this condition and give us the complete idea of the whole situation. The proof of Theorem 3.8 in the original paper [9] is kind of complicated, so we tried to simplify it as much as we could.

Definition 3.7. Let $\{e_n\}$ be an orthonormal basis. Then we define the matrix units $\{E_{mn}\}_{m,n}$ associated to $\{e_n\}$ as the rank-one operators $E_{mn}x = \langle x, e_n\rangle e_m$, $x \in H$.

The following is the main result in this thesis.

Theorem 3.8. Given an orthonormal basis $\{e_j\}_{j \in \mathbb{N}}$ of $H$ and $\{a_j\}_{j \in \mathbb{N}} \subset [0,1]$, the following statements are equivalent:

1. There exists an infinite dimensional subspace $K \subset H$ with infinite dimensional complement such that $\|P_K e_j\|^2 = a_j$ for all $j \in \mathbb{N}$.

2. $\sum_{j \in \mathbb{N}} a_j = \infty$ and $\sum_{j \in \mathbb{N}} (1 - a_j) = \infty$; and either (i) or (ii) holds:
(i) $a = \infty$ or $b = \infty$;
(ii) $a < \infty$, $b < \infty$, and $a - b \in \mathbb{Z}$,
where $a = \sum_{a_j \le 1/2} a_j$ and $b = \sum_{a_j > 1/2} (1 - a_j)$.

Proof. 2(i) $\Rightarrow$ (1): First we will show that when $a = \infty$, then there exists a projection $P$ with diagonal

38 {a j }. Let N 0 = {j : a j = 0 or a j = 1}, K = span{e j : j N 0 }. Then for j / N 0, a j (0, 1). If we find P 0 on B(K ) with diagonal {a j : j / N 0 }, then P = P 0 + P 1 satisfies 1, where P 1 = j N 0 a j E jj. So we will assume that a j (0, 1) for all j. We consider a decomposition {a j } = {a j } {a j}, where 0 a a j 1, so a, b will be j=1 a j, j=1 1 a j. Let a j j 1 2, and 1 2 < = a γ(j), a j = a δ(j), and N = {δ(n) : n N}, N = {γ(n) : n N}. Let n(1) = min{n : a 1+a 1+ +a n 3}; notice that each a j 1 2, a 1 < 1 so n(1) 5. Let b 1 = a 1 (b 2,, b n(1) ) = (a 2,, a n(1)). Let m(1) = min{n : a 1 + b b n 3}; then 5 m(1) n(1). Let ǎ = 3 a 1 b m(1) 1 = b m(1) 1 + ǎ b m(1) = b m(1) ǎ. m(1) 1 1 b j As a 1 + m(1) 1 m(1) 2 b j < 3, a 1 + m(1) 2 1 b j + b m(1) 1 = 3, we have ǎ 0 and 0 b m(1) < b m(1) b m(1) 1 < b m(1) 1 1. Let N 1 = {δ(1), γσ 1 (1),, γσ 1 (m(1) 1)}, where b j = a σ 1 (j) = a γσ 1 (j) for certain permutation σ 1. Let j(1) = γσ 1 (m(1)), j(2) = δ(2), and {j(n)} n 3 an increasing enumeration of N \(N 1 {j(1), j(2)}). Let n(2) = min{n : a 2 +b m(1) + n 3 a j(k) 3}. 32

39 Let c 1 = b m(1) c 2 = a 2 c 3 = a j(3) (c 4,, c n(2) ) = (a j(4),, a j(n(2)) ). Let m(2) = min{m : m j=1 c j 3}; then m(2) 1 j=1 c j < 3, 6 m(2) n(2). Define m(2) 1 ˇb = 3 j=1 c m(2) 1 = c m(2) 1 + ˇb c m(2) = c m(2) ˇb. c j Then 0 ˇb c m(2) 1, and 0 c m(2) c m(2) c m(2) 1 c m(2) 1 1. Let N 2 = {j(1), j(2)} {n : k, 3 k m(2) 1 with c k = a n }. Let k(1) be such that a k(1) is c n(2), k(2) = δ(3), write N \ (N 1 N 2 ) = {k(1), k(2), } with {k(3), k(4), } in ascending order. 33

40 Let n(3) = min{n : a 3 + c m(2) + n r=3 a k(r) 3} and let d 1 = c m(2) d 2 = a 3 d 3 = a k(3) (d 4,, d n(3) ) = (a k(4),, a k(n(3)) ), and let m(3) = min{n : m j=1 d j 3}. Then m j=1 d j 3, m(3) 1 j=1 d j 3, and 6 m(3) n(3). Let č = 3 m(3) 1 j=1 d m(3) 1 = d m(3) 1 + č, d m(3) = d m (3) č, d j, so that 0 < č d m(3) 1 2, and 0 d m(3) < d m(3) d m(3) 1 < d m(3) 1 1. By repeating these processes we will build pairwise disjoint subsets N 1, N 2, of N such that N 1 N 2 = N. We can write N j = {p j (1),, p j (m(j) 1)}, then b m(j 1) + a j + m(j) 2 k=3 a pj (k) + b m(j) 1 = 3. (3.8) By Theorem 2.12 we will get a self adjoint projection E j with diagonal {b m(j 1), a j, a pj (3),, a pj (m(j) 2), b m(j) 1}. 34

41 We also have b m(1) + b m(j) 1 = b m(1) + b m(j) 1, (3.9) and 0 b m(j) b m(j) b m(j) 1 b m(j) 1. (3.10) If we write P = E j, we get j=1 P = a 1 a p1 (2) a p1 (m(1) 2) b m(1) 1 b m(1) a 2 0 a p2 (2) b m(2) 1 b m(2).... The projection P has all the a j in its diagonal with the exception of the pairs 35

42 b m(j) 1, b m(j) in place of b m(j) 1, b m(j). We will now construct a unitary operator that will conjugate b m(j) 1, b m(j) into b m(j) 1, b m(j). So let U be U = sin θ 1 cos θ 1 cos θ 1 sin θ sin θ 2 cos θ 2 cos θ 2 sin θ 2..., where θ 1, θ 2, are to be determined. Let Q = UP U. Then every entry of Q outside the 2 2 blocks agrees with P. In the 2 2 blocks, sin θ j cos θ j cos θ j b m(1) 1 0 sin θ j sin θ j 0 b m(1) cos θ j cos θ j = sin θ j 36

43 b m(1) 1 sin2 θ + b m(1) cos2 θ b m(1) sin2 θ + b m(1) 1 cos2 θ The conditions 3.9 and 3.10 guarantee that for each j there exists t j [0, 1] such that b m(j) = t j b m(j) + (1 t)b m(j) 1, b m(j) 1 = (1 t j )b m(j) + t j b m(j) 1. Choosing θ j so that t j = sin θ j, we get the desired diagonal for Q. This proves the case a =. When b =, we can repeat the proof for the coefficients b i = 1 a i. That way we obtain a projection with E with diagonal 1 a j. Then I E is the projection we are looking for. 2(ii) = 1: We will show that if a <, b <, a b Z, then there exists a projection P with diagonal {a j }. By using Lemma 3.3, we can reorder the numbers {a j } [0, 1] as needed. Again write 0 a j 1 2 < a j 1, where {a j } {a j} = {a j }. Let a j = a γ(j), a j = a δ(j) such that N = {γ(n) : n N}, N = {δ(n) : n N}. Then a = a j N a j, b = a j N 1 a j. Since a, b are finite, we can get finite subsets N 1 N, N 1 N such that γ 1 = N \N 1 a j < 1, δ 1 = 1 a j < γ 1. N \N 1 37

44 We are given a b Z, so a b = N 1 a j + γ 1 ( N 1 (1 a j ) + δ 1 ) = a j + a j + γ 1 δ 1 N 1 N 1 N 1 = a j N 1 + γ 1 δ 1 Z. N 1 N 1 Then m = N a 1 N j + γ 1 δ 1 Z. Write N 1 N 1 = {k 1,, k r }. Since 1 0 γ 1 δ 1 < 1, 0 a j 1, and m = a k1 + + a kr + (γ 1 δ 1 ), we get from Lemma 3.4 that (a k1,, a kr, (δ 1 γ 1 )) m times r+1 m times {}}{{}}{ (1,, 1, 0,, 0). By Theorem 2.12 (CT-6), there exists a projection P 0 such that a k1... * P 0 =. a kr * γ 1 δ 1 38

45 By adding the element 1 to the diagonal of P 0 we get another projection P 1 P 1 = a k1... * 0 a kr * δ 1 γ Now we will pay attention only to the 3 3 block at the middle. Let N \ N 1 = {l 1, l 2, }, N \ N 1 = {m 1, m 2, }. We know 0 a li < 1, 0 a mi < δ 1 i N. Let γ 2 = γ 1 a l1, δ 2 = δ 1 (1 a m1 ), then γ 2 δ 2 = γ 1 δ 1 (a l1 + a m1 ) + 1 (3.11) γ 2 δ 2 + a l1 + a m1 = γ 1 δ (3.12) From Lemma 3.4, (a l1, a m1, γ 2 δ 2 ) (γ 1 δ 1, 1, 0). By the Schur-Horn Theorem 2.17 there exists a 3 3 self adjoint matrix U 1 such that 39

46 a l a m γ 2 δ 2 = diag (U 1 γ 1 δ U 1 ). So U 1 P 1 U 1 = a k a kr a l1 a m1 γ2 δ Now we define P 2 as 40

47 P 2 = a k a kr al1 a m1 γ2 δ i.e. P 2 = U 1 P 1 U 1 + E r+4,r+4. Since 0 a l2 γ 2, 0 1 a m2 δ 2, and by letting γ 3 = γ 2 a l2, δ 3 = δ 2 1 a m2, then again by Lemma 3.4, (a l2, a m2, γ 3 δ 3 ) (γ 2 δ 2, 1, 0). If we keep repeating the process we will end up with a sequence of projections {P n } B(H) such that diag(p N ) = a k1,, a kr, a l1,, a ln, a m1,, a mn, γ n+1 δ n+1, 0,. As the unitary U j+1 is the identity except possibly in the first r + 3j + 3 basis 41

48 elements, P i Q r+3j+4 = P j Q r+3j+4, i j (3.13) where is Q s = s h=1 E hh. This shows that the sequence {P n } converges strongly, Indeed, for any i j, ξ H, (P i P j )ξ = (P i P j )Iξ = (P i P j )[(I Q r+3j+4 ) + Q r+3j+4 ]ξ (P i P j )(I Q r+3j+4 )ξ + (P i P j )Q r+3j+4 ξ ( P i + P j ) (I Q r+3j+4 )ξ = 2 (I Q r+3j+4 )ξ. When h, Q h I strongly, so we find out that the sequence {P n ξ} is Cauchy in H, and so convergent. This shows {P n } B(H) is strongly convergent to a projection P. As P i Q r+3j+4 = P j Q r+3j+4, the diagonal of P is {a j }. (1) = (2): The fact that K, K are infinite-dimensional imply that j a j = j 1 a j =. So we need to show that either 2(i) or 2(ii) holds. If 2(i) holds, we are done. Otherwise we want to show that if there is a projection P B(H) such that diag(p ) = a n, then a b Z, where a = a i <, and b = (1 a i ) <. a i N a i N The proof of this is inspired by Effros proof of Lemma 3.5. Let Q B(H) be the 42

49 projection Q = n N E nn. Then tr (QP Q) = tr ( n N E nn P E nn ) = tr ( n N a n E nn ) = n N a n = a, and tr (Q P Q ) = tr ( E nn (I P )E nn ) n N = tr ( (1 a n )E nn ) n N = (1 a n ) n N = b. This implies that both QP Q, Q P Q are trace class, since they are positive. We now notice that P Q is Hilbert-Schmidt, and so in particular it is compact. 43

50 Indeed, using that P = P 2, (1 P ) 2 = (1 P ), we have tr ((P Q ) 2 ) = [(P Q )] hh h = P hk Q hk 2 (no problem exchanging the sums h k = P hk Q hk 2 since every term is non-negative) k h = P hk 2 + (1 P kk ) 2 + P hk 2 k N h k N n k = P kk + (1 P kk ) k N k N = a + b <. Since (P Q ) 2 is positive and compact, then we can write (P Q ) 2 = k λ kr k, where R k = R 1, R 2, are pairwise orthogonal finite rank projections and {λ i } i N arranged strictly in decreasing order and converging to zero. The points {λ i } are isolated points in the spectrum of (P Q ) 2, so there exist continuous functions f 1, f 2, such that E k = f k ((P Q ) 2 ). Since P (P Q ) 2 = (P Q ) 2 P, Q(P Q ) 2 = (P Q ) 2 Q (from a direct computation), we deduce that P R k = R k P, QR k = R k Q, k; in particular P R k, QR k are both finite rank projections. By keeping in 44

51 mind that QP Q, Q P Q are trace class and using Lemma 3.6, a b = tr (QP Q) tr (Q P Q ) = tr (QP Q Q P Q ) = k = k = k = k = k = k tr [(QP Q Q P Q )R k ] tr (QP QR k ) tr (Q P Q R k ) tr (QP R k QR k ) tr (Q P R k Q R k ) tr (P R k QR k P R k Q R k ) tr (P R k QR k (R k R k QR k P R k + P R k QR k )) tr (P R k + QR k R k ). P R k, QR k, and R k are finite rank projections and their traces are integers, so tr (P R k + QR k R k ) Z, k. As every term in the series is an integer, we conclude that they are eventually zero, and that a b Z. 3.2 One of the Subspaces K, K has Finite Dimension In this context we will show the Carpenter s theorem for the finite dimensional subspaces K, K of an infinite dimensional Hilbert space, where the sum of the diagonal elements of the projection P k is finite and integer. The proof comes as consequence of Theorem

Theorem 3.9. If $\{e_j\}_{j \in J}$ is an orthonormal basis for an infinite-dimensional Hilbert space $H$, and $\{t_j\}_{j \in J} \subset [0,1]$, then the following statements are equivalent:

1. There exists an $m$-dimensional subspace $K \subset H$ such that $\|P_K e_j\|^2 = t_j$.
2. $\sum_{j \in J} t_j = m$.

Proof. First we will show (2) $\Rightarrow$ (1). As in Theorem 3.8, we write $\{t_j\} = \{t'_j\} \cup \{t''_j\}$, with $0 \le t'_j \le \frac{1}{2}$ and $\frac{1}{2} < t''_j \le 1$. Since $\sum_j t_j < \infty$ and $t_j \ge 0$, the set $\{t''_j\}$ is necessarily finite, i.e. $\{t''_j\} = \{t''_1, \dots, t''_k\}$. Let $a = \sum_j t'_j$, $b = \sum_{j=1}^k (1 - t''_j)$. Then

$a - b = \sum_j t'_j - \sum_{j=1}^k (1 - t''_j) = \sum_j t_j - k = m - k \in \mathbb{Z}.$

Since $a - b \in \mathbb{Z}$ we apply Theorem 3.8 to get a projection $P_K$ such that the diagonal of $P_K$ is $t_1, t_2, \dots$. And $\dim K = \operatorname{tr}(P_K) = \sum_j \langle P_K e_j, e_j\rangle = \sum_j t_j = m$.

For the other direction, (1) $\Rightarrow$ (2): Let $P_K$ be the orthogonal projection onto $K$. As $\dim K = m$, we get $m = \operatorname{tr}(P_K) = \sum_j \langle P_K e_j, e_j\rangle = \sum_j \|P_K e_j\|^2 = \sum_j t_j$.

When $K$ has finite co-dimension, we can apply Theorem 3.9 to its orthogonal complement $K^\perp$ to obtain a projection $P_{K^\perp}$ with prescribed diagonal $\{s_j\}$ and $P_K = I - P_{K^\perp}$. So the diagonal of $P_K$ is $\{1 - s_j\}$. We thus get the following result:

Theorem 3.10. If $\{e_j\}$ is an orthonormal basis of $H$, and $\{t_j\}_{j \in J} \subset [0,1]$, then the following statements are equivalent:

1. There exists $K \subset H$, of co-dimension $m$, such that $\|P_K e_j\|^2 = t_j$ for all $j \in J$.
2. $\sum_{j \in J} (1 - t_j) = m$.

3.3 A Pythagorean Theorem for Infinite Doubly Stochastic Matrices

In Chapter 2 we defined Pythagorean matrices (Definition 2.20); then we studied finite doubly stochastic matrices $A$, and we calculated the weight of a block $B$ of the matrix $A$ and the weight of its complementary block. Then we proved a Pythagorean Theorem (Theorem 2.21) for finite doubly stochastic matrices, where $w(B) - w(B') \in \mathbb{Z}$. For an infinite doubly stochastic matrix we can't do that immediately, because we are dealing with infinitely many rows and columns. Kadison defines the weight of an infinite doubly stochastic matrix by taking the sum as a limit: the sets of rows and columns are infinite, so he takes the family of all finite subsets, which, ordered by inclusion, gives a net, and then he takes the limit over the net. By using Theorem 3.8 we will show a Pythagorean Theorem for an infinite Pythagorean matrix when it has infinite complementary blocks, each of them with finite weight.

Definition 3.11. We say that a doubly stochastic matrix $A$ is orthostochastic if $A_{ij} = |U_{ij}|^2$, where $U$ is a unitary matrix [4].
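Definition 3.11 is easy to illustrate in finite dimensions (a sketch of our own, using a random unitary obtained from a QR factorization): the entrywise squared moduli of any unitary matrix form a doubly stochastic, in fact orthostochastic, matrix.

import numpy as np

rng = np.random.default_rng(3)
n = 4
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(Z)                 # a random unitary matrix
A = np.abs(U) ** 2                     # orthostochastic: A_ij = |U_ij|^2
print(np.allclose(A.sum(axis=0), 1.0), np.allclose(A.sum(axis=1), 1.0), bool((A >= 0).all()))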

Lemma 3.12. If $A$ is a Pythagorean matrix, then it is doubly stochastic.

Proof. To show that a Pythagorean matrix is doubly stochastic we will show that it is orthostochastic. Let $\{e_i\}$, $\{f_j\}$ be orthonormal bases with $A_{ij} = |\langle e_i, f_j\rangle|^2$. We know that there exists a unitary $U$ with $Uf_i = e_i$. So $A_{ij} = |\langle e_i, f_j\rangle|^2 = |\langle Uf_i, f_j\rangle|^2 = |U_{ji}|^2$.

How can an infinite doubly stochastic matrix that is Pythagorean have infinite complementary blocks with finite weights? To make that clear we will discuss an example. Let

$a_i = 2^{i}$ if $i \in \mathbb{Z}_-$, $\qquad a_i = 1 - 2^{-i}$ if $i \in \mathbb{Z}_+$,

and let $a = \sum_{i \in \mathbb{Z}_-} a_i = 1$, $b = \sum_{i \in \mathbb{Z}_+} (1 - a_i) = 1$. Write $\mathbb{Z}_0 = \mathbb{Z}_+ \cup \mathbb{Z}_-$. By using Theorem 3.8, since $a - b \in \mathbb{Z}$, there exists an infinite dimensional $K \subset H$, with infinite dimensional orthogonal complement $K^\perp$, such that the diagonal of the projection $P_K$ is $\{a_i\}_{i \in \mathbb{Z}_0}$. When $j \in \mathbb{Z}_+$, $\|(I - P_K)e_j\|^2 = 1 - \|P_K e_j\|^2 = 1 - a_j = 2^{-j}$. Let $\{f_j\}_{j \in \mathbb{Z}_+}$, $\{f_j\}_{j \in \mathbb{Z}_-}$ be orthonormal bases for $K$, $K^\perp$ respectively. Let $a_{ij} = |\langle e_i, f_j\rangle|^2$; then $A = (a_{ij})$ is an infinite Pythagorean matrix. We have

$\sum_{j \in \mathbb{Z}_+} a_{ij} = \sum_{j \in \mathbb{Z}_+} |\langle e_i, f_j\rangle|^2 = \|P_K e_i\|^2 = a_i$ for all $i \in \mathbb{Z}_0$,

and so

$\sum_{j \in \mathbb{Z}_-} a_{ij} = \|(I - P_K)e_i\|^2 = 1 - a_i$ for all $i$.

Then

$\sum_{i \in \mathbb{Z}_-}\sum_{j \in \mathbb{Z}_+} a_{ij} = \sum_{i \in \mathbb{Z}_-} a_i = 1 \qquad$ and $\qquad \sum_{i \in \mathbb{Z}_+}\sum_{j \in \mathbb{Z}_-} a_{ij} = \sum_{i \in \mathbb{Z}_+} (1 - a_i) = 1.$

Thus the weight of both complementary blocks is finite, and the difference is an integer.

Theorem 3.13. If $A$ is an infinite Pythagorean matrix, and $B, B'$ are a block and its complement in $A$, and both their weights are finite, then $w(B) - w(B') \in \mathbb{Z}$.

Proof. Since both blocks have finite weight, both $B, B'$ are infinite (because the complement of a finite block has infinite weight). Let us write $A = (a_{ij})$, $i, j \in \mathbb{Z}_0$, where $B = (a_{ij})_{i,j \in \mathbb{Z}_-}$ and $B' = (a_{ij})_{i,j \in \mathbb{Z}_+}$. As $A$ is Pythagorean, there exist orthonormal bases $\{e_i\}_{i \in \mathbb{Z}_0}$, $\{f_j\}_{j \in \mathbb{Z}_0}$ for $H$ such that $|\langle e_i, f_j\rangle|^2 = a_{ij}$ for all $i, j \in \mathbb{Z}_0$. Let $K \subset H$ be the subspace spanned by $\{f_j\}_{j \in \mathbb{Z}_-}$. Then

$w(B) = \sum_{i,j \in \mathbb{Z}_-} a_{ij} = \sum_{i \in \mathbb{Z}_-}\sum_{j \in \mathbb{Z}_-} |\langle e_i, f_j\rangle|^2 = \sum_{i \in \mathbb{Z}_-}\sum_{j \in \mathbb{Z}_-} \langle e_i, f_j\rangle\langle f_j, e_i\rangle = \sum_{i \in \mathbb{Z}_-} \langle P_K e_i, e_i\rangle = \sum_{i \in \mathbb{Z}_-} \|P_K e_i\|^2.$

Similarly,

$w(B') = \sum_{i,j \in \mathbb{Z}_+} a_{ij} = \sum_{i \in \mathbb{Z}_+}\sum_{j \in \mathbb{Z}_+} |\langle e_i, f_j\rangle|^2 = \sum_{i \in \mathbb{Z}_+} \|(I - P_K)e_i\|^2 = \sum_{i \in \mathbb{Z}_+} (1 - \|P_K e_i\|^2).$

As both weights are finite, the implication (1) $\Rightarrow$ 2(ii) in Theorem 3.8 gives us $w(B) - w(B') \in \mathbb{Z}$.

If we compare the finite and infinite dimensional versions of the Pythagorean Theorem for doubly stochastic matrices, 2.21 and 3.13, we note that in the finite-dimensional case the result holds for any doubly stochastic matrix, while the infinite dimensional case requires an additional hypothesis (i.e. Pythagorean). We don't know if Theorem 3.13 holds for arbitrary doubly stochastic infinite matrices. We note below that not every doubly stochastic matrix is Pythagorean, starting with dimension 3.

For example, if we take any $2 \times 2$ doubly stochastic matrix

$A = \begin{pmatrix} a & 1-a \\ 1-a & a \end{pmatrix}$

and then construct two orthonormal bases $e_1, e_2$ and $f_1, f_2$, where $f_1 = \sqrt{a}\,e_1 + \sqrt{1-a}\,e_2$ and $f_2 = -\sqrt{1-a}\,e_1 + \sqrt{a}\,e_2$, then $A_{ij} = |\langle e_i, f_j\rangle|^2$, so $A$ is Pythagorean.

What about $3 \times 3$ doubly stochastic matrices? Consider

$A = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix}.$

If $A$ were Pythagorean, then there would exist orthonormal bases $\{e_1, e_2, e_3\}$, $\{f_1, f_2, f_3\}$ with $A_{ij} = |\langle e_i, f_j\rangle|^2$. Then $0 = A_{31} = |\langle e_3, f_1\rangle|^2$, so $f_1 = se_1 + re_2$; and $0 = A_{22} = |\langle e_2, f_2\rangle|^2$, so $f_2 = pe_1 + qe_3$. Also, $\frac{1}{2} = A_{11} = |\langle e_1, f_1\rangle|^2 = |s|^2$, so $s \ne 0$, and $\frac{1}{2} = A_{12} = |\langle e_1, f_2\rangle|^2 = |p|^2$, so $p \ne 0$. But then $\langle f_1, f_2\rangle = s\bar{p} \ne 0$, contradicting the fact that $\{f_1, f_2, f_3\}$ is an orthonormal basis. So every $2 \times 2$ doubly stochastic matrix is Pythagorean, but bigger doubly stochastic matrices will not necessarily be Pythagorean.
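The 2 × 2 construction above can be checked directly; the following sketch (our own illustration, for the sample value a = 0.3) builds f_1, f_2 and recovers A_ij = |⟨e_i, f_j⟩|².

import numpy as np

a = 0.3
A = np.array([[a, 1 - a],
              [1 - a, a]])
e1, e2 = np.eye(2)
f1 = np.sqrt(a) * e1 + np.sqrt(1 - a) * e2
f2 = -np.sqrt(1 - a) * e1 + np.sqrt(a) * e2
F = np.column_stack([f1, f2])
print(np.allclose(F.T @ F, np.eye(2)))        # {f_1, f_2} is an orthonormal basis
G = np.array([[np.dot(e, f) ** 2 for f in (f1, f2)] for e in (e1, e2)])
print(np.allclose(G, A))                      # A_ij = |<e_i, f_j>|^2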

Chapter 4

A Schur-Horn Theorem in the Infinite Dimensional Case

In this chapter we will see a generalization of the Schur-Horn Theorem to infinite dimensions by A. Neumann [11], together with an explanation of this result. After that, we will consider W. Arveson and R. Kadison's Schur-Horn theorem for positive trace-class operators on an infinite dimensional Hilbert space [3].

4.1 Majorization in Infinite Dimension

To begin, we will define majorization in infinite dimension, and for that we will use $\ell^\infty(\mathbb{N}) \subset B(H)$ instead of $\mathbb{R}^n$, where $\ell^\infty(\mathbb{N})$ has real entries. We chose $\ell^\infty(\mathbb{N}) \subset B(H)$ here because the diagonal of a self-adjoint operator, as a matrix, is a bounded sequence of real numbers, so we can think of $\ell^\infty(\mathbb{N})$ as the diagonal self-adjoint matrices inside $B(H)$. We defined majorization in finite dimension as follows:

For any two finite vectors $x, y \in \mathbb{R}^n$, we say $x$ is majorized by $y$, denoted $x \prec y$, if $\sum_{i=1}^k x_i^\downarrow \le \sum_{i=1}^k y_i^\downarrow$ for $k < n$, and $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$.

One can try to generalize this, naively, by saying that for $x, y \in \ell^\infty(\mathbb{N})$, $x$ is majorized by $y$ if $\sum_{i=1}^k x_i^\downarrow \le \sum_{i=1}^k y_i^\downarrow$ for $k < \infty$, and $\sum_{i=1}^\infty x_i = \sum_{i=1}^\infty y_i$. But such a majorization would fail to generalize many of the properties that finite-dimensional majorization enjoys. For instance, in finite dimension (Proposition 1.4) we have that $x \prec y$ if and only if $\sum_{i=1}^k x_i^\uparrow \ge \sum_{i=1}^k y_i^\uparrow$ for $k < n$, and $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$. But in $\ell^\infty(\mathbb{N})$, the numbers $x_i^\downarrow$ are not defined if $x_n = 1 - \frac{1}{n}$, and neither are the $x_i^\uparrow$ if $x_n = \frac{1}{n}$. Also, looking at Theorem 1.10, the sequences $x = (1, 1, 1, \dots)$ and $y = (2, 2, 2, \dots)$ would satisfy $x \prec y$, but $x \notin \operatorname{conv}\{Py : P \text{ a permutation}\}$ (not even in the closure).

What A. Neumann does to solve this problem is, instead of taking the sums $\sum_{i=1}^k x_i^\downarrow$, $k < n$, to define

$U_k(x) = \sup\Big\{\sum_{i \in F} x_i : |F| = k\Big\},$

i.e. he takes all possible sums of $k$ coordinates of $x$, and then he takes the supremum. But this is not enough; we can see the reason in the following example.

Example 4.1. Let $x, y \in \ell^\infty(\mathbb{N})$ be such that $x = (1, 1, 1, 1, \dots)$ and $y = (1, 0, 1, 0, \dots)$; then $U_k(x) = k$ and $U_k(y) = k$. So both vectors would majorize each other, and having double majorization one would expect both vectors to be permutations of each other, as in Proposition 1.3. So we would have $x \prec y$, $y \prec x$ $\Rightarrow$ $x = Py$, which in this example clearly cannot happen.

Note that if we have $\sum_{i=1}^k x_i \le \sum_{i=1}^k y_i$ and $\sum_{i=1}^k y_i \le \sum_{i=1}^k x_i$ for all $k$, then $\sum_{i=1}^k x_i = \sum_{i=1}^k y_i$. Neumann uses this idea to (implicitly) define majorization in infinite dimension as follows [11]:

Definition 4.2. Let $x, y \in \ell^\infty(\mathbb{N})$; then we say $x \prec y$ if for all $k \in \mathbb{N}$, $U_k(x) \le U_k(y)$ and $L_k(x) \ge L_k(y)$, where

$U_k(x) = \sup\Big\{\sum_{i \in F} x_i : |F| = k\Big\}, \qquad L_k(x) = \inf\Big\{\sum_{i \in F} x_i : |F| = k\Big\}.$

So, in Example 4.1, $U_k(x) = L_k(x) = k$, $U_k(y) = k$, $L_k(y) = 0$; then $x \prec y$ is true, but $y \nprec x$.

4.2 Neumann's Schur-Horn Theorem

We start by defining some terminology to be used later [15].

Definition 4.3. A subalgebra $\mathcal{A} \subset B(H)$ is maximal abelian if any two elements of $\mathcal{A}$ commute, and $\mathcal{A}$ is not properly contained in any other commutative subalgebra of $B(H)$.

Definition 4.4. A von Neumann algebra $M$ on $H$ is a $*$-subalgebra of $B(H)$ such that $M = M''$, where $M''$ is the double commutant of $M$.

Definition 4.5. We say that a von Neumann algebra $M$ is atomic when every nonzero projection majorizes a nonzero minimal projection.

Definition 4.6. Let $\mathcal{A} \subset M$ be a subalgebra of a von Neumann algebra $M$. Then we say a linear map $E$ is a conditional expectation when $E : M \to \mathcal{A}$ is onto, $E = E^2$, and $\|E\| = 1$. Moreover, we say that $E$ is trace preserving when $\operatorname{tr} \circ E = \operatorname{tr}$.

In the finite dimensional case, the Schur-Horn Theorem 2.17 states that for every $x, y \in \mathbb{R}^n$ we have

$\{M_x : x \prec y\} = D\{UM_yU^* : U \in \mathcal{U}(n)\}.$

If we want to do the analogous thing in the infinite dimensional case, we take $x, y \in \ell^\infty(\mathbb{N})$, where we see $\ell^\infty(\mathbb{N})$ as the diagonal operators for a fixed orthonormal basis. If we write $E$ for the projection onto the diagonal, then we expect

$\{M_x : x \prec y\} = E\{UM_yU^* : U \in \mathcal{U}(H)\}.$

This equality cannot hold without norm closure: if we go back to our Example 4.1, where $x = (1, 1, 1, 1, \dots)$ and $y = (1, 0, 1, 0, \dots)$, then $x \prec y$ but $M_x = I \notin E\{UM_yU^* : U \in \mathcal{U}(H)\}$, while $M_x = I \in \overline{E\{UM_yU^* : U \in \mathcal{U}(H)\}}$. So the closure seems to be necessary to satisfy the equality of the Schur-Horn Theorem in the infinite dimensional case, and this is what Neumann does to form the next theorem ([11], Corollary 2.18, and Theorem 3.13).

Theorem 4.7 ([11]). For $x, y \in \ell^\infty(\mathbb{N})$ we have

$\{M_x : x \prec y\} = \overline{E\{UM_yU^* : U \in \mathcal{U}(H)\}}.$

4.3 A Strict Schur-Horn Theorem for Positive Trace-Class Operators

We will define positive trace-class operators and then we will present a Schur-Horn Theorem for positive trace-class operators on an infinite dimensional Hilbert space. We call this theorem strict, because no closure is required after projecting onto the diagonal. We refer to the definitions of positive and trace-class operators at the beginning of Chapter 3.

Definition 4.8. For a trace-class operator $A$, its one-norm is $\|A\|_1 = \operatorname{tr}((A^*A)^{1/2})$.

Definition 4.9. Let $A, B \in B(H)$ be two trace-class operators. We say $A$ and $B$ are $L^1$ equivalent if there exists a sequence of unitary operators $\{U_n\}$ such that $\|A - U_n B U_n^*\|_1 \to 0$ as $n \to \infty$.


More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Linear algebra 2. Yoav Zemel. March 1, 2012

Linear algebra 2. Yoav Zemel. March 1, 2012 Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

Compression, Matrix Range and Completely Positive Map

Compression, Matrix Range and Completely Positive Map Compression, Matrix Range and Completely Positive Map Iowa State University Iowa-Nebraska Functional Analysis Seminar November 5, 2016 Definitions and notations H, K : Hilbert space. If dim H = n

More information

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality (October 29, 2016) Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2016-17/03 hsp.pdf] Hilbert spaces are

More information

MAT 445/ INTRODUCTION TO REPRESENTATION THEORY

MAT 445/ INTRODUCTION TO REPRESENTATION THEORY MAT 445/1196 - INTRODUCTION TO REPRESENTATION THEORY CHAPTER 1 Representation Theory of Groups - Algebraic Foundations 1.1 Basic definitions, Schur s Lemma 1.2 Tensor products 1.3 Unitary representations

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016 Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS

ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS J. WARNER SUMMARY OF A PAPER BY J. CARLSON, E. FRIEDLANDER, AND J. PEVTSOVA, AND FURTHER OBSERVATIONS 1. The Nullcone and Restricted Nullcone We will need

More information

POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS

POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS Adv. Oper. Theory 3 (2018), no. 1, 53 60 http://doi.org/10.22034/aot.1702-1129 ISSN: 2538-225X (electronic) http://aot-math.org POSITIVE MAP AS DIFFERENCE OF TWO COMPLETELY POSITIVE OR SUPER-POSITIVE MAPS

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

MATH 426, TOPOLOGY. p 1.

MATH 426, TOPOLOGY. p 1. MATH 426, TOPOLOGY THE p-norms In this document we assume an extended real line, where is an element greater than all real numbers; the interval notation [1, ] will be used to mean [1, ) { }. 1. THE p

More information

MATH 326: RINGS AND MODULES STEFAN GILLE

MATH 326: RINGS AND MODULES STEFAN GILLE MATH 326: RINGS AND MODULES STEFAN GILLE 1 2 STEFAN GILLE 1. Rings We recall first the definition of a group. 1.1. Definition. Let G be a non empty set. The set G is called a group if there is a map called

More information

Definitions, Theorems and Exercises. Abstract Algebra Math 332. Ethan D. Bloch

Definitions, Theorems and Exercises. Abstract Algebra Math 332. Ethan D. Bloch Definitions, Theorems and Exercises Abstract Algebra Math 332 Ethan D. Bloch December 26, 2013 ii Contents 1 Binary Operations 3 1.1 Binary Operations............................... 4 1.2 Isomorphic Binary

More information

Math 407: Linear Optimization

Math 407: Linear Optimization Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University

More information

Discrete Applied Mathematics

Discrete Applied Mathematics Discrete Applied Mathematics 194 (015) 37 59 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: wwwelseviercom/locate/dam Loopy, Hankel, and combinatorially skew-hankel

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

ADJOINTS, ABSOLUTE VALUES AND POLAR DECOMPOSITIONS

ADJOINTS, ABSOLUTE VALUES AND POLAR DECOMPOSITIONS J. OPERATOR THEORY 44(2000), 243 254 c Copyright by Theta, 2000 ADJOINTS, ABSOLUTE VALUES AND POLAR DECOMPOSITIONS DOUGLAS BRIDGES, FRED RICHMAN and PETER SCHUSTER Communicated by William B. Arveson Abstract.

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Maximizing the numerical radii of matrices by permuting their entries

Maximizing the numerical radii of matrices by permuting their entries Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and

More information

(1) A frac = b : a, b A, b 0. We can define addition and multiplication of fractions as we normally would. a b + c d

(1) A frac = b : a, b A, b 0. We can define addition and multiplication of fractions as we normally would. a b + c d The Algebraic Method 0.1. Integral Domains. Emmy Noether and others quickly realized that the classical algebraic number theory of Dedekind could be abstracted completely. In particular, rings of integers

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Math 110, Spring 2015: Midterm Solutions

Math 110, Spring 2015: Midterm Solutions Math 11, Spring 215: Midterm Solutions These are not intended as model answers ; in many cases far more explanation is provided than would be necessary to receive full credit. The goal here is to make

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

(K + L)(c x) = K(c x) + L(c x) (def of K + L) = K( x) + K( y) + L( x) + L( y) (K, L are linear) = (K L)( x) + (K L)( y).

(K + L)(c x) = K(c x) + L(c x) (def of K + L) = K( x) + K( y) + L( x) + L( y) (K, L are linear) = (K L)( x) + (K L)( y). Exercise 71 We have L( x) = x 1 L( v 1 ) + x 2 L( v 2 ) + + x n L( v n ) n = x i (a 1i w 1 + a 2i w 2 + + a mi w m ) i=1 ( n ) ( n ) ( n ) = x i a 1i w 1 + x i a 2i w 2 + + x i a mi w m i=1 Therefore y

More information

Notes on Mathematics

Notes on Mathematics Notes on Mathematics - 12 1 Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam 1 Supported by a grant from MHRD 2 Contents I Linear Algebra 7 1 Matrices 9 1.1 Definition of a Matrix......................................

More information

Some notes on Coxeter groups

Some notes on Coxeter groups Some notes on Coxeter groups Brooks Roberts November 28, 2017 CONTENTS 1 Contents 1 Sources 2 2 Reflections 3 3 The orthogonal group 7 4 Finite subgroups in two dimensions 9 5 Finite subgroups in three

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Math 3108: Linear Algebra

Math 3108: Linear Algebra Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

MATH 532: Linear Algebra

MATH 532: Linear Algebra MATH 532: Linear Algebra Chapter 5: Norms, Inner Products and Orthogonality Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2015 fasshauer@iit.edu MATH 532 1 Outline

More information

Math 123 Homework Assignment #2 Due Monday, April 21, 2008

Math 123 Homework Assignment #2 Due Monday, April 21, 2008 Math 123 Homework Assignment #2 Due Monday, April 21, 2008 Part I: 1. Suppose that A is a C -algebra. (a) Suppose that e A satisfies xe = x for all x A. Show that e = e and that e = 1. Conclude that e

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

Convexity of the Joint Numerical Range

Convexity of the Joint Numerical Range Convexity of the Joint Numerical Range Chi-Kwong Li and Yiu-Tung Poon October 26, 2004 Dedicated to Professor Yik-Hoi Au-Yeung on the occasion of his retirement. Abstract Let A = (A 1,..., A m ) be an

More information

LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS. 1. Lie groups

LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS. 1. Lie groups LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS 1. Lie groups A Lie group is a special smooth manifold on which there is a group structure, and moreover, the two structures are compatible. Lie groups are

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

Generalized eigenspaces

Generalized eigenspaces Generalized eigenspaces November 30, 2012 Contents 1 Introduction 1 2 Polynomials 2 3 Calculating the characteristic polynomial 5 4 Projections 7 5 Generalized eigenvalues 10 6 Eigenpolynomials 15 1 Introduction

More information

RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA

RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Discussiones Mathematicae General Algebra and Applications 23 (2003 ) 125 137 RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Seok-Zun Song and Kyung-Tae Kang Department of Mathematics,

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

CHAPTER 6. Representations of compact groups

CHAPTER 6. Representations of compact groups CHAPTER 6 Representations of compact groups Throughout this chapter, denotes a compact group. 6.1. Examples of compact groups A standard theorem in elementary analysis says that a subset of C m (m a positive

More information

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics Ulrich Meierfrankenfeld Department of Mathematics Michigan State University East Lansing MI 48824 meier@math.msu.edu

More information