SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE
SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE

BABHRU JOSHI AND M. SEETHARAMA GOWDA

Abstract. We consider the semidefinite cone K^n consisting of all n × n real symmetric positive semidefinite matrices. A set I in K^n is said to be a Schur ideal if it is closed under addition, multiplication by nonnegative scalars, and Schur multiplication by any element of K^n. A Schur homomorphism of K^n is a mapping of K^n to itself that preserves addition, (nonnegative) scalar multiplication, and Schur products. This paper is concerned with Schur ideals and homomorphisms of K^n. We show that in the topology induced by the trace inner product, Schur ideals in K^n need not be closed, all finitely generated Schur ideals are closed, and in K^2, a Schur ideal is closed if and only if it is a principal ideal. We also characterize Schur homomorphisms of K^n and, in particular, show that any Schur automorphism of K^n is of the form Φ(X) = PXP^T for some permutation matrix P.

Key words. Semidefinite cone, Schur ideals and homomorphisms.

AMS subject classifications. 15A86, 15B48, 15B57.

1. Introduction. Consider the Euclidean space R^n, whose elements are regarded as column vectors. Let S^n denote the set of all real n × n symmetric matrices. It is known that S^n is a finite dimensional real Hilbert space under the trace inner product defined by ⟨X, Y⟩ = trace(XY) = Σ_{i,j} x_ij y_ij, where X = [x_ij] and Y = [y_ij]. The space S^n also carries the Schur product (sometimes called the Hadamard product) defined by X ∘ Y := [x_ij y_ij]. With respect to the usual addition, (real) scalar multiplication, and the Schur product, S^n becomes a commutative algebra. In this algebra, a nonempty set is an ideal if it is closed under addition, scalar multiplication, and (Schur) multiplication by any element of S^n. Also, a (Schur) homomorphism of S^n is a linear transformation on S^n that preserves Schur products.
December 10, 2013; revised September 29, 2014.

Department of Computational and Applied Mathematics, Rice University, Houston, TX 77005, USA; babhru.joshi@rice.edu

Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland 21250, USA; gowda@umbc.edu

We note that every ideal in this algebra, being a finite
dimensional linear subspace of S^n, is topologically closed (in the topology induced by the trace inner product) and finitely generated. Also, it is easy to describe the ideals and homomorphisms of this algebra; see Sections 3 and 6.

The aim of this paper is to study Schur ideals and homomorphisms of the semidefinite cone. A matrix A ∈ S^n is said to be positive semidefinite if x^T A x ≥ 0 for all x ∈ R^n. The set of all positive semidefinite matrices in S^n is the semidefinite cone and is denoted by K^n. It is known that K^n is a closed convex cone in S^n; this means that K^n is topologically closed and

X, Y ∈ K^n, 0 ≤ λ ∈ R ⇒ X + Y ∈ K^n, λX ∈ K^n.

A famous theorem of Schur (see [5, Theorem 7.5.3]) says that

X, Y ∈ K^n ⇒ X ∘ Y ∈ K^n.

So K^n is closed under addition, multiplication by nonnegative scalars, and the Schur product. Thus, following [2], we say that K^n is a semi-algebra (under these operations). We call nonempty subsets of K^n that share these properties sub-semi-algebras of K^n, and note the following examples:

Example 1. Let DNN^n denote the set of all n × n doubly nonnegative matrices; these are matrices in S^n that are both (entrywise) nonnegative and positive semidefinite. This set is a sub-semi-algebra of K^n.

Example 2. Let CP^n denote the set of all completely positive matrices in S^n. These are matrices of the form Σ_{i=1}^N u_i u_i^T, where each u_i ∈ R^n_+ (the nonnegative orthant of R^n) and N is a natural number. Alternatively, these are matrices of the form BB^T, where B is a nonnegative (rectangular) matrix. Since uu^T ∘ vv^T = (u ∘ v)(u ∘ v)^T, where u ∘ v is the componentwise product of the vectors u and v, it follows that the Schur product of two completely positive matrices is once again a completely positive matrix. Thus, CP^n is a sub-semi-algebra of K^n.

These two cones, along with K^n, have become very important in optimization, particularly in conic linear programming (where one minimizes a linear function subject to linear constraints over a cone); see [1] and [4].
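Schur's theorem and the Schur-product closure of CP^n can be checked numerically. The sketch below (our code, not from the paper; numpy-based, with a hypothetical helper `is_psd`) builds random positive semidefinite and completely positive matrices and verifies that the entrywise product stays in the respective cone:

```python
import numpy as np

def is_psd(A, tol=1e-9):
    """Check positive semidefiniteness of a symmetric matrix via eigenvalues."""
    return np.min(np.linalg.eigvalsh((A + A.T) / 2)) >= -tol

rng = np.random.default_rng(0)
n = 4

# Random PSD matrices: X = B B^T is always positive semidefinite.
X = (lambda B: B @ B.T)(rng.standard_normal((n, n)))
Y = (lambda B: B @ B.T)(rng.standard_normal((n, n)))

# Schur's theorem: the entrywise (Schur/Hadamard) product of PSD matrices is PSD.
assert is_psd(X * Y)

# Completely positive matrices: C = B B^T with B entrywise nonnegative.
C1 = (lambda B: B @ B.T)(rng.random((n, 6)))
C2 = (lambda B: B @ B.T)(rng.random((n, 6)))
# C1 ∘ C2 is again PSD (indeed completely positive, via (u∘v)(u∘v)^T terms).
assert is_psd(C1 * C2)
```

Note that `*` on numpy arrays is exactly the entrywise (Schur) product, while `@` is the ordinary matrix product.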
Thus, the problem of finding/characterizing such sub-semi-algebras of K^n and studying their properties becomes interesting and useful. Motivated by this, we consider special sub-semi-algebras
of K^n called Schur ideals. These are nonempty subsets of K^n that are closed under addition, multiplication by nonnegative scalars, and Schur multiplication by any element of K^n: a nonempty set I ⊆ K^n is a Schur ideal in K^n if

X, Y ∈ I, Z ∈ K^n, 0 ≤ λ ∈ R ⇒ X + Y, λX, X ∘ Z ∈ I.

We also consider the related Schur homomorphisms of K^n, which are mappings from K^n to K^n that preserve addition, (nonnegative) scalar multiplication, and Schur products.

Now, motivated by the problem of describing Schur ideals and homomorphisms of the semidefinite cone, we raise the following questions (using the term "closed" to mean topologically closed):

(a) Is every Schur ideal in K^n closed? How about finitely generated ones?
(b) Is every closed Schur ideal of K^n finitely generated?
(c) What are the Schur homomorphisms/automorphisms of K^n?

We show that:

(1) Schur ideals in K^n need not be closed,
(2) finitely generated Schur ideals are always closed,
(3) in K^2, a Schur ideal is closed if and only if it is a principal ideal, and
(4) any Schur homomorphism of K^n is, up to permutation, conjugation by a 0/1-matrix with each row containing at most one 1, followed by pinching. In particular, a Schur automorphism of K^n (this is a bijection of K^n that preserves the three operations on K^n) is of the form

Φ(X) = PXP^T,

where P is a permutation matrix.

Section 2 covers some background material. Section 3 deals with two elementary results characterizing Schur ideals in S^n and K^n. In Section 4, we give an example of a non-closed Schur ideal in any K^n, n ≥ 2. In Section 5, we show that all finitely generated Schur ideals in K^n are closed. Finally, in Section 6, we describe Schur homomorphisms/automorphisms of K^n.

2. Preliminaries. In the Euclidean space R^n, we let e_1, e_2, ..., e_n denote the standard unit vectors (so that e_i has a one in the i-th slot and zeros elsewhere). We let e = e_1 + e_2 + ··· + e_n and E := ee^T.
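The three closure operations that define a Schur ideal can be made concrete on a sample candidate. The sketch below (our code and helper names, not from the paper) verifies, on random data, that the set diag(K^n) of diagonal positive semidefinite matrices satisfies all three conditions:

```python
import numpy as np

def in_diag_ideal(A, tol=1e-9):
    """Membership in diag(K^n): diagonal with nonnegative diagonal entries."""
    return np.allclose(A, np.diag(np.diag(A))) and np.all(np.diag(A) >= -tol)

rng = np.random.default_rng(1)
n = 3
A = np.diag(rng.random(n))                             # element of diag(K^n)
B = np.diag(rng.random(n))                             # another element
Z = (lambda M: M @ M.T)(rng.standard_normal((n, n)))   # arbitrary element of K^n

assert in_diag_ideal(A + B)      # closed under addition
assert in_diag_ideal(2.5 * A)    # closed under nonnegative scalar multiplication
assert in_diag_ideal(A * Z)      # closed under Schur product with any Z in K^n
```

The last assertion works because Z, being positive semidefinite, has nonnegative diagonal entries, so A ∘ Z is again diagonal with nonnegative diagonal.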
We also let E_ij be the symmetric matrix with ones in the (i, j) and (j, i) slots and zeros elsewhere; in particular, E_ii has a 1 in the (i, i) slot and zeros elsewhere. A matrix is a 0/1-matrix if each of its entries is either 0 or 1.

We let S^n denote the space of all n × n real symmetric matrices. S^n becomes a Hilbert space under the trace inner product ⟨X, Y⟩ = trace(XY) = Σ_{i,j} x_ij y_ij, where
X = [x_ij] and Y = [y_ij]. The corresponding norm (called the Frobenius norm) is given by ‖X‖ = (Σ_{i,j} x_ij²)^{1/2}. Thus, convergence in this space is entrywise. From now on, we say that a set in S^n is closed if it is topologically closed under the Frobenius norm. We note the following property of the inner product:

⟨X ∘ Y, Z⟩ = ⟨X, Y ∘ Z⟩

for all X, Y, and Z in S^n. We let

K^n := {A ∈ S^n : x^T A x ≥ 0 for all x ∈ R^n}.

It is known that K^n is a closed convex cone in S^n. In particular, as convergence in S^n is entrywise, every bounded sequence in K^n has a subsequence that converges to an element of K^n.

In K^n, we have the Löwner ordering: X ⪯ Y (or Y ⪰ X) when Y − X ∈ K^n. It is known (see [5, Corollary 7.5.4]) that

(2.1) U, V ⪰ 0 ⇒ ⟨U, V⟩ ≥ 0.

Thus,

(2.2) 0 ⪯ X ⪯ Y ⇒ ⟨X, X⟩ ≤ ⟨X, Y⟩ ≤ ⟨Y, Y⟩ ⇒ ‖X‖ ≤ ‖Y‖.

It is known that a real symmetric matrix is positive semidefinite if and only if all its principal minors are nonnegative. In particular, for X = [x_ij] ∈ K^n,

(2.3) x_ii ≥ 0 and x_ii x_jj ≥ x_ij² for all i and j.

Thus, if a diagonal entry of X (belonging to K^n) is zero, then all elements in the row/column containing this entry are zero. Also, if some off-diagonal entry is nonzero, then the corresponding diagonal entries are nonzero.

Let diag(K^n) denote the set of all diagonal matrices in K^n. The interior of K^n is denoted by (K^n)°. Note that this interior consists of all positive definite matrices of S^n; these are the matrices A in S^n satisfying x^T A x > 0 for all 0 ≠ x ∈ R^n.

The pinching operation/transformation on S^n is defined as follows. Choosing a partition of [n] := {1, 2, ..., n}, we write any matrix X ∈ S^n in the block form X = [X_αβ]. Then the pinching operation/transformation (corresponding to this partition)
takes X to Y = [Y_αβ], where Y_αβ = X_αβ when α = β and Y_αβ = 0 otherwise. (This means that Y is obtained from X by setting all the off-diagonal blocks to zero.) We write Y = X^pinch.

For any finite set {C_1, C_2, ..., C_N} in K^n, we let

I_{C_1, C_2, ..., C_N} := { Σ_{i=1}^N C_i ∘ X_i : X_i ∈ K^n, i = 1, 2, ..., N }

denote the Schur ideal generated by {C_1, C_2, ..., C_N}. We say that a Schur ideal of K^n is a principal ideal if it is generated by a single matrix in K^n.

3. Two elementary results on Schur ideals. We start with a description of Schur ideals in S^n. Let [n] := {1, 2, 3, ..., n}. We say that a subset Δ of [n] × [n] is symmetric if (i, j) ∈ Δ ⇒ (j, i) ∈ Δ.

Proposition 3.1. Given any symmetric subset Δ of [n] × [n], the set

I = {X = [x_ij] ∈ S^n : x_kl = 0 for all (k, l) ∈ Δ}

is a Schur ideal of S^n. Conversely, every Schur ideal of S^n arises this way.

Proof. The first part of the proposition is easy to verify. We prove the converse part. Let I be any Schur ideal of S^n. If I = {0}, we take Δ = [n] × [n]. If I = S^n, we take Δ = ∅. Now assume that {0} ≠ I ≠ S^n. Define

Δ := {(k, l) : x_kl = 0 for all X = [x_ij] ∈ I}

and let J be the complement of Δ in [n] × [n]. Let Ĩ = {X = [x_ij] ∈ S^n : x_kl = 0 for all (k, l) ∈ Δ}. Then, I ⊆ Ĩ. Now for any (k, l) ∈ J, there is a Y ∈ I with y_kl ≠ 0. For this (k, l),

E_kl = (1/y_kl) Y ∘ E_kl ∈ I.

Therefore, for any X ∈ Ĩ,

X = Σ_{1 ≤ i ≤ j ≤ n} x_ij E_ij = Σ_{1 ≤ k ≤ l ≤ n, (k,l) ∈ J} x_kl E_kl ∈ I.

It follows that I = Ĩ.

The following proposition gives a necessary and sufficient condition for a set to be a Schur ideal in K^n.

Proposition 3.2. Let I be a set in K^n containing zero. Then I is a Schur ideal in K^n if and only if (i) I is a convex cone, and (ii) DXD ∈ I for every diagonal matrix D ∈ S^n and every X ∈ I.
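The key identity behind condition (ii) is that, for D = diag(r), the congruence DXD equals the Schur product X ∘ rr^T. A quick numerical check (our sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
r = rng.standard_normal(n)
D = np.diag(r)
X = (lambda B: B @ B.T)(rng.standard_normal((n, n)))  # X in K^n

# DXD has (i, j) entry r_i x_ij r_j, which is exactly the (i, j) entry
# of the Schur product X ∘ rr^T.
assert np.allclose(D @ X @ D, X * np.outer(r, r))
```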
Proof. Suppose I is a Schur ideal in K^n. Then for any λ ≥ 0 and X ∈ I, we have λX ∈ I. From this, it follows that I is a convex cone. Now take any diagonal matrix D = diag(r_1, r_2, ..., r_n) in S^n. Then, for r = (r_1, r_2, ..., r_n)^T, we have rr^T ∈ K^n and hence DXD = X ∘ rr^T ∈ I. This proves (ii). The converse is proved by using the fact that any matrix in K^n is a finite sum of matrices of the form rr^T.

4. A non-closed Schur ideal of K^n. In this section, we show that in any K^n, n ≥ 2, there is a Schur ideal that is not closed. First, we construct such an ideal in the cone K^2 of all 2 × 2 real symmetric positive semidefinite matrices.

Example 3. The set E = diag(K^2) ∪ (K^2)° is a non-closed ideal in K^2. To see this, let λ ≥ 0, A, A′ ∈ diag(K^2), B, B′ ∈ (K^2)°, and X ∈ K^2. Then, A + A′ ∈ diag(K^2), B + B′ ∈ (K^2)°, λ(A + B) ∈ E, and A ∘ X ∈ diag(K^2). If X is diagonal, then B ∘ X ∈ diag(K^2). Suppose X is not diagonal, so that

X = [[x, y], [y, z]],

with x > 0, z > 0 and xz − y² ≥ 0. Now write

B = [[a, b], [b, c]],

where a > 0, c > 0 and ac − b² > 0; we see that ax > 0, cz > 0 and (ax)(cz) > (b²)(y²). This means that B ∘ X ∈ (K^2)°. Hence in all cases, (A + B) ∘ X ∈ E. Thus, E is an ideal of K^2. To see that E is not closed, we produce a sequence in E whose limit is not in E:

[[1 + 1/k, 1], [1, 1 + 1/k]] → [[1, 1], [1, 1]].

Each term of the sequence is positive definite and so lies in E, while the limit is neither diagonal nor positive definite. Now, in any K^n, n ≥ 3, we define the set of block matrices

F = { [[X, 0], [0, 0]] : X ∈ E }.

This is a Schur ideal in K^n that is not closed.

5. Finitely generated ideals in K^n. Example 3 raises the question of which Schur ideals in K^n are closed. The following result partially answers this question.
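Before turning to that result, Example 3 can be probed numerically. Reading E as the union of diag(K^2) with the interior of K^2, and the sequence as X_k = [[1 + 1/k, 1], [1, 1 + 1/k]] (our reading of the example; code and helper names are ours):

```python
import numpy as np

def is_pd(A, tol=1e-9):
    """Positive definiteness via the smallest eigenvalue."""
    return np.min(np.linalg.eigvalsh((A + A.T) / 2)) > tol

def in_E(A):
    """Membership in E = diag(K^2) ∪ interior(K^2)."""
    diagonal_psd = np.allclose(A, np.diag(np.diag(A))) and np.all(np.diag(A) >= 0)
    return diagonal_psd or is_pd(A)

for k in (1, 10, 1000):
    Xk = np.array([[1.0 + 1.0 / k, 1.0], [1.0, 1.0 + 1.0 / k]])
    assert in_E(Xk)            # each term of the sequence lies in E

limit = np.ones((2, 2))        # the limit is positive semidefinite...
assert np.min(np.linalg.eigvalsh(limit)) >= -1e-9
assert not in_E(limit)         # ...but lies outside E, so E is not closed
```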
Theorem 5.1. Every finitely generated Schur ideal in K^n is closed.

The proof is based on the following two lemmas.

Lemma 5.2. Let A ∈ K^n with diag(A) > 0. If (X_k) is a sequence in K^n with (A ∘ X_k) convergent, then (X_k) is a bounded sequence.

Proof. Let X_k = [x_ij^(k)], A = [a_ij], and A ∘ X_k → Y. Then, by componentwise convergence, for all i, 1 ≤ i ≤ n, a_ii x_ii^(k) → y_ii and so x_ii^(k) → y_ii / a_ii. Now, each X_k is positive semidefinite and so, by (2.3),

x_ii^(k) x_jj^(k) ≥ (x_ij^(k))²

for all i, j = 1, 2, 3, ..., n. As (x_ii^(k)) and (x_jj^(k)) are bounded sequences, it follows that (x_ij^(k)) is also bounded for all i, j. Hence (X_k) is a bounded sequence.

Lemma 5.3. Let (X_k) be a sequence in K^n and A ∈ K^n with A ∘ X_k → Z ∈ K^n. Then there is an X ∈ K^n such that A ∘ X = Z.

Proof. If A = 0, we can take X = 0. So, assume A ≠ 0 and, without loss of generality, write

A = [[A_1, 0], [0, 0]],

where A_1 is positive semidefinite and diag(A_1) > 0. If we partition each X_k and Z conformally to A, we get

[[A_1, 0], [0, 0]] ∘ [[X_1^(k), X_2^(k)], [(X_2^(k))^T, X_3^(k)]] → [[Z_1, Z_2], [Z_2^T, Z_3]].

Thus, A_1 ∘ X_1^(k) → Z_1, Z_2 = 0, and Z_3 = 0. Now, from Lemma 5.2, (X_1^(k)) is a bounded sequence; we may assume that it converges to some X_1. Then

A ∘ [[X_1, 0], [0, 0]] = [[A_1 ∘ X_1, 0], [0, 0]] = Z.

Since [[X_1, 0], [0, 0]] ∈ K^n, the result follows.

Proof of Theorem 5.1. Let E be an ideal in K^n generated by a finite number of matrices. For simplicity, we assume that E is generated by two, say, A and B, so that any element of E is of the form A ∘ X + B ∘ Y for some X, Y ∈ K^n.
To show that E is closed, let (X_k) and (Y_k) be sequences of matrices in K^n such that A ∘ X_k + B ∘ Y_k → Z. We show that Z = A ∘ X + B ∘ Y for some X, Y ∈ K^n. Now, with respect to the Löwner ordering,

0 ⪯ A ∘ X_k ⪯ A ∘ X_k + B ∘ Y_k.

From (2.2), we get ‖A ∘ X_k‖ ≤ ‖A ∘ X_k + B ∘ Y_k‖. Thus, the sequence (A ∘ X_k) is bounded. We may assume (by going through a subsequence, if necessary) that A ∘ X_k → U. By Lemma 5.3, we may write U = A ∘ X for some X ∈ K^n. Similarly, we may assume that B ∘ Y_k → V, where V = B ∘ Y for some Y ∈ K^n. It follows that Z = A ∘ X + B ∘ Y with X, Y ∈ K^n. Thus, E is closed.

Remarks. By modifying the above proof, one can show the following: If I and J are two closed ideals in K^n, then so is I + J. In particular, if I is a closed ideal in K^n and

I^⊥ := {Y ∈ K^n : ⟨Y, X⟩ = 0 for all X ∈ I},

then I + I^⊥ is a closed ideal in K^n. We note, however, that I^⊥ may be just {0}, as in the case of the ideal consisting of all diagonal matrices in K^n.

Remarks. Every finitely generated ideal in K^n is closed, but we do not know if the converse holds. In the following theorem, we show that the converse does hold for K^2.

Theorem 5.4. Every closed ideal in K^2 is a principal ideal. Conversely, every principal ideal in K^2 is closed.

We begin with a lemma.

Lemma 5.5. Let E be a closed ideal in K^2 that contains a matrix with a nonzero off-diagonal entry. Let

(5.1) u := sup { β / √(αγ) : [[α, β], [β, γ]] ∈ E and α, β, γ > 0 }.

Then,

[[1, u], [u, 1]] ∈ E.

Moreover,

[[a, b], [b, c]] ∈ E ⇒ [[a, b/u], [b/u, c]] ∈ K^2.

Proof. Let [[α, β], [β, γ]] ∈ E with β ≠ 0; we may, if necessary, consider the Schur product of this matrix with the positive semidefinite matrix [[1, −1], [−1, 1]] and assume that β > 0. Then α, β, and γ are positive. Since αγ ≥ β², it follows that u in (5.1)
is well defined. Now consider a sequence

[[α_k, β_k], [β_k, γ_k]] ∈ E with α_k, β_k, γ_k > 0

and β_k / √(α_k γ_k) → u as k → ∞. Since

[[1/α_k, 1/√(α_k γ_k)], [1/√(α_k γ_k), 1/γ_k]]

is positive semidefinite (as it is symmetric and all its principal minors are nonnegative) and E is an ideal, it follows that

[[α_k, β_k], [β_k, γ_k]] ∘ [[1/α_k, 1/√(α_k γ_k)], [1/√(α_k γ_k), 1/γ_k]] = [[1, β_k/√(α_k γ_k)], [β_k/√(α_k γ_k), 1]] ∈ E.

Since E is closed, letting k → ∞, we have

[[1, u], [u, 1]] ∈ E.

Now for the second part of the lemma. The implication is obvious if b = 0. If b ≠ 0, then a, c > 0 and [[a, b], [b, c]] ∈ E, and so (by the definition of u), u ≥ |b| / √(ac). This shows that the symmetric matrix [[a, b/u], [b/u, c]] belongs to K^2, as its principal minors are nonnegative.

Proof of Theorem 5.4. If E is a principal ideal, then it is closed by Theorem 5.1. For the converse, let E be a closed ideal in K^2. Suppose first that E contains a matrix with a nonzero off-diagonal entry. Then from the preceding lemma there exists a nonzero number u with

U := [[1, u], [u, 1]] ∈ E.

Now, the principal ideal generated by U is contained in E. The identity

[[a, b], [b, c]] = [[1, u], [u, 1]] ∘ [[a, b/u], [b/u, c]]

shows that every element of E is in this principal ideal. Hence, E is a principal ideal generated by U. Now suppose that every off-diagonal entry of every matrix in E is zero. Then E consists of diagonal matrices. In this case, E is generated by one of the following:

[[0, 0], [0, 0]], [[1, 0], [0, 0]], [[0, 0], [0, 1]], [[1, 0], [0, 1]].

Thus, we have shown that E is a principal ideal.

6. Schur homomorphisms of K^n. In this section, we characterize Schur homomorphisms of K^n. A map f : K^n → K^n is a Schur homomorphism if for all X, Y ∈ K^n
and λ ≥ 0 in R, we have f(X + Y) = f(X) + f(Y), f(λX) = λf(X), and f(X ∘ Y) = f(X) ∘ f(Y). These are special cases of the Schur multiplicative maps (which preserve Schur products) studied in [3].

To motivate our results, we consider an n × n 0/1-matrix Q with each row containing at most one 1. Then, the mapping X ↦ QXQ^T is a Schur homomorphism of K^n. The process of pinching (corresponding to a partition of [n]) followed by a permutation retains the Schur homomorphism property; that is, the mapping

X ↦ P (QXQ^T)^pinch P^T

is a Schur homomorphism of K^n, where P denotes a permutation matrix. We now show that every Schur homomorphism arises this way.

Let f be a Schur homomorphism on K^n. Then f can be extended to S^n in the following way. As S^n = K^n − K^n, any element X of S^n can be written as X = A − B with A, B ∈ K^n. Then we define

(6.1) F(X) := f(A) − f(B).

Note that F(X) is well defined: If X = A − B = C − D with A, B, C, D ∈ K^n, then A + D = B + C and so f(A) + f(D) = f(B) + f(C), yielding F(X) = f(A) − f(B) = f(C) − f(D). Then, F : S^n → S^n is a Schur homomorphism of S^n; that is, F is linear and, for all X, Y ∈ S^n, F(X ∘ Y) = F(X) ∘ F(Y).

We now characterize Schur homomorphisms of S^n, such as F. First, a definition. We say that a mapping φ : [n] × [n] → [n] × [n] is symmetric if φ(i, j) = φ(j, i) for all (i, j); for notational convenience, we sometimes write φ(ij) in place of φ(i, j). Corresponding to such a mapping, we define Φ : S^n → S^n by

(6.2) Φ(X) = Y, where y_ij = x_φ(ij).

Theorem 6.1. Given any symmetric mapping φ on [n] × [n] and a symmetric 0/1-matrix Z, the mapping F : S^n → S^n defined by

F(X) = Z ∘ Φ(X) (X ∈ S^n)

is a Schur homomorphism of S^n. Conversely, every Schur homomorphism of S^n arises in this way.

Proof. The first part of the theorem is easily verified. We prove the converse. Consider a Schur homomorphism F on S^n. In order to simplify the proof and make it more transparent, we think of matrices X and Y as vectors x and y in R^d, where d = (n² + n)/2.
Then F can be regarded as a linear transformation x ↦ Ax on R^d
(for some square matrix A) satisfying A(x ∘ y) = (Ax) ∘ (Ay). This equation splits into d independent equations of the form

a^T (x ∘ y) = (a^T x)(a^T y),

where a is any row vector of A. From this we see that a is a 0/1-vector with at most one 1. Thus, A is a 0/1-matrix with at most one 1 in each row. Reverting back to F, we get the stated result.

Remarks. The preceding theorem is analogous to Theorem 1.1 in [3], proved for Schur multiplicative maps on R^{m×n} under a nonsingularity assumption, but without any linearity assumptions.

Before we present our general result on the Schur homomorphisms of K^n, we introduce some notation and prove a lemma. For m ≤ n, let α := {1, 2, ..., m}. Corresponding to a symmetric mapping ψ : α × α → [n] × [n], we define Ψ : S^n → S^m by

(6.3) Ψ(X) = Y, where y_ij = x_ψ(ij).

Lemma 6.2. Consider ψ and Ψ defined in (6.3). Then Ψ(K^n) ⊆ K^m if and only if there exists an m × n 0/1-matrix Q such that each row of Q contains exactly one 1 and Ψ(X) = QXQ^T for all X ∈ K^n.

Proof. Since the "if" part is obvious, we prove the "only if" part. Suppose Ψ(K^n) ⊆ K^m. We prove the following statements:

(a) If ψ(i, i) = (k, l), then k = l.
(b) If i ≠ j, ψ(i, i) = (k, k) and ψ(j, j) = (l, l), then ψ(i, j) ∈ {(k, l), (l, k)}. (This implies that y_ij = x_ψ(ij) = x_kl.)

(a) If possible, let ψ(i, i) = (k, l) with k ≠ l. Then there exists a matrix X ∈ K^n such that x_kl = −1. But then, for Y = Ψ(X), y_ii = x_ψ(ii) = x_kl = −1. This cannot happen, as Y, being positive semidefinite, has all nonnegative diagonal entries. Hence k = l.

(b) Let i ≠ j, ψ(i, i) = (k, k) and ψ(j, j) = (l, l). We consider two cases.

Case 1: k = l. Without loss of generality, let k = l = 1. Suppose ψ(i, j) ≠ (1, 1). Consider u = (1, 2, 3, ..., n)^T and X = uu^T ∈ K^n. Then every entry in X other than the (1, 1) entry is bigger than one. Moreover, y_ii = y_jj = x_11 = 1 and y_ij = x_ψ(ij) > 1. As Y = Ψ(X) ∈ K^m, these values of Y violate (2.3). Hence, ψ(i, j) = (1, 1) = (k, l).

Case 2: k ≠ l.
Without loss of generality (using (a)), let ψ(i, i) = (1, 1) and ψ(j, j) = (2, 2). We need to show that ψ(i, j) ∈ {(1, 2), (2, 1)}. Assuming this does not hold, we consider the following 2 × 2 principal submatrix of Y:

(6.4) [[y_ii, y_ij], [y_ij, y_jj]] = [[x_ψ(ii), x_ψ(ij)], [x_ψ(ij), x_ψ(jj)]] = [[x_11, x_ψ(ij)], [x_ψ(ij), x_22]].
Case 2.1: Suppose ψ(i, j) = (1, 1). Then, taking u = (2, 1, 0, ..., 0)^T and X = uu^T, we see that the principal submatrix (6.4) of Y violates (2.3). Hence this case cannot arise.

Case 2.2: Suppose ψ(i, j) = (2, 2). Then, taking u = (1, 2, 0, ..., 0)^T and X = uu^T, we see that the principal submatrix (6.4) of Y violates (2.3). Hence this case cannot arise.

Case 2.3: Suppose ψ(i, j) ∉ {(1, 1), (2, 2), (1, 2), (2, 1)}. Then, taking X = uu^T, where u = (1, 1, 2, 2, ..., 2)^T, we see that even in this case, the principal submatrix (6.4) of Y violates (2.3).

We conclude that ψ(i, j) ∈ {(1, 2), (2, 1)}. Thus, we have (b).

Having proved Items (a) and (b), we now define the m × n 0/1-matrix Q = [q_ik] as follows: For any pair (i, k) with i ∈ {1, 2, ..., m} and k ∈ {1, 2, ..., n}, q_ik = 1 when ψ(i, i) = (k, k), and q_ik = 0 otherwise. Then each row of Q contains exactly one 1 and Ψ(X) = QXQ^T for all X ∈ S^n.

We now come to the description of Schur homomorphisms of K^n.

Theorem 6.3. Let Q be an n × n 0/1-matrix with each row containing at most one 1 and let P be an n × n permutation matrix. Given a (specific) pinching operation,

f(X) := P (QXQ^T)^pinch P^T (X ∈ S^n)

defines a Schur homomorphism of K^n. Conversely, every Schur homomorphism of K^n arises this way.

Proof. We have shown that f(X) := P (QXQ^T)^pinch P^T (X ∈ S^n) defines a Schur homomorphism. We prove the converse. Suppose f is a nonzero Schur homomorphism of K^n. We extend f to F on S^n via (6.1). By Theorem 6.1, there exists a nonzero symmetric 0/1-matrix Z and a symmetric mapping φ such that

F(X) = Z ∘ Φ(X) for all X ∈ S^n,

where Φ is defined by (6.2). Since Φ(E) = E (for any mapping φ), we see that

Z = Z ∘ E = Z ∘ Φ(E) = F(E) = f(E) ∈ K^n.

Thus, the symmetric 0/1-matrix Z is also positive semidefinite. Now, by Proposition 2 in [6], we see that, up to a permutation, Z is of the form

(6.5) Z = Diag(Z_1, Z_2, ..., Z_k, 0),

a block diagonal matrix in which k ≤ n and each Z_i is a matrix of ones.
We assume without loss of generality that Z is of the form (6.5) (this induces our permutation P) and F(X) = Z ∘ Φ(X) for
all X. For each i = 1, 2, ..., k, let α_i be a subset of {1, 2, ..., n} such that Z_{α_i α_i} = Z_i. We let m_i := |α_i| (the number of elements in α_i). These sets are disjoint. Now

(F(X))_{α_i α_i} = Z_i ∘ (Φ(X))_{α_i α_i} = (Φ(X))_{α_i α_i}.

We know that for each X ∈ K^n, F(X) = f(X) ∈ K^n. Thus, for each i = 1, 2, ..., k, (Φ(X))_{α_i α_i} ∈ K^{m_i}. Now we define ψ_i : α_i × α_i → [n] × [n] by ψ_i(k, l) := φ(k, l). Then, for the corresponding mapping Ψ_i, we have Ψ_i(K^n) ⊆ K^{m_i}. By Lemma 6.2, for each i = 1, 2, ..., k, there exists an m_i × n 0/1-matrix Q_i (with exactly one 1 in each row) such that Ψ_i(X) = Q_i X Q_i^T. Now, let Q_{k+1} be the zero matrix and define the n × n 0/1-matrix Q with blocks Q_1, Q_2, ..., Q_{k+1}. A computation shows that QXQ^T has diagonal blocks Q_i X Q_i^T, i = 1, 2, ..., k + 1. By setting the off-diagonal blocks to zero (pinching), we see that F(X) = (QXQ^T)^pinch. By taking into consideration the permutation used to rearrange the rows and columns of Z, we finally arrive at f(X) = P (QXQ^T)^pinch P^T.

The following result characterizes Schur automorphisms of K^n.

Corollary 6.4. Let f : K^n → K^n be a Schur homomorphism. Then the following are equivalent:

(i) f is one-to-one (or onto) on K^n.
(ii) There exists a permutation matrix P such that f(X) = PXP^T for all X ∈ K^n.
(iii) f is a Schur automorphism of K^n.

Proof. Let f : K^n → K^n be a Schur homomorphism. Then, Theorem 6.1 ensures that its extension F to S^n can be written in the form F(X) = Z ∘ Φ(X) for all X ∈ S^n.

(i) ⇒ (ii): Suppose that f is one-to-one. Then, F is also one-to-one. (When f is onto, that is, f(K^n) = K^n, its extension F satisfies F(K^n) = K^n. Since F is linear and its range contains an open set, namely, the interior of K^n, F is also onto and hence one-to-one.) We now show that Z = E (the matrix of all ones). Suppose that Z has a zero entry, say z_ij = 0. Then, for any X, ⟨F(X), E_ij⟩ = 0. Thus, ⟨X, F^T(E_ij)⟩ = 0 for all X ∈ S^n. This implies that F^T(E_ij) = 0.
As F is one-to-one (hence invertible), F^T is also one-to-one. Thus, E_ij = 0, which is a contradiction. Hence, Z has no zero entries. As Z is a 0/1-matrix, we have Z = E. Now, this implies that F(X) = E ∘ Φ(X) = Φ(X), so that Φ (= F) is a one-to-one Schur homomorphism on S^n such that Φ(K^n) ⊆ K^n. By Lemma 6.2, there exists a (square) 0/1-matrix Q with each row containing exactly one 1 such that F(X) = QXQ^T for all X ∈ S^n. As F is one-to-one, it is necessarily invertible. This implies (for example, by looking at F(X) = I) that Q is invertible. This means that Q cannot have a pair of identical
rows. Thus, Q has exactly one 1 in each row and column. Hence, Q is a permutation matrix.

The implications (ii) ⇒ (iii) ⇒ (i) are easy to verify or obvious.

Concluding Remarks. Schur homomorphisms of K^n are completely characterized, but the problem of describing all closed Schur ideals of K^n remains open. In particular, it is not known if every closed Schur ideal of K^n is finitely generated.

Acknowledgments. We thank the referee and an Advisory Editor for their comments and suggestions, which simplified the proofs (particularly in Section 6) and the exposition.

REFERENCES

[1] M.F. Anjos and J.B. Lasserre (Eds.). Handbook on Semidefinite, Conic and Polynomial Optimization. International Series in Operations Research & Management Science, Vol. 166, Springer, New York, 2012.
[2] F.F. Bonsall. Lectures on Some Fixed Point Theorems of Functional Analysis. TIFR Lecture Notes, Tata Institute of Fundamental Research, Bombay, 1962.
[3] S. Clark, C.-K. Li, and A. Rastogi. Schur multiplicative maps on matrices. Bulletin of the Australian Mathematical Society, 77 (2008).
[4] H. Dong and K. Anstreicher. Separating doubly nonnegative and completely positive matrices. Mathematical Programming, Series A, 137 (2013), 131-154.
[5] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
[6] A.N. Letchford and M.M. Sørensen. Binary positive semidefinite matrices and associated integer polytopes. Mathematical Programming, Series A, 131 (2012).
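As a closing illustration (our code, not part of the paper), the form of Schur homomorphisms given in Theorem 6.3 can be verified numerically: for a sample 0/1-matrix Q (at most one 1 per row), permutation matrix P, and partition, the map f(X) = P (QXQ^T)^pinch P^T preserves addition, nonnegative scaling, and Schur products. All names below (`pinch`, `f`, `blocks`) are ours.

```python
import numpy as np

def pinch(Y, blocks):
    """Zero out the off-diagonal blocks for a partition of {0, ..., n-1}."""
    Z = np.zeros_like(Y)
    for b in blocks:
        Z[np.ix_(b, b)] = Y[np.ix_(b, b)]
    return Z

def f(X, Q, P, blocks):
    """A Schur homomorphism of K^n in the form of Theorem 6.3."""
    return P @ pinch(Q @ X @ Q.T, blocks) @ P.T

n = 4
# 0/1-matrix with at most one 1 per row (the last row is zero).
Q = np.array([[1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0]], dtype=float)
P = np.eye(n)[[1, 0, 3, 2]]        # a permutation matrix
blocks = [[0, 1], [2, 3]]          # a partition of the index set

rng = np.random.default_rng(3)
X = (lambda B: B @ B.T)(rng.standard_normal((n, n)))   # X in K^n
Y = (lambda B: B @ B.T)(rng.standard_normal((n, n)))   # Y in K^n

assert np.allclose(f(X + Y, Q, P, blocks), f(X, Q, P, blocks) + f(Y, Q, P, blocks))
assert np.allclose(f(3.0 * X, Q, P, blocks), 3.0 * f(X, Q, P, blocks))
assert np.allclose(f(X * Y, Q, P, blocks), f(X, Q, P, blocks) * f(Y, Q, P, blocks))
```

The Schur-multiplicativity of X ↦ QXQ^T rests on each row of Q having at most one 1, so each entry of QXQ^T is a single entry of X (or zero); pinching and permutation conjugation then preserve entrywise products as well.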
More informationON POSITIVE SEMIDEFINITE PRESERVING STEIN TRANSFORMATION
J. Appl. Math. & Informatics Vol. 33(2015), No. 1-2, pp. 229-234 http://dx.doi.org/10.14317/jami.2015.229 ON POSITIVE SEMIDEFINITE PRESERVING STEIN TRANSFORMATION YOON J. SONG Abstract. In the setting
More informationAssignment 1: From the Definition of Convexity to Helley Theorem
Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x
More information10. Smooth Varieties. 82 Andreas Gathmann
82 Andreas Gathmann 10. Smooth Varieties Let a be a point on a variety X. In the last chapter we have introduced the tangent cone C a X as a way to study X locally around a (see Construction 9.20). It
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationSufficiency of Signed Principal Minors for Semidefiniteness: A Relatively Easy Proof
Sufficiency of Signed Principal Minors for Semidefiniteness: A Relatively Easy Proof David M. Mandy Department of Economics University of Missouri 118 Professional Building Columbia, MO 65203 USA mandyd@missouri.edu
More informationOHSx XM511 Linear Algebra: Solutions to Online True/False Exercises
This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationLecture Summaries for Linear Algebra M51A
These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture
More informationAnalysis-3 lecture schemes
Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space
More informationLinear Algebra: Characteristic Value Problem
Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationWalker Ray Econ 204 Problem Set 3 Suggested Solutions August 6, 2015
Problem 1. Take any mapping f from a metric space X into a metric space Y. Prove that f is continuous if and only if f(a) f(a). (Hint: use the closed set characterization of continuity). I make use of
More informationLecture 7: Semidefinite programming
CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational
More informationOptimization Theory. A Concise Introduction. Jiongmin Yong
October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization
More informationMATH 225 Summer 2005 Linear Algebra II Solutions to Assignment 1 Due: Wednesday July 13, 2005
MATH 225 Summer 25 Linear Algebra II Solutions to Assignment 1 Due: Wednesday July 13, 25 Department of Mathematical and Statistical Sciences University of Alberta Question 1. [p 224. #2] The set of all
More informationLinear Systems and Matrices
Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......
More information1. General Vector Spaces
1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationMatrix functions that preserve the strong Perron- Frobenius property
Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 18 2015 Matrix functions that preserve the strong Perron- Frobenius property Pietro Paparella University of Washington, pietrop@uw.edu
More informationSome characterizations of cone preserving Z-transformations
Annals of Operations Research manuscript No. (will be inserted by the editor) Some characterizations of cone preserving Z-transformations M. Seetharama Gowda Yoon J. Song K.C. Sivakumar This paper is dedicated,
More informationMatrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...
Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements
More informationOn the adjacency matrix of a block graph
On the adjacency matrix of a block graph R. B. Bapat Stat-Math Unit Indian Statistical Institute, Delhi 7-SJSS Marg, New Delhi 110 016, India. email: rbb@isid.ac.in Souvik Roy Economics and Planning Unit
More informationChapter 1. Preliminaries
Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between
More informationIn particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with
Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient
More informationUpper triangular matrices and Billiard Arrays
Linear Algebra and its Applications 493 (2016) 508 536 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Upper triangular matrices and Billiard Arrays
More informationFUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets
FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES CHRISTOPHER HEIL 1. Compact Sets Definition 1.1 (Compact and Totally Bounded Sets). Let X be a metric space, and let E X be
More informationAn Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace
An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well
More informationINVESTIGATING THE NUMERICAL RANGE AND Q-NUMERICAL RANGE OF NON SQUARE MATRICES. Aikaterini Aretaki, John Maroulas
Opuscula Mathematica Vol. 31 No. 3 2011 http://dx.doi.org/10.7494/opmath.2011.31.3.303 INVESTIGATING THE NUMERICAL RANGE AND Q-NUMERICAL RANGE OF NON SQUARE MATRICES Aikaterini Aretaki, John Maroulas Abstract.
More informationThe rank of connection matrices and the dimension of graph algebras
The rank of connection matrices and the dimension of graph algebras László Lovász Microsoft Research One Microsoft Way Redmond, WA 98052 August 2004 Microsoft Research Technical Report TR-2004-82 Contents
More informationHigher rank numerical ranges of rectangular matrix polynomials
Journal of Linear and Topological Algebra Vol. 03, No. 03, 2014, 173-184 Higher rank numerical ranges of rectangular matrix polynomials Gh. Aghamollaei a, M. Zahraei b a Department of Mathematics, Shahid
More informationM. Seetharama Gowda a, J. Tao b & Roman Sznajder c a Department of Mathematics and Statistics, University of Maryland
This article was downloaded by: [University of Maryland Baltimore County] On: 03 April 2013, At: 09:11 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954
More informationSummer School: Semidefinite Optimization
Summer School: Semidefinite Optimization Christine Bachoc Université Bordeaux I, IMB Research Training Group Experimental and Constructive Algebra Haus Karrenberg, Sept. 3 - Sept. 7, 2012 Duality Theory
More informationOn Euclidean distance matrices
On Euclidean distance matrices R. Balaji and R. B. Bapat Indian Statistical Institute, New Delhi, 110016 November 19, 2006 Abstract If A is a real symmetric matrix and P is an orthogonal projection onto
More informationChapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in
Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column
More informationRANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA
Discussiones Mathematicae General Algebra and Applications 23 (2003 ) 125 137 RANK AND PERIMETER PRESERVER OF RANK-1 MATRICES OVER MAX ALGEBRA Seok-Zun Song and Kyung-Tae Kang Department of Mathematics,
More informationG1110 & 852G1 Numerical Linear Algebra
The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the
More informationVector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)
Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational
More informationA MODEL-THEORETIC PROOF OF HILBERT S NULLSTELLENSATZ
A MODEL-THEORETIC PROOF OF HILBERT S NULLSTELLENSATZ NICOLAS FORD Abstract. The goal of this paper is to present a proof of the Nullstellensatz using tools from a branch of logic called model theory. In
More informationA property concerning the Hadamard powers of inverse M-matrices
Linear Algebra and its Applications 381 (2004 53 60 www.elsevier.com/locate/laa A property concerning the Hadamard powers of inverse M-matrices Shencan Chen Department of Mathematics, Fuzhou University,
More informationLecture notes: Applied linear algebra Part 1. Version 2
Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationAnalytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications
Analytical formulas for calculating extremal ranks and inertias of quadratic matrix-valued functions and their applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081,
More informationa 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12
24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2
More informationMatrix Completion Problems for Pairs of Related Classes of Matrices
Matrix Completion Problems for Pairs of Related Classes of Matrices Leslie Hogben Department of Mathematics Iowa State University Ames, IA 50011 lhogben@iastate.edu Abstract For a class X of real matrices,
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationUniqueness of the Solutions of Some Completion Problems
Uniqueness of the Solutions of Some Completion Problems Chi-Kwong Li and Tom Milligan Abstract We determine the conditions for uniqueness of the solutions of several completion problems including the positive
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationChapter 2 Linear Transformations
Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more
More informationNonlinear Programming Algorithms Handout
Nonlinear Programming Algorithms Handout Michael C. Ferris Computer Sciences Department University of Wisconsin Madison, Wisconsin 5376 September 9 1 Eigenvalues The eigenvalues of a matrix A C n n are
More informationThe Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment
he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State
More informationTwo applications of the theory of primary matrix functions
Linear Algebra and its Applications 361 (2003) 99 106 wwwelseviercom/locate/laa Two applications of the theory of primary matrix functions Roger A Horn, Gregory G Piepmeyer Department of Mathematics, University
More informationALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA
ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND
More informationarxiv: v1 [math.co] 10 Aug 2016
POLYTOPES OF STOCHASTIC TENSORS HAIXIA CHANG 1, VEHBI E. PAKSOY 2 AND FUZHEN ZHANG 2 arxiv:1608.03203v1 [math.co] 10 Aug 2016 Abstract. Considering n n n stochastic tensors (a ijk ) (i.e., nonnegative
More informationLinear algebra 2. Yoav Zemel. March 1, 2012
Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.
More informationCompression, Matrix Range and Completely Positive Map
Compression, Matrix Range and Completely Positive Map Iowa State University Iowa-Nebraska Functional Analysis Seminar November 5, 2016 Definitions and notations H, K : Hilbert space. If dim H = n
More information16 Chapter 3. Separation Properties, Principal Pivot Transforms, Classes... for all j 2 J is said to be a subcomplementary vector of variables for (3.
Chapter 3 SEPARATION PROPERTIES, PRINCIPAL PIVOT TRANSFORMS, CLASSES OF MATRICES In this chapter we present the basic mathematical results on the LCP. Many of these results are used in later chapters to
More informationSEMI-INVARIANTS AND WEIGHTS OF GROUP ALGEBRAS OF FINITE GROUPS. D. S. Passman P. Wauters University of Wisconsin-Madison Limburgs Universitair Centrum
SEMI-INVARIANTS AND WEIGHTS OF GROUP ALGEBRAS OF FINITE GROUPS D. S. Passman P. Wauters University of Wisconsin-Madison Limburgs Universitair Centrum Abstract. We study the semi-invariants and weights
More informationMatrices and Matrix Algebra.
Matrices and Matrix Algebra 3.1. Operations on Matrices Matrix Notation and Terminology Matrix: a rectangular array of numbers, called entries. A matrix with m rows and n columns m n A n n matrix : a square
More informationa Λ q 1. Introduction
International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE
More informationKey words. Strongly eventually nonnegative matrix, eventually nonnegative matrix, eventually r-cyclic matrix, Perron-Frobenius.
May 7, DETERMINING WHETHER A MATRIX IS STRONGLY EVENTUALLY NONNEGATIVE LESLIE HOGBEN 3 5 6 7 8 9 Abstract. A matrix A can be tested to determine whether it is eventually positive by examination of its
More informationSymmetric Matrices and Eigendecomposition
Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions
More informationPositive entries of stable matrices
Positive entries of stable matrices Shmuel Friedland Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago Chicago, Illinois 60607-7045, USA Daniel Hershkowitz,
More informationNONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction
NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques
More information1 Positive and completely positive linear maps
Part 5 Functions Matrices We study functions on matrices related to the Löwner (positive semidefinite) ordering on positive semidefinite matrices that A B means A B is positive semi-definite. Note that
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationPolynomial complementarity problems
Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016
More informationSign Patterns with a Nest of Positive Principal Minors
Sign Patterns with a Nest of Positive Principal Minors D. D. Olesky 1 M. J. Tsatsomeros 2 P. van den Driessche 3 March 11, 2011 Abstract A matrix A M n (R) has a nest of positive principal minors if P
More informationLargest dual ellipsoids inscribed in dual cones
Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that
More informationOnline Exercises for Linear Algebra XM511
This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2
More informationELA
CONSTRUCTING COPOSITIVE MATRICES FROM INTERIOR MATRICES CHARLES R. JOHNSON AND ROBERT REAMS Abstract. Let A be an n by n symmetric matrix with real entries. Using the l -norm for vectors and letting S
More informationThe extreme rays of the 5 5 copositive cone
The extreme rays of the copositive cone Roland Hildebrand March 8, 0 Abstract We give an explicit characterization of all extreme rays of the cone C of copositive matrices. The results are based on the
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More informationMATH 581D FINAL EXAM Autumn December 12, 2016
MATH 58D FINAL EXAM Autumn 206 December 2, 206 NAME: SIGNATURE: Instructions: there are 6 problems on the final. Aim for solving 4 problems, but do as much as you can. Partial credit will be given on all
More informationREU 2007 Apprentice Class Lecture 8
REU 2007 Apprentice Class Lecture 8 Instructor: László Babai Scribe: Ian Shipman July 5, 2007 Revised by instructor Last updated July 5, 5:15 pm A81 The Cayley-Hamilton Theorem Recall that for a square
More information