Methods for Constructing Distance Matrices and the Inverse Eigenvalue Problem
Thomas L. Hayden, Robert Reams and James Wells
Department of Mathematics, University of Kentucky, Lexington, KY 40506
Department of Mathematics, College of William & Mary, Williamsburg, VA

Abstract. Let $D_1 \in R^{k\times k}$ and $D_2 \in R^{l\times l}$ be two distance matrices. We provide necessary conditions on $Z \in R^{k\times l}$ in order that $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix} \in R^{n\times n}$ be a distance matrix. We then show that it is always possible to border an $n\times n$ distance matrix, with certain scalar multiples of its Perron eigenvector, to construct an $(n+1)\times(n+1)$ distance matrix. We also give necessary and sufficient conditions for two principal distance matrix blocks $D_1$ and $D_2$ to be used to form a distance matrix as above, where $Z$ is a scalar multiple of a rank one matrix formed from their Perron eigenvectors. Finally, we solve the inverse eigenvalue problem for distance matrices in certain special cases, including $n = 3, 4, 5, 6$, any $n$ for which there exists a Hadamard matrix, and some other cases.

1. Introduction

A matrix $D = (d_{ij}) \in R^{n\times n}$ is said to be a (squared) distance matrix if there are vectors $x_1, x_2, \ldots, x_n \in R^r$ ($1 \le r \le n$) such that $d_{ij} = \|x_i - x_j\|^2$, for all $i, j = 1, 2, \ldots, n$, where $\|\cdot\|$ denotes the Euclidean norm. Note that this definition allows the $n\times n$ zero distance matrix. Because of its centrality in formulating our results, we begin Section 2 with a theorem of Crouzeix and Ferland [4]. Our proof reorganizes and simplifies their argument and, in addition, corrects a gap in their argument that C2 implies C1 (see below). We then show in Section 3 that this theorem implies some necessary conditions on the block matrix $Z$, in order that two principal distance matrix blocks $D_1$ and $D_2$ can be used to construct the distance matrix $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix} \in R^{n\times n}$.
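As a quick numerical illustration of the definition (our own sketch, not part of the paper; all variable names are ours), a squared-distance matrix built from any point configuration is symmetric, has zero diagonal, and satisfies Schoenberg's criterion recalled in Section 2, namely $x^T D x \le 0$ whenever $x^T e = 0$:

```python
# Build a (squared) distance matrix D from points and check x^T D x <= 0
# on the hyperplane x^T e = 0 (Schoenberg's criterion, Section 2).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))          # 5 points in R^3, one per column
G = X.T @ X                              # Gram matrix of the points
d = np.diag(G)
D = d[:, None] + d[None, :] - 2 * G      # D_ij = ||x_i - x_j||^2

e = np.ones(5)
for _ in range(100):
    x = rng.standard_normal(5)
    x -= (x @ e) / 5 * e                 # project onto x^T e = 0
    assert x @ D @ x <= 1e-10            # almost negative semidefinite
```

The identity $\|x_i - x_j\|^2 = \|x_i\|^2 + \|x_j\|^2 - 2x_i^T x_j$ used in the third line is what makes $D$ computable directly from the Gram matrix.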
A different approach has been considered by Bakonyi and Johnson [1], where they show that if a partial distance matrix has a chordal graph (a matrix with two principal distance matrix blocks has a chordal graph), then there is a one-entry-at-a-time procedure to complete the matrix to a distance matrix. Generally, graph theoretic techniques were employed in these studies, following similar methods used in the completion of positive semidefinite matrices. In Section 4, we use Fiedler's construction of nonnegative symmetric matrices [6] to show that an $n\times n$ distance matrix can always be bordered with certain scalar multiples of its Perron eigenvector, to form an $(n+1)\times(n+1)$ distance matrix. Further, starting
with two principal distance matrix blocks $D_1$ and $D_2$, we give necessary and sufficient conditions on $Z$, where $Z$ is a rank one matrix formed from the Perron eigenvectors, in order that $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix} \in R^{n\times n}$ be a distance matrix. Finally, in Section 5, we show that if a symmetric matrix has eigenvector $e = (1, 1, \ldots, 1)^T$, just one positive eigenvalue and zeroes on the diagonal, then it is a distance matrix. It will follow that by performing an orthogonal similarity of a trace zero, diagonal matrix with (essentially) a Hadamard matrix, where the diagonal matrix has just one positive eigenvalue, we obtain a distance matrix. We show then that the above methods lead, in some special cases, to a solution of the inverse eigenvalue problem for distance matrices. That is, given one nonnegative real number and $n-1$ nonpositive real numbers, with the sum of these $n$ numbers equal to zero, can one construct a distance matrix with these numbers as its eigenvalues? We will, as usual, denote by $e_i$ the vector with a one in the $i$th position and zeroes elsewhere.

2. Almost positive semidefinite matrices

A symmetric matrix $A$ is said to be almost positive semidefinite (or conditionally positive semidefinite) if $x^T A x \ge 0$, for all $x \in R^n$ such that $x^T e = 0$. Theorem 2.1 gives some useful equivalent conditions in the theory of almost positive semidefinite matrices.

Theorem 2.1 (Crouzeix, Ferland) Let $A \in R^{n\times n}$ be symmetric, and $a \in R^n$, $a \ne 0$. Then the following are equivalent:

C1. $h^T a = 0$ implies $h^T A h \ge 0$.

C2. $A$ is positive semidefinite, or $A$ has just one negative eigenvalue and there exists $b \in R^n$ such that $Ab = a$, where $b^T a \le 0$.

C3. The bordered matrix $\begin{bmatrix} A & a \\ a^T & 0 \end{bmatrix}$ has just one negative eigenvalue.

Proof: C1 implies C2: Since $A$ is positive semidefinite on a subspace of dimension $n-1$, $A$ has at most one negative eigenvalue. If there doesn't exist $b \in R^n$ such that $Ab = a$, then writing $a = x + y$, where $x \in \ker A$ and $y \in \operatorname{range} A$, we must have $x \ne 0$. But then $x^T a = x^T x + x^T y = x^T x \ne 0$.
For any $v \in R^n$, define $h = \left(I - \frac{x a^T}{x^T a}\right)v = v - \frac{a^T v}{x^T a}x$; then $h^T a = 0$, and $h^T A h = v^T A v \ge 0$, so $A$ is positive semidefinite. Suppose $A$ is not positive semidefinite and $Ab = a$. For $v \in R^n$, let $h = \left(I - \frac{b a^T}{b^T a}\right)v$; thus we can write any $v \in R^n$ as $v = h + \frac{a^T v}{b^T a}b$, where $h^T a = 0$. Then
$$v^T A v = h^T A h + 2\frac{a^T v}{b^T a}h^T A b + \left(\frac{a^T v}{b^T a}\right)^2 b^T A b \ge \left(\frac{a^T v}{b^T a}\right)^2 b^T a,$$
and choosing $v$ so that $0 > v^T A v$ we see that $0 > b^T a$.

C2 implies C1:
If $A$ is positive semidefinite we're done. Otherwise, let $\lambda$ be the negative eigenvalue of $A$, with unit eigenvector $u$. Write $A = C^T C + \lambda uu^T$, so that $a = Ab = C^T Cb + \lambda(u^T b)u$. If $u^T b = 0$ then $0 \ge b^T a = b^T C^T Cb$, which implies $Cb = 0$; but then $a = 0$ also, contradicting the hypotheses of our theorem. So we must have $u^T b \ne 0$. Again, $a = C^T Cb + \lambda(u^T b)u$ gives $0 \ge b^T a = b^T C^T Cb + \lambda(u^T b)^2$, which we can rewrite as $\frac{b^T C^T Cb}{-\lambda(u^T b)^2} \le 1$. Noting also that $u = \frac{1}{\lambda(u^T b)}[a - C^T Cb]$, we have $A = C^T C + \frac{1}{\lambda(u^T b)^2}[a - C^T Cb][a - C^T Cb]^T$. If $h^T a = 0$ then
$$h^T A h = h^T C^T Ch + \frac{1}{\lambda(u^T b)^2}(h^T C^T Cb)^2 \ge h^T C^T Ch - \frac{(h^T C^T Cb)^2}{b^T C^T Cb} = \|Ch\|^2 - \frac{[(Ch)^T(Cb)]^2}{\|Cb\|^2} \ge 0,$$
where the last inequality is from Cauchy-Schwarz. Finally, if $b^T C^T Cb = 0$ then $Cb = 0$, which implies $a = \lambda(u^T b)u$. Then $h^T a = 0$ implies $h^T u = 0$, so that $h^T A h = h^T(C^T C + \lambda uu^T)h = h^T C^T Ch \ge 0$, as required. The equivalence of C1 and C3 was proved by Ferland in [5].

Schoenberg [15] (see also Blumenthal [2, p. 106]) showed that a symmetric matrix $D = (d_{ij}) \in R^{n\times n}$ (with $d_{ii} = 0$, $1 \le i \le n$) is a distance matrix if and only if $x^T Dx \le 0$, for all $x \in R^n$ such that $x^T e = 0$. Note that $D$ almost negative semidefinite with zeroes on the diagonal implies that $D$ is a nonnegative matrix, since $(e_i - e_j)^T e = 0$ and so $(e_i - e_j)^T D(e_i - e_j) = -2d_{ij} \le 0$.

Let $s \in R^n$, where $s^T e = 1$. Gower [7] proved that $D$ is a distance matrix if and only if $(I - es^T)D(I - se^T)$ is negative semidefinite. (This follows since for any $y \in R^n$, $x = (I - se^T)y$ is orthogonal to $e$. Conversely, if $x^T e = 0$ then $x^T(I - es^T)D(I - se^T)x = x^T Dx$.) The vectors from the origin to the $n$ vertices are the columns of $X \in R^{n\times n}$ in $-\frac{1}{2}(I - es^T)D(I - se^T) = X^T X$. (This last fact follows by writing $X^T X = -\frac{1}{2}(I - es^T)D(I - se^T) = -\frac{1}{2}(D - fe^T - ef^T)$, where $f = Ds - \frac{1}{2}(s^T Ds)e$; then $(e_i - e_j)^T X^T X(e_i - e_j) = -\frac{1}{2}(e_i - e_j)^T D(e_i - e_j) = -\frac{1}{2}(d_{ii} + d_{jj} - 2d_{ij}) = d_{ij}$, so that $d_{ij} = \|Xe_i - Xe_j\|^2$.) These results imply a useful theorem for later sections (see also [7], [18]).
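Gower's characterization is easy to exercise numerically. The sketch below (our illustration, not from the paper; variable names are ours, and points are stored as rows rather than columns) builds $D$ from random points, checks that $-\frac{1}{2}(I - es^T)D(I - se^T)$ is positive semidefinite for $s = e/n$, and recovers a generating configuration from its factorization:

```python
# Verify Gower's criterion and recover points from a distance matrix.
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((6, 3))                  # 6 points in R^3 (rows)
D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)

n = 6
e = np.ones(n)
s = e / n                                        # any s with s^T e = 1 works
B = -0.5 * (np.eye(n) - np.outer(e, s)) @ D @ (np.eye(n) - np.outer(s, e))

w, V = np.linalg.eigh(B)                         # B should be PSD
assert w.min() > -1e-9
X = V * np.sqrt(np.clip(w, 0, None))             # rows are recovered points
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
assert np.allclose(D, D2, atol=1e-8)             # same squared distances
```

The recovered points differ from the originals by a rigid motion; only the squared distances are reproduced exactly.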
Theorem 2.2 Let $D = (d_{ij}) \in R^{n\times n}$, with $d_{ii} = 0$, for all $i$, $1 \le i \le n$, and suppose that $D$ is not the zero matrix. Then $D$ is a distance matrix if and only if $D$ has just one positive eigenvalue and there exists $w \in R^n$ such that $Dw = e$ and $w^T e \ge 0$.
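Before the proof, a small numerical illustration (ours, not the paper's): for the squared-distance matrix of the four corners of a unit square, used again as an example in Section 4, both conditions of Theorem 2.2 are easy to confirm.

```python
# Theorem 2.2 on the unit square: sides have squared length 1, diagonals 2.
import numpy as np

D = np.array([[0., 1., 2., 1.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [1., 2., 1., 0.]])
eig = np.linalg.eigvalsh(D)
assert (eig > 1e-12).sum() == 1                     # just one positive eigenvalue
# D is singular (spectrum {4, 0, -2, -2}), so solve Dw = e by least squares;
# e is in the range of D since the row sums are all 4.
w = np.linalg.lstsq(D, np.ones(4), rcond=None)[0]
assert np.allclose(D @ w, np.ones(4))
assert w @ np.ones(4) >= 0                          # w^T e >= 0
```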
Proof: This follows from taking $A = -D$ and $a = e$ in Theorem 2.1, Schoenberg's result, and the fact that $d_{ii} = 0$, for all $i$, $1 \le i \le n$, implies $D$ can't be negative semidefinite.

Remark: Observe that if there are two vectors $w_1, w_2 \in R^n$ such that $Dw_1 = Dw_2 = e$, then $w_1 - w_2 = u \in \ker(D)$, and so $e^T w_1 - e^T w_2 = e^T u$. But $e^T u = w_1^T Du = 0$, so we can conclude that $w_1^T e = w_2^T e$.

3. Construction of distance matrices

Let $D_1 \in R^{k\times k}$ be a nonzero distance matrix, and $D_2 \in R^{l\times l}$ a distance matrix. Of course, thinking geometrically, there are many choices for $Z \in R^{k\times l}$, $k + l = n$, such that $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix} \in R^{n\times n}$ persists in being a distance matrix. Our first theorem provides some necessary conditions that any such $Z$ must satisfy.

Theorem 3.1 If $Z$ is chosen so that $D$ is a distance matrix, then $Z = D_1 W$, for some $W \in R^{k\times l}$. Further, $D_2 - W^T D_1 W$ is negative semidefinite.

Proof: Let $Ze_i = z_i$, $1 \le i \le l$. Since $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix}$ is a distance matrix, the $(k+1)\times(k+1)$ bordered matrix $S_i = \begin{bmatrix} D_1 & z_i \\ z_i^T & 0 \end{bmatrix}$ is a distance matrix also, for $1 \le i \le l$. This implies that $S_i$ has just one positive eigenvalue, and so $z_i = D_1 w_i$, using condition C3 in Theorem 2.1. Thus $Z = D_1 W$, where $We_i = w_i$, $1 \le i \le l$. The last part follows from the identity
$$\begin{bmatrix} D_1 & D_1 W \\ W^T D_1 & D_2 \end{bmatrix} = \begin{bmatrix} I & 0 \\ W^T & I \end{bmatrix} \begin{bmatrix} D_1 & 0 \\ 0 & D_2 - W^T D_1 W \end{bmatrix} \begin{bmatrix} I & W \\ 0 & I \end{bmatrix},$$
and Sylvester's theorem [9], since the matrix on the left and $D_1$ on the right both have just one positive eigenvalue.

Remark: In claiming $D_1$ is a nonzero distance matrix this means that $k \ge 2$. If $D_1$ happened to be a zero distance matrix (for example when $k = 1$), then the off-diagonal block $Z$ need not necessarily be in the range of that zero $D_1$ block. In the proof above, if $S_i$ has just one positive eigenvalue we can only apply C3 and conclude that $z_i = D_1 w_i$ when $D_1$ is not negative semidefinite (i.e. when $D_1$ is not a zero distance matrix). The converse of Theorem 3.1 is not necessarily true: consider the case where $l = 1$, $S = \begin{bmatrix} D_1 & z \\ z^T & 0 \end{bmatrix}$ and $z = D_1 w$, with $w^T D_1 w \ge 0$.
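The two necessary conditions of Theorem 3.1 can be tested on a concrete example (a sketch of ours, not from the paper): take seven random points, split the resulting distance matrix into blocks, and check that $Z$ lies in the range of $D_1$ and that the generalized Schur complement is negative semidefinite.

```python
# Numerical check of the necessary conditions in Theorem 3.1.
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((7, 3))              # 7 points: first 4 -> D1, last 3 -> D2
D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
D1, Z, D2 = D[:4, :4], D[:4, 4:], D[4:, 4:]

# Z should lie in the range of D1, i.e. Z = D1 W is solvable.
W = np.linalg.lstsq(D1, Z, rcond=None)[0]
assert np.allclose(D1 @ W, Z, atol=1e-8)

# ...and D2 - W^T D1 W should be negative semidefinite.
S = D2 - W.T @ D1 @ W
assert np.linalg.eigvalsh(S).max() < 1e-8
```

For four generic points in $R^3$ the block $D_1$ is invertible, so the least-squares solve returns an exact $W$; the tolerances only absorb floating-point error.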
We will see in the next section that if $z$ is the Perron eigenvector for $D_1$, only certain scalar multiples of $z$ will cause $S$ to be a distance matrix. If $D_1$ and $D_2$ are both nonzero distance matrices then we can state Theorem 3.1 in the following form, where the same proof works with the rows of $Z$.

Theorem 3.2 If $Z$ is chosen so that $D$ is a distance matrix, then $Z = D_1 W = VD_2$, for some $W, V \in R^{k\times l}$. Further, $D_2 - W^T D_1 W$ and $D_1 - VD_2 V^T$ are both negative semidefinite.
4. Using the Perron vector to construct a distance matrix

As a consequence of the Perron-Frobenius Theorem [9], since a distance matrix has all nonnegative entries it has a real eigenvalue $r \ge |\lambda|$ for all eigenvalues $\lambda$ of $D$. Furthermore, the eigenvector that corresponds to $r$ has nonnegative entries. We will use the notation that the vector $e$ takes on the correct number of components depending on the context.

Theorem 4.1 Let $D \in R^{n\times n}$ be a nonzero distance matrix with $Du = ru$, where $r$ is the Perron eigenvalue and $u$ is a unit Perron eigenvector. Let $Dw = e$, $w^T e \ge 0$ and $\rho > 0$. Then the bordered matrix $\hat D = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix}$ is a distance matrix if and only if $\rho$ lies in the interval $[\alpha_-, \alpha_+]$, where $\alpha_\pm = \dfrac{r}{u^T e \mp \sqrt{re^T w}}$.

Proof: We first show that $\hat D$ has just one positive eigenvalue and $e$ is in the range of $\hat D$. From [6] the set of eigenvalues of $\hat D$ consists of the $n-1$ nonpositive eigenvalues of $D$, as well as the two eigenvalues of the matrix $\begin{bmatrix} r & \rho \\ \rho & 0 \end{bmatrix}$, namely $\frac{1}{2}\left(r \pm \sqrt{r^2 + 4\rho^2}\right)$. Thus $\hat D$ has just one positive eigenvalue. It is easily checked that $\hat D\hat w = e$ if
$$\hat w = \begin{bmatrix} w - \left(\frac{u^T e}{r} - \frac{1}{\rho}\right)u \\ \frac{1}{\rho}\left(u^T e - \frac{r}{\rho}\right) \end{bmatrix}.$$
It remains to determine for which values of $\rho$ we have $e^T\hat w \ge 0$, i.e.
$$e^T w - \frac{1}{r}(u^T e)^2 + \frac{2}{\rho}(u^T e) - \frac{r}{\rho^2} = -\frac{1}{r}\left[u^T e - \sqrt{re^T w} - \frac{r}{\rho}\right]\left[u^T e + \sqrt{re^T w} - \frac{r}{\rho}\right] \ge 0.$$
After multiplying across by $\rho^2$, we see that this inequality holds precisely when $\rho$ is between the two roots $\alpha_\pm = \frac{r}{u^T e \mp \sqrt{re^T w}}$ of the quadratic. We can be sure of the existence of $\rho > 0$ by arguing as follows. $Dw = e$ implies $u^T Dw = ru^T w = u^T e$. Also, $D = ruu^T + \lambda_2 u_2 u_2^T + \cdots$ (the spectral decomposition of $D$) implies $w^T e = w^T Dw = r(w^T u)^2 + \lambda_2(w^T u_2)^2 + \cdots < r(w^T u)^2$, i.e. $w^T e < r\left(\frac{u^T e}{r}\right)^2 = \frac{(u^T e)^2}{r}$, so that $(u^T e)^2 > re^T w$, which completes the proof.
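Theorem 4.1 can be sanity-checked numerically. The sketch below (ours, with an arbitrary random configuration; all names are illustrative) computes $\alpha_\pm$ for the distance matrix of four generic planar points and tests the bordered matrix with Gower's criterion from Section 2, taking $s = e/n$; the assertions assume the instance is non-degenerate, i.e. $(u^T e)^2 > re^T w$ strictly.

```python
# Check the interval [alpha_-, alpha_+] of Theorem 4.1 on a random example.
import numpy as np

rng = np.random.default_rng(3)
P = rng.standard_normal((4, 2))                     # 4 generic points in the plane
D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)

vals, vecs = np.linalg.eigh(D)
r, u = vals[-1], vecs[:, -1]
u = u if u.sum() > 0 else -u                        # unit Perron eigenvector
w = np.linalg.solve(D, np.ones(4))                  # Dw = e (D invertible here)
lo = r / (u.sum() + np.sqrt(r * w.sum()))           # alpha_-
hi = r / (u.sum() - np.sqrt(r * w.sum()))           # alpha_+

def is_distance_matrix(M):
    # Gower's test with s = e/n: M is a distance matrix iff J M J is NSD.
    n = len(M)
    J = np.eye(n) - np.ones((n, n)) / n
    return np.linalg.eigvalsh(J @ M @ J).max() < 1e-9

def border(rho):
    return np.block([[D, rho * u[:, None]], [rho * u[None, :], np.zeros((1, 1))]])

assert is_distance_matrix(border(0.5 * (lo + hi)))  # inside [alpha_-, alpha_+]
assert not is_distance_matrix(border(0.5 * lo))     # below alpha_-
```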
Example: Let $D$ be the distance matrix which corresponds to a unit square in the plane; then $De = 4e$. In this case, $\alpha_- = 1$ and $\alpha_+ = \infty$ (the square is the degenerate case $(u^T e)^2 = re^T w$), and the new figure that corresponds to $\hat D = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix}$, for $\rho \in [1, \infty)$, has an additional point on a ray through the center of the square and perpendicular to the plane.

Fiedler's construction also leads to a more complicated version of Theorem 4.1.

Theorem 4.2 Let $D_1 \in R^{k\times k}$, $D_2 \in R^{l\times l}$ be nonzero distance matrices with $D_1 u = r_1 u$, $D_2 v = r_2 v$, where $r_1$ and $r_2$ are the Perron eigenvalues of $D_1$ and $D_2$ respectively. Likewise,
$u$ and $v$ are the corresponding unit Perron eigenvectors. Let $D_1 w_1 = e$, where $w_1^T e \ge 0$, and $D_2 w_2 = e$, where $w_2^T e \ge 0$, and let $\rho > 0$ with $\rho^2 \ne r_1 r_2$. Then the matrix $\hat D = \begin{bmatrix} D_1 & \rho uv^T \\ \rho vu^T & D_2 \end{bmatrix}$ is a distance matrix if and only if $\rho^2 > r_1 r_2$,
$$[(u^T e)^2 - r_1 e^T w_1 - r_1 e^T w_2][(v^T e)^2 - r_2 e^T w_1 - r_2 e^T w_2] \ge 0,$$
and $\rho$ is in the interval $[\alpha_-, \alpha_+]$, where
$$\alpha_\pm = \frac{(u^T e)(v^T e) \pm \sqrt{[(u^T e)^2 - r_1 e^T w_1 - r_1 e^T w_2][(v^T e)^2 - r_2 e^T w_1 - r_2 e^T w_2]}}{\frac{(u^T e)^2}{r_1} + \frac{(v^T e)^2}{r_2} - e^T w_1 - e^T w_2}.$$

Proof: Fiedler's results show that the eigenvalues of $\hat D$ are the nonpositive eigenvalues of $D_1$ together with the nonpositive eigenvalues of $D_2$, as well as the two eigenvalues of $\begin{bmatrix} r_1 & \rho \\ \rho & r_2 \end{bmatrix}$. The latter two eigenvalues are $\frac{r_1 + r_2}{2} \pm \sqrt{\left(\frac{r_1 + r_2}{2}\right)^2 + [\rho^2 - r_1 r_2]}$. Thus if $\rho^2 - r_1 r_2 > 0$ then $\hat D$ has just one positive eigenvalue. Conversely, if $\hat D$ has just one positive eigenvalue then $0 \ge \frac{r_1 + r_2}{2} - \sqrt{\left(\frac{r_1 + r_2}{2}\right)^2 + [\rho^2 - r_1 r_2]}$, which implies $\left(\frac{r_1 + r_2}{2}\right)^2 + [\rho^2 - r_1 r_2] \ge \left(\frac{r_1 + r_2}{2}\right)^2$, i.e. $\rho^2 - r_1 r_2 \ge 0$; but $\rho^2 - r_1 r_2 \ne 0$, so $\rho^2 - r_1 r_2 > 0$. Also, $\hat D\hat w = e$ where
$$\hat w = \begin{bmatrix} w_1 + \frac{\rho}{r_1 r_2 - \rho^2}\left[\frac{\rho}{r_1}(u^T e) - v^T e\right]u \\ w_2 + \frac{\rho}{r_1 r_2 - \rho^2}\left[\frac{\rho}{r_2}(v^T e) - u^T e\right]v \end{bmatrix}.$$
Then using this $\hat w$ we have
$$e^T\hat w = e^T w_1 + e^T w_2 + \frac{\rho}{r_1 r_2 - \rho^2}\left[\frac{\rho}{r_1}(u^T e)^2 + \frac{\rho}{r_2}(v^T e)^2 - 2(u^T e)(v^T e)\right] \ge 0,$$
which we can rewrite, on multiplying across by $r_1 r_2 - \rho^2 < 0$, as
$$(e^T w_1 + e^T w_2)r_1 r_2 - 2\rho(u^T e)(v^T e) + \rho^2\left[\frac{(u^T e)^2}{r_1} + \frac{(v^T e)^2}{r_2} - e^T w_1 - e^T w_2\right] \le 0.$$
After some simplification the roots of this quadratic turn out to be the $\alpha_\pm$ stated above. We saw in the proof of Theorem 4.1 that $e^T w_1 < \frac{(u^T e)^2}{r_1}$ and $e^T w_2 < \frac{(v^T e)^2}{r_2}$. The proof is complete once we note that the discriminant of the quadratic is greater than or equal to zero precisely when $e^T\hat w \ge 0$ for some $\rho$.

Remark: The case $\rho^2 = r_1 r_2$ is dealt with separately. In this case, if $\hat D\hat w = e$ then it can be checked that $\frac{u^T e}{\sqrt{r_1}} = \frac{v^T e}{\sqrt{r_2}}$. Also, $\hat D\hat w = e$ if we take $\hat w = \begin{bmatrix} w_1 - \frac{u^T e}{r_1 + r_2}u \\ w_2 - \frac{v^T e}{r_1 + r_2}v \end{bmatrix}$, where $D_1 w_1 = e$ and $D_2 w_2 = e$.
Then $\hat D$ is a distance matrix if and only if (for this $\hat w$) $\hat w^T e \ge 0$.

5. The inverse eigenvalue problem for distance matrices

Let $\sigma = \{\lambda_1, \lambda_2, \ldots, \lambda_n\} \subset C$. The inverse eigenvalue problem for distance matrices is that of finding necessary and sufficient conditions that $\sigma$ be the spectrum of a (squared) distance matrix $D$. Evidently, since $D$ is symmetric, $\sigma \subset R$. We have seen that if $D \in R^{n\times n}$
is a nonzero distance matrix then $D$ has just one positive eigenvalue. Also, since the diagonal entries of $D$ are all zeroes, we have $\operatorname{trace}(D) = \sum_{i=1}^n \lambda_i = 0$. The inverse eigenvalue problem for nonnegative matrices remains one of the longstanding unsolved problems in matrix theory; see for instance [3], [10], [13]. Suleimanova [17] and Perfect [12] solved the inverse eigenvalue problem for nonnegative matrices in the special case where $\sigma \subset R$, $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and $\sum_{i=1}^n \lambda_i \ge 0$. They showed that these conditions are sufficient for the construction of a nonnegative matrix with spectrum $\sigma$. Fiedler [6] went further and showed that these same conditions are sufficient even when the nonnegative matrix is required to be symmetric. From what follows it appears that these conditions are sufficient even if the nonnegative matrix is required to be a distance matrix (when $\sum_{i=1}^n \lambda_i = 0$). We will use the following lemma repeatedly.

Lemma 5.1 Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_n$, where $\sum_{i=1}^n \lambda_i = 0$. Then $(n-1)|\lambda_n| \ge \lambda_1$ and $(n-1)|\lambda_2| \le \lambda_1$.

Proof: These inequalities follow since $\lambda_1 = \sum_{i=2}^n |\lambda_i|$ and $|\lambda_2| \le |\lambda_3| \le \cdots \le |\lambda_n|$.

Theorem 5.2 Let $\sigma = \{\lambda_1, \lambda_2, \lambda_3\} \subset R$, with $\lambda_1 \ge 0 \ge \lambda_2 \ge \lambda_3$ and $\lambda_1 + \lambda_2 + \lambda_3 = 0$. Then $\sigma$ is the spectrum of a distance matrix $D \in R^{3\times 3}$.

Proof: Construct an isosceles triangle in the $xy$-plane with one vertex at the origin, and the other two vertices at $\left(\sqrt{\sqrt{\frac{(\lambda_2 + \lambda_3)\lambda_3}{2}} + \frac{\lambda_2}{4}},\ \pm\frac{\sqrt{-\lambda_2}}{2}\right)$. These vertices yield the distance matrix
$$D = \begin{bmatrix} 0 & \sqrt{\frac{(\lambda_2+\lambda_3)\lambda_3}{2}} & \sqrt{\frac{(\lambda_2+\lambda_3)\lambda_3}{2}} \\ \sqrt{\frac{(\lambda_2+\lambda_3)\lambda_3}{2}} & 0 & -\lambda_2 \\ \sqrt{\frac{(\lambda_2+\lambda_3)\lambda_3}{2}} & -\lambda_2 & 0 \end{bmatrix},$$
and it is easily verified that $D$ has characteristic polynomial $(x - \lambda_2)(x - \lambda_3)(x + \lambda_2 + \lambda_3) = x^3 - (\lambda_2\lambda_3 + \lambda_2^2 + \lambda_3^2)x + \lambda_2\lambda_3(\lambda_2 + \lambda_3)$. Note that $\sqrt{\frac{(\lambda_2+\lambda_3)\lambda_3}{2}} + \frac{\lambda_2}{4} \ge 0$, since by Lemma 5.1 $\lambda_1 \ge 2|\lambda_2|$ and $|\lambda_3| \ge |\lambda_2|$, so that $\frac{(\lambda_2+\lambda_3)\lambda_3}{2} = \frac{\lambda_1|\lambda_3|}{2} \ge \lambda_2^2 \ge \frac{\lambda_2^2}{16}$, which completes the proof.

Some distance matrices have $e$ as their Perron eigenvector. These were discussed in [8], and were seen to correspond to vertices which form regular figures.

Lemma 5.3 Let $D \in R^{n\times n}$ be symmetric, have zeroes on the diagonal, have just one nonnegative eigenvalue $\lambda_1$, and corresponding eigenvector $e$.
Then $D$ is a distance matrix.

Proof: Let $D$ have eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$; then we can write $D = \frac{\lambda_1}{n}ee^T + \lambda_2 u_2 u_2^T + \cdots$. If $x^T e = 0$ then $x^T Dx = \lambda_2(x^T u_2)^2 + \cdots \le 0$, as required.
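The $3\times 3$ construction of Theorem 5.2 can be checked with concrete numbers (a minimal sketch of ours, using $\lambda_2 = -1$, $\lambda_3 = -3$, so that the trace-zero condition forces $\lambda_1 = 4$):

```python
# Build the 3x3 distance matrix of Theorem 5.2 and confirm its spectrum.
import numpy as np

lam2, lam3 = -1.0, -3.0
lam1 = -(lam2 + lam3)                        # trace zero forces lam1 = 4
s = np.sqrt((lam2 + lam3) * lam3 / 2)        # squared distance to the apex vertex
D = np.array([[0., s, s],
              [s, 0., -lam2],
              [s, -lam2, 0.]])
print(np.round(np.linalg.eigvalsh(D), 6).tolist())   # -> [-3.0, -1.0, 4.0]
```

The eigenvector $(0, 1, -1)^T$ carries the eigenvalue $\lambda_2$, and the remaining two eigenvalues come from a $2\times 2$ block, exactly as in the proof.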
A matrix $H \in R^{n\times n}$ is said to be a Hadamard matrix if each entry is equal to $\pm 1$ and $H^T H = nI$. These matrices exist only if $n = 1$, $n = 2$, or $n \equiv 0 \bmod 4$ [19]. It is not known if there exists a Hadamard matrix for every $n$ which is a multiple of 4, although this is a well-known conjecture. Examples of Hadamard matrices are:
$$n = 2,\ H = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}; \qquad n = 4,\ H = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix};$$
and $H_1 \otimes H_2$ is a Hadamard matrix, where $H_1$ and $H_2$ are Hadamard matrices and $\otimes$ denotes the Kronecker product. For any $n$ for which there exists a Hadamard matrix our next theorem solves the inverse eigenvalue problem.

Theorem 5.4 Let $n$ be such that there exists a Hadamard matrix of order $n$. Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and $\sum_{i=1}^n \lambda_i = 0$. Then there is a distance matrix $D$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.

Proof: Let $H \in R^{n\times n}$ be a Hadamard matrix, and $U = \frac{1}{\sqrt{n}}H$, so that $U$ is an orthogonal matrix. Let $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$; then $D = U^T\Lambda U$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. $D$ has eigenvector $e$, since for any Hadamard matrix $H$ we can assume that one of the rows of $H$ is $e^T$. From $D = U^T\Lambda U = \frac{\lambda_1}{n}ee^T + \lambda_2 u_2 u_2^T + \cdots$, it can be seen that each of the diagonal entries of $D$ is $\sum_{i=1}^n \lambda_i/n = 0$. The theorem then follows from Lemma 5.3.

Since there are Hadamard matrices for all $n$ a multiple of 4 such that $n < 428$, Theorem 5.4 solves the inverse eigenvalue problem in these special cases. See [16] for a more extensive list of values of $n$ for which there exists a Hadamard matrix. The technique of Theorem 5.4 was used in [10] to partly solve the inverse eigenvalue problem for nonnegative matrices when $n = 4$. Theorem 5.6 will solve the inverse eigenvalue problem for any $n + 1$ such that there exists a Hadamard matrix of order $n$. We will use a theorem due to Fiedler [6].

Theorem 5.5 Let $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_k$ be the eigenvalues of the symmetric nonnegative matrix $A \in R^{k\times k}$, and $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_l$ the eigenvalues of the symmetric nonnegative matrix $B \in R^{l\times l}$, where $\alpha_1 \ge \beta_1$. Moreover, $Au = \alpha_1 u$ and $Bv = \beta_1 v$, so that $u$ and $v$ are the corresponding unit Perron eigenvectors.
Then with $\rho = \sqrt{\sigma(\alpha_1 - \beta_1 + \sigma)}$, the matrix $\begin{bmatrix} A & \rho uv^T \\ \rho vu^T & B \end{bmatrix}$ has eigenvalues $\alpha_1 + \sigma, \beta_1 - \sigma, \alpha_2, \ldots, \alpha_k, \beta_2, \ldots, \beta_l$, for any $\sigma \ge 0$.

Remark: The assumption $\alpha_1 \ge \beta_1$ is only for convenience. It is easily checked that it is sufficient to have $\alpha_1 - \beta_1 + \sigma \ge 0$.

In [18] two of the authors called a distance matrix circum-Euclidean if the points which generate it lie on a hypersphere. Suppose that the distance matrix $D$ has the property
that there is an $s \in R^n$ such that $Ds = \beta e$ and $s^T e = 1$ (easily arranged if $Dw = e$ and $w^T e > 0$). Then using this $s$ we have (see the paragraph just before Theorem 2.2) $(I - es^T)D(I - se^T) = D - \beta ee^T = -2X^T X$. Since the diagonal entries of $D$ are all zero, if $Xe_i = x_i$, then $\|x_i\|^2 = \beta/2$, for all $i$, $1 \le i \le n$. Evidently the points lie on a hypersphere of radius $R$, where $R^2 = \beta/2 = \frac{s^T Ds}{2}$.

Theorem 5.6 Let $n$ be such that there exists a Hadamard matrix of order $n$. Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_{n+1}$ and $\sum_{i=1}^{n+1} \lambda_i = 0$. Then there is an $(n+1)\times(n+1)$ distance matrix $\hat D$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}$.

Proof: Let $D \in R^{n\times n}$ be a distance matrix constructed using a Hadamard matrix, as in Theorem 5.4, with eigenvalues $\lambda_1 + \lambda_{n+1}, \lambda_2, \ldots, \lambda_n$. Note that $\lambda_1 + \lambda_{n+1} = -\lambda_2 - \cdots - \lambda_n \ge 0$ and $De = (\lambda_1 + \lambda_{n+1})e$, i.e. $D$ has Perron eigenvector $e$. Using Theorem 5.5, let $\hat D = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix}$, where $A = D$, $B = 0$, $u = \frac{e}{\sqrt{n}}$, $\alpha_1 = \lambda_1 + \lambda_{n+1}$, $\alpha_i = \lambda_i$ for $2 \le i \le n$, $\beta_1 = 0$ and $\sigma = -\lambda_{n+1}$. In this case $\rho = \sqrt{-\lambda_1\lambda_{n+1}}$, and $\hat D$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}$. We will show that there exist vectors $\hat x_1, \ldots, \hat x_{n+1}$ such that $\hat d_{ij} = \|\hat x_i - \hat x_j\|^2$, for all $i, j$, $1 \le i, j \le n+1$, where $\hat D = (\hat d_{ij})$. For the distance matrix $D$, and in the notation of the paragraph above, $s = \frac{e}{n}$ and $\beta = \frac{\lambda_1 + \lambda_{n+1}}{n}$. Let $x_1, \ldots, x_n \in R^n$ be vectors to the $n$ vertices that correspond to $D$, i.e. $d_{ij} = \|x_i - x_j\|^2$, for all $i, j$, $1 \le i, j \le n$. These points lie on a hypersphere of radius $R$, where $R^2 = \|x_i\|^2 = \frac{\lambda_1 + \lambda_{n+1}}{2n}$, for all $i$, $1 \le i \le n$. Let the vectors that correspond to the $n+1$ vertices of $\hat D$ be $\hat x_1 = (x_1, 0), \ldots, \hat x_n = (x_n, 0) \in R^{n+1}$ and $\hat x_{n+1} = (0, t) \in R^{n+1}$; then $\hat d_{ij} = d_{ij}$, for all $i, j$, $1 \le i, j \le n$. Furthermore, the right-most column of $\hat D$ has entries $\hat d_{i(n+1)} = \|\hat x_{n+1} - \hat x_i\|^2 = t^2 + R^2$, for each $i$, $1 \le i \le n$. Finally, we must show that we can choose $t$ so that $t^2 + R^2 = \frac{\rho}{\sqrt{n}}$. But this will be possible only if $R^2 \le \frac{\rho}{\sqrt{n}}$, i.e. $\frac{\lambda_1 + \lambda_{n+1}}{2n} \le \frac{\sqrt{-\lambda_1\lambda_{n+1}}}{\sqrt{n}}$, which we can rewrite, on dividing both sides by $\lambda_1$, as $1 - p \le 2\sqrt{np}$, where $p = \frac{-\lambda_{n+1}}{\lambda_1}$. Since $p \ge 0$ we must have $1 - p \le 1$.
But also, from Lemma 5.1, $p \ge \frac{1}{n}$ implies that $2\sqrt{np} \ge 2 \ge 1 \ge 1 - p$, and the proof is complete.

Note that the above proof works for any distance matrix $D$ with eigenvector $e$, and with the appropriate eigenvalues, not just when $D$ has been constructed using a Hadamard matrix. The same is true for the matrix $D_1$ in our next theorem. Our next theorem bears a
close resemblance to the previous one, although it will solve the inverse eigenvalue problem only in the cases $n = 6, 10, 14$ and $18$.

Theorem 5.7 Let $n = 4, 8, 12$ or $16$, so that there exists a Hadamard matrix of order $n$. Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_{n+1} \ge \lambda_{n+2}$ and $\sum_{i=1}^{n+2} \lambda_i = 0$. Then there is an $(n+2)\times(n+2)$ distance matrix $\hat D$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}, \lambda_{n+2}$.

Proof: Let $D_1 \in R^{n\times n}$ be a distance matrix, constructed using a Hadamard matrix as before, with eigenvalues $\lambda_1 + \lambda_{n+1} + \lambda_{n+2}, \lambda_2, \ldots, \lambda_n$. Let $\hat D = \begin{bmatrix} D_1 & \rho\frac{ee^T}{\sqrt{2n}} \\ \rho\frac{ee^T}{\sqrt{2n}} & D_2 \end{bmatrix}$, where $D_2 = \begin{bmatrix} 0 & -\lambda_{n+1} \\ -\lambda_{n+1} & 0 \end{bmatrix}$, and apply Theorem 5.5 with $A = D_1$, $B = D_2$, $u = \frac{1}{\sqrt{n}}e$, $v = \frac{1}{\sqrt{2}}e$, $\alpha_1 = \lambda_1 + \lambda_{n+1} + \lambda_{n+2}$, $\alpha_i = \lambda_i$ for $2 \le i \le n$, $\beta_1 = -\lambda_{n+1}$, $\beta_2 = \lambda_{n+1}$ and $\sigma = -\lambda_{n+1} - \lambda_{n+2}$. In this case $\rho = \sqrt{(-\lambda_{n+1} - \lambda_{n+2})(\lambda_1 + \lambda_{n+1})}$, and $\hat D$ has the desired eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}, \lambda_{n+2}$. The vectors $x_1, \ldots, x_n$ to the $n$ vertices that correspond to $D_1$ lie on a hypersphere of radius $R$, with $R^2 = \frac{\lambda_1 + \lambda_{n+1} + \lambda_{n+2}}{2n}$. Let the vectors that correspond to the $n+2$ vertices of $\hat D$ be $\hat x_1 = (x_1, 0, 0), \ldots, \hat x_n = (x_n, 0, 0) \in R^{n+2}$ and $\hat x_{n+1} = \left(0, t, \frac{\sqrt{-\lambda_{n+1}}}{2}\right)$, $\hat x_{n+2} = \left(0, t, -\frac{\sqrt{-\lambda_{n+1}}}{2}\right) \in R^{n+2}$; then $\hat d_{ij} = \|\hat x_i - \hat x_j\|^2 = \|x_i - x_j\|^2$, for all $i, j$, $1 \le i, j \le n$, and $\hat d_{ij} = \|\hat x_i - \hat x_j\|^2$, for $n+1 \le i, j \le n+2$. Furthermore, the rightmost two columns of $\hat D$ (excluding the entries of the bottom right block) have entries $\hat d_{i(n+1)} = \|\hat x_{n+1} - \hat x_i\|^2 = R^2 + t^2 - \frac{\lambda_{n+1}}{4} = \hat d_{i(n+2)} = \|\hat x_{n+2} - \hat x_i\|^2$, for each $i$, $1 \le i \le n$. Finally, we must show that we can choose $t$ so that $R^2 + t^2 - \frac{\lambda_{n+1}}{4} = \frac{\rho}{\sqrt{2n}}$, i.e. we must show that
$$\frac{\lambda_1 + \lambda_{n+1} + \lambda_{n+2}}{2n} - \frac{\lambda_{n+1}}{4} \le \sqrt{\frac{(-\lambda_{n+1} - \lambda_{n+2})(\lambda_1 + \lambda_{n+1})}{2n}}.$$
Writing $p = \frac{-\lambda_{n+1}}{\lambda_1}$ and $q = \frac{-\lambda_{n+2}}{\lambda_1}$, we can rewrite the above inequality as
$$\frac{1 - p - q}{2n} + \frac{p}{4} \le \sqrt{\frac{(p+q)(1-p)}{2n}}.$$
Thus we need to show that $f(p, q) \ge 1$, where $f(p, q)$ denotes the ratio of the right-hand side to the left-hand side, on the region $q \ge p$, $p + q \le 1$, and $np \ge 1 - q$ (the last
inequality comes from noting that $\lambda_1 + \lambda_{n+2} = -\lambda_2 - \cdots - \lambda_{n+1} \le n|\lambda_{n+1}|$, since $\sum_{i=1}^{n+2}\lambda_i = 0$, and using Lemma 5.1). These three inequalities describe a triangular region in the $pq$-plane. For fixed $p$, $f(p, q)$ increases as $q$ increases, so we need only check that $f(p, q) \ge 1$ on the lower border (i.e. closest to the $p$-axis) of the triangular region. One edge of this lower border is the segment $p = q$, and in this case $\frac{1}{n+1} \le p \le \frac{1}{2}$. Differentiation of $f(p, p)$ shows that its minimum over this edge is achieved at an end-point, $p = \frac{1}{2}$ or $p = \frac{1}{n+1}$. It is easily checked that $f\left(\frac{1}{2}, \frac{1}{2}\right) = \frac{4}{\sqrt{n}} \ge 1$ precisely because $16 \ge n$, and that $f\left(\frac{1}{n+1}, \frac{1}{n+1}\right) = \frac{4n}{3n-2} \ge 1$ is true in any case. Along the other edge of the lower border, $q = 1 - np$, one finds $\frac{df}{dp} < 0$, and we're done since $f\left(\frac{1}{n+1}, \frac{1}{n+1}\right) \ge 1$.

The methods described above do not appear to extend to the missing cases, particularly the case of $n = 7$. Since we have no evidence that there are any other necessary conditions on the eigenvalues of a distance matrix we make the following conjecture:

Conjecture Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_n$, where $\sum_{i=1}^n \lambda_i = 0$. Then there is a distance matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.

Acknowledgements We are grateful to Charles R. Johnson for posing the inverse eigenvalue problem for distance matrices. The second author received funding as a Postdoctoral Scholar from the Center for Computational Sciences, University of Kentucky.

References

[1] M. Bakonyi and C. R. Johnson, The Euclidean distance matrix completion problem, SIAM J. Matrix Anal. Appl. 16:646-654 (1995).
[2] L. M. Blumenthal, Theory and Applications of Distance Geometry, Oxford University Press, Oxford, 1953. Reprinted by Chelsea Publishing Co., New York, 1970.
[3] M. Boyle and D. Handelman, The spectra of nonnegative matrices via symbolic dynamics, Annals of Mathematics 133:249-316 (1991).
[4] J. P. Crouzeix and J.
Ferland, Criteria for quasiconvexity and pseudoconvexity: relationships and comparisons, Mathematical Programming 23:193-205 (1982).
[5] J. Ferland, Matrix-theoretic criteria for the quasi-convexity of twice continuously differentiable functions, Linear Algebra and its Applications 38:51-63 (1981).
[6] M. Fiedler, Eigenvalues of nonnegative symmetric matrices, Linear Algebra and its Applications 9:119-142 (1974).
[7] J. C. Gower, Euclidean distance geometry, Math. Scientist 7:1-14 (1982).
[8] T. L. Hayden and P. Tarazaga, Distance matrices and regular figures, Linear Algebra and its Applications 195:9-16 (1993).
[9] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[10] R. Loewy and D. London, A note on an inverse problem for nonnegative matrices, Linear and Multilinear Algebra 6:83-90 (1978).
[11] H. Minc, Nonnegative Matrices, John Wiley and Sons, New York, 1988.
[12] H. Perfect, Methods of constructing certain stochastic matrices. II, Duke Math. J. 22:305-311 (1955).
[13] R. Reams, An inequality for nonnegative matrices and the inverse eigenvalue problem, Linear and Multilinear Algebra 41:367-375 (1996).
[14] I. J. Schoenberg, Remarks to Maurice Fréchet's article "Sur la définition axiomatique d'une classe d'espaces distanciés vectoriellement applicable sur l'espace de Hilbert", Ann. Math. 36(3):724-732 (1935).
[15] I. J. Schoenberg, Metric spaces and positive definite functions, Trans. Amer. Math. Soc. 44:522-536 (1938).
[16] J. R. Seberry and M. Yamada, Hadamard matrices, sequences and block designs, in J. H. Dinitz and D. R. Stinson (Eds.), Contemporary Design Theory: A Collection of Surveys, John Wiley, New York, 1992.
[17] H. R. Suleimanova, Stochastic matrices with real characteristic numbers (Russian), Doklady Akad. Nauk SSSR (N.S.) 66:343-345 (1949).
[18] P. Tarazaga, T. L. Hayden and J. Wells, Circum-Euclidean distance matrices and faces, Linear Algebra and its Applications 232:77-96 (1996).
[19] W. D. Wallis, Hadamard matrices, in R. A. Brualdi, S. Friedland and V. Klee (Eds.), Combinatorial and Graph-Theoretical Problems in Linear Algebra, I.M.A. Vol. 50, Springer-Verlag, pp. 235-243, 1993.
More informationIntegral Similarity and Commutators of Integral Matrices Thomas J. Laffey and Robert Reams (University College Dublin, Ireland) Abstract
Integral Similarity and Commutators of Integral Matrices Thomas J. Laffey and Robert Reams (University College Dublin, Ireland) Abstract Let F be a field, M n (F ) the algebra of n n matrices over F and
More informationThroughout these notes we assume V, W are finite dimensional inner product spaces over C.
Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal
More information1 Principal component analysis and dimensional reduction
Linear Algebra Working Group :: Day 3 Note: All vector spaces will be finite-dimensional vector spaces over the field R. 1 Principal component analysis and dimensional reduction Definition 1.1. Given an
More informationA Note on the Symmetric Nonnegative Inverse Eigenvalue Problem 1
International Mathematical Forum, Vol. 6, 011, no. 50, 447-460 A Note on the Symmetric Nonnegative Inverse Eigenvalue Problem 1 Ricardo L. Soto and Ana I. Julio Departamento de Matemáticas Universidad
More informationFiedler s Theorems on Nodal Domains
Spectral Graph Theory Lecture 7 Fiedler s Theorems on Nodal Domains Daniel A Spielman September 9, 202 7 About these notes These notes are not necessarily an accurate representation of what happened in
More informationChapter 6: Orthogonality
Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationKernels of Directed Graph Laplacians. J. S. Caughman and J.J.P. Veerman
Kernels of Directed Graph Laplacians J. S. Caughman and J.J.P. Veerman Department of Mathematics and Statistics Portland State University PO Box 751, Portland, OR 97207. caughman@pdx.edu, veerman@pdx.edu
More information8. Diagonalization.
8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard
More informationScaling of Symmetric Matrices by Positive Diagonal Congruence
Scaling of Symmetric Matrices by Positive Diagonal Congruence Abstract Charles R. Johnson a and Robert Reams b a Department of Mathematics, College of William and Mary, P.O. Box 8795, Williamsburg, VA
More informationGeometric Mapping Properties of Semipositive Matrices
Geometric Mapping Properties of Semipositive Matrices M. J. Tsatsomeros Mathematics Department Washington State University Pullman, WA 99164 (tsat@wsu.edu) July 14, 2015 Abstract Semipositive matrices
More information2. Matrix Algebra and Random Vectors
2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns
More informationTwo Characterizations of Matrices with the Perron-Frobenius Property
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2009;??:1 6 [Version: 2002/09/18 v1.02] Two Characterizations of Matrices with the Perron-Frobenius Property Abed Elhashash and Daniel
More informationREALIZABILITY BY SYMMETRIC NONNEGATIVE MATRICES
Proyecciones Vol. 4, N o 1, pp. 65-78, May 005. Universidad Católica del Norte Antofagasta - Chile REALIZABILITY BY SYMMETRIC NONNEGATIVE MATRICES RICARDO L. SOTO Universidad Católica del Norte, Chile
More informationChapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors
Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there
More informationPrincipal Components Theory Notes
Principal Components Theory Notes Charles J. Geyer August 29, 2007 1 Introduction These are class notes for Stat 5601 (nonparametrics) taught at the University of Minnesota, Spring 2006. This not a theory
More informationCONCENTRATION OF THE MIXED DISCRIMINANT OF WELL-CONDITIONED MATRICES. Alexander Barvinok
CONCENTRATION OF THE MIXED DISCRIMINANT OF WELL-CONDITIONED MATRICES Alexander Barvinok Abstract. We call an n-tuple Q 1,...,Q n of positive definite n n real matrices α-conditioned for some α 1 if for
More informationNOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES
NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES MIKE BOYLE. Introduction By a nonnegative matrix we mean a matrix whose entries are nonnegative real numbers. By positive matrix we mean a matrix
More informationA lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo
A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016
U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest
More informationSome New Results on Lyapunov-Type Diagonal Stability
Some New Results on Lyapunov-Type Diagonal Stability Mehmet Gumus (Joint work with Dr. Jianhong Xu) Department of Mathematics Southern Illinois University Carbondale 12/01/2016 mgumus@siu.edu (SIUC) Lyapunov-Type
More informationSymmetric matrices and dot products
Symmetric matrices and dot products Proposition An n n matrix A is symmetric iff, for all x, y in R n, (Ax) y = x (Ay). Proof. If A is symmetric, then (Ax) y = x T A T y = x T Ay = x (Ay). If equality
More informationAlgebra II. Paulius Drungilas and Jonas Jankauskas
Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationBounds for Levinger s function of nonnegative almost skew-symmetric matrices
Linear Algebra and its Applications 416 006 759 77 www.elsevier.com/locate/laa Bounds for Levinger s function of nonnegative almost skew-symmetric matrices Panayiotis J. Psarrakos a, Michael J. Tsatsomeros
More informationExtra Problems for Math 2050 Linear Algebra I
Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as
More informationSome bounds for the spectral radius of the Hadamard product of matrices
Some bounds for the spectral radius of the Hadamard product of matrices Guang-Hui Cheng, Xiao-Yu Cheng, Ting-Zhu Huang, Tin-Yau Tam. June 1, 2004 Abstract Some bounds for the spectral radius of the Hadamard
More informationTutorials in Optimization. Richard Socher
Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2
More informationLecture 1: Review of linear algebra
Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations
More informationFiedler s Theorems on Nodal Domains
Spectral Graph Theory Lecture 7 Fiedler s Theorems on Nodal Domains Daniel A. Spielman September 19, 2018 7.1 Overview In today s lecture we will justify some of the behavior we observed when using eigenvectors
More informationLINEAR MAPS PRESERVING NUMERICAL RADIUS OF TENSOR PRODUCTS OF MATRICES
LINEAR MAPS PRESERVING NUMERICAL RADIUS OF TENSOR PRODUCTS OF MATRICES AJDA FOŠNER, ZEJUN HUANG, CHI-KWONG LI, AND NUNG-SING SZE Abstract. Let m, n 2 be positive integers. Denote by M m the set of m m
More informationDuke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014
Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationLecture 8: Linear Algebra Background
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 8: Linear Algebra Background Lecturer: Shayan Oveis Gharan 2/1/2017 Scribe: Swati Padmanabhan Disclaimer: These notes have not been subjected
More informationPerron-Frobenius theorem for nonnegative multilinear forms and extensions
Perron-Frobenius theorem for nonnegative multilinear forms and extensions Shmuel Friedland Univ. Illinois at Chicago ensors 18 December, 2010, Hong-Kong Overview 1 Perron-Frobenius theorem for irreducible
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationThis property turns out to be a general property of eigenvectors of a symmetric A that correspond to distinct eigenvalues as we shall see later.
34 To obtain an eigenvector x 2 0 2 for l 2 = 0, define: B 2 A - l 2 I 2 = È 1, 1, 1 Î 1-0 È 1, 0, 0 Î 1 = È 1, 1, 1 Î 1. To transform B 2 into an upper triangular matrix, subtract the first row of B 2
More informationOn nonnegative realization of partitioned spectra
Electronic Journal of Linear Algebra Volume Volume (0) Article 5 0 On nonnegative realization of partitioned spectra Ricardo L. Soto Oscar Rojo Cristina B. Manzaneda Follow this and additional works at:
More informationMATH 581D FINAL EXAM Autumn December 12, 2016
MATH 58D FINAL EXAM Autumn 206 December 2, 206 NAME: SIGNATURE: Instructions: there are 6 problems on the final. Aim for solving 4 problems, but do as much as you can. Partial credit will be given on all
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More informationSpectral inequalities and equalities involving products of matrices
Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department
More informationIRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS. Contents
IRREDUCIBLE REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS NEEL PATEL Abstract. The goal of this paper is to study the irreducible representations of semisimple Lie algebras. We will begin by considering two
More informationIBM Research Report. Conditions for Separability in Generalized Laplacian Matrices and Diagonally Dominant Matrices as Density Matrices
RC23758 (W58-118) October 18, 25 Physics IBM Research Report Conditions for Separability in Generalized Laplacian Matrices and Diagonally Dominant Matrices as Density Matrices Chai Wah Wu IBM Research
More informationBasic Concepts in Matrix Algebra
Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1
More informationSymmetric nonnegative realization of spectra
Electronic Journal of Linear Algebra Volume 6 Article 7 Symmetric nonnegative realization of spectra Ricardo L. Soto rsoto@bh.ucn.cl Oscar Rojo Julio Moro Alberto Borobia Follow this and additional works
More informationCentral Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J
Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Frank Curtis, John Drew, Chi-Kwong Li, and Daniel Pragel September 25, 2003 Abstract We study central groupoids, central
More informationHamilton Cycles in Digraphs of Unitary Matrices
Hamilton Cycles in Digraphs of Unitary Matrices G. Gutin A. Rafiey S. Severini A. Yeo Abstract A set S V is called an q + -set (q -set, respectively) if S has at least two vertices and, for every u S,
More informationLinear algebra and applications to graphs Part 1
Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces
More informationLinear Operators Preserving the Numerical Range (Radius) on Triangular Matrices
Linear Operators Preserving the Numerical Range (Radius) on Triangular Matrices Chi-Kwong Li Department of Mathematics, College of William & Mary, P.O. Box 8795, Williamsburg, VA 23187-8795, USA. E-mail:
More informationReview of Some Concepts from Linear Algebra: Part 2
Review of Some Concepts from Linear Algebra: Part 2 Department of Mathematics Boise State University January 16, 2019 Math 566 Linear Algebra Review: Part 2 January 16, 2019 1 / 22 Vector spaces A set
More informationCLASSIFICATION OF TREES EACH OF WHOSE ASSOCIATED ACYCLIC MATRICES WITH DISTINCT DIAGONAL ENTRIES HAS DISTINCT EIGENVALUES
Bull Korean Math Soc 45 (2008), No 1, pp 95 99 CLASSIFICATION OF TREES EACH OF WHOSE ASSOCIATED ACYCLIC MATRICES WITH DISTINCT DIAGONAL ENTRIES HAS DISTINCT EIGENVALUES In-Jae Kim and Bryan L Shader Reprinted
More informationReview of Linear Algebra
Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=
More informationPart IA. Vectors and Matrices. Year
Part IA Vectors and Matrices Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2018 Paper 1, Section I 1C Vectors and Matrices For z, w C define the principal value of z w. State de Moivre s
More informationarxiv: v1 [math.ca] 7 Jan 2015
Inertia of Loewner Matrices Rajendra Bhatia 1, Shmuel Friedland 2, Tanvi Jain 3 arxiv:1501.01505v1 [math.ca 7 Jan 2015 1 Indian Statistical Institute, New Delhi 110016, India rbh@isid.ac.in 2 Department
More informationEigenvalues and Eigenvectors
Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n
More informationSOMEWHAT STOCHASTIC MATRICES
SOMEWHAT STOCHASTIC MATRICES BRANKO ĆURGUS AND ROBERT I. JEWETT Abstract. The standard theorem for stochastic matrices with positive entries is generalized to matrices with no sign restriction on the entries.
More informationA Simple Proof of Fiedler's Conjecture Concerning Orthogonal Matrices
University of Wyoming Wyoming Scholars Repository Mathematics Faculty Publications Mathematics 9-1-1997 A Simple Proof of Fiedler's Conjecture Concerning Orthogonal Matrices Bryan L. Shader University
More informationb jσ(j), Keywords: Decomposable numerical range, principal character AMS Subject Classification: 15A60
On the Hu-Hurley-Tam Conjecture Concerning The Generalized Numerical Range Che-Man Cheng Faculty of Science and Technology, University of Macau, Macau. E-mail: fstcmc@umac.mo and Chi-Kwong Li Department
More informationOn Euclidean distance matrices
Linear Algebra and its Applications 424 2007 108 117 www.elsevier.com/locate/laa On Euclidean distance matrices R. Balaji 1, R.B. Bapat Indian Statistical Institute, Delhi Centre, 7 S.J.S.S. Marg, New
More informationMarkov Chains, Stochastic Processes, and Matrix Decompositions
Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral
More informationOn Hadamard Diagonalizable Graphs
On Hadamard Diagonalizable Graphs S. Barik, S. Fallat and S. Kirkland Department of Mathematics and Statistics University of Regina Regina, Saskatchewan, Canada S4S 0A2 Abstract Of interest here is a characterization
More informationOPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY
published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationInformation About Ellipses
Information About Ellipses David Eberly, Geometric Tools, Redmond WA 9805 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International License. To view
More informationLecture 3: Review of Linear Algebra
ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,
More informationImproved Upper Bounds for the Laplacian Spectral Radius of a Graph
Improved Upper Bounds for the Laplacian Spectral Radius of a Graph Tianfei Wang 1 1 School of Mathematics and Information Science Leshan Normal University, Leshan 614004, P.R. China 1 wangtf818@sina.com
More informationLecture 23: 6.1 Inner Products
Lecture 23: 6.1 Inner Products Wei-Ta Chu 2008/12/17 Definition An inner product on a real vector space V is a function that associates a real number u, vwith each pair of vectors u and v in V in such
More informationQUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY
QUALITATIVE CONTROLLABILITY AND UNCONTROLLABILITY BY A SINGLE ENTRY D.D. Olesky 1 Department of Computer Science University of Victoria Victoria, B.C. V8W 3P6 Michael Tsatsomeros Department of Mathematics
More informationLinear Algebra- Final Exam Review
Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.
More information2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian
FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian
More informationLecture notes on Quantum Computing. Chapter 1 Mathematical Background
Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For
More informationFunctional Analysis Review
Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all
More informationMatrix functions that preserve the strong Perron- Frobenius property
Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 18 2015 Matrix functions that preserve the strong Perron- Frobenius property Pietro Paparella University of Washington, pietrop@uw.edu
More informationFour new upper bounds for the stability number of a graph
Four new upper bounds for the stability number of a graph Miklós Ujvári Abstract. In 1979, L. Lovász defined the theta number, a spectral/semidefinite upper bound on the stability number of a graph, which
More information