Methods for Constructing Distance Matrices and the Inverse Eigenvalue Problem

Thomas L. Hayden, Robert Reams and James Wells
Department of Mathematics, University of Kentucky, Lexington, KY 40506
Department of Mathematics, College of William & Mary, Williamsburg, VA 23187-8795

Abstract

Let $D_1 \in \mathbb{R}^{k \times k}$ and $D_2 \in \mathbb{R}^{l \times l}$ be two distance matrices. We provide necessary conditions on $Z \in \mathbb{R}^{k \times l}$ in order that $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix} \in \mathbb{R}^{n \times n}$ be a distance matrix. We then show that it is always possible to border an $n \times n$ distance matrix, with certain scalar multiples of its Perron eigenvector, to construct an $(n+1) \times (n+1)$ distance matrix. We also give necessary and sufficient conditions for two principal distance matrix blocks $D_1$ and $D_2$ to be used to form a distance matrix as above, where $Z$ is a scalar multiple of a rank one matrix formed from their Perron eigenvectors. Finally, we solve the inverse eigenvalue problem for distance matrices in certain special cases, including $n = 3, 4, 5, 6$, any $n$ for which there exists a Hadamard matrix, and some other cases.

1. Introduction

A matrix $D = (d_{ij}) \in \mathbb{R}^{n \times n}$ is said to be a (squared) distance matrix if there are vectors $x_1, x_2, \ldots, x_n \in \mathbb{R}^r$ ($1 \le r \le n$) such that $d_{ij} = \|x_i - x_j\|^2$, for all $i, j = 1, 2, \ldots, n$, where $\|\cdot\|$ denotes the Euclidean norm. Note that this definition allows the $n \times n$ zero distance matrix.
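
To make the definition concrete, here is a minimal numpy sketch (ours, not the authors') that builds the squared distance matrix of a point configuration and exhibits the two spectral facts used repeatedly below: exactly one positive eigenvalue, and zero trace.

```python
import numpy as np

def distance_matrix(points):
    """Squared-distance matrix d_ij = ||x_i - x_j||^2 for the rows of `points`."""
    sq = np.sum(points ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * points @ points.T

# Four points at the corners of the unit square.
X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = distance_matrix(X)

print(D)                                   # zero diagonal, nonnegative entries
print(np.sort(np.linalg.eigvalsh(D)))      # one positive eigenvalue: [-2, -2, 0, 4]
print(np.trace(D))                         # 0, since the diagonal is zero
```

For the unit square the spectrum is $\{4, 0, -2, -2\}$: one positive eigenvalue and zero sum, the pattern that drives the inverse eigenvalue problem of Section 5.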

Because of its centrality in formulating our results, we begin Section 2 with a theorem of Crouzeix and Ferland [4]. Our proof reorganizes and simplifies their argument and, in addition, corrects a gap in their argument that C2 implies C1 (see below). We then show in Section 3 that this theorem implies some necessary conditions on the block matrix $Z$, in order that two principal distance matrix blocks $D_1$ and $D_2$ can be used to construct the distance matrix $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix} \in \mathbb{R}^{n \times n}$. A different approach has been considered by Bakonyi and Johnson [1], who show that if a partial distance matrix has a chordal graph (a matrix with two principal distance matrix blocks has a chordal graph), then there is a one-entry-at-a-time procedure to complete the matrix to a distance matrix. Generally, graph theoretic techniques were employed in these studies, following similar methods used in the completion of positive semidefinite matrices.

In Section 4, we use Fiedler's construction of nonnegative symmetric matrices [6] to show that an $n \times n$ distance matrix can always be bordered with certain scalar multiples of its Perron eigenvector to form an $(n+1) \times (n+1)$ distance matrix. Further, starting with two principal distance matrix blocks $D_1$ and $D_2$, we give necessary and sufficient conditions on $Z$, where $Z$ is a rank one matrix formed from the Perron eigenvectors, in order that $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix} \in \mathbb{R}^{n \times n}$ be a distance matrix. Finally, in Section 5, we show that if a symmetric matrix has eigenvector $e = (1, 1, \ldots, 1)^T$, just one positive eigenvalue and zeroes on the diagonal, then it is a distance matrix. It will follow that by performing an orthogonal similarity of a trace zero diagonal matrix with (essentially) a Hadamard matrix, where the diagonal matrix has just one positive eigenvalue, we obtain a distance matrix. We then show that the above methods lead, in some special cases, to a solution of the inverse eigenvalue problem for distance matrices. That is, given one nonnegative real number and $n - 1$ nonpositive real numbers, with the sum of these $n$ numbers equal to zero, can one construct a distance matrix with these numbers as its eigenvalues?

We will, as usual, denote by $e_i$ the vector with a one in the $i$th position and zeroes elsewhere.

2. Almost positive semidefinite matrices

A symmetric matrix $A$ is said to be almost positive semidefinite (or conditionally positive semidefinite) if $x^T A x \ge 0$ for all $x \in \mathbb{R}^n$ such that $x^T e = 0$. Theorem 2.1 gives some useful equivalent conditions in the theory of almost positive semidefinite matrices.

Theorem 2.1 (Crouzeix, Ferland) Let $A \in \mathbb{R}^{n \times n}$ be symmetric, and $a \in \mathbb{R}^n$, $a \ne 0$. Then the following are equivalent:

C1. $h^T a = 0$ implies $h^T A h \ge 0$.

C2. $A$ is positive semidefinite, or $A$ has just one negative eigenvalue and there exists $b \in \mathbb{R}^n$ such that $Ab = a$, where $b^T a \le 0$.

C3. The bordered matrix $\begin{bmatrix} A & a \\ a^T & 0 \end{bmatrix}$ has just one negative eigenvalue.

Proof: C1 implies C2: Since $A$ is positive semidefinite on a subspace of dimension $n - 1$, $A$ has at most one negative eigenvalue. If there doesn't exist $b \in \mathbb{R}^n$ such that $Ab = a$, then writing $a = x + y$, where $x \in \ker A$ and $y \in \operatorname{range} A$, we must have $x \ne 0$. But then $x^T a = x^T x + x^T y = x^T x \ne 0$. For any $v \in \mathbb{R}^n$, define $h = \left(I - \frac{x a^T}{x^T a}\right) v = v - \frac{a^T v}{x^T a} x$; then $h^T a = 0$, and $h^T A h = v^T A v \ge 0$, so $A$ is positive semidefinite. Suppose $A$ is not positive semidefinite, $Ab = a$ and $b^T a \ne 0$. For $v \in \mathbb{R}^n$, let $h = \left(I - \frac{b a^T}{b^T a}\right) v$; thus we can write any $v \in \mathbb{R}^n$ as $v = h + \frac{a^T v}{b^T a} b$, where $h^T a = 0$. Then

$v^T A v = h^T A h + 2 \frac{a^T v}{b^T a} h^T A b + \left( \frac{a^T v}{b^T a} \right)^2 b^T A b \ge \left( \frac{a^T v}{b^T a} \right)^2 b^T a$

(since $h^T A b = h^T a = 0$ and $b^T A b = b^T a$), and choosing $v$ so that $0 > v^T A v$ we see that $0 > b^T a$.

C2 implies C1: If $A$ is positive semidefinite we're done. Otherwise, let $\lambda$ be the negative eigenvalue of $A$, with unit eigenvector $u$. Write $A = C^T C + \lambda u u^T$, so that $a = Ab = C^T C b + \lambda (u^T b) u$. If $u^T b = 0$ then $0 \ge b^T a = b^T C^T C b$, which implies $Cb = 0$; but then $a = 0$ also, contradicting the hypotheses of our theorem. So we must have $u^T b \ne 0$. Again, $a = C^T C b + \lambda (u^T b) u$ gives $0 \ge b^T a = b^T C^T C b + \lambda (u^T b)^2$, which we can rewrite as $\frac{1}{\lambda (u^T b)^2} \ge -\frac{1}{b^T C^T C b}$. Noting also that $u = \frac{1}{\lambda (u^T b)}[a - C^T C b]$, we have $A = C^T C + \frac{1}{\lambda (u^T b)^2}[a - C^T C b][a - C^T C b]^T$. If $h^T a = 0$ then

$h^T A h = h^T C^T C h + \frac{1}{\lambda (u^T b)^2}(h^T C^T C b)^2 \ge h^T C^T C h - \frac{(h^T C^T C b)^2}{b^T C^T C b} = \|Ch\|^2 - \frac{[(Ch)^T (Cb)]^2}{\|Cb\|^2} \ge 0,$

where the last inequality is from Cauchy-Schwarz. Finally, if $b^T C^T C b = 0$ then $Cb = 0$, which implies $a = \lambda (u^T b) u$. Then $h^T a = 0$ implies $h^T u = 0$, so that $h^T A h = h^T (C^T C + \lambda u u^T) h = h^T C^T C h \ge 0$, as required.

The equivalence of C1 and C3 was proved by Ferland in [5].
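
The equivalence of C1 and C3 is easy to probe numerically. Below is a sketch with our own naming (not code from the paper): C1 is tested by compressing $A$ onto an orthonormal basis of the hyperplane orthogonal to $a$, obtained here from a QR factorization, and C3 by counting negative eigenvalues of the bordered matrix.

```python
import numpy as np

def c1_holds(A, a, tol=1e-10):
    """C1: h^T A h >= 0 whenever h^T a = 0, tested by compressing A onto an
    orthonormal basis of the hyperplane orthogonal to a."""
    Q, _ = np.linalg.qr(np.column_stack([a, np.eye(len(a))]))
    B = Q[:, 1:]                             # columns span the complement of a
    return bool(np.all(np.linalg.eigvalsh(B.T @ A @ B) >= -tol))

def c3_holds(A, a, tol=1e-10):
    """C3: [[A, a], [a^T, 0]] has just one negative eigenvalue."""
    M = np.block([[A, a[:, None]], [a[None, :], np.zeros((1, 1))]])
    return int(np.sum(np.linalg.eigvalsh(M) < -tol)) == 1

rng = np.random.default_rng(0)
agree = 0
for _ in range(500):
    G = rng.standard_normal((4, 4))
    A = (G + G.T) / 2
    a = rng.standard_normal(4)
    agree += c1_holds(A, a) == c3_holds(A, a)
print(agree, "of 500")                       # the two tests agree on every trial
```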

Schoenberg [15] (see also Blumenthal [2, p. 106]) showed that a symmetric matrix $D = (d_{ij}) \in \mathbb{R}^{n \times n}$ (with $d_{ii} = 0$, $1 \le i \le n$) is a distance matrix if and only if $x^T D x \le 0$ for all $x \in \mathbb{R}^n$ such that $x^T e = 0$, i.e. if and only if $D$ is almost negative semidefinite. Note that $D$ almost negative semidefinite with zeroes on the diagonal implies that $D$ is a nonnegative matrix, since $(e_i - e_j)^T e = 0$ and so $(e_i - e_j)^T D (e_i - e_j) = -2 d_{ij} \le 0$.

Let $s \in \mathbb{R}^n$, where $s^T e = 1$. Gower [7] proved that $D$ is a distance matrix if and only if $(I - e s^T) D (I - s e^T)$ is negative semidefinite. (This follows since for any $y \in \mathbb{R}^n$, $x = (I - s e^T) y$ is orthogonal to $e$. Conversely, if $x^T e = 0$ then $x^T (I - e s^T) D (I - s e^T) x = x^T D x$.) The vectors from the origin to the $n$ vertices are the columns of $X \in \mathbb{R}^{n \times n}$ in $-\frac{1}{2}(I - e s^T) D (I - s e^T) = X^T X$. (This last fact follows by writing $2 X^T X = -D + f e^T + e f^T$, where $f = Ds - \frac{1}{2}(s^T D s) e$; then $(e_i - e_j)^T \, 2 X^T X \, (e_i - e_j) = -(e_i - e_j)^T D (e_i - e_j) = -(d_{ii} + d_{jj} - 2 d_{ij}) = 2 d_{ij}$, so that $d_{ij} = \|X e_i - X e_j\|^2$.) These results imply a useful theorem for later sections (see also [7], [18]).

Theorem 2.2 Let $D = (d_{ij}) \in \mathbb{R}^{n \times n}$, with $d_{ii} = 0$ for all $i$, $1 \le i \le n$, and suppose that $D$ is not the zero matrix. Then $D$ is a distance matrix if and only if $D$ has just one positive eigenvalue and there exists $w \in \mathbb{R}^n$ such that $Dw = e$ and $w^T e \ge 0$.

Proof: This follows from taking $A = -D$ and $a = e$ in Theorem 2.1, Schoenberg's result, and the fact that $d_{ii} = 0$, for all $i$, $1 \le i \le n$, implies $D$ can't be negative semidefinite.

Remark: Observe that if there are two vectors $w_1, w_2 \in \mathbb{R}^n$ such that $D w_1 = D w_2 = e$, then $w_1 - w_2 = u \in \ker(D)$, and so $e^T w_1 - e^T w_2 = e^T u$. But $e^T u = w_1^T D u = 0$, so we can conclude that $w_1^T e = w_2^T e$.
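
Theorem 2.2 and Gower's factorization together give a practical test-and-embed pair. A numpy sketch (our code; `lstsq` stands in for solving $Dw = e$ because $D$ may be singular):

```python
import numpy as np

def is_distance_matrix(D, tol=1e-8):
    """Theorem 2.2 test: zero diagonal, exactly one positive eigenvalue,
    and a solution w of Dw = e with w^T e >= 0 (so e must lie in range(D))."""
    n = D.shape[0]
    e = np.ones(n)
    if not (np.allclose(D, D.T) and np.allclose(np.diag(D), 0)):
        return False
    if np.sum(np.linalg.eigvalsh(D) > tol) != 1:
        return False
    w = np.linalg.lstsq(D, e, rcond=None)[0]    # lstsq copes with singular D
    return bool(np.allclose(D @ w, e, atol=tol) and w @ e >= -tol)

def embed(D):
    """Gower: X^T X = -(1/2)(I - e s^T) D (I - s e^T); columns of X are points."""
    n = D.shape[0]
    e = np.ones(n)
    s = e / n                                   # any s with s^T e = 1 will do
    J = np.eye(n) - np.outer(e, s)
    G = -0.5 * J @ D @ J.T
    ev, V = np.linalg.eigh(G)
    return (V * np.sqrt(np.clip(ev, 0, None))) @ V.T   # symmetric square root

D = np.array([[0., 1., 2., 1.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [1., 2., 1., 0.]])                # the unit square again
print(is_distance_matrix(D))                    # True
X = embed(D)
print(np.round(np.sum((X[:, 0] - X[:, 1]) ** 2), 10))   # 1.0, reproducing d_12
```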

3. Construction of distance matrices

Let $D_1 \in \mathbb{R}^{k \times k}$ be a nonzero distance matrix, and $D_2 \in \mathbb{R}^{l \times l}$ a distance matrix. Of course, thinking geometrically, there are many choices for $Z \in \mathbb{R}^{k \times l}$, $k + l = n$, such that $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix} \in \mathbb{R}^{n \times n}$ persists in being a distance matrix. Our first theorem provides some necessary conditions that any such $Z$ must satisfy.

Theorem 3.1 If $Z$ is chosen so that $D$ is a distance matrix, then $Z = D_1 W$, for some $W \in \mathbb{R}^{k \times l}$. Further, $D_2 - W^T D_1 W$ is negative semidefinite.

Proof: Let $Z e_i = z_i$, $1 \le i \le l$. Since $D = \begin{bmatrix} D_1 & Z \\ Z^T & D_2 \end{bmatrix}$ is a distance matrix, the $(k+1) \times (k+1)$ bordered matrix $S_i = \begin{bmatrix} D_1 & z_i \\ z_i^T & 0 \end{bmatrix}$ is a distance matrix also, for $1 \le i \le l$. This implies that $S_i$ has just one positive eigenvalue, and so $z_i = D_1 w_i$, using condition C3 in Theorem 2.1. Thus $Z = D_1 W$, where $W e_i = w_i$, $1 \le i \le l$. The last part follows from the identity

$\begin{bmatrix} D_1 & D_1 W \\ W^T D_1 & D_2 \end{bmatrix} = \begin{bmatrix} I & 0 \\ W^T & I \end{bmatrix} \begin{bmatrix} D_1 & 0 \\ 0 & D_2 - W^T D_1 W \end{bmatrix} \begin{bmatrix} I & W \\ 0 & I \end{bmatrix}$

and Sylvester's theorem [9], since the matrix on the left and $D_1$ on the right both have just one positive eigenvalue.

Remark: In claiming that $D_1$ is a nonzero distance matrix we mean that $k \ge 2$. If $D_1$ happened to be a zero distance matrix (for example when $k = 1$), then the off-diagonal block $Z$ need not necessarily be in the range of that zero $D_1$ block. In the proof above, if $S_i$ has just one positive eigenvalue we can only apply C3 and conclude that $z_i = D_1 w_i$ when $D_1$ is not negative semidefinite (i.e. when $D_1$ is not a zero distance matrix). The converse of Theorem 3.1 is not necessarily true: consider the case where $l = 1$, $S = \begin{bmatrix} D_1 & z \\ z^T & 0 \end{bmatrix}$ and $z = D_1 w$, with $w^T D_1 w \ge 0$. We will see in the next section that if $z$ is the Perron eigenvector for $D_1$, only certain scalar multiples of $z$ will cause $S$ to be a distance matrix.

If $D_1$ and $D_2$ are both nonzero distance matrices then we can state Theorem 3.1 in the following form, where the same proof works with the rows of $Z$.

Theorem 3.2 If $Z$ is chosen so that $D$ is a distance matrix, then $Z = D_1 W = V D_2$, for some $W, V \in \mathbb{R}^{k \times l}$. Further, $D_2 - W^T D_1 W$ and $D_1 - V D_2 V^T$ are both negative semidefinite.
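
The necessary conditions of Theorem 3.1 can be checked mechanically. A hedged sketch (our helper, not the authors'): a failure certifies that this block arrangement is not a distance matrix, while passing is necessary but not sufficient.

```python
import numpy as np

def passes_theorem_3_1(D1, D2, Z, tol=1e-8):
    """Necessary conditions of Theorem 3.1: Z = D1 W for some W (each column
    of Z in range(D1)) and D2 - W^T D1 W negative semidefinite."""
    W = np.linalg.lstsq(D1, Z, rcond=None)[0]
    if not np.allclose(D1 @ W, Z, atol=tol):    # Z is not in the range of D1
        return False
    return bool(np.all(np.linalg.eigvalsh(D2 - W.T @ D1 @ W) <= tol))

# The unit-square matrix split into 2x2 blocks passes, as it must.
D1 = np.array([[0., 1.], [1., 0.]])
D2 = np.array([[0., 1.], [1., 0.]])
Z = np.array([[2., 1.], [1., 2.]])
print(passes_theorem_3_1(D1, D2, Z))            # True
```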

4. Using the Perron vector to construct a distance matrix

As a consequence of the Perron-Frobenius Theorem [9], since a distance matrix $D$ has all nonnegative entries it has a real eigenvalue $r \ge |\lambda|$ for all eigenvalues $\lambda$ of $D$. Furthermore, the eigenvector that corresponds to $r$ has nonnegative entries. We will use the notation that the vector $e$ takes on the correct number of components depending on the context.

Theorem 4.1 Let $D \in \mathbb{R}^{n \times n}$ be a nonzero distance matrix with $Du = ru$, where $r$ is the Perron eigenvalue and $u$ is a unit Perron eigenvector. Let $Dw = e$, $w^T e \ge 0$ and $\rho > 0$. Then the bordered matrix $\hat{D} = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix}$ is a distance matrix if and only if $\rho$ lies in the interval $[\alpha_-, \alpha_+]$, where $\alpha_\pm = \dfrac{r}{u^T e \mp \sqrt{r \, e^T w}}$.

Proof: We first show that $\hat{D}$ has just one positive eigenvalue and $e$ is in the range of $\hat{D}$. From [6] the set of eigenvalues of $\hat{D}$ consists of the $n - 1$ nonpositive eigenvalues of $D$, as well as the two eigenvalues of the matrix $\begin{bmatrix} r & \rho \\ \rho & 0 \end{bmatrix}$, namely $\frac{r}{2} \pm \sqrt{\frac{r^2}{4} + \rho^2}$. Thus $\hat{D}$ has just one positive eigenvalue. It is easily checked that $\hat{D} \hat{w} = e$ if

$\hat{w} = \begin{bmatrix} w - \left(\frac{u^T e}{r} - \frac{1}{\rho}\right) u \\ \frac{1}{\rho}\left(u^T e - \frac{r}{\rho}\right) \end{bmatrix}.$

It remains to determine for which values of $\rho$ we have $e^T \hat{w} \ge 0$, i.e.

$e^T w - \frac{1}{r}(u^T e)^2 + \frac{2}{\rho}(u^T e) - \frac{r}{\rho^2} = -\frac{1}{r}\left[u^T e - \sqrt{r \, e^T w} - \frac{r}{\rho}\right]\left[u^T e + \sqrt{r \, e^T w} - \frac{r}{\rho}\right] \ge 0.$

After multiplying across by $\rho^2$, we see that this inequality holds precisely when $\rho$ is between the two roots $\alpha_\pm = \frac{r}{u^T e \mp \sqrt{r \, e^T w}}$ of the quadratic. We can be sure of the existence of $\rho > 0$ by arguing as follows. $Dw = e$ implies $u^T D w = r u^T w = u^T e$. Also, $D = r u u^T + \lambda_2 u_2 u_2^T + \cdots$ (the spectral decomposition of $D$) implies $w^T e = w^T D w = r (w^T u)^2 + \lambda_2 (w^T u_2)^2 + \cdots \le r (w^T u)^2$, i.e. $w^T e \le r \left(\frac{u^T e}{r}\right)^2 = \frac{(u^T e)^2}{r}$, so that $(u^T e)^2 \ge r \, e^T w$, which completes the proof.

Example: Let $D$ be the distance matrix which corresponds to a unit square in the plane, so that $De = 4e$. In this case $\alpha_- = 1$ and $\alpha_+ = \infty$, and the new figure that corresponds to $\hat{D} = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix}$, for $\rho \in [\alpha_-, \alpha_+]$, has an additional point on a ray through the center of the square and perpendicular to the plane.
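
The square example is easy to replay numerically. The sketch below (ours) recovers $r$, $u$ and $w$, computes the interval endpoints, and confirms that a bordered matrix with $\rho$ in the interval keeps exactly one positive eigenvalue.

```python
import numpy as np

# Border the unit-square distance matrix as in Theorem 4.1.
D = np.array([[0., 1., 2., 1.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [1., 2., 1., 0.]])
n = D.shape[0]
e = np.ones(n)

ev, V = np.linalg.eigh(D)
r = ev[-1]                                   # Perron eigenvalue, r = 4
u = np.abs(V[:, -1])                         # unit Perron eigenvector, u = e/2
w = np.linalg.lstsq(D, e, rcond=None)[0]     # Dw = e (D is singular here), w = e/4

root = np.sqrt(r * (e @ w))                  # sqrt(r e^T w)
alpha_minus = r / (u @ e + root)             # = 1 for the square
alpha_plus = np.inf if np.isclose(u @ e, root) else r / (u @ e - root)
print(alpha_minus, alpha_plus)               # 1.0 inf

rho = 2.0                                    # any rho in [alpha_-, alpha_+]
Dhat = np.block([[D, rho * u[:, None]], [rho * u[None, :], np.zeros((1, 1))]])
print(np.sort(np.linalg.eigvalsh(Dhat)))     # exactly one positive eigenvalue
```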

Fiedler's construction also leads to a more complicated version of Theorem 4.1.

Theorem 4.2 Let $D_1 \in \mathbb{R}^{k \times k}$, $D_2 \in \mathbb{R}^{l \times l}$ be nonzero distance matrices with $D_1 u = r_1 u$, $D_2 v = r_2 v$, where $r_1$ and $r_2$ are the Perron eigenvalues of $D_1$ and $D_2$ respectively. Likewise, $u$ and $v$ are the corresponding unit Perron eigenvectors. Let $D_1 w_1 = e$, where $w_1^T e \ge 0$, $D_2 w_2 = e$, where $w_2^T e \ge 0$, and $\rho > 0$ with $\rho^2 \ne r_1 r_2$. Then the matrix $\hat{D} = \begin{bmatrix} D_1 & \rho u v^T \\ \rho v u^T & D_2 \end{bmatrix}$ is a distance matrix if and only if $\rho^2 > r_1 r_2$,

$[(u^T e)^2 - r_1 e^T w_1 - r_1 e^T w_2][(v^T e)^2 - r_2 e^T w_1 - r_2 e^T w_2] \ge 0,$

and $\rho$ is in the interval $[\alpha_-, \alpha_+]$, where

$\alpha_\pm = \frac{(u^T e)(v^T e) \pm \sqrt{[(u^T e)^2 - r_1 e^T w_1 - r_1 e^T w_2][(v^T e)^2 - r_2 e^T w_1 - r_2 e^T w_2]}}{\frac{(u^T e)^2}{r_1} + \frac{(v^T e)^2}{r_2} - e^T w_1 - e^T w_2}.$

Proof: Fiedler's results show that the eigenvalues of $\hat{D}$ are the nonpositive eigenvalues of $D_1$ together with the nonpositive eigenvalues of $D_2$, as well as the two eigenvalues of $\begin{bmatrix} r_1 & \rho \\ \rho & r_2 \end{bmatrix}$. The latter two eigenvalues are $\frac{r_1 + r_2}{2} \pm \sqrt{\left(\frac{r_1 + r_2}{2}\right)^2 + [\rho^2 - r_1 r_2]}$. Thus if $\rho^2 - r_1 r_2 > 0$ then $\hat{D}$ has just one positive eigenvalue. Conversely, if $\hat{D}$ has just one positive eigenvalue then $0 \ge \frac{r_1 + r_2}{2} - \sqrt{\left(\frac{r_1 + r_2}{2}\right)^2 + [\rho^2 - r_1 r_2]}$, which implies $\left(\frac{r_1 + r_2}{2}\right)^2 + [\rho^2 - r_1 r_2] \ge \left(\frac{r_1 + r_2}{2}\right)^2$, i.e. $\rho^2 - r_1 r_2 \ge 0$; but $\rho^2 - r_1 r_2 \ne 0$, so $\rho^2 - r_1 r_2 > 0$. Also, $\hat{D} \hat{w} = e$ where

$\hat{w} = \begin{bmatrix} w_1 + \frac{\rho}{r_1 r_2 - \rho^2}\left[\frac{\rho}{r_1}(u^T e) - v^T e\right] u \\ w_2 + \frac{\rho}{r_1 r_2 - \rho^2}\left[\frac{\rho}{r_2}(v^T e) - u^T e\right] v \end{bmatrix}.$

Then using this $\hat{w}$ we have

$e^T \hat{w} = e^T w_1 + e^T w_2 + \frac{\rho}{r_1 r_2 - \rho^2}\left[\frac{\rho}{r_1}(u^T e)^2 + \frac{\rho}{r_2}(v^T e)^2 - 2 (u^T e)(v^T e)\right] \ge 0,$

which we can rewrite, on multiplying across by $r_1 r_2 - \rho^2 < 0$, as

$(e^T w_1 + e^T w_2) r_1 r_2 - 2 \rho (u^T e)(v^T e) + \rho^2 \left[\frac{(u^T e)^2}{r_1} + \frac{(v^T e)^2}{r_2} - e^T w_1 - e^T w_2\right] \le 0.$

After some simplification the roots of this quadratic turn out to be the $\alpha_\pm$ above. We saw in the proof of Theorem 4.1 that $e^T w_1 \le \frac{(u^T e)^2}{r_1}$ and $e^T w_2 \le \frac{(v^T e)^2}{r_2}$, so the coefficient of $\rho^2$ is nonnegative. The proof is complete once we note that the discriminant of the quadratic is greater than or equal to zero precisely when $e^T \hat{w} \ge 0$ for some $\rho$.

Remark: The case $\rho^2 = r_1 r_2$ is dealt with separately. In this case, if $\hat{D} \hat{w} = e$ then it can be checked that $\frac{u^T e}{\sqrt{r_1}} = \frac{v^T e}{\sqrt{r_2}}$. Also, $\hat{D} \hat{w} = e$ if we take $\hat{w} = \begin{bmatrix} w_1 - \frac{u^T e}{r_1 + \sqrt{r_1 r_2}} u \\ w_2 - \frac{v^T e}{r_2 + \sqrt{r_1 r_2}} v \end{bmatrix}$, where $D_1 w_1 = e$ and $D_2 w_2 = e$. Then $\hat{D}$ is a distance matrix if and only if (for this $\hat{w}$) $\hat{w}^T e \ge 0$.
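
Theorem 4.2 can be exercised end to end on small blocks. In the sketch below (our code and our numbers, not the authors') $D_1$ is the isosceles-triangle matrix of Theorem 5.2 with spectrum $\{3, -1, -2\}$ and $D_2$ joins two points at squared distance 1; everything else is computed numerically from the statement of the theorem.

```python
import numpy as np

s3 = np.sqrt(3)
D1 = np.array([[0, s3, s3], [s3, 0, 1], [s3, 1, 0]])   # spectrum {3, -1, -2}
D2 = np.array([[0., 1.], [1., 0.]])                    # two points, squared distance 1

def perron(D):
    ev, V = np.linalg.eigh(D)
    return ev[-1], np.abs(V[:, -1])     # Perron value and unit Perron vector

r1, u = perron(D1)
r2, v = perron(D2)
e1, e2 = np.ones(3), np.ones(2)
w1, w2 = np.linalg.solve(D1, e1), np.linalg.solve(D2, e2)

S = e1 @ w1 + e2 @ w2
A = (u @ e1) ** 2 / r1 + (v @ e2) ** 2 / r2 - S        # coefficient of rho^2
disc = ((u @ e1) ** 2 - r1 * S) * ((v @ e2) ** 2 - r2 * S)
alpha = ((u @ e1) * (v @ e2) + np.array([-1.0, 1.0]) * np.sqrt(disc)) / A
print(alpha, np.sqrt(r1 * r2))          # roughly [1.81, 679] and 1.73

rho = 3.0                               # in [alpha_-, alpha_+], with rho^2 > r1*r2
Dhat = np.block([[D1, rho * np.outer(u, v)], [rho * np.outer(v, u), D2]])
print(np.sum(np.linalg.eigvalsh(Dhat) > 1e-10))              # 1 positive eigenvalue
print(np.linalg.solve(Dhat, np.ones(5)) @ np.ones(5) >= 0)   # True: e^T w_hat >= 0
```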

5. The inverse eigenvalue problem for distance matrices

Let $\sigma = \{\lambda_1, \lambda_2, \ldots, \lambda_n\} \subset \mathbb{C}$. The inverse eigenvalue problem for distance matrices is that of finding necessary and sufficient conditions that $\sigma$ be the spectrum of a (squared) distance matrix $D$. Evidently, since $D$ is symmetric, $\sigma \subset \mathbb{R}$. We have seen that if $D \in \mathbb{R}^{n \times n}$ is a nonzero distance matrix then $D$ has just one positive eigenvalue. Also, since the diagonal entries of $D$ are all zeroes, we have $\operatorname{trace}(D) = \sum_{i=1}^n \lambda_i = 0$.

The inverse eigenvalue problem for nonnegative matrices remains one of the longstanding unsolved problems in matrix theory; see for instance [3], [10], [13]. Suleimanova [17] and Perfect [12] solved the inverse eigenvalue problem for nonnegative matrices in the special case where $\sigma \subset \mathbb{R}$, $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and $\sum_{i=1}^n \lambda_i \ge 0$. They showed that these conditions are sufficient for the construction of a nonnegative matrix with spectrum $\sigma$. Fiedler [6] went further and showed that these same conditions are sufficient even when the nonnegative matrix is required to be symmetric. From what follows it appears that these conditions are sufficient even if the nonnegative matrix is required to be a distance matrix (when $\sum_{i=1}^n \lambda_i = 0$).

We will use the following lemma repeatedly.

Lemma 5.1 Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_n$, where $\sum_{i=1}^n \lambda_i = 0$. Then $(n-1)|\lambda_n| \ge \lambda_1$ and $(n-1)|\lambda_2| \le \lambda_1$.

Proof: These inequalities follow since $\lambda_1 = -\sum_{i=2}^n \lambda_i$ and $\lambda_2 \ge \lambda_3 \ge \cdots \ge \lambda_n$.

Theorem 5.2 Let $\sigma = \{\lambda_1, \lambda_2, \lambda_3\} \subset \mathbb{R}$, with $\lambda_1 \ge 0 \ge \lambda_2 \ge \lambda_3$ and $\lambda_1 + \lambda_2 + \lambda_3 = 0$. Then $\sigma$ is the spectrum of a distance matrix $D \in \mathbb{R}^{3 \times 3}$.

Proof: Construct an isosceles triangle in the $xy$-plane with one vertex at the origin, and the other two vertices at $\left(\sqrt{\sqrt{\frac{(\lambda_2 + \lambda_3)\lambda_3}{2}} + \frac{\lambda_2}{4}}, \ \pm \frac{1}{2}\sqrt{-\lambda_2}\right)$. These vertices yield the distance matrix

$D = \begin{bmatrix} 0 & \sqrt{\frac{(\lambda_2 + \lambda_3)\lambda_3}{2}} & \sqrt{\frac{(\lambda_2 + \lambda_3)\lambda_3}{2}} \\ \sqrt{\frac{(\lambda_2 + \lambda_3)\lambda_3}{2}} & 0 & -\lambda_2 \\ \sqrt{\frac{(\lambda_2 + \lambda_3)\lambda_3}{2}} & -\lambda_2 & 0 \end{bmatrix},$

and it is easily verified that $D$ has characteristic polynomial $(x - \lambda_2)(x - \lambda_3)(x + \lambda_2 + \lambda_3) = x^3 - (\lambda_2 \lambda_3 + \lambda_2^2 + \lambda_3^2) x + \lambda_2 \lambda_3 (\lambda_2 + \lambda_3)$. Note that, since $\lambda_1 \ge -\lambda_2$ and $-\lambda_3 \ge -\lambda_2$,

$\sqrt{\frac{(\lambda_2 + \lambda_3)\lambda_3}{2}} = \sqrt{\frac{\lambda_1 (-\lambda_3)}{2}} \ge \frac{-\lambda_2}{\sqrt{2}} \ge \frac{-\lambda_2}{4},$

so the first coordinate above is real, which completes the proof.
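
A direct transcription of this construction into numpy (our wrapper; the matrix is exactly the one displayed above):

```python
import numpy as np

def distance_matrix_3x3(lam2, lam3):
    """Theorem 5.2: for lam1 = -(lam2 + lam3) >= 0 >= lam2 >= lam3, return a
    3x3 distance matrix with spectrum {lam1, lam2, lam3}."""
    a = np.sqrt((lam2 + lam3) * lam3 / 2)    # d_12 = d_13
    b = -lam2                                # d_23
    return np.array([[0, a, a],
                     [a, 0, b],
                     [a, b, 0]])

D = distance_matrix_3x3(-1.0, -2.0)          # target spectrum {3, -1, -2}
print(np.sort(np.linalg.eigvalsh(D)))        # [-2, -1, 3] up to roundoff
```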

Some distance matrices have $e$ as their Perron eigenvector. These were discussed in [8], and were seen to correspond to vertices which form regular figures.

Lemma 5.3 Let $D \in \mathbb{R}^{n \times n}$ be symmetric, have zeroes on the diagonal, have just one nonnegative eigenvalue $\lambda_1$, and corresponding eigenvector $e$. Then $D$ is a distance matrix.

Proof: Let $D$ have eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$; then we can write $D = \frac{\lambda_1}{n} e e^T + \lambda_2 u_2 u_2^T + \cdots$. If $x^T e = 0$ then $x^T D x \le 0$, as required.

A matrix $H \in \mathbb{R}^{n \times n}$ is said to be a Hadamard matrix if each entry is equal to $\pm 1$ and $H^T H = nI$. These matrices exist only if $n = 1$, $n = 2$, or $n \equiv 0 \bmod 4$ [19]. It is not known if there exists a Hadamard matrix for every $n$ which is a multiple of 4, although this is a well-known conjecture. Examples of Hadamard matrices are

$n = 2$: $H = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$; $\qquad n = 4$: $H = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}$;

and $H_1 \otimes H_2$ is a Hadamard matrix, where $H_1$ and $H_2$ are Hadamard matrices and $\otimes$ denotes the Kronecker product. For any $n$ for which there exists a Hadamard matrix our next theorem solves the inverse eigenvalue problem.

Theorem 5.4 Let $n$ be such that there exists a Hadamard matrix of order $n$. Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and $\sum_{i=1}^n \lambda_i = 0$. Then there is a distance matrix $D$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.

Proof: Let $H \in \mathbb{R}^{n \times n}$ be a Hadamard matrix, and $U = \frac{1}{\sqrt{n}} H$, so that $U$ is an orthogonal matrix. Let $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$; then $D = U^T \Lambda U$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. $D$ has eigenvector $e$, since for any Hadamard matrix we can assume (multiplying columns by $-1$ as necessary) that one of the rows of $H$ is $e^T$. From $D = U^T \Lambda U = \frac{\lambda_1}{n} e e^T + \lambda_2 u_2 u_2^T + \cdots$, it can be seen that each of the diagonal entries of $D$ is $\sum_{i=1}^n \lambda_i / n = 0$. The theorem then follows from Lemma 5.3.

Since there are Hadamard matrices for all $n$ a multiple of 4 such that $n < 428$, Theorem 5.4 solves the inverse eigenvalue problem in these special cases. See [16] for a more extensive list of values of $n$ for which there exists a Hadamard matrix. The technique of Theorem 5.4 was used in [10] to partly solve the inverse eigenvalue problem for nonnegative matrices when $n = 4$.
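
Theorem 5.4 is constructive, so it is worth running. A sketch under the assumption that $n$ is a power of 2, where the Sylvester/Kronecker construction quoted above supplies the Hadamard matrix:

```python
import numpy as np

def hadamard(n):
    """Sylvester/Kronecker construction: Hadamard matrix of order n,
    assuming n is a power of 2 (its first row is all ones)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def distance_matrix_from_spectrum(lams):
    """Theorem 5.4: lams[0] >= 0 >= lams[1] >= ... with zero sum; n is a
    power of 2 here so that hadamard(n) exists."""
    n = len(lams)
    U = hadamard(n) / np.sqrt(n)             # orthogonal, first row e^T/sqrt(n)
    return U.T @ np.diag(lams) @ U

D = distance_matrix_from_spectrum([6.0, -1.0, -2.0, -3.0])
print(np.round(np.diag(D), 12))              # zero diagonal, since the sum is 0
print(np.sort(np.linalg.eigvalsh(D)))        # the prescribed spectrum
print(D @ np.ones(4))                        # 6*e: e is the Perron eigenvector
```

By Lemma 5.3 the output is a distance matrix: symmetric, zero diagonal, one positive eigenvalue, and Perron eigenvector $e$.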

Theorem 5.6 will solve the inverse eigenvalue problem for any $n + 1$ such that there exists a Hadamard matrix of order $n$. We will use a theorem due to Fiedler [6].

Theorem 5.5 Let $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_k$ be the eigenvalues of the symmetric nonnegative matrix $A \in \mathbb{R}^{k \times k}$, and $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_l$ the eigenvalues of the symmetric nonnegative matrix $B \in \mathbb{R}^{l \times l}$, where $\alpha_1 \ge \beta_1$. Moreover, let $Au = \alpha_1 u$ and $Bv = \beta_1 v$, so that $u$ and $v$ are the corresponding unit Perron eigenvectors. Then with $\rho = \sqrt{\sigma(\alpha_1 - \beta_1 + \sigma)}$ the matrix $\begin{bmatrix} A & \rho u v^T \\ \rho v u^T & B \end{bmatrix}$ has eigenvalues $\alpha_1 + \sigma, \beta_1 - \sigma, \alpha_2, \ldots, \alpha_k, \beta_2, \ldots, \beta_l$, for any $\sigma \ge 0$.

Remark: The assumption $\alpha_1 \ge \beta_1$ is only for convenience. It is easily checked that it is sufficient to have $\alpha_1 - \beta_1 + \sigma \ge 0$.

In [18] two of the authors called a distance matrix circum-Euclidean if the points which generate it lie on a hypersphere. Suppose that the distance matrix $D$ has the property that there is an $s \in \mathbb{R}^n$ such that $Ds = \beta e$ and $s^T e = 1$ (easily arranged if $Dw = e$ and $w^T e > 0$). Then using this $s$ we have (see the paragraph just before Theorem 2.2) $(I - e s^T) D (I - s e^T) = D - \beta e e^T = -2 X^T X$. Since the diagonal entries of $D$ are all zero, if $X e_i = x_i$, then $\|x_i\|^2 = \beta/2$, for all $i$, $1 \le i \le n$. Evidently the points lie on a hypersphere of radius $R$, where $R^2 = \beta/2 = \frac{s^T D s}{2}$.

Theorem 5.6 Let $n$ be such that there exists a Hadamard matrix of order $n$. Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_{n+1}$ and $\sum_{i=1}^{n+1} \lambda_i = 0$. Then there is an $(n+1) \times (n+1)$ distance matrix $\hat{D}$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}$.

Proof: Let $D \in \mathbb{R}^{n \times n}$ be a distance matrix constructed using a Hadamard matrix, as in Theorem 5.4, with eigenvalues $\lambda_1 + \lambda_{n+1}, \lambda_2, \ldots, \lambda_n$. Note that $\lambda_1 + \lambda_{n+1} = -\lambda_2 - \cdots - \lambda_n \ge 0$ and $De = (\lambda_1 + \lambda_{n+1}) e$, i.e. $D$ has Perron eigenvector $e$. Using Theorem 5.5 let $\hat{D} = \begin{bmatrix} D & \rho u \\ \rho u^T & 0 \end{bmatrix}$, where $A = D$, $B = 0$, $u = \frac{e}{\sqrt{n}}$, $\alpha_1 = \lambda_1 + \lambda_{n+1}$, $\alpha_i = \lambda_i$ for $2 \le i \le n$, $\beta_1 = 0$ and $\sigma = -\lambda_{n+1}$. In this case $\rho = \sqrt{-\lambda_1 \lambda_{n+1}}$, and $\hat{D}$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}$.

We will show that there exist vectors $\hat{x}_1, \ldots, \hat{x}_{n+1}$ such that $\hat{d}_{ij} = \|\hat{x}_i - \hat{x}_j\|^2$, for all $i, j$, $1 \le i, j \le n + 1$, where $\hat{D} = (\hat{d}_{ij})$. For the distance matrix $D$, and in the notation of the paragraph above, $s = \frac{e}{n}$ and $\beta = \frac{\lambda_1 + \lambda_{n+1}}{n}$. Let $x_1, \ldots, x_n \in \mathbb{R}^n$ be vectors to the $n$ vertices that correspond to $D$, i.e. $d_{ij} = \|x_i - x_j\|^2$, for all $i, j$, $1 \le i, j \le n$. These points lie on a hypersphere of radius $R$, where $R^2 = \|x_i\|^2 = \frac{\lambda_1 + \lambda_{n+1}}{2n}$, for all $i$, $1 \le i \le n$.

Let the vectors that correspond to the $n + 1$ vertices of $\hat{D}$ be $\hat{x}_1 = (x_1, 0), \ldots, \hat{x}_n = (x_n, 0) \in \mathbb{R}^{n+1}$ and $\hat{x}_{n+1} = (0, t) \in \mathbb{R}^{n+1}$; then $\hat{d}_{ij} = d_{ij}$, for all $i, j$, $1 \le i, j \le n$. Furthermore, the right-most column of $\hat{D}$ has entries $\hat{d}_{i(n+1)} = \|\hat{x}_{n+1} - \hat{x}_i\|^2 = t^2 + R^2$, for each $i$, $1 \le i \le n$. Finally, we must show that we can choose $t$ so that $t^2 + R^2 = \frac{\rho}{\sqrt{n}}$ (the bordered entries are $\rho u_i = \frac{\rho}{\sqrt{n}}$). But this will be possible only if $R^2 \le \frac{\rho}{\sqrt{n}}$, i.e. $\frac{\lambda_1 + \lambda_{n+1}}{2n} \le \sqrt{\frac{-\lambda_1 \lambda_{n+1}}{n}}$, which we can rewrite, on dividing both sides by $\frac{\lambda_1}{\sqrt{n}}$, as $1 - p \le 2\sqrt{np}$, where $p = \frac{-\lambda_{n+1}}{\lambda_1}$. Since $p \ge 0$ we must have $1 - p \le 1$. But also, from Lemma 5.1, $p \ge \frac{1}{n}$ implies that $2\sqrt{np} \ge 2 \ge 1 \ge 1 - p$, and the proof is complete.

Note that the above proof works for any distance matrix $D$ with eigenvector $e$, and with the appropriate eigenvalues, not just when $D$ has been constructed using a Hadamard matrix. The same is true for the matrix $D_1$ in our next theorem.
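
Continuing the sketch above (and reusing its `distance_matrix_from_spectrum` helper), the bordering step of Theorem 5.6 for $n = 4$, so five prescribed eigenvalues of our choosing:

```python
import numpy as np

# Theorem 5.6 for n = 4 (five prescribed eigenvalues); the helper
# distance_matrix_from_spectrum is from the Theorem 5.4 sketch.
lam = np.array([7.0, -1.0, -1.5, -2.0, -2.5])
n = 4

# Inner matrix with spectrum {lam_1 + lam_5, lam_2, lam_3, lam_4} and
# Perron eigenvector e, then the Fiedler border of Theorem 5.5 with
# A = D, B = 0 and sigma = -lam_5, so rho = sqrt(-lam_1 * lam_5).
D = distance_matrix_from_spectrum([lam[0] + lam[4], lam[1], lam[2], lam[3]])
u = np.ones(n) / np.sqrt(n)
rho = np.sqrt(-lam[0] * lam[4])
Dhat = np.block([[D, rho * u[:, None]],
                 [rho * u[None, :], np.zeros((1, 1))]])

print(np.sort(np.linalg.eigvalsh(Dhat)))     # [-2.5, -2, -1.5, -1, 7]
print(np.round(np.diag(Dhat), 12))           # still a zero diagonal
```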

Our next theorem bears a close resemblance to the previous one, although it will solve the inverse eigenvalue problem only in the cases $n = 6, 10, 14$ and $18$.

Theorem 5.7 Let $n = 4, 8, 12$ or $16$, so that there exists a Hadamard matrix of order $n$. Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_{n+1} \ge \lambda_{n+2}$ and $\sum_{i=1}^{n+2} \lambda_i = 0$. Then there is an $(n+2) \times (n+2)$ distance matrix $\hat{D}$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}, \lambda_{n+2}$.

Proof: Let $D_1 \in \mathbb{R}^{n \times n}$ be a distance matrix, constructed using a Hadamard matrix as before, with eigenvalues $\lambda_1 + \lambda_{n+1} + \lambda_{n+2}, \lambda_2, \ldots, \lambda_n$. Let $\hat{D} = \begin{bmatrix} D_1 & \rho \frac{e e^T}{\sqrt{2n}} \\ \rho \frac{e e^T}{\sqrt{2n}} & D_2 \end{bmatrix}$, where $D_2 = \begin{bmatrix} 0 & -\lambda_{n+1} \\ -\lambda_{n+1} & 0 \end{bmatrix}$, and apply Theorem 5.5 with $A = D_1$, $B = D_2$, $u = \frac{e}{\sqrt{n}}$, $v = \frac{e}{\sqrt{2}}$, $\alpha_1 = \lambda_1 + \lambda_{n+1} + \lambda_{n+2}$, $\alpha_i = \lambda_i$ for $2 \le i \le n$, $\beta_1 = -\lambda_{n+1}$, $\beta_2 = \lambda_{n+1}$ and $\sigma = -\lambda_{n+1} - \lambda_{n+2}$. In this case $\rho = \sqrt{(-\lambda_{n+1} - \lambda_{n+2})(\lambda_1 + \lambda_{n+1})}$, and $\hat{D}$ has the desired eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{n+1}, \lambda_{n+2}$.

The vectors $x_1, \ldots, x_n$ to the $n$ vertices that correspond to $D_1$ lie on a hypersphere of radius $R$, where $R^2 = \frac{\lambda_1 + \lambda_{n+1} + \lambda_{n+2}}{2n}$. Let the vectors that correspond to the $n + 2$ vertices of $\hat{D}$ be $\hat{x}_1 = (x_1, 0, 0), \ldots, \hat{x}_n = (x_n, 0, 0) \in \mathbb{R}^{n+2}$ and $\hat{x}_{n+1} = \left(0, t, \frac{\sqrt{-\lambda_{n+1}}}{2}\right)$, $\hat{x}_{n+2} = \left(0, t, -\frac{\sqrt{-\lambda_{n+1}}}{2}\right) \in \mathbb{R}^{n+2}$; then $\hat{d}_{ij} = \|\hat{x}_i - \hat{x}_j\|^2 = \|x_i - x_j\|^2$, for all $i, j$, $1 \le i, j \le n$, and $\hat{d}_{ij} = \|\hat{x}_i - \hat{x}_j\|^2$ for $n + 1 \le i, j \le n + 2$. Furthermore, the rightmost two columns of $\hat{D}$ (excluding the entries of the bottom right $2 \times 2$ block) have entries $\hat{d}_{i(n+1)} = \|\hat{x}_{n+1} - \hat{x}_i\|^2 = R^2 + t^2 - \frac{\lambda_{n+1}}{4} = \hat{d}_{i(n+2)} = \|\hat{x}_{n+2} - \hat{x}_i\|^2$, for each $i$, $1 \le i \le n$.

Finally, we must show that we can choose $t$ so that $R^2 + t^2 - \frac{\lambda_{n+1}}{4} = \frac{\rho}{\sqrt{2n}}$, i.e. we must show that

$\frac{\lambda_1 + \lambda_{n+1} + \lambda_{n+2}}{2n} - \frac{\lambda_{n+1}}{4} \le \sqrt{\frac{(-\lambda_{n+1} - \lambda_{n+2})(\lambda_1 + \lambda_{n+1})}{2n}}.$

Writing $p = \frac{-\lambda_{n+1}}{\lambda_1}$ and $q = \frac{-\lambda_{n+2}}{\lambda_1}$, we can rewrite the above inequality as

$\frac{1 - p - q}{2n} + \frac{p}{4} \le \sqrt{\frac{(p+q)(1-p)}{2n}}.$

This inequality can be rearranged to become

$\frac{2 + n}{n} \le \frac{2(p+q)}{n} + (1 - p) + 2\sqrt{\frac{2(p+q)(1-p)}{n}} = \left[\sqrt{\frac{2(p+q)}{n}} + \sqrt{1 - p}\right]^2;$

then taking square roots of both sides and again rearranging, we have

$1 \le \sqrt{\frac{2(p+q)}{2+n}} + \sqrt{\frac{n(1-p)}{2+n}} = f(p, q).$

Thus, we need to show that $f(p, q) \ge 1$, where $q \ge p$, $1 \ge p + q$, and $np \ge 1 - q$ (the last inequality comes from noting that $\lambda_1 + \lambda_{n+2} = -\lambda_2 - \cdots - \lambda_{n+1} \le -n\lambda_{n+1}$, since $\sum_{i=1}^{n+2} \lambda_i = 0$, and using Lemma 5.1). These three inequalities describe the interior of a triangular region in the $pq$-plane. For fixed $p$, $f(p, q)$ increases as $q$ increases, so we need only check that $f(p, q) \ge 1$ on the lower border (i.e. closest to the $p$-axis) of the triangular region. One edge of this lower border is where $p = q$, and in this case $\frac{1}{n+1} \le p \le \frac{1}{2}$. Differentiation of $f(p, p)$ tells us that a maximum is achieved when $p = \frac{4}{n+4}$, and $\frac{1}{n+1} \le \frac{4}{n+4} \le \frac{1}{2}$. Minima are achieved at the end-points $p = \frac{1}{2}$ and $p = \frac{1}{n+1}$. It is easily checked that $f(\frac{1}{2}, \frac{1}{2}) \ge 1$ means that $16 \ge n$, and that $f(\frac{1}{n+1}, \frac{1}{n+1}) \ge 1$ is true in any case. Along the other lower border $q = 1 - np$, and in this case $f(p, 1 - np)$ is found to have $\frac{df}{dp} < 0$, and we're done since $f(\frac{1}{n+1}, \frac{1}{n+1}) \ge 1$.
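
The same pattern is easy to run for Theorem 5.7. A sketch for $n = 4$ with a spectrum of our choosing (again reusing `distance_matrix_from_spectrum` from the Theorem 5.4 sketch):

```python
import numpy as np

# Theorem 5.7 for n = 4: a 6x6 distance matrix with a prescribed spectrum
# (zero sum, one nonnegative value); distance_matrix_from_spectrum is the
# helper from the Theorem 5.4 sketch.
lam = np.array([10.0, -1.0, -1.5, -2.0, -2.5, -3.0])
n = 4

D1 = distance_matrix_from_spectrum(
    [lam[0] + lam[4] + lam[5], lam[1], lam[2], lam[3]])
D2 = np.array([[0.0, -lam[4]], [-lam[4], 0.0]])

sigma = -lam[4] - lam[5]
rho = np.sqrt(sigma * (lam[0] + lam[4]))     # sqrt(sigma*(alpha_1 - beta_1 + sigma))
u, v = np.ones(n) / np.sqrt(n), np.ones(2) / np.sqrt(2)
Dhat = np.block([[D1, rho * np.outer(u, v)],
                 [rho * np.outer(v, u), D2]])

print(np.sort(np.linalg.eigvalsh(Dhat)))     # [-3, -2.5, -2, -1.5, -1, 10]
```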

The methods described above do not appear to extend to the missing cases, particularly the case of $n = 7$. Since we have no evidence that there are any other necessary conditions on the eigenvalues of a distance matrix, we make the following conjecture:

Conjecture Let $\lambda_1 \ge 0 \ge \lambda_2 \ge \cdots \ge \lambda_n$, where $\sum_{i=1}^n \lambda_i = 0$. Then there is a distance matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.

Acknowledgements

We are grateful to Charles R. Johnson for posing the inverse eigenvalue problem for distance matrices. The second author received funding as a Postdoctoral Scholar from the Center for Computational Sciences, University of Kentucky.

References

[1] M. Bakonyi and C. R. Johnson, The Euclidian distance matrix completion problem, SIAM J. Matrix Anal. Appl. 16:646-654 (1995).

[2] L. M. Blumenthal, Theory and Applications of Distance Geometry, Oxford University Press, Oxford, 1953. Reprinted by Chelsea Publishing Co., New York, 1970.

[3] M. Boyle and D. Handelman, The spectra of nonnegative matrices via symbolic dynamics, Annals of Mathematics 133:249-316 (1991).

[4] J. P. Crouzeix and J. Ferland, Criteria for quasiconvexity and pseudoconvexity: relationships and comparisons, Mathematical Programming 23(2):193-205 (1982).

[5] J. Ferland, Matrix-theoretic criteria for the quasi-convexity of twice continuously differentiable functions, Linear Algebra and its Applications 38:51-63 (1981).

[6] M. Fiedler, Eigenvalues of nonnegative symmetric matrices, Linear Algebra and its Applications 9:119-142 (1974).

[7] J. C. Gower, Euclidean distance geometry, Math. Scientist 7:1-14 (1982).

[8] T. L. Hayden and P. Tarazaga, Distance matrices and regular figures, Linear Algebra and its Applications 195:9-16 (1993).

[9] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1985.

[10] R. Loewy and D. London, A note on an inverse problem for nonnegative matrices, Linear and Multilinear Algebra 6:83-90 (1978).

[11] H. Minc, Nonnegative Matrices, John Wiley and Sons, New York, 1988.

[12] H. Perfect, Methods of constructing certain stochastic matrices II, Duke Math. J. 22:305-311 (1955).

[13] R. Reams, An inequality for nonnegative matrices and the inverse eigenvalue problem, Linear and Multilinear Algebra 41:367-375 (1996).

[14] I. J. Schoenberg, Remarks to Maurice Fréchet's article "Sur la définition axiomatique d'une classe d'espaces distanciés vectoriellement applicable sur l'espace de Hilbert", Ann. Math. 36(3):724-732 (1935).

[15] I. J. Schoenberg, Metric spaces and positive definite functions, Trans. Amer. Math. Soc. 44:522-536 (1938).

[16] J. R. Seberry and M. Yamada, Hadamard matrices, sequences and block designs, in J. H. Dinitz and D. R. Stinson (Eds.), Contemporary Design Theory: A Collection of Surveys, John Wiley, New York, 1992.

[17] H. R. Suleimanova, Stochastic matrices with real characteristic numbers (Russian), Doklady Akad. Nauk SSSR (N.S.) 66:343-345 (1949).

[18] P. Tarazaga, T. L. Hayden and J. Wells, Circum-Euclidean distance matrices and faces, Linear Algebra and its Applications 232:77-96 (1996).

[19] W. D. Wallis, Hadamard matrices, in R. A. Brualdi, S. Friedland and V. Klee (Eds.), Combinatorial and Graph-Theoretical Problems in Linear Algebra, IMA Vol. 50, Springer-Verlag, pp. 235-243, 1993.