Generalizations of the Strong Arnold Property and the minimum number of distinct eigenvalues of a graph


Wayne Barrett, Shaun Fallat, H. Tracy Hall, Leslie Hogben, Jephian C.-H. Lin, Bryan L. Shader

April 15, 2016

Abstract

For a given graph G and an associated class of real symmetric matrices whose off-diagonal entries are governed by the adjacencies in G, the collection of all possible spectra for such matrices is considered. Building on the pioneering work of Colin de Verdière in connection with the Strong Arnold Property, two extensions are devised that target a better understanding of all possible spectra and their associated multiplicities. These new properties are referred to as the Strong Spectral Property and the Strong Multiplicity Property. Finally, these ideas are applied to the minimum number of distinct eigenvalues associated with G, denoted by q(G). The graphs for which q(G) is at least the number of vertices of G less one are characterized.

Keywords. Inverse Eigenvalue Problem, Strong Arnold Property, Strong Spectral Property, Strong Multiplicity Property, Colin de Verdière type parameter, maximum multiplicity, distinct eigenvalues.

AMS subject classifications. 05C50, 15A18, 15A29, 15B57, 58C15.

1 Introduction

Inverse eigenvalue problems appear in various contexts throughout mathematics and engineering. The general form of an inverse eigenvalue problem is the following: given a family F of matrices and a spectral property P, determine if there exists a matrix A ∈ F with property P. Examples of families are tridiagonal, Toeplitz, or all symmetric matrices with a given

Department of Mathematics, Brigham Young University, Provo, UT 84602, USA (wb@mathematics.byu.edu, H.Tracy@gmail.com).
Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan, S4S 0A2, Canada (shaun.fallat@uregina.ca).
Department of Mathematics, Iowa State University, Ames, IA 50011, USA (LHogben@iastate.edu) and American Institute of Mathematics, 600 E. Brokaw Road, San Jose, CA 95112, USA (hogben@aimath.org).
Department of Mathematics, Iowa State University, Ames, IA 50011, USA (chlin@iastate.edu).
Department of Mathematics, University of Wyoming, Laramie, WY 82071, USA (bshader@uwyo.edu).

graph. Examples of properties include: having a prescribed rank, a prescribed spectrum, a prescribed eigenvalue and corresponding eigenvector, or a prescribed list of multiplicities. Our focus is on inverse eigenvalue problems where F is a set of symmetric matrices associated with a graph. These have received considerable attention, and a rich mathematical theory has been developed around them (see, for example, [13]). All matrices in this paper are real.

Let G = (V(G), E(G)) be a (simple, undirected) graph with vertex set V(G) = {1, ..., n} and edge set E(G). The set S(G) of symmetric matrices described by G consists of all symmetric n × n matrices A = (a_ij) such that for i ≠ j, a_ij ≠ 0 if and only if ij ∈ E(G). We denote the spectrum of A, i.e., the multiset of eigenvalues of A, by spec(A). The inverse spectrum problem for G, also known as the inverse eigenvalue problem for G, refers to determining the possible spectra that occur among the matrices in S(G). The inverse spectrum problem for G seems to be difficult, as evidenced by the fact that it has been completely solved for only a few special families of graphs, e.g., paths, generalized stars, double generalized stars, and complete graphs [7, 9, 1].

To gain a better understanding of the inverse eigenvalue problem for graphs, other spectral properties have been studied. For example, the maximum multiplicity problem for G is: determine M(G), where

    M(G) = max{mult_A(λ) : A ∈ S(G), λ ∈ spec(A)},

and mult_A(λ) denotes the multiplicity of λ as an eigenvalue of A. A related invariant is the minimum rank of G, which is defined by

    mr(G) = min{rank A : A ∈ S(G)}.

The minimum rank problem is: given a graph G, determine mr(G). As mr(G) + M(G) = |G|, where |G| denotes the number of vertices of G, the maximum multiplicity and minimum rank problems are essentially the same.
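For paths, the solved case is constructive: the classical Jacobi inverse eigenvalue method realizes any set of distinct real numbers as the spectrum of a matrix whose graph is a path. The sketch below (ours, not from the paper) realizes the spectrum {0, 1, 2, 3, 4} on the path with five vertices by running Lanczos tridiagonalization on diag(0, 1, 2, 3, 4) with an all-ones start vector; since the eigenvalues are distinct and the start vector has a nonzero component in every eigendirection, all subdiagonal entries come out nonzero.

```python
import numpy as np

def lanczos_tridiagonalize(A, v0):
    """Orthogonally reduce symmetric A to tridiagonal form via Lanczos,
    starting from v0.  Returns the diagonal alpha and subdiagonal beta."""
    n = A.shape[0]
    Q = np.zeros((n, n))
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(n):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return alpha, beta

# Target spectrum {0,1,2,3,4}; the all-ones start vector guarantees a full
# Krylov space, so every beta_j != 0 and T lies in S(P_5) (a path matrix).
alpha, beta = lanczos_tridiagonalize(np.diag([0., 1., 2., 3., 4.]), np.ones(5))
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
print(np.round(np.linalg.eigvalsh(T), 6))   # spectrum of T
print(beta)                                 # all nonzero: graph of T is the path
```

The same recipe works for any prescribed distinct spectrum; the difficulty the paper addresses is graphs other than paths.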
These problems have been extensively studied in recent years; see [13, 14] for surveys.

If the distinct eigenvalues of A are λ_1 < λ_2 < ⋯ < λ_q and the multiplicities of these eigenvalues are m_1, m_2, ..., m_q respectively, then the ordered multiplicity list of A is m = (m_1, m_2, ..., m_q). This notion gives rise to the inverse ordered multiplicity list problem: given a graph G, determine which ordered multiplicity lists arise among the matrices in S(G). This problem has been studied in [6, 7, 1]. A recently introduced spectral problem (see [1]) is the minimum number of distinct eigenvalues problem: given a graph G, determine q(G), where

    q(G) = min{q(A) : A ∈ S(G)}

and q(A) is the number of distinct eigenvalues of A.

For a specific graph G and a specific property P, it is often difficult to find an explicit matrix A ∈ S(G) having property P (e.g., consider the challenge of finding a matrix whose graph is a path on five vertices and that has eigenvalues 0, 1, 2, 3, 4). In this paper, we carefully describe the underlying theory of a technique based on the implicit function theorem, and develop new methods for two types of inverse eigenvalue problems. Suppose G is a graph and P is a spectral property (such as having a given spectrum, given ordered multiplicity list, or given multiplicity of the eigenvalue 0) of a matrix in S(G). The theory is applied to determine conditions (dependent on the property P) that guarantee that if a matrix A ∈ S(G) has property P and satisfies these conditions, then

for every supergraph G̃ of G (on the same vertex set) there is a matrix B ∈ S(G̃) satisfying property P. (A graph G̃ is a supergraph of G if G is a subgraph of G̃.)

In Section 2, the technique is developed, and related to both the implicit function theorem and the work of Colin de Verdière [10, 11]. The technique is then applied to produce two new properties for symmetric matrices, called the Strong Spectral Property (SSP) and the Strong Multiplicity Property (SMP), that generalize the Strong Arnold Property (SAP). In Section 3, we establish general theorems for the inverse spectrum problem, respectively the inverse multiplicity list problem, using matrices satisfying the SSP, respectively the SMP. In Section 4, we use the SSP and SMP to prove properties of q(G). In particular, we answer a question raised in [1] by giving a complete characterization of the graphs G for which q(G) ≥ |G| − 1.

2 Motivation and fundamental results

We begin by recalling an inverse problem due to Colin de Verdière. In his study of certain Schrödinger operators, Colin de Verdière was concerned with the maximum nullity, μ(G), of matrices in the class of matrices A ∈ S(G) satisfying:

(i) all off-diagonal entries of A are non-positive;
(ii) A has a unique negative eigenvalue with multiplicity 1; and
(iii) O is the only symmetric matrix X satisfying AX = O, A ∘ X = O and I ∘ X = O.

Here ∘ denotes the Schur (also known as the Hadamard or entrywise) product and O denotes the zero matrix. Condition (iii) is known as the Strong Arnold Property (or SAP for short). Additional variants, called Colin de Verdière type parameters, include ξ(G), which is the maximum nullity of matrices A ∈ S(G) satisfying the SAP [8], and ν(G), which is the maximum nullity of positive semidefinite matrices A ∈ S(G) satisfying the SAP [11]. The maximum multiplicity M(G) is also the maximum nullity of a matrix in S(G), so ξ(G) ≤ M(G).
Analogously, ν(G) ≤ M_+(G), where M_+(G) is the maximum nullity among positive semidefinite matrices in S(G).

Colin de Verdière [10] used results from manifold theory to show that conditions equivalent to (i)–(iii) imply that if G is a subgraph of G̃, then the existence of A ∈ S(G) satisfying (i)–(iii) implies the existence of Ã ∈ S(G̃) having the same nullity as A and satisfying (i)–(iii).¹ The formulation of the SAP in (iii) is due to van der Holst, Lovász and Schrijver [19], which gives a linear algebraic treatment of the SAP and the Colin de Verdière number μ. Using a technique similar to that in [19], Barioli, Fallat, and Hogben [8] showed that if there exists A ∈ S(G) satisfying the SAP and G is a subgraph of G̃, then there exists Ã ∈ S(G̃) such that Ã has the SAP and A and Ã have the same nullity.²

¹ In fact, much more was shown: if G is a minor of H, then there exists B ∈ S(H) satisfying (i)–(iii) such that A and B have the same nullity. A minor of a graph is obtained by contracting edges and by deleting edges and vertices, and a graph parameter β is minor monotone if β(G) ≤ β(H) whenever G is a minor of H.
² Again, the result was established for minors: if G is a minor of H, then there exists B ∈ S(H) satisfying the SAP such that A and B have the same nullity.
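The SAP in condition (iii) is a finite system of linear constraints on X, so it can be tested mechanically: parametrize the symmetric matrices X with zero diagonal that vanish on the edges of G, and check whether AX = O forces X = O. A minimal numpy sketch (ours, not from the paper; the helper name has_SAP is an assumption):

```python
import numpy as np
from itertools import combinations

def has_SAP(A, tol=1e-9):
    """Strong Arnold Property: the only symmetric X with A o X = O and
    I o X = O (Schur products) satisfying AX = O is X = O."""
    n = A.shape[0]
    # Free positions of X: off-diagonal pairs {i,j} with a_ij = 0 (non-edges).
    free = [(i, j) for i, j in combinations(range(n), 2) if abs(A[i, j]) < tol]
    if not free:
        return True              # complete graph: X = O is forced outright
    # Column k of M is AX flattened, where X = E_ij + E_ji for the k-th pair.
    M = np.zeros((n * n, len(free)))
    for k, (i, j) in enumerate(free):
        X = np.zeros((n, n))
        X[i, j] = X[j, i] = 1.0
        M[:, k] = (A @ X).ravel()
    # SAP <=> the map X -> AX has trivial kernel on this subspace.
    return np.linalg.matrix_rank(M, tol=1e-7) == len(free)

# A Colin de Verdiere-type matrix for the path on 3 vertices (shifted
# Laplacian): off-diagonal entries non-positive, one negative eigenvalue,
# nullity 1.
A = np.array([[0., -1., 0.], [-1., 1., -1.], [0., -1., 0.]])
print(has_SAP(A))
```

The same kernel computation underlies the transversality arguments developed below; only the linear map imposed on X changes.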

The Strong Arnold Property has been used to obtain many results about the maximum nullity of a graph. Our goal in this section is to first describe the general technique behind Colin de Verdière's work, and then develop analogs of the SAP for the inverse spectrum problem and the inverse multiplicity list problem. For convenience, we state below a version of the Implicit Function Theorem (IFT), because it is central to the technique; see [1, 2].

Theorem 2.1. Let F : R^{s+r} → R^t be a continuously differentiable function on an open subset U of R^{s+r} defined by F(x, y) = (F_1(x, y), F_2(x, y), ..., F_t(x, y)), where x = (x_1, ..., x_s) ∈ R^s and y ∈ R^r. Let (a, b) be an element of U with a ∈ R^s and b ∈ R^r, and c ∈ R^t such that F(a, b) = c. If the t × s matrix

    ( ∂F_i/∂x_j )|_(a,b)        (1)

has rank t, then there exist an open neighborhood V containing a, an open neighborhood W containing b such that V × W ⊆ U, and a continuous function φ : W → V such that F(φ(y), y) = c for all y ∈ W.

The IFT concerns robustness of solutions to the system F(x, y) = c. Namely, the existence of a nice solution x = a to F(x, b) = c guarantees the existence of solutions to all systems F(x, b̃) = c with b̃ sufficiently close to b. Here, nice means that the columns of the matrix in (1) span R^t.

Remark 2.2. More quantitative proofs of the implicit function theorem (see [2, Theorem 3.4.10]) show that there exists an ɛ > 0 that depends only on U and the Jacobian in (1) such that there is such a continuous function φ with F(φ(y), y) = c for all y with ‖y − b‖ < ɛ.

A useful application of the IFT arises in the setting of smooth manifolds. We refer the reader to [2, 3] for basic definitions and results about manifolds. Given a manifold M embedded smoothly in an inner product space and a point x in M, we denote the tangent space to M at x by T_{M,x} and the normal space³ to M at x by N_{M,x}.
For the manifolds of matrices discussed here, the inner product of n × n matrices A and B is ⟨A, B⟩ = tr(AᵀB), or equivalently, the Euclidean inner product on R^{n²} (with matrices viewed as n²-tuples).

Many problems, including our inverse problems, can be reduced to determining whether or not the intersection of two manifolds M_1 and M_2 is non-empty. There is a condition known as transversality such that if M_1 and M_2 intersect transversally at x, then any manifold near M_1 and any manifold near M_2 intersect non-trivially. In other words, the existence of a nice solution implies the existence of a solution to all nearby problems. More precisely, let M_1 and M_2 be manifolds in some R^d, and x be a point in M_1 ∩ M_2. The manifolds M_1 and M_2 intersect transversally at x provided T_{M_1,x} + T_{M_2,x} = R^d, or equivalently N_{M_1,x} ∩ N_{M_2,x} = {O}.

By a smooth family M(s) (s ∈ (−1, 1)) of manifolds in R^d we mean that M(s) is a manifold in R^d for each s ∈ (−1, 1), and M(s) varies smoothly as a function of s. Thus, if M(s) is a smooth family of manifolds in R^d, then

³ That is, the orthogonal complement of the tangent space in the ambient inner product space.

(i) there is a k such that M(s) has dimension k for all s ∈ (−1, 1);
(ii) there exists a collection {U_α : α ∈ I} of relatively open sets whose union is ∪_{s ∈ (−1,1)} M(s); and
(iii) for each α there is a diffeomorphism F_α : (−1, 1)^{k+1} → U_α.

Note that if x_0 ∈ (−1, 1)^k, s_0 ∈ (−1, 1), y_0 ∈ M(s_0) and F_α(x_0, s_0) = y_0, then the tangent space to M(s_0) at y_0 is the column space of the d × k matrix

    ( ∂F_α/∂x_j )|_{(x = x_0, s = s_0)}.

The following theorem can be viewed as a specialization of Lemma 2.1 and Corollary 2.2 of [19] to the case of two manifolds.

Theorem 2.3. Let M_1(s) and M_2(t) be smooth families of manifolds in R^d, and assume that M_1(0) and M_2(0) intersect transversally at y_0. Then there is a neighborhood W ⊆ R² of the origin and a continuous function f : W → R^d such that for each ɛ = (ɛ_1, ɛ_2) ∈ W, M_1(ɛ_1) and M_2(ɛ_2) intersect transversally at f(ɛ).

Proof. Let k, respectively l, be the common dimension of each of the manifolds M_1(s) (s ∈ (−1, 1)), respectively each of the manifolds M_2(t) (t ∈ (−1, 1)). Let {U_α : α ∈ I} and F_α : (−1, 1)^{k+1} → U_α be the open sets and diffeomorphisms for M_1(s) discussed above. Let {V_β : β ∈ J} and G_β : (−1, 1)^{l+1} → V_β be the similar open sets and diffeomorphisms for M_2(t). Choose α so that y_0 ∈ U_α and β so that y_0 ∈ V_β. Define H : R^k × (−1, 1) × R^l × (−1, 1) → R^d by H(u, s, v, t) = F_α(u, s) − G_β(v, t), where u ∈ (−1, 1)^k, v ∈ (−1, 1)^l, and s, t ∈ (−1, 1). There exist u(0) and v(0) such that F_α(u(0), 0) = y_0 and G_β(v(0), 0) = y_0. Hence H(u(0), 0, v(0), 0) = 0. Since F_α and G_β are diffeomorphisms, the Jacobian of the function H restricted to s = 0 and t = 0 and evaluated at u = u(0) and v = v(0) is the d × (k + l) matrix

    Jac = [ (∂F_i/∂u_j)|_{(u(0),0)}   −(∂G_i/∂v_j)|_{(v(0),0)} ].
The result now follows by applying the implicit function theorem (Theorem.1) and the fact that every matrix in a sufficiently small neighborhood of a full rank matrix has full rank. 5

2.1 The Strong Arnold Property

We now use Theorem 2.3 to describe a tool for the inverse rank problem. This tool is described in [19] and used there and in [4, 5, 8]. We use this problem to familiarize the reader with the general technique.

Let G be a graph of order n, and A ∈ S(G). For this problem the two manifolds that concern us are S(G) and the manifold

    R_A = {B ∈ S_n(R) : rank(B) = rank(A)},

consisting of all n × n symmetric matrices with the same rank as A. We view both of these as subsets of S_n(R), the set of all n × n symmetric matrices endowed with the inner product ⟨V, W⟩ = tr(VW). Thus S(G) and R_A can be thought of as submanifolds of R^{n(n+1)/2}. It is easy to see that

    T_{S(G),A} = {X ∈ S_n(R) : x_ij ≠ 0 implies ij is an edge of G or i = j},

and

    N_{S(G),A} = {X ∈ S_n(R) : A ∘ X = O and I ∘ X = O}.

For N_{R_A,A} we have the following result [19].

Lemma 2.4. Let A be a symmetric n × n matrix of rank r. Then N_{R_A,A} = {X ∈ S_n(R) : AX = O}.

Proof. There exists an invertible r × r principal submatrix of A, and by permutation similarity we may take this to be the leading principal submatrix. Hence A has the form

    A = [ A_1, A_1U ; UᵀA_1, UᵀA_1U ]

for some invertible r × r matrix A_1 and some r × (n − r) matrix U. Let B(t) (t ∈ (−1, 1)) be a differentiable path of symmetric rank r matrices such that B(0) = A. For t sufficiently small, the leading r × r principal submatrix of B(t) is invertible and B(t) has the form

    B(t) = [ B_1(t), B_1(t)U(t) ; U(t)ᵀB_1(t), U(t)ᵀB_1(t)U(t) ],

where B_1(t) and U(t) are differentiable, B_1(0) = A_1 and U(0) = U. Differentiating with respect to t and then evaluating at t = 0 gives

    Ḃ(0) = [ Ḃ_1(0), Ḃ_1(0)U ; UᵀḂ_1(0), UᵀḂ_1(0)U ] + [ O, A_1U̇(0) ; U̇(0)ᵀA_1, U̇(0)ᵀA_1U + UᵀA_1U̇(0) ].

It follows that T_{R_A,A} = T_1 + T_2, where

    T_1 := { [ R, RU ; UᵀR, UᵀRU ] : R is an arbitrary symmetric r × r matrix }

and

    T_2 := { [ O, A_1S ; SᵀA_1, SᵀA_1U + UᵀA_1S ] : S is an arbitrary r × (n − r) matrix }.

Consider an n × n symmetric matrix

    W = [ C, D ; Dᵀ, E ],

where C is r × r. Then W ∈ T_2^⊥ if and only if

    tr(DSᵀA_1 + DᵀA_1S + ESᵀA_1U + EUᵀA_1S) = 0 for all S,

or equivalently

    tr((DᵀA_1 + EUᵀA_1)S) = 0 for all S.

Thus, W ∈ T_2^⊥ if and only if DᵀA_1 + EUᵀA_1 = O, which is equivalent to D = −UE since A_1 is invertible. Similarly, W ∈ T_1^⊥ if and only if C + DUᵀ + UDᵀ + UEUᵀ is skew symmetric. As C + DUᵀ + UDᵀ + UEUᵀ is symmetric, we have W ∈ T_1^⊥ if and only if C + DUᵀ + UDᵀ + UEUᵀ = O. For W ∈ T_1^⊥ ∩ T_2^⊥, C = UEUᵀ and therefore

    N_{R_A,A} = { [ UEUᵀ, −UE ; −EUᵀ, E ] : E is an arbitrary symmetric (n − r) × (n − r) matrix }.   (2)

It is easy to verify that this is precisely the set of symmetric matrices X such that AX = O.

Lemma 2.4 implies that R_A and S(G) intersect transversally at A if and only if A satisfies the SAP. Now that we know the pertinent tangent spaces for the inverse rank problem for G, we can apply Theorem 2.3 to easily obtain the following useful tool, which was previously proved and used in [19].

Theorem 2.5. If A ∈ S(G) has the SAP, then every supergraph of G with the same vertex set has a realization that has the same rank as A and has the SAP.

Proof. Let G̃ be a supergraph of G with the same vertex set, and define Δ = E(G̃) ∖ E(G). For t ∈ (−1, 1), define the manifold M(t) as those B = (b_ij) ∈ S_n(R) with b_ij ≠ 0 if ij ∈ E(G), b_ij = t if ij ∈ Δ, and b_ij = 0 if i ≠ j and ij ∉ E(G̃). Since M(0) = S(G), and A has the SAP, R_A and M(0) intersect transversally at A. Therefore Theorem 2.3 guarantees a continuous function f such that for ɛ sufficiently small, R_A and M(ɛ) intersect transversally at f(ɛ), so f(ɛ) has the same rank as A and f(ɛ) has the SAP. For ɛ > 0, f(ɛ) ∈ S(G̃).
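The block description of N_{R_A,A} obtained in the proof of Lemma 2.4 can be sanity-checked numerically: build A in the block form of the proof from a random invertible symmetric A_1 and a random U, build X from a random symmetric E, and confirm that X is symmetric and AX = O. A small sketch (ours, with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 3, 6
A1 = rng.standard_normal((r, r)); A1 = A1 + A1.T           # symmetric, a.s. invertible
U = rng.standard_normal((r, n - r))
# A = [A1, A1 U; U^T A1, U^T A1 U] has rank r, as in the proof of Lemma 2.4.
A = np.block([[A1, A1 @ U], [U.T @ A1, U.T @ A1 @ U]])
E = rng.standard_normal((n - r, n - r)); E = E + E.T        # symmetric parameter
# A normal vector as described in the proof: X = [U E U^T, -U E; -E U^T, E].
X = np.block([[U @ E @ U.T, -U @ E], [-E @ U.T, E]])
print(np.linalg.matrix_rank(A), np.max(np.abs(A @ X)))      # rank r, and AX = O
```

Varying E over symmetric (n − r) × (n − r) matrices sweeps out the whole normal space, matching the dimension count dim N_{R_A,A} = (n − r)(n − r + 1)/2.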

2.2 The Strong Spectral Property

We now follow the general method outlined in the previous subsection to derive an analog of the SAP and Theorem 2.5 for the inverse spectrum problem. Certain aspects of this section were in part motivated by discussions with Dr. Francesco Barioli in connection with the inverse eigenvalue problem for graphs [3].

Given a multiset Λ of real numbers that has cardinality n, the set of all n × n symmetric matrices with spectrum Λ is denoted by E_Λ. Thus if A ∈ E_Λ, then E_Λ is the set of all symmetric matrices cospectral with A. It is well known that E_Λ is a manifold [2].

A comment on notation: the notation E_Λ for the constant spectrum manifold was chosen because this manifold is determined by Λ. Then a symmetric matrix A is in E_{spec(A)}. In conformity with this, the constant rank manifold containing A should be denoted R_{rank A}, but we follow the literature in denoting it by R_A.

Let A be a symmetric n × n matrix. The centralizer of A is the set of all matrices that commute with A, and is denoted by C(A). The commutator AB − BA of two matrices is denoted by [A, B]. The next result is well known, but we include a brief proof for completeness.

Proposition 2.6. Suppose A ∈ S_n(R) and v_i, i = 1, ..., n, is an orthonormal basis of eigenvectors for A with Av_i = μ_iv_i (where the μ_i need not be distinct). Then C(A) = span({v_iv_jᵀ : μ_i = μ_j}).

Proof. Let S = span({v_iv_jᵀ : μ_i = μ_j}). Clearly S ⊆ C(A). For the reverse inclusion, observe that {v_iv_jᵀ : i = 1, ..., n, j = 1, ..., n} is a basis for R^{n×n}. Thus any B ∈ C(A) can be expressed as B = Σ_{i=1}^n Σ_{j=1}^n β_ij v_iv_jᵀ. Then

    O = [A, B] = Σ_{i=1}^n Σ_{j=1}^n β_ij(μ_i − μ_j) v_iv_jᵀ.

By the independence of the v_iv_jᵀ, β_ij(μ_i − μ_j) = 0 for all i, j. Therefore, μ_i ≠ μ_j implies β_ij = 0, so C(A) ⊆ S.
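A consequence of Proposition 2.6 worth checking numerically (our inference, not a statement from the paper): the symmetric part of the centralizer, C(A) ∩ S_n(R), has dimension Σ_i m_i(m_i + 1)/2, where the m_i are the eigenvalue multiplicities of A. The sketch below (ours; sym_centralizer_dim is a hypothetical helper name) computes that dimension as the kernel dimension of X ↦ [A, X] on symmetric matrices:

```python
import numpy as np
from itertools import combinations_with_replacement

def sym_centralizer_dim(A, tol=1e-8):
    """Dimension of C(A) ∩ S_n(R), computed as the kernel dimension of
    the commutator map X -> [A, X] restricted to symmetric matrices."""
    n = A.shape[0]
    cols = []
    for i, j in combinations_with_replacement(range(n), 2):
        X = np.zeros((n, n))
        X[i, j] = X[j, i] = 1.0
        cols.append((A @ X - X @ A).ravel())
    M = np.column_stack(cols)
    return M.shape[1] - np.linalg.matrix_rank(M, tol=tol)

# Multiplicities 3, 2, 1 contribute 3*4/2 + 2*3/2 + 1*2/2 (the symmetric
# span of the v_i v_j^T with mu_i = mu_j, per Proposition 2.6).
A = np.diag([5., 5., 5., 2., 2., 7.])
print(sym_centralizer_dim(A))   # 6 + 3 + 1 = 10
```

This is exactly the space that appears as N_{E_Λ,A} in Lemma 2.7 below.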
Throughout this section we assume that λ_1, ..., λ_q are the distinct eigenvalues of A and that A = Σ_{i=1}^q λ_iE_i is the spectral decomposition of A (i.e., the E_i are mutually orthogonal idempotents that sum to I).

Lemma 2.7. Let A be a symmetric n × n matrix with spec(A) = Λ. Then N_{E_Λ,A} = C(A) ∩ S_n(R).

Proof. Consider a differentiable path B(t) (t ∈ (−1, 1)) on E_Λ such that B(0) = A. Then B(t) has spectral decomposition

    B(t) = Σ_{i=1}^q λ_i F_i(t).

Clearly F_i(0) = E_i for i = 1, 2, ..., q. As F_i(t) is given by a polynomial in B(t) with coefficients that depend only on the spectrum of B(t), i.e., on Λ [18, Corollary to Theorem 9 (Chapter 9)], F_i(t) is a differentiable function of t. Since the F_i(t) are mutually orthogonal idempotents, we have that

    Ḟ_i(0)E_i + E_iḞ_i(0) = Ḟ_i(0), and        (3)
    Ḟ_i(0)E_j + E_iḞ_j(0) = O, for i, j = 1, 2, ..., q and i ≠ j.        (4)

Post-multiplying (3) by E_j gives

    E_iḞ_i(0)E_j = Ḟ_i(0)E_j for all j ≠ i.        (5)

Pre-multiplying (4) by E_j gives

    E_jḞ_i(0)E_j = O for all j ≠ i.        (6)

Post-multiplying (3) by E_i gives

    E_iḞ_i(0)E_i = O for all i.        (7)

Equation (5) implies that the image of Ḟ_i(0)E_j is contained in the image of E_i (j ≠ i).

Consider eigenvectors x and y of A. First suppose that they correspond to the same eigenvalue, say λ_i. Since x = E_ix and y = E_iy, (6) and (7) imply that yᵀḞ_j(0)x = 0 for all j. Thus

    tr(Ḃ(0)(xyᵀ + yxᵀ)) = tr( Σ_{j=1}^q λ_jḞ_j(0)(xyᵀ + yxᵀ) ) = 2 Σ_{j=1}^q λ_j yᵀḞ_j(0)x = 0.

Therefore xyᵀ + yxᵀ ∈ N_{E_Λ,A} for all such choices of x and y corresponding to the same eigenvalue.

Now suppose that x and y are unit eigenvectors of A corresponding to distinct eigenvalues, say λ_1 and λ_2. Let μ_3, ..., μ_n be the remaining eigenvalues of A (with no assumption that they are distinct from each other or λ_1 or λ_2) and z_3, ..., z_n a corresponding orthonormal set of eigenvectors. Let

    D(t) = [ cos t, sin t ; −sin t, cos t ] [ λ_1, 0 ; 0, λ_2 ] [ cos t, −sin t ; sin t, cos t ] ⊕ diag(μ_3, ..., μ_n),

let S = [ x y z_3 ⋯ z_n ], and let B(t) = SD(t)Sᵀ. Then B(t) is cospectral with A, B(0) = A, and

    Ḃ(0) = SḊ(0)Sᵀ = [ x y ] [ 0, λ_2 − λ_1 ; λ_2 − λ_1, 0 ] [ xᵀ ; yᵀ ] = (λ_2 − λ_1)(xyᵀ + yxᵀ).

It follows that xyᵀ + yxᵀ ∈ T_{E_Λ,A} for any eigenvectors x and y corresponding to distinct eigenvalues of A.

Let v_1, ..., v_n be a basis of eigenvectors of A. Then {v_iv_jᵀ + v_jv_iᵀ : i ≤ j} forms a basis for S_n(R). We have shown that if v_i and v_j correspond to distinct eigenvalues, then v_iv_jᵀ + v_jv_iᵀ ∈ T_{E_Λ,A}, and if v_i and v_j correspond to the same eigenvalue, then v_iv_jᵀ + v_jv_iᵀ ∈ N_{E_Λ,A}. It follows that:

    T_{E_Λ,A} = span({v_iv_jᵀ + v_jv_iᵀ : v_i and v_j correspond to distinct eigenvalues of A}),   (8)
    N_{E_Λ,A} = span({v_iv_jᵀ + v_jv_iᵀ : v_i and v_j correspond to the same eigenvalue of A}).   (9)

By Proposition 2.6, C(A) is the span of the set of matrices of the form v_iv_jᵀ where v_i and v_j correspond to the same eigenvalue of A. Thus N_{E_Λ,A} = C(A) ∩ S_n(R).

Definition 2.8. The symmetric matrix A has the Strong Spectral Property (or A has the SSP for short) if the only symmetric matrix X satisfying A ∘ X = O, I ∘ X = O and [A, X] = O is X = O.

Observation 2.9. For symmetric matrices A and X, [A, X] = O if and only if AX is symmetric, so the SSP could have been defined as: the only symmetric matrix X satisfying A ∘ X = O and I ∘ X = O such that AX is symmetric is X = O.

Lemma 2.7 asserts that A has the SSP if and only if the manifolds S(G) and E_Λ intersect transversally at A, where G is the graph such that A ∈ S(G) and Λ = spec(A). A proof similar to that of Theorem 2.5 yields the next result.

Theorem 2.10. If A ∈ S(G) has the SSP, then every supergraph of G with the same vertex set has a realization that has the same spectrum as A and has the SSP.

For every A ∈ S(K_n), A ∘ X = O and I ∘ X = O imply X = O, so trivially A has the SSP. Next we discuss some additional examples.

Example 2.11. Let

    A = [ 0 1 1 1
          1 0 0 0
          1 0 0 0
          1 0 0 0 ].

Then spec(A) = {−√3, 0, 0, √3}. To show that A has the SSP, consider X ∈ S_4(R) such that A ∘ X = O, I ∘ X = O and [A, X] = O.
Then X is a matrix of the form

    X = [ 0 0 0 0
          0 0 u v
          0 u 0 w
          0 v w 0 ].

The fact that X commutes with A implies that X has all row sums and column sums equal to zero, which in turn implies X = O. Thus, A has the SSP, and by Theorem 2.10, every supergraph of K_{1,3} has a realization with spectrum {−√3, 0, 0, √3}.
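Like the SAP, Definition 2.8 is a finite linear-algebra condition, so Example 2.11 can be verified mechanically: the SSP holds exactly when the map X ↦ [A, X], restricted to symmetric X supported on non-edges with zero diagonal, has trivial kernel. A numpy sketch (ours; the helper name has_SSP is an assumption, not the paper's):

```python
import numpy as np
from itertools import combinations

def has_SSP(A, tol=1e-8):
    """Strong Spectral Property: the only symmetric X with A o X = O,
    I o X = O and [A, X] = O is X = O (Definition 2.8)."""
    n = A.shape[0]
    free = [(i, j) for i, j in combinations(range(n), 2) if abs(A[i, j]) < tol]
    if not free:
        return True
    cols = []
    for i, j in free:
        X = np.zeros((n, n))
        X[i, j] = X[j, i] = 1.0
        cols.append((A @ X - X @ A).ravel())   # commutator constraint
    M = np.column_stack(cols)
    return np.linalg.matrix_rank(M, tol=tol) == len(free)

# Example 2.11: the adjacency matrix of the star K_{1,3}.
A = np.array([[0., 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
print(has_SSP(A), np.round(np.linalg.eigvalsh(A), 6))
```

For this A the three free variables u, v, w of the example map to commutators forcing u + v = u + w = v + w = 0, so the kernel is trivial and the check returns True.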

Example 2.12. Let G be the star on n ≥ 5 vertices having 1 as a central vertex, let A ∈ S(G), and let λ be an eigenvalue of A of multiplicity at least 3. From a theorem of Parter and Wiener (see, for example, [13, Theorem 2.1]), λ occurs on the diagonal of A(1) at least 4 times.⁴ Without loss of generality, we may assume that a_22 = a_33 = a_44 = a_55 = λ. Let u = [0, 0, 0, −a_15, a_14, 0, ..., 0]ᵀ, v = [0, −a_13, a_12, 0, 0, 0, ..., 0]ᵀ, and X = uvᵀ + vuᵀ. It can be verified that AX = λX = XA, A ∘ X = O and I ∘ X = O. Thus, A does not have the SSP. Therefore, no matrix in S(G) with an eigenvalue of multiplicity at least 3 has the SSP.

Example 2.13. Let G be as in the previous example. Let A ∈ S(G) be such that no eigenvalue of A has multiplicity 3 or more. Without loss of generality we may assume that A(1) = ⊕_{j=1}^k λ_jI_{n_j} for some distinct λ_1, ..., λ_k and positive integers n_1, ..., n_k with n − 1 = n_1 + n_2 + ⋯ + n_k. As every eigenvalue of A has multiplicity 2 or less, each n_j ≤ 3 for j = 1, 2, ..., k. Let X be a symmetric matrix with [A, X] = O, A ∘ X = O and I ∘ X = O. The last two conditions imply that all entries in the first row, the first column, and on the diagonal of X are zero. This and the first condition imply that X(1) is in C(A(1)). By the distinctness of the λ_j we conclude that X(1) = ⊕_{j=1}^k X_j, where X_j is a symmetric matrix of order n_j with zeros on the diagonal. Partition A as A = [ α, aᵀ ; a, A(1) ] and partition a = (a_1, ..., a_k)ᵀ conformally with X(1). Then [A, X] = O implies that X_ja_j = 0. As n_j ≤ 3, X_j is symmetric and has zeros on its diagonal, and every entry of a_j is nonzero, this implies X_j = O. Thus, X = O and we conclude that A has the SSP.

Observation 2.14. If the diagonal matrix D = diag(μ_1, μ_2, ..., μ_n) has a repeated eigenvalue, say μ_i = μ_j with i ≠ j, then D does not have the SSP, as validated by the matrix X with a 1 in positions (i, j) and (j, i) and zeros elsewhere.

Remark 2.15.
Remark 2.15. Note that a diagonal matrix D = diag(λ_1, λ_2, ..., λ_n) with distinct eigenvalues has the SSP, because DX = XD implies all off-diagonal entries of X are zero. Therefore, by Theorem 2.10, every graph on n vertices has a realization that is cospectral with D and has the SSP. The existence of such a cospectral matrix was proved in [4] via a different method. However, not every matrix with all eigenvalues distinct has the SSP.

Example 2.16. Let

A = [  3  2  0  0 -1          X = [  0  0 -1  1  0
       2  0  1  0  0                 0  0  0 -2  1
       0  1  3  1  0    and         -1  0  0  0 -1
       0  0  1  0  2                 1 -2  0  0  0
      -1  0  0  2  3 ]               0  1 -1  0  0 ].

Then A does not have the SSP because A ∘ X = O, I ∘ X = O, [A, X] = O, but X ≠ O. Note that spec(A) = {2(1 + √2), 4, (1 + √17)/2, 2(1 - √2), (1 - √17)/2}.

⁴A(1) is the principal submatrix of A obtained by deleting row and column 1.
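A quick mechanical check of Example 2.16 (the sign pattern of the two matrices is our reconstruction of the display, chosen to be consistent with every property asserted in the example):

```python
import numpy as np

A = np.array([[ 3, 2, 0, 0,-1],
              [ 2, 0, 1, 0, 0],
              [ 0, 1, 3, 1, 0],
              [ 0, 0, 1, 0, 2],
              [-1, 0, 0, 2, 3]], float)
X = np.array([[ 0, 0,-1, 1, 0],
              [ 0, 0, 0,-2, 1],
              [-1, 0, 0, 0,-1],
              [ 1,-2, 0, 0, 0],
              [ 0, 1,-1, 0, 0]], float)

# The four SSP conditions hold with X != O, so A fails the SSP ...
witness = (np.allclose(A * X, 0) and np.allclose(np.diag(X), 0)
           and np.allclose(A @ X, X @ A) and not np.allclose(X, 0))

# ... even though all five eigenvalues of A are distinct.
eigs = np.linalg.eigvalsh(A)
num_distinct = int(np.sum(np.diff(np.sort(eigs)) > 1e-8)) + 1
```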

2.3 The Strong Multiplicity Property

Let m = (m_1, ..., m_q) be an ordered list of positive integers with m_1 + m_2 + ··· + m_q = n. We let U_m denote the set of all symmetric matrices whose ordered multiplicity list is m. Thus, if A has ordered multiplicity list m, then

U_m = {B ∈ S_n(ℝ) : B has the same ordered multiplicity list as A}.

It follows from results in [2] that U_m is a manifold. In the next lemma we determine N_{U_m,A}.

Lemma 2.17. Let A be an n × n symmetric matrix with exactly q distinct eigenvalues, spectrum Λ, and ordered multiplicity list m. Then

N_{U_m,A} = {X ∈ C(A) ∩ S_n(ℝ) : tr(A^i X) = 0 for i = 0, ..., q - 1}.

Proof. Let A be an n × n symmetric matrix with spectrum given by the multiset Λ, and let B(t) (t ∈ (-1, 1)) be a differentiable path of matrices having the same ordered multiplicity list as A. Let A and B(t) have spectral decompositions A = Σ_{j=1}^q λ_j E_j and B(t) = Σ_{j=1}^q λ_j(t) F_j(t), respectively. Then

B′(0) = Σ_{j=1}^q λ_j′(0) E_j + Σ_{j=1}^q λ_j F_j′(0),

and we conclude that the tangent space of U_m is the sum of

T := {Σ_{j=1}^q c_j E_j : c_j ∈ ℝ}

and the tangent space T_{E_Λ,A}. Therefore

N_{U_m,A} = N_{E_Λ,A} ∩ {S ∈ S_n(ℝ) : tr(E_j S) = 0 for all j = 1, ..., q}.  (10)

As span(E_1, ..., E_q) = span(I = A^0, ..., A^{q-1}), tr(A^i S) = 0 for i = 0, ..., q - 1 if and only if tr(E_j S) = 0 for j = 1, ..., q. The result now follows.

Definition 2.18. The n × n symmetric matrix A satisfies the Strong Multiplicity Property (or A has the SMP for short) provided the only symmetric matrix X satisfying A ∘ X = O, I ∘ X = O, [A, X] = O and tr(A^i X) = 0 for i = 0, ..., n - 1 is X = O.

Remark 2.19. The minimal polynomial provides a dependency relation among the powers of A, so we can replace i = 0, ..., n - 1 by i = 0, ..., q - 1, where q is the number of distinct eigenvalues of A. Since I ∘ X = O and A ∘ X = O imply tr(IX) = 0 and tr(AX) = 0, we can replace i = 0, ..., q - 1 by i = 2, ..., q - 1.
Therefore, for multiplicity lists with only 2 distinct eigenvalues, the SSP and the SMP are equivalent. Lemma 2.17 asserts that S(G) and U_m intersect transversally at A if and only if A has the SMP. A proof similar to that of Theorem 2.5 yields the next result.

Theorem 2.20. If A ∈ S(G) has the SMP, then every supergraph of G with the same vertex set has a realization that has the same ordered multiplicity list as A and has the SMP.
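The span identity at the end of the proof of Lemma 2.17, span(E_1, ..., E_q) = span(I, A, ..., A^{q-1}), is what lets the SMP trade projection traces for power traces. A small numerical sanity check (the sample matrix and names are ours): the two families of traces are related by an invertible Vandermonde map.

```python
import numpy as np

A = np.diag([1.0, 1.0, 2.0, 5.0])                  # q = 3 distinct eigenvalues
E = [np.diag([1.0, 1, 0, 0]),                      # spectral projections E_j
     np.diag([0.0, 0, 1, 0]),
     np.diag([0.0, 0, 0, 1])]

rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4)); S = S + S.T       # an arbitrary symmetric S

power_traces = [np.trace(np.linalg.matrix_power(A, i) @ S) for i in range(3)]
proj_traces = [np.trace(Ej @ S) for Ej in E]

# tr(A^i S) = sum_j lambda_j^i tr(E_j S): a Vandermonde change of basis,
# so the two sets of trace conditions cut out the same subspace.
V = np.vander([1.0, 2.0, 5.0], 3, increasing=True).T
```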

380 381 38 383 384 385 386 387 388 389 390 391 39 393 394 395 396 397 398 399 400 401 40 403 404 405 406 407 408 Observation.1. Clearly the SSP implies the SMP. Example.16 and the next remark show the SSP and the SMP are distinct. Remark.. In contrast to Example.16, every symmetric matrix whose eigenvalues are distinct has the SMP. To see this, let A S n (R) have n distinct eigenvalues and let its spectral decomposition be n A = λ j y j yj. i=1 Suppose X S n (R), I X = O, A X = O, [A, X] = O, and tr(a i X) = 0 for i = 0, 1,..., n 1. Since A and X commute, each eigenspace of A is an invariant subspace of X. These conditions and the distinctness of eigenvalues imply that each y j is an eigenvector of X, and tr(p(a)x) = 0 for all polynomials p(x). The distinctness of eigenvalues implies that y j yj is a polynomial in A for j = 1,..., n. Hence 0 = tr(y j yj X) = yj Xy j, so the eigenvalue of X for which y j is an eigenvector of X is 0 for j = 1,..., n. Thus X = O, and we conclude that A has the SMP. Observation.3. Clearly the SMP implies the SAP. The next example shows that the SMP and the SAP are distinct. Example.4. Consider the matrices 1 1 1 0 1 0 0 0 1 3 0 1 0 1 0 0 1 0 3 1 0 0 1 0 A = 0 1 1 1 0 0 0 1 1 0 0 0 3 1 1 0, X = 0 1 0 0 1 1 0 1 0 0 1 0 1 0 1 1 0 0 0 1 0 1 1 3 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 Clearly A X = O and I X = O. It is straightforward to verify (using computational software) that spec(a) = {0 (4), 4 (4) } and [A, X] = O, where λ (m) denotes that eigenvalue λ has multiplicity m. Since A has only two eigenvalues, A does not have the SMP by Remark.19. It is also straightforward to verify (using computational software) that both A and A 4I have the SAP. Note that G(A) = Q 3 and A is a diagonal scaling of the positive semidefinite matrix of rank four constructed in [5, Example.1]. Observation.5. If A has the SSP (SMP), then A+λI has the SSP (SMP) for all λ R. 
Remark.6. If λ is the only multiple eigenvalue of A, then A has the SMP if and only if A λi has the SAP. To see this assume that λ 1 is the only multiple eigenvalue of A, A λ 1 I has the SAP, λ,..., λ q are the remaining eigenvalues of A, and y j is a unit eigenvector of A corresponding to λ j (j =,..., q). Then the spectral decomposition of A has the form A = λ 1 E 1 + q j= λ jy j yj. Assume X is a symmetric matrix such that A X = O, I X = O,. 13

409 410 411 41 413 414 415 416 417 418 419 40 41 4 43 [A, X] = O, and tr(a k X) = 0 for all k. As in Remark., each y j is an eigenvector of X and y j yj is a polynomial in A, so 0 = tr(y j yj X) = yj Xy j. Thus we conclude that y j is in the null space of X for j =,..., q. Therefore, AX = λ 1 E 1 X = λ 1 X (with the latter equality coming from E 1 + q j= y jyj = I). Thus, (A λ 1 I)X = O. Since A λ 1 I has the SAP, X = O and we conclude that A has the SMP. 3 Properties of matrices having the SSP or SMP Section 3.1 presents characterizations of the tangent spaces T RA.A, T EΛ.A, and T Um.A and applies these to obtain lower bounds on the number of edges in a graph where a matrix has the associated strong property. Section 3. describes a computational test for determining whether a matrix has the SSP or the SMP. Section 3.3 presents the Gershgorin intersection graph and uses it to test for the SSP. Section 3.4 characterizes when block diagonal matrices have the SSP or the SMP in terms of the diagonal blocks. 3.1 Tangent spaces for the strong property manifolds We begin by giving equivalent, but more useful, descriptions of the tangent spaces T RA.A, T EΛ.A, and T Um.A. The set of all n n skew-symmetric matrices is denoted by K n (R). 44 45 46 47 (r+1 48 49 430 431 43 433 434 435 Theorem 3.1. Let A be an n n symmetric matrix with distinct eigenvalues λ 1,..., λ q, spectral decomposition A = q j=1 λ je j, ordered multiplicity list m = (m 1,..., m q ) and rank r; the spectrum of A is Λ = {λ (m 1) 1,..., λ (mq) q }. Then (a) [19] T RA.A = {AY + Y A : Y is an n n matrix} and dim T RA.A = ( ) ( n+1 n r+1 ) ) = + r(n r); ( mi ) ; (b) T EΛ.A = {AK KA : K K n (R)} and dim T EΛ.A = ( n ) q j=1 (c) T Um.A = T EΛ.A + span{i = A 0,..., A q 1 } and dim T Um.A = ( n ) q j=1 ( mi ) + q. Proof. Throughout the proof {v 1,..., v n } is an orthogonal basis of eigenvectors of A with corresponding eigenvalues µ 1,..., µ n with µ i 0 for i = 1,..., r and µ i = 0 for i = r + 1,..., n. 
To prove (a), consider an n n matrix Y and X N RA.A. First note that Lemma.4 asserts that X S n (R) and AX = O. Thus XA = O and 436 437 438 tr((ay + Y A)X) = tr(ay X + Y AX) = tr(xay + Y AX) = tr(o + O) = 0. Therefore {AY + Y A} NR A.A = T R A.A. Next note that if i r, then v i vj + v j vi = AY + Y A where Y = 1 λ i v i vj. Define Ω to be the set of pairs (i, j) with 1 i j n and 14

439 440 i r. Thus span{v i vj + v j vi : (i, j) Ω} {AY + Y A : Y is an n n matrix} T RA.A. It is easy to verify that {v i vj + v j vi 441 : (i, j) Ω} is an orthogonal set and hence ( ) r + 1 dim span{v i vj + v j vi 44 : (i, j) Ω} = + r(n r). By (), dim N RA.A = ( ) n r+1, and hence dim TRA.A = ( ) ( n+1 n r+1 ) 443. It follows that dim span{v i vj + v j vi 444 : (i, j) Ω} = dim T RA.A. Therefore, equality holds throughout (11), 445 and (a) holds. 446 To prove (b) consider K K n (R) and B N EΛ.A. Note that Lemma.7 asserts that 447 B C(A) S n (R). Thus 448 449 450 451 tr((ak KA)B) = tr(akb KAB) = tr(k(ba AB)) = tr(o) = 0. Observe that if µ i µ j, then v i vj + v j vi = AK KA where K is the skew-symmetric 1 matrix µ i µ j (v i vj v j vi ). Hence, {v i vj + v j vi : µ i µ j } {AK KA : K K n (R)} N EΛ.A = T EΛ.A = {v i vj + v j vi : µ i µ j }. 45 where the last equality is (8). Thus, equality holds throughout (1). The dimension of span{v i vj + v j vi : µ i µ j } is easily seen to be ( ) n+1 q ( mi ) ( +1 j=1 = n ) q ( mi ) 453 j=1, 454 and we have proven (b). 455 Statement (c) follows from (b) and (10), and the facts that 456 457 458 459 and span{e 1,..., E q } = span{i = A 0,..., A q 1 } T EΛ.A span{a 0,..., A q 1 } = {O} since span{a 0,..., A q 1 } C(A) S n (R) = N EΛ.A. (11) (1) 460 461 46 463 464 Remark 3.. Let M be a manifold in S n (R) and G be a graph of order n such that M and S(G) intersect transversally at A. Then ( ) n + 1 dim T M.A + dim T S(G).A dim (T M.A T S(G).A ) = dim S n (R) =. Since dim T S(G).A = n + E(G) and ( ) ( n+1 = n ) + n, ( ) n E(G) = dim T M.A + dim(t M.A T S(G).A ). 15

465 466 467 468 Corollary 3.3. Let G be a graph on n vertices and let A S(G) with spectrum Λ, ordered multiplicity list m = (m 1,..., m q ), and rank r. Assume that A is not a scalar matrix. Then (a) [17] If R A and S(G) intersect transversally at A, then { ( n r+1 ) if G is not bipartite, E(G) ) 1 if G is bipartite; ( n r+1 469 (b) If E Λ and S(G) intersect transversally at A, then E(G) q j=1 ( mi ) ; and (c) If U m and S(G) intersect transversally at A, then E(G) q ( mi ) 470 j=1 q +. Proof. Statement (b) follows immediately from Remark 3. and part (b) of Theorem 3.1. 471 47 473 474 475 476 Statement (c) follows from Remark 3., part (c) of Theorem 3.1, and the fact that I and A are linearly independent, and lie in T Um.A T S(G). Statement (a) can be established using Remark 3. and part (a) of Theorem 3.1 but the argument is more complicated. Since it was previously established in in [17] (see Theorem 6.5 and Corollary 6.6) we refer the reader there. 477 478 479 480 481 48 483 484 485 486 487 488 489 490 491 49 493 494 495 496 3. Equivalent criteria for the strong properties Let H be a graph with vertex set {1,,..., n} and edge-set {e 1,..., e p }, where e k = i k j k. For A = (a i,j ) S n (R), we denote the p 1 vector whose kth coordinate is a ik,j k by vec H (A). Thus, vec H (A) makes a vector out of the elements of A corresponding to the edges in H. Note that vec H ( ) defines a linear transformation from S n (R) to R p. The complement G of G is the graph with the same vertex set as G and edges exactly where G does not have edges. Proposition 3.4. Let M be a manifold in S n (R), let G be a graph with vertices 1,,..., n such that A M S(G), and let p be the number of edges in G. Then M and S(G) intersect transversally at A if and only if {vec G (B) : B T M.A } = R p. Proof. First assume that M and S(G) intersect transversally at A. Consider c R p, and let C = (c ij ) S n (R) with c ij = c k if ij is the kth edge of G. 
The assumption of transversality implies there exist B T M.A and D T S(G).A such that C = B + D. Hence c = vec G (C) = vec G (B) + vec G (D) = vec G (B), and we conclude that {vec G (B) : B T M.A } = R p. Conversely, assume that {vec G (B) : B T M.A } = R p and consider C S n (R). Then vec G (C) = vec G (B) for some B T M.A. Note that C B T S(G).A. Since C = B +(C B), we conclude that T M.A + T S(G).A = S n (R). In the following, E ij denotes the n n matrix with a 1 in position (i, j) and 0 elsewhere, and K ij denotes the n n skew-symmetric matrix E ij E ji. Theorem 3.1 and Proposition 3.4 imply the next result. 16

497 498 499 500 501 50 503 504 Theorem 3.5. Let G be a graph, let A S(G) with q distinct eigenvalues, and let p be the number of edges in G. Then (a) A has the SAP if and only if the matrix whose columns are vec G (AE ij + EijA) (1 i j n) has rank p; (b) A has the SSP if and only if the matrix whose columns are vec G (AK ij K ij A) (1 i < j n) has rank p; and (c) A has the SMP if and only if the matrix whose columns are vec G (AK ij K ij A) (1 i < j n) along with vec G (A k ) (k = 0, 1,..., q 1) has rank p. 505 506 Example 3.6. Let A = 1 1 0 0 6 1 1 0 0 6 0 0 4 1 5 0 0 1 4 5 6 6 5 5 16. 507 508 509 510 511 51 513 514 515 516 Then [A, K 1,3 ] = [A, K,3 ] = 0 0 3 1 5 0 0 1 0 0 3 1 0 0 6 1 0 0 0 0 5 0 6 0 0 0 0 1 0 0 0 0 3 1 5 1 3 0 0 6 0 1 0 0 0 0 5 6 0 0, [A, K 1,4] =, [A, K,4] = 0 0 1 3 5 0 0 0 1 0 1 0 0 0 0 3 1 0 0 6 5 0 0 6 0 0 0 0 1 0 0 0 1 3 5 0 1 0 0 0 1 3 0 0 6 0 5 0 6 0 Let G be the graph of A, and let M be the matrix defined in part (b) of Theorem 3.5 whose columns are vec G ([A, K ij ]). The submatrix M of columns of M corresponding to K ij where (i, j) {(1, 3), (1, 4), (, 3), (, 4)} is 3 1 1 0 M = 1 3 0 1 1 0 3 1. 0 1 1 3 Since M is strictly diagonally dominant (equivalently, 0 is not in the union of Gershgorin discs of M), M is invertible, and so rank M = 4. Therefore M has rank 4 and by Theorem 3.5, we conclude that A has the SSP.,. 17

517 518 519 50 51 5 53 54 55 56 57 58 59 530 531 53 533 534 535 536 537 538 539 540 541 54 543 544 545 546 547 548 549 550 3.3 Gershgorin discs and the SSP Given a square matrix A = (a ij ) C n n, the Gershgorin intersection graph of A is the graph on vertices labeled 1,..., n in which two vertices i j are adjacent exactly when Gershgorin discs i and j of A intersect, that is, when the inequality n n a ii a jj a il + a jl (13) l=1,l i l=1,l j is satisfied. If A has real spectrum, then Gershgorin discs intersect if and only if they intersect on the real line, and the Gershgorin intersection graph of A is an interval graph. Note that when graphs have a common vertex labeling, one of them may be a subgraph up to isomorphism of another without being identically a subgraph. The next result requires the stronger condition of being identically a subgraph. Theorem 3.7. Let G be a graph with vertices labeled 1,..., n and let A S(G). If the Gershgorin intersection graph of A is identically a subgraph of G, then A satisfies the SSP. Proof. Suppose that e k = i k j k (k = 1,,..., p) are the edges of G. Let M be the p p matrix whose k-th column is vec G (AK ik,j k K ik,j k A). The (k, k)-entry of M has absolute value a ik,i k a jk,j k and the remaining entries of the k-th column of M are, up to sign, a subset of the entries a ik,l, l i k, and a jk,l, l j k. If the Gershgorin intersection graph of A is identically a subgraph of G, then inequality (13) is not satisfied for any k (because the e k are nonedges of G and therefore of the Gershgorin intersection graph). Thus M is strictly diagonally dominant, so M is invertible and has rank p. Therefore, by Theorem 3.5, A has the SSP. Of course it is possible to have M strictly diagonally dominant implying the invertibility of M even when the Gershgorin intersection graph of A is not a subgraph of G, as in Example 3.6. 3.4 Block diagonal matrices Theorem 3.8. Let A i S ni (R) for i = 1,. 
Then A := A 1 A has the SSP (respectively, SMP) if and only if both A 1 and A have the SSP (respectively, SMP) and spec(a 1 ) spec(a ) =. ( ) X1 W Proof. Let X = W be partitioned conformally with A. X First, suppose that A 1 and A have the SSP, spec(a 1 ) spec(a ) =, A X = O, I X = O, and [A, X] = O. Since A i X i = O, I X i = O, and [A i, X i ] = O for i = 1, and A i has SSP, X i = O for i = 1,. The 1, -block of [A, X] is O = A 1 W W A, so by [0, Theorem.4.4.1], W = O. Thus X = O and A has the SSP. Now, suppose A 1 and A have the SMP rather than the SSP (and the spectra are disjoint). Assume A X = O, I X = O, [A, X] = O and X also satisfies tr(a k X) = 0 for k = 18

551 55 553 554 555 556 557 558 559 560 561 56 563 564 565 566 567 568 569 570 571 57 573 574 575,..., n 1. As before, W = O so X = X 1 X, A i X i = O, I X i = O, [A i, X i ] = O for i = 1,. To obtain X i = O, i = 1, (and thus X = O and A has the SMP) it suffices to show that tr(a k i X i ) = 0 for i = 1, and k =,..., n 1. Consider the spectral decompositions of the diagonal blocks of A, A i = q i j=1 λ(i) j E(i) j for i = 1,. Since spec(a 1 ) spec(a ) =, the spectral decomposition of A is A = q 1 j=1 ( ) λ (1) j E (1) j O + q j=1 ( λ () j O E () j Since each projection in the spectral decomposition is a polynomial in A, A 1 O and O A are polynomials in A. Therefore, tr(a k 1X 1 ) = tr (( A k 1 O ) X ) = 0 and tr(a k X ) = 0. Conversely, ( assume ) A 1 A has the SSP and A 1 X 1 = O, I X 1 = O, and [A 1, X 1 ] = O. X1 O Then X := satisfies A X = O, I X = O, and [A, X] = O, so X = O, implying O O X 1 = O. In the case of the SMP, tr(a k 1X 1 ) = 0 implies tr(a k X) = 0. We show that spec(a 1 ) spec(a ) implies A does not have the SMP (and thus does not have the SSP): Suppose λ spec(a 1 ) spec(a ). For i = 1,, choose z i 0 such that A i z i = λz i. Define ( ) O z1 z Z := z z. 1 O Then A Z = O, I Z = O, tr(a k Z) = 0, and [A, Z] = O, but Z O, showing that A does not have the SMP. As one application we give an upper bound on q(g) in terms of chromatic numbers. Theorem 3.9. Let G be a graph and G its complement. Then q(g) χ(g). Proof. The graph G contains a disjoint union of χ(g) cliques. Taking the direct sum of realizations of each clique each having at most two distinct eigenvalues and the SSP, and the eigenvalues of different cliques distinct gives a matrix having the SSP by Theorem 3.8. The result then follows from Theorem.10. Another application gives an upper bound on the number of distinct eigenvalues required for a supergraph on a superset of vertices. ). 576 Theorem 3.10. Let A be a symmetric matrix of order n with graph G. 
If A has the SSP 577 (or the SMP) and Ĝ is a graph on m vertices containing G as a subgraph, then there exists B S( Ĝ) such that spec(a) spec( B) 578 (or has the ordered multiplicity list of A augmented with ones), and B 579 has the SSP (or the SMP). Furthermore, q(ĝ) m n + q(a). If A has the SSP, then we can prescribe spec( B) to be spec(a) Λ where Λ is any set of distinct real 580 581 58 583 numbers such that spec(a) Λ =,. Proof. Assume that A has the SSP (respectively, SMP), and without loss of generality that V (G) = {1,,..., n}, V (Ĝ) = {1,,..., m}, and G is a subgraph of Ĝ. 19

584 585 586 587 588 589 590 591 59 Consider the matrix ( A O B = O diag(λ) Note that the eigenvalues of diag(λ) are distinct and distinct from the eigenvalues of A. It follows that diag(λ) has the SSP (see Remark.15 or note this follows from Theorem 3.8), and thus has the SMP. By Theorem 3.8, if A satisfies the SSP (SMP), then B satisfies the SSP (SMP). By Theorem.10 (or Theorem.0), every supergraph of the graph of B on the same vertex set has a realization B that is cospectral with B and has the SSP (or has the same ordered multiplicity list and has the SMP). Hence q(ĝ) q(b) = m n + q(a). ). 593 594 595 596 597 598 599 600 601 60 603 604 605 606 607 608 609 610 611 61 613 614 615 616 617 Remark 3.11. By taking a realization in S(G) with row sums 0, mr(g) G 1. It is a well known result that the eigenvalues of an irreducible tridiagonal matrix are distinct, that is mr(p n ) = n 1. A classic result of Fiedler [15] asserts that mr(g) = G 1 if and only if G is a path. Theorem 3.10 can be used to derive this characterization, as follows. If G contains a vertex of degree 3 or more, then G contains K 1,3 as a subgraph, and hence by Theorem 3.10 and Example.11 we conclude that mr(g) G. Also, it is easy to see if G is disconnected, then mr(g) G. Thus, if mr(g) = G 1, then G has maximum degree and is connected. Hence G is a path or a cycle. The adjacency matrix of a cycle C has a multiple eigenvalue, which implies that mr(c) C. 4 Application of strong properties to minimum number of distinct eigenvalues The SSP and the SMP allow us to characterize graphs having q(g) G 1 (see Section 4.). First we introduce new parameters based on the minimum number of eigenvalues for matrices with the given graph that have the given strong property. 
4.1 New parameters q S (G) and q M (G) Recall that ξ(g) is defined as the maximum nullity among matrices in S(G) that satisfy the SAP, and ν(g) is the maximum nullity of positive semidefinite matrices having the SAP, so ξ(g) M(G) and ν(g) M + (G). These bounds are very useful because of the minor monotonicity of ξ and ν (especially the monotonicity on subgraphs). In analogy with these definitions, we define parameters for the minimum number of eigenvalues among matrices having the SSP or the SMP and described by a given graph. In order to do this we need the property that every graph has at least one matrix with the property SSP (and hence SMP). For any set of G distinct real numbers, there is matrix in S(G) with these eigenvalues that has the SSP by Remark.15. 0

618 619 Definition 4.1. We define q M (G) = min{q(a) : A S(G) and A has the SMP} 60 61 and q S (G) = min{q(a) : A S(G) and A has the SSP}. 6 63 64 65 66 67 68 69 630 631 63 633 634 635 636 637 638 639 640 Observation 4.. From the definitions, q(g) q M (G) q S (G) for any graph G. One might ask why we have not defined a parameter q A (G) for the SAP. The reason is that the SAP is not naturally associated with the minimum number of eigenvalues. The Strong Arnold Property considers only the eigenvalue zero; that is, if zero is a simple eigenvalue of A (or not an eigenvalue of A), then A automatically has the SAP. The next result is immediate from Remark.19 and the fact that q(g) = 1 if and only if G has no edges. Corollary 4.3. Suppose G is connected. Then q S (G) = if and only if q M (G) =. The next result is immediate from Theorem 3.8. Corollary 4.4. If G is the disjoint union of connected components G i, i = 1,..., h with h, then q S (G) = h i=1 q S(G i ) and q M (G) = h i=1 q M(G i ). Remark 4.5. Suppose G is the disjoint union of connected components G i, i = 1,..., h with h. Since any graph has a realization for any set of distinct eigenvalues (Remark.15), q(g) max h i=1 G i. Clearly q(g) max h i=1 q(g i ). The next result is an immediate corollary of Theorem 3.10. Corollary 4.6. If H is a subgraph of G, H = n, and G = m, then q(g) q S (G) m n + q S (H) and q(g) q M (G) m n + q M (H). 4. High values of q(g) In this section we characterize graphs having q(g) G 1. Proposition 4.7. Let G be one of the graphs shown in Figure 1. Then q S (G) G. 641 64 Proof. Let A 1 = 0 0 1 0 0 0 0 0 1 0 0 0 1 1 1 1 0 0 0 0 1 1 1 0 0 0 1 1 0 0 0 0 1 0 1, A = 1 1 1 0 0 1 1 1 0 0 1 1 0 1 1 0 0 1 0 0 0 0 1 0 0, 1

1 5 1 3 4 3 6 4 5 1) H tree ) Campstool 7 6 1 4 5 6 3 5 3 3) Long Y tree 1 4 4) 3-sun Figure 1: Graphs for Proposition 4.7. 643 644 645 646 647 A 3 = 0 1 0 1 0 1 0 1 1 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 1 0 0 1 0 0 0 0 1 1 0 0 0 0 0 1 1, and A 4 = 1 1 1 0 0 1 1 0 1 0 1 1 0 0 1 1 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 1 The graphs of matrices A 1, A, A 3 and A 4 are the H tree, the campstool, the long Y tree, and the 3-sun, respectively. Also, q(a i ) = G for i = 1,, 3, 4. It is straightforward to verify that each of the matrices A 1, A, A 3 and A 4 has the SSP (see, for example, [6]).. 648 649 650 651 65 653 654 Corollary 4.8. If a graph G contains a subgraph isomorphic to the H tree, the campstool, the long Y tree, or the 3-sun then q(g) q S (G) G. Proof. This follows from Corollary 4.6 and Proposition 4.7. The parameters M(G) and M + (G) can be used to construct lower bounds on q(g). For a n graph G of order n, clearly q(g). The next result improves on this in many cases, M(G) in particular for G = K 1,3.