Four new upper bounds for the stability number of a graph

Miklós Ujvári
H-2600 Vác, Szent János utca 1., HUNGARY

Abstract. In 1979, L. Lovász defined the theta number, a spectral/semidefinite upper bound on the stability number of a graph, which has several remarkable properties (for example, it is exact for perfect graphs). A variant, the inverse theta number, defined recently by the author in a previous work, also constitutes an upper bound on the stability number. In the paper we will describe counterparts of theorems due to Wilf and Hoffman, four spectral upper bounds on the stability number, which differ from both the theta and the inverse theta numbers.

Keywords: stability number, spectral bound

1 Introduction

The earliest spectral bounds (upper, resp. lower bounds for the chromatic number of a graph) were derived in the late 1960s by H.S. Wilf and A.J. Hoffman (see e.g. Exercises 11.20-21 in [4]). In 1979, L. Lovász applied the method of variables to Wilf's and Hoffman's bounds, obtaining the theta number, which is sandwiched between the stability number of the graph and the chromatic number of the complementary graph, and, as the optimal value of a semidefinite program, is easily computable (see [5], [2]). In 1986 H.S. Wilf derived spectral lower bounds on the stability number (see [13]). In this paper we will describe counterparts of Hoffman's and Wilf's bounds, four spectral upper bounds on the stability number.

We begin the paper by stating the main results. First, we fix some notation. Let $n \in \mathbb{N}$, and let $G = (V(G), E(G))$ be an undirected graph, with vertex set $V(G) = \{1, \ldots, n\}$, and with edge set $E(G) \subseteq \{\{i, j\} : i \ne j\}$. Let $A(G)$ be the 0-1 adjacency matrix of the graph $G$, that is let

$$A(G) := (a_{ij}) \in \{0, 1\}^{n \times n}, \quad \text{where } a_{ij} := \begin{cases} 0, & \text{if } \{i, j\} \notin E(G), \\ 1, & \text{if } \{i, j\} \in E(G). \end{cases}$$

The set of $n$ by $n$ real symmetric matrices $A = A^T \in \mathbb{R}^{n \times n}$ satisfying $|A| \le A(G)$ will be denoted by $\mathcal{A}$. (Here $T$ stands for transpose, and $|\cdot|$, $\le$ are meant elementwise.) The complementary graph $\overline{G}$ is the graph with adjacency matrix $A(\overline{G}) := J - I - A(G)$, where $I$ is the identity matrix, and $J$ denotes the matrix with all elements equal to one. The disjoint union of the graphs $G_1$ and $G_2$ is the graph $G_1 + G_2$ with adjacency matrix

$$A(G_1 + G_2) := \begin{pmatrix} A(G_1) & 0 \\ 0 & A(G_2) \end{pmatrix}.$$

We will use the notation $K_n$ for the clique graph, and $K_{s_1,\ldots,s_k}$ for the complete multipartite graph $\overline{K_{s_1} + \ldots + K_{s_k}}$.

Let $(\delta_1, \ldots, \delta_n)$ be the sum of the row vectors of the adjacency matrix $A(G)$. The elements of this vector are the degrees of the vertices of the graph $G$. Let $\delta_G$, $\Delta_G$, $\mu_G$ be the minimum, maximum, resp. the arithmetic mean of the degrees in the graph.

By Rayleigh's theorem (see [8]) for a symmetric matrix $M = M^T \in \mathbb{R}^{n \times n}$ the minimum and maximum eigenvalue, $\lambda_M$ resp. $\Lambda_M$, can be expressed as

$$\lambda_M = \min_{\|u\| = 1} u^T M u, \qquad \Lambda_M = \max_{\|u\| = 1} u^T M u.$$

Attainment occurs if and only if $u$ is a unit eigenvector corresponding to $\lambda_M$ and $\Lambda_M$, respectively. By the Perron-Frobenius theorem (see [7], Theorem 9.1.3) for an elementwise nonnegative symmetric matrix $M = M^T \ge 0$, we have

$$|\lambda_M| \le \Lambda_M = u^T M u \qquad (1)$$

for some $u \ge 0$, $u^T u = 1$.

The minimum and maximum eigenvalue of the adjacency matrix $A(G)$ will be denoted by $\lambda_G$ resp. $\Lambda_G$. It is a consequence of the Rayleigh and Perron-Frobenius theorems that for $A \in \mathcal{A}$,

$$\Lambda_A \le \Lambda_{|A|} \le \Lambda_G. \qquad (2)$$

Also, $\lambda_A$ (resp. $\Lambda_A$) as a function of the symmetric matrix $A$ is concave (resp. convex); in particular the function $\lambda_A + \Lambda_A$ is continuous, and attains its minimum and maximum on the compact convex set $\mathcal{A}$.

The set of the $n$ by $n$ real symmetric positive semidefinite matrices will be denoted by $S^n_+$, that is

$$S^n_+ := \{ M \in \mathbb{R}^{n \times n} : M = M^T,\ u^T M u \ge 0\ (u \in \mathbb{R}^n) \}.$$
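A minimal numerical sketch of inequality (2), assuming Python with numpy; the 5-cycle and the random scaling of its edges are arbitrary choices made for this illustration only.

```python
# Illustration of inequality (2): for a symmetric A with |A| <= A(G) elementwise,
# Lambda_A <= Lambda_|A| <= Lambda_G.
import numpy as np

rng = np.random.default_rng(0)

n = 5
A_G = np.zeros((n, n))
for i in range(n):                       # adjacency matrix of the cycle C_5
    A_G[i, (i + 1) % n] = A_G[(i + 1) % n, i] = 1.0

S = rng.uniform(-1.0, 1.0, size=(n, n))
S = (S + S.T) / 2
A = S * A_G                              # zero outside the edge positions, entries in [-1, 1]

lam_max = lambda M: np.linalg.eigvalsh(M)[-1]
print(lam_max(A), lam_max(np.abs(A)), lam_max(A_G))   # a nondecreasing triple
```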

It is well-known (see [8]) that the following statements are equivalent for a symmetric matrix $M = (m_{ij}) \in \mathbb{R}^{n \times n}$: a) $M \in S^n_+$; b) $\lambda_M \ge 0$; c) $M$ is a Gram matrix, that is $m_{ij} = v_i^T v_j$ $(i, j = 1, \ldots, n)$ for some vectors $v_1, \ldots, v_n$. Furthermore, by Lemma 2.1 in [10], the set $S^n_+$ can be described as

$$S^n_+ = \left\{ \left( a_i^T a_j - (a_i a_j^T)_{11} \right)_{i,j=1}^n : m \in \mathbb{N},\ a_i \in \mathbb{R}^m\ (1 \le i \le n),\ a_i^T a_i = 1\ (1 \le i \le n) \right\}. \qquad (3)$$

For example, diagonally dominant matrices (that is $M = (m_{ij}) \in \mathbb{R}^{n \times n}$ with $m_{ii} \ge \sum_{j \ne i} |m_{ij}|$ for $i = 1, \ldots, n$) are positive semidefinite by Gerschgorin's disc theorem, see [8]. Hence,

$$F_{s_1,\ldots,s_k} := k(J - A(K_{s_1,\ldots,s_k})) - J \in S^n_+.$$

(Note that if $F_{1,\ldots,1}$ is a Gram matrix then so is $F_{s_1,\ldots,s_k}$.)

The stability number, $\alpha(G)$, is the maximum cardinality of the (so-called stable) sets $S \subseteq V(G)$ such that $\{i, j\} \subseteq S$ implies $\{i, j\} \notin E(G)$. The chromatic number, $\chi(G)$, is the minimum number of stable sets covering the vertex set $V(G)$.

Let us define an orthonormal representation of the graph $G$ (for short, o.r. of $G$) as a system of vectors $a_1, \ldots, a_n \in \mathbb{R}^m$ for some $m \in \mathbb{N}$, satisfying

$$a_i^T a_i = 1\ (i = 1, \ldots, n), \qquad a_i^T a_j = 0\ (\{i, j\} \in E(\overline{G})).$$

In the seminal paper [5] L. Lovász proved the following result, now popularly called the sandwich theorem, see [3]:

$$\alpha(G) \le \vartheta(G) \le \chi(\overline{G}), \qquad (4)$$

where $\vartheta(G)$ is the Lovász number of the graph $G$, defined as

$$\vartheta(G) := \inf \left\{ \max_{1 \le i \le n} \frac{1}{(a_i a_i^T)_{11}} : a_1, \ldots, a_n \text{ o.r. of } G \right\}.$$

The Lovász number has several equivalent descriptions, see [5]. For example, by (3) and standard semidefinite duality theory (see e.g. [9]), it is the common optimal value of the Slater-regular primal-dual semidefinite programs

$$(TP) \quad \min \lambda, \quad x_{ii} = \lambda - 1\ (i \in V(G)), \quad x_{ij} = -1\ (\{i, j\} \in E(\overline{G})), \quad X = (x_{ij}) \in S^n_+,\ \lambda \in \mathbb{R}$$

and

$$(TD) \quad \max \operatorname{tr}(JY), \quad \operatorname{tr}(Y) = 1, \quad y_{ij} = 0\ (\{i, j\} \in E(G)), \quad Y = (y_{ij}) \in S^n_+.$$

(Here $\operatorname{tr}$ stands for trace.)
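The program $(TD)$ can be handed directly to an off-the-shelf SDP modeling tool. The following sketch assumes the cvxpy package (with its bundled default SDP solver) and computes $\vartheta(C_5)$, whose value should come out close to $\sqrt{5} \approx 2.236$.

```python
# Solving program (TD) for the 5-cycle with cvxpy.
import cvxpy as cp

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]

Y = cp.Variable((n, n), symmetric=True)
constraints = [Y >> 0, cp.trace(Y) == 1]                     # Y in S^n_+, tr(Y) = 1
constraints += [Y[i, j] == 0 for (i, j) in edges]            # y_ij = 0 on the edges of G
problem = cp.Problem(cp.Maximize(cp.sum(Y)), constraints)    # tr(JY) = sum of all entries
problem.solve()
print(problem.value)
```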

Note that for appropriately chosen $s_1, \ldots, s_k$ and $s$, the matrices

$$X = F_{s_1,\ldots,s_k}, \qquad Y = \begin{pmatrix} J/s & 0 \\ 0 & 0 \end{pmatrix}$$

are feasible solutions of programs $(TP)$ and $(TD)$, respectively, with values $\lambda = k = \chi(\overline{G})$ and $\operatorname{tr}(JY) = s = \alpha(G)$, resulting in a nice proof of the sandwich theorem (see [2]).

Analogously, the inverse theta number, $\iota(G)$, satisfies the inverse sandwich inequality,

$$(\alpha(G))^2 + n - \alpha(G) \le \iota(G) \le n \vartheta(G), \qquad (5)$$

see [12]. Here the inverse theta number, defined as

$$\iota(G) := \inf \left\{ \sum_{i=1}^n \frac{1}{(a_i a_i^T)_{11}} : a_1, \ldots, a_n \text{ o.r. of } G \right\},$$

equals the common attained optimal value of the primal-dual semidefinite programs

$$(TP') \quad \inf \operatorname{tr}(W) + n, \quad w_{ij} = -1\ (\{i, j\} \in E(\overline{G})), \quad W = (w_{ij}) \in S^n_+,$$

$$(TD') \quad \sup \operatorname{tr}(JM), \quad m_{ii} = 1\ (i = 1, \ldots, n), \quad m_{ij} = 0\ (\{i, j\} \in E(G)), \quad M = (m_{ij}) \in S^n_+.$$

Both bounds can be obtained via convex spectral optimization: obviously (compare with $(TP)$),

$$\vartheta(G) = \min \left\{ \Lambda_Z : z_{ii} = 1\ (i \in V(G)),\ z_{ij} = 1\ (\{i, j\} \in E(\overline{G})),\ Z = (z_{ij}) = (z_{ji}) \in \mathbb{R}^{n \times n} \right\},$$

and, similarly (compare with $(TP')$),

$$\iota(G) = \min \left\{ n - \operatorname{tr} U + n \Lambda_U : u_{ij} = 1\ (\{i, j\} \in E(\overline{G})),\ U = (u_{ij}) \in S^n_+ \right\}.$$

In this paper we will consider spectral upper bounds on the stability number, different from both the Lovász number and the inverse theta number. Let us define

$$\iota_1(G) := n - \frac{\Lambda_G}{\Sigma - \Lambda_G}, \qquad \iota_2(G) := n - 1 - \Lambda_G - \lambda_G,$$

$$\iota_3(G) := \frac{1}{2} \left( n - \Lambda_G + \sqrt{(n - \Lambda_G)^2 + 4 \Lambda_G (n - 1 - \Lambda_G)} \right), \qquad \iota_4(G) := n - 1 + \frac{\Sigma}{2} - \Lambda_G,$$

where $\Sigma := (u_1 + \ldots + u_n)^2$ with

$$u = (u_i) \in \mathbb{R}^n, \quad u \ge 0, \quad u^T u = 1, \quad u^T A(G) u = \Lambda_G.$$

(Note that $\Lambda_G + 1 \le \Sigma \le n$ by the Cauchy-Schwarz inequality, hence each bound is at least $n - \Lambda_G$.) We will prove in Sections 2, 3, 4, and 4, respectively, that the inequalities

$$\alpha(G) \le \iota_1(G),\ \iota_2(G),\ \iota_3(G),\ \iota_4(G) \qquad (6)$$

hold. These upper bounds are efficiently computable via methods in [8]. (For lower bounds on $\alpha(G)$, see e.g. [11].)

Several open problems arise: Can the four bounds give better results than other upper bounds (see e.g. [11]), such as $\Lambda_{\overline{G}} + 1$ (Wilf's upper bound for the chromatic number of the complementary graph), or $\vartheta(G)$, $n + 1 - \vartheta(\overline{G})$? How do they relate to $\chi(\overline{G})$, $n + 1 - \chi(G)$? How do they relate to each other? These questions (partially answered in the paper) need further investigation.

2 The counterpart of Wilf's bound

In this section we will describe the counterpart of Wilf's lower bound on the stability number. The spectral upper bound $\iota_1(G)$ is derived via estimating from above the maximum eigenvalue of the adjacency matrix.

In [13] Wilf proved, as a consequence of a theorem of Motzkin-Straus, the relation

$$\frac{n}{n - \Lambda_G} \le \alpha(\overline{G}). \qquad (7)$$

The next proposition describes a weaker form of (7).

PROPOSITION 2.1. For any graph $G$, $\chi(G) \ge n / (n - \Lambda_G)$.

Proof. Let $S_1, \ldots, S_k$ be stable sets in $G$ with cardinality $s_1, \ldots, s_k$, respectively, so that $s_1 + \ldots + s_k = n$. Then, $G$ is a subgraph of $H = K_{s_1,\ldots,s_k}$. In other words,

$$0 \le A(G) \le A(H) = J - \frac{F_{s_1,\ldots,s_k} + J}{k},$$

implying, by $F_{s_1,\ldots,s_k} \in S^n_+$, that

$$\Lambda_G \le \Lambda_H \le \frac{(k-1)n}{k}.$$

As here $k = \chi(G)$ can be chosen, the statement follows.
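The four bounds in (6) are straightforward to evaluate from the spectrum of $A(G)$. A short numpy sketch for the 5-cycle (where $\alpha(C_5) = 2$; the graph is an arbitrary illustrative choice) follows.

```python
# Evaluating iota_1, ..., iota_4 for the 5-cycle C_5.
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

w, V = np.linalg.eigh(A)
Lam, lam = w[-1], w[0]                  # Lambda_G and lambda_G
u = np.abs(V[:, -1])                    # a nonnegative unit eigenvector for Lambda_G
Sigma = u.sum() ** 2

iota1 = n - Lam / (Sigma - Lam)
iota2 = n - 1 - Lam - lam
iota3 = 0.5 * (n - Lam + np.sqrt((n - Lam) ** 2 + 4 * Lam * (n - 1 - Lam)))
iota4 = n - 1 + Sigma / 2 - Lam
print(iota1, iota2, iota3, iota4)       # each value is at least alpha(C_5) = 2
```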

The counterpart of Proposition 2.1 can be proved similarly, and leads us to the bound $\iota_1(G)$.

PROPOSITION 2.2. For any graph $G$, the inequality

$$\alpha(G) \le n - \frac{\Lambda_G}{n - \Lambda_G} =: \check\iota_1(G)$$

holds.

Proof. Let $\{1, \ldots, s\}$ be a stable set in $G$. Then, $G$ is a subgraph of the graph $H = K_{s,1,\ldots,1}$. In other words,

$$0 \le A(G) \le A(H) = J - \frac{F_{s,1,\ldots,1} + J}{n - s + 1}.$$

Thus, for the maximal eigenvalues the inequalities

$$\Lambda_G \le \Lambda_H \le \frac{(n - s) n}{n - s + 1}$$

hold, by $F_{s,1,\ldots,1} \in S^n_+$. The statement follows with $s = \alpha(G)$.

In [13] Wilf used the method of variables to strengthen the bound in (7): he proved the relation

$$\frac{\Sigma}{\Sigma - \Lambda_G} \le \alpha(\overline{G}). \qquad (8)$$

Analogously, the proof of Proposition 2.2 can easily be adapted to imply

THEOREM 2.1. For any graph $G$, $\alpha(G) \le \iota_1(G)$ holds.

Note that (8) implies the stronger relation

$$\alpha(G) \le \chi(\overline{G}) \le n + 1 - \alpha(\overline{G}) \le \iota_1(G) \qquad (9)$$

also, but the proof of Theorem 2.1 does not use the Motzkin-Straus theorem.

3 The counterpart of Hoffman's bound

In this section we describe the counterpart of a spectral lower bound for the chromatic number due to Hoffman. The proof relies on estimating from above the minimum eigenvalue of the adjacency matrix.

Hoffman's theorem (see e.g. [4]) states that for any graph $G$,

$$\chi(G) \ge 1 + \frac{\Lambda_G}{-\lambda_G}. \qquad (10)$$

The proof remains valid for an arbitrary matrix $A \in \mathcal{A}$ instead of $A(G)$, and the strongest bound obtained this way (by the so-called method of variables) is the Lovász number $\vartheta(\overline{G})$ (see [5], [6]). The proof of the counterpart closely follows the proof of (10).
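As a quick numerical illustration of (10), the following sketch (assuming numpy) evaluates Hoffman's bound for two graphs with known chromatic number, the 5-cycle and the Petersen graph, both of which are 3-chromatic.

```python
# Hoffman's bound (10) evaluated for chi(C_5) = 3 and chi(Petersen) = 3.
import itertools
import numpy as np

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

def petersen():
    verts = list(itertools.combinations(range(5), 2))   # Kneser graph K(5,2) = Petersen
    return np.array([[1.0 if not set(u) & set(v) else 0.0 for v in verts]
                     for u in verts])

for name, A, chi in [("C_5", cycle(5), 3), ("Petersen", petersen(), 3)]:
    w = np.linalg.eigvalsh(A)
    print(name, 1 + w[-1] / (-w[0]), "<=", chi)
```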

THEOREM 3.1. For any graph $G$, $\alpha(G) \le \iota_2(G)$.

Proof. Let $A := A(G)$, and suppose that $\{1, \ldots, s\}$ is a stable set in $G$ for some $1 \le s \le n - 1$. Then, the matrix $A$ can be partitioned as

$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},$$

where $A_{11} = 0 \in \mathbb{R}^{s \times s}$, $A_{12} = A_{21}^T$, $A_{22} = A_{22}^T \in \mathbb{R}^{(n-s) \times (n-s)}$.

Let $x \in \mathbb{R}^n$ be an eigenvector corresponding to the eigenvalue $\Lambda_A$. Let $x = (x_1^T, x_2^T)^T$, where $x_1 \in \mathbb{R}^s$, $x_2 \in \mathbb{R}^{n-s}$. Let us denote by $y_1 \in \mathbb{R}^s$ the vector with first element $\|x_1\|$, otherwise zero, and let us define similarly the vector $y_2 \in \mathbb{R}^{n-s}$, too. Let $y \in \mathbb{R}^n$ be the vector obtained by stacking the vectors $y_1$, $y_2$ on top of each other.

Let us choose orthogonal matrices $B_1 \in \mathbb{R}^{s \times s}$, $B_2 \in \mathbb{R}^{(n-s) \times (n-s)}$ such that $B_1 y_1 = x_1$ and $B_2 y_2 = x_2$ hold. Let $B$ be the block-diagonal matrix formed by the matrices $B_1$, $B_2$. Then, $B \in \mathbb{R}^{n \times n}$ is an orthogonal matrix, $By = x$, and

$$B^{-1} A B y = B^{-1} A x = \Lambda_A B^{-1} x = \Lambda_A y.$$

Hence, the vector $y$ is an eigenvector (with eigenvalue $\Lambda_A$) of the matrix $B^{-1} A B = (B_i^{-1} A_{ij} B_j)_{i,j=1,2}$. Let us consider the submatrix $C = (c_{ij}) \in \mathbb{R}^{2 \times 2}$, $C = ((B_i^{-1} A_{ij} B_j)_{11})_{i,j=1,2}$. As $B^{-1} A B y = \Lambda_A y$, so $Cz = \Lambda_A z$ with the 2-vector $z := (\|x_1\|, \|x_2\|)^T$, implying $\Lambda_A \le \Lambda_C$.

By $A_{11} = 0$, we have $c_{11} = 0$, thus the trace of the matrix $C$ equals $\Lambda_C + \lambda_C = c_{22}$. Furthermore, as $A_{22} \le J - I$, so the matrix $(n - s - 1)I - A_{22}$ is diagonally dominant, necessarily positive semidefinite. Hence, the inequalities

$$c_{22} \le \Lambda_{B_2^{-1} A_{22} B_2} = \Lambda_{A_{22}} \le n - s - 1$$

hold. Cauchy's theorem on interlacing eigenvalues (see [8]) gives $\lambda_A \le \lambda_C$ and $\Lambda_C \le \Lambda_A$. Summarizing, we have

$$\lambda_A \le \lambda_C = c_{22} - \Lambda_C = c_{22} - \Lambda_A \le n - s - 1 - \Lambda_A,$$

where $s$ can be chosen to be the stability number $\alpha(G)$. This completes the proof of the theorem.
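The inequality of Theorem 3.1 can also be tested by brute force on small graphs. The sketch below (assuming numpy; the random-graph parameters are arbitrary) compares $\alpha(G)$, computed by enumeration, with $\iota_2(G)$.

```python
# Brute-force comparison of alpha(G) and iota_2(G) on small random graphs;
# edgeless samples are skipped, since the proof above assumes a stable set of
# size at most n - 1.
import itertools
import numpy as np

rng = np.random.default_rng(1)

def random_graph(n, p):
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return A + A.T

def alpha(A):
    n = len(A)
    for size in range(n, 0, -1):                     # try the largest stable sets first
        for S in itertools.combinations(range(n), size):
            if A[np.ix_(S, S)].sum() == 0:
                return size
    return 0

for _ in range(200):
    A = random_graph(8, 0.4)
    if A.sum() == 0:
        continue
    w = np.linalg.eigvalsh(A)
    iota2 = len(A) - 1 - w[-1] - w[0]
    assert alpha(A) <= iota2 + 1e-9
print("alpha <= iota_2 held in all sampled graphs")
```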

We already have mentioned that $\iota_2(G) \ge n - \Lambda_G$, but more can be claimed:

PROPOSITION 3.1. For any graph $G$,

$$\iota_2(G) \ge n - \Lambda_G + \frac{\Lambda_G}{\vartheta(\overline{G}) - 1} - 1 =: \hat\iota_2(G)$$

holds.

Proof. By the remark preceding Theorem 3.1, we have

$$\frac{\Lambda_A}{-\lambda_A} \le \vartheta(\overline{G}) - 1,$$

where $A := A(G)$. Hence,

$$\iota_2(G) \ge n - 1 - \Lambda_A \left( 1 - \frac{1}{\vartheta(\overline{G}) - 1} \right),$$

and the statement follows from (2).

Both proofs can be carried through with $A \in \mathcal{A}$ instead of $A = A(G)$, which means that $\iota_2(G, A) \ge \alpha(G), \hat\iota_2(G)$ for $A \in \mathcal{A}$, where

$$\iota_2(G, A) := n - 1 - \Lambda_A - \lambda_A.$$

By compactness of the set $\mathcal{A}$ there exists an optimal matrix $A^* \in \mathcal{A}$ such that

$$\iota_2(G, A^*) = \min \{ \iota_2(G, A) : A \in \mathcal{A} \}.$$

As concluding remarks in this section, we will show examples when $A^* \ne A(G)$, and when $A^* = A(G)$ meets the requirements.

First, note that there exists a matrix $B \in \mathcal{A}$ such that $\iota_2(G, B) \le n + 1 - \vartheta(\overline{G})$. In fact, let us choose a matrix $B = (b_{ij}) \in \mathcal{A}$ satisfying $\vartheta(\overline{G}) = 1 + \Lambda_B / (-\lambda_B)$ (by the remark preceding Theorem 3.1 there exists such an optimal matrix $B$). We can assume that for some indices $i \ne j$, $b_{ij} = 1$. Then, from Rayleigh's theorem, $\lambda_B \le -1$ follows. Moreover, we have

$$\Lambda_B + \lambda_B = -\lambda_B (\vartheta(\overline{G}) - 2) \ge 0.$$

Summarizing, we obtain

$$n + 1 - \vartheta(\overline{G}) = n - \frac{\Lambda_B}{-\lambda_B} \ge n - 1 - \Lambda_B - \lambda_B = \iota_2(G, B).$$

On the other hand, for the perfect graph $G_0 := K_3 + K_{2,2}$ we have

$$n + 1 - \vartheta(\overline{G_0}) = 5 < 6 = n - 1 - \Lambda_{G_0} - \lambda_{G_0};$$

we can see that in this case $A^* \ne A(G_0)$.

Finally, note that $\hat\iota_2(G) = n + 1 - \vartheta(\overline{G})$ if and only if $\vartheta(\overline{G}) = 2$ (e.g. for a bipartite graph) or $\vartheta(\overline{G}) = \Lambda_G + 1$ (e.g. when $G = K_{s_1} + \ldots + K_{s_k}$). Hence, for bipartite graphs and for disjoint unions of cliques we have $A^* = A(G)$ with $\iota_2(G) = n - 1$ and $\iota_2(G) = n - \Lambda_G$, respectively.

4 Variants

In this section we describe two further spectral upper bounds on the stability number, derived via similar methods, and hence considered as variants of $\iota_1(G)$. In order to derive the bound $\iota_3(G)$ we will use the following technical lemma.

LEMMA 4.1. Let $1 \le s \le n - 1$, and let $M := I + A(\overline{K_s + \overline{K}_{n-s}})$. Then,

$$\Lambda_M = \frac{1}{2} \left( n - s + 1 + \sqrt{(n - s + 1)^2 + 4(s - 1)(n - s)} \right)$$

is the maximum eigenvalue of the matrix $M$.

Proof. The eigenvalue $\Lambda_M$ can be rewritten as

$$\Lambda_M = \min \{ \lambda \in \mathbb{R} : \lambda I - M \in S^n_+ \}.$$

For $\lambda \le 1$, $\lambda I - M \notin S^n_+$, as then the diagonal elements of $\lambda I - M$ are nonpositive. For $\lambda > 1$, the matrix $\lambda I - M \in \mathbb{R}^{n \times n}$ is positive semidefinite if and only if the Schur complement of its positive definite principal submatrix $(\lambda - 1)I \in \mathbb{R}^{s \times s}$ is positive semidefinite (see [7]). In other words, for $\lambda > 1$, $\lambda I - M \in S^n_+$ if and only if

$$\lambda I - J - (-J)((\lambda - 1)I)^{-1}(-J) \in S^{n-s}_+. \qquad (11)$$

Here (11) can easily be seen to be equivalent to the inequality

$$\lambda - \left( 1 + \frac{s}{\lambda - 1} \right)(n - s) \ge 0; \qquad (12)$$

$\Lambda_M$ will be the least positive solution of (12), as stated.
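The closed form of Lemma 4.1 is easy to confirm numerically. The following sketch (assuming numpy) compares it with the computed maximum eigenvalue of $M$ for small $n$ and all $1 \le s \le n - 1$.

```python
# Numerical confirmation of the closed form in Lemma 4.1: M below equals
# I + A(H), where H is the graph in which {1,...,s} is stable and every other
# pair of vertices is adjacent.
import numpy as np

def M_matrix(n, s):
    M = np.ones((n, n))
    M[:s, :s] = np.eye(s)               # identity block on the stable part
    return M

for n in range(3, 10):
    for s in range(1, n):
        lam_numeric = np.linalg.eigvalsh(M_matrix(n, s))[-1]
        lam_formula = 0.5 * (n - s + 1 +
                             np.sqrt((n - s + 1) ** 2 + 4 * (s - 1) * (n - s)))
        assert abs(lam_numeric - lam_formula) < 1e-9
print("Lemma 4.1 formula confirmed for all n <= 9")
```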

If the set $\{1, \ldots, s\}$ is a stable set in $G$, then $0 \le A(G) \le M - I$, where $M$ is the same matrix as in Lemma 4.1. Hence, for the maximum eigenvalues $\Lambda_G \le \Lambda_M - 1$ holds, and we can derive easily, from Lemma 4.1, the following

THEOREM 4.1. For any graph $G$, $\alpha(G) \le \iota_3(G)$ holds.

The bound $\iota_3(G)$ is exact e.g. for complete bipartite graphs $G = K_{1,s}$. We remark that the proof of Theorem 4.1 can be carried through for any matrix $A \in \mathcal{A}$ instead of $A(G)$, but this way we obtain weaker bounds than $\iota_3(G)$. In fact, $\iota_3(G)$, as a function of $\Lambda_A$, is strictly monotone decreasing on the interval $0 \le \Lambda_A \le n - 1$ (the first derivative of the function is negative on this interval). This means that we get the strongest bound when $\Lambda_A$ is maximal, that is when $A = A(G)$.

The next proposition, too, is immediate from the fact that for any graph $G$, $0 \le \mu_G \le \Lambda_G \le n - 1$, see Exercise 11.14 in [4].

PROPOSITION 4.1. With

$$\iota_3(\Lambda) := \frac{1}{2} \left( n - \Lambda + \sqrt{(n - \Lambda)^2 + 4 \Lambda (n - 1 - \Lambda)} \right)$$

for $\Lambda \in \mathbb{R}$, we have

$$n - \Lambda_G \le \iota_3(\Lambda_G) \le \iota_3(\mu_G) \le n$$

for any graph $G$.

We remark also that

$$\iota_3(G) \le \check\iota_1(G) \qquad (13)$$

(as it can easily be verified), but it is an open problem whether $\iota_3(G) \le \iota_1(G)$ holds or not, generally.

Now, we turn to the bound $\iota_4(G)$. With minor modification of the proof of Proposition 2.2, we obtain a close variant, $\check\iota_4(G)$.

PROPOSITION 4.2. For any graph $G$, the inequality

$$\alpha(G) \le \frac{3n}{2} - 1 - \Lambda_G =: \check\iota_4(G)$$

holds.

Proof. Let $\{1, \ldots, s\}$ be a stable set in $G$. Then, $G$ is a subgraph of the graph $H = \overline{K_s + \overline{K}_{n-s}}$. In other words,

$$0 \le A(G) \le A(H) = A(K_{s,n-s}) + A(\overline{K}_s + K_{n-s}).$$

Thus, for the maximal eigenvalues the inequalities

$$\Lambda_G \le \Lambda_H \le \Lambda_{K_{s,n-s}} + \Lambda_{\overline{K}_s + K_{n-s}}$$

hold. Here $\Lambda_{\overline{K}_s + K_{n-s}} = n - s - 1$, and, by

$$\frac{F_{s,n-s}}{2} = \frac{J}{2} - A(K_{s,n-s}) \in S^n_+,$$

we have $\Lambda_{K_{s,n-s}} \le n/2$. Hence,

$$\Lambda_G \le \frac{n}{2} + n - s - 1,$$

from which with $s = \alpha(G)$ the statement follows.

As in the case of Proposition 2.2 and Theorem 2.1, the proof of Proposition 4.2 can easily be adapted to imply

THEOREM 4.2. For any graph $G$, $\alpha(G) \le \iota_4(G)$ holds.

The following proposition is the analogue of Propositions 3.1 and 4.1.

PROPOSITION 4.3. For any graph $G$, $\iota_4(G) \ge n/2$.

Proof. Let $A := A(G)$. Then, the matrix

$$B := \left( \frac{n}{2} - 1 \right) I + \frac{J}{2} - A$$

is diagonally dominant, implying $B \in S^n_+$. Consequently, we have

$$n - 1 + \frac{u^T J u}{2} - \Lambda_A = n - 1 + u^T \left( \frac{J}{2} - A \right) u \ge n - 1 - \left( \frac{n}{2} - 1 \right) = \frac{n}{2}$$

for all nonnegative unit eigenvectors $u$ corresponding to $\Lambda_G$, which was to be proved.

Finally, we mention an open problem. The minimum eigenvalue $\lambda_G$ of a graph $G$ (with at least one edge) is negative; in particular the corresponding unit eigenvector $v \in \mathbb{R}^n$ has both positive and negative coordinates. Writing the eigenvector $v$ as the difference of its positive and negative part (i.e. $v = v_+ - v_-$, where $v_+$, $v_-$ are nonnegative, orthogonal $n$-vectors), we have

$$0 > \lambda_G = v^T A(G) v \ge -2 v_+^T A(G) v_- \ge -2 v_+^T J v_-,$$

and it is not hard to conclude that $\lambda_G \ge -n/2$. This result, with a different proof, is due to Constantine, see [1], and implies in particular the relation

$$\iota_2(G) \le \check\iota_4(G). \qquad (14)$$

It would be interesting to see a similar proof of the conjecture $\iota_2(G) \le \iota_4(G)$ via showing the inequality $\lambda_G \ge -(u^T J u)/2$ for all nonnegative unit eigenvectors $u$ corresponding to $\Lambda_G$. (The relation $\iota_2(G) \le \iota_4(G)$ can easily be verified e.g. for bipartite graphs or for disjoint unions of cliques.)
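The closing conjecture invites numerical experimentation. The sketch below (assuming numpy; the random-graph model is an arbitrary choice) asserts Constantine's bound $\lambda_G \ge -n/2$, which is a theorem, and only records the margin in the conjectured inequality $\lambda_G \ge -(u^T J u)/2$ without asserting it.

```python
# An experiment around the closing open problem.
import numpy as np

rng = np.random.default_rng(2)

def random_graph(n, p):
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return A + A.T

margins = []
for _ in range(500):
    n = int(rng.integers(3, 12))
    A = random_graph(n, 0.5)
    w, V = np.linalg.eigh(A)
    lam, u = w[0], np.abs(V[:, -1])      # lambda_G and a nonnegative top eigenvector
    assert lam >= -n / 2 - 1e-9          # Constantine's bound
    margins.append(lam + (u.sum() ** 2) / 2)   # conjectured to be nonnegative
print("smallest observed margin for the conjecture:", min(margins))
```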

5 Conclusion

In this paper we studied spectral upper bounds on the stability number of a graph, counterparts of classical bounds due to Wilf and Hoffman. Several questions arose and were partially answered, for example concerning the relation of the spectral bounds introduced in the paper to each other and to the chromatic number of the complementary graph.

Acknowledgements. I thank Mihály Hujter for the $K_3 + K_{2,2}$ example in Section 3.

References

1. G. Constantine, Lower bounds on the spectra of symmetric matrices with nonnegative entries. Linear Algebra and its Applications, 65:171-178, 1985.

2. E. de Klerk, Interior Point Methods for Semidefinite Programming. PhD Thesis, Technische Universiteit Delft, Delft, 1997.

3. D. Knuth, The sandwich theorem. Electronic Journal of Combinatorics, 1:1-48, 1994.

4. L. Lovász, Combinatorial Problems and Exercises. Akadémiai Kiadó, Budapest, 1979.

5. L. Lovász, On the Shannon capacity of a graph. IEEE Transactions on Information Theory, IT-25(1):1-7, 1979.

6. L. Lovász, Semidefinite programs and combinatorial optimization. In: B.A. Reed and C.L. Sales, eds., Recent Advances in Algorithms and Combinatorics, CMS Books in Mathematics, Springer, 137-194, 2003.

7. P. Rózsa, Lineáris Algebra és Alkalmazásai. Tankönyvkiadó, Budapest, 1991.

8. G. Strang, Linear Algebra and its Applications. Academic Press, New York, 1980.

9. M. Ujvári, A note on the graph-bisection problem. Pure Mathematics and Applications, 12(1):119-130, 2002.

10. M. Ujvári, New descriptions of the Lovász number, and the weak sandwich theorem. Acta Cybernetica, 20(4):499-513, 2012.

11. M. Ujvári, Strengthening weak sandwich theorems in the presence of inconnectivity. Submitted to Acta Cybernetica, 2013.

12. M. Ujvári, Applications of the inverse theta number in stable set problems. Accepted for publication at Acta Cybernetica, 2014.

13. H.S. Wilf, Spectral bounds for the clique and independence numbers of graphs. Journal of Combinatorial Theory, Series B, 40(1):113-117, 1986.