Sandwich Theorem and Calculation of the Theta Function for Several Graphs

Brigham Young University, BYU ScholarsArchive, All Theses and Dissertations, 2003-03-7.

BYU ScholarsArchive Citation: Riddle, Marcia Ling, "Sandwich Theorem and Calculation of the Theta Function for Several Graphs" (2003). All Theses and Dissertations. 57. http://scholarsarchive.byu.edu/etd/57

This thesis is brought to you for free and open access by BYU ScholarsArchive. It has been accepted for inclusion in All Theses and Dissertations by an authorized administrator of BYU ScholarsArchive. For more information, please contact scholarsarchive@byu.edu.

SANDWICH THEOREM AND CALCULATION OF THE THETA FUNCTION FOR SEVERAL GRAPHS

by Marcia Riddle

A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of Master of Science

Department of Mathematics
Brigham Young University
April 2003

BRIGHAM YOUNG UNIVERSITY
GRADUATE COMMITTEE APPROVAL

of a thesis submitted by Marcia Riddle

This thesis has been read by each member of the following graduate committee and by majority vote has been found to be satisfactory.

Date    Wayne W. Barrett, Chair
Date    Rodney W. Forcade
Date    R. Vencil Skarda

BRIGHAM YOUNG UNIVERSITY

As chair of the candidate's graduate committee, I have read the thesis of Marcia Riddle in its final form and have found that (1) its format, citations, and bibliographical style are consistent and acceptable and fulfill university and department style requirements; (2) its illustrative materials including figures, tables, and charts are in place; and (3) the final manuscript is satisfactory to the graduate committee and is ready for submission to the university library.

Date    Wayne W. Barrett, Chair, Graduate Committee

Accepted for the Department    Tyler J. Jarvis, Graduate Coordinator

Accepted for the College    G. Rex Bryce, Associate Dean, College of Physical and Mathematical Sciences

ABSTRACT

SANDWICH THEOREM AND CALCULATION OF THE THETA FUNCTION FOR SEVERAL GRAPHS

Marcia Riddle
Department of Mathematics
Master of Science

This paper includes some basic ideas about the computation of a function ϑ(G), the theta number of a graph G, which is known as the Lovász number of G. ϑ(G^c) lies between two hard-to-compute graph numbers: ω(G), the size of a largest clique in G, and χ(G), the minimum number of colors needed to properly color the vertices of G. Lovász and Grötschel called this the Sandwich Theorem. Donald E. Knuth gives four additional definitions of ϑ, namely ϑ_1, ϑ_2, ϑ_3, ϑ_4, and proves that they are all equal.

First I am going to describe the proof of the equality of ϑ, ϑ_1 and ϑ_2, and then I will show the calculation of the ϑ function for some specific graphs: K_n, graphs related to K_n, and C_n. This will help us understand the ϑ function, an important function for graph theory. Some of the results are calculated in different ways. This will benefit students who have a basic knowledge of graph theory and want to learn more about the theta function.

ACKNOWLEDGEMENTS

The author of this thesis would like to express appreciation for the tremendous help given by Dr. Wayne Barrett in the preparation and presentation of the key ideas contained in this thesis. The author would also like to express appreciation to her graduate committee and to the Mathematics Department. She also expresses appreciation to Steven Butler, who prepared the pictures and cleaned up the formatting of the final form of the thesis.

Contents

Terminology
  STAB(G), TH(G) and QSTAB(G)
1 The Theta Function
2 Two Additional Functions ϑ_1(G) and ϑ_2(G)
  2.1 The ϑ_1(G) function
  2.2 The ϑ_2(G) function
3 ϑ_2(G) and K_n
  3.1 ϑ_2(K_n) and ϑ_2(K_n^c)
  3.2 ϑ_2(G) for complete multipartite graphs
  3.3 ϑ_2(G) for the union of two complete graphs
4 ϑ_2(G) and C_n
References

Terminology

Graph: Given a nonempty finite set V, let V^(2) be the set of all 2-element subsets of V. A graph G = (V, E) consists of two things: a nonempty finite set V, called the vertices of G, and a (possibly empty) subset E of V^(2), called the edges of G. Let n = |V| be the number of vertices of G. When more than one graph is under consideration, it may be useful to write V(G) and E(G) for the vertex and edge sets of G, respectively. We will use the notation ab instead of {a, b} to represent an edge. For example, let C_4 be the graph with 4 vertices, V = {1, 2, 3, 4} and E = {12, 13, 24, 34}, shown in Figure 1.

Figure 1: Graph of C_4

Subgraph: A subgraph of a graph G is a graph H such that V(H) ⊆ V(G) and E(H) ⊆ E(G).

Induced subgraph: An induced subgraph of G is a subgraph H such that every edge of G whose vertices are contained in V(H) belongs to E(H).

Complement G^c of G: the graph which has V(G) as its vertex set, and in which two vertices are adjacent if and only if they are not adjacent in G. If G has n vertices, then G^c can be constructed by removing from K_n all the edges of G. Note that the complement of a complete graph is an empty graph. An example of a graph and its complement is seen in Figure 2.

Positive definite (positive semidefinite) matrix: a real n × n symmetric matrix A which satisfies x^T A x > 0 for all x ∈ R^n \ {0} (x^T A x ≥ 0 for all x ∈ R^n).

Negative definite (negative semidefinite) matrix: a real n × n symmetric matrix A

is negative definite if −A is positive definite, and is negative semidefinite if −A is positive semidefinite.

Figure 2: A graph G and its complement G^c

Clique: a nonempty set of pairwise adjacent vertices. Some cliques in the graph G in Figure 3 are {1, 2}, {3, 4, 5}, and {6, 7}.

Figure 3: A graph with three large cliques

Clique number: the maximum number of vertices of a clique in a graph. It is denoted by ω(G). The clique number of G in Figure 3 is 3.

Independent set: a set of pairwise nonadjacent vertices of G. This is also called a stable set. Some independent sets of the graph G in Figure 3 are: V_1 = {1, 3, 7}, V_2 = {2, 5, 6}, V_3 = {1, 4, 6}. So, S ⊆ V(G) is an independent set if and only if no two of its vertices are adjacent, or if and only if S is a clique in G^c.

Independence number, α(G) = ω(G^c): the maximum number of vertices in an independent set of G.

Chromatic number χ(G): the minimum number of independent sets that partition V(G). A graph G is m-partite if V(G) can be partitioned into m or fewer independent sets. The independent sets in a specified partition are partite sets. The graph G in Figure 3 has chromatic number 3 and is 3-partite.

Clique cover number: the smallest number of cliques that cover the vertices of G.

It is denoted by χ̄(G). For example, χ̄(G) = 3 for G in Figure 3. We can also think of it as the minimum number of independent sets that partition V(G^c). It is clear that χ̄(G) = χ(G^c).

Perfect graph: a graph G such that each induced subgraph G′ of G satisfies ω(G′) = χ(G′). The graph G in Figure 3 is perfect.

Imperfect graph: a graph that is not perfect. The graph C_5 (shown in Figure 4) is an imperfect graph because the clique number is ω(G) = 2 but the chromatic number is χ(G) = 3. It is the only imperfect graph with n ≤ 5 vertices.

Figure 4: C_5

Dot product: The dot product of (column) vectors a and b is a · b = a^T b.

Orthogonal vectors: the vectors a, b ∈ R^n are orthogonal if a · b = 0.

Orthogonal matrix: a square matrix Q such that Q^T Q is the identity matrix I. In other words, Q is orthogonal if and only if its columns are unit vectors perpendicular to each other.

Symmetric matrix: A matrix A is symmetric if A = A^T. For any symmetric matrix A we have A = QDQ^T, where Q is orthogonal and D is a diagonal matrix.

Orthogonal labeling: an assignment of vectors a_v ∈ R^d to each vertex v of a graph G such that a_u · a_v = 0 for each pair of nonadjacent vertices u and v. In other words, whenever a_u is not perpendicular to a_v in the labeling, u and v are adjacent in G.

Convex set: A set in Euclidean space R^d is a convex set if it contains all the line segments connecting any pair of its points.

Convex hull: Given a set of vectors S ⊆ R^n, the convex hull of S is the smallest

convex set containing S.

STAB(G), TH(G) and QSTAB(G)

Definition 1. The incidence vector X^S of a set S of vertices of a graph G is defined by X^S_i = 1 if i ∈ S and X^S_i = 0 if i ∉ S. Typically, S will be a clique or an independent set.

Definition 2. Given a graph G, STAB(G) is the convex hull of the incidence vectors of all stable sets of G.

Example. Consider K_3 (a clique with 3 vertices): the independent sets are ∅, {1}, {2}, {3}, and the corresponding incidence vectors are

[0, 0, 0]^T, [1, 0, 0]^T, [0, 1, 0]^T, [0, 0, 1]^T.

Figure 5: K_3 and STAB(K_3)

Definition 3. QSTAB(G) is the set in R^n defined by

x_i ≥ 0 for each i ∈ V,
∑_{i∈K} x_i ≤ 1 for each clique K of G.

It suffices to consider the maximal cliques of G. In the example of K_3, these become x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0 and x_1 + x_2 + x_3 ≤ 1. It is clear that the last inequality implies x_1 + x_2 ≤ 1, corresponding to the nonmaximal clique {1, 2}. For K_3, STAB(K_3) = QSTAB(K_3). These can be seen in Figure 5.
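As a quick illustration, the stable sets of K_3 and their incidence vectors can be enumerated by brute force. This is a sketch of mine, not part of the thesis; it assumes NumPy and uses 0-based vertex labels, and the helper name stable_sets is mine.

import itertools
import numpy as np

# Enumerate every stable set of a small graph given as an edge list.
def stable_sets(n, edges):
    E = {frozenset(e) for e in edges}
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1))
    for S in subsets:
        if all(frozenset(p) not in E for p in itertools.combinations(S, 2)):
            yield S

for S in stable_sets(3, [(0, 1), (0, 2), (1, 2)]):
    x = np.zeros(3, dtype=int)
    x[list(S)] = 1
    print(S, x)   # (), (0,), (1,), (2,) with the incidence vectors above

STAB(K_3) is the convex hull of the four printed vectors.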

Definition 4. The cost c(u_i) of a vector u_i in an orthogonal labeling of G is defined to be 0 if u_i = 0, and otherwise

c(u_i) = u_{1i}^2 / ‖u_i‖^2 = u_{1i}^2 / (u_{1i}^2 + ... + u_{di}^2).

Because c(u) is a homogeneous function, we may whenever convenient take all nonzero vectors in an orthogonal labeling to be unit vectors.

Definition 5. Given a graph G = (V, E) on n vertices,

TH(G) = {x ≥ 0 : ∑_{i∈V} c(u_i) x_i ≤ 1 for all orthogonal labelings u of G}.

Notice that TH(G) is the intersection of infinitely many halfspaces, so TH(G) is a convex set.

Lemma 1. If S is a stable set in G and u is an orthogonal labeling of G, then ∑_{i∈S} c(u_i) ≤ 1.

Proof. Let S = {i_1, i_2, ..., i_k}. It suffices to consider the case when {u_{i_1}, u_{i_2}, ..., u_{i_k}} is a set of k orthonormal vectors. There exists a d × d orthogonal matrix Q whose first k columns are u_{i_1}, u_{i_2}, ..., u_{i_k}. Since the length of the first row of Q is 1,

u_{1i_1}^2 + u_{1i_2}^2 + ... + u_{1i_k}^2 ≤ 1, and u_{1i_1}^2 + u_{1i_2}^2 + ... + u_{1i_k}^2 = c(u_{i_1}) + c(u_{i_2}) + ... + c(u_{i_k}) = ∑_{i∈S} c(u_i).

Hence ∑_{i∈S} c(u_i) ≤ 1.

Lemma 2. STAB(G) ⊆ TH(G) ⊆ QSTAB(G).

Proof. If S is a stable set, X^S is its incidence vector and {u_1, u_2, ..., u_n} is an orthogonal labeling of G, then by the previous lemma

∑_{i∈V} c(u_i) X^S_i = ∑_{i∈S} c(u_i) + ∑_{i∉S} c(u_i) · 0 ≤ 1.

Since TH(G) contains each incidence vector and is convex, TH(G) contains the convex hull of the incidence vectors of stable sets. Therefore STAB(G) ⊆ TH(G).

For each clique K of G, define a map from V to R by i ↦ X^K_i. If ij ∉ E, then either i ∉ K or j ∉ K, so X^K_i X^K_j = 0. So this map is an orthogonal labeling (in R^1, where c(X^K_i) = 1 if X^K_i ≠ 0 and 0 otherwise, i.e., c(X^K_i) = X^K_i). Therefore, if x ∈ TH(G),

1 ≥ ∑_{i∈V} c(X^K_i) x_i = ∑_{i∈V} X^K_i x_i = ∑_{i∈K} x_i + ∑_{i∉K} 0 · x_i = ∑_{i∈K} x_i,

so x ∈ QSTAB(G). Hence TH(G) ⊆ QSTAB(G).

1 The Theta Function

Definition 6. Given a graph G, we define

ϑ(G) = max{∑_{i∈V} x_i : x ∈ TH(G)}.

Similar definitions can be given for STAB and QSTAB:

α(G) = max{∑_{i∈V} x_i : x ∈ STAB(G)},
κ(G) = max{∑_{i∈V} x_i : x ∈ QSTAB(G)}.

First, we need to show that this definition of α(G) is in fact the size of the largest stable set in G. Let G be a graph with α(G) = m, let S be an independent set of size m, and let X^S = [x_1, x_2, ..., x_n]^T be the incidence vector of S. Then ∑_{i=1}^n x_i = m. So

α(G) = m ≤ max{∑ x_i : x ∈ STAB(G)}.

For example, consider the graph G_1 shown in Figure 6. For this graph we have α(G_1) = 2. The incidence vector of the stable set {1, 4} is x = [1, 0, 0, 1]^T ∈ STAB(G_1), so we have ∑_{i=1}^4 x_i = 2. It is clear that 2 ≤ max{∑ x_i : x ∈ STAB(G_1)}. So α(G_1) ≤ max{∑ x_i : x ∈ STAB(G_1)}.

Figure 6: G_1

We now prove the reverse inequality. We know that STAB(G) is a convex set. Let {X^{S_1}, X^{S_2}, ..., X^{S_m}} be the set of incidence vectors of the stable sets of G. Then any vector x ∈ STAB(G) is a convex combination of X^{S_1}, X^{S_2}, ..., X^{S_m}, so

x = ∑_{i=1}^m a_i X^{S_i}, where ∑_{i=1}^m a_i = 1 and a_i ≥ 0 for i = 1, 2, ..., m.

Since α(G) is the maximum number of vertices in an independent set of G, for each vector X^{S_i} we have α(G) ≥ ∑_{j=1}^n X^{S_i}_j, so

∑_{j=1}^n x_j = ∑_{j=1}^n ∑_{i=1}^m a_i X^{S_i}_j = ∑_{i=1}^m a_i ∑_{j=1}^n X^{S_i}_j ≤ ∑_{i=1}^m a_i α(G) = α(G) ∑_{i=1}^m a_i = α(G),

so α(G) ≥ max{∑_{i∈V} x_i : x ∈ STAB(G)}. Therefore α(G) = max{∑_{i∈V} x_i : x ∈ STAB(G)}. This shows that our two different definitions of α(G) are consistent.

We now prove the sandwich theorem mentioned at the beginning of the paper: for any graph G, ϑ(G^c) lies between the clique number ω(G) and the chromatic number χ(G).

Theorem 1 (Sandwich Theorem). ω(G) ≤ ϑ(G^c) ≤ χ(G).

Proof. Since α(G^c) = ω(G) and χ̄(G^c) = χ(G), this is equivalent to proving that α(G^c) ≤ ϑ(G^c) ≤ χ̄(G^c), or α(G) ≤ ϑ(G) ≤ χ̄(G).

Since ϑ(g) = max{ i V x i x TH(G)}, α(g) = max{ x V x i x STAB(G)}, and STAB(G) TH(G), we have α(g) ϑ(g). Similarly, since TH(G) QSTAB(G), we have ϑ(g) κ(g). Now, we need to show that κ(g) χ(g). Suppose K,, K p is a smallest set of cliques that cover the vertices of G, and let x QSTAB(G). Then Therefore, n x i = i= p j= i K j x i p = p = χ(g). j= κ(g) = max{ i V x i x QSTAB(G)} χ(g). Combining, we have α(g) ϑ(g) κ(g) χ(g), which completes the proof. For example, consider the cyclic graph C 5 in Figure 4 with vertices {, 2, 3, 4, 5}. For x QSTAB(G), x + x 2, x 2 + x 3, x 3 + x 4, x 4 + x 5, x 5 + x. Then 2(x +x 2 +x 3 +x 4 +x 5 ) 5. Hence κ(c 5 ) 5 2. Since ( 2, 2, 2, 2, 2 ) QSTAB(C 5), κ(c 5 ) = 5. But χ(c 2 5) = 3 > κ(c 5 ). Corollary. If G is a perfect graph, then ϑ(g) = α(g). Proof. By the Perfect Graph Theorem, G c is a perfect graph. Therefore, we have ω(g c ) = χ(g c ). By the Sandwich Theorem, ω(g c ) ϑ(g) χ(g c ), so ϑ(g) = ω(g c ). Since α(g) = ω(g c ), therefore ϑ(g) = α(g). 2 Two Additional Functions ϑ (G) and ϑ 2 (G) We have just shown that ϑ(g) lies between χ(g) and α(g). In this part, we are going to introduce the other two functions ϑ (G) and ϑ 2 (G). They are different ways of defining ϑ(g). This will not only help us to understand ϑ(g) but also help us to compute ϑ(g). We will prove that ϑ(g) ϑ (G) ϑ 2 (G). In [3], Donald Knuth introduced two additional functions ϑ 3, ϑ 4 in order to prove ϑ 2 ϑ. Therefore ϑ, ϑ, ϑ 2 are all equivalent. 8

2 Two Additional Functions ϑ_1(G) and ϑ_2(G)

We have just shown that ϑ(G) lies between α(G) and χ̄(G). In this part, we are going to introduce two other functions, ϑ_1(G) and ϑ_2(G). They are different ways of defining ϑ(G). This will not only help us to understand ϑ(G) but also help us to compute ϑ(G). We will prove that ϑ(G) ≤ ϑ_1(G) ≤ ϑ_2(G). In [3], Donald Knuth introduced two additional functions ϑ_3, ϑ_4 in order to prove ϑ_2 ≤ ϑ. Therefore ϑ, ϑ_1, ϑ_2 are all equivalent.

2.1 The ϑ_1(G) function

If G is a graph,

ϑ_1(G) = min_a max_v (1/c(a_v)), over all orthogonal labelings a.

The max is ∞ if there is some v such that c(a_v) = 0.

Lemma 3. ϑ(G) ≤ ϑ_1(G).

Proof. Suppose x ∈ TH(G) and a is an orthogonal labeling of G. Then

ϑ(G) = max{∑_v x_v : x ∈ TH(G)} = max{∑_v (1/c(a_v)) c(a_v) x_v : x ∈ TH(G)} ≤ (max_v 1/c(a_v)) · max{∑_v c(a_v) x_v : x ∈ TH(G)}.

By the definition of TH(G), ∑_v c(a_v) x_v ≤ 1. Therefore ϑ(G) ≤ max_v 1/c(a_v). Since this inequality holds for each orthogonal labeling,

ϑ(G) ≤ min_a max_v (1/c(a_v)) = ϑ_1(G).

2.2 The ϑ_2(G) function

Definition 7. A matrix A is a feasible matrix for the graph G if A is indexed by the vertices of G and

(i) A is real and symmetric,
(ii) a_ii = 1 for all i ∈ V,
(iii) a_ij = 1 whenever i and j are nonadjacent in G.

The rank 1 matrix A_0 = [1, 1, ..., 1]^T [1, 1, ..., 1] is feasible for any graph G and has one nonzero eigenvalue,

λ_1(A_0) = tr(A_0) = ∑_{i=1}^n 1 = n.

If A is any real symmetric matrix with eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_n, then by the Spectral Theorem A = QDQ^T for some orthogonal matrix Q and diagonal matrix D = diag(λ_1, λ_2, ..., λ_n). Since all the eigenvalues of A are real, by the Rayleigh-Ritz Theorem A has maximum eigenvalue

λ_1(A) = max{x^T A x : ‖x‖_2 = 1, x ∈ R^n}.

Lemma 4. The set of feasible A with λ_1(A) ≤ n is compact.

Proof. Starting with λ_1(A) + λ_2(A) + ... + λ_n(A) = tr(A) = n, we use what we know about the ordering of the eigenvalues to get (n−1)λ_1(A) + λ_n(A) ≥ n, so (n−1)n + λ_n(A) ≥ n. In particular, we have λ_n(A) ≥ −(n−2)n, so that −(n−2)n ≤ λ_i(A) ≤ n for i = 1, ..., n. This shows that λ_i^2(A) ≤ n^2(n−2)^2, and so

∑_{i,j=1}^n a_ij^2 = ∑_{i=1}^n λ_i^2(A) ≤ n^3(n−2)^2.

Therefore the set is bounded. The set is clearly closed, so it is compact.

From Lemma 4 and the fact that λ_1(A) is a continuous function of the entries of A, we know that the minimum value of λ_1(A) over feasible matrices exists.
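The Rayleigh-Ritz characterization used above is easy to see numerically. The following sketch (mine, assuming NumPy) samples random unit vectors against the rank 1 feasible matrix A_0 with n = 4; the sampled quotients approach, but never exceed, the top eigenvalue n.

import numpy as np

rng = np.random.default_rng(0)
A0 = np.ones((4, 4))                              # the rank-1 feasible matrix, n = 4
lam1 = np.linalg.eigvalsh(A0).max()               # = n = 4
xs = rng.normal(size=(100_000, 4))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # random unit vectors
print(lam1, (xs @ A0 * xs).sum(axis=1).max())     # 4.0 and a value just below 4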

Definition 8. Given a graph G,

ϑ_2(G) = min{λ_1(A) : A is a feasible matrix for G}.

Lemma 5. ϑ_1(G) ≤ ϑ_2(G).

Proof. Let A be a feasible matrix such that ϑ_2(G) = λ_1, and let B = λ_1 I − A. Let λ_2 ≥ λ_3 ≥ ... ≥ λ_n be the remaining eigenvalues of A. The eigenvalues of B are λ_1 − λ_i. It is clear that all the eigenvalues of B are nonnegative, so B is positive semidefinite. If A = Q diag(λ_1, λ_2, ..., λ_n) Q^T, then

B = Q diag(λ_1 − λ_1, λ_1 − λ_2, ..., λ_1 − λ_n) Q^T = Q diag((√(λ_1 − λ_1))^2, (√(λ_1 − λ_2))^2, ..., (√(λ_1 − λ_n))^2) Q^T = X^T X,

where

X = diag(√(λ_1 − λ_1), √(λ_1 − λ_2), ..., √(λ_1 − λ_n)) Q^T.

Let X = [x_1, x_2, ..., x_n] (i.e., the x_i are the columns of X), so that we have b_ij = x_i^T x_j. Let u_i = [1, x_i^T]^T ∈ R^{n+1}, where x_i ∈ R^n.

If i ≠ j and ij is not in E, then u_i · u_j = 1 + x_i^T x_j = 1 + b_ij = a_ij + b_ij, since a_ij = 1 for nonadjacent i and j. Since B = λ_1 I − A, we have b_ij = −a_ij for i ≠ j, so u_i · u_j = a_ij − a_ij = 0. Therefore u_1, u_2, ..., u_n is an orthogonal labeling of G.

Note, appealing to the definition of cost, that

c(u_i) = 1/(1 + ‖x_i‖^2).

So for each i = 1, 2, ..., n,

1/c(u_i) = 1 + ‖x_i‖^2 = 1 + x_i · x_i = 1 + b_ii = a_ii + λ_1 − a_ii = λ_1.

This implies λ_1 = max_i 1/c(u_i). Hence

ϑ_1(G) = min_a max_i (1/c(a_i)) ≤ max_i (1/c(u_i)) = λ_1 = ϑ_2(G).

This completes the proof.

In [3], Donald E. Knuth has proved that ϑ_2(G) ≤ ϑ(G) by proving ϑ_2(G) ≤ ϑ_3(G) ≤ ϑ_4(G) ≤ ϑ(G). So we can conclude that ϑ(G) = ϑ_1(G) = ϑ_2(G).
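The construction in the proof of Lemma 5 can be carried out numerically. This sketch (mine, assuming NumPy) uses, for concreteness, the feasible matrix of C_5 with all edge entries equal to x = (−3 + √5)/2, the optimal value that will appear in Section 4; it factors B = λ_1 I − A as X^T X, builds the labeling u_i, and checks orthogonality and 1/c(u_i) = λ_1.

import numpy as np

x = (-3 + np.sqrt(5)) / 2
A = np.ones((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = x    # edge entries of C_5
lam1 = np.linalg.eigvalsh(A).max()               # = sqrt(5)

w, Q = np.linalg.eigh(lam1 * np.eye(5) - A)      # B = lam1*I - A is PSD
X = np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T  # B = X^T X
U = np.vstack([np.ones(5), X])                   # column i is u_i = (1, x_i)

print(U[:, 0] @ U[:, 2])                         # ~0: vertices 1 and 3 are nonadjacent
print((U * U).sum(axis=0))                       # each column norm^2 = 1 + |x_i|^2 = lam1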

Consider the graph K n. We know that K n is a perfect graph. By the Sandwich Theorem, ϑ(k n ) = α(k n ) =. We also obtain this conclusion in two different ways. There are no missing edges in the graph K n, so any set of n vectors is an orthogonal labeling. Let u i = [ a, b] T for i =, 2,..., n, then c(u i ) = a/(a + b). Now consider the following, c(ui ) x i = a a + b x i, rearranging we get xi + b a. From this last statement, it is clear that if we let b = 0 then x i. It follows that TH(K n ) = {x 0 x i }. Therefore, ϑ(k n ) = max{ i V x i x TH(K n )} = max{ i V x i x i, x 0} =. We should get the same result if we look at ϑ 2 (K n ). Let A n be the feasible matrix of K n, and let λ be the maximum eigenvalue of A n, then the matrix A n has the form x 2 x (n ) x n x 2 x 2(n ) x 2n A n =........ x (n ) x 2(n ) x (n )n x n x 2n x (n )n Since for any graph G nλ = λ + λ + + λ λ + λ 2 + + λ n = tra n = n, we have that λ. On the other hand, A n becomes I when x ij = 0 for any i j, and in that case λ =. So ϑ 2 (K n ) = min λ =. Lemma 7. ϑ 2 (K c n) = n. 3

The graph K_n^c is very simple. It just contains n vertices without any edges, which means there are no pairwise adjacent vertices. Since K_n^c is a perfect graph, by the Sandwich Theorem, ϑ(K_n^c) = α(K_n^c) = n.

We can also see this result very easily from the definition of the ϑ_2 function. Since the feasible matrix A_n of K_n^c is the n × n matrix with all entries equal to 1, the rank of A_n is 1. Therefore there is only one nonzero eigenvalue, and it equals tr(A_n), which is n. Hence ϑ(K_n^c) = ϑ_2(K_n^c) = n.
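A two-line numeric check of Lemmas 6 and 7 (a sketch of mine, assuming NumPy): with the free entries set to 0, the feasible matrix of K_n is the identity, while the only feasible matrix of K_n^c is the all-ones matrix.

import numpy as np

n = 6
print(np.linalg.eigvalsh(np.eye(n)).max())        # 1.0 = theta_2(K_n)
print(np.linalg.eigvalsh(np.ones((n, n))).max())  # 6.0 = theta_2(K_n^c) = n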

3.2 ϑ_2(G) for complete multipartite graphs

Definition 9. The union of graphs G and H with V(G) ∩ V(H) = ∅, written G ∪ H, has vertex set V(G) ∪ V(H) and edge set E(G) ∪ E(H). Similarly, the join of graphs G and H with V(G) ∩ V(H) = ∅, written G ∨ H, is obtained from G ∪ H by adding the edges {xy : x ∈ V(G), y ∈ V(H)}.

Let G = K_{n_1,n_2,...,n_k} be a complete multipartite graph; then

K_{n_1,n_2,...,n_k} = (K_{n_1} ∪ K_{n_2} ∪ ... ∪ K_{n_k})^c = K_{n_1}^c ∨ K_{n_2}^c ∨ ... ∨ K_{n_k}^c.

For example, in Figure 7 we have the union and the join of the two graphs P_3 and K_3. The bold edge in the join indicates that we join all vertices between these two sets.

Figure 7: The union G ∪ H and the join G ∨ H of two graphs

Lemma 8. ϑ(K_{n_1,n_2,...,n_k}) = max{n_1, n_2, ..., n_k}.

We will prove this lemma in two different ways.

Proof. For our first proof, consider the following. First, let G = K_{n_1,n_2,...,n_k}. We show that G is a perfect graph. This follows since

ω(K_{n_1} ∪ K_{n_2} ∪ ... ∪ K_{n_k}) = max{n_1, n_2, ..., n_k} = χ(K_{n_1} ∪ K_{n_2} ∪ ... ∪ K_{n_k}),

with a similar equality for each induced subgraph. So K_{n_1} ∪ K_{n_2} ∪ ... ∪ K_{n_k} is a perfect graph. By the Perfect Graph Theorem, a graph is perfect if and only if its complement is perfect. Therefore K_{n_1,n_2,...,n_k} = (K_{n_1} ∪ K_{n_2} ∪ ... ∪ K_{n_k})^c is perfect. So by Corollary 1,

ϑ(K_{n_1,n_2,...,n_k}) = α(K_{n_1,n_2,...,n_k}) = ω(K_{n_1} ∪ K_{n_2} ∪ ... ∪ K_{n_k}) = max{n_1, n_2, ..., n_k}.

Before we use the second method of proving the lemma, we need to state a useful result.

Theorem 2 (Interlacing Inequalities). Let A ∈ M_n be Hermitian, and for any i ∈ N let A(i) be the principal submatrix obtained from A by deleting the i-th row and column. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_n be the eigenvalues of A and μ_1 ≥ μ_2 ≥ ... ≥ μ_{n−1} be the eigenvalues of A(i). Then

λ_1 ≥ μ_1 ≥ λ_2 ≥ μ_2 ≥ λ_3 ≥ μ_3 ≥ ... ≥ λ_{n−1} ≥ μ_{n−1} ≥ λ_n.

Corollary 2. Let A ∈ M_n be a Hermitian matrix and B ∈ M_m be any principal submatrix of A. Then λ_k(A) ≥ λ_k(B), k = 1, 2, ..., m.

Now we are ready to prove Lemma 8 by using the definition of the ϑ_2 function. First, consider an example: let G = K_{3,2,1} = (K_3 ∪ K_2 ∪ K_1)^c. This graph is shown in Figure 8. Let A be a feasible matrix of K_{3,2,1}. Then A has the form

A =
[ 1 1 1 * * * ]
[ 1 1 1 * * * ]
[ 1 1 1 * * * ]
[ * * * 1 1 * ]
[ * * * 1 1 * ]
[ * * * * * 1 ]

where the remaining entries (marked *) are any real numbers for which A is symmetric.

Figure 8: K_{3,2,1} = K_3^c ∨ K_2^c ∨ K_1^c

There are three blocks in A; they are A_3, A_2, A_1, the feasible matrices for K_3^c, K_2^c and K_1^c. The eigenvalues of these matrices are: λ(A_3) = 3, 0, 0; λ(A_2) = 2, 0; λ(A_1) = 1. Let λ_1 be the maximum eigenvalue of A. Then by Corollary 2,

ϑ_2(K_{3,2,1}) = λ_1 ≥ max{λ_1(A_3), λ_1(A_2), λ_1(A_1)} = 3.

However, if we set all remaining entries in A equal to 0, then A = A_3 ⊕ A_2 ⊕ A_1, which has eigenvalues 3, 0, 0, 2, 0, 1, and so λ_1 = 3. Therefore ϑ_2(K_{3,2,1}) = 3.

We are now ready to generalize this for the proof of Lemma 8.

Proof. For our second proof, consider the feasible matrix A_n of the graph G = K_{n_1,n_2,...,n_k},

A_n =
[ J_{n_1}   *        ...  *        ]
[ *         J_{n_2}  ...  *        ]
[ ...                     ...      ]
[ *         *        ...  J_{n_k}  ]

where the off-diagonal-block entries are real numbers which preserve symmetry, and the J_{n_i} are the feasible matrices for K_{n_i}^c, so all their entries equal 1. Let λ_1 be the maximum eigenvalue. By Corollary 2, λ_1(A_n) ≥ λ_1(J_{n_i}) for i = 1, 2, ..., k. Since the eigenvalues of the matrix J_{n_i} are n_i and 0, λ_1(J_{n_i}) = n_i. Therefore λ_1(A_n) ≥ max{n_1, n_2, ..., n_k}. Equality holds when all the off-diagonal-block entries of A_n equal 0, which attains the minimum value of λ_1. Hence ϑ_2(K_{n_1,n_2,...,n_k}) = max{n_1, n_2, ..., n_k}.
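A numeric spot-check of the K_{3,2,1} example (a sketch of mine; block_diag is from SciPy):

import numpy as np
from scipy.linalg import block_diag

# Feasible matrix for K_{3,2,1} with all free entries set to 0:
# the direct sum of J_3, J_2 and J_1.
A = block_diag(np.ones((3, 3)), np.ones((2, 2)), np.ones((1, 1)))
print(np.sort(np.linalg.eigvalsh(A)))   # [0 0 0 1 2 3] -> lambda_1 = 3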

3.3 ϑ_2(G) for the union of two complete graphs

We have proved earlier that ϑ(K_n) = 1 (Lemma 6). Now we want to show the calculation of the ϑ function for the union of two complete graphs.

Lemma 9. ϑ(K_n ∪ K_m) = 2.

Proof. Let G = K_m ∪ K_n, and let A, A_m and A_n be feasible matrices of G, K_m and K_n respectively. Let λ_1(A), λ_1(A_m), λ_1(A_n) represent the maximum eigenvalues of these matrices. Then A_m is an m × m symmetric matrix with all diagonal entries equal to 1 and arbitrary real off-diagonal entries; similarly, A_n is an n × n symmetric matrix with all diagonal entries equal to 1 and arbitrary real off-diagonal entries. Since every vertex of K_n is nonadjacent to every vertex of K_m, A has the form

A =
[ A_n  J   ]
[ J^T  A_m ]

where J denotes an all-ones block. Now suppose each off-diagonal entry in A_n and A_m equals x, and define B = A + (x − 1)I. Then we obtain the (m + n) × (m + n) symmetric matrix

B =
[ xJ_n  J    ]
[ J^T   xJ_m ]

with every entry in the diagonal blocks equal to x and every entry in the off-diagonal blocks equal to 1. Since the rank of B is at most two, B has at most two nonzero eigenvalues, so at least m + n − 2 eigenvalues are zero. The characteristic polynomial of B is

t^{m+n} − (m + n)x t^{m+n−1} + mn(x^2 − 1) t^{m+n−2}.

Using the quadratic formula to solve for the possible nonzero eigenvalues, we find they are

t = ((m + n)x ± √((m − n)^2 x^2 + 4mn)) / 2.

Hence the eigenvalues of B are

((m + n)x ± √((m − n)^2 x^2 + 4mn)) / 2, and 0 with multiplicity m + n − 2.

It follows that the eigenvalues of A are λ(B) − (x − 1):

((m + n)x ± √((m − n)^2 x^2 + 4mn)) / 2 − (x − 1), and 1 − x with multiplicity m + n − 2.

When x = −1, the eigenvalues of B are 0 and −(m + n), so the eigenvalues of A are 2 − (m + n), and 2 with multiplicity m + n − 1. We have the largest eigenvalue λ_1 = 2. This shows ϑ_2(K_n ∪ K_m) ≤ 2. By the Sandwich Theorem, ϑ(K_n ∪ K_m) ≥ α(K_n ∪ K_m) = 2; therefore, ϑ(K_n ∪ K_m) = 2.
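A quick numeric check of Lemma 9 (a sketch of mine, assuming NumPy): build the x = −1 feasible matrix for K_4 ∪ K_3 and confirm that the top eigenvalue is 2.

import numpy as np

n, m = 4, 3
A = np.ones((n + m, n + m))          # forced 1s between the two cliques
A[:n, :n] = 2 * np.eye(n) - 1        # K_n block: diagonal 1, off-diagonal -1
A[n:, n:] = 2 * np.eye(m) - 1        # K_m block
print(np.linalg.eigvalsh(A).max())   # 2.0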

4 ϑ_2(G) and C_n

In this part, we are going to utilize the ϑ_2(G) function to find ϑ(G) for the n-cycle, C_n, which is an imperfect graph for each odd integer greater than three. First we will illustrate how to use ϑ_2 to calculate ϑ(C_5). We first need to prove one important fact. Given a symmetric matrix A, let λ_1(A) denote the maximum eigenvalue of A.

Lemma 10. Let A_i, i = 1, 2, ..., k, be symmetric matrices. Then

λ_1(∑_{i=1}^k A_i) ≤ ∑_{i=1}^k λ_1(A_i).

Proof. Let A and B be symmetric matrices. By the Rayleigh-Ritz theorem,

λ_1(A) = max_{x≠0} (x^T A x)/(x^T x), λ_1(B) = max_{x≠0} (x^T B x)/(x^T x).

Since

(x^T (A + B) x)/(x^T x) = (x^T A x)/(x^T x) + (x^T B x)/(x^T x) ≤ max_{x≠0} (x^T A x)/(x^T x) + max_{x≠0} (x^T B x)/(x^T x) = λ_1(A) + λ_1(B),

we have

λ_1(A + B) = max_{x≠0} (x^T (A + B) x)/(x^T x) ≤ λ_1(A) + λ_1(B).

The result follows by mathematical induction.

Notice that the Lemma is false if A is not a symmetric matrix. For example, if

A =
[ 0    m ]
[ 1/m  0 ]

and B =
[ 0  1/m ]
[ m  0   ]

then the eigenvalues of A and B are 1 and −1. Then λ_1(A) + λ_1(B) = 2, but

A + B =
[ 0          m + 1/m ]
[ m + 1/m    0       ]

and the eigenvalues of A + B are m + 1/m and −(m + 1/m). It is clear that m + 1/m > 2 for any m > 0 with m ≠ 1.

Now we compute ϑ_2(C_5). Consider the graph G = C_5 with V = {1, 2, 3, 4, 5} and the five edges labeled x_1, x_2, x_3, x_4, x_5. The relationship of the vertices with the edges is shown in Figure 9.

Figure 9: C_5 with edges labeled x_1, ..., x_5

The feasible matrix for C_5 is

A_1 =
[ 1    x_1  1    1    x_5 ]
[ x_1  1    x_2  1    1   ]
[ 1    x_2  1    x_3  1   ]
[ 1    1    x_3  1    x_4 ]
[ x_5  1    1    x_4  1   ]

where now x_1, x_2, x_3, x_4, x_5 represent real numbers. We want to show that the minimum value of λ_1 occurs when x_1 = x_2 = x_3 = x_4 = x_5. To do this, consider the permutation matrix

P =
[ 0 1 0 0 0 ]
[ 0 0 1 0 0 ]
[ 0 0 0 1 0 ]
[ 0 0 0 0 1 ]
[ 1 0 0 0 0 ]

Now define

A_2 = P^T A_1 P =
[ 1    x_5  1    1    x_4 ]
[ x_5  1    x_1  1    1   ]
[ 1    x_1  1    x_2  1   ]
[ 1    1    x_2  1    x_3 ]
[ x_4  1    1    x_3  1   ]

We may view the action of P as rotating each vertex of the graph C_5 one position counterclockwise, or equivalently, each edge one position clockwise. This is seen in Figure 10.

Figure 10: The graphs corresponding to A_1, A_2, ..., A_5
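These two steps, that relabeling by P preserves the spectrum and that averaging can only lower λ_1 (Lemma 10), can be checked numerically. A sketch of mine, assuming NumPy; the edge values and the helper name feasible_C5 are arbitrary choices for illustration.

import numpy as np

def feasible_C5(vals):
    # Feasible matrix for C_5; vals[i] sits on the edge between vertices i, i+1.
    A = np.ones((5, 5))
    for i, v in enumerate(vals):
        A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = v
    return A

vals = np.array([0.3, -0.7, 1.2, 0.0, -2.0])   # arbitrary example edge values
A1 = feasible_C5(vals)
P = np.roll(np.eye(5), 1, axis=1)              # cyclic permutation matrix
A2 = P.T @ A1 @ P                              # relabeled: same eigenvalues
A0 = feasible_C5(np.full(5, vals.mean()))      # averaged matrix

print(np.allclose(np.linalg.eigvalsh(A1), np.linalg.eigvalsh(A2)))          # True
print(np.linalg.eigvalsh(A0).max() <= np.linalg.eigvalsh(A1).max() + 1e-9)  # True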

We similarly define A_3 = (P^2)^T A_1 P^2, A_4 = (P^3)^T A_1 P^3 and A_5 = (P^4)^T A_1 P^4; these are the feasible matrices obtained by shifting the edge values two, three and four positions around the cycle. Since A_1, A_2, A_3, A_4 and A_5 are similar, they have the same eigenvalues. Now let

A_0 = (1/5)(A_1 + A_2 + A_3 + A_4 + A_5) =
[ 1  x  1  1  x ]
[ x  1  x  1  1 ]
[ 1  x  1  x  1 ]
[ 1  1  x  1  x ]
[ x  1  1  x  1 ]

where x = (1/5)(x_1 + x_2 + x_3 + x_4 + x_5). By Lemma 10 we have

λ_1(A_0) = λ_1[(1/5)(A_1 + A_2 + A_3 + A_4 + A_5)]
≤ λ_1((1/5)A_1) + λ_1((1/5)A_2) + λ_1((1/5)A_3) + λ_1((1/5)A_4) + λ_1((1/5)A_5)
= (1/5)[λ_1(A_1) + λ_1(A_2) + λ_1(A_3) + λ_1(A_4) + λ_1(A_5)] = λ_1(A_1).

Therefore, the minimum of λ_1 occurs at A_0, which means when x_1 = x_2 = x_3 = x_4 = x_5. I used Maple to compute the eigenvalues of A_0 as follows:

3 + 2x, (−1/2 + (1/2)√5)(−1 + x), (−1/2 − (1/2)√5)(−1 + x), (−1/2 + (1/2)√5)(−1 + x), (−1/2 − (1/2)√5)(−1 + x).

Notice that there are only three distinct eigenvalues, namely

λ_1 = 3 + 2x, λ_2 = (−1/2 + (1/2)√5)(−1 + x), λ_3 = (−1/2 − (1/2)√5)(−1 + x).

We know that ϑ_2(C_5) = min λ over all feasible matrices, where λ denotes the maximum eigenvalue. Figure 11 shows plots of the lines λ_1, λ_2, λ_3 as functions of x.

Figure 11: The lines λ_1, λ_2, λ_3 as functions of x

From the graph (Figure 11) we see that the minimum λ occurs at the intersection of λ_1 and λ_3, which occurs at x = (−3 + √5)/2 and gives λ = √5. Therefore

ϑ_2(C_5) = min λ_1(A) = √5.

Since C_5^c = C_5, the Sandwich Theorem says that ω(C_5) ≤ ϑ(C_5) ≤ χ(C_5), which is 2 < √5 < 3. This is our first example of the sandwich theorem in which the inequalities are strict.
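Before turning to general n, the C_5 computation can be confirmed numerically; a sketch of mine (assuming NumPy, with lam_max a helper name of my choosing) that scans x and minimizes the top eigenvalue of A_0(x):

import numpy as np

def lam_max(x, n=5):
    # A_0(x): the C_n feasible matrix with every edge entry equal to x.
    A = np.ones((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = x
    return np.linalg.eigvalsh(A).max()

xs = np.linspace(-2.0, 1.0, 3001)
vals = [lam_max(x) for x in xs]
i = int(np.argmin(vals))
print(xs[i], vals[i])   # x ~ -0.382 = (-3+sqrt(5))/2, value ~ 2.2361 = sqrt(5)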

Lemma. Let A n be the adjacency matrix of C n. Then the eigenvalues for A n are λ j = 2 cos 2πj, j = 0,,..., n. n Proof. Assume the vertices of C n are labeled through n. Then the adjacency matrix A n of C n is 0 0 0 0 0 0. 0 0.. 0 0 A n =............ 0 0 0... 0 0 0 0. Assume λ is an eigenvalue of A n and x = [x, x 2,..., x n ] T is a corresponding eigenvector. The equation Ax = λx written in components is x n + x 2 = λx x + x 3 = λx 2 x 2 + x 4 = λx 3. x n + x = λx n We can rewrite this as a single equation x k + x k+ = λx k () with boundary conditions x = x n+ and x 0 = x n. This is a second degree linear difference equation with constant coefficients. For a difference equation, the fundamental solutions have the form r k, r 0. Let x k = r k in equation (). Then we have r k + r k+ = λr k. Dividing by r k, we have r 2 λr + = 0. Let r, r 2 be the two roots of this equation. Then r + r 2 = λ, r r 2 =. 22

First assume that r = r 2. So r = r 2 = or r = r 2 =. Consider r = r 2 =, then λ = 2. The solution for equation () is x k = k =. Also x k = k k = k is a solution since x k + x k+ λx k = (k ) + (k + ) 2k = 0. So if r = r 2 =, the general solution for equation () is x k = c + c 2 k. From the boundary conditions we have c = c + nc 2 and c + c 2 = c + (n + )c 2. Therefore c 2 = 0. Hence x = x 2 = = x n = c, i.e., c [,,..., ] T is an eigenvector corresponding to λ = 2. Similarly, if r = r 2 =, λ = 2, the solutions for the equation () are x k = ( ) k and x k = k ( ) k. The general solution for equation () is also x k = c ( ) k + c 2 k( ) k. From the boundary conditions we have x = c [,,,,, ] T if n is even, and x = 0 if n is odd. So λ = 2 is an eigenvalue if n is even. Now we assume that r r 2. Then the general solution for equation () is x k = c r k + c 2 r k 2. From the boundary conditions we have c + c 2 = c r n + c 2 r n 2 and c r + c 2 r 2 = c r n+ + c 2 r n+ 2. Equivalently, c ( r n ) + c 2 ( r n 2 ) = 0 and c r ( r n ) + c 2 r 2 ( r n 2 ) = 0. (2) 23

From equation (2) we have

[ 1 − r_1^n       1 − r_2^n      ] [ c_1 ]
[ r_1(1 − r_1^n)  r_2(1 − r_2^n) ] [ c_2 ]  =  0.

Since [c_1, c_2]^T is nonzero,

B = [ 1 − r_1^n       1 − r_2^n      ]
    [ r_1(1 − r_1^n)  r_2(1 − r_2^n) ]

is a singular matrix, which means det(B) = 0. That is,

(1 − r_1^n) r_2 (1 − r_2^n) − (1 − r_2^n) r_1 (1 − r_1^n) = 0,

so

(1 − r_1^n)(1 − r_2^n)(r_2 − r_1) = 0.

For r_1^n = 1, r_1 = e^{2πij/n}, j = 0, 1, ..., n−1. Then r_2 = 1/r_1 = e^{−2πij/n}, which gives possible eigenvalues

λ_j = r_1 + r_2 = e^{2πij/n} + e^{−2πij/n} = 2 cos(2πj/n), j = 0, 1, ..., n−1.

For r_2^n = 1, we obtain the same result. Now we see the only possible solutions are

λ_j = 2 cos(2πj/n), for j = 0, 1, ..., n−1.

Note that j = 0 gives the solution for the case r_1 = r_2 = 1, and when n is even, j = n/2 gives the solution for r_1 = r_2 = −1. Therefore, these two previous cases can be combined with this case.

Next, we need to show that the eigenvalues of A and the λ_j are in one-to-one correspondence. Consider the matrix

B = A − λI =
[ −λ  1   0  ...  0   1  ]
[ 1   −λ  1  ...  0   0  ]
[ 0   1   −λ ...  0   0  ]
[ ...                    ]
[ 0   0   0  ...  −λ  1  ]
[ 1   0   0  ...  1   −λ ]

If we delete the first and last rows and the first two columns, we obtain a triangular matrix with 1s on the diagonal, which is invertible. Therefore rank(B) ≥ n − 2, so nullity(B) ≤ 2. So

geometric multiplicity(λ) ≤ 2, algebraic multiplicity(λ) ≤ 2.

For λ_0 = 2, we consider the matrix B_1 = A − 2I,

B_1 =
[ −2  1   0  ...  0   1  ]
[ 1   −2  1  ...  0   0  ]
[ ...                    ]
[ 1   0   0  ...  1   −2 ]

If we delete the first row and first column, we obtain the (n−1) × (n−1) matrix

B_1′ =
[ −2  1   ...  0  ]
[ 1   −2  ...  0  ]
[ ...             ]
[ 0   0   ...  −2 ]

For any x ∈ R^{n−1},

x^T B_1′ x = −2x_1^2 + 2x_1x_2 − 2x_2^2 + 2x_2x_3 − 2x_3^2 + ... + 2x_{n−2}x_{n−1} − 2x_{n−1}^2
= −x_1^2 − (x_1 − x_2)^2 − (x_2 − x_3)^2 − ... − (x_{n−2} − x_{n−1})^2 − x_{n−1}^2 ≤ 0.

So B_1′ is negative semidefinite. If x^T B_1′ x = 0, then x_1 = 0, x_1 − x_2 = 0, x_2 − x_3 = 0, ..., x_{n−2} − x_{n−1} = 0. Therefore x_1 = x_2 = ... = x_{n−1} = 0. So if x ≠ 0, then x^T B_1′ x < 0, and B_1′ is negative definite. We know that B_1′ is invertible because −B_1′ is positive definite. Therefore the rank of B_1 is at least n − 1. Also, since B_1 is a singular matrix, we have n − 1 ≤ rank(B_1) ≤ n − 1. Therefore rank(B_1) = n − 1 and nullity(B_1) = 1. Therefore

geometric multiplicity(2) = 1, and algebraic multiplicity(2) = 1.

If n is an odd number, there are (n + 1)/2 distinct values among λ_j = 2 cos(2πj/n). We let m^(2) be the number of eigenvalues with algebraic multiplicity equal to 2, and m^(1) be the number of eigenvalues with algebraic multiplicity equal to 1. Since A has n eigenvalues, n = 2m^(2) + m^(1); the eigenvalue 2 has multiplicity 1 and the remaining (n − 1)/2 distinct values have multiplicity at most 2, so

n = 2m^(2) + m^(1) ≤ 2 · (n − 1)/2 + 1 = n.

So each eigenvalue 2 cos(2πj/n), j = 1, ..., (n − 1)/2, has multiplicity exactly 2.

If n is an even number, λ = −2 is also an eigenvalue of the matrix A, since for the eigenvector x = [1, −1, 1, −1, ..., 1, −1]^T we have Ax = −2x. Consider the matrix B_2 = A + 2I,

B_2 =
[ 2  1  0  ...  0  1 ]
[ 1  2  1  ...  0  0 ]
[ ...                ]
[ 1  0  0  ...  1  2 ]

If we delete the first row and the first column, we obtain the (n−1) × (n−1) matrix

B_2′ =
[ 2  1  ...  0 ]
[ 1  2  ...  0 ]
[ ...          ]
[ 0  0  ...  2 ]

For any x ∈ R^{n−1}, by an argument similar to the case when λ_0 = 2, B_2′ is positive definite. Therefore B_2′ is invertible. So the rank of B_2 is at least n − 1. On the other hand, B_2 is a singular matrix because −2 is an eigenvalue of A. So n − 1 ≤ rank(B_2) ≤ n − 1.

Therefore rank(b 2 ) = n and nullity(b 2 ) =. So we have geometric multiplicity( 2) =, and algebraic multiplicity( 2) =. So the dimension of the eigenspace of B 2 corresponding to the eigenvalue 2 is one. By a similar argument as for when n is odd, we see that 2 and 2 are eigenvalues of multiplicity one and 2 cos(2πj/n), j =, 2, (n/2) are eigenvalues of multiplicity two. Equivalently, the eigenvalues of A in all cases are λ j = 2 cos 2πj, j = 0,,..., n. n Theorem 3. For any odd number n ϑ(c n ) = { n cos(π/n) +cos(π/n) if n is odd. n if n is even. 2 Proof. Let A n be the adjacency matrix of C n and J n be a n-by-n matrix with all entries equal to. As in the case n = 5, it is sufficient to consider a feasible matrix for C n to be J n ( x)a n. We have just proved that the eigenvalues of the graph C n are 2 cos 2πj, for j = 0,,..., n. n Let u j be the corresponding eigenvector. Note that u 0 = [,,..., ] T is an eigenvector corresponding to λ 0 = 2. Then we have [J n ( x)a n ]u 0 = [n ( x)2]u 0 = (n 2 + 2x)u 0. Since all the eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal, then for j > 0 we have, u T 0 J n u j =. u T 0 u j = 27 u T 0 u j. u T 0 u j = 0,

so then, [J n ( x)a n ]u j = 0u j ( x)a n u j = ( x)2 cos 2πj n u j. So the eigenvalues of J n ( x)a n are µ 0 = n 2 + 2x Let λ be the maximum eigenvalue. µ j = 2( x) cos 2πj, j =, 2,, n. n If n is even, we consider the graph of µ 0 and µ i, and consider the intersection of these lines. We know that all the lines of the form µ i = 2( x) cos(2πj/n) for j =, 2,..., n pass through the point (, 0). Since line µ 0 = n 2 + 2x has biggest positive slope and biggest y intercept, it crosses each of the lines µ, µ 2,..., µ n to the left of the vertical axis. The line µ n/2 = 2( x) cos(π) = 2 2x has the most negative slope so is the first to meet µ 0. It is easy to see this result if we consider the example of n = 6, which is illustrated in Figure 2. The distinct eigenvalues are µ 0 = 4 + 2x µ = 2( x) cos π 3 µ 2 = 2( x) cos 2π 3 µ 3 = 2( x) cos π. By graphing these eigenvalues, we can see that the maximum eigenvalue occurs at the intersection of µ 0 and µ 3 which occurs at ( /2, 3) and gives λ = 3 = n/2. 28

So, for any even n, we consider the intersection of the lines 2 − 2x and n − 2 + 2x, which occurs when x = 1 − n/4. Then

λ_1 = n − 2 + 2(1 − n/4) = n/2 = α(C_n).

If n is odd and j = (n − 1)/2, then

μ_j = −2(1 − x) cos(2π((n−1)/2)/n) = −2(1 − x) cos(π − π/n) = 2(1 − x) cos(π/n).

Similarly, we can find the maximum eigenvalue λ_1 by finding the intersection of μ_0 and μ_j. This occurs when

n − 2 + 2x = 2 cos(π/n) − 2x cos(π/n).

Rearranging, we get

2(1 + cos(π/n))x = 2 cos(π/n) − n + 2,

so that the intersection occurs when

x = (2 cos(π/n) + 2 − n) / (2(1 + cos(π/n))) = 1 − n/(2(1 + cos(π/n))).

Therefore

λ_1 = n − 2 + 2x = n − 2 + 2(1 − n/(2(1 + cos(π/n)))) = n − 2n/(2(1 + cos(π/n))) = n − n/(1 + cos(π/n)) = n cos(π/n) / (1 + cos(π/n)).

Hence

ϑ_2(C_n) = n cos(π/n) / (1 + cos(π/n)), for any odd n.
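As a final sanity check (a sketch of mine, assuming NumPy, with theta2_numeric a helper name of my choosing), Theorem 3 can be compared against brute-force minimization of the top eigenvalue over x:

import numpy as np

def theta2_numeric(n, xs=np.linspace(-3.0, 1.0, 4001)):
    # Minimize the top eigenvalue of the one-parameter feasible family for C_n.
    best = np.inf
    for x in xs:
        A = np.ones((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = x
        best = min(best, np.linalg.eigvalsh(A).max())
    return best

for n in (5, 6, 7, 8):
    c = np.cos(np.pi / n)
    exact = n * c / (1 + c) if n % 2 else n / 2
    print(n, theta2_numeric(n), exact)   # pairs agree to about three decimals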

References

[1] Barrett, Wayne W., Introduction to Spectral Graph Theory, unpublished notes.

[2] Horn, Roger A. and Charles R. Johnson, Matrix Analysis, Cambridge University Press, 1985.

[3] Knuth, Donald E., The Sandwich Theorem, The Electronic Journal of Combinatorics 1 (1994), #A1.

[4] Lovász, László, On the Shannon Capacity of a Graph, IEEE Transactions on Information Theory, Vol. IT-25, No. 1, January 1979.

[5] Merris, Russell, Graph Theory, John Wiley & Sons, Inc., New York, 2001.

[6] West, Douglas B., Introduction to Graph Theory, Prentice Hall, Upper Saddle River, NJ, 1996.