Four new upper bounds for the stability number of a graph


Miklós Ujvári
(H-2600 Vác, Szent János utca 1., Hungary)

Abstract. In 1979, L. Lovász defined the theta number, a spectral/semidefinite upper bound on the stability number of a graph, which has several remarkable properties (for example, it is exact for perfect graphs). A variant, the inverse theta number, defined recently by the author in a previous work, also constitutes an upper bound on the stability number. In this paper we describe counterparts of theorems due to Wilf and Hoffman: four spectral upper bounds on the stability number which differ from both the theta and the inverse theta numbers.

Keywords: stability number, spectral bound

1 Introduction

The earliest spectral bounds (upper, resp. lower bounds for the chromatic number of a graph) were derived in the late 1960s by H. S. Wilf and A. J. Hoffman (see e.g. Exercises 11.20 and 11.21 in [4]). In 1979, L. Lovász applied the method of variables to Wilf's and Hoffman's bounds, obtaining the theta number, which is sandwiched between the stability number of the graph and the chromatic number of the complementary graph and which, as the optimal value of a semidefinite program, is easily computable (see [5], [2]). In 1986, H. S. Wilf derived spectral lower bounds on the stability number (see [13]). In this paper we describe counterparts of Hoffman's and Wilf's bounds: four spectral upper bounds on the stability number.

We begin the paper by stating the main results. First, we fix some notation. Let $n \in \mathbb{N}$, and let $G = (V(G), E(G))$ be an undirected graph with vertex set $V(G) = \{1, \ldots, n\}$ and edge set $E(G) \subseteq \{\{i, j\} : i \neq j\}$. Let $A(G)$ be the 0-1 adjacency matrix of the graph $G$, that is, let
\[
A(G) := (a_{ij}) \in \{0, 1\}^{n \times n}, \quad \text{where } a_{ij} :=
\begin{cases}
0, & \text{if } \{i, j\} \notin E(G), \\
1, & \text{if } \{i, j\} \in E(G).
\end{cases}
\]

The set of $n$ by $n$ real symmetric matrices $A = A^T \in \mathbb{R}^{n \times n}$ satisfying $0 \le A \le A(G)$ will be denoted by $\mathcal{A}$. (Here $T$ stands for transpose, and the inequalities are meant elementwise.) The complementary graph $\overline{G}$ is the graph with adjacency matrix
\[
A(\overline{G}) := J - I - A(G),
\]
where $I$ is the identity matrix and $J$ denotes the matrix with all elements equal to one. The disjoint union of the graphs $G_1$ and $G_2$ is the graph $G_1 + G_2$ with adjacency matrix
\[
A(G_1 + G_2) := \begin{pmatrix} A(G_1) & 0 \\ 0 & A(G_2) \end{pmatrix}.
\]
We will use the notation $K_n$ for the clique graph, and $K_{s_1, \ldots, s_k}$ for the complete multipartite graph $\overline{K_{s_1} + \cdots + K_{s_k}}$.

Let $(\delta_1, \ldots, \delta_n)$ be the sum of the row vectors of the adjacency matrix $A(G)$. The elements of this vector are the degrees of the vertices of the graph $G$. Let $\delta_G$, $\Delta_G$, $\mu_G$ be the minimum, the maximum, resp. the arithmetic mean of the degrees in the graph.

By Rayleigh's theorem (see [8]), for a symmetric matrix $M = M^T \in \mathbb{R}^{n \times n}$ the minimum and maximum eigenvalue, $\lambda_M$ resp. $\Lambda_M$, can be expressed as
\[
\lambda_M = \min_{\|u\| = 1} u^T M u, \qquad \Lambda_M = \max_{\|u\| = 1} u^T M u.
\]
Attainment occurs if and only if $u$ is a unit eigenvector corresponding to $\lambda_M$ and $\Lambda_M$, respectively. By the Perron-Frobenius theorem (see [7], Theorem 9.1.3), for an elementwise nonnegative symmetric matrix $M = M^T \ge 0$ we have
\[
-\lambda_M \le \Lambda_M = u^T M u \qquad (1)
\]
for some $u \ge 0$, $u^T u = 1$.

The minimum and maximum eigenvalue of the adjacency matrix $A(G)$ will be denoted by $\lambda_G$ resp. $\Lambda_G$. It is a consequence of the Rayleigh and Perron-Frobenius theorems that for $A \in \mathcal{A}$,
\[
\Lambda_A \le \Lambda_{A(G)} = \Lambda_G. \qquad (2)
\]
Also, $\lambda_A$ (resp. $\Lambda_A$) as a function of the symmetric matrix $A$ is concave (resp. convex); in particular, the function $\lambda_A + \Lambda_A$ is continuous, and attains its minimum and maximum on the compact convex set $\mathcal{A}$.

The set of the $n$ by $n$ real symmetric positive semidefinite matrices will be denoted by $S^n_+$, that is,
\[
S^n_+ := \left\{ M \in \mathbb{R}^{n \times n} : M = M^T, \; u^T M u \ge 0 \ (u \in \mathbb{R}^n) \right\}.
\]
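
The notation above is easy to experiment with numerically. The following Python/NumPy sketch (our own illustration; the sample graph, variable names and tolerances are not from the paper) builds the adjacency matrix of the pentagon $C_5$, forms the complement, and checks the extreme eigenvalues and the nonnegativity of an eigenvector belonging to $\Lambda_G$.

```python
import numpy as np

# Adjacency matrix of the 5-cycle C_5 (a circulant 0-1 matrix).
A = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
n = A.shape[0]
A_bar = np.ones((n, n)) - np.eye(n) - A        # A(G-bar) = J - I - A(G)

eigvals, eigvecs = np.linalg.eigh(A)           # eigenvalues in ascending order
lam_G, Lam_G = eigvals[0], eigvals[-1]         # lambda_G ~ -1.618, Lambda_G = 2
u = eigvecs[:, -1]
u = u if u.sum() >= 0 else -u                  # Perron eigenvector, fixed to be >= 0
print(lam_G, Lam_G, np.all(u >= -1e-9))
```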

It is well known (see [8]) that the following statements are equivalent for a symmetric matrix $M = (m_{ij}) \in \mathbb{R}^{n \times n}$: a) $M \in S^n_+$; b) $\lambda_M \ge 0$; c) $M$ is a Gram matrix, that is, $m_{ij} = v_i^T v_j$ ($i, j = 1, \ldots, n$) for some vectors $v_1, \ldots, v_n$. Furthermore, by Lemma 2.1 in [10], the set $S^n_+$ can be described as
\[
S^n_+ = \left\{ \left( \frac{a_i^T a_j}{(a_i a_j^T)_{11}} - 1 \right)_{i,j=1}^n : \; m \in \mathbb{N}, \; a_i \in \mathbb{R}^m \ (1 \le i \le n), \; a_i^T a_i = 1 \ (1 \le i \le n) \right\}. \qquad (3)
\]
For example, diagonally dominant matrices (that is, $M = (m_{ij}) \in \mathbb{R}^{n \times n}$ with $m_{ii} \ge \sum_{j \ne i} |m_{ij}|$ for $i = 1, \ldots, n$) are positive semidefinite by Gerschgorin's disc theorem, see [8]. Hence,
\[
F_{s_1, \ldots, s_k} := k \left( J - A(K_{s_1, \ldots, s_k}) \right) - J \in S^n_+.
\]
(Note that $F_{1, \ldots, 1} = kI - J$ is diagonally dominant, and if $F_{1, \ldots, 1}$ is a Gram matrix then so is $F_{s_1, \ldots, s_k}$.)

The stability number, $\alpha(G)$, is the maximum cardinality of the (so-called stable) sets $S \subseteq V(G)$ such that $i, j \in S$ implies $\{i, j\} \notin E(G)$. The chromatic number, $\chi(G)$, is the minimum number of stable sets covering the vertex set $V(G)$. Let us define an orthonormal representation of the graph $G$ (shortly, o.r. of $G$) as a system of vectors $a_1, \ldots, a_n \in \mathbb{R}^m$ for some $m \in \mathbb{N}$, satisfying
\[
a_i^T a_i = 1 \ (i = 1, \ldots, n), \qquad a_i^T a_j = 0 \ (\{i, j\} \in E(\overline{G})).
\]
In the seminal paper [5] L. Lovász proved the following result, now popularly called the sandwich theorem, see [3]:
\[
\alpha(G) \le \vartheta(G) \le \chi(\overline{G}), \qquad (4)
\]
where $\vartheta(G)$ is the Lovász number of the graph $G$, defined as
\[
\vartheta(G) := \inf \left\{ \max_{1 \le i \le n} \frac{1}{(a_i a_i^T)_{11}} : \; a_1, \ldots, a_n \text{ o.r. of } G \right\}.
\]
The Lovász number has several equivalent descriptions, see [5]. For example, by (3) and standard semidefinite duality theory (see e.g. [9]), it is the common optimal value of the Slater-regular primal-dual semidefinite programs
\[
(TP) \quad \min \ \lambda, \quad x_{ii} = \lambda - 1 \ (i \in V(G)), \quad x_{ij} = -1 \ (\{i, j\} \in E(\overline{G})), \quad X = (x_{ij}) \in S^n_+, \ \lambda \in \mathbb{R},
\]
and
\[
(TD) \quad \max \ \mathrm{tr}\,(JY), \quad \mathrm{tr}\,(Y) = 1, \quad y_{ij} = 0 \ (\{i, j\} \in E(G)), \quad Y = (y_{ij}) \in S^n_+.
\]
(Here $\mathrm{tr}$ stands for trace.)
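
As an illustration of how the constraints of $(TD)$ translate into code, here is a hedged sketch in Python, assuming the CVXPY modeling library (with an SDP-capable solver installed) and a dense 0-1 adjacency matrix as input; the function name is ours and this is not a reference implementation.

```python
import cvxpy as cp
import numpy as np

def theta_via_TD(A):
    """Optimal value of (TD): max tr(JY) s.t. tr(Y) = 1, y_ij = 0 on E(G), Y PSD."""
    n = A.shape[0]
    Y = cp.Variable((n, n), symmetric=True)
    constraints = [Y >> 0, cp.trace(Y) == 1]
    constraints += [Y[i, j] == 0
                    for i in range(n) for j in range(i + 1, n) if A[i, j] == 1]
    return cp.Problem(cp.Maximize(cp.sum(Y)), constraints).solve()

# Example: for the 5-cycle the value should be close to sqrt(5) = 2.236...
C5 = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
print(theta_via_TD(C5))
```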

Note that for appropriately chosen $s_1, \ldots, s_k$ and $s$, the matrices
\[
X = F_{s_1, \ldots, s_k}, \qquad Y = \begin{pmatrix} J/s & 0 \\ 0 & 0 \end{pmatrix}
\]
are feasible solutions of programs $(TP)$ and $(TD)$, respectively, with values $\lambda = k = \chi(\overline{G})$ and $\mathrm{tr}\,(JY) = s = \alpha(G)$, resulting in a nice proof of the sandwich theorem (see [2]).

Analogously, the inverse theta number, $\iota(G)$, satisfies the inverse sandwich inequality
\[
(\alpha(G))^2 + n - \alpha(G) \le \iota(G) \le n \vartheta(G), \qquad (5)
\]
see [12]. Here the inverse theta number, defined as
\[
\iota(G) := \inf \left\{ \sum_{i=1}^n \frac{1}{(a_i a_i^T)_{11}} : \; a_1, \ldots, a_n \text{ o.r. of } G \right\},
\]
equals the common attained optimal value of the primal-dual semidefinite programs
\[
(TP') \quad \inf \ \mathrm{tr}\,(W) + n, \quad w_{ij} = -1 \ (\{i, j\} \in E(\overline{G})), \quad W = (w_{ij}) \in S^n_+,
\]
and
\[
(TD') \quad \sup \ \mathrm{tr}\,(JM), \quad m_{ii} = 1 \ (i = 1, \ldots, n), \quad m_{ij} = 0 \ (\{i, j\} \in E(G)), \quad M = (m_{ij}) \in S^n_+.
\]
Both bounds can be obtained via convex spectral optimization: obviously (compare with $(TP)$),
\[
\vartheta(G) = \min \left\{ \Lambda_Z : \; z_{ii} = 1 \ (i \in V(G)), \; z_{ij} = 1 \ (\{i, j\} \in E(\overline{G})), \; Z = (z_{ij}) = (z_{ji}) \in \mathbb{R}^{n \times n} \right\},
\]
and, similarly (compare with $(TP')$),
\[
\iota(G) = \min \left\{ n \Lambda_U - \mathrm{tr}\, U + n : \; u_{ij} = 1 \ (\{i, j\} \in E(\overline{G})), \; U = (u_{ij}) \in S^n_+ \right\}.
\]

In this paper we will consider spectral upper bounds on the stability number, different from both the Lovász number and the inverse theta number. Let us define
\[
\iota_1(G) := n - \frac{\Lambda_G}{\Sigma - \Lambda_G}, \qquad
\iota_2(G) := n - 1 - \Lambda_G - \lambda_G,
\]
\[
\iota_3(G) := \frac{1}{2} \left( n - \Lambda_G + \sqrt{(n - \Lambda_G)^2 + 4 \Lambda_G (n - 1 - \Lambda_G)} \right), \qquad
\iota_4(G) := n - 1 + \frac{\Sigma}{2} - \Lambda_G,
\]
where $\Sigma := (u_1 + \cdots + u_n)^2$ with
\[
u = (u_i) \in \mathbb{R}^n, \quad u \ge 0, \quad u^T u = 1, \quad u^T A(G) u = \Lambda_G.
\]
(Note that $\Lambda_G + 1 \le \Sigma \le n$ by the Cauchy-Schwarz inequality, hence each bound is at least $n - \Lambda_G$.)
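
Since the four bounds are elementary functions of $\Lambda_G$, $\lambda_G$ and $\Sigma$, they can be evaluated from a single eigendecomposition. A minimal NumPy sketch (function name ours; the sign fix for $u$ assumes the leading eigenvalue is simple, e.g. $G$ connected):

```python
import numpy as np

def four_upper_bounds(A):
    """Return (iota_1, iota_2, iota_3, iota_4) for a 0-1 adjacency matrix A = A(G)."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eigh(A)
    lam, Lam = eigvals[0], eigvals[-1]          # lambda_G and Lambda_G
    u = eigvecs[:, -1]
    u = u if u.sum() >= 0 else -u               # nonnegative unit eigenvector for Lambda_G
    Sigma = u.sum() ** 2                        # (u_1 + ... + u_n)^2
    iota1 = n - Lam / (Sigma - Lam)
    iota2 = n - 1 - Lam - lam
    iota3 = 0.5 * (n - Lam + np.sqrt((n - Lam) ** 2 + 4 * Lam * (n - 1 - Lam)))
    iota4 = n - 1 + Sigma / 2 - Lam
    return iota1, iota2, iota3, iota4

# Example: the 5-cycle has alpha = 2; the bounds evaluate to about (4.33, 3.62, 4.0, 4.5).
C5 = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
print(four_upper_bounds(C5))
```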

We will prove in Sections 2, 3, 4, and 4, respectively, that the inequalities
\[
\alpha(G) \le \iota_1(G), \; \iota_2(G), \; \iota_3(G), \; \iota_4(G) \qquad (6)
\]
hold. These upper bounds are efficiently computable via methods in [8]. (For lower bounds on $\alpha(G)$, see e.g. [11].)

Several open problems arise: Can the four bounds give better results than other upper bounds (see e.g. [11]), such as $\Lambda_{\overline{G}} + 1$ (Wilf's upper bound for the chromatic number of the complementary graph), or $\vartheta(G)$, $n + 1 - \vartheta(\overline{G})$? How do they relate to $\chi(\overline{G})$, $n + 1 - \chi(G)$? How do they relate to each other? These questions (partially answered in the paper) need further investigation.

2 The counterpart of Wilf's bound

In this section we will describe the counterpart of Wilf's lower bound on the stability number. The spectral upper bound $\iota_1(G)$ is derived via estimating from above the maximum eigenvalue of the adjacency matrix.

In [13] Wilf proved, as a consequence of a theorem of Motzkin and Straus, the relation
\[
\frac{n}{n - \Lambda_G} \le \alpha(\overline{G}). \qquad (7)
\]
The next proposition describes a weaker form of (7).

PROPOSITION 2.1. For any graph $G$, $\chi(G) \ge n / (n - \Lambda_G)$.

Proof. Let $S_1, \ldots, S_k$ be stable sets in $G$ with cardinality $s_1, \ldots, s_k$, respectively, such that $s_1 + \cdots + s_k = n$. Then $G$ is a subgraph of $H = K_{s_1, \ldots, s_k}$. In other words,
\[
0 \le A(G) \le A(H) = J - \frac{F_{s_1, \ldots, s_k} + J}{k},
\]
implying, by $F_{s_1, \ldots, s_k} \in S^n_+$, that
\[
\Lambda_G \le \Lambda_H \le \frac{(k - 1) n}{k}.
\]
As here $k = \chi(G)$ can be chosen, the statement follows.
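
As a quick numerical illustration of Proposition 2.1 (our own sketch; NumPy assumed, names ours): for the 5-cycle the spectral lower bound $n/(n - \Lambda_G)$ evaluates to about $1.67$, while a greedy colouring certifies $\chi(C_5) \le 3$ (and indeed $\chi(C_5) = 3$).

```python
import numpy as np

def greedy_colouring_bound(A):
    """Greedy colouring in vertex order: an upper bound on chi(G)."""
    n = A.shape[0]
    colour = [-1] * n
    for v in range(n):
        used = {colour[w] for w in range(n) if A[v, w] == 1 and colour[w] >= 0}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return max(colour) + 1

C5 = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
Lam = np.linalg.eigvalsh(C5)[-1]
print(5 / (5 - Lam), greedy_colouring_bound(C5))   # ~1.67 <= chi(C_5) <= 3
```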

The counterpart of Proposition 2.1 can be proved similarly, and leads us to the bound $\iota_1(G)$.

PROPOSITION 2.2. For any graph $G$, the inequality
\[
\alpha(G) \le n - \frac{\Lambda_G}{n - \Lambda_G} =: \check{\iota}_1(G)
\]
holds.

Proof. Let $\{1, \ldots, s\}$ be a stable set in $G$. Then $G$ is a subgraph of the graph $H = K_{s, 1, \ldots, 1}$. In other words,
\[
0 \le A(G) \le A(H) = J - \frac{F_{s, 1, \ldots, 1} + J}{n - s + 1}.
\]
Thus, for the maximal eigenvalues the inequalities
\[
\Lambda_G \le \Lambda_H \le \frac{(n - s) n}{n - s + 1}
\]
hold, by $F_{s, 1, \ldots, 1} \in S^n_+$. The statement follows with $s = \alpha(G)$.

In [13] Wilf used the method of variables to strengthen the bound in (7): he proved the relation
\[
\frac{\Sigma}{\Sigma - \Lambda_G} \le \alpha(\overline{G}). \qquad (8)
\]
Analogously, the proof of Proposition 2.2 can easily be adapted to imply

THEOREM 2.1. For any graph $G$, $\alpha(G) \le \iota_1(G)$ holds.

Note that (8) implies the stronger relation
\[
\alpha(G) \le \chi(\overline{G}) \le n + 1 - \alpha(\overline{G}) \le \iota_1(G) \qquad (9)
\]
also, but the proof of Theorem 2.1 does not use the Motzkin-Straus theorem.

3 The counterpart of Hoffman's bound

In this section we describe the counterpart of a spectral lower bound for the chromatic number due to Hoffman. The proof relies on estimating from above the minimum eigenvalue of the adjacency matrix.

Hoffman's theorem (see e.g. [4]) states that for any graph $G$,
\[
\chi(G) \ge 1 + \frac{\Lambda_G}{-\lambda_G}. \qquad (10)
\]
The proof remains valid for an arbitrary matrix $A \in \mathcal{A}$ instead of $A(G)$, and the strongest bound obtained this way (by the so-called method of variables) is the Lovász number $\vartheta(\overline{G})$ (see [5], [6]).
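
For comparison, Hoffman's bound (10) is tight, for example, on complete bipartite graphs. A short numerical check (a sketch, same assumptions as before) for $K_{3,3}$, where $\Lambda = 3$, $\lambda = -3$ and $\chi = 2$:

```python
import numpy as np

# K_{3,3}: two classes of three vertices, all edges between the classes.
A = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.ones((3, 3)))
w = np.linalg.eigvalsh(A)
print(1 + w[-1] / (-w[0]))   # Hoffman's lower bound: 2.0 = chi(K_{3,3})
```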

The proof of the counterpart closely follows the proof of (10).

THEOREM 3.1. For any graph $G$, $\alpha(G) \le \iota_2(G)$.

Proof. Let $A := A(G)$, and suppose that $\{1, \ldots, s\}$ is a stable set in $G$ for some $1 \le s \le n - 1$. Then the matrix $A$ can be partitioned as
\[
A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},
\]
where $A_{11} = 0 \in \mathbb{R}^{s \times s}$, $A_{12} = A_{21}^T$, $A_{22} = A_{22}^T \in \mathbb{R}^{(n-s) \times (n-s)}$.

Let $x \in \mathbb{R}^n$ be an eigenvector corresponding to the eigenvalue $\Lambda_A$. Let $x = (x_1^T, x_2^T)^T$, where $x_1 \in \mathbb{R}^s$, $x_2 \in \mathbb{R}^{n-s}$. Let us denote by $y_1 \in \mathbb{R}^s$ the vector with first element $\|x_1\|$, otherwise zero, and let us define similarly the vector $y_2 \in \mathbb{R}^{n-s}$, too. Let $y \in \mathbb{R}^n$ be the vector obtained by stacking the vectors $y_1$, $y_2$ on top of each other. Let us choose orthogonal matrices $B_1 \in \mathbb{R}^{s \times s}$, $B_2 \in \mathbb{R}^{(n-s) \times (n-s)}$ such that $B_1 y_1 = x_1$ and $B_2 y_2 = x_2$ hold. Let $B$ be the block-diagonal matrix formed by the matrices $B_1$, $B_2$. Then $B \in \mathbb{R}^{n \times n}$ is an orthogonal matrix, $By = x$, and
\[
B^{-1} A B y = B^{-1} A x = \Lambda_A B^{-1} x = \Lambda_A y.
\]
Hence, the vector $y$ is an eigenvector (with eigenvalue $\Lambda_A$) of the matrix $B^{-1} A B = (B_i^{-1} A_{ij} B_j)_{i,j=1,2}$.

Let us consider the submatrix $C = (c_{ij}) \in \mathbb{R}^{2 \times 2}$, $C = ((B_i^{-1} A_{ij} B_j)_{11})_{i,j=1,2}$. As $B^{-1} A B y = \Lambda_A y$, so $Cz = \Lambda_A z$ with the 2-vector $z := (\|x_1\|, \|x_2\|)^T$, implying $\Lambda_A \le \Lambda_C$. By $A_{11} = 0$, we have $c_{11} = 0$, thus the trace of the matrix $C$ equals $\Lambda_C + \lambda_C = c_{22}$. Furthermore, as $A_{22} \le J - I$, the matrix $(n - s - 1) I - A_{22}$ is diagonally dominant, necessarily positive semidefinite. Hence, the inequalities
\[
c_{22} \le \Lambda_{B_2^{-1} A_{22} B_2} = \Lambda_{A_{22}} \le n - s - 1
\]
hold. Cauchy's theorem on interlacing eigenvalues (see [8]) gives $\lambda_A \le \lambda_C$ and $\Lambda_C \le \Lambda_A$. Summarizing, we have
\[
\lambda_A \le \lambda_C = c_{22} - \Lambda_C = c_{22} - \Lambda_A \le n - s - 1 - \Lambda_A,
\]
where $s$ can be chosen to be the stability number $\alpha(G)$. This completes the proof of the theorem.

We have already mentioned that $\iota_2(G) \ge n - \Lambda_G$, but more can be claimed:

PROPOSITION 3.1. For any graph $G$,
\[
\iota_2(G) \ge n - 1 - \Lambda_G + \frac{\Lambda_G}{\vartheta(\overline{G}) - 1} =: \hat{\iota}_2(G)
\]
holds.

Proof. By the remark preceding Theorem 3.1, we have
\[
\lambda_A \le - \frac{\Lambda_A}{\vartheta(\overline{G}) - 1},
\]
where $A := A(G)$. Hence,
\[
\iota_2(G) \ge n - 1 - \Lambda_A \left( 1 - \frac{1}{\vartheta(\overline{G}) - 1} \right),
\]
and the statement follows from (2).

Both proofs can be carried through with $A \in \mathcal{A}$ instead of $A = A(G)$, which means that
\[
\iota_2(G, A) \ge \alpha(G), \; \hat{\iota}_2(G) \quad \text{for } A \in \mathcal{A}, \qquad \text{where } \iota_2(G, A) := n - 1 - \Lambda_A - \lambda_A.
\]
By compactness of the set $\mathcal{A}$ there exists an optimal matrix $A^* \in \mathcal{A}$ such that
\[
\iota_2(G, A^*) = \min \{ \iota_2(G, A) : A \in \mathcal{A} \}.
\]
As concluding remarks in this section, we will show examples when $A^* \ne A(G)$, and when $A^* = A(G)$ meets the requirements.

First, note that there exists a matrix $B \in \mathcal{A}$ such that $\iota_2(G, B) \le n + 1 - \vartheta(\overline{G})$. In fact, let us choose a matrix $B = (b_{ij}) \in \mathcal{A}$ satisfying
\[
\vartheta(\overline{G}) = 1 + \frac{\Lambda_B}{-\lambda_B}
\]
(by the remark preceding Theorem 3.1 there exists such an optimal matrix $B$). We can assume that for some indices $i \ne j$, $b_{ij} = 1$. Then, from Rayleigh's theorem, $\lambda_B \le -1$ follows. Moreover, we have
\[
\Lambda_B + \lambda_B = -\lambda_B (\vartheta(\overline{G}) - 2) \ge 0.
\]
Summarizing, we obtain
\[
n + 1 - \vartheta(\overline{G}) = n - 1 + \frac{\Lambda_B + \lambda_B}{\lambda_B} \ge n - 1 - \Lambda_B - \lambda_B = \iota_2(G, B).
\]

On the other hand, for the perfect graph
\[
G_0 := K_3 + K_{2,2}
\]
we have
\[
n + 1 - \vartheta(\overline{G_0}) = 5 < 6 = n - 1 - \Lambda_{G_0} - \lambda_{G_0};
\]
we can see that in this case $A^* \ne A(G_0)$.

Finally, note that $\hat{\iota}_2(G) = n + 1 - \vartheta(\overline{G})$ if and only if $\vartheta(\overline{G}) = 2$ (e.g. for a bipartite graph) or $\vartheta(\overline{G}) = \Lambda_G + 1$ (e.g. when $G = K_{s_1} + \cdots + K_{s_k}$). Hence, for bipartite graphs and for disjoint unions of cliques we have $A^* = A(G)$, with $\iota_2(G) = n - 1$ and $\iota_2(G) = n - \Lambda_G$, respectively.

4 Variants

In this section we describe two further spectral upper bounds on the stability number, derived via similar methods, and hence considered as variants of $\iota_1(G)$. In order to derive the bound $\iota_3(G)$ we will use the following technical lemma.

LEMMA 4.1. Let $1 \le s \le n - 1$, and let $M := I + A(\overline{K_s + \overline{K}_{n-s}})$, that is, the matrix with an $s \times s$ identity block in the upper-left corner and all other entries equal to one. Then,
\[
\Lambda_M = \frac{1}{2} \left( n - s + 1 + \sqrt{(n - s + 1)^2 + 4 (s - 1)(n - s)} \right)
\]
is the maximum eigenvalue of the matrix $M$.

Proof. The eigenvalue $\Lambda_M$ can be rewritten as
\[
\Lambda_M = \min \{ \lambda \in \mathbb{R} : \lambda I - M \in S^n_+ \}.
\]
For $\lambda \le 1$, $\lambda I - M \notin S^n_+$, as then the diagonal elements of $\lambda I - M$ are nonpositive. For $\lambda > 1$, the matrix $\lambda I - M \in \mathbb{R}^{n \times n}$ is positive semidefinite if and only if the Schur complement of its positive definite principal submatrix $(\lambda - 1) I \in \mathbb{R}^{s \times s}$ is positive semidefinite (see [7]). In other words, for $\lambda > 1$, $\lambda I - M \in S^n_+$ if and only if
\[
\lambda I - J - (-J) ((\lambda - 1) I)^{-1} (-J) \in S^{n-s}_+. \qquad (11)
\]
Here (11) can easily be seen to be equivalent to the inequality
\[
\lambda - \left( 1 + \frac{s}{\lambda - 1} \right) (n - s) \ge 0; \qquad (12)
\]
$\Lambda_M$ will be the least positive solution of (12), as stated.
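
Lemma 4.1 is easy to sanity-check numerically: one can build the matrix $M$ described above for given $n$ and $s$ and compare its largest eigenvalue with the closed-form expression (a sketch, NumPy assumed, names ours):

```python
import numpy as np

def check_lemma_41(n, s):
    M = np.ones((n, n))
    M[:s, :s] = np.eye(s)            # s x s identity block, ones elsewhere
    Lam_M = np.linalg.eigvalsh(M)[-1]
    formula = 0.5 * (n - s + 1 + np.sqrt((n - s + 1) ** 2 + 4 * (s - 1) * (n - s)))
    return Lam_M, formula

print(check_lemma_41(10, 4))         # both values should be 9.0
```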

If the set $\{1, \ldots, s\}$ is a stable set in $G$, then $0 \le A(G) \le M - I$, where $M$ is the same matrix as in Lemma 4.1. Hence, for the maximum eigenvalues $\Lambda_G \le \Lambda_M - 1$ holds, and we can easily derive, from Lemma 4.1, the following

THEOREM 4.1. For any graph $G$, $\alpha(G) \le \iota_3(G)$ holds.

The bound $\iota_3(G)$ is exact e.g. for the complete bipartite graphs $G = K_{1,s}$.

We remark that the proof of Theorem 4.1 can be carried through for any matrix $A \in \mathcal{A}$ instead of $A(G)$, but this way we obtain weaker bounds than $\iota_3(G)$. In fact, the bound, as a function of $\Lambda_A$, is strictly monotone decreasing on the interval $0 \le \Lambda_A \le n - 1$ (the first derivative of the function is negative on this interval). This means that we get the strongest bound when $\Lambda_A$ is maximal, that is, when $A = A(G)$.

The next proposition, too, is immediate from the fact that for any graph $G$, $0 \le \mu_G \le \Lambda_G \le n - 1$ (see the exercises in [4]).

PROPOSITION 4.1. With
\[
\iota_3(\Lambda) := \frac{1}{2} \left( n - \Lambda + \sqrt{(n - \Lambda)^2 + 4 \Lambda (n - 1 - \Lambda)} \right) \quad (\Lambda \in \mathbb{R}),
\]
we have
\[
n - \Lambda_G \le \iota_3(\Lambda_G) \le \iota_3(\mu_G) \le n
\]
for any graph $G$.

We remark also that
\[
\iota_3(G) \le \check{\iota}_1(G) \qquad (13)
\]
(as can easily be verified), but it is an open problem whether $\iota_3(G) \le \iota_1(G)$ holds or not, generally.

Now we turn to the bound $\iota_4(G)$. With a minor modification of the proof of Proposition 2.2, we obtain a close variant, $\check{\iota}_4(G)$.

PROPOSITION 4.2. For any graph $G$, the inequality
\[
\alpha(G) \le \frac{3n}{2} - 1 - \Lambda_G =: \check{\iota}_4(G)
\]
holds.

Proof. Let $\{1, \ldots, s\}$ be a stable set in $G$. Then $G$ is a subgraph of the graph $H = \overline{K_s + \overline{K}_{n-s}}$. In other words,
\[
0 \le A(G) \le A(H) = A(K_{s, n-s}) + A(\overline{K}_s + K_{n-s}).
\]
Thus, for the maximal eigenvalues the inequalities
\[
\Lambda_G \le \Lambda_H \le \Lambda_{K_{s, n-s}} + \Lambda_{\overline{K}_s + K_{n-s}}
\]

hold. Here $\Lambda_{\overline{K}_s + K_{n-s}} = n - s - 1$, and, by
\[
\frac{F_{s, n-s}}{2} = \frac{J}{2} - A(K_{s, n-s}) \in S^n_+,
\]
we have $\Lambda_{K_{s, n-s}} \le n/2$. Hence,
\[
\Lambda_G \le \frac{n}{2} + n - s - 1,
\]
from which, with $s = \alpha(G)$, the statement follows.

As in the case of Proposition 2.2 and Theorem 2.1, the proof of Proposition 4.2 can easily be adapted to imply

THEOREM 4.2. For any graph $G$, $\alpha(G) \le \iota_4(G)$ holds.

The following proposition is the analogue of Propositions 3.1 and 4.1.

PROPOSITION 4.3. For any graph $G$, $\iota_4(G) \ge n/2$.

Proof. Let $A := A(G)$. Then the matrix
\[
B := \left( \frac{n}{2} - 1 \right) I + \frac{J}{2} - A
\]
is diagonally dominant, implying $B \in S^n_+$. Consequently, we have
\[
n - 1 + \frac{u^T J u}{2} - \Lambda_A = n - 1 + u^T \left( \frac{J}{2} - A \right) u \ge n - 1 - \left( \frac{n}{2} - 1 \right) = \frac{n}{2}
\]
for all nonnegative unit eigenvectors $u$ corresponding to $\Lambda_G$, which was to be proved.

Finally, we mention an open problem. The minimum eigenvalue $\lambda_G$ of a graph $G$ is negative; in particular, the corresponding unit eigenvector $v \in \mathbb{R}^n$ has both positive and negative coordinates. Writing the eigenvector $v$ as the difference of its positive and negative parts (i.e. $v = v_+ - v_-$, where $v_+$, $v_-$ are nonnegative, orthogonal $n$-vectors), we have
\[
0 > \lambda_G = v^T A(G) v \ge -2 v_+^T A(G) v_- \ge -2 v_+^T J v_-,
\]
and it is not hard to conclude that $\lambda_G \ge -n/2$. This result, with a different proof, is due to Constantine, see [1], and implies, in particular, the relation
\[
\iota_2(G) \le \check{\iota}_4(G). \qquad (14)
\]
It would be interesting to see a similar proof of the conjecture $\iota_2(G) \le \iota_4(G)$ via showing the inequality $\lambda_G \ge -(u^T J u)/2$ for all nonnegative unit eigenvectors $u$ corresponding to $\Lambda_G$. (The relation $\iota_2(G) \le \iota_4(G)$ can easily be verified e.g. for bipartite graphs or for disjoint unions of cliques.)
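
The inequalities of Sections 2-4, as well as the conjectured relation $\iota_2(G) \le \iota_4(G)$, can be explored empirically. The following sketch is our own test harness (NumPy assumed, brute-force stability number, random graphs; graphs with no edges or with a sign-mixed leading eigenvector are skipped); such a search proves nothing, it merely looks for counterexamples on small graphs.

```python
import itertools
import numpy as np

def stability_number(A):
    """Brute-force alpha(G); exponential, only for very small graphs."""
    n = A.shape[0]
    for r in range(n, 0, -1):
        for S in itertools.combinations(range(n), r):
            if all(A[i, j] == 0 for i, j in itertools.combinations(S, 2)):
                return r
    return 0

rng = np.random.default_rng(0)
for _ in range(200):
    n = 8
    A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
    A = A + A.T
    eigvals, eigvecs = np.linalg.eigh(A)
    lam, Lam = eigvals[0], eigvals[-1]
    u = eigvecs[:, -1]
    u = u if u.sum() >= 0 else -u
    if A.sum() == 0 or not np.all(u >= -1e-9):
        continue                                   # skip degenerate cases
    Sigma = u.sum() ** 2
    bounds = (n - Lam / (Sigma - Lam),             # iota_1
              n - 1 - Lam - lam,                   # iota_2
              0.5 * (n - Lam + np.sqrt((n - Lam) ** 2 + 4 * Lam * (n - 1 - Lam))),
              n - 1 + Sigma / 2 - Lam)             # iota_3, iota_4
    assert stability_number(A) <= min(bounds) + 1e-9   # the proven inequalities (6)
    if bounds[1] > bounds[3] + 1e-9:
        print("possible counterexample to iota_2 <= iota_4 found")
```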

5 Conclusion

In this paper we studied spectral upper bounds on the stability number of a graph, counterparts of classical bounds due to Wilf and Hoffman. Several questions arose and were partially answered: for example, concerning the relation of the spectral bounds introduced in the paper to each other and to the chromatic number of the complementary graph.

Acknowledgements. I thank Mihály Hujter for the $K_3 + K_{2,2}$ example in Section 3.

References

1. G. Constantine, Lower bounds on the spectra of symmetric matrices with nonnegative entries. Linear Algebra and its Applications, 65.
2. E. de Klerk, Interior Point Methods for Semidefinite Programming. PhD Thesis, Technische Universiteit Delft, Delft.
3. D. Knuth, The sandwich theorem. Electronic Journal of Combinatorics, 1:1-48.
4. L. Lovász, Combinatorial Problems and Exercises. Akadémiai Kiadó, Budapest.
5. L. Lovász, On the Shannon capacity of a graph. IEEE Transactions on Information Theory, IT-25(1):1-7, 1979.
6. L. Lovász, Semidefinite programs and combinatorial optimization. In: B.A. Reed and C.L. Sales, eds., Recent Advances in Algorithms and Combinatorics, CMS Books in Mathematics, Springer.
7. P. Rózsa, Lineáris Algebra és Alkalmazásai. Tankönyvkiadó, Budapest.
8. G. Strang, Linear Algebra and its Applications. Academic Press, New York.
9. M. Ujvári, A note on the graph-bisection problem. Pure Mathematics and Applications, 12(1).
10. M. Ujvári, New descriptions of the Lovász number, and the weak sandwich theorem. Acta Cybernetica, 20(4).
11. M. Ujvári, Strengthening weak sandwich theorems in the presence of inconnectivity. Submitted to Acta Cybernetica.

12. M. Ujvári, Applications of the inverse theta number in stable set problems. Accepted for publication at Acta Cybernetica.
13. H.S. Wilf, Spectral bounds for the clique and independence numbers of graphs. Journal of Combinatorial Theory, Series B, 40(1), 1986.
