New Lower Bounds on the Stability Number of a Graph
E. Alper Yıldırım

June 27, 2007

Abstract

Given a simple, undirected graph G, Motzkin and Straus [Canadian Journal of Mathematics, 17 (1965), 533–540] established that the reciprocal of the stability number of G (the size of a maximum stable set of G) is given by the minimum value of a certain quadratic function over the unit simplex. We propose two new lower bounds on the stability number of G based on this formulation. The first lower bound is obtained by minimizing the same objective function over the largest inscribed ball in the unit simplex. Using the fact that quadratic optimization over a full-dimensional ball admits a tight semidefinite programming relaxation, our lower bound can be computed to within any arbitrary precision in polynomial time. For regular graphs, we establish that this lower bound has a closed-form solution and that it is tighter than some other existing lower bounds. The second lower bound improves upon the first and is obtained by a further refinement of the optimal solution that yields the first bound. We evaluate the new bounds and compare them with several other known lower bounds on the DIMACS collection of clique problems. Our computational results reveal that the improved lower bound in particular is tighter than all other lower bounds on the majority of the instances.

Key words: Maximum stable set, maximum clique, stability number, clique number, semidefinite programming.

Department of Industrial Engineering, Bilkent University, Bilkent, Ankara, Turkey (yildirim@bilkent.edu.tr)
AMS Subject Classifications: 90C35, 90C22, 65K05, 90C20

1 Introduction

Let G = (V, E) be a simple, undirected graph with vertex set V = {1, 2, ..., n} and edge set E consisting of m edges, where each edge is identified with the unordered pair of its end vertices. A pair of vertices in V is said to be adjacent if they are connected by an edge in E. A set S ⊆ V is a stable set of G if the vertices in S are mutually nonadjacent. The cardinality of a maximum stable set of G is called the stability number of G and is denoted by α(G). A clique C ⊆ V is a set of mutually adjacent vertices. Similarly, the clique number of G, denoted by ω(G), is the size of a maximum clique in G. For a graph G = (V, E), the complement graph Ḡ = (V, Ē) is obtained from G by removing the edges of G and connecting each pair of nonadjacent vertices of G by an edge. Clearly, S ⊆ V is a stable set of G if and only if S is a clique of Ḡ. It follows that α(G) = ω(Ḡ). It is well known that computing the stability number (equivalently, the clique number) of a graph is in general an NP-hard problem. The recent survey paper by Bomze et al. [1] provides an account of the fairly rich literature, including applications, formulations, exact algorithms, heuristics, and bounds and estimates. In fact, it is not only difficult to compute the exact stability number, but no efficient algorithm can compute a good approximation to it: Håstad [5] proved that the stability number cannot be approximated within a factor of n^(1/2−ε) for any ε > 0 unless P = NP. Under a slightly stronger complexity assumption, the factor can be improved to n^(1−ε). However, the stability number can be computed in polynomial time for certain classes of graphs, such as perfect graphs and t-perfect graphs [4]. In this paper, we focus on computing lower bounds on the stability number of a given graph G.
Similarly to the other known lower bounds [12, 2], our bounds rely on the continuous formulation of Motzkin and Straus [7], who established that the reciprocal of the stability number of a graph is given by the minimum value of a certain, usually nonconvex, quadratic function over the unit simplex. We consider minimizing the same quadratic
objective function over the largest ball inscribed in the unit simplex. It is now well known that quadratic optimization over a full-dimensional ball can be solved in polynomial time using a tight semidefinite programming relaxation (see, e.g., [9, 8, 10]). Since the optimal value of the latter optimization problem is an upper bound on the reciprocal of the stability number, the reciprocal of this optimal value provides our first lower bound on the stability number. For regular graphs, we establish that this lower bound has a closed-form solution. Furthermore, for this class of graphs, the new bound is strictly better than some of the other known lower bounds on the stability number. The second lower bound is obtained by a further refinement of the optimal solution of the quadratic optimization problem that gives rise to the first lower bound. More specifically, the optimal solution lies inside the largest ball inscribed in the unit simplex. A family of feasible solutions in the unit simplex is constructed by extending this optimal solution appropriately until it reaches the boundary of the unit simplex. The improved lower bound is obtained by further minimizing the quadratic objective function over this family of feasible solutions, which lies on a line segment. It follows that the latter lower bound is at least as large as the former one. In an attempt to compare the new lower bounds with the other known bounds, we have performed computational experiments on the DIMACS collection of clique instances. Our experiments reveal that the improved lower bound in particular provides a competitive alternative to the other known bounds. This paper is organized as follows. In the remainder of this section, we define our notation. Section 2 discusses the continuous formulation of Motzkin and Straus [7] and reviews the known lower bounds on the stability number. Section 3 presents the first lower bound and establishes several of its properties. The improved lower bound is discussed in Section 4.
The results of the computational experiments are presented in Section 5. Section 6 concludes the paper.
1.1 Notation

R^n and S^n denote the n-dimensional Euclidean space and the space of n × n real symmetric matrices, respectively. For u ∈ R^n, u_i denotes the ith component of u and ||u|| represents its Euclidean norm. For a graph G = (V, E) with V = {1, ..., n}, A_G ∈ S^n denotes the adjacency matrix of G. The complete graph on n vertices is denoted by K_n. For A ∈ S^n, we use A ⪰ 0 (A ≻ 0) to indicate that A is positive semidefinite (positive definite). For X ∈ S^n and Y ∈ S^n, the trace inner product is denoted by X • Y = Σ_{i=1}^n Σ_{j=1}^n X_ij Y_ij. The identity matrix in S^n is denoted by I_n. We reserve e to denote the vector of all ones in the appropriate dimension and e_j to represent the unit vector whose jth component is 1. We use E_i ∈ S^n for the symmetric matrix e_i (e_i)^T, i = 1, ..., n. The (n − 1)-dimensional unit simplex in R^n is denoted by Δ_n, i.e., Δ_n := {x ∈ R^n : e^T x = 1, x ≥ 0}.

2 Formulation and Lower Bounds

Given a simple, undirected graph G = (V, E), Motzkin and Straus [7] established the following:

    1/α(G) = min_{x ∈ Δ_n} x^T (I_n + A_G) x.    (1)

Similarly, the clique number ω(G) satisfies

    1 − 1/ω(G) = max_{x ∈ Δ_n} x^T A_G x.    (2)

While solving (1) or (2) is in general NP-hard, each one provides a continuous formulation of a combinatorial optimization problem. Furthermore, these formulations play a central role in the derivation of several known lower bounds on the stability number or the clique number of G. Using the fact that x = (1/n)e ∈ Δ_n, it follows from (1) that

    α(G) ≥ n²/(n + 2m),    (3)

where m = |E| is the number of edges of G. This bound matches the stability number for complete graphs K_n and their complements.
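As a small numerical illustration (a sketch, not code from the paper; the graph and variable names are ours), evaluating the Motzkin-Straus objective of (1) at the uniform feasible point x = (1/n)e recovers the bound (3). For the 5-cycle C_5, the bound is 25/15 = 5/3, while α(C_5) = 2:

```python
import numpy as np

# A sketch (not from the paper): evaluate x^T (I_n + A_G) x at x = (1/n)e,
# which yields the simple lower bound (3).  Graph: the 5-cycle C_5.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
m = len(edges)

x = np.full(n, 1.0 / n)                  # the uniform point of the unit simplex
obj = x @ (np.eye(n) + A) @ x            # equals (n + 2m)/n^2 = 0.6
bound3 = 1.0 / obj                       # bound (3): n^2/(n + 2m) = 5/3
print(obj, bound3)                       # 0.6 and about 1.667; alpha(C_5) = 2
```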
The other lower bounds in the literature are based on combining the formulations (1) and (2) with the spectral theory of graphs. We now collect some results about the spectra of graphs. The reader is referred to [3] for further details.

Theorem 2.1 Let G = (V, E) be a graph with adjacency matrix A_G ∈ S^n, and let λ_1 ≤ λ_2 ≤ ... ≤ λ_n denote the spectrum of A_G.

1. Σ_{i=1}^n λ_i = 0.

2. If G contains no edges, then λ_1 = ... = λ_n = 0.

3. If G contains at least one edge, then 1 ≤ λ_n ≤ n − 1 and λ_n ≥ −λ_1 ≥ 1, i.e., λ_n is the spectral radius of A_G.

4. λ_n = n − 1 if and only if G is a complete graph.

5. λ_n = 1 if and only if the components of G consist of graphs K_2 and possibly K_1.

6. λ_1 = −1 if and only if the components of G are complete graphs.

7. λ_1 = −λ_n if and only if the component of G with the largest eigenvalue λ_n is a bipartite graph.

8. A_G is irreducible if and only if G is connected. In this case, there exists a positive eigenvector x_P ∈ R^n, called the Perron eigenvector, corresponding to λ_P = λ_n, called the Perron root.

9. e ∈ R^n is an eigenvector of A_G corresponding to λ_n if and only if G is a regular graph.

10. A_G has exactly one positive eigenvalue if and only if the nonisolated vertices of G form a complete multipartite graph.

Under the assumption that Ḡ is a connected graph, it follows from part 8 of Theorem 2.1 that A_Ḡ is an irreducible, symmetric, nonnegative matrix. Therefore, let λ_P > 0 and x_P ∈ R^n
denote the Perron root and the positive Perron eigenvector of A_Ḡ, respectively. Using the feasible solution (1/s_P) x_P ∈ Δ_n of (2) applied to Ḡ, where s_P := e^T x_P, Wilf [12] established that

    α(G) = ω(Ḡ) ≥ s_P²/(s_P² − λ_P),    (4)

with equality if and only if Ḡ is a complete graph. The lower bound (4) is an improvement over the lower bound (3). More recently, Budinich [2] proposed a new lower bound that makes use of all the eigenvectors of A_Ḡ. In particular, if {x^1, x^2, ..., x^{n−1}, x_P} denotes the set of eigenvectors of A_Ḡ, one can construct a family of unit vectors

    y^j(μ) = μ x^j + √(1 − μ²) x_P ∈ R^n,  j = 1, 2, ..., n − 1.

Then, z^j(μ) := (1/(e^T y^j(μ))) y^j(μ) is a feasible solution of (2) for j = 1, 2, ..., n − 1 as long as μ ∈ [l_j, u_j], where l_j ≤ 0 ≤ u_j are chosen to ensure the nonnegativity of z^j(μ). Let g_j(μ) := z^j(μ)^T A_Ḡ z^j(μ) for j = 1, ..., n − 1, and let g* := max_{j=1,...,n−1} max_{μ ∈ [l_j, u_j]} g_j(μ). It follows from (2) that

    α(G) = ω(Ḡ) ≥ 1/(1 − g*).    (5)

Unless Ḡ is a complete multipartite graph, Budinich shows that (5) strictly improves upon (4). A comparison of the three lower bounds reveals that (3) is the easiest to compute and is provably the weakest one. While (4) requires only the computation of the Perron root and the Perron eigenvector, one needs the full spectrum and the full set of eigenvectors to compute (5).

3 A New Lower Bound

In this section, we propose a new lower bound based on the continuous formulation (1). The new bound is obtained by minimizing the objective function in (1) over the largest ball inscribed in the unit simplex. We start with the characterization of the largest ball inscribed in the unit simplex.
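The spectral bound (4) above can be sketched numerically as follows (a hedged example, not the paper's code; the helper name `wilf_bound` is ours, and we rely on numpy's `eigh` returning eigenvalues in ascending order):

```python
import numpy as np

# A hedged sketch of Wilf's bound (4):
# alpha(G) = omega(G-bar) >= s_P^2 / (s_P^2 - lam_P), where lam_P and x_P are
# the Perron root and unit Perron eigenvector of the complement's adjacency
# matrix and s_P = e^T x_P.
def wilf_bound(A_complement):
    lam, vec = np.linalg.eigh(A_complement)        # eigenvalues in ascending order
    lam_P = lam[-1]                                # Perron root
    x_P = vec[:, -1]
    x_P = x_P if x_P.sum() > 0 else -x_P           # fix the sign of the eigenvector
    s_P = x_P.sum()                                # e^T x_P for a unit eigenvector
    return s_P**2 / (s_P**2 - lam_P)

# Example: G = C_5, whose complement is again a 5-cycle (2-regular, connected);
# by Proposition 3.3 below, the bound then equals n^2/(n + 2m) = 5/3.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A_comp = np.ones((n, n)) - np.eye(n) - A
print(wilf_bound(A_comp))                          # 5/3, while alpha(C_5) = 2
```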
Lemma 3.1 Let U := [u^1, ..., u^{n−1}] ∈ R^{n×(n−1)} be a matrix whose columns form an orthonormal basis for the orthogonal complement of e ∈ R^n. The largest (n − 1)-dimensional ball inscribed in Δ_n is given by

    B := {x ∈ R^n : x = (1/n)e + Uw, ||w|| ≤ 1/√(n(n−1)), w ∈ R^{n−1}}.    (6)

Proof. It is straightforward to show that the smallest (n − 1)-dimensional ball enclosing Δ_n is given by

    B̂ := {x ∈ R^n : x = (1/n)e + Uz, ||z|| ≤ √((n−1)/n), z ∈ R^{n−1}}.

By the Löwner-John theorem [6], scaling the radius of B̂ by a factor of 1/(n − 1) yields an (n − 1)-dimensional ball contained in Δ_n. The assertion follows from the fact that this scaling is tight for the unit simplex.

Since B ⊆ Δ_n by Lemma 3.1, we have

    1/α(G) = min_{x ∈ Δ_n} x^T (I_n + A_G) x ≤ min_{x ∈ B} x^T (I_n + A_G) x =: ν,    (7)

from which it follows that

    α(G) ≥ 1/ν.    (8)

Therefore, (8) yields a lower bound on the stability number of G. Note that

    ν = min {x^T (I_n + A_G) x : x ∈ B}
      = min {((1/n)e + Uw)^T (I_n + A_G) ((1/n)e + Uw) : ||w|| ≤ 1/√(n(n−1))}
      = min {w^T M w + 2 v^T w + γ : ||w|| ≤ 1/√(n(n−1))},

where

    M := U^T (I_n + A_G) U = I_{n−1} + U^T A_G U ∈ S^{n−1},    (9a)
    v := (1/n) U^T (I_n + A_G) e = (1/n) U^T A_G e ∈ R^{n−1},    (9b)
    γ := (n + 2m)/n².    (9c)

We first establish that the new lower bound (8) is at least as good as (3).
Lemma 3.2 For any graph G, we have

    α(G) ≥ 1/ν ≥ n²/(n + 2m).    (10)

Proof. The first inequality follows from the definition of ν. The second inequality is a consequence of the fact that w = 0 ∈ R^{n−1} is a feasible solution of the problem yielding the new bound.

We now discuss how to compute ν efficiently. It is now well known that quadratic optimization over a full-dimensional ellipsoid admits a tight semidefinite programming (SDP) relaxation. The next proposition summarizes this result.

Proposition 3.1 Given a graph G = (V, E), ν can be computed to within any arbitrary accuracy in polynomial time.

Proof. We simply sketch the proof, which closely mimics the arguments in [13, Proposition 2.6]. Let

    F := [ M    v ; v^T    γ ] ∈ S^n,    G := [ −n(n−1) I_{n−1}    0 ; 0    1 ] ∈ S^n.    (11)

It follows from the results of [9, 8, 10] that quadratic optimization over a full-dimensional ball admits a tight SDP relaxation, given by

    ν = min {w^T M w + 2 v^T w + γ : ||w|| ≤ 1/√(n(n−1))}
      = min {F • W : G • W ≥ 0, E_n • W = 1, W ⪰ 0}.

Since any SDP problem can be solved to within any arbitrary accuracy in polynomial time using interior-point methods, the assertion follows.

In addition to computing the optimal value ν, one can efficiently construct an optimal solution w* ∈ R^{n−1} of the quadratic optimization formulation by transforming any optimal solution W* ∈ S^n of the SDP relaxation [10, Proposition 3]. We will use this result in the derivation of the improved lower bound in Section 4.
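As an alternative to the SDP route above, here is a hedged sketch (the function name and structure are ours, not the paper's) that solves the ball-constrained problem defining ν directly through the classical trust-region characterization; the bisection below handles only the generic case, and the degenerate "hard case" of trust-region theory would need an additional eigenvector correction:

```python
import numpy as np

# A hedged sketch (not the paper's method): solve
#   min { w^T M w + 2 v^T w + gamma : ||w|| <= r }
# via the trust-region conditions: (M + lam*I) w = -v with
# lam >= max(0, -lambda_min(M)), and ||w|| = r whenever lam > 0.
# ||(M + lam*I)^{-1} v|| is strictly decreasing in lam, so bisection works
# in the generic case; the degenerate "hard case" is not handled here.
def ball_qp_min(M, v, gamma, r):
    k = len(v)
    lam_floor = max(0.0, -np.linalg.eigvalsh(M).min())
    if lam_floor == 0.0:
        w = np.linalg.solve(M + 1e-12 * np.eye(k), -v)
        if np.linalg.norm(w) <= r:
            return w @ M @ w + 2 * v @ w + gamma   # interior solution, lam = 0
    lo, hi = lam_floor + 1e-10, lam_floor + 1.0
    while np.linalg.norm(np.linalg.solve(M + hi * np.eye(k), -v)) > r:
        hi *= 2.0                                  # bracket the multiplier
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(np.linalg.solve(M + mid * np.eye(k), -v)) > r:
            lo = mid
        else:
            hi = mid
    w = np.linalg.solve(M + hi * np.eye(k), -v)
    return w @ M @ w + 2 * v @ w + gamma

# The 5-vertex graph used later in Example 3.1 (0-indexed edges); for that
# instance the bound (8) is tight: 1/nu = alpha(G) = 2.
n = 5
edges = [(0, 3), (1, 2), (1, 4), (2, 3), (2, 4)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
e = np.ones(n)
Q, _ = np.linalg.qr(np.column_stack([e, np.eye(n)[:, :n - 1]]))
U = Q[:, 1:]                                       # orthonormal basis of e-perp
M = np.eye(n - 1) + U.T @ A @ U                    # (9a)
v = (U.T @ A @ e) / n                              # (9b)
gamma = (n + 2 * len(edges)) / n**2                # (9c)
nu = ball_qp_min(M, v, gamma, 1.0 / np.sqrt(n * (n - 1)))
print(nu, 1.0 / nu)                                # approximately 0.5 and 2.0
```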
3.1 Regular Graphs

Having established that the new lower bound can be computed efficiently, we now turn our attention to the special class of regular graphs. A graph G = (V, E) is said to be regular of degree k if each vertex in V has exactly k neighbors. For such graphs, it follows that 2m = nk, or k = 2m/n. Using the spectral properties of this class of graphs outlined in Theorem 2.1, we establish that the new lower bound (8) has a closed-form solution. Furthermore, (8) is tighter than (3) on a large subset of this class of graphs.

Proposition 3.2 Let G = (V, E) be a regular graph of degree k ≥ 1. Then,

    ν = (1 + λ_1)/(n(n − 1)) + (n + 2m)/n² ≤ (n + 2m)/n²,    (12)

where λ_1 ≤ −1 is the smallest eigenvalue of A_G. This implies that

    α(G) ≥ 1/ν = 1 / ((1 + λ_1)/(n(n − 1)) + (n + 2m)/n²) ≥ n²/(n + 2m).    (13)

Furthermore, unless the components of G are complete graphs, the new lower bound (8) is strictly tighter than (3).

Proof. Since G is a regular graph of degree k, we have A_G e = ke, which implies that k ∈ R is an eigenvalue of A_G with corresponding eigenvector e ∈ R^n. Therefore, the columns of the matrix U ∈ R^{n×(n−1)} in the statement of Lemma 3.1 can be chosen to consist of the remaining n − 1 eigenvectors of A_G. It follows from (9b) that v = (1/n)U^T A_G e = (k/n)U^T e = 0 and M = U^T (I_n + A_G) U = I_{n−1} + Σ, where Σ ∈ S^{n−1} is a diagonal matrix whose entries are the remaining n − 1 eigenvalues of A_G, given by λ_1 ≤ λ_2 ≤ ... ≤ λ_{n−1}. Therefore,

    ν = min {w^T M w + 2 v^T w + γ : ||w|| ≤ 1/√(n(n−1))}
      = min {w^T (I_{n−1} + Σ) w + γ : ||w|| ≤ 1/√(n(n−1))}
      = (1 + λ_1)/(n(n − 1)) + (n + 2m)/n²,
where we used the fact that λ_1 ≤ −1 for any graph with at least one edge (cf. part 3 of Theorem 2.1). This establishes (12) and hence (13). The last part of the assertion follows from parts 3 and 6 of Theorem 2.1.

We next establish that the new lower bound is also tighter than the lower bound (4) for certain regular graphs.

Proposition 3.3 Let G = (V, E) be a regular graph of degree k with k ≥ 1 such that the complement graph Ḡ is connected. Then, the lower bounds (3) and (4) coincide. If, in addition, G has at least one connected component that is not a complete graph, then each bound is strictly smaller than (8).

Proof. Clearly, the complement graph Ḡ is also regular, of degree n − k − 1. Therefore, x_P = (1/√n)e ∈ R^n is a Perron eigenvector of A_Ḡ with corresponding Perron root λ_P = n − k − 1 and s_P = e^T x_P = √n (cf. parts 8 and 9 of Theorem 2.1). Therefore, the lower bound (4) is given by

    α(G) ≥ n / (n − (n − k − 1)) = n/(k + 1) = n²/(n + 2m),

where we used 2m = nk. This establishes the first part of the assertion. The second part follows from Proposition 3.2.

The following result establishes that all four lower bounds coincide with the stability number on a certain class of regular graphs.

Proposition 3.4 Let G = (V, E) be a regular graph of degree k with k ≥ 1 such that the complement graph Ḡ is a connected, complete multipartite graph. Then, each of the four lower bounds (3), (4), (5), and (8) coincides with α(G).

Proof. By Proposition 3.3, the lower bounds (3) and (4) are equal to n²/(n + 2m) = n/(k + 1). Suppose that G has t connected components. By the hypothesis, each connected component is a complete graph on k + 1 vertices. Clearly, α(G) = n/(k + 1). Since each component
of G is a complete graph, it follows that the smallest eigenvalue of A_G satisfies λ_1 = −1 (cf. part 6 of Theorem 2.1). By Proposition 3.2, the new lower bound (8) is also equal to n/(k + 1). Finally, since Ḡ is a regular, connected, complete multipartite graph, it follows from [2, Proposition 3] that the lower bounds (5) and (4) agree, which completes the proof.

3.2 Irregular Graphs

We call a graph G irregular if it is not a regular graph. In contrast with regular graphs, a complete characterization of the spectra of such graphs does not exist. Therefore, the new lower bound (8) does not in general have a closed-form solution for such graphs. Since the lower bound (5) requires the computation of the complete spectrum of A_Ḡ, a complete characterization of the graphs for which the new lower bound (8) is tighter than (5) is not straightforward. However, as the following simple example illustrates, there do exist graphs for which the new lower bound (8) is strictly better than (5).

Example 3.1 Let G = (V, E), where V = {1, 2, 3, 4, 5} and E = {(1, 4), (2, 3), (2, 5), (3, 4), (3, 5)}, i.e., G is given by a path of length two connected to a vertex of a complete graph on three vertices. Clearly, α(G) = 2, and the lower bound (3) is equal to 25/15 ≈ 1.67. It is easy to verify that x̄ = [1/4, 1/4, 0, 1/4, 1/4]^T is an optimal solution of the quadratic optimization problem (1). Note that ||x̄ − (1/5)e|| = 1/√20, which implies that x̄ lies in the largest ball inscribed in Δ_5 and hence is a feasible solution of the quadratic optimization problem defining ν (cf. (7)). Therefore, 1/ν = α(G) = 2, which implies that the lower bound (8) matches the stability number of G. The complement graph Ḡ is a connected, bipartite graph. The Perron root is given by λ_P ≈ 2.1358, and the Perron eigenvector x_P satisfies s_P = e^T x_P ≈ 2.1829. Therefore, the
lower bound (4) is given by

    α(G) ≥ s_P²/(s_P² − λ_P) ≈ (2.1829)²/((2.1829)² − 2.1358) ≈ 1.81,

which is strictly smaller than α(G) = 2. The numerically computed lower bound (5) is also strictly smaller than 2, which implies that the lower bound (8) is the tightest among the four bounds.

4 An Improved Lower Bound

In this section, we propose a new lower bound based on a refinement of the lower bound (8). Let us recall the quadratic optimization reformulation in (n − 1)-dimensional space whose optimal value defines (8):

    ν = min {w^T M w + 2 v^T w + γ : ||w|| ≤ 1/√(n(n−1)), w ∈ R^{n−1}},    (14)

where M, v, and γ are defined as in (9). It is well known that an optimal solution w* ∈ R^{n−1} of (14) satisfies the following necessary and sufficient optimality conditions:

    (M + λ* I_{n−1}) w* = −v,    (15a)
    λ* (1/√(n(n−1)) − ||w*||) = 0,    (15b)
    M + λ* I_{n−1} ⪰ 0,    (15c)
    λ* ≥ 0,    (15d)

where λ* ∈ R is the Lagrange multiplier corresponding to the inequality constraint. Let w* ∈ R^{n−1} be an optimal solution of (14). By the reformulation in Section 3, it follows that x* = (1/n)e + Uw* ∈ B ⊆ Δ_n, which implies that x* ∈ R^n is a feasible solution of the original quadratic optimization problem (1). In order to derive the improved lower bound, we first construct a family of feasible solutions of (1) given by

    x(θ) := (1/n)e + θ U w*,  θ ∈ R.    (16)
Clearly, x(1) = x* ∈ Δ_n and e^T x(θ) = 1 for all θ ∈ R. It then follows that x(θ) ∈ Δ_n for all θ ∈ [1, θ̄], where

    θ̄ := min_{i : (Uw*)_i < 0} −1/(n (Uw*)_i) ≥ 1.    (17)

Since e^T Uw* = 0, θ̄ is well defined and has a finite value unless w* = 0, in which case we define θ̄ = +∞. The new lower bound is obtained by further minimizing the objective function of (1) over this family of feasible solutions x(θ) for θ ∈ [1, θ̄]. In the reformulation (14) in (n − 1)-dimensional space, this is equivalent to extending the solution w*, i.e., replacing w* by w(θ) := θ w*, until x(θ) = (1/n)e + U w(θ) hits the boundary of the unit simplex Δ_n. Obviously, the new optimal value will be at least as small as ν, and its reciprocal will therefore be at least as large as (8). To this end, let us define

    ν(θ) := x(θ)^T (I_n + A_G) x(θ) = θ² (w*)^T M w* + 2θ v^T w* + γ.    (18)

The new lower bound is given by

    ν̃ := min_{θ ∈ [1, θ̄]} ν(θ) ≥ 1/α(G),    (19)

so that

    1/ν̃ ≤ α(G).    (20)

The next proposition establishes that the improved lower bound (20) can easily be computed.

Proposition 4.1 Let w* ∈ R^{n−1} be an optimal solution of (14) and suppose that (w*, λ*) satisfies the optimality conditions (15). Then, ν̃ = ν(θ̃), where

    θ̃ := arg min_{θ ∈ [1, θ̄]} ν(θ) =
        1,                                          if λ* = 0,
        min{θ̄, −(v^T w*)/((w*)^T M w*)},           if (w*)^T M w* > 0 and λ* > 0,
        θ̄,                                          otherwise.

Furthermore, the new lower bound (20) is tighter than (8) in the second and third cases above unless θ̄ = 1.
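The computation in Proposition 4.1 can be sketched on a regular graph (a hedged example, not the paper's code): for a k-regular graph v = 0, so a boundary minimizer of (14) is w* = r·u, where u is a unit eigenvector of the smallest eigenvalue of M and r = 1/√(n(n−1)). Taking G = C_5, we have (w*)^T M w* < 0 and λ* > 0, so the third case applies and θ̃ = θ̄:

```python
import numpy as np

# A sketch of the improved bound on a regular graph (not the paper's code).
# For a k-regular graph v = 0, so a boundary minimizer of (14) is
# w* = r * u with u a unit eigenvector of the smallest eigenvalue of M;
# nu then equals the closed form (12).  G = C_5 falls into the third case
# of Proposition 4.1, hence theta~ = theta_bar.
n = 5
A = np.zeros((n, n))
for i in range(n):                                 # the 5-cycle, 2-regular
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
m, e = A.sum() / 2, np.ones(n)

Q, _ = np.linalg.qr(np.column_stack([e, np.eye(n)[:, :n - 1]]))
U = Q[:, 1:]                                       # orthonormal basis of e-perp
M = np.eye(n - 1) + U.T @ A @ U
r = 1.0 / np.sqrt(n * (n - 1))
vals, vecs = np.linalg.eigh(M)                     # ascending eigenvalues
w_star = r * vecs[:, 0]                            # boundary minimizer (v = 0)
gamma = (n + 2 * m) / n**2
nu = vals[0] * r**2 + gamma                        # matches closed form (12)

d = U @ w_star                                     # the direction U w* in R^n
theta_bar = min(-1.0 / (n * di) for di in d if di < 0)        # (17)
nu_tilde = theta_bar**2 * (w_star @ M @ w_star) + gamma       # (18) at theta_bar
print(1.0 / nu, 1.0 / nu_tilde)    # the improved bound is at least the first
```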
Proof. First, suppose that λ* = 0, which implies that M ⪰ 0 by (15c). Therefore, w* is an unconstrained minimizer of the convex quadratic objective function of (14) by (15a), so that ν̃ = ν(1) = min_{θ ∈ R} ν(θ), which implies that ν̃ = ν = 1/α(G), since the unconstrained global minimizer x(1) = (1/n)e + Uw* is a feasible solution of (1). Suppose now that ||w*|| = 1/√(n(n−1)) with λ* > 0. By (15a),

    (w*)^T M w* + λ*/(n(n−1)) = −v^T w*.

Note that ν′(1) = 2((w*)^T M w* + v^T w*) = −2λ*/(n(n−1)) < 0, which implies that ν(θ) is strictly decreasing at θ = 1. Therefore, ν̃ < ν if θ̄ > 1. If (w*)^T M w* > 0, then ν(θ) is a strictly convex function and its unconstrained minimizer is given by θ̂ = −(v^T w*)/((w*)^T M w*) > 1, since ν′(1) < 0. It follows that θ̃ = min{θ̄, θ̂} in this case. Otherwise, ν(θ) is a concave, decreasing function on [1, θ̄] and θ̃ = θ̄, which completes the proof.

5 Computational Results

In this section, we present our computational results on the DIMACS collection of clique problems. Each of the five lower bounds given by (3), (4), (5), (8), and (20) was computed on the complements of each of the sixty-four instances. We used MATLAB to compute each of the five bounds on each of the instances. In particular, several of MATLAB's built-in functions, including eigs, eig, and fminbnd, were employed to compute the bounds (4) and (5). We used the semidefinite programming (SDP) formulation given in the proof of Proposition 3.1 to compute the new lower bound (8). The resulting SDP problems were solved by the MATLAB-based interior-point solver SDPT3 [11] using the default parameters. In order to compute the improved lower bound, the optimal solution of the SDP formulation was transformed into an optimal solution of (14) using [10, Proposition 3]. Tables 1 and 2 present the results of the implementation on each of the sixty-four instances. The first column presents the name of the instance. Note that the computations
[Table 1: Computational Results. Columns: Instance, |V|, |E|, α(G), and the lower bounds (3), (4), (5), (8), (20); rows: the MANN, brock, c-fat, hamming, johnson, keller, and p-hat instances]
[Table 2: Computational Results (continued). Columns as in Table 1; rows: the remaining p-hat, san, and sanr instances]

were performed on the complement graphs. The second group of columns reports the number of nodes |V|, the number of edges |E|, and the size of the maximum stable set α(G). The values with an asterisk correspond to the best known lower bounds on α(G). The lower bounds (3), (4), (5), (8), and (20) computed for each instance are presented in the third group of columns. The tightest lower bound for each instance is highlighted. A close examination of Tables 1 and 2 reveals that the tightest lower bound is either given by (5) or by (20). In particular, the new improved lower bound (20) was the tightest among all five lower bounds on thirty-nine of the sixty-four instances. The lower bound (5) was the tightest on the remaining twenty-five instances. These results indicate that the improved lower bound (20) provides a competitive alternative to the other lower bounds. The lower bound (3), which is the easiest to compute, is always the weakest one among all five lower bounds. The lower bounds (4) and (8) usually yield similar values, which are tighter than (3). As expected, the lower bounds (5) and (20) always outperform (4) and (8), respectively. Finally, while each of the lower bounds is usually significantly smaller than the stability
number on most of the instances, our computational results indicate that the lower bounds (5) and (20) in particular either match the stability number or provide a very good approximation to it on some of the instances. These results indicate that progress on lower bounds may have significant implications for the computation of the stability number, which does not admit any efficient, nontrivial approximation.

6 Concluding Remarks

In this paper, we proposed two lower bounds on the stability number of a given graph G. Both of our bounds rely on the continuous formulation (1) and can be computed efficiently. Our computational results indicate that the improved lower bound (20) in particular has a promising performance in comparison with the other lower bounds. Given the hardness of even approximating the stability number, the construction of improved bounds may have significant implications, since the maximum stable set problem has many applications in diverse areas. In the near future, we intend to continue our work on obtaining upper and lower bounds by considering various tractable inner and outer approximations to the continuous formulation (1) of the stability number.

References

[1] I. M. Bomze, M. Budinich, P. M. Pardalos, and M. Pelillo. The maximum clique problem. In D.-Z. Du and P. M. Pardalos, editors, Handbook of Combinatorial Optimization (Supplement Volume A). Kluwer Academic, Boston, Massachusetts, U.S.A., 1999.

[2] M. Budinich. Exact bounds on the order of the maximum clique of a graph. Discrete Applied Mathematics, 127, 2003.

[3] D. M. Cvetković, M. Doob, and H. Sachs. Spectra of Graphs. Pure and Applied Mathematics. Academic Press, Inc., New York, 1979.
[4] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer, New York, 1988.

[5] J. Håstad. Clique is hard to approximate within n^(1−ε). Acta Mathematica, 182(1), 1999.

[6] F. John. Extremum problems with inequalities as subsidiary conditions. In Studies and Essays, Presented to R. Courant on his 60th Birthday, January 8, 1948. Interscience, New York, 1948. Reprinted in: Fritz John, Collected Papers, Volume 2 (J. Moser, ed.), Birkhäuser, Boston, 1985.

[7] T. S. Motzkin and E. G. Straus. Maxima for graphs and a new proof of a theorem of Turán. Canadian Journal of Mathematics, 17:533–540, 1965.

[8] F. Rendl and H. Wolkowicz. A semidefinite framework for trust region subproblems with applications to large scale minimization. Mathematical Programming, 77(2), 1997.

[9] R. J. Stern and H. Wolkowicz. Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations. SIAM Journal on Optimization, 5(2), 1995.

[10] J. F. Sturm and S. Z. Zhang. On cones of nonnegative quadratic functions. Mathematics of Operations Research, 28(2), 2003.

[11] R. H. Tütüncü, K. C. Toh, and M. J. Todd. Solving semidefinite-quadratic-linear programs using SDPT3. Mathematical Programming, 95, 2003.

[12] H. S. Wilf. Spectral bounds for the clique and independence numbers of graphs. Journal of Combinatorial Theory, Series B, 40, 1986.

[13] E. A. Yıldırım. On the minimum volume covering ellipsoid of ellipsoids. SIAM Journal on Optimization, 17(3), 2006.
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationGeometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as
Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.
More information1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad
Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,
More informationOn the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems
MATHEMATICS OF OPERATIONS RESEARCH Vol. 35, No., May 010, pp. 84 305 issn 0364-765X eissn 156-5471 10 350 084 informs doi 10.187/moor.1090.0440 010 INFORMS On the Power of Robust Solutions in Two-Stage
More informationA Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs
A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs Raphael Louca Eilyan Bitar Abstract Robust semidefinite programs are NP-hard in general In contrast, robust linear programs admit
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationThe Signless Laplacian Spectral Radius of Graphs with Given Degree Sequences. Dedicated to professor Tian Feng on the occasion of his 70 birthday
The Signless Laplacian Spectral Radius of Graphs with Given Degree Sequences Xiao-Dong ZHANG Ü À Shanghai Jiao Tong University xiaodong@sjtu.edu.cn Dedicated to professor Tian Feng on the occasion of his
More informationWHEN DOES THE POSITIVE SEMIDEFINITENESS CONSTRAINT HELP IN LIFTING PROCEDURES?
MATHEMATICS OF OPERATIONS RESEARCH Vol. 6, No. 4, November 00, pp. 796 85 Printed in U.S.A. WHEN DOES THE POSITIVE SEMIDEFINITENESS CONSTRAINT HELP IN LIFTING PROCEDURES? MICHEL X. GOEMANS and LEVENT TUNÇEL
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationResearch Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization
Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We
More informationA Continuation Approach Using NCP Function for Solving Max-Cut Problem
A Continuation Approach Using NCP Function for Solving Max-Cut Problem Xu Fengmin Xu Chengxian Ren Jiuquan Abstract A continuous approach using NCP function for approximating the solution of the max-cut
More informationLift-and-Project Techniques and SDP Hierarchies
MFO seminar on Semidefinite Programming May 30, 2010 Typical combinatorial optimization problem: max c T x s.t. Ax b, x {0, 1} n P := {x R n Ax b} P I := conv(k {0, 1} n ) LP relaxation Integral polytope
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationarxiv: v1 [math.co] 20 Sep 2014
On some papers of Nikiforov Bo Ning Department of Applied Mathematics, School of Science, Northwestern Polytechnical University, Xi an, Shaanxi 71007, P.R. China arxiv:109.588v1 [math.co] 0 Sep 01 Abstract
More informationProjection methods to solve SDP
Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method
More informationA simplex like approach based on star sets for recognizing convex-qp adverse graphs
A simplex like approach based on star sets for recognizing convex-qp adverse graphs Domingos M. Cardoso Carlos J. Luz J. Comb. Optim., in press (the final publication is available at link.springer.com).
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationLecture 7: Positive Semidefinite Matrices
Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
More informationNotes on Linear Algebra and Matrix Theory
Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a
More informationPart II Strong lift-and-project cutting planes. Vienna, January 2012
Part II Strong lift-and-project cutting planes Vienna, January 2012 The Lovász and Schrijver M(K, K) Operator Let K be a given linear system in 0 1 variables. for any pair of inequalities αx β 0 and α
More informationIn English, this means that if we travel on a straight line between any two points in C, then we never leave C.
Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from
More informationAnalysis of Copositive Optimization Based Linear Programming Bounds on Standard Quadratic Optimization
Analysis of Copositive Optimization Based Linear Programming Bounds on Standard Quadratic Optimization Gizem Sağol E. Alper Yıldırım April 18, 2014 Abstract The problem of minimizing a quadratic form over
More informationA Bound for Non-Subgraph Isomorphism
A Bound for Non-Sub Isomorphism Christian Schellewald School of Computing, Dublin City University, Dublin 9, Ireland Christian.Schellewald@computing.dcu.ie, http://www.computing.dcu.ie/ cschellewald/ Abstract.
More informationIn particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with
Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient
More informationLecture Note 5: Semidefinite Programming for Stability Analysis
ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State
More informationGraphs with convex-qp stability number
Universytet im. Adama Mickiewicza Poznań, January 2004 Graphs with convex-qp stability number Domingos M. Cardoso (Universidade de Aveiro) 1 Summary Introduction. The class of Q-graphs. Adverse graphs
More informationA VARIATIONAL METHOD FOR THE ANALYSIS OF A MONOTONE SCHEME FOR THE MONGE-AMPÈRE EQUATION 1. INTRODUCTION
A VARIATIONAL METHOD FOR THE ANALYSIS OF A MONOTONE SCHEME FOR THE MONGE-AMPÈRE EQUATION GERARD AWANOU AND LEOPOLD MATAMBA MESSI ABSTRACT. We give a proof of existence of a solution to the discrete problem
More informationOn Hadamard Diagonalizable Graphs
On Hadamard Diagonalizable Graphs S. Barik, S. Fallat and S. Kirkland Department of Mathematics and Statistics University of Regina Regina, Saskatchewan, Canada S4S 0A2 Abstract Of interest here is a characterization
More informationLecture 4: January 26
10-725/36-725: Conve Optimization Spring 2015 Lecturer: Javier Pena Lecture 4: January 26 Scribes: Vipul Singh, Shinjini Kundu, Chia-Yin Tsai Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:
More informationSemidefinite programs and combinatorial optimization
Semidefinite programs and combinatorial optimization Lecture notes by L. Lovász Microsoft Research Redmond, WA 98052 lovasz@microsoft.com http://www.research.microsoft.com/ lovasz Contents 1 Introduction
More informationStrong duality in Lasserre s hierarchy for polynomial optimization
Strong duality in Lasserre s hierarchy for polynomial optimization arxiv:1405.7334v1 [math.oc] 28 May 2014 Cédric Josz 1,2, Didier Henrion 3,4,5 Draft of January 24, 2018 Abstract A polynomial optimization
More informationarxiv: v1 [math.oc] 26 Sep 2015
arxiv:1509.08021v1 [math.oc] 26 Sep 2015 Degeneracy in Maximal Clique Decomposition for Semidefinite Programs Arvind U. Raghunathan and Andrew V. Knyazev Mitsubishi Electric Research Laboratories 201 Broadway,
More informationVariational Inequalities. Anna Nagurney Isenberg School of Management University of Massachusetts Amherst, MA 01003
Variational Inequalities Anna Nagurney Isenberg School of Management University of Massachusetts Amherst, MA 01003 c 2002 Background Equilibrium is a central concept in numerous disciplines including economics,
More informationPolynomial Solvability of Variants of the Trust-Region Subproblem
Polynomial Solvability of Variants of the Trust-Region Subproblem Daniel Bienstock Alexander Michalka July, 2013 Abstract We consider an optimization problem of the form x T Qx + c T x s.t. x µ h r h,
More informationRelaxations of combinatorial problems via association schemes
1 Relaxations of combinatorial problems via association schemes Etienne de Klerk, Fernando M. de Oliveira Filho, and Dmitrii V. Pasechnik Tilburg University, The Netherlands; Nanyang Technological University,
More informationModeling with semidefinite and copositive matrices
Modeling with semidefinite and copositive matrices Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/24 Overview Node and Edge relaxations
More informationGlobal Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition
Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Guoyin Li Communicated by X.Q. Yang Abstract In this paper, we establish global optimality
More informationSecond Order Cone Programming Relaxation of Positive Semidefinite Constraint
Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo
More informationc 2000 Society for Industrial and Applied Mathematics
SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.
More informationCSC Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method Notes taken by Stefan Mathe April 28, 2007 Summary: Throughout the course, we have seen the importance
More informationThe combinatorics of pivoting for the maximum weight clique
Operations Research Letters 32 (2004) 523 529 Operations Research Letters wwwelseviercom/locate/dsw The combinatorics of pivoting for the maximum weight clique Marco Locatelli a; ;1, Immanuel M Bomze b,
More informationMath 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.
Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses
More informationLinear algebra and applications to graphs Part 1
Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces
More informationGlobal Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints
Journal of Global Optimization 21: 445 455, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. 445 Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic
More informationCertifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering
Certifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering Shuyang Ling Courant Institute of Mathematical Sciences, NYU Aug 13, 2018 Joint
More informationPreliminaries and Complexity Theory
Preliminaries and Complexity Theory Oleksandr Romanko CAS 746 - Advanced Topics in Combinatorial Optimization McMaster University, January 16, 2006 Introduction Book structure: 2 Part I Linear Algebra
More informationSecond Order Cone Programming Relaxation of Nonconvex Quadratic Optimization Problems
Second Order Cone Programming Relaxation of Nonconvex Quadratic Optimization Problems Sunyoung Kim Department of Mathematics, Ewha Women s University 11-1 Dahyun-dong, Sudaemoon-gu, Seoul 120-750 Korea
More informationReal Symmetric Matrices and Semidefinite Programming
Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many
More informationApplications of semidefinite programming in Algebraic Combinatorics
Applications of semidefinite programming in Algebraic Combinatorics Tohoku University The 23rd RAMP Symposium October 24, 2011 We often want to 1 Bound the value of a numerical parameter of certain combinatorial
More informationA CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING
A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING Kartik Krishnan Advanced Optimization Laboratory McMaster University Joint work with Gema Plaza Martinez and Tamás
More informationSDP Relaxations for MAXCUT
SDP Relaxations for MAXCUT from Random Hyperplanes to Sum-of-Squares Certificates CATS @ UMD March 3, 2017 Ahmed Abdelkader MAXCUT SDP SOS March 3, 2017 1 / 27 Overview 1 MAXCUT, Hardness and UGC 2 LP
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationA lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo
A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d
More informationA Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems
A Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems Gábor Pataki gabor@unc.edu Dept. of Statistics and OR University of North Carolina at Chapel Hill Abstract The Facial Reduction
More informationA new look at nonnegativity on closed sets
A new look at nonnegativity on closed sets LAAS-CNRS and Institute of Mathematics, Toulouse, France IPAM, UCLA September 2010 Positivstellensatze for semi-algebraic sets K R n from the knowledge of defining
More informationUsing quadratic convex reformulation to tighten the convex relaxation of a quadratic program with complementarity constraints
Noname manuscript No. (will be inserted by the editor) Using quadratic conve reformulation to tighten the conve relaation of a quadratic program with complementarity constraints Lijie Bai John E. Mitchell
More informationDegeneracy in Maximal Clique Decomposition for Semidefinite Programs
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Degeneracy in Maximal Clique Decomposition for Semidefinite Programs Raghunathan, A.U.; Knyazev, A. TR2016-040 July 2016 Abstract Exploiting
More information1 Directional Derivatives and Differentiability
Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=
More informationLaplacian Integral Graphs with Maximum Degree 3
Laplacian Integral Graphs with Maximum Degree Steve Kirkland Department of Mathematics and Statistics University of Regina Regina, Saskatchewan, Canada S4S 0A kirkland@math.uregina.ca Submitted: Nov 5,
More informationFast Linear Iterations for Distributed Averaging 1
Fast Linear Iterations for Distributed Averaging 1 Lin Xiao Stephen Boyd Information Systems Laboratory, Stanford University Stanford, CA 943-91 lxiao@stanford.edu, boyd@stanford.edu Abstract We consider
More informationThe complexity of optimizing over a simplex, hypercube or sphere: a short survey
The complexity of optimizing over a simplex, hypercube or sphere: a short survey Etienne de Klerk Department of Econometrics and Operations Research Faculty of Economics and Business Studies Tilburg University
More informationRelaxations and Randomized Methods for Nonconvex QCQPs
Relaxations and Randomized Methods for Nonconvex QCQPs Alexandre d Aspremont, Stephen Boyd EE392o, Stanford University Autumn, 2003 Introduction While some special classes of nonconvex problems can be
More informationSpectral Theory of Unsigned and Signed Graphs Applications to Graph Clustering. Some Slides
Spectral Theory of Unsigned and Signed Graphs Applications to Graph Clustering Some Slides Jean Gallier Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104,
More informationSolving large Semidefinite Programs - Part 1 and 2
Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior
More informationLargest dual ellipsoids inscribed in dual cones
Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that
More informationSymmetric Matrices and Eigendecomposition
Symmetric Matrices and Eigendecomposition Robert M. Freund January, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 Symmetric Matrices and Convexity of Quadratic Functions
More informationThe local equivalence of two distances between clusterings: the Misclassification Error metric and the χ 2 distance
The local equivalence of two distances between clusterings: the Misclassification Error metric and the χ 2 distance Marina Meilă University of Washington Department of Statistics Box 354322 Seattle, WA
More information17.1 Directed Graphs, Undirected Graphs, Incidence Matrices, Adjacency Matrices, Weighted Graphs
Chapter 17 Graphs and Graph Laplacians 17.1 Directed Graphs, Undirected Graphs, Incidence Matrices, Adjacency Matrices, Weighted Graphs Definition 17.1. A directed graph isa pairg =(V,E), where V = {v
More informationInteger Programming Formulations for the Minimum Weighted Maximal Matching Problem
Optimization Letters manuscript No. (will be inserted by the editor) Integer Programming Formulations for the Minimum Weighted Maximal Matching Problem Z. Caner Taşkın Tınaz Ekim Received: date / Accepted:
More informationOn the Lovász Theta Function and Some Variants
On the Lovász Theta Function and Some Variants Laura Galli Adam N. Letchford March 2017. To appear in Discrete Optimization. Abstract The Lovász theta function of a graph is a well-known upper bound on
More informationCanonical Problem Forms. Ryan Tibshirani Convex Optimization
Canonical Problem Forms Ryan Tibshirani Convex Optimization 10-725 Last time: optimization basics Optimization terology (e.g., criterion, constraints, feasible points, solutions) Properties and first-order
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationNOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES
NOTES ON THE PERRON-FROBENIUS THEORY OF NONNEGATIVE MATRICES MIKE BOYLE. Introduction By a nonnegative matrix we mean a matrix whose entries are nonnegative real numbers. By positive matrix we mean a matrix
More informationImproved bounds on book crossing numbers of complete bipartite graphs via semidefinite programming
Improved bounds on book crossing numbers of complete bipartite graphs via semidefinite programming Etienne de Klerk, Dima Pasechnik, and Gelasio Salazar NTU, Singapore, and Tilburg University, The Netherlands
More informationMath 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.
Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,
More informationChapter 3. Some Applications. 3.1 The Cone of Positive Semidefinite Matrices
Chapter 3 Some Applications Having developed the basic theory of cone programming, it is time to apply it to our actual subject, namely that of semidefinite programming. Indeed, any semidefinite program
More informationMinimizing the Laplacian eigenvalues for trees with given domination number
Linear Algebra and its Applications 419 2006) 648 655 www.elsevier.com/locate/laa Minimizing the Laplacian eigenvalues for trees with given domination number Lihua Feng a,b,, Guihai Yu a, Qiao Li b a School
More informationProbabilistic Method. Benny Sudakov. Princeton University
Probabilistic Method Benny Sudakov Princeton University Rough outline The basic Probabilistic method can be described as follows: In order to prove the existence of a combinatorial structure with certain
More informationSpectral densest subgraph and independence number of a graph 1
Spectral densest subgraph and independence number of a graph 1 Reid Andersen (Microsoft Research, One Microsoft Way,Redmond, WA 98052 E-mail: reidan@microsoft.com) Sebastian M. Cioabă 2 (Department of
More information