Chordally Sparse Semidefinite Programs

Raphael Louca

Abstract

We consider semidefinite programs where the collective sparsity pattern of the data matrices is characterized by a given chordal graph. Acyclic, interval, and split graphs are examples. By reformulating the semidefinite program as a conic convex program over a suitably defined chordally sparse matrix cone, we derive both upper bounds and generic lower bounds on the minimal rank of optimal solutions to the conic convex program. These bounds translate into upper and generic lower bounds on the minimal rank of optimal solutions to the semidefinite program and depend both on the dimension of the affine space in which feasible solutions lie and on the size of the overlap between the maximal cliques of the graph. The bounds reduce to those in the extant literature for the special case of the complete graph.

1 Introduction

Let $\mathbb{S}^n$ denote the space of $n$-by-$n$ real symmetric matrices, a vector space of dimension $\dim(\mathbb{S}^n) = n_2 := n(n+1)/2$. For $C, A_1, A_2, \ldots, A_m \in \mathbb{S}^n$ and $b = [b_1, b_2, \ldots, b_m]^\top \in \mathbb{R}^m$, consider the semidefinite program

$$\begin{array}{ll}
\underset{X \in \mathbb{S}^n}{\text{minimize}} & \langle C, X \rangle \\
\text{subject to} & \langle A_k, X \rangle = b_k, \quad k = 1, 2, \ldots, m, \\
& X \succeq 0.
\end{array} \tag{1}$$

In (1), the notation $X \succeq 0$ is shorthand for $X \in \mathbb{S}^n_+$, the cone of real symmetric positive semidefinite matrices of order $n$. The dual problem of (1) is

$$\begin{array}{ll}
\underset{y \in \mathbb{R}^m,\, Z \in \mathbb{S}^n}{\text{maximize}} & \langle b, y \rangle \\
\text{subject to} & C - \sum_{k=1}^m y_k A_k = Z, \\
& Z \succeq 0.
\end{array} \tag{2}$$

R. Louca is with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA. rl553@cornell.edu
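As a concrete illustration of the notation in (1), the sketch below builds a small hypothetical instance (the matrices `C`, `A1` and the scalar `b1` are invented for illustration, not taken from the paper) and checks primal feasibility numerically: the equality constraint via the Frobenius inner product and $X \succeq 0$ via eigenvalues.

```python
import numpy as np

# Hypothetical 3x3 instance of (1): cost C and a single trace constraint
# <A_1, X> = b_1 (all data invented for illustration).
C = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
A1 = np.eye(3)        # <A_1, X> = tr(X)
b1 = 3.0

def frob(X, Y):
    # Frobenius inner product <X, Y> = tr(XY) for symmetric X, Y.
    return float(np.trace(X @ Y))

def is_psd(X, tol=1e-9):
    # X >= 0 iff every eigenvalue of the symmetric matrix X is nonnegative.
    return bool(np.all(np.linalg.eigvalsh(X) >= -tol))

X = np.eye(3)         # a candidate primal point
print(is_psd(X) and np.isclose(frob(A1, X), b1))   # feasibility: True
print(frob(C, X))                                  # objective value: 5.0
```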

Semidefinite programs are ubiquitous in many diverse areas of engineering and applied mathematics. For one, many prominent design problems arising in control theory and dynamical systems, as well as several convex optimization problems, can be formulated as semidefinite programs. For another, many nonconvex optimization problems, such as the nonconvex quadratically constrained quadratic programs arising in classic problems in control theory, combinatorial optimization, and network optimization, admit convex relaxations as semidefinite programs. Many of the problems that arise in such practical settings possess structural characteristics that, if exploited properly, allow for a more thorough understanding of the problem's properties. In this paper, we consider semidefinite programs where the collective sparsity pattern of the data matrices $C, A_1, A_2, \ldots, A_m$ is described by a given chordal graph. The problem of interest is that of finding upper and lower bounds on the minimal rank of primal optimal solutions. The problem of finding low-rank solutions to semidefinite programs has received increased attention in recent years, partly because a wide class of nonconvex optimization problems admit equivalent reformulations as semidefinite programs subject to a rank constraint. The semidefinite relaxation is obtained by eliminating the rank constraint; therefore, a solution whose rank satisfies the original rank constraint is desirable. Predominantly, the extant literature has approached this problem from two distinct perspectives: one which provides upper and generic lower bounds on the rank of optimal solutions to the semidefinite programs (1)-(2), depending solely on the dimension of the affine space in which feasible solutions lie, and a second, which provides an upper bound on the minimal rank of optimal solutions to (1) depending on properties of the graph generated by the collective sparsity pattern of the data matrices.
In the former perspective, one of the first results was given independently by Barvinok [1] and Pataki [2], who showed that if the semidefinite program (1) is feasible, then it has a feasible solution whose rank is at most $\lfloor\sqrt{2m}\rfloor$.¹ In general, this bound is essentially tight [3], meaning that for any fixed $m$ one can find matrices $A_1, A_2, \ldots, A_m$ and scalars $b_1, b_2, \ldots, b_m$ defining an affine space which intersects the positive semidefinite cone only at points of rank greater than or equal to $\lfloor\sqrt{2m}\rfloor$. Moreover, a solution which satisfies the aforementioned bound can be found in polynomial time [4]. However, a special case for which this bound can be sharpened is given in [3]. A different approach towards the first perspective was introduced by Alizadeh et al. [5] and Shapiro [6], who use analogous arguments to show that, under a strict feasibility condition, optimal solutions to (1) and (2) are generically nondegenerate (cf. Definitions 2.1 and 2.3). A direct consequence of nondegeneracy is the existence of generic upper and lower bounds on the rank of optimal solutions to (1) and (2). In particular, the generic bounds on the rank $r$ of the primal optimal solution are given by
$$n - \lfloor\sqrt{2(n_2 - m)}\rfloor \;\le\; r \;\le\; \lfloor\sqrt{2m}\rfloor.$$
We provide a more careful discussion of these results in Section 2.3. In the latter perspective, a graph theoretic upper bound (whose calculation is computationally intractable in general) on the minimal rank of optimal solutions to (1) was provided by Laurent and Varvitsiotis [7, 8]. This bound depends on a property (the gram dimension) of the graph generated by the collective sparsity pattern of the data matrices (cf. Definition 2.7). A key consequence is that semidefinite programs (1) attaining their optimum have an optimal solution whose rank is bounded from above by the clique number of said graph. The problem of finding the clique number is NP-hard for general graphs. It can, however, be solved in polynomial time for certain graph classes, e.g., chordal graphs.

¹ Here, $\lfloor\sqrt{2k}\rfloor = \lfloor h \rfloor$, where $h$ is the positive real root of $h^2 = 2k$ and $\lfloor h \rfloor$ denotes the integer part of $h$.

In this paper, we provide an approach which unifies these two distinct perspectives for semidefinite programs whose data matrices have a collective sparsity pattern characterized by a given chordal graph. Examples of chordal graphs include acyclic graphs, interval graphs, split graphs, and the complete graph, to name a few. Chordal graphs possess a unique property that merits particular consideration. Namely, every $G$-partial positive semidefinite matrix² has a positive semidefinite completion if and only if $G$ is chordal [9]. Given data matrices with collective sparsity corresponding to a chordal graph $G$, we first reformulate the semidefinite program as a conic linear program over a suitably defined convex chordal matrix cone [10], and then we leverage the aforementioned property to efficiently characterize the facial structure of said cone. By relying on further properties of chordal graphs governing the intersection of their cliques, we obtain, in a manner similar to the results of Pataki [2], Barvinok [1], and Deza et al. [11], an upper bound on the minimal rank of optimal solutions to the conic convex program which depends both on the number of constraints $m$ and on the size of the intersection between the maximal cliques of the graph (these can be computed efficiently for a chordal graph). Moreover, this bound is inherently guaranteed to be no worse than the graph theoretic upper bound of Laurent and Varvitsiotis [8] and translates directly to an upper bound on the minimal rank of optimal solutions to the semidefinite program by relying on a well-known result in matrix completion theory (cf. Lemma 2.10). Besides its importance in characterizing the facial structure of the chordal matrix cone, the aforementioned property of chordal graphs is also critical for characterizing both the tangent cone to the chordal matrix cone at an arbitrary feasible point and the dimension of its lineality space.
These notions are employed in conjunction with a result of Pataki and Tunçel [12], stating that primal nondegeneracy is a generic property of optimal solutions to arbitrary conic convex programs (provided that the underlying cone is pointed, closed, and convex), to obtain a generic lower bound on the rank of optimal solutions to the primal conic convex program at hand. This lower bound translates readily to a generic lower bound on the rank of optimal solutions to the primal semidefinite program by employing a simple argument which bounds from below the rank of at least one principal submatrix of any primal optimal solution to (1) determined by a maximal clique of the chordal graph. The remainder of the paper is organized as follows. Section 2 contains preliminary results, definitions, and some notation used throughout the paper. Section 3 contains our main results, including a reformulation of the sparse semidefinite program as a conic convex program in Section 3.1. In Section 3.2 we characterize the facial structure of the chordal matrix cone and derive an algebraic characterization for the dimension of the faces of said cone; this part also contains our first main result, which provides an upper bound on the rank of extreme points of the chordal matrix cone, and discusses several implications of this result for chordally sparse semidefinite programs. In Section 3.3, we characterize the tangent cone to the chordal matrix cone at an arbitrary point in the cone and provide an algebraic characterization for its lineality space; this part also contains our second main result, which provides a generic lower bound on the rank of primal optimal solutions to the conic convex program, and discusses its implications. We close with a numerical experiment in Section 4.

² Given a graph $G$, a $G$-partial matrix $P$ is a partial matrix whose entries are specified only on the diagonal and at positions corresponding to edges of $G$. It is said to be positive semidefinite if, for every clique of $G$, the principal submatrix of $P$ generated by the indices in the clique is positive semidefinite. See Section 2.3 for a more detailed discussion.

Figure 1: (a) A simple chordal graph $G = (V, E)$ on $|V| = 8$ vertices and $|E| = 13$ edges with $p = 5$ maximal cliques given by $K_1 = \{1, 2, 7\}$, $K_2 = \{2, 3, 7, 8\}$, $K_3 = \{3, 5, 7, 8\}$, $K_4 = \{4, 7\}$, and $K_5 = \{6, 7\}$. The clique number of $G$ is $\omega(G) = 4$. (b) The clique intersection graph of $G$ with edge weights equal to the sizes of the intersections of the cliques. (c) A clique tree of $G$ with the clique intersection property. One can readily verify that the given clique tree is a maximum-weight spanning tree of the clique intersection graph.

2 Notation and Preliminary Results

We introduce some preliminary notation that is used throughout this paper. We denote by $\mathbb{R}^n$ the $n$-dimensional Euclidean space and by $\mathbb{S}^n$ the space of real symmetric matrices of order $n$, a real vector space of dimension $\dim(\mathbb{S}^n) = n_2 := n(n+1)/2$. For a matrix $X \in \mathbb{R}^{m \times n}$, we write $X^\top$ for the transpose of $X$. The column space of $X$ is denoted by $\operatorname{col}(X)$ and its nullspace by $\ker(X)$. The dimension of $\operatorname{col}(X)$ is denoted by $\operatorname{rank}(X)$ and the dimension of $\ker(X)$ by $\operatorname{null}(X)$. We endow $\mathbb{R}^n$ with the usual inner product $\langle x, y \rangle = x^\top y$ for all $x, y \in \mathbb{R}^n$. Similarly, we endow $\mathbb{S}^n$ with the Frobenius inner product $\langle X, Y \rangle = \operatorname{tr}(XY)$ for all $X, Y \in \mathbb{S}^n$. A subset $K \subseteq \mathbb{R}^n$ (or $\mathbb{S}^n$) is a convex cone if it contains zero and $ax + by \in K$ for any positive scalars $a, b$ and any $x, y \in K$. The dual cone of $K$ is given by $K^* = \{y \in \mathbb{R}^n \mid \langle x, y \rangle \ge 0 \text{ for all } x \in K\}$. A cone $K \subseteq \mathbb{R}^n$ is called pointed if $K \cap (-K) = \{0\}$. An extreme point of a convex set $S$ is a point $x \in S$ with the property that if $x = \lambda y + (1-\lambda) z$ for $y, z \in S$ and $\lambda \in (0, 1)$, then either $y = x$ or $z = x$. For a matrix $X \in \mathbb{S}^n$, the notation $X \succeq 0$ is shorthand for $X \in \mathbb{S}^n_+$, the closed convex cone of symmetric $n \times n$ positive semidefinite matrices. Similarly, the notation $X \succ 0$ is shorthand for $X \in \mathbb{S}^n_{++}$, the convex cone of real symmetric $n \times n$ positive definite matrices. For a function $f : \mathbb{R}^n \to \mathbb{R}^m$ and a subset $S \subseteq \mathbb{R}^n$, we denote by $f(S)$ the image of $S$ under $f$.
And, for a subset $P \subseteq \mathbb{R}^m$, we denote by $f^{-1}(P)$ the inverse image of $P$ under $f$. Finally, for any subset $U$ of $V = \{1, 2, \ldots, n\}$, sorted using the natural ordering, we denote by $U(j)$ the $j$th element of $U$.

2.1 Chordal Graphs

In this section, we provide a brief introduction to the graph theoretic notation used throughout the paper. In particular, we focus on chordal graphs and their properties. Let $G = (V, E)$ be a simple graph (i.e., an unweighted, undirected graph with no loops or multiple edges), where $V = \{1, 2, \ldots, n\}$ denotes the set of vertices of $G$ and $E \subseteq V \times V$ denotes the set of edges of $G$. A cycle of $G$ on $k$ nodes, $k \ge 3$, is a sequence of distinct vertices $\{n_1, n_2, \ldots, n_k\} \subseteq V$ such

that $\{(n_1, n_2), (n_2, n_3), \ldots, (n_{k-1}, n_k), (n_k, n_1)\} \subseteq E$. A clique of $G$ is a subset $K \subseteq V$ with the property that $(i, j) \in E$ for all distinct $i, j \in K$. A clique is said to be maximal if it is not a subset of a larger clique. For a graph $G$, its clique number, denoted by $\omega(G)$, is the size of the largest maximal clique of $G$. The problem of finding all maximal cliques of an arbitrary graph is in general NP-hard [13]. However, certain families of graphs admit polynomial-time algorithms for finding maximal cliques. One such family is that of chordal graphs, which are also shown to have at most $n$ maximal cliques [14]. A simple graph $G$ is said to be chordal (or triangulated, or a rigid circuit graph) if every cycle of length greater than three has a chord, namely, an edge connecting two nonconsecutive vertices on the cycle. Examples of chordal graphs include the complete graph, trees, and interval graphs. Chordal graphs find a plethora of applications in areas such as graphical models, Gaussian elimination, and algorithmic graph theory. Moreover, several problems that are hard on arbitrary classes of graphs can be solved in polynomial time when the input graph is chordal. These include the maximum independent set problem, the maximum weighted clique problem, and the minimum coloring problem, to name a few. We refer the reader to [15] and the references therein for a survey of some known properties of chordal graphs. Let $G$ be a chordal graph with maximal cliques $K_1, K_2, \ldots, K_p$. The clique intersection graph of $G$, denoted by $G_I$, is a weighted graph that has the maximal cliques $K_1, K_2, \ldots, K_p$ as its vertices and an edge with weight $|K_i \cap K_j|$ between distinct maximal cliques $K_i, K_j$ whenever $K_i \cap K_j \neq \emptyset$. A clique tree of $G$, denoted by $G_T$, is a spanning tree of the clique intersection graph.
A clique tree is said to have the clique intersection property if for every pair of distinct cliques $K_i, K_j$, $1 \le i < j \le p$, the set of vertices in $K_i \cap K_j$ is contained in every clique on the path connecting $K_i$ and $K_j$ in the clique tree [15]. In [16], Gavril shows that a connected graph $G$ is chordal if and only if there exists a clique tree for which the clique intersection property holds. Moreover, Bernstein and Goodman [17] show that the set of clique trees of $G$ that have the clique intersection property is identical to the set of maximum-weight spanning trees of the clique intersection graph of $G$. Figure 1 gives an example of a chordal graph together with the corresponding clique intersection graph and a clique tree satisfying the clique intersection property.

2.2 Constraint Nondegeneracy and Conic Programming

In this section, we introduce some notions from variational analysis which will be used in the sequel. Let $K \subseteq \mathbb{R}^n$ be a closed convex cone. The lineality space (not to be confused with the linear hull) of $K$, denoted by $\operatorname{lin} K$, is the largest linear subspace of $\mathbb{R}^n$ contained in $K$; it is given by $\operatorname{lin} K = K \cap (-K)$. Note that when $K$ is pointed, $\operatorname{lin} K = \{0\}$. The tangent cone of a convex set $S \subseteq \mathbb{R}^n$ at a point $x \in S$ gives a local approximation of the set $S$ at $x$. In particular, the tangent cone to $S$ at $x$ is the closure of the cone generated by $S - \{x\}$, i.e.,
$$T_S(x) = \operatorname{cl}\{\lambda y \mid \lambda \ge 0,\; y \in S - \{x\}\},$$
where $\operatorname{cl} P$ denotes the closure of the set $P$. Note that if $S$ is a convex set, then the tangent cone to $S$ at an arbitrary point $x \in S$ is a (closed) convex set. As an example, if $S = \mathbb{R}^n$, then for all $x \in \mathbb{R}^n$, $T_S(x) = \mathbb{R}^n$. On the other hand, if $S = \{\bar{x}\} \subseteq \mathbb{R}^n$ and $x = \bar{x}$, then $T_S(x) = \{0\}$.
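The Bernstein-Goodman characterization suggests a simple recipe for computing a clique tree: form the clique intersection graph and take any maximum-weight spanning tree. A minimal pure-Python sketch using Kruskal's algorithm on the maximal cliques of the Figure 1 graph (the code and variable names are illustrative, not from the paper):

```python
# Clique tree via a maximum-weight spanning tree of the clique intersection
# graph (Bernstein-Goodman). Cliques are those of the Figure 1 example.
cliques = {1: {1, 2, 7}, 2: {2, 3, 7, 8}, 3: {3, 5, 7, 8},
           4: {4, 7}, 5: {6, 7}}

# Clique intersection graph: an edge (i, j) weighted |K_i ∩ K_j| when nonempty.
edges = sorted(((len(cliques[i] & cliques[j]), i, j)
                for i in cliques for j in cliques
                if i < j and cliques[i] & cliques[j]), reverse=True)

# Kruskal: scan edges by decreasing weight, joining distinct components.
parent = {i: i for i in cliques}

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

tree = []
for w, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        tree.append((i, j, w))

# For Figure 1 the resulting tree has total weight 3 + 2 + 1 + 1 = 7.
print(tree)
```

Ties among weight-1 edges mean several maximum-weight spanning trees (hence several valid clique trees) exist; Kruskal simply returns one of them.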

We are now ready to introduce the notion of nondegeneracy introduced by Robinson in [18], which is central to our analysis throughout the paper. For any two sets $S_1, S_2 \subseteq \mathbb{R}^n$, let $S_1 + S_2$ be the Minkowski sum of $S_1$ and $S_2$, i.e., $S_1 + S_2 = \{x + y \mid x \in S_1, y \in S_2\}$. Moreover, for a matrix $A \in \mathbb{R}^{m \times n}$ and a set $S \subseteq \mathbb{R}^n$, we denote by $A(S)$ the image of $S$ under $A$. We have the following definition from [19].

Definition 2.1 (Nondegeneracy). Let $S \subseteq \mathbb{R}^n$ and $P \subseteq \mathbb{R}^m$ be closed convex sets and let $F : \mathbb{R}^n \to \mathbb{R}^m$ be a continuously differentiable convex function. Consider a point $x \in S$ with $F(x) \in P$ and let $F'(x) \in \mathbb{R}^{m \times n}$ be the Jacobian matrix of $F$ at $x$. We say that $x$ is nondegenerate if
$$\operatorname{lin} T_P(F(x)) + F'(x)\big(\operatorname{lin} T_S(x)\big) = \mathbb{R}^m.$$

We provide the following example, which gives an algebraic characterization of nondegeneracy for primal semidefinite programs in standard form and shows that the above definition reduces to the definition of nondegeneracy in Theorem 6 of [5].

Example 2.2 (Primal Nondegeneracy of SDPs). Let $\mathcal{A} : \mathbb{S}^n \to \mathbb{R}^m$ be the linear map given by $\mathcal{A}(X) = [\langle A_1, X \rangle, \langle A_2, X \rangle, \ldots, \langle A_m, X \rangle]^\top$ and notice that the feasible set of the semidefinite program (1) is precisely $\mathcal{A}^{-1}(b) \cap \mathbb{S}^n_+$. By setting $S = \mathbb{S}^n_+ \subseteq \mathbb{S}^n$, $P = \{0\} \subseteq \mathbb{R}^m$, and $F(X) = \mathcal{A}(X) - b$ in Definition 2.1, we obtain that a point $X \in \mathbb{S}^n_+ \cap \mathcal{A}^{-1}(b)$ is nondegenerate if $\operatorname{lin} T_{\{0\}}(0) + \mathcal{A}(\operatorname{lin} T_{\mathbb{S}^n_+}(X)) = \mathbb{R}^m$, i.e., if and only if $\mathcal{A}(\operatorname{lin} T_{\mathbb{S}^n_+}(X)) = \mathbb{R}^m$, where the equivalence follows from $T_{\{0\}}(0) = \{0\}$. Let $Q \in \mathbb{R}^{n \times n}$ be a matrix whose columns form a set of orthonormal eigenvectors of $X$ and suppose that $\operatorname{rank}(X) = r$. One can show (see, e.g., [5]) that
$$\operatorname{lin} T_{\mathbb{S}^n_+}(X) = \left\{ Q \begin{bmatrix} U & V \\ V^\top & 0 \end{bmatrix} Q^\top \;\middle|\; U \in \mathbb{S}^r,\; V \in \mathbb{R}^{r \times (n-r)} \right\}.$$
Partition $Q$ as $Q = [Q_1, Q_2]$, where the columns of $Q_1 \in \mathbb{R}^{n \times r}$ and $Q_2 \in \mathbb{R}^{n \times (n-r)}$ correspond to the nonzero and the zero eigenvalues of $X$, respectively.
Then, for any $Y \in \operatorname{lin} T_{\mathbb{S}^n_+}(X)$, there exist $U \in \mathbb{S}^r$ and $V \in \mathbb{R}^{r \times (n-r)}$ such that
$$\langle A_k, Y \rangle = \left\langle \begin{bmatrix} Q_1^\top A_k Q_1 & Q_1^\top A_k Q_2 \\ Q_2^\top A_k Q_1 & Q_2^\top A_k Q_2 \end{bmatrix}, \begin{bmatrix} U & V \\ V^\top & 0 \end{bmatrix} \right\rangle = \left\langle \begin{bmatrix} Q_1^\top A_k Q_1 & Q_1^\top A_k Q_2 \\ Q_2^\top A_k Q_1 & 0 \end{bmatrix}, Z \right\rangle,$$
where the $(2,2)$ block of $Z \in \mathbb{S}^n$ is arbitrary. Therefore, the condition $\mathcal{A}(\operatorname{lin} T_{\mathbb{S}^n_+}(X)) = \mathbb{R}^m$ amounts to checking linear independence of the matrices
$$B_k = \begin{bmatrix} Q_1^\top A_k Q_1 & Q_1^\top A_k Q_2 \\ Q_2^\top A_k Q_1 & 0 \end{bmatrix}, \quad k = 1, 2, \ldots, m.$$
This is precisely the primal nondegeneracy condition in Theorem 6 of [5].
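Example 2.2's criterion is easy to test numerically: form the matrices $B_k$ in an eigenbasis of $X$ and check their linear independence. A sketch under invented data (the function name and the 3×3 instance are our own, not from the paper):

```python
import numpy as np

def primal_nondegenerate(X, A_list, tol=1e-8):
    # Numerical test of the criterion in Example 2.2: build
    # B_k = [[Q1'A_kQ1, Q1'A_kQ2], [Q2'A_kQ1, 0]] and check that the
    # B_k are linearly independent.
    w, Q = np.linalg.eigh(X)
    pos = w > tol
    Q1, Q2 = Q[:, pos], Q[:, ~pos]
    s = Q2.shape[1]                      # s = n - rank(X)
    rows = []
    for A in A_list:
        top = np.hstack([Q1.T @ A @ Q1, Q1.T @ A @ Q2])
        bot = np.hstack([Q2.T @ A @ Q1, np.zeros((s, s))])
        rows.append(np.vstack([top, bot]).ravel())
    return np.linalg.matrix_rank(np.vstack(rows), tol=tol) == len(A_list)

# Invented instance: X = diag(1, 1, 0) with two constraint matrices.
X = np.diag([1.0, 1.0, 0.0])
A1, A2 = np.eye(3), np.diag([1.0, -1.0, 0.0])
print(primal_nondegenerate(X, [A1, A2]))   # True
```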

Let $K \subseteq \mathbb{R}^n$ be a closed convex cone. In its most general form, a primal conic linear program and its dual are given by
$$\begin{array}{ll}
\underset{x \in \mathbb{R}^n}{\text{minimize}} & \langle c, x \rangle \\
\text{subject to} & Ax = b, \\
& x \in K
\end{array}
\qquad \text{and} \qquad
\begin{array}{ll}
\underset{y \in \mathbb{R}^m,\, z \in \mathbb{R}^n}{\text{maximize}} & \langle b, y \rangle \\
\text{subject to} & c - A^\top y = z, \\
& z \in K^*,
\end{array} \tag{3}$$
respectively, where $c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, and $A \in \mathbb{R}^{m \times n}$ are given. We are interested in generic properties of conic convex programs of the form (3). By generic, we mean a property that holds for almost all data $(c, b, A)$ (in the sense of Lebesgue measure) defining a conic linear program. We make this precise in the following definition.

Definition 2.3. Let $L(\mathbb{R}^n, \mathbb{R}^m)$ denote the set of all linear operators from $\mathbb{R}^n$ into $\mathbb{R}^m$. A property $\mathcal{P}$ of conic convex programs is generic if it holds for almost all triples $(c, b, A)$, i.e., the set of triples for which $\mathcal{P}$ fails to hold has measure zero in $\mathbb{R}^n \times \mathbb{R}^m \times L(\mathbb{R}^n, \mathbb{R}^m)$.

Alizadeh et al. [5] and Shapiro [6] first showed that primal and dual nondegeneracy are generic properties of primal and dual optimal solutions to semidefinite programs. This result was later generalized by Pataki and Tunçel [12], who showed that said properties are generic for optimal solutions to general conic convex programs whenever the underlying cone is pointed, closed, convex, and with a nonempty interior. We have the following result from [12].

Theorem 2.4. [12, Thm. 5] Let $K$ be a pointed, closed, convex cone with nonempty interior and consider the primal and dual conic convex programs (3). Then primal and dual nondegeneracy of optimal solutions to (3) are generic properties.

2.3 Bounds on the Minimal Rank of Solutions to Semidefinite Programs

In this section, we briefly review some key results from the literature on bounds on the minimal rank of optimal solutions to semidefinite programs. Barvinok [1] and Pataki [2] provide an upper bound on the rank of extreme points of the feasible set of semidefinite programs in primal standard form (1), stating that if $X$ is an extreme point, then
$$\operatorname{rank}(X) \le \lfloor\sqrt{2m}\rfloor, \tag{4}$$
where $\lfloor a \rfloor$ denotes the integer part of $a \in \mathbb{R}$.
This bound translates readily to an upper bound on the minimal rank of optimal solutions to the semidefinite program (1) by a simple argument establishing the existence of an extreme point in the optimal solution set. We state the key lemma here, as we use it in the proof of our main Theorem 3.6.

Lemma 2.5. [20, Lemma II.3.5] Let $S \subseteq \mathbb{R}^n$ be a nonempty closed convex set which does not contain straight lines. Then $S$ has an extreme point.

In light of Lemma 2.5, let OPT denote the optimal value of (1) and note that if OPT is finite, then the optimal solution set, given by
$$\{X \in \mathbb{S}^n_+ \mid \langle C, X \rangle = \mathrm{OPT},\; \langle A_k, X \rangle = b_k,\; k = 1, 2, \ldots, m\},$$

is nonempty, closed, convex, and does not contain straight lines, as it is a subset of $\mathbb{S}^n_+$. Hence, it contains an extreme point and therefore a point whose rank satisfies (4). A different, but equally valuable, approach is given by Alizadeh et al. [5] and Shapiro [6], who show that primal and dual nondegeneracy (cf. Definition 2.1) are generic properties of semidefinite programs. The generic nature of these properties is beneficial in two ways. On one hand, dual (primal) nondegeneracy provides a sufficient condition for the uniqueness of primal (dual) optimal solutions. The sufficient condition is also necessary under the additional assumption of strict complementarity,³ which is also shown to be a generic property of optimal solutions to semidefinite programs. On the other hand, an immediate consequence of primal and dual nondegeneracy is the existence of generic (cf. Definition 2.3) upper and lower bounds on the rank of optimal solutions to (1)-(2). We have the following theorem from [5].

Theorem 2.6. Suppose that $X$ and $(y, Z)$ are primal and dual nondegenerate and optimal for (1)-(2), respectively. Then
$$n - \lfloor\sqrt{2(n_2 - m)}\rfloor \le \operatorname{rank}(X) \le \lfloor\sqrt{2m}\rfloor \quad \text{and} \quad n - \lfloor\sqrt{2m}\rfloor \le \operatorname{rank}(Z) \le \lfloor\sqrt{2(n_2 - m)}\rfloor.$$

Remark 1. The reader can readily verify that the upper bound on $\operatorname{rank}(X)$ in Theorem 2.6 coincides with the upper bound in (4). Moreover, it is worth noting that the proof of Barvinok [1] and Pataki [2] does not require nondegeneracy assumptions. Without these assumptions, however, the upper bound on $\operatorname{rank}(X)$ need not hold for all primal optimal solutions. The bounds in Theorem 2.6 and (4) are solely functions of the number of constraints in (1) (or, equivalently, under the assumption that the matrices $A_1, A_2, \ldots, A_m$ are linearly independent, the codimension of the affine space $\mathcal{A} = \{X \in \mathbb{S}^n \mid \langle A_k, X \rangle = b_k,\; k = 1, 2, \ldots, m\}$).
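The bound formulas $n - \lfloor\sqrt{2(n_2 - m)}\rfloor \le r \le \lfloor\sqrt{2m}\rfloor$ (our reading of Theorem 2.6) are elementary to evaluate; a small sketch with hypothetical values of $n$ and $m$ (the helper names are ours):

```python
import math

def n2(n):
    # Triangular number n_2 := n(n + 1) / 2 = dim(S^n).
    return n * (n + 1) // 2

def generic_rank_bounds(n, m):
    # Generic bounds on rank(X) in the form
    #   n - floor(sqrt(2(n_2 - m))) <= rank(X) <= floor(sqrt(2m)),
    # clipped to the trivial range [0, n]. math.isqrt gives the
    # integer part of the square root exactly.
    lo = n - math.isqrt(2 * (n2(n) - m))
    hi = math.isqrt(2 * m)
    return max(lo, 0), min(hi, n)

print(generic_rank_bounds(10, 20))   # (2, 6)
```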
In a different vein, and by leveraging the collective sparsity pattern of the data matrices $C, A_1, A_2, \ldots, A_m$, Laurent and Varvitsiotis [7, 8] provide a graph theoretic upper bound on the rank of optimal solutions to (1) that is independent of the number of constraints. To state the result precisely, we require an essential definition, which we use throughout the paper.

Definition 2.7. The graph of a semidefinite program specified by matrices $C, A_1, A_2, \ldots, A_m \in \mathbb{S}^n$ is a simple graph $G = (V, E)$, where $V = \{1, 2, \ldots, n\}$ and $E = \{(i, j) \in V \times V \mid i \neq j \text{ and } C_{ij} \neq 0 \text{ or } \exists\, k \text{ with } [A_k]_{ij} \neq 0\}$.

The following theorem is an immediate consequence of Theorem 2.1 and a subsequent theorem in [8].

Theorem 2.8. Let $G = (V, E)$ be the graph of the semidefinite program (1). If the primal problem (1) attains its optimum, then there exists an optimal solution $X$ with $\operatorname{rank}(X) \le \omega(G)$.

Computing the clique number of an arbitrary graph $G$ is, in general, NP-hard. As noted above, however, the clique number can be computed efficiently when $G$ is chordal. An immediate consequence of Theorem 2.8 is that semidefinite programs (attaining their optimum) whose underlying

³ In the context of semidefinite programming, an optimal primal and dual pair $(X, y, Z)$ is said to be strictly complementary if $\operatorname{rank}(X) + \operatorname{rank}(Z) = n$.

graph $G$ is acyclic have optimal solutions whose rank is bounded above by two, since acyclic graphs have clique number less than or equal to two. Moreover, in the case of a semidefinite program with acyclic graph structure, Theorem 2.8 provides a tighter upper bound than (4) whenever the number of constraints in (1) is greater than five. Nevertheless, one can readily construct examples where Theorem 2.8 gives looser bounds than (4). We conclude this section with a result concerning positive semidefinite completions of partial matrices. For a given simple graph $G = (V, E)$ on $n$ vertices, define a $G$-partial real symmetric matrix $X_G$ as a set of real numbers indexed by the set
$$I_G := \{(i, j) \mid i = j \in V, \text{ or } (i, j) \in E \text{ and } i < j\}. \tag{5}$$
A completion of $X_G$ is a matrix $X \in \mathbb{S}^n$ which satisfies $X_{ij} = [X_G]_{ij}$ for all $(i, j) \in I_G$. We say that $X \in \mathbb{S}^n$ is a positive semidefinite completion of $X_G$ if and only if $X \succeq 0$. Moreover, a $G$-partial matrix is said to be $G$-partial positive semidefinite if for any clique $K$ of $G$ the principal submatrix $X_G(K) = [X_{ij}]_{i, j \in K} \in \mathbb{S}^{|K|}$ of $X_G$ is positive semidefinite. A graph $G$ is positive semidefinite completable if and only if any $G$-partial positive semidefinite matrix $X_G$ has a positive semidefinite completion. The following result by Grone et al. [9] gives a necessary and sufficient condition under which a graph $G$ is positive semidefinite completable.

Theorem 2.9. [9, Thm. 7 and Prop. 2] The graph $G$ is positive semidefinite completable if and only if $G$ is chordal.

Theorem 2.9, together with the following result from [8], lies at the heart of our analysis.

Lemma 2.10. [8] Let $G$ be a chordal graph with maximal cliques $K_1, K_2, \ldots, K_p$ and consider a $G$-partial positive semidefinite matrix $X_G$. Then there exists a positive semidefinite matrix completion $X$ of $X_G$ with
$$\operatorname{rank}(X) \le \max_{1 \le i \le p} \operatorname{rank}(X_G(K_i)).$$

We remark that the proof of Lemma 2.10 is constructive; thus it naturally leads to an algorithm for constructing a positive semidefinite matrix completion which satisfies the aforementioned bound.

2.4 The Symmetric and Graph Kronecker Products

Throughout the paper we make use of two linear operators: the symmetric and graph Kronecker products. We introduce them here for coherence and discuss them further in Appendix B. The vector space $\mathbb{S}^n$ can be identified with the space $\mathbb{R}^{n_2}$ through a suitably defined function which vectorizes symmetric matrices. More precisely, let $\operatorname{svec} : \mathbb{S}^n \to \mathbb{R}^{n_2}$ be the function given by
$$\operatorname{svec}(X) = \big[X_{11},\; \sqrt{2}X_{21},\; \ldots,\; \sqrt{2}X_{n1},\; X_{22},\; \sqrt{2}X_{32},\; \ldots,\; \sqrt{2}X_{n2},\; \ldots,\; X_{nn}\big]^\top$$
and note that $\operatorname{svec}$ maps $\mathbb{S}^n$ to $\mathbb{R}^{n_2}$ isometrically, i.e., for any $X, Y \in \mathbb{S}^n$, $\langle X, Y \rangle = \langle \operatorname{svec}(X), \operatorname{svec}(Y) \rangle$. The symmetric Kronecker product of two square matrices $X, Y \in \mathbb{R}^{n \times n}$, denoted by $X \otimes_s Y$, can be defined implicitly as follows:
$$(X \otimes_s Y)\operatorname{svec}(S) = \tfrac{1}{2}\operatorname{svec}(XSY^\top + YSX^\top).$$
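The isometry of svec and the implicit action of $\otimes_s$ can be checked numerically. The following sketch (helper names are our own) verifies $\langle X, Y \rangle = \langle \operatorname{svec}(X), \operatorname{svec}(Y) \rangle$ and that $I \otimes_s I$ acts as the identity on $\operatorname{svec}(S)$:

```python
import numpy as np

def svec(X):
    # Column-wise lower triangle of symmetric X, off-diagonals scaled by
    # sqrt(2), so that <X, Y> = svec(X) . svec(Y).
    n = X.shape[0]
    out = []
    for j in range(n):
        out.append(X[j, j])
        out.extend(np.sqrt(2.0) * X[j + 1:, j])
    return np.array(out)

def sym_kron_apply(A, B, S):
    # Implicit action of the symmetric Kronecker product:
    # (A (x)_s B) svec(S) = (1/2) svec(A S B' + B S A').
    return 0.5 * svec(A @ S @ B.T + B @ S @ A.T)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
X = M + M.T
Y = np.diag(rng.standard_normal(4))

# Isometry: <X, Y> = <svec(X), svec(Y)>.
assert np.isclose(np.trace(X @ Y), svec(X) @ svec(Y))
# I (x)_s I acts as the identity on svec(S).
assert np.allclose(sym_kron_apply(np.eye(4), np.eye(4), X), svec(X))
print("checks passed")
```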

A direct definition of $X \otimes_s Y$ is given in Appendix B. In Lemma 2.11, we state some properties of the symmetric Kronecker product that we use throughout the paper. The proofs of these properties can be found in [21], which provides an excellent survey of the Kronecker and symmetric Kronecker products and their properties.

Lemma 2.11. Let $A, B, C, D \in \mathbb{R}^{n \times n}$. Then,
(a) $(A \otimes_s B)^\top = A^\top \otimes_s B^\top$.
(b) $A \otimes_s (B + C) = A \otimes_s B + A \otimes_s C$.
(c) $(A \otimes_s B)(C \otimes_s D) = \tfrac{1}{2}(AC \otimes_s BD + AD \otimes_s BC)$.
(d) If $A, B \in \mathbb{S}^n$ commute and have eigenvalues $\alpha_1, \alpha_2, \ldots, \alpha_n$ and $\beta_1, \beta_2, \ldots, \beta_n$, respectively, then the $n_2$ eigenvalues of $A \otimes_s B$ are given by $\tfrac{1}{2}(\alpha_i\beta_j + \beta_i\alpha_j)$, $1 \le i \le j \le n$.

We now introduce a variant of the symmetric Kronecker product for matrices whose sparsity pattern is described by a simple graph $G$. Denote by $\mathbb{S}^n_G \subseteq \mathbb{S}^n$ the set of symmetric matrices of order $n$ with sparsity pattern $I_G$, where $I_G$ is as defined in (5). Let $n_G := |I_G|$ and let $(P_G)_{ij,kl}$ be the entry of the matrix $P_G \in \mathbb{R}^{n_G \times n_2}$ in the row defining element $(i, j) \in I_G$ and in the column that is multiplied with the element $S_{kl}$ in $\operatorname{svec}(S)$, for an arbitrary $S \in \mathbb{S}^n_G$. Define $P_G$ by setting $[P_G]_{ij,kl} = 1$ if $i = k$ and $j = l$, and equal to zero otherwise. Moreover, let $\operatorname{svec}_G : \mathbb{S}^n_G \to \mathbb{R}^{n_G}$ be the function given by
$$\operatorname{svec}_G(S) = P_G \operatorname{svec}(S). \tag{6}$$
Similar to the function $\operatorname{svec}$, the definition of $\operatorname{svec}_G$ yields an inner product equivalence, namely $\langle X, Y \rangle = \langle \operatorname{svec}_G(X), \operatorname{svec}_G(Y) \rangle$ for all $X, Y \in \mathbb{S}^n_G$. The graph Kronecker product operator, denoted by $\otimes_G$, can be defined for any two real matrices $X, Y \in \mathbb{R}^{k \times n}$ as a mapping on the vector $\operatorname{svec}_G(S)$, as follows:
$$(X \otimes_G Y)\operatorname{svec}_G(S) = \tfrac{1}{2}\operatorname{svec}(XSY^\top + YSX^\top).$$
As in the case of the symmetric Kronecker product, we give a direct definition of $X \otimes_G Y$ in Appendix B.

3 Main Results

3.1 The Chordal Matrix Cone $\mathbb{S}^n_{G,c}$

For the remainder of the paper, let us denote by $G = (V, E)$ the graph of the semidefinite program (1), as defined in Definition 2.7, and by $I_G$ the index set defined in (5). We assume throughout the paper that $G$ is chordal.

Assumption 1.
The graph $G$ is chordal with maximal cliques $K_1, K_2, \ldots, K_p$, each of size $n_i := |K_i|$.

We continue to let $\mathbb{S}^n_G \subseteq \mathbb{S}^n$ be the set of symmetric matrices of order $n$ with sparsity pattern $I_G$. The dimension of this space is $\dim(\mathbb{S}^n_G) = n_G := |I_G|$. Let $\mathbb{S}^n_{G,c} \subseteq \mathbb{S}^n_G$ be the set of matrices in $\mathbb{S}^n_G$ that have a positive semidefinite matrix completion. More precisely,
$$\mathbb{S}^n_{G,c} := \{X \in \mathbb{S}^n_G \mid X \text{ has a positive semidefinite completion}\} = \{\operatorname{proj}_{\mathbb{S}^n_G}(X) \mid X \in \mathbb{S}^n_+\},$$
where $\operatorname{proj}_{\mathbb{S}^n_G} : \mathbb{S}^n \to \mathbb{S}^n_G$ denotes the projection onto the subspace $\mathbb{S}^n_G$. Throughout the paper, we refer to $\mathbb{S}^n_{G,c}$ as the chordal matrix cone. In [10], Vandenberghe et al. show that the subset $\mathbb{S}^n_{G,c} \subseteq \mathbb{S}^n_G$ is a closed, convex, pointed cone with a nonempty interior (cf. Appendix A for a proof). The dual cone of $\mathbb{S}^n_{G,c}$ is $(\mathbb{S}^n_{G,c})^* = \mathbb{S}^n_G \cap \mathbb{S}^n_+$ (see, e.g., p. 121 of [10]). Notice that, contrary to the cone of positive semidefinite matrices, the cone $\mathbb{S}^n_{G,c}$ is not self-dual. Consider the following primal conic optimization problem:
$$\begin{array}{ll}
\underset{X \in \mathbb{S}^n_G}{\text{minimize}} & \langle C, X \rangle \\
\text{subject to} & \langle A_k, X \rangle = b_k, \quad k = 1, 2, \ldots, m, \\
& X \in \mathbb{S}^n_{G,c},
\end{array} \tag{7}$$
where $C, A_1, A_2, \ldots, A_m \in \mathbb{S}^n_G$ and $b = [b_1, b_2, \ldots, b_m]^\top \in \mathbb{R}^m$. The dual problem of (7) is given by
$$\begin{array}{ll}
\underset{y \in \mathbb{R}^m,\, Z \in \mathbb{S}^n_G}{\text{maximize}} & \langle b, y \rangle \\
\text{subject to} & C - \sum_{k=1}^m y_k A_k = Z, \\
& Z \succeq 0.
\end{array} \tag{8}$$
Weak duality (i.e., $\langle b, y \rangle \le \langle C, X \rangle$) implies that for primal and dual feasible points $X$ and $(y, Z)$, the duality gap is $\langle X, Z \rangle$. We make the following assumptions, which apply throughout the paper.

Assumption 2. The matrices $A_1, A_2, \ldots, A_m$ are linearly independent, i.e., they span an $m$-dimensional subspace of $\mathbb{S}^n_G$.

The linear independence assumption is without loss of generality. If the matrices $A_k$, $k = 1, \ldots, m$, are linearly dependent, one can choose a basis of, say, $l < m$ matrices from $\{A_k \mid k = 1, 2, \ldots, m\}$ and remove the other $m - l$ equality constraints to establish an equivalent problem in which the assumption holds.

Assumption 3. There exists a primal feasible point $X \in \operatorname{int} \mathbb{S}^n_{G,c}$ (the interior of $\mathbb{S}^n_{G,c}$) and a dual feasible point $(y, Z)$ with $Z \succ 0$.
Assumption 3 is a Slater condition; it guarantees that strong duality holds, i.e., $\langle X, Z \rangle = 0$ for optimal solutions $X$ and $(y, Z)$.

Remark 2. Contrary to semidefinite programming, where the condition $\langle X, Z \rangle = 0$ implies that $XZ = 0$ and $\operatorname{rank}(X) + \operatorname{rank}(Z) \le n$ (a condition known as complementarity), the strong duality condition $\langle X, Z \rangle = 0$ for the conic convex program (7)-(8) does not in general guarantee that $XZ = 0$ or that $\operatorname{rank}(X) + \operatorname{rank}(Z) \le n$, as the primal optimal solution $X$ of (7) need not be positive semidefinite.
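To make the cone concrete: since a $G$-partial positive semidefinite matrix for chordal $G$ is always completable (Theorem 2.9), membership in $\mathbb{S}^n_{G,c}$ reduces to PSD tests on maximal-clique principal submatrices. The sketch below uses the path graph $1\!-\!2\!-\!3$ (chordal, with cliques $\{1,2\}$ and $\{2,3\}$) and then produces one positive semidefinite completion via the classical single-missing-entry formula $x_{13} = x_{12}x_{23}/x_{22}$ (valid when $x_{22} > 0$); this illustrates completability only, not the rank-minimizing completion of Lemma 2.10. The function name and the numerical data are our own.

```python
import numpy as np

def in_chordal_cone(X, cliques, tol=1e-9):
    # Membership test for S^n_{G,c} with chordal G: every principal
    # submatrix indexed by a maximal clique must be PSD.
    return all(np.all(np.linalg.eigvalsh(X[np.ix_(K, K)]) >= -tol)
               for K in cliques)

# Path graph 1-2-3 (chordal): maximal cliques {1,2} and {2,3}; the (1,3)
# entry is unspecified (stored as 0 as a placeholder in the array).
cliques = [[0, 1], [1, 2]]
Xg = np.array([[1.0, 0.8, 0.0],
               [0.8, 1.0, 0.5],
               [0.0, 0.5, 1.0]])
assert in_chordal_cone(Xg, cliques)

# Classical single-missing-entry completion (assumes X[1,1] > 0): setting
# x13 = x12 * x23 / x22 yields a positive semidefinite completion.
X = Xg.copy()
X[0, 2] = X[2, 0] = X[0, 1] * X[1, 2] / X[1, 1]
assert np.all(np.linalg.eigvalsh(X) >= -1e-9)
print(round(X[0, 2], 6))   # 0.4
```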

Under the assumption that the graph $G$ is chordal (cf. Assumption 1), Theorem 2.9 implies that the primal conic convex program (7) admits an equivalent reformulation as a semidefinite program. In particular, for each clique $K_i$ of $G$, let $W_i \in \mathbb{R}^{n \times n_i}$ be the matrix whose $j$th column is equal to the $K_i(j)$th standard basis vector in $\mathbb{R}^n$. Then, one can readily verify that the feasible and optimal solution sets of the semidefinite program
$$\begin{array}{ll}
\underset{X \in \mathbb{S}^n_G}{\text{minimize}} & \langle C, X \rangle \\
\text{subject to} & \langle A_k, X \rangle = b_k, \quad k = 1, 2, \ldots, m, \\
& W_i^\top X W_i \succeq 0, \quad i = 1, 2, \ldots, p,
\end{array} \tag{9}$$
are equal to the corresponding feasible and optimal solution sets of the conic convex program (7). We state this as a lemma.

Lemma 3.1. Let $G$ be a chordal graph with maximal cliques $K_1, K_2, \ldots, K_p$. Then the cone $\mathbb{S}^n_{G,c}$ admits the equivalent reformulation
$$\mathbb{S}^n_{G,c} = \{X \in \mathbb{S}^n_G \mid W_i^\top X W_i \succeq 0 \text{ for all } i = 1, 2, \ldots, p\}.$$

Proof. The proof follows directly from Theorem 2.9.

3.2 Facial Structure of the Chordal Matrix Cone $\mathbb{S}^n_{G,c}$

In this section, we study the facial structure of the cone $\mathbb{S}^n_{G,c}$ when the underlying graph $G$ is chordal. First, we provide a characterization of the faces of $\mathbb{S}^n_{G,c}$ (cf. Proposition 3.3) and then a characterization of their dimension (cf. Theorem 3.5). Throughout this section, we let $F(x, S)$ be the smallest face of an arbitrary convex set $S$ that contains $x$; that is, $F(x, S)$ is the face of $S$ that contains $x$ in its relative interior. As an example, $x$ is an extreme point (cf. Section 2) of $S$ if and only if $F(x, S) = \{x\}$. We start with a lemma, which is a direct consequence of Proposition 2.1 and Corollary 2.2 of [22], and which establishes that the cone $\mathbb{S}^n_{G,c}$ is facially exposed, i.e., every proper face of $\mathbb{S}^n_{G,c}$ is exposed. Recall that a face $F$ of a convex set $S$ is said to be exposed if either $F = S$ or there exists a supporting hyperplane $H$ of $S$ such that $H \cap S = F$, and it is said to be proper if $F$ is neither empty nor equal to $S$.

Lemma 3.2. The convex cone $\mathbb{S}^n_{G,c}$ is facially exposed.

Proof.
The proof of Lemma 3.2 is deferred to Appendix C. Recall from Section 3.1 the definition of the matrices W_i ∈ R^{n×n_i}, i = 1, 2, ..., p. We have the following Proposition characterizing the faces of S^n_{G,c}. Its proof follows largely from arguments in Proposition 12.3 of Chapter 2 in [20]. We defer the proof of the Proposition to Appendix C, as the ideas in the proof are not exploited in the sequel; the reader can skip it without loss of continuity.
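The clique-wise membership test of Lemma 3.1 is easy to carry out explicitly. The sketch below does so on the path graph with 3 nodes (a chordal graph with maximal cliques K_1 = {1, 2} and K_2 = {2, 3}; 0-indexed in the code); the graph and the matrix X are illustrative choices of ours, not taken from the paper.

```python
# Membership test for S^n_{G,c} via Lemma 3.1: X lies in the cone iff every
# clique block W_i^T X W_i is positive semidefinite.

n = 3
cliques = [[0, 1], [1, 2]]  # maximal cliques of the path graph on 3 nodes

def W(K):
    # W_i in R^{n x n_i}: its j-th column is the K_i(j)-th standard basis vector
    return [[1.0 if r == K[j] else 0.0 for j in range(len(K))] for r in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def clique_block(X, K):
    # W_i^T X W_i, i.e. the principal submatrix of X indexed by K_i
    Wi = W(K)
    return matmul(matmul(transpose(Wi), X), Wi)

def psd2(B):
    # PSD test for a 2x2 symmetric block: nonnegative diagonal and determinant
    return B[0][0] >= 0 and B[1][1] >= 0 and B[0][0] * B[1][1] - B[0][1] * B[1][0] >= 0

# X lies in S^3_G: the (1,3) entry (a non-edge of the path graph) is zero
X = [[1.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 1.0]]

in_cone = all(psd2(clique_block(X, K)) for K in cliques)  # the test of Lemma 3.1
```

Note that W_i^T X W_i is simply the principal submatrix of X indexed by the clique K_i, which is why the reformulation (9) only constrains clique-sized blocks rather than the full matrix.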

Proposition 3.3. Let G = (V, E) be a chordal graph with maximal cliques K_1, K_2, ..., K_p and consider a matrix X ∈ S^n_{G,c}. For each i = 1, 2, ..., p, let r_i = rank(W_i^T X W_i) and let Q_i Σ_i Q_i^T be an eigenvalue decomposition of W_i^T X W_i. Define diagonal matrices D_i whose first r_i diagonal entries are equal to zero and last n_i − r_i entries are equal to one, and set P_i = Q_i D_i Q_i^T and P = Σ_{i=1}^p W_i P_i W_i^T. Then,

F(X, S^n_{G,c}) = {Y ∈ S^n_{G,c} : ⟨P, Y⟩ = 0}.

Of interest is an algebraic characterization of the dimension of the face F(X, S^n_{G,c}) since, as we show in Theorem 3.6, this will give an upper bound on the rank of an extreme point of the feasible set of the primal conic convex program (7). Before stating the theorem, we introduce some preliminaries which we use in the sequel. Consider a chordal graph G = (V, E) together with the corresponding clique intersection graph (c.f. Section 2.1) G_I = (V_I, E_I), and let X ∈ S^n_{G,c} be arbitrary. For each i = 1, 2, ..., p, suppose that rank(W_i^T X W_i) = r_i and let W_i^T X W_i = Q_i Q_i^T, where Q_i is an n_i × r_i matrix of rank r_i.⁴ For each (i, j) ∈ E_I, define a matrix W_ij ∈ R^{n×n_i} whose k-th column is equal to the K_i(k)-th standard basis vector of R^n if K_i(k) ∈ K_j and equal to the zero vector of R^n otherwise, and let S_ij ∈ R^{n×n} be the matrix given by

S_ij := [W_ij Q_i   0_{n×(n−r_i)}].   (10)

Moreover, let U ∈ R^{|E_I| n² × p n²} be a block matrix with block entries of size n² × n², and denote by [U]_{ij,k} ∈ R^{n²×n²} the block entry of U indexed by edge (i, j) ∈ E_I and by node K_k ∈ V_I. The matrix U is defined by

[U]_{ij,k} := S_ij ⊗_s S_ij, if k = i;  −S_ji ⊗_s S_ji, if k = j;  0_{n²×n²}, otherwise,   (11)

where ⊗_s denotes the symmetric Kronecker product (c.f. Section 2).

Example 3.4. Suppose that G_I = (V_I, E_I), where V_I = {K_1, ..., K_4} and E_I = {(K_1, K_2), (K_1, K_3), (K_2, K_4), (K_3, K_4)}. Then the matrix U ∈ R^{4n²×4n²} has the form

                  K_1               K_2               K_3               K_4
U =  (K_1, K_2) [ S_12 ⊗_s S_12    −S_21 ⊗_s S_21    0                 0               ]
     (K_1, K_3) [ S_13 ⊗_s S_13    0                 −S_31 ⊗_s S_31    0               ]
     (K_2, K_4) [ 0                S_24 ⊗_s S_24     0                 −S_42 ⊗_s S_42  ]
     (K_3, K_4) [ 0                0                 S_34 ⊗_s S_34     −S_43 ⊗_s S_43  ]

We are now ready to characterize the dimension of F(X, S^n_{G,c}). We have the following Theorem, whose proof follows partly from arguments in [11].

⁴ One way to construct a matrix Q_i ∈ R^{n_i×r_i} of rank r_i is to consider an eigenvalue decomposition W_i^T X W_i = P_i Σ_i P_i^T, where Σ_i ∈ R^{n_i×n_i} is a diagonal matrix whose first r_i diagonal entries are positive and last n_i − r_i diagonal entries are zero. Then, the matrix Q_i can be taken to be Q_i = P_i^1 (Σ_i^1)^{1/2}, where Σ_i^1 is the r_i × r_i submatrix of Σ_i corresponding to the positive diagonal entries of Σ_i, P_i^1 is the n_i × r_i submatrix of P_i formed by the corresponding columns, and (Σ_i^1)^{1/2} is the square root of Σ_i^1.

Theorem 3.5. Consider a chordal graph G = (V, E) with maximal cliques K_1, K_2, ..., K_p and let G_I = (V_I, E_I) be the clique intersection graph of G. Further, let X ∈ S^n_{G,c} and, for each i = 1, 2, ..., p, let W_i^T X W_i = Q_i Q_i^T, where Q_i is an n_i × r_i matrix of rank r_i = rank(W_i^T X W_i). Then,

dim(F(X, S^n_{G,c})) = Σ_{i=1}^p r_i² − rank(U),

where U ∈ R^{|E_I| n² × p n²} is as defined in (11) (recall from Section 1 the notation n² := n(n + 1)/2).

Proof. We defer the proof of the Theorem to Appendix C.

We make a comparison with the results of Barvinok [20] on the dimension of the smallest face of S^n_+ at a point W ∈ S^n_+. In particular, Barvinok shows that if rank(W) = r, then dim(F(W, S^n_+)) = r². One can readily verify, by recalling the definition of the matrices W_ij, that when G is the complete graph (in which case S^n_{G,c} = S^n_+), the conclusion of Theorem 3.5 reduces to the result of Barvinok. Moreover, if G is disconnected and has connected components G_i = (V_i, E_i), i = 1, 2, ..., p, each of which is the complete graph on n_i nodes, then K_i = V_i for all i and S^n_{G,c} = S^{n_1}_+ × S^{n_2}_+ × ... × S^{n_p}_+. Since K_i ∩ K_j = ∅ for all 1 ≤ i < j ≤ p, Theorem 3.5 gives dim(F(X, S^n_{G,c})) = Σ_{i=1}^p r_i², which is also in accordance with the results of Barvinok. To see why, notice that the faces of the product cone S^{n_1}_+ × ... × S^{n_p}_+ are precisely the Cartesian products of the faces of the cones S^{n_i}_+, i = 1, 2, ..., p (see e.g. [23]). An immediate consequence of Barvinok's characterization of dim(F(W, S^n_+)), in conjunction with Lemma 2.5, is an upper bound on the rank of extreme points of the feasible set of the semidefinite program (see e.g. [20, Prop. II.13.1]). In the following section, we use a similar argument to prove a related result on the rank of extreme points of A^{-1}(b) ∩ S^n_{G,c}.
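The blocks S_ij ⊗_s S_ij appearing in (11) are matrix representations of the congruence map S ↦ S_ij S S_ij^T on the space of symmetric matrices, and the count r² = r(r + 1)/2 in Theorem 3.5 can be checked numerically. The sketch below builds such a representation in an unnormalized basis of symmetric units (which leaves ranks unchanged) and verifies the identity rank(A ⊗_s A) = r(r + 1)/2, with r = rank(A); the test matrix is an illustrative choice of ours.

```python
# Exact-arithmetic check of rank(A (x)_s A) = r(r+1)/2, where (x)_s is the
# symmetric Kronecker product, represented as the matrix of S |-> A S A^T.
from fractions import Fraction

def pairs(n):
    return [(a, b) for a in range(n) for b in range(a, n)]

def sym_unit(n, a, b):
    # unnormalized symmetric unit E_ab = e_a e_b^T + e_b e_a^T (or e_a e_a^T)
    E = [[Fraction(0)] * n for _ in range(n)]
    E[a][b] += 1
    if a != b:
        E[b][a] += 1
    return E

def congruence(A, S):
    # A S A^T
    n = len(A)
    AS = [[sum(A[i][k] * S[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    return [[sum(AS[i][k] * A[j][k] for k in range(n)) for j in range(n)] for i in range(n)]

def sym_kron(A):
    # rows and columns indexed by pairs (alpha, beta), 1 <= alpha <= beta <= n
    n = len(A)
    P = pairs(n)
    cols = [congruence(A, sym_unit(n, a, b)) for (a, b) in P]
    return [[cols[c][i][j] for c in range(len(P))] for (i, j) in P]

def rank(M):
    # Gaussian elimination over the rationals (exact)
    M = [row[:] for row in M]
    rk = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((r for r in range(rk, len(M)) if M[r][c] != 0), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for r in range(len(M)):
            if r != rk and M[r][c] != 0:
                f = M[r][c] / M[rk][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[rk])]
        rk += 1
    return rk

A = [[Fraction(1), Fraction(2), Fraction(0)],
     [Fraction(2), Fraction(4), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(0)]]   # rank 2

r = rank(A)
assert rank(sym_kron(A)) == r * (r + 1) // 2    # = r^2 in the paper's notation
```

This is exactly the counting used in the last steps of the rank computation in (18) below.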
However, as we will see, the main difficulty in our proof lies in bounding the rank of the matrix U from above.

3.3 Linear Equations in the Chordal Matrix Cone

This section is devoted to the proof of our first main theorem, which provides an upper bound on the rank of extreme points of the chordal matrix cone S^n_{G,c} intersected with an affine subspace. Throughout this section, we let A : S^n_G → R^m be the linear map given by

A(X) := [⟨A_1, X⟩, ⟨A_2, X⟩, ..., ⟨A_m, X⟩]   (12)

and we assume that the affine space A^{-1}(b) has a nonempty intersection with S^n_{G,c}. We continue to let F(x, S) denote the smallest face of a convex set S that contains x. Using the characterization of the dimension of the faces of S^n_{G,c} in Theorem 3.5, we obtain an upper bound on the rank of extreme points of A^{-1}(b) ∩ S^n_{G,c}. Recall (c.f. Section 2.1 and [15] for more details) that a clique tree of a graph satisfies the clique intersection property if, for every pair of distinct cliques K_i, K_j, the set K_i ∩ K_j is contained in every clique on the (unique) path connecting K_i and K_j in the clique tree. Such a tree is always guaranteed to exist for a chordal graph and can be constructed by finding a maximum-weight spanning tree of the clique intersection graph. We have the following Theorem.

Theorem 3.6. Let G be a chordal graph with maximal cliques K_1, K_2, ..., K_p and consider a clique tree G_T = (V_T, E_T) of G which has the clique intersection property. Then,

(a) A^{-1}(b) ∩ S^n_{G,c} has an extreme point.

(b) If X ∈ A^{-1}(b) ∩ S^n_{G,c} is an extreme point, then

Σ_{i=1}^p r_i² ≤ m + Σ_{(i,j)∈E_T} min{r_i, r_j, |K_i ∩ K_j|}²,   (13)

where r_i = rank(W_i^T X W_i).

Proof. (a) Since S^n_{G,c} is closed, pointed, and convex (c.f. Lemma A.1), A^{-1}(b) ∩ S^n_{G,c} is nonempty, closed, and it does not contain straight lines. Therefore, it follows by Lemma 2.5 that A^{-1}(b) ∩ S^n_{G,c} has an extreme point.

(b) Let X ∈ A^{-1}(b) ∩ S^n_{G,c} be an extreme point. We first show that dim(F(X, S^n_{G,c})) ≤ m. Then, we invoke Theorem 3.5 to obtain an upper bound on the sum of the ranks of the principal submatrices of X generated by the maximal cliques K_1, K_2, ..., K_p. This upper bound is a function of the matrix U defined in Theorem 3.5. To finish the proof, we bound the rank of U from above.

Clearly, X ∈ A^{-1}(b) ∩ F(X, S^n_{G,c}) and, since X is an extreme point, the face F(X, S^n_{G,c}) = {X}, which readily gives

dim(A^{-1}(b) ∩ F(X, S^n_{G,c})) = 0.   (14)

Next, recall that the affine space A^{-1}(b) can be expressed as A^{-1}(b) = A^{-1}(0) + X̄, where X̄ is an arbitrary point in A^{-1}(b) and A^{-1}(0) = {X ∈ S^n_G : ⟨A_k, X⟩ = 0, k = 1, 2, ..., m} is the linear subspace of which A^{-1}(b) is a translate. The dimension of A^{-1}(b) is taken to be equal to the dimension of A^{-1}(0). Moreover, recall that for any two subspaces L_1, L_2 ⊆ R^n, dim(L_1 + L_2) = dim(L_1) + dim(L_2) − dim(L_1 ∩ L_2) (see e.g. [2] for a proof). Consider the following string of inequalities:

n_G = dim(S^n_G) ≥ dim(F(X, S^n_{G,c}) + A^{-1}(b))
    = dim(F(X, S^n_{G,c})) + dim(A^{-1}(b)) − dim(F(X, S^n_{G,c}) ∩ A^{-1}(b))
    ≥ dim(F(X, S^n_{G,c})) + n_G − m,

where the last inequality follows from (14) and the fact that dim(A^{-1}(b)) ≥ n_G − m (under Assumption 2, the last inequality holds with equality). Therefore, we obtain dim(F(X, S^n_{G,c})) ≤ m. We have shown in Theorem 3.5 that if G_I = (V_I, E_I) is the clique intersection graph of G, then

dim(F(X, S^n_{G,c})) = Σ_{i=1}^p r_i² − rank(U),

where r_i = rank(W_i^T X W_i) and U ∈ R^{|E_I| n² × p n²} is as defined in (11).
A direct application of Theorem 3.5 to the inequality dim(F(X, S^n_{G,c})) ≤ m gives

Σ_{i=1}^p r_i² ≤ m + rank(U).

The rest of the proof involves computing an upper bound on rank(U). Let G_T = (V_T, E_T) be a clique tree of G which has the clique intersection property. Consider any two distinct cliques K_i, K_j of G satisfying K_i ∩ K_j ≠ ∅ and assume that K_i, K_j are non-neighboring nodes in G_T. Let j = i + k, k > 1, and suppose without loss of generality that (K_i, K_{i+1}, ..., K_{i+k}) is the path in G_T that connects K_i and K_{i+k}. Recall the definition of the matrices S_ij in (10) and U in (11) and notice that the matrix

U_{i,i+k} = [0   S_{i,i+k} ⊗_s S_{i,i+k}   0   −S_{i+k,i} ⊗_s S_{i+k,i}   0] ∈ R^{n² × p n²},

whose nonzero blocks lie in the block columns indexed by K_i and K_{i+k}, is a submatrix of U. We will show that U_{i,i+k} can be generated by linear combinations of the rows of the matrices U_{i,i+1}, U_{i+1,i+2}, ..., U_{i+k−1,i+k}. This will imply that the rank of U is invariant under removal of U_{i,i+k} from U and therefore, when evaluating the rank of U, it suffices to consider the submatrix of U generated by the matrices U_{l,t}, (l, t) ∈ E_T.

For each t = 1, 2, ..., p, let W_t^T X W_t = Q_t Q_t^T, where Q_t is an n_t × r_t matrix of rank r_t (c.f. footnote 4). The clique intersection property of G_T guarantees that K_i ∩ K_{i+k} ⊆ K_l for each l ∈ {i, ..., i + k}. Define matrices W̃_{l,l+1} ∈ R^{n×n_l} whose t-th column is equal to the K_l(t)-th standard basis vector of R^n if K_l(t) ∈ K_i ∩ K_{i+k} and equal to the zero vector of R^n otherwise. Further, let S̃_{l,l+1} ∈ R^{n×n} be the matrix given by S̃_{l,l+1} = [W̃_{l,l+1} Q_l   0_{n×(n−r_l)}]. Define W̃_{l+1,l} and S̃_{l+1,l} similarly. For an arbitrary matrix F ∈ R^{n²×n²}, assign a distinct label to each of its rows from the set {(α, β) : 1 ≤ α ≤ β ≤ n}. Clearly, each matrix S̃_{l,l+1} ⊗_s S̃_{l,l+1} contains the rows of S_{l,l+1} ⊗_s S_{l,l+1} whose labels are in {(α, β) : α ≤ β and α, β ∈ K_i ∩ K_{i+k}} and is zero everywhere else. Let Ũ_{l,l+1} ∈ R^{n² × p n²} be the matrix given by

Ũ_{l,l+1} = [0   S̃_{l,l+1} ⊗_s S̃_{l,l+1}   0   −S̃_{l+1,l} ⊗_s S̃_{l+1,l}   0],

whose nonzero blocks lie in the block columns indexed by K_l and K_{l+1}. Observe that S̃_{i,i+1} = S_{i,i+k}, S̃_{i+k,i+k−1} = S_{i+k,i}, and S̃_{l+1,l} = S̃_{l+1,l+2} for each l ∈ {i, ..., i + k − 2}. The above observations imply that

Σ_{l=0}^{k−1} Ũ_{i+l,i+l+1} = U_{i,i+k}.

This shows that U_{i,i+k} can be generated by a linear combination of the rows of the matrices U_{i,i+1}, U_{i+1,i+2}, ..., U_{i+k−1,i+k} and therefore, the rank of U is invariant under removal of U_{i,i+k} from U. Let U_T ∈ R^{(p−1)n² × p n²} be the submatrix of U generated by the matrices U_{i,j}, (i, j) ∈ E_T. The analysis above implies that rank(U) = rank(U_T). Therefore, it suffices to upper bound the rank of U_T. By the definition of U_T and the fact that, for any two matrices A, B of compatible dimensions, stacking A on top of B satisfies

rank([A; B]) ≤ rank(A) + rank(B),

we readily obtain that

rank(U_T) ≤ Σ_{(i,j)∈E_T} rank([S_ij ⊗_s S_ij   −S_ji ⊗_s S_ji]).   (15)

We will now show that rank([S_ij ⊗_s S_ij   −S_ji ⊗_s S_ji]) = rank(S_ij)². Recall the definition of the matrices W_ij. Since X ∈ S^n_{G,c}, we must have

W_ij (W_i^T X W_i) W_ij^T = W_ji (W_j^T X W_j) W_ji^T, for all (i, j) ∈ E_T,

or equivalently,

W_ij (Q_i Q_i^T) W_ij^T = W_ji (Q_j Q_j^T) W_ji^T, for all (i, j) ∈ E_T.   (16)

Using the definition of the matrices S_ij, S_ji, we can restate (16) as

S_ij S_ij^T = S_ji S_ji^T.   (17)

The following Lemma from [24] is useful.

Lemma 3.7. [24, Lemma 2.1] Suppose that S, R ∈ R^{n×q} satisfy S S^T = R R^T. Then R = S P for some orthogonal matrix P ∈ R^{q×q}.

A direct application of Lemma 3.7 to (17) implies the existence of an orthogonal matrix P ∈ R^{n×n} such that S_ji = S_ij P. Therefore, we obtain

rank([S_ij ⊗_s S_ij   −S_ji ⊗_s S_ji]) = rank([S_ij ⊗_s S_ij   −(S_ij P) ⊗_s (S_ij P)])
  = rank([S_ij ⊗_s S_ij   −(S_ij ⊗_s S_ij)(P ⊗_s P)])
  = rank(S_ij ⊗_s S_ij)
  = rank(S_ij)²
  = rank(S_ji)²,   (18)

where the second equality follows from Lemma 2.11(c). The third equality follows from the fact that, for matrices A, B, C of appropriate dimension satisfying C = AB, the column space of C is a subspace of the column space of A. This implies that col((S_ij ⊗_s S_ij)(P ⊗_s P)) ⊆ col(S_ij ⊗_s S_ij) and, since rank(A) ≤ rank([A B]) with equality if and only if col(B) ⊆ col(A), we obtain the desired equality. The fourth equality follows from (a), (c), and (d) in Lemma 2.11 and the fact that, for any matrix A, rank(A) = rank(A A^T). More precisely, rank(S_ij ⊗_s S_ij) = rank((S_ij ⊗_s S_ij)(S_ij ⊗_s S_ij)^T) = rank((S_ij S_ij^T) ⊗_s (S_ij S_ij^T)). Since S_ij S_ij^T is symmetric and commutes with itself, one can readily verify by invoking Lemma 2.11(d) that rank((S_ij S_ij^T) ⊗_s (S_ij S_ij^T)) = rank(S_ij)². Finally, the last equality in (18) follows from the orthogonality of P and the fact that, for any two matrices A, B of compatible dimension, rank(AB) ≤ min{rank(A), rank(B)}. More precisely,

rank(S_ij) = rank(S_ij P P^{−1}) ≤ rank(S_ij P) = rank(S_ji) = rank(S_ji P^{−1} P) ≤ rank(S_ji P^{−1}) = rank(S_ij),

which readily gives rank(S_ij) = rank(S_ji). By combining (15) and (18), we obtain

rank(U_T) ≤ Σ_{(i,j)∈E_T} rank([S_ij ⊗_s S_ij   −S_ji ⊗_s S_ji]) = Σ_{(i,j)∈E_T} rank(S_ij)².

To finish the proof, observe that each S_ij has at most r_i nonzero columns and at most |K_i ∩ K_j| nonzero rows. This implies that rank(S_ij) ≤ min{r_i, |K_i ∩ K_j|}. Moreover, since rank(S_ij) = rank(S_ji), we readily obtain that rank(S_ij) ≤ min{r_i, r_j, |K_i ∩ K_j|}. This, together with the fact that the function n ↦ n² is monotonic, gives the desired upper bound on the rank of U, namely

rank(U) = rank(U_T) ≤ Σ_{(i,j)∈E_T} min{r_i, r_j, |K_i ∩ K_j|}².

3.4 Discussion and Implications for Chordally Sparse Semidefinite Programs

In this section, we discuss some implications of Theorem 3.6. First, notice that Theorem 3.6 provides an implicit bound on the minimal rank of optimal solutions to the conic convex program (7). Indeed, let OPT be the optimal value of (7) and notice that the optimal solution set, given by

{X ∈ S^n_G : ⟨C, X⟩ = OPT, ⟨A_k, X⟩ = b_k, k = 1, 2, ..., m} ∩ S^n_{G,c},

is nonempty, closed, convex, and it does not contain straight lines, as it is a subset of S^n_{G,c}, a pointed cone (c.f. Appendix A). Hence, it follows by Lemma 2.5 that the optimal solution set has an extreme point and therefore, by Theorem 3.6(b), a point X* satisfying

Σ_{i=1}^p r_i² ≤ m + Σ_{(i,j)∈E_T} min{r_i, r_j, |K_i ∩ K_j|}²,   (19)

where r_i = rank(W_i^T X* W_i). Moreover, once such a solution X* ∈ S^n_{G,c} is obtained, we can invoke Lemma 2.10 to construct an optimal solution X ⪰ 0 to the semidefinite program (1) which satisfies

rank(X) ≤ r := max_{1≤i≤p} r_i.

It is therefore natural to ask whether (19) can provide a sharper upper bound on the minimal rank of optimal solutions to the semidefinite program than the bounds in the extant literature [1, 2, 5, 8] (c.f. Thm. 2.6 and Thm. 2.8). For one thing, since r_i ≤ n_i for all i = 1, 2, ..., p and max_{1≤i≤p} n_i = ω(G), the clique number of G, it follows readily that r ≤ ω(G).
This is in accordance with the graph theoretic bound of Laurent and Varvitsiotis [7, 8]. For another, we claim that r² ≤ m, which implies that r can be no worse than the bounds of Barvinok, Pataki, and Alizadeh et al. [1, 2, 5]. To see why this is true, fix i ∈ {1, 2, ..., p} and root the clique tree G_T at node i. This induces, for each vertex j ≠ i, a unique parent node, which we denote by p(j). A node j with h ancestors is said to be at depth h in the rooted tree and we write d(j) = h. The depth of the tree G_T is defined as the maximal depth of a node in G_T. Suppose that the depth of G_T is equal to D for some D ≥ 0 and consider the following string of inequalities:

Σ_{(i,j)∈E_T} min{r_i, r_j, |K_i ∩ K_j|}² = Σ_{k=1}^D Σ_{j: d(j)=k} min{r_j, r_{p(j)}, |K_j ∩ K_{p(j)}|}² ≤ Σ_{k=1}^D Σ_{j: d(j)=k} r_j² = Σ_{j≠i} r_j².

Using the above string of arguments, we readily obtain that

Σ_{j=1}^p r_j² = r_i² + Σ_{j≠i} r_j² ≤ m + Σ_{(i,j)∈E_T} min{r_i, r_j, |K_i ∩ K_j|}² ≤ m + Σ_{j≠i} r_j²,

and therefore r_i² ≤ m. As the choice of i was arbitrary, we conclude that r² ≤ m and therefore, (19) will always yield bounds that are no worse than the bounds in the existing literature. Theorem 3.6 thus provides a unified treatment of the existing upper bounds on the minimal rank of optimal solutions to semidefinite programs. Nevertheless, the conclusions of Theorem 3.6 do not, in general, lead to an improvement of said bounds. We will show that for any ρ = min{ω(G), ⌊√(2m)⌋} it is possible to construct a matrix X ∈ S^n_{G,c} which has r_i = rank(W_i^T X W_i), i = 1, 2, ..., p, with r = max_{1≤i≤p} r_i = ρ, and satisfies (19). Indeed, given m ≤ n_G, let ρ = min{ω(G), ⌊√(2m)⌋} and consider a maximal clique K_i, i ∈ {1, 2, ..., p}, of size n_i ≥ ρ. Let S_i^ρ be a subset of K_i with |S_i^ρ| = ρ and consider a matrix X ∈ S^n_{G,c} whose principal submatrix on the indices in S_i^ρ has full rank and whose remaining entries are equal to zero. It follows readily that r_j = |S_i^ρ ∩ K_j| for all j = 1, 2, ..., p. Recall that the clique tree G_T of G in the theorem statement has the clique intersection property, which ensures that for every pair of distinct cliques K_i, K_j, we have K_i ∩ K_j ⊆ K_k whenever K_k is on the path from K_i to K_j. This implies that K_i ∩ K_j ⊆ K_k ∩ K_j and, since S_i^ρ ∩ K_j ⊆ K_i ∩ K_j for all j, we obtain that S_i^ρ ∩ K_j ⊆ K_k ∩ K_j for all K_k on the path from K_i to K_j. Root the clique tree G_T at node i and assume that G_T has depth D. Using our notation above, we obtain

Σ_{(i,j)∈E_T} min{r_i, r_j, |K_i ∩ K_j|}²
  = Σ_{k=1}^D Σ_{j: d(j)=k} min{|S_i^ρ ∩ K_j|, |S_i^ρ ∩ K_{p(j)}|, |K_{p(j)} ∩ K_j|}²
  = Σ_{k=1}^D Σ_{j: d(j)=k} min{|S_i^ρ ∩ K_j|, |S_i^ρ ∩ K_j|}²
  = Σ_{k=1}^D Σ_{j: d(j)=k} |S_i^ρ ∩ K_j|²
  = Σ_{j≠i} r_j²,

where the second equality follows since K_{p(j)} is on the path from K_i to K_j and the third equality follows from the clique intersection property.
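The tightness construction above can be checked numerically on a small chordal graph. The sketch below uses the path graph on 4 nodes with maximal cliques K_1 = {1, 2}, K_2 = {2, 3}, K_3 = {3, 4} (0-indexed in the code); the graph, the value of m, and the index set S are illustrative choices consistent with the text.

```python
# Check of the construction: X has a full-rank principal submatrix on the
# index set S inside one maximal clique and is zero elsewhere, so the clique
# ranks satisfy r_j = |S ∩ K_j|.
import math

cliques = [[0, 1], [1, 2], [2, 3]]       # path graph on 4 nodes (chordal)
m = 3
omega = max(len(K) for K in cliques)     # omega(G) = 2
rho = min(omega, math.isqrt(2 * m))      # rho = min{2, floor(sqrt(6))} = 2
S = [0, 1]                               # subset of K_1 with |S| = rho

n = 4
# X: identity on S, zero elsewhere (so each clique block is diagonal)
X = [[1.0 if i == j and i in S else 0.0 for j in range(n)] for i in range(n)]

def block_rank(X, K):
    # the principal submatrix of this X on K is diagonal, so its rank is the
    # number of nonzero diagonal entries
    return sum(1 for a in K if X[a][a] != 0.0)

r = [block_rank(X, K) for K in cliques]
assert r == [len(set(S) & set(K)) for K in cliques]   # r_j = |S ∩ K_j|
```

Here r = [2, 1, 0], so r = max_j r_j = ρ = 2, as the construction requires.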
Substituting the above equality into (19), we obtain ρ² ≤ m, which holds by our choice of ρ. We have the following Corollary.

Corollary 3.8. Let G be a chordal graph with maximal cliques K_1, K_2, ..., K_p and clique number ω(G) = max_{1≤i≤p} |K_i|. If the semidefinite program

minimize_{X ∈ S^n}  ⟨C, X⟩
subject to  ⟨A_k, X⟩ = b_k,  k = 1, 2, ..., m,
            X ⪰ 0,

where C, A_1, A_2, ..., A_m ∈ S^n_G and b_1, b_2, ..., b_m ∈ R, is feasible, then it has an optimal solution X satisfying

rank(X) ≤ min{ω(G), ⌊√(2m)⌋}.
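The bound of Corollary 3.8 depends only on the clique number ω(G) and the number m of equality constraints, so it is cheap to evaluate. A minimal sketch (the example cliques and values of m are illustrative choices of ours):

```python
# Evaluating the rank bound of Corollary 3.8: min{omega(G), floor(sqrt(2m))}.
import math

def corollary_3_8_bound(cliques, m):
    omega = max(len(K) for K in cliques)  # clique number of the chordal graph
    return min(omega, math.isqrt(2 * m))

# Path graph on 5 nodes: all maximal cliques have size 2, so omega(G) = 2
print(corollary_3_8_bound([[0, 1], [1, 2], [2, 3], [3, 4]], m=10))  # -> 2
# Complete graph on 5 nodes: the bound reduces to the classical sqrt(2m) one
print(corollary_3_8_bound([[0, 1, 2, 3, 4]], m=3))  # -> 2
```

For graphs with small cliques (e.g., acyclic or interval graphs with short intervals) the ω(G) term dominates, which is where the chordal bound improves on the dimension-only bounds of the extant literature.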


More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Limiting behavior of the central path in semidefinite optimization

Limiting behavior of the central path in semidefinite optimization Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path

More information

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3 MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications

More information

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Naohiko Arima, Sunyoung Kim, Masakazu Kojima, and Kim-Chuan Toh Abstract. In Part I of

More information

Facets for Node-Capacitated Multicut Polytopes from Path-Block Cycles with Two Common Nodes

Facets for Node-Capacitated Multicut Polytopes from Path-Block Cycles with Two Common Nodes Facets for Node-Capacitated Multicut Polytopes from Path-Block Cycles with Two Common Nodes Michael M. Sørensen July 2016 Abstract Path-block-cycle inequalities are valid, and sometimes facet-defining,

More information

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality Contents Introduction v Chapter 1. Real Vector Spaces 1 1.1. Linear and Affine Spaces 1 1.2. Maps and Matrices 4 1.3. Inner Products and Norms 7 1.4. Continuous and Differentiable Functions 11 Chapter

More information

4. Algebra and Duality

4. Algebra and Duality 4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

Normal Fans of Polyhedral Convex Sets

Normal Fans of Polyhedral Convex Sets Set-Valued Analysis manuscript No. (will be inserted by the editor) Normal Fans of Polyhedral Convex Sets Structures and Connections Shu Lu Stephen M. Robinson Received: date / Accepted: date Dedicated

More information

A Geometric Approach to Graph Isomorphism

A Geometric Approach to Graph Isomorphism A Geometric Approach to Graph Isomorphism Pawan Aurora and Shashank K Mehta Indian Institute of Technology, Kanpur - 208016, India {paurora,skmehta}@cse.iitk.ac.in Abstract. We present an integer linear

More information

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable

Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Solvability of Linear Matrix Equations in a Symmetric Matrix Variable Maurcio C. de Oliveira J. William Helton Abstract We study the solvability of generalized linear matrix equations of the Lyapunov type

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

arxiv: v4 [math.oc] 12 Apr 2017

arxiv: v4 [math.oc] 12 Apr 2017 Exact duals and short certificates of infeasibility and weak infeasibility in conic linear programming arxiv:1507.00290v4 [math.oc] 12 Apr 2017 Minghui Liu Gábor Pataki April 14, 2017 Abstract In conic

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

SEMIDEFINITE PROGRAM BASICS. Contents

SEMIDEFINITE PROGRAM BASICS. Contents SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n

More information

Equivalent relaxations of optimal power flow

Equivalent relaxations of optimal power flow Equivalent relaxations of optimal power flow 1 Subhonmesh Bose 1, Steven H. Low 2,1, Thanchanok Teeraratkul 1, Babak Hassibi 1 1 Electrical Engineering, 2 Computational and Mathematical Sciences California

More information

The doubly negative matrix completion problem

The doubly negative matrix completion problem The doubly negative matrix completion problem C Mendes Araújo, Juan R Torregrosa and Ana M Urbano CMAT - Centro de Matemática / Dpto de Matemática Aplicada Universidade do Minho / Universidad Politécnica

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Doubly Nonnegative Relaxations for Quadratic and Polynomial Optimization Problems with Binary and Box Constraints Sunyoung Kim, Masakazu

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Strongly Regular Decompositions of the Complete Graph

Strongly Regular Decompositions of the Complete Graph Journal of Algebraic Combinatorics, 17, 181 201, 2003 c 2003 Kluwer Academic Publishers. Manufactured in The Netherlands. Strongly Regular Decompositions of the Complete Graph EDWIN R. VAN DAM Edwin.vanDam@uvt.nl

More information

Reconstruction and Higher Dimensional Geometry

Reconstruction and Higher Dimensional Geometry Reconstruction and Higher Dimensional Geometry Hongyu He Department of Mathematics Louisiana State University email: hongyu@math.lsu.edu Abstract Tutte proved that, if two graphs, both with more than two

More information

Isotropic matroids III: Connectivity

Isotropic matroids III: Connectivity Isotropic matroids III: Connectivity Robert Brijder Hasselt University Belgium robert.brijder@uhasselt.be Lorenzo Traldi Lafayette College Easton, Pennsylvania 18042, USA traldil@lafayette.edu arxiv:1602.03899v2

More information

Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence

Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence Linear Algebra Review: Linear Independence IE418: Integer Programming Department of Industrial and Systems Engineering Lehigh University 21st March 2005 A finite collection of vectors x 1,..., x k R n

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Singularity Degree of the Positive Semidefinite Matrix Completion Problem

Singularity Degree of the Positive Semidefinite Matrix Completion Problem Singularity Degree of the Positive Semidefinite Matrix Completion Problem Shin-ichi Tanigawa November 8, 2016 arxiv:1603.09586v2 [math.oc] 4 Nov 2016 Abstract The singularity degree of a semidefinite programming

More information

EE 227A: Convex Optimization and Applications October 14, 2008

EE 227A: Convex Optimization and Applications October 14, 2008 EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information

Least Sparsity of p-norm based Optimization Problems with p > 1

Least Sparsity of p-norm based Optimization Problems with p > 1 Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from

More information

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION Annales Univ. Sci. Budapest., Sect. Comp. 33 (2010) 273-284 ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION L. László (Budapest, Hungary) Dedicated to Professor Ferenc Schipp on his 70th

More information

Linear-Time Algorithms for Finding Tucker Submatrices and Lekkerkerker-Boland Subgraphs

Linear-Time Algorithms for Finding Tucker Submatrices and Lekkerkerker-Boland Subgraphs Linear-Time Algorithms for Finding Tucker Submatrices and Lekkerkerker-Boland Subgraphs Nathan Lindzey, Ross M. McConnell Colorado State University, Fort Collins CO 80521, USA Abstract. Tucker characterized

More information

III. Applications in convex optimization

III. Applications in convex optimization III. Applications in convex optimization nonsymmetric interior-point methods partial separability and decomposition partial separability first order methods interior-point methods Conic linear optimization

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone

Key words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called

More information

Convex and Semidefinite Programming for Approximation

Convex and Semidefinite Programming for Approximation Convex and Semidefinite Programming for Approximation We have seen linear programming based methods to solve NP-hard problems. One perspective on this is that linear programming is a meta-method since

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh June 2009 1 Linear independence These problems both appeared in a course of Benny Sudakov at Princeton, but the links to Olympiad problems are due to Yufei

More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

The Cayley-Hamilton Theorem and the Jordan Decomposition

The Cayley-Hamilton Theorem and the Jordan Decomposition LECTURE 19 The Cayley-Hamilton Theorem and the Jordan Decomposition Let me begin by summarizing the main results of the last lecture Suppose T is a endomorphism of a vector space V Then T has a minimal

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

A Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems

A Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems A Simple Derivation of a Facial Reduction Algorithm and Extended Dual Systems Gábor Pataki gabor@unc.edu Dept. of Statistics and OR University of North Carolina at Chapel Hill Abstract The Facial Reduction

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Lecture Semidefinite Programming and Graph Partitioning

Lecture Semidefinite Programming and Graph Partitioning Approximation Algorithms and Hardness of Approximation April 16, 013 Lecture 14 Lecturer: Alantha Newman Scribes: Marwa El Halabi 1 Semidefinite Programming and Graph Partitioning In previous lectures,

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics

MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics MTH 309 Supplemental Lecture Notes Based on Robert Messer, Linear Algebra Gateway to Mathematics Ulrich Meierfrankenfeld Department of Mathematics Michigan State University East Lansing MI 48824 meier@math.msu.edu

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

An LMI description for the cone of Lorentz-positive maps II

An LMI description for the cone of Lorentz-positive maps II An LMI description for the cone of Lorentz-positive maps II Roland Hildebrand October 6, 2008 Abstract Let L n be the n-dimensional second order cone. A linear map from R m to R n is called positive if

More information

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma 4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid

More information

Preliminaries and Complexity Theory

Preliminaries and Complexity Theory Preliminaries and Complexity Theory Oleksandr Romanko CAS 746 - Advanced Topics in Combinatorial Optimization McMaster University, January 16, 2006 Introduction Book structure: 2 Part I Linear Algebra

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information