Completely Positive Reformulations for Polynomial Optimization
Javier Peña · Juan C. Vera · Luis F. Zuluaga

Abstract Polynomial optimization encompasses a very rich class of problems in which both the objective and constraints can be written in terms of polynomials in the decision variables. There is a well-established body of research on quadratic polynomial optimization problems based on reformulations of the original problem as a conic program over the cone of completely positive matrices, or its conic dual, the cone of copositive matrices. As a result of this reformulation approach, novel solution schemes for quadratic polynomial optimization problems have been designed by drawing on conic programming tools and on the extensively studied cones of completely positive and of copositive matrices. In particular, this approach has been applied to solve key combinatorial optimization problems. Along this line of research, we consider polynomial optimization problems that are not necessarily quadratic. For this purpose, we use a natural extension of the cone of completely positive matrices; namely, the cone of completely positive tensors. We provide a general characterization of the class of polynomial optimization problems that can be formulated as a conic program over the cone of completely positive tensors. As a consequence of this characterization, it follows that recent related results for quadratic problems can be further strengthened and generalized to higher-order polynomial optimization problems. Also, we show that the conditions underlying the characterization are conceptually the same, regardless of the degree of the polynomials defining the problem. To illustrate our results, we discuss in further detail special and relevant instances of polynomial optimization problems.

Keywords Polynomial Optimization · Copositive Programming · Completely Positive Tensors · Quadratic Programming

J. Peña, Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, USA. jfp@andrew.cmu.edu
J. C. Vera, Tilburg School of Economics and Management, Tilburg University, Tilburg, The Netherlands. j.c.veralizcano@tilburguniversity.edu
L. F. Zuluaga, Industrial and Systems Engineering, Lehigh University, Bethlehem, PA, USA. luis.zuluaga@lehigh.edu
1 Introduction

Historically, polynomials are among the most popular classes of functions used for empirical modeling in science and engineering. Polynomials are easy to evaluate, appear naturally in many physical (real-world) systems, and can be used to accurately approximate any smooth function. It is not surprising, then, that the task of solving polynomial optimization problems, that is, problems in which both the objective and the constraints are multivariate polynomials, is ubiquitous and of enormous interest in these fields. Clearly, polynomial optimization problems encompass a very general class of non-convex optimization problems, including key combinatorial optimization problems. The connections between real algebraic geometry and convex optimization uncovered by the early work of Nesterov [43] and Shor [58] revealed the possibility of using conic optimization techniques and algorithms to find global or near-global solutions for a (non-convex) polynomial optimization problem (POP). Further seminal work by Parrilo [46] and Lasserre [35], followed by very active research in this direction, has given rise to the area of polynomial optimization (PO) [cf., 1]. Nowadays, PO techniques are being used to successfully tackle theoretical and practical problems in diverse areas such as: control [45], combinatorial optimization [22, 31, 33, 39, 48], information theory [44], signal processing [30], integer programming [37, 38], quadratic programming [8, 16], physics [24], linear algebra [47], finance [5], and probability [6, 36], among others. For additional and excellent information on the recent PO literature, we refer the reader to [1, 7]. A well-established line of research in polynomial optimization looks at reformulating quadratic POPs as a completely positive program (CPP); that is, a conic (linear) program over the cone of completely positive (CP) matrices [cf., 10, 28].
An important example of this approach is the work of de Klerk and Pasechnik [22], who show that the problem of computing the stability number α(G) of a graph G can be reformulated as a CPP. The results in [22] have led to novel ways to address the solution of this key combinatorial optimization problem [see, e.g., 22, 32, 48]. In particular, de Klerk and Pasechnik [22] study a hierarchy of (size-increasing) semidefinite programs whose value converges to α(G), where the first order of the hierarchy corresponds to Schrijver's ϑ′-function [cf., 56], which strengthens the well-known Lovász ϑ-approximation to α(G) [cf., 41]. There are two main advantages behind reformulating POPs as CPPs. First, it allows the use of the highly developed and elegant framework of conic programming. Second, the complexity of the POP is captured by the widely studied cone of completely positive matrices or, by conic duality, the cone of copositive matrices. Because of this duality relationship, a substantial part of the relevant literature refers to copositive matrices and copositive programs; that is, conic (linear) optimization problems over the cone of copositive matrices. As mentioned in [16], any CPP has a natural associated dual conic (linear) program over the cone of copositive matrices. In light of this relationship, throughout the article we take the liberty to refer to both completely positive and copositive programs as CPPs. In particular, this kind of reformulation of POPs as CPPs means that advances in the solution of CPPs can be applied to a very rich class of quadratic POPs. In part because of this fact, the solution of CPPs has been the focus of recent and active research. The early seminal work of Parrilo [46], who introduced a hierarchy of semidefinite programs to approximate the solution of CPPs, has been followed by the introduction of different solution schemes for CPPs.
For example, a hierarchy of linear programs to approximate the solution of CPPs was introduced in [22], and a different hierarchy of semidefinite programs was introduced in [48]. The corresponding conic dual approximation schemes have been studied in [25]. Nowadays, there is a large variety of algorithmic schemes to solve CPPs. For example, consider the cutting plane-based algorithm in [26], the adaptive linear programming algorithm in [15], the outer approximation scheme in [13], the combination of KKT conditions and relaxations of the cone of copositive matrices
in [19], and the polynomial-time approximation scheme (PTAS) in [21]. For further review of the algorithmic solution schemes for CPPs, we refer the reader to the recent surveys by Bomze [10], Bundfuss [14], and Dür [28]. The class of quadratic POPs for which CPP formulations have now been obtained includes a variety of quadratic programming (QP) and combinatorial optimization problems such as: standard quadratic programming problems [8, 12], quadratic assignment problems [51], the graph tri-partitioning problem [50], and the chromatic number of a graph [27, 33]. In more generality, Burer [16] has recently shown that quite a general class of QPs with linear as well as binary constraints allows a CPP formulation. In [42], CPP formulations are also used to solve mixed integer linear programs under uncertainty. The CPP formulation results of Burer [16] for QPs with linear as well as binary constraints, and QPs with quadratic constraints, have been further extended by Arima et al. [2], Bai et al. [3], Bomze and Jarre [9], Burer and Dong [17], Dickinson et al. [23], and Eichfelder and Povh [29]. Next, we briefly review these extensions. In [9], the conditions provided in [16] under which a QP with linear and binary constraints is equivalent to a CPP are analyzed in more detail from a topological point of view. In [17, 23, 29], conic programming reformulations for general QPs are obtained in terms of the cone of generalized completely positive matrices [cf., 17], or its conic dual, the cone of set-semidefinite matrices [cf., 29]. In [2], CPP formulations are obtained for QPs with constraints given by homogeneous quadratic polynomials except one inhomogeneous quadratic constraint with no linear terms.
Further results in this direction are given in [3], where sufficient conditions to obtain CPP formulations for QPs with a single quadratic constraint (or multiple ones that can be aggregated into a single one) are provided. As discussed in [3], the latter results provide CPP formulations for QPs with linear and complementarity constraints. Our manuscript further contributes to the line of research on conic reformulations of POPs. More specifically, the main contributions of the manuscript can be summarized as follows. We consider a general class of POPs in which the polynomials are not necessarily quadratic (i.e., with degree higher than 2). For this purpose, we use a natural extension of the cone of CP matrices; namely, the cone of completely positive (CP) tensors. In particular, a CP matrix is a second-order CP tensor. In general, the order of the CP tensors used is related to the degree of the polynomials in the POP (see Section 3.1). If the POP has a compact feasible set, a CPP reformulation of the POP over fourth-order CP tensors is possible (see Section 4.5). Fourth-order CP tensors seem to be necessary even for quadratic POPs with a compact feasible set (see Section 4.3). We provide a general characterization of the class of POPs that can be formulated as a conic program over the cone of CP tensors. This characterization shows that the results in [2, 3, 9, 16] for some classes of quadratic POPs can be extended to a broader class of POPs. The conditions required for the conic reformulation to be equivalent to the original POP can be captured in general using the notion of the horizon cone [cf. 54]. When restricted to quadratic POPs, our conditions are weaker than the previously existing ones in the literature, and thus we capture a larger class of quadratic POPs. In particular, we provide an answer to an implicit open question in [9] regarding the characterization of POPs that can be reformulated as CPPs.
Conic reformulations for general classes of POPs have been obtained in [17, 23, 29] using set-semidefinite matrices. Here, we do so using the cone of CP tensors. The advantage of the latter rests on two facts. First, as mentioned in [16, 17], unlike the cone of CP matrices (or its dual counterpart given by copositive matrices), relatively little is known about algorithmic solution schemes for optimization problems over the cone of set-semidefinite matrices. Second, algorithmic solution schemes for CPPs, where the difficulty lies in dealing with CP matrices, generically apply to CP tensors regardless of their order. This has been partly explored in [59]. In particular, this
is the case for the semidefinite programming and linear programming approximations for CPPs in [22, 25, 46, 48], the PTAS provided in [21], and the approximation scheme of [13]. The rest of the article is organized as follows. In Section 2, we formally state the definitions of POP and CPP, and review some of the previous results regarding CPP reformulations of quadratic POPs. In Section 3, we present the main results of the article regarding CPP reformulations of a class of general POPs. In Section 4, we derive relevant consequences of these general CPP reformulation results. We finish in Section 5 with some final remarks.

2 Preliminaries

Consider the polynomial optimization problem (POP):

    inf  q(x)
    s.t. h_i(x) = 0, i = 1, ..., m,        (1)
         g_j(x) ≥ 0, j = 1, ..., r,

for some given n-variate polynomials q, h_i, g_j ∈ R[x], i = 1, ..., m, j = 1, ..., r. Currently, there is a well-established body of research that addresses the solution of (non-convex) quadratic POPs by reformulating the original problem as a conic (linear) program over the cone C_n of completely positive matrices, or its conic dual C_n^* of copositive matrices. The former kind of conic program is usually called a completely positive program and the latter a copositive program. Given this conic duality relationship [see, e.g., 16], here we take the liberty to refer to both completely positive and copositive programs as CPPs. Formally, the cone of completely positive matrices C_n is given by [see, e.g., 16, 22]

    C_n = { Σ_{i=1}^k x_i x_i^T : x_i ∈ R^n_+ for i ∈ {1, ..., k}, k ∈ N }.        (2)

The cone C_n is a subset of S^n, the space of symmetric matrices in R^{n×n}. We endow S^n with the Frobenius inner product, which we denote by ⟨·,·⟩, defined by ⟨A, B⟩ := trace(AB) = Σ_{i,j} A_{ij} B_{ij}. For a set U in a finite-dimensional vector space, recall its conic dual U^* = {w : ⟨u, w⟩ ≥ 0 for all u ∈ U} [cf., 53].
The conic dual of C_n is the cone of copositive matrices, that is [see, e.g., 16, 22],

    C_n^* = { A ∈ S^n : x^T A x ≥ 0 for all x ∈ R^n_+ }.        (3)

In the literature, which of the above cones is labeled as the dual varies. Here, we choose (3) to be the dual cone since our discussion centres on the completely positive nature of the matrices in (2). A seminal example of the CPP reformulation approach is the work of de Klerk and Pasechnik [22], who show that the combinatorial problem of finding the stability number α(G) of a graph G can be reformulated as a CPP.

Theorem 1 (de Klerk and Pasechnik [22]) Let the graph G = (V, E) be given with |V| = n and adjacency matrix A ∈ S^n. The stability number of G is given by:

    α(G) = max  ⟨J, X⟩                =  min  λ
           s.t. ⟨A, X⟩ = 0                s.t. λI + yA − J ∈ C_n^*,
                ⟨I, X⟩ = 1
                X ∈ C_n
where J, I ∈ S^n respectively denote the all-ones and the identity matrices. CPP formulations along the lines of Theorem 1 have now been obtained for a variety of quadratic programming (QP) and combinatorial optimization problems such as: standard quadratic programming problems [8, 12], quadratic assignment problems [51], the graph tri-partitioning problem [50], and the chromatic number of a graph [27, 33]. In more generality, Burer [16] has recently shown that quite a general class of QPs with linear as well as binary constraints allows a CPP reformulation.

Theorem 2 (Burer [16]) Let Q ∈ S^n, c ∈ R^n, a_i ∈ R^n, b_i ∈ R, for i = 1, ..., m, and B ⊆ {1, ..., n} be given, and let L = {x ∈ R^n_+ : a_i^T x = b_i, i = 1, ..., m}. If

    x ∈ L  ⟹  0 ≤ x_j ≤ 1 for all j ∈ B,        (4)

then the following two optimization problems are equivalent:

    min  x^T Q x + 2c^T x                     (P)
    s.t. a_i^T x = b_i, i = 1, ..., m,
         x_j ∈ {0, 1}, j ∈ B,
         x ≥ 0,

and

    min  ⟨Q, X⟩ + 2c^T x                      (C)
    s.t. a_i^T x = b_i, i = 1, ..., m,
         a_i^T X a_i = b_i^2, i = 1, ..., m,
         x_j = X_{jj}, j ∈ B,
         [1 x^T; x X] ∈ C_{n+1}.

That is, the optimal values of (P) and (C) are equal, and if (x*, X*) is an optimal solution for (C), then x* is in the convex hull of optimal solutions of (P).

Note that condition (4) can be satisfied by adding the redundant linear constraints 0 ≤ x_i ≤ 1 for all i ∈ B to (P), and linear inequalities can be handled by adding appropriate slack variables [cf., 16]. Thus, Theorem 2 shows that QPs with linear constraints and binary variables can be reformulated as a CPP of the form (C). In [9], condition (4) is analyzed in more detail from a topological point of view. The CPP reformulation approach outlined above has recently been further extended. In particular, extensions of Theorem 2 to QPs with quadratic constraints are also considered in [16].
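Both reformulations above admit simple numerical spot checks. The sketch below (an illustration, not from the paper) verifies, for the 5-cycle, that the rank-one matrix built from a maximum stable set is feasible for the max-problem in Theorem 1 and attains α(G); it then lifts a feasible point of a tiny hypothetical instance of (P) to X = xxᵀ and checks the linear constraints of (C).

```python
import numpy as np
from itertools import combinations

# --- Theorem 1 (de Klerk-Pasechnik) on the 5-cycle ---
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
J, I = np.ones((n, n)), np.eye(n)

def is_stable(S):
    # no edge between any pair of vertices in S
    return all(A[i, j] == 0 for i, j in combinations(S, 2))

# brute-force the stability number alpha(G)
alpha = max(len(S) for k in range(n + 1)
            for S in combinations(range(n), k) if is_stable(S))
assert alpha == 2

# a maximum stable set S yields X = chi_S chi_S^T / alpha, which is
# completely positive (a sum of rank-one terms with nonnegative vectors)
chi = np.zeros(n); chi[[0, 2]] = 1.0     # S = {0, 2} is stable
X = np.outer(chi, chi) / alpha
assert np.isclose((A * X).sum(), 0)      # <A, X> = 0
assert np.isclose(np.trace(X), 1)        # <I, X> = 1
assert np.isclose((J * X).sum(), alpha)  # objective value alpha(G)

# --- Theorem 2 (Burer): lifting a feasible x of (P) into (C) ---
# hypothetical tiny instance: n = 2, one constraint a^T x = b, B = {1, 2}
Q = np.array([[1.0, -2.0], [-2.0, 3.0]])
c = np.array([0.5, -1.0])
a, b = np.array([1.0, 1.0]), 1.0
x = np.array([1.0, 0.0])                 # binary and a^T x = b: feasible
Xl = np.outer(x, x)                      # rank-one lift X = x x^T
assert np.isclose(a @ x, b)              # a_i^T x = b_i
assert np.isclose(a @ Xl @ a, b ** 2)    # a_i^T X a_i = b_i^2
assert np.allclose(np.diag(Xl), x)       # x_j = X_jj for j in B
# [1 x^T; x X] = (1, x)(1, x)^T is completely positive since (1, x) >= 0,
# and the objectives of (P) and (C) agree on the lifted point:
assert np.isclose(x @ Q @ x + 2 * c @ x, (Q * Xl).sum() + 2 * c @ x)
```

The lift only certifies feasibility and matching objective values at one point; the content of Theorems 1 and 2 is that optimizing over the full cones loses nothing.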
Further results in this direction are given in [3], where sufficient conditions to obtain CPP formulations for QPs with a single quadratic constraint (or multiple ones that can be aggregated into a single one) are provided. As discussed in [3], the latter results provide CPP formulations for QPs with linear and complementarity constraints. In [2], CPP formulations are obtained for QPs with constraints given by homogeneous quadratic polynomials except one inhomogeneous quadratic constraint with no linear terms.

3 General completely positive reformulation

In this section we consider POPs in which the polynomials are not necessarily quadratic. We provide a general characterization of the class of POPs that can be formulated as a conic program over the cone of completely positive tensors. This is a natural generalization of the cone of completely positive matrices (cf., eq. (2)). To formally state our results, let us first denote by T^n_d the set of tensors of order d and dimension n; that is,

    T^n_d = R^n ⊗ ··· ⊗ R^n  (d factors).
A tensor is said to be symmetric if the values of its entries are independent of the permutation of its indices. We denote by S^n_d the set of symmetric tensors (of order d and dimension n). Observe that S^n_2 is the familiar space of n × n symmetric matrices. Let M_d : R^n → S^n_d be the mapping defined by

    M_d(x) = x ⊗ x ⊗ ··· ⊗ x  (d factors),        (5)

where ⊗ denotes the tensor product. For example, for any x ∈ R^n, M_2(x) = xx^T. In general, M_d(x) is the symmetric tensor whose (i_1, i_2, ..., i_d) entry is x_{i_1} x_{i_2} ··· x_{i_d}. Observe that ⊗ is not used to denote the Kronecker product of matrices. Instead, the Kronecker product of x, y ∈ R^n is vec(x ⊗ y). We denote by ⟨·,·⟩_{n,d} the inner product in S^n_d defined by

    ⟨A, B⟩_{n,d} = Σ_{i_1,...,i_d} A_{i_1,...,i_d} B_{i_1,...,i_d}.

Notice that for d = 2 the inner product ⟨·,·⟩_{n,2} is the Frobenius inner product. If n and d are clear from the context, we will write just ⟨·,·⟩ for ⟨·,·⟩_{n,d}. Observe that for x, y ∈ R^n we have ⟨M_d(x), M_d(y)⟩ = (x^T y)^d. Define the cone of completely positive tensors as

    C_{n,d} = conv(M_d(R^n_+)),        (6)

where conv(·) denotes the convex hull [cf., 53]. The notation in (6) is motivated by the fact that the cone of second-order completely positive tensors C_{n,2} = C_n, the cone of completely positive matrices (2). Accordingly, C^*_{n,2} = C^*_n, the cone of copositive matrices. In general, C^*_{n,d} has a one-to-one correspondence with the cone of d-degree copositive forms; that is, the homogeneous polynomials of degree d that are non-negative on the non-negative orthant. For example, it is clear from (3) that C^*_{n,2} corresponds to the set of quadratic copositive forms. The tensor generalization of completely positive matrices was implicitly introduced in [25, eq. (6)], and provides a natural generalization of completely positive matrices. As in the case d = 2 of completely positive matrices, C_{n,d} is a closed, pointed, convex cone with non-empty interior.
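The rank-one tensors M_d(x) and the inner product identity ⟨M_d(x), M_d(y)⟩ = (xᵀy)^d are easy to check numerically; a minimal sketch (an illustration, not from the paper), with M_d built by iterated outer products:

```python
import numpy as np
from functools import reduce

def M(x, d):
    """Rank-one symmetric tensor x (x) x (x) ... (x) x, d factors."""
    return reduce(np.multiply.outer, [np.asarray(x, float)] * d)

rng = np.random.default_rng(0)
x, y = rng.random(4), rng.random(4)   # nonnegative vectors in R^4_+
d = 3
# inner product identity <M_d(x), M_d(y)> = (x^T y)^d
assert np.isclose((M(x, d) * M(y, d)).sum(), (x @ y) ** d)

# a convex combination of tensors M_d(u) with u >= 0 lies in C_{n,d};
# entrywise it is nonnegative, consistent with the cone being pointed
T = 0.5 * M(x, d) + 0.5 * M(y, d)
assert (T >= 0).all()
```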
Proposition 1 For any d > 0 and n > 0, C_{n,d} is a closed, pointed, convex cone with non-empty interior.

Proof First, we show that C_{n,d} is closed. Assume X_k → X with X_k ∈ C_{n,d}. Let N denote the dimension of S^n_d. By Carathéodory's Theorem for cones (see, e.g., [4, 53]), for each k there exist u_{k,i} ∈ R^n_+, i = 1, ..., N+1, such that X_k = Σ_{i=1}^{N+1} M_d(u_{k,i}). Let e ∈ R^n_+ be the all-ones vector. Then

    Σ_{i=1}^{N+1} (e^T u_{k,i})^d = Σ_{i=1}^{N+1} ⟨M_d(u_{k,i}), M_d(e)⟩ = ⟨X_k, M_d(e)⟩ → ⟨X, M_d(e)⟩.        (7)

Thus, as each u_{k,i} ∈ R^n_+, (7) implies that U_i := {u_{k,i} : k = 1, 2, ...} is bounded. By suitably selecting subsequences, it follows that each U_i, i = 1, ..., N+1, has a limit point ū_i ∈ R^n_+ such that X_k → Σ_{i=1}^{N+1} M_d(ū_i). Therefore, X = Σ_{i=1}^{N+1} M_d(ū_i) ∈ conv(M_d(R^n_+)).

The pointedness of C_{n,d} is immediate: by construction any X ∈ C_{n,d} has non-negative components, and so X, −X ∈ C_{n,d} implies X = 0. To show that C_{n,d} has non-empty interior, it suffices to show that C^*_{n,d} is pointed. To that end, assume T, −T ∈ C^*_{n,d}. Then ⟨T, X⟩_{n,d} = 0 for all X ∈ span(C_{n,d}). Consider the homogeneous polynomial p(x) defined by p(x) := ⟨T, M_d(x)⟩_{n,d}. Since ⟨T, X⟩_{n,d} = 0 for all X ∈ C_{n,d}, it follows that p(x) = 0 for all x ∈ R^n_+. Therefore p(x) is the zero polynomial, and so T = 0. Since this holds for all T ∈ S^n_d with T, −T ∈ C^*_{n,d}, we conclude that C^*_{n,d} is pointed.
By defining an appropriate mapping of its coefficients, the evaluation of a d-degree polynomial can be written as an inner product in S^{n+1}_d via the mapping M_d(·). Let R_d[x] := {p ∈ R[x] : deg(p) ≤ d} and define C_d : R_d[x] → S^{n+1}_d by

    C_d( Σ_{β ∈ Z^n_+} p_β x^β )_{i_1,...,i_d} := p_α · (α_0! α_1! ··· α_n!) / d!,

where the indices i_1, ..., i_d range over {0, 1, ..., n}, α is the (unique) exponent such that x_1^{α_1} ··· x_n^{α_n} = x_{i_1} ··· x_{i_d} (i.e., α_k is the number of times k appears in the multi-set {i_1, ..., i_d}), and α_0 = d − (α_1 + ··· + α_n) counts the occurrences of the homogenizing index 0. The main property of the linear operator C_d is that for any d-degree, n-variate polynomial p ∈ R_d[x] and a ∈ R^n we have

    p(a) = ⟨C_d(p), M_d(1, a)⟩.        (8)

For the sake of clarity, we slightly abuse notation in (8) by writing M_d(1, a) for M_d((1, a^T)^T).

Example 1 Let p(x_1, x_2) = x_1^3 − 2x_1^2 x_2 − x_2^3 + x_1 x_2^2 + x_1 x_2 + 3x_1^2. For any a = (a_1, a_2) ∈ R^2, p(a) = ⟨C, M⟩, where C = C_3(p) and M = M_3(1, a); that is,

    C_(0,·,·) = [ 0  0    0  ;  0    1    1/6 ;  0    1/6  0  ],
    C_(1,·,·) = [ 0  1    1/6;  1    1   −2/3 ;  1/6 −2/3  1/3],
    C_(2,·,·) = [ 0  1/6  0  ;  1/6 −2/3  1/3 ;  0    1/3 −1 ],

and

    M_(0,·,·) = [ 1    a_1      a_2     ;  a_1    a_1^2    a_1 a_2  ;  a_2      a_1 a_2    a_2^2   ],
    M_(1,·,·) = [ a_1  a_1^2    a_1 a_2 ;  a_1^2  a_1^3    a_1^2 a_2;  a_1 a_2  a_1^2 a_2  a_1 a_2^2],
    M_(2,·,·) = [ a_2  a_1 a_2  a_2^2   ;  a_1 a_2  a_1^2 a_2  a_1 a_2^2;  a_2^2  a_1 a_2^2  a_2^3  ].

3.1 Main results

We are now ready to state our main results, which characterize classes of POPs that can be reformulated in terms of the cone of completely positive tensors C_{n,d}. Notice that by adding appropriate extra variables and multiplying by suitable non-negative polynomials, the POP (1) can be rewritten as a POP with equality constraints of the same degree only, where all the variables are constrained to be non-negative (see Section 3.3 and Remark 1 for details). Consider POPs of the form:

    inf  q(x)
    s.t. h_i(x) = 0, i = 1, ..., m,        (9)
         x ≥ 0,

for some given n-variate polynomials q, h_i ∈ R[x], i = 1, ..., m. The following CPP is a relaxation of (9):

    inf  ⟨C_d(q), Y⟩
    s.t. ⟨C_d(h_i), Y⟩ = 0, i = 1, ..., m,
         ⟨C_d(1), Y⟩ = 1,        (10)
         Y ∈ C_{n+1,d}.
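The coefficient scaling in the definition of C_d is exactly what makes identity (8) hold, and it can be verified numerically. The sketch below (an illustration under our reading of the coefficient convention, with the Example 1 polynomial as reconstructed above) builds C_d(p) entry by entry and checks p(a) = ⟨C_d(p), M_d(1, a)⟩ at a test point.

```python
import numpy as np
from itertools import product
from functools import reduce
from math import factorial, prod

def M(x, d):
    # rank-one symmetric tensor with d factors
    return reduce(np.multiply.outer, [np.asarray(x, float)] * d)

def C(p, n, d):
    """Coefficient tensor C_d(p) in S^{n+1}_d for p given as a dict
    mapping exponents (alpha_1, ..., alpha_n) to coefficients.  Entry
    (i_1, ..., i_d) gets p_alpha * alpha_0! alpha_1! ... alpha_n! / d!,
    where alpha_k counts occurrences of k among the indices and index 0
    is the homogenizing variable, so that p(a) = <C_d(p), M_d(1, a)>."""
    T = np.zeros((n + 1,) * d)
    for idx in product(range(n + 1), repeat=d):
        alpha = tuple(idx.count(k) for k in range(1, n + 1))
        a0 = d - sum(alpha)
        T[idx] = (p.get(alpha, 0.0) * factorial(a0) *
                  prod(factorial(ak) for ak in alpha) / factorial(d))
    return T

# Example 1 polynomial: x1^3 - 2 x1^2 x2 - x2^3 + x1 x2^2 + x1 x2 + 3 x1^2
p = {(3, 0): 1, (2, 1): -2, (0, 3): -1, (1, 2): 1, (1, 1): 1, (2, 0): 3}
a = (0.7, -1.3)
lhs = sum(coef * a[0] ** e[0] * a[1] ** e[1] for e, coef in p.items())
rhs = (C(p, 2, 3) * M((1,) + a, 3)).sum()
assert np.isclose(lhs, rhs)   # identity (8)
```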
Proposition 2 Let q, h_1, ..., h_m ∈ R[x] in (9) be such that deg(h_i) ≤ d for i = 1, ..., m, and deg(q) ≤ d. Then the optimal value of (10) is a lower bound for the optimal value of (9).
Proof Let x ∈ R^n_+ be a feasible solution of (9). Applying (8) to h_i(x), i = 1, ..., m, it follows that Y = M_d(1, x) is a feasible solution of (10). Applying (8) again, it follows that ⟨C_d(q), Y⟩ = q(x).

The CPP (10) can be seen as a natural convex lifting of the POP (9). However, this relaxation is not always tight, as illustrated in Section 4.3. We are interested in characterizing conditions under which the relaxation (10) is tight and the optimal solutions of (10) can be characterized in terms of the optimal solutions of (9).

Definition 1 Problems (9) and (10) are equivalent if the following holds:
(a) The optimal values of (9) and (10) are the same.
(b) One of the problems (9) and (10) attains its optimal value if and only if the other one does.
(c) For any Y ∈ C_{n+1,d}, let X(Y) ∈ R^n_+ be defined by X(Y)_j = Y_{0,...,0,j} for j = 1, ..., n. When attainment takes place,

    {X(Y) : Y is an optimal solution to (10)} = conv{x : x is an optimal solution to (9)}.

Notice that if (9) and (10) are equivalent, and (9) has a unique solution x*, then X(Y) = x* for any optimal solution Y of (10). Next, in Theorems 4 and 5, we provide conditions under which problems (9) and (10) are equivalent. These conditions characterize a certain appropriate behavior at infinity of the polynomials involved in (9). All this is formalized in terms of the so-called horizon cone [cf. 54] and the homogeneous component of a polynomial. Given a non-empty set S ⊆ R^n, the horizon cone S^∞ is defined as

    S^∞ := {y ∈ R^n : there exist x_k ∈ S, λ_k ∈ R_+, k = 1, 2, ..., such that λ_k → 0 and λ_k x_k → y}.        (11)

If S = ∅, define S^∞ := {0}. The horizon cone generalizes the recession cone (see Proposition 3(i)). Proposition 3 below summarizes some straightforward properties of horizon cones that we will use throughout the sequel.

Proposition 3 The following are some properties of the horizon cone defined in (11):
(i) Let A ∈ R^{m×n}, b ∈ R^m, and S = {x ∈ R^n : Ax ≤ b}. If S ≠ ∅, then S^∞ = {y ∈ R^n : Ay ≤ 0}.
(ii) If S ⊆ R^n is bounded, then S^∞ = {0}.
(iii) If S ⊆ T ⊆ R^n, then S^∞ ⊆ T^∞.
(iv) If S, T ⊆ R^n, then (S ∪ T)^∞ = S^∞ ∪ T^∞.
(v) If S, T ⊆ R^n, then (S ∩ T)^∞ ⊆ S^∞ ∩ T^∞, but the reverse inclusion does not necessarily hold.

Given a polynomial h ∈ R[x], let h̃(x) denote the homogeneous component of h of highest total degree. In other words, h̃(x) is obtained by dropping from h the terms whose total degree is less than deg(h). We will also write h^{-1}(0) to denote the set of zeros of the polynomial h ∈ R[x]. Notice that h̃^{-1}(0) is the set of zeros at infinity of h [see, e.g., 52] and (h^{-1}(0))^∞ ⊆ h̃^{-1}(0). In addition, the following analog of (8) holds for any n-variate polynomial p ∈ R_d[x] and a ∈ R^n:

    p̃(a) = ⟨C_d(p), M_d(0, a)⟩.        (12)

Theorem 4 Let q, h_1, ..., h_m ∈ R_d[x] in (9) be such that for i = 1, ..., m
(i) deg(h_i) = d and h_i(x) ≥ 0 for all x ∈ S_{i−1}, and
(ii) {x ∈ S^∞_{i−1} : h̃_i(x) = 0} ⊆ S^∞_i,
where S_0 = R^n_+ and S_i = {x ∈ R^n_+ : h_j(x) = 0, j ≤ i}, i = 1, ..., m. Then problems (9) and (10) are equivalent.
9 Completely Positive Reformulations for Polynomial Optimization 9 Theorem 5 Let q, h 1,..., h m R d [x] in (9) be such that for i = 1,..., m (i) deg(h i ) = d, h i (x) 0 for all x R n +, and (ii) q(x) 0 for all x {x R n + : h i (x) = 0, i = 1,..., m} Then problems (9) and (10) are equivalent. The proof of Theorems 4 and 5 are similar, and are presented in Section 3.2. Although the condition of Theorems 4 and 5 are closely related, note that (i) and (ii) in Theorem 4 do not imply (i) and (ii) in Theorem 5 or viceversa. The conditions used in Theorems 4 and 5 encompass natural assumptions in optimization problems, as shown in Section 4 and Corollaries 1, 2 and 3 below. Corollary 1 Let q, h 1,..., h m R[x] in (9) be such that deg(q) d and, deg(h i ) = d and h i (x) 0 for any x R n + and all i = 1,..., m. Then problems (9) and (10) are equivalent in any of the following cases (i) q is bounded below on R n +. (ii) {x R n + : h i (x) = 0, i = 1,..., m} { x R n + : h i (x) = 0, i = 1,..., m }. Proof If q(x) bounded below on R n +, Lemma 1 below implies q(x) 0 for all x R n + {x R n + : h i (x) = 0, i = 1,..., m}. Hence case (i) follows from Theorem 5. To prove (ii), notice that by Proposition 2 we can assume q bounded on F P OP := {x R n + : h i (x) = 0, i = 1,..., m}. By Lemma 1, q(x) 0 for all x F P OP {x R n + : h i (x) = 0, i = 1,..., m}. Hence case (ii) follows from Theorem 5. Lemma 1 Let q be bounded below in S then q(x) 0 for all x S. Proof Let s S. Let λ k 0 and s k S be such that λ k s k s. Write q(x) = d l=0 q l(x) the expansion of q(x) in homogeneous components (i.e. q l is homogeneous of degree l). We have ( ) ( ) q(s) = lim q(λ ks k ) = lim inf k k λd k d 1 q(s k ) q l (s k ) l=0 lim inf k λ k d 1 l=0 λ d 1 l k q l (λ k s k ) where the inequality follows from lim inf k λ d k q(s k) 0 which is a consequence of q being bounded below on S. 
By considering the equivalent formulation of (9) obtained after replacing h_i(x) = 0 by h_i^2(x) = 0, the non-negativity condition h_i(x) ≥ 0 for all x ∈ R^n_+, i = 1, ..., m, in Theorem 5 and Corollary 1 is automatically satisfied. The corresponding CPP reformulation requires the use of higher-order tensors: of order 2d instead of d.

Corollary 2 Let q, h_1, ..., h_m ∈ R[x] in (9) be such that deg(q) ≤ 2d and deg(h_i) = d for i = 1, ..., m. Problem (9) is equivalent to

    inf  ⟨C_{2d}(q), Y⟩
    s.t. ⟨C_{2d}(h_i^2), Y⟩ = 0, i = 1, ..., m,
         ⟨C_{2d}(1), Y⟩ = 1,        (13)
         Y ∈ C_{n+1,2d},

in any of the following cases:
(i) q is bounded below on R^n_+.
(ii) {x ∈ R^n_+ : h̃_i(x) = 0, i = 1, ..., m} ⊆ {x ∈ R^n_+ : h_i(x) = 0, i = 1, ..., m}^∞.
(iii) q̃(x) ≥ 0 for all x ∈ {x ∈ R^n_+ : h̃_i(x) = 0, i = 1, ..., m}.

Proof Follows directly from Corollary 1 and Theorem 5.

Next, we consider the case in which there is a known upper bound M on the value of the variables in the feasible set of (9). By considering the equivalent formulation of (9) obtained after adding extra variables y_j, j = 1, ..., n, and the redundant constraints y_j ≥ 0, j = 1, ..., n, and

    (1 + Σ_{j=1}^n x_j + Σ_{j=1}^n y_j)^{d−2} Σ_{j=1}^n (M − x_j − y_j)^2 = 0,

an equivalent CPP formulation of (9) can be obtained.

Corollary 3 Let d ≥ 2, and let q, h_1, ..., h_m ∈ R[x] in (9) be such that deg(q) ≤ d and deg(h_i) = d. Let M be given such that {x ≥ 0 : h_i(x) = 0, i = 1, ..., m} ⊆ [0, M]^n. Problem (9) is equivalent to

    inf  ⟨C_d(q), Y⟩
    s.t. ⟨C_d(h_i), Y⟩ = 0, i = 0, 1, ..., m,
         ⟨C_d(1), Y⟩ = 1,        (14)
         Y ∈ C_{2n+1,d},

where h_0(x, y) = (1 + Σ_{j=1}^n x_j + Σ_{j=1}^n y_j)^{d−2} Σ_{j=1}^n (M − x_j − y_j)^2.

Proof Let h'_1(x, y) = h_0(x, y) and h'_{i+1}(x, y) = h_i(x) for i = 1, ..., m. Let S_i = {(x, y) ∈ R^{2n}_+ : h'_j(x, y) = 0, j ≤ i}. Then S_0 = R^{2n}_+ and S_i ⊆ [0, M]^{2n} for i = 1, ..., m+1. Thus h'_i(x, y) ≥ 0 for all (x, y) ∈ S_{i−1}, for any i = 1, ..., m+1. Also, S^∞_i ⊆ ([0, M]^{2n})^∞ = {0}, and thus for i = 2, ..., m+1, {(x, y) ∈ S^∞_{i−1} : h̃'_i(x, y) = 0} ⊆ S^∞_i. For i = 1 we have S^∞_1 = {0} = {(x, y) ∈ S^∞_0 : h̃'_1(x, y) = 0}. Therefore Theorem 4 can be applied to obtain the result.

Thus far, it has been assumed that the degrees of the POP constraints are all equal. This assumption can be dropped in all the previous results through an appropriate reformulation of (9), as pointed out by the following remark.

Remark 1 The condition deg(h_i) = d in Theorems 4 and 5 can be relaxed to deg(h_i) ≤ d. In this case, problem (9) is equivalent to

    inf  ⟨C_d(q), Y⟩
    s.t.
    ⟨C_d(g_i h_i), Y⟩ = 0, i = 1, ..., m,
         ⟨C_d(1), Y⟩ = 1,
         Y ∈ C_{n+1,d},

where each g_i ∈ R_{d−deg(h_i)}[x], i = 1, ..., m, is any polynomial satisfying the following two conditions:
(i) R^n_+ ∩ (g_i h_i)^{-1}(0) = R^n_+ ∩ h_i^{-1}(0), and
(ii) R^n_+ ∩ (g̃_i h̃_i)^{-1}(0) = R^n_+ ∩ h̃_i^{-1}(0).
For instance, each g_i can be chosen to be g_i(x) = (x_1 + ··· + x_n + 1)^{d−deg(h_i)}. Condition (i) in Remark 1 ensures that the feasible set of (9), as well as the sets S_i, i = 1, ..., m, do not change if each constraint h_i(x) = 0, i = 1, ..., m, is replaced with g_i(x)h_i(x) = 0. Condition (ii) ensures that {x ∈ S^∞_{i−1} : h̃_i(x) = 0} ⊆ S^∞_i if and only if {x ∈ S^∞_{i−1} : (g̃_i h̃_i)(x) = 0} ⊆ S^∞_i, for i = 1, ..., m. We use Remark 1 in Section 4.4, in a problem where the polynomial constraints are of different degrees.
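The redundant polynomial h_0 used in Corollary 3 can be spot-checked numerically: it is built so that, on the non-negative orthant, it vanishes exactly on the slab x_j + y_j = M while having degree d. A minimal sketch (an illustration with hypothetical values for M, d, and n):

```python
import numpy as np

# h0(x, y) = (1 + sum x + sum y)^(d-2) * sum_j (M - x_j - y_j)^2
# vanishes on R^{2n}_+ exactly when x_j + y_j = M for every j, which
# confines (x, y) to the box [0, M]^{2n}.  Spot check with d = 3, n = 2:
def h0(x, y, M, d):
    return (1 + x.sum() + y.sum()) ** (d - 2) * ((M - x - y) ** 2).sum()

M_bound, d = 2.0, 3
x = np.array([0.5, 1.5])
assert np.isclose(h0(x, M_bound - x, M_bound, d), 0)  # on the slab: zero
assert h0(x, np.zeros(2), M_bound, d) > 0             # off the slab: > 0
```

The leading factor (1 + Σx + Σy)^{d−2} is strictly positive on the orthant, so it raises the degree to d without changing the zero set; that is what lets Corollary 3 keep the tensor order at d.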
3.2 Proofs of Theorems 4 and 5

For a given set U, let conic(U) = {Σ_{i=1}^k λ_i u_i : λ_i ∈ R_+, u_i ∈ U, k ∈ N} be the conic hull of U. The proofs of Theorems 4 and 5 rely on the following lemma.

Lemma 2 For any d > 0 and n > 0, C_{n+1,d} = conic(M_d({0, 1} × R^n_+)).

Proof Let U ⊆ R^{n+1}. Then conic(U) = conv(cone(U)), with cone(U) = {λu : λ ≥ 0, u ∈ U}. Also, for any u ∈ R^{n+1} and λ ≥ 0, M_d(λu) = λ^d M_d(u). Thus cone(M_d(U)) = M_d(cone(U)). Taking U = {0, 1} × R^n_+, it follows that conic(M_d({0, 1} × R^n_+)) = conv(M_d(cone({0, 1} × R^n_+))) = conv(M_d(R^{n+1}_+)).

Proof (of Theorem 4) Define the following sets:

    F_POP = {x ∈ R^n_+ : x is a feasible solution to (9)},
    O_POP = {x ∈ R^n_+ : x is an optimal solution to (9)},
    F_CPP = {Y ∈ S^{n+1}_d : Y is a feasible solution to (10)},
    O_CPP = {Y ∈ S^{n+1}_d : Y is an optimal solution to (10)}.

Let ν = inf{q(x) : x ∈ F_POP}. By Proposition 2,

    ν ≥ inf{⟨C_d(q), Y⟩ : Y ∈ F_CPP}.        (15)

In particular, the statement of the theorem holds when ν = −∞. Now, assume q is bounded below on F_POP (i.e., ν > −∞). By Lemma 1 we obtain

    q̃(x) ≥ 0 for all x ∈ F^∞_POP.        (16)

By Lemma 2, for any Y ∈ C_{n+1,d},

    Y = Σ_{k=1}^{n_1} λ_k M_d(1, u_k) + Σ_{j=1}^{n_0} μ_j M_d(0, v_j)        (17)

for some n_0, n_1 ≥ 0, λ_k, μ_j > 0, and u_k, v_j ∈ R^n_+. If Y ∈ F_CPP, then

    1 = ⟨C_d(1), Y⟩ = Σ_{k=1}^{n_1} λ_k.

Also, for any i = 1, ..., m,

    0 = ⟨C_d(h_i), Y⟩ = Σ_{k=1}^{n_1} λ_k h_i(u_k) + Σ_{j=1}^{n_0} μ_j h̃_i(v_j).        (18)

Observe that u_k ∈ R^n_+ = S_0, k = 1, ..., n_1, and v_j ∈ R^n_+ = S^∞_0, j = 1, ..., n_0. In particular, by condition (i) and Lemma 1 we have h̃_1(v_j) ≥ 0, j = 1, ..., n_0. Therefore, condition (i) again and (18) yield h_1(u_k) = 0, k = 1, ..., n_1, and h̃_1(v_j) = 0, j = 1, ..., n_0. Therefore u_k ∈ S_0 ∩ h_1^{-1}(0) = S_1, k = 1, ..., n_1, and v_j ∈ S^∞_0 ∩ h̃_1^{-1}(0). Hence, by condition (ii), v_j ∈ S^∞_1, j = 1, ..., n_0. Proceeding by induction on i = 1, ..., m, it follows that u_k ∈ S_m = F_POP for each k = 1, ..., n_1 and v_j ∈ S^∞_m = F^∞_POP for each j = 1, ..., n_0.
This and (16) imply $\tilde q(v_j) \ge 0$ for $j = 1,\dots,n_0$. Therefore, using (8) and (12),

$$\langle C_d(q), Y\rangle = \sum_{k=1}^{n_1} \lambda_k q(u_k) + \sum_{j=1}^{n_0} \mu_j \tilde q(v_j) \ge \sum_{k=1}^{n_1} \lambda_k q(u_k) \ge \nu. \tag{19}$$
From (15) and (19), part (a) follows. To prove (b), notice first that if $x^* \in O_{POP}$ then $M_d(1, x^*) \in F_{CPP}$ and $\langle C_d(q), M_d(1, x^*)\rangle = q(x^*) = \nu$. Also, if $Y \in O_{CPP}$, from (19) each $u_k$ in the decomposition (17) of $Y$ is in $O_{POP}$. To prove part (c), apply $X$ to $Y$ to obtain

$$X(Y) = \sum_{k=1}^{n_1} \lambda_k X(M_d(1, u_k)) + \sum_{j=1}^{n_0} \mu_j X(M_d(0, v_j)) = \sum_{k=1}^{n_1} \lambda_k u_k \in \operatorname{conv}(O_{POP}).$$

Proof (Proof of Theorem 5) The proof is exactly the same as the proof of Theorem 4, except for the paragraph between equations (18) and (19), which should be replaced by the following: Observe that $u_k \in \mathbb{R}^n_+$, $k = 1,\dots,n_1$, and $v_j \in \mathbb{R}^n_+$, $j = 1,\dots,n_0$. By condition (i) and Lemma 1 we have $\tilde h_i(v_j) \ge 0$ for $j = 1,\dots,n_0$ and $i = 1,\dots,m$. Therefore, condition (i) again and (18) yield $u_k \in F_{POP}$ for each $k = 1,\dots,n_1$ and $v_j \in \{x \in \mathbb{R}^n_+ : \tilde h_i(x) = 0,\ i = 1,\dots,m\}$ for each $j = 1,\dots,n_0$. This and condition (ii) imply $\tilde q(v_j) \ge 0$ for $j = 1,\dots,n_0$. Therefore, using (8) we obtain (19).

3.3 From equalities to inequalities

The CPP reformulation procedures presented in Section 3.1 for the equality constrained POPs (9) can be applied to inequality constrained POPs by adding slack variables. Without loss of generality, we assume the non-negativity of all variables in (1):

$$\inf q(x) \quad \text{s.t.} \quad h_i(x) = 0,\ i = 1,\dots,m, \quad g_j(x) \ge 0,\ j = 1,\dots,r, \quad x \ge 0, \tag{20}$$

for some given $n$-variate polynomials $q, h_i, g_j \in \mathbb{R}[x]$. Problem (20) can be reformulated as

$$\inf q(x) \quad \text{s.t.} \quad h_i(x) = 0,\ i = 1,\dots,m, \quad g_j(x) - t_j = 0,\ j = 1,\dots,r, \quad x, t \ge 0.$$

From Corollary 2 it follows that (20) is equivalent to

$$\begin{aligned} \inf\ &\langle C_{2d}(q(x)), Y\rangle \\ \text{s.t.}\ &\langle C_{2d}(h_i(x)^2), Y\rangle = 0, \quad i = 1,\dots,m, \\ &\langle C_{2d}((g_j(x) - t_j)^2), Y\rangle = 0, \quad j = 1,\dots,r, \\ &\langle C_{2d}(1), Y\rangle = 1, \\ &Y \in \mathcal{C}_{n+r+1,2d}, \end{aligned} \tag{21}$$

provided that $\deg(h_i) \le d$ for $i = 1,\dots,m$, $\deg(g_j) \le d$ for $j = 1,\dots,r$, $\deg(q) \le 2d$, and, for $h(x,t) := \sum_{i=1}^m h_i(x)^2 + \sum_{j=1}^r (g_j(x) - t_j)^2$,

$$\mathbb{R}^{n+r}_+ \cap \tilde h^{-1}(0) \subseteq (\mathbb{R}^{n+r}_+ \cap h^{-1}(0))^\infty.$$
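The slack-variable step above is elementary but worth making concrete. The following sketch (plain Python, with illustrative constraint functions of our choosing) checks pointwise that $x$ is feasible for (20) exactly when $(x, t)$ with $t_j = g_j(x)$ is feasible for the equality-constrained reformulation:

```python
# Sketch of the inequality-to-equality conversion of Section 3.3 (illustrative).

def feasible_ineq(hs, gs, x, tol=1e-9):
    """Feasibility for (20): h_i(x) = 0, g_j(x) >= 0, x >= 0."""
    return (all(abs(h(x)) < tol for h in hs)
            and all(g(x) >= -tol for g in gs)
            and all(xi >= -tol for xi in x))

def feasible_eq(hs, gs, x, t, tol=1e-9):
    """Feasibility for the slack form: h_i(x) = 0, g_j(x) - t_j = 0, x >= 0, t >= 0."""
    return (all(abs(h(x)) < tol for h in hs)
            and all(abs(g(x) - tj) < tol for g, tj in zip(gs, t))
            and all(xi >= -tol for xi in x)
            and all(tj >= -tol for tj in t))

hs = [lambda x: x[0] + x[1] - 1]        # h_1(x) = x_1 + x_2 - 1
gs = [lambda x: 0.25 - x[0] ** 2]       # g_1(x) = 1/4 - x_1^2
for x in [(0.5, 0.5), (1.0, 0.0), (2.0, -1.0), (0.25, 0.75)]:
    t = [g(x) for g in gs]              # the only candidate slack is t_j = g_j(x)
    assert feasible_ineq(hs, gs, x) == feasible_eq(hs, gs, x, t)
```

Since $t_j$ is forced to equal $g_j(x)$, the sign constraint $t_j \ge 0$ is exactly what carries the original inequality.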
4 Some problem classes with special structure

In this section we present some consequences of the copositive reformulations for POPs presented in Section 3. Proposition 6 below provides sufficient conditions for (ii) in Theorem 4 to be satisfied when $S_i$ is a finite union of polyhedra. This result allows us to apply the completely positive reformulation procedure presented in Section 3.1 to linearly constrained mixed-integer programs with a nonlinear objective. Its proof relies on the following lemma.

Lemma 3 Let $S \subseteq \mathbb{R}^n$ and $I \subseteq \{1,\dots,n\}$. If there exists $M \in \mathbb{R}$ such that $\|x_I\| \le M$ for all $x \in S$, then $S^\infty \subseteq \{y \in \mathbb{R}^n : y_I = 0\}$.

Proof Assume $y \in S^\infty$; then there exist sequences $\lambda_k \to 0$, $\lambda_k \ge 0$, and $x_k \in S$ such that $\lambda_k x_k \to y$. In particular, $\|y_I\| = \lim_k \lambda_k \|(x_k)_I\| \le \lim_k \lambda_k M = 0$. Thus $y_I = 0$.

In the statements of Lemma 3 and Proposition 6, $\|\cdot\|$ denotes any norm in $\mathbb{R}^{|I|}$ or $\mathbb{R}^{|I_i|}$.

Proposition 6 Let $L \subseteq \mathbb{R}^n$ be a polyhedron, and for $i = 1,\dots,m$ let $I_i \subseteq \{1,\dots,n\}$ and $c_i \in \mathbb{R}^{|I_i|}$. Define $L_i := \{x \in L : x_{I_i} = c_i\}$ for $i = 1,\dots,m$ and $\hat L = \bigcap_{i=1}^m L_i$. Assume $\hat L \neq \emptyset$ and that there exist $M_i \in \mathbb{R}$, $i = 1,\dots,m$, such that $\|x_{I_i}\| \le M_i$ for all $x \in L$ and all $i = 1,\dots,m$. Then $\hat L^\infty = L^\infty$.

Proof Since $\hat L \subseteq L$, Proposition 3(iii) yields $\hat L^\infty \subseteq L^\infty$. On the other hand, for each $i \in \{1,\dots,m\}$,

$$L_i^\infty = L^\infty \cap \{x \in \mathbb{R}^n : x_{I_i} = 0\} \quad \text{(by Proposition 3(i))} \quad = L^\infty \quad \text{(by Lemma 3)},$$

and since $\hat L$ is the nonempty intersection of the polyhedra $L_i$, it follows that $\hat L^\infty = \bigcap_{i=1}^m L_i^\infty = L^\infty$.

Here is a more detailed explanation of the first step above: since $L$ is a polyhedron, $L$ and $L_i$ can be written as $L = \{x : Ax \le b\}$ and $L_i = \{x : Ax \le b,\ x_{I_i} = c_i\}$. Thus, by Proposition 3(i), $L_i^\infty = \{y : Ay \le 0,\ y_{I_i} = 0\} = \{y : Ay \le 0\} \cap \{x : x_{I_i} = 0\} = L^\infty \cap \{x : x_{I_i} = 0\}$.

4.1 Linearly constrained mixed binary quadratic programming

Consider the linearly constrained mixed binary quadratic problem (P) in Theorem 2. Proceeding as in [16], assume the following two conditions hold.
First, (P) is feasible; and second, for each $j \in B \subseteq \{1,\dots,n\}$,

$$x \ge 0,\ a_i^{\mathsf T} x = b_i,\ i = 1,\dots,m \implies 0 \le x_j \le 1. \tag{22}$$

This ensures that for all $j \in B$,

$$x_j(1 - x_j) \ge 0 \quad \text{for all } x \in \mathbb{R}^n_+ \text{ such that } a_i^{\mathsf T} x = b_i,\ i = 1,\dots,m. \tag{23}$$

Rewrite (P) as

$$\begin{aligned} \min\ &q(x) \\ \text{s.t.}\ &(a_i^{\mathsf T} x - b_i)^2 = 0, \quad i = 1,\dots,m, \\ &x_j(1 - x_j) = 0, \quad j \in B, \\ &x \ge 0. \end{aligned} \tag{24}$$

Problem (24) satisfies the conditions of Theorem 4. Indeed, condition (i) follows readily from (23) and the fact that $(a_i^{\mathsf T} x - b_i)^2 \ge 0$ for all $x \in \mathbb{R}^n_+$. Condition (ii)
holds for the first $m$ (linear) constraints in (P) by Proposition 3. Condition (ii) also holds for the remaining $|B|$ binary constraints by Proposition 6 and eq. (22). It thus follows from Theorem 4 that (24) is equivalent to the linear conic program

$$\begin{aligned} \inf\ &\langle C_2(q(x)), Y\rangle \\ \text{s.t.}\ &\langle C_2((a_i^{\mathsf T} x - b_i)^2), Y\rangle = 0, \quad i = 1,\dots,m, \\ &\langle C_2(x_j(1 - x_j)), Y\rangle = 0, \quad j \in B, \\ &\langle C_2(1), Y\rangle = 1, \\ &Y \in \mathcal{C}_{n+1,2}. \end{aligned} \tag{25}$$

It is easy to show that (25) is equivalent to the completely positive reformulation (C) in Theorem 2 for the linearly constrained mixed binary quadratic problem (P). In particular, this means that the conditions of Theorem 4 capture the conditions for the completely positive reformulation of a linearly constrained mixed binary quadratic problem derived by Burer [16].

4.2 Quadratically constrained quadratic programming

Burer [16] provides an extension of Theorem 2 for quadratically constrained quadratic programs. Specifically, consider the problem

$$\begin{aligned} \min\ &q_0(x) \\ \text{s.t.}\ &a_i^{\mathsf T} x = b_i, \quad i = 1,\dots,m, \\ &q_l(x) = 0, \quad l = 1,\dots,m_q, \\ &x \ge 0, \end{aligned} \tag{26}$$

where $q_l \in \mathbb{R}_2[x]$ for $l = 0,\dots,m_q$ are quadratic polynomials. As in Burer [16], throughout we assume that problem (26) is feasible. Clearly, problem (26) can have binary decision variables by letting the constraint $x_j(1 - x_j) = 0$ be one of the quadratic constraints $q_l(x) = 0$, $l = 1,\dots,m_q$, for any $j \in \{1,\dots,n\}$. In [16], sufficient conditions are given for (26) to be equivalent to its completely positive reformulation, given by

$$\begin{aligned} \inf\ &\langle C_2(q_0(x)), Y\rangle \\ \text{s.t.}\ &\langle C_2((a_i^{\mathsf T} x - b_i)^2), Y\rangle = 0, \quad i = 1,\dots,m, \\ &\langle C_2(q_l(x)), Y\rangle = 0, \quad l = 1,\dots,m_q, \\ &\langle C_2(1), Y\rangle = 1, \\ &Y \in \mathcal{C}_{n+1,2}. \end{aligned} \tag{27}$$

Burer [16] shows that the following conditions ensure the equivalence of (26) and (27): for $l = 1,\dots,m_q$,

$$x \in L \implies q_l(x) := x^{\mathsf T} Q^l x + 2 (c^l)^{\mathsf T} x + \kappa_l \ge 0, \quad \text{and} \quad d \in L^\infty \implies d_j = 0 \text{ for all } j \in B_l, \tag{28}$$

where $L := \{x \ge 0 : a_i^{\mathsf T} x = b_i,\ i = 1,\dots,m\}$ and $B_l := \{j \in \{1,\dots,n\} : Q^l_{ij} = Q^l_{ji} \neq 0$ for some $i \in \{1,\dots,n\}$, or $c^l_j \neq 0\}$, with $Q^l \in \mathcal{S}^n$, $c^l \in \mathbb{R}^n$, $\kappa_l \in \mathbb{R}$, $l = 1,\dots,m_q$.
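Formulations such as (25) and (27) rest on the degree-2 linearization identity $\langle C_2(h), M_2(1,x)\rangle = h(x)$ for rank-one moment matrices $M_2(1,x) = (1,x)(1,x)^{\mathsf T}$. A minimal numerical sketch (pure Python; the matrix convention for $C_2$ below is ours, chosen so that this identity holds):

```python
# Sketch of the degree-2 linearization behind (25) and (27): a quadratic
# h(x) = x'Qx + 2c'x + k is represented by the symmetric matrix C = [[k, c'], [c, Q]],
# and <C, (1,x)(1,x)'> = h(x). Conventions here are illustrative; the paper's C_2
# and M_2 agree with this up to the chosen indexing of symmetric matrices.

def quad_matrix(Q, c, k):
    n = len(c)
    C = [[0.0] * (n + 1) for _ in range(n + 1)]
    C[0][0] = k
    for i in range(n):
        C[0][i + 1] = C[i + 1][0] = c[i]
        for j in range(n):
            C[i + 1][j + 1] = Q[i][j]
    return C

def moment_matrix(x):
    v = [1.0] + list(x)
    return [[vi * vj for vj in v] for vi in v]

def inner(A, B):
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Binary constraint x_1(1 - x_1) as a quadratic: Q = [[-1]], c = [1/2], k = 0.
C_bin = quad_matrix([[-1.0]], [0.5], 0.0)
for x1 in [0.0, 0.3, 1.0]:
    assert abs(inner(C_bin, moment_matrix([x1])) - x1 * (1 - x1)) < 1e-12
```

In (25), each constraint $\langle C_2(x_j(1-x_j)), Y\rangle = 0$ therefore pins the corresponding combination of entries of $Y$ to zero, exactly as the scalar constraint does on rank-one moment matrices.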
Proposition 7 and Example 2 below show that the conditions for the equivalence between (26) and (27) given in Theorem 4 are strictly weaker than (28). To establish Proposition 7 we rely on the following technical lemma. For $S \subseteq \mathbb{R}^n$, we say that $S$ is regular if

$$x \in S \text{ and } d \in S^\infty \implies x + d \in S. \tag{29}$$
Lemma 4 Let $S \subseteq \mathbb{R}^n$ be regular and let $h \in \mathbb{R}[x]$ be a polynomial satisfying $S \cap h^{-1}(0) \neq \emptyset$. Assume that, for all $j \in \{1,\dots,n\}$,

$$d \in S^\infty \implies d_j = 0, \quad \text{or} \quad \frac{\partial h(x)}{\partial x_j} \equiv 0. \tag{30}$$

Then $S^\infty \cap \tilde h^{-1}(0) = S^\infty = (S \cap h^{-1}(0))^\infty$, and $S \cap h^{-1}(0)$ is regular.

Proof Condition (30) implies

$$x \in \mathbb{R}^n \text{ and } d \in S^\infty \implies h(x + d) = h(x) \text{ and } \tilde h(x + d) = \tilde h(x). \tag{31}$$

Using (31), $\tilde h(d) = \tilde h(0) = 0$ for any $d \in S^\infty$. That is, $S^\infty \cap \tilde h^{-1}(0) = S^\infty$. Now take $d \in S^\infty$ and $x \in S \cap h^{-1}(0)$. Define $\lambda_k = 1/k$ and $x_k = x + kd$ for $k = 1, 2, \dots$. By regularity of $S$, we have $x_k \in S$. By (31) we have $h(x_k) = h(x + kd) = h(x) = 0$. Notice that $\lambda_k \to 0$, $x_k \in S \cap h^{-1}(0)$, and $d = \lim_k \lambda_k x_k \in (S \cap h^{-1}(0))^\infty$. Thus $S^\infty \subseteq (S \cap h^{-1}(0))^\infty$. This together with Proposition 3(iii) implies that $S^\infty = (S \cap h^{-1}(0))^\infty$. To show that $S \cap h^{-1}(0)$ is regular, take $x \in S \cap h^{-1}(0)$ and $d \in (S \cap h^{-1}(0))^\infty = S^\infty$; by regularity of $S$, $x + d \in S$, and by (31), $h(x + d) = h(x) = 0$. Therefore $x + d \in S \cap h^{-1}(0)$.

Proposition 7 Let $q(x) = q_0(x)$, $h_i(x) = (a_i^{\mathsf T} x - b_i)^2$, $i = 1,\dots,m$, and $h_{m+l} = q_l(x)$, $l = 1,\dots,m_q$. Condition (28) implies the conditions of Theorem 4.

Proof Note that for $i = 1,\dots,m$, $h_i(x)$ satisfies both condition (i) (since $h_i(x)$ is a square) and condition (ii) (as a result of Proposition 3(i)) of Theorem 4. Also, $h_{m+l}(x)$, $l = 1,\dots,m_q$, satisfies condition (i) of Theorem 4, since by assumption $h_{m+l}(x) \ge 0$ for all $x \in L \supseteq S_{m+l-1}$. What remains to show is that $h_{m+l}(x)$ satisfies condition (ii) in Theorem 4 for $l = 1,\dots,m_q$; that is,

$$(S_{m+l-1} \cap h_{m+l}^{-1}(0))^\infty = S_{m+l-1}^\infty \cap \tilde h_{m+l}^{-1}(0) \quad \text{for } l = 1,\dots,m_q.$$

This fact follows by noticing that $S_m = L$, which is a polyhedron and therefore regular (cf. Proposition 3(i) and (29)), and repeatedly applying Lemma 4 to $S = S_{m+l-1}$, $h = h_{m+l}$, $l = 1,\dots,m_q$.

The following example shows that the conditions of Theorem 4 are strictly weaker than (28).

Example 2 Consider the problem

$$\begin{aligned} \min\ &x^2 + 4x - z^2 \\ \text{s.t.}\ &(y + z - 1)^2 = 0, \\ &q(x, y, z) := (x - 2)^2 + y(2x - 3) = 0, \\ &x, y, z \ge 0. \end{aligned} \tag{32}$$

Let $L = \{(x, y, z) \ge 0 : y + z = 1\}$.
Notice that $(x, y, z) \in L \implies 0 \le y \le 1$, and then $q(x, y, z) = (1 - y)(x - 2)^2 + y(x - 1)^2 \ge 0$ for all $(x, y, z) \in L$. Also, $L^\infty = \{(x, 0, 0) : x \ge 0\}$, and thus (28) is not satisfied in this case. However,

$$(L \cap q^{-1}(0))^\infty = \{(1, 1, 0), (2, 0, 1)\}^\infty = \{(0, 0, 0)\} = L^\infty \cap \{(x, y, z) : x^2 + 2yx = 0\} = L^\infty \cap \tilde q^{-1}(0).$$

For $h_1(x, y, z) = (y + z - 1)^2$ and $h_2(x, y, z) = q(x, y, z)$, we get $S_0 = \mathbb{R}^3_+$ and $S_1 = L$. It thus follows from Theorem 4 that (32) is equivalent to the completely positive reformulation given by (27).
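The algebra behind Example 2 can be verified mechanically. The sketch below (plain Python, illustrative) checks the identity used for $q$ on $L$, the two zeros of $q$ on $L$, and that the leading form $x^2 + 2xy$ of $q$ vanishes on $L^\infty = \{(x,0,0) : x \ge 0\}$ only at the origin:

```python
# Numerical check of Example 2 (illustrative).
import random

q = lambda x, y: (x - 2) ** 2 + y * (2 * x - 3)   # the quadratic constraint of (32)

# Identity used in the text: q = (1 - y)(x - 2)^2 + y(x - 1)^2 for all (x, y).
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(q(x, y) - ((1 - y) * (x - 2) ** 2 + y * (x - 1) ** 2)) < 1e-9

# The zeros of q on L = {(x, y, z) >= 0 : y + z = 1}: (1, 1, 0) and (2, 0, 1).
assert q(1, 1) == 0 and q(2, 0) == 0

# Leading form of q on L^inf = {(x, 0, 0) : x >= 0}: x^2 + 2*x*0 = x^2 > 0 unless x = 0.
leading = lambda x, y: x ** 2 + 2 * x * y
assert leading(0, 0) == 0 and all(leading(x, 0) > 0 for x in (0.1, 1.0, 10.0))
```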
4.3 On the non-negativity condition

Now we show that the non-negativity condition $h_i(x) \ge 0$ for all $x \in S_{i-1}$ in Theorem 4 cannot be dropped. Consider

$$\begin{aligned} \min\ &4x - y - 2x^2 - 2xy - y^2 \\ \text{s.t.}\ &x^2 - xy = 0, \\ &y^2 - y = 0, \\ &x, y \ge 0. \end{aligned} \tag{33}$$

Notice that the non-negativity condition does not hold for either of the two constraints. A completely positive programming relaxation of (33) is

$$\begin{aligned} \inf\ &\langle C_2(4x - y - 2x^2 - 2xy - y^2), Y\rangle \\ \text{s.t.}\ &\langle C_2(x^2 - xy), Y\rangle = 0, \\ &\langle C_2(y^2 - y), Y\rangle = 0, \\ &\langle C_2(1), Y\rangle = 1, \\ &Y \in \mathcal{C}_{3,2}. \end{aligned} \tag{34}$$

It is easy to check that the feasible set of (33) is $\{(0, 0), (0, 1), (1, 1)\}$, where $(0, 1)$ and $(1, 1)$ are the optimal solutions, with optimal value equal to $-2$. On the other hand, it is easy to check that (34) admits a feasible $Y$ with objective value strictly smaller than $-2$. Thus the two problems are not equivalent. Actually, it can be shown that such a $Y$ is optimal for (34). On the other hand, it is easy to see that Corollary 2(iii) applies, and so (33) is equivalent to

$$\begin{aligned} \inf\ &\langle C_4(q_0(x)), Y\rangle \\ \text{s.t.}\ &\langle C_4(q_i(x)), Y\rangle = 0, \quad i = 1, 2, \\ &\langle C_4(1), Y\rangle = 1, \\ &Y \in \mathcal{C}_{3,4}, \end{aligned} \tag{35}$$

for $q_0(x, y) := 4x - y - 2x^2 - 2xy - y^2$, $q_1(x, y) := (x^2 - xy)^2$, $q_2(x, y) := (y^2 - y)^2$.

Recall problem (P) in Theorem 2, where $L = \{x \in \mathbb{R}^n_+ : a_i^{\mathsf T} x = b_i,\ i = 1,\dots,m\}$ and $B \subseteq \{1,\dots,n\}$. Note that Proposition 6 implies that both the key condition introduced in [16, eq. (1)], namely

$$x \in L \implies 0 \le x_j \le 1 \text{ for all } j \in B, \tag{36}$$

and the weak condition introduced in [9, eq. (4)], namely

$$x \in L \implies x_j \text{ is bounded, for all } j \in B, \tag{37}$$

are sufficient for condition (ii) in Theorem 4 to hold in the special case of linearly constrained mixed-binary quadratic programs. In [9, after Theorem 2.1] an implicit question is raised about
why, unlike the key condition (36), the weak condition (37) is not sufficient for the completely positive convexification procedure to produce an equivalent formulation for the linearly constrained mixed-binary quadratic program (P). From Theorem 4, the reason is clear: while the key condition ensures that the non-negativity condition (i) in Theorem 4 holds, the weak condition does not. For instance, if the upper bound on one of the binary variables $x_j$ is greater than one, then the required non-negativity of the corresponding binary constraint $x_j(1 - x_j)$ on $L$ is not assured. The next proposition formally states this fact.

Proposition 8 Under the definitions of Theorem 2, let $q(x) = x^{\mathsf T} Q x + 2 c^{\mathsf T} x$, $h_i(x) = (a_i^{\mathsf T} x - b_i)^2$, $i = 1,\dots,m$, and $h_{m+i} = x_i(1 - x_i)$, $i = 1,\dots,|B|$ (where w.l.o.g. it is assumed that the binary variables are the first $|B|$ variables of problem (P)). Then:

1. The weak condition (37) implies condition (ii) in Theorem 4.
2. The key condition (36) implies conditions (i) and (ii) in Theorem 4.

Proof Note that for $i = 1,\dots,m$, $h_i(x)$ satisfies both condition (i) (since $h_i(x)$ is a square) and condition (ii) (as a result of Proposition 3(i)) of Theorem 4. Also notice that $L = \{x \in \mathbb{R}^n_+ : h_i(x) = 0,\ i = 1,\dots,m\}$ is a polyhedron. Then, using Proposition 6 with $\hat L = \bigcap_{i=1}^{|B|} (\{x \in L : x_i = 0\} \cup \{x \in L : x_i = 1\})$, it follows that (37) implies condition (ii) of Theorem 4 for $h_{m+i}(x)$, $i = 1,\dots,|B|$, and the first statement of the proposition follows. Since (36) implies (37), to show the second statement of the proposition it remains to show that (36) implies that $h_{m+i}(x) \ge 0$ for all $x \in S_{m+i-1} \subseteq L$ for $i = 1,\dots,|B|$. The latter clearly follows from $x \in L \implies 0 \le x_i \le 1$ for all $i \in B$.

However, as the example in this section shows, the key condition can be replaced by the weak condition at the price of increasing the degree of the polynomials involved in the formulation of the problem.
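Both points of this section are easy to verify numerically. The sketch below (plain Python; the objective of (33) and its optimal value taken as transcribed above, with signs reconstructed from the surrounding derivation) checks the feasible set of (33) and illustrates why the weak condition (37) alone does not ensure condition (i):

```python
# Check of the counterexample (33) and of the non-negativity remark (illustrative).

def q0(x, y):                       # objective of (33) as transcribed above
    return 4 * x - y - 2 * x * x - 2 * x * y - y * y

# y^2 = y forces y in {0, 1}; x^2 = xy with x >= 0 then forces x in {0, y}.
feasible = [(x, y) for x in (0, 1) for y in (0, 1)
            if x * x == x * y and y * y == y]
assert sorted(feasible) == [(0, 0), (0, 1), (1, 1)]
assert q0(0, 1) == q0(1, 1) == -2 and q0(0, 0) == 0   # optimal value of (33) is -2

# Weak vs. key condition: x(1 - x) >= 0 on [0, 1], but it fails for bounds > 1.
bin_constraint = lambda t: t * (1 - t)
assert all(bin_constraint(k / 10) >= 0 for k in range(11))   # key condition (36)
assert bin_constraint(2.0) < 0          # only bounded by 2: condition (i) fails
```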
4.4 An instance with d = 3

Next, as an illustration, we consider MAX3SAT. Given a 3CNF formula $\Phi$ (i.e., with at most 3 literals per clause) with $n$ variables and $m$ clauses, we want to know the maximum number of clauses in $\Phi$ that can be satisfied simultaneously by a truth assignment. We show a POP formulation for this problem class where the assumptions of Theorem 4 hold, and thus an equivalent CPP reformulation of MAX3SAT is obtained.

Given a 3CNF formula $\Phi$, associate to it a polynomial $P_\Phi(x, y)$ with $2n$ variables and $m$ monomials. To do this, with a literal $X_i$ associate the variable $x_i$, and with its negation $\bar X_i$ associate the variable $y_i$. With a clause associate the monomial formed by the product of the variables associated with its literals. With $\Phi$ associate $P_\Phi(x, y)$, the polynomial formed by the sum of the monomials associated with the clauses of $\Phi$. Given a truth assignment $\sigma : \{X_1,\dots,X_n\} \to \{0, 1\}$, $P_\Phi(1 - \sigma, \sigma)$ is the number of clauses not satisfied by $\sigma$. Thus, the maximum number of clauses in $\Phi$ that any assignment can satisfy is equal to

$$\begin{aligned} \max\ &m - P_\Phi(x, y) \\ \text{s.t.}\ &x_i + y_i = 1, \quad i = 1,\dots,n, \\ &x_i y_i = 0, \quad i = 1,\dots,n, \\ &x, y \ge 0. \end{aligned} \tag{38}$$
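The correspondence between truth assignments and the objective of (38) can be checked by brute force. The sketch below (plain Python; the clause encoding is ours) builds $P_\Phi$ for a small formula and verifies that $P_\Phi(1 - \sigma, \sigma)$ counts the clauses that $\sigma$ leaves unsatisfied:

```python
# Sketch of the MAX3SAT polynomial P_Phi of Section 4.4 (illustrative encoding).
# A clause is a list of signed literals: +i stands for X_i, -i for its negation.
from itertools import product

def P(clauses, x, y):
    """P_Phi(x, y): sum over clauses of the product of the associated variables,
    using x_i for the literal X_i and y_i for its negation."""
    total = 0
    for clause in clauses:
        term = 1
        for lit in clause:
            i = abs(lit) - 1
            term *= x[i] if lit > 0 else y[i]
        total += term
    return total

def satisfied(clauses, sigma):
    return sum(any((lit > 0) == sigma[abs(lit) - 1] for lit in clause)
               for clause in clauses)

phi = [[1, 2, 3], [-1, -2], [-3, 1], [2, -3]]   # a small 3CNF formula, n = 3, m = 4
n, m = 3, len(phi)
for sigma in product([False, True], repeat=n):
    x = [1 - int(s) for s in sigma]             # x = 1 - sigma
    y = [int(s) for s in sigma]                 # y = sigma
    # P_Phi(1 - sigma, sigma) counts the clauses sigma does NOT satisfy:
    assert P(phi, x, y) == m - satisfied(phi, sigma)
```

A clause monomial evaluates to 1 exactly when every one of its literals is falsified, which is why summing the monomials counts unsatisfied clauses.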
As suggested by Remark 1, problem (38) can be rewritten as

$$\begin{aligned} \max\ &m - P_\Phi(x, y) \\ \text{s.t.}\ &(x_i + y_i + 1)(x_i + y_i - 1)^2 = 0, \quad i = 1,\dots,n, \\ &x_i^2 y_i = 0, \quad i = 1,\dots,n, \\ &x, y \ge 0. \end{aligned} \tag{39}$$

Notice that for every $i = 1,\dots,n$,

$$x_i^2 y_i \ge 0 \quad \text{and} \quad (x_i + y_i + 1)(x_i + y_i - 1)^2 \ge 0 \quad \text{for all } (x, y) \in \mathbb{R}^{2n}_+.$$

By Proposition 3 we have, for every $i = 1,\dots,n$ and every polyhedron $S \subseteq \mathbb{R}^{2n}_+$,

$$\{(x, y) \in S : (x_i + y_i + 1)(x_i + y_i - 1)^2 = 0\}^\infty = \{(x, y) \in S : x_i + y_i = 1\}^\infty = \{(c, d) \in S^\infty : c_i + d_i = 0\} = \{(c, d) \in S^\infty : (c_i + d_i)^3 = 0\}.$$

Let $S_i = \{(x, y) \in \mathbb{R}^{2n}_+ : x_j + y_j = 1,\ j = 1,\dots,n,\ \text{and } x_j y_j = 0,\ j = 1,\dots,i-1\}$. Therefore, we have that for every $i = 1,\dots,n$,

$$\begin{aligned} \{(x, y) \in S_i : x_i^2 y_i = 0\}^\infty &= (\{(x, y) \in S_i : x_i = 0\} \cup \{(x, y) \in S_i : y_i = 0\})^\infty \\ &= \{(x, y) \in S_i : x_i = 0\}^\infty \cup \{(x, y) \in S_i : y_i = 0\}^\infty \\ &= \{(c, d) \in S_i^\infty : c_i = 0\} \cup \{(c, d) \in S_i^\infty : d_i = 0\} \\ &= \{(c, d) \in S_i^\infty : c_i^2 d_i = 0\}. \end{aligned}$$

The second step above follows from Proposition 3(iv) and the third step follows from Proposition 3(iii). By Theorem 4, problem (39) is equivalent to the completely positive program

$$\begin{aligned} \sup\ &\langle C_3(m - P_\Phi(x, y)), Z\rangle \\ \text{s.t.}\ &\langle C_3((x_i + y_i + 1)(x_i + y_i - 1)^2), Z\rangle = 0, \quad i = 1,\dots,n, \\ &\langle C_3(x_i^2 y_i), Z\rangle = 0, \quad i = 1,\dots,n, \\ &\langle C_3(1), Z\rangle = 1, \\ &Z \in \mathcal{C}_{2n+1,3}. \end{aligned}$$

4.5 POPs with compact feasible set

By using Corollary 3 and squaring constraints, it is easy to see that any quadratic POP with compact feasible set can be reformulated as a conic (linear) program over the cone of fourth-order completely positive tensors $\mathcal{C}_{N,4}$, for a suitable $N \ge 0$. In this section we extend this result to any POP with a compact feasible region. We do so by introducing new variables to reformulate the POP using quadratic and linear constraints only. The construction relies on the following convenient notation. For a given polynomial $p(x)$, let $\operatorname{mon}(p) = \{\beta \in \mathbb{Z}^n_+ : p_\beta \neq 0\}$ be the set of exponents of monomials appearing in $p(x)$. Let $M = \operatorname{mon}(q) \cup \bigcup_{i=1,\dots,m} \operatorname{mon}(h_i) \cup \bigcup_{j=1,\dots,r} \operatorname{mon}(g_j)$ be the set of all monomials appearing in (1).
Theorem 9 Let $R$ be such that the feasible set $F = \{x \in \mathbb{R}^n : h_i(x) = 0,\ i = 1,\dots,m,\ g_j(x) \ge 0,\ j = 1,\dots,r\}$ of problem (1) satisfies $-R \le x_i \le R$ for all $x \in F$. Then (1) is equivalent to a conic program over the cone of fourth-order completely positive tensors $\mathcal{C}_{2N+1,4}$, for $N \le d|M| + n + r$.
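The proof below linearizes each monomial through a chain of pairwise products, so that every defining constraint is at most quadratic. A minimal sketch of that change of variables (plain Python; names illustrative):

```python
# Sketch of the monomial chain variables used in the proof of Theorem 9:
# for a monomial x^m with exponent vector m, set y_{m,1} = x_{i_{m,1}} and
# y_{m,i} = y_{m,i-1} * x_{i_{m,i}}, so the last chain variable equals x^m.

def chain_variables(x, exponent):
    """Return the chain y_{m,1}, ..., y_{m,|m|} for the given exponent vector."""
    indices = [i for i, e in enumerate(exponent) for _ in range(e)]  # i_{m,1},...,i_{m,|m|}
    chain = []
    for i in indices:
        chain.append(x[i] if not chain else chain[-1] * x[i])
    return chain

x = [0.5, 3.0, 2.0]
m = (2, 1, 0)                      # the monomial x_1^2 x_2
chain = chain_variables(x, m)
assert len(chain) == sum(m)        # |m| chain variables for a degree-|m| monomial
assert chain[-1] == x[0] ** 2 * x[1] == 0.75
```

Each step of the chain corresponds to one quadratic constraint $y_{m,i} - y_{m,i-1}\, x_{i_{m,i}} = 0$ in (40), which is what keeps the reformulated problem quadratic regardless of the original degree.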
Proof Substituting $x_i$ by $2R x_i - R$ in the POP, we can assume without loss of generality that $0 \le x_i \le 1$ for all $x \in F$. Let $d = \max\{\deg(q), \deg(h_1),\dots,\deg(h_m)\}$. We next reformulate (1) as a quadratic polynomial optimization problem by using (at most) $d|M|$ new variables. To do so, write each $m \in M$ as $m = e_{i_{m,1}} + \cdots + e_{i_{m,|m|}}$ for some $1 \le i_{m,1},\dots,i_{m,|m|} \le n$, and make the following change of variables: $y_{m,1} = x_{i_{m,1}}$, and $y_{m,i} := y_{m,i-1}\, x_{i_{m,i}}$ for $1 < i \le |m|$. This change of variables yields $y_{m,|m|} = x^m$ for all $m \in M$. Given a polynomial $p(x) = \sum_{\beta \in \operatorname{mon}(p)} p_\beta x^\beta \in \mathbb{R}_d[x]$, let $\hat p(y) = \sum_{m \in \operatorname{mon}(p)} p_m y_{m,|m|}$. Using this notation, problem (1) can be rewritten as

$$\begin{aligned} \inf\ &\hat q(y) \\ \text{s.t.}\ &\hat h_i(y)^4 = 0, \quad i = 1,\dots,m, \\ &(\hat g_j(y) - z_j)^4 = 0, \quad j = 1,\dots,r, \\ &(y_{m,1} - x_{i_{m,1}})^4 = 0, \quad m \in M, \\ &(y_{m,i} - y_{m,i-1}\, x_{i_{m,i}})^2 = 0, \quad m \in M,\ i = 2,\dots,|m|, \\ &y_{m,i} \ge 0, \quad m \in M,\ i = 1,\dots,|m|, \\ &z_j \ge 0, \quad j = 1,\dots,r, \end{aligned} \tag{40}$$

where the vector $z$ of slack variables is used to rewrite the inequalities as equalities, and the constraint $y \ge 0$ is redundantly added, since $x \ge 0$ for all feasible $x$.

Let $(x, y, z)$ be feasible for (40). Then $x \in F$ and we have $0 \le x_i \le 1$. Then, for every $m \in M$ and $i \le |m|$, we have $0 \le y_{m,i} \le 1$. Also, for every $j \le r$ we have $z_j = \hat g_j(y) \le \|g_j\|_1$. Hence, using Corollary 3 with $M = \max_j \|g_j\|_1$ and $d = 4$, it follows that (1) and (40) are equivalent to

$$\begin{aligned} \inf\ &\langle C_4(\hat q), Y\rangle \\ \text{s.t.}\ &\langle C_4(\hat h_i^4), Y\rangle = 0, \quad i = 1,\dots,m, \\ &\langle C_4((\hat g_j(y) - z_j)^4), Y\rangle = 0, \quad j = 1,\dots,r, \\ &\langle C_4((y_{m,1} - x_{i_{m,1}})^4), Y\rangle = 0, \quad m \in M, \\ &\langle C_4((y_{m,i} - y_{m,i-1}\, x_{i_{m,i}})^2), Y\rangle = 0, \quad m \in M,\ i = 2,\dots,|m|, \\ &\langle C_4(h_0), Y\rangle = 0, \\ &\langle C_4(1), Y\rangle = 1, \\ &Y \in \mathcal{C}_{2N+1,4}, \end{aligned}$$

where $N = d|M| + n + r$, and $h_0 = h_0((x, y, z), w)$ is defined as in Corollary 3.

5 Concluding remarks

The results presented here characterize a general class of POPs, beyond quadratic problems, that can be reformulated as a conic program over the cone of completely positive tensors.
These results extend the work in [2, 3, 9, 16] on quadratic POPs, and follow a well-established line of research on this topic [see, e.g., 2, 3, 8, 9, 12, 16, 27, 33, 51]. A main motivation behind this work is that it allows the use of the highly developed and elegant framework of conic programming. Also, the complexity of the POP is captured by the widely studied cone of completely positive matrices or, by conic duality, the cone of copositive matrices, for which a number of algorithmic optimization approaches have been proposed and studied [see, e.g., 11, 13, 15, 18, 19, 20, 21, 22, 26, 34, 46, 48]. In order to consider more general classes of POPs, herein we consider the cone of completely positive tensors, which generalizes (in terms of order) the cone of completely positive matrices. Many of the algorithmic optimization approaches proposed for the cone of completely positive or copositive matrices naturally extend to the cone of completely positive tensors. For example, this is the case for the algorithmic approaches studied in [21, 22, 46, 48]; this
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationDuality Theory of Constrained Optimization
Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive
More informationLow-Complexity Relaxations and Convex Hulls of Disjunctions on the Positive Semidefinite Cone and General Regular Cones
Low-Complexity Relaxations and Convex Hulls of Disjunctions on the Positive Semidefinite Cone and General Regular Cones Sercan Yıldız and Fatma Kılınç-Karzan Tepper School of Business, Carnegie Mellon
More informationResearch Division. Computer and Automation Institute, Hungarian Academy of Sciences. H-1518 Budapest, P.O.Box 63. Ujvári, M. WP August, 2007
Computer and Automation Institute, Hungarian Academy of Sciences Research Division H-1518 Budapest, P.O.Box 63. ON THE PROJECTION ONTO A FINITELY GENERATED CONE Ujvári, M. WP 2007-5 August, 2007 Laboratory
More informationLMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009
LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix
More informationDisjunctive Cuts for Cross-Sections of the Second-Order Cone
Disjunctive Cuts for Cross-Sections of the Second-Order Cone Sercan Yıldız Gérard Cornuéjols June 10, 2014 Abstract In this paper we provide a unified treatment of general two-term disjunctions on crosssections
More informationSeparating Doubly Nonnegative and Completely Positive Matrices
Separating Doubly Nonnegative and Completely Positive Matrices Hongbo Dong and Kurt Anstreicher March 8, 2010 Abstract The cone of Completely Positive (CP) matrices can be used to exactly formulate a variety
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationCOURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion
COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F
More information4. Algebra and Duality
4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone
More informationConvex Optimization. (EE227A: UC Berkeley) Lecture 28. Suvrit Sra. (Algebra + Optimization) 02 May, 2013
Convex Optimization (EE227A: UC Berkeley) Lecture 28 (Algebra + Optimization) 02 May, 2013 Suvrit Sra Admin Poster presentation on 10th May mandatory HW, Midterm, Quiz to be reweighted Project final report
More informationCover Page. The handle holds various files of this Leiden University dissertation
Cover Page The handle http://hdl.handle.net/1887/32076 holds various files of this Leiden University dissertation Author: Junjiang Liu Title: On p-adic decomposable form inequalities Issue Date: 2015-03-05
More informationResearch Reports on Mathematical and Computing Sciences
ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity Hayato Waki,
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More information8. Geometric problems
8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 Minimum volume ellipsoid around a set Löwner-John ellipsoid
More informationStructure of Valid Inequalities for Mixed Integer Conic Programs
Structure of Valid Inequalities for Mixed Integer Conic Programs Fatma Kılınç-Karzan Tepper School of Business Carnegie Mellon University 18 th Combinatorial Optimization Workshop Aussois, France January
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More informationGraph Coloring Inequalities from All-different Systems
Constraints manuscript No (will be inserted by the editor) Graph Coloring Inequalities from All-different Systems David Bergman J N Hooker Received: date / Accepted: date Abstract We explore the idea of
More informationA sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm
Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential
More informationNear-Potential Games: Geometry and Dynamics
Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationAssignment 1: From the Definition of Convexity to Helley Theorem
Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More information1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad
Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,
More informationLifting for conic mixed-integer programming
Math. Program., Ser. A DOI 1.17/s117-9-282-9 FULL LENGTH PAPER Lifting for conic mixed-integer programming Alper Atamtürk Vishnu Narayanan Received: 13 March 28 / Accepted: 28 January 29 The Author(s)
More informationc 2000 Society for Industrial and Applied Mathematics
SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.
More informationSelected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.
. Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,
More informationOptimization and Optimal Control in Banach Spaces
Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationA ten page introduction to conic optimization
CHAPTER 1 A ten page introduction to conic optimization This background chapter gives an introduction to conic optimization. We do not give proofs, but focus on important (for this thesis) tools and concepts.
More informationLP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra
LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality
More informationOn the relative strength of families of intersection cuts arising from pairs of tableau constraints in mixed integer programs
On the relative strength of families of intersection cuts arising from pairs of tableau constraints in mixed integer programs Yogesh Awate Tepper School of Business, Carnegie Mellon University, Pittsburgh,
More informationCSC Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method Notes taken by Stefan Mathe April 28, 2007 Summary: Throughout the course, we have seen the importance
More informationA Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials
A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials G. Y. Li Communicated by Harold P. Benson Abstract The minimax theorem for a convex-concave bifunction is a fundamental theorem
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationGeometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as
Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.
More information8. Geometric problems
8. Geometric problems Convex Optimization Boyd & Vandenberghe extremal volume ellipsoids centering classification placement and facility location 8 1 Minimum volume ellipsoid around a set Löwner-John ellipsoid
More informationWhat can be expressed via Conic Quadratic and Semidefinite Programming?
What can be expressed via Conic Quadratic and Semidefinite Programming? A. Nemirovski Faculty of Industrial Engineering and Management Technion Israel Institute of Technology Abstract Tremendous recent
More informationChapter 1. Preliminaries
Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between
More informationAn improved characterisation of the interior of the completely positive cone
Electronic Journal of Linear Algebra Volume 2 Volume 2 (2) Article 5 2 An improved characterisation of the interior of the completely positive cone Peter J.C. Dickinson p.j.c.dickinson@rug.nl Follow this
More information1 Strict local optimality in unconstrained optimization
ORF 53 Lecture 14 Spring 016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, April 14, 016 When in doubt on the accuracy of these notes, please cross check with the instructor s
More informationRelationships between upper exhausters and the basic subdifferential in variational analysis
J. Math. Anal. Appl. 334 (2007) 261 272 www.elsevier.com/locate/jmaa Relationships between upper exhausters and the basic subdifferential in variational analysis Vera Roshchina City University of Hong
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationCutting Planes for First Level RLT Relaxations of Mixed 0-1 Programs
Cutting Planes for First Level RLT Relaxations of Mixed 0-1 Programs 1 Cambridge, July 2013 1 Joint work with Franklin Djeumou Fomeni and Adam N. Letchford Outline 1. Introduction 2. Literature Review
More informationThe Trust Region Subproblem with Non-Intersecting Linear Constraints
The Trust Region Subproblem with Non-Intersecting Linear Constraints Samuel Burer Boshi Yang February 21, 2013 Abstract This paper studies an extended trust region subproblem (etrs in which the trust region
More informationCopositive Plus Matrices
Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their
More informationBarrier Method. Javier Peña Convex Optimization /36-725
Barrier Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: Newton s method For root-finding F (x) = 0 x + = x F (x) 1 F (x) For optimization x f(x) x + = x 2 f(x) 1 f(x) Assume f strongly
More informationContents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality
Contents Introduction v Chapter 1. Real Vector Spaces 1 1.1. Linear and Affine Spaces 1 1.2. Maps and Matrices 4 1.3. Inner Products and Norms 7 1.4. Continuous and Differentiable Functions 11 Chapter
More information