
TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY
Volume 354, Number 2
Article electronically published on October 4, 2001

POLYNOMIALS NONNEGATIVE ON A GRID AND DISCRETE OPTIMIZATION

JEAN B. LASSERRE

Abstract. We characterize the real-valued polynomials on R^n that are nonnegative (not necessarily strictly positive) on a grid K of points of R^n, in terms of a weighted sum of squares whose degree is bounded and known in advance. We also show that the minimization of an arbitrary polynomial on K (a discrete optimization problem) reduces to a convex continuous optimization problem of fixed size. The case of concave polynomials is also investigated. The proof is based on a recent result of Curto and Fialkow on the K-moment problem.

1. Introduction

This paper is concerned with the characterization of real-valued polynomials on R^n that are nonnegative (and not necessarily strictly positive) on a pre-defined grid of points of R^n. That is, we consider the polynomials p(x): R^n → R such that p(x) ≥ 0 on K, where K is the subset of R^n defined by

(1.1)    K := {x ∈ R^n | g_k(x) = 0, k = 1, ..., n},

where the polynomials g_k(x): R^n → R are given by

x ↦ g_k(x) := ∏_{i=1}^{2r_k} (x_k − a_{ki}),    k = 1, ..., n.

K defines a grid in R^n with s := ∏_{k=1}^n 2r_k points. We obtain a result in the spirit of the linear representation of polynomials positive on a compact semi-algebraic set, obtained by Putinar [10], Jacobi and Prestel [4, 5] in a general framework. Namely, we show that every polynomial p(x) of degree 2r_0 or 2r_0 − 1, nonnegative on K, can be written as a sum of squares of polynomials, weighted by the polynomials g_k(x) defining the set K, and whose degree is bounded by r + v, with r := ∑_{k=1}^n (2r_k − 1) and v := max{r_0 − r, max_{k=1,...,n} r_k}, independently of the grid points. The important point is that in this case, the degree of the polynomials in that representation is bounded and known in advance.
To prove this result, we use a detour and first consider the associated discrete optimization problem

P:    p^* := min_{x ∈ K} p(x),

Received by the editors November 11.
Mathematics Subject Classification. Primary 13P99, 30C10; Secondary 90C10, 90C22.
Key words and phrases. Algebraic geometry; positive polynomials; K-moment problem; semidefinite programming.
© 2001 American Mathematical Society

which is known to be NP-hard in general. However, we show that P is equivalent to a continuous convex optimization problem whose size depends on the number of points, but not on the points in the grid themselves. The idea is to define a sequence of refined convex semidefinite (SDP) relaxations {Q_i} of P. For the general optimization problem where one minimizes a polynomial p(x) on a compact semi-algebraic set K, we have shown in Lasserre [7] that the SDP relaxations Q_i converge, that is, inf Q_i ↑ p^* as i → ∞, provided the set K has some property that permits a linear representation of polynomials positive on K (a particular case of the more general Schmüdgen representation [11]), as proved in Putinar [10], Jacobi and Prestel [4, 5]. This approach is also valid for 0-1 optimization problems (see Lasserre [8]). Other sequences of SDP relaxations have been provided for 0-1 optimization problems, notably the lift-and-project procedure of Lovász and Schrijver [9] (see also extensions by Kojima and Tunçel [6]) that eventually yields the convex hull of the feasible set of P whenever a weak separation oracle is available for the homogeneous cone associated with the constraint set. The approach developed in Lasserre [7, 8] is different and directly relates results of algebraic geometry with optimization. In particular, the relaxations {Q_i} and their duals {Q_i^*} perfectly match the duality between the K-moment problem and the theory of polynomials positive on a compact semi-algebraic set K. However, in the present context, we may extend the properties of the above relaxations even further. We use a recent result of Curto and Fialkow [2] on the K-moment problem to show that the optimal value p^* is obtained at the relaxation Q_{r+v} at the latest.
Next, by using a standard strong duality result in convex optimization, we show that the dual optimization problem Q_{r+v}^* is solvable and yields the coefficients of the polynomials in the representation of p(x) − p^* as a sum of squares, weighted by the polynomials g_k(x) defining K. In a sense, the order of the relaxation Q_i at which the optimal value p^* is obtained measures the hardness of problem P, as it expresses the effort (in degree) needed to represent p(x) − p^* as a weighted sum of squares. In the present case, this effort is finite and bounded by r + v. It also follows that any discrete polynomial optimization problem on K, with additional arbitrary polynomial constraints h_j(x) ≥ 0, also reduces to a convex continuous optimization problem of fixed size. Finally, we also obtain a representation of concave polynomials.

The paper is organized as follows. We first introduce the notation as well as some definitions. In Section 3, we present the results on the discrete optimization problem P. In Section 4, we then present the result on the representation of p(x) − p^* as a weighted sum of squares.

2. Notation and definitions

We adopt the following notation. Given any two real-valued symmetric matrices A, B, let ⟨A, B⟩ denote the usual scalar product trace(AB), and let A ⪰ B (resp. A ≻ B) stand for A − B positive semidefinite (resp. A − B positive definite). Let

(2.1)    1, x_1, x_2, ..., x_n, x_1^2, x_1 x_2, ..., x_1 x_n, x_2^2, x_2 x_3, ..., x_n^2, ..., x_1^r, ..., x_n^r

be a basis for the space A_r of real-valued polynomials of degree at most r, and let s(r) be its dimension. Therefore, a polynomial p(x): R^n → R of degree r is written

p(x) = ∑_α p_α x^α,    x ∈ R^n,

where x^α = x_1^{α_1} x_2^{α_2} ··· x_n^{α_n}, with ∑_i α_i = k, is a monomial of degree k with coefficient p_α. Denote by p = {p_α} ∈ R^{s(r)} the vector of coefficients of the polynomial p(x) in the basis (2.1). Hence, the respective vectors of coefficients of the polynomials g_k(x), k = 1, ..., n, in (1.1), are denoted {(g_k)_α} = g_k ∈ R^{s(2r_k)}, k = 1, ..., n. We next define the important notions of moment matrix and localizing matrix, already introduced in Curto and Fialkow [2], Berg [1].

2.1. Moment matrix. Given an s(2r)-sequence (1, y_1, ...), let M_r(y) be the moment matrix of dimension s(r) (denoted M(r) in Curto and Fialkow [2]), with rows and columns labelled by (2.1). For illustration purposes and clarity of exposition, consider the 2-dimensional case. The moment matrix M_r(y) is the block matrix {M_{i,j}(y)}_{0 ≤ i,j ≤ r} defined by

(2.2)    M_{i,j}(y) =
[ y_{i+j,0}    y_{i+j−1,1}  ...  y_{i,j}
  y_{i+j−1,1}  y_{i+j−2,2}  ...  y_{i−1,j+1}
  ...          ...          ...  ...
  y_{j,i}      y_{j−1,i+1}  ...  y_{0,i+j} ].

To fix ideas, with n = 2 and r = 2, one obtains

M_2(y) =
[ 1       y_{10}  y_{01}  y_{20}  y_{11}  y_{02}
  y_{10}  y_{20}  y_{11}  y_{30}  y_{21}  y_{12}
  y_{01}  y_{11}  y_{02}  y_{21}  y_{12}  y_{03}
  y_{20}  y_{30}  y_{21}  y_{40}  y_{31}  y_{22}
  y_{11}  y_{21}  y_{12}  y_{31}  y_{22}  y_{13}
  y_{02}  y_{12}  y_{03}  y_{22}  y_{13}  y_{04} ].

Another, more intuitive way of constructing M_r(y) is as follows. If M_r(y)(1, i) = y_α and M_r(y)(j, 1) = y_β, then

(2.3)    M_r(y)(i, j) = y_{α+β},    with α + β = (α_1 + β_1, ..., α_n + β_n).

M_r(y) defines a bilinear form ⟨·, ·⟩_y on A_r by

⟨q(x), v(x)⟩_y := ⟨q, M_r(y) v⟩,    q(x), v(x) ∈ A_r,

and if y is a sequence of moments of some measure µ_y, then

(2.4)    ⟨q, M_r(y) q⟩ = ∫ q(x)^2 µ_y(dx) ≥ 0,

so that M_r(y) ⪰ 0.

2.2. Localizing matrix. If the entry (i, j) of the matrix M_r(y) is y_β, let β(i, j) denote the subscript β of y_β. Next, given a polynomial θ(x): R^n → R with coefficient vector θ, we define the matrix M_r(θ y) by

(2.5)    M_r(θ y)(i, j) = ∑_α θ_α y_{β(i,j)+α}.
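The index rule (2.3) makes the moment matrix mechanical to build. The following minimal Python sketch (the helper names are ours, not the paper's) constructs M_2(y) for n = 2 from the moments of a toy atomic measure, and checks the Gram identity (2.4) for one coefficient vector q:

```python
from itertools import combinations_with_replacement

def monomial_basis(n, r):
    """Exponent tuples of total degree <= r, ordered by degree as in (2.1)."""
    basis = [(0,) * n]
    for d in range(1, r + 1):
        for combo in combinations_with_replacement(range(n), d):
            alpha = [0] * n
            for i in combo:
                alpha[i] += 1
            basis.append(tuple(alpha))
    return basis

def moment_matrix(y, n, r):
    """M_r(y)(i, j) = y_{alpha+beta}: the index rule (2.3)."""
    basis = monomial_basis(n, r)
    return [[y[tuple(a + b for a, b in zip(al, be))] for be in basis]
            for al in basis]

# Moments of a toy atomic measure: uniform on three points of R^2.
atoms = [(0.0, 1.0), (1.0, 0.0), (2.0, 2.0)]
y = {a: sum(x[0] ** a[0] * x[1] ** a[1] for x in atoms) / len(atoms)
     for a in monomial_basis(2, 4)}        # moments up to order 2r = 4

M = moment_matrix(y, 2, 2)                 # the 6 x 6 matrix M_2(y) of (2.2)

# Check (2.4): <q, M_2(y) q> equals the integral of q(x)^2 d(mu), hence >= 0.
basis = monomial_basis(2, 2)
q = [1.0, -0.5, 0.25, 2.0, -1.0, 0.5]      # arbitrary coefficient vector
quad = sum(q[i] * M[i][j] * q[j] for i in range(6) for j in range(6))
integral = sum(sum(c * x[0] ** a[0] * x[1] ** a[1]
                   for c, a in zip(q, basis)) ** 2 for x in atoms) / len(atoms)
```

For a sequence y that is not a moment sequence, M_r(y) need not be positive semidefinite; (2.4) is precisely the necessary condition that the constraint M_i(y) ⪰ 0 of the relaxations below enforces.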

For instance, with

M_1(y) =
[ 1       y_{10}  y_{01}
  y_{10}  y_{20}  y_{11}
  y_{01}  y_{11}  y_{02} ]
and    x ↦ θ(x) = a − x_1^2 − x_2^2,

we obtain

M_1(θ y) =
[ a − y_{20} − y_{02}          a y_{10} − y_{30} − y_{12}   a y_{01} − y_{21} − y_{03}
  a y_{10} − y_{30} − y_{12}   a y_{20} − y_{40} − y_{22}   a y_{11} − y_{31} − y_{13}
  a y_{01} − y_{21} − y_{03}   a y_{11} − y_{31} − y_{13}   a y_{02} − y_{22} − y_{04} ].

In a manner similar to (2.4), if y is a sequence of moments of some measure µ_y, then

(2.6)    ⟨q, M_r(θ y) q⟩ = ∫ θ(x) q(x)^2 µ_y(dx),

for every polynomial q(x): R^n → R with coefficient vector q ∈ R^{s(r)}. Therefore, M_r(θ y) ⪰ 0 whenever µ_y has its support contained in the set {θ(x) ≥ 0}. In Curto and Fialkow [2], M_r(θ y) is called a localizing matrix (denoted by M_θ(r + v) if deg θ = 2v or 2v − 1). The K-moment problem identifies those sequences y that are moment sequences of a measure with support contained in the semi-algebraic set K. In duality with the theory of moments is the theory of representation of positive polynomials, which dates back to Hilbert's 17th problem. This fact will be reflected in the semidefinite relaxations proposed later. For details and recent results, the interested reader is referred to Curto and Fialkow [2], Jacobi [3], Jacobi and Prestel [4, 5], Simon [12], Schmüdgen [11] and the many references therein.

3. The associated discrete optimization problem P

Consider the discrete optimization problem P:

(3.1)    p^* := min_{x ∈ R^n} {p(x) | g_k(x) = 0, k = 1, ..., n},

where the polynomials g_k(x): R^n → R are defined by

(3.2)    g_k(x) := ∏_{i=1}^{2r_k} (x_k − a_{ki}),    k = 1, ..., n,

with r_k ∈ N, k = 1, ..., n, and where the {a_{ki}}, i = 1, ..., 2r_k, are given real numbers such that for every k = 1, ..., n, a_{ki} ≠ a_{kj} whenever i ≠ j. For homogeneity in notation, we have chosen to assume that all the polynomials g_k(x) have an even degree; the results presented below are also valid when the polynomials have arbitrary degrees. Let

(3.3)    K := {x ∈ R^n | g_k(x) = 0, k = 1, ..., n}

be the feasible set associated with P.
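Since K is finite with s = ∏_k 2r_k points, P can always be solved by plain enumeration, at a cost exponential in n; this is the baseline that the fixed-size convex relaxations studied below are meant to avoid. A minimal sketch (the names are ours):

```python
from itertools import product

def grid_minimize(p, roots):
    """Brute-force problem P of (3.1): K is the grid whose k-th coordinate
    ranges over the 2r_k roots of g_k, so |K| = s = prod_k 2r_k."""
    return min((p(x), x) for x in product(*roots))

# Example: n = 2, g_k(x) = x_k^2 - x_k, whose roots are {0, 1}.
roots = [(0.0, 1.0), (0.0, 1.0)]
p = lambda x: (x[0] - x[1]) ** 2 - x[0] * x[1]
p_star, x_star = grid_minimize(p, roots)   # minimum -1, attained at (1, 1)
```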
As we minimize p(x) on K, we could assume that the degree 2r_0 (or 2r_0 − 1) of p(x) is not larger than r := ∑_{k=1}^n (2r_k − 1), since otherwise, using the equations g_k(x) = 0, we may replace p(x) with another polynomial p̃(x) of degree not larger than r and identical to p(x) on K. However, as we will consider an arbitrary r_0 for the representation of p(x) in Section 4, we do not make this assumption.

When needed below, for i ≥ max_k r_k, the vectors g_k ∈ R^{s(2r_k)} are extended to vectors of R^{s(2i)} by completing with zeros. As we minimize p(x), we may and will assume that its constant term is zero, that is, p(0) = 0.

3.1. SDP relaxations of P. For i ≥ max_{k=0,...,n} r_k, consider the following family {Q_i} of convex positive semidefinite (psd) programs (or semidefinite programming (SDP) relaxations of P):

(3.4)    Q_i:    min_y ∑_α p_α y_α
                 s.t.  M_i(y) ⪰ 0,
                       M_{i−r_k}(g_k y) = 0,    k = 1, ..., n,

with respective dual problems

(3.5)    Q_i^*:  max_{X ⪰ 0, Z_k}  −X(1, 1) − ∑_{k=1}^n g_k(0) Z_k(1, 1)
                 s.t.  ⟨X, B_α⟩ + ∑_{k=1}^n ⟨Z_k, C_α^k⟩ = p_α,    α ≠ 0,

where X, Z_k are real-valued symmetric matrices, the dual variables associated with the constraints M_i(y) ⪰ 0 and M_{i−r_k}(g_k y) = 0 respectively, and where we have written

M_i(y) = ∑_α B_α y_α;    M_{i−r_k}(g_k y) = ∑_α C_α^k y_α,    k = 1, ..., n,

for appropriate real-valued symmetric matrices B_α, C_α^k, k = 1, ..., n. In the standard terminology, the constraint M_i(y) ⪰ 0 is called a linear matrix inequality (LMI), and Q_i and its dual Q_i^* are so-called positive semidefinite (psd) programs, the semidefinite (SDP) relaxations of P. Both are convex optimization problems that can be solved efficiently via interior point methods, and nowadays several software packages (like e.g. the LMI toolbox of MATLAB) are available. The reader interested in more details on semidefinite programming is referred to Vandenberghe and Boyd [14] and the many references therein. Note that the localizing matrices M_{i−r_k}(g_k y) are easily obtained from the data {g_k} of the problem P by (2.5).

Interpretation of Q_i. The linear matrix inequality (LMI) constraints of Q_i state (only) necessary conditions for y to be the vector of moments, up to order 2i, of some probability measure µ_y with support contained in K. This clearly implies that inf Q_i ≤ p^* for all i, since the vector of moments of the Dirac measure at a feasible point of P is feasible for Q_i.

Interpretation of Q_i^*.
Let X ⪰ 0 and {Z_k}, k = 1, ..., n, be a feasible solution of Q_i^* with value ρ. From the spectral decompositions of the symmetric matrices X and Z_k, write

X = ∑_j u_j u_j'    and    Z_k = ∑_j v_{kj} v_{kj}' − ∑_l w_{kl} w_{kl}',    k = 1, ..., n,

where the vectors {u_j} correspond to the positive eigenvalues of X, and the vectors {v_{kj}} (resp. {w_{kl}}) correspond to the positive (resp. negative) eigenvalues of Z_k. Consider the polynomials {u_j(x)} and {v_{kj}(x), w_{kl}(x)} with respective coefficient vectors {u_j} and {v_{kj}, w_{kl}} in the basis (2.1).

Let x ∈ R^n be fixed, arbitrary. Then, from the feasibility of (X, Z_k) in Q_i^*,

(3.6)    ⟨X, B_α⟩ x^α + ∑_{k=1}^n ⟨Z_k, C_α^k⟩ x^α = p_α x^α,    α ≠ 0,

and

(3.7)    X(1, 1) + ∑_{k=1}^n g_k(0) Z_k(1, 1) = −ρ.

Using the notation y_x = (x_1, ..., x_n, x_1^2, x_1 x_2, ..., x_1^{2i}, ..., x_n^{2i}), and summing up over all α, yields

(3.8)    ⟨X, M_i(y_x)⟩ + ∑_{k=1}^n ⟨Z_k, M_{i−r_k}(g_k y_x)⟩ = p(x) − ρ,

or, equivalently,

(3.9)    ∑_j ⟨u_j, M_i(y_x) u_j⟩ + ∑_{k=1}^n [ ∑_j ⟨v_{kj}, M_{i−r_k}(g_k y_x) v_{kj}⟩ − ∑_l ⟨w_{kl}, M_{i−r_k}(g_k y_x) w_{kl}⟩ ] = p(x) − ρ.

Therefore, from the definition of the matrices M_i(y), M_{i−r_k}(g_k y) (with y = y_x) and using (2.4)–(2.6),

(3.10)    p(x) − ρ = ∑_j u_j(x)^2 + ∑_{k=1}^n g_k(x) [ ∑_j v_{kj}(x)^2 − ∑_l w_{kl}(x)^2 ].

As ρ ≤ p^*, p(x) − ρ is nonnegative on K (strictly positive if ρ < p^*), and one recognizes in (3.10) a linear representation, as a weighted sum of squares, of the polynomial p(x) − ρ, nonnegative on K, as in the theory of representation of polynomials strictly positive on a compact semi-algebraic set K (see e.g. Schmüdgen [11], Putinar [10], Jacobi [3], Jacobi and Prestel [4, 5]). Indeed, when the set K has a certain property, the linear (in the sense that no product term like g_k(x) g_l(x) is needed) representation (3.10) holds. This property states that there is some polynomial U(x): R^n → R such that U(x) has the representation (3.10) and {U(x) ≥ 0} is compact (see Putinar [10], Jacobi and Prestel [4, 5]). It will be shown below that, indeed, the polynomial p(x) − p^*, which is only nonnegative (and not strictly positive) on K, has the representation (3.10). Hence, both psd programs Q_i and Q_i^* perfectly match the duality between the K-moment problem and the theory of polynomials that are positive on K.

3.2. Simplified SDP relaxations. The SDP relaxation Q_i has a much simpler form that can be derived as follows. For convenience, write

(3.11)    g_k(x) := x_k^{2r_k} + ∑_{j=0}^{2r_k−1} γ_{kj} x_k^j,    k = 1, ..., n.

Observe that, in view of the construction of the localizing matrices in (2.5) and the form of the polynomials g_k(x), k = 1, ..., n, the constraints M_{i−r_k}(g_k y) = 0

generate linear relationships between moments that can be translated into the moment matrix M_i(y) as follows. Given a monomial x^α, use (3.11) to replace x_k^{2r_k} by −∑_{j=0}^{2r_k−1} γ_{kj} x_k^j, k = 1, ..., n, every time α_k ≥ 2r_k, until x^α is expressed as a linear combination ∑_l h_l x^{β_l} of monomials x^{β_l} with (β_l)_k < 2r_k for all k = 1, ..., n and all l, that is,

x^α = x_1^{α_1} x_2^{α_2} ··· x_n^{α_n} = ∑_l h_l x^{β_l}.

Now, if M_i(y)(s, t) = y_α (= y_{α_1, α_2, ..., α_n}) is the entry (s, t) of the matrix M_i(y), then formally replace y_α in M_i(y)(s, t) by ∑_l h_l y_{β_l}. Hence, the LMI constraints M_i(y) ⪰ 0 and M_{i−r_k}(g_k y) = 0, k = 1, ..., n, reduce to the single LMI constraint M̃_i(y) ⪰ 0, obtained from M_i(y) ⪰ 0 by making the above substitution in all the entries of M_i(y). Observe that, no matter how large i is, M̃_i(y) contains only moments y_β with β_k < 2r_k for all k = 1, ..., n. Therefore, r := ∑_{k=1}^n (2r_k − 1) is the maximum degree of distinct monomials, in the sense that whenever i > r a polynomial of degree i can be expressed as a linear combination of monomials of degree at most r, after the simplification induced by the constraints g_k(x) = 0, k = 1, ..., n. Thus, with s := ∏_{k=1}^n 2r_k, there are no more than s − 1 variables y_β in all relaxations Q_i (that is, s is the number of monomials in (2.1) with β_k < 2r_k for all k), and the relaxation Q_i has the simplified form

(3.12)    min_y ∑_α p̃_α y_α    s.t.    M̃_i(y) ⪰ 0,

where we only have variables y_α with α_k < 2r_k for all k = 1, ..., n, and {p̃_α} is the vector of coefficients of a polynomial p̃(x) identical to p(x) on K. For instance, in R^2 (n = 2), let g_k(x) = x_k^2 − x_k (r_k = 1), so that the set K defines the 4 grid points (0, 0), (0, 1), (1, 0), (1, 1) in R^2. Then,

(3.13)    M_2(y) =
[ 1       y_{10}  y_{01}  y_{20}  y_{11}  y_{02}
  y_{10}  y_{20}  y_{11}  y_{30}  y_{21}  y_{12}
  y_{01}  y_{11}  y_{02}  y_{21}  y_{12}  y_{03}
  y_{20}  y_{30}  y_{21}  y_{40}  y_{31}  y_{22}
  y_{11}  y_{21}  y_{12}  y_{31}  y_{22}  y_{13}
  y_{02}  y_{12}  y_{03}  y_{22}  y_{13}  y_{04} ]

is replaced with

(3.14)    M̃_2(y) =
[ 1       y_{10}  y_{01}  y_{10}  y_{11}  y_{01}
  y_{10}  y_{10}  y_{11}  y_{10}  y_{11}  y_{11}
  y_{01}  y_{11}  y_{01}  y_{11}  y_{11}  y_{01}
  y_{10}  y_{10}  y_{11}  y_{10}  y_{11}  y_{11}
  y_{11}  y_{11}  y_{11}  y_{11}  y_{11}  y_{11}
  y_{01}  y_{11}  y_{01}  y_{11}  y_{11}  y_{01} ],

and only the variables y_{10}, y_{01}, y_{11} appear in all the relaxations Q_i.

3.3. Main result. Before stating the main result of this section, we begin with the following crucial result.

Proposition 3.1. Let r := ∑_{k=1}^n (2r_k − 1) and s := ∏_{k=1}^n 2r_k.
(a) All the simplified relaxations Q̃_i involve at most s − 1 variables y_α.
(b) For all the relaxations Q̃_i with i > r, one has

(3.15)    rank M̃_i(y) = rank M̃_r(y).

Proof. (a) is just a consequence of the comment preceding Proposition 3.1. To get (b), observe that with i > r, one may write

M̃_i(y) = [ M̃_r(y)  B(y)
            B'(y)    C(y) ]

for appropriate matrices B, C, and we next prove that each column of the block column [B(y); C(y)] is a linear combination of some columns of the block column [M̃_r(y); B'(y)].

Indeed, remember from (2.3) how an element M̃_i(y)(k, p) is obtained. Let y_δ = M̃_i(y)(k, 1) and y_α = M̃_i(y)(1, p). Then

(3.16)    M̃_i(y)(k, p) = y_η    with η_i = δ_i + α_i, i = 1, ..., n,

that is, M̃_i(y)(k, p) is the moment which corresponds to the monomial x^δ x^α (denoted M̃_i(k, p) ↔ x^δ x^α). Now, consider a column [B(y)(·, m); C(y)(·, m)] of [B; C], that is, some column M̃_i(y)(·, p) of M̃_i(y), with first element y_α = B(y)(1, m) = M̃_i(y)(1, p). Therefore, the element (k, m) of [B(y); C(y)] (or, equivalently, the element M̃_i(y)(k, p)) is the variable y_η in (3.16). Note that α corresponds to a monomial in the basis (2.1) of degree larger than r. We have seen that, in view of the constraints M_{i−r_k}(g_k y) = 0, this element satisfies y_α = ∑_j γ_j y_{β(j)}, that is, y_α is a linear combination of variables y_{β(j)} with

(3.17)    β(j) = (β(j)_1, ..., β(j)_n)    and    β(j)_k < 2r_k,    k = 1, ..., n.

Now, as y_{β(j)} corresponds to a monomial in the basis (2.1) of degree at most r, to each y_{β(j)} corresponds a column M̃_r(y)(·, p_j) (more precisely, M̃_r(y)(1, p_j) =

y_{β(j)}), and thus B(y)(1, m) is a linear combination of the elements M̃_i(y)(1, p_j) with coefficients γ_j as in (3.17). Next, by construction, the element M̃_i(y)(k, p) corresponds to the monomial x^δ x^α, and we have the correspondences

y_η = M̃_i(y)(k, p) ↔ x^δ x^α = x^δ ∑_j γ_j x^{β(j)} = ∑_j γ_j (x^δ x^{β(j)}) ↔ ∑_j γ_j [M̃_r(y); B'(y)](k, p_j),

and thus

M̃_i(y)(k, p) = ∑_j γ_j [M̃_r(y); B'(y)](k, p_j),

which states that the column [B(y); C(y)](·, m) is a linear combination of the columns [M̃_r(y); B'(y)](·, p_j) whenever i > r. From this it immediately follows that rank M̃_i(y) = rank M̃_r(y).

We are now in position to state the following result.

Theorem 3.2. Let P be as in (3.1) with K as in (3.3), and let r := ∑_{k=1}^n (2r_k − 1), v := max{r_0 − r, max_{k=1,...,n} r_k}. Then, whenever i ≥ r + v,

(3.18)    p^* = min Q_i,

and every optimal solution y^* of Q_i is the vector of moments of a probability measure µ_{y^*} supported on s^* = rank M̃_r(y^*) optimal solutions of P.

Proof. First let i := r + v. Let y be any feasible solution of Q_i. From Proposition 3.1, it follows that rank M̃_{r+1}(y) = rank M̃_r(y). Therefore, from a deep result of Curto and Fialkow, namely [2, Th. 1.1, p. 4], M̃_r(y) has positive flat extensions M̃_{r+j}(y) for all j = 1, 2, ..., that is, M̃_{r+j}(y) ⪰ 0 and rank M̃_{r+j}(y) = rank M̃_r(y) for all j = 1, 2, ... This implies that y is the vector of moments of some rank M̃_r(y)-atomic probability measure µ_y. Moreover, from M_{i−r_k}(g_k y) = 0 (equivalently, M_{i−r_k}(g_k y) ⪰ 0 and M_{i−r_k}(−g_k y) ⪰ 0), it also follows that µ_y has its support contained in ∩_{k=1}^n {g_k(x) = 0}, that is, the set K. In the terminology of Curto and Fialkow [2], M_{r+v−r_k}(g_k y) is well-defined relative to the unique flat extension M_{r+v}(y) of M_r(y). Theorem 1.1 in Curto and Fialkow [2, p. 4] is stated for the complex K-moment problem, but is valid for the K-moment problem in several real or complex variables (see [2, p. 2]).
But then, for every admissible solution y of Q_i, we have

∑_α p_α y_α = ∫_K p(x) µ_y(dx) ≥ p^* µ_y(K) = p^*,

so that p^* ≤ inf Q_i, which, combined with inf Q_i ≤ p^* for all i, yields inf Q_i = p^* when i = r + v. For i > r + v, it suffices to notice that p^* ≥ inf Q_{i+1} ≥ inf Q_i for all i, to get inf Q_i = p^* whenever i ≥ r + v. Moreover, to a global minimizer x^* ∈ K of P corresponds the admissible solution

y^* = (x_1^*, ..., x_n^*, ..., (x_1^*)^{2i}, ..., (x_n^*)^{2i})

of Q_i, with value p^*, which implies inf Q_i = min Q_i = p^* whenever i ≥ r + v.

In Theorem 3.2 we have i ≥ r + v, and v depends on r_0. Indeed, to be consistent, Q_i must be such that 2i ≥ 2r_0, to contain all the moments up to order 2r_0. However, we can make the following remark.

Remark 3.3. We have seen that Q_i has the simplified equivalent form (3.12) with the single LMI constraint M̃_i(y) ⪰ 0, after the simplification induced by the constraints g_k(x) = 0. But since M̃_r(y) ⪰ 0 implies M̃_{r+j}(y) ⪰ 0 and rank M̃_{r+j}(y) = rank M̃_r(y) for all j = 1, ..., it follows that p^* is obtained by solving the SDP relaxation

(3.19)    min_y { ∑_α p̃_α y_α | M̃_r(y) ⪰ 0 },

in which we only have variables y_α such that α_k < 2r_k for all k = 1, ..., n. Therefore, no matter its degree, the global minimization of a polynomial p(x) on K reduces to a convex continuous optimization problem of fixed dimension, namely the convex psd program (3.19). This is because, even if p(x) has degree larger than r, the {p̃_β} are the coefficients of a polynomial p̃(x) of degree at most r, identical to p(x) on K. However, this simplification is formally obtained at the relaxation Q_{r+v} that we will need for the representation of p(x) in Section 4.

Despite its theoretical interest, Theorem 3.2 is of little value for solving discrete optimization problems, for the dimension of the psd program Q_{r+v} (or, equivalently, (3.19)) is exponential in the problem size. Fortunately, in many cases, the optimal value p^* is obtained at relaxations Q_i with i ≪ r + v. For instance, this will be the case whenever p(x) − p^* has the representation (3.10) with polynomials {u_j(x)} of maximum degree a_0 < r + v, and polynomials {v_{kj}(x), w_{kl}(x)} of maximum degree a_k < r + v − r_k, k = 1, ..., n. Indeed, let {u_j, v_{kj}, w_{kl}} be their vectors of coefficients in R^{s(i)}, R^{s(i−r_k)} respectively, with i := max[a_0, max_k (r_k + a_k)].
Then, the real-valued symmetric matrices X ∈ R^{s(a_0) × s(a_0)} and Z_k ∈ R^{s(a_k) × s(a_k)}, k = 1, ..., n, defined by

X := ∑_j u_j u_j';    Z_k := ∑_j v_{kj} v_{kj}' − ∑_l w_{kl} w_{kl}',

form an admissible solution of Q_i^* with value p^* (just redo the derivations (3.6)–(3.9) backward). Since from weak duality we must have sup Q_i^* ≤ inf Q_i ≤ p^*, it follows that inf Q_i = p^*, and thus min Q_i = p^* (since the vector y of moments of the Dirac measure δ_{x^*} at a global minimizer x^* of P is obviously an admissible solution of Q_i with value p^*). For instance, consider the so-called MAX-CUT discrete optimization problem

min_{x ∈ {0,1}^n} x' M x,

where M ∈ R^{n×n} is a real-valued symmetric matrix with a null diagonal, which is known to be NP-hard. We have solved a sample of MAX-CUT problems in R^10 with the nondiagonal entries of M randomly generated, uniformly between 0 and 1. In all cases, the SDP relaxation Q_2 provided the optimal value p^*, with no need to solve Q_11, as predicted by Theorem 3.2 (since in this case, with r_k = 1, r = ∑_k (2r_k − 1) = 10). The computational savings are significant, since the simplified

form Q̃_2 of Q_2 is a psd program with 385 variables y_α and a single LMI constraint of dimension 56 × 56, whereas Q_11 would involve 2^10 − 1 = 1023 variables and a matrix M̃_10(y) of size 184756 × 184756!

Remark 3.4. If p(x): R^n → R is concave, then its minimum p^* on the grid K is attained at some corner of K. Therefore, only the corner points are useful, and one may replace the initial grid K by the new grid K̃ defined by the quadratic equality constraints

g̃_k(x) := (x_k − a_{k1})(x_k − a_{k,2r_k}) = 0,    k = 1, ..., n.

The resulting grid K̃ has only 2^n points, and the optimal value p^* is obtained at most at the Q_{n+v} relaxation (since now r = n).

3.4. Constrained discrete optimization. Consider now the problem P with the additional constraints h_j(x) ≥ 0, j = 1, ..., m, for some real-valued polynomials h_j(x): R^n → R of degree 2v_j or 2v_j − 1 (depending on the parity), which we may assume to be not larger than r. The new feasible set K̂ is now

K̂ := K ∩ [ ∩_{j=1}^m {h_j(x) ≥ 0} ],

and the new associated relaxations Q_i now include the additional constraints M_{i−v_j}(h_j y) ⪰ 0, j = 1, ..., m. Theorem 3.2 also holds, that is, the (new) optimal value p^* is also obtained at the relaxation Q_{r+v}, with now v := max{r_0 − r, max_j v_j, max_k r_k}. Indeed, as in the proof of Theorem 3.2, we may invoke Theorem 1.1 of Curto and Fialkow [2] to show that M_r(y) admits flat positive extensions M_{r+l}(y), l = 1, ..., and as the M_{r+v−v_j}(h_j y) are all well defined relative to the unique flat extension M_{r+v}(y), every admissible solution y of Q_{r+v} is the vector of moments of some probability measure µ_y supported on K̂, and the result follows with the same arguments.

4. Representation of polynomials nonnegative on K

We now investigate the representation of polynomials that are nonnegative on K. We will see that the previous results, obtained for the discrete optimization problem P, will help us get the desired representation of p(x) − p^*, and thus the representation of every polynomial p(x) nonnegative on K.
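Before turning to the representation results, Remark 3.4 can be checked numerically on a toy instance: for a concave polynomial, the minimum over the full grid equals the minimum over the 2^n corner points. A small sketch (the instance and the names are ours):

```python
from itertools import product

def grid_min(p, axes):
    """Minimum of p over the Cartesian product grid with the given axes."""
    return min(p(x) for x in product(*axes))

# n = 2, with 2r_k = 4 grid points per axis; corners as in Remark 3.4.
axes = [(0.0, 1.0, 2.0, 3.0)] * 2
corners = [(0.0, 3.0)] * 2

# A concave polynomial: negative definite quadratic part plus a linear term.
p = lambda x: -(2 * x[0] ** 2 + x[1] ** 2) + x[0] + 3 * x[1]

full_min = grid_min(p, axes)       # searches all 4^n grid points
corner_min = grid_min(p, corners)  # searches only the 2^n corner points
```

Both minima coincide here, as Remark 3.4 predicts for concave p.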
We then present results for concave polynomials.

4.1. Representation of polynomials nonnegative on K. For a polynomial g(x) nonnegative on K, let g̃(x) := g(x) − g(0), and let p̃^* := min_{x∈K} g̃(x) and 0 ≤ p^* := min_{x∈K} g(x). Then g(x) = [g̃(x) − p̃^*] + p^*. Therefore, if one obtains a representation of g̃(x) − p̃^* as a weighted sum of squares, one obtains the same representation for the polynomial g(x), nonnegative on K, by adding the nonnegative constant term p^*. Therefore, with no loss of generality, we may and will assume that the constant term of p(x) vanishes, that is, p(0) = 0. Consider the polynomial p(x) − p^*, with p^* := min_{x∈K} p(x), which is nonnegative on K, and let 2r_0 (resp. 2r_0 − 1) be its degree if even (resp. odd). Before proceeding further, we need the following remark.

Remark 4.1. We have seen that with

v := max{r_0 − r, max_{k=1,...,n} r_k},

we have min Q_{r+v} = p^*, and Q_{r+v} is equivalent to solving the psd program

min_y { ∑_α p̃_α y_α | M̃_r(y) ⪰ 0 },

where M̃_r(y) is the simplified form of M_r(y) after the simplifications in Q_{r+v} induced by the constraints g_k(x) = 0, k = 1, ..., n (cf. (3.19) in Remark 3.3), so that we only have variables y_α with α_k < 2r_k for all k. In fact, M̃_r(y) can be further simplified and reduced in size. Indeed, all the columns of M̃_r(y) that correspond to monomials x^α in the basis (2.1) that are simplified to a linear combination of monomials x^β with β_k < 2r_k for all k can be deleted (and the corresponding rows as well). Indeed, by rearranging rows and columns, write

M̃_r(y) = [ H_r(y)  B(y)
            B'(y)   C(y) ],

where the elements of H_r(y)(1, ·) correspond to the monomials x^β in the basis (2.1) with β_k < 2r_k for all k. Thus, [B(y); C(y)] is the submatrix of M̃_r(y) whose columns are precisely the columns of M̃_r(y) that are linear combinations of the columns of [H_r(y); B'(y)]. Therefore, we can write B(y) = H_r(y) W and C(y) = B'(y) W for some matrix W. Next, let y be such that H_r(y) ≻ 0. From C(y) = W' H_r(y) W ⪰ 0, it follows that M̃_r(y) ⪰ 0 and, in addition, rank M̃_r(y) = rank H_r(y). Therefore, y is admissible for Q_{r+v} with the same value, so that solving Q_{r+v} is equivalent to solving the new psd program of reduced size

(4.1)    Q̃_r:    min_y { ∑_α p̃_α y_α | H_r(y) ⪰ 0 }.

With s := ∏_{k=1}^n 2r_k, H_r(y) is a matrix of size s × s (s is precisely the number of monomials x^β with β_k < 2r_k for all k = 1, ..., n).

To illustrate the above Remark 4.1, consider the matrix M̃_2(y) in (3.14) (for the case n = 2 and g_k(x) = x_k^2 − x_k), which can be written

M̃_2(y) =
[ 1       y_{10}  y_{01}  y_{11}  y_{10}  y_{01}
  y_{10}  y_{10}  y_{11}  y_{11}  y_{10}  y_{11}
  y_{01}  y_{11}  y_{01}  y_{11}  y_{11}  y_{01}
  y_{11}  y_{11}  y_{11}  y_{11}  y_{11}  y_{11}
  y_{10}  y_{10}  y_{11}  y_{11}  y_{10}  y_{11}
  y_{01}  y_{11}  y_{01}  y_{11}  y_{11}  y_{01} ],

with the last two columns being identical to the second and third columns, respectively.
Therefore, in this case, solving Q_{2+v} reduces to solving the psd program

Q̃_2:    min_y ∑_α p̃_α y_α
         s.t.    H_2(y) =
[ 1       y_{10}  y_{01}  y_{11}
  y_{10}  y_{10}  y_{11}  y_{11}
  y_{01}  y_{11}  y_{01}  y_{11}
  y_{11}  y_{11}  y_{11}  y_{11} ]  ⪰ 0.
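For this 0/1 example (g_k(x) = x_k^2 − x_k), the substitution of Section 3.2 amounts to capping every exponent at 1, and the reduction of Remark 4.1 then deletes each column whose reduced label has already appeared. A symbolic sketch (entries are moment labels rather than numbers; all names are ours):

```python
from itertools import combinations_with_replacement

def basis(n, r):
    """Exponent tuples of total degree <= r, ordered as in (2.1)."""
    b = [(0,) * n]
    for d in range(1, r + 1):
        for combo in combinations_with_replacement(range(n), d):
            a = [0] * n
            for i in combo:
                a[i] += 1
            b.append(tuple(a))
    return b

def reduce_exp(alpha):
    """x_k^2 = x_k on the grid {0,1}^n, so every exponent caps at 1."""
    return tuple(min(a, 1) for a in alpha)

def simplified_moment_matrix(n, r):
    """The matrix M~_r(y), with entries given as reduced moment labels."""
    b = basis(n, r)
    return [[reduce_exp(tuple(u + w for u, w in zip(al, be))) for be in b]
            for al in b]

M = simplified_moment_matrix(2, 2)                  # 6 x 6, as in (3.14)
labels = {e for row in M for e in row} - {(0, 0)}   # distinct variables y_beta

# Remark 4.1: keep one column per distinct reduced label -> H_2(y) of (4.1).
keep, seen = [], set()
for j, be in enumerate(basis(2, 2)):
    lab = reduce_exp(be)
    if lab not in seen:
        seen.add(lab)
        keep.append(j)
H = [[M[i][j] for j in keep] for i in keep]         # the 4 x 4 pattern of H_2(y)
```

Here `labels` comes out as {(1,0), (0,1), (1,1)}, i.e. the three variables y_{10}, y_{01}, y_{11} noted after (3.14), and H has the 4 × 4 pattern of H_2(y) displayed above.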

We need the simplified psd program Q̃_r to prove the absence of a duality gap between Q_{r+v} and its dual Q_{r+v}^*, which in turn will yield the desired result on the representation of polynomials p(x) nonnegative on K.

Theorem 4.2. Let p(x): R^n → R be an arbitrary polynomial of degree 2r_0 or 2r_0 − 1, with p(0) = 0. Let P be as in (3.1) with K as in (3.3), and let r := ∑_{k=1}^n (2r_k − 1) and s := ∏_{k=1}^n 2r_k. Then, with v := max{r_0 − r, max_{k=1,...,n} r_k}:

(i) There is no duality gap between the psd program Q_{r+v} and its dual Q_{r+v}^*. Moreover, Q_{r+v}^* is solvable, that is,

(4.2)    max Q_{r+v}^* = min Q_{r+v} = p^*.

(ii) p(x) − p^* has the following representation:

(4.3)    p(x) − p^* = ∑_{j=1}^{m_0} u_j(x)^2 + ∑_{k=1}^n g_k(x) [ ∑_{j=1}^{m_k} v_{kj}(x)^2 − ∑_{l=1}^{n_k} w_{kl}(x)^2 ],

for some m_0 (≤ s(r + v)) polynomials {u_j(x)} of degree at most r + v, and some polynomials {v_{kj}(x), w_{kl}(x)} of degree at most r + v − r_k, with m_k + n_k ≤ s(r + v − r_k), k = 1, ..., n.

Proof. From Remark 4.1, we know that min Q̃_r = p^*, and that the dimension of the matrix H_r(y) is s × s with s := ∏_{k=1}^n 2r_k. Next, to each of the s (grid) points of K, labelled x(k), k = 1, ..., s, corresponds an admissible solution y(k) of Q_{r+v} with

(4.4)    y(k) = (x(k)_1, ..., x(k)_n, ..., x(k)_1^{2(r+v)}, ..., x(k)_n^{2(r+v)}),

and with M_{r+v}(y(k)) ⪰ 0, k = 1, ..., s. Let ỹ(k) be the vector obtained from y(k) by keeping only the variables y(k)_α corresponding to the monomials x^α with α_k < 2r_k for all k; then the vector (1, ỹ(k)) satisfies H_r(ỹ(k)) ⪰ 0, and ỹ(k) is admissible for Q̃_r in (4.1). Moreover, the s points (1, ỹ(k)) ∈ R^s are linearly independent (see a proof in the Appendix). Consider a convex combination z ∈ R^s defined by

z := ∑_{k=1}^s z_k (1, ỹ(k)),    with z_k > 0 and ∑_{k=1}^s z_k = 1.

From the linearity of H_r(z), it follows that

H_r(z) = ∑_{k=1}^s z_k H_r(ỹ(k)) ⪰ 0.

Moreover, from the definition of the moment matrix M_{r+v}(y), all the matrices M_{r+v}(y(k)) are the rank-one matrices

M_{r+v}(y(k)) = [1; y(k)] [1, y(k)'],    k = 1, ..., s.

From this and the definition of the simplified moment matrix H_r(y), it also follows that all the matrices H_r(ỹ(k)) are the rank-one matrices

H_r(ỹ(k)) = [1; ỹ(k)] [1, ỹ(k)'],    k = 1, ..., s.

As the points (1, ỹ(k)) are linearly independent in R^s, it follows that H_r(z) ≻ 0, that is, H_r(z) is positive definite, and thus z is a strictly admissible solution of Q̃_r. In other words, Slater's interior point condition holds for the convex psd program Q̃_r. As inf Q̃_r = p^* > −∞, by a standard (strong duality) result in convex optimization, we have that Q̃_r^* is solvable and there is no duality gap between Q̃_r and Q̃_r^* (see e.g. Sturm [13, Th. 2.24]). That is, max Q̃_r^* = min Q̃_r = p^*. If we now remember how Q̃_r was obtained from Q_{r+v}, it also follows that Q_{r+v}^* is solvable and there is no duality gap between Q_{r+v} and Q_{r+v}^*, which proves the first assertion of Theorem 4.2.

Next, as Q_{r+v}^* is solvable with optimal value p^*, let X^* ⪰ 0 and Z_k^*, k = 1, ..., n, be an optimal solution of Q_{r+v}^*. From the spectral decompositions of the symmetric matrices X^* and Z_k^*, write

X^* = ∑_{j=1}^{m_0} u_j u_j'    and    Z_k^* = ∑_{j=1}^{m_k} v_{kj} v_{kj}' − ∑_{l=1}^{n_k} w_{kl} w_{kl}',    k = 1, ..., n,

where the vectors {u_j} correspond to the m_0 positive eigenvalues of X^*, and the vectors {v_{kj}} (resp. {w_{kl}}) correspond to the m_k positive (resp. the n_k negative) eigenvalues of Z_k^*. Consider the polynomials {u_j(x)} and {v_{kj}(x), w_{kl}(x)} with respective coefficient vectors {u_j} and {v_{kj}, w_{kl}} in the basis (2.1). Fix x ∈ R^n, arbitrary. From the feasibility of (X^*, Z_k^*) in Q_{r+v}^*,

⟨X^*, B_α⟩ x^α + ∑_{k=1}^n ⟨Z_k^*, C_α^k⟩ x^α = p_α x^α,    α ≠ 0,

and from max Q_{r+v}^* = p^*,

X^*(1, 1) + ∑_{k=1}^n g_k(0) Z_k^*(1, 1) = −p^*.

Using the notation y_x = (x_1, ..., x_n, x_1^2, x_1 x_2, ..., x_1^{2(r+v)}, ..., x_n^{2(r+v)}), and summing up over all α, yields

⟨X^*, M_{r+v}(y_x)⟩ + ∑_{k=1}^n ⟨Z_k^*, M_{r+v−r_k}(g_k y_x)⟩ = p(x) − p^*,

or, equivalently,

p(x) − p^* = ∑_{j=1}^{m_0} ⟨u_j, M_{r+v}(y_x) u_j⟩ + ∑_{k=1}^n [ ∑_{j=1}^{m_k} ⟨v_{kj}, M_{r+v−r_k}(g_k y_x) v_{kj}⟩ − ∑_{l=1}^{n_k} ⟨w_{kl}, M_{r+v−r_k}(g_k y_x) w_{kl}⟩ ].

Therefore, using (2.4)–(2.6) (with y_x),

p(x) − p^* = ∑_{j=1}^{m_0} u_j(x)^2 + ∑_{k=1}^n g_k(x) [ ∑_{j=1}^{m_k} v_{kj}(x)^2 − ∑_{l=1}^{n_k} w_{kl}(x)^2 ],

which is (4.3).

Concave polynomials. As a consequence of Theorem 4.2, we obtain a representation of concave polynomials. Remember that the global minimum of a concave polynomial on a grid K is attained at some corner point of the grid K (see Remark 3.4). Therefore, let p(x) : R^n → R be a concave polynomial of degree 2r_0 or 2r_0 − 1, with p* := min_{x ∈ K} p(x). Then,

(4.5)  p(x) − p* = Σ_{j=1}^{m_0} u_j(x)^2 + Σ_{k=1}^n (x_k − a_{k1})(a_{k,2r_k} − x_k) [ Σ_{l=1}^{m_k} v_{kl}(x)^2 − Σ_{l=1}^{n_k} w_{kl}(x)^2 ]

for some polynomials {u_j(x)} of degree at most n + v, and some polynomials {v_{kl}(x)} and {w_{kl}(x)} of degree at most n + v − 1. This is because we may replace the initial grid K with the coarser grid K̃ defined by the 2^n corner points, so that we have r = n and v = max[1, r_0 − n]. Moreover, from the concavity of p(x),

  p* = min_{x ∈ K} p(x) = min { p(x) | a_{k1} ≤ x_k ≤ a_{k,2r_k}, k = 1, ..., n },

that is, p* is also the global minimum of p(x) on the box S := ∏_{k=1}^n [a_{k1}, a_{k,2r_k}] (equivalently, the convex hull of K). But observe that (4.5) is also a linear representation with respect to the constraints (x_k − a_{k1})(a_{k,2r_k} − x_k) ≥ 0, k = 1, ..., n, which are necessary conditions for x ∈ S. Therefore, we have

(4.6)  p(x) − min_{x ∈ S} p(x) = Σ_{j=1}^{m_0} u_j(x)^2 + Σ_{k=1}^n (x_k − a_{k1})(a_{k,2r_k} − x_k) [ Σ_{l=1}^{m_k} v_{kl}(x)^2 − Σ_{l=1}^{n_k} w_{kl}(x)^2 ].

Remark 4.3. The reader will easily convince himself that Theorem 3.2 of this paper is also valid if, instead of a grid, K is the more general variety ∩_{k=1}^n {g_k(x) = 0}, with polynomials g_k(x) of the form

  g_k(x) := x_k^{r_k} + h_k(x),   k = 1, ..., n,

where h_k(x) is a polynomial of degree not larger than r_k − 1.

Concave polynomials on a simplex. We end this section with the characterization of concave polynomials on a simplex. Let Δ ⊂ R^n be the canonical simplex

(4.7)  Δ := { x ∈ R^n_+ | Σ_{i=1}^n x_i = 1 }.

Let p(x) : R^n → R be a concave polynomial of degree 2r_0 or 2r_0 − 1. Its minimum on Δ is attained at some vertex of Δ. Therefore,

(4.8)  p* = min_{x ∈ Δ} p(x) = min_{x ∈ Δ ∩ {0,1}^n} p(x) = min { p(x) | x ∈ {0,1}^n, θ(x) := 1 − Σ_{i=1}^n x_i = 0 }.

The constraints x ∈ {0,1}^n are modelled with the polynomial constraints g_k(x) := x_k − x_k^2 = 0, k = 1, 2, ..., n.
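The reduction used in (4.8) — the minimum of a concave polynomial over the simplex is attained at a vertex — can be sanity-checked numerically. In the sketch below the polynomial and sample size are illustrative choices, not taken from the paper:

```python
import numpy as np

# Hypothetical concave polynomial p(x) = -(x1^2 + 2 x2^2 + 3 x3^2); being
# concave, its minimum over the simplex is attained at a vertex e_k.
p = lambda x: -(x[0] ** 2 + 2 * x[1] ** 2 + 3 * x[2] ** 2)

vertices = np.eye(3)
best_vertex = min(p(v) for v in vertices)        # p(e3) = -3, the smallest

# random points of the simplex (Dirichlet samples) never do better
rng = np.random.default_rng(0)
samples = rng.dirichlet(np.ones(3), size=2000)
assert all(p(x) >= best_vertex - 1e-12 for x in samples)
```

Of course the concavity of p is essential here; a convex p could attain its minimum in the interior of Δ.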

As we already did before, we also assume with no loss of generality that p(0) = 0. In view of Section 3.4 and Remark 3.3 (and as r = n, v := max[1, r_0 − n]), the optimal value p* is obtained at the Q_{n+v} SDP relaxation

  Q_{n+v}:  min_y Σ_α p_α y_α
            s.t.  M_{n+v}(y) ⪰ 0,
                  M_{n+v−1}(g_k y) = 0,  k = 1, ..., n,
                  M_{n+v−1}(θ y) = 0.

The constraints M_{n+v−1}(g_k y) = 0 and M_{n+v−1}(θ y) = 0 generate linear relationships between the variables y_α, and in view of those constraints there are only n − 1 independent variables, which may be chosen to be y_{10...0}, y_{010...0}, ..., y_{0...010}, corresponding to the monomials x_1, ..., x_{n−1}. Indeed, for instance, the constraint M_{n+v−1}(θ y)(1, 1) = 0 states that

  y_{10...0} + y_{010...0} + ... + y_{0...01} = 1,

which yields n − 1 independent variables among the variables y_α corresponding to first-order moments, that is, with |α| = 1 (where |α| := Σ_i α_i). The constraint M_{n+v−1}(θ y)(1, 2) = 0 generates

  y_{10...0} − y_{20...0} − y_{110...0} − ... − y_{10...01} = 0,

which (as y_{10...0} = y_{20...0} from M_{n+v−1}(g_1 y) = 0, and y_α ≥ 0) yields y_α = 0 for all variables y_α corresponding to monomials x_1 x_i, i = 2, ..., n, etc., so that finally all the variables y_α with |α| = 2 vanish. From the other constraints induced by M_{n+v−1}(θ y) = 0, it follows that all the variables y_α with |α| > 1 vanish. This corresponds to the fact that, on Δ ∩ {0,1}^n, all the monomials x^α with α_i < 2 and |α| > 1 vanish. Thus, after these substitutions, Q_{n+v} is the SDP relaxation

  min Σ_{|α|=1} p_α y_α
  s.t.  [ 1           y_{10...0}  ...  y_{0...01} ]
        [ y_{10...0}  y_{10...0}  ...  0          ]  ⪰ 0,
        [ ...                     ...             ]
        [ y_{0...01}  0           ...  y_{0...01} ]

with

(4.9)  y_{10...0} + y_{010...0} + ... + y_{0...01} = 1.

Therefore, using (4.9) to remove one variable, say y_{0...01}, the SDP Q_{n+v} is strictly equivalent to the simplified SDP relaxation

  Q̃_{n+v}:  p* = min Σ_{|α|=1, α ≠ 0...01} (p_α − p_{0...01}) y_α + p_{0...01}
             s.t.  H(y) := [ 1            y_{10...0}   ...  y_{0...010} ]
                           [ y_{10...0}   y_{10...0}   ...  0           ]  ⪰ 0
                           [ ...                       ...              ]
                           [ y_{0...010}  0            ...  y_{0...010} ]
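The arrow-shaped structure of H(y) — ones and first-order moments on the border, first-order moments on the diagonal, zeros elsewhere — can be illustrated directly. The NumPy sketch below (with the illustrative choice n = 3, so y ∈ R^2) checks that at each vertex moment vector H is the rank-one matrix (1, y)(1, y)', and that averaging the vertices already produces a positive definite H, anticipating the Slater-point argument that follows:

```python
import numpy as np

# H(y) = [[1, y'], [y, Diag(y)]]: the moment matrix once all moments y_alpha
# with |alpha| > 1 have been eliminated (n = 3, so y has n - 1 = 2 entries).
def H(y):
    m = np.zeros((len(y) + 1, len(y) + 1))
    m[0, 0] = 1.0
    m[0, 1:] = m[1:, 0] = y
    m[1:, 1:] = np.diag(y)
    return m

# moment vectors y(k) of the vertices e1, e2, e3 of the simplex
ys = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.0, 0.0])]

for y in ys:
    v = np.concatenate(([1.0], y))
    assert np.allclose(H(y), np.outer(v, v))     # rank one, hence psd

# a strict convex combination of the vertices makes H positive definite
z = np.mean(ys, axis=0)                          # z = (1/3, 1/3)
assert np.all(np.linalg.eigvalsh(H(z)) > 0)
```

Since H is affine in y with H(y)(1,1) ≡ 1, H(z) is the average of the three rank-one matrices above, and their vectors (1, y(k)) are linearly independent, whence the positive definiteness.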

Here the variables are {y_α} ∈ R^{n−1}. Next, let {x(k)}, k = 1, ..., n, be the n vertices of Δ, that is, x(k) = e_k ∈ R^n with e_{ki} = δ_{ki}. To each x(k) ∈ R^n corresponds the vector (1, y(k)) ∈ R^n with y(k)_j = δ_{kj}, j = 1, ..., n − 1, for all k = 1, ..., n − 1, and y(n) = 0. Therefore, the n vectors (1, y(k)) are linearly independent, and, in addition,

  H(y(k)) = (1, y(k)) (1, y(k))' ⪰ 0,   k = 1, ..., n,

so that every y(k) is admissible for Q̃_{n+v}. Moreover, let z := Σ_{k=1}^n z_k (1, y(k)) with z_k > 0 and Σ_{k=1}^n z_k = 1. From the linear independence of the vectors (1, y(k)), it follows that H(z) ≻ 0, and thus z is a strictly admissible solution of Q̃_{n+v}; that is, Slater's interior point condition holds for Q̃_{n+v}. As min Q̃_{n+v} = min Q_{n+v} = p*, from a standard result in convex optimization, it follows that there is no duality gap between Q̃_{n+v} and its dual Q̃*_{n+v}, and sup Q̃*_{n+v} = max Q̃*_{n+v} = p*. If we remember how Q̃_{n+v} was obtained from Q_{n+v}, this in turn implies that max Q*_{n+v} = p*. Now, from the solvability of the dual Q*_{n+v} with max Q*_{n+v} = p*, and proceeding as in the proof of Theorem 4.2, we obtain

(4.10)  p(x) − p* = Σ_{j=1}^{m_0} q_j(x)^2 + Σ_{k=1}^n (x_k − x_k^2) [ Σ_l v_{kl}(x)^2 − Σ_l w_{kl}(x)^2 ] + (1 − Σ_k x_k) [ Σ_l y_l(x)^2 − Σ_l z_l(x)^2 ]

for some polynomials {q_j(x)} of degree at most n + v, and some polynomials {v_{kl}(x)}, {w_{kl}(x)} and {y_l(x), z_l(x)} of degree at most n + v − 1. Hence, every concave polynomial p(x), nonnegative on Δ, has the representation (4.10) (write p(x) = (p(x) − p*) + a with a := p* ≥ 0).

5. Conclusion

We have provided a representation of polynomials that are nonnegative on a grid K of points of R^n. This representation is a sum of squares linearly weighted by the polynomials defining the grid K, as in Putinar's representation. However, the degree of the polynomials in this representation is bounded and known in advance, and depends on the size of the grid, not on the points of the grid.
A related result is that every discrete optimization problem (on a grid) is also equivalent to a continuous convex optimization problem whose size depends on the size of the grid, but not on the points of the grid.

6. Appendix

Let {y(k)}, k = 1, ..., s, be the vectors in (4.4); that is, y(k) is the vector {x(k)^β} of all monomials x^β with β_k < 2r_k, evaluated at the point x(k) of the grid K. We show, by induction on the dimension of the space R^n, that the s points {(1, y(k))} in R^s are linearly independent.

Denote by K_n the grid in R^n and let S_n ∈ R^{s_n × s_n} be the matrix formed by the s_n points {(1, y(k))} in R^{s_n}, where s_n = ∏_{k=1}^n (2r_k). Its columns are the vectors (1, y(k)), so that each row corresponds to one monomial x^β (with β_k < 2r_k) evaluated at the s_n grid points: the first row is (1, 1, ..., 1), the next rows list the values of x_1 (namely a_{11}, a_{12}, ..., a_{1,2r_1}, repeated), then of x_2, ..., x_n, then of the higher-order monomials x_1^2, x_1 x_2, etc.

Assume that the property is true (that is, S_n is nonsingular) whenever the dimension is 1, 2, ..., n. Consider a grid K_{n+1} in R^{n+1} defined by g_k(x) = 0 for all k = 1, ..., n + 1, where

  g_k(x) := ∏_{i=1}^{2r_k} (x_k − a_{ki}),   k = 1, 2, ..., n + 1.

With s_{n+1} := ∏_{k=1}^{n+1} 2r_k, we also define the s_{n+1} points {(1, ỹ(k))} in R^{s_{n+1}} and the associated matrix S_{n+1}. From its definition, and for an easy use of the induction argument, we can rearrange S_{n+1} to rewrite it as

  S_{n+1} = [ S_n  S_n  ...  S_n
              C_1  C_2  ...  C_{2r_{n+1}} ],

where the first column C_1(:, 1) of C_1 contains all the monomials x^β with 1 ≤ β_{n+1} < 2r_{n+1} evaluated at the point (a_{11}, a_{21}, ..., a_{n1}, a_{n+1,1}) of the grid K_{n+1}. Similarly, the second column C_1(:, 2) of C_1 contains all the monomials x^β with 1 ≤ β_{n+1} < 2r_{n+1} evaluated at the point (a_{11}, a_{21}, ..., a_{n2}, a_{n+1,1}), etc. C_i is a verbatim copy of C_1 with a_{n+1,1} replaced with a_{n+1,i}, for all i = 1, ..., 2r_{n+1}.

Next, by the induction hypothesis, S_n is full rank, and thus the only way for S_{n+1} to be singular is that at least one column of [S_n; C_i] matches exactly one column of [S_n; C_j] for some pair (i, j). But this is not possible. Indeed, assume that C_i(:, m) = C_j(:, p) for some pair (m, p). Note that, because of S_n, we must have m = p. An element C_i(k, p) is of the form x_1^{β_1} ... x_n^{β_n} a_{n+1,i}^{β_{n+1}}, whereas the element C_j(k, p) is x_1^{β_1} ... x_n^{β_n} a_{n+1,j}^{β_{n+1}}. Therefore, it suffices to consider a row k corresponding to a monomial x^β in the basis (2.1), with β_i = 0 for all i = 1, ..., n, to yield the contradiction a_{n+1,i}^{β_{n+1}} = a_{n+1,j}^{β_{n+1}}, since the a_{n+1,k}'s are distinct.
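The base case of this induction (n = 1, treated next) is a Vandermonde-type nonsingularity statement, which a short NumPy check makes concrete; the grid points below are arbitrary illustrative values:

```python
import numpy as np

# For n = 1 and 2*r1 distinct grid points, S_1 has rows 1, x, ..., x^(2r1-1)
# evaluated at the points: a transposed Vandermonde matrix, hence nonsingular.
r1 = 2
points = np.array([-1.5, 0.2, 1.0, 3.7])               # 2*r1 distinct points
S1 = np.vander(points, N=2 * r1, increasing=True).T    # rows = monomials

# a vanishing determinant would yield a polynomial of degree 2*r1 - 1
# with 2*r1 distinct real zeros, which is impossible
assert abs(np.linalg.det(S1)) > 1e-9
```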

It remains to prove that the induction hypothesis is true for n = 1. With n = 1 and a grid K_1 of 2r_1 points, the matrix S_1 reads

  S_1 = [ 1              1              ...  1
          x_1            x_2            ...  x_{2r_1}
          x_1^2          x_2^2          ...  x_{2r_1}^2
          ...
          x_1^{2r_1−1}   x_2^{2r_1−1}   ...  x_{2r_1}^{2r_1−1} ].

If S_1 is singular, then there is a linear combination {γ_k} of the rows such that

  Σ_{k=0}^{2r_1−1} γ_k x^k = 0,   x ∈ K_1,

that is, the univariate polynomial h(x) := Σ_{k=0}^{2r_1−1} γ_k x^k has 2r_1 distinct real zeros, which is impossible.

References

[1] C. Berg, The multidimensional moment problem and semi-groups, in: Moments in Mathematics (H.J. Landau, ed.), Amer. Math. Soc., 1987. MR 89k:44013
[2] R.E. Curto and L.A. Fialkow, The truncated complex K-moment problem, Trans. Amer. Math. Soc. 352 (2000). MR 2000:47027
[3] T. Jacobi, A representation theorem for certain partially ordered commutative rings, Math. Z. 237 (2001). CMP 2001:14
[4] T. Jacobi and A. Prestel, On special representations of strictly positive polynomials, Technical report, Konstanz University, Konstanz, Germany, January.
[5] T. Jacobi and A. Prestel, Distinguished representations of strictly positive polynomials, J. Reine Angew. Math. 532 (2001). CMP 2001:09
[6] M. Kojima and L. Tunçel, Cones of matrices and successive convex relaxations of nonconvex sets, SIAM J. Optim. 10 (2000). MR 2001i:90062
[7] J.B. Lasserre, Global optimization with polynomials and the problem of moments, SIAM J. Optim. 11 (2001).
[8] J.B. Lasserre, Optimality conditions and LMI relaxations for 0-1 programs, LAAS-CNRS Technical report #00099, submitted.
[9] L. Lovász and A. Schrijver, Cones of matrices and set-functions and 0-1 optimization, SIAM J. Optim. 1 (1991). MR 92c:52016
[10] M. Putinar, Positive polynomials on compact semi-algebraic sets, Indiana Univ. Math. J. 42 (1993). MR 95h:47014
[11] K. Schmüdgen, The K-moment problem for compact semi-algebraic sets, Math. Ann. 289 (1991). MR 92b:44011
[12] B. Simon, The classical moment problem as a self-adjoint finite difference operator, Adv. Math. 137 (1998). MR 2001e:47020
[13] J.F. Sturm, Theory and algorithms of semidefinite programming, in: High Performance Optimization Methods (H. Frenk, K. Roos, and S. Zhang, eds.), Kluwer Academic Publishers, Dordrecht, 2000. MR 2001e:90001
[14] L. Vandenberghe and S. Boyd, Semidefinite programming, SIAM Review 38 (1996). MR 96m:90005

LAAS-CNRS, 7 Avenue du Colonel Roche, Toulouse Cédex, France
E-mail address: lasserre@laas.fr


More information

Introduction to Semidefinite Programming I: Basic properties a

Introduction to Semidefinite Programming I: Basic properties a Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite

More information

Global Minimization of Rational Functions and the Nearest GCDs

Global Minimization of Rational Functions and the Nearest GCDs Global Minimization of Rational Functions and the Nearest GCDs Jiawang Nie James Demmel and Ming Gu January 12, 2006 Abstract This paper discusses the global minimization of rational functions with or

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints

Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints Journal of Global Optimization 21: 445 455, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. 445 Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Fakultät für Mathematik und Informatik

Fakultät für Mathematik und Informatik Fakultät für Mathematik und Informatik Preprint 2017-03 S. Dempe, F. Mefo Kue Discrete bilevel and semidefinite programg problems ISSN 1433-9307 S. Dempe, F. Mefo Kue Discrete bilevel and semidefinite

More information

EE 227A: Convex Optimization and Applications October 14, 2008

EE 227A: Convex Optimization and Applications October 14, 2008 EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider

More information

Representations of Positive Polynomials: Theory, Practice, and

Representations of Positive Polynomials: Theory, Practice, and Representations of Positive Polynomials: Theory, Practice, and Applications Dept. of Mathematics and Computer Science Emory University, Atlanta, GA Currently: National Science Foundation Temple University

More information

Notes on the decomposition result of Karlin et al. [2] for the hierarchy of Lasserre by M. Laurent, December 13, 2012

Notes on the decomposition result of Karlin et al. [2] for the hierarchy of Lasserre by M. Laurent, December 13, 2012 Notes on the decomposition result of Karlin et al. [2] for the hierarchy of Lasserre by M. Laurent, December 13, 2012 We present the decomposition result of Karlin et al. [2] for the hierarchy of Lasserre

More information

Semidefinite representation of convex hulls of rational varieties

Semidefinite representation of convex hulls of rational varieties Semidefinite representation of convex hulls of rational varieties Didier Henrion 1,2 June 1, 2011 Abstract Using elementary duality properties of positive semidefinite moment matrices and polynomial sum-of-squares

More information

Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows

Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows Unbounded Convex Semialgebraic Sets as Spectrahedral Shadows Shaowei Lin 9 Dec 2010 Abstract Recently, Helton and Nie [3] showed that a compact convex semialgebraic set S is a spectrahedral shadow if the

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

1 Quantum states and von Neumann entropy

1 Quantum states and von Neumann entropy Lecture 9: Quantum entropy maximization CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: February 15, 2016 1 Quantum states and von Neumann entropy Recall that S sym n n

More information

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include PUTNAM TRAINING POLYNOMIALS (Last updated: December 11, 2017) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Linear and non-linear programming

Linear and non-linear programming Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)

More information

ALGEBRAIC DEGREE OF POLYNOMIAL OPTIMIZATION. 1. Introduction. f 0 (x)

ALGEBRAIC DEGREE OF POLYNOMIAL OPTIMIZATION. 1. Introduction. f 0 (x) ALGEBRAIC DEGREE OF POLYNOMIAL OPTIMIZATION JIAWANG NIE AND KRISTIAN RANESTAD Abstract. Consider the polynomial optimization problem whose objective and constraints are all described by multivariate polynomials.

More information

LOWER BOUNDS FOR POLYNOMIALS USING GEOMETRIC PROGRAMMING

LOWER BOUNDS FOR POLYNOMIALS USING GEOMETRIC PROGRAMMING LOWER BOUNDS FOR POLYNOMIALS USING GEOMETRIC PROGRAMMING MEHDI GHASEMI AND MURRAY MARSHALL Abstract. We make use of a result of Hurwitz and Reznick [8] [19], and a consequence of this result due to Fidalgo

More information

Noetherian property of infinite EI categories

Noetherian property of infinite EI categories Noetherian property of infinite EI categories Wee Liang Gan and Liping Li Abstract. It is known that finitely generated FI-modules over a field of characteristic 0 are Noetherian. We generalize this result

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

10. Smooth Varieties. 82 Andreas Gathmann

10. Smooth Varieties. 82 Andreas Gathmann 82 Andreas Gathmann 10. Smooth Varieties Let a be a point on a variety X. In the last chapter we have introduced the tangent cone C a X as a way to study X locally around a (see Construction 9.20). It

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

Linear conic optimization for nonlinear optimal control

Linear conic optimization for nonlinear optimal control Linear conic optimization for nonlinear optimal control Didier Henrion 1,2,3, Edouard Pauwels 1,2 Draft of July 15, 2014 Abstract Infinite-dimensional linear conic formulations are described for nonlinear

More information

SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS

SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS HAIBIN CHEN, GUOYIN LI, AND LIQUN QI Abstract. In this paper, we examine structured tensors which have sum-of-squares (SOS) tensor decomposition, and study

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods

The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods by Hae-Soo Oh Department of Mathematics, University of North Carolina at Charlotte, Charlotte, NC 28223 June

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Semidefinite Representation of Convex Sets

Semidefinite Representation of Convex Sets Mathematical Programming manuscript No. (will be inserted by the editor J. William Helton Jiawang Nie Semidefinite Representation of Convex Sets the date of receipt and acceptance should be inserted later

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko

More information