Optimization over Polynomials with Sums of Squares and Moment Matrices


Optimization over Polynomials with Sums of Squares and Moment Matrices
Monique Laurent, Centrum Wiskunde & Informatica (CWI), Amsterdam, and University of Tilburg
Positivity, Valuations and Quadratic Forms, Konstanz, October 2009

Polynomial optimization problem (P)
Minimize a polynomial function $p$ over a basic closed semi-algebraic set $K$:
$p^{\min} := \inf_{x \in K} p(x)$, where $K := \{x \in \mathbb{R}^n : h_1(x) \ge 0, \ldots, h_m(x) \ge 0\}$
and $p, h_1, \ldots, h_m \in \mathbb{R}[x]$ are multivariate polynomials.

Unconstrained polynomial minimization: $K = \mathbb{R}^n$
$p^{\min} := \inf_{x \in \mathbb{R}^n} p(x)$; note that $p^{\min} \ge 0 \iff p \ge 0$ on $\mathbb{R}^n$.
Example: the partition problem. A sequence $a_1, \ldots, a_n \in \mathbb{N}$ can be partitioned if $\sum_{i \in I} a_i = \sum_{i \in [n] \setminus I} a_i$ for some $I \subseteq [n]$, i.e., if $p^{\min} = 0$, where
$p(x) = \Big(\sum_{i=1}^n a_i x_i\Big)^2 + \sum_{i=1}^n (x_i^2 - 1)^2.$
E.g., the sequence $1, 1, 2, 2, 3, 4, 5$ can be partitioned ($1+1+2+5 = 2+3+4$). The partition problem is NP-complete.
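
The encoding is easy to sanity-check symbolically. The sketch below (my illustration, assuming sympy is available) evaluates the partition polynomial at the $\pm 1$ point corresponding to the partition $1+1+2+5 = 2+3+4$:

```python
# Illustrative check (sympy assumed): the partition encoding for a = (1,1,2,2,3,4,5).
import sympy as sp

a = [1, 1, 2, 2, 3, 4, 5]
x = sp.symbols('x1:8')                       # x1, ..., x7
p = sum(ai*xi for ai, xi in zip(a, x))**2 + sum((xi**2 - 1)**2 for xi in x)
# signs +1/-1 select the two sides of the partition 1+1+2+5 = 2+3+4
point = dict(zip(x, [1, 1, 1, -1, -1, -1, 1]))
print(p.subs(point))                         # 0, so p_min = 0: the sequence partitions
```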

Example: Testing matrix copositivity
$M \in \mathbb{R}^{n \times n}$ is copositive if $x^T M x \ge 0$ for all $x \in \mathbb{R}^n_+$, i.e., if $p^{\min} = 0$, where $p(x) = \sum_{i,j=1}^n M_{ij} x_i^2 x_j^2$. Testing copositivity is a co-NP-complete problem.
Example certificates: a $4 \times 4$ matrix can be shown copositive by decomposing it as a sum of two matrices that are each visibly copositive; the $5 \times 5$ Horn matrix (see Parrilo's example below) is copositive.
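
Certifying copositivity is hard, but refuting it is easy to attempt numerically. A small heuristic sketch (my illustration, assuming numpy; `maybe_copositive` is a made-up helper): sample nonnegative points and test $x^T M x \ge 0$; a negative value disproves copositivity, while passing the test proves nothing.

```python
# Heuristic refutation test for copositivity (numpy assumed).
import numpy as np

rng = np.random.default_rng(0)

def maybe_copositive(M, trials=10000):
    """Return False if some sampled x >= 0 has x^T M x < 0; True is inconclusive."""
    X = rng.random((trials, M.shape[0]))          # nonnegative sample points
    vals = np.einsum('ti,ij,tj->t', X, M, X)      # all quadratic forms x^T M x
    return bool((vals >= -1e-12).all())

print(maybe_copositive(np.array([[1., -1.], [-1., 1.]])))  # True: x^T M x = (x1-x2)^2
```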

0/1 linear programming
$\min\, c^T x$ s.t. $Ax \le b$, $x_i^2 = x_i$ ($i = 1, \ldots, n$).
Example: the stability number $\alpha(G)$ of a graph $G = (V, E)$ can be computed via either of the programs:
$\alpha(G) = \max \sum_{i \in V} x_i$ s.t. $x_i + x_j \le 1$ ($ij \in E$), $x_i^2 = x_i$ ($i \in V$)
$\frac{1}{\alpha(G)} = \min x^T (I + A_G) x$ s.t. $\sum_{i \in V} x_i = 1$, $x_i \ge 0$ ($i \in V$)
Hence (P) is NP-hard already for a linear objective with quadratic constraints, or for a quadratic objective with linear constraints.

Strategy
Approximate (P) by a hierarchy of convex (semidefinite) relaxations [Shor (1987); Nesterov, Lasserre, Parrilo (2000-)].
Such relaxations can be constructed using representations of nonnegative polynomials as sums of squares of polynomials, together with the dual theory of moments.

Underlying paradigm
Testing whether a polynomial $p$ is nonnegative is hard, but one can test efficiently whether $p$ is a sum of squares of polynomials, via semidefinite programming.

Plan of the talk
- Role of semidefinite programming in sums of squares
- SOS/moment relaxations for (P)
- Main properties: (1) asymptotic/finite convergence, via SOS representation results for positive polynomials; (2) an optimality criterion, via results for the moment problem; (3) extraction of global minimizers, by solving polynomial equations
- Application to unconstrained polynomial optimization

A beautiful monograph about positive polynomials:
Alexander Prestel and Charles N. Delzell, Positive Polynomials: From Hilbert's 17th Problem to Real Algebra.

Some notation
$\mathbb{R}[x] = \mathbb{R}[x_1, \ldots, x_n]$: ring of polynomials in $n$ variables; $\mathbb{R}[x]_d$: all polynomials of degree $\le d$.
For $p \in \mathbb{R}[x]_d$: $p(x) = \sum_{\alpha \in \mathbb{N}^n,\, |\alpha| \le d} p_\alpha \, x^\alpha$, where $x^\alpha := x_1^{\alpha_1} \cdots x_n^{\alpha_n}$.
Thus $p(x) = p^T [x]_d$ after setting $p = (p_\alpha)_\alpha$, the vector of coefficients, and $[x]_d = (x^\alpha)_\alpha$, the vector of monomials.
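
In code, the identity $p(x) = p^T [x]_d$ is easy to reproduce; a small sketch (my illustration, sympy assumed):

```python
# Build the monomial vector [x]_d and the coefficient vector of p (sympy assumed).
import sympy as sp
from sympy.polys.monomials import itermonomials

x, y = sp.symbols('x y')
d = 2
basis = sorted(itermonomials([x, y], d), key=sp.default_sort_key)   # [x]_d
p = 1 + 3*x*y + 2*y**2
coeffs = [sp.Poly(p, x, y).coeff_monomial(m) for m in basis]        # vector p
# p(x) equals the inner product of the coefficient and monomial vectors:
assert sp.expand(sum(c*m for c, m in zip(coeffs, basis)) - p) == 0
```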

What is semidefinite programming?
Semidefinite programming (SDP) is linear optimization over the cone of positive semidefinite matrices.
LP: vector variable $x \in \mathbb{R}^n$, $x \ge 0$. SDP: matrix variable $X \in \mathrm{Sym}_n$ (symmetric matrix), $X \succeq 0$ (positive semidefinite).
$\sup_X \langle C, X \rangle$ s.t. $\langle A_j, X \rangle = b_j$ ($j = 1, \ldots, m$), $X \succeq 0$.
There are efficient algorithms to solve semidefinite programs.

A small example of SDP
$\max\, (X_{13} + X_{31})/2$ such that $X \succeq 0$, $X \in \mathbb{R}^{3 \times 3}$, $X_{11} = 1$, $X_{12} = 1$, $X_{23} = 1$, $X_{33} = 2$, $2X_{13} + X_{22} = 3$.
Equivalently: $\max\, c$ such that
$X = \begin{pmatrix} 1 & 1 & c \\ 1 & 3 - 2c & 1 \\ c & 1 & 2 \end{pmatrix} \succeq 0.$
One can check that $\max c = 1$ and $\min c = -1$.
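
This example can be reproduced with an off-the-shelf SDP modeling tool; a minimal sketch (my illustration, assuming cvxpy with an SDP-capable solver installed):

```python
# Solve the small SDP twice to get max c and min c (cvxpy assumed).
import cvxpy as cp

X = cp.Variable((3, 3), PSD=True)     # symmetric PSD; X[0,2] plays the role of c
cons = [X[0, 0] == 1, X[0, 1] == 1, X[1, 2] == 1,
        X[2, 2] == 2, 2*X[0, 2] + X[1, 1] == 3]
for sense in (cp.Maximize, cp.Minimize):
    prob = cp.Problem(sense(X[0, 2]), cons)
    prob.solve()
    print(prob.value)                 # approx 1.0, then approx -1.0
```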

The Gram-matrix method to recognize sums of squares [cf. Powers-Wörmann 1998]
Write $p(x) = \sum_{|\alpha| \le 2d} p_\alpha x^\alpha \in \mathbb{R}[x]_{2d}$ as a sum of squares:
$p(x) = \sum_{j=1}^k (u_j(x))^2 = \sum_{j=1}^k [x]_d^T \, u_j u_j^T \, [x]_d = [x]_d^T \, U \, [x]_d$, where $U := \sum_{j=1}^k u_j u_j^T \succeq 0$,
$= \sum_{|\alpha| \le 2d} x^\alpha \Big( \sum_{\beta, \gamma:\ |\beta|, |\gamma| \le d,\ \beta + \gamma = \alpha} U_{\beta,\gamma} \Big).$
Matching coefficients gives $\sum_{\beta + \gamma = \alpha} U_{\beta,\gamma} = p_\alpha$ for all $\alpha$.

Recognize sums of squares via SDP
$p(x) = \sum_{|\alpha| \le 2d} p_\alpha x^\alpha$ is a sum of squares of polynomials $\iff$ the following semidefinite program is feasible:
$U \succeq 0$, $\quad \sum_{\beta, \gamma:\ |\beta|, |\gamma| \le d,\ \beta + \gamma = \alpha} U_{\beta,\gamma} = p_\alpha$ for all $|\alpha| \le 2d$.
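
The feasibility test is straightforward to script. A hedged sketch (my illustration; `is_sos` is a made-up helper, sympy and cvxpy assumed, and feasibility is only decided up to the solver's numerical tolerance):

```python
# Gram-matrix SDP: p is SOS iff some U >= 0 matches all coefficients of p.
import sympy as sp
import cvxpy as cp
from collections import defaultdict
from sympy.polys.monomials import itermonomials

def is_sos(p, variables, d):
    """Check feasibility of the Gram-matrix SDP for p with deg(p) <= 2d."""
    basis = sorted(itermonomials(variables, d), key=sp.default_sort_key)
    n = len(basis)
    U = cp.Variable((n, n), PSD=True)
    # group Gram entries by the monomial they contribute to: beta + gamma = alpha
    entries = defaultdict(list)
    for i in range(n):
        for j in range(n):
            entries[sp.expand(basis[i]*basis[j])].append((i, j))
    poly = sp.Poly(sp.expand(p), *variables)
    cons = [sum(U[i, j] for (i, j) in idx) == float(poly.coeff_monomial(m))
            for m, idx in entries.items()]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL

x, y = sp.symbols('x y')
print(is_sos(x**4 + 2*x**3*y + 3*x**2*y**2 + 2*x*y**3 + 2*y**4, [x, y], 2))  # True
```

The test polynomial here is the quartic worked out by hand on the next slide.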

Example: Is $p = x^4 + 2x^3y + 3x^2y^2 + 2xy^3 + 2y^4$ SOS?
Solution: try to write $p(x, y) = (x^2, xy, y^2) \, U \, (x^2, xy, y^2)^T$ with
$U = \begin{pmatrix} a & b & c \\ b & d & e \\ c & e & f \end{pmatrix} \succeq 0.$
Equating coefficients:
$x^4 = x^2 \cdot x^2$: $1 = a$; $\quad x^3y = x^2 \cdot xy$: $2 = 2b$; $\quad x^2y^2 = xy \cdot xy = x^2 \cdot y^2$: $3 = d + 2c$; $\quad xy^3 = xy \cdot y^2$: $2 = 2e$; $\quad y^4 = y^2 \cdot y^2$: $2 = f$.

Example continued
Hence $U = \begin{pmatrix} 1 & 1 & c \\ 1 & 3 - 2c & 1 \\ c & 1 & 2 \end{pmatrix}$ with $U \succeq 0$.
For $c = -1$: $U = \begin{pmatrix} 1 & 1 & -1 \\ 1 & 5 & 1 \\ -1 & 1 & 2 \end{pmatrix}$, giving $p = (x^2 + xy - y^2)^2 + (y^2 + 2xy)^2$.
For $c = 0$: $U = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix}$, giving $p = (x^2 + xy)^2 + \tfrac{3}{2}(xy + y^2)^2 + \tfrac{1}{2}(xy - y^2)^2$.
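
Both decompositions are easy to verify by expansion; a quick check (my illustration, sympy assumed):

```python
# Verify that the two Gram-matrix choices give valid SOS decompositions of p.
import sympy as sp

x, y = sp.symbols('x y')
p = x**4 + 2*x**3*y + 3*x**2*y**2 + 2*x*y**3 + 2*y**4
dec_c_minus1 = (x**2 + x*y - y**2)**2 + (y**2 + 2*x*y)**2
dec_c_zero = ((x**2 + x*y)**2 + sp.Rational(3, 2)*(x*y + y**2)**2
              + sp.Rational(1, 2)*(x*y - y**2)**2)
assert sp.expand(dec_c_minus1 - p) == 0
assert sp.expand(dec_c_zero - p) == 0
```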

Which nonnegative polynomials are SOS?
Hilbert [1888] classified the pairs $(n, d)$ for which every nonnegative polynomial of degree $d$ in $n$ variables is SOS: $n = 1$; $d = 2$; $(n, d) = (2, 4)$. For all other pairs: $\Sigma_{n,d} \subsetneq P_{n,d}$.
Motzkin polynomial: $x^4y^2 + x^2y^4 - 3x^2y^2 + 1$ lies in $P_{2,6} \setminus \Sigma_{2,6}$.
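
Nonnegativity of the Motzkin polynomial follows from the arithmetic-geometric mean inequality applied to $x^4y^2$, $x^2y^4$, $1$; a quick numeric sanity check (my illustration, numpy assumed):

```python
# Grid check: the Motzkin polynomial is nonnegative, with minimum 0 at |x| = |y| = 1.
import numpy as np

xs = np.linspace(-2, 2, 401)                  # grid containing the points (±1, ±1)
X, Y = np.meshgrid(xs, xs)
M = X**4*Y**2 + X**2*Y**4 - 3*X**2*Y**2 + 1
print(M.min())                                # 0.0, attained at (±1, ±1)
```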

How many nonnegative polynomials are sums of squares? [Blekherman 2003]: Very few!
At fixed degree $2d$ and a large number $n$ of variables, there are significantly more nonnegative polynomials than sums of squares:
$c \, n^{(d-1)/2} \le \left( \frac{\mathrm{vol}(\widehat{P}_{n,2d})}{\mathrm{vol}(\widehat{\Sigma}_{n,2d})} \right)^{1/D} \le C \, n^{(d-1)/2}$
where $\widehat{P}_{n,2d} := \{ p \in P_{n,2d} : p$ homogeneous, $\deg(p) = 2d$, $\int_{S^{n-1}} p(x)\, \mu(dx) = 1 \}$ (and similarly for $\widehat{\Sigma}_{n,2d}$), and $D := \binom{n + 2d - 1}{2d} - 1$ is the dimension of the ambient space.

How many nonnegative polynomials are sums of squares? Many!
The SOS cone is dense in the cone of nonnegative polynomials (allowing variable degrees):
[Lasserre 2004]: If $p \ge 0$ on $\mathbb{R}^n$, then $\forall \epsilon > 0$ $\exists k \in \mathbb{N}$ such that $p + \epsilon \sum_{h=0}^{k} \sum_{i=1}^{n} \frac{x_i^{2h}}{h!}$ is SOS.
[Lasserre-Netzer 2006]: If $p \ge 0$ on $[-1, 1]^n$, then $\forall \epsilon > 0$ $\exists k \in \mathbb{N}$ such that $p + \epsilon \big( 1 + \sum_{i=1}^{n} x_i^{2k} \big)$ is SOS.

Artin [1927] solved Hilbert's 17th problem [1900]:
$p \ge 0$ on $\mathbb{R}^n \implies p = \sum_i \big( \frac{p_i}{q_i} \big)^2$, where $p_i, q_i \in \mathbb{R}[x]$.
That is, $p \, q^2$ is SOS for some $q \in \mathbb{R}[x]$.
Sometimes the shape of the common denominator is known. Pólya [1928] + Reznick [1995]: for $p$ homogeneous,
$p > 0$ on $\mathbb{R}^n \setminus \{0\} \implies p \cdot \big( \sum_{i=1}^n x_i^2 \big)^r$ is SOS for some $r \in \mathbb{N}$.

An example [Parrilo 2000]
$M := \begin{pmatrix} 1 & -1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 \\ 1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & 1 & -1 \\ -1 & 1 & 1 & -1 & 1 \end{pmatrix}$ (the Horn matrix), $\qquad p := \sum_{i,j=1}^{5} M_{ij} \, x_i^2 x_j^2.$
$p$ is not SOS, but $\big( \sum_{i=1}^5 x_i^2 \big) p$ is SOS.
This is a certificate that $p \ge 0$ on $\mathbb{R}^5$, i.e., that $M$ is copositive.

SOS certificates for positivity on a semi-algebraic set K
Let $K = \{x \in \mathbb{R}^n : h_1(x) \ge 0, \ldots, h_m(x) \ge 0\}$ and set $h_0 := 1$. With $\Sigma_n$ the cone of SOS polynomials:
Quadratic module: $M(h) := \big\{ \sum_{j=0}^m s_j h_j : s_j \in \Sigma_n \big\}$
Preordering: $T(h) := \big\{ \sum_{e \in \{0,1\}^m} s_e \, h_1^{e_1} \cdots h_m^{e_m} : s_e \in \Sigma_n \big\}$
$p \in M(h) \implies p \in T(h) \implies p \ge 0$ on $K$.

The Positivstellensatz [Krivine 1964] [Stengle 1974]
The last implication is not an equivalence: for $K = \{x \in \mathbb{R} : (1 - x^2)^3 \ge 0\}$ and $p = 1 - x^2$, we have $p \ge 0$ on $K$, but $p \notin T(h)$.
The Positivstellensatz characterizes nonnegativity on $K$:
$p \ge 0$ on $K \iff pf = p^{2m} + g$ for some $f, g \in T(h)$ and some $m \in \mathbb{N}$.
But this does not yield tractable relaxations for (P)!

The Positivstellensatz [Krivine 1964] [Stengle 1974]
$K = \{x \in \mathbb{R}^n : h_1(x) \ge 0, \ldots, h_m(x) \ge 0\}$
(1) $p > 0$ on $K \iff pf = 1 + g$ for some $f, g \in T(h)$
(2) $p \ge 0$ on $K \iff pf = p^{2m} + g$ for some $f, g \in T(h)$, $m \in \mathbb{N}$
(3) $p = 0$ on $K \iff -p^{2m} \in T(h)$ for some $m \in \mathbb{N}$
(2) $\implies$ solution to Hilbert's 17th problem; (3) $\implies$ Real Nullstellensatz.
But this does not yield tractable relaxations for (P)!

SOS certificates of positivity on K compact
Schmüdgen [1991]: Assume $K$ is compact. Then $p > 0$ on $K \implies p \in T(h)$.
Putinar [1993]: Assume the following Archimedean condition holds:
(A) $\forall p \in \mathbb{R}[x]$ $\exists N \in \mathbb{N}$: $N \pm p \in M(h)$. Equivalently: $\exists N \in \mathbb{N}$: $N - \sum_{i=1}^n x_i^2 \in M(h)$.
Then $p > 0$ on $K \implies p \in M(h)$.

SOS relaxations for (P) [Lasserre 2001, Parrilo 2000]
$p^{\min} = \inf_{x \in K} p(x) = \sup \lambda$ s.t. $p - \lambda \ge 0$ on $K$.
Relax "$p - \lambda \ge 0$ on $K$" [a hard condition] to "$p - \lambda \in M(h)$" [SOS, but unbounded degrees...], and further to "$p - \lambda \in M(h)_{2t}$" [a tractable SDP!], where
$M(h)_{2t} := \big\{ \sum_{j=0}^m s_j h_j : s_j \in \Sigma_n,\ \deg(s_j h_j) \le 2t \big\}.$
Relaxation (SOS$_t$): $p^{\mathrm{sos}}_t := \sup \lambda$ s.t. $p - \lambda \in M(h)_{2t}$. Then $p^{\mathrm{sos}}_t \le p^{\mathrm{sos}}_{t+1} \le p^{\min}$.
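
For intuition, here is (SOS$_t$) written out for a toy univariate instance (my illustration, not from the talk; cvxpy assumed): minimize $p(x) = x^2 - x$ over $K = \{x : 1 - x^2 \ge 0\} = [-1, 1]$ at order $t = 1$, writing $p - \lambda = s_0 + s_1 (1 - x^2)$ with $s_0$ SOS of degree $\le 2$ and $s_1 \ge 0$ a constant.

```python
# (SOS_1) for p(x) = x^2 - x on [-1, 1], by matching coefficients of
# p - lambda = s0 + s1*(1 - x^2), with s0 = [1, x] G [1, x]^T.
import cvxpy as cp

lam = cp.Variable()
G = cp.Variable((2, 2), PSD=True)      # Gram matrix of s0 in the basis [1, x]
s1 = cp.Variable(nonneg=True)          # degree-0 multiplier for h = 1 - x^2
cons = [G[0, 0] + s1 == -lam,          # constant coefficient
        2*G[0, 1] == -1,               # coefficient of x
        G[1, 1] - s1 == 1]             # coefficient of x^2
cp.Problem(cp.Maximize(lam), cons).solve()
print(lam.value)                       # approx -0.25 = p_min, attained at x = 1/2
```

Here the first relaxation is already exact: $p + 1/4 = (x - 1/2)^2$ is a valid certificate with $s_1 = 0$.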

Asymptotic convergence
If (A) holds for $K$, then $\lim_{t \to \infty} p^{\mathrm{sos}}_t = p^{\min}$.
Proof: $p - p^{\min} + \epsilon > 0$ on $K$ $\implies$ $p - p^{\min} + \epsilon \in M(h)_{2t}$ for some $t$ $\implies$ $p^{\mathrm{sos}}_t \ge p^{\min} - \epsilon$.
Note: a representation result valid for "$p \ge 0$ on $K$" would give finite convergence: $p^{\mathrm{sos}}_t = p^{\min}$ for some $t$.
[Nie-Schweighofer 2007]: $p^{\min} - p^{\mathrm{sos}}_t \le \dfrac{c}{\sqrt[c]{\log(t/c)}}$ for $t$ big, where $c = c(p, h)$.

Dual moment relaxations for (P) [Lasserre 2001]
$p^{\min} = \inf_{x \in K} p(x) = \inf_{\mu \text{ probability measure on } K} \int_K p(x) \, d\mu(x)$
$= \inf_L L(p)$ s.t. $L$ is a linear functional on $\mathbb{R}[x]$ coming from a probability measure $\mu$ on $K$
$= \inf_L L(p)$ s.t. $L(1) = 1$ and $L(f) \ge 0$ $\forall f \ge 0$ on $K$ [by Haviland's theorem].
[Haviland] The following are equivalent for a linear functional $L$ on $\mathbb{R}[x]$:
- $L$ comes from a nonnegative measure $\mu$ on $K$, i.e., $L(f) = \int_K f(x) \, d\mu(x)$ $\forall f \in \mathbb{R}[x]$;
- $L(f) \ge 0$ for all $f \ge 0$ on $K$.

Dual moment relaxations for (P) [continued]
$p^{\min} = \inf_L L(p)$ s.t. $L(1) = 1$ and $L(f) \ge 0$ $\forall f \ge 0$ on $K$.
Relax "$L(f) \ge 0$ $\forall f \ge 0$ on $K$" to "$L(f) \ge 0$ $\forall f \in M(h)$", and further to "$L(f) \ge 0$ $\forall f \in M(h)_{2t}$".
Relaxation (MOM$_t$): $p^{\mathrm{mom}}_t := \inf_{L \in (\mathbb{R}[x]_{2t})^*} L(p)$ s.t. $L(1) = 1$, $L \ge 0$ on $M(h)_{2t}$.
Weak duality: $p^{\mathrm{sos}}_t \le p^{\mathrm{mom}}_t \le p^{\min}$. Equality $p^{\mathrm{sos}}_t = p^{\mathrm{mom}}_t$ holds e.g. if $\mathrm{int}(K) \ne \emptyset$.

The dual relaxation (MOM$_t$) is again an SDP
For $L \in (\mathbb{R}[x]_{2t})^*$, set $M_t(L) := \big(L(x^\alpha x^\beta)\big)_{|\alpha|, |\beta| \le t}$: the moment matrix of $L$ (of order $t$).
Lemma: $L(f^2) \ge 0$ $\forall f \in \mathbb{R}[x]_t$ $\iff$ $M_t(L) \succeq 0$. Proof: $L(f^2) = \tilde{f}^T M_t(L) \tilde{f}$, with $\tilde{f}$ the coefficient vector of $f$.
In the same way, "$L \ge 0$ on $M(h)_{2t}$", i.e., $L(f^2 h_j) \ge 0$ for all $j$ and all $f$ with $\deg(f^2 h_j) \le 2t$, can be expressed as SDP conditions (positive semidefiniteness of the localizing moment matrices).
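
Continuing the toy instance from the SOS side (my illustration; cvxpy assumed, with cp.bmat used to assemble the Hankel-structured matrices): for $p(x) = x^2 - x$ on $\{x : 1 - x^2 \ge 0\}$ at order $t = 2$, the variables are the moments $y_k = L(x^k)$, the moment matrix $M_2(L)$ is Hankel, and the localizing matrix encodes $L(f^2(1 - x^2)) \ge 0$.

```python
# (MOM_2) for p(x) = x^2 - x on [-1, 1]: minimize L(p) = y2 - y1.
import cvxpy as cp

y = cp.Variable(5)                                  # y[k] = L(x^k), k = 0..4
M2 = cp.bmat([[y[0], y[1], y[2]],
              [y[1], y[2], y[3]],
              [y[2], y[3], y[4]]])                  # moment matrix M_2(L)
loc = cp.bmat([[y[0] - y[2], y[1] - y[3]],
               [y[1] - y[3], y[2] - y[4]]])         # localizing matrix for 1 - x^2
cons = [y[0] == 1, M2 >> 0, loc >> 0]
cp.Problem(cp.Minimize(y[2] - y[1]), cons).solve()
print(y[2].value - y[1].value)                      # approx -0.25 = p_min
```

The optimal moments are those of the Dirac measure $\delta_{1/2}$, i.e., $y_k = 2^{-k}$, which previews the exactness criterion of the next slides.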

Optimality criterion
Observation: let $L$ be an optimal solution to (MOM$_t$). If $L$ comes from a probability measure $\mu$ on $K$, then (MOM$_t$) is exact: $p^{\mathrm{mom}}_t = p^{\min}$.
Question: how to recognize whether $L$ has a representing measure on $K$?
Next: a sufficient condition of Curto-Fialkow for the moment problem.

A sufficient condition for the moment problem
Theorem [Curto-Fialkow 1996]: Let $L \in (\mathbb{R}[x]_{2t})^*$. If $M_t(L) \succeq 0$ and the rank condition
(RC) $\mathrm{rank}\, M_t(L) = \mathrm{rank}\, M_{t-1}(L)$
holds, then $L$ has a representing measure.
Corollary [Curto-Fialkow 2000] + [Henrion-Lasserre 2005]: Set $d := \max_j \lceil \deg(h_j)/2 \rceil$. Let $L$ be an optimal solution to (MOM$_t$) satisfying $\mathrm{rank}\, M_t(L) = \mathrm{rank}\, M_{t-d}(L)$. Then $p^{\mathrm{mom}}_t = p^{\min}$ and
$V(\mathrm{Ker}\, M_t(L)) \subseteq \{$global minimizers of $p$ on $K\}$,
with equality if $\mathrm{rank}\, M_t(L)$ is maximum.

Remarks
Compute $V(\mathrm{Ker}\, M_t(L))$ with the eigenvalue method.
If the rank condition holds at a maximum-rank solution, then (P) has finitely many global minimizers. But the converse is not true!
The rank condition does hold in the finite variety case:
$K = \{x \in \mathbb{R}^n : h_1(x) = 0, \ldots, h_k(x) = 0,\ h_{k+1}(x) \ge 0, \ldots\}$,
where the equations $h_1, \ldots, h_k$ generate an ideal $I$ with $|V_{\mathbb{R}}(I)| < \infty$.

Finite convergence in the finite variety case
$K = \{x \in \mathbb{R}^n : h_1(x) = 0, \ldots, h_k(x) = 0,\ h_{k+1}(x) \ge 0, \ldots\}$, with $I$ the ideal generated by $h_1, \ldots, h_k$.
Theorem [Laurent 2002] [Lasserre-Laurent-Rostalski 2007]:
(i) If $|V_{\mathbb{C}}(I)| < \infty$, then $p^{\mathrm{sos}}_t = p^{\mathrm{mom}}_t = p^{\min}$ for some $t$.
(ii) If $|V_{\mathbb{R}}(I)| < \infty$, then $p^{\mathrm{mom}}_t = p^{\min}$ for some $t$.

The flat extension theorem
Theorem [Curto-Fialkow 1996]: Let $L \in (\mathbb{R}[x]_{2t})^*$. If $\mathrm{rank}\, M_t(L) = \mathrm{rank}\, M_{t-1}(L)$, then there is an extension $\tilde{L} \in (\mathbb{R}[x])^*$ of $L$ for which $\mathrm{rank}\, M(\tilde{L}) = \mathrm{rank}\, M_t(L)$.
Main tool: $\mathrm{Ker}\, M(\tilde{L})$ is an ideal.
[Laurent-Mourrain 2009]: The flat extension theorem can be generalized to matrices indexed by a set $C$ of monomials (connected to 1) and its closure $C^+ = C \cup x_1 C \cup \ldots \cup x_n C$, satisfying $\mathrm{rank}\, M_C = \mathrm{rank}\, M_{C^+}$.

The finite rank moment matrix theorem
Theorem [Curto-Fialkow 1996]: Let $L \in (\mathbb{R}[x])^*$. Then $M(L) \succeq 0$ and $\mathrm{rank}\, M(L) =: r < \infty$ $\iff$ $L$ has a (unique) $r$-atomic representing measure $\mu$.
[Laurent 2005] Simple proof for $\Longrightarrow$:
$I := \mathrm{Ker}\, M(L)$ is a real radical ideal; $I$ is 0-dimensional, as $\dim \mathbb{R}[x]/I = r$; hence $V(I) = \{x_1, \ldots, x_r\} \subseteq \mathbb{R}^n$.
Verify: $L$ is represented by $\mu = \sum_{i=1}^r L(p_i^2) \, \delta_{x_i}$, where the $p_i$'s are interpolation polynomials at the $x_i$'s.

Implementations of the SOS/moment relaxation method
- GloptiPoly by Henrion and Lasserre (incorporates the optimality stopping criterion and the extraction procedure for global minimizers)
- SOSTOOLS by Prajna, Papachristodoulou, Seiler, Parrilo
- YALMIP by Löfberg
- SparsePOP by Waki, Kim, Kojima, Muramatsu

Example 1
min $p = -25(x_1 - 2)^2 - (x_2 - 2)^2 - (x_3 - 1)^2 - (x_4 - 4)^2 - (x_5 - 1)^2 - (x_6 - 4)^2$
s.t. $(x_3 - 3)^2 + x_4 \ge 4$, $(x_5 - 3)^2 + x_6 \ge 4$,
$x_1 - 3x_2 \le 2$, $-x_1 + x_2 \le 2$, $x_1 + x_2 \le 6$, $x_1 + x_2 \ge 2$,
$1 \le x_3 \le 5$, $0 \le x_4 \le 6$, $1 \le x_5 \le 5$, $0 \le x_6 \le 10$, $x_1, x_2 \ge 0$

order t | rank sequence | bound p^mom_t | solution extracted
1       | 1, 7          | unbounded     | none
2       | 1, 1, 21      | -310          | (5, 1, 5, 0, 5, 10)

(Here d = 1.) The global minimum is found at the relaxation of order t = 2.

Example 2
min $p = -x_1 - x_2$
s.t. $x_2 \le 2x_1^4 - 8x_1^3 + 8x_1^2 + 2$, $\quad x_2 \le 4x_1^4 - 32x_1^3 + 88x_1^2 - 96x_1 + 36$,
$0 \le x_1 \le 3$, $0 \le x_2 \le 4$

order t | rank sequence | bound p^mom_t | solution extracted
2       | 1, 1, 4       | -7            | none
3       | 1, 2, 2, 4    | -6.6667       | none
4       | 1, 1, 1, 1, 6 | -5.5080       | (2.3295, 3.1785)

(Here d = 2.) The global minimum is found at the relaxation of order t = 4.

An example where (RC) cannot hold
Perturbed Motzkin form: $p = x_1^2 x_2^2 (x_1^2 + x_2^2 - 3x_3^2) + x_3^6 + \epsilon (x_1^6 + x_2^6 + x_3^6)$ on $K = \{x : \sum_{i=1}^3 x_i^2 \le 1\}$.
$(0, 0, 0)$ is the unique global minimizer, but (RC) never holds, as $p \notin M(h)$ and $p^{\mathrm{sos}}_t = p^{\mathrm{mom}}_t < p^{\min} = 0$ for all $t$.

order t | rank sequence                | bound p^mom_t | val. moment vect.
3       | 1, 4, 9, 13                  | -2.11·10^-5   | 1.67·10^-44
4       | 1, 4, 10, 20, 35             | -1.92·10^-9   | 4.47·10^-60
5       | 1, 4, 10, 20, 35, 56         | -2.94·10^-12  | 1.26·10^-44
6       | 1, 4, 10, 20, 35, 56, 84     | -3.54·10^-12  | 1.5·10^-44
7       | 1, 4, 10, 20, 35, 56, 84, 120| -4.09·10^-12  | 2.83·10^-43

(Here d = 3 and ε = 0.01; the moment matrices have full rank at every order, so (RC) fails.)

Application to unconstrained polynomial minimization
$p^{\min} = \inf_{x \in \mathbb{R}^n} p(x)$, where $\deg(p) = 2d$.
As there is no constraint, the relaxation scheme gives just one bound:
$p^{\mathrm{sos}}_t = p^{\mathrm{mom}}_t = p^{\mathrm{sos}}_d = p^{\mathrm{mom}}_d \le p^{\min}$ for all $t \ge d$,
with equality iff $p(x) - p^{\min}$ is SOS.
How to get better bounds?

Idea: transform the unconstrained problem into a constrained problem
If $p$ has a minimum:
$p^{\min} = p^{\mathrm{grad}} := \inf_{x \in V^{\mathbb{R}}_{\mathrm{grad}}} p(x)$, where $V^{\mathbb{R}}_{\mathrm{grad}} := \{x \in \mathbb{R}^n : \partial p / \partial x_i(x) = 0 \ (i = 1, \ldots, n)\}$.
If, moreover, a bound $R$ is known on the norm of a global minimizer:
$p^{\min} = p^{\mathrm{ball}} := \inf_{R^2 - \sum_i x_i^2 \ge 0} p(x)$.

When p attains its minimum
The ball approach: convergence of the SOS/MOM bounds to $p^{\min} = p^{\mathrm{ball}}$.
The gradient variety approach [Demmel, Nie, Sturmfels 2004]:
$p > 0$ on $V^{\mathbb{R}}_{\mathrm{grad}}$ $\implies$ $p \in M_{\mathrm{grad}}$;
$p \ge 0$ on $V^{\mathbb{R}}_{\mathrm{grad}}$ $\implies$ $p \in M_{\mathrm{grad}}$ if $I_{\mathrm{grad}}$ is radical,
where $M_{\mathrm{grad}} := M(\pm \partial p/\partial x_1, \ldots, \pm \partial p/\partial x_n) = \Sigma_n + \sum_{i=1}^n \mathbb{R}[x] \, \partial p/\partial x_i$, the latter sum being the gradient ideal $I_{\mathrm{grad}}$.

Convergence result [Demmel, Nie, Sturmfels 2004]
Asymptotic convergence of the SOS/MOM bounds to $p^{\mathrm{grad}}$; finite convergence to $p^{\mathrm{grad}}$ when the gradient ideal $I_{\mathrm{grad}}$ is radical.
Hence, when $p$ attains its minimum, we have a converging hierarchy of SDP bounds to $p^{\min}$.
Example: $p = x^2 + (xy - 1)^2$ does not attain its minimum: $p^{\min} = 0 < p^{\mathrm{grad}} = 1$.
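
The example is small enough to check symbolically (my illustration, sympy assumed): the only critical point of $p = x^2 + (xy - 1)^2$ is the origin, where $p = 1$, while $p \to 0$ along the curve $x = 1/y$.

```python
# p_grad = 1 (value at the unique critical point) vs p_min = 0 (not attained).
import sympy as sp

x, y = sp.symbols('x y', real=True)
p = x**2 + (x*y - 1)**2
crit = sp.solve([sp.diff(p, x), sp.diff(p, y)], [x, y], dict=True)
print(crit)                                  # [{x: 0, y: 0}]
print([p.subs(s) for s in crit])             # [1]  -> p_grad = 1
print(sp.limit(p.subs(x, 1/y), y, sp.oo))    # 0    -> p_min = 0, not attained
```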

What if p is not known to have a minimum?
Strategy 1: perturb the polynomial p [Hanzon-Jibetean 2003] [Jibetean-Laurent 2004]:
$p_\epsilon(x) := p(x) + \epsilon \sum_{i=1}^n x_i^{2d+2}$ for small $\epsilon > 0$.
- $p_\epsilon$ has a minimum, and $\lim_{\epsilon \to 0} (p_\epsilon)^{\min} = p^{\min}$;
- the global minimizers of $p_\epsilon$ converge to global minimizers of $p$ as $\epsilon \to 0$;
- the gradient variety of $p_\epsilon$ is finite, giving finite convergence of $(p_\epsilon)^{\mathrm{sos}}_t$, $(p_\epsilon)^{\mathrm{mom}}_t$ to $(p_\epsilon)^{\min}$.

Example: perturb the polynomial p = (xy − 1)² + x²
$\inf p(x)$ s.t. $\partial p_\epsilon / \partial x_1 = 0$, $\partial p_\epsilon / \partial x_2 = 0$

ε      | order t | rank sequence | bound p^mom_t | solution extracted
10^-2  | 3       | 2, 6, 8       | 0.00062169    | none
10^-2  | 4       | 2, 2, 2, 7    | 0.33846       | none
10^-2  | 5       | 2, 2, 2, 2    | 0.33846       | ±(0.4729, 1.3981)
10^-3  | 5       | 2, 2, 2, 2    | 0.20824       | ±(0.4060, 1.9499)
10^-4  | 5       | 2, 2, 2, 2    | 0.12323       | ±(0.3287, 2.6674)
10^-5  | 5       | 2, 2, 2, 2    | 0.07132       | ±(0.2574, 3.6085)
10^-6  | 5       | 2, 2, 2, 2    | 0.040761      | ±(0.1977, 4.8511)
10^-7  | 5       | 2, 2, 2, 2    | 0.023131      | ±(0.1503, 6.4986)
10^-8  | 5       | 2, 2, 2, 2    | 0.013074      | ±(0.1136, 8.6882)
10^-9  | 5       | 2, 2, 2, 2    | 0.0073735     | ±(0.0856, 11.6026)
10^-10 | 5       | 2, 2, 2, 2    | 0.0041551     | ±(0.0643, 15.4849)

As ε → 0, the bounds tend to p^min = 0 and the extracted minimizers escape to infinity along the curve xy = 1.

When p does not have a minimum: the algebraic/analytic approach of Schweighofer [2005]
Strategy 2: minimize p over its gradient tentacle. If $p^{\min} > -\infty$, then
$p^{\min} = \inf_{x \in K_{\mathrm{grad}}} p(x)$, where $K_{\mathrm{grad}} := \{x \in \mathbb{R}^n : \|\nabla p(x)\|^2 \|x\|^2 \le 1\} \supseteq V^{\mathbb{R}}_{\mathrm{grad}}$ and $\nabla p = (\partial p/\partial x_i)_{i=1}^n$.

Representation result on the gradient tentacle K_grad
[Schweighofer 2005]: Assume $p^{\min} > -\infty$ and $p$ has only isolated singularities at infinity (*) (e.g. when $n = 2$). Then:
$p \ge 0$ on $\mathbb{R}^n$ $\iff$ $p \ge 0$ on $K_{\mathrm{grad}}$ $\iff$ $\forall \epsilon > 0$: $p + \epsilon \in M(1 - \|\nabla p(x)\|^2 \|x\|^2)$.
Hence convergent SOS/moment bounds to $p^{\min} = \inf_{K_{\mathrm{grad}}} p(x)$.
(*): the system $p_d(x) = 0$, $p_{d-1}(x) = 0$ has finitely many projective zeros, where $p = p_d + p_{d-1} + \ldots + p_0$ and $p_i$ is the homogeneous component of degree $i$.
Tools: algebra (an extension of Schmüdgen's theorem) + analysis (Parusinski's results on the behaviour of polynomials at infinity).

When p does not have a minimum: the tangency variety approach of Vui-Son [2008]
Strategy 3: minimize p over its truncated tangency variety:
$\Gamma_p := \Big\{x \in \mathbb{R}^n : \mathrm{rank} \begin{pmatrix} \nabla p(x) \\ x \end{pmatrix} \le 1 \Big\} = \{x : g_{ij} := x_i \, \partial p/\partial x_j - x_j \, \partial p/\partial x_i = 0,\ 1 \le i < j \le n\}$,
$\Gamma^0_p := \Gamma_p \cap \{x : p(x) \le p(0)\}$.
[Vui-Son 2008]: For $p \in \mathbb{R}[x]$ such that $p^{\min} > -\infty$:
$p \ge 0$ on $\mathbb{R}^n$ $\iff$ $p \ge 0$ on $\Gamma^0_p$ $\iff$ $\forall \epsilon > 0$: $p + \epsilon \in M(p(0) - p, \pm g_{ij})$.
Hence convergent SOS/moment bounds to $p^{\min} = \inf_{\Gamma^0_p} p(x)$.
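
The generators $g_{ij}$ are immediate to form symbolically; a small sketch (my illustration, sympy assumed) for the running example $p = x^2 + (xy - 1)^2$:

```python
# Tangency-variety generators g_ij = x_i dp/dx_j - x_j dp/dx_i.
import sympy as sp

X = sp.symbols('x1 x2')
p = X[0]**2 + (X[0]*X[1] - 1)**2
g = {(i, j): sp.expand(X[i]*sp.diff(p, X[j]) - X[j]*sp.diff(p, X[i]))
     for i in range(len(X)) for j in range(i + 1, len(X))}
print(g)    # the equations g_ij = 0 cut out the tangency variety Gamma_p
```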

Further directions
- Exploit structure (equations, sparsity, symmetry, convexity, ...) to get smaller SDP programs [Kojima, Grimm, Helton, Lasserre, Netzer, Nie, Riener, Schweighofer, Theobald, Parrilo, ...]
- Application to the generalized problem of moments and to approximating integrals over semi-algebraic sets [Henrion, Lasserre, Savorgnan, ...]
- Extension to noncommutative variables [Helton, Klep, McCullough, Schmüdgen, Schweighofer, ...]