Copositive Programming and Combinatorial Optimization

Copositive Programming and Combinatorial Optimization
Franz Rendl, Alpen-Adria-Universität Klagenfurt, Austria (http://www.math.uni-klu.ac.at)
Joint work with I. Bomze (Wien), F. Jarre (Düsseldorf) and J. Povh (Novo Mesto)
Optimization and Applications Seminar

Overview
- The power of copositivity
- Relaxations based on CP
- Heuristics based on CP

The Power of Copositivity
- Copositive matrices
- Copositive programs
- Stable sets, coloring
- Burer's theorem

Completely Positive Matrices
Let A = (a_1, ..., a_k) be a nonnegative n × k matrix; then
X = a_1 a_1^T + ... + a_k a_k^T = AA^T
is called completely positive.
COP = {X : X completely positive}
COP is a closed, convex cone. From the definition we get COP = conv{aa^T : a ≥ 0}.
For basics, see the book: A. Berman, N. Shaked-Monderer: Completely Positive Matrices, World Scientific 2003.
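The definition translates directly into a few lines of code. Below is a minimal numpy sketch (all names are illustrative) that builds a completely positive X = AA^T and checks the two easy necessary conditions for membership in COP: positive semidefiniteness and entrywise nonnegativity. For n ≤ 4 these two conditions are also sufficient; from n = 5 on they are not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 8

# A nonnegative n x k matrix A yields the completely positive X = A A^T.
A = rng.random((n, k))
X = A @ A.T

# Two easy necessary conditions for X in COP: X must be positive
# semidefinite and entrywise nonnegative (doubly nonnegative).
print("min eigenvalue:", np.linalg.eigvalsh(X).min())  # >= 0 up to rounding
print("min entry     :", X.min())                      # >= 0 by construction
```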

Copositive Matrices
Dual cone COP* of COP in S_n (symmetric matrices):
Y ∈ COP* ⟺ tr(XY) ≥ 0 ∀ X ∈ COP ⟺ a^T Y a ≥ 0 for all vectors a ≥ 0.
By definition, this means Y is copositive.
CP = {Y : a^T Y a ≥ 0 ∀ a ≥ 0}
CP is the dual cone to COP!
Bad news: deciding X ∉ CP is an NP-complete decision problem.
Semidefinite matrices PSD: Y ∈ PSD ⟺ a^T Y a ≥ 0 ∀ a.
Well-known facts: PSD* = PSD (the PSD cone is selfdual), and COP ⊆ PSD ⊆ CP.
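Since an exact copositivity check is out of reach, random sampling of nonnegative directions gives a cheap necessary test. The sketch below (function name and trial count are choices made here) can only refute copositivity by exhibiting a witness; a True answer is no certificate. The Horn matrix used as test case is the classical example of a copositive matrix that is not the sum of a PSD and a nonnegative matrix.

```python
import numpy as np

def maybe_copositive(Y, trials=20000, seed=1):
    """Returns (False, a) if some a >= 0 with a^T Y a < 0 is found;
    (True, None) means only that no violation was sampled --
    deciding copositivity exactly is NP-hard."""
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    for _ in range(trials):
        a = rng.random(n)            # random nonnegative direction
        if a @ Y @ a < 0:
            return False, a
    return True, None

# The Horn matrix: copositive, but not PSD + nonnegative.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]])
print(maybe_copositive(H)[0])        # expect True
```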

Semidefinite and Copositive Programs
Problems of the form
max ⟨C,X⟩ s.t. A(X) = b, X ∈ PSD
are called semidefinite programs. Problems of the form
max ⟨C,X⟩ s.t. A(X) = b, X ∈ CP
or
max ⟨C,X⟩ s.t. A(X) = b, X ∈ COP
are called copositive programs, because the primal or the dual involves copositive matrices.

Why Copositive Programs?
Copositive programs can be used to solve combinatorial optimization problems.
Stable Set Problem: Let A be the adjacency matrix of the graph, J the all-ones matrix.
Theorem (De Klerk and Pasechnik (SIOPT 2002)):
α(G) = max{⟨J,X⟩ : ⟨A + I, X⟩ = 1, X ∈ COP} = min{y : y(A + I) − J ∈ CP}.
This is a copositive program with only one equation (in the primal problem). It is a simple consequence of the Motzkin–Straus theorem.

Proof (1)
1/α(G) = min{x^T (A + I) x : x ∈ Δ} (Motzkin–Straus theorem),
where Δ = {x : Σ_i x_i = 1, x ≥ 0} is the standard simplex. We get
0 = min{x^T (A + I − ee^T/α) x : x ∈ Δ} = min{x^T (α(A + I) − J) x : x ≥ 0}.
This shows that α(A + I) − J is copositive. Therefore inf{y : y(A + I) − J ∈ CP} ≤ α.

Proof (2)
Weak duality of the copositive program gives:
sup{⟨J,X⟩ : ⟨A + I, X⟩ = 1, X ∈ COP} ≤ inf{y : y(A + I) − J ∈ CP} ≤ α.
Now let ξ be the incidence vector of a stable set of size α. The matrix (1/α) ξξ^T is feasible for the first problem. Therefore
α ≤ sup{...} ≤ inf{...} ≤ α.
This shows that equality holds throughout and that sup and inf are attained.
The recent proof of this result by De Klerk and Pasechnik does not make explicit use of the Motzkin–Straus theorem.

Connections to the Theta Function
Theta function (Lovász (1979)):
ϑ(G) = max{⟨J,X⟩ : x_ij = 0 ∀ ij ∈ E, tr(X) = 1, X ⪰ 0} ≥ α(G).
Motivation: if ξ is the characteristic vector of a stable set, then (1/(ξ^T ξ)) ξξ^T is feasible for the above SDP.
Schrijver (1979) improvement: include X ≥ 0. In this case we can add up the constraints x_ij = 0 and get
ϑ'(G) = max{⟨J,X⟩ : ⟨A,X⟩ = 0, tr(X) = 1, X ⪰ 0, X ≥ 0}
(A ... adjacency matrix). We have ϑ(G) ≥ ϑ'(G) ≥ α(G).
Replacing the cone {X : X ⪰ 0, X ≥ 0} by X ∈ COP gives α(G), see before.
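Both bounds are plain SDPs and fit in a few lines of a modeling language. A minimal sketch with cvxpy (assuming cvxpy and an SDP-capable solver such as the bundled SCS are installed), on the 5-cycle, where α = 2 and ϑ = √5 ≈ 2.236:

```python
import cvxpy as cp
import numpy as np

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # 5-cycle

X = cp.Variable((n, n), symmetric=True)
cons = [X >> 0, cp.trace(X) == 1]
cons += [X[i, j] == 0 for (i, j) in edges]

# Lovasz theta: <J,X> is simply the sum of all entries of X.
theta = cp.Problem(cp.Maximize(cp.sum(X)), cons).solve()
print("theta :", theta)          # ~2.236

# Schrijver's theta': additionally require X >= 0 entrywise.
theta_p = cp.Problem(cp.Maximize(cp.sum(X)), cons + [X >= 0]).solve()
print("theta':", theta_p)
```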

Graph Coloring
[Figure: adjacency matrix A of a graph (left) and the associated partitioning (right); the graph can be colored with 5 colors.]
M is a k-partition matrix if there exists P ∈ Π such that P^T M P is a direct sum of k all-ones blocks.
Number of colors = number of all-ones blocks = rank of M.

Chromatic Number
M is a k-partition matrix if there exists P ∈ Π such that P^T M P is a direct sum of k all-ones blocks. Number of colors = number of all-ones blocks = rank of M. Therefore the chromatic number χ(G) of a graph G can be defined as follows:
χ(G) = min{rank(M) : M is a partition matrix, m_ij = 0 ∀ ij ∈ E(G)}.
We need a better description of k-partition matrices.
Lemma: M is a partition matrix if and only if M = M^T, m_ij ∈ {0,1}, and (tM − J ⪰ 0 ⟺ t ≥ rank(M)).

Proof of Lemma
(⇒): A nonzero principal minor of tM − J has the form det(tI_s − J_s) with s ≤ rank(M). Hence tM − J ⪰ 0 iff t ≥ rank(M).
(⇐): From the diagonal, t·m_ii − 1 ≥ 0, therefore m_ii = 1 (so each vertex lies in one color class). We also have M ⪰ 0. Then m_ij = m_jk = 1 implies m_ik = 1, because the principal submatrix
[1 1 0]
[1 1 1]
[0 1 1]
is not positive semidefinite. Therefore M is a direct sum of all-ones blocks.
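The 3 × 3 obstruction in the last step is easy to confirm numerically (numpy assumed):

```python
import numpy as np

B = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
print(np.linalg.eigvalsh(B))   # smallest eigenvalue is negative, so B is not psd
```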

Chromatic Number
Hence
χ(G) = min{rank(M) : M is a partition matrix, m_ij = 0 ∀ ij ∈ E}
     = min{t : M = M^T, m_ij ∈ {0,1}, m_ij = 0 ∀ ij ∈ E, tM − J ⪰ 0},
using the previous lemma. Leaving out m_ij ∈ {0,1} gives an SDP lower bound:
χ(G) ≥ min{t : Y − J ⪰ 0, y_ii = t ∀ i, y_ij = 0 ∀ ij ∈ E(G)} = ϑ(G).
This gives the second inequality in the Lovász sandwich theorem, Lovász (1979):
ω(G) ≤ ϑ(G) ≤ χ(G).
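This bound is again a small SDP. A cvxpy sketch (same assumptions as the earlier snippet) for the 5-cycle, where ω = 2, χ = 3, and the bound evaluates to √5 ≈ 2.236 since C_5 is self-complementary:

```python
import cvxpy as cp
import numpy as np

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
J = np.ones((n, n))

t = cp.Variable()
Y = cp.Variable((n, n), symmetric=True)
cons = [Y - J >> 0]
cons += [Y[i, i] == t for i in range(n)]       # y_ii = t for all i
cons += [Y[i, j] == 0 for (i, j) in edges]     # y_ij = 0 on edges

val = cp.Problem(cp.Minimize(t), cons).solve()
print(val)    # ~2.236, sandwiched between omega = 2 and chi = 3
```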

Copositive Strengthening
χ(G) ≥ min{t : Y − J ⪰ 0, y_ii = t ∀ i, y_ij = 0 ∀ ij ∈ E} = ϑ(G).
Note that Y can be interpreted as tM, where M is a partition matrix. By construction we also have M ∈ COP. Hence we get the following strengthening:
t*(G) = min{t : Y − J ⪰ 0, Y ∈ COP, y_ii = t ∀ i, y_ij = 0 ∀ ij ∈ E}.
Dukanovic and R. (2006) show that t* is equal to the fractional chromatic number χ_f(G) of G:
χ(G) ≥ χ_f(G) = t*(G) ≥ ϑ(G).
Gvozdenović and Laurent (2007) show that (unless P = NP) there is no polynomially computable number between χ(G) and χ_f(G).

A General Copositive Modeling Theorem
Burer (2007) shows the following general result on the power of copositive programming. Consider
(P) min x^T Q x + c^T x s.t. a_i^T x = b_i (i = 1,...,m), x ≥ 0, x_j ∈ {0,1} (j ∈ B),
where x ∈ ℝ^n and B ⊆ {1,...,n}, and the completely positive program
(C) min tr(QX) + c^T x s.t. a_i^T x = b_i, a_i^T X a_i = b_i² (i = 1,...,m), X_jj = x_j (j ∈ B),
    (1   x^T)
    (x   X  ) ∈ COP.
Then the optimal values of (P) and (C) are equal: opt(P) = opt(C).
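Since the COP constraint itself is intractable, a natural computable weakening replaces it by the doubly nonnegative (DNN) cone, i.e., PSD plus entrywise nonnegative; this gives a lower bound rather than Burer's exact value. A cvxpy sketch on a toy instance (all data invented for illustration; with e^T x = 1 and all variables binary, the true optimum is min_j Q_jj = 2):

```python
import cvxpy as cp
import numpy as np

n = 3
Q = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])
c = np.zeros(n)
a, b = np.ones(n), 1.0          # single constraint e^T x = 1

Z = cp.Variable((n + 1, n + 1), symmetric=True)   # Z = [[1, x^T], [x, X]]
x, X = Z[0, 1:], Z[1:, 1:]

cons = [Z[0, 0] == 1,
        a @ x == b,
        a @ X @ a == b ** 2,
        cp.diag(X) == x,        # here B = {1,...,n}: all variables binary
        Z >> 0, Z >= 0]         # DNN relaxation of Z in COP

val = cp.Problem(cp.Minimize(cp.trace(Q @ X) + c @ x), cons).solve()
print("DNN bound:", val)        # <= 2, the optimal value of (P)
```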

Overview
- The power of copositivity
- Relaxations based on CP
- Heuristics based on CP

Approximating COP
We have now seen the power of copositive programming. Since optimizing over CP is NP-hard, it makes sense to look for approximations of CP or COP.
To get relaxations, we need supersets of COP, or inner approximations of CP (and work on the dual cone). The Parrilo hierarchy uses sums of squares and provides such an outer approximation of COP (dual viewpoint!).
We can also consider inner approximations of COP. This can be viewed as a method to generate feasible solutions of combinatorial optimization problems (a primal heuristic!).

Relaxations
Inner approximation of CP = {M : x^T M x ≥ 0 ∀ x ≥ 0}.
Parrilo (2000) and De Klerk, Pasechnik (2002) use the following idea to approximate CP from inside:
M ∈ CP iff P(x) := Σ_ij x_i² x_j² m_ij ≥ 0 ∀ x.
A sufficient condition for this to hold is that P(x) has a sum of squares (SOS) representation.
Theorem (Parrilo (2000)): P(x) has an SOS representation iff M = P + N, where P ⪰ 0 and N ≥ 0.
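The membership test M = P + N is itself a small SDP feasibility problem. A cvxpy sketch (names illustrative); on the Horn matrix from before it correctly reports failure, while any PSD matrix passes:

```python
import cvxpy as cp
import numpy as np

def in_K0(M):
    """Parrilo's r = 0 test: is M = P + N with P psd and N >= 0?"""
    n = M.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    prob = cp.Problem(cp.Minimize(0),
                      [P >> 0, M - P >= 0])    # N := M - P entrywise nonneg.
    prob.solve()
    return prob.status == cp.OPTIMAL

H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]])
print(in_K0(H))            # False: Horn matrix is copositive but fails here
print(in_K0(np.eye(5)))    # True
```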

Parrilo Hierarchy
To get tighter approximations, Parrilo proposes to consider SOS representations of
P_r(x) := (Σ_i x_i²)^r · P(x) for r = 0, 1, ...
(For r = 0 we get the previous case.) The mathematical motivation is an old result of Pólya.
Theorem (Pólya (1928)): If M is strictly copositive, then P_r(x) is SOS for some sufficiently large r.

Parrilo Hierarchy (2)
Parrilo characterizes SOS for r = 0, 1:
P_0(x) is SOS iff M = P + N, where P ⪰ 0 and N ≥ 0.
P_1(x) is SOS iff there exist symmetric matrices M^(1), ..., M^(n) such that
M − M^(i) ⪰ 0 ∀ i,
(M^(i))_ii = 0 ∀ i,
(M^(i))_jj + 2(M^(j))_ij = 0 ∀ i ≠ j,
(M^(i))_jk + (M^(j))_ik + (M^(k))_ij ≥ 0 ∀ i < j < k.
The resulting relaxations are SDPs. But the r = 1 relaxation involves n matrices and n SDP constraints to certify SOS. This is computationally challenging.
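The four constraint families transcribe directly into an SDP feasibility problem. A cvxpy sketch of the r = 1 test (illustrative, and only sensible for small n given the n matrix variables); the Horn matrix, which failed at r = 0, passes here:

```python
import cvxpy as cp
import numpy as np
from itertools import combinations

def in_K1(M):
    """Feasibility of Parrilo's r = 1 certificate for copositivity."""
    n = M.shape[0]
    Ms = [cp.Variable((n, n), symmetric=True) for _ in range(n)]
    cons = []
    for i in range(n):
        cons += [M - Ms[i] >> 0, Ms[i][i, i] == 0]
        for j in range(n):
            if i != j:
                cons.append(Ms[i][j, j] + 2 * Ms[j][i, j] == 0)
    for i, j, k in combinations(range(n), 3):
        cons.append(Ms[i][j, k] + Ms[j][i, k] + Ms[k][i, j] >= 0)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL

H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]])
print(in_K1(H))   # True: the Horn matrix is certified at r = 1
```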

Computational Comparison
We consider Hamming graphs and compare the P_0 and the P_1 relaxations of the chromatic number.

graph     n      r = 0    r = 1    χ
H(7,6)    128    53.33    63.9     64
H(8,6)    256    85.33    127.9    128
H(9,4)    512    51.19    53.9     -
H(10,8)   1024   383.99   511.9    512
H(12,4)   4096   211.86   255.5    -

Using the automorphism structure of Hamming graphs, the general certificate for r = 1 reduces to one additional matrix and one additional SDP constraint. (Computation time: a few minutes!)
(See Dukanovic, R.: Math. Prog. (2008).)

Overview
- The power of copositivity
- Relaxations based on CP
- Heuristics based on CP

Inner Approximations of COP
We consider
min ⟨C,X⟩ s.t. A(X) = b, X ∈ COP.
Remember: COP = {X : X = V V^T, V ≥ 0}.
Some previous work by Bomze, De Klerk, Nesterov, Pasechnik and others: get stable sets by approximating the COP formulation of the stable set problem, using optimization of a quadratic over the standard simplex, or other local methods.

Incremental Version
A general feasible descent approach: let X = V V^T with V ≥ 0 be feasible. Consider the regularized, convex descent step problem:
min ε⟨C, ΔX⟩ + (1 − ε)‖ΔX‖²  such that  A(ΔX) = 0, X + ΔX ∈ COP.
For small ε > 0 we approach the true optimal solution, because we follow the continuous steepest descent path, projected onto COP.
Unfortunately, this problem is still not tractable. We approximate it by working in the V-space instead of the X-space.

Incremental Version: V-space
X⁺ = (V + ΔV)(V + ΔV)^T, hence with X = V V^T:
ΔX = ΔX(ΔV) = V ΔV^T + ΔV V^T + ΔV ΔV^T.
Now linearize and make sure ΔV is small. We get
min ε⟨2CV, ΔV⟩ + (1 − ε)‖ΔV‖²  such that  2⟨A_i V, ΔV⟩ = b_i − ⟨A_i V, V⟩ ∀ i,  V + ΔV ≥ 0.
This is a convex approximation of the nonconvex version in ΔX(ΔV).
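One step of this scheme is a convex program that a modeling layer handles directly. A hedged cvxpy sketch (the function name and the list-of-matrices interface As are choices made here, not the authors' code):

```python
import cvxpy as cp
import numpy as np

def descent_step(V, C, As, b, eps=0.1):
    """Solve the linearized V-space subproblem above and
    return the updated nonnegative factor V + dV."""
    n, k = V.shape
    dV = cp.Variable((n, k))
    # eps<2CV, dV> + (1 - eps)||dV||^2
    obj = eps * 2 * cp.sum(cp.multiply(C @ V, dV)) \
        + (1 - eps) * cp.sum_squares(dV)
    cons = [V + dV >= 0]                       # keep the factor nonnegative
    for Ai, bi in zip(As, b):
        # 2<A_i V, dV> = b_i - <A_i V, V>
        cons.append(2 * cp.sum(cp.multiply(Ai @ V, dV))
                    == bi - np.sum((Ai @ V) * V))
    cp.Problem(cp.Minimize(obj), cons).solve()
    return V + dV.value
```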

Correcting the Linearization
The linearization obtained by replacing ΔX(ΔV) with V ΔV^T + ΔV V^T introduces an error both in the cost function and in the constraints A(X) = b. We therefore include corrector iterations of the form ΔV = ΔV_old + ΔV_corr before actually updating V ← V + ΔV.

Convex Quadratic Subproblem
After appropriate redefinition of the data and variables, x = vec(V + ΔV), the convex subproblem has the following form:
min c^T x + ρ x^T x  such that  Rx = r, x ≥ 0.
If V is n × k (k columns generating V), x is of dimension nk and there are m equations.
Since the Hessian of the cost function is the identity matrix, this problem can be solved efficiently using interior-point methods (a convex quadratic with sign constraints and linear equations). The main effort is solving a linear system of order m, essentially independent of n and k.

Test Data Sets
COP problems coming from formulations of NP-hard problems (stable set, coloring, quadratic assignment) are too difficult for testing new algorithms. We would like to have:
- data (A, b, C) and (X, y, Z) such that (X, y, Z) is optimal for primal and dual (no duality gap);
- a nontrivial COP (the optimum is not already given by optimizing over semidefiniteness plus nonnegativity);
- instances of varying size, both in n and m.
Hard part: Z provably copositive!!

A COP Generator
Basic idea: let G be a graph with Q_G = A + I, and let (X_G, y_G, Z_G) be an optimal solution of
ω(G) = max{⟨J,X⟩ : ⟨Q_G, X⟩ = 1, X ∈ COP} = min{y : y Q_G − J ∈ CP}.
We further assume:
- G = H ⊠ K (strong graph product of H and K);
- K is perfect and H = C_5 (or some other graph with known clique number and ω(G) < ϑ'(G)).
This implies: ω(G) = ω(H)·ω(K) (same for ϑ'), and max cliques in G arise through Kronecker products from H and K.

COP Generator (2)
Then we have: y_G := ω(H)·ω(K) implies Z_G = y_G Q_G − J ∈ CP.
Let X_H and X_K be convex combinations of outer products of characteristic vectors of max cliques in H and K. This implies that X_G = X_H ⊗ X_K ∈ COP is primal optimal.
Select m and a matrix A of size m × n², and set b = A(X_G) and C := Z_G + A^T(y_G).
This implies that (X_G, y_G, Z_G) is optimal for the data set (A, b, C), that there is no duality gap, and that the problem is nontrivial because of ω(G) < ϑ'(G). Therefore
min{..., X ∈ COP} > min{..., X ∈ PSD ∩ N}.
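A small numpy sketch of the graph-side ingredients, under the assumptions H = C_5 and K = K_3 as the perfect factor (the adjacency rule of the strong product is standard; the rest of the generator is omitted):

```python
import numpy as np

def strong_product_adj(AH, AK):
    """Adjacency of the strong product: (i,p) ~ (j,q) iff each
    coordinate pair is equal or adjacent, and the vertices differ."""
    IH, IK = np.eye(len(AH)), np.eye(len(AK))
    return (np.kron(AH + IH, AK + IK) - np.kron(IH, IK)).astype(int)

# H = C5, K = K3 (perfect): omega(G) = omega(C5) * omega(K3) = 2 * 3 = 6.
S = np.roll(np.eye(5, dtype=int), 1, axis=1)
AH = S + S.T                                            # 5-cycle
AK = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)  # complete graph K3
AG = strong_product_adj(AH, AK)
QG = AG + np.eye(AG.shape[0], dtype=int)                # Q_G = A + I
print(AG.shape)   # (15, 15)
```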

Computational Results
A sample instance with n = 60, m = 100:
z_sdp = -9600.82, z_sdp+nonneg = -172.19, z_cop = -69.75

it    ||b - A(X)||   f(X)
1     0.002251       -68.7274
5     0.000014       -69.5523
10    0.000001       -69.6444
15    0.000001       -69.6887
20    0.000000       -69.6963

The number of inner iterations was set to 5; column 1 shows the outer iteration count.
But note the starting point: V_0 = 0.95 * V_opt + 0.05 * rand

Computational Results (2)
Example (continued); recall n = 60, m = 100.
z_sdp = -9600.82, z_sdp+nonneg = -172.19, z_cop = -69.75

start  iter  ||b - A(X)||  f(X)
(a)    20    0.000000      -69.696
(b)    20    0.000002      -69.631
(c)    50    0.000008      -69.402

Different starting points:
(a) V = 0.95 * V_opt + 0.05 * rand
(b) V = 0.90 * V_opt + 0.10 * rand
(c) V = rand(n, 2n)

Random Starting Point
Example (continued), n = 60, m = 100.
z_sdp = -9600.82, z_sdp+nonneg = -172.19, z_cop = -69.75

it    ||b - A(X)||   f(X)
1     6.121227       1831.5750
5     0.021658       101.1745
10    0.002940       -43.4477
20    0.000147       -67.0989
30    0.000041       -68.7546
40    0.000015       -69.2360
50    0.000008       -69.4025

Starting point: V_0 = rand(n, 2n)

More Results

n     m     opt       found     ||b - A(X)||
50    100   314.48    314.90    4·10^-5
60    120   -266.99   -266.48   4·10^-5
70    140   -158.74   -157.55   3·10^-5
80    160   -703.75   -701.68   5·10^-5
100   100   -659.65   -655.20   8·10^-5

Starting point in all cases: rand(n, 2n). Inner iterations: 5. Outer iterations: 30.

Some Experiments with Stable Set
max ⟨J,X⟩ such that tr(X) = 1, tr(A_G X) = 0, X ∈ COP.
Only two equations, but many local optima. We consider a selection of graphs from the DIMACS collection. Computation times are in the order of a few minutes.

name         n    ω    clique found
keller4      171   11   9
brock200-4   200   17   14
c-fat200-1   200   12   12
c-fat200-5   200   58   58
brock400-1   400   27   24
p-hat500-1   500    9   8

Last Slide
We have seen the power of copositivity.
Relaxations: the Parrilo hierarchy is computationally too expensive. Is there another way to approximate CP?
Heuristics: unfortunately, the subproblem may have local solutions which are not local minima of the original descent step problem. The number of columns of V never needs to exceed binom(n+1, 2) = n(n+1)/2, but for practical purposes this is too large.
Further technical details appear in a forthcoming paper by I. Bomze, F. Jarre and F. R.: Quadratic factorization heuristics for copositive programming, technical report (2008).