Convex sets, conic matrix factorizations and conic rank lower bounds


Convex sets, conic matrix factorizations and conic rank lower bounds. Pablo A. Parrilo, Laboratory for Information and Decision Systems, Electrical Engineering and Computer Science, Massachusetts Institute of Technology. Based on joint work with João Gouveia (U. Coimbra), Rekha Thomas (U. Washington), and Hamza Fawzi (MIT).

Nonnegative factorizations. Given a nonnegative matrix $A \in \mathbb{R}^{n \times m}$, a nonnegative factorization is a product $A = UV$ where $U \in \mathbb{R}^{n \times k}$ and $V \in \mathbb{R}^{k \times m}$ are also nonnegative. The smallest such $k$ is the nonnegative rank of the matrix $A$, denoted $\mathrm{rank}_+(A)$. Many applications: statistics, factor models, machine learning, ... A very difficult problem; many heuristics exist.
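As a concrete illustration (a minimal numpy sketch with made-up numbers, not from the talk), an explicit pair of nonnegative factors certifies an upper bound on the nonnegative rank:

```python
import numpy as np

# Entrywise-nonnegative factors U (n x k) and V (k x m), here with k = 2.
U = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
V = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
A = U @ V                              # A is nonnegative, so rank_+(A) <= k = 2
assert (A >= 0).all()
assert np.linalg.matrix_rank(A) == 2   # here the ordinary rank matches the bound
```

Note that in general the ordinary rank only lower-bounds the nonnegative rank, as the slides discuss later.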

Factorizations and hidden variables. Let $X$, $Y$ be discrete random variables with joint distribution $\mathbb{P}[X=i, Y=j] = P_{ij}$. The nonnegative rank of $P$ is the smallest support size of a random variable $W$ such that $X$ and $Y$ are conditionally independent given $W$ (i.e., $X - W - Y$ is Markov):
$$\mathbb{P}[X=i, Y=j] = \sum_{s=1}^{k} \mathbb{P}[W=s]\,\mathbb{P}[X=i \mid W=s]\,\mathbb{P}[Y=j \mid W=s].$$
Relations with information theory, correlation generation, communication complexity, etc. Quantum versions are also of interest. As we'll see, fundamental in optimization...

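In matrix form this factorization reads $P = A\,\mathrm{diag}(p_W)\,B^T$, where the columns of $A$ and $B$ are the conditional distributions. A small numpy illustration with made-up conditional tables:

```python
import numpy as np

# Hypothetical hidden variable W with k = 2 states (numbers chosen for illustration).
pW = np.array([0.4, 0.6])              # P[W = s]
pX_given_W = np.array([[0.7, 0.3],     # column s holds P[X = i | W = s]
                       [0.3, 0.7]])
pY_given_W = np.array([[0.9, 0.1],
                       [0.1, 0.9]])

# P_ij = sum_s P[W=s] P[X=i|W=s] P[Y=j|W=s]  ->  P = A diag(pW) B^T
P = pX_given_W @ np.diag(pW) @ pY_given_W.T
assert np.isclose(P.sum(), 1.0)        # a valid joint distribution
assert np.linalg.matrix_rank(P) <= 2   # rank_+(P) <= support size of W
```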

Motivating example: representations of convex sets. The crosspolytope $C_n$ is the unit ball of the $\ell_1$ norm:
$$C_n := \{x \in \mathbb{R}^n : \textstyle\sum_{i=1}^n |x_i| \le 1\}.$$
It is a polytope defined by $2^n$ linear inequalities:
$$\pm x_1 \pm x_2 \pm \cdots \pm x_n \le 1.$$
The obvious linear program is exponentially large!

A better representation. By introducing slack (auxiliary) variables, the set $C_n$ can be represented more conveniently:
$$C_n = \{x \in \mathbb{R}^n : \exists y \in \mathbb{R}^n,\; -y_i \le x_i \le y_i,\; \textstyle\sum_{i=1}^n y_i = 1\}.$$
This has only $2n$ variables $(x_1, y_1, \ldots, x_n, y_n)$ and $2n+1$ constraints: a small linear program. Much better! What is going on here?

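A quick sanity check that the two descriptions agree (an illustrative sketch; the membership tests and tolerances are my own, not from the talk):

```python
import numpy as np

def in_crosspolytope(x):
    # Direct description: the 2^n inequalities collapse to sum |x_i| <= 1.
    return abs(x).sum() <= 1 + 1e-9

def in_lifted(x):
    # Extended formulation: take y_i = |x_i| and put the remaining slack
    # on y_1 so that sum(y) = 1; this is feasible iff sum |x_i| <= 1.
    y = np.abs(x).astype(float)
    slack = 1.0 - y.sum()
    if slack < -1e-9:
        return False
    y[0] += slack
    return bool(np.all(-y - 1e-9 <= x) and np.all(x <= y + 1e-9)
                and np.isclose(y.sum(), 1.0))

x = np.array([0.3, -0.4, 0.2])
assert in_crosspolytope(x) and in_lifted(x)
assert not in_lifted(np.array([0.8, 0.5, 0.0]))
```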

Geometric viewpoint. Geometrically, we are representing our polytope as a projection of a higher-dimensional polytope. Under projection, the number of vertices does not increase, but the number of facets can grow exponentially! Complicated objects are sometimes easily described as projections of simpler ones. A general theme: algebraic varieties, graphical models, ...


Extended formulations. These representations are usually called extended formulations. Particularly relevant in combinatorial optimization (e.g., the TSP). Seminal work by Yannakakis (1991), who used them to disprove the existence of a symmetric LP formulation for the TSP polytope. Nice recent survey by Conforti-Cornuéjols-Zambelli (2010). Our goal: to understand this phenomenon for convex optimization, not just LP.


Extended formulations in SDP. Many convex sets and functions can be modeled by SDP or SOCP in nontrivial ways, among others: sums of eigenvalues of symmetric matrices; convex envelopes of univariate polynomials; multivariate polynomials that are sums of squares; unit balls of the matrix operator and nuclear norms; geometric and harmonic means. See, e.g., Nesterov/Nemirovski, Boyd/Vandenberghe, Ben-Tal/Nemirovski. Often these are clever and non-obvious reformulations.


Our questions. Existence and efficiency: when is a convex set representable by conic optimization, and how do we quantify the number of additional variables that are needed? Given a convex set $C$, is it possible to represent it as $C = \pi(K \cap L)$, where $K$ is a cone, $L$ is an affine subspace, and $\pi$ is a linear map?

Cone lifts of convex bodies. When do such representations exist? Even ignoring complexity aspects, this question is not well understood. Why is a sphere not a polytope? Can every basic closed semialgebraic set be represented using semidefinite programming? What are the obstructions to cone representability?

This talk: polytopes. What happens in the case of polytopes? (WLOG, compact with $0 \in \mathrm{int}\, P$.)
$$P = \{x \in \mathbb{R}^n : f_i^T x \le 1, \; i = 1, \ldots, F\}.$$
Polytopes have a finite number of facets $f_i$ and vertices $v_j$. Define a nonnegative matrix, called the slack matrix of the polytope:
$$[S_P]_{ij} = 1 - f_i^T v_j, \qquad i = 1, \ldots, F, \; j = 1, \ldots, V.$$

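For a concrete instance, the slack matrix of the square $[-1,1]^2$ (a minimal numpy sketch; the example and the facet/vertex ordering are my own):

```python
import numpy as np

# The square [-1, 1]^2 written as {x : f_i^T x <= 1}.
F = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])    # facet normals f_i
V = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])  # vertices v_j
S = 1 - F @ V.T                                     # [S]_ij = 1 - f_i^T v_j
assert (S >= 0).all()        # slacks are nonnegative by definition
assert (S == 0).sum() == 8   # each of the 4 facets contains exactly 2 vertices
```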

Example: hexagon (I). Consider a regular hexagon in the plane. It has 6 vertices and 6 facets. Its slack matrix is
$$S_H = \begin{pmatrix} 0 & 0 & 1 & 2 & 2 & 1 \\ 1 & 0 & 0 & 1 & 2 & 2 \\ 2 & 1 & 0 & 0 & 1 & 2 \\ 2 & 2 & 1 & 0 & 0 & 1 \\ 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & 1 & 2 & 2 & 1 & 0 \end{pmatrix}.$$
The trivial representation requires 6 facets. Can we do better?

Cone factorizations and representability. Geometric LP formulations exactly correspond to algebraic factorizations of the slack matrix. For polytopes, this amounts to a nonnegative factorization of the slack matrix:
$$S_{ij} = \langle a_i, b_j \rangle, \qquad i = 1, \ldots, F, \; j = 1, \ldots, V,$$
where the $a_i$, $b_j$ are nonnegative vectors. Yannakakis (1991) showed that the minimal lifting dimension is equal to the nonnegative rank of the slack matrix.


Example: hexagon (II). For the regular hexagon in the plane, with slack matrix $S_H$ as above, the nonnegative rank is 5.


Bounding nonnegative rank. We want techniques to lower bound the nonnegative rank of a matrix. In applications, these bounds may yield: minimal sizes of latent variables; complexity lower bounds on extended representations. Known bounds exist (e.g., the rank bound, combinatorial bounds, etc.). We want to do better, using convex optimization...

Two convex cones. Two important and well-known convex cones of symmetric matrices:
Copositive matrices: $\mathcal{C} := \{M \in \mathcal{S}^n : x^T M x \ge 0 \ \ \forall x \ge 0\}$.
Completely positive matrices: $\mathcal{B} := \mathrm{conv}\{xx^T : x \ge 0\}$.
These are proper cones (convex, closed, pointed, and solid), and they are dual to each other: $\mathcal{C}^* = \mathcal{B}$, $\mathcal{B}^* = \mathcal{C}$.

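A small numerical illustration of the duality pairing (a sketch with randomly generated instances of my own; the certificate $M = P + N$ is the simple sufficient condition for copositivity that reappears later in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A completely positive matrix: a sum of x x^T with x >= 0.
xs = rng.random((5, n))
B = sum(np.outer(x, x) for x in xs)

# A copositive matrix via the easy certificate: PSD plus entrywise nonnegative.
N = rng.random((n, n)); N = (N + N.T) / 2
M = np.eye(n) + N

# Copositivity on the nonnegative orthant, checked on random samples.
for _ in range(100):
    x = rng.random(n)
    assert x @ M @ x >= 0

# Duality pairing: <M, B> = sum_i x_i^T M x_i >= 0.
assert np.trace(M @ B) >= 0
```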

A convex bound for nonnegative rank. Let $A \in \mathbb{R}^{m \times n}_+$ be a nonnegative matrix, and define
$$\nu_+(A) := \max \left\{ \langle A, W \rangle : \begin{bmatrix} I & W \\ W^T & I \end{bmatrix} \text{ copositive}, \ W \in \mathbb{R}^{m \times n} \right\}.$$
Then
$$\left( \frac{\nu_+(A)}{\|A\|_F} \right)^2 \le \mathrm{rank}_+(A),$$
where $\|A\|_F := \sqrt{\sum_{i,j} A_{ij}^2}$ is the Frobenius norm of $A$. Essentially, a kind of nonnegative nuclear norm. Convex, but hard... (membership in $\mathcal{B}$ and $\mathcal{C}$ is NP-hard!) But we know how to approximate them...

Proof. If $A = \sum_{i=1}^r u_i v_i^T$ with $u_i, v_i \ge 0$ and $r = \mathrm{rank}_+(A)$, a scaling argument shows that WLOG we can take $\|u_i\| = \|v_i\|$ for all $i$. By Cauchy-Schwarz,
$$\sum_{i=1}^r \|u_i\| \|v_i\| \le \sqrt{r} \sqrt{\sum_{i=1}^r \|u_i\|^2 \|v_i\|^2}.$$
We can then bound the numerator and denominator. Numerator: if $W$ is feasible, then $u_i^T W v_i \le \|u_i\| \|v_i\|$, and thus $\langle A, W \rangle \le \sum_{i=1}^r \|u_i\| \|v_i\|$. Denominator:
$$\|A\|_F^2 = \sum_{i,j=1}^r \langle u_i v_i^T, u_j v_j^T \rangle \ge \sum_{i=1}^r \|u_i\|^2 \|v_i\|^2,$$
where the cross terms can be dropped because they are nonnegative (all vectors are nonnegative).

SOS approximation. We can approximate the cones $\mathcal{C}$ and $\mathcal{B}$ using sums of squares and semidefinite programming (P. 2000). We can write $\mathcal{C}$ as
$$\mathcal{C} = \left\{ M \in \mathcal{S}^n : \sum_{i,j=1}^n M_{ij}\, x_i^2 x_j^2 \text{ is nonnegative} \right\}.$$
The $k$th order relaxation is defined as
$$\mathcal{C}^{[k]} = \left\{ M \in \mathcal{S}^n : \left( \sum_{i=1}^n x_i^2 \right)^{k} \sum_{i,j=1}^n M_{ij}\, x_i^2 x_j^2 \text{ is a sum of squares} \right\}.$$
Clearly $\mathcal{C}^{[k]} \subseteq \mathcal{C}$ and $\mathcal{C}^{[k]} \subseteq \mathcal{C}^{[k+1]}$. Furthermore, each $\mathcal{C}^{[k]}$ is computable via SDP.

Simplest case ($k = 0$). The case $k = 0$ is the simple sufficient condition for copositivity: $M = P + N$ with $P \succeq 0$ and $N \ge 0$ entrywise. Thus, the quantity $\nu_+^{[0]}(A)$ takes the more explicit form
$$\nu_+^{[0]}(A) = \max \left\{ \langle A, W \rangle : \begin{bmatrix} I & W \\ W^T & I \end{bmatrix} \in \mathcal{N}^{n+m}_+ + \mathcal{S}^{n+m}_+ \right\}.$$
For any $k \ge 0$,
$$\nu(A) \le \nu_+^{[0]}(A) \le \nu_+^{[k]}(A) \le \nu_+(A) \le \sqrt{\mathrm{rank}_+(A)}\, \|A\|_F,$$
where $\nu(A)$ is the standard nuclear norm.


Comparison: rank bound. Trivially, $\mathrm{rank}(A) \le \mathrm{rank}_+(A)$. Can our bound improve on this? Consider
$$A = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{pmatrix}.$$
It is known that $\mathrm{rank}(A) = 3$ and $\mathrm{rank}_+(A) = 4$. We have $\nu_+^{[0]}(A) = 4\sqrt{2}$, and thus our lower bound is sharp:
$$4 = \mathrm{rank}_+(A) \ge \left( \frac{\nu_+^{[0]}(A)}{\|A\|_F} \right)^2 = \left( \frac{4\sqrt{2}}{\sqrt{8}} \right)^2 = 4.$$
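The ingredients of this comparison that do not require solving an SDP can be checked numerically (the value $\nu_+^{[0]}(A) = 4\sqrt{2}$ is taken from the slide, not recomputed here):

```python
import numpy as np

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])
assert np.linalg.matrix_rank(A) == 3                       # ordinary rank
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(8))    # ||A||_F = sqrt(8)

# With nu_+^{[0]}(A) = 4*sqrt(2) (the SDP value quoted on the slide),
# the bound (nu / ||A||_F)^2 equals rank_+(A) = 4 exactly.
bound = (4 * np.sqrt(2) / np.sqrt(8)) ** 2
assert np.isclose(bound, 4)
```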

Comparison: Boolean rank (rectangle covering). A lower bound used in communication complexity; it relies only on the sparsity pattern of the matrix. Consider
$$A = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \end{pmatrix}.$$
The rectangle covering number of $A$ is 2, since $\mathrm{supp}(A)$ can be covered with the two rectangles $\{1,2\} \times \{2,3,4\}$ and $\{2,3,4\} \times \{1,2\}$. Our bound yields $\mathrm{rank}_+(A) \ge (\nu_+^{[0]}(A)/\|A\|_F)^2 = 3$. In fact $\mathrm{rank}_+(A)$ is exactly equal to 3; for instance,
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.$$
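A size-3 nonnegative factorization of this $A$ can be verified mechanically (the factors below are one valid choice, checked numerically; the factorization printed on the original slide was garbled and may have differed):

```python
import numpy as np

A = np.array([[0, 1, 1, 1],
              [1, 1, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]])
U = np.array([[1, 0, 0],
              [1, 0, 1],
              [0, 1, 0],
              [0, 1, 0]])
V = np.array([[0, 1, 1, 1],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
assert (U >= 0).all() and (V >= 0).all()
assert (U @ V == A).all()     # a nonnegative factorization with k = 3
```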

Example: hypercube. What is the extension complexity of the $n$-dimensional hypercube? Is there a better representation than the obvious $2n$ inequalities? Notice that the slack matrix is exponentially large ($2n \times 2^n$). The rank bound gives $n+1$. Goemans' face-counting lower bound gives $n \log_2 3 \approx 1.58n$... Perhaps something nontrivial can be done? Proposition: Let $C_n = [0,1]^n$ be the hypercube in $n$ dimensions and let $S(C_n) \in \mathbb{R}^{2n \times 2^n}$ be its slack matrix. Then
$$\mathrm{rank}_+(S(C_n)) = \left( \frac{\nu_+^{[0]}(S(C_n))}{\|S(C_n)\|_F} \right)^2 = 2n.$$

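The hypercube's slack matrix, and the rank bound $n+1$, can be checked directly for small $n$ (a numpy sketch; the facet ordering is my own choice):

```python
import numpy as np
from itertools import product

# Slack matrix of [0,1]^n: facets x_i >= 0 and x_i <= 1.
n = 3
vertices = np.array(list(product([0, 1], repeat=n)))   # 2^n vertices
# Slack of vertex v is (v_1, ..., v_n, 1 - v_1, ..., 1 - v_n).
S = np.hstack([vertices, 1 - vertices]).T              # 2n x 2^n
assert S.shape == (2 * n, 2 ** n)
assert (S >= 0).all()
assert np.linalg.matrix_rank(S) == n + 1               # the rank bound: n + 1
```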

Applications. Yields computable approximations to an atomic norm for nonnegative factorization (cf. Chandrasekaran-Recht-P.-Willsky 2010). Can be used as a regularizer for model selection problems (e.g., to identify nearly independent or most conditionally independent models). Q: In a statistical setting, what are the tradeoffs between sample and computational complexity? (Srebro et al., Chandrasekaran-Jordan, etc.) In particular, sample estimates in terms of Gaussian widths?


Beyond LPs and nonnegative factorizations. LPs are nice, but what about broader representability questions? In [GPT11], a generalization of Yannakakis' theorem to the general convex case. General theme: geometric extended formulations exactly correspond to algebraic factorizations of a slack operator. The dictionary:

polytopes / LP → convex sets / convex cones
slack matrix → slack operator
vertices → extreme points of $C$
facets → extreme points of the polar $C^\circ$
nonnegative factorizations $S_{ij} = \langle a_i, b_j \rangle$, $a_i, b_j \ge 0$ → conic factorizations $S_{ij} = \langle a_i, b_j \rangle$, $a_i \in K$, $b_j \in K^*$

Polytopes and PSD factorizations. Even for polytopes, PSD factorizations can be interesting. A well-known example: the stable set (independent set) polytope has efficient SDP representations, but no known subexponential LP. Natural notion: positive semidefinite rank ([GPT11]). It exactly captures the complexity of SDP representability.


PSD rank of a nonnegative matrix. Let $M \in \mathbb{R}^{m \times n}$ be a nonnegative matrix. The PSD rank of $M$, denoted $\mathrm{rank}_{\mathrm{psd}}(M)$, is the smallest $r$ for which there exist $r \times r$ PSD matrices $\{A_1, \ldots, A_m\}$ and $\{B_1, \ldots, B_n\}$ such that
$$M_{ij} = \mathrm{trace}(A_i B_j), \qquad i = 1, \ldots, m, \; j = 1, \ldots, n.$$
A natural generalization of nonnegative rank. The PSD rank determines the best semidefinite lifting.

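A toy instance of the definition (illustrative matrices of my own choosing): rank-one PSD factors already give a valid PSD factorization with $r = 2$:

```python
import numpy as np

def psd(v):
    # Rank-one PSD matrix v v^T.
    v = np.asarray(v, dtype=float)
    return np.outer(v, v)

# 2x2 PSD factors for a 3x2 nonnegative matrix M_ij = trace(A_i B_j).
A_list = [psd([1, 0]), psd([0, 1]), psd([1, 1])]
B_list = [psd([1, 1]), psd([1, 0])]
M = np.array([[np.trace(A @ B) for B in B_list] for A in A_list])
assert (M >= 0).all()   # the trace of a product of PSD matrices is nonnegative
```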

Lower bounding PSD rank? We are currently extending our bound to PSD rank, since combinatorial methods (based on sparsity patterns) cannot possibly work there. But there are a few unexpected difficulties: in the PSD case, the underlying norm is non-atomic, and the corresponding obvious inequalities do not hold; the noncommutative versions of $\mathcal{C}$ and $\mathcal{B}$ have quite complicated structure. Nice links between $\mathrm{rank}_{\mathrm{psd}}$ and quantum communication complexity, mirroring the situation between $\mathrm{rank}_+$ and classical communication complexity (e.g., Fiorini et al. (2011), Jain et al. (2011), Zhang (2012)).

Computation. Even for nonnegative factorization, the problem is non-convex and very difficult. A simple approach: alternating convex minimization. For instance, for PSD factorizations of a nonnegative matrix $M = AB$, we can alternate between minimizing over $A = [A_1, \ldots, A_m]^T$ and $B = [B_1, \ldots, B_n]$:
$$\min_{A_i \succeq 0} \|M - AB\|, \qquad \min_{B_i \succeq 0} \|M - AB\|.$$
These subproblems are SDPs (and if $\|\cdot\|$ is the Euclidean norm, they decouple). However, there are no global guarantees. Ongoing work of F. Glineur (UCL).
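The PSD case needs an SDP solver for each subproblem; for the plain nonnegative case, a self-contained sketch using Lee-Seung multiplicative updates (a standard NMF heuristic, not the specific method referenced on the slide) looks like this:

```python
import numpy as np

def nmf_multiplicative(M, k, iters=500, seed=0, eps=1e-12):
    """Lee-Seung multiplicative updates for M ~= U V with U, V >= 0.

    A standard heuristic for nonnegative factorization: the Frobenius
    error is nonincreasing, but, as the slide notes for the alternating
    scheme, there is no global optimality guarantee.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.random((m, k)) + 0.1
    V = rng.random((k, n)) + 0.1
    for _ in range(iters):
        V *= (U.T @ M) / (U.T @ U @ V + eps)
        U *= (M @ V.T) / (U @ V @ V.T + eps)
    return U, V

# An exactly factorable example: nonnegative rank 2 (row3 = row1 + row2).
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 3.0, 3.0]])
U, V = nmf_multiplicative(M, k=2)
assert (U >= 0).all() and (V >= 0).all()
assert np.linalg.norm(M - U @ V) < 0.5 * np.linalg.norm(M)
```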

The End. Thank you! Want to know more? J. Gouveia, P.A. Parrilo, R. Thomas, Lifts of convex sets and cone factorizations, Mathematics of Operations Research, to appear, 2013, arXiv:1111.3164. H. Fawzi, P.A. Parrilo, New lower bounds on nonnegative rank using conic programming, arXiv:1210.6970.


Example: hexagon (III). A nonnegative factorization of size 5:
$$S_H = \begin{pmatrix} 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 2 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 1 & 2 & 1 \\ 1 & 2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
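This size-5 factorization of the hexagon's slack matrix (entries as recovered from the slide, verified numerically) can be checked mechanically:

```python
import numpy as np

S_H = np.array([[0, 0, 1, 2, 2, 1],
                [1, 0, 0, 1, 2, 2],
                [2, 1, 0, 0, 1, 2],
                [2, 2, 1, 0, 0, 1],
                [1, 2, 2, 1, 0, 0],
                [0, 1, 2, 2, 1, 0]])
U = np.array([[1, 0, 1, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 2],
              [0, 1, 0, 0, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 2, 1, 0]])
V = np.array([[0, 0, 0, 1, 2, 1],
              [1, 2, 1, 0, 0, 0],
              [0, 0, 1, 1, 0, 0],
              [0, 1, 0, 0, 1, 0],
              [1, 0, 0, 0, 0, 1]])
assert (U >= 0).all() and (V >= 0).all()
assert (U @ V == S_H).all()    # so rank_+(S_H) <= 5
```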