Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca & Eilyan Bitar School of Electrical and Computer Engineering American Control Conference (ACC) Chicago, July 2015

Quadratically Constrained Quadratic Program (QCQP) Consider the following QCQP: minimize x*Cx s.t. x*A_k x ≤ b_k, k = 1, ..., m, x ∈ C^n, where C, A_1, ..., A_m ∈ H^n and b = [b_1, ..., b_m] ∈ R^m. Applications: optimal power flow, max-cut, 0-1 integer programs. NP-hard in general. 2
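As one concrete instance of this template (my own illustration, not taken from the talk), max-cut on a weighted graph fits the form above by encoding x_i ∈ {−1, +1} through the pair of constraints x*(e_i e_i*)x ≤ 1 and x*(−e_i e_i*)x ≤ −1; a minimal sketch of the data construction:

# Sketch (not from the talk): max-cut on a small weighted graph, written in the
# QCQP form  minimize x*Cx  s.t.  x*A_k x <= b_k.  The graph below is an
# arbitrary example chosen for illustration.
import numpy as np

W = np.array([[0, 1, 0, 2],
              [1, 0, 3, 0],
              [0, 3, 0, 1],
              [2, 0, 1, 0]], dtype=float)   # weighted adjacency matrix
n = W.shape[0]
L = np.diag(W.sum(axis=1)) - W               # graph Laplacian
C = -L / 4.0                                 # x^T C x = -cut(x) for x_i in {-1, +1}

# Encode x_i^2 = 1 as two one-sided constraints per vertex.
A, b = [], []
for i in range(n):
    E_i = np.zeros((n, n)); E_i[i, i] = 1.0
    A.extend([E_i, -E_i])
    b.extend([1.0, -1.0])
# (C, A, b) is now a QCQP instance in the form stated on this slide.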

Objective Identify a family of nearby QCQPs that: are computationally tractable, and yield feasible and nearly optimal solutions for the original QCQP. Characterize nearby problems via perturbations (∆A, ∆b, ∆C) to the problem data. 3

Semidefinite Relaxation The QCQP can be reformulated as a rank-constrained SDP: minimize tr(CX) s.t. A(X) ≤ b, X ⪰ 0, rank(X) ≤ 1, where we define A(X) = [tr(A_1 X), ..., tr(A_m X)]. The semidefinite relaxation is obtained by removing the rank constraint: minimize tr(CX) s.t. A(X) ≤ b, X ⪰ 0. Parametrize the SDP by its data: d = (A, b, C). 4
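To make the relaxation concrete, the sketch below (mine, not from the talk) solves it with CVXPY on small synthetic Hermitian data and attempts the usual rank-one recovery from the leading eigenvector; the data generation, solver defaults, and tolerances are illustrative assumptions.

# Sketch only: semidefinite relaxation of a toy QCQP plus rank-one recovery.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 4, 3

def random_hermitian(k):
    M = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return (M + M.conj().T) / 2

C = random_hermitian(n)
A = [np.eye(n)]                                  # tr(X) <= b_1 keeps the problem bounded
for _ in range(m - 1):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A.append(B @ B.conj().T)                     # random PSD constraint matrices
b = np.array([1.0] + list(rng.uniform(1.0, 2.0, size=m - 1)))

X = cp.Variable((n, n), hermitian=True)
constraints = [X >> 0] + [cp.real(cp.trace(A[k] @ X)) <= b[k] for k in range(m)]
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(C @ X))), constraints)
prob.solve()

# If the optimal X is (numerically) rank one, a feasible QCQP point is the
# scaled leading eigenvector; otherwise the SDP value is only a lower bound.
w, V = np.linalg.eigh(X.value)
if w[-1] > 1e-9 and w[-2] / w[-1] < 1e-6:
    x = np.sqrt(w[-1]) * V[:, -1]
    print("rank-one solution recovered; objective:", np.real(x.conj() @ C @ x))
else:
    print("relaxation not rank one; lower bound on OPT:", prob.value)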

A Family of Perturbed SDPs Perturbed SDP: minimize tr((C + ∆C)X) s.t. (A + ∆A)(X) ≤ b, X ⪰ 0. Additive perturbations on the data: ∆d = (∆A, 0, ∆C). We would like to characterize perturbations yielding nearby SDPs that admit solutions which (a) are feasible for the rank-constrained SDP(d) and (b) carry good performance guarantees. 5

A Numerical Example Original SDP(d): minimize tr(CX) subject to A(X) ≤ b, X ⪰ 0. [Figure: d = (A, b, C), problem data; F(d), feasible set; X*, minimizer of SDP(d).] 6-7

Perturbation-Based Approximation Goal: Identify perturbations ∆d of the problem data d satisfying: 1. Feasibility condition: F(d + ∆d) is a nonempty subset of F(d). 2. Recovery condition: a minimizer X* of SDP(d + ∆d) with rank(X*) ≤ 1 can be computed in poly-time. 3. Performance guarantee: OPT ≤ tr(CX*) ≤ OPT + ϕ(d, ∆d), where OPT is the optimal value of SDP(d) and ϕ(d, ∆d) → 0 uniformly as ∆d → 0. 8

A Numerical Example Perturbed SDP(d + ∆d): minimize tr((C + ∆C)X) subject to (A + ∆A)(X) ≤ b, X ⪰ 0. [Figure: d = (A, b, C), problem data; ∆d = (∆A, 0, ∆C), perturbation on data; F(d + ∆d), perturbed feasible set.] 9

Talk Outline Prior work Preliminary results Main results Existence and characterization of acyclic approximations Performance guarantees Future work 10

Prior Work Alternating projections [von Neumann 39, Grigoriadis et al. 00, Lewis et al. 08]. Nuclear norm minimization [Fazel et al. 01, Recht et al. 07, Chandrasekaran 12]. Randomized rounding: Max-cut: 0.8785 approx. ratio [Goemans et al. 95]. QCQPs with (a) A_1, ..., A_m ⪰ 0: O(log m) approx. ratio [Nemirovski et al. 03]; (b) b_1, ..., b_m > 0: data-dependent approx. ratio [He et al. 08]. 11

Graph of Semidefinite Program Definition: Given problem data d, let G(d) denote the undirected graph induced by the collective sparsity pattern of the matrices C, A_1, ..., A_m. Example: [figure omitted] 12
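As a small computational aid (my own sketch, not code from the talk), the helpers below build this aggregate sparsity graph with NetworkX and test whether it is acyclic, i.e., a forest; the nonzero tolerance is an arbitrary choice.

# Sketch (not from the talk): aggregate sparsity graph G(d) and an acyclicity test.
import numpy as np
import networkx as nx

def sparsity_graph(C, A_list, tol=1e-12):
    # Vertices 0..n-1; edge {i, j} whenever some data matrix has a nonzero (i, j) entry.
    n = C.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for M in [C] + list(A_list):
        for i in range(n):
            for j in range(i + 1, n):
                if abs(M[i, j]) > tol:
                    G.add_edge(i, j)
    return G

def is_acyclic(G):
    # An undirected graph is acyclic exactly when it is a forest.
    return nx.is_forest(G)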

Linear Separability Definition: The data d are off-diagonally linearly separable from the origin if for all i ≠ j there is a line through (0, 0) such that all points in {[C]_ij, [A_1]_ij, ..., [A_m]_ij} lie on one side of the line. Example: [figure omitted] 13
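To make the condition concrete, here is one possible check (my own sketch): for each off-diagonal position (i, j), view the entries as points in the plane and test whether they fit in a half-plane whose boundary line passes through the origin, which holds exactly when the largest circular gap between their angles is at least π. Whether the slide intends the closed or the open half-plane is an assumption here; the sketch uses the closed version.

# Sketch (not from the talk): test off-diagonal linear separability from the origin.
import numpy as np

def entries_in_half_plane(points, tol=1e-12):
    pts = [p for p in points if abs(p) > tol]
    if len(pts) <= 1:
        return True
    angles = np.sort(np.angle(pts))
    # Circular gaps between consecutive angles; a gap of at least pi means all
    # points fit inside a closed half-plane through the origin.
    gaps = np.diff(angles, append=angles[0] + 2 * np.pi)
    return gaps.max() >= np.pi - tol

def off_diagonally_separable(C, A_list):
    n = C.shape[0]
    mats = [C] + list(A_list)
    return all(
        entries_in_half_plane([M[i, j] for M in mats])
        for i in range(n) for j in range(i + 1, n)
    )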

Guarantees Semidefinite Relaxation: minimize tr(CX) s.t. A(X) ≤ b, X ⪰ 0, with variable X ∈ H^n. Theorem: [Sojoudi et al. 14, Bose et al. 14] If the data d = (A, b, C) are: 1. off-diagonally linearly separable from the origin and 2. the graph G(d) is acyclic, then a minimizer X* satisfying rank(X*) ≤ 1 can be computed in poly-time. 14

Acyclic Semidefinite Approximations Definition: The SDP(d + ∆d) is an (α, β)-acyclic approximation of the SDP(d) if 1. G(d + ∆d) is acyclic, 2. d + ∆d is off-diagonally linearly separable from the origin, and 3. ‖∆A‖ ≤ α and ‖∆C‖_F ≤ β. 15-18
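Putting the pieces together, a candidate perturbation can be screened against this definition as in the sketch below (my own); it reuses the sparsity_graph and off_diagonally_separable helpers sketched after the earlier slides, and the specific norms applied to ∆A and ∆C are assumptions, since the slide does not spell them out.

# Sketch (not from the talk): check the three conditions of an (alpha, beta)-acyclic
# approximation.  Assumes the sparsity_graph and off_diagonally_separable helpers
# defined earlier; the choice of norms below is an assumption.
import numpy as np
import networkx as nx

def is_acyclic_approximation(C, A_list, dC, dA_list, alpha, beta):
    Cp = C + dC
    Ap = [A + dA for A, dA in zip(A_list, dA_list)]
    cond1 = nx.is_forest(sparsity_graph(Cp, Ap))             # G(d + ∆d) acyclic
    cond2 = off_diagonally_separable(Cp, Ap)                 # separability of d + ∆d
    cond3 = (max(np.linalg.norm(dA, "fro") for dA in dA_list) <= alpha
             and np.linalg.norm(dC, "fro") <= beta)          # norm bounds (assumed norms)
    return cond1 and cond2 and cond3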

Distance to Infeasibility For d = (A, b, C), the set of primal infeasible problem instances is I_P = {d : F(d) = ∅}. Definition: [Renegar 94] The distance of the data d to I_P is dist_IP(d) = inf{‖∆d‖ : d + ∆d ∈ I_P}. The dual distance to infeasibility, dist_ID(d), is defined similarly. Both can be approximated by solving an SDP. [Freund et al. 99] 19

Recovery Guarantees Theorem: Let SDP(d + ∆d) be an (α, β)-acyclic approximation of SDP(d). If 1. C, A_1, ..., A_m ⪰ 0 and 2. α < dist_IP(d), then 3. Feasibility condition: F(d + ∆d) is a nonempty subset of F(d). 4. Recovery condition: a minimizer X* of SDP(d + ∆d) with rank(X*) ≤ 1 can be computed in poly-time. 20

Performance Guarantees 5. Performance guarantee: Let OPT be the optimal value of SDP(d). Any minimizer X* of SDP(d + ∆d) satisfies OPT ≤ tr(CX*) ≤ OPT + ϕ(d, ∆d), where ϕ(d, ∆d) = max{‖b‖_2, OPT} (µ + ν max{‖C‖_F + β, OPT}), µ = β / dist_ID(d), and ν = α / (dist_ID(d) (dist_IP(d) − α)). 21
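For completeness, the bound can be evaluated directly; the placement of the divisions in µ and ν below follows my reading of the slide rather than a verbatim formula, and the quantities dist_IP and dist_ID are assumed to be supplied, e.g., via the SDP-based estimates cited two slides earlier.

# Sketch (not from the talk): evaluate the error bound phi(d, ∆d) as reconstructed
# above.  mu = beta / dist_ID and nu = alpha / (dist_ID * (dist_IP - alpha)) reflect
# my reading of the slide; dist_IP and dist_ID must be provided by the caller.
import numpy as np

def phi_bound(b, C, OPT, alpha, beta, dist_IP, dist_ID):
    assert alpha < dist_IP, "the recovery theorem requires alpha < dist_IP(d)"
    mu = beta / dist_ID
    nu = alpha / (dist_ID * (dist_IP - alpha))
    return max(np.linalg.norm(b, 2), OPT) * (
        mu + nu * max(np.linalg.norm(C, "fro") + beta, OPT)
    )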

Discussion of Results The error function ϕ(d, ∆d) is monotone in (α, β), and ϕ(d, ∆d) → 0 uniformly as ∆d → 0. Good performance guarantees for: near-acyclic and near-linearly-separable QCQPs, i.e., α, β small; well-conditioned problems, i.e., distance to infeasibility is large. The performance guarantee is data-dependent. The sufficient condition is not necessary. 22

A Numerical Example Perturbed SDP(d + ∆d): X* ∈ argmin tr((C + ∆C)X) subject to (A + ∆A)(X) ≤ b, X ⪰ 0. Conservative sufficient condition: α < dist_IP(d). Realized approximation ratio: tr(CX*)/OPT = 1.55. 23

Conclusions Existence and characterization of a family of nearby QCQPs that are computationally tractable and yield feasible and nearly optimal solutions for the original QCQP. Performance guarantees are data-dependent. Good performance is guaranteed for SDPs that are near-acyclic, near-linearly-separable, and well-conditioned. The results can be generalized to double-sided inequality and equality constraints in special cases. 24

Future Directions Efficient algorithms to construct acyclic semidefinite approximations. A generalization to arbitrary rank-constrained SDPs: minimize tr(CX) subject to A(X) ≤ b, X ⪰ 0, rank(X) ≤ r, with variable X ∈ H^n. 25

Thank you! Raphael Louca e-mail: rl553@cornell.edu 26