3 Related Work

There are several approaches to solving UBQP; we now briefly discuss some of them. Since some of them are actually algorithms for the Max Cut problem (MC), it is necessary to show their equivalence. UBQP can model MC, as was already shown in Theorem 3, so here we show that MC can model UBQP as well.

Theorem 5 (Max Cut can model UBQP). It is possible to model UBQP using MC.

Proof. Let $Q$ and $b$ be as in Definition 1, and let

$$W = \begin{pmatrix} 0 & (-\frac{1}{2}Qe + b)^T \\ -\frac{1}{2}Qe + b & \frac{1}{2}Q \end{pmatrix}$$

(where $e$ is the vector of all ones) be the adjacency matrix of a graph $G$ with node set $V = \{0, 1, \ldots, n\}$, i.e., let the cost $c_{ij}$ of the edge $(i,j)$ be $w_{ij}$. Since any cut $V'$ has the same value as its complement, we will assume without loss of generality that $0 \notin V'$ whenever we define a cut of $G$. Let $V'$ be a cut of $G$, and define $x(V') = (x_1(V'), x_2(V'), \ldots, x_n(V'))$, where $x_i(V') = 1$ if $i \in V'$ and $x_i(V') = 0$ if $i \notin V'$. It is easy to see that $x(\cdot)$ is a bijection from the cuts of $G$ to $\{0,1\}^n$. We will now show that if the cut $V'$ has value $K$, then $f(x(V')) = -K$. Let $V'$ be a cut of $G$. The value of the cut is:

$$\begin{aligned}
K &= \sum_{i \in V'}\sum_{j \notin V'} W_{ij}
   = \sum_{i \in V'} W_{i0} + \sum_{i \in V'}\sum_{j \notin V' \cup \{0\}} W_{ij} \\
  &= \sum_{i \in V'}\Bigl(-\frac{1}{2}\sum_{j \in \{1,\ldots,n\}} Q_{ij} + b_i\Bigr)
   + \frac{1}{2}\sum_{i \in V'}\sum_{j \notin V' \cup \{0\}} Q_{ij} \\
  &= \sum_{i \in V'}\Bigl(-\frac{1}{2}\sum_{j \in V'} Q_{ij} + b_i\Bigr)
   = -\frac{1}{2}\sum_{i \in V'}\sum_{j \in V'} Q_{ij} + \sum_{i \in V'} b_i \\
  &= -\frac{1}{2}\,x(V')^T Q\, x(V') + b^T x(V') = -f(x(V')).
\end{aligned}$$
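The reduction above can be checked numerically by brute force on a small instance (a sketch; the matrix $Q$, vector $b$ and the instance size below are illustrative choices, not data from the text):

```python
# Numerical sanity check of the UBQP -> Max Cut reduction:
# for every x in {0,1}^n, f(x) should equal minus the value of the cut x defines.
from itertools import product

n = 3
Q = [[2.0, -1.0, 0.0],      # illustrative symmetric Q
     [-1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, -2.0, 0.5]

def f(x):
    """UBQP objective f(x) = 1/2 x^T Q x - b^T x."""
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return 0.5 * quad - sum(b[i] * x[i] for i in range(n))

# Build W on nodes {0, 1, ..., n}: W_{i0} = -1/2 (Qe)_i + b_i, W_{ij} = 1/2 Q_{ij}.
W = [[0.0] * (n + 1) for _ in range(n + 1)]
for i in range(n):
    W[i + 1][0] = W[0][i + 1] = -0.5 * sum(Q[i]) + b[i]
    for j in range(n):
        W[i + 1][j + 1] = 0.5 * Q[i][j]

def cut_value(x):
    """Value of the cut V' = {i : x_i = 1}; node 0 is always outside V'."""
    side = [0] + list(x)  # side[i] = 1 iff graph node i is in V'
    return sum(W[i][j] for i in range(n + 1) for j in range(n + 1)
               if side[i] == 1 and side[j] == 0)

# Every binary x should satisfy f(x) = -K, where K is the cut value.
for x in product([0, 1], repeat=n):
    assert abs(f(x) + cut_value(x)) < 1e-9
print("reduction verified: f(x) = -K for all x")
```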

3.1 SDP relaxation

This relaxation was given in a classic paper by Goemans and Williamson [goemans1995]. Let $G = (V, E)$ be a graph with cost matrix $C = \{c_{ij}\}$ (if $(i,j) \notin E$ then $c_{ij} = 0$). Let $L = \operatorname{Diag}(Ce) - C$, where $\operatorname{Diag}(v)$ is the diagonal matrix with $v$ on its diagonal; $L$ is called the Laplacian matrix of $G$. A simple exercise shows that the max cut can be formulated as $\max\{x^T L x : x \in \{-1,1\}^n\}$ (note that here we define the cut by means of a vector in $\{-1,1\}^n$ and not $\{0,1\}^n$; the value of the cut defined by $x$ is $\frac{1}{4}x^T L x$, so this objective is the cut value up to a constant factor). This can be reformulated as:

$$\begin{aligned}
z = \max\ & \operatorname{tr} LX \\
\text{s.t.: }\ & \operatorname{diag}(X) = e && (3.1)\\
& \operatorname{rank}(X) = 1 && (3.2)\\
& X \text{ is positive semidefinite} && (3.3)
\end{aligned}$$

Here $X$ is thought of as $X = xx^T$ for some $x \in \{-1,1\}^n$, which would define the cut. Constraints (3.2) and (3.3) act together to ensure that $X = xx^T$, while (3.1) says that $x \in \{-1,1\}^n$. This kind of formulation, except for the rank constraint (3.2), is an SDP (Semidefinite Programming) formulation, which can be solved in polynomial time [lin1995, borchers1999, fujisawa2003]. By solving this relaxation (namely, by dropping constraint (3.2)), we get an upper bound whose value is not worse than $1.1382\,z$ [goemans1995]. See also [goemans1995] for a randomized procedure that generates a solution with expected value at least $0.87856\,z$, which gives a lower bound.
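The identity behind this formulation (the cut defined by $x \in \{-1,1\}^n$ has value $\frac{1}{4}x^T L x$) can be verified by brute force on a toy graph; the unit-weight 4-cycle below is an illustrative choice:

```python
# Check on a small graph that the cut value equals (1/4) x^T L x
# for the +-1 encoding used above.
from itertools import product

n = 4
E = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle with unit weights
C = [[0.0] * n for _ in range(n)]
for i, j in E:
    C[i][j] = C[j][i] = 1.0

# Laplacian L = Diag(Ce) - C
L = [[(sum(C[i]) if i == j else 0.0) - C[i][j] for j in range(n)] for i in range(n)]

def cut_value(x):   # x in {-1,+1}^n; count edges whose endpoints lie on opposite sides
    return sum(C[i][j] for i, j in E if x[i] != x[j])

def quad(x):        # (1/4) x^T L x
    return 0.25 * sum(L[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

for x in product([-1, 1], repeat=n):
    assert abs(cut_value(x) - quad(x)) < 1e-9

best = max(cut_value(x) for x in product([-1, 1], repeat=n))
# The 4-cycle is bipartite, so the maximum cut contains all four edges.
print("max cut of the 4-cycle:", best)
```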

These bounds can be combined in a branch-and-bound scheme.

3.2 Branch and Bound using SDP and the Bundle Method

This method was described in [rendl2010] and, to our knowledge, is the best available today. The triangle inequalities for Max Cut are as follows (here $X$ is the matrix of Section 3.1):

$$\begin{aligned}
X_{ij} + X_{ik} + X_{jk} &\geq -1\\
X_{ij} - X_{ik} - X_{jk} &\geq -1\\
-X_{ij} + X_{ik} - X_{jk} &\geq -1\\
-X_{ij} - X_{ik} + X_{jk} &\geq -1
\end{aligned}
\qquad i < j < k \qquad (3.4)$$

which say that, in any triangle, either zero or two edges are in the cut. These constraints can be written as $\mathcal{A}(X) \leq b$ (multiplying each inequality by $-1$, with $b$ the all-ones vector), where $\mathcal{A}$ is a linear operator from the $n \times n$ symmetric matrices to $\mathbb{R}^{4\binom{n}{3}}$. If we add (3.4) to the formulation of Section 3.1, we get a new problem whose Lagrangian is

$$\mathcal{L}(X, \gamma) = \operatorname{tr} LX + \gamma^T(b - \mathcal{A}(X)).$$

The dual function is $f(\gamma) = \max_X \mathcal{L}(X, \gamma)$, and the dual problem is $\min_{\gamma \geq 0} f(\gamma)$. For a fixed $\gamma$, evaluating $f(\gamma)$ leads to a problem of the same form as the relaxation of Section 3.1, which can be solved easily, so the bundle method is used to solve the dual problem. Since the number of triangle inequalities can be very large ($4\binom{n}{3}$), the dual is solved with only a portion of them; the solution $(X^*, \gamma^*)$ is then used to drop some of these constraints and to add new ones. The solution of the dual problem gives an upper bound for the Max Cut value, which is used in a branch-and-bound algorithm; the branching is done on the edges of the graph.

There is also an online solver using this method, called Biq Mac (Binary Quadratic - Maximum Cut), available at http://biqmac.uni-klu.ac.at/. We compare the results from this solver to the results found by the new method. The authors have also collected a large number of instances of both UBQP and Max Cut, called the Biq Mac Library (available at http://biqmac.uni-klu.ac.at/biqmaclib.html).

3.3 Relaxing equivalent problems to find a lower bound

This method is described in [billionnet2007], and uses a rather trivial (but really clever) observation: since $x_i \in \{0,1\}$, we have $x_i^2 = x_i$. With this in mind, consider the following problem:

$$\min\ f_\lambda(x) = \frac{1}{2}x^T Q x - b^T x + \sum_{i=1}^{n} \lambda_i (x_i^2 - x_i)
\qquad \text{s.t.: } x \in \{0,1\}^n \qquad (3.5)$$

It is easy to see that $f(x) = f_\lambda(x)$ for all $\lambda$ and for all feasible $x$, so the original problem can be replaced by the problem above for any $\lambda$. Also, $f_\lambda(x) = \frac{1}{2}x^T(Q + 2\operatorname{Diag}(\lambda))x - (b + \lambda)^T x$, where $\operatorname{Diag}(\lambda)$ is the diagonal matrix with $\lambda$ on its main diagonal. Let $\tilde{Q} = Q + 2\operatorname{Diag}(\lambda)$ and $\tilde{b} = b + \lambda$, so the problem of optimizing $f_\lambda(x)$ is again a UBQP. The point of doing this is that, even though the optimal solution is the same, the solution of the relaxed problem (obtained by dropping the integrality constraint (3.5)) may vary with $\lambda$, and for any $\lambda$ its value is a lower bound on the original optimal value. If $\lambda$ is such that $\tilde{Q}$ is positive definite, the relaxed problem is solved directly by $x_\lambda^* = \tilde{Q}^{-1}\tilde{b}$, so the objective value of the relaxed problem is:

$$obj_\lambda = f_\lambda(x_\lambda^*)
= \frac{1}{2}(x_\lambda^*)^T \tilde{Q}\, x_\lambda^* - \tilde{b}^T x_\lambda^*
= \frac{1}{2}\tilde{b}^T\tilde{Q}^{-1}\tilde{Q}\tilde{Q}^{-1}\tilde{b} - \tilde{b}^T\tilde{Q}^{-1}\tilde{b}
= -\frac{1}{2}\tilde{b}^T\tilde{Q}^{-1}\tilde{b}.$$
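The bound $obj_\lambda = -\frac{1}{2}\tilde{b}^T\tilde{Q}^{-1}\tilde{b}$ can be illustrated numerically (a sketch: the data $Q$, $b$, $\lambda$ below are illustrative choices, and $n = 2$ so the inverse is written out by hand):

```python
# Numerical illustration of the lower bound obj_lambda = -1/2 b~^T Q~^{-1} b~.
from itertools import product

Q = [[2.0, -3.0], [-3.0, 2.0]]    # indefinite, so the plain relaxation is unbounded
b = [1.0, 1.0]
lam = [2.0, 2.0]                   # makes Q~ = Q + 2 Diag(lam) positive definite

Qt = [[Q[0][0] + 2 * lam[0], Q[0][1]], [Q[1][0], Q[1][1] + 2 * lam[1]]]
bt = [b[0] + lam[0], b[1] + lam[1]]

# x* = Q~^{-1} b~ via the explicit 2x2 inverse
det = Qt[0][0] * Qt[1][1] - Qt[0][1] * Qt[1][0]
x_star = [(Qt[1][1] * bt[0] - Qt[0][1] * bt[1]) / det,
          (Qt[0][0] * bt[1] - Qt[1][0] * bt[0]) / det]
obj_lam = -0.5 * (bt[0] * x_star[0] + bt[1] * x_star[1])

def f(x):   # original objective f(x) = 1/2 x^T Q x - b^T x
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    return 0.5 * quad - (b[0] * x[0] + b[1] * x[1])

f_min = min(f(x) for x in product([0, 1], repeat=2))
assert obj_lam <= f_min + 1e-9     # obj_lambda is a valid lower bound
# Here x* = (1, 1) happens to be integral, so the bound is tight.
print("lower bound:", obj_lam, " exact minimum:", f_min)
```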

So we can find a family of lower bounds $obj_\lambda$. One very natural question that arises is: is it possible to get the best (maximum) lower bound? It actually is, by solving the following semidefinite program:

$$\begin{aligned}
\min\ & -K\\
\text{s.t.: }\ & A = \begin{pmatrix} -2K & (b+\lambda)^T \\ b+\lambda & Q + 2\operatorname{Diag}(\lambda) \end{pmatrix} \text{ is positive semidefinite,}
\end{aligned}$$

with variables $K \in \mathbb{R}$ and $\lambda \in \mathbb{R}^n$, where the optimal $K$ is the maximum lower bound. It follows from the positive semidefiniteness of the matrix $A$ that:

- $K \leq 0$. This is no restriction: the objective value of the original problem at $x = 0$ is zero, so its optimal value (and hence any lower bound on it) is at most zero.
- $\tilde{Q} = Q + 2\operatorname{Diag}(\lambda)$ is positive semidefinite. No surprise here either.
- $K \leq -\frac{1}{2}\tilde{b}^T\tilde{Q}^{-1}\tilde{b}$ (when $\tilde{Q}$ is positive definite), by the Schur complement of $A$ (see, for instance, [boyd2009]).

From the fact that we are maximizing $K$ (minimizing $-K$) and the last observation, at the optimum we will have $K = -\frac{1}{2}\tilde{b}^T\tilde{Q}^{-1}\tilde{b}$.

So, what is done in that work is: first the maximum lower bound is found (along with the corresponding $\lambda$); then $\min\{f_\lambda(x) : x \in \{0,1\}^n\}$ is solved using a general UBQP solver. This is done due to the observation that the performance of existing UBQP solvers strongly depends on the lower bound achieved by the relaxation of the root problem.
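The family of bounds can be sketched with a coarse grid search over uniform shifts $\lambda = (t, \ldots, t)$; this is only an illustration (data as in the previous example), since the actual method of [billionnet2007] finds the exact maximizer with the semidefinite program above:

```python
# Grid search over uniform shifts lambda = (t, t): for each t that makes
# Q~ positive definite, compute obj_t = -1/2 b~^T Q~^{-1} b~ and keep the largest.
from itertools import product

Q = [[2.0, -3.0], [-3.0, 2.0]]
b = [1.0, 1.0]

def f(x):
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    return 0.5 * quad - (b[0] * x[0] + b[1] * x[1])

f_min = min(f(x) for x in product([0, 1], repeat=2))

def bound(t):
    """obj_lambda for lambda = (t, t); None if Q~ is not positive definite."""
    Qt = [[Q[0][0] + 2 * t, Q[0][1]], [Q[1][0], Q[1][1] + 2 * t]]
    det = Qt[0][0] * Qt[1][1] - Qt[0][1] * Qt[1][0]
    if Qt[0][0] <= 0 or det <= 0:          # Sylvester's criterion for a 2x2 matrix
        return None
    bt = [b[0] + t, b[1] + t]
    x0 = (Qt[1][1] * bt[0] - Qt[0][1] * bt[1]) / det
    x1 = (Qt[0][0] * bt[1] - Qt[1][0] * bt[0]) / det
    return -0.5 * (bt[0] * x0 + bt[1] * x1)

bounds = [bound(t / 10.0) for t in range(6, 101)]   # t in [0.6, 10.0]
best = max(v for v in bounds if v is not None)
assert best <= f_min + 1e-9    # every member of the family is a valid lower bound
print("best grid bound:", best, " exact minimum:", f_min)
```

Every feasible shift yields a lower bound, so the maximum over the grid can only approach the exact minimum from below; on this tiny instance the shift $t = 2$ already attains it.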