Approximation Algorithms
Chapter 26: Semidefinite Programming
Zacharias Pitouras

Introduction

LP relaxations place a good bound on OPT for NP-hard problems. Are there other ways of doing this? Vector programs provide another class of relaxations, for problems expressed as strict quadratic programs. Vector programs are equivalent to semidefinite programs. Semidefinite programs can be solved, within an additive error of ε, in time polynomial in n and log(1/ε). This yields a 0.87856 factor algorithm for MAX-CUT.

Contents

- Strict quadratic programs and vector programs
- Properties of positive semidefinite matrices
- The semidefinite programming problem
- The randomized rounding algorithm
- Improving the guarantee for MAX-2SAT
- Notes

The maximum cut problem (MAX-CUT)

Given an undirected graph $G = (V, E)$ with edge weights $w: E \to \mathbb{Q}^+$, find a partition $(S, \bar{S})$ of $V$ maximizing the total weight of the edges in this cut, i.e., of the edges that have one endpoint in $S$ and one endpoint in $\bar{S}$.

Strict quadratic programs and vector programs (1)

A quadratic program is the problem of optimizing a quadratic function of integer-valued variables, subject to quadratic constraints on these variables. In a strict quadratic program, every monomial has degree 0 or 2.

Strict quadratic program for MAX-CUT: let $y_i$ be an indicator variable for vertex $v_i$, taking value +1 or -1, and set $S = \{v_i \mid y_i = 1\}$ and $\bar{S} = \{v_i \mid y_i = -1\}$. If $v_i$ and $v_j$ are on opposite sides of the partition, then $y_i y_j = -1$ and the edge contributes $w_{ij}$ to the objective function; if they are on the same side, the edge makes no contribution.

Strict quadratic programs and vector programs (2)

An optimal solution to the following program is a maximum cut in G:

maximize $\frac{1}{2} \sum_{1 \le i < j \le n} w_{ij} (1 - y_i y_j)$
subject to $y_i^2 = 1$, $v_i \in V$
$y_i \in \mathbb{Z}$, $v_i \in V$
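
To make the correspondence concrete, here is a minimal sketch in plain Python (on a hypothetical 4-cycle, not an example from the chapter) verifying that the objective above counts exactly the weight of the cut edges for every ±1 assignment:

```python
from itertools import product

# Hypothetical example: a 4-cycle with unit edge weights.
n = 4
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}

best = 0.0
for y in product([-1, 1], repeat=n):
    # The strict quadratic program's objective for this assignment...
    qp_value = 0.5 * sum(w * (1 - y[i] * y[j]) for (i, j), w in edges.items())
    # ...equals the weight of edges crossing the induced cut.
    cut_weight = sum(w for (i, j), w in edges.items() if y[i] != y[j])
    assert qp_value == cut_weight
    best = max(best, qp_value)

print("max cut of the 4-cycle:", best)   # 4.0: alternating sides cut every edge
```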

Strict quadratic programs and vector programs (3)

This program relaxes to a vector program. A vector program is defined over n vector variables in $\mathbb{R}^n$, say $v_1, v_2, \ldots, v_n$, and is the problem of optimizing a linear function of the inner products $v_i \cdot v_j$, $1 \le i \le j \le n$, subject to linear constraints on these inner products. Equivalently, a vector program is obtained from a linear program by replacing each variable with an inner product of a pair of these vectors.

Strict quadratic programs and vector programs (4)

A strict quadratic program over n integer variables defines a corresponding vector program over n vector variables in $\mathbb{R}^n$: establish a correspondence between the n integer variables and the n vector variables, and replace each degree-2 term with the corresponding inner product, e.g., $y_i y_j$ is replaced with $v_i \cdot v_j$.

Strict quadratic programs and vector programs (5)

Vector program for MAX-CUT:

maximize $\frac{1}{2} \sum_{1 \le i < j \le n} w_{ij} (1 - v_i \cdot v_j)$
subject to $v_i \cdot v_i = 1$, $v_i \in V$
$v_i \in \mathbb{R}^n$, $v_i \in V$

Strict quadratic programs and vector programs (6)

Because of the constraint $v_i \cdot v_i = 1$, the vectors $v_1, v_2, \ldots, v_n$ are constrained to lie on the unit sphere $S^{n-1}$ in $\mathbb{R}^n$. Any feasible solution to the strict quadratic program for MAX-CUT yields a feasible solution to the vector program of the same value: assign the vector $(y_i, 0, \ldots, 0)$ to $v_i$. Hence the vector program corresponding to a strict quadratic program is a relaxation of the quadratic program, providing an upper bound on OPT. Vector programs can be solved to any desired degree of accuracy in polynomial time.

Properties of positive semidefinite matrices (1)

Let A be a real symmetric n×n matrix. Then A has real eigenvalues and n linearly independent eigenvectors. A is positive semidefinite if $x^T A x \ge 0$ for all $x \in \mathbb{R}^n$.

Theorem 26.3. Let A be a real symmetric n×n matrix. Then the following are equivalent:
1. $x^T A x \ge 0$ for all $x \in \mathbb{R}^n$;
2. all eigenvalues of A are nonnegative;
3. there is an n×n real matrix W such that $A = W^T W$.

Properties of positive semidefinite matrices (2)

A real symmetric matrix A can be decomposed in polynomial time as $A = U \Lambda U^T$ (the spectral decomposition), where Λ is a diagonal matrix whose diagonal entries are the eigenvalues of A. A is positive semidefinite if and only if all the entries of Λ are nonnegative, giving a polynomial-time test for positive semidefiniteness. The decomposition $A = W^T W$ is not computable exactly in polynomial time, but it can be approximated to any desired accuracy. The sum of two positive semidefinite matrices is positive semidefinite, and so is any convex combination of positive semidefinite matrices.
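
As an illustration, here is a minimal sketch (assuming NumPy) of this eigenvalue-based test, together with the recovery of a W satisfying $A \approx W^T W$ from the spectral decomposition; the matrix A is a hypothetical example:

```python
import numpy as np

def psd_decompose(A, tol=1e-9):
    """Return W with A ≈ W.T @ W, or None if A is not positive semidefinite."""
    lam, U = np.linalg.eigh(A)            # eigenvalues/eigenvectors of symmetric A
    if lam.min() < -tol:                  # a negative eigenvalue: not PSD
        return None
    # A = U Λ U^T = (Λ^{1/2} U^T)^T (Λ^{1/2} U^T), so take W = Λ^{1/2} U^T.
    return np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T

A = np.array([[2.0, -1.0], [-1.0, 2.0]])  # eigenvalues 1 and 3: PSD
W = psd_decompose(A)
print(np.allclose(W.T @ W, A))            # True, up to floating-point rounding
```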

The semidefinite programming problem (1)

Let Y be an n×n matrix of real-valued variables, with (i, j) entry $y_{ij}$. The semidefinite programming problem is the problem of maximizing a linear function of the $y_{ij}$'s, subject to linear constraints on them, and the additional constraint that Y be symmetric and positive semidefinite.

The semidefinite programming problem (2)

Denote by $\mathbb{R}^{n \times n}$ the space of n×n real matrices. The trace of a matrix A is the sum of its diagonal entries and is denoted by tr(A). The Frobenius inner product of matrices $A, B \in \mathbb{R}^{n \times n}$ is defined to be

$A \bullet B = \mathrm{tr}(A^T B) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} b_{ij}$.

$M_n$ denotes the cone of symmetric n×n real matrices. For $A \in M_n$, $A \succeq 0$ denotes that A is positive semidefinite.

The semidefinite programming problem (3)

The general semidefinite programming problem, S:

maximize $C \bullet Y$
subject to $D_i \bullet Y = d_i$, $1 \le i \le k$
$Y \succeq 0$, $Y \in M_n$

The semidefinite programming problem (4)

A matrix satisfying all the constraints is a feasible solution, and so is any convex combination of feasible solutions. Let A be an infeasible point. A hyperplane $C \bullet Y \le b$ is called a separating hyperplane for A if all feasible points satisfy it and A does not.

Theorem 26.4. Let S be a semidefinite programming problem, and A a point in $\mathbb{R}^{n \times n}$. We can determine in polynomial time whether A is feasible for S and, if it is not, find a separating hyperplane.

The semidefinite programming problem (5)

Proof: Testing feasibility involves ensuring that A is symmetric and positive semidefinite and satisfies all the linear constraints; this can be done in polynomial time. If A is infeasible, a separating hyperplane is obtained as follows:
- If A is not symmetric, then $a_{ij} > a_{ji}$ for some i, j, and $y_{ij} \le y_{ji}$ is a separating hyperplane.
- If A is not positive semidefinite, then it has a negative eigenvalue, say λ; let v be the corresponding eigenvector. Then $(v v^T) \bullet Y = v^T Y v \ge 0$ is a separating hyperplane, since A violates it: $v^T A v = \lambda < 0$.
- If any of the linear constraints is violated, it directly yields a separating hyperplane.
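
The proof translates directly into a separation oracle. Below is a minimal sketch (assuming NumPy) along these lines; the encoding of hyperplanes as pairs (C, b) meaning $C \bullet Y \le b$, and the tolerance handling, are implementation choices, not from the chapter:

```python
import numpy as np

def separate(A, Ds, ds, tol=1e-9):
    """Return None if A is feasible for {D_i • Y = d_i, Y symmetric PSD},
    otherwise a separating hyperplane (C, b) with C • A > b."""
    if not np.allclose(A, A.T, atol=tol):
        # Not symmetric: all feasible Y satisfy y_ij - y_ji = 0 <= 0.
        i, j = divmod(int(np.argmax(np.abs(A - A.T))), A.shape[0])
        C = np.zeros_like(A)
        C[i, j], C[j, i] = 1.0, -1.0
        return (C, 0.0) if (C * A).sum() > 0 else (-C, 0.0)
    lam, U = np.linalg.eigh(A)
    if lam[0] < -tol:
        # Negative eigenvalue with eigenvector v: feasible Y satisfy
        # (v v^T) • Y >= 0, i.e. (-v v^T) • Y <= 0, while A violates it.
        v = U[:, 0]
        return -np.outer(v, v), 0.0
    for D, d in zip(Ds, ds):                 # violated linear constraint, if any
        if (D * A).sum() > d + tol:
            return D, d
        if (D * A).sum() < d - tol:
            return -D, -d
    return None                              # A is feasible
```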

The semidefinite programming problem (6)

Let V be a vector program on n n-dimensional vector variables $v_1, v_2, \ldots, v_n$. Define the corresponding semidefinite program, S, over $n^2$ variables $y_{ij}$, $1 \le i, j \le n$, as follows: replace each inner product $v_i \cdot v_j$ by the variable $y_{ij}$, and require that the matrix Y be symmetric and positive semidefinite.

Lemma 26.5. Vector program V is equivalent to semidefinite program S.

The semidefinite programming problem (7)

Proof: We must show that corresponding to each feasible solution to V there is a feasible solution to S of the same objective function value, and vice versa. Let $a_1, \ldots, a_n$ be a feasible solution to V, and let W be the matrix whose columns are $a_1, \ldots, a_n$. Then it is easy to see that $A = W^T W$ is a feasible solution to S with the same objective function value. Conversely, let A be a feasible solution to S. By Theorem 26.3 there is an n×n matrix W such that $A = W^T W$; the columns $a_1, \ldots, a_n$ of W form a feasible solution to V with the same objective function value.

The semidefinite programming problem (8)

The semidefinite programming relaxation of MAX-CUT:

maximize $\frac{1}{2} \sum_{1 \le i < j \le n} w_{ij} (1 - y_{ij})$
subject to $y_{ii} = 1$, $v_i \in V$
$Y \succeq 0$, $Y \in M_n$
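
As an illustration, here is a minimal sketch of this relaxation using the cvxpy modeling package (assumed installed, with an SDP-capable solver), on a hypothetical unit-weight 5-cycle. Its maximum cut is 4, while the SDP optimum is about 4.52:

```python
import cvxpy as cp
import numpy as np

n = 5
W = np.zeros((n, n))
for i in range(n):                           # unit-weight 5-cycle
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

Y = cp.Variable((n, n), symmetric=True)
# (1/2) sum_{i<j} w_ij (1 - y_ij) = (1/4) sum_{i,j} w_ij (1 - y_ij), W symmetric
objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - Y)))
constraints = [Y >> 0, cp.diag(Y) == 1]      # Y PSD, y_ii = 1
cp.Problem(objective, constraints).solve()
print("SDP optimum:", objective.value)       # ~4.52 > max cut = 4
```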

Randomized rounding algorithm (1)

Assume we have an optimal solution to the vector program. Let $a_1, \ldots, a_n$ be such a solution and let $OPT_v$ denote its objective function value. These vectors lie on the unit sphere $S^{n-1}$. We need to extract from them a cut $(S, \bar{S})$ whose weight is a large fraction of $OPT_v$.

Randomized rounding algorithm (2)

Let $\theta_{ij}$ denote the angle between vectors $a_i$ and $a_j$. The contribution of this pair of vectors to $OPT_v$ is $\frac{w_{ij}}{2}(1 - \cos \theta_{ij})$, so the closer $\theta_{ij}$ is to π, the larger the contribution will be. Pick r to be a uniformly distributed vector on the unit sphere, and let $S = \{v_i \mid a_i \cdot r \ge 0\}$.

Randomized rounding algorithm (3)

Lemma 26.6. $\Pr[v_i \text{ and } v_j \text{ are separated}] = \theta_{ij} / \pi$.

Proof: Project r onto the plane containing $a_i$ and $a_j$. Vertices $v_i$ and $v_j$ will be separated iff the projection lies in one of the two arcs of angle $\theta_{ij}$. Since r has been picked from a spherically symmetric distribution, its projection is a random direction in this plane. The lemma follows.

Randomized rounding algorithm (4)

Lemma 26.7. Let $x_1, \ldots, x_n$ be picked independently from the normal distribution with mean 0 and unit standard deviation, and let $d = \sqrt{x_1^2 + \cdots + x_n^2}$. Then $(x_1/d, \ldots, x_n/d)$ is a uniformly distributed vector on the unit sphere.

Algorithm 26.8 (MAX-CUT):
1. Solve vector program V. Let $a_1, \ldots, a_n$ be an optimal solution.
2. Pick r to be a uniformly distributed vector on the unit sphere.
3. Let $S = \{v_i \mid a_i \cdot r \ge 0\}$.
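
Here is a minimal sketch (assuming NumPy) of steps 2 and 3, together with the extraction of the vectors $a_i$ from a positive semidefinite solution matrix Y as in Lemma 26.5; the triangle instance at the end is a hypothetical example, not from the chapter:

```python
import numpy as np

def vectors_from_psd(Y):
    """Columns of the returned V satisfy Y ≈ V^T V (Theorem 26.3)."""
    lam, U = np.linalg.eigh(Y)
    return np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T

def random_hyperplane_cut(V, rng=np.random.default_rng()):
    """Boolean mask for S = {v_i : a_i . r >= 0}, a_i the columns of V."""
    r = rng.standard_normal(V.shape[0])   # by Lemma 26.7, r / ||r|| is
    r /= np.linalg.norm(r)                # uniform on the unit sphere
    return V.T @ r >= 0

# Hypothetical usage: SDP solution of a unit-weight triangle (max cut = 2),
# where the optimal vectors are 120 degrees apart (y_ij = -1/2).
Y = np.array([[1.0, -0.5, -0.5], [-0.5, 1.0, -0.5], [-0.5, -0.5, 1.0]])
side = random_hyperplane_cut(vectors_from_psd(Y))
print("S =", np.flatnonzero(side), " complement =", np.flatnonzero(~side))
```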

Randomized rounding algorithm (5)

Lemma 26.9. $E[W] \ge \alpha \cdot OPT_v$, where $\alpha = \frac{2}{\pi} \min_{0 \le \theta \le \pi} \frac{\theta}{1 - \cos\theta} > 0.87856$.

Corollary 26.10. The integrality gap of the vector program relaxation is at least $\alpha > 0.87856$.

Theorem 26.11. There is a randomized approximation algorithm for MAX-CUT achieving an approximation factor of 0.87856.

Improving the guarantee for MAX-2SAT (1)

MAX-2SAT is the restriction of MAX-SAT to formulae in which each clause contains at most two literals. We have already obtained a factor 3/4 algorithm for this problem; semidefinite programming gives an improved algorithm. The idea is to convert the obvious quadratic program into a strict quadratic program.

Improving the guarantee for MAX-2SAT (2)

For each Boolean variable $x_i$, introduce a variable $y_i$ constrained to be either +1 or -1, for $1 \le i \le n$. In addition, introduce another variable $y_0$, also constrained to be either +1 or -1. Variable $x_i$ is true if $y_i = y_0$ and false otherwise. The value v(C) of a clause C is defined to be 1 if C is satisfied and 0 otherwise. For clauses containing only one literal,

$v(x_i) = \frac{1 + y_0 y_i}{2}$  and  $v(\bar{x}_i) = \frac{1 - y_0 y_i}{2}$.

Improving the guarantee for MAX-2SAT (3)

For a clause with two literals,

$v(x_i \vee x_j) = 1 - v(\bar{x}_i)\, v(\bar{x}_j) = 1 - \frac{1 - y_0 y_i}{2} \cdot \frac{1 - y_0 y_j}{2}$
$= \frac{1}{4}\left(3 + y_0 y_i + y_0 y_j - y_i y_j\right)$
$= \frac{1 + y_0 y_i}{4} + \frac{1 + y_0 y_j}{4} + \frac{1 - y_i y_j}{4}$.
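
Since this identity is a finite check, here is a minimal sketch in plain Python verifying it over all eight ±1 assignments of $y_0, y_i, y_j$:

```python
from itertools import product

for y0, yi, yj in product([-1, 1], repeat=3):
    v = (3 + y0 * yi + y0 * yj - yi * yj) / 4
    # x_i is true iff y_i = y_0; the clause (x_i OR x_j) needs one true literal.
    satisfied = (yi == y0) or (yj == y0)
    assert v == (1 if satisfied else 0)
print("identity holds for all 8 assignments")
```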

Improving the guarantee for MAX-2SAT (4)

MAX-2SAT can thus be written as a strict quadratic program:

maximize $\sum_{0 \le i < j \le n} a_{ij} (1 + y_i y_j) + b_{ij} (1 - y_i y_j)$
subject to $y_i^2 = 1$, $0 \le i \le n$
$y_i \in \mathbb{Z}$, $0 \le i \le n$

Improving the guarantee for MAX-2SAT (5)

The corresponding vector program relaxation, where vector variable $v_i$ corresponds to $y_i$:

maximize $\sum_{0 \le i < j \le n} a_{ij} (1 + v_i \cdot v_j) + b_{ij} (1 - v_i \cdot v_j)$
subject to $v_i \cdot v_i = 1$, $0 \le i \le n$
$v_i \in \mathbb{R}^{n+1}$, $0 \le i \le n$

Improving the guarantee for MAX-2SAT (6)

The algorithm is similar to that for MAX-CUT. Let $a_0, \ldots, a_n$ be an optimal solution to the vector program. Pick a vector r uniformly distributed on the unit sphere in n+1 dimensions, and set $y_i = 1$ iff $r \cdot a_i \ge 0$, for $0 \le i \le n$. This gives a truth assignment for the Boolean variables. Let W be the random variable denoting the weight of this truth assignment.
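
A minimal sketch (assuming NumPy) of this rounding step; the function name and the convention that the vectors $a_0, \ldots, a_n$ are the columns of V are illustrative choices:

```python
import numpy as np

def round_max2sat(V, rng=np.random.default_rng()):
    """V has the unit vectors a_0, ..., a_n in R^{n+1} as columns;
    returns the truth values of x_1, ..., x_n."""
    r = rng.standard_normal(V.shape[0])   # random direction; only the sign
    y = np.where(V.T @ r >= 0, 1, -1)     # pattern of a_i . r matters
    return y[1:] == y[0]                  # x_i is true iff y_i = y_0
```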

Improving the guarantee for MAX-2SAT (7)

Lemma 26.13. $E[W] \ge \alpha \cdot OPT_v$.

Proof: $E[W] = 2 \sum_{0 \le i < j \le n} \left( a_{ij} \Pr[y_i = y_j] + b_{ij} \Pr[y_i \ne y_j] \right)$.

Let $\theta_{ij}$ denote the angle between $a_i$ and $a_j$. By Lemma 26.6 and the definition of α,

$\Pr[y_i \ne y_j] = \frac{\theta_{ij}}{\pi} \ge \alpha \cdot \frac{1 - \cos\theta_{ij}}{2}$  and  $\Pr[y_i = y_j] = 1 - \frac{\theta_{ij}}{\pi} \ge \alpha \cdot \frac{1 + \cos\theta_{ij}}{2}$.

Therefore, $E[W] \ge \alpha \sum_{0 \le i < j \le n} \left( a_{ij} (1 + \cos\theta_{ij}) + b_{ij} (1 - \cos\theta_{ij}) \right) = \alpha \cdot OPT_v$.

The End