
Notes on the decomposition result of Karlin et al. [2] for the hierarchy of Lasserre

by M. Laurent, December 13, 2012

We present the decomposition result of Karlin et al. [2] for the hierarchy of Lasserre and sketch how it can be used to bound the integrality gap for the knapsack problem.

1 SDP relaxations

First we recall the definitions for the hierarchy of Lasserre. Throughout we set $V = [n]$; $\mathcal{P}(V)$ denotes the collection of all subsets of $V$, and $\mathcal{P}_t(V)$ the collection of all subsets of $V$ of size at most $t$.

Moment matrices and the Lasserre relaxations

Consider a vector $y \in \mathbb{R}^{\mathcal{P}(V)}$ indexed by the subsets of $V$. This vector $y \in \mathbb{R}^{\mathcal{P}(V)}$ corresponds to a linear form $L : \mathbb{R}[x] \to \mathbb{R}$ on the polynomial ring, defined on all square-free monomials by $L(x^I) = y_I$ for $I \subseteq V$, and extended to all monomials by setting $L(x_1^{\alpha_1} \cdots x_n^{\alpha_n}) = L(x^I)$ where $I = \{i \in [n] : \alpha_i \ge 1\}$. In other words, $L$ vanishes on the ideal in $\mathbb{R}[x]$ generated by $x_i^2 - x_i$ ($i \in V$).

Given $y \in \mathbb{R}^{\mathcal{P}(V)}$ and a linear polynomial $g(x) = b - \sum_{i=1}^n a_i x_i$, $g * y$ denotes the vector in $\mathbb{R}^{\mathcal{P}(V)}$ (called the shifted vector) with entries
$$(g * y)_I = b\,y_I - \sum_{i \in V} a_i y_{I \cup \{i\}} = \Big(b - \sum_{i \in I} a_i\Big) y_I - \sum_{i \in V \setminus I} a_i y_{I \cup \{i\}}.$$
That is, $(g * y)_I = L(g(x)\,x^I)$ is obtained by linearizing the polynomial $g(x)\,x^I$.

Given a vector $y \in \mathbb{R}^{\mathcal{P}_{2t}(V)}$, its moment matrix of order $t$ is the matrix $M_t(y)$ indexed by $\mathcal{P}_t(V)$, with entries $M_t(y)_{I,J} = y_{I \cup J}$ for $I, J \in \mathcal{P}_t(V)$. If $g$ is a linear polynomial we can define the moment matrix $M_{t-1}(g * y)$ of order $t-1$ of the shifted vector $g * y$. For any collection $\mathcal{A} \subseteq \mathcal{P}(V)$, one can analogously define the moment matrix $M_{\mathcal{A}}(y) = (y_{I \cup J})_{I,J \in \mathcal{A}}$ indexed by $\mathcal{A}$.
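To make the definitions concrete, here is a minimal numerical sketch (not part of the original notes; numpy is assumed, and the helper names `subsets`, `moment_matrix`, `shift` are illustrative choices) of a vector $y$, its moment matrix, and a shifted vector:

```python
from itertools import combinations
import numpy as np

def subsets(V, max_size=None):
    """All subsets of V of size at most max_size (all subsets if None)."""
    k = len(V) if max_size is None else max_size
    return [frozenset(c) for t in range(k + 1) for c in combinations(V, t)]

def moment_matrix(y, index_sets):
    """Matrix with entries y_{I u J} for I, J in the given collection."""
    return np.array([[y[I | J] for J in index_sets] for I in index_sets])

def shift(g, y, V):
    """(g * y)_I = b y_I - sum_i a_i y_{I u {i}} for g(x) = b - a^T x."""
    b, a = g
    return {I: b * y[I] - sum(a[i] * y[I | {i}] for i in V) for I in y}

# Example: y arising from the 0/1 point x = (1, 0, 1) on V = {0, 1, 2};
# then M_1(y) is a rank-one matrix zeta zeta^T, hence PSD.
V = [0, 1, 2]
x = {0: 1, 1: 0, 2: 1}
y = {I: int(all(x[i] for i in I)) for I in subsets(V)}
M1 = moment_matrix(y, subsets(V, 1))
print(np.linalg.eigvalsh(M1) >= -1e-9)   # all True: M_1(y) is PSD

g = (2, {0: 1, 1: 1, 2: 1})              # g(x) = 2 - (x_0 + x_1 + x_2)
gy = shift(g, y, V)
print(gy[frozenset()])                   # equals g(x) at this x, here 0
```

For such a $y$ coming from a 0/1 point, every entry of $g * y$ is $g(x)\,x^I$ evaluated at $x$, consistent with the linearization interpretation above.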

Consider a polyhedron $K$ and its integer hull $K_I$:
$$K = \{x \in \mathbb{R}^n : Ax \le b\}, \qquad K_I = \mathrm{conv}(K \cap \{0,1\}^n).$$
We may assume that $K \subseteq [0,1]^n$. Write the inequalities composing the linear system $Ax \le b$ as $a_l^T x \le b_l$ ($l \in [m]$) or, equivalently, as $g_l(x) \ge 0$, where $g_l$ denotes the linear polynomial $b_l - a_l^T x$.

Definition 1.1 For an integer $1 \le t \le n+1$, $\mathrm{La}_t(K)$ is the set of all vectors $y \in \mathbb{R}^{\mathcal{P}_{2t}(V)}$ satisfying $y_\emptyset = 1$, $M_t(y) \succeq 0$ and $M_{t-1}(g_l * y) \succeq 0$ ($l \in [m]$). Moreover, $\mathrm{la}_t(K)$ denotes the projection of $\mathrm{La}_t(K)$ onto the subspace $\mathbb{R}^n$ corresponding to $(y_1, \ldots, y_n)$.

We have: $K_I \subseteq \mathrm{la}_t(K) \subseteq K$. The goal is to understand how well the relaxation $\mathrm{la}_t(K)$ approximates the integer polytope $K_I$. Recall that after $n+1$ steps $K_I$ is found:
$$K_I = \mathrm{la}_{n+1}(K). \qquad (1)$$
This is based on the canonical lifting lemma (Lemma 1.1 below and the comment thereafter), which characterizes when a full moment matrix $M_n(y)$ is positive semidefinite. (See also [3] for details.)

The canonical lifting lemma

We recall this result and its proof since they will be useful to understand the decomposition result of [2]. Indeed, the latter will in fact use a block matrix version of it, which we formulate in Lemma 1.2 below.

Define the following matrix $Z_V$ indexed by $\mathcal{P}(V)$, with entries
$$Z_V(I, J) = 1 \text{ if } I \subseteq J, \text{ and } 0 \text{ otherwise.} \qquad (2)$$
Its inverse is given by
$$Z_V^{-1}(I, J) = (-1)^{|J \setminus I|} \text{ if } I \subseteq J, \text{ and } 0 \text{ otherwise.} \qquad (3)$$

Lemma 1.1 Let $y \in \mathbb{R}^{\mathcal{P}(V)}$ and let $D$ denote the diagonal matrix indexed by $\mathcal{P}(V)$, whose $I$-th diagonal entry is $(Z_V^{-1} y)_I = \sum_{J \subseteq V : I \subseteq J} (-1)^{|J \setminus I|} y_J$ for $I \subseteq V$. Then the following identity holds:
$$M_n(y) = Z D Z^T. \qquad (4)$$
Therefore,
$$M_n(y) \succeq 0 \iff Z_V^{-1} y \ge 0 \iff \sum_{J \subseteq V : I \subseteq J} (-1)^{|J \setminus I|} y_J \ge 0 \quad \forall I \subseteq V. \qquad (5)$$
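Before giving the proof, here is a small numerical check (my own sketch, not from the notes; numpy assumed) of the inverse formula (3), the identity (4) for an arbitrary $y$, and the implication $Z^{-1}y \ge 0 \Rightarrow M_n(y) \succeq 0$ from (5):

```python
import numpy as np
from itertools import combinations

V = [0, 1, 2]
P = [frozenset(c) for t in range(len(V) + 1) for c in combinations(V, t)]

Z = np.array([[1.0 if I <= J else 0.0 for J in P] for I in P])        # (2)
Zinv = np.array([[(-1.0) ** len(J - I) if I <= J else 0.0
                  for J in P] for I in P])                             # (3)
assert np.allclose(Z @ Zinv, np.eye(len(P)))     # (3) really inverts (2)

rng = np.random.default_rng(0)
y = {I: rng.random() for I in P}
yvec = np.array([y[I] for I in P])
D = np.diag(Zinv @ yvec)                         # D_{I,I} = (Z_V^{-1} y)_I
Mn = np.array([[y[I | J] for J in P] for I in P])
assert np.allclose(Mn, Z @ D @ Z.T)              # identity (4), for any y

ycone = Z @ rng.random(len(P))                   # a y with Z^{-1} y >= 0
Mc = np.array([[ycone[P.index(I | J)] for J in P] for I in P])
assert np.all(np.linalg.eigvalsh(Mc) >= -1e-9)   # hence M_n(y) PSD, as in (5)
```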

Proof. (4) follows by direct verification: for $I, J \subseteq V$ we have:
$$(Z D Z^T)_{I,J} = \sum_{H \subseteq V} Z_{I,H}\, D_{H,H}\, Z_{J,H} = \sum_{H : I \cup J \subseteq H} D_{H,H} = \sum_{H : I \cup J \subseteq H}\ \sum_{K : H \subseteq K} (-1)^{|K \setminus H|} y_K = \sum_K y_K \sum_{H : I \cup J \subseteq H \subseteq K} (-1)^{|K \setminus H|},$$
which is equal to $y_{I \cup J}$, since the rightmost sum is equal to 0 if $K \ne I \cup J$. Thus this shows $M_n(y) = Z D Z^T$, from which (5) follows directly.

Let us now indicate how (1) follows from the above result. For this note that, for any $J \subseteq V$, the $J$-th column of $Z$ coincides with the vector
$$\zeta^J = \Big( x^I = \prod_{i \in I} x_i \Big)_{I \subseteq V} \in \{0,1\}^{\mathcal{P}(V)},$$
where $x = \chi^J$ is the incidence vector of $J$. Therefore, for $y \in \mathbb{R}^{\mathcal{P}(V)}$, the condition $Z^{-1} y \ge 0$ is equivalent to $y = Z(Z^{-1} y)$ with $Z^{-1} y \ge 0$, and thus to the fact that $y$ lies in the cone generated by the columns of $Z$; that is,
$$M_n(y) \succeq 0 \iff Z^{-1} y \ge 0 \iff y \in \mathbb{R}_+\{\zeta^J : J \subseteq V\}.$$
When we add all the localizing constraints $M_n(g_l * y) \succeq 0$ ($l \in [m]$), then we select only the columns $\zeta^J$ of $Z$ corresponding to integer solutions in $K$. That is, $\mathrm{La}_{n+1}(K) = \mathbb{R}_+\{\zeta^J : \chi^J \in K\}$, and thus $\mathrm{la}_{n+1}(K) = K_I$.

The canonical lifting lemma - block matrix form

We now mention a block matrix version of Lemma 1.1. Namely, suppose we are given symmetric matrices $Y^I \in \mathcal{S}^m$ for all $I \subseteq V$ (for some $m \ge 1$). Then, we can define the block moment matrix $M_n(Y)$, indexed again by $\mathcal{P}(V)$, and whose $(I, J)$-th block is the matrix $Y^{I \cup J}$. Analogously, we can define the block matrix versions of the matrices $Z_V$ and $Z_V^{-1}$, whose $(I, J)$-th blocks are, respectively, $I_m$ and $(-1)^{|J \setminus I|} I_m$ if $I \subseteq J$, and 0 otherwise. The result of Lemma 1.1 and its proof extend mutatis mutandis to this block matrix setting (this was already observed in [1]).
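As a numerical illustration of this block extension (again my own sketch with numpy; the PSD matrices `N[I]` play the role of the diagonal blocks of $D$), prescribing PSD Möbius combinations produces a PSD block moment matrix, in line with Lemma 1.2 below:

```python
import numpy as np
from itertools import combinations

V = [1, 2]
P = [frozenset(c) for t in range(len(V) + 1) for c in combinations(V, t)]
rng = np.random.default_rng(2)

def random_psd(m):
    A = rng.standard_normal((m, m))
    return A @ A.T

m = 2
N = {I: random_psd(m) for I in P}                    # prescribed PSD pieces
Y = {X: sum(N[J] for J in P if X <= J) for X in P}   # zeta transform of N
M = np.block([[Y[I | J] for J in P] for I in P])     # block moment matrix
assert np.all(np.linalg.eigvalsh(M) >= -1e-8)        # M_n(Y) is PSD

for I in P:  # Moebius inversion recovers each PSD piece N^I from Y
    rec = sum((-1.0) ** len(J - I) * Y[J] for J in P if I <= J)
    assert np.allclose(rec, N[I])
```

The recovered matrices $\sum_{J \supseteq I} (-1)^{|J \setminus I|}\, Y^J$ are exactly the combinations appearing on the right-hand side of Lemma 1.2.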

Lemma 1.2 Let $Y^I \in \mathcal{S}^m$ be given for all $I \subseteq V$ and let $M_n(Y)$ denote the corresponding block moment matrix, indexed by $\mathcal{P}(V)$, with $Y^{I \cup J}$ as $(I,J)$-th block. Then,
$$M_n(Y) \succeq 0 \iff \sum_{J \subseteq V : I \subseteq J} (-1)^{|J \setminus I|}\, Y^J \succeq 0 \quad \forall I \subseteq V.$$

For instance, if $V = \{1\}$, then $M_1(Y)$ has the form
$$M_1(Y) = \begin{pmatrix} Y^\emptyset & Y^1 \\ Y^1 & Y^1 \end{pmatrix}, \quad \text{and} \quad M_1(Y) \succeq 0 \iff Y^\emptyset - Y^1 \succeq 0,\ Y^1 \succeq 0.$$
If $V = \{1, 2\}$, then $M_2(Y)$ has the form:
$$M_2(Y) = \begin{pmatrix} Y^\emptyset & Y^1 & Y^2 & Y^{12} \\ Y^1 & Y^1 & Y^{12} & Y^{12} \\ Y^2 & Y^{12} & Y^2 & Y^{12} \\ Y^{12} & Y^{12} & Y^{12} & Y^{12} \end{pmatrix},$$
and $M_2(Y) \succeq 0 \iff Y^\emptyset - Y^1 - Y^2 + Y^{12} \succeq 0,\ Y^1 - Y^{12} \succeq 0,\ Y^2 - Y^{12} \succeq 0,\ Y^{12} \succeq 0$.

2 Karlin et al.'s decomposition lemma for moment matrices

We present here the decomposition result of [2].

Theorem 2.1 [2] Fix a subset $S \subseteq V$ and an integer $1 \le k < t$. Consider a vector $y \in \mathbb{R}^{\mathcal{P}_{2t}(V)}$. Assume that $y \in \mathrm{La}_t(K)$ and satisfies the condition:
$$I \in \mathcal{P}_{2t}(V),\ |I \cap S| \ge k \implies y_I = 0. \qquad (6)$$
Then $y_{|\mathcal{P}_{2t-2k}(V)}$ (the projection of $y$ onto the subspace indexed by $\mathcal{P}_{2t-2k}(V)$) belongs to the convex hull of the set
$$W_S = \{w \in \mathrm{La}_{t-k}(K) : w_i \in \{0,1\}\ \forall i \in S\}.$$

Here is a sketch of the proof:

Step 1: Extend $y$ to a vector $\tilde{y} \in \mathbb{R}^{\mathcal{P}(V)}$ by setting $\tilde{y}_I = y_I$ if $|I| \le 2t$, and $\tilde{y}_I = 0$ otherwise.

(The advantage of working with $\tilde{y}$ is that all entries $\tilde{y}_I$, $I \subseteq V$, are defined.) By the condition (6), we deduce:
$$I \subseteq V,\ |I \cap S| \ge k \implies \tilde{y}_I = 0. \qquad (7)$$
Then $L$ denotes the linear functional on $\mathbb{R}[x]$ associated to $\tilde{y}$.

Step 2: We decompose $\tilde{y}$ as
$$\tilde{y} = \sum_{X \subseteq S} z^X, \qquad (8)$$
where the vectors $z^X \in \mathbb{R}^{\mathcal{P}(V)}$ will satisfy the following properties: $z^X_i = z^X_\emptyset$ for $i \in X$, $z^X_i = 0$ for $i \in S \setminus X$ (Lemma 2.1); $M_{t-k}(z^X) \succeq 0$ and $M_{t-k-1}(g_l * z^X) \succeq 0$ (Corollary 2.1).

Hence, $z^X_\emptyset \ge 0$, $\sum_{X \subseteq S} z^X_\emptyset = y_\emptyset = 1$, and $z^X_\emptyset = 0$ implies $z^X_{|\mathcal{P}_{2t-2k}(V)} = 0$. Furthermore, if $z^X_\emptyset \ne 0$ then we can define the scaled vector $w^X = z^X / z^X_\emptyset$, whose projection $w^X_{|\mathcal{P}_{2t-2k}(V)}$ belongs to $\mathrm{La}_{t-k}(K)$. Hence, by (8), we conclude that
$$y_{|\mathcal{P}_{2t-2k}(V)} = \tilde{y}_{|\mathcal{P}_{2t-2k}(V)} = \sum_{X \subseteq S : z^X_\emptyset \ne 0} z^X_\emptyset\, w^X_{|\mathcal{P}_{2t-2k}(V)}$$
belongs to the convex hull of $W_S$. This shows the theorem.

2.1 Construction of the vectors $z^X$

Throughout the subset $S \subseteq V$ is fixed. We consider the extended vector $\tilde{y} \in \mathbb{R}^{\mathcal{P}(V)}$ and indicate how to construct the vectors $z^X$. For this consider the following polynomial identity:
$$1 = \prod_{i \in S} \big(x_i + (1 - x_i)\big) = \sum_{X \subseteq S} \underbrace{\prod_{i \in X} x_i \prod_{i \in S \setminus X} (1 - x_i)}_{P_X} = \sum_{X \subseteq S} P_X.$$
We can expand the polynomial $P_X$ as:
$$P_X = \sum_{J \subseteq S \setminus X} (-1)^{|J|}\, x^{X \cup J}.$$
Define:
$$z^X := P_X * \tilde{y} \in \mathbb{R}^{\mathcal{P}(V)} \quad \text{for all } X \subseteq S,$$

with entries
$$z^X_I = \sum_{J \subseteq S \setminus X} (-1)^{|J|}\, \tilde{y}_{I \cup X \cup J} = L(P_X\, x^I).$$
As $1 = \sum_{X \subseteq S} P_X$, we get the following decomposition for $\tilde{y}$: $\tilde{y} = \sum_{X \subseteq S} z^X$.

Lemma 2.1 For all $I \subseteq V$, we have: $z^X_I = z^X_{I \setminus X}$. Moreover,
$$z^X_I = \begin{cases} z^X_\emptyset & \text{if } I \subseteq X, \\ 0 & \text{if } I \cap (S \setminus X) \ne \emptyset. \end{cases}$$

Proof. Use the fact that $z^X_I = L(P_X\, x^I) = L\big(x^{X \cup I} \prod_{i \in S \setminus X} (1 - x_i)\big)$ combined with the fact that $L$ vanishes on every multiple of $x_i(1 - x_i)$.

2.2 Positivity properties of the moment matrix of $z^X$

We show how positivity of the moment matrix $M_t(y)$ implies positivity for a suitable moment matrix of $\tilde{y}$, which in turn implies positivity of the matrix $M_{t-k}(z^X)$ (analogously for the shifted vectors $g_l * z^X$). We give two proofs: the first one uses Gram representations (as in [2]) and the second one proceeds by matrix manipulations (based on the canonical lifting lemma).

Proof following Karlin et al. [2]

Lemma 2.2 [2] Let $\tilde{y} \in \mathbb{R}^{\mathcal{P}(V)}$ and $z^X = P_X * \tilde{y}$ for $X \subseteq S$. Consider a collection $\mathcal{A} \subseteq \mathcal{P}(V)$ which is closed under shifting by $S$, i.e., $I \in \mathcal{A},\ J \subseteq S \implies I \cup J \in \mathcal{A}$. Then, we have:
$$M_{\mathcal{A}}(\tilde{y}) \succeq 0 \implies M_{\mathcal{A}}(z^X) \succeq 0.$$

Proof. By assumption, $M_{\mathcal{A}}(\tilde{y}) \succeq 0$ and thus there exist vectors $v_I$ ($I \in \mathcal{A}$) for which $\tilde{y}_{I \cup J} = v_I^T v_J$ for all $I, J \in \mathcal{A}$. Define the vectors
$$w_I := \sum_{H \subseteq S \setminus X} (-1)^{|H|}\, v_{I \cup X \cup H} \quad \text{for } I \in \mathcal{A}.$$

Note that they are well defined since $\mathcal{A}$ is closed under shifting by $S$. We claim that, for all $I, J \in \mathcal{A}$,
$$z^X_{I \cup J} = w_I^T w_J,$$
which shows $M_{\mathcal{A}}(z^X) \succeq 0$. On the one hand, we have:
$$z^X_{I \cup J} = (P_X * \tilde{y})_{I \cup J} = L(P_X\, x^{I \cup J}).$$
On the other hand,
$$\begin{aligned}
w_I^T w_J &= \sum_{H, H' \subseteq S \setminus X} (-1)^{|H| + |H'|}\, v_{I \cup X \cup H}^T v_{J \cup X \cup H'} = \sum_{H, H' \subseteq S \setminus X} (-1)^{|H| + |H'|}\, \tilde{y}_{I \cup J \cup X \cup H \cup H'} \\
&= \sum_{H, H' \subseteq S \setminus X} (-1)^{|H| + |H'|}\, L(x^I x^J x^X x^H x^{H'}) = L\Big(x^I x^J x^X \Big(\sum_{H \subseteq S \setminus X} (-1)^{|H|} x^H\Big)^2\Big) \\
&= L\Big(x^I x^J x^X \Big(\prod_{i \in S \setminus X} (1 - x_i)\Big)^2\Big) = L\Big(x^I x^J x^X \prod_{i \in S \setminus X} (1 - x_i)\Big) = L(x^I x^J P_X) = z^X_{I \cup J}.
\end{aligned}$$
We used here the fact that $L(x_i^2 f) = L(x_i f)$ for all $i$ and all polynomials $f$.

Lemma 2.3 (i) For $\mathcal{A} = \{I \subseteq V : |I \setminus S| \le t - k\}$, $M_{\mathcal{A}}(\tilde{y}) \succeq 0$.
(ii) For $\mathcal{A} = \{I \subseteq V : |I \setminus S| \le t - k - 1\}$, $M_{\mathcal{A}}(g_l * \tilde{y}) \succeq 0$ for all $l$.

Proof. (i) Pick a column index $J \in \mathcal{A}$ with $|J| \ge t$. Then $|J| = |J \cap S| + |J \setminus S|$ with $|J| \ge t$ and $|J \setminus S| \le t - k$ implies $|J \cap S| \ge k$. Therefore, by the condition (7), $\tilde{y}_J = 0$ and analogously $\tilde{y}_{I \cup J} = 0$ for all $I \in \mathcal{A}$. Hence the matrix $M_{\mathcal{A}}(\tilde{y})$ has the block form:
$$\begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix}, \qquad (9)$$
where $A$ is indexed by the sets $I \in \mathcal{A}$ with $|I| \le t - 1$. Hence $A$ is a principal submatrix of $M_t(\tilde{y}) = M_t(y) \succeq 0$ and thus $A \succeq 0$, which gives $M_{\mathcal{A}}(\tilde{y}) \succeq 0$.

(ii) Pick a column index $J \in \mathcal{A}$ with $|J| \ge t - 1$ and pick $i \in V$. Then $|J \setminus S| \le t - k - 1$ implies $|J \cap S| \ge k$ and thus $\tilde{y}_{I \cup J} = 0$; analogously $|(J \cup \{i\}) \cap S| \ge k$ and thus $\tilde{y}_{I \cup J \cup \{i\}} = 0$ for any $I \in \mathcal{A}$. Thus the matrix $M_{\mathcal{A}}(g_l * \tilde{y})$ has the block form (9), where $A$ is a submatrix of $M_{t-1}(g_l * \tilde{y}) = M_{t-1}(g_l * y) \succeq 0$ and thus $A \succeq 0$, implying $M_{\mathcal{A}}(g_l * \tilde{y}) \succeq 0$.
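Before turning to Corollary 2.1, here is a quick numerical sanity check (a sketch with my own naming, random data, numpy) of the construction of Section 2.1: the decomposition (8) and the support properties of Lemma 2.1 are purely algebraic and hold for an arbitrary vector $\tilde{y} \in \mathbb{R}^{\mathcal{P}(V)}$:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
V = [0, 1, 2, 3]
S = frozenset({0, 1})
P = [frozenset(c) for t in range(len(V) + 1) for c in combinations(V, t)]
ytil = {I: rng.random() for I in P}          # an arbitrary vector on P(V)

def z(X):
    """z^X_I = sum_{J subseteq S\\X} (-1)^{|J|} ytil_{I u X u J}."""
    SX = sorted(S - X)
    return {I: sum((-1.0) ** k * ytil[I | X | frozenset(J)]
                   for k in range(len(SX) + 1)
                   for J in combinations(SX, k)) for I in P}

zs = {X: z(X) for X in [frozenset(c) for t in range(len(S) + 1)
                        for c in combinations(sorted(S), t)]}
# Decomposition (8): sum_X z^X = ytil.
for I in P:
    assert np.isclose(sum(zX[I] for zX in zs.values()), ytil[I])
# Lemma 2.1: z^X_I = z^X_{I \ X}, so z^X_I = z^X_empty for I inside X,
# and z^X_I = 0 whenever I meets S \ X.
for X, zX in zs.items():
    for I in P:
        assert np.isclose(zX[I], zX[I - X])
        if I & (S - X):
            assert abs(zX[I]) < 1e-12
```

Positivity of the moment matrices of the $z^X$, by contrast, genuinely requires the hypotheses on $y$; that is the content of the next corollary.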

Corollary 2.1 We have: (i) $M_{t-k}(z^X) \succeq 0$ and (ii) $M_{t-k-1}(g_l * z^X) \succeq 0$ for all $X \subseteq S$ and $l \in [m]$.

Proof. For (i), we apply Lemma 2.3(i) combined with Lemma 2.2: Set $\mathcal{A} = \{I \subseteq V : |I \setminus S| \le t - k\}$ and note that $\mathcal{A}$ is closed under shifting by $S$ and that $\mathcal{P}_{t-k}(V) \subseteq \mathcal{A}$. Then $M_{\mathcal{A}}(\tilde{y}) \succeq 0$ (by Lemma 2.3(i)), which implies $M_{\mathcal{A}}(z^X) \succeq 0$ (by Lemma 2.2) and thus $M_{t-k}(z^X) \succeq 0$ (as $\mathcal{P}_{t-k}(V) \subseteq \mathcal{A}$). The same reasoning applies for (ii): apply Lemma 2.2 and Lemma 2.3(ii), and observe that $\mathcal{P}_{t-k-1}(V) \subseteq \mathcal{A}$.

Proof using the canonical lifting lemma

We give another (slightly different) proof of Corollary 2.1, based on the result of Lemma 1.2 about block moment matrices. First we show that $M_{t-k}(z^X) \succeq 0$. For $X \subseteq S$, define the symmetric matrix $Y^X$ indexed by $\mathcal{P}_{t-k}(V)$ by
$$Y^X = (\tilde{y}_{I \cup J \cup X})_{I, J \in \mathcal{P}_{t-k}(V)}$$
and observe that
$$Y^X = M_{t-k}(x^X * \tilde{y}).$$
Then we consider the corresponding block moment matrix $M_s(Y)$ ($s = |S|$), indexed by $\mathcal{P}(S)$, whose $(X, X')$-block is $Y^{X \cup X'}$; that is, $M_s(Y) = (Y^{X \cup X'})_{X, X' \subseteq S}$. In other words, $M_s(Y)$ is the symmetric matrix indexed by the pairs $(I, X)$, where $|I| \le t - k$ and $X \subseteq S$, and with $((I, X), (J, X'))$-entry $\tilde{y}_{I \cup X \cup J \cup X'}$.

First we show that
$$M_s(Y) \succeq 0. \qquad (10)$$
For this we first observe that any column indexed by a pair $(J, X')$ satisfying $|J \cup X'| \ge t$ is identically zero. Indeed, as $|J \cup X'| = |(J \cup X') \cap S| + |(J \cup X') \setminus S|$ with $|(J \cup X') \setminus S| = |J \setminus S| \le |J| \le t - k$ and $|J \cup X'| \ge t$, this implies that $|(J \cup X') \cap S| \ge k$ and thus $\tilde{y}_{J \cup X'} = 0$ (using (7)); the same argument applies to every entry $\tilde{y}_{I \cup X \cup J \cup X'}$ of that column. Hence, if the $(J, X')$-column of $M_s(Y)$ is not identically zero, then $|J \cup X'| \le t - 1$. This implies that $M_s(Y)$ has the block form
$$M_s(Y) = \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix},$$

where $A$ is indexed by the pairs $(J, X')$ with $|J \cup X'| \le t - 1$, so that its entries are $\tilde{y}_{I \cup X \cup J \cup X'} = y_{I \cup X \cup J \cup X'}$. In other words, $A$ is a principal submatrix of $M_t(y)$ (after possibly repeating rows/columns). Hence, $A \succeq 0$ and thus we can conclude that $M_s(Y) \succeq 0$.

We can apply Lemma 1.2 to the block moment matrix $M_s(Y)$:
$$M_s(Y) \succeq 0 \iff \sum_{X' \subseteq S \setminus X} (-1)^{|X'|}\, Y^{X \cup X'} \succeq 0 \quad \text{for all } X \subseteq S.$$
Now note that the matrix on the right hand side is $M_{t-k}(z^X)$:
$$\sum_{X' \subseteq S \setminus X} (-1)^{|X'|}\, Y^{X \cup X'} = \sum_{X' \subseteq S \setminus X} (-1)^{|X'|}\, M_{t-k}(x^{X \cup X'} * \tilde{y}) = M_{t-k}\Big(\Big(\sum_{X' \subseteq S \setminus X} (-1)^{|X'|}\, x^{X \cup X'}\Big) * \tilde{y}\Big) = M_{t-k}(P_X * \tilde{y}) = M_{t-k}(z^X).$$
Therefore,
$$M_s(Y) \succeq 0 \iff M_{t-k}(z^X) \succeq 0 \quad \forall X \subseteq S.$$
As $M_s(Y) \succeq 0$ by (10), we can conclude that $M_{t-k}(z^X) \succeq 0$ for all $X \subseteq S$. To show that $M_{t-k-1}(g_l * z^X) \succeq 0$, the reasoning is analogous, after replacing $\tilde{y}$ by $g_l * \tilde{y}$ and noting that $g_l * z^X = g_l * (P_X * \tilde{y}) = P_X * (g_l * \tilde{y})$.

3 Application to the knapsack problem

3.1 The knapsack problem

We consider the knapsack problem:
$$\mathrm{OPT} = \mathrm{OPT}(v, a, b) := \max\ v^T x \quad \text{s.t.} \quad a^T x \le b,\ x \in \{0,1\}^n \qquad (11)$$
and its LP relaxation:
$$\mathrm{LP} = \mathrm{LP}(v, a, b) := \max\ v^T x \quad \text{s.t.} \quad a^T x \le b,\ x \in [0,1]^n, \qquad (12)$$
where $v, a \in \mathbb{N}^n$, $b \in \mathbb{N}$. We may assume $0 < a_i \le b$ for all $i \in [n]$. Define the affine polynomial $g(x) = b - a^T x$ and let $K = \{x \in [0,1]^n : g(x) \ge 0\}$ denote the feasible region of the LP problem (12). Moreover, $K_I$ denotes the convex hull of the integer points in $K$. The following well known fact is crucial to the analysis of the gap of the Lasserre relaxation.

Lemma 3.1 $\mathrm{LP}(v, a, b) \le \mathrm{OPT}(v, a, b) + \max_i v_i$.

Proof. The proof is based on constructing a feasible solution of (11) using the greedy algorithm: Order the items so that $v_1/a_1 \ge \ldots \ge v_n/a_n$. Let $i$ be the largest index such that the set $I = \{1, \ldots, i\}$ consisting of the first $i$ items is a feasible solution of the knapsack problem (11), i.e., $a(I) \le b$ and $a(I) + a_{i+1} > b$. We claim that $\mathrm{LP}(v, a, b) \le v(I) + v_{i+1}$. Indeed, let $x$ be feasible for the LP problem. For $j \notin I$, $v_j/a_j \le v_{i+1}/a_{i+1}$ implies
$$\sum_{j \notin I} v_j x_j \le \frac{v_{i+1}}{a_{i+1}} \sum_{j \notin I} a_j x_j \le \frac{v_{i+1}}{a_{i+1}} \Big(b - \sum_{i \in I} a_i x_i\Big) < \frac{v_{i+1}}{a_{i+1}} \Big(a(I) + a_{i+1} - \sum_{i \in I} a_i x_i\Big).$$
Therefore,
$$v^T x = \sum_{i \in I} v_i x_i + \sum_{i \notin I} v_i x_i \le v(I) + v_{i+1} + \sum_{j \in I} (1 - x_j)\, a_j \Big(\frac{v_{i+1}}{a_{i+1}} - \frac{v_j}{a_j}\Big) \le v(I) + v_{i+1}.$$

Corollary 3.1 $\mathrm{LP}(v, a, b) \le 2\, \mathrm{OPT}(v, a, b)$.

Proof. Directly from Lemma 3.1, since $v(I), v_{i+1} \le \mathrm{OPT}$.

3.2 Estimating the integrality gap for the knapsack problem

In this section we show how to estimate the integrality gap $\mathrm{SDP}/\mathrm{OPT}$, where $\mathrm{SDP}$ denotes the optimum value $\mathrm{SDP}_{\mathrm{la}_t}(v, a, b)$ obtained when optimizing over $\mathrm{la}_t(K)$, the Lasserre relaxation of order $t$, and $\mathrm{OPT}$ is the optimum value of the knapsack problem. Karlin, Mathieu and Thach Nguyen [2] showed the following bound for the integrality gap of the Lasserre hierarchy:

Theorem 3.1 Let $t \ge 2$. Let $\mathrm{SDP}_{\mathrm{la}_t}(v, a, b) = \max_{y \in \mathrm{La}_t(K)} \sum_{i \in V} v_i y_i$ denote the optimum value obtained when using the Lasserre relaxation of order $t$. Then,
$$\frac{\mathrm{SDP}_{\mathrm{la}_t}(v, a, b)}{\mathrm{OPT}(v, a, b)} \le 1 + \frac{1}{t - 1}. \qquad (13)$$

Fix $t \ge 2$. Following [2], define the set
$$S := \Big\{ i \in V : v_i > \frac{\mathrm{OPT}(v, a, b)}{t - 1} \Big\}$$
consisting of the heaviest elements of $V$. The two key ingredients for the proof of Theorem 3.1 are: (i) the decomposition result (Theorem 2.1) applied to this set $S$ and for the choice $k = t - 1$, and (ii) the result of Lemma 3.1 applied to the restricted knapsack problem:
$$\max \sum_{i \in V \setminus S} v_i x_i \quad \text{s.t.} \quad \sum_{i \in V \setminus S} a_i x_i \le b - a(X),\ x \in \{0,1\}^{V \setminus S}$$
for a given subset $X \subseteq S$.

First we show that the condition (6) (needed in Theorem 2.1) holds for this choice of $S$ and $k = t - 1$.

Lemma 3.2 Let $y \in \mathrm{La}_t(K)$ and $I \in \mathcal{P}_{2t}(V)$.
(i) If $a(I \cap S) > b$ then $y_I = 0$.
(ii) If $|I \cap S| \ge t - 1$ then $y_I = 0$.

Proof. First we note that (ii) follows easily from (i). Indeed, $|I \cap S| \ge t - 1$ implies $v(I \cap S) > \mathrm{OPT}$ and thus $a(I \cap S) > b$ which, by (i), implies $y_I = 0$.

We now turn to proving (i). We start with an observation which we will use below: If we can write $I = I_1 \cup I_2$ with $|I_1|, |I_2| \le t$, then $y_{I_1} = 0$ implies $y_I = 0$. To see this, consider the principal submatrix of $M_t(y)$ indexed by $I_1, I_2$, which has the form:
$$\begin{pmatrix} y_{I_1} & y_{I_1 \cup I_2} \\ y_{I_1 \cup I_2} & y_{I_2} \end{pmatrix}.$$
As $M_t(y) \succeq 0$, $y_{I_1} = 0$ implies $y_I = 0$.

Case 1: $|I| \le t - 1$. As $(g * y)_I$ is the diagonal entry of $M_{t-1}(g * y)$ at position $(I, I)$, we have: $0 \le (g * y)_I = (b - a(I))\, y_I - \sum_{i \notin I} a_i y_{I \cup \{i\}}$, with $y_{I \cup \{i\}} \ge 0$ (since $|I \cup \{i\}| \le t$) and $b - a(I) \le b - a(I \cap S) < 0$. This implies $y_I = 0$.

Case 2: $t \le |I| \le 2t - 2$. Suppose first that $|I \cap S| \le t - 1$ and write $I = I_1 \cup I_2$, where $|I_1| = t - 1$ and $I \cap S \subseteq I_1$. Then $b < a(I \cap S) \le a(I_1)$ implies $y_{I_1} = 0$ (by Case 1), and thus $y_I = 0$ by the initial observation. Suppose now that $|I \cap S| \ge t$ and write $I = I_1 \cup I_2$, where $|I_1| = t - 1$, $I_1 \subseteq S$

and $|I_2| \le t - 1$. Then $v(I_1) > \mathrm{OPT}$ (by the definition of $S$), which implies $a(I_1) > b$ and thus $y_{I_1} = 0$ by Case 1; again $y_I = 0$ follows by the initial observation.

Case 3: $|I| = 2t - 1$ or $2t$. If $|I \cap S| \le t - 1$, write $I = I_1 \cup I_2$ with $|I_1| = t$ and $I \cap S \subseteq I_1$. Then $b < a(I \cap S) \le a(I_1)$ implies $y_{I_1} = 0$ (by Case 2). If $|I \cap S| \ge t$, write $I = I_1 \cup I_2$ where $|I_1| = t$ and $I_1 \subseteq S$. Then $v(I_1) > \mathrm{OPT}$ implies $a(I_1) > b$ and thus $y_{I_1} = 0$ (by Case 2). In both cases $y_I = 0$ follows by the initial observation.

Hence we can apply Theorem 2.1: The projection of $y$ onto $\mathbb{R}^{\mathcal{P}_2(V)}$ is a convex combination of vectors $w^X$ satisfying: $w^X_\emptyset = 1$, $w^X_i = 1$ if $i \in X$, $w^X_i = 0$ if $i \in S \setminus X$, $M_1(w^X) \succeq 0$ and $(g * w^X)_\emptyset \ge 0$ (this is the condition $M_0(g * w^X) \succeq 0$). The condition $M_1(w^X) \succeq 0$ implies that $w^X$ lies in the cube: $0 \le w^X_i \le 1$ for all $i \in V \setminus S$. The condition $(g * w^X)_\emptyset \ge 0$ reads:
$$b - \sum_{i \in X} a_i \ge \sum_{i \in V \setminus S} a_i w^X_i.$$
Therefore, the vector $(w^X_i)_{i \in V \setminus S}$ is feasible for the LP relaxation of the restricted knapsack problem:
$$\max \sum_{i \in V \setminus S} v_i x_i \quad \text{s.t.} \quad \sum_{i \in V \setminus S} a_i x_i \le b - a(X),\ x \in \{0,1\}^{V \setminus S}.$$
Say $J \subseteq V \setminus S$ is an optimum solution of this restricted knapsack problem. Applying Lemma 3.1, we deduce that
$$\sum_{i \in V \setminus S} v_i w^X_i \le v(J) + \max_{i \in V \setminus S} v_i.$$
As $a(X \cup J) \le b$, the set $X \cup J$ is feasible for the original knapsack problem and thus $v(X \cup J) \le \mathrm{OPT}$. Moreover, by the definition of $S$, we have: $\max_{i \in V \setminus S} v_i \le \frac{\mathrm{OPT}}{t - 1}$. Combining these two facts, we obtain that
$$\sum_{i \in V} v_i w^X_i = v(X) + \sum_{i \in V \setminus S} v_i w^X_i \le v(X) + v(J) + \max_{i \in V \setminus S} v_i \le \mathrm{OPT} + \frac{\mathrm{OPT}}{t - 1},$$
which concludes the proof of Theorem 3.1.
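To close, a small self-contained sketch (my code, not from [2]: brute-force enumeration for OPT, and the classical fractional greedy rule, which solves the LP (12) exactly) illustrating the bounds of Lemma 3.1 and Corollary 3.1 on random instances:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

def knap_opt(v, a, b):
    """Brute-force optimum of the 0/1 knapsack (11); fine for small n."""
    n = len(v)
    return max(sum(v[i] for i in I)
               for t in range(n + 1) for I in combinations(range(n), t)
               if sum(a[i] for i in I) <= b)

def knap_lp(v, a, b):
    """LP optimum of (12) via the fractional greedy (Dantzig) rule."""
    order = sorted(range(len(v)), key=lambda i: -v[i] / a[i])
    val, cap = 0.0, float(b)
    for i in order:
        take = min(1.0, cap / a[i])
        val, cap = val + take * v[i], cap - take * a[i]
        if cap <= 0:
            break
    return val

for _ in range(200):
    n = 8
    b = int(rng.integers(5, 40))
    a = rng.integers(1, b + 1, size=n)        # 0 < a_i <= b
    v = rng.integers(1, 50, size=n)
    OPT, LP = knap_opt(v, a, b), knap_lp(v, a, b)
    assert LP <= OPT + max(v) + 1e-9          # Lemma 3.1
    assert LP <= 2 * OPT + 1e-9               # Corollary 3.1
```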

References

[1] N. Gvozdenović, M. Laurent and F. Vallentin. Block-diagonal semidefinite programming hierarchies for 0/1 programming. Operations Research Letters 37:27-31, 2009.

[2] A.R. Karlin, C. Mathieu and C. Thach Nguyen. Integrality gaps of linear and semidefinite programming relaxations for knapsack. In Proceedings of IPCO 2011. Preprint at arXiv:1007.1283v1.

[3] M. Laurent. A comparison of the Sherali-Adams, Lovász-Schrijver and Lasserre relaxations for 0-1 programming. Mathematics of Operations Research 28(3):470-496, 2003.