A closest vector problem arising in radiation therapy planning

Céline Engelbeen, Samuel Fiorini, Antje Kiesel

arXiv preprint, v2 [cs.DM], 26 Nov 2009

Abstract. In this paper we consider the problem of finding a vector that can be written as a nonnegative integer linear combination of given 0-1 vectors, the generators, such that the $\ell_1$-distance between this vector and a given target vector is minimized. We prove that this closest vector problem is NP-hard to approximate within an $O(d)$ additive error, where $d$ is the dimension of the ambient vector space. We show that the problem can be approximated within an $O(d^{3/2})$ additive error in polynomial time, by rounding an optimal solution of a natural LP relaxation for the problem. We also observe that in the particular case where the target vector is integer and the generators form a totally unimodular matrix, the problem can be solved in polynomial time. The closest vector problem arises in the elaboration of radiation therapy plans. In this context, the target is a nonnegative integer matrix and the generators are certain 0-1 matrices whose rows satisfy the consecutive ones property. Here we mainly consider the version of the problem in which the set of generators comprises all those matrices that have on each nonzero row a number of ones that is at least a certain constant. This set of generators encodes the so-called minimum separation constraint.

Keywords: closest vector problem, decomposition of integer matrices, consecutive ones property, minimum separation constraint, radiation therapy.

Figure 1: A complete treatment unit at Saint-Luc Hospital (Brussels, Belgium).

1 Introduction

Nowadays, radiation therapy is one of the most widely used methods for cancer treatment. The aim is to destroy the cancerous tumor by exposing it to radiation while trying to preserve the normal tissues and the healthy organs located in the radiation field. Radiation is commonly delivered by a linear accelerator (see Figure 1) whose arm is capable of doing a complete circle around the patient in order to allow different directions of radiation. In intensity modulated radiation therapy (IMRT), a multileaf collimator (MLC, see Figure 2) is used to modulate the radiation beam. This allows radiation to be delivered in a more precise way by forming differently shaped fields, and hence improves the quality of the treatment.

After the physician has diagnosed the cancer and has located the tumor as well as the organs located in the radiation field, the planning of the radiation therapy sessions is determined in three steps. In the first step, a number of appropriate beam directions from which radiation will be delivered is chosen [9]. Secondly, the intensity function for each direction is determined. This function is encoded as an integer matrix in which each entry represents an elementary part of the radiation beam (called a bixel). The value of each entry is the intensity of the radiation that we want to send through the corresponding bixel.

This research is supported by an Actions de Recherche Concertées (ARC) project of the Communauté française de Belgique. Céline Engelbeen is a research fellow of the Fonds pour la Formation à la Recherche dans l'Industrie et dans l'Agriculture (FRIA) and Antje Kiesel is a research fellow of the German National Academic Foundation. Department of Mathematics, Université Libre de Bruxelles, CP 216, Boulevard du Triomphe, 1050 Brussels, Belgium, cengelbe@ulb.ac.be. Department of Mathematics, Université Libre de Bruxelles, CP 216, Boulevard du Triomphe, 1050 Brussels, Belgium, sfiorini@ulb.ac.be. Institute for Mathematics, University of Rostock, Rostock, Germany, antje.kiesel@uni-rostock.de.

Finally, the intensity matrix is segmented, since the linear accelerator can only deliver a uniform radiation field. Mathematically, this segmentation step consists in decomposing an $m \times n$ intensity matrix $A$ into a nonnegative integer linear combination of certain binary matrices $S = (s_{ij})$ that satisfy the consecutive ones property in each row, that is, $s_{il} = 1$ and $s_{ir} = 1$ for $l \le r$ implies $s_{ij} = 1$ for all $l \le j \le r$. Such a binary matrix is called a segment. In this paper, we focus on the case where the MLC is used in the so-called step-and-shoot mode, in which the patient is only irradiated when the leaves are not moving. Segments are generated by the MLC, and the segmentation step amounts to finding a sequence of MLC positions (see Figure 3). The generated intensity modulated field is just a superposition of homogeneous fields shaped by the MLC.

Throughout the paper, $[k]$ denotes the set $\{1, 2, \ldots, k\}$ for an integer $k$, and $[l, r]$ denotes the set $\{l, l+1, \ldots, r\}$ for integers $l$ and $r$ with $l \le r$. We also allow $l = r + 1$, in which case $[l, r] = \emptyset$. Thus, an $m \times n$ matrix $S = (s_{ij})$ is a segment if and only if there are integral intervals $[l_i, r_i]$ for $i \in [m]$ such that
\[ s_{ij} = \begin{cases} 1 & \text{if } j \in [l_i, r_i],\\ 0 & \text{otherwise.} \end{cases} \]
A segmentation of the intensity matrix $A$ is a decomposition
\[ A = \sum_{j=1}^{k} u_j S_j, \]
where $u_j \in \mathbb{Z}_+$ and $S_j$ is a segment for $j \in [k]$. The coefficients are required to be integers because in practice the radiation can only be delivered for times that are multiples of a given unit time, called a monitor unit.

In clinical applications many constraints arise that reduce the number of deliverable segments. For some technical or dosimetric reasons we might look for decompositions in which only a subset of all segments is allowed. Those subsets might be explicitly given, or defined by constraints like the interleaf collision constraint (ICC, also called interleaf motion constraint or interdigitation constraint, see [3], [5], [13] and [15]), the interleaf distance constraint (IDC, see [10]), the tongue-and-groove constraint (TGC, see [6], [16], [17], [18], [19], [20] and [25]), the minimum field size constraint (MFC, see [21]) or the minimum separation constraint (MSC, see [17]). For some of those constraints (such as the ICC, the IDC and the TGC) it is still possible to decompose $A$ exactly using the set of feasible segments; for others an exact decomposition might be impossible. In the latter case, if $\mathcal{S} := \{S_1, \ldots, S_k\}$ is our set of feasible segments, our aim is to find an approximation $B$ that is decomposable with the segments in $\mathcal{S}$ and that minimizes
\[ \|A - B\|_1 := \sum_{i=1}^{m}\sum_{j=1}^{n} |a_{ij} - b_{ij}|. \]
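For concreteness, a segment and the total change of a candidate decomposition can be computed directly from the leaf positions. The following is a minimal sketch (the function names and the NumPy dependency are ours, not part of the paper):

```python
import numpy as np

def segment(leaf_positions, n):
    """Build the m x n 0-1 segment matrix whose row i is open exactly on the
    columns of the interval [l_i, r_i] (1-based); pass (r + 1, r) for a closed row."""
    S = np.zeros((len(leaf_positions), n), dtype=int)
    for i, (l, r) in enumerate(leaf_positions):
        S[i, l - 1:r] = 1          # empty slice when l = r + 1 (closed row)
    return S

def total_change(A, coefficients, segments):
    """The l1-distance ||A - B||_1 for B = sum_j u_j S_j."""
    B = sum(u * S for u, S in zip(coefficients, segments))
    return int(np.abs(np.asarray(A) - B).sum())
```

For example, for a single row of width 5, `segment([(2, 4)], 5)` is the row vector (0 1 1 1 0).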

Later on in this paper, we will focus on the minimum separation constraint, which imposes a minimum leaf opening $\lambda \in [n]$ in each open row of the irradiation field. More formally, a segment $S$ given by its leaf positions $([l_1, r_1], \ldots, [l_m, r_m])$ satisfies the minimum separation constraint if and only if $r_i \ge l_i$ implies $r_i - l_i \ge \lambda - 1$, for all $i \in [m]$. This constraint was first introduced by Kamath, Sahni, Li, Palta and Ranka in [17], where the problem of determining whether it is possible to decompose $A$ under this constraint was solved. Here we show that the approximation problem under the minimum separation constraint can be solved in polynomial time with a minimum cost flow algorithm.

The approximation problem described above motivates the definition of the following Closest Vector Problem (CVP). Recall that the $\ell_1$-norm of a vector $x \in \mathbb{R}^d$ is defined by $\|x\|_1 := \sum_{i=1}^{d} |x_i|$. We say that $x$ is binary if $x_i \in \{0, 1\}$ for all $i \in [d]$. The CVP is stated as follows:

Input: A set $G = \{g_1, g_2, \ldots, g_k\}$ of binary vectors in $\mathbb{R}^d$ (the generators) and a vector $a$ in $\mathbb{R}^d$ (the target vector).
Goal: Find coefficients $u_j \in \mathbb{Z}_+$ for $j \in [k]$ such that $\|a - b\|_1$ is minimum, where $b := \sum_{j=1}^{k} u_j g_j$.
Measure: The total change $TC := \|a - b\|_1$.

The CVP that is the focus of the present paper differs significantly from the intensively studied CVP on a lattice that is used in cryptography (see, for instance, the recent survey by Micciancio and Regev [24]). In that context, $G$ is any set of $k$ linearly independent vectors in $\mathbb{R}^d$ (in particular, $k \le d$), and one seeks a vector of the lattice
\[ L := \Big\{ \sum_{j=1}^{k} u_j g_j \;\Big|\; u_j \in \mathbb{Z} \text{ for } j \in [k] \Big\} \]
that is closest to the target vector $a$. In our case, the coefficients $u_j$ are assumed to be nonnegative integers, and the vectors of $G$ are not assumed to be linearly independent (thus $k$ may be bigger than $d$).

In order to cope with the NP-hardness of the CVP, we design (polynomial-time) approximation algorithms. For the version of the CVP studied here it is natural to consider approximation algorithms with additive approximation guarantees (whereas approximation algorithms for the CVP on lattices have multiplicative guarantees). We say that a polynomial-time algorithm approximates the CVP within an additive error of $\Delta$ if it outputs a feasible solution whose cost is at most $OPT + \Delta$, where $OPT$ is the cost of an optimal solution.

The rest of the paper is organized as follows. Section 2 is devoted to general results on the CVP. We start by observing that the particular case where the target vector is integer and the generators form a totally unimodular matrix is solvable in polynomial time. We also provide a direct reduction to minimum cost flow when the generators have the consecutive ones property. We afterwards show that, for a general collection $G$ of generators, the CVP is NP-hard to approximate within a $(\ln 2 - \varepsilon)\,d$ additive error, for all $\varepsilon > 0$ (which in particular implies that the CVP is NP-hard).

We conclude this section with an analysis of a natural approximation algorithm for the problem based on randomized rounding. We prove that the approximation $b$ returned by the algorithm has a TC that exceeds the optimum by an $O(d^{3/2})$ additive error. In Section 3, we focus on the particular instances of the CVP arising in IMRT described above. We first show, using results of Section 2, that the problem can be solved in polynomial time when the generators encode the minimum separation constraint. We conclude the section with a further hardness of approximation result in case $A$ is a $2 \times n$ matrix: it is NP-hard to approximate the problem within an additive error of $\varepsilon\, n$, for some $\varepsilon > 0$ (in particular, the corresponding restriction of the CVP is NP-hard). In Section 4, we generalize our results to the case where one does not only want to minimize the total change, but a combination of the total change and the sum of the coefficients $u_j$ for $j \in [k]$. In the IMRT context, this sum represents the total time during which the patient is irradiated, called the beam-on time. It is desirable to minimize the beam-on time, for instance, in order to reduce the side effects caused by diffusion of the radiation as much as possible. Finally, in Section 5, we conclude the paper with some open problems and point to connections between the CVP and discrepancy theory.

2 The closest vector problem

In this section we consider the CVP in its most general form. We first consider the particular case where the target vector is integer and the binary matrix formed by the generators is totally unimodular, and prove that the CVP is polynomial in this case. We afterwards prove that, for all $\varepsilon > 0$, there exists no polynomial-time approximation algorithm for the general case with an additive approximation guarantee of $(\ln 2 - \varepsilon)\,d$ unless P = NP, and conclude the section by providing an approximation algorithm with an $O(d^{3/2})$ additive approximation guarantee.

2.1 Polynomial case

Throughout this section, we assume that $a \in \mathbb{Z}^d$. Consider the following natural LP relaxation of the CVP:
\[
\begin{array}{llll}
\text{(LP)} & \min & \displaystyle\sum_{i=1}^{d} (\gamma_i + \delta_i) & \\[1ex]
 & \text{s.t.} & \displaystyle\sum_{j=1}^{k} u_j g_{ij} - \gamma_i + \delta_i = a_i & i \in [d] \quad (1)\\
 & & u_j \ge 0 & j \in [k] \quad (2)\\
 & & \gamma_i \ge 0 & i \in [d] \quad (3)\\
 & & \delta_i \ge 0 & i \in [d]. \quad (4)
\end{array}
\]

In this formulation, $u_j$ is the coefficient of the generator $g_j$ and the approximation vector $b$ of the target vector $a$ is $b = \sum_{j=1}^{k} u_j g_j$. The vectors $\gamma$ and $\delta$ are (essentially) the under- and overdosage, that is, they model the deviation between $a$ and $b$. Let $G$ denote the $d \times k$ binary matrix whose columns are the generators $g_1, g_2, \ldots, g_k$. If $G$ is totally unimodular, then the same holds for the constraint matrix of (LP). Because $a$ is assumed to be integer, any basic feasible solution of (LP) is integral. Thus, solving the CVP amounts to solving (LP) when $G$ is totally unimodular.

For the rest of this section, assume that the generators satisfy the consecutive ones property. In particular, $G$ is totally unimodular. We now give a direct reduction to the minimum cost flow problem. Generators will be denoted by $g^{l,r}$, where $[l, r]$ is the interval of ones of this generator, that is, $g = g^{l,r}$ if and only if $g_i = 1$ for $i \in [l, r]$ and $g_i = 0$ otherwise. Let $\mathcal{I}$ be the set of intervals such that $G = \{g^{l,r} \mid [l, r] \in \mathcal{I}\}$. We assume that there is no generator with an empty interval of ones (that is, $l \le r$ always holds). Let $a_0 = a_{d+1} := 0$ and let the network $D$ be defined as follows. The set of nodes is $V(D) := [d+1] = \{1, 2, \ldots, d+1\}$ and the set of arcs is
\[ A(D) := \{ (j, j+1) \mid j \in [d] \} \cup \{ (j+1, j) \mid j \in [d] \} \cup \{ (l, r+1) \mid [l, r] \in \mathcal{I} \}. \]
Note that parallel arcs can appear when the interval of a generator contains only one element. In such a case, we keep both arcs: the one representing the generator and the other one. The demand $d(j)$ of each node $j \in V(D)$ is given by $d(j) := a_{j-1} - a_j$. The arcs have infinite capacities; the costs are 1 for all arcs of type $(j, j+1)$ and $(j+1, j)$, and 0 for the other arcs. An example of the network is shown in Figure 4. Throughout this subsection we denote by $\delta^+(j)$ the set of arcs of $D$ that leave node $j$ and by $\delta^-(j)$ the set of arcs of $D$ that enter node $j$. The amount of flow on the arcs of a set $A' \subseteq A(D)$ is denoted by $\phi(A')$. More precisely:
\[ \delta^+(j) := \{ e \in A(D) \mid \exists\, k \in V : e = (j, k) \}, \qquad \delta^-(j) := \{ e \in A(D) \mid \exists\, i \in V : e = (i, j) \}, \qquad \phi(A') := \sum_{e \in A'} \phi(e). \]

Theorem 1. Let $G$ and $D$ be as above, let $a \in \mathbb{Z}^d$, and let $OPT$ denote the optimal value of the corresponding CVP instance. Then $OPT$ equals the minimum cost of a flow in $D$.

Proof. Let first $\phi : A(D) \to \mathbb{R}_+$ be a minimum cost flow in $D$. Since all capacities and demands in $D$ are integral, we may assume that $\phi$ is an integer flow. Let $b$ be the corresponding approximation vector defined by its decomposition $b = \sum_{[l,r] \in \mathcal{I}} u_{l,r}\, g^{l,r}$, where $u_{l,r} := \phi(l, r+1)$ for all $[l, r] \in \mathcal{I}$.

Because $\phi$ is a minimum cost flow, we have that
\[ \phi(j, j+1) = 0 \quad \text{or} \quad \phi(j+1, j) = 0 \tag{5} \]
for all $j \in [d]$; otherwise both flow values could be reduced by 1, which reduces the cost, in contradiction to the optimality of $\phi$. In the decomposition of $b$, the $j$-th component receives a contribution from all generators whose interval includes the $(j-1)$-th component, except those whose interval ends in position $j-1$; it also receives a contribution from the generators whose interval of ones starts in position $j$. Hence
\[ b_j - b_{j-1} = \sum_{[j,r] \in \mathcal{I}} u_{j,r} - \sum_{[l,j-1] \in \mathcal{I}} u_{l,j-1}. \tag{6} \]
We prove by induction on $j$ that
\[ \phi(j, j+1) = (a_j - b_j)^+ \quad \text{and} \quad \phi(j+1, j) = (b_j - a_j)^+, \tag{7} \]
where $x^+ := \max\{x, 0\}$ for all $x \in \mathbb{R}$. Notice that $x - y = (x - y)^+ - (y - x)^+$ holds for all $x, y \in \mathbb{R}$; this fact is used repeatedly below.

For $j = 1$, we have $d(1) = a_0 - a_1 = -a_1$ and
\[ a_1 = \phi\big(\delta^+(1)\big) - \phi\big(\delta^-(1)\big) = \sum_{[1,r] \in \mathcal{I}} u_{1,r} + \phi(1,2) - \phi(2,1) = b_1 + \phi(1,2) - \phi(2,1), \]
and thus $\phi(1,2) = (a_1 - b_1)^+$ and $\phi(2,1) = (b_1 - a_1)^+$. For $j > 1$, we have
\[
\begin{aligned}
a_j - a_{j-1} &= \phi\big(\delta^+(j)\big) - \phi\big(\delta^-(j)\big)\\
&= \sum_{[j,r] \in \mathcal{I}} u_{j,r} + \phi(j, j-1) + \phi(j, j+1) - \sum_{[l,j-1] \in \mathcal{I}} u_{l,j-1} - \phi(j-1, j) - \phi(j+1, j)\\
&= b_j - b_{j-1} + (b_{j-1} - a_{j-1})^+ - (a_{j-1} - b_{j-1})^+ + \phi(j, j+1) - \phi(j+1, j)\\
&= b_j - b_{j-1} + b_{j-1} - a_{j-1} + \phi(j, j+1) - \phi(j+1, j),
\end{aligned}
\]
where the third equality holds by (6) and induction. Then we get $a_j = b_j + \phi(j, j+1) - \phi(j+1, j)$, and (7) holds for $j$. Hence, in particular, $\phi(j, j+1) + \phi(j+1, j) = |a_j - b_j|$ for all $j \in [d]$ by (5), and the cost of $\phi$ equals $\|a - b\|_1$. Therefore a minimum cost flow in $D$ leads to a solution of the CVP with the same cost.

Conversely, let an optimal decomposition $b = \sum_{[l,r] \in \mathcal{I}} u_{l,r}\, g^{l,r}$ for the CVP instance be given. We define $\phi$ by
\[ \phi(l, r+1) = u_{l,r} \text{ for all } [l, r] \in \mathcal{I}, \qquad \phi(j, j+1) = (a_j - b_j)^+ \text{ and } \phi(j+1, j) = (b_j - a_j)^+ \text{ for all } j \in [d]. \]
Using (6), it is easy to verify that $\phi$ defines a flow in $D$. Furthermore, the cost of $\phi$ is the same as that of $b$.
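The reduction in this proof is easy to implement on top of an off-the-shelf min-cost-flow solver. Below is a small sketch using NetworkX (the function name is ours; to keep the graph simple it assumes every generator interval has length at least two, so that no parallel arcs arise):

```python
import networkx as nx

def cvp_via_min_cost_flow(a, intervals):
    """CVP with consecutive-ones generators, via the network D of Theorem 1.

    a         : list of d integers (the target vector a_1, ..., a_d).
    intervals : list of (l, r) pairs, 1-based, the intervals of ones of the generators.
    Returns (optimal total change, {(l, r): u_{l,r}}).
    """
    d = len(a)
    ext = [0] + list(a) + [0]                       # a_0 = a_{d+1} = 0
    D = nx.DiGraph()
    for j in range(1, d + 2):                       # nodes 1, ..., d+1
        D.add_node(j, demand=ext[j - 1] - ext[j])   # d(j) = a_{j-1} - a_j
    for j in range(1, d + 1):                       # deviation arcs, cost 1
        D.add_edge(j, j + 1, weight=1)
        D.add_edge(j + 1, j, weight=1)
    for (l, r) in intervals:                        # generator arcs (l, r+1), cost 0
        D.add_edge(l, r + 1, weight=0)
    cost, flow = nx.network_simplex(D)
    return cost, {(l, r): flow[l][r + 1] for (l, r) in intervals}
```

For instance, `cvp_via_min_cost_flow([1, 2, 1], [(1, 2), (2, 3), (1, 3)])` reports a total change of 0, since $(1, 2, 1)$ decomposes exactly as $g^{1,2} + g^{2,3}$.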

Corollary 2. The restriction of the CVP to instances where the set $G$ of generators is formed by vectors satisfying the consecutive ones property can be solved in polynomial time.

Proof. The proof of Theorem 1 gives a polynomial reduction from the considered restriction of the CVP to the minimum cost flow problem, which implies that the former is polynomial.

2.2 Hardness

In this subsection we prove that the CVP is in general hard to approximate within an additive error of $(\ln 2 - \varepsilon)\,d$, for all $\varepsilon > 0$. To prove this, we consider the particular case where $a$ is the all-one vector, that is, $a = \mathbf{1}$. The given set $G$ is formed of $k$ binary vectors $g_1, g_2, \ldots, g_k$ in $\mathbb{R}^d$. Because $a$ is binary, the associated coefficients $u_j$ for $j \in [k]$ can be assumed to be binary as well.

Theorem 3. For all $\varepsilon > 0$, there exists no polynomial-time approximation algorithm for the CVP with an additive approximation guarantee of $(\ln 2 - \varepsilon)\,d \approx (0.693 - \varepsilon)\,d$, unless P = NP.

Proof. We define a gap-introducing reduction from 3SAT to the CVP (see Vazirani [27] for definitions). By combining the PCP theorem (see Arora, Lund, Motwani, Sudan and Szegedy [2]) and a reduction due to Feige [11] (see also Feige, Lovász and Tetali [12] and Cardinal, Fiorini and Joret [7]), we obtain a reduction from 3SAT to the CVP with the following properties. For any given constants $c > 0$ and $\xi > 0$, it is possible to set the values of the parameters of the reduction in such a way that:

The generators from $G$ all have the same number $d/t$ of ones, where $t$ can be assumed to be larger than any given constant.

If the 3SAT formula $\varphi$ is satisfiable, then $a$ can be exactly decomposed as a sum of $t$ generators of $G$.

If the 3SAT formula $\varphi$ is not satisfiable, then the support of any linear combination of $x$ generators chosen from $G$ is of size at most $d \left( 1 - \left(1 - \tfrac{1}{t}\right)^{x} + \xi \right)$, for $1 \le x \le ct$.

Let $u$ be an optimal solution of the instance of the CVP corresponding to the 3SAT formula $\varphi$ and let $x$ be the number of components of $u$ equal to 1. Recall that in this proof the target vector $a$ is the all-one vector, and hence all components of $u$ can be assumed to be binary. Let $b = \sum_{j=1}^{k} u_j g_j$ and let $OPT$ be the optimal TC of this instance, that is, $OPT = \|a - b\|_1$. From what precedes, if $\varphi$ is satisfiable then $OPT = 0$. We claim that if $\varphi$ is not satisfiable, then $OPT > d\,(\ln 2 - \varepsilon)$, provided $t$ is large enough and $\xi$ is small enough. Clearly, the result follows from this claim. We distinguish three cases.

Case 1: $x = 0$. In this case $OPT = d$ (we have $b = 0$ and $\|a - b\|_1 = d$).

Case 2: $1 \le x \le ct$. Let $\rho$ denote the number of components $b_i$ of $b$ that are nonzero. Thus $d - \rho$ is the number of $b_i$ equal to 0. The total change of $b$ includes one unit for each component of $b$ that is zero and a certain number of units caused by components of $b$ larger than one. More precisely, we have:
\[
OPT = d - \rho + \left( x\,\frac{d}{t} - \rho \right) = x\,\frac{d}{t} + d - 2\rho \ge x\,\frac{d}{t} + d - 2d\left( 1 - \left(1 - \tfrac{1}{t}\right)^{x} + \xi \right) = d \left( (1-\beta)\,x + 2\beta^{x} - 1 - 2\xi \right),
\]
where $\beta := 1 - \frac{1}{t}$. Note that $\beta < 1$ and taking $t$ large corresponds to taking $\beta$ close to 1. In order to derive the desired lower bound on $OPT$, we now study the function $f(x) := (1-\beta)\,x + 2\beta^{x}$. The first derivative of $f$ is $f'(x) = (1-\beta) + 2\ln\beta\cdot\beta^{x}$. It is easy to verify (since the second derivative of $f$ is always positive) that $f$ is convex and attains its minimum at
\[ x_{\min} = \frac{1}{\ln\beta}\,\ln\!\left( \frac{\beta - 1}{2\ln\beta} \right). \]
Hence we have, for all $x > 0$,
\[ f(x) \ge f(x_{\min}) = (1-\beta)\,x_{\min} + 2\beta^{x_{\min}} = (1-\beta)\,\frac{1}{\ln\beta}\,\ln\!\left( \frac{\beta-1}{2\ln\beta} \right) + \frac{\beta - 1}{\ln\beta} = \frac{\beta-1}{\ln\beta}\left( \ln\!\left( \frac{2\ln\beta}{\beta-1} \right) + 1 \right). \]
By l'Hôpital's rule,
\[ \lim_{\beta \to 1} \frac{\beta - 1}{\ln\beta} = 1, \]
hence we have $f(x) \ge \ln 2 + 1 + 2\xi - \varepsilon$ for $t$ sufficiently large and $\xi$ sufficiently small, which implies $OPT \ge d\,(\ln 2 - \varepsilon)$.

Case 3: $x > ct$. Let again $\rho$ be the number of components $b_i$ of $b$ that are nonzero. The first $ct$ generators used by the solution already have some common nonzero entries. By taking into account the penalties caused by components of $b$ larger than one, we have:
\[
OPT \ge ct\,\frac{d}{t} - d\left( 1 - \left(1 - \tfrac{1}{t}\right)^{ct} + \xi \right) = d\left( c - 1 + \left(1 - \tfrac{1}{t}\right)^{ct} - \xi \right) \ge d\left( c - 1 + e^{-c} - \varepsilon \right) \ge d\,(\ln 2 - \varepsilon).
\]
The third inequality holds for $t$ sufficiently large and $\xi$ sufficiently small, and the last one holds for $c$ large enough. This concludes the proof of the claim. The theorem follows.

2.3 Approximation algorithm

In this subsection we give a polynomial-time approximation algorithm that returns a vector $b$ such that $\|a - b\|_1 \le OPT + O(d^{3/2})$, where $OPT$ denotes the cost of an optimal solution of the CVP. This algorithm rounds an optimal solution of the linear relaxation of the CVP given in Section 2.1. Let $LP^*$ be the value of an optimal solution of (LP). Obviously, we have $OPT \ge LP^*$. Note that for each basic feasible solution of (LP), at most $d$ components of $u$ are nonzero. Indeed, if we assume that $l > d$ nonzero coefficients exist, then only $k - l$ inequalities of type (2) are satisfied with equality. As we have $2d + k$ variables, we need at least $2d + k$ independent equalities to define a vertex. Thus, there must be $(2d + k) - (k - l) - d = d + l > 2d$ inequalities of type (3) and (4) that are satisfied with equality. This is a contradiction, as there are only $2d$ inequalities of type (3) and (4). Thus, for any extremal optimal solution of the linear program, at most $d$ of the coefficients $u_j$ are nonzero.

In the analysis of Algorithm 1 we use the following estimate, which can be easily derived from known results (see the appendix for a proof).

Lemma 1. Let $l$ be a positive integer and let $X_1, X_2, \ldots, X_l$ be $l$ independent random variables such that, for all $j \in [l]$, $P[X_j = 1 - p_j] = p_j$ and $P[X_j = -p_j] = 1 - p_j$. Then
\[ E\big[\, |X_1 + X_2 + \cdots + X_l| \,\big] \le \sqrt{\frac{\ln 2}{2}\, l}. \]
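As a quick empirical sanity check (not from the paper), the bound of Lemma 1 can be compared with a simulated average; a short sketch assuming NumPy:

```python
import numpy as np

def deviation_vs_bound(p, trials=200_000, seed=0):
    """Estimate E|X_1 + ... + X_l| for given probabilities p_j and compare it
    with the bound sqrt((ln 2 / 2) * l) of Lemma 1."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    up = rng.random((trials, p.size)) < p        # X_j = 1 - p_j w.p. p_j, else -p_j
    X = np.where(up, 1.0 - p, -p)
    return float(np.abs(X.sum(axis=1)).mean()), float(np.sqrt(np.log(2) / 2 * p.size))
```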

We are now ready to prove the main result of this section.

Algorithm 1
Input: The target vector $a \in \mathbb{R}^d$ and generators $g_1, g_2, \ldots, g_k \in \{0,1\}^d$.
Output: An approximation $\tilde b$ of $a$.
Compute an extremal optimal solution $(\gamma^*, \delta^*, u^*)$ of (LP), with $b^* = \sum_{j=1}^{k} u_j^* g_j$.
For all $j \in [k]$: if $u_j^*$ is integer, set $\tilde u_j := u_j^*$; otherwise set
\[ \tilde u_j := \begin{cases} \lceil u_j^* \rceil & \text{with probability } u_j^* - \lfloor u_j^* \rfloor,\\ \lfloor u_j^* \rfloor & \text{with probability } \lceil u_j^* \rceil - u_j^*. \end{cases} \]
Return $\tilde b := \sum_{j=1}^{k} \tilde u_j g_j$.

Theorem 4. Algorithm 1 is a randomized polynomial-time algorithm that approximates the CVP within an expected additive error of $\sqrt{\ln 2\,/\,2}\; d^{3/2} \approx 0.589\, d^{3/2}$.

Proof. We have
\[ E\big[ \|a - \tilde b\|_1 \big] \le E\big[ \|a - b^*\|_1 \big] + E\big[ \|b^* - \tilde b\|_1 \big] = LP^* + E\Big[ \Big\| \sum_{j=1}^{k} u_j^* g_j - \sum_{j=1}^{k} \tilde u_j g_j \Big\|_1 \Big] = LP^* + E\Big[ \sum_{i=1}^{d} \Big| \sum_{j=1}^{k} g_{ij} (u_j^* - \tilde u_j) \Big| \Big]. \]
Without loss of generality, we may assume that $u_j^* = 0$, and thus $\tilde u_j = 0$, for $j > d$. This is due to the fact that $u^*$ is a basic feasible solution, see the above discussion. Now, let $X_{ij} := g_{ij}(u_j^* - \tilde u_j)$ for all $i, j \in [d]$. For each fixed $i \in [d]$, the variables $X_{i1}, \ldots, X_{id}$ are independent and satisfy $X_{ij} = 0$ if $g_{ij} = 0$ or $u_j^* \in \mathbb{Z}$, and otherwise
\[ X_{ij} = \begin{cases} u_j^* - \lceil u_j^* \rceil & \text{with probability } u_j^* - \lfloor u_j^* \rfloor,\\ u_j^* - \lfloor u_j^* \rfloor & \text{with probability } \lceil u_j^* \rceil - u_j^*. \end{cases} \]
By Lemma 1, we get:
\[ E\big[ \|a - \tilde b\|_1 \big] \le LP^* + \sum_{i=1}^{d} E\big[ |X_{i1} + X_{i2} + \cdots + X_{id}| \big] \le LP^* + \sqrt{\frac{\ln 2}{2}}\; d^{3/2} \le OPT + \sqrt{\frac{\ln 2}{2}}\; d^{3/2}. \]
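The whole algorithm fits in a few lines on top of an LP solver. The sketch below is our own code (assuming SciPy and NumPy; the name `round_cvp` is hypothetical); note that a generic LP solver is not guaranteed to return an extremal (basic) optimal solution, which the bound above assumes:

```python
import numpy as np
from scipy.optimize import linprog

def round_cvp(a, G, rng=None):
    """One run of the randomized rounding of Algorithm 1.

    a : (d,) integer target vector;  G : (d, k) 0-1 matrix of generators.
    Returns (u_tilde, b_tilde) with b_tilde = G @ u_tilde.
    """
    rng = np.random.default_rng() if rng is None else rng
    d, k = G.shape
    # Variables ordered as [u (k), gamma (d), delta (d)]; minimize sum(gamma + delta).
    c = np.concatenate([np.zeros(k), np.ones(2 * d)])
    # Constraint (1):  G u - gamma + delta = a.
    A_eq = np.hstack([G, -np.eye(d), np.eye(d)])
    res = linprog(c, A_eq=A_eq, b_eq=a, bounds=(0, None), method="highs")
    u_star = res.x[:k]
    # Round each fractional coefficient up with probability equal to its fractional part.
    frac = u_star - np.floor(u_star)
    u_tilde = (np.floor(u_star) + (rng.random(k) < frac)).astype(int)
    return u_tilde, G @ u_tilde
```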

By derandomizing Algorithm 1 with the method of conditional probabilities and expectations (see Alon and Spencer [1]), we obtain the following result.

Corollary 5. The CVP can be approximated within an additive error of $\sqrt{\ln 2\,/\,2}\; d^{3/2}$ in polynomial time.

3 Application to IMRT

In this section we consider a target matrix $A$ and a set of generators $\mathcal{S}$ formed by segments (that is, binary matrices whose rows satisfy the consecutive ones property). In the first part of this section we consider the case where $\mathcal{S}$ is formed by all the segments that satisfy the minimum separation constraint. In the last part we consider an arbitrary set of segments $\mathcal{S}$. We show that in this last case the problem is hard to approximate, even if the matrix has only two rows.

3.1 The minimum separation constraint

In this subsection we consider the CVP under the constraint that the set $\mathcal{S}$ of generators is formed by all segments that satisfy the minimum separation constraint. Given $\lambda \in [n]$, this constraint requires that the rows which are not totally closed have a leaf opening of at least $\lambda$. Mathematically, the leaf positions of open rows $i \in [m]$ have to satisfy $r_i - l_i \ge \lambda - 1$. Not every matrix $A$ can be decomposed under this constraint: for instance, for $\lambda = 3$ a single-row matrix such as $(1\ 0\ 1)$ cannot be decomposed, since any segment covering the first column must also cover the second. The problem of determining whether it is possible to decompose a matrix $A$ under this constraint was proved to be polynomial by Kamath et al. [17].

Obviously, the minimum separation constraint is a restriction on the leaf openings in each single row, but does not affect the combination of leaf openings in different rows. More formally, the set of allowed leaf openings in one row $i$ is
\[ \mathcal{S}_i = \{ [l_i, r_i] \mid r_i - l_i \ge \lambda - 1 \text{ or } r_i = l_i - 1 \}, \]
and does not depend on $i$. If we denote a segment by the set of its leaf positions $([l_1, r_1], \ldots, [l_m, r_m])$, then the set of feasible segments $\mathcal{S}$ for the minimum separation constraint is simply $\mathcal{S} = \mathcal{S}_1 \times \mathcal{S}_2 \times \cdots \times \mathcal{S}_m$. Thus, in order to solve the CVP under the minimum separation constraint, it is sufficient to focus on single rows. Indeed, whenever the set of feasible segments has a product structure of the form $\mathcal{S} = \mathcal{S}_1 \times \mathcal{S}_2 \times \cdots \times \mathcal{S}_m$, the single-row solutions can be combined arbitrarily and always yield a feasible segment, so solving the single-row problem is sufficient. From Corollary 2, we then infer our next result.

Corollary 6. The restriction of the CVP where the vectors are $m \times n$ matrices and the set of generators is the set of all segments satisfying the minimum separation constraint can be solved in polynomial time.
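Concretely, combining Corollary 6 with the flow reduction of Section 2.1, each row of $A$ can be treated independently using the generator set below (again our own sketch with hypothetical names; as in the earlier code it assumes $\lambda \ge 2$):

```python
def msc_intervals(n, lam):
    """All leaf openings [l, r] (1-based, width >= lam) allowed in one row by the
    minimum separation constraint; the closed row needs no generator."""
    return [(l, r) for l in range(1, n + 1) for r in range(l + lam - 1, n + 1)]

# Row-by-row use of the earlier flow sketch (one min-cost-flow call per row):
# total = sum(cvp_via_min_cost_flow(list(row), msc_intervals(n, lam))[0] for row in A)
```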

3.2 Further hardness results

As we have seen in Subsection 2.1, the CVP with generators satisfying the consecutive ones property is polynomial (see Theorem 1). Moreover, we have proved in Theorem 3 that the CVP is hard to approximate within an additive error of $(\ln 2 - \varepsilon)\,d$ for a general set of generators. We now prove that, surprisingly, the case where generators contain at most two blocks of ones, which corresponds in the IMRT context to a $2 \times n$ intensity matrix $A$ and a set of generators formed by $2 \times n$ segments, is hard to approximate within an additive error of $\varepsilon\,n$, for some $\varepsilon > 0$.

Theorem 7. There exists some $\varepsilon > 0$ such that it is NP-hard to approximate the CVP, restricted to $2 \times n$ matrices and generators with their ones consecutive on each row, within an additive error of at most $\varepsilon\,n$.

Proof. We prove this by reduction from the MAX-3SAT6 problem. Let us recall the problem:

Instance: A CNF formula in which each clause contains three literals, and each variable appears non-negated in three clauses and negated in three other clauses.
Goal: Find truth values for the variables such that the number of satisfied clauses is maximized.
Measure: The number of satisfied clauses.

Let a MAX-3SAT6 instance $\Phi$ in the variables $x_1, \ldots, x_s$ be given. Let $c_1, \ldots, c_t$ denote the clauses of $\Phi$. We say that a variable $x_i$ and a clause $c_j$ are incident if $c_j$ involves $x_i$ or its negation $\bar{x}_i$. By double-counting the incidences between variables and clauses, we find $6s = 3t$, that is, $t = 2s$. We build an instance of the restricted CVP as follows. $A$ is a matrix with two rows. In the first row of $A$, we put $s$ consecutive intervals, each containing 6 consecutive ones, followed by $4s$ consecutive zeros. In the second row of $A$, we put $t = 2s$ consecutive intervals, each containing 5 consecutive ones. Thus $A$ has $6s$ ones followed by $4s$ zeros in its first row, and $10s$ ones in its second row. The size of $A$ is $2 \times n$, where $n = 10s = 5t$.

To each variable $x_i$ there corresponds an interval $I(x_i) := [6i-5, 6i]$ of 6 ones in the first row of $A$ (in this proof, for the sake of simplicity, we identify intervals of the form $[l, r]$ and the $1 \times n$ row vector they represent). We also let $I(\bar{x}_i) := I(x_i)$. For each interval $I(x_i) = I(\bar{x}_i) = [6i-5, 6i]$ we consider two decompositions into three sub-intervals that correspond to setting the variable $x_i$ true or false. The decomposition corresponding to setting $x_i$ true is $I(x_i) = I_1(x_i) + I_2(x_i) + I_3(x_i)$, where $I_1(x_i) := [6i-5, 6i-5]$, $I_2(x_i) := [6i-4, 6i-2]$ and $I_3(x_i) := [6i-1, 6i]$. The decomposition corresponding to setting $x_i$ false is $I(\bar{x}_i) = I_1(\bar{x}_i) + I_2(\bar{x}_i) + I_3(\bar{x}_i)$, where $I_1(\bar{x}_i) := [6i-5, 6i-4]$, $I_2(\bar{x}_i) := [6i-3, 6i-1]$ and $I_3(\bar{x}_i) := [6i, 6i]$. An illustration is given in Figure 5.

Similarly, to each clause $c_j$ there corresponds an interval $I(c_j) := [5j-4, 5j]$ of 5 ones in the second row of $A$. We define ten sub-intervals that can be combined in several ways to decompose $I(c_j)$. We let $I_1(c_j) := [5j-4, 5j-4]$, $I_2(c_j) := [5j-2, 5j-2]$, $I_3(c_j) := [5j, 5j]$, $I_4(c_j) := [5j-3, 5j-3]$, $I_5(c_j) := [5j-1, 5j-1]$, $I_6(c_j) := [5j-4, 5j-3]$, $I_7(c_j) := [5j-1, 5j]$,

$I_8(c_j) := [5j-3, 5j-1]$, $I_9(c_j) := [5j-4, 5j-1]$ and $I_{10}(c_j) := [5j-3, 5j]$. An illustration is given in Figure 6. The first three sub-intervals $I_\alpha(c_j)$ ($\alpha \in \{1, 2, 3\}$) correspond to the three literals of the clause $c_j$. The last seven sub-intervals $I_\alpha(c_j)$ ($\alpha \in \{4, \ldots, 10\}$) alone are not sufficient to decompose $I(c_j)$ exactly. In fact, if we prescribe any subset of the first three sub-intervals in a decomposition, we can complete it to an exact decomposition of $I(c_j)$ using some of the last seven intervals, in all cases but one: if none of the first three sub-intervals is part of the decomposition, the best we can do is to approximate $I(c_j)$ by, for instance, $I_9(c_j)$, resulting in a total change of 1 for the interval.

The CVP instance has $k = 20s = 10t$ allowed segments. The first $3t$ segments correspond to pairs $(y_i, c_j)$ where $y_i \in \{x_i, \bar{x}_i\}$ is a literal and $c_j$ is a clause involving $y_i$. Consider such a pair $(y_i, c_j)$ and assume that $y_i$ is the $\alpha$-th literal of $c_j$, and $c_j$ is the $\beta$-th clause containing $y_i$ (thus $\alpha, \beta \in \{1, 2, 3\}$). The segment associated to the pair $(y_i, c_j)$ is $S(y_i, c_j) := (I_\beta(y_i), I_\alpha(c_j))$. The last $7t$ segments are of the form $S_\gamma(c_j) := (\emptyset, I_\gamma(c_j))$ where $c_j$ is a clause and $\gamma \in \{4, \ldots, 10\}$. We denote the resulting set of segments by $\mathcal{S} = \{S_1, \ldots, S_k\}$. This concludes the description of the reduction. Note that the reduction is clearly polynomial.

Now suppose we have a truth assignment that satisfies $\sigma$ of the clauses of $\Phi$. Then we can find an approximate decomposition of $A$ with a total change of $t - \sigma$ by summing all $3s$ segments of the form $S(y_i, c_j)$, where $y_i = x_i$ if $x_i$ is set to true and $y_i = \bar{x}_i$ if $x_i$ is set to false, together with a suitable subset of the segments $S_\gamma(c_j)$.

Conversely, assume that we have an approximate decomposition $B := \sum_{j=1}^{k} u_j S_j$ of $A$ with a total change of $TC = \|A - B\|_1$. Because $A$ is binary, we may assume that $u$ is also binary. We say that a variable $x_i$ is coherent (w.r.t. $B$) if the deviation $TC(x_i)$ for the interval $I(x_i)$ is zero, and incoherent otherwise. Similarly, we say that a clause $c_j$ is satisfied (again, w.r.t. $B$) if the deviation $TC(c_j)$ for the interval $I(c_j)$ is zero, and unsatisfied otherwise.

We can modify the approximation $B = \sum_{j=1}^{k} u_j S_j$ so as to make all variables coherent, without increasing TC, for the following reasons. First, while there exist indices $\alpha$, $i$, $j$ and $j'$ such that $S(x_i, c_j)$ and $S(\bar{x}_i, c_{j'})$ are both used in the decomposition and $c_j$, $c_{j'}$ are the $\alpha$-th clauses respectively containing $x_i$ and $\bar{x}_i$, we can remove one of these two segments from the decomposition and change the segments of the form $S_\gamma(c_j)$ or $S_\gamma(c_{j'})$ that are used, in such a way that TC does not increase. More precisely, $TC(x_i)$ decreases by at least one, and $TC(c_j)$ or $TC(c_{j'})$, but not both, increases by at most one. Thus we can assume that, for all indices $\alpha$, $i$, $j$ and $j'$ such that $c_j$ is the $\alpha$-th clause containing $x_i$ and $c_{j'}$ is the $\alpha$-th clause containing $\bar{x}_i$, at most one of the two segments $S(x_i, c_j)$ and $S(\bar{x}_i, c_{j'})$ is used in the decomposition. Similarly, we can assume that, for every such $i$, $j$ and $j'$, exactly one of the two segments $S(x_i, c_j)$ and $S(\bar{x}_i, c_{j'})$ is used in the decomposition.
Second, while some variable $x_i$ remains incoherent, we can replace a segment of the form $S(y_i, c_j)$, where $y_i \in \{x_i, \bar{x}_i\}$, with a segment of the form $S(\bar{y}_i, c_{j'})$ and change some segments of the form $S_\gamma(c_j)$ or $S_\gamma(c_{j'})$, ensuring that TC does not increase (again, $TC(x_i)$ decreases by at least one and either $TC(c_j)$ or $TC(c_{j'})$ increases by at most one). Therefore, we can assume that all variables are coherent.

Now, by interpreting the relevant part of the decomposition of $B$ as a truth assignment, we obtain truth values for the variables of $\Phi$ satisfying at least $t - TC$ of its clauses. Letting $OPT(\Phi)$ and $OPT(A, \mathcal{S})$ denote, respectively, the maximum number of clauses of $\Phi$ that can be simultaneously satisfied and the optimal total change of an approximation of $A$ using the segments in $\mathcal{S}$, we have $OPT(\Phi) = t - OPT(A, \mathcal{S})$. There exists some $\delta \in (0, 1)$ such that it is NP-hard to distinguish, among all MAX-3SAT6 instances $\Phi$ with $t$ clauses, instances such that $OPT(\Phi) = t$ from instances such that $OPT(\Phi) < \delta t$. (This is one of the many consequences of the PCP theorem, see, for instance, [12].) Using the above reduction, we conclude the following: it is NP-hard to distinguish, among all instances $(A, \mathcal{S})$ of the restricted CVP, instances such that $OPT(A, \mathcal{S}) = 0$ from instances such that $OPT(A, \mathcal{S}) > (1 - \delta)\,\frac{n}{5}$. The result follows (we may take, for instance, $\varepsilon = (1 - \delta)/5$).

4 Incorporating the beam-on time into the objective function

In this section we generalize the results of this paper to the case where we do not only want to minimize the total change, but a combination of the total change and the sum of the coefficients $u_j$ for $j \in [k]$, that is,
\[ \min_{u \in \mathbb{Z}_+^k} \; \Big\{ \alpha\, \|a - b\|_1 + \beta \sum_{j=1}^{k} u_j \Big\}, \]
where $b = \sum_{j=1}^{k} u_j g_j$ and $\alpha$ and $\beta$ are arbitrary nonnegative importance factors. Throughout this section, we study the CVP under this objective function. The resulting problem is denoted CVP-BOT. Recall that in the IMRT context the generators from $G$ represent segments that can be generated by the MLC. The coefficient $u_j$ associated to the segment $g_j$ for $j \in [k]$ gives the total time during which the patient is irradiated with the leaves of the MLC in a certain position. Hence, the sum of the coefficients exactly corresponds to the total time during which the patient is irradiated (the beam-on time). In order to avoid, as much as possible, overdosage in the healthy tissues due to unavoidable diffusion effects, it is desirable to take the beam-on time into account in the objective function.

We observe here that the main results of the previous sections still hold with the new objective function. For the hardness results, this is obvious because taking $\alpha = 1$ and $\beta = 0$ gives back the original objective function.

Corollary 8. Let $G$ and $D$ be as described in Section 2.1, let $a \in \mathbb{Z}^d$, and let $OPT$ denote the optimal value of the corresponding CVP-BOT instance. If we redefine the costs of the arcs of $D$ by letting the cost of arcs of the form $(j, j+1)$ or $(j+1, j)$ (for $j \in [d]$) be $\alpha$, and the costs of the other arcs be $\beta$, then $OPT$ equals the minimum cost of a flow in $D$.

Proof. The result follows easily from the proof of Theorem 1.
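In code, Corollary 8 amounts to re-weighting the network of the earlier flow sketch (hypothetical names; $\alpha$ and $\beta$ are taken to be integers here, since the network simplex routine expects integral weights):

```python
def reweight_for_cvp_bot(D, d, intervals, alpha, beta):
    """Change the arc costs of the network built by cvp_via_min_cost_flow:
    deviation arcs now cost alpha, generator arcs cost beta."""
    for j in range(1, d + 1):
        D[j][j + 1]["weight"] = alpha
        D[j + 1][j]["weight"] = alpha
    for (l, r) in intervals:
        D[l][r + 1]["weight"] = beta
    return D
```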

We can also find an approximation algorithm with an additive error of at most $\alpha \sqrt{\ln 2\,/\,2}\; d^{3/2}$. The algorithm is identical to Algorithm 1, except that it uses the following linear relaxation:
\[
\begin{array}{lll}
\text{(LP$'$)} & \min & \alpha \displaystyle\sum_{i=1}^{d} (\gamma_i + \delta_i) + \beta \sum_{j=1}^{k} u_j \\[1ex]
 & \text{s.t.} & \displaystyle\sum_{j=1}^{k} u_j g_{ij} - \gamma_i + \delta_i = a_i \qquad i \in [d]\\
 & & u_j \ge 0 \qquad j \in [k]\\
 & & \gamma_i \ge 0 \qquad i \in [d]\\
 & & \delta_i \ge 0 \qquad i \in [d].
\end{array}
\]
Again, $b = \sum_{j=1}^{k} u_j g_j$. We refer to this algorithm as Algorithm 1$'$. Adapting the proof of Theorem 4 yields the following result.

Corollary 9. Algorithm 1$'$ is a randomized polynomial-time algorithm that approximates CVP-BOT within an expected additive error of $\alpha \sqrt{\ln 2\,/\,2}\; d^{3/2} \approx 0.589\,\alpha\, d^{3/2}$.

Proof. By linearity of expectation, we have:
\[ E\Big[ \alpha \|a - \tilde b\|_1 + \beta \sum_{j=1}^{k} \tilde u_j \Big] = \alpha\, E\big[ \|a - \tilde b\|_1 \big] + \beta\, E\Big[ \sum_{j=1}^{k} \tilde u_j \Big] \le \alpha\, E\big[ \|a - b^*\|_1 \big] + \beta \sum_{j=1}^{k} u_j^* + \alpha\, E\big[ \|b^* - \tilde b\|_1 \big] \le \text{LP-BOT}^* + \alpha \sqrt{\frac{\ln 2}{2}}\; d^{3/2}. \]

Finally, by derandomizing Algorithm 1$'$, we obtain the following result.

Corollary 10. There exists a polynomial-time algorithm that approximates CVP-BOT within an additive error of $\alpha \sqrt{\ln 2\,/\,2}\; d^{3/2}$.
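Similarly, Algorithm 1$'$ only changes the objective vector of the LP in the earlier rounding sketch (hypothetical names):

```python
import numpy as np

def cvp_bot_objective(k, d, alpha, beta):
    """Objective of (LP'): variables ordered as [u (k), gamma (d), delta (d)].
    Use it in place of the vector c inside round_cvp to obtain Algorithm 1'."""
    return np.concatenate([beta * np.ones(k), alpha * np.ones(2 * d)])
```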

5 Relation to discrepancy theory and open questions

We begin this section by relating our results to discrepancy theory and conclude with the open questions. For background on discrepancy theory see, for instance, Alon and Spencer [1], and Beck and Sós [4]. In this section, $A$ is again an $m \times n$ matrix, but no longer an intensity matrix. Rather, we assume that $A$ is an $m \times n$ binary matrix whose rows may be interpreted as the characteristic vectors of the sets of a set system of size $m$ on $n$ points. This assumption keeps us within the realm of combinatorial discrepancy. Note that many of the notions and results below are nevertheless valid for general $m \times n$ matrices $A$.

The linear discrepancy of $A$ with respect to a vector $w \in [0,1]^n$ is defined by
\[ \operatorname{lindisc}(A, w) := \min_{z \in \{0,1\}^n} \|A(w - z)\|_\infty, \tag{8} \]
and the linear discrepancy of $A$ by
\[ \operatorname{lindisc}(A) := \max_{w \in [0,1]^n} \operatorname{lindisc}(A, w). \tag{9} \]
For each vector $w \in [0,1]^n$, we may regard each $z \in \{0,1\}^n$ as being obtained by rounding the coordinates of $w$ in some way, and the linear discrepancy of $A$ with respect to $w$ as a measure of how close the rounded vector $z$ is to the original (fractional) vector $w$. The linear discrepancy of $A$ is closely related to the discrepancy of $A$ and of its submatrices. The discrepancy of $A$ is defined as
\[ \operatorname{disc}(A) := \min_{\chi \in \{-1,1\}^n} \|A\chi\|_\infty = 2\, \operatorname{lindisc}\big(A, \tfrac{1}{2}\mathbf{1}\big), \]
and the hereditary discrepancy $\operatorname{herdisc}(A)$ as the maximum discrepancy of a submatrix of $A$. We have $\tfrac{1}{2}\operatorname{disc}(A) \le \operatorname{lindisc}(A) \le \operatorname{herdisc}(A)$, where the first inequality is trivial and the second is due to Lovász, Spencer and Vesztergombi [22]. For $p \in [1, \infty)$ and $w \in [0,1]^n$, we may also define the $\ell_p$ linear discrepancy of $A$ (with respect to $w$) by replacing the $\ell_\infty$-norm in (8) and (9) by the $\ell_p$-norm and scaling down by a factor of $m^{1/p}$. These two quantities are respectively denoted by $\operatorname{lindisc}_p(A, w)$ and $\operatorname{lindisc}_p(A)$. This definition of $\ell_p$ linear discrepancy is compatible with the existing definition of $\ell_p$ discrepancy, see Matoušek [23].

In order to relate the CVP to discrepancy theory, consider the $d \times k$ matrix $G := (g_1, \ldots, g_k)$ with the $k$ generators of $G$ as columns, and let $(b^*, \gamma^*, u^*)$ be an optimal solution of (LP). Thus $u^*$ minimizes $\|a - Gu\|_1$ among all nonnegative vectors $u \in \mathbb{R}_+^k$. Now write $u^* = \lfloor u^* \rfloor + \{u^*\}$, where the $i$-th components of $\lfloor u^* \rfloor$ and $\{u^*\}$ are the floor and the fractional part of $u_i^*$, respectively. Our strategy to approximate the CVP can be described as "project and round". After projecting the target vector $a$ to the point $b^* = Gu^*$ of the cone generated by $G$ that is closest to $a$, we round $b^* = Gu^*$ to a feasible solution $\tilde b = G\tilde u$ with $\tilde u = \lfloor u^* \rfloor + v$ and $v \in \{0,1\}^k$. The analysis given in the proof of Theorem 4 uses the triangle inequality, and the fact that no feasible solution $b$ can be closer to $a$ than $b^*$, to get:
\[ \|a - G\tilde u\|_1 \le \|a - Gu^*\|_1 + \|Gu^* - G\tilde u\|_1 \le OPT + \|G(\{u^*\} - v)\|_1. \]
The second term in the above inequalities determines the approximation guarantee of our algorithm. We showed that the rounding vector $v \in \{0,1\}^k$ found by randomized rounding achieves $\|G(\{u^*\} - v)\|_1 = O(d^{3/2})$ on average. Obtaining the best possible rounding vector $v$ amounts to determining $\min_{v \in \{0,1\}^k} \|G(\{u^*\} - v)\|_1$, a quantity that equals $d$ times the $\ell_1$ linear discrepancy of $G$ with respect to $\{u^*\}$.

Assume now that $(b^*, \gamma^*, u^*)$ is an extremal optimal solution of (LP). As shown above, this implies that at most $d$ of the components of $u^*$ are nonzero. This means that we can restrict the matrix $G$ to a $d \times d$ matrix, and forget all but $d$ components of the vectors $\{u^*\}$ and $v$. From now on, we assume $G \in \{0,1\}^{d \times d}$, $\{u^*\} \in [0,1]^d$ and $v \in \{0,1\}^d$. It follows from a result of Spencer [26] that $\operatorname{herdisc}(G) < 6\sqrt{d}$. By Lovász et al.'s inequality stated above and the easy inequality $d^{-1}\|x\|_1 \le \|x\|_\infty$ for a vector $x \in \mathbb{R}^d$, we then have:
\[ \operatorname{lindisc}_1(G) \le \operatorname{lindisc}(G) \le \operatorname{herdisc}(G) < 6\sqrt{d}. \]
This implies that there always exists $v \in \{0,1\}^d$ with $\|G(\{u^*\} - v)\|_1 < 6\, d^{3/2}$. Thus a result similar to Theorem 4 readily follows from two non-trivial results in discrepancy theory. However, we remark that our proof of Theorem 4, which consists in analyzing the very natural Algorithm 1, is elementary (and gives a better constant than the one obtained by the above argument).

Here are some questions we left open for future work. The first type of question concerns the tightness of the (in)approximability results developed here. We proved that there is no polynomial-time approximation algorithm with an (additive) approximation guarantee of $(\ln 2 - \varepsilon)\,d$, unless P = NP. On the other hand, we gave an approximation algorithm with an $O(d^{3/2})$ approximation guarantee. What is the true approximability threshold of the CVP? Should that question be too difficult, determining the integrality gap of the LP formulation of the CVP used in Section 2.3 would already be interesting, in our opinion. Here, the integrality gap equals the worst difference $OPT - LP^*$ over instances of the CVP. As the above discussion indicates, this is essentially a question in discrepancy theory.

The second type of question addresses further ways in which our results could be extended. For instance, it would be interesting to understand how our results generalize when the $\ell_1$-norm is replaced by an $\ell_p$-norm with $p \in (1, \infty]$.

The third type of question is more algorithmic and concerns the application of the CVP to IMRT. There exists a simple, direct algorithm for checking whether an intensity matrix $A$ can be decomposed exactly under the minimum separation constraint or not [17, 14]. Is there a simple, direct algorithm for approximately decomposing an intensity matrix under this constraint as well?

6 Acknowledgments

We thank Çiğdem Güler and Horst Hamacher from the University of Kaiserslautern for several discussions in the early stage of this research project. We also thank Maude Gathy, Guy Louchard and Yvik Swan from Université Libre de Bruxelles, and Konrad Engel from Universität Rostock for discussions.

References

[1] N. Alon and J. Spencer, The Probabilistic Method, Wiley-Interscience Series in Discrete Mathematics and Optimization, New York.

[2] S. Arora, C. Lund, R. Motwani, M. Sudan and M. Szegedy, Proof verification and the hardness of approximation problems, Journal of the ACM, 45(3), 1998.

[3] D. Baatar, H.W. Hamacher, M. Ehrgott and G.J. Woeginger, Decomposition of integer matrices and multileaf collimator sequencing, Discrete Appl. Math., 152, 6-34.

[4] J. Beck and V. Sós, Discrepancy theory, in: Handbook of Combinatorics, North-Holland, Amsterdam, 1995.

[5] N. Boland, H.W. Hamacher and F. Lenzen, Minimizing beam-on time in cancer radiation treatment using multileaf collimators, Networks, 43(4).

[6] T.R. Bortfeld, D.L. Kahler, T.J. Waldron and A.L. Boyer, X-ray field compensation with multileaf collimators, Int. J. Radiat. Oncol. Biol. Phys., 28.

[7] J. Cardinal, S. Fiorini and G. Joret, Tight results on minimum entropy set cover, Algorithmica, 51(1).

[8] K. Engel, A new algorithm for optimal multileaf collimator field segmentation, Discrete Appl. Math., 152(1-3), 35-51.

[9] K. Engel and T. Gauer, A dose optimization method for electron radiotherapy using randomized aperture beams, submitted to Phys. Med. Biol.

[10] C. Engelbeen and S. Fiorini, Constrained decompositions of integer matrices and their applications to intensity modulated radiation therapy, Networks.

[11] U. Feige, A threshold of ln n for approximating set cover, Journal of the ACM, 45(4), 1998.

[12] U. Feige, L. Lovász and P. Tetali, Approximating min sum set cover, Algorithmica, 40(4).

[13] T. Kalinowski, A duality based algorithm for multileaf collimator field segmentation with interleaf collision constraint, Discrete Appl. Math., 152, 52-88.

[14] T. Kalinowski, Realization of intensity modulated radiation fields using multileaf collimators, in: Information Transfer and Combinatorics (R. Ahlswede et al., eds.), LNCS vol. 4123, Springer-Verlag.

[15] T. Kalinowski, Multileaf collimator shape matrix decomposition, in: Optimization in Medicine and Biology (G.J. Lim, ed.), Auerbach Publishers Inc.

[16] T. Kalinowski, Reducing the tongue-and-groove underdosage in MLC shape matrix decomposition, Algorithmic Operations Research, 3(2).

[17] S. Kamath, S. Sahni, J. Li, J. Palta and S. Ranka, Leaf sequencing algorithms for segmented multileaf collimation, Phys. Med. Biol., 48(3).

[18] S. Kamath, S. Sahni, J. Palta, S. Ranka and J. Li, Optimal leaf sequencing with elimination of tongue-and-groove underdosage, Phys. Med. Biol., 49, N7-N19.

[19] S. Kamath, S. Sahni, S. Ranka, J. Li and J. Palta, A comparison of step-and-shoot leaf sequencing algorithms that eliminate tongue-and-groove effects, Phys. Med. Biol., 49.

[20] A. Kiesel and T. Kalinowski, MLC shape matrix decomposition without tongue-and-groove underdosage, submitted to Networks.

[21] A. Kiesel and T. Gauer, Approximated segmentation considering technical and dosimetric constraints in intensity-modulated radiation therapy, submitted to JORS.

[22] L. Lovász, J. Spencer and K. Vesztergombi, Discrepancy of set systems and matrices, Eur. J. Comb., 7, 1986.

[23] J. Matoušek, An L_p version of the Beck-Fiala conjecture, Eur. J. Comb., 19, 1998.

[24] D. Micciancio and O. Regev, Lattice-based cryptography, in: Post-Quantum Cryptography (D.J. Bernstein and J. Buchmann, eds.), Springer.

[25] W. Que, J. Kung and J. Dai, Tongue-and-groove effect in intensity modulated radiotherapy with static multileaf collimator fields, Phys. Med. Biol., 49.

[26] J. Spencer, Six standard deviations suffice, Trans. Am. Math. Soc., 289, 1985.

[27] V. Vazirani, Approximation Algorithms, Springer, 2001.

Appendix

Lemma. Let $l$ be a positive integer and let $X_1, X_2, \ldots, X_l$ be $l$ independent random variables such that, for all $j \in [l]$, $P[X_j = 1 - p_j] = p_j$ and $P[X_j = -p_j] = 1 - p_j$. Then
\[ E\big[\, |X_1 + X_2 + \cdots + X_l| \,\big] \le \sqrt{\frac{\ln 2}{2}\, l}. \]

Proof. Let $X := X_1 + X_2 + \cdots + X_l$. By Lemmas A.1.6 and A.1.8 from Alon and Spencer [1], we have
\[ E\big[ e^{\lambda X} \big] = \prod_{j=1}^{l} \left( p_j\, e^{(1-p_j)\lambda} + (1 - p_j)\, e^{-p_j \lambda} \right) \le e^{\frac{\lambda^2}{8} l} \]
for every $\lambda \in \mathbb{R}$. Hence,
\[ e^{E[\lambda |X|]} \le E\big[ e^{\lambda |X|} \big] \le E\big[ e^{\lambda X} \big] + E\big[ e^{-\lambda X} \big] \le 2\, e^{\frac{\lambda^2}{8} l}, \]
where the first inequality follows from Jensen's inequality (and the fact that the exponential is convex) and the second inequality follows from the easy inequality $e^{|x|} \le e^{x} + e^{-x}$. Taking logarithms and letting $\lambda = \sqrt{8 \ln 2 / l}$, we get the result; indeed, this choice gives $E[|X|] \le \frac{\ln 2}{\lambda} + \frac{\lambda\, l}{8} = \sqrt{\frac{\ln 2}{2}\, l}$.

Figure 2: The multileaf collimator (MLC).
Figure 3: Leaf positions and irradiation times determining an exact decomposition of a $4 \times 4$ intensity matrix.
Figure 4: The network for an instance with $d = 6$.
Figure 5: The sub-intervals $I_1(x_i), I_2(x_i), I_3(x_i)$ and $I_1(\bar{x}_i), I_2(\bar{x}_i), I_3(\bar{x}_i)$ used for the variable gadgets.

Figure 6: The sub-intervals $I_1(c_j), \ldots, I_{10}(c_j)$ used for the clause gadgets.


More information

Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Packing, MAX-SAT

Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Packing, MAX-SAT ,7 CMPUT 675: Approximation Algorithms Fall 2011 Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Pacing, MAX-SAT Lecturer: Mohammad R. Salavatipour Scribe: Weitian Tong 6.1 Bin Pacing Problem Recall the bin pacing

More information

The discrepancy of permutation families

The discrepancy of permutation families The discrepancy of permutation families J. H. Spencer A. Srinivasan P. Tetali Abstract In this note, we show that the discrepancy of any family of l permutations of [n] = {1, 2,..., n} is O( l log n),

More information

ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES

ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES SANTOSH N. KABADI AND ABRAHAM P. PUNNEN Abstract. Polynomially testable characterization of cost matrices associated

More information

arxiv: v1 [cs.dm] 29 Oct 2012

arxiv: v1 [cs.dm] 29 Oct 2012 arxiv:1210.7684v1 [cs.dm] 29 Oct 2012 Square-Root Finding Problem In Graphs, A Complete Dichotomy Theorem. Babak Farzad 1 and Majid Karimi 2 Department of Mathematics Brock University, St. Catharines,

More information

18.10 Addendum: Arbitrary number of pigeons

18.10 Addendum: Arbitrary number of pigeons 18 Resolution 18. Addendum: Arbitrary number of pigeons Razborov s idea is to use a more subtle concept of width of clauses, tailor made for this particular CNF formula. Theorem 18.22 For every m n + 1,

More information

On improving matchings in trees, via bounded-length augmentations 1

On improving matchings in trees, via bounded-length augmentations 1 On improving matchings in trees, via bounded-length augmentations 1 Julien Bensmail a, Valentin Garnero a, Nicolas Nisse a a Université Côte d Azur, CNRS, Inria, I3S, France Abstract Due to a classical

More information

Lecture 4. 1 FPTAS - Fully Polynomial Time Approximation Scheme

Lecture 4. 1 FPTAS - Fully Polynomial Time Approximation Scheme Theory of Computer Science to Msc Students, Spring 2007 Lecturer: Dorit Aharonov Lecture 4 Scribe: Ram Bouobza & Yair Yarom Revised: Shahar Dobzinsi, March 2007 1 FPTAS - Fully Polynomial Time Approximation

More information

Lecture 5 January 16, 2013

Lecture 5 January 16, 2013 UBC CPSC 536N: Sparse Approximations Winter 2013 Prof. Nick Harvey Lecture 5 January 16, 2013 Scribe: Samira Samadi 1 Combinatorial IPs 1.1 Mathematical programs { min c Linear Program (LP): T x s.t. a

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

PCPs and Inapproximability Gap-producing and Gap-Preserving Reductions. My T. Thai

PCPs and Inapproximability Gap-producing and Gap-Preserving Reductions. My T. Thai PCPs and Inapproximability Gap-producing and Gap-Preserving Reductions My T. Thai 1 1 Hardness of Approximation Consider a maximization problem Π such as MAX-E3SAT. To show that it is NP-hard to approximation

More information

Nonnegative Integral Subset Representations of Integer Sets

Nonnegative Integral Subset Representations of Integer Sets Nonnegative Integral Subset Representations of Integer Sets Michael J. Collins David Kempe Jared Saia Maxwell Young Abstract We consider an integer-subset representation problem motivated by a medical

More information

Linear and Integer Programming - ideas

Linear and Integer Programming - ideas Linear and Integer Programming - ideas Paweł Zieliński Institute of Mathematics and Computer Science, Wrocław University of Technology, Poland http://www.im.pwr.wroc.pl/ pziel/ Toulouse, France 2012 Literature

More information

Essential facts about NP-completeness:

Essential facts about NP-completeness: CMPSCI611: NP Completeness Lecture 17 Essential facts about NP-completeness: Any NP-complete problem can be solved by a simple, but exponentially slow algorithm. We don t have polynomial-time solutions

More information

3.3 Easy ILP problems and totally unimodular matrices

3.3 Easy ILP problems and totally unimodular matrices 3.3 Easy ILP problems and totally unimodular matrices Consider a generic ILP problem expressed in standard form where A Z m n with n m, and b Z m. min{c t x : Ax = b, x Z n +} (1) P(b) = {x R n : Ax =

More information

On Locating-Dominating Codes in Binary Hamming Spaces

On Locating-Dominating Codes in Binary Hamming Spaces Discrete Mathematics and Theoretical Computer Science 6, 2004, 265 282 On Locating-Dominating Codes in Binary Hamming Spaces Iiro Honkala and Tero Laihonen and Sanna Ranto Department of Mathematics and

More information

Rank Determination for Low-Rank Data Completion

Rank Determination for Low-Rank Data Completion Journal of Machine Learning Research 18 017) 1-9 Submitted 7/17; Revised 8/17; Published 9/17 Rank Determination for Low-Rank Data Completion Morteza Ashraphijuo Columbia University New York, NY 1007,

More information

Lec. 2: Approximation Algorithms for NP-hard Problems (Part II)

Lec. 2: Approximation Algorithms for NP-hard Problems (Part II) Limits of Approximation Algorithms 28 Jan, 2010 (TIFR) Lec. 2: Approximation Algorithms for NP-hard Problems (Part II) Lecturer: Prahladh Harsha Scribe: S. Ajesh Babu We will continue the survey of approximation

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

The Maximum Flow Problem with Disjunctive Constraints

The Maximum Flow Problem with Disjunctive Constraints The Maximum Flow Problem with Disjunctive Constraints Ulrich Pferschy Joachim Schauer Abstract We study the maximum flow problem subject to binary disjunctive constraints in a directed graph: A negative

More information

arxiv: v1 [math.oc] 3 Jan 2019

arxiv: v1 [math.oc] 3 Jan 2019 The Product Knapsack Problem: Approximation and Complexity arxiv:1901.00695v1 [math.oc] 3 Jan 2019 Ulrich Pferschy a, Joachim Schauer a, Clemens Thielen b a Department of Statistics and Operations Research,

More information

Welsh s problem on the number of bases of matroids

Welsh s problem on the number of bases of matroids Welsh s problem on the number of bases of matroids Edward S. T. Fan 1 and Tony W. H. Wong 2 1 Department of Mathematics, California Institute of Technology 2 Department of Mathematics, Kutztown University

More information

A bad example for the iterative rounding method for mincost k-connected spanning subgraphs

A bad example for the iterative rounding method for mincost k-connected spanning subgraphs A bad example for the iterative rounding method for mincost k-connected spanning subgraphs Ashkan Aazami a, Joseph Cheriyan b,, Bundit Laekhanukit c a Dept. of Comb. & Opt., U. Waterloo, Waterloo ON Canada

More information

Building Graphs from Colored Trees

Building Graphs from Colored Trees Building Graphs from Colored Trees Rachel M. Esselstein CSUMB Department of Mathematics and Statistics 100 Campus Center Dr. Building 53 Seaside, CA 93955, U.S.A. resselstein@csumb.edu Peter Winkler Department

More information

Zero-sum square matrices

Zero-sum square matrices Zero-sum square matrices Paul Balister Yair Caro Cecil Rousseau Raphael Yuster Abstract Let A be a matrix over the integers, and let p be a positive integer. A submatrix B of A is zero-sum mod p if the

More information

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table

More information

On the Logarithmic Calculus and Sidorenko s Conjecture

On the Logarithmic Calculus and Sidorenko s Conjecture On the Logarithmic Calculus and Sidorenko s Conjecture by Xiang Li A thesis submitted in conformity with the requirements for the degree of Msc. Mathematics Graduate Department of Mathematics University

More information

Approximation algorithms based on LP relaxation

Approximation algorithms based on LP relaxation CSE 594: Combinatorial and Graph Algorithms Lecturer: Hung Q. Ngo SUNY at Buffalo, Spring 2005 Last update: March 10, 2005 Approximation algorithms based on LP relaxation There are two fundamental approximation

More information

Practical Analysis of Key Recovery Attack against Search-LWE Problem

Practical Analysis of Key Recovery Attack against Search-LWE Problem Practical Analysis of Key Recovery Attack against Search-LWE Problem The 11 th International Workshop on Security, Sep. 13 th 2016 Momonari Kudo, Junpei Yamaguchi, Yang Guo and Masaya Yasuda 1 Graduate

More information

Sampling Contingency Tables

Sampling Contingency Tables Sampling Contingency Tables Martin Dyer Ravi Kannan John Mount February 3, 995 Introduction Given positive integers and, let be the set of arrays with nonnegative integer entries and row sums respectively

More information

SDP Relaxations for MAXCUT

SDP Relaxations for MAXCUT SDP Relaxations for MAXCUT from Random Hyperplanes to Sum-of-Squares Certificates CATS @ UMD March 3, 2017 Ahmed Abdelkader MAXCUT SDP SOS March 3, 2017 1 / 27 Overview 1 MAXCUT, Hardness and UGC 2 LP

More information

A NOTE ON THE PRECEDENCE-CONSTRAINED CLASS SEQUENCING PROBLEM

A NOTE ON THE PRECEDENCE-CONSTRAINED CLASS SEQUENCING PROBLEM A NOTE ON THE PRECEDENCE-CONSTRAINED CLASS SEQUENCING PROBLEM JOSÉ R. CORREA, SAMUEL FIORINI, AND NICOLÁS E. STIER-MOSES School of Business, Universidad Adolfo Ibáñez, Santiago, Chile; correa@uai.cl Department

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 8 Luca Trevisan September 19, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 8 Luca Trevisan September 19, 2017 U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 8 Luca Trevisan September 19, 2017 Scribed by Luowen Qian Lecture 8 In which we use spectral techniques to find certificates of unsatisfiability

More information

A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization

A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization James B. Orlin Sloan School of Management, MIT Cambridge, MA 02139 jorlin@mit.edu Abstract. We consider the problem of minimizing

More information

Matchings in hypergraphs of large minimum degree

Matchings in hypergraphs of large minimum degree Matchings in hypergraphs of large minimum degree Daniela Kühn Deryk Osthus Abstract It is well known that every bipartite graph with vertex classes of size n whose minimum degree is at least n/2 contains

More information

APPROXIMATION FOR DOMINATING SET PROBLEM WITH MEASURE FUNCTIONS

APPROXIMATION FOR DOMINATING SET PROBLEM WITH MEASURE FUNCTIONS Computing and Informatics, Vol. 23, 2004, 37 49 APPROXIMATION FOR DOMINATING SET PROBLEM WITH MEASURE FUNCTIONS Ning Chen, Jie Meng, Jiawei Rong, Hong Zhu Department of Computer Science, Fudan University

More information

GRAPH ALGORITHMS Week 7 (13 Nov - 18 Nov 2017)

GRAPH ALGORITHMS Week 7 (13 Nov - 18 Nov 2017) GRAPH ALGORITHMS Week 7 (13 Nov - 18 Nov 2017) C. Croitoru croitoru@info.uaic.ro FII November 12, 2017 1 / 33 OUTLINE Matchings Analytical Formulation of the Maximum Matching Problem Perfect Matchings

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

C&O 355 Mathematical Programming Fall 2010 Lecture 18. N. Harvey

C&O 355 Mathematical Programming Fall 2010 Lecture 18. N. Harvey C&O 355 Mathematical Programming Fall 2010 Lecture 18 N. Harvey Network Flow Topics Max Flow / Min Cut Theorem Total Unimodularity Directed Graphs & Incidence Matrices Proof of Max Flow / Min Cut Theorem

More information

Monotone Submodular Maximization over a Matroid

Monotone Submodular Maximization over a Matroid Monotone Submodular Maximization over a Matroid Yuval Filmus January 31, 2013 Abstract In this talk, we survey some recent results on monotone submodular maximization over a matroid. The survey does not

More information

An Approximation Algorithm for MAX-2-SAT with Cardinality Constraint

An Approximation Algorithm for MAX-2-SAT with Cardinality Constraint An Approximation Algorithm for MAX-2-SAT with Cardinality Constraint Thomas Hofmeister Informatik 2, Universität Dortmund, 44221 Dortmund, Germany th01@ls2.cs.uni-dortmund.de Abstract. We present a randomized

More information

New lower bounds for hypergraph Ramsey numbers

New lower bounds for hypergraph Ramsey numbers New lower bounds for hypergraph Ramsey numbers Dhruv Mubayi Andrew Suk Abstract The Ramsey number r k (s, n) is the minimum N such that for every red-blue coloring of the k-tuples of {1,..., N}, there

More information

A Characterization of Graphs with Fractional Total Chromatic Number Equal to + 2

A Characterization of Graphs with Fractional Total Chromatic Number Equal to + 2 A Characterization of Graphs with Fractional Total Chromatic Number Equal to + Takehiro Ito a, William. S. Kennedy b, Bruce A. Reed c a Graduate School of Information Sciences, Tohoku University, Aoba-yama

More information

On the Polyhedral Structure of a Multi Item Production Planning Model with Setup Times

On the Polyhedral Structure of a Multi Item Production Planning Model with Setup Times CORE DISCUSSION PAPER 2000/52 On the Polyhedral Structure of a Multi Item Production Planning Model with Setup Times Andrew J. Miller 1, George L. Nemhauser 2, and Martin W.P. Savelsbergh 2 November 2000

More information

Mathematical Programs Linear Program (LP)

Mathematical Programs Linear Program (LP) Mathematical Programs Linear Program (LP) Integer Program (IP) Can be efficiently solved e.g., by Ellipsoid Method Cannot be efficiently solved Cannot be efficiently solved assuming P NP Combinatorial

More information

Approximations for MAX-SAT Problem

Approximations for MAX-SAT Problem Approximations for MAX-SAT Problem Chihao Zhang BASICS, Shanghai Jiao Tong University Oct. 23, 2012 Chihao Zhang (BASICS@SJTU) Approximations for MAX-SAT Problem Oct. 23, 2012 1 / 16 The Weighted MAX-SAT

More information

The maximum edge-disjoint paths problem in complete graphs

The maximum edge-disjoint paths problem in complete graphs Theoretical Computer Science 399 (2008) 128 140 www.elsevier.com/locate/tcs The maximum edge-disjoint paths problem in complete graphs Adrian Kosowski Department of Algorithms and System Modeling, Gdańsk

More information

Semi-Simultaneous Flows and Binary Constrained (Integer) Linear Programs

Semi-Simultaneous Flows and Binary Constrained (Integer) Linear Programs DEPARTMENT OF MATHEMATICAL SCIENCES Clemson University, South Carolina, USA Technical Report TR2006 07 EH Semi-Simultaneous Flows and Binary Constrained (Integer Linear Programs A. Engau and H. W. Hamacher

More information

Approximations for MAX-SAT Problem

Approximations for MAX-SAT Problem Approximations for MAX-SAT Problem Chihao Zhang BASICS, Shanghai Jiao Tong University Oct. 23, 2012 Chihao Zhang (BASICS@SJTU) Approximations for MAX-SAT Problem Oct. 23, 2012 1 / 16 The Weighted MAX-SAT

More information

arxiv: v2 [cs.dm] 17 Dec 2009

arxiv: v2 [cs.dm] 17 Dec 2009 The interval constrained 3-coloring problem Jaroslaw Byrka Laura Sanità Andreas Karrenbauer arxiv:0907.3563v2 [cs.dm] 17 Dec 2009 Institute of Mathematics EPFL, Lausanne, Switzerland {Jaroslaw.Byrka,Andreas.Karrenbauer,Laura.Sanita}@epfl.ch

More information

Lecture Constructive Algorithms for Discrepancy Minimization

Lecture Constructive Algorithms for Discrepancy Minimization Approximation Algorithms Workshop June 13, 2011, Princeton Lecture Constructive Algorithms for Discrepancy Minimization Nikhil Bansal Scribe: Daniel Dadush 1 Combinatorial Discrepancy Given a universe

More information

Complexity of determining the most vital elements for the 1-median and 1-center location problems

Complexity of determining the most vital elements for the 1-median and 1-center location problems Complexity of determining the most vital elements for the -median and -center location problems Cristina Bazgan, Sonia Toubaline, and Daniel Vanderpooten Université Paris-Dauphine, LAMSADE, Place du Maréchal

More information

Multiplicativity of Maximal p Norms in Werner Holevo Channels for 1 < p 2

Multiplicativity of Maximal p Norms in Werner Holevo Channels for 1 < p 2 Multiplicativity of Maximal p Norms in Werner Holevo Channels for 1 < p 2 arxiv:quant-ph/0410063v1 8 Oct 2004 Nilanjana Datta Statistical Laboratory Centre for Mathematical Sciences University of Cambridge

More information

Advanced Linear Programming: The Exercises

Advanced Linear Programming: The Exercises Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z

More information

1 Shortest Vector Problem

1 Shortest Vector Problem Lattices in Cryptography University of Michigan, Fall 25 Lecture 2 SVP, Gram-Schmidt, LLL Instructor: Chris Peikert Scribe: Hank Carter Shortest Vector Problem Last time we defined the minimum distance

More information

Independent Transversals in r-partite Graphs

Independent Transversals in r-partite Graphs Independent Transversals in r-partite Graphs Raphael Yuster Department of Mathematics Raymond and Beverly Sackler Faculty of Exact Sciences Tel Aviv University, Tel Aviv, Israel Abstract Let G(r, n) denote

More information

Approximating maximum satisfiable subsystems of linear equations of bounded width

Approximating maximum satisfiable subsystems of linear equations of bounded width Approximating maximum satisfiable subsystems of linear equations of bounded width Zeev Nutov The Open University of Israel Daniel Reichman The Open University of Israel Abstract We consider the problem

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016 U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest

More information

Lecture 16 November 6th, 2012 (Prasad Raghavendra)

Lecture 16 November 6th, 2012 (Prasad Raghavendra) 6.841: Advanced Complexity Theory Fall 2012 Lecture 16 November 6th, 2012 (Prasad Raghavendra) Prof. Dana Moshkovitz Scribe: Geng Huang 1 Overview In this lecture, we will begin to talk about the PCP Theorem

More information

Topic: Primal-Dual Algorithms Date: We finished our discussion of randomized rounding and began talking about LP Duality.

Topic: Primal-Dual Algorithms Date: We finished our discussion of randomized rounding and began talking about LP Duality. CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Primal-Dual Algorithms Date: 10-17-07 14.1 Last Time We finished our discussion of randomized rounding and

More information

Lecture #21. c T x Ax b. maximize subject to

Lecture #21. c T x Ax b. maximize subject to COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the

More information

Complexity Theory. Jörg Kreiker. Summer term Chair for Theoretical Computer Science Prof. Esparza TU München

Complexity Theory. Jörg Kreiker. Summer term Chair for Theoretical Computer Science Prof. Esparza TU München Complexity Theory Jörg Kreiker Chair for Theoretical Computer Science Prof. Esparza TU München Summer term 2010 Lecture 19 Hardness of Approximation 3 Recap Recap: optimization many decision problems we

More information