A robust approach to the chance-constrained knapsack problem


Olivier Klopfenstein 1,2, Dritan Nace 2

1 France Télécom R&D, rue du Général Leclerc, Issy-les-Moulineaux Cedex 9, France
2 Université de Technologie de Compiègne, Laboratoire Heudiasyc UMR CNRS 6599, Compiègne Cedex, France.
olivier.klopfenstein@francetelecom.com, nace@utc.fr

June 6, 2006

Abstract

Chance-constrained programming is a relevant model for many concrete problems. However, it is known to be very hard to tackle directly. In this paper, the chance-constrained knapsack problem (CKP) is addressed. Relying on recent advances in robust optimization, a tractable combinatorial algorithm is proposed to solve CKP. It always provides feasible solutions for CKP. Moreover, for two specific classes of uncertain knapsack problems, it is proved to solve CKP to optimality.

1 Introduction.

Many concrete problems require taking uncertainty on input data into account. Several frameworks have been proposed over the last fifty years to deal with uncertainty in optimization problems. Among them, three main families of approaches can be distinguished: sensitivity analysis, stochastic optimization and robust optimization. As already remarked by many authors, sensitivity analysis is deeply different from the other two, since it does not impact the optimization process: it only tries to say how good a given solution is with respect to uncertainty (post-optimization analysis). In other words, sensitivity analysis does not propose any solution. By contrast, stochastic and robust optimization aim at providing a solution adapted to the uncertain setting considered. The former relies on probabilistic information, supposed to be available, and tries to find a solution which is good in a probabilistic sense. For instance, the expected value of the objective function can be optimized over the possible events considered. Robust optimization generally aims at providing solutions feasible for all the uncertain events considered.

Chance-constrained programming is a very attractive part of stochastic optimization (see for instance [9, 13]). It is devoted to finding the best solution which remains feasible with probability at least $1-\varepsilon$, for a given $\varepsilon > 0$. Such a model is relevant for many problems where it is acceptable that a given solution is not feasible for all the events taken into account, as long as the infeasibility probability is controlled. This approach becomes particularly interesting when dealing with problems for which the uncertainty assumptions remain approximate. In this case, it often makes little sense to look for a solution that is always feasible, since the worst case (if there is one) is very unlikely to ever occur.

While the direct resolution of chance-constrained programs is often very difficult, some recent robust optimization frameworks bring a new way to deal with these problems [4, 8]. More precisely, given an initial set of uncertain events, the authors propose a way to replace it with a subset of events of probability not less than $1-\varepsilon$, so that the associated robust optimization problem is tractable. As a result, solving the robust problem provides a feasible solution for the chance-constrained problem.

The links between chance-constrained programming and robust optimization have been underlined only recently. [10] compares both approaches and proposes an intermediate resolution framework.
The authors study the randomized construction of a set of events with the following property: a solution robust to this set of events is a solution of a chance-constrained program. [12] follows the same line, while dealing with uncertainty on the data probability distributions themselves. [15] shows how the robust optimization framework developed in [4, 5] provides an approximation of chance-constrained programming. [11] also proposes robust optimization as a technique to obtain feasible solutions to chance-constrained programs.

In the same spirit, we propose to use robust optimization techniques to provide a solution to chance-constrained problems. This work focuses on the knapsack problem, because of its importance for integer linear programming. When dealing with uncertain linear programs, the approach of Bertsimas and Sim [8] appears particularly appropriate, especially because it preserves the linearity of the base problem. Furthermore, it can be readily used for integer linear programs. Note that this approach can be seen as a particular case of the robustness model of [2, 3], extended with a probabilistic analysis. In the present paper, the links between this robustness model and chance-constrained programming are investigated for the specific case of the knapsack problem. A pseudo-polynomial time resolution algorithm is designed for the chance-constrained knapsack problem. It is proved to always provide feasible solutions, and even optimal solutions for two specific classes of uncertain knapsack problems. To the best of our knowledge, this is the first time an optimal resolution process is designed for a chance-constrained integer linear program where constraint coefficients are uncertain.

2 Problems and formulations.

2.1 The knapsack problem.

Let $I = \{1,\dots,n\}$. Given a profit vector $p \in \mathbb{R}_+^n$, a weight vector $w \in \mathbb{R}_+^n$ and a knapsack capacity $c > 0$, the classical knapsack problem is:

$$\max \Big\{ \sum_{i \in I} p_i x_i \;:\; \sum_{i \in I} w_i x_i \le c,\; x \in \{0,1\}^n \Big\} \qquad (1)$$

For an extensive study of the knapsack problem, we refer for instance to [14]. It is known to be NP-hard, even though there exists a pseudo-polynomial dynamic programming algorithm to solve it. In the rest of this paper, the scalar product of two vectors $a$ and $b$ in $\mathbb{R}^n$ will sometimes be denoted by $a.b = \sum_{i \in I} a_i b_i$.

2.2 The chance-constrained knapsack problem.

Let us suppose that the weight vector $w$ is in fact not known with accuracy, that is, $w$ can take values in a set $W \subset \mathbb{R}_+^n$. Consider a probability measure $P$ on $W$ (so that $P(w \in W) = P(W) = 1$). The chance-constrained knapsack problem (CKP) is:

$$\max \Big\{ \sum_{i \in I} p_i x_i \;:\; P\Big(\sum_{i \in I} w_i x_i \le c\Big) \ge 1-\varepsilon,\; x \in \{0,1\}^n \Big\} \qquad (2)$$

where $\varepsilon \ge 0$. That is, we look for the best solution $x$ such that $x$ remains feasible with probability at least $1-\varepsilon$. As noted in most introductions to chance-constrained programming, the set of fractional points $\{x \in [0,1]^n \mid P(w.x \le c) \ge 1-\varepsilon\}$ is often intractable because of multivariate integral calculations (see e.g. [9, 13] for more details).

Example: Let us present a very simple example in $\mathbb{R}^2$ by describing the set $X = \{x \in [0,1]^2 \mid P(w_1 x_1 + w_2 x_2 \le 1) \ge 3/4\}$ (i.e. $\varepsilon = 1/4$). Suppose that both random weights $w_1$ and $w_2$ are uniformly distributed on $[0,1]$; let us denote $d$ their density function: $d = \mathbf{1}_{[0,1]}$ (characteristic function of $[0,1]$). Let $(x_1, x_2) \in [0,1]^2$. We denote by $W_1$ and $W_2$ the random variables $x_1 w_1$ and $x_2 w_2$. Each $W_i$ has a density function $d_i = (1/x_i)\,\mathbf{1}_{[0,x_i]}$. Then

$$P(w_1 x_1 + w_2 x_2 \le 1) = P(W_1 + W_2 \le 1) = \int_{t=0}^{x_1} d_1(t)\, P(W_2 \le 1-t)\, dt.$$

Easy calculations lead to:

$$P(W_2 \le 1-t) = \begin{cases} 1 & \text{if } t \le 1-x_2 \\ (1-t)/x_2 & \text{if } t \ge 1-x_2 \end{cases}$$

If $x_1 + x_2 \le 1$, i.e. $1-x_2 \ge x_1$, it follows that $P(w_1 x_1 + w_2 x_2 \le 1) = 1$. If $x_1 + x_2 > 1$, i.e. $1-x_2 < x_1$, we obtain:

$$P(w_1 x_1 + w_2 x_2 \le 1) = \int_{t=0}^{1-x_2} d_1(t)\, dt + \int_{t=1-x_2}^{x_1} d_1(t)\,\frac{1-t}{x_2}\, dt = \frac{1}{x_1} + \frac{1}{x_2} - \frac{1}{2}\Big(\frac{x_2}{x_1} + \frac{x_1}{x_2}\Big) - \frac{1}{2 x_1 x_2}$$
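To make the closed-form expression above concrete, here is a small numerical check (an illustrative sketch, not part of the original paper; the function names are ours): it estimates $P(w_1 x_1 + w_2 x_2 \le 1)$ by Monte Carlo for uniform weights and compares it with the piecewise formula just derived.

```python
import random

def chance_formula(x1, x2):
    """Closed-form P(w1*x1 + w2*x2 <= 1) for w1, w2 ~ U[0,1] and 0 < x1, x2 <= 1."""
    if x1 + x2 <= 1:
        return 1.0
    return 1.0/x1 + 1.0/x2 - 0.5*(x2/x1 + x1/x2) - 1.0/(2*x1*x2)

def chance_monte_carlo(x1, x2, samples=200_000, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random()*x1 + rng.random()*x2 <= 1)
    return hits / samples

if __name__ == "__main__":
    for x in [(0.4, 0.5), (0.8, 0.6), (1.0, 1.0)]:
        print(x, round(chance_formula(*x), 4), round(chance_monte_carlo(*x), 4))
```

Such a sampling check is often the only practical way to evaluate a chance constraint when no closed form is available, which is precisely the difficulty the robust reformulation introduced below is meant to avoid.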

The set $X$ is represented in Figure 1.

[Figure 1: Probability of solutions $x \in [0,1]^2$ for the example.]

Hence, this very basic example shows that chance-constrained problems are quite hard to tackle in a direct way. But observe that CKP can be equivalently written:

$$\max_{\substack{V \subseteq W:\, P(V) \ge 1-\varepsilon \\ x \in \{0,1\}^n}} \big\{\, p.x \;:\; \forall w \in V,\; w.x \le c \,\big\} \qquad (3)$$

$(V, x)$ is said to be a feasible solution of (3) if $V \subseteq W$, $P(V) \ge 1-\varepsilon$ and $\forall w \in V,\ w.x \le c$. It is optimal if the profit $p.x$ dominates that associated with any other feasible solution.

Proposition 1 $x$ is an optimal solution of CKP if, and only if, there exists $V \subseteq W$ of probability $P(V) \ge 1-\varepsilon$ such that $(V, x)$ is an optimal solution of (3).

Proof: It is sufficient to observe that $x$ is a feasible solution of CKP if, and only if, there exists $V \subseteq W$ of probability $P(V) \ge 1-\varepsilon$ such that $(V, x)$ is a feasible solution of (3). For the direct sense, consider $V = \{w \in W \mid w.x \le c\}$. For the reverse one, observe that $P(w.x \le c) \ge P(V)$.

This formulation relies on robust optimization subproblems defined for subsets $V$ of possible weights. For any subset $V$ of $W$ of probability measure $P(V) \ge 1-\varepsilon$, the resolution of $\max_{x \in \{0,1\}^n} \{p.x : \forall w \in V,\ w.x \le c\}$ provides a feasible solution to CKP. This motivates the recourse to robust optimization models for solving CKP. More specifically, the aim of this study is to rely on an adaptation of the appealing framework proposed in [8] to provide good feasible solutions to CKP, and even optimal ones for some specific cases.

2.3 Solving CKP with robust models.

From now on, we assume that the coefficients $w_i$ lie in intervals $[\underline{w}_i, \overline{w}_i]$: $\underline{w}_i \ge 0$ is the lowest possible weight value for element $i$, while $\overline{w}_i \ge \underline{w}_i$ is the largest one. Thus, $W$ is included in the Cartesian product of the intervals $[\underline{w}_i, \overline{w}_i]$. To each weight is associated a random variable (r.v.); for the sake of simplicity the random variables and their realizations are denoted by the same symbol $w_i$. Let us also introduce, for all $i \in I$, the random variable $\eta_i = (w_i - \underline{w}_i)/(\overline{w}_i - \underline{w}_i) \in [0,1]$.

Let $\Gamma \in \{0,\dots,n\}$; the idea of the robust formulation is to find a solution feasible even though up to $\Gamma$ coefficients of $w$ take their largest values. Thus, in particular, if $\Gamma = 0$, we consider only the best scenario, where all weights take their lowest values; if $\Gamma = n$, the worst case is taken into account. Note that [8] presents a very similar model where $\Gamma$ can be non-integral: $\Gamma \in [0,n]$. Although this could directly be adapted here, for the sake of simplicity, our models will assume the integrality of this robustness coefficient.
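For a finite weight set $W$, the equivalence stated in Proposition 1 can be checked directly: a vector $x$ satisfies the chance constraint exactly when the scenarios it satisfies form a set $V$ of probability at least $1-\varepsilon$. The sketch below is only an illustration with a hypothetical discrete $W$ (the instance data and function names are ours, not from the paper).

```python
def chance_feasible(x, scenarios, c, eps):
    """P(w.x <= c) >= 1 - eps for a finite list of (weight_vector, probability) pairs."""
    prob_ok = sum(p for w, p in scenarios
                  if sum(wi*xi for wi, xi in zip(w, x)) <= c)
    return prob_ok >= 1 - eps

def robust_feasible_on(x, V, c):
    """Feasibility of x for the robust subproblem associated with the scenario subset V."""
    return all(sum(wi*xi for wi, xi in zip(w, x)) <= c for w in V)

if __name__ == "__main__":
    # Hypothetical 3-item instance with 4 equiprobable weight scenarios.
    scenarios = [((2, 3, 4), 0.25), ((3, 3, 4), 0.25),
                 ((2, 5, 4), 0.25), ((6, 3, 4), 0.25)]
    x, c, eps = (1, 1, 0), 7, 0.3
    # V gathers the scenarios that x satisfies; P(V) >= 1 - eps iff x is CKP-feasible.
    V = [w for w, p in scenarios if sum(wi*xi for wi, xi in zip(w, x)) <= c]
    print(chance_feasible(x, scenarios, c, eps), robust_feasible_on(x, V, c), len(V))
```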

Within this framework, let us introduce the Robust Knapsack Problem (RKP), parameterized by $\Gamma$:

$$\max \sum_{i \in I} p_i x_i \quad \text{s.t.} \quad \sum_{i \in I \setminus S} \underline{w}_i x_i + \sum_{i \in S} \overline{w}_i x_i \le c \quad \forall S \subseteq I,\ |S| = \Gamma, \qquad x \in \{0,1\}^n \qquad (4)$$

For any $x \in \{0,1\}^n$, we denote: $I(x) = \{i \in I \mid x_i = 1\}$.

Lemma 1 For any feasible solution $x$ of RKP: $P(w.x > c) \le P\big(\sum_{i \in I} \eta_i \ge \Gamma\big)$.

Proof: Let $x$ be a feasible solution of RKP. If $|I(x)| \le \Gamma$, there exists $S \subseteq I$ of cardinality $\Gamma$ such that $I(x) \subseteq S$. Then: $\sum_{i \in I} w_i x_i = \sum_{i \in I(x)} w_i \le \sum_{i \in I(x)} \overline{w}_i = \sum_{i \in S} \overline{w}_i x_i + \sum_{i \notin S} \underline{w}_i x_i \le c$ (since $x$ is feasible for RKP). This implies that $P(w.x \le c) = 1$, and thus $P(w.x > c) \le P(\sum_{i \in I} \eta_i \ge \Gamma)$.

Suppose now that $|I(x)| \ge \Gamma + 1$. For the sake of clarity, let us denote $\delta_i = \overline{w}_i - \underline{w}_i$. For any subset $S \subseteq I(x)$ of cardinality $\Gamma$:

$$P(w.x > c) \le P\Big(w.x > \sum_{i \in S} \overline{w}_i x_i + \sum_{i \notin S} \underline{w}_i x_i\Big) = P\Big(\sum_{i \in I(x)} w_i > \sum_{i \in S} \overline{w}_i + \sum_{i \in I(x)\setminus S} \underline{w}_i\Big) = P\Big(\sum_{i \in I(x)} \eta_i \delta_i > \sum_{i \in S} \delta_i\Big)$$
$$= P\Big(\sum_{i \in I(x)\setminus S} \eta_i \delta_i > \sum_{i \in S} \delta_i (1-\eta_i)\Big) \le P\Big(\sum_{i \in I(x)\setminus S} \eta_i \delta_i \ge \min_{j \in S} \delta_j \cdot \sum_{i \in S} (1-\eta_i)\Big) = P\Big(\sum_{i \in I(x)\setminus S} \eta_i \big[\delta_i / \min_{j \in S} \delta_j\big] + \sum_{i \in S} \eta_i \ge \Gamma\Big)$$

The first inequality comes from the feasibility of $x$ for RKP. Now, we can choose $S \subseteq I(x)$ so that: $\forall i \in I(x) \setminus S,\ \delta_i \le \min_{j \in S} \delta_j$. In this case, we obtain $P(w.x > c) \le P\big(\sum_{i \in I(x)} \eta_i \ge \Gamma\big)$, and thus, clearly, $P(w.x > c) \le P\big(\sum_{i \in I} \eta_i \ge \Gamma\big)$.

Note that this result is general, since it requires no assumption on the probability distributions of the weights. The robust model proposed, directly inspired by that of [8], appears particularly relevant for approximating our chance-constrained program. First, the probability results of [8] can be adapted to our framework, for instance:

Proposition 2 Assume that the random variables $w_i$ are independent and symmetrically distributed. Let $\varepsilon \in (0,1)$. If $\Gamma \ge \frac{1}{2}\big(n + \sqrt{-2 n \ln \varepsilon}\big)$, a feasible solution of RKP will be feasible for CKP.

Proof: Let us introduce $\tilde{\eta}_i = 2\eta_i - 1$. The r.v. $\{\tilde{\eta}_i\}_{i \in I}$ are independent and symmetrically distributed on $[-1,1]$. Suppose that $\Gamma \ge n/2$. From the probability results of [8], we know that: $P\big(\sum_{i \in I} \tilde{\eta}_i \ge 2\Gamma - n\big) \le \exp\big(-(2\Gamma-n)^2/(2n)\big)$. On the other hand: $P\big(\sum_{i \in I} \tilde{\eta}_i \ge 2\Gamma - n\big) = P\big(\sum_{i \in I} \eta_i \ge \Gamma\big)$, and thus: $P\big(\sum_{i \in I} \eta_i \ge \Gamma\big) \le \exp\big(-(2\Gamma-n)^2/(2n)\big)$. We conclude thanks to Lemma 1.

Better, but also more complex, probability bounds are proved to exist and can lead to a better choice of $\Gamma$. Secondly, the robust set of events $w$ associated with the set of feasible solutions of RKP (characterized in Lemma 1) seems appropriate, since in particular it discards the worst case, where all weights take their maximal values $\overline{w}_i$. Finally, as stated in [8], the multi-knapsack formulation (4) can also be equivalently written:

$$\max_{x \in \{0,1\}^n} \Big\{ p.x \;:\; \exists\, z \ge 0,\ y \ge 0 \ \text{ s.t. } \ \sum_{i \in I} \underline{w}_i x_i + \Gamma z + \sum_{i \in I} y_i \le c \ \text{ and } \ \forall i \in I,\ z + y_i \ge (\overline{w}_i - \underline{w}_i) x_i \Big\}$$

This formulation makes the computational resolution tractable when using a branch-and-bound algorithm. One of the greatest advantages of the robust framework proposed in [8] is to preserve the linearity of the initial problem. Thus, as clearly expressed by the authors in their initial paper, it is directly usable for integer programming. However, this aspect has not been extensively investigated so far. [8, 7] provide some numerical tests on the robust formulation of some classical combinatorial problems. In particular, a robust knapsack problem very close to ours is introduced, and the authors show numerically the impact of the protection level $\Gamma$ on the optimal profit.
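Two ingredients of this section are easy to make concrete: checking feasibility for (4) amounts to charging the $\Gamma$ largest deviations among the selected items at their upper bound, and Proposition 2 gives an explicit $\Gamma$ guaranteeing CKP-feasibility under the symmetry assumption. The sketch below is an illustration under these assumptions, not the authors' code; `rkp_feasible` and `gamma_from_proposition2` are hypothetical helper names of ours.

```python
import math

def rkp_feasible(x, w_low, w_high, c, Gamma):
    """Robust constraint of (4): lower weights of the selected items plus the
    Gamma largest deviations among them must not exceed the capacity c."""
    base = sum(wl for wl, xi in zip(w_low, x) if xi)
    devs = sorted((wh - wl for wl, wh, xi in zip(w_low, w_high, x) if xi),
                  reverse=True)
    return base + sum(devs[:Gamma]) <= c

def gamma_from_proposition2(n, eps):
    """Smallest integer Gamma >= (n + sqrt(-2 n ln eps)) / 2 (Proposition 2),
    capped at n; valid for independent, symmetrically distributed weights."""
    return min(n, math.ceil(0.5 * (n + math.sqrt(-2.0 * n * math.log(eps)))))

if __name__ == "__main__":
    # Data of the instance used later in Section 4.4: 10 items, weights in [8, 12], c = 80.
    w_low, w_high, c = [8]*10, [12]*10, 80
    Gamma = gamma_from_proposition2(10, 0.1)   # conservative protection level (here 9)
    for m in (6, 7):
        x = [1]*m + [0]*(10 - m)
        print(m, Gamma, rkp_feasible(x, w_low, w_high, c, Gamma))
```

On this instance the worst-case bound of Proposition 2 is quite conservative: with $\Gamma = 9$ only 6 items can be certified, whereas the exact probabilities used in the example of Section 4.4 accept 7 items.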

But to the best of our knowledge, the previous theoretical studies on applications of [8] to robust integer programming have focused on cases where only objective coefficients are uncertain. While [7] provides some complexity results, [16] proves a probability bound easier to compute than the one proposed by [8]. [1] investigates some reformulations and tight linear programming formulations for this particular case of uncertain cost coefficients. By contrast, the present paper deals with robust integer programming applied to the knapsack problem when constraint coefficients are uncertain. Finally, with respect to classical stochastic programming approaches, the present study is somewhat original since it is fully combinatorial. That is, no convex optimization tools are needed.

3 On the theoretical link between CKP and RKP.

Let $I' \subseteq I$ and $\Gamma \in \{0,\dots,|I'|\}$; we introduce the problem RKP($I'$,$\Gamma$):

$$\max\ p.x \quad \text{s.t.} \quad \sum_{i \in S} \overline{w}_i x_i + \sum_{i \in I' \setminus S} \underline{w}_i x_i + \sum_{i \notin I'} \overline{w}_i x_i \le c \quad \forall S \subseteq I',\ |S| = \Gamma, \qquad x \in \{0,1\}^n \qquad (5)$$

RKP($I'$,$\Gamma$) can be seen as a robust knapsack problem (4) with all weights of elements in $I \setminus I'$ set to their worst possible value, with no uncertainty on these values. From now on, to be more precise, RKP will be denoted by RKP($I$,$\Gamma$). Observe that, for any two subsets $I'$ and $I''$ of $I$, $I' \subseteq I''$ implies that any feasible solution of RKP($I'$,$\Gamma$) is also feasible for RKP($I''$,$\Gamma$). On the other hand, considering $\Gamma_1 \le \Gamma_2$, any feasible solution of RKP($I'$,$\Gamma_2$) is also feasible for RKP($I'$,$\Gamma_1$). In particular, any feasible solution of RKP($I$,$n$) is also feasible for RKP($I$,$\Gamma$).

Lemma 2 Let $\Gamma \in \{0,\dots,n\}$. Suppose that $x^*$ is an optimal solution of RKP($I$,$\Gamma$): (i) if $\Gamma \le |I(x^*)|$, $x^*$ is an optimal solution of RKP($I(x^*)$,$\Gamma$); (ii) if $\Gamma \ge |I(x^*)|$, $x^*$ is an optimal solution of RKP($I$,$n$).

Proof: Suppose that $\Gamma \le |I(x^*)|$. Since $I(x^*) \subseteq I$, any feasible solution of RKP($I(x^*)$,$\Gamma$) is feasible for RKP($I$,$\Gamma$). But it can be seen that $x^*$ is a feasible solution of RKP($I(x^*)$,$\Gamma$). Then, as it is optimal for RKP($I$,$\Gamma$), it is optimal also for RKP($I(x^*)$,$\Gamma$). If now $\Gamma \ge |I(x^*)|$, there exists $S \subseteq I$ of cardinality $\Gamma$ such that $I(x^*) \subseteq S$. Then: $\sum_{i \in I} w_i x^*_i \le \sum_{i \in I(x^*)} \overline{w}_i = \sum_{i \in S} \overline{w}_i x^*_i + \sum_{i \notin S} \underline{w}_i x^*_i \le c$. The last inequality comes from the fact that $x^*$ is feasible for RKP($I$,$\Gamma$). Thus, $x^*$ is shown to be a feasible solution of RKP($I$,$n$). But since $\Gamma \le n$, RKP($I$,$\Gamma$) is a relaxation of RKP($I$,$n$): $x^*$ is also optimal for RKP($I$,$n$). Finally, observe that in the case where $\Gamma = |I(x^*)|$, both statements (i) and (ii) hold. Then, RKP($I(x^*)$,$\Gamma$) and RKP($I$,$n$) both admit $x^*$ as an optimal solution.

The following result shows the relevance of the robust setting proposed to deal with CKP when all weight variations $(\overline{w}_i - \underline{w}_i)$ are identical:

Theorem 1 Suppose that: $\forall i \in I,\ \overline{w}_i - \underline{w}_i = \delta > 0$, and that $\{\underline{w}_i/\delta\}_{i \in I}$ and $c/\delta$ are integers. Then, there exist $I^* \subseteq I$ and $\Gamma^* \in \{0,\dots,|I^*|\}$ such that any optimal solution of RKP($I^*$,$\Gamma^*$) is an optimal solution of CKP.

Proof: Let $x^*$ be an optimal solution of CKP; we define $I^* = I(x^*)$. Since $x^*$ is feasible with probability at least $1-\varepsilon$, we have: $P(w.x^* \le c) = P\big(\sum_{i \in I^*} w_i \le c\big) \ge 1-\varepsilon$. Let us denote $V^* = \{w \in W \mid \sum_{i \in I^*} w_i \le c\}$. Consider now the following problem:

$$\max\ \big\{\, p.x \;:\; w.x \le c \ \ \forall w \in V^*,\ \ x \in \{0,1\}^n \,\big\} \qquad (6)$$

Let us prove first that any optimal solution of (6) is an optimal solution of CKP. Any feasible solution $x$ of (6) is feasible for CKP, since: $P(w.x \le c) \ge P(w \in V^*) \ge 1-\varepsilon$. Thus, we just have to show that (6) has the same optimal value as CKP.

This is clear, since by construction, the optimal solution $x^*$ of CKP is a feasible solution of (6). The goal now is to find $\Gamma^* \in \{0,\dots,|I^*|\}$ such that (6) is equivalent to RKP($I^*$,$\Gamma^*$). Observe that: $w \in V^* \iff \sum_{i \in I^*} \eta_i \le \big(c - \sum_{i \in I^*} \underline{w}_i\big)/\delta$, and let us denote $\Gamma^* = \min\big\{ |I^*|,\ \big(c - \sum_{i \in I^*} \underline{w}_i\big)/\delta \big\}$. We have: $w \in V^* \iff \sum_{i \in I^*} \eta_i \le \Gamma^*$. Note that $V^* \neq \emptyset$, which implies that $\big(c - \sum_{i \in I^*} \underline{w}_i\big)/\delta \ge 0$. On the other hand, from the hypothesis, $\Gamma^*$ is integral.

Let us show that (6) is in fact equivalent to RKP($I^*$,$\Gamma^*$), i.e. let us prove that $x$ is a feasible solution of (6) if, and only if, $x$ is a feasible solution of RKP($I^*$,$\Gamma^*$). Let $x$ be feasible for (6). For a subset $S \subseteq I^*$ with $|S| = \Gamma^*$, let us prove that $\sum_{i \in S} \overline{w}_i x_i + \sum_{i \in I^* \setminus S} \underline{w}_i x_i + \sum_{i \notin I^*} \overline{w}_i x_i \le c$. Let $w(S)$ denote the weight vector whose components are $\underline{w}_i$ if $i \in I^* \setminus S$, and $\overline{w}_i$ otherwise. We have: $\sum_{i \in I^*} w(S)_i = \sum_{i \in S} \overline{w}_i + \sum_{i \in I^* \setminus S} \underline{w}_i = \Gamma^* \delta + \sum_{i \in I^*} \underline{w}_i \le c$, and consequently $w(S) \in V^*$. Then, since $x$ is feasible for (6), we have: $w(S).x = \sum_{i \in S} \overline{w}_i x_i + \sum_{i \in I^* \setminus S} \underline{w}_i x_i + \sum_{i \notin I^*} \overline{w}_i x_i \le c$. Hence, $x$ is also feasible for RKP($I^*$,$\Gamma^*$).

Suppose now that $x$ is not feasible for (6), that is, there exists $w \in V^*$ such that $w.x > c$, and let us prove that there exists $S \subseteq I^*$ with $|S| = \Gamma^*$ such that: $\sum_{i \in S} \overline{w}_i x_i + \sum_{i \in I^* \setminus S} \underline{w}_i x_i + \sum_{i \notin I^*} \overline{w}_i x_i = \sum_{i \in I^*} \underline{w}_i x_i + \sum_{i \in S} \delta x_i + \sum_{i \notin I^*} \overline{w}_i x_i > c$. Consider $I(x) \cap I^*$: if $|I(x) \cap I^*| \ge \Gamma^*$ (case (a)), we construct $S$ with $|S| = \Gamma^*$ such that $S \subseteq I(x) \cap I^*$. When $|I(x) \cap I^*| < \Gamma^*$ (case (b)), let $S$ be any extension of $I(x) \cap I^*$ in $I^*$ of cardinality $\Gamma^*$: $I(x) \cap I^* \subseteq S \subseteq I^*$ and $|S| = \Gamma^*$ (recall that, by construction, $\Gamma^* \le |I^*|$). Then, if (a) is satisfied we obtain:

$$\sum_{i \in I^*} \underline{w}_i x_i + \sum_{i \in S} \delta x_i + \sum_{i \notin I^*} \overline{w}_i x_i = \sum_{i \in I^*} \underline{w}_i x_i + \delta \Gamma^* + \sum_{i \notin I^*} \overline{w}_i x_i \ge \sum_{i \in I^*} \underline{w}_i x_i + \sum_{i \in I^*} \eta_i \delta + \sum_{i \notin I^*} \overline{w}_i x_i \ge \sum_{i \in I^*} \underline{w}_i x_i + \sum_{i \in I^*} \eta_i \delta x_i + \sum_{i \notin I^*} \overline{w}_i x_i \ge w.x > c.$$

The first inequality comes from $w \in V^*$; for the last one, recall that $w_i = \underline{w}_i + \eta_i \delta$ for $i \in I^*$ and $w_i \le \overline{w}_i$ for $i \notin I^*$. Similarly, if (b) is satisfied we obtain:

$$\sum_{i \in I^*} \underline{w}_i x_i + \sum_{i \in S} \delta x_i + \sum_{i \notin I^*} \overline{w}_i x_i = \sum_{i \in I^*} \underline{w}_i x_i + \delta |I(x) \cap I^*| + \sum_{i \notin I^*} \overline{w}_i x_i \ge \sum_{i \in I^*} \underline{w}_i x_i + \sum_{i \in I(x) \cap I^*} \eta_i \delta x_i + \sum_{i \notin I^*} \overline{w}_i x_i = \sum_{i \in I^*} \underline{w}_i x_i + \sum_{i \in I^*} \eta_i \delta x_i + \sum_{i \notin I^*} \overline{w}_i x_i \ge w.x > c.$$

By contraposition, this proves that any feasible solution of RKP($I^*$,$\Gamma^*$) is also feasible for (6). Hence, it is shown that $x$ is a feasible solution of (6) if, and only if, $x$ is a feasible solution of RKP($I^*$,$\Gamma^*$). As a consequence, since both objective functions are identical, both problems are equivalent, and in particular, they admit the same optimal solutions. Since any optimal solution of (6) is an optimal solution of CKP, it is proved that any optimal solution of RKP($I^*$,$\Gamma^*$) is also optimal for CKP.

One of the strengths of the above theorem is that it requires absolutely no assumption on the probability distributions of the weights. The conditions on the integrality of $\{\underline{w}_i/\delta\}_{i \in I}$ and $c/\delta$ are necessary to ensure that $\Gamma^*$ can be chosen integral. Nevertheless, recall that in a more general setting, $\Gamma$ may also be chosen non-integral (cf. Section 2.3): in this case, the theorem could easily be generalized without the integrality conditions on $\{\underline{w}_i/\delta\}_{i \in I}$ and $c/\delta$. However, these assumptions do not seem very constraining for many practical applications, where $\delta$ will in fact be small.

In the rest of this section, two particular cases are addressed more specifically: when weights or when profits are all identical. In each case, the above theorem is made more precise, since we show that there exists a coefficient $\Gamma$ such that all optimal solutions of RKP($I$,$\Gamma$) are optimal also for CKP.

Lemma 3 Suppose that:
- weights and profits can be sorted so that: $i < j \Rightarrow \underline{w}_i \le \underline{w}_j$ and $p_i \ge p_j$,
- for all $i \in I$, $\overline{w}_i - \underline{w}_i = \delta > 0$,
- the r.v. $\{\eta_i\}_{i \in I}$ are independent and identically distributed.
There exists an optimal solution $x^*$ of CKP such that $I(x^*) = \{1,\dots,|I(x^*)|\}$.

Proof: Let $x$ be an optimal solution of CKP, and let us build $x^*$ such that $I(x^*) = \{1,\dots,|I(x)|\}$. As $|I(x^*)| = |I(x)|$ and the profits $p_i$ are sorted in non-increasing order: $p.x^* \ge p.x$. Moreover, observe that by construction: $\sum_{i \in I(x^*)} \underline{w}_i \le \sum_{i \in I(x)} \underline{w}_i$. Then:

$$P(w.x^* \le c) = P\Big(\sum_{i \in I(x^*)} w_i \le c\Big) = P\Big(\delta \sum_{i \in I(x^*)} \eta_i \le c - \sum_{i \in I(x^*)} \underline{w}_i\Big) \overset{(a)}{=} P\Big(\delta \sum_{i \in I(x)} \eta_i \le c - \sum_{i \in I(x^*)} \underline{w}_i\Big) \overset{(b)}{\ge} P\Big(\delta \sum_{i \in I(x)} \eta_i \le c - \sum_{i \in I(x)} \underline{w}_i\Big) = P\Big(\sum_{i \in I(x)} w_i \le c\Big) = P(w.x \le c)$$

As the r.v. $\{\eta_i\}_{i \in I}$ are supposed independent and identically distributed, and since $|I(x^*)| = |I(x)|$, the r.v. $\sum_{i \in I(x)} \eta_i$ and $\sum_{i \in I(x^*)} \eta_i$ are identically distributed: this ensures (a). (b) is a direct consequence of $\sum_{i \in I(x^*)} \underline{w}_i \le \sum_{i \in I(x)} \underline{w}_i$. Thus, as $x$ is feasible for CKP, $x^*$ is also feasible for this problem. Hence, $x^*$ is an optimal solution of CKP.

Theorem 2 Suppose that:
- weights and profits can be sorted so that: $i < j \Rightarrow \underline{w}_i \le \underline{w}_j$ and $p_i \ge p_j$,
- for all $i \in I$, $\overline{w}_i - \underline{w}_i = \delta > 0$, and $\{\underline{w}_i/\delta\}_{i \in I}$ and $c/\delta$ are integers,
- the r.v. $\{\eta_i\}_{i \in I}$ are independent and identically distributed.
Then, there exists $\Gamma^* \in \{0,\dots,n\}$ such that any optimal solution of RKP($I$,$\Gamma^*$) is an optimal solution of CKP.

Proof: From Theorem 1, we know that there exist $I^* \subseteq I$ and $\Gamma^* \in \{0,\dots,|I^*|\}$ such that any optimal solution of RKP($I^*$,$\Gamma^*$) is an optimal solution of CKP. Moreover, from the proof, we know that $I^*$ can be chosen so that $I^* = I(x^*)$ for some optimal solution $x^*$ of CKP, and that $\Gamma^*$ can be chosen so that: $\Gamma^* = \min\big\{ |I^*|,\ \big(c - \sum_{i \in I^*} \underline{w}_i\big)/\delta \big\}$. Without loss of generality, from Lemma 3, we consider: $I(x^*) = \{1,\dots,|I^*|\}$.

Suppose first that $\Gamma^* = |I^*|$. In this case, problems RKP($I^*$,$\Gamma^*$) and RKP($I$,$n$) are equivalent. Suppose now that $\Gamma^* = \big(c - \sum_{i \in I^*} \underline{w}_i\big)/\delta < |I^*|$. Let us prove that any optimal solution of RKP($I$,$\Gamma^*$) is optimal also for CKP. Let $x'$ be an optimal solution of RKP($I$,$\Gamma^*$). If $|I(x')| \le \Gamma^*$, $x'$ is feasible for RKP($I$,$n$) and consequently $P(w.x' \le c) = 1$. Thus, in this case, $x'$ is feasible for CKP. Consider now that $|I(x')| \ge \Gamma^* + 1$, and let us prove that $x'$ is a feasible solution of CKP. Suppose that $\sum_{i \in I(x')} \underline{w}_i > \sum_{i \in I^*} \underline{w}_i$; then for any subset $S \subseteq I(x')$ of cardinality $\Gamma^*$:

$$\sum_{i \in S} \overline{w}_i x'_i + \sum_{i \in I \setminus S} \underline{w}_i x'_i = \sum_{i \in I(x')} \underline{w}_i + \Gamma^* \delta > \sum_{i \in I^*} \underline{w}_i + \Gamma^* \delta = c - \Gamma^* \delta + \Gamma^* \delta = c$$

This is a contradiction with the feasibility of $x'$ for RKP($I$,$\Gamma^*$). As a consequence: $\sum_{i \in I(x')} \underline{w}_i \le \sum_{i \in I^*} \underline{w}_i$. Furthermore, as $I^*$ is the collection of the $|I^*|$ lowest values $\underline{w}_i$, this implies that $|I(x')| \le |I^*|$. On the other hand, as $I^* \subseteq I$, $x^*$ is a feasible solution of RKP($I$,$\Gamma^*$), and we have: $p.x' \ge p.x^*$, i.e.: $\sum_{i \in I^*} p_i \le \sum_{i \in I(x')} p_i$. As $I^*$ is the collection of the $|I^*|$ largest profit values, this implies that $|I(x')| \ge |I^*|$. As $I^*$ is also the collection of the $|I^*|$ lowest values $\underline{w}_i$, this leads to: $\sum_{i \in I(x')} \underline{w}_i \ge \sum_{i \in I^*} \underline{w}_i$. As a result: $|I(x')| = |I^*|$ and $\sum_{i \in I(x')} \underline{w}_i = \sum_{i \in I^*} \underline{w}_i$. Now, let us see that $x'$ is feasible for CKP:

$$P(w.x' \le c) = P\Big(\delta \sum_{i \in I(x')} \eta_i \le c - \sum_{i \in I(x')} \underline{w}_i\Big) \overset{(a)}{=} P\Big(\delta \sum_{i \in I(x')} \eta_i \le c - \sum_{i \in I^*} \underline{w}_i\Big) \overset{(b)}{=} P\Big(\delta \sum_{i \in I^*} \eta_i \le c - \sum_{i \in I^*} \underline{w}_i\Big) = P(w.x^* \le c)$$

(a) comes from $\sum_{i \in I(x')} \underline{w}_i = \sum_{i \in I^*} \underline{w}_i$. (b) comes from $|I(x')| = |I^*|$ and the probability assumptions on the r.v. $\eta_i$. This shows that $x'$ is a feasible solution of CKP. Finally, we have already seen that $p.x' \ge p.x^*$, which ensures that $x'$ is an optimal solution of CKP.

As previously for Theorem 1, the integrality of $c/\delta$ and of $\underline{w}_i/\delta$ for all $i \in I$ is not really a requirement, since $\Gamma$ may also be chosen non-integral (cf. Section 2.3). As particular cases of the above theorem, the two following results hold:

Corollary 1 Suppose that:
- for all $i \in I$, $\underline{w}_i = \omega > 0$ and $\overline{w}_i - \underline{w}_i = \delta > 0$,
- $\omega/\delta$ and $c/\delta$ are integers,
- the r.v. $\{\eta_i\}_{i \in I}$ are independent and identically distributed.
Then, there exists $\Gamma^* \in \{0,\dots,n\}$ such that any optimal solution of RKP($I$,$\Gamma^*$) is an optimal solution of CKP.

Corollary 2 Suppose that:
- for all $i \in I$, $p_i = \rho > 0$ and $\overline{w}_i - \underline{w}_i = \delta > 0$,
- for all $i \in I$, $\underline{w}_i/\delta$ and $c/\delta$ are integers,
- the r.v. $\{\eta_i\}_{i \in I}$ are independent and identically distributed.
Then, there exists $\Gamma^* \in \{0,\dots,n\}$ such that any optimal solution of RKP($I$,$\Gamma^*$) is an optimal solution of CKP.

4 Solving CKP.

4.1 Complexity of RKP and resolution algorithm.

Within this section, RKP($I$,$\Gamma$) is denoted simply RKP. Moreover, all data $\{\underline{w}_i\}_{i \in I}$, $\{\overline{w}_i\}_{i \in I}$ and $c$ are supposed to be non-negative integers. It is shown that the classical dynamic programming algorithm can be adapted, and that the pseudo-polynomiality of the classical knapsack problem is preserved:

Theorem 3 RKP is weakly NP-hard: there exists a pseudo-polynomial time algorithm to solve it.

Proof: That RKP is NP-hard comes immediately, since the particular case $\Gamma = 0$ is in fact the classical knapsack problem with weights $\{\underline{w}_i\}$. On the other hand, the classical dynamic programming algorithm for knapsack problems can be adapted to solve RKP. Let us denote by RKP$_k(\Gamma, b)$ the problem RKP for robustness parameter $\Gamma \in \{0,\dots,n\}$ and capacity $b \in \{1,\dots,c\}$, considering only the elements of $I_k = \{1,\dots,k\}$ for $k \in I$. Let $F_k(\Gamma, b)$ denote its optimal value. We suppose without loss of generality that the elements of $I$ are sorted so that: $i < j \Rightarrow (\overline{w}_i - \underline{w}_i) \le (\overline{w}_j - \underline{w}_j)$. For any $k$, we assume that: $b < 0 \Rightarrow F_k(\cdot, b) = -\infty$. For $k = 1$, $F_1$ is defined as follows:

if $\Gamma = 0$: $F_1(0, b) = 0$ if $b < \underline{w}_1$, and $F_1(0, b) = p_1$ if $b \ge \underline{w}_1$;
if $\Gamma \ge 1$: $F_1(\Gamma, b) = 0$ if $b < \overline{w}_1$, and $F_1(\Gamma, b) = p_1$ if $b \ge \overline{w}_1$.

Consider $b \in \{1,\dots,c\}$ and $\Gamma \in \{1,\dots,n\}$. Then the optimal value of RKP$_k(\Gamma, b)$ can be computed by the recurrence formula:

$$F_k(\Gamma, b) = \max\big\{ F_{k-1}(\Gamma, b),\ p_k + F_{k-1}(\Gamma-1, b - \overline{w}_k) \big\}$$

Let us prove that $F_k(\Gamma, b)$ is the optimal value of RKP$_k(\Gamma, b)$. The result is obvious for $k = 1$. Let $k \ge 2$. Let us show first that any feasible solution $x^{k-1} \in \{0,1\}^{k-1}$ of RKP$_{k-1}(\Gamma, b)$ or RKP$_{k-1}(\Gamma-1, b - \overline{w}_k)$ can be extended into a feasible solution $x^k \in \{0,1\}^k$ of RKP$_k(\Gamma, b)$ by adding a $k$-th component to $x^{k-1}$. If $x^{k-1}$ is feasible for RKP$_{k-1}(\Gamma, b)$, then setting $x^k_k = 0$ ensures that $x^k$ is feasible for RKP$_k(\Gamma, b)$. If $x^{k-1}$ is feasible for RKP$_{k-1}(\Gamma-1, b - \overline{w}_k)$, let us consider the vector $x^k$ with $x^k_k = 1$. Then, we have:

$$\sum_{i \in I_k} \underline{w}_i x^k_i + \max\Big\{ \sum_{i \in S} (\overline{w}_i - \underline{w}_i) x^k_i : S \subseteq I_k,\ |S| \le \Gamma \Big\} = \sum_{i \in I_{k-1}} \underline{w}_i x^{k-1}_i + \overline{w}_k + \max\Big\{ \sum_{i \in S} (\overline{w}_i - \underline{w}_i) x^{k-1}_i : S \subseteq I_{k-1},\ |S| \le \Gamma-1 \Big\}.$$

This comes from the order assumed on the weight variations. Combining this with $\sum_{i \in I_{k-1}} \underline{w}_i x^{k-1}_i + \max\big\{ \sum_{i \in S} (\overline{w}_i - \underline{w}_i) x^{k-1}_i : S \subseteq I_{k-1},\ |S| \le \Gamma-1 \big\} \le b - \overline{w}_k$, we obtain that $x^k$ is also feasible for RKP$_k(\Gamma, b)$. The reverse, that is, that for any feasible solution of RKP$_k(\Gamma, b)$ the first $k-1$ components give a feasible solution either for RKP$_{k-1}(\Gamma, b)$ or for RKP$_{k-1}(\Gamma-1, b - \overline{w}_k)$, can be shown as follows.

Let $x^k$ denote a feasible solution of RKP$_k(\Gamma, b)$, and let $x^{k-1} \in \{0,1\}^{k-1}$ denote the vector composed of the first $k-1$ components of $x^k$. Two cases can be distinguished: (a) $x^k_k = 0$: then $x^{k-1}$ is feasible for RKP$_{k-1}(\Gamma, b)$. (b) $x^k_k = 1$: then we can prove by contraposition that $x^{k-1}$ is a feasible solution of RKP$_{k-1}(\Gamma-1, b - \overline{w}_k)$. Suppose that there exists $S \subseteq I_{k-1}$ with $|S| = \Gamma-1$, such that: $\sum_{i \in I_{k-1}} \underline{w}_i x^{k-1}_i + \sum_{i \in S} (\overline{w}_i - \underline{w}_i) x^{k-1}_i > b - \overline{w}_k$. Then, $x^k$ cannot be feasible for RKP$_k(\Gamma, b)$ since for $S' = S \cup \{k\}$ we obtain:

$$\sum_{i \in I_k} \underline{w}_i x^k_i + \sum_{i \in S'} (\overline{w}_i - \underline{w}_i) x^k_i = \sum_{i \in I_k} \underline{w}_i x^k_i + \sum_{i \in S} (\overline{w}_i - \underline{w}_i) x^k_i + (\overline{w}_k - \underline{w}_k) = \sum_{i \in I_{k-1}} \underline{w}_i x^{k-1}_i + \sum_{i \in S} (\overline{w}_i - \underline{w}_i) x^{k-1}_i + \overline{w}_k > b.$$

Thus, we have shown that the set of vectors composed of the first $k-1$ components of the feasible solutions $x^k$ of RKP$_k(\Gamma, b)$ is equal to the union of the sets of feasible solutions of RKP$_{k-1}(\Gamma, b)$ and RKP$_{k-1}(\Gamma-1, b - \overline{w}_k)$. Hence we have: $F_k(\Gamma, b) = \max\{F_{k-1}(\Gamma, b),\ p_k + F_{k-1}(\Gamma-1, b - \overline{w}_k)\}$. Finally, observe that the required value $F_n(\Gamma, c)$ can be computed in pseudo-polynomial time $O(n\Gamma c)$.

4.2 An iterative algorithm.

In the following algorithm, the value of $\Gamma$ is progressively increased until the optimal solution of RKP($I$,$\Gamma$) is a feasible solution of CKP.

Algorithm 1:
Step 1: Let $k = 0$.
Step 2: Solve the problem RKP($I$,$k$). Let $x^{(k)}$ denote its optimal solution.
Step 3: Set $I' = I(x^{(k)})$. Compute $\Gamma'$, the largest value of $\{0,\dots,|I'|\}$ such that $x^{(k)}$ is feasible for RKP($I$,$\Gamma'$).
Step 4: Compute a bound $B \le P\big(\sum_{i \in I'} w_i \le c\big)$. If $B \ge 1-\varepsilon$, STOP.
Step 5: Set $k \leftarrow \Gamma' + 1$ and go to Step 2.

At Step 4, observe that: $P\big(\sum_{i \in I'} w_i \le c\big) = P(w.x^{(k)} \le c)$. Thus, we know that the algorithm stops only with a solution feasible for CKP. Moreover, as $x^{(k)}$ is a feasible solution of RKP($I$,$k$) (cf. Lemma 2), we have $\Gamma' \ge k$. Hence, the increase of the index $k$ at each loop ensures convergence: indeed, when $k = n = |I|$, the feasibility probability of $x^{(n)}$ is equal to 1 and the algorithm stops at Step 4. Thus, $k$ cannot exceed this value. As a result:

Lemma 4 Algorithm 1 provides a feasible solution to CKP by solving at most $n + 1$ robust knapsack problems.

Some further observations have to be made. First, from a given iteration $k$, it is unnecessary to consider the problems RKP($I$,$l$) with $k+1 \le l \le \Gamma'$. Indeed, $x^{(k)}$ is already feasible for RKP($I$,$l$), since it is feasible for RKP($I$,$\Gamma'$). That is the reason for considering directly $\Gamma' + 1$ at Step 5. On the other hand, the value $\Gamma'$ at Step 3 can be computed in linear time. Indeed, it is the optimal solution of:

$$\max\Big\{ \Gamma \;:\; \sum_{i \in I'} \underline{w}_i + \max_{S \subseteq I':\, |S| = \Gamma} \sum_{i \in S} (\overline{w}_i - \underline{w}_i) \le c,\ \ \Gamma \in \{0,\dots,|I'|\} \Big\}$$

It is sufficient to consider the empty set $S = \emptyset$ and to fill it progressively with the indices $i \in I'$ of the largest weight variations $\overline{w}_i - \underline{w}_i$. As soon as $\sum_{i \in S} (\overline{w}_i - \underline{w}_i) > c - \sum_{i \in I'} \underline{w}_i$, we have: $\Gamma' = |S| - 1$. To compute $B$ at Step 4, we refer for instance to the probability bounds proposed in [8]. The bound already used in the proof of Proposition 2 under specific probability assumptions may be used. It is clear that the quality of the bound used will directly impact the number of iterations of the algorithm and the quality of the final solution. It has to be noted that if $|I'| = \Gamma'$, the feasibility probability is 1 and the algorithm stops (cf. Lemma 2). As a consequence of Lemma 4, we have:

Proposition 3 Algorithm 1 runs in pseudo-polynomial time by using the dynamic programming algorithm provided by the proof of Theorem 3.
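The following sketch puts the dynamic program of Theorem 3 and Algorithm 1 together. It is only an illustration of the scheme under our reading of the proof, not the authors' implementation: data are assumed integral (Section 4.1), items are processed by non-decreasing deviation, the $\Gamma = 0$ row (left implicit in the proof) is the classical knapsack on the lower weights, and `prob_bound` is an assumed user-supplied routine returning a lower bound on the feasibility probability of a set of items.

```python
def solve_rkp(p, w_low, w_high, c, Gamma):
    """Dynamic program of Theorem 3 (sketch). F[g][b] is the best profit over the
    items processed so far with protection budget g and capacity b; with budget
    left, the item currently added is the one charged at its upper weight, which
    is valid because items are processed by non-decreasing deviation.
    Returns (optimal profit, 0-1 vector in the original item order)."""
    n = len(p)
    order = sorted(range(n), key=lambda i: w_high[i] - w_low[i])
    F = [[0] * (c + 1) for _ in range(Gamma + 1)]
    stages = []                                   # per-item "take" tables for backtracking
    for k in order:
        take = [[False] * (c + 1) for _ in range(Gamma + 1)]
        newF = [row[:] for row in F]
        for g in range(Gamma + 1):
            wk = w_high[k] if g >= 1 else w_low[k]    # budget left -> charged high
            gk = g - 1 if g >= 1 else 0
            for b in range(wk, c + 1):
                cand = F[gk][b - wk] + p[k]
                if cand > newF[g][b]:
                    newF[g][b] = cand
                    take[g][b] = True
        F = newF
        stages.append(take)
    x, g, b = [0] * n, Gamma, c                    # backtrack an optimal solution
    for k, take in zip(reversed(order), reversed(stages)):
        if take[g][b]:
            x[k] = 1
            b -= w_high[k] if g >= 1 else w_low[k]
            g = g - 1 if g >= 1 else 0
    return F[Gamma][c], x

def algorithm1(p, w_low, w_high, c, eps, prob_bound):
    """Iterative scheme of Section 4.2 (sketch)."""
    k = 0
    while True:
        _, x = solve_rkp(p, w_low, w_high, c, k)
        selected = [i for i, xi in enumerate(x) if xi]
        # Step 3: largest Gamma' such that x stays feasible for RKP(I, Gamma').
        slack = c - sum(w_low[i] for i in selected)
        devs = sorted((w_high[i] - w_low[i] for i in selected), reverse=True)
        Gamma_p, used = len(selected), 0
        for j, d in enumerate(devs):
            used += d
            if used > slack:
                Gamma_p = j
                break
        # Step 4: stop as soon as the feasibility bound reaches 1 - eps.
        if Gamma_p == len(selected) or prob_bound(selected) >= 1 - eps:
            return x
        k = Gamma_p + 1                            # Step 5

if __name__ == "__main__":
    import random
    rng = random.Random(0)
    def mc_bound(selected, trials=20000):
        # Monte Carlo estimate used here in place of a true bound, for illustration.
        ok = sum(1 for _ in range(trials)
                 if sum(rng.uniform(8, 12) for _ in selected) <= 80)
        return ok / trials
    x = algorithm1(list(range(1, 11)), [8]*10, [12]*10, 80, 0.1, mc_bound)
    print(sum(x), [i + 1 for i, xi in enumerate(x) if xi])
```

On the instance of Section 4.4 this loop retraces the iterations reported there and returns the 7-item solution.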

4.3 Two particular cases.

Let us finally consider the case of uniform weights, and that of uniform profit values.

Lemma 5 Suppose that for all $i \in I$, $\underline{w}_i = \omega > 0$ and $\overline{w}_i - \underline{w}_i = \delta > 0$. Then an optimal solution $x$ of RKP($I$,$\Gamma$) satisfies:

$$\sum_{i \in I} x_i = \begin{cases} \min\{\, n,\ \lfloor (c - \Gamma\delta)/\omega \rfloor \,\} & \text{if } \Gamma(\omega + \delta) \le c \\ \lfloor c/(\omega + \delta) \rfloor & \text{otherwise} \end{cases}$$

Proof: Observe first that any optimal solution of the uniform RKP($I$,$\Gamma$) will maximize the sum of its components ($\sum_{i \in I} x_i$). Suppose that $\Gamma(\omega+\delta) \le c$. This means that an optimal solution $x$ of RKP($I$,$\Gamma$) has at least $\Gamma$ elements: $\sum_{i \in I} x_i \ge \Gamma$. Then, $x$ is in fact an optimal solution of the problem with the only constraint: $\sum_{i \in I} \omega x_i \le c - \Gamma\delta$. The result follows. Now, if $\Gamma(\omega+\delta) > c$, a feasible solution of RKP($I$,$\Gamma$) cannot have more than $\Gamma - 1$ elements. Then, an optimal solution $x$ of RKP($I$,$\Gamma$) is in fact an optimal solution of the problem with the only constraint: $\sum_{i \in I} (\omega + \delta) x_i \le c$. This ends the proof.

From this lemma, an optimal solution of RKP($I$,$\Gamma$) can be built in linear time by considering successively the elements of largest profit values.

Theorem 4 Suppose that:
- for all $i \in I$, $\underline{w}_i = \omega > 0$ and $\overline{w}_i - \underline{w}_i = \delta > 0$,
- $\omega/\delta$ and $c/\delta$ are integers,
- the r.v. $\{w_i\}_{i \in I}$ are independent and identically distributed,
- $P\big(\sum_{i \in I'} w_i \le c\big)$ is known for any subset $I' \subseteq I$.
Then Algorithm 1 provides an optimal solution to CKP in polynomial time $O(n^3)$.

Proof: It has just been seen that for any value of $\Gamma$, an optimal solution of RKP($I$,$\Gamma$) can be computed in linear time $O(n)$. From Lemma 4, Algorithm 1 stops after at most $n + 1$ resolutions of robust problems, and since Step 3 is solved in time $O(n)$, it runs in time $O(n^3)$. From Corollary 1, we know that one of the solutions $x^{(k)}$ explored by Algorithm 1 is optimal for CKP. Moreover, the sequence of objective values $p.x^{(k)}$ is non-increasing. As $P\big(\sum_{i \in I'} w_i \le c\big)$ is supposed to be exactly known for any $I' \subseteq I$, the algorithm stops at Step 4 with the first solution feasible for CKP, which then is optimal.

Similar results hold for uniform profits. As the proofs use exactly the same ideas as previously, they are not detailed.

Lemma 6 Suppose that, for all $i \in I$, $p_i = \rho > 0$ and $\overline{w}_i - \underline{w}_i = \delta > 0$. If $c \ge \Gamma\delta$, let $x'$ be an optimal solution of:

$$\max\Big\{ \sum_{i \in I} x_i \;:\; \sum_{i \in I} \underline{w}_i x_i \le c - \Gamma\delta,\ x \in \{0,1\}^n \Big\} \qquad (7)$$

and let $x''$ be an optimal solution of:

$$\max\Big\{ \sum_{i \in I} x_i \;:\; \sum_{i \in I} \overline{w}_i x_i \le c,\ x \in \{0,1\}^n \Big\} \qquad (8)$$

If $c \ge \Gamma\delta$ and $\sum_{i \in I} x'_i \ge \Gamma$, $x'$ is optimal for RKP($I$,$\Gamma$). Otherwise, $x''$ is optimal for RKP($I$,$\Gamma$).

Note that both problems (7) and (8) are solved in linear time. On the other hand, observe that (7) has no feasible solution if $c - \Gamma\delta < 0$.
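Lemma 5 turns the robust subproblem into a simple counting rule when weights are uniform: compute the maximum feasible number of items, then take that many items of largest profit. A short sketch of this rule is given below (illustrative only; the helper name is ours); it is exactly what the example in Section 4.4 applies at each iteration of Algorithm 1.

```python
def uniform_rkp(p, omega, delta, c, Gamma):
    """Optimal RKP(I,Gamma) solution when every item has lower weight omega and
    deviation delta (Lemma 5): pick the largest feasible number of items,
    choosing those of largest profit."""
    n = len(p)
    if Gamma * (omega + delta) <= c:
        count = min(n, int((c - Gamma * delta) // omega))
    else:
        count = int(c // (omega + delta))
    chosen = sorted(range(n), key=lambda i: -p[i])[:count]
    x = [0] * n
    for i in chosen:
        x[i] = 1
    return x

if __name__ == "__main__":
    p = list(range(1, 11))                 # the instance of Section 4.4
    for Gamma in (0, 1, 3, 5):
        print(Gamma, sum(uniform_rkp(p, 8, 4, 80, Gamma)))   # 10, 9, 8, 7 items
```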

Theorem 5 Suppose that:
- for all $i \in I$, $p_i = \rho > 0$ and $\overline{w}_i - \underline{w}_i = \delta > 0$,
- $\{\underline{w}_i/\delta\}_{i \in I}$ and $c/\delta$ are integers,
- the r.v. $\{\eta_i\}_{i \in I}$ are independent and identically distributed,
- $P\big(\sum_{i \in I'} w_i \le c\big)$ is known for any subset $I' \subseteq I$.
Then Algorithm 1 provides an optimal solution to CKP in polynomial time $O(n^3)$.

4.4 An example.

In this last section, Algorithm 1 is used to solve a simple uniform chance-constrained knapsack problem to optimality. Consider the following problem:

$$\max \sum_{i=1}^{10} i\, x_i \quad \text{s.t.} \quad \sum_{i=1}^{10} w_i x_i \le 80, \qquad x \in \{0,1\}^{10}$$

where all weights $w_i$ are uniformly distributed on $[8,12]$, i.e. $\underline{w}_i = 8$ and $\overline{w}_i = 12$ for all $i \in I = \{1,\dots,10\}$, and $\delta = 4$. Let us consider $\varepsilon = 0.1$, that is, we look for a filling $x$ of the knapsack which will be feasible with probability at least 90%. Note that under the above probability assumptions, the probabilities $P\big(\sum_{i \in I'} w_i \le c\big)$ are known analytically for all subsets $I' \subseteq I$. In the following, let $e^i$ denote the 0-1 vector such that $e^i_i = 1$ and $e^i_j = 0$ for $j \neq i$.

k=0: From Lemma 5, we know that: $\sum_{i \in I} x^{(0)}_i = 80/8 = 10$, and thus $x^{(0)} = \sum_{i=1}^{10} e^i$. We have $\Gamma' = (80 - 10 \cdot 8)/4 = 0$, and we compute: $P\big(\sum_{i=1}^{10} w_i \le 80\big) = 0$ (the only event possible is $w = \underline{w}$).

k=1: From Lemma 5: $\sum_{i \in I} x^{(1)}_i = (80-4)/8 = 9$, and thus $x^{(1)} = \sum_{i=2}^{10} e^i$. We have $\Gamma' = (80 - 9 \cdot 8)/4 = 2$, and we compute: $P\big(\sum_{i=2}^{10} w_i \le 80\big) < 0.9$.

k=3: From Lemma 5: $\sum_{i \in I} x^{(3)}_i = (80-12)/8 = 8$, and thus $x^{(3)} = \sum_{i=3}^{10} e^i$. We have $\Gamma' = (80 - 8 \cdot 8)/4 = 4$, and we compute: $P\big(\sum_{i=3}^{10} w_i \le 80\big) = 0.5 < 0.9$.

k=5: From Lemma 5: $\sum_{i \in I} x^{(5)}_i = (80-20)/8 = 7$, and thus $x^{(5)} = \sum_{i=4}^{10} e^i$. We compute: $P\big(\sum_{i=4}^{10} w_i \le 80\big) > 0.9$. STOP: $x^{(5)}$ is feasible, and thus optimal.

Note that requiring a feasibility probability of 100% would require putting fewer elements in the knapsack. Indeed, assuming that all weights take their worst values leads to putting at most $\lfloor 80/12 \rfloor = 6$ items in the knapsack.

5 Conclusion.

The chance-constrained knapsack problem has been addressed. A relevant robust optimization problem has been proposed. Its theoretical links with the chance-constrained problem have been investigated. Then, a tractable combinatorial algorithm has been designed to obtain good solutions to the chance-constrained problem, by solving a sequence of robust problems. When profits are identical, or when uncertain weights all present the same characteristics, this algorithm is proved to provide an optimal solution.

References

[1] A. Atamtürk, Strong formulations of robust mixed 0-1 programming, submitted to Math. Program. Available at FILE/2004/01/816.pdf
[2] W. Ben-Ameur and H. Kerivin, Offres de Réseaux Privés Virtuels Flexibles, Tech. Report NT/FTRD/7358, France Télécom R&D (2001).
[3] W. Ben-Ameur and H. Kerivin, Routing of uncertain demands, Optim. Eng., Vol. 6, no. 3 (2005).

[4] A. Ben-Tal and A. Nemirovski, Robust solutions of Linear Programming problems contaminated with uncertain data, Math. Program., A88 (2000).
[5] A. Ben-Tal and A. Nemirovski, Robust optimization - methodology and applications, Math. Program., B92 (2002).
[6] D. Bertsimas, D. Pachamanova and M. Sim, Robust Linear Optimization under General Norms, Oper. Res. Lett., Vol. 32, Issue 6 (2004).
[7] D. Bertsimas and M. Sim, Robust discrete optimization and network flows, Math. Program., B98 (2003).
[8] D. Bertsimas and M. Sim, The Price of Robustness, Oper. Res., Vol. 52, no. 1 (2004).
[9] J.R. Birge and F. Louveaux, Introduction to Stochastic Programming, Springer-Verlag.
[10] G. Calafiore and M.C. Campi, Uncertain convex programs: randomized solutions and confidence levels, Math. Program., A102 (2005).
[11] X. Chen, M. Sim and P. Sun, A Robust Optimization Perspective of Stochastic Programming, submitted to Oper. Res. (2005).
[12] E. Erdoğan and G. Iyengar, Ambiguous chance constrained problems and robust optimization, Math. Program. (online) (2005).
[13] P. Kall and S.W. Wallace, Stochastic Programming, Wiley, Chichester.
[14] S. Martello and P. Toth, Knapsack Problems: Algorithms and Computer Implementations, Wiley.
[15] A. Nemirovski and A. Shapiro, Convex Approximations of Chance Constrained Programs, submitted to SIAM J. Optim.
[16] M.C. Pinar, A Note on Robust 0-1 Optimization with Uncertain Cost Coefficients, 4OR, Vol. 2 (2004).
[17] N.V. Sahinidis, Optimization under uncertainty: state-of-the-art and opportunities, Computers & Chemical Engineering, 28 (2004).


More information

Ambiguous Chance Constrained Programs: Algorithms and Applications

Ambiguous Chance Constrained Programs: Algorithms and Applications Ambiguous Chance Constrained Programs: Algorithms and Applications Emre Erdoğan Adviser: Associate Professor Garud Iyengar Partial Sponsor: TÜBİTAK NATO-A1 Submitted in partial fulfilment of the Requirements

More information

The Knapsack Problem. 28. April /44

The Knapsack Problem. 28. April /44 The Knapsack Problem 20 10 15 20 W n items with weight w i N and profit p i N Choose a subset x of items Capacity constraint i x w i W wlog assume i w i > W, i : w i < W Maximize profit i x p i 28. April

More information

A Linear Decision-Based Approximation Approach to Stochastic Programming

A Linear Decision-Based Approximation Approach to Stochastic Programming OPERATIONS RESEARCH Vol. 56, No. 2, March April 2008, pp. 344 357 issn 0030-364X eissn 526-5463 08 5602 0344 informs doi 0.287/opre.070.0457 2008 INFORMS A Linear Decision-Based Approximation Approach

More information

ILP Formulations for the Lazy Bureaucrat Problem

ILP Formulations for the Lazy Bureaucrat Problem the the PSL, Université Paris-Dauphine, 75775 Paris Cedex 16, France, CNRS, LAMSADE UMR 7243 Department of Statistics and Operations Research, University of Vienna, Vienna, Austria EURO 2015, 12-15 July,

More information

Affine Recourse for the Robust Network Design Problem: Between Static and Dynamic Routing

Affine Recourse for the Robust Network Design Problem: Between Static and Dynamic Routing Affine Recourse for the Robust Network Design Problem: Between Static and Dynamic Routing Michael Poss UMR CNRS 6599 Heudiasyc, Université de Technologie de Compiègne, Centre de Recherches de Royallieu,

More information

Branch-and-cut (and-price) for the chance constrained vehicle routing problem

Branch-and-cut (and-price) for the chance constrained vehicle routing problem Branch-and-cut (and-price) for the chance constrained vehicle routing problem Ricardo Fukasawa Department of Combinatorics & Optimization University of Waterloo May 25th, 2016 ColGen 2016 joint work with

More information

Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty

Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty Takafumi Kanamori and Akiko Takeda Abstract. Uncertain programs have been developed to deal with optimization problems

More information

EE 227A: Convex Optimization and Applications April 24, 2008

EE 227A: Convex Optimization and Applications April 24, 2008 EE 227A: Convex Optimization and Applications April 24, 2008 Lecture 24: Robust Optimization: Chance Constraints Lecturer: Laurent El Ghaoui Reading assignment: Chapter 2 of the book on Robust Optimization

More information

A 0-1 KNAPSACK PROBLEM CONSIDERING RANDOMNESS OF FUTURE RETURNS AND FLEXIBLE GOALS OF AVAILABLE BUDGET AND TOTAL RETURN

A 0-1 KNAPSACK PROBLEM CONSIDERING RANDOMNESS OF FUTURE RETURNS AND FLEXIBLE GOALS OF AVAILABLE BUDGET AND TOTAL RETURN Scientiae Mathematicae Japonicae Online, e-2008, 273 283 273 A 0-1 KNAPSACK PROBLEM CONSIDERING RANDOMNESS OF FUTURE RETURNS AND FLEXIBLE GOALS OF AVAILABLE BUDGET AND TOTAL RETURN Takashi Hasuike and

More information

Multi-Layer Perceptrons for Functional Data Analysis: a Projection Based Approach 0

Multi-Layer Perceptrons for Functional Data Analysis: a Projection Based Approach 0 Multi-Layer Perceptrons for Functional Data Analysis: a Projection Based Approach 0 Brieuc Conan-Guez 1 and Fabrice Rossi 23 1 INRIA, Domaine de Voluceau, Rocquencourt, B.P. 105 78153 Le Chesnay Cedex,

More information

Discussion of Hypothesis testing by convex optimization

Discussion of Hypothesis testing by convex optimization Electronic Journal of Statistics Vol. 9 (2015) 1 6 ISSN: 1935-7524 DOI: 10.1214/15-EJS990 Discussion of Hypothesis testing by convex optimization Fabienne Comte, Céline Duval and Valentine Genon-Catalot

More information

Lecture 4: Random-order model for the k-secretary problem

Lecture 4: Random-order model for the k-secretary problem Algoritmos e Incerteza PUC-Rio INF2979, 2017.1 Lecture 4: Random-order model for the k-secretary problem Lecturer: Marco Molinaro April 3 rd Scribe: Joaquim Dias Garcia In this lecture we continue with

More information

Hierarchy among Automata on Linear Orderings

Hierarchy among Automata on Linear Orderings Hierarchy among Automata on Linear Orderings Véronique Bruyère Institut d Informatique Université de Mons-Hainaut Olivier Carton LIAFA Université Paris 7 Abstract In a preceding paper, automata and rational

More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

Recoverable Robustness in Scheduling Problems

Recoverable Robustness in Scheduling Problems Master Thesis Computing Science Recoverable Robustness in Scheduling Problems Author: J.M.J. Stoef (3470997) J.M.J.Stoef@uu.nl Supervisors: dr. J.A. Hoogeveen J.A.Hoogeveen@uu.nl dr. ir. J.M. van den Akker

More information

Worst case analysis for a general class of on-line lot-sizing heuristics

Worst case analysis for a general class of on-line lot-sizing heuristics Worst case analysis for a general class of on-line lot-sizing heuristics Wilco van den Heuvel a, Albert P.M. Wagelmans a a Econometric Institute and Erasmus Research Institute of Management, Erasmus University

More information

On the Tightness of an LP Relaxation for Rational Optimization and its Applications

On the Tightness of an LP Relaxation for Rational Optimization and its Applications OPERATIONS RESEARCH Vol. 00, No. 0, Xxxxx 0000, pp. 000 000 issn 0030-364X eissn 526-5463 00 0000 000 INFORMS doi 0.287/xxxx.0000.0000 c 0000 INFORMS Authors are encouraged to submit new papers to INFORMS

More information

Min-max-min Robust Combinatorial Optimization Subject to Discrete Uncertainty

Min-max-min Robust Combinatorial Optimization Subject to Discrete Uncertainty Min-- Robust Combinatorial Optimization Subject to Discrete Uncertainty Christoph Buchheim Jannis Kurtz Received: date / Accepted: date Abstract We consider combinatorial optimization problems with uncertain

More information

Robust Optimization for Empty Repositioning Problems

Robust Optimization for Empty Repositioning Problems Robust Optimization for Empty Repositioning Problems Alan L. Erera, Juan C. Morales and Martin Savelsbergh The Logistics Institute School of Industrial and Systems Engineering Georgia Institute of Technology

More information

Variable Objective Search

Variable Objective Search Variable Objective Search Sergiy Butenko, Oleksandra Yezerska, and Balabhaskar Balasundaram Abstract This paper introduces the variable objective search framework for combinatorial optimization. The method

More information

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES IJMMS 25:6 2001) 397 409 PII. S0161171201002290 http://ijmms.hindawi.com Hindawi Publishing Corp. A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES

More information

Differential approximation results for the Steiner tree problem

Differential approximation results for the Steiner tree problem Differential approximation results for the Steiner tree problem Marc Demange, Jérôme Monnot, Vangelis Paschos To cite this version: Marc Demange, Jérôme Monnot, Vangelis Paschos. Differential approximation

More information

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs Raphael Louca Eilyan Bitar Abstract Robust semidefinite programs are NP-hard in general In contrast, robust linear programs admit

More information

Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach

Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach L. Jeff Hong Department of Industrial Engineering and Logistics Management The Hong Kong University of Science

More information

Theory and applications of Robust Optimization

Theory and applications of Robust Optimization Theory and applications of Robust Optimization Dimitris Bertsimas, David B. Brown, Constantine Caramanis May 31, 2007 Abstract In this paper we survey the primary research, both theoretical and applied,

More information

3. Branching Algorithms

3. Branching Algorithms 3. Branching Algorithms COMP6741: Parameterized and Exact Computation Serge Gaspers Semester 2, 2015 Contents 1 Introduction 1 2 Maximum Independent Set 3 2.1 Simple Analysis................................................

More information

Strong Formulations of Robust Mixed 0 1 Programming

Strong Formulations of Robust Mixed 0 1 Programming Math. Program., Ser. B 108, 235 250 (2006) Digital Object Identifier (DOI) 10.1007/s10107-006-0709-5 Alper Atamtürk Strong Formulations of Robust Mixed 0 1 Programming Received: January 27, 2004 / Accepted:

More information

Safe Approximations of Chance Constraints Using Historical Data

Safe Approximations of Chance Constraints Using Historical Data Safe Approximations of Chance Constraints Using Historical Data İhsan Yanıkoğlu Department of Econometrics and Operations Research, Tilburg University, 5000 LE, Netherlands, {i.yanikoglu@uvt.nl} Dick den

More information

Distributionally Robust Convex Optimization

Distributionally Robust Convex Optimization Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,

More information

A Recourse Approach for the Capacitated Vehicle Routing Problem with Evidential Demands

A Recourse Approach for the Capacitated Vehicle Routing Problem with Evidential Demands A Recourse Approach for the Capacitated Vehicle Routing Problem with Evidential Demands Nathalie Helal 1, Frédéric Pichon 1, Daniel Porumbel 2, David Mercier 1 and Éric Lefèvre1 1 Université d Artois,

More information

8 Knapsack Problem 8.1 (Knapsack)

8 Knapsack Problem 8.1 (Knapsack) 8 Knapsack In Chapter 1 we mentioned that some NP-hard optimization problems allow approximability to any required degree. In this chapter, we will formalize this notion and will show that the knapsack

More information

Approximation results for the weighted P 4 partition problem

Approximation results for the weighted P 4 partition problem Approximation results for the weighted P 4 partition problem Jérôme Monnot a Sophie Toulouse b a Université Paris Dauphine, LAMSADE, CNRS UMR 7024, 75016 Paris, France, monnot@lamsade.dauphine.fr b Université

More information

Approximability of the Two-Stage Stochastic Knapsack problem with discretely distributed weights

Approximability of the Two-Stage Stochastic Knapsack problem with discretely distributed weights Approximability of the Two-Stage Stochastic Knapsack problem with discretely distributed weights Stefanie Kosuch Institutionen för datavetenskap (IDA),Linköpings Universitet, SE-581 83 Linköping, Sweden

More information

Max-Min Fairness in multi-commodity flows

Max-Min Fairness in multi-commodity flows Max-Min Fairness in multi-commodity flows Dritan Nace 1, Linh Nhat Doan 1, Olivier Klopfenstein 2 and Alfred Bashllari 1 1 Université de Technologie de Compiègne, Laboratoire Heudiasyc UMR CNRS 6599, 60205

More information

Robust optimization for resource-constrained project scheduling with uncertain activity durations

Robust optimization for resource-constrained project scheduling with uncertain activity durations Robust optimization for resource-constrained project scheduling with uncertain activity durations Christian Artigues 1, Roel Leus 2 and Fabrice Talla Nobibon 2 1 LAAS-CNRS, Université de Toulouse, France

More information

Surrogate upper bound sets for bi-objective bi-dimensional binary knapsack problems

Surrogate upper bound sets for bi-objective bi-dimensional binary knapsack problems Surrogate upper bound sets for bi-objective bi-dimensional binary knapsack problems Audrey Cerqueus, Anthony Przybylski, Xavier Gandibleux Université de Nantes LINA UMR CNRS 624 UFR sciences 2 Rue de la

More information