Two-stage integer programs with stochastic right-hand sides: a superadditive dual approach


Math. Program., Ser. B 108 (2006)

Nan Kong · Andrew J. Schaefer · Brady Hunsaker

Two-stage integer programs with stochastic right-hand sides: a superadditive dual approach

Received: July 7, 2004 / Accepted: April 20, 2005 / Published online: June 2, 2006
© Springer-Verlag 2006

Abstract. We consider two-stage pure integer programs with discretely distributed stochastic right-hand sides. We present an equivalent superadditive dual formulation that uses the value functions in both stages. We give two algorithms for finding the value functions. To solve the reformulation after obtaining the value functions, we develop a global branch-and-bound approach and a level-set approach to find an optimal tender. We show that our method can solve randomly generated instances whose extensive forms are several orders of magnitude larger than the extensive forms of those instances found in the literature.

Key words. Stochastic Programming – Integer Programming – Superadditive Duality – Global Branch and Bound – Level Sets

1. Introduction

We consider the following class of two-stage pure integer stochastic programs:

(P1):  max  c^T x + E_ξ Q(x, ξ(ω))                                   (1)
       subject to  Ax ≤ b,  x ∈ Z^{n_1}_+,

where

Q(x, ξ(ω)) = max  d^T y
             subject to  Wy ≤ h(ω) − Tx,  y ∈ Z^{n_2}_+,

and the random variable ω from a probability space (Ω, F, P) is used to describe the realizations of the uncertain parameters, or scenarios. The numbers of constraints and decision variables in stage i are m_i and n_i, for i = 1, 2, and c and b are known vectors in R^{n_1} and R^{m_1}, respectively.

A. J. Schaefer, B. Hunsaker: Department of Industrial Engineering, University of Pittsburgh, 1048 Benedum Hall, Pittsburgh, PA 15261, USA. e-mail: schaefer@ie.pitt.edu, hunsaker@engr.pitt.edu

N. Kong: Department of Industrial and Management Systems Engineering, University of South Florida, 4202 E. Fowler Avenue, ENB 118, Tampa, FL 33620, USA. e-mail: kong@eng.usf.edu

Mathematics Subject Classification (2000): 90C15, 90C10, 90C06

This work is supported by National Science Foundation grants DMI and DMI.

The first-stage constraint matrix A is a known matrix in R^{m_1 × n_1}. Note that the technology matrix T ∈ R^{m_2 × n_1}, the recourse matrix W ∈ R^{m_2 × n_2}, and the second-stage objective function d ∈ R^{n_2} are all deterministic, so that the stochastic component consists of only h(ω). For each ω ∈ Ω, h(ω) ∈ R^{m_2}. In (P1) we also make the following assumptions:

A1. The random variable ω follows a discrete distribution with finite support.
A2. The first-stage feasibility set X = { x | Ax ≤ b, x ∈ Z^{n_1}_+ } is nonempty and bounded.
A3. Q(x, ξ(ω)) is finite for all x ∈ X and ω ∈ Ω.
A4. The first-stage constraint matrix, the technology matrix, and the recourse matrix are integral, i.e., A ∈ Z^{m_1 × n_1}, T ∈ Z^{m_2 × n_1}, W ∈ Z^{m_2 × n_2}.

Assumption A1 is justified by the results of Schultz [39], who observed that the optimal solution to any stochastic program with continuously distributed ω can be approximated within any given accuracy by the use of a discrete distribution. Assumption A2, along with the integrality restrictions in the first stage, ensures that X is a finite set. Assumption A3 explicitly requires that Q(x, ξ(ω)) is feasible for all x ∈ X and ω ∈ Ω, which is known as relatively complete recourse. Note that most of the related work in the literature makes assumptions similar to Assumptions A1–A3, e.g., [1, 11, 41, 42]. Assumption A4 guarantees that, without loss of generality, we can assume b ∈ Z^{m_1} and h(ω) ∈ Z^{m_2} for all ω ∈ Ω.

In this paper, we develop a two-phase solution procedure to solve a superadditive dual reformulation of an important class of two-stage stochastic integer programs. The reformulation takes advantage of the fact that many similar integer programs must be solved. This approach is relatively insensitive to the numbers of scenarios and decision variables, but it is sensitive to the number of rows as well as the magnitude of feasible right-hand side values.

In Section 2, we review some related research results in stochastic integer programming, particularly some algorithmic development results, as well as some superadditive duality properties of integer programs. In Section 3, we present a superadditive dual reformulation of (P1). Since ω is discretely distributed, the reformulation provides many similar integer programs in the first stage and second stage. In the first phase of our solution procedure, we construct the value functions in both stages to solve these integer programs and take advantage of the similarity among them using IP duality. Section 4 considers the first phase of the solution procedure. In this section, we propose two algorithms to compute the first- and second-stage value functions. In the second phase of the solution procedure, search space reduction methods are applied to evaluate the objective functions c^T x and Q(x) implicitly and optimize c^T x + Q(x) efficiently with respect to the tender variable. Section 5 considers the second phase of the solution procedure. In this section, we describe two search techniques to find an optimal tender for the purpose of reducing the number of objective function evaluations that are needed. In Section 6, we discuss some implementation details and present computational results. These computational results indicate that the method described in this paper is able to solve a specific class of instances whose extensive forms are several orders of magnitude larger than the extensive forms of instances previously reported in the literature.
One feature of these instances is that there are very few rows but enormously many columns in both stages. We draw conclusions and give directions for future research in Section 7.

2. Mathematical preliminaries

We first review some research results in stochastic integer programming, with particular emphasis on three papers closely related to this work. We then review the concept of value functions and some properties of integer programs, which will be used later to develop the algorithms to find value functions.

2.1. Stochastic integer programming

Relative to stochastic linear programs, very little is known about two-stage stochastic integer programs. When integrality restrictions are only imposed in the first stage (in other words, the recourse problem is continuous), models retain convexity properties and algorithms similar to those for stochastic linear programs may be applied [49]. Such problems arise from a variety of real-world applications, including cargo network scheduling [34], manufacturing capacity planning [4], and telecommunication network planning [13, 45]. When some or all second-stage decision variables are also required to be integers, Q(x) becomes nonconvex and discontinuous in general [33, 46]. Models in this case are classified as stochastic programs with (mixed-)integer recourse. A few unit commitment problems fall into this category [38]. Due to the inherent difficulties in two-stage stochastic programs with integer recourse, problems with special structures have been identified and studied. An example is stochastic programs with simple integer recourse (SIR), in which the only possible recourse action is to incur linear penalties for any shortage or surplus [26, 27, 29, 30, 33].

In the case of general two-stage stochastic integer programs, relatively few solution techniques have been developed. The majority of algorithmic developments for these problems are based on cutting planes or branch-and-bound techniques, which have proved their effectiveness in solving large-scale deterministic mixed-integer programs. Laporte and Louveaux [31] developed a decomposition-based approach that combines the L-shaped method [48] and branch-and-cut techniques. Norkin et al. [37] proposed a stochastic branch-and-bound algorithm for minimizing the expected recourse function of a discrete stochastic program over a finite set. Other branch-and-bound approaches include scenario decomposition techniques [8, 9] and a branch-and-fix approach [2, 3]. Some recent papers have applied cutting planes [10, 11, 42–44]. In recent years, test set methods have gained interest in stochastic integer programming. For example, Graver test sets [18] have been applied to solving the pure integer version of two-stage stochastic integer programs [19]. A detailed discussion of various algorithms for stochastic integer programs can be found in Carøe [8], Klein Haneveld and van der Vlerk [28], and Schultz [40].

Next, we describe in more detail three papers closely related to this work. (P1) is similar to the problem considered in Schultz et al. [41], except that we further assume that all first-stage decision variables are integral and the first-stage feasibility set is bounded. The authors applied Gröbner basis reductions [6, 47] to exploit the underlying relationship within the family of second-stage integer programs parameterized by their right-hand sides. However, determining Gröbner bases is challenging even for problems of small size.

A recent implementation of computing Gröbner bases [14] indicated that these drawbacks may be overcome. The recourse problem of the examples in [41] has only 4 binary decision variables and 2 constraints. In addition, the set of points in the space of first-stage decision variables for which the objective function has to be evaluated may be large. Therefore, a level-set approach and some further improvements were presented that aim at reducing the number of candidate points.

Ahmed et al. [1] proposed a dual reformulation of a general class of two-stage stochastic integer programs and developed a global branch-and-bound algorithm that guarantees finite termination and avoids explicit enumeration of the search space. Their algorithm achieved better computational performance than previous works on several test problems from the literature.

Carøe and Tind [11] generalized the convex hull approximation of the expected recourse function in the L-shaped method. They extended the integer L-shaped method to pure integer second-stage cases by introducing nonlinear cuts generated via IP duality. They also employed branch-and-bound and cutting plane techniques for solving the second-stage problem in the generalized L-shaped method. However, they concluded that solving the master problem obtained from either approach would in general become cumbersome. No implementation details or computational results were provided in their paper.

2.2. Superadditive duality for integer programs

Superadditive duality was pioneered by Gomory [17], who showed strong duality for the group problem. An explicit statement of a superadditive dual appeared in Johnson [21] in the context of a cyclic group problem. The treatment for other integer programs was given in Johnson [23, 24]. For a summary of results on superadditive duality, see Johnson [22] and Nemhauser [35].

Before stating some necessary results on superadditive duality, we introduce the concept of a value function. Much of the notation here is the same as that in Nemhauser and Wolsey [36]. Given G ∈ Z^{m × n}, we consider the following family of parameterized pure integer programs:

(PIP):  z(β) = max { γ^T x | x ∈ S(β) },   S(β) = { x ∈ Z^n_+ | Gx ≤ β },   for β ∈ Z^m.

The function z(·) : Z^m → Z is called the value function of (PIP). We say that z(β) = −∞ if S(β) = ∅ and that z(β) = +∞ if the objective value is unbounded from above. We define opt(β) to be argmax { γ^T x | x ∈ S(β) }; that is, the set of optimal primal solutions to (PIP) given a right-hand side β. Define S_LP(β) = { x ∈ R^n_+ | Gx ≤ β } and z_LP(β) = max { γ^T x | x ∈ S_LP(β) }, the linear relaxation of z(β). We first state several elementary properties of the value function. The proofs of these propositions can be found in [36, 50].

Proposition 1. z(0) ∈ {0, +∞}. If z(0) = 0, then z(β) < +∞ for all β ∈ Z^m. If z(0) = +∞, then z(β) = ±∞ for all β ∈ Z^m.

Problems with z(β) = ±∞ for all β ∈ Z^m reduce to feasibility problems. Hence, for simplicity of exposition, we assume z(0) = 0 and thus z(β) < +∞ for all β ∈ Z^m.
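To make these definitions concrete, the short script below (ours, not the authors'; the instance data, the bound x_max, and the function name are illustrative) tabulates z(β) by brute-force enumeration for a tiny two-row pure integer program. A table built this way can also be used to check numerically the monotonicity and superadditivity properties reviewed next.

```python
import itertools

# Small instance of (PIP): max {3x1 + 5x2 : x1 + 2x2 <= b1, 2x1 + x2 <= b2, x in Z^2_+}.
G = [(1, 2), (2, 1)]
gamma = (3, 5)

def z(beta, x_max=20):
    """Brute-force value function z(beta); -inf if S(beta) is empty."""
    best = float("-inf")
    for x in itertools.product(range(x_max + 1), repeat=len(gamma)):
        if all(sum(G[i][j] * x[j] for j in range(len(x))) <= beta[i]
               for i in range(len(G))):
            best = max(best, sum(gamma[j] * x[j] for j in range(len(x))))
    return best

table = {b: z(b) for b in itertools.product(range(6), repeat=2)}
print(table[(4, 4)])                                    # value at beta = (4, 4)
# Superadditivity (Proposition 4 below): z(b1) + z(b2) <= z(b1 + b2).
print(table[(2, 2)] + table[(2, 2)] <= table[(4, 4)])
```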

Proposition 2. Let g_j and γ_j be the j-th column of the constraint matrix G and the j-th coefficient of the objective function γ^T x of (PIP), respectively. Then z(g_j) ≥ γ_j for j = 1,...,n.

Proposition 3 (Nondecreasing). The value function of (PIP) is nondecreasing over Z^m.

Proposition 4 (Superadditivity). The value function of (PIP) is superadditive over D = { β ∈ Z^m | S(β) ≠ ∅ }. That is, for all β^1, β^2 ∈ D, if β^1 + β^2 ∈ D, then z(β^1) + z(β^2) ≤ z(β^1 + β^2).

The superadditivity of the value function of pure integer programs was first studied by Blair and Jeroslow [5] and Wolsey [50].

Proposition 5 (Integer Complementary Slackness). If x̂ ∈ opt(β), then z(Gx) = γ^T x and z(Gx) + z(β − Gx) = z(Gx) + z(G(x̂ − x)) = z(β), for all x ∈ Z^n_+ such that x ≤ x̂.

Corollary 1 (Column Elimination). If z(g_j) > γ_j, then for all β ∈ Z^m and all x̂ ∈ opt(β), x̂_j = 0.

We provide a more general column elimination procedure in Section 4.1.

3. Superadditive dual reformulation

The key concept behind our development is to reformulate (P1) via a decision variable transformation, similar to the one presented in Ahmed et al. [1]. With this reformulation, we divide the solution procedure into two phases. Unlike Ahmed et al. [1], where a global branch-and-bound framework was presented, we give more attention to the first phase, in which we develop two algorithms to find the first-stage and second-stage value functions defined over an enormous number of right-hand sides. Unlike Carøe and Tind [11], we exploit the properties of superadditive duality for finding the value functions in both stages efficiently. In the second phase, we implicitly search a solution space constructed through the reformulation. One technique to reduce the search space is a branch-and-bound approach. The other technique is based on a level-set approach, which differs from the one in Schultz et al. [41] in that it constructs the set of candidate solutions first and then searches for the optimal solution in this set.

We reformulate (P1) using the value functions of pure integer programs in both stages. Define B^1 to be the set of vectors β^1 ∈ R^{m_2} such that there exists x ∈ X with β^1 = Tx, i.e., B^1 = { β^1 ∈ R^{m_2} | ∃ x ∈ X, β^1 = Tx }, where X ⊆ Z^{n_1}_+ is the first-stage feasibility set. Define B^2 to be the set of vectors β^2 ∈ R^{m_2} such that there exist β^1 ∈ B^1 and ω ∈ Ω with β^2 = h(ω) − β^1, i.e., B^2 = ∪_{β^1 ∈ B^1} ∪_{ω ∈ Ω} { h(ω) − β^1 }. Note that since T ∈ Z^{m_2 × n_1}, all vectors contained in B^1 are, in fact, integral. Together with the condition h(ω) ∈ Z^{m_2} for all ω ∈ Ω, this implies that all vectors in B^2 are also integral.

For any β^1 ∈ Z^{m_2}, define the first-stage value function as:

ψ(β^1) = max { c^T x | x ∈ S^1(β^1) },   S^1(β^1) = { x ∈ Z^{n_1}_+ | Ax ≤ b, Tx ≤ β^1 }.   (2)

Note that the condition Tx = β^1 in the definition of B^1 is replaced by Tx ≤ β^1 in (2). As a matter of fact, Tx ≤ β^1 is equivalent to Tx = β^1 by Proposition 3 in defining the first-stage value function. Using the inequalities instead of the equalities enables us to apply superadditive duality in the algorithmic aspect. For any β^2 ∈ Z^{m_2}, define the second-stage value function as:

φ(β^2) = max { d^T y | y ∈ S^2(β^2) },   S^2(β^2) = { y ∈ Z^{n_2}_+ | Wy ≤ β^2 }.   (3)

Then we reformulate (P1) as:

(P2):  max { ψ(β) + E_ξ φ(h(ω) − β) | β ∈ B^1 }.   (4)

The variables β in (P2) are known as the tender variables. Instead of searching X, we search the space of tender variables to obtain a global optimum. Note that the linear transformation β = Tx preserves the finiteness of the search space under Assumption A2. The following result establishes the correspondence between the optimal solutions to (P1) and (P2).

Theorem 1. Let β* be an optimal solution to (P2). Then x̂ ∈ arg max { c^T x | Ax ≤ b, Tx ≤ β*, x ∈ Z^{n_1}_+ } is an optimal solution to (P1). Furthermore, the optimal objective values of the two problems are equal.

The proof of Theorem 1 is similar to the one for Theorem 3.2 of Ahmed et al. [1], except for the difference in the definition of the first-stage value function.

It is evident that finding the value functions in both stages is the key to solving (P2). In the next section, we present one algorithm based on integer programming and one based on dynamic programming to find the value functions ψ(·) and φ(·). To simplify the exposition, we continue the use of the notation z(·) for a generic value function in the next section. In our current implementation, value functions z(β) are stored in computer memory explicitly for all β ∈ B. As the dimension of B increases, |B| increases exponentially and each z(β) for β ∈ B takes more memory. Hence the algorithms are only suitable for problems with small |B| when value functions are explicitly stored. However, recent results in [12] indicated that the value function of an integer program can be encoded and computed as a polynomial-size rational generating function. This may offer a way to overcome the computational difficulty in future work.

4. Finding the value function of a parameterized integer program

Very little is known about how to compute the value function efficiently for a general integer program. Llewellyn and Ryan [32] showed how a value function can be constructed from Chvátal-Gomory cuts. Burdet and Johnson [7] presented an algorithm that strengthens Chvátal-Gomory cuts by using superadditive functions. Neither of these works presented any computational results. Klabjan [25] studied a family of computationally tractable superadditive dual functions and developed a solution methodology that computes the value function. He also discussed implementation details and presented computational results on several set partitioning instances.

With the integrality specification on the problem parameters A, T, and W in (P1), we consider a class of integer programs z(β) = max { γ^T x | Gx ≤ β, x ∈ Z^n_+ } for β ∈ B, where G is integral. This broad class of problems includes the group problem, the integer knapsack problem, and the set packing problem, for example. In (P1), Assumptions A1 and A2 ensure the finiteness of B^1 and B^2. Therefore, we assume that B is finite. Additionally, under Assumptions A2 and A3, the value functions in both stages, ψ(β^1) and φ(β^2), are finite for all β^1 ∈ B^1 and β^2 ∈ B^2, respectively. Therefore, we assume that z(β) is finite for all β ∈ B. Hence, a dual feasible solution π exists for z_LP(β) with respect to any β ∈ B. Since B^1, B^2 ⊆ Z^{m_2}, we also assume B ⊆ Z^m. Next, we present two algorithms for finding the value function z(·).

4.1. An integer-programming-based algorithm for finding the value function

The algorithm described in this section defines l(·) and u(·) to be lower and upper bounds of z(·), respectively, and maintains l(β) ≤ z(β) ≤ u(β) for all β ∈ B throughout the procedure of finding z(·). Once l(β) = u(β), z(β) is known. The algorithm terminates when z(β) is determined for all β ∈ B. At each iteration, the algorithm updates l(β) and u(β) for some β ∈ B by performing the following three basic operations.

1. Solve an integer program given a right-hand side β ∈ B and obtain an optimal primal solution x̂.
2. Given x̂, the optimal primal solution obtained from the first operation, apply the complementary slackness property for integer programs (Proposition 5).
3. Apply the nondecreasing and superadditive properties (Propositions 3 and 4).

Most of the algorithms for finding the value function of integer programs in the literature construct the value function by iteratively improving an upper bounding superadditive function, without solving the integer program for any right-hand side [7, 25]. Our proposed algorithm, on the other hand, solves integer programs for some right-hand sides β ∈ B. Another difference is that our algorithm terminates once z(β) is determined for all β ∈ B.

Algorithm 1. An IP-based Algorithm for Finding the Value Function

Step 0: Initialize the lower bound l_0(β) = −∞ for all β ∈ B. For j = 1,...,n, if g_j ∈ B, set l_0(g_j) = γ_j. Without loss of generality, we assume that there are no duplicate columns. Find z_LP(β^0) for an arbitrary β^0 ∈ B to obtain an optimal dual solution π_0. Initialize the upper bound u_0(β) = π_0^T β for all β ∈ B. Set k ← 1.

Step 1: Put l_k(β) ← l_{k−1}(β) and u_k(β) ← u_{k−1}(β) for all β ∈ B. Select β^k ∈ B such that l_k(β^k) < u_k(β^k). Solve the integer program with right-hand side β^k to obtain an optimal primal solution x̂^k. Set l_k(β^k) = u_k(β^k) = γ^T x̂^k. Initialize a vector list L^k = {β^k}.

Step 2: Select all x ∈ Z^n_+ such that x ≤ x̂^k. If Gx ∈ B, set l_k(Gx) = u_k(Gx) = γ^T x and L^k ← L^k ∪ {Gx}. If β^k − Gx ∈ B, set l_k(β^k − Gx) = u_k(β^k − Gx) = γ^T (x̂^k − x) and L^k ← L^k ∪ {β^k − Gx}. Let V be the set containing all right-hand sides β ∈ B such that l_k(β) ≠ u_k(β). Select an arbitrary set V^k ⊆ V.

Step 3: For any (β, β') ∈ V^k × L^k,
(3a) if β ≥ β', set l_k(β) ← max{ l_k(β), l_k(β') };
(3b) if β ≤ β', set u_k(β) ← min{ u_k(β), u_k(β') }.

Step 4: For any (β, β') ∈ V^k × L^k,
(4a) if β − β' ∈ B\L^k, set l_k(β) ← max{ l_k(β), l_k(β') + l_k(β − β') };
(4b) if β' − β ∈ B\L^k, set u_k(β) ← min{ u_k(β), u_k(β') − l_k(β' − β) };
(4c) if β + β' ∈ B\L^k, set u_k(β) ← min{ u_k(β), u_k(β + β') − l_k(β') }.

Step 5: If l_k(β) = u_k(β) for all β ∈ B, terminate with solution z(·) = l_k(·) = u_k(·); otherwise, set k ← k + 1 and go to Step 1.

A possible improvement to Algorithm 1 is column elimination. For j = 1,...,n, if g_j ∈ B, one can compare l_{k−1}(g_j) and γ_j at the beginning of iteration k > 1. By Corollary 1, if l_{k−1}(g_j) > γ_j, one can delete the j-th column of (PIP). Note that the running time of column elimination at each iteration is O(n).

To prove the correctness of Algorithm 1, we first show that l_k(·) and u_k(·) are lower bound and upper bound functions of z(·) at any iteration k, respectively. Then we show that at each iteration there exists at least one right-hand side whose optimal objective value is obtained.

Lemma 1. At any iteration k in Algorithm 1, l_k(β) ≤ z(β) ≤ u_k(β) for all β ∈ B.

Proof. We show the result by induction. In Step 0, u_0(β) = π_0^T β ≥ z(β) for β ∈ B. Denote G to be the set of columns in the constraint matrix G of (PIP). For β ∈ B\G, l_0(β) = −∞ ≤ z(β). For j = 1,...,n, if g_j ∈ B, l_0(g_j) = γ_j ≤ z(g_j). Suppose that at iteration k − 1 ≥ 0, Lemma 1 holds. At iteration k, in Steps 1 and 2, for β ∈ L^k, l_k(β) = u_k(β) = z(β). In Step 3, for β' ∈ L^k, l_k(β') = u_k(β') = z(β'). Hence for β ∈ V^k, if β ≥ β', l_k(β) ← max{ l_k(β), l_k(β') } ≤ z(β), and if β ≤ β', u_k(β) ← min{ u_k(β), u_k(β') } ≥ z(β), since the result held at iteration k − 1 and z(·) satisfies the nondecreasing property. In Step 4, for β' ∈ L^k, l_k(β') = u_k(β') = z(β'). Hence for β ∈ V^k, if β − β' ∈ B\L^k, l_k(β) ← max{ l_k(β), l_k(β') + l_k(β − β') } ≤ z(β); if β' − β ∈ B\L^k, u_k(β) ← min{ u_k(β), u_k(β') − l_k(β' − β) } ≥ z(β); and if β + β' ∈ B\L^k, u_k(β) ← min{ u_k(β), u_k(β + β') − l_k(β') } ≥ z(β). All inequalities above hold because the result held at iteration k − 1 and z(·) satisfies the superadditivity property. Therefore, Lemma 1 also holds at iteration k.

Theorem 2. Algorithm 1 terminates finitely with optimal solutions to z(·) with respect to all β ∈ B.

Proof. Consider any iteration k ≥ 1. Clearly after Step 1 of the iteration, there exists at least one β ∈ B such that l_k(β) = u_k(β) = z(β) but l_{k−1}(β) ≠ z(β) or u_{k−1}(β) ≠ z(β). By Lemma 1, l_k(β) ≤ z(β) ≤ u_k(β) for all β ∈ B at any iteration k, and since B is finite, the result follows.
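The following sketch illustrates how Algorithm 1 might be organized in code. It is our illustrative rendering, not the authors' implementation: the brute-force solve_ip stands in for a real IP solver, the dual-feasible π of Step 0 is replaced by the simple construction described in the comments (valid when G and γ are nonnegative and every column of G is nonzero), and V^k is taken to be all not-yet-determined right-hand sides.

```python
import itertools
import numpy as np

def solve_ip(G, gamma, beta, x_max=20):
    """Stand-in IP solver: brute force over a bounded box (fine for tiny examples)."""
    best_val, best_x = float("-inf"), None
    for x in itertools.product(range(x_max + 1), repeat=len(gamma)):
        x = np.array(x)
        if np.all(G @ x <= beta):
            v = float(gamma @ x)
            if v > best_val:
                best_val, best_x = v, x
    return best_val, best_x

def ip_based_value_function(G, gamma, B):
    """Sketch of Algorithm 1 over a finite list B of integer right-hand sides.
    Assumes G >= 0 with nonzero columns and gamma >= 0."""
    G, gamma = np.asarray(G), np.asarray(gamma, dtype=float)
    B = [tuple(b) for b in B]
    in_B = set(B)
    # Step 0: lower bounds from the columns; upper bounds from a dual-feasible pi.
    # pi = M * (1,...,1) with M = max_j gamma_j / colsum_j satisfies pi^T g_j >= gamma_j,
    # so pi^T beta is a valid upper bound on z(beta) by weak LP duality.
    l = {b: float("-inf") for b in B}
    for j in range(G.shape[1]):
        gj = tuple(G[:, j])
        if gj in in_B:
            l[gj] = max(l[gj], gamma[j])
    M = max(gamma[j] / G[:, j].sum() for j in range(G.shape[1]))
    u = {b: M * sum(b) for b in B}
    # Main loop: solve one IP per iteration, then propagate with Props. 3-5.
    while any(l[b] < u[b] for b in B):
        bk = next(b for b in B if l[b] < u[b])
        zk, xk = solve_ip(G, gamma, np.array(bk))
        l[bk] = u[bk] = zk
        Lk = {bk}
        # Step 2: integer complementary slackness for every x <= x_hat.
        for x in itertools.product(*(range(xi + 1) for xi in xk)):
            x = np.array(x)
            gx = tuple(G @ x)
            if gx in in_B:
                l[gx] = u[gx] = float(gamma @ x)
                Lk.add(gx)
            rem = tuple(np.array(bk) - G @ x)
            if rem in in_B:
                l[rem] = u[rem] = float(gamma @ (xk - x))
                Lk.add(rem)
        # Steps 3-4: monotonicity and superadditivity updates against known values.
        for b in B:
            if l[b] == u[b]:
                continue
            for bp in Lk:
                d1 = tuple(np.array(b) - np.array(bp))   # b - b'
                d2 = tuple(np.array(bp) - np.array(b))   # b' - b
                s = tuple(np.array(b) + np.array(bp))    # b + b'
                if all(v >= 0 for v in d1):              # b >= b'  (Prop. 3)
                    l[b] = max(l[b], l[bp])
                if all(v >= 0 for v in d2):              # b <= b'  (Prop. 3)
                    u[b] = min(u[b], u[bp])
                if d1 in in_B:                           # Step (4a)
                    l[b] = max(l[b], l[bp] + l[d1])
                if d2 in in_B:                           # Step (4b)
                    u[b] = min(u[b], u[bp] - l[d2])
                if s in in_B:                            # Step (4c)
                    u[b] = min(u[b], u[s] - l[bp])
    return l
```

On the tiny two-row example used earlier, calling `ip_based_value_function([[1, 2], [2, 1]], [3, 5], list(itertools.product(range(5), range(5))))` reproduces the brute-force table while typically solving far fewer than |B| integer programs.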

4.2. A dynamic-programming-based algorithm for finding the value function

In this section we discuss a simple algorithm for finding the value function. It only applies, however, to problems with nonnegative G. Therefore, we assume that G is also nonnegative here. Thus, B ⊆ Z^m_+ under the assumption that z(β) is finite for all β ∈ B; if β_i < 0 for some i = 1,...,m, then z(β) = −∞, which violates this assumption. Because B is finite, there exists a nonnegative hyper-rectangle rooted at the origin that contains B. Therefore, once the value of z(·) for every integer point in the hyper-rectangle is found, we have obtained z(β) for all β ∈ B. Denote B̄ to be the hyper-rectangle and b̄ = (b̄_1, b̄_2, ..., b̄_m) to be the largest vector in B̄ componentwise. Hence, for simplicity of exposition, we let B = B̄ ∩ Z^m_+, where B̄ = [0, b̄_1] × [0, b̄_2] × ... × [0, b̄_m].

In contrast to the IP-based algorithm, the following algorithm only defines l(·) and does not need to solve any integer programs. The algorithm is motivated by Gilmore and Gomory's dynamic programming recursion for the knapsack problem [16].

Algorithm 2. A DP-based Algorithm for Finding the Value Function

Step 0: Initialize the lower bound l_0(β) = 0 for all β ∈ B. For j = 1,...,n, if g_j ∈ B, set l_0(g_j) = γ_j and insert g_j into a vector list L. Set l_1(β) = l_0(β) for all β ∈ B. Set k ← 1.

Step 1: Denote the k-th vector in L by β^k and the i-th element of a vector β by β_i. Let β = β^k. Update all vectors β such that β ∈ B and β ≥ β^k in the following lexicographic order:
(1a) Set l_k(β) ← max{ l_k(β), l_k(β^k) + l_k(β − β^k) } and then set β_1 ← β_1 + 1.
(1b) If β_1 > b̄_1, go to Step (1c); otherwise, go to Step (1a).
(1c) If β_i ≥ b̄_i for all i = 1,...,m, go to Step 2. Otherwise, let s = min{ i : β_i < b̄_i }. Set β_i ← β^k_i for i = 1,...,s − 1, set β_s ← β_s + 1, and go to Step (1a).

Step 2: If k = |L|, terminate with solution z(·) = l_k(·). Otherwise, put l_{k+1}(β) ← l_k(β) for all β ∈ B, set k ← k + 1, and go to Step 1.

Denote B_j to be the set containing all β ∈ B such that β ≥ g_j. Step 0 of Algorithm 2 solves z(β) for all β ∈ B \ ∪_{j=1}^n B_j, which is shown below in Proposition 6. Steps 1 and 2 of the algorithm are essentially similar to Step (4a) in Algorithm 1. They update l(·) iteratively, which is also similar to the dynamic programming recursion presented in [16].

Proposition 6. For all β ∈ B \ ∪_{j=1}^n B_j, z(β) = 0.

Theorem 3 [16]. For any β ∈ ∪_{j=1}^n B_j,

z(β) = max{ γ_j + z(β − g_j) : g_j ≤ β, j = 1,...,n }.   (5)

Theorem 4. Algorithm 2 terminates with optimal solutions to z(·) in at most n iterations.

Proof. For any β ∈ B \ ∪_{j=1}^n B_j, we initialize l_0(β) = 0 in Step 0 and do not update it afterward. By Proposition 6, z(β) = 0 for β ∈ B \ ∪_{j=1}^n B_j. Assume that the algorithm terminates at iteration k = |L|. Then l_k(β) = z(β) for all β ∈ B \ ∪_{j=1}^n B_j. Suppose there exists β ∈ ∪_{j=1}^n B_j such that l_k(β) ≠ z(β) while l_k(β') = z(β') for all β' ≤ β, β' ≠ β, with β' ∈ ∪_{j=1}^n B_j. Then clearly l_k(β) < z(β) by Lemma 1. It follows that there exists a j ∈ {1,...,n} such that l_k(β) < γ_j + z(β − g_j) by Theorem 3. Since l_k(g_j) ≥ γ_j and l_k(β − g_j) = z(β − g_j), it follows that l_k(g_j) + l_k(β − g_j) ≥ γ_j + z(β − g_j) > l_k(β), which contradicts the superadditivity of l_k(·). Hence, l_k(β) = z(β) for all β ∈ ∪_{j=1}^n B_j. The result thus follows from k = |L| ≤ n.

Step 1 of Algorithm 2 requires O(|B|) calculations, so the overall running time of the algorithm is O(n|B|).
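A direct implementation of recursion (5), which Algorithm 2 realizes column by column, is sketched below (our code; the function name and the tiny example data are illustrative). Because G is nonnegative, enumerating the box in lexicographic order guarantees that z(β − g_j) is available whenever z(β) is computed.

```python
import itertools
import numpy as np

def dp_value_function(G, gamma, b_bar):
    """Value function z(beta) = max{gamma^T x : G x <= beta, x in Z^n_+}
    for every integer beta in the box [0, b_bar], assuming G >= 0 integral.
    Direct implementation of recursion (5); Algorithm 2 builds the same
    table one column of G at a time."""
    G = np.asarray(G, dtype=int)              # m x n, nonnegative
    gamma = np.asarray(gamma, dtype=float)
    m, n = G.shape
    z = {}
    # Lexicographic (row-major) order: beta - g_j always precedes beta.
    for beta in itertools.product(*(range(bi + 1) for bi in b_bar)):
        best = 0.0                            # x = 0 is always feasible (Prop. 6)
        for j in range(n):
            gj = G[:, j]
            if all(gj[i] <= beta[i] for i in range(m)):
                best = max(best, gamma[j] + z[tuple(np.array(beta) - gj)])
        z[beta] = best
    return z

# Tiny example: two rows, three columns.
G = [[1, 2, 0],
     [1, 1, 2]]
gamma = [3, 5, 4]
z = dp_value_function(G, gamma, b_bar=(4, 4))
print(z[(4, 4)], z[(2, 1)])
```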

5. Finding the optimal tender

Once the value functions in both stages are found, an optimal tender β* for (P2) is determined by searching B^1. The brute-force way to do this is to evaluate the objective function in (P2) for all β ∈ B^1. We call this method exhaustive search. We will propose two additional methods for finding β* that are often more efficient. The first one is a global branch-and-bound approach in which bounds are designed for each hyper-rectangle that is a subset of B^1. The second approach is a level-set approach that evaluates the objective function of (P2) only in a subset of B^1.

5.1. A global branch-and-bound approach

The proposed branch-and-bound approach is based on a generic global branch-and-bound method described in Horst and Tuy [20]. In the presentation of the following algorithm, M is a list of unfathomed hyper-rectangles, each of which is associated with a subproblem of the form

f^k = max { ψ(β) + E_ξ φ(h(ω) − β) | β ∈ P^k ∩ Z^{m_2} },

a lower bound μ^k ≤ f^k, and an upper bound ν^k ≥ f^k.

Algorithm 3. A Global Branch-and-Bound Algorithm

Step 0 (Initialization): Construct the hyper-rectangle P^0 := [l^0, u^0] = ∏_{i=1}^{m_2} [l^0_i, u^0_i] such that B^1 ⊆ P^0 ∩ Z^{m_2}. Initialize the list M ← {P^0} and a global lower bound L = ψ(β^0) + E_ξ φ(h(ω) − β^0) with an arbitrarily selected β^0 ∈ B^1. Set μ^0 = ψ(l^0) + E_ξ φ(h(ω) − u^0) and ν^0 = ψ(u^0) + E_ξ φ(h(ω) − l^0). Set k ← 1.

Step 1 (Subproblem selection): If M = ∅, terminate with optimal solution β*; otherwise, select and delete from M a hyper-rectangle P^k := [l^k, u^k] = ∏_{i=1}^{m_2} [l^k_i, u^k_i].

Step 2 (Subproblem pruning):
(2a) If ν^k ≤ L or P^k ∩ B^1 = ∅, go to Step 1.
(2b) If μ^k < ν^k, i.e., P^k is an unfathomed hyper-rectangle, go to Step 3.
(2c) If μ^k = ν^k and L < μ^k, update L = μ^k = ν^k = f^k, arbitrarily select a β ∈ P^k ∩ B^1, and set β* = β.
(2d) Delete from M all hyper-rectangles P^{k'} with ν^{k'} ≤ L, i.e., M ← M \ { P^{k'} | ν^{k'} ≤ L }. Go to Step 1.

Step 3 (Subproblem partitioning): Choose a dimension i, 1 ≤ i ≤ m_2, such that l^k_i < u^k_i. Divide P^k into two hyper-rectangles P^{k1} and P^{k2} along dimension i as:

P^{k1} := [l^{k1}, u^{k1}] = [ l^k_i, ⌊(u^k_i + l^k_i)/2⌋ ] × ∏_{i' ≠ i} [ l^k_{i'}, u^k_{i'} ],
P^{k2} := [l^{k2}, u^{k2}] = [ ⌊(u^k_i + l^k_i)/2⌋ + 1, u^k_i ] × ∏_{i' ≠ i} [ l^k_{i'}, u^k_{i'} ].

Add the two hyper-rectangles P^{ki}, i = 1, 2, to M, i.e., M ← M ∪ { P^{k1}, P^{k2} }. Set μ^{ki} = ψ(l^{ki}) + E_ξ φ(h(ω) − u^{ki}) and ν^{ki} = ψ(u^{ki}) + E_ξ φ(h(ω) − l^{ki}), i = 1, 2. Set k ← k + 1 and go to Step 1.

Note that Algorithm 3 requires the storage of the first-stage value function ψ(·) over P^0 ∩ Z^{m_2}, and the storage of the second-stage value function φ(·) over the integer set induced by P^0 ∩ Z^{m_2}.
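A compact sketch of Algorithm 3 follows (our code, with illustrative names). It assumes the value-function tables ψ and φ from Phase I are stored as dictionaries over the relevant boxes, that every integer point of the enclosing box belongs to B^1, and it keeps unexplored boxes on a simple stack instead of maintaining the list M with global repruning; these simplifications only shorten the sketch.

```python
def global_branch_and_bound(psi, phi, scenarios, box, tol=1e-9):
    """Sketch of Algorithm 3. psi and phi map integer tuples to the stage
    value functions from Phase I; scenarios is a list of (probability, h)
    pairs; box = (l0, u0) with B^1 assumed to be all of [l0, u0] ∩ Z^{m_2}."""
    def expected_phi(beta):
        return sum(p * phi[tuple(hi - bi for hi, bi in zip(h, beta))]
                   for p, h in scenarios)

    def obj(beta):
        return psi[tuple(beta)] + expected_phi(beta)

    l0, u0 = box
    best_beta, best_val = tuple(l0), obj(l0)       # incumbent = global lower bound L
    stack = [(tuple(l0), tuple(u0))]
    while stack:
        l, u = stack.pop()
        # Proposition 7: psi and phi are nondecreasing, so these bracket the
        # best objective value attainable inside [l, u].
        mu = psi[l] + expected_phi(u)              # lower bound on the box
        nu = psi[u] + expected_phi(l)              # upper bound on the box
        if nu <= best_val + tol:                   # Step (2a): prune the box
            continue
        if nu - mu <= tol:                         # Step (2c): box is fathomed
            best_val, best_beta = mu, l
            continue
        if obj(u) > best_val:                      # cheap incumbent update
            best_val, best_beta = obj(u), u
        # Step 3: bisect along a dimension that is not yet a single point.
        i = max(range(len(l)), key=lambda d: u[d] - l[d])
        mid = (l[i] + u[i]) // 2
        stack.append((l, tuple(mid if d == i else u[d] for d in range(len(u)))))
        stack.append((tuple(mid + 1 if d == i else l[d] for d in range(len(l))), u))
    return best_beta, best_val
```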

Proposition 7. For any P^k, μ^k and ν^k are a lower bound and an upper bound, respectively, of max { ψ(β) + E_ξ φ(h(ω) − β) | β ∈ P^k ∩ Z^{m_2} }.

Proof. In P^k, l^k ≤ β ≤ u^k for all β ∈ P^k. By the nondecreasing property of ψ(·) and φ(·), ψ(l^k) ≤ ψ(β) ≤ ψ(u^k) and φ(h(ω) − u^k) ≤ φ(h(ω) − β) ≤ φ(h(ω) − l^k) for all ω ∈ Ω.

To prove the correctness of Algorithm 3, we use the concept of finite consistency from Horst and Tuy [20]. We then show that the proposed bounding operation is finitely consistent and thus that the algorithm terminates finitely.

Definition 1 [20]. A bounding operation is called finitely consistent if, at every step, any unfathomed hyper-rectangle can be further refined, and if any decreasing sequence {P^k} of successively refined hyper-rectangles is finite.

Lemma 2 [20]. In a branch-and-bound procedure, suppose that the bounding operation is finitely consistent. Then the procedure terminates after finitely many steps.

See Theorem IV.1 in [20] for the proof of Lemma 2.

Theorem 5. Algorithm 3 terminates with an optimal solution to (P2) after a finite number of steps.

Proof. Any unfathomed hyper-rectangle P^k can be further refined by the subproblem partitioning described in Algorithm 3. This satisfies the first condition of finite consistency. Since P^k ∩ Z^{m_2} is a finite integer vector set, the second condition follows. Hence, the bounding operation in Algorithm 3 is finitely consistent, and thus the algorithm terminates after finitely many steps by Lemma 2.

Next we show that Algorithm 3 terminates with an optimal tender. In the algorithm, we update L only after a hyper-rectangle can be fathomed. That is, for any hyper-rectangle P^k such that μ^k = ν^k and P^k ∩ B^1 ≠ ∅, we set L ← max{L, μ^k} ≤ max { ψ(β) + E_ξ φ(h(ω) − β) | β ∈ B^1 } in Step (2c). This shows that L is a valid global lower bound in the algorithm. Let β* ∈ B^1 be a global optimum to (P2). Suppose at any iteration k, β* ∈ P^k and μ^k < ν^k; then P^k can be further refined in Step 3. It follows that the refining operation on P^k terminates at some iteration k' > k when μ^{k'} = ν^{k'} = ψ(β*) + E_ξ φ(h(ω) − β*). Therefore, we have L = ν^{k'} = max { ψ(β) + E_ξ φ(h(ω) − β) | β ∈ B^1 } in Step (2c).

5.2. The minimal tender approach

In this section we describe a level-set approach to reduce the search space B^1. It is based on the observation that there must exist an optimal solution to (P2) that is an extreme point of the level set associated with its first-stage objective value. In other words, there exists an optimal tender satisfying the condition that each smaller tender also has a strictly smaller objective value in the first stage. We call such tenders minimal tenders, and form a candidate set that only contains all minimal tenders and thus has fewer candidate solutions. This approach only applies, however, to the cases where the technology matrix T in (P1) is also nonnegative. In those cases, since ψ(·) is finite, B^1 ⊆ Z^{m_2}_+.

Definition 2. A vector β ∈ B^1 is a minimal tender if for all i = 1,...,m_2, either β_i = 0 or ψ(β − e_i) < ψ(β), where e_i is the i-th unit vector. Let Δ be the set containing all minimal tenders in B^1.
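Given a stored first-stage value function, the minimal tender set Δ can be extracted by checking Definition 2 directly, as in the sketch below (ours; this is the brute-force check also used in the experiments of Section 6, not a faster construction).

```python
import itertools

def minimal_tenders(psi, b1):
    """Enumerate Δ per Definition 2, given the first-stage value function psi
    stored as a dict over the box B^1 = [0, b1] ∩ Z^{m_2}_+ (the setting used
    in Section 6)."""
    m2 = len(b1)
    delta = []
    for beta in itertools.product(*(range(bi + 1) for bi in b1)):
        is_minimal = True
        for i in range(m2):
            if beta[i] == 0:
                continue
            smaller = beta[:i] + (beta[i] - 1,) + beta[i + 1:]
            if psi[smaller] >= psi[beta]:          # need ψ(β − e_i) < ψ(β)
                is_minimal = False
                break
        if is_minimal:
            delta.append(beta)
    return delta
```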

Theorem 6. There exists an optimal solution to (P2) that is a minimal tender. That is,

max_{β ∈ Δ} { ψ(β) + E_ξ φ(h(ω) − β) } = max_{β ∈ B^1} { ψ(β) + E_ξ φ(h(ω) − β) }.

Proof. Clearly, since Δ ⊆ B^1, max_{β ∈ Δ} { ψ(β) + E_ξ φ(h(ω) − β) } ≤ max_{β ∈ B^1} { ψ(β) + E_ξ φ(h(ω) − β) }. Suppose there does not exist a β ∈ Δ that is an optimal tender in (P2), i.e., for all β ∈ Δ, ψ(β) + E_ξ φ(h(ω) − β) < max_{β ∈ B^1} { ψ(β) + E_ξ φ(h(ω) − β) }. Let β̄ be the lexicographically minimum optimal tender. By assumption, β̄ ∉ Δ; thus there exists an i ∈ {1,...,m_2} such that β̄_i ≥ 1 and ψ(β̄ − e_i) = ψ(β̄). Since φ(·) is nondecreasing, φ(h(ω) − β̄) ≤ φ(h(ω) − (β̄ − e_i)) for all scenarios ω ∈ Ω. It follows that ψ(β̄ − e_i) + E_ξ φ(h(ω) − (β̄ − e_i)) ≥ ψ(β̄) + E_ξ φ(h(ω) − β̄). Therefore, β̄ − e_i is also an optimal tender in (P2), which is a contradiction.

We define ρ = |Δ| / |B^1|. Intuitively, as ρ → 1, the computational benefit of searching Δ could be surpassed by the computational burden of determining Δ by the definition of minimal tenders. The value of ρ is typically not known until ψ(·) is completely determined. In some special cases, Δ and ρ can be identified analytically. One such case is presented as follows.

Proposition 8. Suppose that c is strictly positive, and that T contains I_{m_2}, the m_2-dimensional identity matrix, as a submatrix. Then Δ = B^1, and so ρ = 1.

In general, it is still possible to calculate an upper bound on ρ a priori. In the remainder of this paper, let opt(β) refer to the set that contains the optimal solutions to ψ(·) for β ∈ Z^{m_2}_+.

Lemma 3. For any β ∈ Δ and any x̂ ∈ opt(β), T x̂ = β.

Proof. The result is trivial when β = 0. By definition, T x̂ ≤ β. Suppose T x̄ ≠ β̄ for some β̄ ∈ Δ \ {0} and x̄ ∈ opt(β̄); then there must exist an i ∈ {1,...,m_2} such that β̄_i ≥ 1 and T x̄ ≤ β̄ − e_i. This implies that x̄ ∈ S^1(β̄ − e_i), the feasible solution set for ψ(β̄ − e_i), so that ψ(β̄ − e_i) ≥ c^T x̄ = ψ(β̄), contradicting the assumption that β̄ is a minimal tender.

Definition 3 [5]. An integral monoid is a set M of vectors of Z^m which forms a semigroup under addition in Z^m. In other words: (i) 0 ∈ M; and (ii) if u, v ∈ M, then u + v ∈ M.

We define 𝒯 to be the index set of columns of T that are minimal tenders and M to be the integral monoid generated by {t_j}_{j ∈ 𝒯}.

Lemma 4. For any β ∈ Δ \ {0} and x̂ ∈ opt(β), if x̂_j ≥ 1, then j ∈ 𝒯.

Proof. Suppose for some β̄ ∈ Δ \ {0} and x̄ ∈ opt(β̄) there exists a j ∉ 𝒯 such that x̄_j ≥ 1. It follows that ψ(t_j) = c^T e_j = c_j by integer complementary slackness. Since j ∉ 𝒯, t_j ∉ Δ, so there exists an i ∈ {1,...,m_2} such that t_j − e_i ≥ 0 and ψ(t_j − e_i) = ψ(t_j) = c_j. Let x̃ ∈ opt(t_j − e_i); then x̃ ∈ S^1(t_j − e_i) and c^T x̃ = c_j.

Considering x̄ − e_j + x̃, it follows that T(x̄ − e_j + x̃) ≤ β̄ − t_j + t_j − e_i = β̄ − e_i and c^T(x̄ − e_j + x̃) = c^T x̄ − c_j + c_j = c^T x̄. Therefore, ψ(β̄ − e_i) ≥ c^T x̄ = ψ(β̄). Due to the nonnegativity of T, β̄ = T x̄ ≥ T e_j = t_j, and thus t_j − e_i ≥ 0 implies β̄ − e_i ≥ 0. Hence, β̄ ∉ Δ, which is a contradiction.

Theorem 7. The minimal tender set Δ ⊆ M ∩ B^1, and so ρ ≤ |M ∩ B^1| / |B^1|.

Proof. Clearly the minimal tender 0 ∈ M ∩ B^1. Consider any β ∈ Δ \ {0} and x̂ ∈ opt(β). By Lemma 3, T x̂ = β. By Lemma 4, there exists some j ∈ 𝒯 such that x̂_j ≥ 1. This implies that x̂ ≥ e_j, and β ≥ t_j due to the nonnegativity of T. By integer complementary slackness, ψ(β) = ψ(T(x̂ − e_j)) + ψ(t_j). It follows that ψ(β) − ψ(t_j) = ψ(β − t_j). Consider any i ∈ {1,...,m_2}. If β_i − t_{ij} = 0, then β − t_j − e_i ∉ Z^{m_2}_+. If β_i − t_{ij} ≥ 1, then β − t_j − e_i ∈ Z^{m_2}_+ and thus β_i ≥ 1 due to the nonnegativity of T; hence β ∈ Δ \ {0} implies that ψ(β) > ψ(β − e_i). It follows from superadditivity that ψ(β − t_j) = ψ(β) − ψ(t_j) > ψ(β − e_i) − ψ(t_j) ≥ ψ(β − t_j − e_i). Hence β − t_j is a minimal tender. By induction, β = Σ_{j ∈ 𝒯} k_j t_j, where k_j ∈ Z_+ for j ∈ 𝒯, i.e., β can be written as a nonnegative integer combination of the vectors in {t_j}_{j ∈ 𝒯}. It follows that Δ ⊆ M ∩ B^1, and thus ρ = |Δ| / |B^1| ≤ |M ∩ B^1| / |B^1|.

5.2.1. Using minimal tenders to reduce the primal formulation

As shown later in Lemma 5, one can delete all columns j ∉ 𝒯 in ψ(·) when computing ψ(β) for β ∈ Δ. This results in a smaller superadditive dual reformulation of (P1). In Theorem 8 we show the equivalence between this reformulation and (P2). For any β ∈ Z^{m_2}_+, define

ψ'(β) = max { Σ_{j ∈ 𝒯} c_j x_j | x ∈ S'^1(β) },   (6)

where

S'^1(β) = { x ∈ Z^{n_1}_+ | Σ_{j ∈ 𝒯} a_j x_j ≤ b, Σ_{j ∈ 𝒯} t_j x_j ≤ β }.   (7)

Then the reduced superadditive dual formulation is

max { ψ'(β) + E_ξ φ(h(ω) − β) | β ∈ Δ }.   (8)

Lemma 5. For β ∈ Δ, ψ'(β) = ψ(β).

Proof. The result is trivial for the case β = 0. For any β ∈ Δ \ {0} and x̂ ∈ opt(β), x̂_j = 0 for all j ∉ 𝒯 by Lemma 4. Clearly x̂ ∈ S'^1(β). Therefore, ψ(β) = Σ_{j=1}^{n_1} c_j x̂_j = Σ_{j ∈ 𝒯} c_j x̂_j ≤ ψ'(β). On the other hand, for any optimal solution x' to ψ'(β), one can construct a solution x̃ as follows: for any j ∈ 𝒯, x̃_j = x'_j, and for any j ∉ 𝒯, x̃_j = 0. Clearly x̃ ∈ S^1(β) and c^T x̃ = Σ_{j ∈ 𝒯} c_j x'_j = ψ'(β). Since x̃ ∈ S^1(β), ψ'(β) ≤ ψ(β), and the result follows.

Theorem 8. There exists an optimal solution to (P2) that is an optimal solution to (8). That is,

max_{β ∈ Δ} { ψ'(β) + E_ξ φ(h(ω) − β) } = max_{β ∈ B^1} { ψ(β) + E_ξ φ(h(ω) − β) }.

Proof. Lemma 5 implies that max_{β ∈ Δ} { ψ'(β) + E_ξ φ(h(ω) − β) } = max_{β ∈ Δ} { ψ(β) + E_ξ φ(h(ω) − β) }. Therefore, the equivalence between (8) and (P2) follows from Theorem 6.

Corollary 2. Let β* be an optimal solution to (8). Then x̂ ∈ arg max { c^T x | Ax ≤ b, Tx ≤ β*, x ∈ Z^{n_1}_+ } is an optimal solution to (P1). Furthermore, the optimal objective values of the two problems are equal.

Corollary 3. There exists an optimal solution x* to (P1) where x*_j = 0 for j ∉ 𝒯.

6. Computational experiments

We conducted our computational experiments on randomly generated instances satisfying Assumptions A1–A4 stated earlier in the paper. Moreover, we assumed in all instances that the sets of feasible right-hand sides in both stages, B^1 and B^2, are hyper-rectangular nonnegative integer vector sets rooted at 0, i.e., B^k = B̄^k ∩ Z^{m_2}_+ where B̄^k = ∏_{i=1}^{m_2} [0, b^k_i], k = 1, 2. This assumption allowed us to test both algorithms in the first phase of the solution procedure and both approaches in the second phase. Since all value functions needed to be stored in computer memory, with this additional assumption we could only consider cases where the right-hand sides h(ω) for all ω ∈ Ω are relatively small componentwise. We also ignored the first-stage constraints Ax ≤ b in our computational experiments. Without loss of generality, first-stage constraints can be incorporated into the technology matrix T, where the corresponding rows are 0 in the recourse matrix W. This simplification would, however, limit our ability to solve instances with many first-stage constraints due to the value function storage limitation stated above.

In the first phase of the solution procedure, we tested both algorithms to find the value functions in both stages of (P2). To simplify the implementation of the IP-based algorithm, we selected V^k at each iteration k differently than previously presented. To be specific, for each β^j ∈ L^k, we updated all vectors in the set V^{kj} = ∏_{i=1}^{m_2} [0, β^j_i] ∪ ∏_{i=1}^{m_2} [β^j_i, b_i]. Hence, at iteration k we updated all vectors in V^k = ∪_{j=1}^{|L^k|} V^{kj}. In our implementation of the IP-based algorithm, we also omitted Step 3, since it can be accomplished equivalently in Step 4 for the instances considered.

In the second phase of the solution procedure, we compared all three strategies for finding an optimal tender of (P2). With the branch-and-bound approach, we implemented the proposed global branch-and-bound algorithm in which the initial global lower bound was set to max { E_ξ φ(h(ω)), ψ(b^1) + E_ξ φ(h(ω) − b^1) }, the maximum between the objective function values of (P2) with respect to 0 and b^1.

With the minimal tender approach, we did not explore the computational tradeoff between constructing the superset of the minimal tender set, M ∩ B^1, and evaluating the objective function in (P2) over this superset. We simply checked whether each β ∈ B^1 is a minimal tender by definition and then formed the minimal tender set Δ, after obtaining the first-stage value function. An optimal solution to (P2) was obtained by evaluating the objective function with respect to each β ∈ Δ. All computational experiments were conducted on a Pentium IV PC with a 2 GHz CPU and 2 GB RAM.

6.1. Random instance generation

We begin by presenting our random instance generation scheme. First, each deterministic component in an instance was randomly generated with a uniform distribution over an associated value range. These deterministic components include the first-stage objective c, the second-stage objective d, the technology matrix T, and the recourse matrix W. A Bernoulli distribution was used to model the density of the generated matrices T and W. Second, the scenario set that contains all h(ω) was uniformly generated given its associated value range and the number of scenarios. Each scenario was then assigned a randomly generated probability. All scenarios were checked to ensure that there were no duplicate scenarios. Third, B^k = ∏_{i=1}^{m_2} [0, b^k_i] ∩ Z^{m_2}_+ for k = 1, 2 were constructed such that b^1_i = min_{ω ∈ Ω} h_i(ω) and b^2_i = max_{ω ∈ Ω} h_i(ω) for i = 1,...,m_2. Hence, |B^1| = ∏_{i=1}^{m_2} (b^1_i + 1) and |B^2| = ∏_{i=1}^{m_2} (b^2_i + 1). Note that each h(ω) was required to be bounded by b^1 and b^2 componentwise.

Using the above random instance generation scheme, we considered 10 instance classes. Their characteristics are presented in Tables 1 and 2. In Table 1, δ is the parameter for the density of matrices T and W; m_2, n_1, and n_2 are as denoted in the paper. The min and max listed under c_j, d_j, t_{ij}, and w_{ij} are the lower and upper bounds of the uniform distribution associated with the coefficients of each deterministic component. For instance, in class IC1, c_j was generated with the uniform distribution on the interval [1, 5]. The min and max related to h_i(ω) in Table 1 are the minimum and maximum of the interval associated with the uniform distribution, respectively. In each of the 10 instance classes, we generated all m_2 elements of h(ω), for ω ∈ Ω, independently with the same uniform distribution. We considered a set of integer points as scenarios in the hyper-rectangle H that is bounded inclusively by the min and max in each dimension, i.e., H = ∏_{i=1}^{m_2} [b^1_i, b^2_i]. When the number of chosen integer points in H is relatively large compared to |H ∩ Z^{m_2}_+|, it is likely that b^1_i = b^1_j and b^2_i = b^2_j for 1 ≤ i < j ≤ m_2. For instance, in instance class IC1, b^1 = (5, 5, 5, 5, 5, 5)^T, b^2 = (9, 9, 9, 9, 9, 9)^T, and the number of scenarios is approximately 1/2 of |H ∩ Z^{m_2}_+|; in instance class IC7, b^1 = (5, 5, 5, 5, 5, 5, 5)^T, b^2 = (10, 10, 10, 10, 10, 10, 10)^T, and |Ω| = |H ∩ Z^{m_2}_+|.

Table 2 presents |B^1| and |B^2|, the numbers of right-hand sides we needed to consider in each stage to compute ψ(·) and φ(·). The numbers of variables and constraints in the extensive form are also shown in the table. For each instance class, we generated and solved several instances. These instances are available online in SMPS format [15].
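A sketch of this generation scheme is given below (our code; all parameter and function names are illustrative, and details such as the SMPS export and the rejection of degenerate instances are omitted).

```python
import numpy as np

def generate_instance(m2, n1, n2, n_scenarios, density,
                      c_rng, d_rng, t_rng, w_rng, h_rng, seed=0):
    """Sketch of the random instance generation scheme described above.
    Ranges are (low, high) pairs for the uniform distributions; density is
    the Bernoulli parameter for the nonzeros of T and W."""
    rng = np.random.default_rng(seed)

    def random_matrix(rows, cols, value_rng):
        mask = rng.random((rows, cols)) < density            # Bernoulli density
        vals = rng.integers(value_rng[0], value_rng[1] + 1, size=(rows, cols))
        return mask * vals

    c = rng.integers(c_rng[0], c_rng[1] + 1, size=n1)
    d = rng.integers(d_rng[0], d_rng[1] + 1, size=n2)
    T = random_matrix(m2, n1, t_rng)
    W = random_matrix(m2, n2, w_rng)

    # Scenarios: distinct integer points drawn uniformly in [h_min, h_max]^m2,
    # each assigned a random probability (normalized to sum to one).
    scenarios = set()
    while len(scenarios) < n_scenarios:
        scenarios.add(tuple(rng.integers(h_rng[0], h_rng[1] + 1, size=m2)))
    probs = rng.random(n_scenarios)
    probs /= probs.sum()

    h = np.array(sorted(scenarios))
    b1 = h.min(axis=0)        # B^1 = [0, b1] and B^2 = [0, b2], as in the text
    b2 = h.max(axis=0)
    return dict(c=c, d=d, T=T, W=W, h=h, probs=probs, b1=b1, b2=b2)
```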

We omitted instances where the optimal solution was either the smallest or largest vector in B^1, as such instances were solved very quickly and do not provide any insight. As a mnemonic, the instances were named ICm-n, where m is the instance class index and n is the instance index.

Table 1. Characteristics of test problems (I). Columns: instance class (IC1–IC10); δ; m_2; n_1; n_2; and the min/max of the uniform ranges for c_j, d_j, t_{ij}, w_{ij}, and h_i(ω). (Entries omitted.)

Table 2. Characteristics of test problems (II). Columns: instance class (IC1–IC10); |B^1|; |B^2|; and the numbers of scenarios, constraints, and variables in the extensive form. (Entries omitted.)

6.2. Finding the optimal value function

The computational results for the first phase of the solution procedure are given in Tables 3 and 4, which report several different running times for the IP- and DP-based algorithms. The running times for the IP-based algorithm include the times spent solving deterministic integer programs, initializing the lower bound function, applying the integer complementary slackness property, and applying the superadditivity property. These times are denoted by t_IP, t_LBI, t_ICS, and t_SUP, respectively. The running times for the DP-based algorithm contain the times spent in Steps 0 and 1, denoted t_1 and t_2. We also report the number of deterministic integer programs solved in the IP-based algorithm as well as the total running time for each algorithm, denoted t_total. All reported running times are in seconds except those exceeding an imposed 5-hour time limit.

Both tables show that the DP-based algorithm is computationally superior to the IP-based algorithm on all instances we considered, which affirms our hypothesis that the DP-based algorithm fits the generated instances better. It has been observed in many studies, however, that DP-type algorithms are much less efficient for m_2 > 10.

Table 3. Computational results in phase I (first stage). Columns: instance; number of IPs solved, t_IP, t_LBI, t_ICS, t_SUP, and t_total for Algorithm 1; t_1, t_2, and t_total for Algorithm 2. (Entries omitted.)

Table 4. Computational results in phase I (second stage). Same columns as Table 3. (Entries omitted; for instances IC7-2, IC7-3, IC8-1, IC8-2, IC9-1, IC9-2, IC9-3, and IC10-1, Algorithm 1 exceeded the imposed 5-hour time limit.)

We expect that the superiority of the DP-based algorithm over the IP-based algorithm would end as the number of second-stage constraints increases. Not surprisingly, as the number of decision variables or the number of feasible right-hand sides increases, the total running time of either algorithm increases, as indicated by the comparison between the instance classes. As the computational results show, our approach is more sensitive to increasing the number of constraints than the number of decision variables. In addition, our approach is also sensitive to increases in the magnitude of feasible right-hand side values. Another observation is that the total running time decreases and the number of deterministic integer programs solved increases as the matrix density δ increases.

6.3. Finding the optimal tender

Table 5. Computational results in phase II: CPU times (hh:mm:ss) of exhaustive search (ES) and branch and bound (BB). (The MT time, the time spent computing Δ, and ρ are omitted.)

Instance   ES         BB
IC1-1      0:03:17    0:00:20
IC2-1      0:03:18    0:16:33
IC2-2      0:03:17    0:06:13
IC2-3      0:03:20    0:07:23
IC3-1      0:03:17    0:20:24
IC3-2      0:03:20    0:17:01
IC3-3      0:03:13    0:35:28
IC4-1      0:03:24    0:00:25
IC4-2      0:03:20    0:00:39
IC5-1      0:03:20    0:03:41
IC5-2      0:03:14    0:05:21
IC5-3      0:03:30    0:16:48
IC6-1      0:03:17    0:04:05
IC6-2      0:03:24    0:11:04
IC6-3      0:03:10    0:02:15
IC7-1      23:20:10   0:24:59
IC7-2      22:46:35   43:19:41
IC7-3      22:53:40   49:09:49
IC8-1      23:26:25   1:35:22
IC8-2      24:03:43   5:09:48
IC9-1      24:07:15   4:29:29
IC9-2      23:33:09   17:40:50
IC9-3      23:01:17   56:47:39
IC10-1     23:26:49   27:43:39

Table 5 reports the computational results in the second phase of the solution procedure. For the branch-and-bound algorithm (BB), the time for finding an optimal tender is recorded. For the minimal tender approach (MT), the time spent computing the minimal tender set is indicated in addition to the total time for finding an optimal tender. The ratio between the numbers of minimal tenders and feasible right-hand sides in the first stage, denoted by ρ, is also presented in the table. We also report the running time of the exhaustive search method (ES). All CPU times except the time spent computing Δ are in the form hh:mm:ss. There is no decisive conclusion we could draw in terms of the comparison between BB and MT from the computational results in Table 5. However, MT tends to outper-


More information

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Yongjia Song James R. Luedtke August 9, 2012 Abstract We study solution approaches for the design of reliably

More information

Fenchel Decomposition for Stochastic Mixed-Integer Programming

Fenchel Decomposition for Stochastic Mixed-Integer Programming Fenchel Decomposition for Stochastic Mixed-Integer Programming Lewis Ntaimo Department of Industrial and Systems Engineering, Texas A&M University, 3131 TAMU, College Station, TX 77843, USA, ntaimo@tamu.edu

More information

An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse

An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse Yongjia Song, James Luedtke Virginia Commonwealth University, Richmond, VA, ysong3@vcu.edu University

More information

Inexact cutting planes for two-stage mixed-integer stochastic programs

Inexact cutting planes for two-stage mixed-integer stochastic programs Inexact cutting planes for two-stage mixed-integer stochastic programs Ward Romeijnders, Niels van der Laan Department of Operations, University of Groningen, P.O. Box 800, 9700 AV, Groningen, The Netherlands,

More information

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of

More information

where X is the feasible region, i.e., the set of the feasible solutions.

where X is the feasible region, i.e., the set of the feasible solutions. 3.5 Branch and Bound Consider a generic Discrete Optimization problem (P) z = max{c(x) : x X }, where X is the feasible region, i.e., the set of the feasible solutions. Branch and Bound is a general semi-enumerative

More information

Monoidal Cut Strengthening and Generalized Mixed-Integer Rounding for Disjunctions and Complementarity Constraints

Monoidal Cut Strengthening and Generalized Mixed-Integer Rounding for Disjunctions and Complementarity Constraints Monoidal Cut Strengthening and Generalized Mixed-Integer Rounding for Disjunctions and Complementarity Constraints Tobias Fischer and Marc E. Pfetsch Department of Mathematics, TU Darmstadt, Germany {tfischer,pfetsch}@opt.tu-darmstadt.de

More information

A New Subadditive Approach to Integer Programming: Theory and Algorithms

A New Subadditive Approach to Integer Programming: Theory and Algorithms A New Subadditive Approach to Integer Programming: Theory and Algorithms Diego Klabjan Department of Mechanical and Industrial Engineering University of Illinois at Urbana-Champaign Urbana, IL email: klabjan@uiuc.edu

More information

A Branch-Reduce-Cut Algorithm for the Global Optimization of Probabilistically Constrained Linear Programs

A Branch-Reduce-Cut Algorithm for the Global Optimization of Probabilistically Constrained Linear Programs A Branch-Reduce-Cut Algorithm for the Global Optimization of Probabilistically Constrained Linear Programs Myun-Seok Cheon, Shabbir Ahmed and Faiz Al-Khayyal School of Industrial & Systems Engineering

More information

Convex approximations for a class of mixed-integer recourse models

Convex approximations for a class of mixed-integer recourse models Ann Oper Res (2010) 177: 139 150 DOI 10.1007/s10479-009-0591-7 Convex approximations for a class of mixed-integer recourse models Maarten H. Van der Vlerk Published online: 18 July 2009 The Author(s) 2009.

More information

Integer programming: an introduction. Alessandro Astolfi

Integer programming: an introduction. Alessandro Astolfi Integer programming: an introduction Alessandro Astolfi Outline Introduction Examples Methods for solving ILP Optimization on graphs LP problems with integer solutions Summary Introduction Integer programming

More information

Mixed Integer Linear Programming Formulations for Probabilistic Constraints

Mixed Integer Linear Programming Formulations for Probabilistic Constraints Mixed Integer Linear Programming Formulations for Probabilistic Constraints J. P. Vielma a,, S. Ahmed b, G. Nemhauser b a Department of Industrial Engineering, University of Pittsburgh 1048 Benedum Hall,

More information

On the Polyhedral Structure of a Multi Item Production Planning Model with Setup Times

On the Polyhedral Structure of a Multi Item Production Planning Model with Setup Times CORE DISCUSSION PAPER 2000/52 On the Polyhedral Structure of a Multi Item Production Planning Model with Setup Times Andrew J. Miller 1, George L. Nemhauser 2, and Martin W.P. Savelsbergh 2 November 2000

More information

Scenario Grouping and Decomposition Algorithms for Chance-constrained Programs

Scenario Grouping and Decomposition Algorithms for Chance-constrained Programs Scenario Grouping and Decomposition Algorithms for Chance-constrained Programs Siqian Shen Dept. of Industrial and Operations Engineering University of Michigan Joint work with Yan Deng (UMich, Google)

More information

Integer Programming Duality in Multiple Objective Programming

Integer Programming Duality in Multiple Objective Programming Integer Programming Duality in Multiple Objective Programming Kathrin Klamroth 1 Jørgen Tind 1 Sibylle Zust 2 03.07.2003 Abstract The weighted sums approach for linear and convex multiple criteria optimization

More information

1 Column Generation and the Cutting Stock Problem

1 Column Generation and the Cutting Stock Problem 1 Column Generation and the Cutting Stock Problem In the linear programming approach to the traveling salesman problem we used the cutting plane approach. The cutting plane approach is appropriate when

More information

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Nilay Noyan Andrzej Ruszczyński March 21, 2006 Abstract Stochastic dominance relations

More information

Fundamental Domains for Integer Programs with Symmetries

Fundamental Domains for Integer Programs with Symmetries Fundamental Domains for Integer Programs with Symmetries Eric J. Friedman Cornell University, Ithaca, NY 14850, ejf27@cornell.edu, WWW home page: http://www.people.cornell.edu/pages/ejf27/ Abstract. We

More information

Sequential pairing of mixed integer inequalities

Sequential pairing of mixed integer inequalities Sequential pairing of mixed integer inequalities Yongpei Guan, Shabbir Ahmed, George L. Nemhauser School of Industrial & Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive, Atlanta,

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

Inexact cutting planes for two-stage mixed-integer stochastic programs Romeijnders, Ward; van der Laan, Niels

Inexact cutting planes for two-stage mixed-integer stochastic programs Romeijnders, Ward; van der Laan, Niels University of Groningen Inexact cutting planes for two-stage mixed-integer stochastic programs Romeijnders, Ward; van der Laan, Niels IMPORTANT NOTE: You are advised to consult the publisher's version

More information

Appendix A Taylor Approximations and Definite Matrices

Appendix A Taylor Approximations and Definite Matrices Appendix A Taylor Approximations and Definite Matrices Taylor approximations provide an easy way to approximate a function as a polynomial, using the derivatives of the function. We know, from elementary

More information

Computations with Disjunctive Cuts for Two-Stage Stochastic Mixed 0-1 Integer Programs

Computations with Disjunctive Cuts for Two-Stage Stochastic Mixed 0-1 Integer Programs Computations with Disjunctive Cuts for Two-Stage Stochastic Mixed 0-1 Integer Programs Lewis Ntaimo and Matthew W. Tanner Department of Industrial and Systems Engineering, Texas A&M University, 3131 TAMU,

More information

Introduction to Mathematical Programming IE406. Lecture 21. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 21. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 21 Dr. Ted Ralphs IE406 Lecture 21 1 Reading for This Lecture Bertsimas Sections 10.2, 10.3, 11.1, 11.2 IE406 Lecture 21 2 Branch and Bound Branch

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

n-step mingling inequalities: new facets for the mixed-integer knapsack set

n-step mingling inequalities: new facets for the mixed-integer knapsack set Math. Program., Ser. A (2012) 132:79 98 DOI 10.1007/s10107-010-0382-6 FULL LENGTH PAPER n-step mingling inequalities: new facets for the mixed-integer knapsack set Alper Atamtürk Kiavash Kianfar Received:

More information

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Daniel Zhuoyu Long Department of SEEM, The Chinese University of Hong Kong, zylong@se.cuhk.edu.hk Jin Qi NUS Business School, National

More information

Integer Linear Programming

Integer Linear Programming Integer Linear Programming Solution : cutting planes and Branch and Bound Hugues Talbot Laboratoire CVN April 13, 2018 IP Resolution Gomory s cutting planes Solution branch-and-bound General method Resolution

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. 35, No., May 010, pp. 84 305 issn 0364-765X eissn 156-5471 10 350 084 informs doi 10.187/moor.1090.0440 010 INFORMS On the Power of Robust Solutions in Two-Stage

More information

Multivalued Decision Diagrams. Postoptimality Analysis Using. J. N. Hooker. Tarik Hadzic. Cork Constraint Computation Centre

Multivalued Decision Diagrams. Postoptimality Analysis Using. J. N. Hooker. Tarik Hadzic. Cork Constraint Computation Centre Postoptimality Analysis Using Multivalued Decision Diagrams Tarik Hadzic Cork Constraint Computation Centre J. N. Hooker Carnegie Mellon University London School of Economics June 2008 Postoptimality Analysis

More information

In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight.

In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the multi-dimensional knapsack problem, additional

More information

Solving Bilevel Mixed Integer Program by Reformulations and Decomposition

Solving Bilevel Mixed Integer Program by Reformulations and Decomposition Solving Bilevel Mixed Integer Program by Reformulations and Decomposition June, 2014 Abstract In this paper, we study bilevel mixed integer programming (MIP) problem and present a novel computing scheme

More information

The L-Shaped Method. Operations Research. Anthony Papavasiliou 1 / 38

The L-Shaped Method. Operations Research. Anthony Papavasiliou 1 / 38 1 / 38 The L-Shaped Method Operations Research Anthony Papavasiliou Contents 2 / 38 1 The L-Shaped Method 2 Example: Capacity Expansion Planning 3 Examples with Optimality Cuts [ 5.1a of BL] 4 Examples

More information

Strengthened Benders Cuts for Stochastic Integer Programs with Continuous Recourse

Strengthened Benders Cuts for Stochastic Integer Programs with Continuous Recourse Strengthened Benders Cuts for Stochastic Integer Programs with Continuous Recourse Merve Bodur 1, Sanjeeb Dash 2, Otay Günlü 2, and James Luedte 3 1 Department of Mechanical and Industrial Engineering,

More information

56:270 Final Exam - May

56:270  Final Exam - May @ @ 56:270 Linear Programming @ @ Final Exam - May 4, 1989 @ @ @ @ @ @ @ @ @ @ @ @ @ @ Select any 7 of the 9 problems below: (1.) ANALYSIS OF MPSX OUTPUT: Please refer to the attached materials on the

More information

3.10 Column generation method

3.10 Column generation method 3.10 Column generation method Many relevant decision-making problems can be formulated as ILP problems with a very large (exponential) number of variables. Examples: cutting stock, crew scheduling, vehicle

More information

Column Generation. MTech Seminar Report. Soumitra Pal Roll No: under the guidance of

Column Generation. MTech Seminar Report. Soumitra Pal Roll No: under the guidance of Column Generation MTech Seminar Report by Soumitra Pal Roll No: 05305015 under the guidance of Prof. A. G. Ranade Computer Science and Engineering IIT-Bombay a Department of Computer Science and Engineering

More information

Can Li a, Ignacio E. Grossmann a,

Can Li a, Ignacio E. Grossmann a, A generalized Benders decomposition-based branch and cut algorithm for two-stage stochastic programs with nonconvex constraints and mixed-binary first and second stage variables Can Li a, Ignacio E. Grossmann

More information

A BRANCH&BOUND ALGORITHM FOR SOLVING ONE-DIMENSIONAL CUTTING STOCK PROBLEMS EXACTLY

A BRANCH&BOUND ALGORITHM FOR SOLVING ONE-DIMENSIONAL CUTTING STOCK PROBLEMS EXACTLY APPLICATIONES MATHEMATICAE 23,2 (1995), pp. 151 167 G. SCHEITHAUER and J. TERNO (Dresden) A BRANCH&BOUND ALGORITHM FOR SOLVING ONE-DIMENSIONAL CUTTING STOCK PROBLEMS EXACTLY Abstract. Many numerical computations

More information

Benders Decomposition Methods for Structured Optimization, including Stochastic Optimization

Benders Decomposition Methods for Structured Optimization, including Stochastic Optimization Benders Decomposition Methods for Structured Optimization, including Stochastic Optimization Robert M. Freund April 29, 2004 c 2004 Massachusetts Institute of echnology. 1 1 Block Ladder Structure We consider

More information

F 1 F 2 Daily Requirement Cost N N N

F 1 F 2 Daily Requirement Cost N N N Chapter 5 DUALITY 5. The Dual Problems Every linear programming problem has associated with it another linear programming problem and that the two problems have such a close relationship that whenever

More information

Operations Research Lecture 6: Integer Programming

Operations Research Lecture 6: Integer Programming Operations Research Lecture 6: Integer Programming Notes taken by Kaiquan Xu@Business School, Nanjing University May 12th 2016 1 Integer programming (IP) formulations The integer programming (IP) is the

More information

Lifting 2-integer knapsack inequalities

Lifting 2-integer knapsack inequalities Lifting 2-integer knapsack inequalities A. Agra University of Aveiro and C.I.O. aagra@mat.ua.pt M.F. Constantino D.E.I.O., University of Lisbon and C.I.O. miguel.constantino@fc.ul.pt October 1, 2003 Abstract

More information

Can Li a, Ignacio E. Grossmann a,

Can Li a, Ignacio E. Grossmann a, A generalized Benders decomposition-based branch and cut algorithm for two-stage stochastic programs with nonconvex constraints and mixed-binary first and second stage variables Can Li a, Ignacio E. Grossmann

More information

TMA947/MAN280 APPLIED OPTIMIZATION

TMA947/MAN280 APPLIED OPTIMIZATION Chalmers/GU Mathematics EXAM TMA947/MAN280 APPLIED OPTIMIZATION Date: 06 08 31 Time: House V, morning Aids: Text memory-less calculator Number of questions: 7; passed on one question requires 2 points

More information

Decomposition with Branch-and-Cut Approaches for Two Stage Stochastic Mixed-Integer Programming

Decomposition with Branch-and-Cut Approaches for Two Stage Stochastic Mixed-Integer Programming Decomposition with Branch-and-Cut Approaches for Two Stage Stochastic Mixed-Integer Programming by Suvrajeet Sen SIE Department, University of Arizona, Tucson, AZ 85721 and Hanif D. Sherali ISE Department,

More information

Scenario grouping and decomposition algorithms for chance-constrained programs

Scenario grouping and decomposition algorithms for chance-constrained programs Scenario grouping and decomposition algorithms for chance-constrained programs Yan Deng Shabbir Ahmed Jon Lee Siqian Shen Abstract A lower bound for a finite-scenario chance-constrained problem is given

More information

to work with) can be solved by solving their LP relaxations with the Simplex method I Cutting plane algorithms, e.g., Gomory s fractional cutting

to work with) can be solved by solving their LP relaxations with the Simplex method I Cutting plane algorithms, e.g., Gomory s fractional cutting Summary so far z =max{c T x : Ax apple b, x 2 Z n +} I Modeling with IP (and MIP, and BIP) problems I Formulation for a discrete set that is a feasible region of an IP I Alternative formulations for the

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 8 Discrete optimization: theory and algorithms

MVE165/MMG631 Linear and integer optimization with applications Lecture 8 Discrete optimization: theory and algorithms MVE165/MMG631 Linear and integer optimization with applications Lecture 8 Discrete optimization: theory and algorithms Ann-Brith Strömberg 2017 04 07 Lecture 8 Linear and integer optimization with applications

More information

3.10 Column generation method

3.10 Column generation method 3.10 Column generation method Many relevant decision-making (discrete optimization) problems can be formulated as ILP problems with a very large (exponential) number of variables. Examples: cutting stock,

More information

Stochastic Mixed-Integer Programming

Stochastic Mixed-Integer Programming Stochastic Mixed-Integer Programming Tutorial SPXII, Halifax, August 15, 2010 Maarten H. van der Vlerk University of Groningen, Netherlands www.rug.nl/feb/mhvandervlerk 1 Outline Stochastic Mixed-Integer

More information

Week Cuts, Branch & Bound, and Lagrangean Relaxation

Week Cuts, Branch & Bound, and Lagrangean Relaxation Week 11 1 Integer Linear Programming This week we will discuss solution methods for solving integer linear programming problems. I will skip the part on complexity theory, Section 11.8, although this is

More information

On the Approximate Linear Programming Approach for Network Revenue Management Problems

On the Approximate Linear Programming Approach for Network Revenue Management Problems On the Approximate Linear Programming Approach for Network Revenue Management Problems Chaoxu Tong School of Operations Research and Information Engineering, Cornell University, Ithaca, New York 14853,

More information

Asteroide Santana, Santanu S. Dey. December 4, School of Industrial and Systems Engineering, Georgia Institute of Technology

Asteroide Santana, Santanu S. Dey. December 4, School of Industrial and Systems Engineering, Georgia Institute of Technology for Some for Asteroide Santana, Santanu S. Dey School of Industrial Systems Engineering, Georgia Institute of Technology December 4, 2016 1 / 38 1 1.1 Conic integer programs for Conic integer programs

More information

Stochastic Dual Dynamic Integer Programming

Stochastic Dual Dynamic Integer Programming Stochastic Dual Dynamic Integer Programming Jikai Zou Shabbir Ahmed Xu Andy Sun December 26, 2017 Abstract Multistage stochastic integer programming (MSIP) combines the difficulty of uncertainty, dynamics,

More information

Integer Programming ISE 418. Lecture 12. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 12. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 12 Dr. Ted Ralphs ISE 418 Lecture 12 1 Reading for This Lecture Nemhauser and Wolsey Sections II.2.1 Wolsey Chapter 9 ISE 418 Lecture 12 2 Generating Stronger Valid

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

Almost Robust Optimization with Binary Variables

Almost Robust Optimization with Binary Variables Almost Robust Optimization with Binary Variables Opher Baron, Oded Berman, Mohammad M. Fazel-Zarandi Rotman School of Management, University of Toronto, Toronto, Ontario M5S 3E6, Canada, Opher.Baron@Rotman.utoronto.ca,

More information

Convexification of Mixed-Integer Quadratically Constrained Quadratic Programs

Convexification of Mixed-Integer Quadratically Constrained Quadratic Programs Convexification of Mixed-Integer Quadratically Constrained Quadratic Programs Laura Galli 1 Adam N. Letchford 2 Lancaster, April 2011 1 DEIS, University of Bologna, Italy 2 Department of Management Science,

More information

Duality, Warm Starting, and Sensitivity Analysis for MILP

Duality, Warm Starting, and Sensitivity Analysis for MILP Duality, Warm Starting, and Sensitivity Analysis for MILP Ted Ralphs and Menal Guzelsoy Industrial and Systems Engineering Lehigh University SAS Institute, Cary, NC, Friday, August 19, 2005 SAS Institute

More information

Part 4. Decomposition Algorithms

Part 4. Decomposition Algorithms In the name of God Part 4. 4.4. Column Generation for the Constrained Shortest Path Problem Spring 2010 Instructor: Dr. Masoud Yaghini Constrained Shortest Path Problem Constrained Shortest Path Problem

More information

Duality of LPs and Applications

Duality of LPs and Applications Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will

More information

Optimization Methods in Management Science

Optimization Methods in Management Science Optimization Methods in Management Science MIT 15.05 Recitation 8 TAs: Giacomo Nannicini, Ebrahim Nasrabadi At the end of this recitation, students should be able to: 1. Derive Gomory cut from fractional

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

Integer Programming Part II

Integer Programming Part II Be the first in your neighborhood to master this delightful little algorithm. Integer Programming Part II The Branch and Bound Algorithm learn about fathoming, bounding, branching, pruning, and much more!

More information

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 8 September 2003 European Union RTN Summer School on Multi-Agent

More information

IP Duality. Menal Guzelsoy. Seminar Series, /21-07/28-08/04-08/11. Department of Industrial and Systems Engineering Lehigh University

IP Duality. Menal Guzelsoy. Seminar Series, /21-07/28-08/04-08/11. Department of Industrial and Systems Engineering Lehigh University IP Duality Department of Industrial and Systems Engineering Lehigh University COR@L Seminar Series, 2005 07/21-07/28-08/04-08/11 Outline Duality Theorem 1 Duality Theorem Introduction Optimality Conditions

More information