Sampling-Based Progressive Hedging Algorithms in Two-Stage Stochastic Programming


Sampling-Based Progressive Hedging Algorithms in Two-Stage Stochastic Programming Nezir Aydin*, Alper Murat, Boris S. Mordukhovich * Department of Industrial Engineering, Yıldız Technical University, Besiktas/Istanbul, 34349, Turkey Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI 48202, USA Department of Mathematics, Wayne State University, Detroit, MI 48202, USA Abstract Most real-world optimization problems are subject to uncertainties in parameters. In many situations where the uncertainties can be estimated to a certain degree, various stochastic programming (SP) methodologies are used to identify robust plans. Despite substantial advances in SP, it is still a challenge to solve practical SP problems, partially due to the exponentially increasing number of scenarios representing the underlying uncertainties. Two commonly used SP approaches to tackle this complexity are approximation methods, e.g., Sample Average Approximation (SAA), and decomposition methods, e.g., the Progressive Hedging Algorithm (PHA). SAA, while effectively used in many applications, can lead to poor solution quality if the selected sample sizes are not sufficiently large. With larger sample sizes, however, SAA becomes computationally impractical. In contrast, PHA, as an exact method for convex problems and a very effective method for finding very good solutions to nonconvex problems, suffers from the need to iteratively solve many scenario subproblems, which is computationally expensive. In this paper, we develop novel SP algorithms integrating the SAA and PHA methods. The proposed methods are innovative in that they blend the complementary aspects of PHA and SAA in terms of exactness and computational efficiency, respectively. Further, the developed methods are practical in that they allow the analyst to calibrate the tradeoff between the exactness and the speed of attaining a solution.
We demonstrate the effectiveness of the developed integrated approaches, the Sampling-Based Progressive Hedging Algorithm (SBPHA) and Discarding SBPHA (d-SBPHA), over the pure strategies (i.e., SAA). The validation of the methods is demonstrated through the two-stage stochastic Capacitated Reliable Facility Location Problem (CRFLP). Key words: stochastic programming; facility location; hybrid algorithms; progressive hedging; sample average approximation. 1. Introduction Most practical problems are subject to uncertainties in problem parameters. There are two major mathematical approaches to modeling uncertainties. The first one applies to deterministic problems, where only admissible regions of uncertain parameter changes are available to decision makers. A natural approach to handling such situations is seeking the guaranteed result in the worst case scenario by using methods of robust optimization and game/minimax theory; see, e.g., Ben-Tal et al. (2009), Bertsimas et al. (2011), and the references therein. The framework of robust and minimax optimization allows us to develop efficient numerical techniques involving generalized differentiation as, e.g., in appropriate versions of nonsmooth Newton methods; see Jeyakumar et al. (2013). However, better results can be achieved as a rule if some stochastic/statistical information is available to measure uncertainties. This paper is devoted to developing the latter approach. Stochastic Programming (SP) methodologies are often resorted to for solving problems with uncertainties in parameters, either exactly or with a statistical bound on the optimality gap. Beginning with Dantzig's (1955) introduction of a recourse model, where the solution could be * Corresponding author. E-mail address: nzraydin@yildiz.edu.tr; Tel: +90 (212) ; Fax: +90 (212) E-mail addresses: amurat@wayne.edu (A. Murat), boris@math.wayne.edu (B.S. Mordukhovich)

adjusted based on the consequences of random events, the SP area has grown into an important tool for optimization under uncertainty. There is increasing attention in the operations research community to tackling challenging problems with various problem parameter uncertainties; see, e.g., Lulli and Sen (2006), Topaloglou et al. (2008), Bomze et al. (2010), Peng et al. (2011), Toso and Alem (2014), Shishebori and Babadi (2015). A common precursor assumption in applying most of the SP methodologies is that the probability distributions of the random events are either known, or can be estimated with acceptable accuracy. For the majority of SP problems, the goal is to identify a feasible solution that minimizes or maximizes the expected value of a function over all possible realizations of the random events (Solak, 2007). The most extensively studied SP models are two-stage models (Dantzig, 1955). In two-stage SP problems, the decision variables are partitioned into two main sets, where the first stage decision variables are decided before the uncertain parameters become available. Once the random events are realized, design or operative strategy improvements (i.e., second stage recourse decisions) can be made at a certain cost. The objective is to optimize the sum of the first stage costs and the expected value of the random second stage, or recourse, costs (Ahmed and Shapiro, 2002). An extensive number of solution methods have been proposed for solving two-stage SP problems. These solution methods can be classified as either exact or approximation methods. Both analytical solution methods and methods that algorithmically solve SP problems to yield the optimal solution are considered exact solution methods. Because of the complexity of SP, several approximation algorithms in the form of sampling-based methods (Ahmed and Shapiro, 2002) or heuristic methods (Higle and Sen, 1991) have been proposed to make decisions under uncertainty.
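The two-stage structure described above can be made concrete with a small numeric sketch (all numbers and names below are illustrative assumptions, not taken from the paper): a first-stage capacity x is committed before demand is known, and a second-stage recourse purchase covers any shortage once the demand scenario is revealed.

```python
# Illustrative two-stage instance (assumed data): choose capacity x now at
# unit cost c; after demand xi is revealed, buy any shortage at unit cost q.
c, q = 2.0, 5.0
scenarios = [(80, 0.3), (100, 0.5), (120, 0.2)]  # (demand, probability)

def phi(x, xi):
    # Second-stage (recourse) cost: min { q*y : y >= xi - x, y >= 0 }
    return q * max(xi - x, 0.0)

def g(x):
    # First-stage cost plus expected recourse cost over all scenarios
    return c * x + sum(p * phi(x, xi) for xi, p in scenarios)

# The expected recourse cost is piecewise linear in x, so an optimal x lies
# at a scenario demand value; enumerating these breakpoints suffices here.
best_x = min((xi for xi, _ in scenarios), key=g)
print(best_x, g(best_x))
```

In a realistic instance the inner minimization would be an LP or MIP solved by an optimization solver; the closed form above is only possible because the toy recourse is a simple shortage cost.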
When the random variable set is finite with a relatively small number of joint realizations (i.e., scenarios), an SP can be formulated as a deterministic equivalent program and solved exactly via an optimization algorithm (Rockafellar and Wets, 1991). The special structure of this deterministic equivalent program also calls for the application of large-scale optimization techniques, e.g., decomposition methods. Such decomposition methods can be categorized into two types. The first type decomposes the problem by stages, e.g., the L-shaped method (Slyke and Wets, 1969; Birge and Louveaux, 1997), while the second type decomposes the problem by scenarios. The latter category's methods are primarily based on Lagrangian relaxation of the non-anticipativity constraints, where each scenario in the scenario tree corresponds to a single deterministic mathematical program, e.g., the Progressive Hedging Algorithm (Rockafellar and Wets, 1991; Lokketangen and Woodruff, 1996). A subproblem obtained by stage or scenario decomposition may include multiple stages in by-stage decomposition methods and multiple scenarios in by-scenario decomposition methods, respectively (Chiralaksanakul, 2003). As a popular approximation method, Monte Carlo sampling-based algorithms are commonly used in solving large scale SP problems (Morton and Popova, 2001). The Monte Carlo sampling method can be deployed in either the interior or the exterior of the optimization algorithm. In interior sampling-based methods, the computationally expensive or difficult exact computations are replaced with Monte Carlo estimates during the algorithm execution (Verweij et al., 2003). In exterior sampling-based methods, the stochastic process is approximated by a finite scenario tree obtained through Monte Carlo sampling. The solution to the problem with the constructed scenario tree is an approximation of the optimal objective function value.
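The exterior sampling idea can be sketched in a few lines: replace the exact expectation over the scenario distribution with a sample average over Monte Carlo draws. The recourse function and distribution below are assumed purely for illustration.

```python
import random

# Assumed toy recourse cost: pay q per unit of demand xi not covered by x.
def phi(x, xi, q=5.0):
    return q * max(xi - x, 0.0)

# Assumed discrete demand distribution: scenario value -> probability.
dist = {80: 0.3, 100: 0.5, 120: 0.2}

def exact_expectation(x):
    # Exact E[phi(x, xi)] by enumerating the (small) scenario set
    return sum(p * phi(x, xi) for xi, p in dist.items())

def sampled_expectation(x, n, rng):
    # Exterior sampling: Monte Carlo sample average approximating E[phi(x, xi)]
    draws = rng.choices(list(dist), weights=list(dist.values()), k=n)
    return sum(phi(x, xi) for xi in draws) / n

rng = random.Random(42)
exact = exact_expectation(90)              # 0.5*50 + 0.2*150 = 55.0
estimate = sampled_expectation(90, 20000, rng)
print(exact, round(estimate, 2))
```

With a large sample the estimate clusters tightly around the exact value; the error shrinks at the usual Monte Carlo rate of O(1/sqrt(n)).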
The exterior sampling-based method is also referred to as the sample average approximation (SAA) method in the SP literature (see, e.g., Shapiro, 2002). In this study, we propose a novel algorithm, called the Sampling-Based Progressive Hedging Algorithm (SBPHA), and an improved version of SBPHA (discarding-SBPHA) to solve a class of two-stage SP problems. A standard formulation of the two-stage stochastic program is (Kall and Wallace, 1994; Birge and Louveaux, 1997; Ahmed and Shapiro, 2002): Min_{x∈X} { g(x) := c^T x + E[φ(x, ξ)] },   (1)

where φ(x, ξ) := inf_{y∈Y} { q^T y : Wy ≥ h − Tx }   (2) is the optimal value of the second stage problem, and where ξ := (q, T, W, h) denotes the vector of parameters of the second stage problem. It is assumed that some or all of the components of ξ are random. The expectation in (1) is then taken with respect to the known probability distribution of ξ. Problem (1) solves for the first stage variables x ∈ R^{n_1}, which must be selected prior to any realization of ξ, while problem (2) solves for the second stage variables y ∈ R^{n_2}, with a given first stage decision as well as the realization of ξ. In SBPHA we hybridize the Progressive Hedging Algorithm (PHA) and the exterior sampling-based approximation algorithm, Sample Average Approximation (SAA), to efficiently solve two-stage SP problems. While the standard SAA procedure is effective with sufficiently large samples, the required sample size can be quite large for the desired confidence level. Furthermore, for combinatorial problems, where the computational complexity increases faster than linearly in the sample size, SAA is often executed with a smaller sample size by generating and solving several SAA problems with i.i.d. samples. These additional SAA replications with the same sample size are likely to provide a better solution in comparison with the best solution found so far. However, by selecting the best performing sample solution, the SAA procedure effectively discards the remaining sample solutions, which contain valuable information about the problem's uncertainty. The main idea of the proposed hybrid method SBPHA is to reuse all the information embedded in the sample solutions by iteratively re-solving the samples with an added augmented Lagrangian penalty term (as in PHA) to find a common solution that all the samples agree on. The rest of the paper is organized as follows. In Section 2 we briefly summarize the SAA and PHA methods and then describe the Sampling-Based Progressive Hedging Algorithm (SBPHA) and its d-SBPHA modification in detail.
In Section 3 we present the scenario-based capacitated reliable facility location problem and its mathematical formulation, and then report computational experiments comparing the solution quality and CPU time efficiency of SAA with those of the proposed hybrid algorithms (SBPHA and d-SBPHA). We conclude the paper with discussions and future research directions in Section 4. 2. Solution Methodology In this section we first summarize the PHA and SAA methods. Next we describe the proposed algorithms (SBPHA and d-SBPHA) in detail. 2.1. Progressive Hedging Algorithm Most SP problems have key discrete decision variables in one or more of the stages, e.g., binary decisions to open a plant (Watson and Woodruff, 2011). Rockafellar and Wets (1991) proposed a scenario decomposition based method (PHA) that can be used to solve challenging very large linear or mixed-integer SP problems, especially in cases where effective techniques for solving individual scenario subproblems exist. While the PHA possesses theoretical convergence guarantees for SP problems where all decision variables are continuous, it is used as a heuristic method when some or all decision variables are discrete (Lokketangen and Woodruff, 1996; Fan and Liu, 2010; Watson and Woodruff, 2011). The progressive hedging approach has been applied in several fields, such as lot sizing (Haugen et al., 2001), portfolio optimization (Barro and Canestrelli, 2005), resource allocation in network flow (Watson and Woodruff, 2011), operation planning (Gonçalves et al., 2012), forest planning (Veliz et al., 2014), facility location (Gade et al., 2014), and server location and unit commitment (Guo et al., 2015). A standard approach to solving the two-stage SP (1)-(2) is to construct a scenario tree by generating a finite number of joint realizations ξ_s for s ∈ S, called scenarios, and allocating to each ξ_s a positive weight p_s such that Σ_{s∈S} p_s = 1 (Shapiro, 2008). The generated set {ξ_1, …, ξ_{|S|}} of

scenarios, with the corresponding probabilities p_1, …, p_{|S|}, is considered a representation of the underlying joint probability distribution of the random parameters. Using this representation, the expected value function E[φ(x, ξ)] can be calculated as E[φ(x, ξ)] = Σ_{s∈S} p_s φ(x, ξ_s). By duplicating the second stage decisions y_s for every scenario ξ_s, i.e., y_s = y(ξ_s) for all s ∈ S, the two-stage problem (1)-(2) can be equivalently formulated as follows:

Min_{x, y_1, …, y_{|S|}} c^T x + Σ_{s∈S} p_s φ(x, y_s, ξ_s)   (3)
s.t. x ∈ X, y_s ∈ Y(x, ξ_s) for all s ∈ S,

where Y(x, ξ_s) := {y ∈ Y : W_s y ≥ h_s − T_s x} is the second stage feasible set and ξ_s := (q_s, T_s, W_s, h_s), s ∈ S, are the corresponding scenarios (Shapiro, 2008). For the sake of simplicity, the mathematical formulation for each scenario subproblem is denoted by

Min cx + Σ_{s∈S} p_s f_s y_s   (4)
s.t. (x, y_s) ∈ F_s for all s ∈ S,

where F_s := {(x, y) : x ∈ X, y ∈ Y(x, ξ_s)} denotes the feasible region of scenario s, and where the first stage decision vector x does not depend on the scenario, i.e., x_s = x for all s ∈ S. Further, y_s represents the second stage decision variables, which are determined given a first stage decision and a specific ξ_s. Finally, f_s denotes the scenario specific coefficient vector of the second stage. Problem (4) is the well-known extensive form of a two-stage stochastic program (Watson and Woodruff, 2011). Next we present the pseudo-code of PHA to show how PHA converges to a common solution taking into account all the scenarios of the original problem. Let ρ > 0 be a penalty factor, ε a convergence threshold over the first stage decisions, and k the iteration number. The basic PHA is stated as follows; see, e.g., Watson and Woodruff (2011):

PHA Algorithm
1. k ← 0
2. For all s ∈ S, x_s^k ← argmin_{x, y_s} { cx + f_s y_s : (x, y_s) ∈ F_s }
3. x̄^k ← Σ_{s∈S} p_s x_s^k
4. For all s ∈ S, ω_s^k ← ρ(x_s^k − x̄^k)
5. k ← k + 1
6. For all s ∈ S, x_s^k ← argmin_{x, y_s} { cx + ω_s^{k−1} x + (ρ/2) ‖x − x̄^{k−1}‖² + f_s y_s : (x, y_s) ∈ F_s }
7. x̄^k ← Σ_{s∈S} p_s x_s^k
8. For all s ∈ S, ω_s^k ← ω_s^{k−1} + ρ(x_s^k − x̄^k)
9. π^k ← Σ_{s∈S} p_s ‖x_s^k − x̄^k‖
10. If π^k ≥ ε, then go to step 5; else terminate.

When the decision vector x is continuous, PHA converges at a linear rate to a common solution vector x̄, which all the scenarios agree on. However, the problem becomes much more difficult to solve when x is integer, because integer variables make SP problems nonconvex (Watson and Woodruff, 2011). Detailed information on the behavior of the PHA methodology can be found in Wallace and Helgason (1991), Mulvey and Vladimirou (1991), Lokketangen and Woodruff (1996), Crainic et al. (2011), and Watson and Woodruff (2011).

2.2. Sample Average Approximation
The SAA method has become a popular technique for solving large-scale SP problems over the past decade due to its ease and scope of application. It has been shown that feasible solutions obtained by SAA converge to an optimal solution provided that the sample size is sufficiently large (Ahmed and Shapiro, 2002). However, even when these sample sizes are quite large, the actual convergence rate depends on the problem conditioning. Several studies have reported successful applications of SAA to various stochastic programs (Verweij et al., 2003; Kleywegt et al., 2002;

Shapiro and Homem-de-Mello, 2001; Fliege and Xu, 2011; Wang et al., 2011; Long et al., 2012; Hu et al., 2012; Wang et al., 2012; Aydin and Murat, 2013; Ayvaz et al., 2015). The key idea of the SAA approach to solving SP can be described as follows. A sample ξ^1, …, ξ^N of N realizations of the random vector ξ is randomly generated, and subsequently the expected value function E[φ(x, ξ)] is approximated by the sample average function (1/N) Σ_{n=1}^N φ(x, ξ^n). In order to reduce variance within SAA, Latin Hypercube Sampling (LHS) may be used instead of uniform sampling. Performance comparisons of LHS and uniform sampling within the SAA scheme are analyzed in Ahmed and Shapiro (2002). The resulting SAA problem

Min_{x∈X} { ĝ_N(x) := c^T x + (1/N) Σ_{n=1}^N φ(x, ξ^n) }

is then solved by deterministic optimization algorithms. As N increases, the SAA solution converges to the optimal solution of SP (1), as shown in Ahmed and Shapiro (2002) and in Kleywegt et al. (2002). Since solving SAA becomes a challenge with large N, the practical implementation of this algorithm often features multiple replications of the sampling, solving each of the sample SAA problems, and selecting the best found solution upon evaluating the solution quality by using either the original scenario set or a reference scenario sample set. We now provide a description of the SAA procedure.

SAA Procedure:
Initialize: Generate M independent random samples m = 1, 2, …, M with scenario sets N_m, where |N_m| = N. Each sample m consists of N realizations of independently and identically distributed (i.i.d.) random scenarios. We also select a reference sample set N′ that is sufficiently large, i.e., |N′| ≫ N.
Step 1: For each sample m, solve the following two-stage SP problem and record the sample optimal objective function value v_m and the sample optimal solution x_m:
Min_{x∈X} { c^T x + (1/|N_m|) Σ_{n∈N_m} φ(x, ξ_n) }.
(5)
Step 2: Calculate the average v̄_M of the sample optimal objective function values obtained in Step 1 as follows:
v̄_M = (1/M) Σ_{m=1}^M v_m.   (6)
Step 3: Estimate the true objective function value v̂_m of the original problem for each sample's optimal solution. Solve the following problem for each sample by using the optimal first stage decisions x_m from Step 1:
v̂_m = c^T x_m + (1/|N′|) Σ_{s∈N′} φ(x_m, ξ_s).   (7)
Step 4: Select the solution x_m with the best v̂_m, i.e., x_SAA = argmin_{m=1,…,M} v̂_m, as the solution, and v_SAA = min_{m=1,…,M} v̂_m as the objective function value of SAA.

Let v* denote the optimal objective function value of the original problem (1)-(2). Then v̄_M is an unbiased estimator of E[v_N], the expected optimal objective function value of the sample problems. Since E[v_N] ≤ v*, the value of v̄_M provides a statistical lower bound on v* (Ahmed and Shapiro, 2002). When the first and the second stage decision variables in (1) and (2) are continuous, it has been proved that an optimal solution to the SAA problem also solves the true problem with probability approaching one at an exponential rate as N increases (Shapiro and Homem-de-Mello, 2001; Ahmed and Shapiro, 2002; Meng and Xu, 2006; Liu and Zhang, 2012; Xu and Zhang, 2013; Shapiro and Dentcheva, 2014). Determining the required minimal sample size N is an important task for SAA practitioners, and it has been investigated by many researchers (Kleywegt et al., 2002; Ahmed and Shapiro, 2002; Shapiro, 2002; Ruszczynski and Shapiro, 2003a).

2.3. Sampling-Based Progressive Hedging Algorithm (SBPHA)
We now describe the proposed SBPHA algorithm, which is a hybridization of SAA and PHA. The motivation for this hybridization originates from the final stage of the SAA method (Step 4 of the SAA procedure) where, after selecting the best performing solution, the rest of the sample solutions are discarded. This discarding of the (M − 1) sample solutions results in losses in terms of both valuable sample information (increasing with M) and the effort spent in solving for each sample's solution (increasing with N). The proposed SBPHA addresses these losses by considering each sample SAA problem as though it were a scenario subproblem in the PHA. Accordingly, in the proposed SBPHA approach, we modify the SAA method by iteratively re-solving the sample SAA problems while, at the end of each iteration, penalizing deviations from a weighted combination of the samples' probability weighted solution and the best performing solution for the original problem (i.e., as in the PHA). Hence, a single iteration of the SBPHA corresponds to the classical implementation of the SAA method. An important distinction of the SBPHA from the classical PHA is the sampling concept and the size of the subproblems solved. The classical PHA solves many subproblems, each corresponding to a single scenario in the entire scenario set, one by one at every iteration, and evaluates the probability weighted solution using the individual scenario probabilities.
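The per-scenario PHA iteration just summarized can be illustrated on an assumed toy problem in which each scenario subproblem is min_x (x − ξ_s)², so the augmented Lagrangian step of the PHA pseudocode has a closed-form solution and the consensus solution should approach the probability weighted mean of the ξ_s. Everything below is a sketch under those assumptions.

```python
# Toy PHA (assumed example): scenario subproblems min_x (x - xi_s)^2, whose
# augmented Lagrangian step argmin_x (x - xi_s)^2 + w_s*x + (rho/2)(x - xbar)^2
# has the closed form x = (2*xi_s - w_s + rho*xbar) / (2 + rho).
xi = [1.0, 4.0, 10.0]          # scenario data
p = [0.2, 0.5, 0.3]            # scenario probabilities
rho, eps = 1.0, 1e-8

x = list(xi)                                  # Step 2: per-scenario optima
xbar = sum(ps * xs for ps, xs in zip(p, x))   # Step 3: weighted average
w = [rho * (xs - xbar) for xs in x]           # Step 4: initialize duals
spread = float("inf")

while spread >= eps:                          # Steps 5-10
    x = [(2 * xi_s - w_s + rho * xbar) / (2 + rho)  # Step 6, closed form
         for xi_s, w_s in zip(xi, w)]
    xbar = sum(ps * xs for ps, xs in zip(p, x))     # Step 7
    w = [w_s + rho * (xs - xbar) for w_s, xs in zip(w, x)]     # Step 8
    spread = sum(ps * abs(xs - xbar) for ps, xs in zip(p, x))  # Step 9

print(round(xbar, 6))  # consensus -> 0.2*1 + 0.5*4 + 0.3*10 = 5.2
```

Since the subproblems are strongly convex, the per-scenario solutions contract toward the consensus geometrically, which is the linear convergence rate mentioned for continuous x.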
In comparison, the SBPHA solves only a small number of subproblems, each corresponding to a sample with multiple scenarios, and determines the probability weighted solution in a different way than PHA (explained in detail below). Note that while solving an individual sample problem in SBPHA is more difficult than solving a single scenario subproblem in PHA, SBPHA solves far fewer subproblems. Clearly, SBPHA makes a trade-off between the number of sample subproblems to solve and the size of each sample subproblem. We first present the proposed SBPHA algorithm and then describe its steps in detail. For clarity, the notation used precedes the algorithmic steps. Note that, for brevity, we only define the notation that is new or different from that used in the preceding sections:

Notation:
k, k_max : iteration index and maximum number of iterations
P_m, P̄_m : probability and normalized probability of realization of sample m, P̄_m = P_m / Σ_{m∈M} P_m
x^{m,k} : solution vector for sample m at iteration k
x̄^k : probability weighted solution vector at iteration k
x̂^k : balanced solution vector at iteration k
x_best : best incumbent solution
v_best : objective function value of the best incumbent solution with respect to N′
v_best^k : objective function value of the best solution at iteration k with respect to N′
ω_m^k : dual variable vector for sample m at iteration k
ρ^k : penalty factor at iteration k

β : update parameter for the penalty factor, 1 < β < 2
α_k : weight for the best incumbent solution at iteration k, 0 ≤ α_k ≤ 1
Δα : update factor for the weight of the best incumbent solution, 0 ≤ Δα
ε^k : Euclidean norm distance between the sample solutions x^{m,k} and x̂^k at iteration k
ε : convergence threshold for the solution spread
x_SBPHA : best solution found by SBPHA
v_SBPHA : objective function value of the best solution found by SBPHA

The pseudo-code for the Sampling-Based Progressive Hedging Algorithm is as follows:

Sampling-Based Progressive Hedging Algorithm (SBPHA) for Two-Stage SP Problems:
1: Initialize: Generate M samples, m = 1, 2, …, M, each with scenario set N_m, where |N_m| = N
2: Generate a reference sample set N′, where |N′| ≫ N
3: k ← 0, ω_m^0 ← 0 for m = 1, …, M, α_0 ← 1, and ρ^0 > 0
4: P_m ← Σ_{s∈N_m} p_s, P̄_m ← P_m / Σ_{m∈M} P_m, P̄ := {P̄_m}
Executing SAA to get an initial solution:
5: Execute Steps 1-4 of the SAA procedure for each m to get x_m
6: x_best ← x_SAA, and v_best ← v_SAA
7: for m = 1, 2, …, M do
8: x^{m,0} ← x_m
9: end for
10: While (ε^k ≥ ε or x̄^k ≠ x_best) and (k < k_max) do
11: k ← k + 1
12: x̄^k ← Σ_{m∈M} P̄_m x^{m,k−1}
13: x̂^k ← α_k x̄^k + (1 − α_k) x_best
14: If α_{k−1} = 0, α_k ← α_{k−1}; else α_k ← α_{k−1} − Δα
15: If k ≥ 2, ρ^k ← { β ρ^{k−1} if ε^{k−1} > ε^{k−2}/2; ρ^{k−1} otherwise }; else ρ^k ← ρ^{k−1}
16: ω_m^k ← ω_m^{k−1} + ρ^k (x^{m,k−1} − x̂^k)
Solve each sample (subproblem) with N_m scenarios:
17: for m = 1, 2, …, M do
18: [v^{m,k}, x^{m,k}] ← argmin_x { c^T x + (1/|N_m|) Σ_{n∈N_m} φ(x, ξ_n) + ω_m^k x + (ρ^k/2) ‖x − x̂^k‖² }   (8)
19: end for
20: ε^k := ( Σ_{m=1}^M ‖x^{m,k} − x̂^k‖² )^{1/2}
Calculate the performance of the solutions obtained in Steps 17-19:
21: for m = 1, 2, …, M do
22: v̂^{m,k} ← c^T x^{m,k} + (1/|N′|) Σ_{s∈N′} φ(x^{m,k}, ξ_s)   (9)
23: end for
24: v_best^k ← min_{m=1,…,M} v̂^{m,k}

25: v_best ← { v_best^k if v_best^k < v_best; v_best otherwise }
26: x_best ← { argmin_{m=1,…,M} v̂^{m,k} if v_best^k < v_best; x_best otherwise }
27: end while
28: x_SBPHA ← x_best, v_SBPHA ← v_best.

The first step in SBPHA's initialization is to execute the standard SAA procedure (Steps 5-6). In the initialization step of SBPHA, unlike SAA, we also calculate sample probabilities and normalized probabilities, i.e., P_m and P̄_m, which are used to calculate the samples' probability weighted average solution x̄^k at iteration k (Step 12). Next, in Step 13, we calculate the samples' balanced solution x̂^k as a weighted average of the average solution x̄^k and the incumbent best solution x_best. The x_best is initially obtained as the solution to the SAA problem (Step 6) and then updated based on the evaluation of the improved sample solutions and the most recent incumbent best (Step 26). In calculating the balanced solution x̂^k, the SBPHA uses a weight factor α_k ∈ [0, 1] to tune the bias between the samples' current iteration average solution and the best incumbent solution. High values of α_k pull the balanced solution (and hence the sample solutions in the next iteration) toward the samples' average solution, whereas low values pull x̂^k toward the incumbent best solution. There are two alternative implementations of SBPHA concerning this bias tuning: α_k can be kept static by setting Δα = 0 or dynamically changed over the iterations by setting Δα > 0 (see Step 14). The advantage of a dynamic α_k is that, beginning with a large α_k, we first prioritize the sample average solution until the incumbent best solution quality improves. This approach first guides the sample solutions to a consensus sample average and then directs this consensus in the direction of the evolving best solution. In Step 15, we update the penalty factor ρ^k depending on whether the distance of the sample solutions from the most recent balanced solution has sufficiently improved.
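The update logic of Steps 13-15 can be sketched as follows. This is a scalar schematic with hypothetical values (in the algorithm the solutions are vectors, and the spread values come from Step 20 of the two preceding iterations):

```python
def update_parameters(xbar, x_best, alpha, d_alpha, rho, beta, eps_prev, eps_prev2):
    # Step 13: balanced solution blends the samples' average with the incumbent
    x_hat = alpha * xbar + (1.0 - alpha) * x_best
    # Step 14: decay alpha so the bias shifts toward the incumbent over time
    alpha = max(0.0, alpha - d_alpha)
    # Step 15: if the solution spread has not at least halved, tighten the penalty
    if eps_prev2 is not None and eps_prev > eps_prev2 / 2.0:
        rho = beta * rho
    return x_hat, alpha, rho

x_hat, alpha, rho = update_parameters(
    xbar=4.0, x_best=2.0, alpha=0.75, d_alpha=0.25,
    rho=1.0, beta=1.5, eps_prev=0.9, eps_prev2=1.0)
print(x_hat, alpha, rho)  # 3.5 0.5 1.5
```

Here the spread shrank only from 1.0 to 0.9 (less than a halving), so the penalty is multiplied by β; with a sufficiently improved spread, ρ would stay unchanged.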
We choose the improvement threshold as half of the distance in the previous iteration. Similarly, in Step 16, we update the dual variables ω_m^k using the standard subgradient method of convex optimization. Note that the ω_m^k are the Lagrange multipliers associated with the equivalence of each sample's solution to the balanced solution. In Step 18, we solve each sample problem with the additional objective function terms representing the dual variables, and in Step 20 we calculate the deviation of the sample solutions from the balanced solution (i.e., ε^k). Step 22 estimates the objective function value of each sample solution in the original problem using the reference set N′. Steps 24-26 identify the sample solution x^{m,k} with the best v̂^{m,k} in iteration k and update the incumbent best v_best if there is any improvement. Note that v_best is monotonically nonincreasing over the SBPHA iterations. Steps 22 and 24-26 correspond to the integration of the SAA method's selection of the best performing sample solution. Rather than terminating with the best sample solution, the proposed SBPHA conveys this information to the next iteration through the balanced solution. The SBPHA algorithm terminates when either of two stopping conditions is met: if the iteration limit is reached (k ≥ k_max), or when all the sample solutions converge to the balanced solution within a tolerance, then SBPHA terminates with the best found solution. The worst case solution of SBPHA is equivalent to the SAA solution. This can be observed by noting that the best incumbent solution is initialized with the SAA solution and that v_best is monotonically nonincreasing over the SBPHA iterations. Hence, SBPHA always returns a feasible solution whose performance is the same as or better than that of SAA.
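Putting the pieces together, the following is a compact, self-contained SBPHA sketch on an assumed toy problem, min_x E[(x − ξ)²] with ξ uniform on {0, …, 9}, so the optimum lies near E[ξ] = 4.5. The sample subproblems are solved in closed form, the penalty ρ is kept constant for brevity (Step 15 is omitted), and all names and numbers are illustrative assumptions:

```python
import random

# Compact SBPHA sketch (assumed toy problem): true problem min_x E[(x - xi)^2].
# Samples are i.i.d. and equal-sized, so the normalized weights are all 1/M.
rng = random.Random(0)
M, N = 5, 30
samples = [[rng.randrange(10) for _ in range(N)] for _ in range(M)]
reference = [rng.randrange(10) for _ in range(2000)]   # reference set N'

def evaluate(x, scen):
    # Sample-average objective over a scenario list
    return sum((x - s) ** 2 for s in scen) / len(scen)

means = [sum(s) / N for s in samples]   # per-sample SAA optima (closed form)
x_m = means[:]
x_best = min(x_m, key=lambda x: evaluate(x, reference))   # SAA selection
v_best = evaluate(x_best, reference)

w = [0.0] * M                            # duals, one per sample
alpha, d_alpha, rho = 1.0, 0.1, 1.0
for k in range(200):
    xbar = sum(x_m) / M                          # Step 12 (equal weights)
    x_hat = alpha * xbar + (1 - alpha) * x_best  # Step 13: balanced solution
    alpha = max(0.0, alpha - d_alpha)            # Step 14
    w = [wi + rho * (xi - x_hat) for wi, xi in zip(w, x_m)]       # Step 16
    # Step 18: argmin of mean((x - s)^2) + w*x + (rho/2)(x - x_hat)^2
    x_m = [(2 * mu - wi + rho * x_hat) / (2 + rho) for mu, wi in zip(means, w)]
    for x in x_m:                                # Steps 22-26: incumbent update
        v = evaluate(x, reference)
        if v < v_best:
            v_best, x_best = v, x
    if max(abs(x - x_hat) for x in x_m) < 1e-9:  # convergence check
        break

print(round(x_best, 2), round(v_best, 2))
```

By construction the incumbent never degrades, so the sketch reproduces the worst-case guarantee noted above: its final solution is at least as good (on N′) as the plain SAA selection it starts from.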

2.4. Discarding-SBPHA (d-SBPHA) for binary first-stage SP problems: The Discarding-SBPHA approach extends SBPHA by seeking an improved, and ideally optimal, solution to the original problem. The main idea of d-SBPHA is to re-run SBPHA while adding constraint(s) to the sample subproblems in (8) that prohibit finding the same best incumbent solution(s) found in earlier d-SBPHA iterations. This prohibition is achieved through constraints that are violated if x^{m,k} overlaps with any of the best incumbent solutions x_best found so far in the d-SBPHA iterations. This modification of SBPHA can be considered a globalization of SBPHA in the sense that, by repeating the discarding steps, d-SBPHA is guaranteed to find an optimal solution, albeit possibly with an infinite number of discarding steps. The d-SBPHA approach initializes with the SBPHA solution x_best and with the parameter values (ω, α, ρ) of the SBPHA iteration where this solution was first encountered. We now provide the additional notation and algorithmic steps of d-SBPHA and then describe them in detail.
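The discarding constraints just described are standard "no-good" cuts for binary vectors. A minimal sketch (with a hypothetical helper name) of the cut Σ_{i∈D_t^1} x_i − Σ_{i∈D_t^0} x_i ≤ n_{D_t} − 1 and its effect:

```python
from itertools import product

def cut_holds(x, discarded):
    # No-good cut excluding a previously found binary solution `discarded`:
    #   sum_{i in D1} x_i - sum_{i in D0} x_i <= |D1| - 1,
    # where D1/D0 index the 1/0 components of the discarded solution.
    d1 = [i for i, v in enumerate(discarded) if v == 1]
    d0 = [i for i, v in enumerate(discarded) if v == 0]
    return sum(x[i] for i in d1) - sum(x[i] for i in d0) <= len(d1) - 1

banned = (1, 0, 1)
survivors = [x for x in product((0, 1), repeat=3) if cut_holds(x, banned)]
print(len(survivors))  # 7: every 3-bit vector except the banned one
```

The left-hand side attains its maximum |D1| only at the discarded vector itself, so exactly one binary solution is removed per cut, which is why the strategy is guaranteed to reach an optimum after finitely many discards in the binary case.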
Additional notation for d-SBPHA:
o : iteration at which the best solution is found for the first time; see below
d, d_max : discarding iteration index and the maximum number of discarding iterations
D : set of discarded solutions
n_{D_t} : number of binary decision variables that are equal to 1 in discarded solution t
D_t^1 : set of decision variables that are equal to 1 in discarded solution t
D_t^0 : set of decision variables that are equal to 0 in discarded solution t

d-SBPHA for binary first-stage SP problems:
1: Initialize: execute Steps 1-28 of SBPHA
2: x_best ← x_SBPHA, D ← ∅, d ← 0, x^{m,o} ← x^{m,k}, ρ^o ← ρ^k, α_o ← α_k, where o is the SBPHA iteration at which x_best was first found
Start the d-SBPHA procedure:
3: While d < d_max do
4: d ← d + 1
5: for m = 1, 2, …, M do
6: x^{m,k} ← x^{m,o}
7: end for
8: ρ^k ← ρ^o
9: α_k ← α_o
10: for m = 1, 2, …, M do
11: ω_m^k ← ω_m^o
12: end for
13: D ← D ∪ {x_best}
14: Execute Steps 10-16 of SBPHA
15: for m = 1, 2, …, M do
16: [v^{m,k}, x^{m,k}] ← argmin_x { c^T x + (1/|N_m|) Σ_{n∈N_m} φ(x, ξ_n) + ω_m^k x + (ρ^k/2) ‖x − x̂^k‖² }   (10)
s.t. Σ_{i∈D_t^1} x_i − Σ_{i∈D_t^0} x_i ≤ n_{D_t} − 1, t = 1, …, |D|
17: end for
18: Execute Steps 20-27 of SBPHA

19: x_{d-SBPHA}^d ← x_best, v_{d-SBPHA}^d ← v_best
20: end while
21: v_{d-SBPHA} ← min_{d=1,…,d_max} v_{d-SBPHA}^d
22: x_{d-SBPHA} ← x_{d-SBPHA}^{d*}, where d* := argmin_{d=1,…,d_max} v_{d-SBPHA}^d

The initialization step of d-SBPHA is the implementation of the original SBPHA, with the only difference being that the starting parameter values are set to those of the SBPHA iteration where the current best solution was found. Also in Step 1, the set of discarded solutions is updated to prevent the algorithm from reconverging to the same solution. In Steps 2-12, the parameters are updated. Step 13 updates the set of discarded solutions by including the most recent x_best. Step 14 executes SBPHA steps to update the parameters of the sample problems. Note that Step 16 has the same objective function as Step 18 of SBPHA, with additional discarding constraints that prevent finding first-stage solutions that were already found in preceding d-SBPHA iterations. Step 18 executes SBPHA steps that test the solutions' quality and perform the updating of the best solution according to the SAA approach. The only difference between d-SBPHA and SBPHA in Step 20 is that d-SBPHA checks whether the maximum number of discards is reached. If the discarding iteration limit is reached, then the algorithm reports the solution with the best performance (Steps 21 and 22); else it continues discarding. Note that with the discarding strategy, d-SBPHA terminates with a better or the same solution in comparison with SBPHA, and it is guaranteed to terminate with an optimal solution if infinitely many discarding iterations are allowed.

2.5. Lower bounds on SBPHA and d-SBPHA: The majority of the computational effort of SBPHA (and d-SBPHA) is spent in solving the sample subproblems as well as evaluating the first stage decisions on the larger reference set.
In particular, for combinatorial problems with discrete first-stage decisions, the former effort is more significant than the latter. To improve the computational performance, we propose using a sample-specific lower bound employed while solving the sample subproblems. The theoretical justification of the proposed lower bound is that if the balanced solution does not change, then the objective values of the sample problems are nondecreasing. Hence, one can use the previously found objective value as a lower bound (due to Lagrangian duality in optimization). However, if the balanced solution changes, then the lower bound from the previous solution is no longer guaranteed, and thus the lower bound is removed or a conservative estimate of it is utilized. Let lb^{m,k} be the lower bound for sample m at iteration k in SBPHA (or d-SBPHA). In Step 18 of SBPHA (Step 16 of d-SBPHA) a valid cut can then be added: v^{m,k} ≥ lb^{m,k} for m = 1, …, M, where lb^{m,k} = c_lb v^{m,k−1} and 0 ≤ c_lb ≤ 1. Here c_lb is a tuning parameter that adjusts the tightness of the lower bound. However, c_lb should not be close to 1, because that might cause infeasible solutions. There is a trade-off in the value of c_lb: higher values might cause either infeasible or suboptimal solutions, while lower values do not provide tight enough constraints to help improve the solution time. In this study, after testing multiple values for c_lb, we suggest choosing c_lb of at least 0.4. Providing a justified lower bound to the optimization problem saves approximately 10%-15% of the solution time.

2.6. Characteristics of SBPHA and d-SBPHA: Here we present and justify several statements characterizing the mathematical well-posedness of the proposed algorithms.

Proposition 1 (Equivalence): SBPHA is equivalent to SAA if the algorithm is executed only once. Further, SBPHA is equivalent to PHA if the samples are mutually exclusive and their union is the entire scenario set.

Proof: We verify the two statements of the proposition as formulated. For SAA: if SBPHA terminates after Step 1, then x^SBPHA = x^SAA and v^SBPHA = v^SAA. This allows us to conclude that SBPHA is equivalent to SAA. For PHA: under the specified assumptions and for M = |S| and N_m = 1, m = 1, …, M, we have SBPHA = PHA. Consider a two-stage SP problem with finitely many scenarios ξ_s, s = 1, …, |S|, where each scenario occurs with probability p_s and Σ_{s=1}^{|S|} p_s = 1. Treating the samples in SBPHA as individual scenarios, i.e., M = |S| and N_m = 1 with ξ_m ≠ ξ_{m′} for m ≠ m′, we conclude that P_m = p_s. If the weight for the best incumbent solution and the update factor for this weight are equal to 1 and 0, respectively, at every iteration (α_k = 1, Δα = 0), then x̄_k := Σ_{s∈S} p_s x_s^k, and hence x^SBPHA = x^PHA and v^SBPHA = v^PHA.

Proposition 2 (Convergence): The SBPHA algorithm converges and terminates with the best solution found at an earlier iteration.

Proof: We prove this by contradiction. Assume that SBPHA finds, at iteration k, the best solution x_best = x*. Suppose that the SBPHA algorithm converges to a solution x′ ≠ x_best that has a worse objective value than x* with respect to the reference scenario set. Note that convergence yields x̄_k = x̄_{k-1} = x′, assuming k_max = ∞ and ε = 0. In the last update we must have the equality x̄_k = x′ = α_k x′ + (1 − α_k) x_best. Since α_k < 1, this equality is satisfied if and only if x′ = x_best, which is a contradiction.

Proposition 3: The SBPHA and d-SBPHA algorithms have the same convergence properties as SAA with respect to the sample size.
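The blending update x̄_k = α_k x̄ + (1 − α_k) x_best used in the arguments above can be illustrated with a minimal sketch (function and variable names are ours, not the paper's):

```python
def balanced_solution(x_bar, x_best, alpha):
    """Blend the probability-weighted sample solution x_bar with the best
    incumbent x_best: returns alpha * x_bar + (1 - alpha) * x_best."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(x_bar, x_best)]

# With alpha = 1 the incumbent has no influence (the PHA-equivalent case
# of Proposition 1):
print(balanced_solution([0.2, 0.8], [1.0, 0.0], alpha=1.0))  # [0.2, 0.8]
# With alpha < 1, x = alpha*x + (1-alpha)*x_best holds only when x equals
# x_best, which is the fixed-point argument in the proof of Proposition 2.
print(balanced_solution([1.0, 0.0], [1.0, 0.0], alpha=0.7))  # [1.0, 0.0]
```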
Proof: It was shown in Ahmed and Shapiro (2002) and Ruszczynski and Shapiro (2003b) that SAA converges with probability one (w.p.1) to an optimal solution of the original problem as the sample size increases to infinity (N → ∞). Since Step 1 of SBPHA is an implementation of SAA, and since SBPHA converges to the best solution found (Proposition 2), it follows that SBPHA and d-SBPHA converge to an optimal solution of the original problem with increasing sample size, just as SAA does. Furthermore, since SBPHA and d-SBPHA guarantee a solution at least as good as that of SAA, we can conjecture that SBPHA and d-SBPHA are more likely than SAA to reach optimality for a given number of samples and sample size.

Proposition 4: The d-SBPHA algorithm converges to an optimal solution as d → ∞.

Proof: Since d-SBPHA is not allowed to find the same solution twice, in the worst case it iterates as many times as there are feasible first-stage solutions (infinitely many in the continuous case and finitely many in the discrete case) before it finds an optimal solution. Note that the proposed algorithm is effective for problems where the first-stage decision variables are binary. Clearly, as the number of added discarding constraints increases linearly with the number of discarding iterations, the resulting problems become more difficult to solve. However, in our

experiments for a particular problem type, we observed that, in the vast majority of experiments, d-SBPHA finds an optimal solution in fewer than 10 discarding iterations.

3. Experimental Study

We now describe the experimental study performed to investigate the computational and solution-quality performance of the proposed SBPHA and d-SBPHA for solving two-stage SP problems. We benchmark the results of SBPHA and d-SBPHA against those of SAA. All the algorithms are implemented in Matlab R2010b, and integer programs are solved with CPLEX. The experiments are conducted on a PC with an Intel(R) Core 2 CPU, 2.13 GHz processor, and 2.0 GB RAM running the Windows 7 OS. Next we describe the test problem, the Capacitated Reliable Facility Location Problem (CRFLP), in Section 3.1 and the experimental design in Section 3.2. In Section 3.3, we report the sensitivity analysis results for SBPHA's and d-SBPHA's performance with respect to the algorithms' parameters. In Section 3.4, we present and discuss the benchmarking results.

3.1 Capacitated Reliable Facility Location Problem (CRFLP)

Facility location is a strategic supply chain decision requiring significant investments to anticipate and plan for uncertain future events (Owen and Daskin, 1998; Melo et al., 2009). An example of such uncertain supply chain events is the disruption of facilities that are critical for the ability to efficiently satisfy customer demand (Schütz, 2009). These disruptions can be natural disasters, e.g., earthquakes or floods, or man-made events such as terrorist attacks (Murali et al., 2012), labor strikes, etc. For a detailed review of uncertainty considerations in facility location problems, the reader is referred to Snyder (2006) and Snyder and Daskin (2005). Snyder and Daskin (2005) developed a reliability-based formulation for the Uncapacitated Facility Location Problem (UFLP) and the p-median problem (PMP). More recently, Shen et al.
(2011) studied a variant of the reliable UFLP called the uncapacitated reliable facility location problem (URFLP). The authors proposed efficient approximation algorithms for URFLP using the special structure of the problem. Several studies addressed UFLP, e.g., Pandit (2004), Arya et al. (2004), Resende and Werneck (2006), Yun et al. (2014), and An et al. (2014). However, the approximations employed for UFLP cannot be applied to more general classes of facility location problems such as the Capacitated Reliable Facility Location Problem (CRFLP). Li et al. (2013) developed Lagrangian relaxation-based (LR) solution algorithms for the reliable p-median problem (RPMP) and the reliable uncapacitated fixed-charge location problem (RUFL). Albareda-Sambola et al. (2015) proposed two mathematical formulations and developed a metaheuristic-based algorithm to minimize the total travel cost in a network where facilities are subject to probabilistic failures; the model they considered is also a reliable p-median-type problem. In practice, capacity decisions are considered jointly with location decisions. Further, the capacities of facilities often cannot be changed (at least at a reasonable cost) in the event of a disruption. Following a facility failure, customers can be reassigned to other facilities only if those facilities have sufficient available capacity. Thus capacitated reliable facility location problems are more complex than their uncapacitated counterparts (Shen et al., 2011). Gade (2007) applied SAA in combination with a dual decomposition method to solve CRFLP. Later, Aydin and Murat (2013) proposed a swarm intelligence-based SAA algorithm to solve CRFLP. We now introduce the notation used for the formulation of CRFLP. Let F_R and F_U denote the sets of possible reliable and unreliable facility sites, respectively, let F = F_R ∪ F_U denote the set of all possible facility sites including the emergency facility, and let D denote the set of customers.
Let f_i be the fixed cost of locating facility i ∈ F, which is incurred if the facility is opened, and let d_j be the demand of customer j ∈ D. Further, c_ij denotes the cost of satisfying each unit of demand of customer j from facility i and includes such variable cost drivers as transportation, production, etc. There are failure scenarios in which the unreliable facilities can fail and become unable to serve any customer demand. In such cases, customer demand needs to be allocated among the

surviving facilities and the emergency facility (f_e), subject to capacity availability. Each unit of demand satisfied by the emergency facility incurs a large penalty cost (h_j), due either to finding an alternative source or to the lost sale. Finally, facility i has a limited capacity and can serve at most b_i units of demand.

We formulate the CRFLP as a two-stage SP problem. In the first stage, the location decisions are made before the random failures of the located facilities. In the second stage, following the facility failures, the customer-facility assignment decisions are made for every customer given the facilities that have not failed. The goal is to identify the set of facilities to be opened while minimizing the total cost of the open facilities as well as the expected cost of meeting customer demand from the surviving facilities and the emergency facility. In the scenario-based formulation of CRFLP, let s denote a failure scenario and S the set of all failure scenarios. Let p_s be the occurrence probability of scenario s, with Σ_{s∈S} p_s = 1. Further, let k_i^s indicate whether facility i survives in scenario s (k_i^s = 1 if it survives, and k_i^s = 0 otherwise). For instance, in the case of independent facility failures, there are |S| = 2^{|F_U|} possible failure scenarios. The binary decision variable x_i specifies whether facility i is opened, and the variable y_ij^s specifies the fraction of the demand of customer j satisfied by facility i in scenario s. The scenario-based formulation of the CRFLP as a two-stage SP is as follows:

Minimize   Σ_{i∈F} f_i x_i + Σ_{s∈S} p_s Σ_{j∈D} Σ_{i∈F} d_j c_ij y_ij^s   (11)

subject to
Σ_{i∈F} y_ij^s = 1,   j ∈ D, s ∈ S,   (12)
y_ij^s ≤ x_i,   j ∈ D, i ∈ F, s ∈ S,   (13)
Σ_{j∈D} d_j y_ij^s ≤ k_i^s b_i,   i ∈ F, s ∈ S,   (14)
x_i ∈ {0,1},   i ∈ F,   (15)
y_ij^s ∈ [0,1],   j ∈ D, i ∈ F, s ∈ S.   (16)

The objective function (11) minimizes the total fixed cost of opening facilities plus the expected second-stage cost of satisfying customer demand through the surviving facilities and the emergency facility. Constraints (12) ensure that the demand of each customer is fully satisfied, by either the open facilities or the emergency facility, in every failure scenario. Constraints (13) ensure that a customer's demand cannot be served from a facility that is not opened. Constraints (14) prevent the assignment of any customer to a facility that has failed and also ensure that the total demand assigned to a facility does not exceed its capacity in every failure scenario. Constraints (15) enforce the integrality of the location decisions, and constraints (16) ensure that the demand-satisfaction fraction for any customer-facility pair lies within [0, 1].

3.2 Experimental Setting

We use the test data sets provided in Zhan (2007), which were also used in Shen et al. (2011) for URFLP. In these data sets, the coordinates of the site locations (facilities, customers) are i.i.d., sampled from U[0,1] × U[0,1]. The sets of customer and facility sites are identical. The customer demands are also i.i.d., sampled from U[0,1000] and rounded to the nearest integer. The fixed cost of opening an unreliable facility is i.i.d., sampled from U[500,1500] and rounded to the nearest integer. For the reliable facilities, we set the fixed cost to 2,000 for all facilities. The variable costs c_ij for customer j and facility i (excluding the emergency facility) are the Euclidean distances between sites. We assign a large penalty cost h_j (20) for serving customer j from the emergency facility. Zhan (2007) and Shen et al. (2011) consider URFLP, and thus their data sets do
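As an illustration of constraints (12)-(14), the following sketch checks the second-stage feasibility of a candidate assignment for one failure scenario. It is a toy checker we add for exposition, not part of the paper's algorithms; all names are ours.

```python
def feasible_second_stage(y, x, k, d, b):
    """Check constraints (12)-(14) of the CRFLP for one failure scenario.

    y[i][j]: fraction of customer j's demand served by facility i,
    x[i]: 1 if facility i is opened, k[i]: 1 if it survives,
    d[j]: demand of customer j, b[i]: capacity of facility i."""
    nF, nD = len(x), len(d)
    eps = 1e-9
    # (12): each customer's demand is fully assigned
    if any(abs(sum(y[i][j] for i in range(nF)) - 1.0) > eps
           for j in range(nD)):
        return False
    # (13): no assignment to facilities that are not opened
    if any(y[i][j] > x[i] + eps for i in range(nF) for j in range(nD)):
        return False
    # (14): capacity respected; failed facilities (k[i] = 0) serve nothing
    if any(sum(d[j] * y[i][j] for j in range(nD)) > k[i] * b[i] + eps
           for i in range(nF)):
        return False
    return True

# Two open facilities (the second has failed), one customer with demand 10:
print(feasible_second_stage([[1.0], [0.0]], x=[1, 1], k=[1, 0],
                            d=[10], b=[20, 20]))  # True
print(feasible_second_stage([[0.0], [1.0]], x=[1, 1], k=[1, 0],
                            d=[10], b=[20, 20]))  # False: facility 2 failed
```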

not have facility capacities. In all our experiments, we selected identical capacity levels for all facilities, i.e., b_i = 2,000 for i = 1, …, |F|. The data sets used in our study are given in Appendix B, Table 9.

In generating the failure scenarios, we assume that the facility failures are independently and identically distributed according to the Bernoulli distribution with probability q_i, the failure probability of facility i. We experimented with two sets of failure probabilities: the first set of experiments considers uniform failure rates, i.e., q_{i∈F_U} = q with q ∈ {0.1, 0.2, 0.3}, and the second set considers bounded non-uniform failure rates q_i with q_i ≤ 0.3. We restricted the failure probabilities to at most 0.3, since larger failure rates are not realistic. The reliable facilities and the emergency facility are perfectly reliable, i.e., q_{i∈(F_R ∪ {f_e})} = 0. Note that the case q_{i∈F_U} = 0 corresponds to the deterministic fixed-charge facility location problem. The failure scenarios s ∈ S are generated as follows. Let F_f^s ⊆ F_U denote the set of failed facilities and F_r^s = F_U \ F_f^s the set of surviving unreliable facilities in scenario s. The facility indicator parameter in scenario s becomes k_i^s = 0 if i ∈ F_f^s, and k_i^s = 1 otherwise, i.e., if i ∈ F_r^s ∪ F_R ∪ {f_e}. The probability of scenario s is then calculated as p_s = q^{|F_f^s|} (1 − q)^{|F_r^s|}.

In all the experiments, we used |D| = |F_U ∪ F_R| = 20 sites, which gives a large-sized CRFLP instance that is more difficult to solve than its uncapacitated counterpart (URFLP). The size of the failure scenario set is |S| = 4,096. The deterministic equivalent formulation has 20 binary variables x_i and 1,720,320 = |F| × |D| × |S| = 21 × 20 × 4,096 continuous variables y_ij^s. Further, there are 1,888,256 = 81,920 + 1,720,320 + 86,016 = |D||S| + |F||D||S| + |F||S| constraints corresponding to constraints (12)-(14) in the CRFLP formulation.
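The scenario-probability formula p_s = q^{|F_f^s|}(1 − q)^{|F_r^s|} can be checked on a small instance. The sketch below is our illustration: it enumerates all failure subsets of a four-facility F_U and confirms that the scenario probabilities sum to one, as required.

```python
from itertools import combinations

def scenario_probability(n_failed, n_unreliable, q):
    """p_s = q^{|F_f^s|} * (1-q)^{|F_r^s|} under a uniform failure rate q."""
    return q ** n_failed * (1 - q) ** (n_unreliable - n_failed)

# Enumerate every failure subset of a small unreliable set F_U (|F_U| = 4,
# hence 2^4 = 16 scenarios) and verify the probabilities sum to one:
F_U = range(4)
total = sum(scenario_probability(len(failed), 4, 0.3)
            for r in range(5) for failed in combinations(F_U, r))
print(abs(total - 1.0) < 1e-12)  # True
```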
Hence, the size of the constraint matrix of the deterministic equivalent MIP formulation is 1,720,320 × 1,888,256, which cannot be tackled with exact solution procedures (e.g., branch-and-cut or column generation methods). Note that while solving LPs of this size is computationally feasible, the presence of the binary variables makes the solution a daunting task.

We generated the sample sets for SAA and for SBPHA (and d-SBPHA) by randomly sampling from U[0,1] as follows. Given the scenario probabilities p_s, we calculate the scenario cumulative probability vector {p_1, (p_1 + p_2), …, (p_1 + p_2 + ⋯ + p_{|S|−1}), 1}, which defines |S| intervals. We first generate a random number and then select the scenario corresponding to the interval containing that number. We tested the SAA, SBPHA, and d-SBPHA algorithms with varying numbers of samples (M) and sample sizes (N). Whenever possible, we use the same sample sets for all three methods. We select the reference set (N′) as the entire scenario set, i.e., N′ = S, which is used to evaluate the second-stage performance of a solution. We note that this is computationally tractable due to the relatively small number of scenarios and the fact that the second-stage problem is an LP. In the case of a large scenario set or an integer second-stage problem, one should select N′ ⊂ S.

3.3 Parameter Sensitivity

In this subsection, we analyze the sensitivity of SBPHA with respect to the weight for the best incumbent solution (α), the penalty factor (ρ), and the update parameter for the penalty factor (β). Recall that α determines the bias of the best incumbent solution in determining the samples' balanced solution, which is obtained as a weighted average of the best incumbent solution and the samples' probability-weighted solution. The parameter ρ penalizes the Euclidean distance of a solution from the samples' balanced solution, and β is the multiplicative update parameter for ρ between two iterations.
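The inverse-CDF sampling scheme just described can be sketched as follows (a minimal illustration in Python; the paper's implementation is in Matlab and may differ in detail):

```python
import random
from bisect import bisect_left

def sample_scenarios(p, n, seed=0):
    """Draw n scenario indices via the cumulative-probability-vector scheme:
    generate u ~ U[0,1] and pick the scenario whose interval contains u."""
    cdf = []
    acc = 0.0
    for ps in p:
        acc += ps
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point drift in the last entry
    rng = random.Random(seed)
    return [bisect_left(cdf, rng.random()) for _ in range(n)]

p = [0.5, 0.3, 0.2]
draws = sample_scenarios(p, 10000)
print(all(0 <= s < len(p) for s in draws))  # True; empirical frequencies
                                            # approximate p for large n
```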
In all these tests, we set (M, N) = (5, 10) and q = 0.3 unless otherwise stated. We experimented with two α strategies, static and dynamic. We solved in total 480 (= 10 replications × 48 parameter settings) problem instances. The summary results of solving CRFLP using 10 independent sample sets (replications) with the static strategy, β ∈ {1.1, 1.2, 1.3, 1.4, 1.5, 1.8}, and ρ ∈ {1, 20, 40, 80, 200} are presented in Table 1. The detailed results of the 10 replications of Table 1, together with the detailed replication

results with the static strategy for α ∈ {0.6, 0.7, 0.8} and the dynamic strategy with α ∈ {0.02, 0.03, 0.05}, are presented in Appendix A, Table 5. The first column in Table 1 shows the α strategy and its parameter value. Note that in the dynamic strategy we select the initial value α_{k=0} = 1 in Appendix A, Table 5. The second and third columns show the penalty factor (ρ) and the update parameter for the penalty factor (β), respectively. The objective function values for the 10 replications (each replication consists of M = 5 samples) are reported in columns 4-13 (shown only for replications 1, 2, and 10 in Table 1; detailed results are shown in Appendix A, Table 5). The first column under the Objective heading presents the average objective function value across the 10 replications; the second column presents the optimality gap (gap_1) between the average replication solution and the best objective function value found; and the third and fourth columns present the minimum and maximum objective values across the 10 replications. The average objective function value and gap_1 are calculated as follows:

Average = (1/Rep) Σ_{r=1}^{Rep} v_r^{SBPHA},   (17)

Gap_1 = ((Average − v*) / v*) × 100%,   (18)

where Rep is the number of replications (Rep = 10 in this section's experiments) and v* is the best objective function value found. In the last column, we report the computational (CPU) time in seconds. The complete results on CPU times are provided in Appendix A, Table 6.

The first observation from Table 1 is that SBPHA is relatively insensitive to the α strategy employed and to the parameter settings selected. Second, we observe that the performance of SBPHA with a given parameter setting depends highly on the sample (see Table 5). As seen in replication 7 of Table 5, most of the configurations show good performance, as they all obtain the optimal solution.
Further, as α increases, the best incumbent solution becomes increasingly more important, leading to decreased computational time. While some parameter settings exhibit good solution quality, their computational times are also higher, and vice versa.

Table 1: Summary objective function results for solving 10 replications of CRFLP with different parameter settings. Columns report the α strategy and parameter, the start value of ρ, the update parameter β, the objective values of selected replications, v_best, the average objective, gap_1 (%), the minimum and maximum objectives, and the average CPU time (s). [The numerical entries of the table are not recoverable from the source.]

In selecting the parameter settings for SBPHA, we are interested in a setting that offers a balanced trade-off between solution quality and solution time. To determine such a setting, we developed a simple yet effective parameter-configuration selection index: the product of the average gap_1 and the CPU time. Clearly, the smaller this index, the better the configuration.

The best solution is obtained by selecting the best among all the SBPHA solutions (e.g., out of 480 solutions) and the time-restricted solution of CPLEX. The latter is obtained by solving the deterministic equivalent using CPLEX with a 0.05% optimality gap tolerance and a 10-hour (36,000 seconds) time limit, until either the CPU time limit is exceeded or CPLEX terminates due to insufficient memory.
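Formulas (17) and (18) reduce to a few lines of code. The sketch below is ours, for exposition: it computes the replication average and gap_1 from a list of replication objectives and the best value found (v*).

```python
def average_and_gap(values, v_best):
    """Compute Average (17) and Gap_1 in percent (18) over replications.

    values: objective values v_r^{SBPHA}, r = 1, ..., Rep;
    v_best: best objective function value found (v*)."""
    avg = sum(values) / len(values)
    gap1 = (avg - v_best) / v_best * 100.0
    return avg, gap1

avg, gap1 = average_and_gap([9100.0, 9200.0, 9300.0], v_best=9000.0)
print(avg)              # 9200.0
print(round(gap1, 4))   # 2.2222
```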


More information

The Capacitated Reliable Fixed-charge Location Problem: Model and Algorithm

The Capacitated Reliable Fixed-charge Location Problem: Model and Algorithm Lehigh University Lehigh Preserve Theses and Dissertations 2015 The Capacitated Reliable Fixed-charge Location Problem: Model and Algorithm Rui Yu Lehigh University Follow this and additional works at:

More information

Robust Network Codes for Unicast Connections: A Case Study

Robust Network Codes for Unicast Connections: A Case Study Robust Network Codes for Unicast Connections: A Case Study Salim Y. El Rouayheb, Alex Sprintson, and Costas Georghiades Department of Electrical and Computer Engineering Texas A&M University College Station,

More information

On Two Class-Constrained Versions of the Multiple Knapsack Problem

On Two Class-Constrained Versions of the Multiple Knapsack Problem On Two Class-Constrained Versions of the Multiple Knapsack Problem Hadas Shachnai Tami Tamir Department of Computer Science The Technion, Haifa 32000, Israel Abstract We study two variants of the classic

More information

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem Dimitris J. Bertsimas Dan A. Iancu Pablo A. Parrilo Sloan School of Management and Operations Research Center,

More information

Stochastic Unit Commitment with Topology Control Recourse for Renewables Integration

Stochastic Unit Commitment with Topology Control Recourse for Renewables Integration 1 Stochastic Unit Commitment with Topology Control Recourse for Renewables Integration Jiaying Shi and Shmuel Oren University of California, Berkeley IPAM, January 2016 33% RPS - Cumulative expected VERs

More information

Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs

Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs Manish Bansal Grado Department of Industrial and Systems Engineering, Virginia Tech Email: bansal@vt.edu Kuo-Ling Huang

More information

A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization

A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization Dimitris Bertsimas Sloan School of Management and Operations Research Center, Massachusetts

More information

Generation and Representation of Piecewise Polyhedral Value Functions

Generation and Representation of Piecewise Polyhedral Value Functions Generation and Representation of Piecewise Polyhedral Value Functions Ted Ralphs 1 Joint work with Menal Güzelsoy 2 and Anahita Hassanzadeh 1 1 COR@L Lab, Department of Industrial and Systems Engineering,

More information

A Benders Algorithm for Two-Stage Stochastic Optimization Problems With Mixed Integer Recourse

A Benders Algorithm for Two-Stage Stochastic Optimization Problems With Mixed Integer Recourse A Benders Algorithm for Two-Stage Stochastic Optimization Problems With Mixed Integer Recourse Ted Ralphs 1 Joint work with Menal Güzelsoy 2 and Anahita Hassanzadeh 1 1 COR@L Lab, Department of Industrial

More information

Sample Average Approximation (SAA) for Stochastic Programs

Sample Average Approximation (SAA) for Stochastic Programs Sample Average Approximation (SAA) for Stochastic Programs with an eye towards computational SAA Dave Morton Industrial Engineering & Management Sciences Northwestern University Outline SAA Results for

More information

The Orienteering Problem under Uncertainty Stochastic Programming and Robust Optimization compared

The Orienteering Problem under Uncertainty Stochastic Programming and Robust Optimization compared The Orienteering Problem under Uncertainty Stochastic Programming and Robust Optimization compared Lanah Evers a,b,c,, Kristiaan Glorie c, Suzanne van der Ster d, Ana Barros a,b, Herman Monsuur b a TNO

More information

A New Dynamic Programming Decomposition Method for the Network Revenue Management Problem with Customer Choice Behavior

A New Dynamic Programming Decomposition Method for the Network Revenue Management Problem with Customer Choice Behavior A New Dynamic Programming Decomposition Method for the Network Revenue Management Problem with Customer Choice Behavior Sumit Kunnumkal Indian School of Business, Gachibowli, Hyderabad, 500032, India sumit

More information

The Retail Planning Problem Under Demand Uncertainty. UCLA Anderson School of Management

The Retail Planning Problem Under Demand Uncertainty. UCLA Anderson School of Management The Retail Planning Problem Under Demand Uncertainty George Georgiadis joint work with Kumar Rajaram UCLA Anderson School of Management Introduction Many retail store chains carry private label products.

More information

Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization

Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization Rudabeh Meskarian 1 Department of Engineering Systems and Design, Singapore

More information

Multistage Robust Mixed Integer Optimization with Adaptive Partitions

Multistage Robust Mixed Integer Optimization with Adaptive Partitions Vol. 00, No. 0, Xxxxx 0000, pp. 000 000 issn 0000-0000 eissn 0000-0000 00 0000 0001 INFORMS doi 10.187/xxxx.0000.0000 c 0000 INFORMS Multistage Robust Mixed Integer Optimization with Adaptive Partitions

More information

Reformulation and Sampling to Solve a Stochastic Network Interdiction Problem

Reformulation and Sampling to Solve a Stochastic Network Interdiction Problem Network Interdiction Stochastic Network Interdiction and to Solve a Stochastic Network Interdiction Problem Udom Janjarassuk Jeff Linderoth ISE Department COR@L Lab Lehigh University jtl3@lehigh.edu informs

More information

The Sample Average Approximation Method Applied to Stochastic Routing Problems: A Computational Study

The Sample Average Approximation Method Applied to Stochastic Routing Problems: A Computational Study Computational Optimization and Applications, 24, 289 333, 2003 c 2003 Kluwer Academic Publishers. Manufactured in The Netherlands. The Sample Average Approximation Method Applied to Stochastic Routing

More information

Stochastic Optimization

Stochastic Optimization Chapter 27 Page 1 Stochastic Optimization Operations research has been particularly successful in two areas of decision analysis: (i) optimization of problems involving many variables when the outcome

More information

Estimation and Optimization: Gaps and Bridges. MURI Meeting June 20, Laurent El Ghaoui. UC Berkeley EECS

Estimation and Optimization: Gaps and Bridges. MURI Meeting June 20, Laurent El Ghaoui. UC Berkeley EECS MURI Meeting June 20, 2001 Estimation and Optimization: Gaps and Bridges Laurent El Ghaoui EECS UC Berkeley 1 goals currently, estimation (of model parameters) and optimization (of decision variables)

More information

The L-Shaped Method. Operations Research. Anthony Papavasiliou 1 / 38

The L-Shaped Method. Operations Research. Anthony Papavasiliou 1 / 38 1 / 38 The L-Shaped Method Operations Research Anthony Papavasiliou Contents 2 / 38 1 The L-Shaped Method 2 Example: Capacity Expansion Planning 3 Examples with Optimality Cuts [ 5.1a of BL] 4 Examples

More information

An Optimal Path Model for the Risk-Averse Traveler

An Optimal Path Model for the Risk-Averse Traveler An Optimal Path Model for the Risk-Averse Traveler Leilei Zhang 1 and Tito Homem-de-Mello 2 1 Department of Industrial and Manufacturing Systems Engineering, Iowa State University 2 School of Business,

More information

A DECOMPOSITION PROCEDURE BASED ON APPROXIMATE NEWTON DIRECTIONS

A DECOMPOSITION PROCEDURE BASED ON APPROXIMATE NEWTON DIRECTIONS Working Paper 01 09 Departamento de Estadística y Econometría Statistics and Econometrics Series 06 Universidad Carlos III de Madrid January 2001 Calle Madrid, 126 28903 Getafe (Spain) Fax (34) 91 624

More information

The Effect of Supply Disruptions on Supply Chain. Design Decisions

The Effect of Supply Disruptions on Supply Chain. Design Decisions The Effect of Supply Disruptions on Supply Chain Design Decisions Lian Qi Department of Supply Chain Management & Marketing Sciences Rutgers Business School, Rutgers University, Newark, NJ Zuo-Jun Max

More information

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018 Lecture 1 Stochastic Optimization: Introduction January 8, 2018 Optimization Concerned with mininmization/maximization of mathematical functions Often subject to constraints Euler (1707-1783): Nothing

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

Designing the Distribution Network for an Integrated Supply Chain

Designing the Distribution Network for an Integrated Supply Chain Designing the Distribution Network for an Integrated Supply Chain Jia Shu and Jie Sun Abstract We consider an integrated distribution network design problem in which all the retailers face uncertain demand.

More information

K-Adaptability in Two-Stage Mixed-Integer Robust Optimization

K-Adaptability in Two-Stage Mixed-Integer Robust Optimization K-Adaptability in Two-Stage Mixed-Integer Robust Optimization Anirudh Subramanyam 1, Chrysanthos E. Gounaris 1, and Wolfram Wiesemann 2 asubramanyam@cmu.edu, gounaris@cmu.edu, ww@imperial.ac.uk 1 Department

More information

Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS. Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano

Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS. Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano ... Our contribution PIPS-PSBB*: Multi-level parallelism for Stochastic

More information

Fenchel Decomposition for Stochastic Mixed-Integer Programming

Fenchel Decomposition for Stochastic Mixed-Integer Programming Fenchel Decomposition for Stochastic Mixed-Integer Programming Lewis Ntaimo Department of Industrial and Systems Engineering, Texas A&M University, 3131 TAMU, College Station, TX 77843, USA, ntaimo@tamu.edu

More information

STRC. A Lagrangian relaxation technique for the demandbased benefit maximization problem

STRC. A Lagrangian relaxation technique for the demandbased benefit maximization problem A Lagrangian relaxation technique for the demandbased benefit maximization problem Meritxell Pacheco Paneque Bernard Gendron Virginie Lurkin Shadi Sharif Azadeh Michel Bierlaire Transport and Mobility

More information

Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach

Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach L. Jeff Hong Department of Industrial Engineering and Logistics Management The Hong Kong University of Science

More information

Stochastic Programming: From statistical data to optimal decisions

Stochastic Programming: From statistical data to optimal decisions Stochastic Programming: From statistical data to optimal decisions W. Römisch Humboldt-University Berlin Department of Mathematics (K. Emich, H. Heitsch, A. Möller) Page 1 of 24 6th International Conference

More information

A robust approach to the chance-constrained knapsack problem

A robust approach to the chance-constrained knapsack problem A robust approach to the chance-constrained knapsack problem Olivier Klopfenstein 1,2, Dritan Nace 2 1 France Télécom R&D, 38-40 rue du gl Leclerc, 92794 Issy-les-Moux cedex 9, France 2 Université de Technologie

More information

Disconnecting Networks via Node Deletions

Disconnecting Networks via Node Deletions 1 / 27 Disconnecting Networks via Node Deletions Exact Interdiction Models and Algorithms Siqian Shen 1 J. Cole Smith 2 R. Goli 2 1 IOE, University of Michigan 2 ISE, University of Florida 2012 INFORMS

More information

Three-partition Flow Cover Inequalities for Constant Capacity Fixed-charge Network Flow Problems

Three-partition Flow Cover Inequalities for Constant Capacity Fixed-charge Network Flow Problems Three-partition Flow Cover Inequalities for Constant Capacity Fixed-charge Network Flow Problems Alper Atamtürk, Andrés Gómez Department of Industrial Engineering & Operations Research, University of California,

More information

Benders Decomposition Methods for Structured Optimization, including Stochastic Optimization

Benders Decomposition Methods for Structured Optimization, including Stochastic Optimization Benders Decomposition Methods for Structured Optimization, including Stochastic Optimization Robert M. Freund April 29, 2004 c 2004 Massachusetts Institute of echnology. 1 1 Block Ladder Structure We consider

More information

Regularized optimization techniques for multistage stochastic programming

Regularized optimization techniques for multistage stochastic programming Regularized optimization techniques for multistage stochastic programming Felipe Beltrán 1, Welington de Oliveira 2, Guilherme Fredo 1, Erlon Finardi 1 1 UFSC/LabPlan Universidade Federal de Santa Catarina

More information

Lagrangean relaxation

Lagrangean relaxation Lagrangean relaxation Giovanni Righini Corso di Complementi di Ricerca Operativa Joseph Louis de la Grange (Torino 1736 - Paris 1813) Relaxations Given a problem P, such as: minimize z P (x) s.t. x X P

More information

Modeling Uncertainty in Linear Programs: Stochastic and Robust Linear Programming

Modeling Uncertainty in Linear Programs: Stochastic and Robust Linear Programming Modeling Uncertainty in Linear Programs: Stochastic and Robust Programming DGA PhD Student - PhD Thesis EDF-INRIA 10 November 2011 and motivations In real life, Linear Programs are uncertain for several

More information

The multi-period incremental service facility location problem

The multi-period incremental service facility location problem Computers & Operations Research ( ) www.elsevier.com/locate/cor The multi-period incremental service facility location problem Maria Albareda-Sambola a,, Elena Fernández a, Yolanda Hinojosa b, Justo Puerto

More information

Decomposition Algorithms with Parametric Gomory Cuts for Two-Stage Stochastic Integer Programs

Decomposition Algorithms with Parametric Gomory Cuts for Two-Stage Stochastic Integer Programs Decomposition Algorithms with Parametric Gomory Cuts for Two-Stage Stochastic Integer Programs Dinakar Gade, Simge Küçükyavuz, Suvrajeet Sen Integrated Systems Engineering 210 Baker Systems, 1971 Neil

More information

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection Ruken Düzgün Aurélie Thiele July 2012 Abstract This paper describes a multi-range robust optimization approach

More information

A Progressive Hedging Approach to Multistage Stochastic Generation and Transmission Investment Planning

A Progressive Hedging Approach to Multistage Stochastic Generation and Transmission Investment Planning A Progressive Hedging Approach to Multistage Stochastic Generation and Transmission Investment Planning Yixian Liu Ramteen Sioshansi Integrated Systems Engineering Department The Ohio State University

More information

Lagrange Relaxation: Introduction and Applications

Lagrange Relaxation: Introduction and Applications 1 / 23 Lagrange Relaxation: Introduction and Applications Operations Research Anthony Papavasiliou 2 / 23 Contents 1 Context 2 Applications Application in Stochastic Programming Unit Commitment 3 / 23

More information

On Robust Optimization of Two-Stage Systems

On Robust Optimization of Two-Stage Systems Mathematical Programming manuscript No. (will be inserted by the editor) Samer Takriti Shabbir Ahmed On Robust Optimization of Two-Stage Systems Received: date / Revised version: date Abstract. Robust-optimization

More information

Disjunctive Decomposition for Two-Stage Stochastic Mixed-Binary Programs with GUB Constraints

Disjunctive Decomposition for Two-Stage Stochastic Mixed-Binary Programs with GUB Constraints Disjunctive Decomposition for Two-Stage Stochastic Mixed-Binary Programs with GUB Constraints Brian Keller Booz Allen Hamilton, 134 National Business Parkway, Annapolis Junction, MD 20701, USA, keller

More information

Comparison of Modern Stochastic Optimization Algorithms

Comparison of Modern Stochastic Optimization Algorithms Comparison of Modern Stochastic Optimization Algorithms George Papamakarios December 214 Abstract Gradient-based optimization methods are popular in machine learning applications. In large-scale problems,

More information

Lecture 23 Branch-and-Bound Algorithm. November 3, 2009

Lecture 23 Branch-and-Bound Algorithm. November 3, 2009 Branch-and-Bound Algorithm November 3, 2009 Outline Lecture 23 Modeling aspect: Either-Or requirement Special ILPs: Totally unimodular matrices Branch-and-Bound Algorithm Underlying idea Terminology Formal

More information

arxiv: v3 [math.oc] 25 Apr 2018

arxiv: v3 [math.oc] 25 Apr 2018 Problem-driven scenario generation: an analytical approach for stochastic programs with tail risk measure Jamie Fairbrother *, Amanda Turner *, and Stein W. Wallace ** * STOR-i Centre for Doctoral Training,

More information

IBM Research Report. Stochasic Unit Committment Problem. Julio Goez Lehigh University. James Luedtke University of Wisconsin

IBM Research Report. Stochasic Unit Committment Problem. Julio Goez Lehigh University. James Luedtke University of Wisconsin RC24713 (W0812-119) December 23, 2008 Mathematics IBM Research Report Stochasic Unit Committment Problem Julio Goez Lehigh University James Luedtke University of Wisconsin Deepak Rajan IBM Research Division

More information

Stochastic programs with binary distributions: Structural properties of scenario trees and algorithms

Stochastic programs with binary distributions: Structural properties of scenario trees and algorithms INSTITUTT FOR FORETAKSØKONOMI DEPARTMENT OF BUSINESS AND MANAGEMENT SCIENCE FOR 12 2017 ISSN: 1500-4066 October 2017 Discussion paper Stochastic programs with binary distributions: Structural properties

More information

Structured Problems and Algorithms

Structured Problems and Algorithms Integer and quadratic optimization problems Dept. of Engg. and Comp. Sci., Univ. of Cal., Davis Aug. 13, 2010 Table of contents Outline 1 2 3 Benefits of Structured Problems Optimization problems may become

More information

Best subset selection via bi-objective mixed integer linear programming

Best subset selection via bi-objective mixed integer linear programming Best subset selection via bi-objective mixed integer linear programming Hadi Charkhgard a,, Ali Eshragh b a Department of Industrial and Management Systems Engineering, University of South Florida, Tampa,

More information

Reformulation of chance constrained problems using penalty functions

Reformulation of chance constrained problems using penalty functions Reformulation of chance constrained problems using penalty functions Martin Branda Charles University in Prague Faculty of Mathematics and Physics EURO XXIV July 11-14, 2010, Lisbon Martin Branda (MFF

More information

Extended Formulations, Lagrangian Relaxation, & Column Generation: tackling large scale applications

Extended Formulations, Lagrangian Relaxation, & Column Generation: tackling large scale applications Extended Formulations, Lagrangian Relaxation, & Column Generation: tackling large scale applications François Vanderbeck University of Bordeaux INRIA Bordeaux-Sud-Ouest part : Defining Extended Formulations

More information

ORIGINS OF STOCHASTIC PROGRAMMING

ORIGINS OF STOCHASTIC PROGRAMMING ORIGINS OF STOCHASTIC PROGRAMMING Early 1950 s: in applications of Linear Programming unknown values of coefficients: demands, technological coefficients, yields, etc. QUOTATION Dantzig, Interfaces 20,1990

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

Asymptotic analysis of a greedy heuristic for the multi-period single-sourcing problem: the acyclic case

Asymptotic analysis of a greedy heuristic for the multi-period single-sourcing problem: the acyclic case Asymptotic analysis of a greedy heuristic for the multi-period single-sourcing problem: the acyclic case H. Edwin Romeijn Dolores Romero Morales August 29, 2003 Abstract The multi-period single-sourcing

More information

IV. Violations of Linear Programming Assumptions

IV. Violations of Linear Programming Assumptions IV. Violations of Linear Programming Assumptions Some types of Mathematical Programming problems violate at least one condition of strict Linearity - Deterministic Nature - Additivity - Direct Proportionality

More information

Models and Algorithms for Stochastic and Robust Vehicle Routing with Deadlines

Models and Algorithms for Stochastic and Robust Vehicle Routing with Deadlines Accepted in Transportation Science manuscript (Please, provide the mansucript number!) Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes

More information

SMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines

SMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines vs for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines Ding Ma Michael Saunders Working paper, January 5 Introduction In machine learning,

More information