A Probabilistic Model for Minmax Regret in Combinatorial Optimization


Karthik Natarajan, Dongjian Shi, Kim-Chuan Toh

May 21, 2013

Abstract. In this paper, we propose a new probabilistic model for minimizing the anticipated regret in combinatorial optimization problems with distributional uncertainty in the objective coefficients. The interval uncertainty representation of data is supplemented with information on the marginal distributions. As a decision criterion, we minimize the worst-case conditional value-at-risk of regret. The proposed model includes the interval data minmax regret as a special case. For the class of combinatorial optimization problems with a compact convex hull representation, a polynomial-sized mixed integer linear program (MILP) is formulated when (a) the range and mean are known, and (b) the range, mean and mean absolute deviation are known, while a mixed integer second order cone program (MISOCP) is formulated when (c) the range, mean and standard deviation are known. For the subset selection problem of choosing K elements of maximum total weight out of a set of N elements, the probabilistic regret model is shown to be solvable in polynomial time in instances (a) and (b) above. This extends the currently known polynomial complexity result for minmax regret subset selection with range information only.

Engineering Systems and Design, Singapore University of Technology and Design, 20 Dover Drive, Singapore. natarajan karthik@sutd.edu.sg
Department of Mathematics, National University of Singapore, 10 Lower Kent Ridge Road, Singapore. shidongjian@nus.edu.sg
Department of Mathematics, National University of Singapore, 10 Lower Kent Ridge Road, Singapore. mattohkc@nus.edu.sg

1 Minmax Regret Combinatorial Optimization

Let Z(c) denote the optimal value of a linear combinatorial optimization problem over a feasible region X ⊆ {0, 1}^N for the objective coefficient vector c:

Z(c) = max { c^T y : y ∈ X ⊆ {0, 1}^N }.   (1.1)

Consider a decision-maker who needs to decide on a solution x ∈ X before knowing the actual value of the objective coefficients. Let Ω represent a deterministic uncertainty set that captures all the possible realizations of the vector c. Under the regret criterion, the decision-maker experiences an ex-post regret of possibly not choosing the optimal solution. The value of regret in absolute terms is given by:

R(x, c) = Z(c) − c^T x,   (1.2)

where R(x, c) ≥ 0. The maximum value of regret for a decision x corresponding to the uncertainty set Ω is given as:

max_{c ∈ Ω} R(x, c).   (1.3)

Savage [39] proposed the use of the following minimax regret model, where the decision x is chosen to minimize the maximum regret over all possible realizations of the uncertainty:

min_{x ∈ X} max_{c ∈ Ω} R(x, c).   (1.4)

One of the early references on minmax regret models in combinatorial optimization is the work of Kouvelis and Yu [27]. The computational complexity of solving the minmax regret problem has been extensively studied therein under the following two representations of Ω:

(a) Scenario uncertainty: the vector c lies in a finite set of M possible discrete scenarios: Ω = {c_1, c_2, ..., c_M}.

(b) Interval uncertainty: each component c_i of the vector c takes a value between a lower bound c̲_i and an upper bound c̄_i. Let Ω_i = [c̲_i, c̄_i] for i = 1, ..., N. The uncertainty set is the Cartesian product of the intervals: Ω = Ω_1 × Ω_2 × ... × Ω_N.

For the discrete scenario uncertainty, the minmax regret counterparts of problems such as the shortest path, minimum assignment and minimum spanning tree problems are NP-hard even when the scenario
set contains only two scenarios (see Kouvelis and Yu [27]). This indicates the difficulty of solving regret problems to optimality, since the original deterministic optimization problems are solvable in polynomial time in these instances. These problems are weakly NP-hard for a constant number of scenarios, while they become strongly NP-hard when the number of scenarios is non-constant. In the interval uncertainty case, for any x ∈ X, let S⁺(x) denote the scenario in which c_i = c̄_i if x_i = 0, and c_i = c̲_i if x_i = 1. It is straightforward to see that S⁺(x) is the worst-case scenario that maximizes the regret in (1.3) for a fixed x ∈ X. For deterministic combinatorial optimization problems with a compact convex hull representation, this worst-case scenario can be used to develop compact MILP formulations for the minmax regret problem (1.4) (refer to Yaman et al. [44] and Kasperski [24]). As in the scenario uncertainty case, the minmax regret counterpart is NP-hard under interval uncertainty for most classical polynomial time solvable combinatorial optimization problems. Averbakh and Lebedev [6] proved that the minmax regret shortest path and minmax regret minimum spanning tree problems are strongly NP-hard with interval uncertainty. Under the assumption that the deterministic problem is polynomial time solvable, a 2-approximation algorithm for minmax regret was designed by Kasperski and Zieliński [25]. Their algorithm is based on a mid-point scenario approach where the deterministic combinatorial optimization problem is solved with the objective coefficient vector (c̲ + c̄)/2. Kasperski and Zieliński [26] developed a fully polynomial time approximation scheme under the assumption that a pseudopolynomial algorithm is available for the deterministic problem. A special case where the minmax regret problem is solvable in polynomial time is the subset selection problem.
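The worst-case scenario S⁺(x) makes the maximum regret of a fixed decision easy to evaluate, and the mid-point heuristic needs only one deterministic solve. A minimal brute-force sketch, assuming the feasible set X is small enough to enumerate explicitly (the function names and the explicit-enumeration interface are illustrative, not from the paper):

```python
def max_regret(x, lo, hi, feasible):
    """Maximum regret of decision x under interval uncertainty.

    The worst-case scenario S+(x) sets c_i to the upper bound c̄_i when
    x_i = 0 and to the lower bound c̲_i when x_i = 1; the maximum regret
    (1.3) is then Z(c) - c^T x evaluated at that single scenario.
    """
    c = [lo[i] if x[i] == 1 else hi[i] for i in range(len(x))]
    z = max(sum(ci * yi for ci, yi in zip(c, y)) for y in feasible)  # Z(c)
    return z - sum(ci * xi for ci, xi in zip(c, x))

def midpoint_solution(lo, hi, feasible):
    """Kasperski-Zielinski mid-point heuristic: solve the deterministic
    problem once with c = (c̲ + c̄)/2; the returned solution has maximum
    regret at most twice the optimum."""
    mid = [(l + h) / 2.0 for l, h in zip(lo, hi)]
    return max(feasible, key=lambda y: sum(m * yi for m, yi in zip(mid, y)))
```

For instance, with N = 2, ranges [1, 3] and [2, 4], and a feasible set that picks exactly one element, the decision (1, 0) has maximum regret 3 while (0, 1) has maximum regret 1, and the mid-point heuristic returns (0, 1).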
The deterministic subset selection problem is: given a set of elements [N] := {1, ..., N} with weights {c_1, ..., c_N}, select a subset of K elements of maximum total weight. The deterministic problem can be solved by a simple sorting algorithm. With an interval uncertainty representation of the weights, Averbakh [5] designed a polynomial time algorithm that solves the minmax regret problem to optimality in O(N (min{K, N−K})²) time. Subsequently, Conde [14] designed a faster algorithm with running time O(N min{K, N−K}). A related model that has been analyzed in discrete optimization is the absolute robust approach (see Kouvelis and Yu [27] and Bertsimas and Sim [11]), where the decision-maker chooses a decision x that maximizes the minimum objective over all possible realizations of the uncertainty:

max_{x ∈ X} min_{c ∈ Ω} c^T x.   (1.5)

Problem (1.5) is referred to as the absolute robust counterpart of the deterministic optimization problem. The formulation for the absolute robust counterpart should be contrasted with the minmax regret
formulation, which can be viewed as the relative robust counterpart of the deterministic optimization problem. For the discrete scenario uncertainty, the absolute robust counterpart of the shortest path problem is NP-hard, as in the regret setting (see Kouvelis and Yu [27]). However, for the interval uncertainty case, the absolute robust counterpart retains the complexity of the deterministic problem, unlike the minmax regret counterpart. This follows from the observation that the worst-case realization of the uncertainty in absolute terms is to set the objective coefficient vector to the lower bound c̲, irrespective of the solution x. The minmax regret version, in contrast, is more difficult to solve since the worst-case realization depends on the solution x. However, this also implies that the minmax regret solution is less conservative, as it considers both the best and the worst case. For illustration, consider the binary decision problem of deciding whether or not to invest in a single project with payoff c:

Z(c) = max { cy : y ∈ {0, 1} }.

The payoff is uncertain and takes a value in the range c ∈ [c̲, c̄] where c̲ < 0 and c̄ > 0. The absolute robust solution is to not invest in the project, since in the worst case the payoff is negative. On the other hand, the minmax regret solution is to invest in the project if c̄ > −c̲ (the best payoff exceeds the magnitude of the worst loss) and not invest otherwise. Since the regret criterion evaluates performance with respect to the best decision, it is not as conservative as the absolute robust solution. However, computing the minmax regret solution is more difficult than computing the absolute robust solution. The minmax regret model handles support information and assumes that the decision-maker uses the worst scenario in terms of regret to make the decision. However, if additional probabilistic information is known or can be estimated from data, it is natural to incorporate this information into the regret model.
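The two decision rules in the single-project example can be written down directly. A minimal sketch, assuming c̲ < 0 < c̄ as in the text (the function names are illustrative):

```python
def absolute_robust_invest(lo, hi):
    # Worst-case payoff of investing is lo (negative here); of not
    # investing, 0.  The robust rule invests only if even the worst
    # payoff is nonnegative.
    return 1 if lo >= 0 else 0

def minmax_regret_invest(lo, hi):
    # Regret of investing: at worst the payoff is lo, so regret is -lo.
    # Regret of not investing: at best the payoff is hi, so regret is hi.
    # Invest when the best payoff exceeds the magnitude of the worst loss.
    return 1 if hi > -lo else 0
```

With c ∈ [−3, 5] the robust rule declines while the regret rule invests; with c ∈ [−6, 5] both decline.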
To quantify the impact of probabilistic information on regret, consider the graph in Figure 1. In this graph, there are three paths connecting node A to node D: the paths 1–4 and 2–5, and a third path that uses edge 3. Consider a decision-maker who wants to go from node A to node D in the shortest possible time by choosing among the three paths. The mean µ_i and range [c̲_i, c̄_i] for each edge i in Figure 1 denote the average time and the range of possible times (in hours) to traverse the edge. The comparison of the different paths is shown in the following table:
Figure 1: Find a shortest path from node A to node D (edge time ranges: c_1 ∈ [2, 5], c_2 ∈ [5, 9], c_3 ∈ [3, 7], c_4 ∈ [5, 11], c_5 ∈ [3, 4])

Table 1: Comparison of the three paths under the regret criterion (worst-case scenario and maximum regret), the absolute robust criterion (worst-case scenario and maximum time), and the average criterion (expected time).

In the minmax regret model, the optimal decision is the path 2–5, with a regret of 6 hours. However, on average this path takes 0.5 hours more than the other two paths. In terms of expected cost, the optimal decision is either of the other two paths. Note that only the range information is used in the minmax regret model, and only the mean information is used to minimize the expected cost. Clearly, the choice of an optimal path depends on the decision criterion and on the available data that guides the decision process. In this paper, we propose an integer programming approach for probabilistic regret in combinatorial optimization that incorporates partial distributional information, such as the mean and variability of the random coefficients, and provides flexibility in modeling the decision-maker's aversion to regret. The structure and the contributions of the paper are summarized next:

1. In Section 2, a new probabilistic model for minmax regret in combinatorial optimization is proposed. As a decision criterion, the worst-case conditional value-at-risk of cost has been previously studied. We extend this to the concept of regret, which can be viewed as a relative cost with respect to the best solution. The proposed model incorporates limited probabilistic information on the uncertainty, such as knowledge of the mean, mean absolute deviation or standard deviation, while also providing flexibility to model the decision-maker's attitude to regret. In the special
case, the probabilistic regret criterion reduces to the traditional minmax regret criterion and to the expected objective criterion, respectively.

2. In Section 3, we develop a tractable formulation to compute the worst-case conditional value-at-risk of regret for a fixed solution x ∈ X. The worst-case conditional value-at-risk of regret is shown to be computable in polynomial time if the deterministic optimization problem is solvable in polynomial time. This generalizes the current result for the interval uncertainty model, where the worst-case regret for a fixed solution x ∈ X is known to be computable in polynomial time when the deterministic optimization problem is solvable in polynomial time.

3. In Section 4, we formulate conic mixed integer programs to solve the probabilistic regret model. For the class of combinatorial optimization problems with a compact convex hull representation, a polynomial-sized MILP is developed when (a) the range and mean are given, and (b) the range, mean and mean absolute deviation are given. If (c) the range, mean and standard deviation are given, we develop a polynomial-sized MISOCP. The formulations in this section generalize the polynomial-sized MILP formulation that solves the minmax regret model when only the range is given.

4. In Section 5, we provide a polynomial time algorithm to solve the probabilistic regret counterpart of subset selection when (a) the range and mean, and (b) the range, mean and mean absolute deviation are given. This extends the current results of Averbakh [5] and Conde [14], who proved the polynomial complexity of the minmax regret counterpart of subset selection, which uses range information only. Together, Sections 3, 4 and 5 generalize several of the results for the interval uncertainty regret model to the probabilistic regret model.

5. In Section 6, numerical examples for the shortest path and subset selection problems are provided.
The numerical results illustrate the structure of the solutions generated by the probabilistic regret model, while identifying the computational improvement obtained by the polynomial time algorithm proposed in Section 5 relative to a generic MILP formulation.

2 Worst-case Conditional Value-at-risk of Regret

Let c̃ denote the random objective coefficient vector with a probability distribution P that is itself unknown. P is assumed to lie in a set of distributions P(Ω), where Ω is the support of the random
vector. In the simplest model, the decision-maker minimizes the anticipated regret in an expected sense:

min_{x ∈ X} sup_{P ∈ P(Ω)} E_P[R(x, c̃)].   (2.1)

Model (2.1) includes two important subcases: (a) P(Ω) is the set of all probability distributions with support Ω, in which case (2.1) reduces to the standard minmax regret model (1.4); and (b) the complete distribution is given, with P(Ω) = {P}, in which case (2.1) reduces to solving the deterministic optimization problem with the random objective replaced by the mean vector µ, since

argmin_{x ∈ X} E_P[Z(c̃) − c̃^T x] = argmin_{x ∈ X} ( E_P[Z(c̃)] − µ^T x ) = argmax_{x ∈ X} µ^T x.

Formulation (2.1), however, does not capture the degree of regret aversion. Furthermore, as long as the mean vector is fixed, the optimal decision in (2.1) is the optimal solution to the deterministic problem with the mean objective. Thus the solution is insensitive to other distributional information such as variability. To address this, we propose the use of the conditional value-at-risk measure, which has been gaining popularity in the risk management literature.

2.1 Conditional Value-at-risk Measure

Conditional value-at-risk is also referred to as average value-at-risk or expected shortfall in the risk management literature. We briefly review this concept here. Consider a random variable r̃ defined on a probability space (Π, F, Q), i.e., a real-valued function r̃(ω): Π → R with finite second moment E[r̃²] < ∞. This ensures that the conditional value-at-risk is finite. For the random variables considered in this paper, the finiteness of the second moment is guaranteed since the random variables are assumed to lie within a finite range. For a given α ∈ (0, 1), the value-at-risk is defined as the lower α-quantile of the random variable r̃:

VaR_α(r̃) = inf { v : Q(r̃ ≤ v) ≥ α }.   (2.2)

The definition of conditional value-at-risk is provided next.

Definition 1 (Rockafellar and Uryasev [36, 37], Acerbi and Tasche [1]). For α ∈ (0, 1), the conditional value-at-risk (CVaR) at level α of a random variable r̃(ω): Π → R is the average of the highest 1 − α of the outcomes:

CVaR_α(r̃) = (1/(1 − α)) ∫_α^1 VaR_β(r̃) dβ.   (2.3)
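For an equally weighted discrete sample, the tail average in Definition 1 is just the mean of the highest (1 − α) fraction of the outcomes. A minimal sketch, assuming (1 − α)·n is an integer so the tail cuts cleanly between sample points:

```python
def cvar(sample, alpha):
    """CVaR_alpha of an equally weighted sample: the average of the
    highest (1 - alpha) fraction of the outcomes (Definition 1).
    Exact when (1 - alpha) * len(sample) is an integer."""
    r = sorted(sample)
    k = round((1 - alpha) * len(r))  # number of tail outcomes
    return sum(r[-k:]) / k
```

For example, on the sample {1, 2, 3, 4}, CVaR at level 0.5 averages the two largest outcomes, giving 3.5, and at level 0.75 it returns the single largest outcome, 4.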

An equivalent representation for CVaR is:

CVaR_α(r̃) = inf_{v ∈ R} { v + (1/(1 − α)) E_Q[r̃ − v]⁺ }.   (2.4)

From an axiomatic perspective, CVaR is an example of a coherent risk measure (see Artzner et al. [4], Föllmer and Schied [16] and Frittelli and Gianin [17]) and satisfies the following four properties:

1. Monotonicity: if r̃_1(ω) ≤ r̃_2(ω) for each outcome, then CVaR_α(r̃_1) ≤ CVaR_α(r̃_2).
2. Translation invariance: if c ∈ R, then CVaR_α(r̃_1 + c) = CVaR_α(r̃_1) + c.
3. Convexity: if λ ∈ [0, 1], then CVaR_α(λr̃_1 + (1 − λ)r̃_2) ≤ λ CVaR_α(r̃_1) + (1 − λ) CVaR_α(r̃_2).
4. Positive homogeneity: if λ ≥ 0, then CVaR_α(λr̃_1) = λ CVaR_α(r̃_1).

Furthermore, CVaR is an attractive risk measure for stochastic optimization since it is convexity preserving, unlike the VaR measure. However, the computation of CVaR might still be intractable (see Ben-Tal et al. [7] for a detailed discussion). An instance where the computation of CVaR is tractable is for discrete distributions with a polynomial number of scenarios. Optimization with the CVaR measure has been used in portfolio optimization [36] and inventory control [3], among other stochastic optimization problems. Combinatorial optimization problems under the CVaR measure have been studied by So et al. [42]:

min_{x ∈ X} CVaR_α(−c̃^T x).   (2.5)

The negative sign in formulation (2.5) captures the feature that higher values of c̃^T x are preferred to lower values. Formulation (2.5) can be viewed as a regret minimization problem where the regret is defined with respect to an absolute benchmark of zero. Using a sample average approximation method, So et al. [42] propose approximation algorithms to solve (2.5) for covering, facility location and Steiner tree problems. In the distributional uncertainty representation, the concept of conditional value-at-risk is extended to the worst-case conditional value-at-risk through the following definition.

Definition 2 (Zhu and Fukushima [46], Natarajan et al. [31]). Suppose the distribution of the random variable r̃ lies in a set Q. For α ∈ (0, 1), the worst-case conditional value-at-risk (WCVaR) at level α of a random variable r̃ with respect to Q is defined as:

WCVaR_α(r̃) = sup_{Q ∈ Q} inf_{v ∈ R} { v + (1/(1 − α)) E_Q[r̃ − v]⁺ }.   (2.6)
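When the set of distributions is a finite family of equally weighted samples, the sup and inf in (2.6) decouple: for each fixed distribution the inner infimum is the ordinary CVaR from the representation (2.4), attained at a sample point, so WCVaR is simply the largest of the per-distribution CVaRs. A minimal sketch under that assumption:

```python
import numpy as np

def wcvar(samples, alpha):
    """Worst-case CVaR (2.6) over a finite set of discrete distributions,
    each given as an equally weighted sample.  The inner infimum over v
    is the ordinary CVaR via (2.4); for a discrete sample it is attained
    at one of the sample points, so a scan over them suffices."""
    def cvar(sample):
        r = np.asarray(sample, dtype=float)
        return min(v + np.maximum(r - v, 0.0).mean() / (1.0 - alpha)
                   for v in r)
    return max(cvar(s) for s in samples)
```

With the two samples {1, 2, 3, 4} and {0, 0, 10, 10} at α = 0.5, the per-distribution CVaRs are 3.5 and 10, so the worst-case value is 10.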

From an axiomatic perspective, WCVaR has also been shown to be a coherent risk measure under mild assumptions on the set of distributions (see the discussions in Zhu and Fukushima [46] and Natarajan et al. [31]). WCVaR has been used as a risk measure in distributionally robust portfolio optimization [46, 31] and in joint chance constrained optimization problems [13, 48]. Zhu and Fukushima [46] and Natarajan et al. [31] also provide examples of sets of distributions Q for which the positions of the sup and inf can be exchanged in formula (2.6). Since the objective is linear in the probability measure (possibly infinite-dimensional) over which it is maximized, and convex in the variable v over which it is minimized, the saddle point theorem from Rockafellar [38] is applicable. Applying Theorem 6 in [38] implies the following lemma:

Lemma 1. Let α ∈ (0, 1), and suppose the distribution of the random variable r̃ lies in a set Q. If Q is a convex set of probability distributions defined on a closed convex support set Ω ⊆ R^n, then

WCVaR_α(r̃) = inf_{v ∈ R} { v + (1/(1 − α)) sup_{Q ∈ Q} E_Q[r̃ − v]⁺ }.   (2.7)

For all sets of distributions P(Ω) studied in this paper, the condition in Lemma 1 is satisfied. We propose the use of the worst-case conditional value-at-risk of regret as a decision criterion in combinatorial optimization problems. The central problem of interest is:

min_{x ∈ X} WCVaR_α(R(x, c̃)) = min_{x ∈ X} inf_{v ∈ R} { v + (1/(1 − α)) sup_{P ∈ P(Ω)} E_P[R(x, c̃) − v]⁺ }.   (2.8)

2.2 Marginal Distribution and Marginal Moment Models

To generalize the interval uncertainty model, supplemental marginal distributional information on the random vector c̃ is assumed to be given. The random variables are, however, not assumed to be independent. The following two models are considered:

(a) Marginal distribution model: for each i ∈ [N], the marginal probability distribution P_i of c̃_i with support Ω_i = [c̲_i, c̄_i] is assumed to be given. Let P(P_1, ..., P_N) denote the set of joint distributions with the fixed marginals. This is commonly referred to as the Fréchet class of distributions.

(b) Marginal moment model: for each i ∈ [N], the probability distribution P_i of c̃_i with support Ω_i = [c̲_i, c̄_i] is assumed to belong to a set of probability measures P_i. The set P_i is defined through moment equality constraints on real-valued functions of the form E_{P_i}[f_ik(c̃_i)] = m_ik, k ∈ [K_i]. If f_ik(c̃_i) = c̃_i^k, this reduces to knowing the first K_i moments of c̃_i. Let P(P_1, ..., P_N) denote the set of multivariate joint distributions compatible with the marginal probability distributions
P_i ∈ P_i. Throughout the paper, we assume that mild Slater type conditions hold on the moment information to guarantee that strong duality is applicable for moment problems. One simple sufficient condition is that the moment vector lies in the interior of the set of feasible moments (see Isii [22]). With the marginal moment specification, the multivariate moment space is the product of univariate moment spaces. Ensuring that Slater type conditions hold in this case is relatively straightforward, since it reduces to Slater conditions for univariate moment spaces. The reader is referred to Bertsimas et al. [9] and Lasserre [28] for a detailed description of this topic. The moment representation of uncertainty in distributions has been used in the minmax regret newsvendor problem [45, 34]. A newsvendor needs to choose an order quantity q of a product before the exact value of demand is known, by balancing the costs of under-ordering and over-ordering. The random demand is represented by d̃ with a probability distribution P. The unit selling price is p, the unit cost is c, and the salvage value for any unsold product is 0. A risk-neutral firm chooses its quantity to maximize its expected profit:

max_{q ≥ 0} p E_P[min(q, d̃)] − cq,

where min(q, d̃) is the actual quantity of units sold, which depends on the demand realization. In the minmax regret version of this problem studied in [45, 34], the newsvendor chooses the order quantity when the demand distribution is not exactly known. The demand distribution is assumed to belong to a set of probability measures P, typically characterized with moment information. The objective is to minimize the maximum loss in profit from not knowing the full distribution:

min_{q ≥ 0} max_{P ∈ P} [ max_{s ≥ 0} ( p E_P[min(s, d̃)] − cs ) − ( p E_P[min(q, d̃)] − cq ) ].

Yue et al. [45] solved this model analytically when only the mean and variance of demand are known.
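When the demand distribution is fully specified, the risk-neutral problem above is solved by the classical critical fractile. A minimal sketch on an empirical (equally weighted) demand sample, assuming zero salvage value as in the text (the sample-based interface is illustrative):

```python
import numpy as np

def newsvendor_quantity(demand_sample, p, c):
    """Risk-neutral order quantity maximizing p*E[min(q, d)] - c*q on an
    empirical demand distribution: the smallest sample point at which
    the empirical CDF reaches the critical fractile (p - c) / p."""
    d = np.sort(np.asarray(demand_sample, dtype=float))
    k = int(np.ceil((p - c) / p * len(d)))  # 1-based order statistic
    return float(d[max(k - 1, 0)])
```

With demand sample {10, 20, 30, 40}, price p = 10 and cost c = 5, the critical fractile is 0.5 and the order quantity is 20; lowering the cost to c = 2 raises the fractile to 0.8 and the order quantity to 40.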
Roels and Perakis [34] generalized this model to incorporate additional moments and information on the shape of the demand distribution. On the other hand, if the demand were known with certainty, the optimal order quantity would be exactly the demand, and the maximum profit would be (p − c)d̃. The regret model as proposed in this paper is:

min_{q ≥ 0} inf_{v ∈ R} { v + (1/(1 − α)) sup_{P ∈ P(Ω)} E_P[ (p − c)d̃ − ( p min(q, d̃) − cq ) − v ]⁺ },

where α is the parameter that captures aversion to regret. There are two major differences between the minmax regret newsvendor model in [45, 34] and the regret model proposed in this paper. The first difference is that in [45, 34] the newsvendor minimizes the maximum ex-ante regret with respect to
distributions of not knowing the right distribution, while in this paper the decision-maker minimizes the ex-post regret with respect to cost coefficient realizations of not knowing the right objective coefficients. The second difference is that the newsvendor problem deals with a single demand variable, whereas in the multi-dimensional case the marginal model forms the natural extension and is a more tractable formulation. The new probabilistic regret model can be related to standard minmax regret. In the marginal moment model, if only the range information of each random variable c̃_i is given, then the WCVaR of regret reduces to the maximum regret. Consider the random vector whose distribution is the Dirac measure δ_{ĉ(x)} with ĉ_i(x) = c̄_i (1 − x_i) + c̲_i x_i for i ∈ [N]. Then the WCVaR of regret satisfies:

inf_{v ∈ R} { v + (1/(1 − α)) sup_{P ∈ P(P_1,...,P_N)} E_P[R(x, c̃) − v]⁺ }
≥ inf_{v ∈ R} { v + (1/(1 − α)) E_{δ_{ĉ(x)}}[R(x, c̃) − v]⁺ }
= inf_{v ∈ R} { v + (1/(1 − α)) [R(x, ĉ(x)) − v]⁺ }
= R(x, ĉ(x))
= max_{c ∈ Ω} R(x, c).

The last equality is valid since ĉ(x) is the worst-case scenario for the given x ∈ X. Moreover, the WCVaR of regret cannot be larger than the maximum value of regret; hence the two are equal in this case. When α = 0, problem (2.8) reduces to minimizing the worst-case expected regret, min_{x ∈ X} sup_{P ∈ P(P_1,...,P_N)} E_P[R(x, c̃)]. On the other hand, as α converges to 1, WCVaR_α(R(x, c̃)) converges to max_{c ∈ Ω} R(x, c), and problem (2.8) reduces to the traditional interval uncertainty minmax regret model. This implies that the problem of minimizing the WCVaR of regret in this probabilistic model is NP-hard, since the minmax regret problem is NP-hard [6]. The parameter α thus allows the flexibility to vary the degree of regret aversion. If a decision x_1 is preferred to a decision x_2 for each realization of the uncertainty, it is natural to conjecture that x_1 is preferred to x_2 in the regret model. The following lemma validates this monotonicity property for the chosen criterion.

Lemma 2. For two decisions x_1, x_2 ∈ X, if x_1 dominates x_2 in each realization of the uncertainty, i.e., c^T x_1 ≥ c^T x_2 for all c ∈ Ω, then the decision x_1 is preferred to x_2, i.e., WCVaR_α(R(x_1, c̃)) ≤ WCVaR_α(R(x_2, c̃)).

Proof. Since c^T x_1 ≥ c^T x_2 for all c ∈ Ω,

R(x_1, c) = max_{y ∈ X} c^T y − c^T x_1 ≤ max_{y ∈ X} c^T y − c^T x_2 = R(x_2, c), ∀ c ∈ Ω.
Thus [R(x_1, c) − v]⁺ ≤ [R(x_2, c) − v]⁺ for all c ∈ Ω and v ∈ R. Hence, for any distribution P ∈ P, E_P[R(x_1, c̃) − v]⁺ ≤ E_P[R(x_2, c̃) − v]⁺, which implies sup_{P ∈ P} E_P[R(x_1, c̃) − v]⁺ ≤ sup_{P ∈ P} E_P[R(x_2, c̃) − v]⁺. Therefore,

inf_{v ∈ R} { v + (1/(1 − α)) sup_{P ∈ P} E_P[R(x_1, c̃) − v]⁺ } ≤ inf_{v ∈ R} { v + (1/(1 − α)) sup_{P ∈ P} E_P[R(x_2, c̃) − v]⁺ },

that is, WCVaR_α(R(x_1, c̃)) ≤ WCVaR_α(R(x_2, c̃)). □

3 Computation of the WCVaR of Regret and Cost

In this section, we compute the WCVaR of regret and cost for a fixed x ∈ X in the marginal distribution and marginal moment models. This is motivated by bounds in PERT networks that were proposed by Meilijson and Nadas [30] and later extended in the works of Klein Haneveld [21], Weiss [43], Birge and Maddox [12] and Bertsimas et al. [9]. In a PERT network, let [N] represent the set of activities. Each activity i ∈ [N] is associated with a random activity time c̃_i and marginal distribution P_i. Meilijson and Nadas [30] computed the worst-case expected project tardiness sup_{P ∈ P(P_1,...,P_N)} E_P[Z(c̃) − v]⁺, where Z(c) denotes the time to complete the project and v denotes a deadline for the project. Their approach can be summarized as follows. For all d ∈ R^N and c ∈ Ω:

[Z(c) − v]⁺ = [ max_{y ∈ X} ((c − d) + d)^T y − v ]⁺
≤ [ max_{y ∈ X} d^T y − v ]⁺ + [ max_{y ∈ X} (c − d)^T y ]⁺
≤ [Z(d) − v]⁺ + Σ_{i=1}^N [c_i − d_i]⁺.

Taking expectations with respect to a distribution P ∈ P(P_1, ..., P_N) and minimizing over d ∈ R^N gives the bound:

E_P[Z(c̃) − v]⁺ ≤ inf_{d ∈ R^N} { [Z(d) − v]⁺ + Σ_{i=1}^N E_{P_i}[c̃_i − d_i]⁺ }, ∀ P ∈ P(P_1, ..., P_N).

Meilijson and Nadas [30] constructed a multivariate probability distribution consistent with the marginal distributions such that the upper bound is attained. This leads to their main observation that the worst-case expected project tardiness is obtained by solving the following convex minimization problem:

sup_{P ∈ P(P_1,...,P_N)} E_P[Z(c̃) − v]⁺ = inf_{d ∈ R^N} { [Z(d) − v]⁺ + Σ_{i=1}^N E_{P_i}[c̃_i − d_i]⁺ }.   (3.1)
With partial marginal distribution information, Klein Haneveld [21], Birge and Maddox [12] and Bertsimas et al. [9] extended the convex formulation of the worst-case expected project tardiness to:

sup_{P ∈ P(P_1,...,P_N)} E_P[Z(c̃) − v]⁺ = inf_{d ∈ R^N} { [Z(d) − v]⁺ + Σ_{i=1}^N sup_{P_i ∈ P_i} E_{P_i}[c̃_i − d_i]⁺ }.   (3.2)

Klein Haneveld [21] estimated a project deadline v that balances the expected project tardiness with respect to the most unfavorable distribution against the cost of choosing the deadline for the project. This can be formulated as a two-stage recourse problem:

inf_{v ∈ R} { v + (1/(1 − α)) sup_{P ∈ P(P_1,...,P_N)} E_P[Z(c̃) − v]⁺ },   (3.3)

where α ∈ (0, 1) is the tradeoff parameter between the two costs. Formulation (3.3) is clearly equivalent to estimating the worst-case conditional value-at-risk of the project completion time. We extend these results to the regret framework in the following subsection.

3.1 Worst-case Conditional Value-at-risk of Regret

To compute the WCVaR of regret, we first consider the subproblem sup_{P ∈ P(Ω)} E_P[Z(c̃) − c̃^T x − v]⁺ of the central problem (2.8). The proof of Theorem 1 is inspired by proof techniques in Doan and Natarajan [15] and Natarajan et al. [32].

Theorem 1. For each i ∈ [N], assume that the marginal distribution P_i of the continuously distributed random variable c̃_i with support Ω_i = [c̲_i, c̄_i] is given. For x ∈ X ⊆ {0, 1}^N and v ≥ 0, define

φ(x, v) := sup_{P ∈ P(P_1,...,P_N)} E_P[Z(c̃) − c̃^T x − v]⁺

and

φ̄(x, v) := min_{d ∈ Ω} { [Z(d) − d^T x − v]⁺ + (d − µ)^T x + Σ_{i=1}^N E_{P_i}[c̃_i − d_i]⁺ }.

Then φ(x, v) = φ̄(x, v).

Proof. Define

φ_0(x, v) := sup_{P ∈ P(P_1,...,P_N)} E_P[ max( Z(c̃), c̃^T x + v ) ]

and

φ̄_0(x, v) := min_{d ∈ Ω} { max( Z(d), d^T x + v ) + Σ_{i=1}^N E_{P_i}[c̃_i − d_i]⁺ }.
Since max( Z(c), c^T x + v ) = [Z(c) − c^T x − v]⁺ + c^T x + v, proving φ(x, v) = φ̄(x, v) is equivalent to proving φ_0(x, v) = φ̄_0(x, v).

Step 1: Prove that φ_0(x, v) ≤ φ̄_0(x, v). For any c ∈ Ω = Ω_1 × Ω_2 × ... × Ω_N, the following holds:

max( Z(c), c^T x + v ) = max( max_{y ∈ X} ((c − d) + d)^T y, (c − d)^T x + d^T x + v )
≤ max( max_{y ∈ X} d^T y + max_{y ∈ X} (c − d)^T y, d^T x + v + (c − d)^T x )
≤ max( Z(d) + Σ_{i=1}^N [c_i − d_i]⁺, d^T x + v + Σ_{i=1}^N [c_i − d_i]⁺ )
= max( Z(d), d^T x + v ) + Σ_{i=1}^N [c_i − d_i]⁺.

Taking expectations with respect to a probability measure P ∈ P(P_1, ..., P_N) and the minimum with respect to d ∈ Ω, we get

E_P[ max( Z(c̃), c̃^T x + v ) ] ≤ φ̄_0(x, v), ∀ P ∈ P(P_1, ..., P_N).

Taking the supremum with respect to P ∈ P(P_1, ..., P_N) implies φ_0(x, v) ≤ φ̄_0(x, v).

Step 2: Prove that φ_0(x, v) ≥ φ̄_0(x, v). First, we claim that

φ̄_0(x, v) = min_{d ∈ R^N} { max( Z(d), d^T x + v ) + Σ_{i=1}^N E_{P_i}[c̃_i − d_i]⁺ },   (3.4)

i.e., the minimum over d ∈ Ω equals the minimum over all of R^N. Indeed, for any d ∈ R^N \ Ω, we can choose d' ∈ Ω with

d'_i = d_i if d_i ∈ [c̲_i, c̄_i], d'_i = c̄_i if d_i > c̄_i, d'_i = c̲_i if d_i < c̲_i,

such that the objective value at d' is less than or equal to the objective value at d. The reason is that if d_i > c̄_i, setting d'_i = c̄_i leaves the second term of the objective in (3.4) unchanged while the first term decreases or stays constant; if d_i < c̲_i, setting d'_i = c̲_i decreases the second term by c̲_i − d_i while the first term increases by at most c̲_i − d_i. Hence φ̄_0(x, v) can be expressed as:

φ̄_0(x, v) = min_{d, t} { t + Σ_{i=1}^N E_{P_i}[c̃_i − d_i]⁺ }   (3.5)
s.t. t ≥ d^T y, ∀ y ∈ X,
t ≥ d^T x + v.

For a fixed x ∈ X, (3.5) is a convex programming problem in the decision variables d and t. The Karush-Kuhn-Tucker (KKT) conditions for (3.5) are given as follows:

λ(y) ≥ 0, t ≥ d^T y, ∀ y ∈ X, and s ≥ 0, t ≥ d^T x + v,   (3.6)
Σ_{y ∈ X} λ(y) + s = 1,   (3.7)
λ(y) ( max( Z(d), d^T x + v ) − d^T y ) = 0, ∀ y ∈ X,   (3.8)
s ( max( Z(d), d^T x + v ) − d^T x − v ) = 0,   (3.9)
P(c̃_i ≥ d_i) = Σ_{y ∈ X: y_i = 1} λ(y) + s x_i, ∀ i ∈ [N].   (3.10)

There exists an optimal d* in the compact set Ω and optimal t* = max( Z(d*), d*^T x + v ) for problem (3.5). Under the standard Slater conditions for strong duality in convex optimization, there exist dual variables s*, λ*(y) such that d*, t*, s*, λ*(y) satisfy the KKT conditions. For the rest of the proof, we let d*, t*, s*, λ*(y) denote an optimal solution satisfying the KKT conditions. We construct a distribution P* as follows:

(a) Generate a random vector ỹ that takes the value y ∈ X with probability λ*(y) if y ≠ x, and takes the value x ∈ X with probability s*. Note that λ*(x) = 0 from the KKT condition (3.8).

(b) Define the set I_1 = { i ∈ [N] : c̲_i < d*_i < c̄_i } and I_2 = [N] \ I_1. For i ∈ I_1, generate the random variable c̃_i with the conditional probability density function

f_i(c_i | ỹ = y) = (1 / P(c̃_i ≥ d*_i)) I_{[d*_i, c̄_i]}(c_i) f_i(c_i) if y_i = 1,
f_i(c_i | ỹ = y) = (1 / P(c̃_i < d*_i)) I_{[c̲_i, d*_i)}(c_i) f_i(c_i) if y_i = 0,

and for i ∈ I_2, generate the random variable c̃_i with the conditional probability density function f_i(c_i | ỹ = y) = f_i(c_i). For i ∈ I_2, the marginal probability density function of c̃_i under P* is f_i(c_i). For i ∈ I_1, the marginal probability density function is:

f*_i(c_i) = Σ_{y ∈ X} λ*(y) f_i(c_i | ỹ = y) + s* f_i(c_i | ỹ = x)
= Σ_{y ∈ X: y_i = 1} λ*(y) (1 / P(c̃_i ≥ d*_i)) I_{[d*_i, c̄_i]}(c_i) f_i(c_i) + s* x_i (1 / P(c̃_i ≥ d*_i)) I_{[d*_i, c̄_i]}(c_i) f_i(c_i)
+ Σ_{y ∈ X: y_i = 0} λ*(y) (1 / P(c̃_i < d*_i)) I_{[c̲_i, d*_i)}(c_i) f_i(c_i) + s* (1 − x_i) (1 / P(c̃_i < d*_i)) I_{[c̲_i, d*_i)}(c_i) f_i(c_i)
= I_{[d*_i, c̄_i]}(c_i) f_i(c_i) + I_{[c̲_i, d*_i)}(c_i) f_i(c_i)   (by (3.10) and (3.7))
= f_i(c_i).

The constructed distribution hence belongs to P(P_1, ..., P_N). Therefore,

φ_0(x, v) ≥ E_{P*}[ max( Z(c̃), c̃^T x + v ) ]
= Σ_{y ∈ X} λ*(y) E_{P*}[ max( Z(c̃), c̃^T x + v ) | ỹ = y ] + s* E_{P*}[ max( Z(c̃), c̃^T x + v ) | ỹ = x ]
≥ Σ_{y ∈ X} λ*(y) E_{P*}[ Z(c̃) | ỹ = y ] + s* E_{P*}[ c̃^T x + v | ỹ = x ]
≥ Σ_{y ∈ X} λ*(y) E_{P*}[ c̃^T y | ỹ = y ] + s* E_{P*}[ c̃^T x + v | ỹ = x ].

By the construction of the conditional densities and (3.10), for each i ∈ I_1,

Σ_{y ∈ X: y_i = 1} λ*(y) E_{P*}[ c̃_i | ỹ = y ] + s* x_i E_{P*}[ c̃_i | ỹ = x ] = ∫ c_i I_{[d*_i, c̄_i]}(c_i) f_i(c_i) dc_i.

Since P(c̃_i ≥ d*_i) = 1 or 0 for i ∈ I_2, we have P(c̃_i ≥ d*_i) ∫ c_i f_i(c_i) dc_i = ∫ c_i I_{[d*_i, c̄_i]}(c_i) f_i(c_i) dc_i, so the same identity holds for i ∈ I_2. Then we obtain

φ_0(x, v) ≥ Σ_{i=1}^N ∫ c_i I_{[d*_i, c̄_i]}(c_i) f_i(c_i) dc_i + s* v
= Σ_{i=1}^N ∫ (c_i − d*_i) I_{[d*_i, c̄_i]}(c_i) f_i(c_i) dc_i + Σ_{i=1}^N d*_i ∫ I_{[d*_i, c̄_i]}(c_i) f_i(c_i) dc_i + s* v
= Σ_{i=1}^N E_{P_i}[c̃_i − d*_i]⁺ + Σ_{i=1}^N d*_i ( Σ_{y ∈ X: y_i = 1} λ*(y) + s* x_i ) + s* v   (by (3.10))
= Σ_{i=1}^N E_{P_i}[c̃_i − d*_i]⁺ + Σ_{y ∈ X} λ*(y) d*^T y + s* ( d*^T x + v )
= Σ_{i=1}^N E_{P_i}[c̃_i − d*_i]⁺ + Σ_{y ∈ X} λ*(y) max( Z(d*), d*^T x + v ) + s* ( d*^T x + v )   (by (3.8))
= Σ_{i=1}^N E_{P_i}[c̃_i − d*_i]⁺ + (1 − s*) max( Z(d*), d*^T x + v ) + s* ( d*^T x + v )   (by (3.7))
= Σ_{i=1}^N E_{P_i}[(c̃_i − d_i)^+] + max(Z(d), d^T x + v) (by (3.9))
= φ̄₀(x, v).

It is useful to contrast the regret bound in Theorem 1 with the earlier bound of Meilijson and Nadas [30] in (3.1). In Theorem 1, the worst-case joint distribution depends on the solution x ∈ X and the scalar v; the worst-case joint distribution in formulation (3.1), however, depends on the scalar v only. The proof of Theorem 1 extends directly to discrete marginal distributions by replacing the integrals with summations and using linear programming duality. This result generalizes to the marginal moment model and piecewise linear convex functions, as illustrated in the next theorem. The proof of Theorem 2 is inspired by proof techniques in Bertsimas et al. [9] and Natarajan et al. [32].

Theorem 2. For x ∈ X ⊆ {0,1}^N, consider the marginal moment model:

P_i = {P_i : E_{P_i}[f_ik(c̃_i)] = m_ik, k ∈ [K_i], E_{P_i}[I_[c̲_i, c̄_i](c̃_i)] = 1},

where I_[c̲_i, c̄_i](c_i) = 1 if c̲_i ≤ c_i ≤ c̄_i and 0 otherwise. Assume that the moments lie in the interior of the set of feasible moment vectors. Define

φ(x, a, b) := sup_{P ∈ P(P_1,...,P_N)} E_P[g(Z(c̃) − c̃^T x)], (3.11)

where g is a non-decreasing piecewise linear convex function defined by g(z) = max_{j∈[J]} (a_j z + b_j), with 0 ≤ a_1 < a_2 < ... < a_J. Let

φ̄(x, a, b) := min_{d_1,...,d_J ∈ Ω} { max_{j∈[J]} [a_j (Z(d_j) − d_j^T x) + b_j] + Σ_{i=1}^N sup_{P_i∈P_i} E_{P_i}[ max_{j∈[J]} a_j ((c̃_i − d_ji)^+ − (c̃_i − d_ji) x_i) ] }. (3.12)

Then φ(x, a, b) = φ̄(x, a, b).

Proof.
Step 1: Prove that φ(x, a, b) ≤ φ̄(x, a, b).

For any c ∈ Ω and d_1, ..., d_J ∈ Ω, the following holds:

g(Z(c) − c^T x) = max_{j∈[J]} [a_j ( max_{y∈X} ((c − d_j)^T y + d_j^T y) − c^T x + d_j^T x − d_j^T x ) + b_j]
≤ max_{j∈[J]} [a_j ( max_{y∈X} d_j^T y − d_j^T x ) + b_j] + max_{j∈[J]} a_j [ max_{y∈X} (c − d_j)^T y − (c − d_j)^T x ]
≤ max_{j∈[J]} [a_j (Z(d_j) − d_j^T x) + b_j] + max_{j∈[J]} a_j [ Σ_{i=1}^N ((c_i − d_ji)^+ − (c_i − d_ji) x_i) ].

The first inequality is due to the subadditivity of Z, and the second one follows from the fact that max_{y∈X} (c − d_j)^T y ≤ Σ_{i=1}^N (c_i − d_ji)^+ and a_j ≥ 0 for all j ∈ [J]. For any distribution P, taking expectations on both sides of the above inequality gives

E_P[g(Z(c̃) − c̃^T x)] ≤ max_{j∈[J]} [a_j (Z(d_j) − d_j^T x) + b_j] + E_P[ max_{j∈[J]} a_j Σ_{i=1}^N ((c̃_i − d_ji)^+ − (c̃_i − d_ji) x_i) ]
≤ max_{j∈[J]} [a_j (Z(d_j) − d_j^T x) + b_j] + Σ_{i=1}^N E_{P_i}[ max_{j∈[J]} a_j ((c̃_i − d_ji)^+ − (c̃_i − d_ji) x_i) ].

Note that the last inequality follows from the fact that max_{j∈[J]} a_j Σ_{i=1}^N [(c̃_i − d_ji)^+ − (c̃_i − d_ji) x_i] ≤ Σ_{i=1}^N max_{j∈[J]} a_j [(c̃_i − d_ji)^+ − (c̃_i − d_ji) x_i]. The above inequality holds for any distribution P ∈ P(P_1,...,P_N) and any d_1, ..., d_J ∈ Ω. Taking the supremum with respect to P ∈ P(P_1,...,P_N) and the minimum with respect to d_1, ..., d_J ∈ Ω, we get

φ(x, a, b) = sup_{P∈P(P_1,...,P_N)} E_P[g(Z(c̃) − c̃^T x)]
≤ min_{d_1,...,d_J∈Ω} { max_{j∈[J]} [a_j (Z(d_j) − d_j^T x) + b_j] + Σ_{i=1}^N sup_{P_i∈P_i} E_{P_i}[ max_{j∈[J]} a_j ((c̃_i − d_ji)^+ − (c̃_i − d_ji) x_i) ] }
= φ̄(x, a, b).

Step 2: Prove that φ(x, a, b) ≥ φ̄(x, a, b).

Consider the dual problem of (3.11) and the dual of the supremum problem in (3.12). Since the moments lie in the interior of the set of feasible moment vectors, strong duality holds (see Isii [22]). Hence

φ(x, a, b) = min y_00 + Σ_{i=1}^N Σ_{k=1}^{K_i} y_ik m_ik
s.t. y_00 + Σ_{i=1}^N Σ_{k=1}^{K_i} y_ik f_ik(c_i) − [a_j (c^T y − c^T x) + b_j] ≥ 0, ∀ c ∈ Ω, y ∈ X, j ∈ [J], (3.13)

and

φ̄(x, a, b) = min max_{j∈[J]} [a_j (Z(d_j) − d_j^T x) + b_j] + Σ_{i=1}^N ( ȳ_i0 + Σ_{k=1}^{K_i} ȳ_ik m_ik )
s.t. p_i1(c_i) := ȳ_i0 + Σ_{k=1}^{K_i} ȳ_ik f_ik(c_i) − a_j (c_i − d_ji)(1 − x_i) ≥ 0, ∀ c_i ∈ Ω_i, i ∈ [N], j ∈ [J], (3.14)
p_i2(c_i) := ȳ_i0 + Σ_{k=1}^{K_i} ȳ_ik f_ik(c_i) + a_j (c_i − d_ji) x_i ≥ 0, ∀ c_i ∈ Ω_i, i ∈ [N], j ∈ [J].

Let (y*_00, y*_ik, k ∈ [K_i], i ∈ [N]) be an optimal solution to (3.13). Now generate a feasible solution to (3.14) as follows. Set ȳ_ik = y*_ik for k ∈ [K_i], i ∈ [N]. Having fixed ȳ_ik, choose ȳ_i0 and d_ji based on the value x_i = 1 or 0 in the following manner:

1. If x_i = 1, we choose ȳ_i0 to be the minimal value such that p_i1(c_i) is nonnegative over Ω_i. Namely, there exists some c*_i ∈ Ω_i such that p_i1(c*_i) = 0. Then for all j ∈ [J], choose d_ji to be the maximal value such that p_i2(c_i) is nonnegative over Ω_i. This value can be chosen such that d_ji ∈ Ω_i. To verify this, observe that since x_i = 1, p_i2(c_i) = p_i1(c_i) + a_j (c_i − d_ji). If a_j = 0 the result is obvious; if a_j > 0 then d_ji = c̲_i is feasible since p_i1(c_i) ≥ 0 for all c_i ∈ Ω_i. Hence d_ji ≥ c̲_i, since it is chosen as the maximal value such that p_i2(c_i) is nonnegative over Ω_i. Moreover, d_ji ≤ c̄_i, or else p_i2(c*_i) = a_j (c*_i − d_ji) < 0. Hence d_ji ∈ Ω_i.

2. If x_i = 0, we choose ȳ_i0 to be the minimal value such that p_i2(c_i) is nonnegative over Ω_i. Then for all j ∈ [J], choose d_ji to be the minimal value such that p_i1(c_i) is nonnegative over Ω_i. An argument similar to case 1 shows that d_ji can be restricted to the set Ω_i.

Now, for any j ∈ [J] and any y ∈ X, from the constraints of (3.14) we obtain

ȳ_i0 + Σ_{k=1}^{K_i} y*_ik f_ik(c_i) − a_j (c_i − d_ji)(y_i − x_i) ≥ 0, ∀ c_i ∈ Ω_i, i ∈ [N]. (3.15)

By the choice of ȳ_i0 and d_ji, the value ȳ_i0 + a_j d_ji (y_i − x_i) is the minimal value of the constant term for which the above inequality holds over Ω_i, for each i ∈ [N]. Take the summation of these N inequalities:

Σ_{i=1}^N ( ȳ_i0 + Σ_{k=1}^{K_i} y*_ik f_ik(c_i) ) − a_j (c − d_j)^T (y − x) ≥ 0, ∀ c ∈ Ω, y ∈ X, j ∈ [J]. (3.16)

Note that in general, given N univariate functions p_i(c_i) = Σ_{k=1}^{K_i} a_ik f_ik(c_i) + a_i0 such that a_i0 is the minimal value for p_i(c_i) to be nonnegative over Ω_i, the minimal value of a_00 for the multivariate function p(c) = Σ_{i=1}^N Σ_{k=1}^{K_i} a_ik f_ik(c_i) + a_00 to be nonnegative over Ω is Σ_{i=1}^N a_i0. By setting a_i0 = ȳ_i0 + a_j d_ji (y_i − x_i) and a_00 = y*_00 − b_j, and comparing (3.16) with the constraint of (3.13):

y*_00 + Σ_{i=1}^N Σ_{k=1}^{K_i} y*_ik f_ik(c_i) − [a_j (c^T y − c^T x) + b_j] ≥ 0, ∀ c ∈ Ω, y ∈ X, j ∈ [J],

this leads to the following result:

y*_00 − b_j ≥ Σ_{i=1}^N ȳ_i0 + a_j d_j^T (y − x), ∀ y ∈ X, j ∈ [J],

which is equivalent to

y*_00 ≥ Σ_{i=1}^N ȳ_i0 + max_{j∈[J]} [a_j (Z(d_j) − d_j^T x) + b_j].

Therefore

φ̄(x, a, b) ≤ max_{j∈[J]} [a_j (Z(d_j) − d_j^T x) + b_j] + Σ_{i=1}^N ( ȳ_i0 + Σ_{k=1}^{K_i} y*_ik m_ik ) ≤ y*_00 + Σ_{i=1}^N Σ_{k=1}^{K_i} y*_ik m_ik = φ(x, a, b).

The next proposition provides an extension of the results in Meilijson and Nadas [30] and Bertsimas et al. [9] to the worst-case conditional value-at-risk of regret.

Proposition 1. Consider the marginal distribution model with P_i = {P_i}, i ∈ [N], or the marginal moment model with P_i = {P_i : E_{P_i}[f_ik(c̃_i)] = m_ik, k ∈ [K_i], E_{P_i}[I_[c̲_i, c̄_i](c̃_i)] = 1}, i ∈ [N]. For x ∈ X ⊆ {0,1}^N, the worst-case CVaR of regret can be computed as

WCVaR_α(R(x, c̃)) = min_{d∈Ω} { Z(d) + (α/(1−α)) d^T x + (1/(1−α)) Σ_{i=1}^N sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i] }. (3.17)

Proof. From the definition of WCVaR in (2.8):

WCVaR_α(R(x, c̃)) = inf_{v∈R} { v + (1/(1−α)) sup_{P∈P(P_1,...,P_N)} E_P[(Z(c̃) − c̃^T x − v)^+] }.

Applying Theorems 1 and 2, we have:

sup_{P∈P(P_1,...,P_N)} E_P[(Z(c̃) − c̃^T x − v)^+] = min_{d∈Ω} { (Z(d) − d^T x − v)^+ + Σ_{i=1}^N sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − (c̃_i − d_i) x_i] }. (3.18)

The worst-case CVaR of regret is thus computed as:

inf_{v∈R} min_{d∈Ω} { v + (1/(1−α)) (Z(d) − d^T x − v)^+ + (1/(1−α)) Σ_{i=1}^N sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − (c̃_i − d_i) x_i] }. (3.19)

In formulation (3.19), the optimal decision variable is v* = Z(d) − d^T x, which results in the desired formulation.

This formulation is appealing computationally since it exploits the marginal distributional representation of the uncertainty. The next result identifies conditions under which the worst-case CVaR of regret is computable in polynomial time for a fixed solution x ∈ X.

Theorem 3. Assume the following two conditions hold:

(a) The deterministic combinatorial optimization problem is solvable in polynomial time, and

(b) For each i ∈ [N], G_i(d_i) := sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i] and its subgradient with respect to d_i are computable in polynomial time for a fixed d_i ∈ Ω_i and x_i.

Then for a given solution x ∈ X, the worst-case CVaR of regret under the marginal distribution or marginal moment models is computable in polynomial time.

Proof. From Proposition 1, the worst-case CVaR of regret is computed as:

WCVaR_α(R(x, c̃)) = min_{d,t,s} { t + (α/(1−α)) d^T x + (1/(1−α)) Σ_{i=1}^N s_i }
s.t. t ≥ Z(d),
s_i ≥ sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i], i = 1, ..., N, (3.20)
d ∈ Ω.

Denote the feasible set of (3.20) by K. We consider the separation problem for (3.20): given (d, t, s), decide whether (d, t, s) ∈ K, and if not, find a hyperplane which separates (d, t, s) from K. Under assumptions (a) and (b), we can check whether (d, t, s) ∈ K in polynomial time, and if not, we consider the following two situations:

1. If t < Z(d), we can find y* ∈ X such that Z(d) = d^T y* in polynomial time. It follows that the hyperplane {(d, t, s) : (y*)^T d = t} separates (d, t, s) from K.

2. If s_i < G_i(d_i) for some i ∈ [N], then we can find a separating hyperplane in polynomial time, since the subgradient of G_i(d_i) is computable in polynomial time.

The remaining constraints d ∈ Ω are 2N linear constraints that are easy to enforce. Hence, the separation problem for (3.20) can be solved in polynomial time. It follows that the WCVaR of regret under the marginal model is computable in polynomial time.

Many combinatorial optimization problems satisfy assumption (a) in Theorem 3. Examples include the longest path problem on a directed acyclic graph, spanning tree problems and assignment problems. Moreover, in the marginal distribution model and several instances of the marginal moment model, assumption (b) in Theorem 3 is easy to verify. For both the continuous and discrete marginal distribution models, G_i(d_i) is a convex function of d_i and a subgradient of the function is given by −P(c̃_i ≥ d_i). For the marginal moment model when (a) the range and mean are given, or (b) the range, mean and mean absolute deviation are given, G_i(d_i) is a piecewise linear convex function that is efficiently computable (see Madansky [29] and Ben-Tal and Hochman [8]). If P̂_i ∈ P_i denotes the extremal distribution that attains the bound sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i] in these instances, then a subgradient of the function is given by −P̂_i(c̃_i ≥ d_i).
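To make assumption (b) concrete in the range-and-mean instance, the sketch below (ours, for illustration only; function names are not from the paper) evaluates sup_{P_i} E[(c̃_i − d_i)^+] through the two-point endpoint distribution and checks it against an exhaustive search over two-point supports on a grid:

```python
def worst_case_plus(d, lo, hi, mu):
    """sup over distributions on [lo, hi] with mean mu of E[(c - d)^+].

    Since (c - d)^+ is convex in c, the sup is attained by the two-point
    distribution on the endpoints {lo, hi} with mean mu.
    """
    p_hi = (mu - lo) / (hi - lo)  # mass on the upper endpoint
    return p_hi * max(hi - d, 0.0) + (1.0 - p_hi) * max(lo - d, 0.0)

def grid_sup(d, lo, hi, mu, n=200):
    """Brute force: best two-point distribution with supports on a grid.

    Every basic feasible solution of the one-moment LP has at most two
    support points, so this search is exact up to the grid resolution.
    """
    best = 0.0
    pts = [lo + (hi - lo) * k / n for k in range(n + 1)]
    for a in pts:
        for b in pts:
            if a <= mu <= b and b > a:
                p = (mu - a) / (b - a)  # mass on b so that the mean is mu
                best = max(best, (1 - p) * max(a - d, 0.0) + p * max(b - d, 0.0))
    return best
```

On (c̲_i, c̄_i) this function is linear in d with slope −p_hi, i.e. −P̂_i(c̃_i ≥ d_i), which is consistent with the subgradient used in Theorem 3; a finite-difference check confirms this numerically.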

3.2 Worst-case Conditional Value-at-Risk of Cost

In this subsection, we apply the previous results to combinatorial optimization problems with the objective of minimizing the worst-case CVaR of the cost:

min_{x∈X} WCVaR_α(−c̃^T x). (3.21)

Applying arguments similar to those in Proposition 1, we obtain the following proposition under the marginal model.

Proposition 2. Consider the marginal distribution model with P_i = {P_i}, i ∈ [N], or the marginal moment model with P_i = {P_i : E_{P_i}[f_ik(c̃_i)] = m_ik, k ∈ [K_i], E_{P_i}[I_[c̲_i, c̄_i](c̃_i)] = 1}, i ∈ [N]. For x ∈ X ⊆ {0,1}^N, the worst-case CVaR of the cost can be computed as

WCVaR_α(−c̃^T x) = min_{d∈Ω} { (α/(1−α)) d^T x + (1/(1−α)) Σ_{i=1}^N sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i] }. (3.22)

Since the objective function in (3.22) is separable in the d_i, it can be expressed as:

WCVaR_α(−c̃^T x) = Σ_{i=1}^N ĥ_i(x_i), (3.23)

where the function ĥ_i(x_i) is defined as:

ĥ_i(x_i) = min_{d_i∈Ω_i} { (α/(1−α)) d_i x_i + (1/(1−α)) sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i] }.

Define the parameter h_i as the optimal value of a univariate convex programming problem:

h_i = min_{d_i∈Ω_i} { (α/(1−α)) d_i + (1/(1−α)) sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i] }.

Then the worst-case CVaR of the cost can be expressed as

WCVaR_α(−c̃^T x) = Σ_{i=1}^N h_i x_i.

To see why this is true, observe that if x_i = 1, then ĥ_i(1) = h_i, and if x_i = 0, then ĥ_i(0) = 0. Hence the problem of minimizing the worst-case CVaR of the cost can be formulated as the deterministic combinatorial optimization problem:

min_{x∈X} WCVaR_α(−c̃^T x) = min_{x∈X} Σ_{i=1}^N h_i x_i.

In the next section, we provide conic mixed integer programs to minimize the worst-case CVaR of regret.
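As a numerical sanity check on the separability claims above, the sketch below (our illustration, not the paper's code) evaluates ĥ_i(x_i) for a single discrete marginal by grid search over d_i; the objective is piecewise linear with kinks only at the support points, so a grid that contains those points minimizes it exactly, and ĥ_i(0) = 0 is attained at d_i = c̄_i as argued in the text.

```python
def h_hat(x_i, support, probs, alpha, lo, hi, n_grid=2000):
    """Evaluate the univariate problem defining h-hat_i(x_i):

        min over d in [lo, hi] of
            alpha/(1-alpha) * d * x_i + 1/(1-alpha) * E[(c - d)^+ - c * x_i]

    for a discrete marginal distribution; the sup over P_i is trivial here
    since P_i is a singleton.
    """
    mu = sum(c * p for c, p in zip(support, probs))
    best = float("inf")
    for k in range(n_grid + 1):
        d = lo + (hi - lo) * k / n_grid
        surplus = sum(p * max(c - d, 0.0) for c, p in zip(support, probs))
        best = min(best, alpha / (1 - alpha) * d * x_i
                         + (surplus - mu * x_i) / (1 - alpha))
    return best
```

For example, with support {1, 3, 6}, probabilities {0.2, 0.5, 0.3} and α = 0.9, the value ĥ_i(0) comes out to 0 while ĥ_i(1) gives the per-element constant h_i used in the combinatorial reformulation.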

4 Mixed Integer Programming Formulations

From Proposition 1, the problem of minimizing the WCVaR of regret is formulated as:

min_{x∈X} WCVaR_α(R(x, c̃)) = min_{x∈X, d∈Ω} { Z(d) + (α/(1−α)) d^T x + (1/(1−α)) H(x, d) }, (4.1)

where

H(x, d) := Σ_{i=1}^N sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i]. (4.2)

Formulation (4.1) is a stochastic nonconvex mixed integer programming problem where the nonconvexity appears in the bilinear term d^T x. For bilinear terms, several linearization techniques have been proposed in the literature by Glover [18], Glover and Woolsey [19, 20], Sherali and Alameddine [41] and Adams and Sherali [2], among others. These alternative linearization techniques vary significantly in terms of their computational performance. We adopt the simplest linearization technique from Glover [18] to handle bilinear terms in which one set of variables is restricted to be binary. For all i ∈ [N] and x ∈ X ⊆ {0,1}^N, the constraints

c̲_i x_i ≤ z_i ≤ c̄_i x_i, d_i − c̄_i (1 − x_i) ≤ z_i ≤ d_i − c̲_i (1 − x_i) (4.3)

enforce z_i = d_i x_i. By applying the linearization technique, (4.1) is reformulated as the following stochastic convex mixed integer program:

min_{x,d,z} Z(d) + (α/(1−α)) Σ_{i=1}^N z_i + (1/(1−α)) H(x, d)
s.t. c̲_i x_i ≤ z_i ≤ c̄_i x_i, i ∈ [N],
d_i − c̄_i (1 − x_i) ≤ z_i ≤ d_i − c̲_i (1 − x_i), i ∈ [N], (4.4)
d ∈ Ω, x ∈ X.

The objective function in (4.4) is convex with respect to (x, d, z) since convexity is preserved under the expectation and maximization operations. Assume that the feasible region X is described in the compact form:

X = { y ∈ {0,1}^N : Ay = b },

where A is a given integer matrix and b is a given integer vector. For the rest of this section, we assume that the matrix A is totally unimodular, namely each square submatrix of A has determinant equal to 0, +1, or −1. Under this assumption the deterministic combinatorial optimization problem is solvable in polynomial time as a compact linear program (see Schrijver [40]):

Z(d) = max { d^T y : Ay = b, 0 ≤ y_i ≤ 1, i ∈ [N] }. (4.5)

Many polynomially solvable 0-1 optimization problems fall under this category, including subset selection, longest path on a directed acyclic graph and linear assignment problems. Let λ_1, λ_2 be the vectors of dual variables associated with the constraints of (4.5). The dual linear program of (4.5) is given by

Z(d) = min { b^T λ_1 + e^T λ_2 : A^T λ_1 + λ_2 ≥ d, λ_2 ≥ 0 }, (4.6)

where e is the vector of all ones. By using the dual formulation of Z(d) in (4.4), we get:

min_{x,d,z,λ_1,λ_2} b^T λ_1 + e^T λ_2 + (α/(1−α)) Σ_{i=1}^N z_i + (1/(1−α)) H(x, d) (4.7)
s.t. c̲_i x_i ≤ z_i ≤ c̄_i x_i, i ∈ [N], (4.7a)
d_i − c̄_i (1 − x_i) ≤ z_i ≤ d_i − c̲_i (1 − x_i), i ∈ [N], (4.7b)
d ∈ Ω, A^T λ_1 + λ_2 ≥ d, λ_2 ≥ 0, x ∈ X. (4.7c)

The constraints in problem (4.7) are all linear except for the integrality restrictions in the description of X. To convert this to a conic mixed integer program, we apply standard conic programming methods to evaluate H(x, d) in the objective function.

4.1 Marginal Discrete Distribution Model

Assume that the marginal distributions of c̃ are discrete: c̃_i = c_ij with probability p_ij, j ∈ [J_i], i ∈ [N], where Σ_{j∈[J_i]} p_ij = 1 and Σ_{j∈[J_i]} c_ij p_ij = μ_i for each i ∈ [N]. The input specification for the marginal discrete distribution model needs J_1 + J_2 + ... + J_N probabilities, which is typically much smaller than the size of the input needed to specify the joint distribution, which needs up to J_1 J_2 ... J_N probabilities. In this case:

H(x, d) = Σ_{i=1}^N Σ_{j=1}^{J_i} (c_ij − d_i)^+ p_ij − μ^T x = min { Σ_{i=1}^N Σ_{j=1}^{J_i} t_ij p_ij − μ^T x : t_ij ≥ c_ij − d_i, t_ij ≥ 0, j ∈ [J_i], i ∈ [N] }.

The problem of minimizing the WCVaR of regret is thus formulated as the compact MILP:

min_{x,d,z,λ_1,λ_2,t} b^T λ_1 + e^T λ_2 + (α/(1−α)) Σ_{i=1}^N z_i + (1/(1−α)) ( Σ_{i=1}^N Σ_{j=1}^{J_i} t_ij p_ij − μ^T x ) (4.8)
s.t. t_ij ≥ c_ij − d_i, t_ij ≥ 0, j ∈ [J_i], i ∈ [N],
(4.7a), (4.7b) and (4.7c).

4.2 Marginal Moment Model

In the standard representation of the marginal moment model, H(x, d) is evaluated through conic optimization. This is based on the well-known duality theory of moments and nonnegative polynomials for univariate models. The reader is referred to Nesterov [33] and Bertsimas and Popescu [10] for details. We restrict attention to instances of the marginal moment model where (4.7) can be solved as a MILP or MISOCP. The advantage of these formulations is that the probabilistic regret model can be solved with standard off-the-shelf solvers such as CPLEX. The details are listed next:

(a) Range and Mean are Known: Assume the interval range and mean of the random vector c̃ are given:

P_i = {P_i : E_{P_i}[c̃_i] = μ_i, E_{P_i}[I_[c̲_i, c̄_i](c̃_i)] = 1}.

In this case, the optimal distribution for the problem sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i] is known explicitly (see Madansky [29] and Ben-Tal and Hochman [8]):

c̃_i = c̄_i with probability (μ_i − c̲_i)/(c̄_i − c̲_i), and c̃_i = c̲_i with probability (c̄_i − μ_i)/(c̄_i − c̲_i).

The worst-case marginal distribution is a two-point distribution and can be treated as a special case of the discrete marginal distribution. The probabilistic regret model is solved with the MILP (4.8).

(b) Range, Mean and Mean Absolute Deviation are Known: Assume the interval range, mean and the mean absolute deviation of the random vector c̃ are given:

P_i = {P_i : E_{P_i}[c̃_i] = μ_i, E_{P_i}|c̃_i − μ_i| = δ_i, E_{P_i}[I_[c̲_i, c̄_i](c̃_i)] = 1}.

For feasibility, the mean absolute deviation satisfies δ_i ≤ 2 (μ_i − c̲_i)(c̄_i − μ_i)/(c̄_i − c̲_i). The optimal distribution for sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i] has been identified by Ben-Tal and Hochman [8]:

c̃_i = c̲_i with probability δ_i/(2(μ_i − c̲_i)) =: p_i,
c̃_i = c̄_i with probability δ_i/(2(c̄_i − μ_i)) =: q_i,
c̃_i = μ_i with probability 1 − p_i − q_i.

This is a three-point distribution and the MILP reformulation (4.8) can be used.

(c) Range, Mean and Standard Deviation are Known: Assume the range, mean and the standard deviation of the random vector c̃ are given:

P_i = {P_i : E_{P_i}[c̃_i] = μ_i, E_{P_i}[c̃_i²] = μ_i² + σ_i², E_{P_i}[I_[c̲_i, c̄_i](c̃_i)] = 1}.

By using duality theory, we have:

sup_{P_i∈P_i} E_{P_i}[(c̃_i − d_i)^+ − c̃_i x_i] = min y_i0 + μ_i y_i1 + (μ_i² + σ_i²) y_i2 − μ_i x_i
s.t. y_i0 + y_i1 c_i + y_i2 c_i² − (c_i − d_i) ≥ 0, ∀ c_i ∈ [c̲_i, c̄_i], (4.9)
y_i0 + y_i1 c_i + y_i2 c_i² ≥ 0, ∀ c_i ∈ [c̲_i, c̄_i].

By applying the S-lemma to the constraints of the above problem, with multipliers τ_i1, τ_i2 ≥ 0 for the two interval constraints, problem (4.9) can be formulated as

min y_i0 + μ_i y_i1 + (μ_i² + σ_i²) y_i2 − μ_i x_i
s.t. τ_i1 ≥ 0, y_i0 + d_i + c̲_i c̄_i τ_i1 ≥ 0, y_i2 + τ_i1 ≥ 0,
τ_i2 ≥ 0, y_i0 + c̲_i c̄_i τ_i2 ≥ 0, y_i2 + τ_i2 ≥ 0,
‖ ( (y_i1 − 1 − (c̲_i + c̄_i) τ_i1)/2, (y_i0 + d_i + c̲_i c̄_i τ_i1 − y_i2 − τ_i1)/2 ) ‖₂ ≤ (y_i0 + d_i + c̲_i c̄_i τ_i1 + y_i2 + τ_i1)/2, (4.10)
‖ ( (y_i1 − (c̲_i + c̄_i) τ_i2)/2, (y_i0 + c̲_i c̄_i τ_i2 − y_i2 − τ_i2)/2 ) ‖₂ ≤ (y_i0 + c̲_i c̄_i τ_i2 + y_i2 + τ_i2)/2,

where the two norm constraints are second-order cone representations of the discriminant conditions for the nonnegativity of the two quadratics. The problem of minimizing the WCVaR of regret can be formulated in this case as the mixed integer SOCP:

min_{x,d,z,λ_1,λ_2,y,τ} b^T λ_1 + e^T λ_2 + (α/(1−α)) Σ_{i=1}^N z_i + (1/(1−α)) ( Σ_{i=1}^N ( y_i0 + μ_i y_i1 + (μ_i² + σ_i²) y_i2 ) − μ^T x )
s.t. τ_i1 ≥ 0, y_i0 + d_i + c̲_i c̄_i τ_i1 ≥ 0, y_i2 + τ_i1 ≥ 0, i ∈ [N],
τ_i2 ≥ 0, y_i0 + c̲_i c̄_i τ_i2 ≥ 0, y_i2 + τ_i2 ≥ 0, i ∈ [N], (4.11)
the second-order cone constraints of (4.10) for each i ∈ [N], and (4.7a), (4.7b), (4.7c).
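The three-point construction in case (b) is straightforward to verify numerically. The snippet below (ours, illustrative) builds the Ben-Tal and Hochman style distribution and confirms that it matches the prescribed range, mean and mean absolute deviation, which is what makes the MILP (4.8) applicable:

```python
def mad_three_point(lo, hi, mu, delta):
    """Three-point worst-case marginal for known range, mean and MAD.

    Requires the feasibility condition delta <= 2(mu - lo)(hi - mu)/(hi - lo).
    Returns a list of (support point, probability) pairs.
    """
    assert delta <= 2 * (mu - lo) * (hi - mu) / (hi - lo) + 1e-12
    p = delta / (2 * (mu - lo))   # mass on the lower endpoint
    q = delta / (2 * (hi - mu))   # mass on the upper endpoint
    return [(lo, p), (hi, q), (mu, 1.0 - p - q)]
```

A direct computation shows the mean is μ (the two endpoint deviations cancel) and the MAD is p(μ − c̲) + q(c̄ − μ) = δ/2 + δ/2 = δ.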

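For intuition on case (c), note that if the range constraint is dropped, the inner problem has a classical closed form (a Scarf-type bound): the supremum of E[(c̃ − d)^+] over all distributions with mean μ and standard deviation σ equals ((μ − d) + sqrt((μ − d)² + σ²))/2. The sketch below (ours, not part of the formulation above) uses this as a quick upper bound against which the value of (4.9) for any feasible marginal can be sanity-checked:

```python
import math

def scarf_bound(d, mu, sigma):
    """sup of E[(c - d)^+] over all distributions with mean mu and
    standard deviation sigma (no support restriction)."""
    return 0.5 * ((mu - d) + math.sqrt((mu - d) ** 2 + sigma ** 2))

def empirical_plus(d, support, probs):
    """E[(c - d)^+] for a given discrete distribution."""
    return sum(p * max(c - d, 0.0) for c, p in zip(support, probs))
```

Any marginal with the same mean and standard deviation must satisfy empirical_plus ≤ scarf_bound; for a symmetric two-point distribution evaluated at d = μ the two coincide.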

More information

Pareto Efficiency in Robust Optimization

Pareto Efficiency in Robust Optimization Pareto Efficiency in Robust Optimization Dan Iancu Graduate School of Business Stanford University joint work with Nikolaos Trichakis (HBS) 1/26 Classical Robust Optimization Typical linear optimization

More information

Generalized quantiles as risk measures

Generalized quantiles as risk measures Generalized quantiles as risk measures F. Bellini 1, B. Klar 2, A. Müller 3, E. Rosazza Gianin 1 1 Dipartimento di Statistica e Metodi Quantitativi, Università di Milano Bicocca 2 Institut für Stochastik,

More information

Tractable Robust Expected Utility and Risk Models for Portfolio Optimization

Tractable Robust Expected Utility and Risk Models for Portfolio Optimization Tractable Robust Expected Utility and Risk Models for Portfolio Optimization Karthik Natarajan Melvyn Sim Joline Uichanco Submitted: March 13, 2008. Revised: September 25, 2008, December 5, 2008 Abstract

More information

POLYNOMIAL-TIME IDENTIFICATION OF ROBUST NETWORK FLOWS UNDER UNCERTAIN ARC FAILURES

POLYNOMIAL-TIME IDENTIFICATION OF ROBUST NETWORK FLOWS UNDER UNCERTAIN ARC FAILURES POLYNOMIAL-TIME IDENTIFICATION OF ROBUST NETWORK FLOWS UNDER UNCERTAIN ARC FAILURES VLADIMIR L. BOGINSKI, CLAYTON W. COMMANDER, AND TIMOFEY TURKO ABSTRACT. We propose LP-based solution methods for network

More information

ORIGINS OF STOCHASTIC PROGRAMMING

ORIGINS OF STOCHASTIC PROGRAMMING ORIGINS OF STOCHASTIC PROGRAMMING Early 1950 s: in applications of Linear Programming unknown values of coefficients: demands, technological coefficients, yields, etc. QUOTATION Dantzig, Interfaces 20,1990

More information

A Two-Stage Moment Robust Optimization Model and its Solution Using Decomposition

A Two-Stage Moment Robust Optimization Model and its Solution Using Decomposition A Two-Stage Moment Robust Optimization Model and its Solution Using Decomposition Sanjay Mehrotra and He Zhang July 23, 2013 Abstract Moment robust optimization models formulate a stochastic problem with

More information

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Weijun Xie 1, Shabbir Ahmed 2, Ruiwei Jiang 3 1 Department of Industrial and Systems Engineering, Virginia Tech,

More information

Operations Research Letters. On a time consistency concept in risk averse multistage stochastic programming

Operations Research Letters. On a time consistency concept in risk averse multistage stochastic programming Operations Research Letters 37 2009 143 147 Contents lists available at ScienceDirect Operations Research Letters journal homepage: www.elsevier.com/locate/orl On a time consistency concept in risk averse

More information

Min-max-min robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1

Min-max-min robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1 Min-max- robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1 Christoph Buchheim, Jannis Kurtz 2 Faultät Mathemati, Technische Universität Dortmund Vogelpothsweg

More information

Stochastic Optimization with Risk Measures

Stochastic Optimization with Risk Measures Stochastic Optimization with Risk Measures IMA New Directions Short Course on Mathematical Optimization Jim Luedtke Department of Industrial and Systems Engineering University of Wisconsin-Madison August

More information

Distributionally Robust Newsvendor Problems. with Variation Distance

Distributionally Robust Newsvendor Problems. with Variation Distance Distributionally Robust Newsvendor Problems with Variation Distance Hamed Rahimian 1, Güzin Bayraksan 1, Tito Homem-de-Mello 1 Integrated Systems Engineering Department, The Ohio State University, Columbus

More information

A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization

A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization Dimitris Bertsimas Sloan School of Management and Operations Research Center, Massachusetts

More information

Robust combinatorial optimization with variable budgeted uncertainty

Robust combinatorial optimization with variable budgeted uncertainty Noname manuscript No. (will be inserted by the editor) Robust combinatorial optimization with variable budgeted uncertainty Michael Poss Received: date / Accepted: date Abstract We introduce a new model

More information

Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach

Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach L. Jeff Hong Department of Industrial Engineering and Logistics Management The Hong Kong University of Science

More information

Optimization Problems with Probabilistic Constraints

Optimization Problems with Probabilistic Constraints Optimization Problems with Probabilistic Constraints R. Henrion Weierstrass Institute Berlin 10 th International Conference on Stochastic Programming University of Arizona, Tucson Recommended Reading A.

More information

Portfolio optimization with stochastic dominance constraints

Portfolio optimization with stochastic dominance constraints Charles University in Prague Faculty of Mathematics and Physics Portfolio optimization with stochastic dominance constraints December 16, 2014 Contents Motivation 1 Motivation 2 3 4 5 Contents Motivation

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Robust Multicriteria Risk Averse Stochastic Programming Models

Robust Multicriteria Risk Averse Stochastic Programming Models Robust Multicriteria Risk Averse Stochastic Programming Models Xiao Liu Department of Integrated Systems Engineering, The Ohio State University, U.S.A. liu.2738@osu.edu Simge Küçükyavuz Industrial and

More information

Mathematical Optimization Models and Applications

Mathematical Optimization Models and Applications Mathematical Optimization Models and Applications Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 1, 2.1-2,

More information

The newsvendor problem with convex risk

The newsvendor problem with convex risk UNIVERSIDAD CARLOS III DE MADRID WORKING PAPERS Working Paper Business Economic Series WP. 16-06. December, 12 nd, 2016. ISSN 1989-8843 Instituto para el Desarrollo Empresarial Universidad Carlos III de

More information

Lagrangean Decomposition for Mean-Variance Combinatorial Optimization

Lagrangean Decomposition for Mean-Variance Combinatorial Optimization Lagrangean Decomposition for Mean-Variance Combinatorial Optimization Frank Baumann, Christoph Buchheim, and Anna Ilyina Fakultät für Mathematik, Technische Universität Dortmund, Germany {frank.baumann,christoph.buchheim,anna.ilyina}@tu-dortmund.de

More information

Strong Formulations of Robust Mixed 0 1 Programming

Strong Formulations of Robust Mixed 0 1 Programming Math. Program., Ser. B 108, 235 250 (2006) Digital Object Identifier (DOI) 10.1007/s10107-006-0709-5 Alper Atamtürk Strong Formulations of Robust Mixed 0 1 Programming Received: January 27, 2004 / Accepted:

More information

Inverse Stochastic Dominance Constraints Duality and Methods

Inverse Stochastic Dominance Constraints Duality and Methods Duality and Methods Darinka Dentcheva 1 Andrzej Ruszczyński 2 1 Stevens Institute of Technology Hoboken, New Jersey, USA 2 Rutgers University Piscataway, New Jersey, USA Research supported by NSF awards

More information

Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets

Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets manuscript Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets Zhi Chen Imperial College Business School, Imperial College London zhi.chen@imperial.ac.uk Melvyn Sim Department

More information

Regret Aversion, Regret Neutrality, and Risk Aversion in Production. Xu GUO and Wing-Keung WONG

Regret Aversion, Regret Neutrality, and Risk Aversion in Production. Xu GUO and Wing-Keung WONG Division of Economics, EGC School of Humanities and Social Sciences Nanyang Technological University 14 Nanyang Drive Singapore 637332 Regret Aversion, Regret Neutrality, and Risk Aversion in Production

More information

Recoverable Robustness in Scheduling Problems

Recoverable Robustness in Scheduling Problems Master Thesis Computing Science Recoverable Robustness in Scheduling Problems Author: J.M.J. Stoef (3470997) J.M.J.Stoef@uu.nl Supervisors: dr. J.A. Hoogeveen J.A.Hoogeveen@uu.nl dr. ir. J.M. van den Akker

More information

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Weijun Xie Shabbir Ahmed Ruiwei Jiang February 13, 2017 Abstract A distributionally robust joint chance constraint

More information

Generalized quantiles as risk measures

Generalized quantiles as risk measures Generalized quantiles as risk measures Bellini, Klar, Muller, Rosazza Gianin December 1, 2014 Vorisek Jan Introduction Quantiles q α of a random variable X can be defined as the minimizers of a piecewise

More information

Worst-Case Conditional Value-at-Risk with Application to Robust Portfolio Management

Worst-Case Conditional Value-at-Risk with Application to Robust Portfolio Management Worst-Case Conditional Value-at-Risk with Application to Robust Portfolio Management Shu-Shang Zhu Department of Management Science, School of Management, Fudan University, Shanghai 200433, China, sszhu@fudan.edu.cn

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Handout 6: Some Applications of Conic Linear Programming

Handout 6: Some Applications of Conic Linear Programming ENGG 550: Foundations of Optimization 08 9 First Term Handout 6: Some Applications of Conic Linear Programming Instructor: Anthony Man Cho So November, 08 Introduction Conic linear programming CLP, and

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

Minmax regret 1-center problem on a network with a discrete set of scenarios

Minmax regret 1-center problem on a network with a discrete set of scenarios Minmax regret 1-center problem on a network with a discrete set of scenarios Mohamed Ali ALOULOU, Rim KALAÏ, Daniel VANDERPOOTEN Abstract We consider the minmax regret 1-center problem on a general network

More information

Mixed 0-1 linear programs under objective uncertainty: A completely positive representation

Mixed 0-1 linear programs under objective uncertainty: A completely positive representation Singapore Management University Institutional Knowledge at Singapore Management University Research Collection Lee Kong Chian School Of Business Lee Kong Chian School of Business 6-2011 Mixed 0-1 linear

More information

The Subdifferential of Convex Deviation Measures and Risk Functions

The Subdifferential of Convex Deviation Measures and Risk Functions The Subdifferential of Convex Deviation Measures and Risk Functions Nicole Lorenz Gert Wanka In this paper we give subdifferential formulas of some convex deviation measures using their conjugate functions

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Robust multi-sensor scheduling for multi-site surveillance

Robust multi-sensor scheduling for multi-site surveillance DOI 10.1007/s10878-009-9271-4 Robust multi-sensor scheduling for multi-site surveillance Nikita Boyko Timofey Turko Vladimir Boginski David E. Jeffcoat Stanislav Uryasev Grigoriy Zrazhevsky Panos M. Pardalos

More information

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem Dimitris J. Bertsimas Dan A. Iancu Pablo A. Parrilo Sloan School of Management and Operations Research Center,

More information

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Naohiko Arima, Sunyoung Kim, Masakazu Kojima, and Kim-Chuan Toh Abstract. In Part I of

More information

where u is the decision-maker s payoff function over her actions and S is the set of her feasible actions.

where u is the decision-maker s payoff function over her actions and S is the set of her feasible actions. Seminars on Mathematics for Economics and Finance Topic 3: Optimization - interior optima 1 Session: 11-12 Aug 2015 (Thu/Fri) 10:00am 1:00pm I. Optimization: introduction Decision-makers (e.g. consumers,

More information

Multistage Adaptive Robust Optimization for the Unit Commitment Problem

Multistage Adaptive Robust Optimization for the Unit Commitment Problem Multistage Adaptive Robust Optimization for the Unit Commitment Problem Álvaro Lorca, X. Andy Sun H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta,

More information

Branch-and-cut (and-price) for the chance constrained vehicle routing problem

Branch-and-cut (and-price) for the chance constrained vehicle routing problem Branch-and-cut (and-price) for the chance constrained vehicle routing problem Ricardo Fukasawa Department of Combinatorics & Optimization University of Waterloo May 25th, 2016 ColGen 2016 joint work with

More information

Subgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus

Subgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus 1/41 Subgradient Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes definition subgradient calculus duality and optimality conditions directional derivative Basic inequality

More information

Spectral Measures of Uncertain Risk

Spectral Measures of Uncertain Risk Spectral Measures of Uncertain Risk Jin Peng, Shengguo Li Institute of Uncertain Systems, Huanggang Normal University, Hubei 438, China Email: pengjin1@tsinghua.org.cn lisg@hgnu.edu.cn Abstract: A key

More information

Computational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs

Computational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs Computational Integer Programming Lecture 2: Modeling and Formulation Dr. Ted Ralphs Computational MILP Lecture 2 1 Reading for This Lecture N&W Sections I.1.1-I.1.6 Wolsey Chapter 1 CCZ Chapter 2 Computational

More information

Department of Economics and Management University of Brescia Italy WORKING PAPER. Discrete Conditional Value-at-Risk

Department of Economics and Management University of Brescia Italy WORKING PAPER. Discrete Conditional Value-at-Risk Department of Economics and Management University of Brescia Italy WORKING PAPER Discrete Conditional Value-at-Risk Carlo Filippi Wlodzimierz Ogryczak M. Grazia peranza WPDEM 2/2016 Via. Faustino 74/b,

More information

Robust discrete optimization and network flows

Robust discrete optimization and network flows Math. Program., Ser. B 98: 49 71 2003) Digital Object Identifier DOI) 10.1007/s10107-003-0396-4 Dimitris Bertsimas Melvyn Sim Robust discrete optimization and network flows Received: January 1, 2002 /

More information

Convex analysis and profit/cost/support functions

Convex analysis and profit/cost/support functions Division of the Humanities and Social Sciences Convex analysis and profit/cost/support functions KC Border October 2004 Revised January 2009 Let A be a subset of R m Convex analysts may give one of two

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1110.0918ec e-companion ONLY AVAILABLE IN ELECTRONIC FORM informs 2011 INFORMS Electronic Companion Mixed Zero-One Linear Programs Under Objective Uncertainty: A Completely

More information

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets

Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Generalized Hypothesis Testing and Maximizing the Success Probability in Financial Markets Tim Leung 1, Qingshuo Song 2, and Jie Yang 3 1 Columbia University, New York, USA; leung@ieor.columbia.edu 2 City

More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs Raphael Louca Eilyan Bitar Abstract Robust semidefinite programs are NP-hard in general In contrast, robust linear programs admit

More information