Data-Driven Distributionally Robust Chance-Constrained Optimization with Wasserstein Metric

Size: px
Start display at page:

Download "Data-Driven Distributionally Robust Chance-Constrained Optimization with Wasserstein Metric"

Transcription

1 Data-Driven Distributionally Robust Chance-Constrained Optimization with asserstein Metric Ran Ji Department of System Engineering and Operations Research, George Mason University, Miguel A. Lejeune Department of Decision Sciences, The George ashington University, e study distributionally robust chance-constrained programming (DRCCP) optimization problems with data-driven asserstein ambiguity sets. e consider DRCCP problems with random right-hand side and technology vector under two types of uncertainties called uncertain probabilities and continuum of realizations. For the uncertainty case with uncertain probabilities, we provide mixed-integer linear programming reformulations for DRCCP problems. e derive two sets of precedence valid inequalities to strengthen the formulations. For the uncertainty case with continuum of realizations, we propose an exact mixed-integer second-order cone programming (MISOCP) reformulation for models with chance constraints with random right-hand while an MISOCP relaxation is proposed for constraints with random technology vector. e show that this relaxation is exact when the decision variables are binary or bounded general integer. e evaluate the scalability and tightness of the MISOCP formulations on a distributionally robust chance-constrained knapsack problem. Key words: Distributionally Robust Optimization, Chance-Constrained Programming, asserstein Metric 1. Introduction 1.1. Problem Setting The objective of this paper is to investigate a class of distributionally robust chance-constrained programming models, in which the stochastic constraints are satisfied with a specified probability level across all possible probability distributions within an ambiguity set. The generic formulation of the distributionally robust chance-constrained problem DRCCP studied here is: DRCCP: min x g(x) (1a) s.t. sup P {ξ : h(x, ξ) 0} ɛ P D x X (1b) (1c) where x R M is a vector of decision variables, X is a feasible region defined by deterministic constraints, and ɛ [0, 1] is a fixed probability level. The random variable ξ is defined on a probability space (Ω, F, P), in which Ω represents the sample space of the random variable ξ with a sigma algebra F, and P denotes the 1

2 2 Data-Driven Chance-Constrained Optimization with asserstein Metric associated probability measure supported on Ω. The DRCCP model seeks a solution x X that minimizes a convex cost function g(x), while ensuring via the ambiguous chance constraint (1b) that the worst-case probability of the event {h(x, ξ) 0} does not exceed ɛ within the ambiguity set D. The well-known equivalence between a chance constraint and an expectation one with indicator function allows rewriting (1b) as: sup P D E P [ 1{h(x,ξ) 0} ] ɛ (2) where 1 {h(x,ξ) 0} denotes the indicator function taking value 1 if h(x, ξ) 0 happens and 0 otherwise. Throughout this paper, we focus on the DRCCP model with individual chance constraints. Two uncertainty types are considered (see Pflug and Pohl (2017)): 1. Uncertain probabilities: The underlying probability distribution has a finite support and its atoms are the same as those of the empirical distribution. The mass probability of each atom is allowed to vary and must be found. 2. Continuum of realizations: The random variable ξ can have a continuum (infinite number) of realizations. The following is assumed regarding the structure of the uncertain vector ξ, the ambiguity set D, and the set defined by the inequality h(x, ξ) 0: The random vector ξ is supported on a bounded and convex set Ω. The ambiguity set is constructed via a data-driven approach based on the asserstein metric, i.e., the ambiguity set D is supposed to contain all the underlying probability distributions P belonging to a asserstein ball centered at the empirical distribution consisted of a finite number of data samples. The stochastic function h(x, ξ) is affine in both x and ξ. Two functional forms for the stochastic inequalities defining {ξ : h(x, ξ) 0} are specified 1 : 1. Linear random right-hand side: {ξ : h(x, ξ) 0} = {ξ : a T x ξ}, where a R M is deterministic and ξ R is the random right-hand side. 2. Linear random technology vector: {ξ : h(x, ξ) 0} = {ξ : ξ T x b}, where ξ R M is the random technology vector Literature Review Approaches based on moments and statistical distances are commonly used to build ambiguity sets in distributionally robust optimization (DRO). Moment-based ambiguity sets include the underlying distributions that have moment similarities (e.g., mean, variance) with the true probability distribution and often yield to computationally tractable conic formulations, see, e.g., Calafiore (2007), Delage and Ye (2010), Zymler 1 In an abuse of notation, we use the same symbol ξ in chance-constrained models involving either random right-hand side or random technology vector models, although ξ represents a scalar in the first case and a vector of dimension M in the second one.

3 Data-Driven Chance-Constrained Optimization with asserstein Metric 3 et al. (2013), Cheng et al. (2014), Hanasusanto et al. (2015), Yang and Xu (2016), Xie and Ahmed (2018b), Hanasusanto et al. (2017) and the references therein. e now proceed to review distributionally robust chance-constrained (DRCC) studies with statistical ambiguity sets considering the type of uncertainty (uncertain probability vs. continuum or realizations), the form of the random inequalities (linear vs. nonlinear), and the type of the obtained formulation (reformulation vs. approximation). e give particular attention to studies based on the asserstein metric. Erdoğan and Iyengar (2006) consider a linear random technology vector and construct an ambiguity set based on the Prohorov-metric. They obtain an approximated robust solution from a sampling problem, where each constraint in the ambiguity set corresponds to one sample data drawn from the reference distribution. Jiang et al. (2016) investigate the distributionally robust chance-constrained unit commitment problem with renewable energy with an ambiguity set in which the L 2 norm distance between the mass probabilities of the underlying and reference distributions is bounded from above. They study the uncertain probability case in an individual linear chance constraint with random right-hand side, reformulate the problem as a mixed-integer second order cone problem, and derive valid inequalities that they imbricate within their specialized algorithmic method. Considering joint distributionally robust chance constraints, Jiang and Guan (2015) and Hu and Hong (2013) derive tractable reformulations for the family of φ-diverge probability metric (e.g., Kullback-Leibler (KL) divergence, Hellinger distance, etc.). These studies show that a distributionally robust chance constraint is equivalent to a classical chance constraint under the reference distribution with a re-scaled reliability level, and cover the cases of linear uncertainty in the right-hand side and technology matrix. A reason for the widespread use of asserstein ambiguity sets in DRO is that they enjoy the finite sample guarantee property Esfahani and Kuhn (2017), which signifies that the asserstein ambiguity set can be constructed to contain the true probability distribution with a high probability level and can provide an upper bound on the out-of-sample performance of the distributionally robust solution. DRO studies on asserstein ambiguity sets can be categorized, as noted by Pflug and Pohl (2017), with respect to the types of uncertainties presented in Section 1.1. See Pflug and ozabal (2007), Postek et al. (2016), Ji and Lejeune (2017) for the uncertain probability case and Pflug et al. (2012), Zhao and Guan (2018), Gao and Kleywegt (2016), Esfahani and Kuhn (2017) for the continuum of realizations uncertainty type, in which the underlying probability distribution can take infinitely many possible values. asserstein ambiguity sets have been primarily used to study DRO expectation constraints. For example, in the uncertain probability case, Ji and Lejeune (2017) seek to maximize the worst-case expected fractional functions representing reward-risk ratios. The proposed reformulations derive the support function of asserstein ambiguity set and the concave conjugate of the ratio functions. As for the continuum of realization case, Esfahani and Kuhn (2017) utilize convex duality to derive reformulations of the worst-case

4 4 Data-Driven Chance-Constrained Optimization with asserstein Metric expectation of a loss function representable as pointwise maxima. Blanchet et al. (2018) study the distributionally robust mean-variance portfolio optimization problem with asserstein ambiguity by reformulating the problem into an empirical variance minimization problem with a regularization term. A few recent studies on distributionally robust chance constraints with asserstein ambiguity sets tackle problems with linear chance constraints in the continuum of realizations case. Duan et al. (2017) study a distributionally robust chance-constrained optimal power flow problem with asserstein ambiguity set. They construct a deterministic robust constraint to approximate the distributionally robust individual chance constraint and utilize a bisection search method to find the worst-case expectation of a quadratic cost function. Xie and Ahmed (2018a) propose a bicriteria approximation scheme to solve a specific type of distributionally robust chance-constrained covering problems with asserstein ambiguity sets. This approach guarantees to find a solution that violates the stochastic inequalities with a probability that is upper-bounded by the original probability level multiplied by a constant called violation ratio Contributions, Structure, and Notations e derive new tractable reformulations for several variants of the DRCCP problem with asserstein ambiguity under different (i) types of uncertainties (i.e., uncertain probabilities vs. continuum of realizations), and (ii) forms of stochastic inequalities (i.e., right-hand side vs. technology vector). e rely conic duality theory and the dual form of the worst-case expectation of an indicator function to subsequently derive mixed-integer linear or conic reformulations/approximations for the distributionally robust chance constraint. In the uncertain probability case, we provide mixed-integer linear programming reformulations for DRCCP problems with random right-hand side and technology vector. e also derive two sets of precedence valid inequalities that strengthen the formulations. In the continuum of realizations case, we propose an exact mixed-integer second-order cone programming (MISOCP) reformulation for models with chance-constrained models with random right-hand side. An MISOCP relaxation is proposed for models with chance constraints and random technology vector and is shown to be equivalent to the true formulation when the decision variables are bounded integer. e carry out numerical experiments on instances of the distributionally robust chance-constrained knapsack problems to show the effectiveness of the proposed MISOCP reformulation framework. The rest of the paper proceeds as follows. Section 2 sketches the construction of asserstein ambiguity sets. Section 3 and 4 derive the reformulation and approximation methods for the DRCCP problems with uncertain probabilities and continuum of realizations, respectively. The results of the computational experiments are reported in Section 5. Notation. e denote by R + and R ++ the set of non-negative and positive real numbers, respectively. The round-up and round-down of a real number t are respectively t and t and e denotes the unit vector. Let

5 Data-Driven Chance-Constrained Optimization with asserstein Metric 5 D be the ambiguity set, P D be an underlying probability distribution, P 0 be the reference (empirical) distribution, and Q represent the true and unknown probability distribution. Let p j = P (ξ = ξ j ), j N denote the probability associated with realization j and P(Ω) represent the space of all probability distributions P supported on Ω. e denote by δ ξ the Dirac distribution concentrating unit mass at ξ. e define the characteristic function as χ Ω (ξ) = 0, if ξ Ω; =, otherwise. e use the index sets N = {1,..., N}, M = {1,..., M}. For a given norm on R M, the dual norm is defined by v = sup{v T ζ : ζ 1}. The conjugate of a real-valued function f(v) is defined as f (v) = sup ζ v T ζ f(ζ). The asserstein ambiguity set is denoted by D UP for the uncertain probability case and D CR 2. Preliminaries on asserstein Ambiguity Set for the continuum of realizations one. To be self-contained, we briefly present some key preliminary concepts involved in the construction of asserstein ambiguity sets. The asserstein metric between two probability distributions is defined as the minimum mass transportation cost incurred by moving one distribution to another, where the unit cost is defined as the norm distance between two atoms 2. Throughout this paper unless stated otherwise, we restrict our attention to the 1-asserstein distance based on the l 1 norm. DEFINITION 1. (asserstein Metric) The asserstein metric d (P, P 0 ) : P(Ω) P(Ω) R between two probability distributions P, P 0 P(Ω) is defined by: ( ) d (P, P 0 ) = inf ξ ξ 0 Π(dξ, dξ 0 Π is a joint distribution of ξ, ) : ξ 0. (3) with marginals P and P Ω 2 0 For a discrete distribution problem, the asserstein metric is given by: d (P, P 0 ) = inf π ij = p 0 π ij ξ j ξ 0 i, i N j N i :, (4) π 0 π ij = p j, j N i,j N where π ij represents the probability mass transported from ξ 0 i to ξ j, p 0 i and p j respectively denote the probability of atom ξ 0 i and of ξ j. The asserstein ambiguity set D is defined as a ball of radius θ centered around the empirical distribution P 0 i N N δ ξ 0 i and δ ξ 0 i refers to the Dirac distribution con- where P 0 is constructed with N i.i.d samples: P 0 := 1 N centrating unit mass at realization ξi 0. D := {P P(Ω) : d (P, P 0 ) θ}, (5) i=1 2 e shall interchangeably use the terms atoms and realizations of uncertain variables ξ.

6 6 Data-Driven Chance-Constrained Optimization with asserstein Metric (Gao and Kleywegt 2016, Theorem 1) show that strong duality holds true when the asserstein ambiguity set is constructed via a data-driven approach as in (5), which allows the rewriting of the worst case expectation sup P D E P [ 1{h(x,ξ) 0} ] as: sup P D E P [ 1{h(x,ξ) 0} ] = inf λ 0 [ λθ + sup E [ 1 {h(x,ξ) 0} λd (ξ, ξ 0 ) ]]. (6) ξ Ω e utilize this duality result to derive exact reformulations or approximate formulations for the two uncertainty types: uncertain probabilities in Section 3 and continuum of realizations in Section asserstein DRCCP Case with Uncertain Probabilities e first focus on the case with uncertain probabilities in which the distance between atoms is a constant. To ease the notations, we set c ij = ξ j ξ 0 i to denote the unit cost of transporting the probability mass from atom ξ 0 i to another one ξ j. e recall that: 1) the probabilities of the atoms of the underlying probability distribution are allowed to vary and are defined as decision variables and 2) the atoms of the underlying distribution are known and identical to those of the empirical distribution. e denote by 1 {h(x,ξj ) 0} the indicator function when ξ is realized as ξ j which occurs with probability p j. Let p i 0 be the probability of the i th atom in the empirical distribution and D UP be the asserstein ambiguity set in the case with uncertain probabilities. Theorem 1 reformulates the worst-case expectation semi-infinite programming problem sup P D UP E P[1 {h(x,ξ) 0} ] in a finite dimensional space. THEOREM 1. (DRCCP Reformulation for Uncertain Probabilities) Let w {0, 1} N. For any θ 0, the distributionally robust chance constraint (1b) with uncertain probabilities can be reformulated with the set of mixed-integer linear inequalities Z UP λθ + p 0 i v i ɛ (7a) i N λ 0, (7b) (x, λ, v, w) Z UP = λc ij + v i w j 0, i, j N, (7c) w j = 1 {h(x,ξj ) 0}, j N (7d) Proof. The value of the worst-case expectation can be obtained from: sup P D UP E P [1 {h(x,ξ) 0} ] = sup π,p 0 s.t. p j 1 {h(x,ξj ) 0} j N i,j N π ij c ij θ (8a) (8b) π ij = p 0 i, i N (8c) j N π ij = p j, j N (8d) i N

7 Data-Driven Chance-Constrained Optimization with asserstein Metric 7 Using the equality constraints (8d), we first remove p j from (8a) and from problem (8) altogether. Denoting by λ 0 and v i, i N the dual variables corresponding to (8b) and (8c), respectively, we use Lagrangian duality to reformulate problem (8) as: sup P D UP E P [1 {h(x,ξ) 0} ] = sup inf π 0 λ 0,v i,j N π ij 1 {h(x,ξj ) 0} + λ ( θ i,j N π ij c ij ) + ( v i p 0 i ) π ij (9a) i N j N = inf sup λθ + p 0 i v i ( ) π ij λc ij + v i 1 λ 0,v {h(x,ξj ) 0} (9b) π 0 i N i,j N = inf λθ + ( p 0 i v i + sup ( π ij λc ij + v i 1 λ 0,v {h(x,ξj ) 0}) ) (9c) π 0 i N i,j N = inf λθ + { 0, if λcij + v p 0 i 1 i v i + {h(x,ξj) 0} 0, i, j N. (9d) λ 0,v i N +, otherwise Equations (9b) and (9c) are obtained by rearranging the terms in (9a) while (9d) gives an the upper bound for ( ( ) π i,j N ij λc ij + v i 1 {h(x,ξj ) 0} ) by maximizing over π. Using (9d) to replace the worst-case expectation sup P D UP E P[1 {h(x,ξ) 0} ] into (2) gives (7a) and (7c). Introducing the linkage between w j and leads to (7d). 1 {h(x,ξj ) 0} e next derive a set of valid inequalities taking the form of precedence constraints on the binary variables w j. The valid inequalities are derived from the set of constraints (7c) and the ordering of the cost coefficients c i,j defining the distance between atoms ξ 0 i and ξ j. THEOREM 2. (Precedence Cuts) Assume without loss of generality that c i,j(ik ) is the ordered vector of distances c ij for atom i N : c i,j(i1 ) c i,j(i2 )... c i,j(ik )... c i,j(in ), i N, (10) where the index j(i k ) identifies the atom ξ j that has k-th smallest distance to ξ 0 i. The inequalities are valid for Z UP. w j(ik 1 ) w j(ik ), i, j(i k ) N : k = 2,..., N (11) Proof. Using the above notation for the ordered vector of distances, we rewrite (7c) as λc i,j(ik ) + v i w j(ik ) 0, i, j(i k ) N. (12) For any given i N with fixed values of λ and v i, due to the ascending ordering of distances defined by (10) and constraints (12), it follows that, if w j(ik ) = 0, w j(ik 1 ) must take value of 0 for k 2. This validates the precedence constraints (11).

8 8 Data-Driven Chance-Constrained Optimization with asserstein Metric In the next subsections, we consider two functional forms for the stochastic inequalities defining the set {ξ : h(x, ξ) 0} and define for each the specific formulation of the set Z UP in Theorem 1. Note that the precedence cuts (10)-(11) derived above are valid for the two considered functional forms (i.e., random right-hand side in Section 3.1 and random technology vector in Section 3.2) and can be added to strengthen each of the formulations presented in the next two subsections Random Right-Hand Side e now consider stochastic inequalities with linear random right-hand side. The set {ξ : h(x, ξ) 0} is defined by {ξ : a T x ξ} with ξ R. Corollary 1 follows from Theorem 1. COROLLARY 1. (Linear Uncertainty with Random Right-Hand Side) Let L x, U x R M be the vectors of lower and upper bounds for x, respectively, and S j be a positive constant defined as: S j = a m U xm + a m L xm ξ j, j N. m M:a m>0 m M:a m 0 The distributionally robust chance constraint can be reformulated with the set of mixed-integer linear inequalities Z UP sup P { ξ : a T x ξ } ɛ (13) P D UP a T x ξ j + S j (1 w j ), j N (14a) (x, λ, v, w) Z UP R = w j {0, 1}, j N (14b) (7a) (7c). In the uncertain probability case, the mixed-integer linear programming problem R : min x,λ,v,w { } g(x) : (x, λ, v, w) R M R R N {0, 1} N : (x, λ, v, w) X Z UP R (15) is equivalent to DRCCP for linear inequalities with random right-hand side. Proof. Theorem 1 allows the reformulation of (13) with the set of constraints (7a)-(7d), in which (7d) can then be replaced by (14a) and (14b). Each inequality (14a) ensures that the corresponding binary variable w j takes value 0 if a T x > ξ j. e derive in Theorem 3 a tightened formulation for the set Z UP R small number. in Corollary 1. Let δ be an infinitesimally THEOREM 3. (Strengthened Formulation) Assume without loss of generality that the ξ j, j = 1,..., N are indexed in such a way that ξ 1 ξ 2... ξ N. (16)

9 Data-Driven Chance-Constrained Optimization with asserstein Metric 9 The set Z UP R can be reformulated and tightened as follows: w j 1 w j, j = 2,..., N j (x, λ, v, w, π) Z UP RT w j π ik + δ ɛ, j N k=1 i N (7a) (7c); (14a) (14b) (17a) (17b) Proof. Due to the ascending ordering (16) of the atoms and constraints (14a), it follows that 1) for j 2, if w j must take value 0, then w j l, l = 1,..., j cannot take value 1, and 2) if w j = 1, then w j+1 can also take value 1 for j 1. Both conditions are enforced by the precedence constraints (17a), which validates them. e have now to demonstrate that (17b) are valid for ZR UP. The probability of atom ξ j is p j = π i N ij (see (8d)), and it ensues that the cumulative probability of atoms ξ j is j π k=1 i N ik due to (16). For the chance constraint (1b) to hold with probability at most ɛ, the binary variable w j associated to each atom whose cumulative probability is equal or below ɛ must take value 0, which is enforced by (17b). The inequalities (17b) do not impose any restriction on the variables w j associated with atoms with cumulative probability j π k=1 i N ik > 1 ɛ and are thus valid Random Technology Vector e now consider a linear stochastic inequality with uncertainty in the technology vector. The set {ξ : h(x, ξ) 0} takes the form {ξ : ξ T x b} with ξ R M. COROLLARY 2. (Linear Uncertainty in Random Technology Vector) Let S j be a positive constant defined as: S j = m M:ξ jm >0 ξ jm U xm + The distributionally robust chance constraint m M:ξ jm 0 ξ jm L xm b, j N. sup P { ξ : ξ T x b } ɛ (18) P D UP can be reformulated with the following set of mixed-integer linear inequalities Z UP (x, λ, v, w) Z UP L = L : { ξ T j x b + S j(1 w j ), j N (19a) (7a) (7c); (14b) In the uncertain probability case, the mixed-integer linear programming problem min x,λ,v,w { } g(x) : (x, λ, v, w) R M R R N {0, 1} N : (x, λ, v, w) X Z UP L is equivalent to DRCCP for linear uncertainty in the technology vector. Proof. Theorem 1 allows the reformulation of (18) with the inequalities (7a)-(7d). The constraints (7d) can then be replaced by (14b) and (19a). Each inequality (19a) ensures that the corresponding binary variable w j takes value 0 if ξ T j x > b. (20)

10 10 Data-Driven Chance-Constrained Optimization with asserstein Metric 4. asserstein DRCCP Case with Continuum of Realizations e now consider the case when the random variable ξ can have a continuum of realizations (i.e, ξ can be varied to uncountably many possible values). The atoms ξ of the underlying distribution are unknown, instead of the probabilities p which were the unknowns in the uncertain probability case (Section 3). e propose exact reformulations for chance-constrained models with random right-hand side and outer approximations for those with random technology vector. e also define under which conditions the true and the approximation problems are equivalent. e first recall the approach proposed by Esfahani and Kuhn (2017) to reformulate the worst-case expectation problem introduced earlier. THEOREM 4. (Dual Reformulation for Continuum of Realizations Esfahani and Kuhn (2017)) Let l(x, ξ) := max k K l (k) (x, ξ) represent a pointwise maximum function, and the negative constituent functions l (k) (x, ξ), k K are proper, convex and lower semi-continuous. Let s i, i N be a set of auxiliary variables. For any given θ 0, the worst-case expectation sup P D CR is equal to the optimal value of the following problem: inf λ,s,z,v λθ + 1 N N i=1 s i E P[l(x, ξ)] with continuum of realizations (21a) s.t. [ l (k) ] (z ik v ik ) + σ Ω (v ik ) z T ikξ 0 i s i i N, k K (21b) z ik λ i N, k K (21c) λ 0 (21d) where x, λ, z, v and s are decision variables, λ 0 is the dual variable for the asserstein distance constraint ξ ξ 0 Π(dξ, dξ 0 ) θ, [ l (k) ] (z Ω 2 ik v ik ) is the conjugate of l (k) (x, ξ) at z ik v ik, z ik is the dual norm, and σ Ω is the support function of the uncertainty set Ω. e refer the reader to Esfahani and Kuhn (2017) for the proof of Theorem 4. Lemma 1 represents the indicator function as a pointwise maxima. This will allow us to employ Theorem 4 to subsequently reformulate the distributionally robust chance constraint. [ ] LEMMA 1. The expected function E P 1{h(x,ξ) 0} can be rewritten as a convex function EP [l(x, ξ i )] with the indicator function 1 {h(x,ξi ) 0} defined as in which χ {h(x,ξi ) 0}(ξ) l(x, ξ i ) = max {1 χ {h(x,ξi ) 0}, }{{}}{{} 0 } (22) l (1) l (2) i i χ {h(x,ξi ) 0}(ξ) = { 0, if h(x, ξ i ) 0, otherwise (23) is the characteristic function of the convex set defined by h(x, ξ i ) 0.

11 Data-Driven Chance-Constrained Optimization with asserstein Metric 11 Proof. From (23), the equivalence between 1 {h(x,ξi ) 0} and (22) is immediate: max{1 χ {h(x,ξi ) 0}(ξ), 0} = { 1, if h(x, ξ) 0 0, otherwise (24) Since the characteristic function χ {h(x,ξi ) 0} of a convex set is convex (see Boyd and Vandenberghe (2004)), l (1) i and l (2) i are both concave. The pointwise supremum of a set of concave functions is convex,which implies that l(x, ξ i ) in (22) is convex and that E P [ 1{h(x,ξ) 0} ], which is the weighted summation of the convex functions l(x, ξ i ), is also convex. [ ] The convexity of E P 1{h(x,ξ) 0} is now used to reformulate the worst-case expectation sup P D E P [ 1{h(x,ξ) 0} ]. The next theorem combines Lemma 1 and Theorem 4 to provide the deterministic reformulation of the distributionally robust chance constraint (1b) in the continuum of realizations case. THEOREM 5. (DRCCP Reformulation for Continuum of Realizations) Suppose the uncertainty set Ω is closed and the set {h(x, ξ) 0} has a non-empty intersection with the uncertainty set Ω. Let the indicator loss function defined by (22). For any given θ 0, the distributionally robust chance constraint (1b) with continuum of realizations can be reformulated with the following set of inequalities Z CR. λθ + 1 N s i ɛ N i=1 λ 0 (25b) (λ, s, z, v) Z CR = s i 0, i N (25c) [ ] l (1) (zi v i ) + σ Ω (v i ) z T i ξ 0 i s i, i N (25d) z i λ, i N (25e) (25a) Proof. Substituting (21a) for sup P D E P [ 1{h(x,ξ) 0} ] gives (25a). Non-negativity imposed on λ gives (25b). Recall K := {1, 2}, with l (1) (x, ξ) = 1 χ {h(x,ξ) 0} (ξ) and l (2) (x, ξ) = 0. The conjugate function [ l (2) ] is = 0, if ξ = 0; =, otherwise. The constraints involving k = 2 can then be disregarded. e suppress the index k to ease the notation, and obtain (25d) and (25e) directly from (21b) and (21c), respectively. To further specify the reformulation, we need to determine the conjugate function [ l (k) ] of the loss function l(x, ξ) and the support function σ Ω depending on the form of the uncertainty set Ω, which we define as Ω = {ξ : Cξ d}. Recall that ξ denotes a scalar in models with random right-hand side and is a M-dimensional vector in models with random technology vector case. In a similar abuse of notation, C R +, d R, and Ω is a halfspace in models with random right-hand side, while in models with random technology vector, C R r M + is a matrix, d R r is a vector, and the support Ω is a polytope. In the next subsections, we consider two functional forms for the stochastic inequalities defining the set {ξ : h(x, ξ) 0} and define for each the specific formulation of the set Z CR in Theorem (5). e first give

12 12 Data-Driven Chance-Constrained Optimization with asserstein Metric a generic reformulation with bilinear terms. Next, we derive a mixed-integer conic programming reformulation (resp., outer approximation) for models with random right-hand side (resp., random technology vector). Finally, we define the conditions under which the outer approximation for the model with random technology matrix is equivalent to DRCCP Random Right-Hand Side e study first the case with linear uncertainty in the right-hand side: {ξ : h(x, ξ) 0} = {ξ : a T x ξ}, with ξ R, C R + and d R. Corollary 3 follows from Theorem 5. COROLLARY 3. (Linear Uncertainty in Right-Hand Side) The distributionally robust chance constraint sup P D CR P { ξ : a T x ξ } ɛ (26) can be reformulated with the set of constraints ZR CR : 1 + β i (a T x ξ 0 i ) + γ i (d Cξ 0 i ) s i, i N (27a) (x, λ, β, γ, s) Z CR R = In the continuum of realizations case, the problem min x,λ,β,γ,s β i + Cγ i λ, i N (27b) γ i 0, i N (27c) β i 0, i N (27d) (25a) (25c) { } g(x) : (x, λ, β, γ, s) R M R R N R N R N : (x, λ, β, γ) X Z CR R is equivalent to DRCCP for linear uncertainty in the right-hand side. as: Proof. The uncertainty set is defined by Ω = {ξ R : Cξ d}. e obtain the support function σ Ω (v i ) sup σ Ω (v i ) = ξ s.t. v i ξ i Cξ i d where v i R, and γ i R + is the dual variable corresponding to Cξ i d, i N. = { inf γ i d γ 0 (28) s.t. Cγ i = v i The set {ξ : h(x, ξ) 0} is given by {ξ : a T x ξ}. The conjugate function [ l (1) ] (z i v i ), i N is: { sup (z i v i )ξ i + 1 inf β i a T x i + 1 [ l (1) ] (z i v i ) = ξ = β i 0 (29) s.t. a T x ξ i s.t. β i = z i v i where β i R is the dual variable of constraint ξ i a T x, i N. Thus we have z i = β i + v i = β i + Cγ i, where z i R, i N. Substituting (28), (29) and z i into Z CR gives the set of constraints defining ZR CR. The above problem includes bilinear terms involving the decision variables x and β i in (27a) and is non-convex. e propose an MISOCP reformulation in Theorem 6.

13 Data-Driven Chance-Constrained Optimization with asserstein Metric 13 THEOREM 6. (MISOCP Reformulation for DRCCP with Linear Uncertainty in Right-Hand Side) Suppose the uncertainty set Ω is defined by Ω = {ξ : Cξ d, C 0}. Let L x and U x denote the lower and upper bound vectors of x. Define G i as the positive constant: G i = ξ 0 i m M:a m 0 a m L xm m M:a m<0 a m U xm, i N. The distributionally robust chance constraint (26) can be reformulated with the set of constraints ZRSO: CR 1 w i s i, i N (30a) (x, λ, β, γ, s, y, w ) Z CR RSO = w i {0, 1}, i N (30b) β i λ, i N (30c) w i N(1 ɛ) (30d) i N y i 0, i N (30e) a T x ξ 0 i 2y i, i N (30f) a T x ξ 0 i + (1 w i)g i 2y i, i N (30g) G i w i y i, i N (30h) 2β i y i w i2, i N (30i) (25a) (25c), (27c) (27d) In the continuum of realizations case, the mixed-integer second-order cone programming problem min x,λ,β,γ,s,y,w { g(x) : (x, λ, β, γ, s, y, w ) R M R R N R N R N R N is equivalent to DRCCP for linear uncertainty in the right-hand side. Proof. {0, 1} N : (x, λ, β, γ, s, y, w ) X Z CR RSO The set of constraints ZRSO CR is derived from ZR CR. Since all training samples from the empirical distribution belong to the uncertainty set, which implies d Cξ 0 i 0, i N, and C R +, it is optimal to set γ i = 0 in (27a) and (27b), which gives: 1 + β i (a T x ξ 0 i ) s i, i N (31a) β i λ, i N (31b) Since the dual norm of the l 1 -norm is the l -norm (see Boyd and Vandenberghe (2004)), we obtain: β i = max β i { β i : i N } (31c) The variable β i is non-positive (see (27d)) since it is the dual variable for constraint ξ i a T x. Integrating (31c) and (27d) gives (30c). }

14 14 Data-Driven Chance-Constrained Optimization with asserstein Metric Since θ is a non-negative constant and λ 0 (25b), it follows that, for any given λ and θ, each term s i must be as small as possible for (25a) to hold, and therefore β i (a T x ξ 0 i ) must be minimal (but no smaller than -1 since s i 0). This leads us to distinguish two scenarios for each constraint (31a) and its term β i (a T x ξ 0 i ) (with β i 0 due to (27d)): (i) if a T x ξ 0 i > 0, then it is optimal to set β i (a T x ξ 0 i ) 1, giving s i = 0; (ii) if a T x ξ 0 i 0, then it is optimal to set β i (a T x ξ 0 i ) = 0, allowing for s i = 1, which is the smallest value that s i can take under scenario (ii). e now define a binary variable w {0, 1} N and a nonnegative variable y R N +, and incorporate linear (30e) -(30h) and second-order cone (30i) constraints to force each variable w i and β i to take their ideal value defined by: w i = β i (a T x ξ 0 i ) = { 1, if a T x ξi 0 > 0, and β i = 1/(a T x ξi 0 ). 0, if a T x ξi 0 0, and β i = 0. Each w i can thus be understood as a binary variable indicating if a given solution x satisfies or not the constraint a T x ξ 0 i > 0 under the atom ξ 0 i. Consider the above two scenarios: (i) If a T x ξ 0 i > 0, then y i (a T x ξ 0 i )/2 > 0 due to (30f), which in turn implies w i = 1 due to (30h) and y i (a T x ξ 0 i )/2 due to (30g). Therefore, y i = (a T x ξ 0 i )/2 and the convex quadratic constraint (30i) forces β i 1. It follows that (32) holds under (i). a T x ξ i 0 (ii) If a T x ξi 0 < 0, we have w i = 0 from (30g), y i = 0 from (30h), and β i can take value 0. It follows that (32) holds under (ii). Constraint (30a) is obtained by substituting β i (a T x ξ 0 i ) by w i due to (32). The constraint (30d) holds if the aggregate probability of the satisfied atoms must be at least equal to 1 ɛ, which is required for the distributionally robust chance constraint. Therefore, (30d) is valid and can be added to strengthen the formulation. Note that we multiply y i by 2 in (30f) and (30g) to obtain the canonical form of a rotated second constraint in (30i). (32) 4.2. Random Technology Vector e next present the reformulation for the case with linear uncertainty in the technology vector, in which the explicit form of the set {ξ : h(x, ξ) 0} is {ξ : ξ T x b} with ξ R M, C R r M +, and d R r. COROLLARY 4. (Linear Uncertainty in Random Technology Vector) The distributionally robust chance constraint sup P D CR P { ξ : ξ T x b } ɛ, (33)

15 Data-Driven Chance-Constrained Optimization with asserstein Metric 15 can be reformulated with the set of constraints Z CR (x, λ, η, γ, s) Z CR L = In the continuum of realizations case, the problem min x,λ,η,γ,s L : 1 + η i (b x T ξ 0 i ) + γ T i (d Cξ 0 i ) s i, i N (34a) xη i + C T γ i λ, i N (34b) γ i 0, i N (34c) η i 0, i N (34d) (25a) (25b) { } g(x) : (x, λ, η, γ, s) R M R R N R N R N : (x, λ, η, γ, s) X Z CR L is equivalent to DRCCP under linear uncertainty in the random technology vector. Proof. The support function σ Ω (v i ) is derived as in (28) in Corollary 3. Since ξ is here a M-dimensional vector, the support function σ Ω (v i ) reads: sup σ Ω (v i ) = ξ s.t. v T i ξ i Cξ i d = { inf γi T d γ 0 (35) s.t. C T γ i = v i where γ i R r + is the dual variable vector corresponding to constraints Cξ i d, i N. The conjugate function [ l (1) ] (z i v i ), i N is: sup (z i v i ) T ξ i + 1 [ l (1) ] (z i v i ) = ξ s.t. x T ξ i b = { inf bη i + 1 η 0 (36) s.t. xη i = z i v i where η i R is the dual variable for constraint x T ξ i b, i N. Thus we have z i = xη i + C T γ i, where x, z i R M. Substituting (35), (36) and z i into Z CR gives the set of constraints defining Z CR L. As for the case with linear uncertainty in the right-hand side, problem in Corollary 4 includes bilinear terms involving x and η i in constraint (34a) and is non-convex. The complexity of (4) is further compounded by the presence of additional bilinear products xη i appearing in the dual norm term in (34b). Theorem 7 provides a mixed-integer second-order cone relaxation of the distributionally robust chance constraint with random technology matrix and continuum of realizations uncertainty type. THEOREM 7. (MISOCP Outer Approximation) Let L x, U x denote lower and upper bound vectors of x and U η be the upper bound of η. Define G i > 0, i N as the positive constant: G i = b m M:ξ 0 im 0 ξ 0 iml xm m M:ξ 0 im <0 ξ 0 imu xm, i N. The distributionally robust chance constraint (33) with continuum of realizations can be relaxed by the set of constraints Z CR CSO:

16 16 Data-Driven Chance-Constrained Optimization with asserstein Metric Proof. (x, λ, β, γ, s, y, w, φ) Z CR CSO = 1 w i s i, i N (37a) w i {0, 1}, i N (37b) w i (1 ɛ)n, (37c) i N y i 0, i N (37d) b x T ξ 0 i + 2y i 0, i N (37e) b x T ξ 0 i + 2y i (1 w i)g i, i N (37f) G iw i y i, i N (37g) 2η i y i w i2, i N (37h) φ mi λ, m M, i N (37i) φ mi U xm η i, m M, i N (37j) φ mi L xm η i + U η x m L xm U η, m M, i N (37k) φ mi L xm η i, m M, i N (37l) φ mi U η x m + U xm η i U xm U η, m M, i N (37m) (25a) (25c), (34c) (34d) Since d Cξ 0 i 0, i N, given that the scenarios of the empirical distribution all belong to the uncertainty set and and C R r M +, it is optimal to set γ i = 0 in (34a) and (34b), which can hence be rewritten as: 1 + η i (b x T ξ 0 i ) s i, i N (38a) xη i λ, i N (38b) e decompose the remaining part of the proof into two steps. e first derive the mixed-integer secondorder cone programming formulations to represent constraints (38a) before linearizing the bilinear terms in (38b) using the McCormick inequalities McCormick (1976). Step 1. Mixed-integer second-order cone representation. Let s focus on the terms η i (b x T ξ 0 i ) in (38a) and distinguish two scenarios: (i) if b x T ξ 0 i < 0, then it is optimal to set η i 1 x T ξ 0 i b, which allows for s i = 0. (ii) if b x T ξ 0 i 0, it is optimal to set η i (b x T ξ 0 i ) = 0, allowing for s i = 1, i.e., the smallest possible value for s i given the non-negativity of η i and (b x T ξ 0 i ) under scenario (ii). e now introduce a vector of binary w {0, 1} N and nonnegative y R N + variables. The binary variables

17 Data-Driven Chance-Constrained Optimization with asserstein Metric 17 are used to replace the term η i (b x T ξ 0 i ). Based on (i) and (ii), we incorporate a set of second-order cone constraints that force each binary variable w i to take the optimal values presented below, namely: { w i = η i (b x T ξ 0 1, if b x T ξi 0 < 0, and η i = 1/(b x T ξi 0 ). i ) = 0, if b x T ξi 0 0, and η i = 0. Each binary variable w i can be interpreted as indicative of whether or not a given solution x satisfies the constraint b x T ξ 0 i < 0 under the atom ξ 0 i : (i) If b x T ξ 0 i 0, (37f) forces w i = 0 which, in turn, leads to y i = 0 due to (37g). (ii) If b x T ξ 0 i < 0, we have y i (x T ξ 0 i b)/2 > 0 from (37e) and w i = 1 from (37g). Therefore, if b x T ξ 0 i < 0 w i = 1, the nonlinear inequality (39) 2η i y i w i, i N (40) forces η i 1 x T ξ 0 i b. On the other hand, if b xt ξ 0 i > 0 w i = 0, η i can take value 0. Since w i {0, 1}, then w i = w i2, we can replace the right-hand side of (40) by w i2, which gives the canonical rotated second order cone constraint (37h). Furthermore, (38a) is simplified as (37a) by substituting w i based on (39). To see the equivalence, consider the following: i) If b x T ξ 0 i 0, w i = 0 and the optimal value of η i is 0. Thus, (37a) becomes: 1 = 1 w i s i. ii) If b x T ξ 0 i < 0, we have w i = 1, y i (x T ξ 0 i b)/2, η i 1 x T ξ 0 i b, and η i(b x T ξ 0 i ) in the left side of (37a) is then smaller than -1, which allows its replacement by w i. Owing to the definition of w i (39) which takes value 1 when b x T ξ 0 i < 0 and 0 otherwise and to the requirement that the aggregate probability of the satisfied atoms must be at least equal to 1 ɛ, (37c) is valid and can be added to strengthen the formulation. Step 2. McCormick envelop relaxation. The dual of the l 1 norm is the l norm; thus, (38b) becomes xη i = max x,η { x mη i : m M} λ, i N (41) The dual variable η corresponding to x T ξ b in the derivation of the conjugate function is nonnegative and upper-bounded by U η. Due to (39), η i either takes value 0 if b x T ξ 0 i 0 or 1/(b x T ξ 0 i ) if b x T ξ 0 i > 0. To derive the envelope of the bilinear terms x m η i, m M, i N, we introduce a set of auxiliary variables φ mi := x m η i constrained by the McCormick inequalities and obtain the convex relaxation of the bilinear term x m η i represented by (37j)-(37m). Substituting φ mi := x m η i into (41) gives (37i). e note that the feasible set Z CR CSO outer-approximates the feasible area of the chance constraint (33) due to the McCormick relaxation of bilinear terms xη i. e next provide some conditions under which the above relaxation problem is exact and the feasible set defined by Z CR CSO is equivalent to that of the distributionally robust chance constraint.

18 18 Data-Driven Chance-Constrained Optimization with asserstein Metric LEMMA 2. (Exactness Conditions for MISOCP Outer Approximation) The set Z CR CSO is equivalent to the feasible area defined by the distributionally robust constraint (33) if the decision variables x are bounded integer variables. If x {0, 1} M is binary, the McCormick envelop relaxation provides an exact reformulation for each bilinear term xη. If x Z M + [0, U x ] M is a general integer variable, we rewrite it as a sum of binary variables and apply McCormick s approach to each resulting term. 5. Computational Tests In this section, we conduct a series of experiments to evaluate the computational efficiency of the proposed MISOCP reformulations on a distributionally robust knapsack problem (see Cheng et al. (2014)) with single chance constraint. The model is a DRCCP under the continuum of realization case with random technology vector. The following notations are adopted for the DRCCP-Knapsack problem. e consider M items and g R M + is the vector of item values. Let ξ R M denote the vector of random item weights, and b > 0 be the capacity of the knapsack. Each binary decision variable x m takes value 1 if item m is selected and is equal to 0 otherwise. The DRCCP-Knapsack problem reads: DRCCP-Knapsack: max x g T x (42a) s.t. inf P D P { ξ : ξ T x b } 1 ɛ (42b) x {0, 1} M (42c) To fit the distributionally robust chance constraint (42b) into our modeling and solution framework, we first reformulate (42b) as: sup P { ξ : ξ T x b } ɛ, P D which, defining ξ = ξ and b = b, is equivalent to: } sup P {ξ : ξ T x b ɛ (43) P D The above chance constraint (43) has the same form as constraint (33) and allows us to utilize Theorem (7) to reformulate it. The problem instances are created as follows. The random weights ξ are generated from a uniform distribution ξ U(5, 15), the item values are sampled from another uniform distribution g U(0, 20). e consider a number of items M = {10, 20, 30} with data sample size N = {100, 500, 1000, 2000}, a risk tolerance level ɛ {0.10, 0.05}, and a asserstein radius θ {0.10, 0.05}. e define the knapsack capacity as b = 10 M. e create 48 problem instances by considering the above combinations of values for the parameters. e solve three formulations for each instance:

19 Data-Driven Chance-Constrained Optimization with asserstein Metric 19 MISOCP Reformulation: The feasible region of the distributionally robust chance constraint (43) is represented by the set Z CR CSO (defined with constraints (25a)-(25c), (34c)-(34d), and (37a)-(37m)), in which the decision variables x {0, 1} M and w {0, 1} N are binary. The MISOCP Reformulation is equivalent to the DRCCP-Knapsack problem due to Lemma 2. MISOCP Approximation: e relax the binary requirements on x, i.e., we set x [0, 1]. Thus Z CR CSO results in an outer approximation (see Theorem 7) of the feasible area defined by the distributionally robust chance constraint. Continuous Relaxation: e remove the integrity constraints on both x and w to obtain the continuous relaxation of the MISOCP set Z CR CSO. e use AMPL to formulate all problems. Each problem instance is solved with Gurobi solver on a 64-bit laptop with Intel(R) Core(TM) i CPU processor at 2.60GHz with 16GB RAM. e set the time limit to 3600 seconds. For each instance, we report the solution time and objective value obtained with each model. e report the gap between the objective values of the MISOCP Approximation and Continuous Relaxation problems from the (true) optimal objective value of the exact MISOCP reformulation. The average of the solution time and gap values are also given in the last row of each table. M N Table 1 Computational Efficiency ɛ = 0.10, θ = 0.10 MISOCP Reformulation MISOCP Approximation Continuous Relaxation Time ObjValue Time ObjValue Gap Time ObjValue Gap % % % % % % % % % % % % % % % % % % % % % % % % Average % % The optimal solution is reached within one hour for all instances with the MISOCP Reformulation and Approximation methods. Optimality could not be proven within 3 hours for three instances with 2000 data points. e observe that the solution time (for each formulation) increases with the number of items M and the number of data points N. The average solution time for the MISOCP Reformulation is about 700 seconds. As expected, the Continuous Relaxation model is the fastest to solve with an average solution

20 20 Data-Driven Chance-Constrained Optimization with asserstein Metric M N Table 2 Computational Efficiency ɛ = 0.10, θ = 0.05 MISOCP Reformulation MISOCP Approximation Continuous Relaxation Time ObjValue Time ObjValue Gap Time ObjValue Gap % % % % % % % % % % % % % % % % % % % % % % % % Average % % M N Table 3 Computational Efficiency ɛ = 0.05, θ = 0.10 MISOCP Reformulation MISOCP Approximation Continuous Relaxation Time ObjValue Time ObjValue Gap Time ObjValue Gap % % % % % % % % % % % % % % % % % % % % % % % % Average % % time of 4.1 seconds and provides the loosest approximation with an average gap of 7.6%. The MISOCP Approximation is 54.2% tighter than the Continuous Relaxation but slower. The solution of the MISOCP Approximation is on average 10.8% faster (625.4 seconds) than the exact reformulation and is tight; the average optimality gap is 3.5%. In short, the MISOCP Approximation model provides a tight relaxation of the exact model and provides a reasonable tradeoff between solution time and accuracy should the exact reformulation be too time-consuming to solve.

21 Data-Driven Chance-Constrained Optimization with asserstein Metric 21 M N Table 4 Computational Efficiency ɛ = 0.05, θ = 0.05 MISOCP Reformulation MISOCP Approximation Continuous Relaxation Time ObjValue Time ObjValue Gap Time ObjValue Gap % % % % % % % % % % % % % % % % % % % % % % % % Average % % References Blanchet J, Chen L, Zhou XY (2018) Distributionally robust mean-variance portfolio selection with asserstein distance orking Paper: Available at Optimization Online. Boyd S, Vandenberghe L (2004) Convex optimization (Cambridge University Press). Calafiore GC (2007) Ambiguous risk measures and optimal robust portfolios. SIAM Journal on Optimization 18(3): Cheng J, Delage E, Lisser A (2014) Distributionally robust stochastic knapsack problem. SIAM Journal on Optimization 24(3): Delage E, Ye Y (2010) Distributionally robust optimization under moment uncertainty with application to data-driven problems. Operations Research 58(3): Duan C, Fang, Jiang L, Yao L, Liu J (2017) Distributionally robust chance-constrained voltage-concerned dc-opf with asserstein metric orking Paper: Available at arxiv: v1. Erdoğan E, Iyengar G (2006) Ambiguous chance constrained problems and robust optimization. Mathematical Programming 107(1-2): Esfahani PM, Kuhn D (2017) Data-driven distributionally robust optimization using the asserstein metric: Performance guarantees and tractable reformulations. Mathematical Programming Gao R, Kleywegt AJ (2016) Distributionally robust stochastic optimization with asserstein distance, orking Paper: Available at Optimization Online. Hanasusanto GA, Roitch V, Kuhn D, iesemann (2015) A distributionally robust perspective on uncertainty quantification and chance constrained programming. Mathematical Programming 151(1):35 62.

22 22 Data-Driven Chance-Constrained Optimization with asserstein Metric Hanasusanto GA, Roitch V, Kuhn D, iesemann (2017) Ambiguous joint chance constraints under mean and dispersion information. Operations Research 65(3): Hu Z, Hong LJ (2013) Kullback-Leibler divergence constrained distributionally robust optimization orking Paper: Available at Optimization Online. Ji R, Lejeune M (2017) Data-driven optimization of reward-risk ratio measures. orking Paper: Available at SSRN, URL Jiang R, Guan Y (2015) Data-driven chance constrained stochastic program. Mathematical Programming 158: Jiang R, Guan Y, atson JP (2016) Risk-averse stochastic unit commitment with incomplete information. IIE Transactions 48(9): McCormick GP (1976) Computability of global solutions to factorable nonconvex programs: Part I Convex underestimating problems. Mathematical Programming 10(1): Pflug GC, Pichler A, ozabal D (2012) The 1/N investment strategy is optimal under high model ambiguity. Journal of Banking & Finance 36(2): Pflug GC, Pohl M (2017) A review on ambiguity in stochastic portfolio optimization. Set-Valued and Variational Analysis Pflug GC, ozabal D (2007) Ambiguity in portfolio selection. Quantitative Finance 7(4): Postek K, Den Hertog D, Melenberg B (2016) Computationally tractable counterparts of distributionally robust constraints on risk measures. SIAM Review 58(4): Xie, Ahmed S (2018a) Bicriteria approximation of chance constrained covering problems orking Paper: Available at Optimization Online. Xie, Ahmed S (2018b) On deterministic reformulations of distributionally robust joint chance constrained optimization problems. SIAM Journal on Optimization 28(2): Yang, Xu H (2016) Distributionally robust chance constraints for non-linear uncertainties. Mathematical Programming 155(1-2): Zhao C, Guan Y (2018) Data-driven risk-averse stochastic optimization with asserstein metric. Operations Research Letters 46(2): Zymler S, Kuhn D, Rustem B (2013) Distributionally robust joint chance constraints with second-order moment information. Mathematical Programming 137(1-2):

On deterministic reformulations of distributionally robust joint chance constrained optimization problems

On deterministic reformulations of distributionally robust joint chance constrained optimization problems On deterministic reformulations of distributionally robust joint chance constrained optimization problems Weijun Xie and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology,

More information

On Distributionally Robust Chance Constrained Program with Wasserstein Distance

On Distributionally Robust Chance Constrained Program with Wasserstein Distance On Distributionally Robust Chance Constrained Program with Wasserstein Distance Weijun Xie 1 1 Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA 4061 June 15, 018 Abstract

More information

Distributionally Robust Stochastic Optimization with Wasserstein Distance

Distributionally Robust Stochastic Optimization with Wasserstein Distance Distributionally Robust Stochastic Optimization with Wasserstein Distance Rui Gao DOS Seminar, Oct 2016 Joint work with Anton Kleywegt School of Industrial and Systems Engineering Georgia Tech What is

More information

Random Convex Approximations of Ambiguous Chance Constrained Programs

Random Convex Approximations of Ambiguous Chance Constrained Programs Random Convex Approximations of Ambiguous Chance Constrained Programs Shih-Hao Tseng Eilyan Bitar Ao Tang Abstract We investigate an approach to the approximation of ambiguous chance constrained programs

More information

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Weijun Xie 1, Shabbir Ahmed 2, Ruiwei Jiang 3 1 Department of Industrial and Systems Engineering, Virginia Tech,

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Robust combinatorial optimization with variable budgeted uncertainty

Robust combinatorial optimization with variable budgeted uncertainty Noname manuscript No. (will be inserted by the editor) Robust combinatorial optimization with variable budgeted uncertainty Michael Poss Received: date / Accepted: date Abstract We introduce a new model

More information

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Weijun Xie Shabbir Ahmed Ruiwei Jiang February 13, 2017 Abstract A distributionally robust joint chance constraint

More information

Distributionally robust simple integer recourse

Distributionally robust simple integer recourse Distributionally robust simple integer recourse Weijun Xie 1 and Shabbir Ahmed 2 1 Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA 24061 2 School of Industrial & Systems

More information

Distributionally Robust Convex Optimization

Distributionally Robust Convex Optimization Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,

More information

Ambiguous Joint Chance Constraints under Mean and Dispersion Information

Ambiguous Joint Chance Constraints under Mean and Dispersion Information Ambiguous Joint Chance Constraints under Mean and Dispersion Information Grani A. Hanasusanto 1, Vladimir Roitch 2, Daniel Kuhn 3, and Wolfram Wiesemann 4 1 Graduate Program in Operations Research and

More information

Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs

Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs Manish Bansal Grado Department of Industrial and Systems Engineering, Virginia Tech Email: bansal@vt.edu Kuo-Ling Huang

More information

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Nilay Noyan Andrzej Ruszczyński March 21, 2006 Abstract Stochastic dominance relations

More information

Distributionally robust optimization techniques in batch bayesian optimisation

Distributionally robust optimization techniques in batch bayesian optimisation Distributionally robust optimization techniques in batch bayesian optimisation Nikitas Rontsis June 13, 2016 1 Introduction This report is concerned with performing batch bayesian optimization of an unknown

More information

Distributionally Robust Reward-risk Ratio Programming with Wasserstein Metric

Distributionally Robust Reward-risk Ratio Programming with Wasserstein Metric oname manuscript o. will be inserted by the editor) Distributionally Robust Reward-risk Ratio Programming with Wasserstein Metric Yong Zhao Yongchao Liu Jin Zhang Xinmin Yang Received: date / Accepted:

More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

Distributionally Robust Convex Optimization

Distributionally Robust Convex Optimization Distributionally Robust Convex Optimization Wolfram Wiesemann 1, Daniel Kuhn 1, and Melvyn Sim 2 1 Department of Computing, Imperial College London, United Kingdom 2 Department of Decision Sciences, National

More information

CS-E4830 Kernel Methods in Machine Learning

CS-E4830 Kernel Methods in Machine Learning CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This

More information

Combinatorial Data Mining Method for Multi-Portfolio Stochastic Asset Allocation

Combinatorial Data Mining Method for Multi-Portfolio Stochastic Asset Allocation Combinatorial for Stochastic Asset Allocation Ran Ji, M.A. Lejeune Department of Decision Sciences July 8, 2013 Content Class of Models with Downside Risk Measure Class of Models with of multiple portfolios

More information

c 2014 Society for Industrial and Applied Mathematics

c 2014 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 4, No. 3, pp. 1485 1506 c 014 Society for Industrial and Applied Mathematics DISTRIBUTIONALLY ROBUST STOCHASTIC KNAPSACK PROBLEM JIANQIANG CHENG, ERICK DELAGE, AND ABDEL LISSER Abstract.

More information

Quantifying Stochastic Model Errors via Robust Optimization

Quantifying Stochastic Model Errors via Robust Optimization Quantifying Stochastic Model Errors via Robust Optimization IPAM Workshop on Uncertainty Quantification for Multiscale Stochastic Systems and Applications Jan 19, 2016 Henry Lam Industrial & Operations

More information

Scenario Grouping and Decomposition Algorithms for Chance-constrained Programs

Scenario Grouping and Decomposition Algorithms for Chance-constrained Programs Scenario Grouping and Decomposition Algorithms for Chance-constrained Programs Siqian Shen Dept. of Industrial and Operations Engineering University of Michigan Joint work with Yan Deng (UMich, Google)

More information

Convex relaxations of chance constrained optimization problems

Convex relaxations of chance constrained optimization problems Convex relaxations of chance constrained optimization problems Shabbir Ahmed School of Industrial & Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive, Atlanta, GA 30332. May 12, 2011

More information

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk

Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Distributionally Robust Discrete Optimization with Entropic Value-at-Risk Daniel Zhuoyu Long Department of SEEM, The Chinese University of Hong Kong, zylong@se.cuhk.edu.hk Jin Qi NUS Business School, National

More information

Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets

Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets manuscript Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets Zhi Chen Imperial College Business School, Imperial College London zhi.chen@imperial.ac.uk Melvyn Sim Department

More information

Ambiguity Sets and their applications to SVM

Ambiguity Sets and their applications to SVM Ambiguity Sets and their applications to SVM Ammon Washburn University of Arizona April 22, 2016 Ammon Washburn (University of Arizona) Ambiguity Sets April 22, 2016 1 / 25 Introduction Go over some (very

More information

Threshold Boolean Form for Joint Probabilistic Constraints with Random Technology Matrix

Threshold Boolean Form for Joint Probabilistic Constraints with Random Technology Matrix Threshold Boolean Form for Joint Probabilistic Constraints with Random Technology Matrix Alexander Kogan, Miguel A. Leeune Abstract We develop a new modeling and exact solution method for stochastic programming

More information

Handout 6: Some Applications of Conic Linear Programming

Handout 6: Some Applications of Conic Linear Programming ENGG 550: Foundations of Optimization 08 9 First Term Handout 6: Some Applications of Conic Linear Programming Instructor: Anthony Man Cho So November, 08 Introduction Conic linear programming CLP, and

More information

Handout 8: Dealing with Data Uncertainty

Handout 8: Dealing with Data Uncertainty MFE 5100: Optimization 2015 16 First Term Handout 8: Dealing with Data Uncertainty Instructor: Anthony Man Cho So December 1, 2015 1 Introduction Conic linear programming CLP, and in particular, semidefinite

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

Stability of optimization problems with stochastic dominance constraints

Stability of optimization problems with stochastic dominance constraints Stability of optimization problems with stochastic dominance constraints D. Dentcheva and W. Römisch Stevens Institute of Technology, Hoboken Humboldt-University Berlin www.math.hu-berlin.de/~romisch SIAM

More information

Robust Dual-Response Optimization

Robust Dual-Response Optimization Yanıkoğlu, den Hertog, and Kleijnen Robust Dual-Response Optimization 29 May 1 June 1 / 24 Robust Dual-Response Optimization İhsan Yanıkoğlu, Dick den Hertog, Jack P.C. Kleijnen Özyeğin University, İstanbul,

More information

Mathematical Optimization Models and Applications

Mathematical Optimization Models and Applications Mathematical Optimization Models and Applications Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 1, 2.1-2,

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Second-order cone programming formulation for two player zero-sum game with chance constraints

Second-order cone programming formulation for two player zero-sum game with chance constraints Second-order cone programming formulation for two player zero-sum game with chance constraints Vikas Vikram Singh a, Abdel Lisser a a Laboratoire de Recherche en Informatique Université Paris Sud, 91405

More information

Safe Approximations of Chance Constraints Using Historical Data

Safe Approximations of Chance Constraints Using Historical Data Safe Approximations of Chance Constraints Using Historical Data İhsan Yanıkoğlu Department of Econometrics and Operations Research, Tilburg University, 5000 LE, Netherlands, {i.yanikoglu@uvt.nl} Dick den

More information

arxiv: v2 [math.oc] 16 Jul 2016

arxiv: v2 [math.oc] 16 Jul 2016 Distributionally Robust Stochastic Optimization with Wasserstein Distance Rui Gao, Anton J. Kleywegt School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0205

More information

Mixed Integer Linear Programming Formulation for Chance Constrained Mathematical Programs with Equilibrium Constraints

Mixed Integer Linear Programming Formulation for Chance Constrained Mathematical Programs with Equilibrium Constraints Mixed Integer Linear Programming Formulation for Chance Constrained Mathematical Programs with Equilibrium Constraints ayed A. adat and Lingling Fan University of outh Florida, email: linglingfan@usf.edu

More information

A note on scenario reduction for two-stage stochastic programs

A note on scenario reduction for two-stage stochastic programs A note on scenario reduction for two-stage stochastic programs Holger Heitsch a and Werner Römisch a a Humboldt-University Berlin, Institute of Mathematics, 199 Berlin, Germany We extend earlier work on

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

Data-driven Chance Constrained Stochastic Program

Data-driven Chance Constrained Stochastic Program Data-driven Chance Constrained Stochastic Program Ruiwei Jiang and Yongpei Guan Department of Industrial and Systems Engineering University of Florida, Gainesville, FL 326, USA Email: rwjiang@ufl.edu;

More information

arxiv: v2 [math.oc] 10 May 2017

arxiv: v2 [math.oc] 10 May 2017 Conic Programming Reformulations of Two-Stage Distributionally Robust Linear Programs over Wasserstein Balls arxiv:1609.07505v2 [math.oc] 10 May 2017 Grani A. Hanasusanto 1 and Daniel Kuhn 2 1 Graduate

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region

More information

Managing Uncertainty and Security in Power System Operations: Chance-Constrained Optimal Power Flow

Managing Uncertainty and Security in Power System Operations: Chance-Constrained Optimal Power Flow Managing Uncertainty and Security in Power System Operations: Chance-Constrained Optimal Power Flow Line Roald, November 4 th 2016 Line Roald 09.11.2016 1 Outline Introduction Chance-Constrained Optimal

More information

THE stochastic and dynamic environments of many practical

THE stochastic and dynamic environments of many practical A Convex Optimization Approach to Distributionally Robust Markov Decision Processes with Wasserstein Distance Insoon Yang, Member, IEEE Abstract In this paper, we consider the problem of constructing control

More information

A Two-Stage Moment Robust Optimization Model and its Solution Using Decomposition

A Two-Stage Moment Robust Optimization Model and its Solution Using Decomposition A Two-Stage Moment Robust Optimization Model and its Solution Using Decomposition Sanjay Mehrotra and He Zhang July 23, 2013 Abstract Moment robust optimization models formulate a stochastic problem with

More information

Supplementary Material

Supplementary Material ec1 Supplementary Material EC.1. Measures in D(S, µ, Ψ) Are Less Dispersed Lemma EC.1. Given a distributional set D(S, µ, Ψ) with S convex and a random vector ξ such that its distribution F D(S, µ, Ψ),

More information

Convex optimization problems. Optimization problem in standard form

Convex optimization problems. Optimization problem in standard form Convex optimization problems optimization problem in standard form convex optimization problems linear optimization quadratic optimization geometric programming quasiconvex optimization generalized inequality

More information

The Value of Randomized Solutions in Mixed-Integer Distributionally Robust Optimization Problems

The Value of Randomized Solutions in Mixed-Integer Distributionally Robust Optimization Problems The Value of Randomized Solutions in Mixed-Integer Distributionally Robust Optimization Problems Erick Delage 1 and Ahmed Saif 2 1 Department of Decision Sciences, HEC Montréal 2 Department of Industrial

More information

Robust Optimization for Risk Control in Enterprise-wide Optimization

Robust Optimization for Risk Control in Enterprise-wide Optimization Robust Optimization for Risk Control in Enterprise-wide Optimization Juan Pablo Vielma Department of Industrial Engineering University of Pittsburgh EWO Seminar, 011 Pittsburgh, PA Uncertainty in Optimization

More information

4. Convex optimization problems

4. Convex optimization problems Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization

More information

Time inconsistency of optimal policies of distributionally robust inventory models

Time inconsistency of optimal policies of distributionally robust inventory models Time inconsistency of optimal policies of distributionally robust inventory models Alexander Shapiro Linwei Xin Abstract In this paper, we investigate optimal policies of distributionally robust (risk

More information

Two-stage stochastic (and distributionally robust) p-order conic mixed integer programs: Tight second stage formulations

Two-stage stochastic (and distributionally robust) p-order conic mixed integer programs: Tight second stage formulations Two-stage stochastic (and distributionally robust p-order conic mixed integer programs: Tight second stage formulations Manish Bansal and Yingqiu Zhang Department of Industrial and Systems Engineering

More information

Optimization Problems with Probabilistic Constraints

Optimization Problems with Probabilistic Constraints Optimization Problems with Probabilistic Constraints R. Henrion Weierstrass Institute Berlin 10 th International Conference on Stochastic Programming University of Arizona, Tucson Recommended Reading A.

More information

Portfolio optimization with stochastic dominance constraints

Portfolio optimization with stochastic dominance constraints Charles University in Prague Faculty of Mathematics and Physics Portfolio optimization with stochastic dominance constraints December 16, 2014 Contents Motivation 1 Motivation 2 3 4 5 Contents Motivation

More information

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given.

HW1 solutions. 1. α Ef(x) β, where Ef(x) is the expected value of f(x), i.e., Ef(x) = n. i=1 p if(a i ). (The function f : R R is given. HW1 solutions Exercise 1 (Some sets of probability distributions.) Let x be a real-valued random variable with Prob(x = a i ) = p i, i = 1,..., n, where a 1 < a 2 < < a n. Of course p R n lies in the standard

More information

Stability Analysis for Mathematical Programs with Distributionally Robust Chance Constraint

Stability Analysis for Mathematical Programs with Distributionally Robust Chance Constraint Stability Analysis for Mathematical Programs with Distributionally Robust Chance Constraint Shaoyan Guo, Huifu Xu and Liwei Zhang September 24, 2015 Abstract. Stability analysis for optimization problems

More information

4. Convex optimization problems

4. Convex optimization problems Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization

More information

Robust Growth-Optimal Portfolios

Robust Growth-Optimal Portfolios Robust Growth-Optimal Portfolios! Daniel Kuhn! Chair of Risk Analytics and Optimization École Polytechnique Fédérale de Lausanne rao.epfl.ch 4 Technology Stocks I 4 technology companies: Intel, Cisco,

More information

Lagrange duality. The Lagrangian. We consider an optimization program of the form

Lagrange duality. The Lagrangian. We consider an optimization program of the form Lagrange duality Another way to arrive at the KKT conditions, and one which gives us some insight on solving constrained optimization problems, is through the Lagrange dual. The dual is a maximization

More information

A SECOND ORDER STOCHASTIC DOMINANCE PORTFOLIO EFFICIENCY MEASURE

A SECOND ORDER STOCHASTIC DOMINANCE PORTFOLIO EFFICIENCY MEASURE K Y B E R N E I K A V O L U M E 4 4 ( 2 0 0 8 ), N U M B E R 2, P A G E S 2 4 3 2 5 8 A SECOND ORDER SOCHASIC DOMINANCE PORFOLIO EFFICIENCY MEASURE Miloš Kopa and Petr Chovanec In this paper, we introduce

More information

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Yongjia Song James R. Luedtke August 9, 2012 Abstract We study solution approaches for the design of reliably

More information

Robust and Stochastic Optimization Notes. Kevin Kircher, Cornell MAE

Robust and Stochastic Optimization Notes. Kevin Kircher, Cornell MAE Robust and Stochastic Optimization Notes Kevin Kircher, Cornell MAE These are partial notes from ECE 6990, Robust and Stochastic Optimization, as taught by Prof. Eilyan Bitar at Cornell University in the

More information

CS295: Convex Optimization. Xiaohui Xie Department of Computer Science University of California, Irvine

CS295: Convex Optimization. Xiaohui Xie Department of Computer Science University of California, Irvine CS295: Convex Optimization Xiaohui Xie Department of Computer Science University of California, Irvine Course information Prerequisites: multivariate calculus and linear algebra Textbook: Convex Optimization

More information

In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight.

In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the multi-dimensional knapsack problem, additional

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. 35, No., May 010, pp. 84 305 issn 0364-765X eissn 156-5471 10 350 084 informs doi 10.187/moor.1090.0440 010 INFORMS On the Power of Robust Solutions in Two-Stage

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers Optimization for Communications and Networks Poompat Saengudomlert Session 4 Duality and Lagrange Multipliers P Saengudomlert (2015) Optimization Session 4 1 / 14 24 Dual Problems Consider a primal convex

More information

A Geometric Characterization of the Power of Finite Adaptability in Multistage Stochastic and Adaptive Optimization

A Geometric Characterization of the Power of Finite Adaptability in Multistage Stochastic and Adaptive Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 36, No., February 20, pp. 24 54 issn 0364-765X eissn 526-547 360 0024 informs doi 0.287/moor.0.0482 20 INFORMS A Geometric Characterization of the Power of Finite

More information

Scenario grouping and decomposition algorithms for chance-constrained programs

Scenario grouping and decomposition algorithms for chance-constrained programs Scenario grouping and decomposition algorithms for chance-constrained programs Yan Deng Shabbir Ahmed Jon Lee Siqian Shen Abstract A lower bound for a finite-scenario chance-constrained problem is given

More information

Optimal Transport Methods in Operations Research and Statistics

Optimal Transport Methods in Operations Research and Statistics Optimal Transport Methods in Operations Research and Statistics Jose Blanchet (based on work with F. He, Y. Kang, K. Murthy, F. Zhang). Stanford University (Management Science and Engineering), and Columbia

More information

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem Dimitris J. Bertsimas Dan A. Iancu Pablo A. Parrilo Sloan School of Management and Operations Research Center,

More information

Lecture: Convex Optimization Problems

Lecture: Convex Optimization Problems 1/36 Lecture: Convex Optimization Problems http://bicmr.pku.edu.cn/~wenzw/opt-2015-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/36 optimization

More information

Distributionally Robust Optimization with ROME (part 1)

Distributionally Robust Optimization with ROME (part 1) Distributionally Robust Optimization with ROME (part 1) Joel Goh Melvyn Sim Department of Decision Sciences NUS Business School, Singapore 18 Jun 2009 NUS Business School Guest Lecture J. Goh, M. Sim (NUS)

More information

Lagrange Relaxation: Introduction and Applications

Lagrange Relaxation: Introduction and Applications 1 / 23 Lagrange Relaxation: Introduction and Applications Operations Research Anthony Papavasiliou 2 / 23 Contents 1 Context 2 Applications Application in Stochastic Programming Unit Commitment 3 / 23

More information

On the relation between concavity cuts and the surrogate dual for convex maximization problems

On the relation between concavity cuts and the surrogate dual for convex maximization problems On the relation between concavity cuts and the surrogate dual for convex imization problems Marco Locatelli Dipartimento di Ingegneria Informatica, Università di Parma Via G.P. Usberti, 181/A, 43124 Parma,

More information

Scenario Optimization for Robust Design

Scenario Optimization for Robust Design Scenario Optimization for Robust Design foundations and recent developments Giuseppe Carlo Calafiore Dipartimento di Elettronica e Telecomunicazioni Politecnico di Torino ITALY Learning for Control Workshop

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty

Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty V. Jeyakumar and G. Y. Li March 1, 2012 Abstract This paper develops the deterministic approach to duality for semi-definite

More information

Lecture Notes on Support Vector Machine

Lecture Notes on Support Vector Machine Lecture Notes on Support Vector Machine Feng Li fli@sdu.edu.cn Shandong University, China 1 Hyperplane and Margin In a n-dimensional space, a hyper plane is defined by ω T x + b = 0 (1) where ω R n is

More information

POWER system operations increasingly rely on the AC

POWER system operations increasingly rely on the AC i Convex Relaxations and Approximations of Chance-Constrained AC-OF roblems Lejla Halilbašić, Student Member, IEEE, ierre inson, Senior Member, IEEE, and Spyros Chatzivasileiadis, Senior Member, IEEE arxiv:1804.05754v3

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Relaxations and Randomized Methods for Nonconvex QCQPs

Relaxations and Randomized Methods for Nonconvex QCQPs Relaxations and Randomized Methods for Nonconvex QCQPs Alexandre d Aspremont, Stephen Boyd EE392o, Stanford University Autumn, 2003 Introduction While some special classes of nonconvex problems can be

More information

Reformulation of chance constrained problems using penalty functions

Reformulation of chance constrained problems using penalty functions Reformulation of chance constrained problems using penalty functions Martin Branda Charles University in Prague Faculty of Mathematics and Physics EURO XXIV July 11-14, 2010, Lisbon Martin Branda (MFF

More information

Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk

Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk Noname manuscript No. (will be inserted by the editor) Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk Jie Sun Li-Zhi Liao Brian Rodrigues Received: date / Accepted: date Abstract

More information

On a class of minimax stochastic programs

On a class of minimax stochastic programs On a class of minimax stochastic programs Alexander Shapiro and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology 765 Ferst Drive, Atlanta, GA 30332. August 29, 2003

More information

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018 Lecture 1 Stochastic Optimization: Introduction January 8, 2018 Optimization Concerned with mininmization/maximization of mathematical functions Often subject to constraints Euler (1707-1783): Nothing

More information

The Distributionally Robust Chance Constrained Vehicle Routing Problem

The Distributionally Robust Chance Constrained Vehicle Routing Problem The Distributionally Robust Chance Constrained Vehicle Routing Problem Shubhechyya Ghosal and Wolfram Wiesemann Imperial College Business School, Imperial College London, United Kingdom August 3, 208 Abstract

More information

Robust Markov Decision Processes

Robust Markov Decision Processes Robust Markov Decision Processes Wolfram Wiesemann, Daniel Kuhn and Berç Rustem February 9, 2012 Abstract Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments.

More information

Generalized Decision Rule Approximations for Stochastic Programming via Liftings

Generalized Decision Rule Approximations for Stochastic Programming via Liftings Generalized Decision Rule Approximations for Stochastic Programming via Liftings Angelos Georghiou 1, Wolfram Wiesemann 2, and Daniel Kuhn 3 1 Process Systems Engineering Laboratory, Massachusetts Institute

More information

arxiv: v2 [cs.ds] 27 Nov 2014

arxiv: v2 [cs.ds] 27 Nov 2014 Single machine scheduling problems with uncertain parameters and the OWA criterion arxiv:1405.5371v2 [cs.ds] 27 Nov 2014 Adam Kasperski Institute of Industrial Engineering and Management, Wroc law University

More information

Auxiliary signal design for failure detection in uncertain systems

Auxiliary signal design for failure detection in uncertain systems Auxiliary signal design for failure detection in uncertain systems R. Nikoukhah, S. L. Campbell and F. Delebecque Abstract An auxiliary signal is an input signal that enhances the identifiability of a

More information

Relaxations of linear programming problems with first order stochastic dominance constraints

Relaxations of linear programming problems with first order stochastic dominance constraints Operations Research Letters 34 (2006) 653 659 Operations Research Letters www.elsevier.com/locate/orl Relaxations of linear programming problems with first order stochastic dominance constraints Nilay

More information

Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty

Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty Takafumi Kanamori and Akiko Takeda Abstract. Uncertain programs have been developed to deal with optimization problems

More information

Machine Learning. Lecture 6: Support Vector Machine. Feng Li.

Machine Learning. Lecture 6: Support Vector Machine. Feng Li. Machine Learning Lecture 6: Support Vector Machine Feng Li fli@sdu.edu.cn https://funglee.github.io School of Computer Science and Technology Shandong University Fall 2018 Warm Up 2 / 80 Warm Up (Contd.)

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Ambiguous Chance Constrained Programs: Algorithms and Applications

Ambiguous Chance Constrained Programs: Algorithms and Applications Ambiguous Chance Constrained Programs: Algorithms and Applications Emre Erdoğan Adviser: Associate Professor Garud Iyengar Partial Sponsor: TÜBİTAK NATO-A1 Submitted in partial fulfilment of the Requirements

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 00x, pp. xxx xxx ISSN 0364-765X EISSN 156-5471 0x xx0x 0xxx informs DOI 10.187/moor.xxxx.xxxx c 00x INFORMS On the Power of Robust Solutions in

More information

Solving Dual Problems

Solving Dual Problems Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem

More information