On Distributionally Robust Chance Constrained Program with Wasserstein Distance


On Distributionally Robust Chance Constrained Program with Wasserstein Distance

Weijun Xie 1

1 Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA 24061

June 15, 2018

Abstract

This paper studies a distributionally robust chance constrained program (DRCCP) with Wasserstein ambiguity set, where the uncertain constraints should be satisfied with a probability at least a given threshold for all the probability distributions of the uncertain parameters within a chosen Wasserstein distance from an empirical distribution. In this work, we investigate equivalent reformulations and approximations of such problems. We first show that a DRCCP can be reformulated as a conditional-value-at-risk constrained optimization problem, and thus admits tight inner and outer approximations. When the metric space of uncertain parameters is a normed vector space, we show that a DRCCP with bounded feasible region is mixed integer representable by introducing big-M coefficients and additional binary variables. For a DRCCP with pure binary decision variables, by exploring submodular structure, we show that it admits a big-M free formulation and can be solved by a branch and cut algorithm. This result can be generalized to mixed integer DRCCPs. Finally, we present a numerical study to illustrate the effectiveness of the proposed methods.

wxie@vt.edu.

1 Introduction

1.1 Setting

A distributionally robust chance constrained program (DRCCP) is of the form:

min c^T x, (1a)
s.t. x ∈ S, (1b)
inf_{P∈P} P{ξ : a(x)^T ξ_i ≤ b_i(x), ∀i ∈ [I]} ≥ 1 − ɛ. (1c)

In (1), the vector x ∈ R^n denotes the decision variables; the vector c ∈ R^n denotes the objective function coefficients; the set S ⊆ R^n denotes deterministic constraints on x; and the constraint (1c) is a chance constraint involving I uncertain inequalities specified by the random vectors ξ_i supported on set Ξ_i ⊆ R^n for each i ∈ [I] with a joint probability distribution P from a family P, termed the ambiguity set. We let [R] := {1, 2, ..., R} for any positive integer R, and for each uncertain constraint i ∈ [I], a(x) ∈ R^n and b_i(x) ∈ R denote affine mappings of x such that a(x) = ηx + (1 − η)e and b_i(x) = B_i^T x + b_i with η ∈ {0, 1}, all-one vector e ∈ R^n, B_i ∈ R^n, and b_i ∈ R, respectively. For notational convenience, we let Ξ = ∏_{i∈[I]} Ξ_i and ξ = (ξ_1, ..., ξ_I). Note that (i) for any i, j ∈ [I] with i ≠ j, the random vectors ξ_i and ξ_j can be correlated; and (ii) we use η ∈ {0, 1} to differentiate whether (1c) involves left-hand uncertainty (i.e., η = 1) or right-hand uncertainty (i.e., η = 0). The chance constraint (1c) requires that all I uncertain constraints are simultaneously satisfied for every probability distribution from the ambiguity set P with a probability at least 1 − ɛ, where ɛ ∈ (0, 1) is a specified risk tolerance. We call (1) a single DRCCP if I = 1 and a joint DRCCP if I ≥ 2. Also, (1) is termed a DRCCP with right-hand uncertainty if η = 0 and a DRCCP with left-hand uncertainty, otherwise. For a joint DRCCP with I = 2 and ξ_1 = −ξ_2, we call (1) a two-sided DRCCP. We denote the feasible region induced by (1c) as

Z := {x ∈ R^n : inf_{P∈P} P{ξ : a(x)^T ξ_i ≤ b_i(x), ∀i ∈ [I]} ≥ 1 − ɛ}. (2)

In this paper, we consider the Wasserstein ambiguity set P.
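Before formalizing the ambiguity set, it may help to see the Wasserstein distance on a toy case. This is an illustration only (not part of the model above): for two equal-weight empirical distributions on the real line with the same number of atoms and the absolute-value ground metric, the 1-Wasserstein distance reduces to averaging the gaps between sorted atoms. A minimal sketch under those assumptions:

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two equal-weight empirical
    distributions on the real line with the same number of atoms.
    The optimal coupling matches sorted atoms, so the distance is
    the average absolute gap between order statistics."""
    assert len(xs) == len(ys), "equal-size samples assumed"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)
```

For instance, shifting every atom of a sample by a constant c moves the empirical distribution exactly c in Wasserstein distance, which is the sense in which the ambiguity set below is a "ball of radius δ" around the empirical distribution.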
(A1) The Wasserstein ambiguity set P is defined as

P = {P ∈ P_0(Ξ) : W(P, P_ζ) ≤ δ}, (3)

where P_ζ denotes the discrete empirical distribution of ζ supported on the points {ζ^j}_{j∈[N]} ⊆ Ξ with point mass function {p_j}_{j∈[N]}, W(P, P_ζ) denotes the Wasserstein distance between P and P_ζ, i.e., the smallest value of E_Q[d(ξ, ζ)] over all joint distributions Q of (ξ, ζ) whose marginals are P and P_ζ, d : Ξ × Ξ → R_+ denotes the distance metric, and δ > 0 denotes the Wasserstein radius. We assume that (Ξ, d) is a totally bounded Polish (separable complete metric) space with distance metric d, i.e., for every ɛ > 0, there exists a finite covering of Ξ by balls with radius at most ɛ. Note that the Wasserstein metric measures the distance between the true distribution and the empirical distribution and is able to recover the true distribution as the number of sampled data points goes to infinity [14].

1.2 Related Literature

There is significant work on reformulations, convexity, and approximations of set Z under various ambiguity sets (see, e.g., [7, 18, 19, 1, 37, 39]). For a single DRCCP, when P consists of all probability distributions with given first and second moments, the set Z is second-order conic representable [7, 13]. Similar

convexity results hold for a single DRCCP when P also incorporates other distributional information such as the support of ξ [10], the unimodality of P [18, 4], or arbitrary convex mappings of ξ [37]. For a joint DRCCP, [19] provided the first convex reformulation of Z in the absence of coefficient uncertainty, i.e., η = 0, when P is characterized by the mean, a positively homogeneous dispersion measure, and a conic support of ξ. For the more general coefficient uncertainty setting, [37] identified several sufficient conditions for Z to be convex (e.g., when P is specified by one moment constraint), and [36] showed that Z is convex for a two-sided DRCCP when P is characterized by the first two moments. When the DRCCP set Z is not convex, many inner convex approximations have been proposed. In [9], the authors proposed to aggregate the multiple uncertain constraints with positive scalars into a single constraint, and then use the conditional-value-at-risk (CVaR) approximation scheme [8] to develop an inner approximation of Z. This approximation is shown to be exact for a single DRCCP when P is specified by first and second moments in [43] or, more generally, by convex moment constraints in [37]. In [38], the authors provided several sufficient conditions under which the well-known Bonferroni approximation of a joint DRCCP is exact and yields a convex reformulation. Recently, there have been many successful developments on data-driven distributionally robust programs with Wasserstein ambiguity set (3) [16, 7, 41]. For instance, [16, 7] studied its reformulation under different settings. Later on, [4, 15, 3, 3] applied it to optimization problems related to machine learning. Other relevant works can be found in [3, 17, 2, 6]. However, there is very limited literature on DRCCP with Wasserstein ambiguity set.
In [35], the authors proved that it is strongly NP-hard to optimize over the DRCCP set Z with Wasserstein ambiguity set and proposed a bicriteria approximation for a class of DRCCPs with covering uncertain constraints (i.e., S is a closed convex cone and Ξ_i ⊆ R^n, B_i ∈ R^n_+, b_i ∈ R for each i ∈ [I]). In [11], the authors considered a two-sided DRCCP with right-hand uncertainty and proposed a tractable reformulation, while in [20], the authors studied the CVaR approximation of DRCCP. To the best of the author's knowledge, there is no work on developing tight approximations and exact reformulations of general DRCCPs with Wasserstein ambiguity set.

1.3 Contributions

In this paper, we study approximations and exact reformulations of DRCCP under the Wasserstein ambiguity set. In particular, our main contributions are summarized as below.

1. We derive a deterministic equivalent reformulation for set Z and show that this reformulation admits a conditional-value-at-risk (CVaR) interpretation. Based upon this fact, we are able to derive tight inner and outer approximations.

2. When the support Ξ is an n × I-dimensional vector space and the distance metric is a norm (i.e., d(ξ, ζ) = ‖ξ − ζ‖), we show that the feasible region S ∩ Z of a DRCCP, once bounded, is mixed integer representable with big-M coefficients and additional binary variables. We also derive compact formulations for the proposed inner and outer approximations and compare their strengths.

3. When the decision variables are pure binary (i.e., S ⊆ {0,1}^n), we first show that the nonlinear constraints in the reformulation can be recast as submodular knapsack constraints. Then, by exploring the polyhedral properties of submodular functions, we propose a new big-M free mixed integer linear reformulation. In a numerical study, we further show that the proposed formulation can be effectively solved by a branch and cut algorithm.

The remainder of the paper is organized as follows.
Section 2 presents the exact reformulation of the DRCCP set Z as well as its inner and outer approximations under a general setting. Section 3 provides (mixed integer) convex reformulations of the feasible region S ∩ Z and its inner and outer approximations when the metric space of the random variables is a normed vector space. Section 4 studies binary DRCCP (i.e., S ⊆ {0,1}^n), develops a big-M free formulation, and numerically illustrates the proposed

methods. Section 5 concludes the paper.

Notation: The following notation is used throughout the paper. We use bold letters (e.g., x, A) to denote vectors or matrices, and use corresponding non-bold letters to denote their components. We let e be the all-ones vector, and let e_i be the ith standard basis vector. Given an integer n, we let [n] := {1, 2, ..., n}, and use R^n_+ := {x ∈ R^n : x_l ≥ 0, ∀l ∈ [n]} and R^n_− := {x ∈ R^n : x_l ≤ 0, ∀l ∈ [n]}. Given a real number t, we let (t)_+ := max{t, 0}. Given a finite set I, we let |I| denote its cardinality. We let ξ denote a random vector with support Ξ and denote one of its realizations by ξ. Given a set R, the characteristic function χ_R(x) = 0 if x ∈ R, and ∞ otherwise, while the indicator function I(x ∈ R) = 1 if x ∈ R, and 0 otherwise. For a matrix A, we let A_i denote the ith row of A and A^j denote the jth column of A. Given a subset T ⊆ [n], we define an n-dimensional binary vector e_T by (e_T)_τ = 1 if τ ∈ T and (e_T)_τ = 0 if τ ∈ [n] \ T. Additional notation will be introduced as needed.

2 General Case: Reformulations and Approximations

In this section, we study an equivalent reformulation of set Z under Assumption (A1). This reformulation has a conditional-value-at-risk (CVaR) interpretation, which allows us to derive tight inner and outer approximations.

2.1 Exact Reformulation

In this part, we reformulate set Z into its deterministic counterpart. The main idea of this reformulation is to first use the strong duality result from [16] to recast the worst-case chance constraint in its dual form, and then break down the indicator function according to its definition.

Theorem 1. Under Assumption (A1), set Z is equivalent to

Z = {x ∈ R^n : δλ − ɛ ≤ (1/N) Σ_{j∈[N]} min{λ f(x, ζ^j) − 1, 0}, λ ≥ 0}, (4)

where

f(x, ζ) := min_{i∈[I]} inf_{ξ∈Ξ : a(x)^T ξ_i > b_i(x)} d(ξ, ζ). (5)

Proof. Note that

inf_{P∈P} P{ξ : a(x)^T ξ_i ≤ b_i(x), ∀i ∈ [I]} ≥ 1 − ɛ

is equivalent to

sup_{P∈P} P{ξ : a(x)^T ξ_i > b_i(x), ∃i ∈ [I]} ≤ ɛ.

By Theorem 1 in [16], sup_{P∈P} P{ξ : a(x)^T ξ_i > b_i(x), ∃i ∈ [I]} is equal to

min_{λ≥0} λδ − (1/N) Σ_{j∈[N]} inf_{ξ∈Ξ} [λ d(ξ, ζ^j) − I(a(x)^T ξ_i > b_i(x), ∃i ∈ [I])]. (6a)

Thus set Z is equivalent to

Z := {x ∈ R^n : λδ − (1/N) Σ_{j∈[N]} inf_{ξ∈Ξ} [λ d(ξ, ζ^j) − I(a(x)^T ξ_i > b_i(x), ∃i ∈ [I])] ≤ ɛ, λ ≥ 0}. (6b)

We now break down the indicator function in the infimum of (6b) and reformulate it as below.

Claim 1. For given λ ≥ 0 and ζ ∈ {ζ^j}_{j∈[N]}, we have

inf_{ξ∈Ξ} [λ d(ξ, ζ) − I(a(x)^T ξ_i > b_i(x), ∃i ∈ [I])] = min{ min_{i∈[I]} inf_{ξ∈Ξ : a(x)^T ξ_i > b_i(x)} [λ d(ξ, ζ) − 1], 0 }. (6c)

Proof. We first note that I(a(x)^T ξ_i > b_i(x), ∃i ∈ [I]) = max_{i∈[I]} I(a(x)^T ξ_i > b_i(x)). Thus,

inf_{ξ∈Ξ} [λ d(ξ, ζ) − I(a(x)^T ξ_i > b_i(x), ∃i ∈ [I])] = min_{i∈[I]} inf_{ξ∈Ξ} [λ d(ξ, ζ) − I(a(x)^T ξ_i > b_i(x))].

Therefore, we only need to show that for any i ∈ [I],

inf_{ξ∈Ξ} [λ d(ξ, ζ) − I(a(x)^T ξ_i > b_i(x))] = min{ inf_{ξ∈Ξ : a(x)^T ξ_i > b_i(x)} [λ d(ξ, ζ) − 1], 0 }. (6d)

There are two cases:

Case 1. If a(x)^T ζ_i > b_i(x), then in the left-hand side of (6d) the infimum is equal to −1 by letting ξ := ζ, which equals the right-hand side since the infimum there is also achieved by ξ := ζ.

Case 2. If a(x)^T ζ_i ≤ b_i(x), then for any ξ ∈ Ξ we either have a(x)^T ξ_i > b_i(x) or a(x)^T ξ_i ≤ b_i(x). Hence, the left-hand side of (6d) is equivalent to

inf_{ξ∈Ξ} [λ d(ξ, ζ) − I(a(x)^T ξ_i > b_i(x))]
= min{ inf_{ξ∈Ξ : a(x)^T ξ_i > b_i(x)} [λ d(ξ, ζ) − 1], inf_{ξ∈Ξ : a(x)^T ξ_i ≤ b_i(x)} [λ d(ξ, ζ)] }
= min{ inf_{ξ∈Ξ : a(x)^T ξ_i > b_i(x)} [λ d(ξ, ζ) − 1], 0 },

where inf_{ξ∈Ξ : a(x)^T ξ_i ≤ b_i(x)} [λ d(ξ, ζ)] = 0 by letting ξ := ζ.

By Claim 1, set Z is equivalent to (4). □

In Theorem 1, we must have λ > 0; thus we can define a new variable γ = 1/λ in (4) and reformulate set Z into the following equivalent form.

Theorem 2. Under Assumption (A1), set Z is equivalent to

Z = {x ∈ R^n : δ − ɛγ ≤ (1/N) Σ_{j∈[N]} min{f(x, ζ^j) − γ, 0}, γ ≥ 0}, (7)

where the function f(·, ·) is defined in (5).

Proof. Let Z̃ denote the set on the right-hand side of (7); we only need to show that Z = Z̃.

(Z ⊆ Z̃) Given x ∈ Z, there exists λ ≥ 0 such that (x, λ) satisfies (4). If λ > 0, then let γ = 1/λ; it is easy to see that (x, γ) satisfies (7), hence x ∈ Z̃. Now suppose that λ = 0; then (4) yields ɛ ≥ 1, a contradiction with ɛ ∈ (0, 1).

(Z̃ ⊆ Z) Similarly, given x ∈ Z̃, there exists γ ≥ 0 such that (x, γ) satisfies (7). If γ > 0, then let λ = 1/γ; it is easy to see that (x, λ) satisfies (4), hence x ∈ Z. Now suppose that γ = 0; then in (7) we have min{f(x, ζ^j) − γ, 0} = 0 for each j ∈ [N], and thus (7) reduces to δ ≤ 0, contradicting that δ > 0. □

Before showing a conditional-value-at-risk (CVaR) interpretation of set Z, let us begin with the following two definitions. Given a random variable X, let P and F_X(·) be its probability distribution and cumulative distribution function, respectively. Then the (1 − ɛ)-value-at-risk (VaR) of X is

VaR_{1−ɛ}(X) := min{s : F_X(s) ≥ 1 − ɛ},

while its (1 − ɛ)-conditional-value-at-risk (CVaR) [31] is defined as

CVaR_{1−ɛ}(X) := min_β {β + (1/ɛ) E_P[(X − β)_+]}.

With the definitions above, we observe that set Z in (7) has a CVaR interpretation.

Corollary 1. Under Assumption (A1), set Z is equivalent to

Z = {x ∈ R^n : δ/ɛ + CVaR_{1−ɛ}(−f(x, ζ)) ≤ 0}, (8)

where f(·, ·) is defined in (5), and CVaR_{1−ɛ}(−f(x, ζ)) = min_γ {γ + (1/ɛ) E_{P_ζ}[(−f(x, ζ) − γ)_+]}.

Proof. First, we observe that the constraint in (7) directly implies γ > 0; thus the nonnegativity constraint on γ can be dropped, i.e., equivalently, we have

Z = {x ∈ R^n : δ/ɛ − γ + (1/(ɛN)) Σ_{j∈[N]} max{−f(x, ζ^j) + γ, 0} ≤ 0}.

Next, in the above formulation, let γ̄ := −γ and replace the existence of γ̄ by finding the best γ̄ such that the constraint still holds; we arrive at

Z = {x ∈ R^n : δ/ɛ + min_{γ̄} {γ̄ + (1/(ɛN)) Σ_{j∈[N]} max{−f(x, ζ^j) − γ̄, 0}} ≤ 0},

which is equivalent to (8). □

In the following sections, our derivations of exact reformulations are based upon Theorem 2, while the approximations mainly follow from the CVaR interpretation in Corollary 1.
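Since the reformulations above revolve around VaR and CVaR of a discrete (empirical) random variable, it may help to compute both on a small sample. The sketch below is ours, not the paper's: it assumes equal weights 1/N and uses the standard fact that β = VaR_{1−ɛ}(X) attains the minimum in the CVaR definition.

```python
import math

def var_cvar(samples, eps):
    """(1-eps)-VaR and (1-eps)-CVaR of an equally weighted sample.
    VaR is the smallest s with F(s) >= 1 - eps, i.e. the
    ceil((1-eps)*N)-th smallest point; CVaR is then evaluated via the
    Rockafellar-Uryasev formula with beta = VaR (a minimizer)."""
    xs = sorted(samples)
    n = len(xs)
    k = math.ceil((1 - eps) * n)          # need k of n points <= s
    var = xs[k - 1]
    cvar = var + sum(max(x - var, 0.0) for x in xs) / (eps * n)
    return var, cvar
```

For the uniform sample {1, 2, 3, 4} with ɛ = 0.25, the 0.75-VaR is 3 and the 0.75-CVaR is the mean of the worst 25% of outcomes, namely 4, matching the "expected shortfall" reading of CVaR.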

2.2 Outer and Inner Approximations

In this subsection, we introduce one outer approximation and three different inner approximations by exploring the exact reformulations in the previous section.

VaR Outer Approximation: Note from [31] that for any random variable X, we have

CVaR_{1−ɛ}(X) = VaR_{1−ɛ}(X) + (1/ɛ) E[(X − VaR_{1−ɛ}(X))_+] ≥ VaR_{1−ɛ}(X).

Therefore, in Corollary 1, if we replace CVaR_{1−ɛ}(·) by VaR_{1−ɛ}(·), then we obtain the following outer approximation of set Z.

Theorem 3. Under Assumption (A1), set Z is outer approximated by

Z_VaR = {x ∈ R^n : P_ζ{f(x, ζ) ≥ δ/ɛ} ≥ 1 − ɛ}. (9)

Proof. By the well-known result in [31], CVaR_{1−ɛ}(−f(x, ζ)) ≥ VaR_{1−ɛ}(−f(x, ζ)). Therefore, set Z is outer approximated by

Z_VaR = {x ∈ R^n : δ/ɛ + VaR_{1−ɛ}(−f(x, ζ)) ≤ 0},

which is equivalent to Z_VaR = {x ∈ R^n : P_ζ{f(x, ζ) ≥ δ/ɛ} ≥ 1 − ɛ}. □

Inner Approximation I - Robust Scenario Approximation: On the other hand, we notice that for any random variable X, we have CVaR_{1−ɛ}(X) ≤ CVaR_1(X) := ess sup(X). Thus, in Corollary 1, if we replace CVaR_{1−ɛ}(·) by ess sup(·), then we obtain the following inner approximation of set Z.

Theorem 4. Under Assumption (A1), set Z is inner approximated by

Z_S = {x ∈ R^n : f(x, ζ^j) ≥ δ/ɛ, ∀j ∈ [N]}. (10)

Proof. Since CVaR_{1−ɛ}(−f(x, ζ)) ≤ ess sup(−f(x, ζ)) and ζ is a discrete random vector, set Z can be inner approximated by

Z_S = {x ∈ R^n : P_ζ{f(x, ζ) ≥ δ/ɛ} = 1},

which is equivalent to (10). □

Note that set Z_S has a structure similar to the scenario approach to chance constrained problems [6, 8, 9], and indeed can be viewed as a robust scenario approach to chance constrained problems. We will discuss more on this fact in Section 3.2.

Inner Approximation II - Inner Chance Constrained Approximation: Next we propose a chance constrained inner approximation of the DRCCP set Z by constructing a feasible γ in (7).

Theorem 5. Under Assumption (A1) with ɛ ∈ (0, 1) and ɛN ∉ Z_+, set Z is inner approximated by

Z_I = {x ∈ R^n : P_ζ{f(x, ζ) ≥ δN/(ɛN − α)} ≥ 1 − α/N, 0 ≤ α ≤ ɛN}. (11)

Proof. For any x ∈ Z_I, we would like to show that x ∈ Z. Since x ∈ Z_I, there exists an α such that (x, α) satisfies the constraints in (11). Now let us define γ = δN/(ɛN − α) and the set

C = {j ∈ [N] : f(x, ζ^j) ≥ γ}.

By (11), we have |C| ≥ N − α. Hence,

(1/N) Σ_{j∈[N]} min{f(x, ζ^j) − γ, 0} = (1/N) Σ_{j∉C} (f(x, ζ^j) − γ) ≥ −((N − |C|)/N) γ ≥ −(α/N) γ = δ − ɛγ,

where the first inequality is due to f(x, ζ^j) ≥ 0 and the second inequality is due to |C| ≥ N − α. Thus (x, γ) satisfies (7), i.e., x ∈ Z. □

We remark that this result together with set Z_VaR shows that the DRCCP set Z can be inner and outer approximated by regular chance constraints with the empirical probability distribution P_ζ. We also note that (i) set Z_S is a special case of set Z_I obtained by letting α = 0; thus we must have Z_S ⊆ Z_I; and (ii) there are ⌊ɛN⌋ + 1 non-dominated α values, that is, we can restrict α ∈ {0, 1, ..., ⌊ɛN⌋}. Indeed, suppose that α ∈ (i − 1, i) for some integer i; then the feasible region expands if we decrease the value of α to i − 1. Therefore, to optimize over set Z_I, we can enumerate these ⌊ɛN⌋ + 1 values of α and choose the one which yields the smallest objective value. These two results are summarized below.

Corollary 2. Suppose that Assumption (A1) holds with ɛ ∈ (0, 1) and ɛN ∉ Z_+, and set Z_I is defined in (11). Then (i) Z_S ⊆ Z_I; and (ii) set Z_I is equivalent to

Z_I = {x ∈ R^n : P_ζ{f(x, ζ) ≥ δN/(ɛN − α)} ≥ 1 − α/N, α ∈ {0, 1, ..., ⌊ɛN⌋}}. (12)

Inner Approximation III - CVaR Approximation: Finally, we close this section by studying a well-known convex approximation of a chance constraint, which replaces the nonconvex chance constraint by a convex constraint defined through CVaR (cf. [8]). For a DRCCP, the resulting formulation is

Z_CVaR = {x ∈ R^n : sup_{P∈P} inf_β [−ɛβ + E_P[(max_{i∈[I]} (a(x)^T ξ_i − b_i(x)) + β)_+]] ≤ 0}. (13)

Set Z_CVaR is convex and is an inner approximation of set Z. The following results show a reformulation of set Z_CVaR. We would like to acknowledge that this result has been independently observed by a recent work in [20].
For the completeness of this paper, we present a proof with our notation as below.

Theorem 6. Set Z_CVaR ⊆ Z, and Z_CVaR is equivalent to

Z_CVaR = {x : −ɛβ + λδ − (1/N) Σ_{j∈[N]} inf_{ξ∈Ξ} [λ d(ξ, ζ^j) − (max_{i∈[I]} (a(x)^T ξ_i − b_i(x)) + β)_+] ≤ 0, (14a)
λ, β ≥ 0}. (14b)

Proof. Note that the Wasserstein ambiguity set P is weakly compact [5]; thus, according to Theorem 2.1 in [33], set Z_CVaR is equivalent to

Z_CVaR = {x : inf_β [−ɛβ + sup_{P∈P} E_P[(max_{i∈[I]} (a(x)^T ξ_i − b_i(x)) + β)_+]] ≤ 0}. (15)

Note that in (15) the infimum must be achieved. Indeed, we first note that for any β < 0 the inequality in (15) cannot be satisfied, since −ɛβ > 0 and the expectation is nonnegative; thus we must have β ≥ 0. On the other hand, we note that

−ɛβ + sup_{P∈P} E_P[(max_{i∈[I]} (a(x)^T ξ_i − b_i(x)) + β)_+] ≥ −ɛβ + E_{P_ζ}[(max_{i∈[I]} (a(x)^T ζ_i − b_i(x)) + β)_+],

where the inequality is due to P_ζ ∈ P. For any β > max{max_{i∈[I], j∈[N]} (b_i(x) − a(x)^T ζ^j_i), 0}, every term (max_{i∈[I]} (a(x)^T ζ^j_i − b_i(x)) + β)_+ equals max_{i∈[I]} (a(x)^T ζ^j_i − b_i(x)) + β, so the right-hand side of the above inequality equals (1 − ɛ)β + E_{P_ζ}[max_{i∈[I]} (a(x)^T ζ_i − b_i(x))], which tends to +∞ as β → ∞. Thus, the best β in (15) is bounded, i.e., Z_CVaR is equivalent to

Z_CVaR = {x : −ɛβ + sup_{P∈P} E_P[(max_{i∈[I]} (a(x)^T ξ_i − b_i(x)) + β)_+] ≤ 0, β ≥ 0}. (16)

By Theorem 1 in [16], the above formulation is further equal to

Z_CVaR = {x : min_{λ≥0} {−ɛβ + λδ − (1/N) Σ_{j∈[N]} inf_{ξ∈Ξ} [λ d(ξ, ζ^j) − (max_{i∈[I]} (a(x)^T ξ_i − b_i(x)) + β)_+]} ≤ 0, β ≥ 0},

which is equivalent to (14). □

3 DRCCP with Normed Vector Space (Ξ, d)

Note that the results in the previous section are quite general. In this section, we show that these results can be significantly simplified when (Ξ, d) is a normed vector space. In particular, we make the following assumption.

(A2) The support Ξ is an n × I-dimensional vector space and the distance metric is d(ξ, ζ) = ‖ξ − ζ‖.

3.1 Exact Mixed Integer Program Reformulation

In this subsection, we show that set Z is mixed integer representable under Assumptions (A1)-(A2). To begin with, we observe that under the additional Assumption (A2), the function f(·, ·) in Theorem 2 can be explicitly calculated.

Theorem 7. Under Assumptions (A1)-(A2), set Z is equivalent to

Z = {x ∈ R^n : δ − ɛγ ≤ (1/N) Σ_{j∈[N]} min{f(x, ζ^j) − γ, 0}, (17a)
γ ≥ 0}, (17b)

where

f(x, ζ) = min{ min_{i∈[I]\I(x)} max{b_i(x) − a(x)^T ζ_i, 0} / ‖a(x)‖_*, min_{i∈I(x)} χ_{{x : b_i(x)<0}}(x) }, (18)

I(x) = [I] if a(x) = 0 and I(x) = ∅ otherwise, and the characteristic function χ_R(x) = ∞ if x ∉ R and 0 otherwise.

Note that the result in Theorem 7 can be further simplified by reformulating set Z as a disjunction of a nonconvex set and a convex set.

Theorem 8. Suppose Assumptions (A1)-(A2) hold. Then Z = Z_1 ∪ Z_2, where

Z_1 = {x ∈ R^n : δν − ɛγ ≤ (1/N) Σ_{j∈[N]} z_j, (19a)
z_j + γ ≤ max{b_i(x) − a(x)^T ζ^j_i, 0}, ∀i ∈ [I], j ∈ [N], (19b)
z_j ≤ 0, ∀j ∈ [N], (19c)
‖a(x)‖_* ≤ ν, (19d)
ν > 0, γ ≥ 0}, (19e)

and

Z_2 = {x ∈ R^n : a(x) = 0, b_i(x) ≥ 0, ∀i ∈ [I]}. (20)

Proof. We need to show that Z_1 ∪ Z_2 ⊆ Z and Z ⊆ Z_1 ∪ Z_2.

(Z_1 ∪ Z_2 ⊆ Z) Given x ∈ Z_2, we have I(x) = [I], and thus f(x, ζ) defined in (18) is ∞. Hence, letting γ = δ/ɛ, the pair (x, γ) satisfies all the constraints in (17), i.e., x ∈ Z. Hence, Z_2 ⊆ Z.

Given x ∈ Z_1, there exists (γ, ν, z) such that (γ, ν, z, x) satisfies the constraints in (19). Suppose that I(x) = [I], i.e., a(x) = 0. Then for each i ∈ I(x), (19a) and (19b) imply that

δν − ɛγ ≤ (1/N) Σ_{j∈[N]} z_j ≤ max{b_i(x), 0} − γ,

which is equivalent to max{b_i(x), 0} ≥ δν + (1 − ɛ)γ > 0. That is, b_i(x) > 0. Thus, x ∈ Z_2 ⊆ Z.

Now we suppose that I(x) = ∅. For each j ∈ [N], (19b)-(19d) together with ν > 0 imply that

z_j / ν ≤ min{ min_{i∈[I]} max{b_i(x) − a(x)^T ζ^j_i, 0} / ν − γ/ν, 0 } ≤ min{ min_{i∈[I]} max{b_i(x) − a(x)^T ζ^j_i, 0} / ‖a(x)‖_* − γ/ν, 0 } = min{ f(x, ζ^j) − γ/ν, 0 },

where the second inequality is due to (19d). Then according to (19a), we have

δ − ɛ(γ/ν) ≤ (1/N) Σ_{j∈[N]} z_j / ν ≤ (1/N) Σ_{j∈[N]} min{ f(x, ζ^j) − γ/ν, 0 },

i.e., (x, γ/ν) satisfies the constraints in (17), hence x ∈ Z. Thus, Z_1 ⊆ Z.

(Z ⊆ Z_1 ∪ Z_2) Similarly, given x ∈ Z, there exists γ such that (γ, x) satisfies the constraints in (17). Suppose that a(x) = 0. Then we must have b_i(x) ≥ 0 for all i ∈ [I]; otherwise f(x, ζ^j) = 0 for all j ∈ [N], and (17a) would yield 0 < δ ≤ (ɛ − 1)γ ≤ 0, a contradiction since γ ≥ 0 and ɛ ∈ (0, 1). Hence, we must have x ∈ Z_2.

From now on, we assume that a(x) ≠ 0. Let us define γ̄ = γ‖a(x)‖_*, ν = ‖a(x)‖_*, and z_j = min{ min_{i∈[I]} max{b_i(x) − a(x)^T ζ^j_i, 0} − γ̄, 0 } for each j ∈ [N]. Clearly, (γ̄, ν, z, x) satisfies the constraints in (19), i.e., x ∈ Z_1. □

We remark that set Z_2 is usually trivial.

Remark 1. (i) If η = 1, then Z_2 = {x ∈ R^n : x = 0, b_i ≥ 0, ∀i ∈ [I]}; (ii) if η = 0, then Z_2 = ∅.

On the other hand, set Z_1 can be formulated as a mixed integer set when it is bounded, i.e., we can introduce binary variables to represent constraints (19b).

Theorem 9. Suppose there exists an M ∈ R^{I×N}_+ such that max_{x∈Z_1} |b_i(x) − a(x)^T ζ^j_i| ≤ M_ij for all i ∈ [I], j ∈ [N]. Then Z_1 is mixed integer representable as below:

Z_1 = {x ∈ R^n : δν − ɛγ ≤ (1/N) Σ_{j∈[N]} z_j, (21a)
z_j + γ ≤ s_ij, ∀i ∈ [I], j ∈ [N], (21b)
s_ij ≥ b_i(x) − a(x)^T ζ^j_i, ∀i ∈ [I], j ∈ [N], (21c)
s_ij ≤ M_ij y_ij, s_ij ≤ b_i(x) − a(x)^T ζ^j_i + M_ij (1 − y_ij), ∀i ∈ [I], j ∈ [N], (21d)
‖a(x)‖_* ≤ ν, ν > 0, γ ≥ 0, (21e)
s_ij ≥ 0, z_j ≤ 0, y_ij ∈ {0, 1}, ∀i ∈ [I], j ∈ [N]}. (21f)

Proof. To prove that Z_1 is equivalent to the right-hand side of (21), it is sufficient to show that for each i ∈ [I], j ∈ [N], the constraints enforce s_ij = max{b_i(x) − a(x)^T ζ^j_i, 0}. There are three cases:

Case 1. If b_i(x) − a(x)^T ζ^j_i < 0, then we must have y_ij = 0 (otherwise s_ij ≤ b_i(x) − a(x)^T ζ^j_i < 0, contradicting s_ij ≥ 0). Hence, s_ij = 0 = max{b_i(x) − a(x)^T ζ^j_i, 0}.

Case 2. If b_i(x) − a(x)^T ζ^j_i = 0, then for any y_ij ∈ {0, 1}, we have s_ij = 0 = max{b_i(x) − a(x)^T ζ^j_i, 0}.

Case 3. If b_i(x) − a(x)^T ζ^j_i > 0, then we must have y_ij = 1 (otherwise s_ij ≤ M_ij y_ij = 0, contradicting s_ij ≥ b_i(x) − a(x)^T ζ^j_i > 0). Thus, s_ij = b_i(x) − a(x)^T ζ^j_i = max{b_i(x) − a(x)^T ζ^j_i, 0}. □
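The role of the big-M constraints in Theorem 9 is to force s_ij = max{b_i(x) − a(x)^T ζ^j_i, 0} once the binary y_ij is fixed. The toy check below (ours, with hypothetical names) enumerates y ∈ {0, 1} for a fixed residual t = b_i(x) − a(x)^T ζ^j_i and confirms that the feasible s-values collapse to the single point max{t, 0}, assuming |t| ≤ M:

```python
def bigm_forced_value(t, M):
    """Feasible s-intervals of the big-M system from Theorem 9:
        s >= t,   0 <= s <= M*y,   s <= t + M*(1 - y),   y in {0, 1},
    for a fixed residual t = b_i(x) - a(x)^T zeta_i^j with |t| <= M.
    The union of feasible intervals collapses to the point max(t, 0),
    which is exactly how the formulation linearizes the max term."""
    feasible = set()
    for y in (0, 1):
        lo = max(t, 0.0)                   # from s >= t and s >= 0
        hi = min(M * y, t + M * (1 - y))   # from the two big-M caps
        if lo <= hi:
            feasible.add((lo, hi))
    return feasible
```

Running it on a positive and a negative residual reproduces the three-case argument of the proof: a positive t forces y = 1 and s = t, while a negative t forces y = 0 and s = 0.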

Note that there are various methods in the literature [30, 34] to obtain big-M coefficients; in the numerical study section, we derive the big-M coefficients by inspection. Formulation (21) contains I × N binary variables as well as big-M coefficients, making it very challenging to solve. In the next section, we will show that set Z_1 can be approximated to arbitrary accuracy by a big-M free formulation, and a branch and cut algorithm can be used to solve the approximated formulation.

DRCCP with Right-hand Uncertainty: In this special case, we consider DRCCP with right-hand uncertainty, i.e., η = 0 and a(x) = e. We first note that by Theorem 7, set Z with a(x) = e yields a more compact formulation.

Theorem 10. If η = 0 and a(x) = e, then under Assumptions (A1)-(A2), set Z is equivalent to the following mathematical program:

Z = {x ∈ R^n : δ − ɛγ ≤ (1/N) Σ_{j∈[N]} z_j, (22a)
(z_j + γ)‖e‖_* ≤ max{b_i(x) − e^T ζ^j_i, 0}, ∀j ∈ [N], i ∈ [I], (22b)
z_j ≤ 0, ∀j ∈ [N], γ ≥ 0}. (22c)

Proof. The result directly follows from Theorem 7. □

To reformulate set Z in (22) as a mixed integer program, we observe that, without loss of generality, e^T ζ^j_i is nonnegative for all j ∈ [N], i ∈ [I]. Indeed, suppose that L := min_{j∈[N], i∈[I]} e^T ζ^j_i < 0; then we can redefine e^T ζ^j_i := e^T ζ^j_i − L and b_i(x) := b_i(x) − L for all j ∈ [N], i ∈ [I].

Theorem 11. Suppose that η = 0, a(x) = e, and e^T ζ^j_i ≥ 0 for all i ∈ [I], j ∈ [N], and that there exists an M ∈ R^{N×I}_+ such that max_{x∈Z} b_i(x) − e^T ζ^j_i ≤ M_ij for all j ∈ [N], i ∈ [I]. Then set Z is

Z = {x ∈ R^n : δ − ɛγ ≤ (1/N) Σ_{j∈[N]} z_j, (23a)
(z_j + γ)‖e‖_* ≤ s_ij, ∀j ∈ [N], i ∈ [I], (23b)
s_ij ≤ b_i(x) − (e^T ζ^j_i) y_ij, ∀j ∈ [N], i ∈ [I], (23c)
s_ij ≤ M_ij y_ij, ∀j ∈ [N], i ∈ [I], (23d)
s_ij ≥ b_i(x) − e^T ζ^j_i, ∀j ∈ [N], i ∈ [I], (23e)
γ ≥ 0, z_j ≤ 0, y_ij ∈ {0, 1}, ∀j ∈ [N], i ∈ [I]}. (23f)

Proof. Let Ẑ denote the set on the right-hand side of (23); we would like to show that Ẑ = Z.

(Z ⊆ Ẑ) Given x ∈ Z, there exists (γ, z) such that (γ, z, x) satisfies constraints (22). Now let s_ij = max{b_i(x) − e^T ζ^j_i, 0}, and let y_ij = 1 if b_i(x) ≥ e^T ζ^j_i and y_ij = 0 otherwise, for each j ∈ [N], i ∈ [I]. We only need to show that (γ, s, z, y, x) satisfies the constraints in (23). Clearly, constraints (23a), (23b), (23d), (23e), and (23f) are satisfied, and for each j ∈ [N], i ∈ [I] such that y_ij = 1, constraint (23c) is also satisfied since s_ij = max{b_i(x) − e^T ζ^j_i, 0} = b_i(x) − e^T ζ^j_i. It remains to show that for each j ∈ [N], i ∈ [I] such that y_ij = 0, constraint (23c) is also satisfied, i.e., s_ij = 0 ≤ b_i(x).

Suppose, for contradiction, that there exists i_0 ∈ [I] such that b_{i_0}(x) < 0. By assumption, e^T ζ^j_{i_0} ≥ 0 for all j ∈ [N]; thus max{b_{i_0}(x) − e^T ζ^j_{i_0}, 0} = 0 for all j ∈ [N]. According to constraints (22b), we have z_j + γ ≤ 0, i.e., z_j ≤ −γ for each j ∈ [N]. Substituting this inequality into constraint (22a), we obtain δ − ɛγ ≤ −γ, which implies that γ ≤ −δ/(1 − ɛ) < 0, a contradiction with γ ≥ 0. Hence b_i(x) ≥ 0 for all i ∈ [I], and s_ij = 0 ≤ b_i(x).

(Ẑ ⊆ Z) Given x ∈ Ẑ, there exists (γ, s, z, y) such that (γ, s, z, y, x) satisfies the constraints in (23). To prove that (γ, z, x) satisfies constraints (22), we only need to show that s_ij ≤ max{b_i(x) − e^T ζ^j_i, 0} for each j ∈ [N], i ∈ [I]. Indeed, if y_ij = 1, we have s_ij ≤ b_i(x) − e^T ζ^j_i ≤ max{b_i(x) − e^T ζ^j_i, 0}; otherwise, we have s_ij ≤ M_ij y_ij = 0 ≤ max{b_i(x) − e^T ζ^j_i, 0}. □

We finally remark that formulation (23) can be stronger than (21), since we only need to compute the largest upper bound of b_i(x) − e^T ζ^j_i and define it as M_ij, rather than the largest absolute value of b_i(x) − e^T ζ^j_i.

3.2 Inner and Outer Approximations

In the previous subsection, we developed exact mixed integer reformulations of set Z under various settings. However, these reformulations might be difficult to solve: in particular, when the number of empirical data points becomes large (i.e., N is large), there are a large number (i.e., I × N) of binary variables in the reformulations. In this subsection, we investigate compact formulations of the inner and outer approximations of set Z proposed in Section 2, which involve fewer or even zero binary variables.

VaR Outer Approximation: We first study the reformulation of the outer approximation Z_VaR.

Theorem 12. Under Assumptions (A1)-(A2), set Z is outer approximated by

Z_VaR = {x ∈ R^n : P_ζ{(δ/ɛ)‖a(x)‖_* + a(x)^T ζ_i ≤ b_i(x), ∀i ∈ [I]} ≥ 1 − ɛ}. (24)

Proof. By Theorem 3 with

f(x, ζ) = min{ min_{i∈[I]\I(x)} max{b_i(x) − a(x)^T ζ_i, 0} / ‖a(x)‖_*, min_{i∈I(x)} χ_{{x : b_i(x)<0}}(x) },

where I(x) = [I] if a(x) = 0 and I(x) = ∅ otherwise, we have

Z_VaR = {x ∈ R^n : P_ζ{ max{b_i(x) − a(x)^T ζ_i, 0}/‖a(x)‖_* ≥ δ/ɛ, ∀i ∈ [I]\I(x), χ_{{x : b_i(x)<0}}(x) ≥ δ/ɛ, ∀i ∈ I(x) } ≥ 1 − ɛ},

which is equivalent to (24). □
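Membership in the set Z_VaR of (24) is a regular empirical chance constraint: count the scenarios for which all dual-norm-penalized inequalities hold and compare the fraction against 1 − ɛ. A small sketch (our illustration, with hypothetical argument names), assuming the Euclidean norm, which is self-dual so that ‖·‖_* = ‖·‖_2:

```python
import math

def in_var_outer_set(a_x, b_vals, scenarios, delta, eps):
    """Check membership in the VaR outer approximation of (24),
    assuming the Euclidean norm (self-dual).  For each empirical
    scenario j, test (delta/eps)*||a(x)|| + a(x)^T zeta_i^j <= b_i(x)
    for every i, then require the fraction of passing scenarios to be
    at least 1 - eps.
    a_x: vector a(x); b_vals[i] = b_i(x); scenarios[j][i] = zeta_i^j."""
    norm = math.sqrt(sum(v * v for v in a_x))
    pen = delta / eps * norm
    ok = 0
    for scen in scenarios:
        if all(pen + sum(a * z for a, z in zip(a_x, scen[i])) <= b_vals[i] + 1e-12
               for i in range(len(b_vals))):
            ok += 1
    return ok >= (1 - eps) * len(scenarios)
```

Replacing the acceptance rule by "all scenarios must pass" turns the same computation into the robust scenario approximation Z_S of Theorem 13 below, and tightening the penalty to (δN/(ɛN − α))‖a(x)‖ with the relaxed fraction 1 − α/N gives the α-indexed sets of Theorem 14.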

Note that in (24), we arrive at a regular chance constrained program with discrete random vector ζ, which can be reformulated as a mixed integer program with big-M coefficients (cf. [1, 5]). A particular interpretation of formulation (24) is that, in order to enforce robustness, we further penalize the left-hand side of the uncertain constraints by the dual-norm term (δ/ɛ)‖a(x)‖_*.

Inner Approximation I - Robust Scenario Approximation: We next consider the robust scenario approximation Z_S.

Theorem 13. Under Assumptions (A1)-(A2), set Z is inner approximated by

Z_S = {x ∈ R^n : (δ/ɛ)‖a(x)‖_* + a(x)^T ζ^j_i ≤ b_i(x), ∀j ∈ [N], i ∈ [I]}. (25)

Proof. The proof is similar to that of Theorem 12, thus is omitted. □

We remark that set Z_S in (25) is very similar to the scenario approach to chance constrained programs [6], i.e., generate N i.i.d. samples {ζ^j}_{j∈[N]} and enforce the constraints corresponding to each sample. The difference is that in set Z_S, we also add a penalty (δ/ɛ)‖a(x)‖_* to each of the sampled constraints.

Inner Approximation II - Inner Chance Constraint Approximation: The second inner approximation set Z_I is nonconvex, and according to Theorem 5, we can formulate it as below.

Theorem 14. Suppose that Assumptions (A1)-(A2) hold with ɛ ∈ (0, 1) and ɛN ∉ Z_+. Then set Z is inner approximated by

Z_I = {x ∈ R^n : P_ζ{(δN/(ɛN − α))‖a(x)‖_* + a(x)^T ζ_i ≤ b_i(x), ∀i ∈ [I]} ≥ 1 − α/N, 0 ≤ α ≤ ɛN}. (26)

Proof. The proof is similar to that of Theorem 12, hence is omitted. □

Note that for any given α, set Z_I is mixed integer representable with big-M coefficients. Since, by Corollary 2, there are only ⌊ɛN⌋ + 1 effective values of α to choose from, Z_I can be formulated as a disjunction of ⌊ɛN⌋ + 1 mixed integer sets.

Inner Approximation III - CVaR Approximation: Next, we study the CVaR approximation.

Theorem 15. Under Assumptions (A1)-(A2), set Z is inner approximated by

Z_CVaR = {x ∈ R^n : δν − ɛγ ≤ (1/N) Σ_{j∈[N]} z_j, (27a)
z_j + γ ≤ b_i(x) − a(x)^T ζ^j_i, ∀j ∈ [N], i ∈ [I], (27b)
z_j ≤ 0, ∀j ∈ [N], (27c)
‖a(x)‖_* ≤ ν, (27d)
ν ≥ 0, γ ≥ 0}. (27e)

Proof. By Theorem 6, set Z_CVaR is equal to

Z_CVaR = {x ∈ R^n : −ɛβ + λδ − (1/N) Σ_{j∈[N]} inf_{ξ∈Ξ} [λ‖ξ − ζ^j‖ − (max_{i∈[I]} (a(x)^T ξ_i − b_i(x)) + β)_+] ≤ 0, λ, β ≥ 0},

which is further equivalent to

Z_CVaR = {x ∈ R^n : −ɛβ + λδ − (1/N) Σ_{j∈[N]} min{ min_{i∈[I]} (b_i(x) − a(x)^T ζ^j_i) − β, 0 } ≤ 0, ‖a(x)‖_* ≤ λ, λ, β ≥ 0}.

In the above formulation, let ν = λ and γ = β, and introduce z_j = min{ min_{i∈[I]} (b_i(x) − a(x)^T ζ^j_i) − γ, 0 } for each j ∈ [N] to linearize the inner minimum. Thus, we arrive at (27). □

We remark that the reformulation of set Z_CVaR in (27) can also be derived directly from formulation (17). Indeed, since max{b_i(x) − a(x)^T ζ_i, 0} ≥ b_i(x) − a(x)^T ζ_i, by replacing max{b_i(x) − a(x)^T ζ_i, 0} with b_i(x) − a(x)^T ζ_i, the function f(x, ζ) is lower bounded by

f(x, ζ) ≥ f̂(x, ζ) = min{ min_{i∈[I]\I(x)} (b_i(x) − a(x)^T ζ_i) / ‖a(x)‖_*, min_{i∈I(x)} χ_{{x : b_i(x)<0}}(x) }.

Thus, set Z can be inner approximated by the set

{x ∈ R^n : δ − ɛγ ≤ (1/N) Σ_{j∈[N]} min{f̂(x, ζ^j) − γ, 0}, γ ≥ 0},

which is exactly equivalent to Z_CVaR after introducing additional variables to linearize the nonlinear functions min{f̂(x, ζ^j) − γ, 0}. From this observation, we note that Z_CVaR = Z if ɛN ≤ 1.

Corollary 3. Suppose that Assumptions (A1)-(A2) hold and ɛ ∈ (0, 1/N]. Then Z = Z_CVaR.

Proof. We note that Z_CVaR ⊆ Z = Z_1 ∪ Z_2, where Z_1 and Z_2 are defined in (19) and (20), respectively. First, Z_2 ⊆ Z_CVaR. Indeed, suppose that x ∈ Z_2, i.e., a(x) = 0 and b_i(x) ≥ 0 for each i ∈ [I]; then let ν = 0, γ = 0, and z_j = 0 for each j ∈ [N]. Clearly, (ν, γ, z, x) satisfies the constraints in (27), hence x ∈ Z_CVaR. Thus, it is sufficient to show that Z_1 ⊆ Z_CVaR. Given x ∈ Z_1, there exists (ν, γ, z) such that (ν, γ, z, x) satisfies the constraints in (19). We only need to show that z_j + γ > 0 for each j ∈ [N]. Suppose that there exists j_0 ∈ [N] such that z_{j_0} + γ ≤ 0. Then according to (19a), we have

0 < δν ≤ ɛγ + (1/N) Σ_{j∈[N]\{j_0}} z_j + (1/N) z_{j_0} ≤ ɛγ − (1/N)γ ≤ 0,

where the second inequality is due to z_j ≤ 0 and z_{j_0} ≤ −γ, and the third inequality is due to ɛ ≤ 1/N; a contradiction. Therefore, in (19b), we must have max{b_i(x) − a(x)^T ζ^j_i, 0} = b_i(x) − a(x)^T ζ^j_i for each i ∈ [I], j ∈ [N]. Hence, (ν, γ, z, x) satisfies the constraints in (27), i.e., x ∈ Z_CVaR. □

Formulation Comparisons: Finally, we close this section by comparing sets Z_S and Z_CVaR. Indeed, we can show that Z_S ⊆ Z_CVaR, i.e., set Z_S is at least as conservative as set Z_CVaR.

Theorem 16. Let Z_S and Z_CVaR be defined in (25) and (27), respectively. Then Z_S ⊆ Z_CVaR.

Proof. Given x ∈ Z_S, we only need to show that x ∈ Z_CVaR. Indeed, let us consider ν = ‖a(x)‖_*, γ = (δ/ɛ)‖a(x)‖_*, and z_j = 0 for all j ∈ [N]; then we see that (ν, γ, z, x) satisfies the constraints in (27), i.e., x ∈ Z_CVaR. □

We illustrate sets Z, Z_VaR, Z_CVaR, Z_S, Z_I with the following example.

Example 1. Suppose N = 3, n = 2, I = 2, δ = 1/6, ɛ = 2/3, and ζ^1_1 = (1, 0), ζ^1_2 = (0, 3), ζ^2_1 = (3, 0), ζ^2_2 = (0, 1), ζ^3_1 = (2, 0), ζ^3_2 = (0, 2), with b_1(x) = x_1, b_2(x) = x_2, and ‖a(x)‖_* = 1. Direct computation yields closed-form descriptions of Z, Z_VaR, Z_CVaR, Z_S, and Z_I as intersections and unions of halfspaces, from which one can verify Z_S ⊆ Z_CVaR ⊆ Z_I ⊆ Z ⊆ Z_VaR (see Figure 1 for an illustration).

Finally, the inclusion relationships among sets Z, Z_VaR, Z_S, Z_I, Z_CVaR are illustrated in Figure 2, and their reformulations are summarized in Table 1.

Table 1: Summary of formulation results in Section 3.

  Set Z          Set Z_VaR      Set Z_S       Set Z_I        Set Z_CVaR
  Mixed-integer  Mixed-integer  Convex        Mixed-integer  Convex
  Theorem 9      Theorem 12     Theorem 13    Theorem 14     Theorem 15

4 DRCCP with Pure Binary Decision Variables

In this section, we study DRCCP with pure binary decision variables x ∈ {0, 1}^n; i.e., in addition to Assumptions (A1)-(A2), we further assume that

(A3) the set S is binary, that is, S ⊆ {0, 1}^n.

Indeed, we remark that if S is a bounded mixed integer set, we can introduce O(n log(n)) additional binary variables to approximate set S with arbitrary accuracy via binary approximation of the continuous variables (cf. [4]). For binary DRCCP, we will show that the reformulations in the previous section can be improved.
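Section 4.1 below reviews submodularity, the key structural property used for binary DRCCP. For a set function on {0, 1}^n, submodularity is the lattice inequality f(x ∨ y) + f(x ∧ y) ≤ f(x) + f(y) for all binary x, y; small instances can be verified by brute force. A minimal checker (ours, for illustration only):

```python
from itertools import product

def is_submodular(f, n):
    """Brute-force submodularity check over the binary hypercube:
    f(x OR y) + f(x AND y) <= f(x) + f(y) for all x, y in {0,1}^n.
    Exponential in n, so only suitable for tiny sanity checks."""
    pts = list(product((0, 1), repeat=n))
    for x in pts:
        for y in pts:
            join = tuple(max(a, b) for a, b in zip(x, y))  # x OR y
            meet = tuple(min(a, b) for a, b in zip(x, y))  # x AND y
            if f(join) + f(meet) > f(x) + f(y) + 1e-9:
                return False
    return True
```

For example, the coverage-type function min{x_1 + x_2, 1} passes the check (it is a concave function of a nonnegative sum), while the product x_1 x_2 fails it.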

Figure 1: Illustration of Example 1. Figure 2: Summary of formulation comparisons.
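The CVaR-based characterization above can also be checked computationally: for a fixed decision, the constraint involves a single scalar γ, and its right-hand side is concave and piecewise linear in γ. The following sketch (a hypothetical helper, not code from the paper; it assumes the L2 norm and the scalar form δ‖a(x)‖ ≤ εγ + (1/N) Σ_j min{f_j − γ, 0} with f_j = min_i (b_i(x) − a(x)^T ζ_i^j)) tests membership by evaluating the right-hand side at its breakpoints.

```python
import numpy as np

def cvar_feasible(a, b, zetas, delta, eps):
    """Does there exist gamma >= 0 with
    delta*||a||_2 <= eps*gamma + (1/N)*sum_j min{f_j - gamma, 0}?
    The right-hand side is concave piecewise linear in gamma, so its
    maximum over gamma >= 0 is attained at 0 or at a breakpoint f_j >= 0."""
    # zetas has shape (N, I, n); a has shape (n,); b has shape (I,)
    f = np.min(b[None, :] - zetas @ a, axis=1)        # f_j for each sample j
    candidates = np.concatenate(([0.0], f[f >= 0]))   # candidate maximizers
    best = max(eps * g + np.minimum(f - g, 0.0).mean() for g in candidates)
    return bool(delta * np.linalg.norm(a) <= best + 1e-12)
```

Feasibility then reduces to whether δ‖a(x)‖ clears the maximized right-hand side, which makes the role of the Wasserstein radius δ in shrinking Z_CVaR directly visible.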

4.1 Polyhedral Results of Submodular Functions: A Review

Our main derivation of stronger formulations is based upon polyhedral results for submodular functions, which we briefly review in this subsection. We begin with the following lemmas on submodular functions.

Lemma 1. Given d_1 ∈ ℝ^n_+ and d_2, d_3 ∈ ℝ, the function f(x) = max{ d_1^T x + d_2, d_3 } is submodular over the binary hypercube.

Proof. For simplicity, given a set T ⊆ [n], we define a binary vector e_T ∈ {0, 1}^n such that (e_T)_l = 1 if l ∈ T and (e_T)_l = 0 if l ∈ [n] \ T. According to the definition of a submodular function [12], we only need to compare f(e_{T_1 ∪ {t}}) − f(e_{T_1}) and f(e_{T_2 ∪ {t}}) − f(e_{T_2}) for any T_1 ⊆ T_2 and t ∈ [n] \ T_2. There are three cases:

Case 1. If Σ_{i∈T_1} d_{1i} + d_2 ≥ d_3, then, since d_1 ∈ ℝ^n_+, we must have f(e_{T_1 ∪ {t}}) − f(e_{T_1}) = f(e_{T_2 ∪ {t}}) − f(e_{T_2}) = d_{1t}.

Case 2. If Σ_{i∈T_1} d_{1i} + d_2 < d_3 but Σ_{i∈T_2} d_{1i} + d_2 ≥ d_3, then we must have f(e_{T_1 ∪ {t}}) − f(e_{T_1}) = 0 ≤ f(e_{T_2 ∪ {t}}) − f(e_{T_2}) = d_{1t}, where the inequality is due to d_1 ∈ ℝ^n_+.

Case 3. If Σ_{i∈T_2} d_{1i} + d_2 < d_3, then, since d_1 ∈ ℝ^n_+, we must have f(e_{T_1 ∪ {t}}) − f(e_{T_1}) = f(e_{T_2 ∪ {t}}) − f(e_{T_2}) = 0. ∎

Lemma 2. Given q ≥ 1, the function f(x) = ‖x‖_q is submodular over the binary hypercube.

Proof. This is because f(x) = ‖x‖_q = ( Σ_{l∈[n]} x_l )^{1/q} over binary x, and g(e^T x) is a submodular function whenever g(·) is a concave function (cf. [40]). ∎

Next, we introduce polyhedral properties of submodular functions. For any given submodular function f(x) with x ∈ {0, 1}^n, let us denote by Π_f its epigraph, i.e., Π_f = { (x, φ) : φ ≥ f(x), x ∈ {0, 1}^n }. Then the convex hull of Π_f is characterized by the system of extended polymatroid inequalities (EPI) [2, 40], i.e.,

conv(Π_f) = { (x, φ) : f(0) + Σ_{l∈[n]} ρ_{σ_l} x_{σ_l} ≤ φ, ∀σ ∈ Ω, x ∈ [0, 1]^n },   (28)

where Ω denotes the collection of all permutations of set [n] and ρ_{σ_l} = f(e_{A^σ_l}) − f(e_{A^σ_{l−1}}) for each l ∈ [n], with A^σ_0 = ∅, A^σ_l = {σ_1, ..., σ_l}, and (e_T)_τ = 1 if τ ∈ T and 0 if τ ∈ [n] \ T. In addition, although there are n! inequalities in (28), these inequalities can be easily separated by a greedy procedure.
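A minimal sketch of the greedy separation just described (the set function f is assumed to be given as a Python callable, and `separate_epi` is an illustrative name): sort the coordinates of the candidate point in descending order, accumulate the marginal values ρ along the induced chain, and test the resulting single inequality.

```python
import numpy as np

def separate_epi(f, xhat, phihat, n):
    """Greedy separation for the extended polymatroid inequalities (EPI).
    f: set function on subsets of {0,...,n-1}; (xhat, phihat): point to test.
    Returns the coefficient vector rho of the candidate EPI and whether
    the inequality f(empty) + rho^T xhat <= phihat is violated."""
    sigma = np.argsort(-np.asarray(xhat, dtype=float))  # descending order
    rho = np.zeros(n)
    prev, chain = f(set()), set()
    for l in sigma:
        chain.add(int(l))
        cur = f(chain)           # f must not retain the (mutating) set
        rho[l] = cur - prev      # marginal value along the chain
        prev = cur
    lhs = f(set()) + rho @ np.asarray(xhat, dtype=float)
    return rho, lhs > phihat + 1e-9
```

Beyond the sort, the procedure needs only n evaluations of f, matching the O(n log n) separation cost stated above (up to the cost of evaluating f).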

Lemma 3 ([2, 40]). Suppose (x̂, φ̂) ∉ conv(Π_f), and let σ ∈ Ω be a permutation of [n] such that x̂_{σ_1} ≥ ... ≥ x̂_{σ_n}. Then (x̂, φ̂) must violate the constraint f(0) + Σ_{l∈[n]} ρ_{σ_l} x̂_{σ_l} ≤ φ̂.

From Lemma 3, we see that to separate a point (x̂, φ̂) from conv(Π_f), we only need to sort the coordinates of x̂ in descending order, i.e., x̂_{σ_1} ≥ ... ≥ x̂_{σ_n}; then (x̂, φ̂) can be separated from conv(Π_f) by the constraint f(0) + Σ_{l∈[n]} ρ_{σ_l} x̂_{σ_l} ≤ φ̂. The time complexity of this separation procedure is O(n log n).

4.2 Reformulating a Binary DRCCP by Submodular Knapsack Constraints: Big-M Free

In this section, we will replace the nonlinear constraints defining the feasible region of a binary DRCCP (i.e., set S ∩ Z) by submodular (upper bound knapsack) constraints. These constraints can be equivalently described by the system of EPI in (28); therefore, we obtain a big-M free mixed integer representation of set S ∩ Z.

First, we introduce n binary variables w complementing x, i.e., w_l + x_l = 1 for each l ∈ [n]. With these n additional variables, we can reformulate the function b_i(x) − a(x)^T ζ_i^j as

b_i(x) − a(x)^T ζ_i^j = r_ij^T x + t_ij^T w + u_ij   (29)

for each i ∈ [I], j ∈ [N], with r_ij ∈ ℝ^n_+ and t_ij ∈ ℝ^n_+. Indeed, since a(x) = ηx + (1 − η)e and b_i(x) = B_i^T x + b_i, in (29) we can choose

r_ijl = B_il I(B_il > 0) − η ζ_il^j I(ζ_il^j < 0),
t_ijl = −B_il I(B_il < 0) + η ζ_il^j I(ζ_il^j > 0),
u_ij = b_i − (1 − η) e^T ζ_i^j + Σ_{τ∈[n]} ( B_iτ I(B_iτ < 0) − η ζ_iτ^j I(ζ_iτ^j > 0) ),

for each l ∈ [n], i ∈ [I], j ∈ [N]. Thus, from the above discussion, we can formulate S ∩ Z_1 (note that Z = Z_1 ∪ Z_2 according to Theorem 8) as the following mixed integer set with submodular knapsack constraints.

Theorem 17. Suppose that Assumptions (A1)-(A3) hold. Then S ∩ Z = (S ∩ Z_1) ∪ (S ∩ Z_2), where

S ∩ Z_1 = { x ∈ S :
  δν ≤ εγ + (1/N) Σ_{j∈[N]} z_j,   (30a)
  max{ r_ij^T x + t_ij^T w + u_ij, 0 } ≥ z_j + γ, i ∈ [I], j ∈ [N],   (30b)
  z_j ≤ 0, j ∈ [N],   (30c)
  ‖η x + (1 − η) e‖_* ≤ ν,   (30d)
  w_l + x_l = 1, l ∈ [n],   (30e)
  ν ≥ 1, γ ≥ 0,   (30f)
  w ∈ {0, 1}^n }   (30g)

and

S ∩ Z_2 = { x ∈ S : a(x) = 0, b_i(x) ≥ 0, i ∈ [I] }.   (31)

Proof.
From the discussion above and the fact that a(x) = ηx + (1 − η)e with η ∈ {0, 1}, constraints (19b) and (19d) are equivalent to (30b) and (30d), respectively. It remains to show that the constraint ν ≥ 1 is valid for set S ∩ Z_1.

If η = 0, then ‖ηx + (1 − η)e‖_* = ‖e‖_* ≥ 1, and we are done. Now suppose that η = 1. We note that if x = 0, then the constraints in (19) imply that b_i(x) > 0 for each i ∈ [I]. Thus, if x = 0, then x ∈ Z_2. Therefore, without loss of generality, we can assume that x ≠ 0 in set Z_1. Note that S ∩ Z_1 ⊆ {0, 1}^n; therefore, x ≠ 0 implies ‖x‖_* ≥ 1, and thus ν ≥ ‖ηx + (1 − η)e‖_* = ‖x‖_* ≥ 1. ∎

From the proof of Theorem 17, we note that if η = 1 and b_i ≥ δ/ε for each i ∈ [I], then we have Z_2 ⊆ Z_1.

Corollary 4. Suppose that Assumptions (A1)-(A3) hold, η = 1, and b_i ≥ δ/ε for each i ∈ [I]. Then S ∩ Z = S ∩ Z_1.

Proof. We only need to show that 0 ∈ S ∩ Z_1 whenever 0 ∈ S ∩ Z_2. Suppose x = 0, i.e., w = e. Let us set ν = 1, γ = δ/ε, and z = 0. Then it is easy to see that (x, w, z, γ, ν) satisfies the constraints in (30), i.e., 0 ∈ S ∩ Z_1. ∎

We note that the left-hand sides of constraints (30b) and (30d) are submodular functions of the binary variables according to Lemma 1 and Lemma 2; thus, we can equivalently replace these constraints with the convex hulls of the epigraphs of their associated submodular functions. Thus,

Corollary 5. Suppose that Assumptions (A1)-(A3) hold. Then

S ∩ Z_1 = { x ∈ S :
  δν ≤ εγ + (1/N) Σ_{j∈[N]} z_j,   (32a)
  (x, w, z_j + γ) ∈ conv(Π_ij), i ∈ [I], j ∈ [N],   (32b)
  z_j ≤ 0, j ∈ [N],   (32c)
  (x, ν) ∈ conv(Π_0),   (32d)
  w_l + x_l = 1, l ∈ [n],   (32e)
  ν ≥ 1, γ ≥ 0, w ∈ {0, 1}^n },   (32f)

where

Π_ij = { (x, w, φ) : max{ r_ij^T x + t_ij^T w + u_ij, 0 } ≥ φ, x, w ∈ {0, 1}^n }, i ∈ [I], j ∈ [N],   (33a)
Π_0 = { (x, φ) : ‖ηx + (1 − η)e‖_* ≤ φ, x ∈ {0, 1}^n },   (33b)

and conv(Π_ij) for i ∈ [I], j ∈ [N], and conv(Π_0) can be described by the system of EPI in (28).

Note that the optimization problem min_{x ∈ S ∩ Z_1} c^T x can be solved by a branch and cut algorithm. In particular, at each branch and bound node, denoted (x̂, ŵ, ẑ, γ̂, ν̂), there might be too many (i.e., NI + 1) valid inequalities to add, since in (32b) and (32d) there are NI + 1 convex hulls of epigraphs (i.e., conv(Π_ij) for i ∈ [I], j ∈ [N], and conv(Π_0)) to separate from.
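The complement-variable decomposition b_i(x) − a(x)^T ζ_i^j = r_ij^T x + t_ij^T w + u_ij with w = e − x and nonnegative r_ij, t_ij that underlies the sets Π_ij can be sanity-checked numerically; the data below are arbitrary illustrative values for a single row (i, j).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.normal(size=n)       # B_i for one constraint i (illustrative)
b = 3.0                      # b_i
zeta = rng.normal(size=n)    # one sample zeta_i^j
x = rng.integers(0, 2, size=n).astype(float)
w = 1.0 - x                  # complementing binaries

for eta in (0, 1):
    # Nonnegative coefficients of the decomposition
    r = B * (B > 0) - eta * zeta * (zeta < 0)
    t = -B * (B < 0) + eta * zeta * (zeta > 0)
    u = b - (1 - eta) * zeta.sum() + (B * (B < 0) - eta * zeta * (zeta > 0)).sum()
    # b_i(x) - a(x)^T zeta with a(x) = eta*x + (1 - eta)*e
    lhs = (B @ x + b) - (eta * x + (1 - eta)) @ zeta
    assert (r >= 0).all() and (t >= 0).all()
    assert np.isclose(lhs, r @ x + t @ w + u)
```

Splitting each coefficient by sign and routing the negative parts through w is what keeps r_ij and t_ij nonnegative, which is exactly the property the submodularity argument needs.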
Therefore, instead, we can first find the κ (κ ≥ 1) most violated constraints among (30b) and (30d), i.e., find the epigraphs corresponding to the κ largest values in the following set:

{ ẑ_j + γ̂ − max{ r_ij^T x̂ + t_ij^T ŵ + u_ij, 0 } : i ∈ [I], j ∈ [N] } ∪ { ‖ηx̂ + (1 − η)e‖_* − ν̂ }.

Finally, we can generate and add valid inequalities by separating (x̂, ŵ, ẑ, γ̂, ν̂) from the convex hulls of these κ epigraphs according to Lemma 3.

4.3 Numerical Study

In this subsection, we present a numerical study comparing the big-M formulation in Theorem 9 with the big-M free formulation in Theorem 17 and its corollaries on the distributionally robust multidimensional knapsack problem (DRMKP) [10, 34, 37]. In DRMKP, there are n items and I knapsacks. Additionally, c_j represents the value of item j for all j ∈ [n], ξ_i := [ξ_i1, ..., ξ_in] represents the vector of

random item weights in knapsack i, and b_i > 0 represents the capacity limit of knapsack i, for all i ∈ [I]. The binary decision variable x_j = 1 if the jth item is picked and 0 otherwise. We use the Wasserstein ambiguity set under Assumptions (A1) and (A2) with the L_2-norm as the distance metric. With the notation above, DRMKP is formulated as

v* = max_{x ∈ {0,1}^n} c^T x, s.t. inf_{P∈P} P{ ξ_i^T x ≤ b_i, i ∈ [I] } ≥ 1 − ε.   (34)

To test the proposed formulations, we generate 10 random instances with n = 20 and I = 10, indexed by 1, 2, ..., 10. For each instance, we generate N = 1000 empirical samples ζ^j ∈ ℝ_+^{I×n} from a uniform distribution over the box [1, 10]^{I×n}. For each l ∈ [n], we independently generate c_l from the uniform distribution on the interval [1, 10], while for each i ∈ [I], we set b_i := 100. We test these 10 random instances with risk parameter ε ∈ {0.05, 0.10} and Wasserstein radius δ ∈ {0.1, 0.2}.

Our first approach is to solve the big-M reformulation of DRMKP in Theorem 9, which reads as follows:

v* = max_{x ∈ {0,1}^n} c^T x,
s.t. δν ≤ εγ + (1/N) Σ_{j∈[N]} z_j,
     z_j + γ ≤ s_ij, i ∈ [I], j ∈ [N],
     s_ij ≥ b_i − x^T ζ_i^j, i ∈ [I], j ∈ [N],   (35)
     s_ij ≤ M_ij y_ij, s_ij ≤ b_i − x^T ζ_i^j + M_ij (1 − y_ij), i ∈ [I], j ∈ [N],
     ‖x‖_2 ≤ ν, ν ≥ 1, γ ≥ 0, s_ij ≥ 0, z_j ≤ 0, y_ij ∈ {0, 1}, i ∈ [I], j ∈ [N],

where M_ij = max{ b_i, e^T ζ_i^j − b_i } for each i ∈ [I], j ∈ [N]. We compare this formulation with the big-M free formulation in Theorem 17 and its corollaries, which reads as follows:

v* = max_{x ∈ {0,1}^n} c^T x,
s.t. δν ≤ εγ + (1/N) Σ_{j∈[N]} z_j,
     (w, z_j + γ) ∈ conv(Π_ij), i ∈ [I], j ∈ [N],
     z_j ≤ 0, j ∈ [N],   (36)
     (x, ν) ∈ conv(Π_0),
     w_l + x_l = 1, l ∈ [n],
     ν ≥ 1, γ ≥ 0, w ∈ [0, 1]^n,

where

Π_ij = { (w, φ) : max{ (ζ_i^j)^T w + b_i − (ζ_i^j)^T e, 0 } ≥ φ, w ∈ {0, 1}^n }, i ∈ [I], j ∈ [N],   (37a)
Π_0 = { (x, φ) : ‖x‖_2 ≤ φ, x ∈ {0, 1}^n },   (37b)

and their convex hulls conv(Π_ij) for i ∈ [I], j ∈ [N], and conv(Π_0) can be described by the system of EPI (28). Note that the fact that (35) and (36) are exact reformulations of DRMKP follows from Corollary 4, since b_i ≥ δ/ε for all i ∈ [I].
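A sketch of the random instance generator described above (an illustrative helper name, with sizes and seed as assumed defaults): samples uniform on the box [1, 10]^{I×n}, item values uniform on [1, 10], and all capacities fixed at 100.

```python
import numpy as np

def make_drmkp_instance(n=20, I=10, N=1000, seed=0):
    """Generate one random DRMKP instance as described above."""
    rng = np.random.default_rng(seed)
    zeta = rng.uniform(1.0, 10.0, size=(N, I, n))   # empirical item weights
    c = rng.uniform(1.0, 10.0, size=n)              # item values
    b = np.full(I, 100.0)                           # knapsack capacities
    return c, b, zeta

c, b, zeta = make_drmkp_instance()
# b_i >= delta/eps holds for every tested (eps, delta) pair,
# so the exactness condition of Corollary 4 applies.
assert all(b >= 0.2 / 0.05)
```

With b_i = 100 and the largest tested ratio δ/ε = 0.2/0.05 = 4, the capacity condition of Corollary 4 is satisfied with a wide margin on every instance.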

We use the commercial solver Gurobi (version 7.5, with default settings) to solve the instances of formulations (35) and (36). We set the time limit for solving each instance to 3600 seconds. The results are displayed in Table 2. We use UB, LB, GAP, Opt. Val., and Time to denote the best upper bound, the best lower bound, the optimality gap, the optimal objective value, and the total running time, respectively. All instances were executed on a MacBook Pro with a 2.80 GHz processor and 16GB RAM.

From Table 2, we observe that the overall running time of DRMKP formulation (36) significantly outperforms that of (35): almost all of the instances of formulation (36) can be solved within 10 minutes, while the majority of the instances of formulation (35) reach the time limit. The main reasons are two-fold: (i) formulation (35) involves O(NI + n) binary variables and O(NI) continuous variables, while formulation (36) only involves O(n) binary variables and O(N) continuous variables; and (ii) formulation (35) contains big-M coefficients, while formulation (36) is big-M free. We also observe that, as the risk parameter ε increases or the Wasserstein radius δ decreases, both formulations take longer to solve, but formulation (36) still significantly outperforms formulation (35). These results demonstrate the effectiveness of our proposed approaches.

5 Conclusion

In this paper, we studied a distributionally robust chance constrained program (DRCCP) with a Wasserstein ambiguity set. We showed that a DRCCP can be formulated as a conditional value-at-risk constrained optimization problem, and thus admits tight inner and outer approximations. When the metric space of random variables is a normed vector space, we showed that a DRCCP is mixed integer representable with big-M coefficients and additional binary variables, i.e., a DRCCP can be formulated as a mixed integer conic program. We also compared various inner and outer approximations and proved the corresponding inclusion relations.
We further proposed a big-m free formulation for a binary DRCCP. The numerical studies demonstrated that the developed big-m free formulation can significantly outperform the big-m one. Acknowledgments The author would like to thank Professor Shabbir Ahmed Georgia Tech) for his helpful comments on an earlier version of the paper.

Table 2: Performance comparison of formulation (35) and formulation (36). For each combination of ε ∈ {0.05, 0.10} and δ ∈ {0.1, 0.2}, the 10 instances (n = 20, I = 10) are reported with columns UB, LB, Time, GAP for formulation (35) and Opt. Val., Time for formulation (36), together with row averages. "NA" indicates that no feasible solution was found within the time limit.

References

[1] S. Ahmed, J. Luedtke, Y. Song, and W. Xie. Nonanticipative duality, relaxations, and formulations for chance-constrained stochastic programs. Mathematical Programming, 162(1-2):51–81, 2017.
[2] A. Atamtürk and V. Narayanan. Polymatroids and mean-risk minimization in discrete optimization. Operations Research Letters, 36(5):618–622, 2008.
[3] J. Blanchet, L. Chen, and X. Y. Zhou. Distributionally robust mean-variance portfolio selection with Wasserstein distances. arXiv preprint, 2018.
[4] J. Blanchet, Y. Kang, and K. Murthy. Robust Wasserstein profile inference and applications to machine learning. arXiv preprint, 2016.
[5] E. Boissard et al. Simple bounds for the convergence of empirical and occupation measures in 1-Wasserstein distance. Electronic Journal of Probability, 16:2296–2333, 2011.
[6] G. C. Calafiore and M. C. Campi. The scenario approach to robust control design. IEEE Transactions on Automatic Control, 51(5):742–753, 2006.
[7] G. C. Calafiore and L. El Ghaoui. On distributionally robust chance-constrained linear programs. Journal of Optimization Theory and Applications, 130(1):1–22, 2006.
[8] M. C. Campi, S. Garatti, and M. Prandini. The scenario approach for systems and control design. Annual Reviews in Control, 33(2), 2009.
[9] W. Chen, M. Sim, J. Sun, and C.-P. Teo. From CVaR to uncertainty set: Implications in joint chance-constrained optimization. Operations Research, 58(2), 2010.
[10] J. Cheng, E. Delage, and A. Lisser. Distributionally robust stochastic knapsack problem. SIAM Journal on Optimization, 24(3), 2014.
[11] C. Duan, W. Fang, L. Jiang, L. Yao, and J. Liu. Distributionally robust chance-constrained approximate AC-OPF with Wasserstein metric. IEEE Transactions on Power Systems, 2018.
[12] J. Edmonds. Submodular functions, matroids, and certain polyhedra. Edited by G. Goos, J. Hartmanis, and J. van Leeuwen, pages 11–26, 2003.
[13] L. El Ghaoui, M. Oks, and F. Oustry. Worst-case value-at-risk and robust portfolio optimization: A conic programming approach. Operations Research, 51(4), 2003.
[14] N. Fournier and A. Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3-4), 2015.
[15] R. Gao, X. Chen, and A. J. Kleywegt. Wasserstein distributional robustness and regularization in statistical learning. arXiv preprint, 2017.
[16] R. Gao and A. J. Kleywegt. Distributionally robust stochastic optimization with Wasserstein distance. arXiv preprint, 2016.
[17] G. A. Hanasusanto and D. Kuhn. Conic programming reformulations of two-stage distributionally robust linear programs over Wasserstein balls. arXiv preprint, 2016.
[18] G. A. Hanasusanto, V. Roitch, D. Kuhn, and W. Wiesemann. A distributionally robust perspective on uncertainty quantification and chance constrained programming. Mathematical Programming, 151:35–62, 2015.

[19] G. A. Hanasusanto, V. Roitch, D. Kuhn, and W. Wiesemann. Ambiguous joint chance constraints under mean and dispersion information. Operations Research, 65(3), 2017.
[20] A. R. Hota, A. Cherukuri, and J. Lygeros. Data-driven chance constrained optimization under Wasserstein ambiguity sets. arXiv preprint, 2018.
[21] R. Jiang and Y. Guan. Data-driven chance constrained stochastic program. Mathematical Programming, 158:291–327, 2016.
[22] R. Kiesel, R. Rühlicke, G. Stahl, and J. Zheng. The Wasserstein metric and robustness in risk management. Risks, 4(3):32, 2016.
[23] J. Lee and M. Raginsky. Minimax statistical learning and domain adaptation with Wasserstein distances. arXiv preprint, 2017.
[24] B. Li, R. Jiang, and J. L. Mathieu. Ambiguous risk constraints with moment and unimodality information. Mathematical Programming, Nov 2017.
[25] J. Luedtke and S. Ahmed. A sample approximation approach for optimization with probabilistic constraints. SIAM Journal on Optimization, 19(2), 2008.
[26] F. Luo and S. Mehrotra. Decomposition algorithm for distributionally robust optimization using Wasserstein metric. arXiv preprint, 2017.
[27] P. Mohajerin Esfahani and D. Kuhn. Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations. Mathematical Programming, Jul 2017.
[28] A. Nemirovski and A. Shapiro. Convex approximations of chance constrained programs. SIAM Journal on Optimization, 17(4), 2006.
[29] A. Nemirovski and A. Shapiro. Scenario approximations of chance constraints. In Probabilistic and Randomized Methods for Design under Uncertainty. Springer, 2006.
[30] F. Qiu, S. Ahmed, S. S. Dey, and L. A. Wolsey. Covering linear programming with violations. INFORMS Journal on Computing, 26(3), 2014.
[31] R. T. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2:21–41, 2000.
[32] S. Shafieezadeh-Abadeh, P. M. Esfahani, and D. Kuhn. Distributionally robust logistic regression. In Advances in Neural Information Processing Systems, 2015.
[33] A. Shapiro and A. Kleywegt. Minimax analysis of stochastic problems. Optimization Methods and Software, 17(3):523–542, 2002.
[34] Y. Song, J. R. Luedtke, and S. Küçükyavuz. Chance-constrained binary packing problems. INFORMS Journal on Computing, 26(4), 2014.
[35] W. Xie and S. Ahmed. Bicriteria approximation of chance constrained covering problems. Available at Optimization Online, 2018.
[36] W. Xie and S. Ahmed. Distributionally robust chance constrained optimal power flow with renewables: A conic reformulation. IEEE Transactions on Power Systems, 33(2), 2018.
[37] W. Xie and S. Ahmed. On deterministic reformulations of distributionally robust joint chance constrained optimization problems. SIAM Journal on Optimization, 28(2), 2018.


More information

Nonanticipative duality, relaxations, and formulations for chance-constrained stochastic programs

Nonanticipative duality, relaxations, and formulations for chance-constrained stochastic programs Nonanticipative duality, relaxations, and formulations for chance-constrained stochastic programs Shabbir Ahmed, James Luedtke, Yongjia Song & Weijun Xie Mathematical Programming A Publication of the Mathematical

More information

On Solving Two-Stage Distributionally Robust Disjunctive Programs with a General Ambiguity Set

On Solving Two-Stage Distributionally Robust Disjunctive Programs with a General Ambiguity Set On Solving Two-Stage Distributionally Robust Disjunctive Programs with a General Ambiguity Set Manish Bansal 12 Department of Industrial and Systems Engineering Virginia Tech, Blacksburg {bansal@vt.edu}

More information

Distributionally robust optimization techniques in batch bayesian optimisation

Distributionally robust optimization techniques in batch bayesian optimisation Distributionally robust optimization techniques in batch bayesian optimisation Nikitas Rontsis June 13, 2016 1 Introduction This report is concerned with performing batch bayesian optimization of an unknown

More information

Lifting for conic mixed-integer programming

Lifting for conic mixed-integer programming Math. Program., Ser. A DOI 1.17/s117-9-282-9 FULL LENGTH PAPER Lifting for conic mixed-integer programming Alper Atamtürk Vishnu Narayanan Received: 13 March 28 / Accepted: 28 January 29 The Author(s)

More information

A robust approach to the chance-constrained knapsack problem

A robust approach to the chance-constrained knapsack problem A robust approach to the chance-constrained knapsack problem Olivier Klopfenstein 1,2, Dritan Nace 2 1 France Télécom R&D, 38-40 rue du gl Leclerc, 92794 Issy-les-Moux cedex 9, France 2 Université de Technologie

More information

Robust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty

Robust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty EURO Journal on Computational Optimization manuscript No. (will be inserted by the editor) Robust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty Jannis Kurtz Received: date / Accepted:

More information

Robust linear optimization under general norms

Robust linear optimization under general norms Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn

More information

Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty

Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty EURO Journal on Computational Optimization manuscript No. (will be inserted by the editor) Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty Christoph Buchheim Jannis Kurtz Received:

More information

Strong Formulations of Robust Mixed 0 1 Programming

Strong Formulations of Robust Mixed 0 1 Programming Math. Program., Ser. B 108, 235 250 (2006) Digital Object Identifier (DOI) 10.1007/s10107-006-0709-5 Alper Atamtürk Strong Formulations of Robust Mixed 0 1 Programming Received: January 27, 2004 / Accepted:

More information

Second Order Cone Programming, Missing or Uncertain Data, and Sparse SVMs

Second Order Cone Programming, Missing or Uncertain Data, and Sparse SVMs Second Order Cone Programming, Missing or Uncertain Data, and Sparse SVMs Ammon Washburn University of Arizona September 25, 2015 1 / 28 Introduction We will begin with basic Support Vector Machines (SVMs)

More information

Optimization Problems with Probabilistic Constraints

Optimization Problems with Probabilistic Constraints Optimization Problems with Probabilistic Constraints R. Henrion Weierstrass Institute Berlin 10 th International Conference on Stochastic Programming University of Arizona, Tucson Recommended Reading A.

More information

Lagrangean Decomposition for Mean-Variance Combinatorial Optimization

Lagrangean Decomposition for Mean-Variance Combinatorial Optimization Lagrangean Decomposition for Mean-Variance Combinatorial Optimization Frank Baumann, Christoph Buchheim, and Anna Ilyina Fakultät für Mathematik, Technische Universität Dortmund, Germany {frank.baumann,christoph.buchheim,anna.ilyina}@tu-dortmund.de

More information

Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty

Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty Worst-Case Violation of Sampled Convex Programs for Optimization with Uncertainty Takafumi Kanamori and Akiko Takeda Abstract. Uncertain programs have been developed to deal with optimization problems

More information

Distributionally Robust Reward-risk Ratio Programming with Wasserstein Metric

Distributionally Robust Reward-risk Ratio Programming with Wasserstein Metric oname manuscript o. will be inserted by the editor) Distributionally Robust Reward-risk Ratio Programming with Wasserstein Metric Yong Zhao Yongchao Liu Jin Zhang Xinmin Yang Received: date / Accepted:

More information

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018

Lecture 1. Stochastic Optimization: Introduction. January 8, 2018 Lecture 1 Stochastic Optimization: Introduction January 8, 2018 Optimization Concerned with mininmization/maximization of mathematical functions Often subject to constraints Euler (1707-1783): Nothing

More information

Cut Generation for Optimization Problems with Multivariate Risk Constraints

Cut Generation for Optimization Problems with Multivariate Risk Constraints Cut Generation for Optimization Problems with Multivariate Risk Constraints Simge Küçükyavuz Department of Integrated Systems Engineering, The Ohio State University, kucukyavuz.2@osu.edu Nilay Noyan 1

More information

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials G. Y. Li Communicated by Harold P. Benson Abstract The minimax theorem for a convex-concave bifunction is a fundamental theorem

More information

On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints

On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints Pranjal Awasthi Vineet Goyal Brian Y. Lu July 15, 2015 Abstract In this paper, we study the performance of static

More information

Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk

Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk Noname manuscript No. (will be inserted by the editor) Quadratic Two-Stage Stochastic Optimization with Coherent Measures of Risk Jie Sun Li-Zhi Liao Brian Rodrigues Received: date / Accepted: date Abstract

More information

A note on scenario reduction for two-stage stochastic programs

A note on scenario reduction for two-stage stochastic programs A note on scenario reduction for two-stage stochastic programs Holger Heitsch a and Werner Römisch a a Humboldt-University Berlin, Institute of Mathematics, 199 Berlin, Germany We extend earlier work on

More information

Second-order cone programming formulation for two player zero-sum game with chance constraints

Second-order cone programming formulation for two player zero-sum game with chance constraints Second-order cone programming formulation for two player zero-sum game with chance constraints Vikas Vikram Singh a, Abdel Lisser a a Laboratoire de Recherche en Informatique Université Paris Sud, 91405

More information

The Value of Randomized Solutions in Mixed-Integer Distributionally Robust Optimization Problems

The Value of Randomized Solutions in Mixed-Integer Distributionally Robust Optimization Problems The Value of Randomized Solutions in Mixed-Integer Distributionally Robust Optimization Problems Erick Delage 1 and Ahmed Saif 2 1 Department of Decision Sciences, HEC Montréal 2 Department of Industrial

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Easy and not so easy multifacility location problems... (In 20 minutes.)

Easy and not so easy multifacility location problems... (In 20 minutes.) Easy and not so easy multifacility location problems... (In 20 minutes.) MINLP 2014 Pittsburgh, June 2014 Justo Puerto Institute of Mathematics (IMUS) Universidad de Sevilla Outline 1 Introduction (In

More information

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,

More information

The Value of Adaptability

The Value of Adaptability The Value of Adaptability Dimitris Bertsimas Constantine Caramanis September 30, 2005 Abstract We consider linear optimization problems with deterministic parameter uncertainty. We consider a departure

More information

THE stochastic and dynamic environments of many practical

THE stochastic and dynamic environments of many practical A Convex Optimization Approach to Distributionally Robust Markov Decision Processes with Wasserstein Distance Insoon Yang, Member, IEEE Abstract In this paper, we consider the problem of constructing control

More information

Sequential pairing of mixed integer inequalities

Sequential pairing of mixed integer inequalities Sequential pairing of mixed integer inequalities Yongpei Guan, Shabbir Ahmed, George L. Nemhauser School of Industrial & Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive, Atlanta,

More information

arxiv: v2 [math.oc] 16 Jul 2016

arxiv: v2 [math.oc] 16 Jul 2016 Distributionally Robust Stochastic Optimization with Wasserstein Distance Rui Gao, Anton J. Kleywegt School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0205

More information

Mathematical Optimization Models and Applications

Mathematical Optimization Models and Applications Mathematical Optimization Models and Applications Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 1, 2.1-2,

More information

Superhedging and distributionally robust optimization with neural networks

Superhedging and distributionally robust optimization with neural networks Superhedging and distributionally robust optimization with neural networks Stephan Eckstein joint work with Michael Kupper and Mathias Pohl Robust Techniques in Quantitative Finance University of Oxford

More information

Concepts and Applications of Stochastically Weighted Stochastic Dominance

Concepts and Applications of Stochastically Weighted Stochastic Dominance Concepts and Applications of Stochastically Weighted Stochastic Dominance Jian Hu Department of Industrial Engineering and Management Sciences Northwestern University jianhu@northwestern.edu Tito Homem-de-Mello

More information

Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach

Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach L. Jeff Hong Department of Industrial Engineering and Logistics Management The Hong Kong University of Science

More information

Generalized Decision Rule Approximations for Stochastic Programming via Liftings

Generalized Decision Rule Approximations for Stochastic Programming via Liftings Generalized Decision Rule Approximations for Stochastic Programming via Liftings Angelos Georghiou 1, Wolfram Wiesemann 2, and Daniel Kuhn 3 1 Process Systems Engineering Laboratory, Massachusetts Institute

More information

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs Raphael Louca Eilyan Bitar Abstract Robust semidefinite programs are NP-hard in general In contrast, robust linear programs admit

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

1 Regression with High Dimensional Data

1 Regression with High Dimensional Data 6.883 Learning with Combinatorial Structure ote for Lecture 11 Instructor: Prof. Stefanie Jegelka Scribe: Xuhong Zhang 1 Regression with High Dimensional Data Consider the following regression problem:

More information

Branch-and-cut (and-price) for the chance constrained vehicle routing problem

Branch-and-cut (and-price) for the chance constrained vehicle routing problem Branch-and-cut (and-price) for the chance constrained vehicle routing problem Ricardo Fukasawa Department of Combinatorics & Optimization University of Waterloo May 25th, 2016 ColGen 2016 joint work with

More information

Data-Driven Risk-Averse Stochastic Optimization with Wasserstein Metric

Data-Driven Risk-Averse Stochastic Optimization with Wasserstein Metric Data-Driven Risk-Averse Stochastic Optimization with Wasserstein Metric Chaoyue Zhao and Yongpei Guan School of Industrial Engineering and Management Oklahoma State University, Stillwater, OK 74074 Department

More information

Safe Approximations of Chance Constraints Using Historical Data

Safe Approximations of Chance Constraints Using Historical Data Safe Approximations of Chance Constraints Using Historical Data İhsan Yanıkoğlu Department of Econometrics and Operations Research, Tilburg University, 5000 LE, Netherlands, {i.yanikoglu@uvt.nl} Dick den

More information

Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization

Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization Rudabeh Meskarian 1 Department of Engineering Systems and Design, Singapore

More information

Two-Term Disjunctions on the Second-Order Cone

Two-Term Disjunctions on the Second-Order Cone Noname manuscript No. (will be inserted by the editor) Two-Term Disjunctions on the Second-Order Cone Fatma Kılınç-Karzan Sercan Yıldız the date of receipt and acceptance should be inserted later Abstract

More information

SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD

SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD EE-731: ADVANCED TOPICS IN DATA SCIENCES LABORATORY FOR INFORMATION AND INFERENCE SYSTEMS SPRING 2016 INSTRUCTOR: VOLKAN CEVHER SCRIBERS: SOROOSH SHAFIEEZADEH-ABADEH, MICHAËL DEFFERRARD STRUCTURED SPARSITY

More information

c 2014 Society for Industrial and Applied Mathematics

c 2014 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 4, No. 3, pp. 1485 1506 c 014 Society for Industrial and Applied Mathematics DISTRIBUTIONALLY ROBUST STOCHASTIC KNAPSACK PROBLEM JIANQIANG CHENG, ERICK DELAGE, AND ABDEL LISSER Abstract.

More information

Robust Growth-Optimal Portfolios

Robust Growth-Optimal Portfolios Robust Growth-Optimal Portfolios! Daniel Kuhn! Chair of Risk Analytics and Optimization École Polytechnique Fédérale de Lausanne rao.epfl.ch 4 Technology Stocks I 4 technology companies: Intel, Cisco,

More information

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions

A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions A Geometric Framework for Nonconvex Optimization Duality using Augmented Lagrangian Functions Angelia Nedić and Asuman Ozdaglar April 15, 2006 Abstract We provide a unifying geometric framework for the

More information

Optimization Tools in an Uncertain Environment

Optimization Tools in an Uncertain Environment Optimization Tools in an Uncertain Environment Michael C. Ferris University of Wisconsin, Madison Uncertainty Workshop, Chicago: July 21, 2008 Michael Ferris (University of Wisconsin) Stochastic optimization

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

Sparse Approximation via Penalty Decomposition Methods

Sparse Approximation via Penalty Decomposition Methods Sparse Approximation via Penalty Decomposition Methods Zhaosong Lu Yong Zhang February 19, 2012 Abstract In this paper we consider sparse approximation problems, that is, general l 0 minimization problems

More information

Włodzimierz Ogryczak. Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS. Introduction. Abstract.

Włodzimierz Ogryczak. Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS. Introduction. Abstract. Włodzimierz Ogryczak Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS Abstract In multiple criteria linear programming (MOLP) any efficient solution can be found

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57

More information

POWER system operations increasingly rely on the AC

POWER system operations increasingly rely on the AC i Convex Relaxations and Approximations of Chance-Constrained AC-OF roblems Lejla Halilbašić, Student Member, IEEE, ierre inson, Senior Member, IEEE, and Spyros Chatzivasileiadis, Senior Member, IEEE arxiv:1804.05754v3

More information

The Trust Region Subproblem with Non-Intersecting Linear Constraints

The Trust Region Subproblem with Non-Intersecting Linear Constraints The Trust Region Subproblem with Non-Intersecting Linear Constraints Samuel Burer Boshi Yang February 21, 2013 Abstract This paper studies an extended trust region subproblem (etrs in which the trust region

More information

An Integer Programming Approach for Linear Programs with Probabilistic Constraints

An Integer Programming Approach for Linear Programs with Probabilistic Constraints An Integer Programming Approach for Linear Programs with Probabilistic Constraints James Luedtke Shabbir Ahmed George Nemhauser Georgia Institute of Technology 765 Ferst Drive, Atlanta, GA, USA luedtke@gatech.edu

More information