Distributionally Robust Discrete Optimization with Entropic Value-at-Risk


Distributionally Robust Discrete Optimization with Entropic Value-at-Risk

Daniel Zhuoyu Long, Department of SEEM, The Chinese University of Hong Kong
Jin Qi, NUS Business School, National University of Singapore

Abstract

We study the discrete optimization problem under the distributionally robust framework. We optimize the Entropic Value-at-Risk, which is a coherent risk measure and is also known as the Bernstein approximation for the chance constraint. We propose an efficient approximation algorithm that resolves the problem by solving a sequence of nominal problems. The computational results show that the number of nominal problems required to be solved is small under various distributional information sets.

Keywords: robust optimization; discrete optimization; coherent risk measure

1. Introduction

Robust optimization has been widely applied in decision making because it addresses data uncertainty while preserving computational tractability. The majority of studies have focused on developing modeling and solution techniques for convex optimization problems. Its application to discrete optimization problems, however, is rather limited. The difficulty mainly arises from the computational perspective. Since deterministic discrete optimization is in general already hard to solve, tackling the stochastic version of the problem without tremendously increasing the computational complexity is challenging. In this work, our primary goal is to provide an efficient approximation scheme for optimizing a coherent risk measure in discrete optimization problems where the distributional information is not exactly known.

Preprint submitted to Optimization Online, May 6, 2014

For discrete optimization under the robust framework, Kouvelis and Yu [1] describe the uncertainties by a finite number of scenarios and optimize the worst-case performance. They show, however, that even the shortest path problem with only two scenarios on the cost vector is already NP-hard. Averbakh [2] investigates the matroid optimization problem where each uncertain parameter belongs to an interval, and shows that it is polynomially solvable. However, the conclusion cannot be generalized to other discrete optimization problems. Bertsimas and Sim [3, 4] introduce the budget of uncertainty to control the level of conservatism. Under their framework, the shortest path problem under uncertainty can be solved via O(n) deterministic shortest path problems, where n is the number of arcs. Nevertheless, the above approaches use little information about the uncertainties. When more distributional information is available, e.g., mean or variance, the solutions from those approaches might be unnecessarily and overly conservative. There is also a literature using the stochastic programming approach for discrete optimization (see, for instance, [5]), but the computational issue is no less severe.

In this paper, we study the discrete optimization problem with more general distributional information by optimizing the Entropic Value-at-Risk (EVaR). EVaR was first proposed by Nemirovski and Shapiro [6] as a convex approximation of the chance constraint problem. It is also a coherent risk measure that can properly evaluate the risk of uncertain outcomes; see [7]. We then provide an efficient approximation algorithm that solves it via a sequence of nominal problems. Our computational results clearly demonstrate its benefit and suggest that the number of nominal problems we need to solve is relatively small.

The paper is structured as follows. In Section 2, we present our general framework.
In Section 3, we discuss the solution procedure, where we approximate EVaR using piecewise linear functions. In Section 4, we show how to find the linear segments efficiently. In Section 5, we provide several examples of distributional information sets to show that the algorithms work quite efficiently.

Notations: Vectors are represented as boldface characters, for example, x = (x_1, x_2, ..., x_n), where x_i denotes the ith element of the vector x. We use a tilde to represent uncertain quantities, such as the random variable z̃ or the random vector c̃.

2. Model

We consider the following nominal discrete optimization problem,

    min  c'x
    s.t. x ∈ X,

where X ⊆ {0, 1}^n. In practice, we may not have deterministic cost parameters c. For example, in the shortest path problem, the travel time on each arc might be uncertain. To incorporate these uncertainties, we let the cost associated with each x_i be c̃_i, i = 1, ..., n. Given a probability level ɛ ∈ (0, 1), it is natural to minimize the corresponding quantile by solving the following problem,

    min  τ
    s.t. Prob(c̃'x > τ) ≤ ɛ,
         x ∈ X,

where Prob(A) represents the probability of an event A. Essentially, this is the same problem as minimizing the Value-at-Risk (VaR) of c̃'x,

    min_{x∈X} VaR_{1−ɛ}(c̃'x),    (1)

where VaR_{1−ɛ}(z̃) = inf{ t : Prob(z̃ ≤ t) ≥ 1 − ɛ } for any random variable z̃. While VaR is a widely used risk measure, it is not convex, and hence Problem (1) suffers from computational intractability. Indeed, Nemirovski [8] has pointed out that the feasible set of Problem (1) may not be convex even under the relaxation of the discrete decision variables. Besides, VaR only accounts for the frequency of losses but pays no attention to their magnitude, which is actually non-negligible in many practical problems. To tackle these issues, we next introduce a variant of VaR.

Definition 1. The Entropic Value-at-Risk (EVaR) of a random variable z̃, whose distribution P is only known to lie in a distributional ambiguity set P, is defined as

    EVaR_{1−ɛ}(z̃) = inf_{α>0} α [ ln sup_{P∈P} E_P[exp(z̃/α)] − ln ɛ ].
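When the log-moment-generating function of z̃ is available, the infimum over α in Definition 1 is a one-dimensional convex minimization that can be computed numerically. Below is a minimal sketch (not part of the paper; the function name and data are ours), assuming a known normal distribution so that the numerical value can be checked against the closed form µ + σ√(2 ln(1/ɛ)) derived in Remark 1 below:

```python
import math

def evar_normal_numeric(mu, sigma, eps, lo=1e-6, hi=1e6, iters=200):
    # g(alpha) = alpha * (ln E[exp(z/alpha)] - ln eps) for z ~ N(mu, sigma^2),
    # i.e. g(alpha) = mu + sigma^2/(2*alpha) + alpha*ln(1/eps); convex on alpha > 0.
    def g(a):
        return mu + sigma ** 2 / (2.0 * a) + a * math.log(1.0 / eps)
    # ternary search works because g is convex in alpha
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) <= g(m2):
            hi = m2
        else:
            lo = m1
    return g(0.5 * (lo + hi))

mu, sigma, eps = 1.0, 2.0, 0.05
closed_form = mu + sigma * math.sqrt(2.0 * math.log(1.0 / eps))
approx = evar_normal_numeric(mu, sigma, eps)
```

The two values agree to high accuracy, illustrating that the one-dimensional search recovers the analytic optimum.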

Note that the definition of EVaR incorporates the situation where we may not have full knowledge of the probability distribution. Hence, the situation of knowing the exact distribution is only a special case with P being a singleton. This function was first proposed by Nemirovski and Shapiro [6] as a convex and safe approximation for the chance constraint, named the Bernstein approximation. They show that EVaR_{1−ɛ}(z̃) ≤ 0 implies Prob_P(z̃ > 0) ≤ ɛ for any P ∈ P, where Prob_P is the probability of the event under distribution P. With known distribution, Ahmadi-Javid [7] studies EVaR from the perspective of risk measures and proves that EVaR is a coherent risk measure, i.e., it has the properties of translation invariance, subadditivity, monotonicity, and positive homogeneity. With general distributional information, we can also easily check that EVaR in Definition 1 remains a coherent risk measure.

Instead of VaR, we minimize EVaR as follows:

    min_{x∈X} EVaR_{1−ɛ}(c̃'x).    (2)

From now on, we assume that for any i = 1, ..., n, the distribution P_i of the uncertain cost c̃_i belongs to a distributional ambiguity set P_i. Besides, c̃_i, i = 1, ..., n, are independent bounded random variables, and the distributional ambiguity set P of all possible distributions of c̃ can be written as P = P_1 × ... × P_n. We also impose the condition that the expectations of the uncertain costs satisfy sup_{P∈P} E_P[c̃] > 0. In particular, when P_i = { P : E_P(c̃_i) = 0, P(c̃_i ∈ [−1, 1]) = 1 } for all i = 1, ..., n, Ben-Tal et al. [9] have shown that the Bernstein approximation is a less conservative approximation to the ambiguous chance constraint than the budgeted approximation proposed by Bertsimas and Sim [4].

Remark 1. When the uncertain costs c̃_i independently follow normal distributions with means µ_i and standard deviations σ_i, we observe that

    EVaR_{1−ɛ}(c̃'x) = inf_{α>0} α [ ln E_P[exp(c̃'x/α)] − ln ɛ ]
                     = inf_{α>0} { µ'x + Σ_{i=1}^n σ_i² x_i/(2α) + α ln(1/ɛ) }
                     = µ'x + √( 2 ln(1/ɛ) Σ_{i=1}^n σ_i² x_i ),

where the optimal α is achieved at α* = √( Σ_{i=1}^n σ_i² x_i / (2 ln(1/ɛ)) ). In this case, Problem (2) is equivalent to the classical mean-standard deviation model,

    min_{x∈X} µ'x + √( 2 ln(1/ɛ) Σ_{i=1}^n σ_i² x_i ).

3. Solution procedure

We now focus on the solution procedure for Problem (2), and start by decomposing the uncertain costs.

Proposition 1. Given any ɛ ∈ (0, 1), we have

    Z*_ɛ := min_{x∈X} EVaR_{1−ɛ}(c̃'x) = inf_{α>0, x∈X} { Σ_{i=1}^n C_i(α) x_i − α ln ɛ },    (3)

where for any i = 1, ..., n,

    C_i(α) = sup_{P∈P_i} α ln E_P[exp(c̃_i/α)] = α ln sup_{P∈P_i} E_P[exp(c̃_i/α)]

is a convex nonincreasing function in α > 0, and lim_{α→+∞} C_i(α) = sup_{P∈P_i} E_P(c̃_i).

With Proposition 1, we first consider a special case of Problem (3) in which the nominal problem is an optimization problem over a greedoid. In this situation, the optimal solution can be found exactly by solving a polynomial number of nominal problems.

Corollary 1. Suppose the nominal problem is an optimization problem over a greedoid, and the uncertain costs c̃_i, i = 1, ..., n, have the structure c̃_i = a_i + b_i z̃_i, where the z̃_i are independent and identically distributed and b_i > 0. Then the optimal solution of Problem (3) can be derived by solving at most n(n − 1)/2 + 1 = O(n²) nominal problems.

The above corollary, however, cannot be applied to Problem (3) in general. We next provide an approximation scheme for Problem (3) by approximating the function C_i(α) with a piecewise linear function.
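Remark 1 and the decomposition in Proposition 1 can be checked numerically for a fixed decision x: under independent normal costs, the decomposed objective Σ_i C_i(α)x_i − α ln ɛ evaluated at the stated optimal α equals the mean-standard deviation value. A small sketch (our own illustration, with made-up data):

```python
import math

eps = 0.05
mu = [1.0, 2.0, 0.5]
sigma = [0.5, 1.0, 0.25]
x = [1, 0, 1]  # a fixed binary decision vector

def C(i, a):
    # C_i(alpha) for normal c_i, as in Remark 1: mu_i + sigma_i^2/(2*alpha)
    return mu[i] + sigma[i] ** 2 / (2.0 * a)

def objective(a):
    # decomposed objective of Problem (3): sum_i C_i(alpha) x_i - alpha * ln(eps)
    return sum(C(i, a) * x[i] for i in range(len(x))) - a * math.log(eps)

var = sum(sigma[i] ** 2 * x[i] for i in range(len(x)))
a_star = math.sqrt(var / (2.0 * math.log(1.0 / eps)))  # optimal alpha from Remark 1
mean_std = (sum(mu[i] * x[i] for i in range(len(x)))
            + math.sqrt(2.0 * math.log(1.0 / eps) * var))
```

Evaluating `objective(a_star)` reproduces `mean_std`, and perturbing α away from `a_star` only increases the objective, consistent with convexity in α.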

Suppose we know the optimal α in Problem (3) is bounded within [α_min, α_max]; we then choose α_1 < α_2 < ... < α_{m+1} such that α_1 = α_min and α_{m+1} = α_max. For any i = 1, ..., n, we denote C_ij = C_i(α_j) and define the function

    Ĉ_ij(α) = C_ij + (C_i(j+1) − C_ij)/(α_{j+1} − α_j) · (α − α_j),  j = 1, ..., m.

In other words, Ĉ_ij(α) is the linear function connecting the points (α_j, C_i(α_j)) and (α_{j+1}, C_i(α_{j+1})). In addition, we define the piecewise linear function

    Ĉ_i(α) = max_{j=1,...,m} Ĉ_ij(α).

We use the piecewise linear function Ĉ_i(α) to approximate the convex function C_i(α) and solve the following problem:

    Ẑ_ɛ = min_{α∈[α_min,α_max], x∈X} { Σ_{i=1}^n Ĉ_i(α) x_i − α ln ɛ }.    (4)

We next discuss the solution procedure of Problem (4).

Proposition 2. Ẑ_ɛ can be equivalently calculated as

    Ẑ_ɛ = min_{j=1,...,m+1} U(j),

where

    U(j) = −α_j ln ɛ + min_{x∈X} Σ_{i=1}^n C_ij x_i.

Notice that U(j) can be derived by solving a deterministic version of the nominal problem. Therefore, Ẑ_ɛ can be calculated with (m + 1) underlying deterministic problems. To see how closely Ẑ_ɛ can approximate Z*_ɛ, we need the following proposition.

Proposition 3. Suppose there exists β > 0 such that Ĉ_i(α) ≤ (1 + β) C_i(α) for any i = 1, ..., n and α ∈ [α_min, α_max]. We then have

    Z*_ɛ ≤ Ẑ_ɛ ≤ (1 + β) Z*_ɛ.

Therefore, we can use Ẑ_ɛ to approximate Z*_ɛ arbitrarily closely. Clearly, the approximation level (1 + β) affects the number of linear segments, m, which in turn is critical to the computational complexity of the approximation problem.
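The computation in Proposition 2 can be sketched end to end on a toy instance. Here we assume hypothetical normal costs (so C_i(α) = µ_i + σ_i²/(2α), as in Remark 1) and a toy feasible set X of all 0-1 vectors selecting exactly two of four items, so the nominal problem is solvable by enumeration; for simplicity the break points form an even grid rather than the optimized ones of Section 4. All data are made up:

```python
import math
from itertools import combinations

eps, beta = 0.05, 0.1
mu = [2.0, 3.0, 1.5, 2.5]
sig = [1.0, 0.5, 1.5, 1.0]
n = len(mu)

def C(i, a):
    # normal-case C_i(alpha) = mu_i + sigma_i^2 / (2 * alpha)
    return mu[i] + sig[i] ** 2 / (2.0 * a)

def nominal(costs):
    # toy nominal problem: choose exactly two of the four items at minimum cost
    return min(costs[i] + costs[j] for i, j in combinations(range(n), 2))

a_min, a_max, m = 0.05, 5.0, 60
alphas = [a_min + k * (a_max - a_min) / m for k in range(m + 1)]

# Proposition 2: Z_hat = min_j U(j), U(j) = -alpha_j*ln(eps) + min_x sum_i C_ij x_i
Z_hat = min(-a * math.log(eps) + nominal([C(i, a) for i in range(n)]) for a in alphas)

# brute-force reference over a much finer grid on [a_min, a_max]
grid = (a_min + k * (a_max - a_min) / 50000 for k in range(50001))
Z_ref = min(-a * math.log(eps) + nominal([C(i, a) for i in range(n)]) for a in grid)
```

Since each U(j) evaluates the exact objective at α_j, Ẑ_ɛ upper-bounds the fine-grid optimum, and on this instance the two agree to within a fraction of a percent even with a modest even grid.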

4. Guidelines for Selecting Break Points

In this section, we show potential approaches to find the bounds [α_min, α_max] and discuss how to search for the linear segments that approximate the function C_i(α) within the (1 + β) level.

Setting α_min = 0 is obviously one potential way to set the lower bound on the optimal α. However, this will incur many linear segments at the beginning of the search. So we next provide another approach to find better α_min and α_max. We denote by ∂C_i(α) the subdifferential of C_i at α for i = 1, ..., n.

Proposition 4. Consider any α*_min ∈ A_min and α*_max ∈ A_max, where

    A_min = { α > 0 : max_{x∈X} Σ_{i=1}^n s_i x_i ≤ ln ɛ, s_i ∈ ∂C_i(α), i = 1, ..., n },
    A_max = { α > 0 : min_{x∈X} Σ_{i=1}^n s_i x_i ≥ ln ɛ, s_i ∈ ∂C_i(α), i = 1, ..., n }

are both convex sets. Then the optimal α in Problem (3) must lie in the range [α*_min, α*_max].

Remark 2. There are a few points we need to highlight with respect to Proposition 4. First, as both A_min and A_max are convex, we can decrease (increase) the value of any initial α until it becomes an element of A_min (A_max), thereby obtaining a lower (upper) bound on the optimal α. Second, the set A_min might be empty, in which case we cannot use Proposition 4 to find the lower bound and hence need other alternatives, such as simply setting α_min = 0. Third, the conditions defining the sets A_min and A_max involve a discrete optimization problem, which might be computationally intractable. In that case, we can relax x ∈ X to x ∈ Conv(X), where Conv(·) represents the convex hull. We then have a stricter but easier condition to check, and still guarantee the bounds on the optimal α.

We now study how to choose α_j, j = 1, ..., m, for any approximation level (1 + β) in Proposition 3. Denote by ⌈·⌉ the ceiling, i.e., ⌈z⌉ = min{ z_o ∈ Z : z_o ≥ z }.

Proposition 5. Given any ɛ > 0 and i = 1, ..., n, let µ_i = sup_{P∈P_i} E_P[c̃_i], and select

    m = max_{i=1,...,n} ⌈ s_i (α_min − α_max) / (µ_i β) ⌉

and α_j = α_min + (j − 1)/m · (α_max − α_min) for all j = 2, ..., m, where s_i ∈ ∂C_i(α_min). We then have Ĉ_i(α) ≤ (1 + β) C_i(α) for all α ∈ [α_min, α_max], i = 1, ..., n.

Proposition 5 provides an upper bound on the number of linear segments needed to achieve the (1 + β)-approximation for an arbitrary level of β. However, the scheme simply divides the interval [α_min, α_max] evenly, and may lead to an unnecessarily high value of m. Indeed, we can use the following algorithm to find the value of m and the corresponding break points α_1, ..., α_m efficiently.

Algorithm for Break Points
Initialization: k = 1, α_1 = α_min.
repeat
  1. Solve
         α_{k+1} := max α_o
                    s.t. f_i(α_k, α_o) ≤ 0, i = 1, ..., n,    (5)
                         α_k < α_o ≤ α_max,
     where
         f_i(α_k, α_o) = max_{α∈[α_k,α_o]} g_i(α_k, α_o, α),
     and
         g_i(α_k, α_o, α) = C_i(α_k) + (C_i(α_o) − C_i(α_k))/(α_o − α_k) · (α − α_k) − (1 + β) C_i(α).
  2. Update k = k + 1.
until α_k = α_max.
Output: the break points α_1, ..., α_k, i.e., m = k − 1 linear segments.

Proposition 6. Consider any α_k ≥ α_min and i = 1, ..., n. f_i(α_k, α_o) is nondecreasing in α_o ∈ (α_k, α_max], and g_i(α_k, α_o, α) is concave in α.

According to Proposition 6, to calculate α_{k+1} in Problem (5), we can use a binary search on α_o. At any given value of α_o, we check the sign of max_{i=1,...,n} f_i(α_k, α_o), where each f_i(α_k, α_o) can be obtained from a univariate concave maximization problem that can be solved efficiently. Interestingly, when c̃_i, i = 1, ..., n, are all normally distributed, we can calculate the break points analytically based on the Algorithm for Break Points.
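For normally distributed costs, one step of the binary search in the Algorithm for Break Points can be implemented directly and compared with the closed-form break points of Corollary 2 below. A sketch under assumed parameters (for the normal case the inner concave maximization has an analytic maximizer, which we exploit; all data are made up):

```python
import math

beta = 0.1
mus = [1.0, 2.0]
sigmas = [1.0, 1.0]
a_max = 100.0

def C(i, a):
    # normal-case C_i(alpha) = mu_i + sigma_i^2 / (2 * alpha)
    return mus[i] + sigmas[i] ** 2 / (2.0 * a)

def f(i, a_k, a_o):
    # f_i(a_k, a_o) = max over alpha in [a_k, a_o] of g_i(a_k, a_o, alpha);
    # for normal C_i the unconstrained maximizer is sqrt((1+beta)*a_k*a_o) (clipped)
    slope = (C(i, a_o) - C(i, a_k)) / (a_o - a_k)
    g = lambda a: C(i, a_k) + slope * (a - a_k) - (1.0 + beta) * C(i, a)
    a_star = min(max(math.sqrt((1.0 + beta) * a_k * a_o), a_k), a_o)
    return max(g(a_k), g(a_o), g(a_star))

def next_breakpoint(a_k, iters=200):
    # binary search for the largest a_o in (a_k, a_max] with max_i f_i(a_k, a_o) <= 0
    if max(f(i, a_k, a_max) for i in range(len(mus))) <= 0.0:
        return a_max
    lo, hi = a_k, a_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if max(f(i, a_k, mid) for i in range(len(mus))) <= 0.0:
            lo = mid
        else:
            hi = mid
    return lo

# closed form from Corollary 2: the smallest mean-variance ratio binds (here j = 0)
a_k = 1.0
j = min(range(len(mus)), key=lambda i: mus[i] / sigmas[i] ** 2)
closed = 1.0 / (math.sqrt((1.0 + beta) / a_k)
                - math.sqrt(2.0 * beta * mus[j] / sigmas[j] ** 2 + beta / a_k)) ** 2
found = next_breakpoint(a_k)
```

The binary search reproduces the analytic break point, confirming that the constraint for the cost with the smallest mean-variance ratio is the binding one.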

Corollary 2. When the random vector c̃ follows a normal distribution with mean µ and standard deviation σ, we have, for any α > 0,

    C_i(α) = µ_i + σ_i²/(2α),  i = 1, ..., n.

Based on the Algorithm for Break Points, we can calculate the break points as

    α_{k+1} = 1 / ( √((1 + β)/α_k) − √(2β µ_j/σ_j² + β/α_k) )²,  k = 1, ..., m − 1,

where j = arg min_{i=1,...,n} µ_i/σ_i². The number of linear segments m is bounded above by

    ⌈ ln(α_max/α_min) / (2 ln(√(1 + β) + √β)) ⌉ + 1.

Corollary 2 suggests that the calculation of break points under the normal distribution assumption depends only on the uncertain cost c̃_j that has the smallest mean-variance ratio. Hence, to guarantee the (1 + β) approximation level, the number of linear segments is not related to the number of decision variables n. It is proportional to 1/(2 ln(√(1 + β) + √β)). Table 1 shows this value as β varies.

[Table 1: Bound for the number of linear segments under normal distribution — values of 1/(2 ln(√(1 + β) + √β)) for varying β.]

However, for other distributional information sets, it is not easy to find closed-form formulas for the break points and the number of linear segments. Next, we conduct computational studies to show the calculation of break points.

5. Computational studies

In this section, we consider a stochastic shortest path problem to show the calculation of break points, and we fix the approximation level β throughout

all computational studies. We consider three different types of distributional information on the uncertainties c̃: a distributional ambiguity set, the normal distribution, and the uniform distribution. The evaluation of C_i(α) for the case of the normal distribution has been discussed in Corollary 2.

Distributional ambiguity set. Nemirovski and Shapiro [6] have shown the calculation of the function C_i(α) under various distributional ambiguity sets. Since the calculation procedure is quite similar across sets, we take a specific one as an example. We assume that the random variable c̃_i takes values in [c_i, c̄_i] and that its mean equals µ_i, i.e.,

    P_i = { P : E_P(c̃_i) = µ_i, P(c̃_i ∈ [c_i, c̄_i]) = 1 }.

Then we have, for i = 1, ..., n,

    C_i(α) = α ln( (c̄_i − µ_i)/(c̄_i − c_i) · exp(c_i/α) + (µ_i − c_i)/(c̄_i − c_i) · exp(c̄_i/α) ).

Uniform distribution. When the random variable c̃_i follows the uniform distribution U(c_i, c̄_i), then

    C_i(α) = α ln( α (exp(c̄_i/α) − exp(c_i/α)) / (c̄_i − c_i) ),  i = 1, ..., n.

We first investigate the same graph studied by Bertsimas and Sim [3], which has 300 nodes and 1475 arcs. They also specify the bounds on c̃_i, which we denote by [c_i, c̄_i]. For the case of the distributional ambiguity set, c_i and c̄_i represent the lower and upper bounds, and we consider the mean µ_i at different values. For the case of the normal distribution, we define µ_i = (c_i + c̄_i)/2 and vary the standard deviation σ_i, while for the uniform distribution, we simply let c̃_i follow U(c_i, c̄_i). The numbers of break points under these three situations are summarized in Table 2. We observe that these numbers are relatively small. This implies that we only need to solve a small collection of nominal problems to achieve a (1 + β) approximation of the stochastic problem, even for small values of β.

We next study how the size of the graph influences the number of break points. To this end, we randomly generate directed graphs following the procedure in [3].
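The two expressions for C_i(α) above can be evaluated and sanity-checked numerically: since the uniform distribution on [c_i, c̄_i] has mean (c_i + c̄_i)/2 and therefore belongs to the ambiguity set with that support and mean, the worst-case (two-point) value must dominate the uniform value for every α, and both approach the mean as α → +∞. A short sketch with made-up bounds (function names are ours):

```python
import math

def C_ambiguity(a, lo, hi, mu):
    # worst case over {P : E[c] = mu, P(c in [lo, hi]) = 1}: the supremum of
    # E[exp(c/alpha)] is attained by a two-point distribution on {lo, hi}
    p_hi = (mu - lo) / (hi - lo)
    return a * math.log((1.0 - p_hi) * math.exp(lo / a) + p_hi * math.exp(hi / a))

def C_uniform(a, lo, hi):
    # C(alpha) for c ~ U(lo, hi): E[exp(c/alpha)] = alpha*(e^{hi/a} - e^{lo/a})/(hi - lo)
    return a * math.log(a * (math.exp(hi / a) - math.exp(lo / a)) / (hi - lo))

lo, hi, mu = 0.0, 1.0, 0.5
vals = [(C_ambiguity(a, lo, hi, mu), C_uniform(a, lo, hi)) for a in (0.2, 1.0, 5.0, 50.0)]
```

As expected, the ambiguity-set value is the larger of the two at every α, reflecting the price of knowing only the support and the mean rather than the full distribution.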
The origin node is at (0, 0), and the destination node is at (1, 1). We randomly generate the two coordinates of each node, and calculate the Euclidean length of each arc i as c_i. For each arc i, we generate a parameter γ

from the uniform distribution U(1, 9), and let c̄_i = γ c_i. We then construct the distributional information similarly to our first study, letting µ_i = (c_i + c̄_i)/2 in the case of the distributional ambiguity set and σ_i = (c̄_i − c_i)/4 in the case of the normal distribution. Table 3 shows that the number of break points, and hence the computational complexity of the approximated problem, does not increase with the size of the graph. Compared with [3], which requires solving O(n) nominal problems, our method is more appealing for large networks.

[Table 2: Number of break points under different distributional information — rows for two values of ɛ; columns for the distributional ambiguity set with µ_i ∈ {(3c_i + c̄_i)/4, (c_i + c̄_i)/2, (c_i + 3c̄_i)/4}, the normal distribution with σ_i ∈ {(c̄_i − c_i)/2, (c̄_i − c_i)/4, (c̄_i − c_i)/6}, and the uniform distribution.]

[Table 3: Number of break points under different graphs — distributional ambiguity set, normal, and uniform distributions, for graphs with (number of nodes, number of arcs) ∈ {(30, 150), (100, 500), (200, 1000), (300, 1500), (500, 2500), (800, 5000)} and two values of ɛ.]

References

[1] P. Kouvelis, G. Yu, Robust Discrete Optimization and Its Applications, vol. 14, Springer, 1997.
[2] I. Averbakh, On the complexity of a class of combinatorial optimization problems with uncertainty, Mathematical Programming 90 (2) (2001) 263-272.

[3] D. Bertsimas, M. Sim, Robust discrete optimization and network flows, Mathematical Programming 98 (1-3) (2003) 49-71.
[4] D. Bertsimas, M. Sim, The price of robustness, Operations Research 52 (1) (2004) 35-53.
[5] R. Schultz, L. Stougie, M. H. van der Vlerk, Solving stochastic programs with integer recourse by enumeration: A framework using Gröbner basis reductions, Mathematical Programming 83 (1-3) (1998) 229-252.
[6] A. Nemirovski, A. Shapiro, Convex approximations of chance constrained programs, SIAM Journal on Optimization 17 (4) (2006) 969-996.
[7] A. Ahmadi-Javid, Entropic Value-at-Risk: A new coherent risk measure, Journal of Optimization Theory and Applications 155 (3) (2012) 1105-1123.
[8] A. Nemirovski, On safe tractable approximations of chance constraints, European Journal of Operational Research 219 (3) (2012) 707-718.
[9] A. Ben-Tal, L. El Ghaoui, A. Nemirovski, Robust Optimization, Princeton University Press, 2009.
[10] R. Kaas, M. Goovaerts, J. Dhaene, M. Denuit, Modern Actuarial Risk Theory, Springer.

Appendix

Proof of Proposition 1. Observe that for any given x ∈ X and α > 0,

    sup_{P∈P} α ln E_P[exp(c̃'x/α)]
        = α ln sup_{P∈P} E_P[ Π_{i=1}^n exp(c̃_i x_i/α) ]
        = Σ_{i=1}^n α ln sup_{P∈P_i} E_P[exp(c̃_i x_i/α)]
        = Σ_{i=1}^n ( sup_{P∈P_i} α ln E_P[exp(c̃_i/α)] ) x_i
        = Σ_{i=1}^n C_i(α) x_i,

where the first two equalities follow from the independence of the c̃_i and P = P_1 × ... × P_n, and the third equality holds since x_i ∈ {0, 1}. Therefore,

    Z*_ɛ = min_{x∈X} EVaR_{1−ɛ}(c̃'x) = min_{x∈X} inf_{α>0} { Σ_{i=1}^n C_i(α) x_i − α ln ɛ }.

To prove the convexity of C_i(α), i = 1, ..., n, we consider any α_1, α_2 > 0. Define α_λ = λα_1 + (1 − λ)α_2 for any λ ∈ [0, 1]. We have

    λ α_1 ln E_P[exp(c̃_i/α_1)] + (1 − λ) α_2 ln E_P[exp(c̃_i/α_2)]
        = α_λ ln [ (E_P[exp(c̃_i/α_1)])^{λα_1/α_λ} · (E_P[exp(c̃_i/α_2)])^{(1−λ)α_2/α_λ} ]
        ≥ α_λ ln E_P[ exp( (λα_1/α_λ) · c̃_i/α_1 ) · exp( ((1 − λ)α_2/α_λ) · c̃_i/α_2 ) ]
        = α_λ ln E_P[exp(c̃_i/α_λ)],

where the inequality follows from Hölder's inequality, applied with the conjugate exponents α_λ/(λα_1) and α_λ/((1 − λ)α_2). As the pointwise supremum preserves convexity, we conclude that C_i(α) is convex. For the special case where P is a singleton, Kaas et al. [10] have shown the nonincreasing property and provided the asymptotic analysis of C_i(α). In the general case, we can easily extend the result to complete the proof.

Proof of Corollary 1. For any i = 1, ..., n, let

    h_i(α) = α ln sup_{P∈P_i} E_P[exp(z̃_i/α)].

Since the z̃_i are independent and identically distributed, we have h_1(α) = ... = h_n(α). For simplicity, we define h(α) = h_1(α). Hence, we

can write the function C_i(α) in terms of h(α) as

    C_i(α) = α ln sup_{P∈P_i} E_P[exp((a_i + b_i z̃_i)/α)]
           = a_i + α ln sup_{P∈P_i} E_P[exp(z̃_i/(α/b_i))]
           = a_i + b_i h(α/b_i).

We next show that, as α varies from 0 to +∞, the two functions C_i(α) and C_j(α) either equal each other everywhere or have at most one intersection point. Consider any i ≠ j, i, j ∈ {1, ..., n}. If b_i = b_j, the claim is obvious. If b_i ≠ b_j, assume without loss of generality that b_j > b_i, and let l(α) = C_j(α) − C_i(α) for any α > 0. Then, for any Δ > 0,

    l(α + Δ) − l(α) = (C_j(α + Δ) − C_j(α)) − (C_i(α + Δ) − C_i(α))
                    = b_j ( h((α + Δ)/b_j) − h(α/b_j) ) − b_i ( h((α + Δ)/b_i) − h(α/b_i) ).

Letting Δ decrease to 0, we get

    lim_{Δ↓0} (l(α + Δ) − l(α))/Δ
        = lim_{Δ↓0} ( h((α + Δ)/b_j) − h(α/b_j) )/(Δ/b_j) − lim_{Δ↓0} ( h((α + Δ)/b_i) − h(α/b_i) )/(Δ/b_i)
        = d_{α/b_j} − d_{α/b_i},

where d_{α/b_j} and d_{α/b_i} are the upper bounds of the subdifferentials of h at α/b_j and α/b_i, respectively. Based on the definition of the subdifferential and the convexity of the function h, we have

    h(α/b_j) − h(α/b_i) ≥ d_{α/b_i} (α/b_j − α/b_i),
    h(α/b_i) − h(α/b_j) ≥ d_{α/b_j} (α/b_i − α/b_j).

Combining the two inequalities and noting α/b_j < α/b_i, we get d_{α/b_j} ≤ d_{α/b_i}. Hence lim_{Δ↓0} (l(α + Δ) − l(α))/Δ ≤ 0, which implies that l(α) is a nonincreasing function in α > 0. Therefore, C_j(α) and C_i(α) either coincide or have at most one intersection point. Consequently, the cost functions C_i(α), i = 1, ..., n, admit at most n(n − 1)/2 + 1 = O(n²) possible orderings. Since the nominal problem is an optimization problem over a greedoid, its optimal solution depends only on the ordering of the costs. Hence, the optimal x in Problem (3) can be obtained by enumerating all O(n²) possible orderings of the C_i(α).

Proof of Proposition 2. According to the convexity of C_i(α) and the definition of Ĉ_i(α), we know that Ĉ_i(α) = Ĉ_ij(α) if α ∈ [α_j, α_{j+1}]. Therefore,

    Ẑ_ɛ = min_{x∈X} min_{j=1,...,m} min_{α∈[α_j,α_{j+1}]} { Σ_{i=1}^n Ĉ_ij(α) x_i − α ln ɛ }.    (6)

Note that Ĉ_ij(α) is linear in α. Therefore, the inner minimum min_{α∈[α_j,α_{j+1}]} { Σ_{i=1}^n Ĉ_ij(α) x_i − α ln ɛ } is achieved at an extreme point, i.e., at α_j or α_{j+1}. Hence,

    min_{j=1,...,m} min_{α∈[α_j,α_{j+1}]} { Σ_{i=1}^n Ĉ_ij(α) x_i − α ln ɛ }
        = min_{j=1,...,m} min{ Σ_{i=1}^n Ĉ_ij(α_j) x_i − α_j ln ɛ,  Σ_{i=1}^n Ĉ_ij(α_{j+1}) x_i − α_{j+1} ln ɛ }
        = min_{j=1,...,m} min{ Σ_{i=1}^n C_ij x_i − α_j ln ɛ,  Σ_{i=1}^n C_i(j+1) x_i − α_{j+1} ln ɛ }
        = min_{j=1,...,m+1} { Σ_{i=1}^n C_ij x_i − α_j ln ɛ }.

Therefore, Problem (6) can be reformulated as

    Ẑ_ɛ = min_{x∈X} min_{j=1,...,m+1} { Σ_{i=1}^n C_ij x_i − α_j ln ɛ }
        = min_{j=1,...,m+1} { min_{x∈X} Σ_{i=1}^n C_ij x_i − α_j ln ɛ }
        = min_{j=1,...,m+1} U(j).

Proof of Proposition 3. Consider any α ∈ [α_min, α_max]; there exists j ∈ {1, ..., m} such that α ∈ [α_j, α_{j+1}]. Since C_i(α) is convex, we have

    C_i(α) ≤ Ĉ_ij(α) = Ĉ_i(α) ≤ (1 + β) C_i(α).

Therefore, for any x ∈ X and ɛ ∈ (0, 1),

    Σ_{i=1}^n C_i(α) x_i − α ln ɛ ≤ Σ_{i=1}^n Ĉ_i(α) x_i − α ln ɛ ≤ (1 + β) Σ_{i=1}^n C_i(α) x_i − (1 + β) α ln ɛ,

where the inequalities hold since x is nonnegative and ln ɛ < 0. Taking the minimum on each side, we get Z*_ɛ ≤ Ẑ_ɛ ≤ (1 + β) Z*_ɛ.

Proof of Proposition 4. The convexity of the sets A_min and A_max follows from the convexity of the functions C_i(α), which is established in Proposition 1. For any x ∈ X and 0 < α ≤ α*_min with α*_min ∈ A_min, we can find s_i ∈ ∂C_i(α*_min), i = 1, ..., n, such that max_{y∈X} Σ_{i=1}^n s_i y_i ≤ ln ɛ. Hence we have

    Σ_{i=1}^n C_i(α) x_i − α ln ɛ
        ≥ Σ_{i=1}^n ( C_i(α*_min) + s_i (α − α*_min) ) x_i − α*_min ln ɛ + (α*_min − α) ln ɛ
        ≥ Σ_{i=1}^n C_i(α*_min) x_i − α*_min ln ɛ − (α*_min − α) max_{y∈X} Σ_{i=1}^n s_i y_i + (α*_min − α) ln ɛ
        ≥ Σ_{i=1}^n C_i(α*_min) x_i − α*_min ln ɛ,

where the first inequality is due to the convexity of C_i(α), and the second and third inequalities hold since Σ_{i=1}^n s_i x_i ≤ max_{y∈X} Σ_{i=1}^n s_i y_i ≤ ln ɛ. Therefore, an optimal α can be found that is no less than α*_min.

For the other direction, consider any α ≥ α*_max with α*_max ∈ A_max. We can then find s_i ∈ ∂C_i(α*_max), i = 1, ..., n, such that min_{y∈X} Σ_{i=1}^n s_i y_i ≥ ln ɛ. Similarly to the previous analysis, we have

    Σ_{i=1}^n C_i(α) x_i − α ln ɛ
        ≥ Σ_{i=1}^n ( C_i(α*_max) + s_i (α − α*_max) ) x_i − α*_max ln ɛ − (α − α*_max) ln ɛ
        ≥ Σ_{i=1}^n C_i(α*_max) x_i − α*_max ln ɛ + (α − α*_max) min_{y∈X} Σ_{i=1}^n s_i y_i − (α − α*_max) ln ɛ
        ≥ Σ_{i=1}^n C_i(α*_max) x_i − α*_max ln ɛ.

Therefore, an optimal α can be found that is no greater than α*_max.

Proof of Proposition 5. Consider any α ∈ [α_j, α_{j+1}], j = 1, ..., m. We observe that

    Ĉ_i(α) − C_i(α) ≤ C_ij − C_i(j+1) ≤ −s_i (α_{j+1} − α_j) = −s_i (α_max − α_min)/m ≤ β µ_i,

and hence

    Ĉ_i(α) ≤ C_i(α) + β µ_i ≤ C_i(α) (1 + β µ_i/C_i(α)) ≤ (1 + β) C_i(α),

where the first inequality follows from the monotonicity of Ĉ_i(α) and C_i(α), the second from the convexity of C_i(α) together with the choice of m, and the last from C_i(α) ≥ µ_i > 0 for all α.

Proof of Proposition 6. To prove that f_i(α_k, α_o) is nondecreasing in α_o ∈ (α_k, α_max], consider any α_o, ᾱ_o such that α_k < α_o ≤ ᾱ_o ≤ α_max. In that case, α_o = λα_k + (1 − λ)ᾱ_o

for some λ ∈ [0, 1). We then have

    (C_i(α_k) − C_i(α_o))/(α_o − α_k) ≥ (C_i(α_k) − λC_i(α_k) − (1 − λ)C_i(ᾱ_o))/(λα_k + (1 − λ)ᾱ_o − α_k) = (C_i(α_k) − C_i(ᾱ_o))/(ᾱ_o − α_k),    (7)

where the inequality follows from the convexity of C_i. Therefore,

    f_i(α_k, ᾱ_o) = max_{α∈[α_k,ᾱ_o]} g_i(α_k, ᾱ_o, α)
        ≥ max_{α∈[α_k,α_o]} { C_i(α_k) − (α − α_k)/(ᾱ_o − α_k) · (C_i(α_k) − C_i(ᾱ_o)) − (1 + β) C_i(α) }
        ≥ max_{α∈[α_k,α_o]} { C_i(α_k) − (α − α_k)/(α_o − α_k) · (C_i(α_k) − C_i(α_o)) − (1 + β) C_i(α) }
        = f_i(α_k, α_o),

where the first inequality holds since α_o ≤ ᾱ_o, and the second is due to inequality (7). The concavity of g_i(α_k, α_o, α) in α follows immediately from the convexity of C_i(α).

Proof of Corollary 2. Since c̃_i follows a normal distribution, from its moment generating function we have, for any α > 0,

    C_i(α) = µ_i + σ_i²/(2α),  i = 1, ..., n.

Hence, in the Algorithm for Break Points, we get

    f_i(α_k, α_o) = max_{α∈[α_k,α_o]} g_i(α_k, α_o, α)
                  = −β µ_i + σ_i²/(2α_k) + σ_i²/(2α_o) + max_{α∈[α_k,α_o]} { −σ_i² α/(2α_k α_o) − (1 + β) σ_i²/(2α) }
                  = { σ_i² ( 1/√α_o − √(1 + β)/√α_k )²/2 − β µ_i − β σ_i²/(2α_k),  if α_o ≥ (1 + β) α_k,
                      −β µ_i − β σ_i²/(2α_o),                                      if α_k < α_o ≤ (1 + β) α_k,

where the inner maximum is attained at α = √((1 + β) α_k α_o) in the first case and at α = α_o in the second. Given this closed form of f_i(α_k, α_o), we can solve Problem (5) and get

    α_{k+1} = 1 / ( √((1 + β)/α_k) − √(2β µ_j/σ_j² + β/α_k) )²,

where j = arg min_{i=1,...,n} µ_i/σ_i². It is easy to show that

    α_{k+1} = 1 / ( √((1 + β)/α_k) − √(2β µ_j/σ_j² + β/α_k) )²
            ≥ 1 / ( √((1 + β)/α_k) − √(β/α_k) )²
            = α_k / (√(1 + β) − √β)²
            = (√(1 + β) + √β)² α_k,

where the inequality holds because µ_j > 0, so that √(2β µ_j/σ_j² + β/α_k) ≥ √(β/α_k), and because µ_j + σ_j²/(2α_k) ≥ (1 + β) µ_j, so that the denominator remains positive. Since each step thus multiplies α by at least (√(1 + β) + √β)², the number of linear segments m is bounded above by ⌈ ln(α_max/α_min) / (2 ln(√(1 + β) + √β)) ⌉ + 1.


More information

Min-max-min robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1

Min-max-min robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1 Min-max- robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1 Christoph Buchheim, Jannis Kurtz 2 Faultät Mathemati, Technische Universität Dortmund Vogelpothsweg

More information

A Robust Optimization Perspective on Stochastic Programming

A Robust Optimization Perspective on Stochastic Programming A Robust Optimization Perspective on Stochastic Programming Xin Chen, Melvyn Sim and Peng Sun Submitted: December 004 Revised: May 16, 006 Abstract In this paper, we introduce an approach for constructing

More information

A Bicriteria Approach to Robust Optimization

A Bicriteria Approach to Robust Optimization A Bicriteria Approach to Robust Optimization André Chassein and Marc Goerigk Fachbereich Mathematik, Technische Universität Kaiserslautern, Germany Abstract The classic approach in robust optimization

More information

Distributionally robust simple integer recourse

Distributionally robust simple integer recourse Distributionally robust simple integer recourse Weijun Xie 1 and Shabbir Ahmed 2 1 Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA 24061 2 School of Industrial & Systems

More information

Pareto Efficiency in Robust Optimization

Pareto Efficiency in Robust Optimization Pareto Efficiency in Robust Optimization Dan Iancu Graduate School of Business Stanford University joint work with Nikolaos Trichakis (HBS) 1/26 Classical Robust Optimization Typical linear optimization

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 00x, pp. xxx xxx ISSN 0364-765X EISSN 156-5471 0x xx0x 0xxx informs DOI 10.187/moor.xxxx.xxxx c 00x INFORMS On the Power of Robust Solutions in

More information

A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization

A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization A Geometric Characterization of the Power of Finite Adaptability in Multi-stage Stochastic and Adaptive Optimization Dimitris Bertsimas Sloan School of Management and Operations Research Center, Massachusetts

More information

Routing Optimization with Deadlines under Uncertainty

Routing Optimization with Deadlines under Uncertainty Submitted to Operations Research manuscript 2013-03-128 Routing Optimization with Deadlines under Uncertainty Patrick Jaillet Department of Electrical Engineering and Computer Science, Laboratory for Information

More information

Optimization Problems with Probabilistic Constraints

Optimization Problems with Probabilistic Constraints Optimization Problems with Probabilistic Constraints R. Henrion Weierstrass Institute Berlin 10 th International Conference on Stochastic Programming University of Arizona, Tucson Recommended Reading A.

More information

Min-max-min Robust Combinatorial Optimization Subject to Discrete Uncertainty

Min-max-min Robust Combinatorial Optimization Subject to Discrete Uncertainty Min-- Robust Combinatorial Optimization Subject to Discrete Uncertainty Christoph Buchheim Jannis Kurtz Received: date / Accepted: date Abstract We consider combinatorial optimization problems with uncertain

More information

arxiv: v2 [cs.ds] 27 Nov 2014

arxiv: v2 [cs.ds] 27 Nov 2014 Single machine scheduling problems with uncertain parameters and the OWA criterion arxiv:1405.5371v2 [cs.ds] 27 Nov 2014 Adam Kasperski Institute of Industrial Engineering and Management, Wroc law University

More information

A robust approach to the chance-constrained knapsack problem

A robust approach to the chance-constrained knapsack problem A robust approach to the chance-constrained knapsack problem Olivier Klopfenstein 1,2, Dritan Nace 2 1 France Télécom R&D, 38-40 rue du gl Leclerc, 92794 Issy-les-Moux cedex 9, France 2 Université de Technologie

More information

Random Convex Approximations of Ambiguous Chance Constrained Programs

Random Convex Approximations of Ambiguous Chance Constrained Programs Random Convex Approximations of Ambiguous Chance Constrained Programs Shih-Hao Tseng Eilyan Bitar Ao Tang Abstract We investigate an approach to the approximation of ambiguous chance constrained programs

More information

On Kusuoka Representation of Law Invariant Risk Measures

On Kusuoka Representation of Law Invariant Risk Measures MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 1, February 213, pp. 142 152 ISSN 364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/1.1287/moor.112.563 213 INFORMS On Kusuoka Representation of

More information

A Probabilistic Model for Minmax Regret in Combinatorial Optimization

A Probabilistic Model for Minmax Regret in Combinatorial Optimization A Probabilistic Model for Minmax Regret in Combinatorial Optimization Karthik Natarajan Dongjian Shi Kim-Chuan Toh May 21, 2013 Abstract In this paper, we propose a new probabilistic model for minimizing

More information

Robust Temporal Constraint Network

Robust Temporal Constraint Network Robust Temporal Constraint Network Hoong Chuin LAU Thomas OU Melvyn SIM National University of Singapore, Singapore 119260 lauhc@comp.nus.edu.sg, ouchungk@comp.nus.edu.sg, dscsimm@nus.edu.sg June 13, 2005

More information

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection Ruken Düzgün Aurélie Thiele July 2012 Abstract This paper describes a multi-range robust optimization approach

More information

A Bicriteria Approach to Robust Optimization

A Bicriteria Approach to Robust Optimization A Bicriteria Approach to Robust Optimization André Chassein and Marc Goerigk Fachbereich Mathematik, Technische Universität Kaiserslautern, Germany Abstract The classic approach in robust optimization

More information

Włodzimierz Ogryczak. Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS. Introduction. Abstract.

Włodzimierz Ogryczak. Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS. Introduction. Abstract. Włodzimierz Ogryczak Warsaw University of Technology, ICCE ON ROBUST SOLUTIONS TO MULTI-OBJECTIVE LINEAR PROGRAMS Abstract In multiple criteria linear programming (MOLP) any efficient solution can be found

More information

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints

Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Optimized Bonferroni Approximations of Distributionally Robust Joint Chance Constraints Weijun Xie Shabbir Ahmed Ruiwei Jiang February 13, 2017 Abstract A distributionally robust joint chance constraint

More information

Robust Dual-Response Optimization

Robust Dual-Response Optimization Yanıkoğlu, den Hertog, and Kleijnen Robust Dual-Response Optimization 29 May 1 June 1 / 24 Robust Dual-Response Optimization İhsan Yanıkoğlu, Dick den Hertog, Jack P.C. Kleijnen Özyeğin University, İstanbul,

More information

Quantifying Stochastic Model Errors via Robust Optimization

Quantifying Stochastic Model Errors via Robust Optimization Quantifying Stochastic Model Errors via Robust Optimization IPAM Workshop on Uncertainty Quantification for Multiscale Stochastic Systems and Applications Jan 19, 2016 Henry Lam Industrial & Operations

More information

The Complexity of Maximum. Matroid-Greedoid Intersection and. Weighted Greedoid Maximization

The Complexity of Maximum. Matroid-Greedoid Intersection and. Weighted Greedoid Maximization Department of Computer Science Series of Publications C Report C-2004-2 The Complexity of Maximum Matroid-Greedoid Intersection and Weighted Greedoid Maximization Taneli Mielikäinen Esko Ukkonen University

More information

Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty

Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty EURO Journal on Computational Optimization manuscript No. (will be inserted by the editor) Robust Combinatorial Optimization under Convex and Discrete Cost Uncertainty Christoph Buchheim Jannis Kurtz Received:

More information

Complexity of the min-max and min-max regret assignment problems

Complexity of the min-max and min-max regret assignment problems Complexity of the min-max and min-max regret assignment problems Hassene Aissi Cristina Bazgan Daniel Vanderpooten LAMSADE, Université Paris-Dauphine Place du Maréchal de Lattre de Tassigny, 75775 Paris

More information

Convex relaxations of chance constrained optimization problems

Convex relaxations of chance constrained optimization problems Convex relaxations of chance constrained optimization problems Shabbir Ahmed School of Industrial & Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive, Atlanta, GA 30332. May 12, 2011

More information

Robust constrained shortest path problems under budgeted uncertainty

Robust constrained shortest path problems under budgeted uncertainty Robust constrained shortest path problems under budgeted uncertainty Artur Alves Pessoa 1, Luigi Di Puglia Pugliese 2, Francesca Guerriero 2 and Michael Poss 3 1 Production Engineering Department, Fluminense

More information

1 Robust optimization

1 Robust optimization ORF 523 Lecture 16 Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Any typos should be emailed to a a a@princeton.edu. In this lecture, we give a brief introduction to robust optimization

More information

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs Raphael Louca Eilyan Bitar Abstract Robust semidefinite programs are NP-hard in general In contrast, robust linear programs admit

More information

Robust Adaptive Routing Under Uncertainty *

Robust Adaptive Routing Under Uncertainty * Robust Adaptive Routing Under Uncertainty * Arthur Flajolet Operations Research Center Massachusetts Institute of Technology Cambridge, MA 02139 Ecole Polytechnique Palaiseau, France 91120 Email: flajolet@mit.edu

More information

Ambiguous Joint Chance Constraints under Mean and Dispersion Information

Ambiguous Joint Chance Constraints under Mean and Dispersion Information Ambiguous Joint Chance Constraints under Mean and Dispersion Information Grani A. Hanasusanto 1, Vladimir Roitch 2, Daniel Kuhn 3, and Wolfram Wiesemann 4 1 Graduate Program in Operations Research and

More information

Robust combinatorial optimization with cost uncertainty

Robust combinatorial optimization with cost uncertainty Robust combinatorial optimization with cost uncertainty Michael Poss UMR CNRS 6599 Heudiasyc, Université de Technologie de Compiègne, Centre de Recherches de Royallieu, 60200 Compiègne, France. Abstract

More information

Approximability of the Two-Stage Stochastic Knapsack problem with discretely distributed weights

Approximability of the Two-Stage Stochastic Knapsack problem with discretely distributed weights Approximability of the Two-Stage Stochastic Knapsack problem with discretely distributed weights Stefanie Kosuch Institutionen för datavetenskap (IDA),Linköpings Universitet, SE-581 83 Linköping, Sweden

More information

Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs

Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs Manish Bansal Grado Department of Industrial and Systems Engineering, Virginia Tech Email: bansal@vt.edu Kuo-Ling Huang

More information

On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints

On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints Pranjal Awasthi Vineet Goyal Brian Y. Lu July 15, 2015 Abstract In this paper, we study the performance of static

More information

A Linear Decision-Based Approximation Approach to Stochastic Programming

A Linear Decision-Based Approximation Approach to Stochastic Programming OPERATIONS RESEARCH Vol. 56, No. 2, March April 2008, pp. 344 357 issn 0030-364X eissn 526-5463 08 5602 0344 informs doi 0.287/opre.070.0457 2008 INFORMS A Linear Decision-Based Approximation Approach

More information

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems

Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Yongjia Song James R. Luedtke August 9, 2012 Abstract We study solution approaches for the design of reliably

More information

Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets

Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets manuscript Distributionally Robust Optimization with Infinitely Constrained Ambiguity Sets Zhi Chen Imperial College Business School, Imperial College London zhi.chen@imperial.ac.uk Melvyn Sim Department

More information

Minmax regret 1-center problem on a network with a discrete set of scenarios

Minmax regret 1-center problem on a network with a discrete set of scenarios Minmax regret 1-center problem on a network with a discrete set of scenarios Mohamed Ali ALOULOU, Rim KALAÏ, Daniel VANDERPOOTEN Abstract We consider the minmax regret 1-center problem on a general network

More information

arxiv: v1 [math.oc] 3 Jun 2016

arxiv: v1 [math.oc] 3 Jun 2016 Min-Max Regret Problems with Ellipsoidal Uncertainty Sets André Chassein 1 and Marc Goerigk 2 arxiv:1606.01180v1 [math.oc] 3 Jun 2016 1 Fachbereich Mathematik, Technische Universität Kaiserslautern, Germany

More information

The Distributionally Robust Chance Constrained Vehicle Routing Problem

The Distributionally Robust Chance Constrained Vehicle Routing Problem The Distributionally Robust Chance Constrained Vehicle Routing Problem Shubhechyya Ghosal and Wolfram Wiesemann Imperial College Business School, Imperial College London, United Kingdom August 3, 208 Abstract

More information

Goal Driven Optimization

Goal Driven Optimization Goal Driven Optimization Wenqing Chen Melvyn Sim May 2006 Abstract Achieving a targeted objective, goal or aspiration level are relevant aspects of decision making under uncertainties. We develop a goal

More information

Inverse Stochastic Dominance Constraints Duality and Methods

Inverse Stochastic Dominance Constraints Duality and Methods Duality and Methods Darinka Dentcheva 1 Andrzej Ruszczyński 2 1 Stevens Institute of Technology Hoboken, New Jersey, USA 2 Rutgers University Piscataway, New Jersey, USA Research supported by NSF awards

More information

Distributionally Robust Stochastic Optimization with Wasserstein Distance

Distributionally Robust Stochastic Optimization with Wasserstein Distance Distributionally Robust Stochastic Optimization with Wasserstein Distance Rui Gao DOS Seminar, Oct 2016 Joint work with Anton Kleywegt School of Industrial and Systems Engineering Georgia Tech What is

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Strong Formulations of Robust Mixed 0 1 Programming

Strong Formulations of Robust Mixed 0 1 Programming Math. Program., Ser. B 108, 235 250 (2006) Digital Object Identifier (DOI) 10.1007/s10107-006-0709-5 Alper Atamtürk Strong Formulations of Robust Mixed 0 1 Programming Received: January 27, 2004 / Accepted:

More information

Robust Growth-Optimal Portfolios

Robust Growth-Optimal Portfolios Robust Growth-Optimal Portfolios! Daniel Kuhn! Chair of Risk Analytics and Optimization École Polytechnique Fédérale de Lausanne rao.epfl.ch 4 Technology Stocks I 4 technology companies: Intel, Cisco,

More information

Robust Optimization of Sums of Piecewise Linear Functions with Application to Inventory Problems

Robust Optimization of Sums of Piecewise Linear Functions with Application to Inventory Problems Robust Optimization of Sums of Piecewise Linear Functions with Application to Inventory Problems Amir Ardestani-Jaafari, Erick Delage Department of Management Sciences, HEC Montréal, Montréal, Quebec,

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 18

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 18 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 18 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 31, 2012 Andre Tkacenko

More information

arxiv: v3 [math.oc] 25 Apr 2018

arxiv: v3 [math.oc] 25 Apr 2018 Problem-driven scenario generation: an analytical approach for stochastic programs with tail risk measure Jamie Fairbrother *, Amanda Turner *, and Stein W. Wallace ** * STOR-i Centre for Doctoral Training,

More information

Data-Driven Distributionally Robust Chance-Constrained Optimization with Wasserstein Metric

Data-Driven Distributionally Robust Chance-Constrained Optimization with Wasserstein Metric Data-Driven Distributionally Robust Chance-Constrained Optimization with asserstein Metric Ran Ji Department of System Engineering and Operations Research, George Mason University, rji2@gmu.edu; Miguel

More information

Approximation complexity of min-max (regret) versions of shortest path, spanning tree, and knapsack

Approximation complexity of min-max (regret) versions of shortest path, spanning tree, and knapsack Approximation complexity of min-max (regret) versions of shortest path, spanning tree, and knapsack Hassene Aissi, Cristina Bazgan, and Daniel Vanderpooten LAMSADE, Université Paris-Dauphine, France {aissi,bazgan,vdp}@lamsade.dauphine.fr

More information

Robust portfolio selection under norm uncertainty

Robust portfolio selection under norm uncertainty Wang and Cheng Journal of Inequalities and Applications (2016) 2016:164 DOI 10.1186/s13660-016-1102-4 R E S E A R C H Open Access Robust portfolio selection under norm uncertainty Lei Wang 1 and Xi Cheng

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Robust Portfolio Selection Based on a Multi-stage Scenario Tree

Robust Portfolio Selection Based on a Multi-stage Scenario Tree Robust Portfolio Selection Based on a Multi-stage Scenario Tree Ruijun Shen Shuzhong Zhang February 2006; revised November 2006 Abstract The aim of this paper is to apply the concept of robust optimization

More information

Subgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus

Subgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus 1/41 Subgradient Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes definition subgradient calculus duality and optimality conditions directional derivative Basic inequality

More information

Ambiguous Chance Constrained Programs: Algorithms and Applications

Ambiguous Chance Constrained Programs: Algorithms and Applications Ambiguous Chance Constrained Programs: Algorithms and Applications Emre Erdoğan Adviser: Associate Professor Garud Iyengar Partial Sponsor: TÜBİTAK NATO-A1 Submitted in partial fulfilment of the Requirements

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

Algorithms for a Special Class of State-Dependent Shortest Path Problems with an Application to the Train Routing Problem

Algorithms for a Special Class of State-Dependent Shortest Path Problems with an Application to the Train Routing Problem Algorithms fo Special Class of State-Dependent Shortest Path Problems with an Application to the Train Routing Problem Lunce Fu and Maged Dessouky Daniel J. Epstein Department of Industrial & Systems Engineering

More information

Stochastic Integer Programming An Algorithmic Perspective

Stochastic Integer Programming An Algorithmic Perspective Stochastic Integer Programming An Algorithmic Perspective sahmed@isye.gatech.edu www.isye.gatech.edu/~sahmed School of Industrial & Systems Engineering 2 Outline Two-stage SIP Formulation Challenges Simple

More information

Robust multi-sensor scheduling for multi-site surveillance

Robust multi-sensor scheduling for multi-site surveillance DOI 10.1007/s10878-009-9271-4 Robust multi-sensor scheduling for multi-site surveillance Nikita Boyko Timofey Turko Vladimir Boginski David E. Jeffcoat Stanislav Uryasev Grigoriy Zrazhevsky Panos M. Pardalos

More information

Selected Topics in Robust Convex Optimization

Selected Topics in Robust Convex Optimization Selected Topics in Robust Convex Optimization Arkadi Nemirovski School of Industrial and Systems Engineering Georgia Institute of Technology Optimization programs with uncertain data and their Robust Counterparts

More information

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem Dimitris J. Bertsimas Dan A. Iancu Pablo A. Parrilo Sloan School of Management and Operations Research Center,

More information

EE 227A: Convex Optimization and Applications April 24, 2008

EE 227A: Convex Optimization and Applications April 24, 2008 EE 227A: Convex Optimization and Applications April 24, 2008 Lecture 24: Robust Optimization: Chance Constraints Lecturer: Laurent El Ghaoui Reading assignment: Chapter 2 of the book on Robust Optimization

More information

Convex Functions. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Convex Functions. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Convex Functions Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Definition convex function Examples

More information

LIGHT ROBUSTNESS. Matteo Fischetti, Michele Monaci. DEI, University of Padova. 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007

LIGHT ROBUSTNESS. Matteo Fischetti, Michele Monaci. DEI, University of Padova. 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007 LIGHT ROBUSTNESS Matteo Fischetti, Michele Monaci DEI, University of Padova 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007 Work supported by the Future and Emerging Technologies

More information

Mixed 0-1 linear programs under objective uncertainty: A completely positive representation

Mixed 0-1 linear programs under objective uncertainty: A completely positive representation Singapore Management University Institutional Knowledge at Singapore Management University Research Collection Lee Kong Chian School Of Business Lee Kong Chian School of Business 6-2011 Mixed 0-1 linear

More information

Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty

Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty V. Jeyakumar and G. Y. Li March 1, 2012 Abstract This paper develops the deterministic approach to duality for semi-definite

More information

Optimization Tools in an Uncertain Environment

Optimization Tools in an Uncertain Environment Optimization Tools in an Uncertain Environment Michael C. Ferris University of Wisconsin, Madison Uncertainty Workshop, Chicago: July 21, 2008 Michael Ferris (University of Wisconsin) Stochastic optimization

More information

Robust Optimization for Empty Repositioning Problems

Robust Optimization for Empty Repositioning Problems Robust Optimization for Empty Repositioning Problems Alan L. Erera, Juan C. Morales and Martin Savelsbergh The Logistics Institute School of Industrial and Systems Engineering Georgia Institute of Technology

More information

Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1)

Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1) Chapter 2. Binary and M-ary Hypothesis Testing 2.1 Introduction (Levy 2.1) Detection problems can usually be casted as binary or M-ary hypothesis testing problems. Applications: This chapter: Simple hypothesis

More information

Notes 6 : First and second moment methods

Notes 6 : First and second moment methods Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative

More information

Robust and Stochastic Optimization Notes. Kevin Kircher, Cornell MAE

Robust and Stochastic Optimization Notes. Kevin Kircher, Cornell MAE Robust and Stochastic Optimization Notes Kevin Kircher, Cornell MAE These are partial notes from ECE 6990, Robust and Stochastic Optimization, as taught by Prof. Eilyan Bitar at Cornell University in the

More information

On Distributionally Robust Chance Constrained Program with Wasserstein Distance

On Distributionally Robust Chance Constrained Program with Wasserstein Distance On Distributionally Robust Chance Constrained Program with Wasserstein Distance Weijun Xie 1 1 Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA 4061 June 15, 018 Abstract

More information

Robust Farkas Lemma for Uncertain Linear Systems with Applications

Robust Farkas Lemma for Uncertain Linear Systems with Applications Robust Farkas Lemma for Uncertain Linear Systems with Applications V. Jeyakumar and G. Li Revised Version: July 8, 2010 Abstract We present a robust Farkas lemma, which provides a new generalization of

More information

Estimation and Optimization: Gaps and Bridges. MURI Meeting June 20, Laurent El Ghaoui. UC Berkeley EECS

Estimation and Optimization: Gaps and Bridges. MURI Meeting June 20, Laurent El Ghaoui. UC Berkeley EECS MURI Meeting June 20, 2001 Estimation and Optimization: Gaps and Bridges Laurent El Ghaoui EECS UC Berkeley 1 goals currently, estimation (of model parameters) and optimization (of decision variables)

More information

On distributional robust probability functions and their computations

On distributional robust probability functions and their computations On distributional robust probability functions and their computations Man Hong WONG a,, Shuzhong ZHANG b a Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong,

More information

Stochastic Equilibrium Problems arising in the energy industry

Stochastic Equilibrium Problems arising in the energy industry Stochastic Equilibrium Problems arising in the energy industry Claudia Sagastizábal (visiting researcher IMPA) mailto:sagastiz@impa.br http://www.impa.br/~sagastiz ENEC workshop, IPAM, Los Angeles, January

More information

A Tighter Variant of Jensen s Lower Bound for Stochastic Programs and Separable Approximations to Recourse Functions

A Tighter Variant of Jensen s Lower Bound for Stochastic Programs and Separable Approximations to Recourse Functions A Tighter Variant of Jensen s Lower Bound for Stochastic Programs and Separable Approximations to Recourse Functions Huseyin Topaloglu School of Operations Research and Information Engineering, Cornell

More information

CVaR and Examples of Deviation Risk Measures

CVaR and Examples of Deviation Risk Measures CVaR and Examples of Deviation Risk Measures Jakub Černý Department of Probability and Mathematical Statistics Stochastic Modelling in Economics and Finance November 10, 2014 1 / 25 Contents CVaR - Dual

More information