Robust Solutions to Multi-Objective Linear Programs with Uncertain Data


M.A. Goberna, V. Jeyakumar, G. Li, J. Vicente-Pérez

Revised Version: October 1, 2014

Abstract. In this paper we examine multi-objective linear programming problems in the face of data uncertainty in both the objective function and the constraints. First, we derive a formula for the radius of robust feasibility, which guarantees constraint feasibility for all possible scenarios within a specified uncertainty set under affine data parametrization. We then present numerically tractable optimality conditions for minmax robust weakly efficient solutions, i.e., the weakly efficient solutions of the robust counterpart. We also consider highly robust weakly efficient solutions, i.e., robust feasible solutions which are weakly efficient for any possible instance of the objective matrix within a specified uncertainty set, and we provide lower bounds for the radius of highly robust efficiency guaranteeing the existence of this type of solution under affine and rank-1 objective data uncertainty. Finally, we provide numerically tractable optimality conditions for highly robust weakly efficient solutions.

Keywords. Robust optimization. Multi-objective linear programming. Data uncertainty. Robust feasibility. Robust weakly efficient solutions.

1 Introduction

Consider the deterministic multi-objective linear programming problem

(P)  V-$\min \{(c_1^\top x, \ldots, c_m^\top x) : a_j^\top x \ge b_j, \ j \in J\}$,

where V-$\min$ stands for vector minimization, $c_i \in \mathbb{R}^n$ (interpreted as a column vector) for $i \in I := \{1, \ldots, m\}$, the symbol $\top$ denotes transposition, and $x \in \mathbb{R}^n$ is the decision variable,

This research was partially supported by the Australian Research Council, Discovery Project DP, the MICINN of Spain, Grant MTM C03-02, and Generalitat Valenciana, Grant ACOMP/2013/062. Corresponding author: M.A. Goberna, Dept. of Statistics and Operations Research, Alicante University, Alicante, Spain. V. Jeyakumar, G. Li, J. Vicente-Pérez: Dept. of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail addresses: mgoberna@ua.es (M.A. Goberna), v.jeyakumar@unsw.edu.au (V. Jeyakumar), g.li@unsw.edu.au (G. Li), jose.vicente@ua.es (J. Vicente-Pérez).

and $(a_j, b_j) \in \mathbb{R}^n \times \mathbb{R}$, for $j \in J := \{1, \ldots, p\}$, are the constraint input data of the problem. The real $m \times n$ matrix $C$ whose rows are the vectors $c_i$, $i \in I$, is called the objective matrix. The problem (P) has been extensively studied in the literature (see, e.g., the overviews [7] and [15]), where perfect information is often assumed (that is, accurate values for the input quantities or parameters), despite the reality that such precise knowledge is rarely available in practice for real-world optimization problems. The data of real-world optimization problems are often uncertain (that is, not known exactly at the time of the decision) due to estimation errors, prediction errors, or lack of information. Scalar uncertain optimization problems have traditionally been treated via sensitivity analysis, which estimates the impact of small data perturbations on the optimal value, while robust optimization, which provides a deterministic framework for uncertain problems, has recently emerged as a powerful alternative approach (see, for instance, [2, 4, 17, 22, 27]). Particular types of uncertain multi-objective linear programming problems have already been studied: e.g., [38] considers changes in one objective function via sensitivity analysis, while [36] considers changes in the whole objective function $x \mapsto Cx$, and [21] changes in the constraints, using different robustness approaches. The purpose of the present work is to study multi-objective linear programming problems in the face of data uncertainty in both the objective function and the constraints from a robustness perspective.
Following the robust optimization framework, the multi-objective problem (P) in the face of data uncertainty in both the objective matrix and the constraint data can be captured by a parameterized multi-objective linear programming problem of the form

(P)  V-$\min \{(c_1^\top x, \ldots, c_m^\top x) : a_j^\top x \ge b_j, \ j \in J\}$,

where the input data $c_i$, $i \in I$, and $(a_j, b_j)$, $j \in J$, are uncertain vectors, $C := (c_1, \ldots, c_m) \in \mathcal{U} \subseteq \mathbb{R}^{n \times m}$ and $(a_j, b_j) \in \mathcal{V}_j \subseteq \mathbb{R}^{n+1}$, $j \in J$, and the sets $\mathcal{U}$ and $\mathcal{V}_j$, $j \in J$, are specified uncertainty sets that are bounded, but often infinite. By enforcing the constraints for all possible realizations within $\mathcal{V}_j$, $j \in J$, the uncertain problem becomes the uncertain multi-objective linear semi-infinite programming problem

V-$\min \{(c_1^\top x, \ldots, c_m^\top x) : a_j^\top x \ge b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j, \ j \in J\}$,  (1)

where $(c_1, \ldots, c_m) \in \mathcal{U}$ and whose feasible set,

$X := \{x \in \mathbb{R}^n : a_j^\top x \ge b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j, \ j \in J\}$,  (2)

is called the robust feasible set of (P); any $x \in X$ is called a robust feasible solution. Following the recent work on robust linear programming (see [2]), some of the key questions of multi-objective linear programming under data uncertainty include:

I. (Guaranteeing robust feasible solutions) How can non-emptiness of the robust feasible set $X$ be guaranteed for specified uncertainty sets $\mathcal{V}_j$, $j \in J$?

II. (Guaranteeing and identifying robust efficient solutions) Which robust feasible solutions of (P) are robust efficient solutions (see the paragraph below) that are immune to objective data uncertainty, and what mathematical characterizations identify them? How can the existence of robust efficient solutions be guaranteed?

III. (Numerical tractability of robust efficient solutions) For which specified classes of uncertainty sets $\mathcal{U}$ and $\mathcal{V}_j$, $j \in J$, can the robust efficient solution characterizations be checked numerically using existing multi-objective programming techniques?

In this paper, we provide some answers to the above questions for the uncertain multi-objective linear programming problem (P) by focusing on two choices of robust optimal solutions. The first is called a minmax robust efficient solution, or simply robust efficient solution, following the approach widely used in robust scalar optimization (see also [16] and [31] for recent developments), and corresponds to an efficient solution of a deterministic worst-case (minmax) multi-objective optimization problem. The second is called a highly robust efficient solution, as in [26, 30] (see also [38] and [36, Section 4]), and consists of the preservation of efficiency for all $(c_1, \ldots, c_m) \in \mathcal{U}$. The existence of this type of solution thus implies that the uncertainty set $\mathcal{U}$ is small in some sense (e.g., a Cartesian product of balls in $\mathbb{R}^n$, or a segment in $\mathbb{R}^{n \times m}$ emanating from some fixed data $(\overline{c}_1, \ldots, \overline{c}_m)$). To compensate for the smallness of the uncertainty set, we focus our analysis on the larger class of highly robust solutions: highly robust weakly efficient solutions. In contrast, [36, Section 4] considers highly robust efficient solutions instead of highly robust weakly efficient solutions. For the convenience of the reader, other notions of robust solutions are summarized in the appendix.
Our key contributions are outlined as follows:

(1) We first introduce the concept of radius of robust feasibility in Section 3, which guarantees non-emptiness of the robust feasible set $X$ of (P) under affinely parameterized data uncertainty. This concept is inspired by the notion of consistency radius used in linear semi-infinite programming to guarantee the feasibility of the nominal problem under perturbations preserving the number of constraints ([8], [9]). We derive a formula for the effective computation of the radius of robust feasibility that also applies to single-objective linear programming under the same type of uncertainty.

(2) We examine the robust weakly efficient solutions of an uncertain multi-objective linear programming problem in Section 4, and establish numerically tractable mathematical characterizations of robust weakly efficient solutions under various types of data uncertainty.

(3) We present, in Section 5, an explicit formula for the radius of highly robust efficiency, i.e., the greatest value of a certain parameter, associated with two families

of uncertainty sets for the objective data, such that the corresponding multi-objective linear programming problems have highly robust weakly efficient solutions. The mentioned families are formed by Cartesian products of balls in $\mathbb{R}^n$ and by segments in $\mathbb{R}^{m \times n}$ in the direction of rank-1 matrices (the same type of uncertainty considered in [36, Section 4]). Recall that rank-1 matrices are the products of non-zero column vectors by non-zero row vectors (see [35] for other characterizations). These matrices are frequently used in computational algebra (as building blocks for more complex matrices), in conic optimization (as the rank-1 matrices generate the extreme rays of the semidefinite cone), and in statistics (as the singular value decomposition gives the best rank-1 approximation of a given matrix with respect to the Frobenius and spectral norms).

(4) We finally provide, in Section 6, numerically tractable mathematical characterizations of highly robust weakly efficient solutions under various types of data uncertainty.

2 Preliminaries

We begin this section by introducing the necessary notation and concepts of multi-objective linear programming. We denote by $0_n$ and $\|\cdot\|$ the vector of zeros and the Euclidean norm in $\mathbb{R}^n$, respectively. The closed unit ball and the distance associated with this norm are denoted by $\mathbb{B}_n$ and $d$, respectively. Given $Z \subseteq \mathbb{R}^n$, $\operatorname{int} Z$, $\operatorname{cl} Z$, $\operatorname{bd} Z$, and $\operatorname{conv} Z$ denote the interior, the closure, the boundary and the convex hull of $Z$, respectively, whereas $\operatorname{cone} Z := \mathbb{R}_+ \operatorname{conv} Z$ denotes the convex conical hull of $Z \cup \{0_n\}$. For $x, y \in \mathbb{R}^m$, we write $x \le y$ ($x < y$) when $x_i \le y_i$ ($x_i < y_i$, respectively) for all $i \in I$. The simplex $\Delta_m$ in the space of criteria $\mathbb{R}^m$ is defined as $\Delta_m := \{\lambda \in \mathbb{R}^m_+ : \sum_{i=1}^m \lambda_i = 1\}$. The following known dual characterizations of solutions of semi-infinite linear inequality systems play a key role in the next section in developing the radius of robust feasibility formula.

Lemma 1 ([23, Corollaries and 3.1.2]) Let $T$ be an arbitrary index set.
Then, $\{x \in \mathbb{R}^n : u_t^\top x \ge v_t, \ t \in T\} \ne \emptyset$ if and only if $(0_n, 1) \notin \operatorname{cl} \operatorname{cone} \{(u_t, v_t) : t \in T\}$. In that case, $u^\top x \ge v$ holds for every $x \in \mathbb{R}^n$ such that $u_t^\top x \ge v_t$, $\forall t \in T$, if and only if

$(u, v) \in \operatorname{cl} \{\operatorname{cone} \{(u_t, v_t) : t \in T\} + \mathbb{R}_+ (0_n, -1)\}$.  (3)

We now apply Lemma 1 to the robust feasible set

$X := \{x \in \mathbb{R}^n : a_j^\top x \ge b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j, \ j \in J\}$.

Proposition 2 (Feasibility and polyhedrality of $X$) Let $X$ be as in (2). Then the following statements hold:

(i) $X \ne \emptyset$ if and only if $(0_n, 1) \notin \operatorname{cl} \operatorname{cone} \{\bigcup_{j \in J} \mathcal{V}_j\}$.

(ii) If $X \ne \emptyset$ and the uncertainty sets $\mathcal{V}_j$ are all polyhedral sets, then $X$ is a polyhedral set too.

Proof. (i) This is a straightforward consequence of Lemma 1. (ii) Assume that $X \ne \emptyset$. If the uncertainty sets are polyhedral, we can write $\mathcal{V}_j = \operatorname{conv} E_j + \operatorname{cone} D_j$, with $E_j$ and $D_j$ finite sets, for each $j \in J$. Since the cone in (3) is

$\operatorname{cl} \{\operatorname{cone} \{\bigcup_{j \in J} \mathcal{V}_j\} + \mathbb{R}_+ (0_n, -1)\} = \operatorname{cone} \{\bigcup_{j \in J} (E_j \cup D_j)\} + \mathbb{R}_+ (0_n, -1)$

and, by the separation theorem, two non-empty closed convex sets coincide if and only if they have the same linear consequences, we have

$X = \{x \in \mathbb{R}^n : a^\top x \ge b, \ (a, b) \in \bigcup_{j \in J} (E_j \cup D_j)\}$.

Hence, the conclusion follows.

Concerning Proposition 2, if the uncertainty set $\mathcal{V}_j$ contains no line, then $E_j$, defined as in the proof of (ii) of Proposition 2, is the set of extreme points of $\mathcal{V}_j$. In particular, if $\mathcal{V}_j$ is a compact convex set for each $j \in J$ and the strict robust feasibility condition

$\{x \in \mathbb{R}^n : a_j^\top x > b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j, \ j \in J\} \ne \emptyset$  (4)

holds, then $\operatorname{cone}\{\bigcup_{j \in J} \mathcal{V}_j\}$ is closed [23, Theorem 5.3 (ii)], and this in turn implies [23, p. 81] that the so-called characteristic cone

$K(\mathcal{V}) := \operatorname{cone}\{\bigcup_{j \in J} \mathcal{V}_j\} + \mathbb{R}_+ \{(0_n, -1)\}$

(to be used later) is closed too. Moreover, according to [23, Theorem 9.3], $X$ is a compact set if and only if $(0_n, 1) \in \operatorname{int} K(\mathcal{V})$. Particular cases of Proposition 2 (ii) can be found in the literature (see [2] and references therein).

3 Radius of robust feasibility

In this section, we first discuss the feasibility of our uncertain multi-objective model under affine constraint data perturbations. In other words, for any given vectors $c_i \in \mathbb{R}^n$, $i \in I$, we study the feasibility of the problem

$(P_\alpha)$  V-$\min (c_1^\top x, \ldots, c_m^\top x)$  s.t. $a_j^\top x \ge b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j^\alpha, \ j \in J$,

for $\alpha \ge 0$, where the uncertain set-valued mapping $\mathcal{V}_j^\alpha$, for $j \in J := \{1, \ldots, p\}$, takes the form

$\mathcal{V}_j^\alpha := (a_j, b_j) + \alpha \mathbb{B}_{n+1}$,  (5)

and the nominal linear system $\{a_j^\top x \ge b_j, \ j \in J\}$ is assumed to be feasible.
The radius of robust feasibility, $\rho(\mathcal{V})$, associated with $\mathcal{V} := \prod_{j=1}^{p} \mathcal{V}_j^\alpha$ with $\mathcal{V}_j^\alpha$ as in (5), is defined as

$\rho(\mathcal{V}) := \sup \{\alpha \in \mathbb{R}_+ : (P_\alpha) \text{ is feasible}\}$.  (6)

By Lemma 1, we first observe that the radius of robust feasibility $\rho(\mathcal{V})$ is a non-negative real number since, for any given $j \in J$, $(0_n, 1) \in (a_j, b_j) + \alpha \mathbb{B}_{n+1}$ for $\alpha$ positive and large enough, in which case the corresponding problem $(P_\alpha)$ is not feasible.

The next result provides a formula for the radius of robust feasibility which involves the so-called hypographical set ([8]) of the system $\{a_j^\top x \ge b_j, \ j \in J\}$, defined as

$H(a, b) := \operatorname{conv} \{(a_j, b_j) : j \in J\} + \mathbb{R}_+ \{(0_n, -1)\}$,  (7)

where $a := (a_1, \ldots, a_p) \in (\mathbb{R}^n)^p$ and $b := (b_1, \ldots, b_p) \in \mathbb{R}^p$. We observe that $H(a, b)$ is the sum of the polytope $\operatorname{conv} \{(a_j, b_j) : j \in J\}$ and the closed half-line $\mathbb{R}_+ \{(0_n, -1)\}$, so it is a polyhedral convex set.

Lemma 3 Let $\alpha \ge 0$ and $(a_j, b_j) \in \mathbb{R}^n \times \mathbb{R}$, $j \in J$. Suppose that

$(0_n, 1) \in \operatorname{cl} \operatorname{cone} \{\{(a_j, b_j) : j \in J\} + \alpha \mathbb{B}_{n+1}\}$.

Then, for all $\epsilon > 0$, we have

$(0_n, 1) \in \operatorname{cone} \{\{(a_j, b_j) : j \in J\} + (\alpha + \epsilon) \mathbb{B}_{n+1}\}$.

Proof. Let $\epsilon > 0$. To see the conclusion, we assume by contradiction that

$(0_n, 1) \notin \operatorname{cone} \{\{(a_j, b_j) : j \in J\} + (\alpha + \epsilon) \mathbb{B}_{n+1}\}$.

Then, the separation theorem implies that there exists $(\xi, r) \in \mathbb{R}^{n+1} \setminus \{0_{n+1}\}$ such that, for all $(y, s) \in \operatorname{cone} \{\{(a_j, b_j) : j \in J\} + (\alpha + \epsilon) \mathbb{B}_{n+1}\}$, one has

$r = \langle (\xi, r), (0_n, 1) \rangle \le 0 \le \langle (\xi, r), (y, s) \rangle$,  (8)

where $\langle \cdot, \cdot \rangle$ denotes the usual inner product, i.e., $\langle (\xi, r), (y, s) \rangle = \xi^\top y + rs$. Recall that $(0_n, 1) \in \operatorname{cl} \operatorname{cone} \{\{(a_j, b_j) : j \in J\} + \alpha \mathbb{B}_{n+1}\}$. So, there exist sequences $\{(y_k, s_k)\}_{k \in \mathbb{N}} \subseteq \mathbb{R}^n \times \mathbb{R}$, $\{\lambda_k^j\}_{k \in \mathbb{N}} \subseteq \mathbb{R}_+$, and $\{(z_k^j, t_k^j)\}_{k \in \mathbb{N}} \subseteq \mathbb{B}_{n+1}$, $j \in J$, such that $(y_k, s_k) \to (0_n, 1)$ and

$(y_k, s_k) = \sum_{j=1}^{p} \lambda_k^j \left[ (a_j, b_j) + \alpha (z_k^j, t_k^j) \right]$.

If $\{\sum_{j=1}^{p} \lambda_k^j\}_{k \in \mathbb{N}}$ is a bounded sequence, by passing to subsequences if necessary, we have

$(0_n, 1) \in \operatorname{cone} \{\{(a_j, b_j) : j \in J\} + \alpha \mathbb{B}_{n+1}\}$.

Thus, the claim is true whenever $\{\sum_{j=1}^{p} \lambda_k^j\}_{k \in \mathbb{N}}$ is a bounded sequence. So, we may assume that $\sum_{j=1}^{p} \lambda_k^j \to +\infty$ as $k \to \infty$. Let $(y, s) \in \mathbb{B}_{n+1}$ be such that $\langle (y, s), (\xi, r) \rangle = -\|(\xi, r)\|$. Note that

$\sum_{j=1}^{p} \lambda_k^j \left[ (a_j, b_j) + \alpha (z_k^j, t_k^j) + \epsilon (y, s) \right] \in \operatorname{cone} \{\{(a_j, b_j) : j \in J\} + (\alpha + \epsilon) \mathbb{B}_{n+1}\}$.

Then, (8) implies that

$0 \le \left\langle (\xi, r), \sum_{j=1}^{p} \lambda_k^j \left[ (a_j, b_j) + \alpha (z_k^j, t_k^j) \right] \right\rangle - \epsilon \Big( \sum_{j=1}^{p} \lambda_k^j \Big) \|(\xi, r)\| = \langle (\xi, r), (y_k, s_k) \rangle - \epsilon \Big( \sum_{j=1}^{p} \lambda_k^j \Big) \|(\xi, r)\|$.

Passing to the limit, we arrive at a contradiction, since $(\xi, r) \ne 0_{n+1}$, $\epsilon > 0$, $\sum_{j=1}^{p} \lambda_k^j \to +\infty$ and $(y_k, s_k) \to (0_n, 1)$.

We now provide our promised formula for the radius of robust feasibility. Observe that, since $0_{n+1} \notin H(a, b)$ by Proposition 2, $d(0_{n+1}, H(a, b))$ can be computed by minimizing $\|\cdot\|^2$ on $H(a, b)$ (i.e., by solving a convex quadratic program).

Theorem 4 (Radius of robust feasibility) For $(P_\alpha)$, let $(a_j, b_j) \in \mathbb{R}^n \times \mathbb{R}$, $j \in J$, with $\{x \in \mathbb{R}^n : a_j^\top x \ge b_j, \ j \in J\} \ne \emptyset$. Let $\mathcal{V}_j^\alpha := (a_j, b_j) + \alpha \mathbb{B}_{n+1}$, $j \in J$, and $\mathcal{V} := \prod_{j=1}^{p} \mathcal{V}_j^\alpha$. Let $\rho(\mathcal{V})$ be the radius of robust feasibility as given in (6) and let $H(a, b)$ be the hypographical set as given in (7). Then,

$\rho(\mathcal{V}) = d(0_{n+1}, H(a, b))$.

Proof. If a given $(v, w) \in (\mathbb{R}^n)^p \times \mathbb{R}^p$ is interpreted as a perturbation of the nominal parameter $(a, b) \in (\mathbb{R}^n)^p \times \mathbb{R}^p$, we can measure the size of this perturbation as the supremum of the distances between the vectors of coefficients corresponding to the same index. This can be done by endowing the parameter space $(\mathbb{R}^n)^p \times \mathbb{R}^p$ with the metric $\widetilde{d}$ defined by

$\widetilde{d}((v, w), (p, q)) := \sup_{j = 1, \ldots, p} \|(v_j, w_j) - (p_j, q_j)\|$, for $(v, w), (p, q) \in (\mathbb{R}^n)^p \times \mathbb{R}^p$.

Let $a \in (\mathbb{R}^n)^p$ and $b \in \mathbb{R}^p$ be as in (7). Denote the set consisting of all inconsistent parameters by $\Theta_i$, that is,

$\Theta_i := \{(v, w) \in (\mathbb{R}^n)^p \times \mathbb{R}^p : \{x \in \mathbb{R}^n : v_j^\top x \ge w_j, \ j = 1, \ldots, p\} = \emptyset\}$.

We now show that

$\widetilde{d}((a, b), \Theta_i) = d(0_{n+1}, H(a, b))$.  (9)

By Lemma 1, $d(0_{n+1}, H(a, b)) > 0$. Let $(\overline{a}, \overline{b}) \in H(a, b)$ be such that $\|(\overline{a}, \overline{b})\| = d(0_{n+1}, H(a, b))$. We associate with $(\overline{a}, \overline{b}) \in \mathbb{R}^{n+1}$ the linear system formed by the inequality $\overline{a}^\top x \ge \overline{b}$ repeated $p$ times, with corresponding parameter $(\overline{a}, \overline{b}) \in (\mathbb{R}^n)^p \times \mathbb{R}^p$ (the context determines, in each case, the interpretation of $(\overline{a}, \overline{b})$ as either a vector or a parameter). We have $0_{n+1} \in H_1$, where

$H_1 := H(a, b) - (\overline{a}, \overline{b}) = \operatorname{conv} \{(a_j - \overline{a}, b_j - \overline{b}) : j = 1, \ldots, p\} + \mathbb{R}_+ \{(0_n, -1)\}$.

So, there exist $\gamma_j \ge 0$ with $\sum_{j=1}^{p} \gamma_j = 1$ and $\mu \ge 0$ such that

$0_{n+1} = \sum_{j=1}^{p} \gamma_j (a_j - \overline{a}, b_j - \overline{b}) + \mu (0_n, -1)$.

This shows that, for each $k \in \mathbb{N}$,

$(0_n, 1) = \sum_{j=1}^{p} \frac{\gamma_j}{\mu + \frac{1}{k}} \left( a_j - \overline{a}, \ b_j - \overline{b} + \tfrac{1}{k} \right)$.

So, $\{x : (a_j - \overline{a})^\top x \ge b_j - \overline{b} + \frac{1}{k}, \ j = 1, \ldots, p\} = \emptyset$. Thus, $(a - \overline{a}, b - \overline{b} + \frac{1}{k}) \in \Theta_i$, and so, $(a - \overline{a}, b - \overline{b}) \in \operatorname{cl} \Theta_i$. It follows that

$\widetilde{d}((a, b), \Theta_i) = \widetilde{d}((a, b), \operatorname{cl} \Theta_i) \le \|(\overline{a}, \overline{b})\| = d(0_{n+1}, H(a, b))$.

To see (9), we suppose on the contrary that $\widetilde{d}((a, b), \Theta_i) < d(0_{n+1}, H(a, b))$. Then, there exist $\varepsilon_0 > 0$, with $\varepsilon_0 < \|(\overline{a}, \overline{b})\|$, and $(\hat{a}, \hat{b}) \in \operatorname{bd} \Theta_i$ such that $\widetilde{d}((a, b), (\hat{a}, \hat{b})) = \widetilde{d}((a, b), \Theta_i) < \|(\overline{a}, \overline{b})\| - \varepsilon_0$. Then, one can find $\{(\hat{a}^k, \hat{b}^k)\}_{k \in \mathbb{N}} \subseteq \Theta_i$ such that $(\hat{a}^k, \hat{b}^k) \to (\hat{a}, \hat{b})$. So, Lemma 1 gives us that

$(0_n, 1) \in \operatorname{cl} \operatorname{cone}\{(\hat{a}^k_j, \hat{b}^k_j) : j = 1, \ldots, p\} = \operatorname{cone}\{(\hat{a}^k_j, \hat{b}^k_j) : j = 1, \ldots, p\}$.

Thus, there exist $\lambda^k_j \ge 0$ such that $(0_n, 1) = \sum_{j=1}^{p} \lambda^k_j (\hat{a}^k_j, \hat{b}^k_j)$. Note that $\sum_{j=1}^{p} \lambda^k_j > 0$, and so,

$0_{n+1} = \sum_{j=1}^{p} \frac{\lambda^k_j}{\sum_{j=1}^{p} \lambda^k_j} (\hat{a}^k_j, \hat{b}^k_j) + \frac{1}{\sum_{j=1}^{p} \lambda^k_j} (0_n, -1)$.

Then, as $k \to \infty$,

$\left\| \sum_{j=1}^{p} \frac{\lambda^k_j}{\sum_{j=1}^{p} \lambda^k_j} (\hat{a}_j - \hat{a}^k_j, \hat{b}_j - \hat{b}^k_j) \right\| \to 0$.

So, $0_{n+1} \in \operatorname{cl} H(\hat{a}, \hat{b}) = H(\hat{a}, \hat{b})$. It then follows that there exist $\gamma_j \ge 0$ with $\sum_{j=1}^{p} \gamma_j = 1$ and $\mu \ge 0$ such that

$0_{n+1} = \sum_{j=1}^{p} \gamma_j (\hat{a}_j, \hat{b}_j) + \mu (0_n, -1)$.

Thus, we have

$\left\| \sum_{j=1}^{p} \gamma_j (a_j, b_j) + \mu (0_n, -1) \right\| = \left\| \sum_{j=1}^{p} \gamma_j (a_j, b_j) + \mu (0_n, -1) - \Big[ \sum_{j=1}^{p} \gamma_j (\hat{a}_j, \hat{b}_j) + \mu (0_n, -1) \Big] \right\| = \left\| \sum_{j=1}^{p} \gamma_j \big[ (a_j, b_j) - (\hat{a}_j, \hat{b}_j) \big] \right\| \le \widetilde{d}((a, b), (\hat{a}, \hat{b})) < \|(\overline{a}, \overline{b})\| - \varepsilon_0$,

where the first inequality follows from the definition of $\widetilde{d}$ and $\gamma_j \ge 0$ with $\sum_{j=1}^{p} \gamma_j = 1$. Note that $\sum_{j=1}^{p} \gamma_j (a_j, b_j) + \mu (0_n, -1) \in H(a, b)$. We see that $H(a, b) \cap (\|(\overline{a}, \overline{b})\| - \varepsilon_0) \mathbb{B}_{n+1} \ne \emptyset$. This shows that $d(0_{n+1}, H(a, b)) \le \|(\overline{a}, \overline{b})\| - \varepsilon_0$, which contradicts the fact that $d(0_{n+1}, H(a, b)) = \|(\overline{a}, \overline{b})\|$. Therefore, (9) holds.

Let $\alpha \in \mathbb{R}_+$ be such that $(P_\alpha)$ is feasible. Then, $(v, w) \in \Theta_i$ implies that $\widetilde{d}((a, b), (v, w)) > \alpha$. Therefore, (9) gives us that $d(0_{n+1}, H(a, b)) = \widetilde{d}((a, b), \Theta_i) \ge \alpha$. Thus, $\rho(\mathcal{V}) \le d(0_{n+1}, H(a, b))$.

We now show that $\rho(\mathcal{V}) = d(0_{n+1}, H(a, b))$. To see this, we proceed by the method of contradiction and suppose that $\rho(\mathcal{V}) < d(0_{n+1}, H(a, b))$. Then, there exists $\epsilon > 0$ such

that $\rho(\mathcal{V}) + 2\epsilon < d(0_{n+1}, H(a, b))$. Let $\alpha_0 := \rho(\mathcal{V}) + \epsilon$. Then, by the definition of $\rho(\mathcal{V})$, $(P_{\alpha_0})$ is not feasible, that is,

$\{x \in \mathbb{R}^n : c^\top x \ge d, \ \forall (c, d) \in \bigcup_{j=1}^{p} [(a_j, b_j) + \alpha_0 \mathbb{B}_{n+1}]\} = \emptyset$.

Hence, it follows from Lemma 1 that

$(0_n, 1) \in \operatorname{cl} \operatorname{cone}\{\bigcup_{j=1}^{p} [(a_j, b_j) + \alpha_0 \mathbb{B}_{n+1}]\}$.

By applying Lemma 3, we can find $\lambda_j \ge 0$ and $(z_j, t_j) \in \mathbb{B}_{n+1}$, $j = 1, \ldots, p$, such that

$(0_n, 1) = \sum_{j=1}^{p} \lambda_j \left[ (a_j, b_j) + (\alpha_0 + \epsilon)(z_j, t_j) \right]$.

Let $(v_j, w_j) := (a_j, b_j) + (\alpha_0 + \epsilon)(z_j, t_j)$, $j = 1, \ldots, p$, $v := (v_1, \ldots, v_p) \in (\mathbb{R}^n)^p$ and $w := (w_1, \ldots, w_p) \in \mathbb{R}^p$. Then, $\widetilde{d}((a, b), (v, w)) \le \alpha_0 + \epsilon$ and

$(0_n, 1) = \sum_{j=1}^{p} \lambda_j (v_j, w_j) \in \operatorname{cone}\{(v_j, w_j) : j = 1, \ldots, p\}$.

So, Lemma 1 implies that $\{x \in \mathbb{R}^n : v_j^\top x \ge w_j, \ j = 1, \ldots, p\} = \emptyset$, and hence $(v, w) \in \Theta_i$. Thus,

$\widetilde{d}((a, b), \Theta_i) \le \widetilde{d}((a, b), (v, w)) \le \alpha_0 + \epsilon = \rho(\mathcal{V}) + 2\epsilon$.

Thus, from (9), we see that $d(0_{n+1}, H(a, b)) = \widetilde{d}((a, b), \Theta_i) \le \rho(\mathcal{V}) + 2\epsilon$. This contradicts the fact that $\rho(\mathcal{V}) + 2\epsilon < d(0_{n+1}, H(a, b))$. So, the conclusion follows.

Remark 5 We would like to note that we have given a self-contained proof of Theorem 4 by exploiting the finiteness of the linear inequality system. This proof is totally different from the one given in [21, Theorem 2.5], where massive use was made of the very technical stability machinery for linear semi-infinite systems developed in [8, 9].

In the following example we show how the radius of robust feasibility of $(P_\alpha)$ can be calculated using Theorem 4.

Example 6 (Calculating the radius of robust feasibility) Consider $(P_\alpha)$ with $n = 3$, $J = \{1, \ldots, 5\}$ and $\mathcal{V}_j^\alpha$ as in (5), where the nominal data $(a_j, b_j)$, $j \in J$, are five specified vectors of $\mathbb{R}^4$.  (10)

The minimum of $\|\cdot\|^2$ on $H(a, b)$, whose linear representation is obtained from (7) and (10) by Fourier–Motzkin elimination, is attained at a unique point of $\mathbb{R}^4$, and $\rho(\mathcal{V})$ equals the norm of that point.

4 Tractable optimality conditions for robust solutions

In this section we deal with an uncertain linear multi-objective programming problem

(P)  V-$\min (c_1^\top x, \ldots, c_m^\top x)$  s.t. $a_j^\top x \ge b_j, \ j \in J$,  (11)

where the constraint data $(a_j, b_j)$ are uncertain and belong to the bounded uncertainty set $\mathcal{V}_j$, for $j \in J$, and the objective data $c_i$ are uncertain too and belong to the bounded uncertainty set $\mathcal{U}_i$, for $i \in I$, so that $\mathcal{U} = \prod_{i=1}^{m} \mathcal{U}_i$. A (robust) decision maker would assume that, after selecting a decision $x$ which is feasible for every possible scenario in the constraint data uncertainty set, each objective function $x \mapsto c_i^\top x$ will attain its worst possible value (risk) $\sup_{c_i \in \mathcal{U}_i} c_i^\top x$. So, the robust counterpart of the above uncertain linear multi-objective programming problem is the convex linearly constrained programming problem

V-$\min f(x)$  s.t. $a_j^\top x \ge b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j, \ j \in J$,  (12)

where $f(x) = (\sigma_{\mathcal{U}_1}(x), \ldots, \sigma_{\mathcal{U}_m}(x))$ and $\sigma_{\mathcal{U}_i}(x) := \sup_{c_i \in \mathcal{U}_i} c_i^\top x$ is the support function of $\mathcal{U}_i$ for each $i \in I$. Since $\sigma_{\mathcal{U}_i} = \sigma_{\operatorname{cl} \operatorname{conv} \mathcal{U}_i}$, the objective function $f$ in (12) is the same for the uncertainty sets $\{\mathcal{U}_i, i \in I\}$ and $\{\operatorname{cl} \operatorname{conv} \mathcal{U}_i, i \in I\}$. Moreover, by Lemma 1 and the separation theorem, the feasible set of (12) is also the same for the uncertainty sets $\{\mathcal{V}_j, j \in J\}$ and $\{\operatorname{cl} \operatorname{conv} \mathcal{V}_j, j \in J\}$, as

$\operatorname{cl} \operatorname{cone}\{\bigcup_{j \in J} \operatorname{cl} \operatorname{conv} \mathcal{V}_j\} + \mathbb{R}_+\{(0_n, -1)\} = \operatorname{cl} \operatorname{cone}\{\bigcup_{j \in J} \mathcal{V}_j\} + \mathbb{R}_+\{(0_n, -1)\}$.

Hence, we can assume without loss of generality that the $\mathcal{U}_i$ and $\mathcal{V}_j$ are all compact convex sets, and then $\sigma_{\mathcal{U}_i}(x) := \max_{c_i \in \mathcal{U}_i} c_i^\top x$ is a finite-valued convex function for each $i \in I$.
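The computation used in Example 6 above, minimizing $\|\cdot\|^2$ over $H(a, b)$ as licensed by Theorem 4, can be sketched numerically. The following is a minimal sketch, not taken from the paper: the helper name `robust_feasibility_radius` and the illustrative nominal system $\{x_1 \ge 0, \ x_2 \ge 0\}$ are our own, and the simplex/ray parametrization of $H(a,b)$ is solved as a small convex quadratic program with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def robust_feasibility_radius(A, b):
    """Radius of robust feasibility for the nominal system {a_j^T x >= b_j}
    under ball uncertainty V_j = (a_j, b_j) + alpha * B_{n+1} (cf. Theorem 4):
        rho(V) = d(0_{n+1}, H(a, b)),
    where H(a, b) = conv{(a_j, b_j) : j} + R_+ {(0_n, -1)}.  We minimise
    || sum_j lam_j (a_j, b_j) + mu (0_n, -1) ||^2 over simplex weights lam
    and a ray multiplier mu >= 0 (a convex quadratic program)."""
    p, n = A.shape
    D = np.hstack([A, b.reshape(-1, 1)])        # rows are (a_j, b_j)

    def sqnorm(z):
        lam, mu = z[:p], z[p]
        pt = lam @ D                            # point of the polytope part
        pt[-1] -= mu                            # plus mu * (0_n, -1)
        return pt @ pt

    z0 = np.append(np.full(p, 1.0 / p), 0.0)    # feasible start
    cons = [{"type": "eq", "fun": lambda z: z[:p].sum() - 1.0}]
    bnds = [(0.0, None)] * (p + 1)
    res = minimize(sqnorm, z0, bounds=bnds, constraints=cons)
    return np.sqrt(res.fun)

# Illustrative nominal system {x1 >= 0, x2 >= 0}: the closest point of
# H(a, b) to the origin is (1/2, 1/2, 0), so rho(V) = 1/sqrt(2).
rho = robust_feasibility_radius(np.array([[1.0, 0.0], [0.0, 1.0]]),
                                np.array([0.0, 0.0]))
print(round(rho, 4))
```

Any off-the-shelf QP solver would do here; SciPy's SLSQP is used only because the problem is tiny and the simplex constraint is a single linear equation.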

Definition 7 (Robust weakly efficient solution) We say that a point $x \in X := \{x \in \mathbb{R}^n : a_j^\top x \ge b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j, \ j \in J\}$ is a (minmax) robust weakly efficient solution to (P) if it is a weakly efficient solution to its robust counterpart (12), that is, if there is no $\hat{x} \in X$ such that $\sigma_{\mathcal{U}_i}(\hat{x}) < \sigma_{\mathcal{U}_i}(x)$ for all $i \in I$.

When $X$ is bounded, the continuous function $\sigma_{\mathcal{U}_i}$ attains its minimum on $X$, for each $i \in I$, and this guarantees the existence of (minmax) robust weakly efficient solutions. It is easy to see that (12) is equivalent to

V-$\min (z_1, \ldots, z_m)$  s.t. $z_i \ge c_i^\top x, \ \forall c_i \in \mathcal{U}_i, \ i \in I$; $a_j^\top x \ge b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j, \ j \in J$,  (13)

in the sense that a feasible point $x$ is a weakly efficient solution to (12) if and only if $(x, f(x)) \in \mathbb{R}^n \times \mathbb{R}^m$ is a weakly efficient solution to (13). Consequently, $x \in X$ is a (minmax) robust weakly efficient solution to (P) if and only if $(x, f(x))$ is a weakly efficient solution to (13). Below, we show that robust solutions of uncertain multi-objective linear programming problems with various objective data uncertainty sets can be found by solving deterministic multi-objective linear programming problems, or deterministic multi-objective linear programming problems with cone constraints, and so can be computed via the existing technology for deterministic multi-objective programming problems (cf. [15]). These classes of commonly used data uncertainty sets include box data uncertainty, norm data uncertainty and ellipsoidal data uncertainty. We note that these data uncertainty sets have been successfully employed in modeling uncertain scalar optimization problems arising in diverse areas such as finance [10], management science [5], statistical learning [33, 29, 32] and engineering [3, 34]. For excellent comprehensive surveys, see [2, 6].
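When the uncertainty sets are finite scenario lists, the reformulation (13) becomes an ordinary linear program once a weighted-sum scalarization is applied: for weights $\lambda \in \Delta_m$, any minimizer of $\sum_i \lambda_i \sigma_{\mathcal{U}_i}(x)$ over $X$ is weakly efficient for (12). This is a standard scalarization device, not a characterization from the paper, and all data below (scenario sets, weights, the feasible set $\{x \ge 0,\ x_1 + x_2 \ge 1\}$) are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Finite scenario sets for the two uncertain objectives (illustrative data):
U1 = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]
U2 = [np.array([0.0, 1.0]), np.array([0.2, 0.8])]
lam = np.array([0.5, 0.5])          # simplex weights for the scalarization

# Variables (x1, x2, z1, z2): epigraph variables z_i turn sigma_{U_i} into
# the linear constraints z_i >= c^T x for every scenario c in U_i.
c = np.concatenate([np.zeros(2), lam])
A_ub = [np.r_[u, -1.0, 0.0] for u in U1]            # c^T x - z1 <= 0
A_ub += [np.r_[u, 0.0, -1.0] for u in U2]           # c^T x - z2 <= 0
A_ub.append(np.array([-1.0, -1.0, 0.0, 0.0]))       # -(x1 + x2) <= -1
b_ub = np.zeros(len(A_ub))
b_ub[-1] = -1.0
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0, None)] * 2 + [(None, None)] * 2)
print(res.x[:2], round(res.fun, 6))
```

For this data the optimal weighted worst-case value is 0.5, attained at $x = (0.5, 0.5)$; varying `lam` over the simplex traces out other weakly efficient points of the robust counterpart.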
4.1 Box data uncertainty

Consider the box data uncertainty sets

$\mathcal{U}_i = [\underline{c}_i, \overline{c}_i]$, $i \in I$,  (14)

$\mathcal{V}_j = [\underline{a}_j, \overline{a}_j] \times [\underline{b}_j, \overline{b}_j]$, $j \in J$,  (15)

where $\underline{c}_i, \overline{c}_i \in \mathbb{R}^n$, $\underline{c}_i \le \overline{c}_i$, $i \in I$, and $\underline{a}_j, \overline{a}_j \in \mathbb{R}^n$, $\underline{a}_j \le \overline{a}_j$, and $\underline{b}_j, \overline{b}_j \in \mathbb{R}$, $\underline{b}_j \le \overline{b}_j$, $j \in J$. Denote the extreme points of $[\underline{c}_i, \overline{c}_i]$ and $[\underline{a}_j, \overline{a}_j]$ by $\{\hat{c}_i^{(1)}, \ldots, \hat{c}_i^{(2^n)}\}$ and $\{\hat{a}_j^{(1)}, \ldots, \hat{a}_j^{(2^n)}\}$, respectively.

Theorem 8 Consider the uncertain programming problem (P) with data uncertainty sets $\mathcal{U}_i$ and $\mathcal{V}_j$ given as in (14) and (15). Then, $x$ is a robust weakly efficient solution to (P) if and only if $x$ is a weakly efficient solution to the following deterministic multi-objective linear programming problem:

V-$\min (z_1, \ldots, z_m)$  s.t. $z_i \ge (\hat{c}_i^{(k)})^\top x$, $i \in I$, $k = 1, \ldots, 2^n$; $(\hat{a}_j^{(k)})^\top x \ge \overline{b}_j$, $j \in J$, $k = 1, \ldots, 2^n$.

Proof. Let $\mathcal{U}_i$ and $\mathcal{V}_j$ be the box data uncertainty sets given in (14) and (15), respectively. Then, the robust multi-objective linear programming problem (12) can be equivalently rewritten as

V-$\min (z_1, \ldots, z_m)$  s.t. $z_i \ge \max_{c_i \in \mathcal{U}_i} c_i^\top x$, $i \in I$; $\min_{(a_j, b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} \ge 0$, $j \in J$.

Note that a linear function attains its minimum and its maximum over a polytope at extreme points of the polytope. Hence, for each $i \in I$ and each $j \in J$ we get

$\max_{c_i \in \mathcal{U}_i} c_i^\top x = \max_{1 \le k \le 2^n} (\hat{c}_i^{(k)})^\top x$,  $\min_{(a_j, b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} = \min_{1 \le k \le 2^n} \{(\hat{a}_j^{(k)})^\top x\} - \overline{b}_j$.

Therefore, the conclusion follows.

4.2 Norm data uncertainty

Consider the norm data uncertainty sets

$\mathcal{U}_i = \{c_i + \alpha_i u_i : u_i \in \mathbb{R}^n, \ \|M_i u_i\|_s \le 1\}$, $i \in I$,  (16)

$\mathcal{V}_j = \{a_j + \beta_j v_j : v_j \in \mathbb{R}^n, \ \|Z_j v_j\|_s \le 1\} \times [\underline{b}_j, \overline{b}_j]$, $j \in J$,  (17)

where $c_i, a_j \in \mathbb{R}^n$, $\underline{b}_j, \overline{b}_j \in \mathbb{R}$, $\underline{b}_j \le \overline{b}_j$, $\alpha_i, \beta_j > 0$, $M_i$ and $Z_j$ are invertible symmetric $n \times n$ matrices, $i \in I$, $j \in J$, and $\|\cdot\|_s$ denotes the $s$-norm, for $s \in [1, +\infty]$, defined by

$\|x\|_s = \sqrt[s]{\sum_{i=1}^{n} |x_i|^s}$ if $s \in [1, +\infty)$, and $\|x\|_\infty = \max\{|x_i| : 1 \le i \le n\}$.

Moreover, we define $s^* \in [1, +\infty]$ to be the number such that $\frac{1}{s} + \frac{1}{s^*} = 1$.

Theorem 9 Consider the uncertain programming problem (P) with data uncertainty sets $\mathcal{U}_i$ and $\mathcal{V}_j$ given as in (16) and (17). Then, $x$ is a robust weakly efficient solution to (P) if and only if $x$ is a weakly efficient solution to the following deterministic multi-objective programming problem with $s$-order cone constraints:

V-$\min (z_1, \ldots, z_m)$  s.t. $z_i \ge c_i^\top x + \alpha_i \|M_i^{-1} x\|_{s^*}$, $i \in I$; $a_j^\top x - \beta_j \|Z_j^{-1} x\|_{s^*} \ge \overline{b}_j$, $j \in J$.

Proof. Let $\mathcal{U}_i$ and $\mathcal{V}_j$ be the norm data uncertainty sets given in (16) and (17), respectively. Then, the robust counterpart (12) of the uncertain multi-objective linear programming problem can be equivalently rewritten as

V-$\min (z_1, \ldots, z_m)$  s.t. $z_i \ge \max_{c_i \in \mathcal{U}_i} c_i^\top x$, $i \in I$; $\min_{(a_j, b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} \ge 0$, $j \in J$.

Since the dual norm of the $s$-norm is the $s^*$-norm, that is, $\max_{\|a\|_s \le 1} a^\top x = \|x\|_{s^*}$ for any $x \in \mathbb{R}^n$, for each $i \in I$ and each $j \in J$ we have

$\max_{c_i \in \mathcal{U}_i} c_i^\top x = c_i^\top x + \alpha_i \|M_i^{-1} x\|_{s^*}$,  $\min_{(a_j, b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} = a_j^\top x - \beta_j \|Z_j^{-1} x\|_{s^*} - \overline{b}_j$.

Therefore, the conclusion follows.

4.3 Ellipsoidal data uncertainty

Consider the ellipsoidal data uncertainty sets

$\mathcal{U}_i = \{c_i^0 + \sum_{k=1}^{p_i} u_i^k c_i^k : \|(u_i^1, \ldots, u_i^{p_i})\| \le 1\}$, $i \in I$,  (18)

$\mathcal{V}_j = \{a_j^0 + \sum_{l=1}^{q_j} v_j^l a_j^l : \|(v_j^1, \ldots, v_j^{q_j})\| \le 1\} \times [\underline{b}_j, \overline{b}_j]$, $j \in J$,  (19)

where $c_i^k, a_j^l \in \mathbb{R}^n$, $k = 0, 1, \ldots, p_i$, $l = 0, 1, \ldots, q_j$, $p_i, q_j \in \mathbb{N}$, and $\underline{b}_j, \overline{b}_j \in \mathbb{R}$, $i \in I$, $j \in J$.

Theorem 10 Consider the uncertain programming problem (P) with data uncertainty sets $\mathcal{U}_i$ and $\mathcal{V}_j$ given as in (18) and (19). Then, $x$ is a robust weakly efficient solution to (P) if and only if $x$ is a weakly efficient solution to the following deterministic multi-objective programming problem with second-order cone constraints:

V-$\min (z_1, \ldots, z_m)$  s.t. $z_i \ge (c_i^0)^\top x + \|((c_i^1)^\top x, \ldots, (c_i^{p_i})^\top x)\|$, $i \in I$; $(a_j^0)^\top x - \|((a_j^1)^\top x, \ldots, (a_j^{q_j})^\top x)\| \ge \overline{b}_j$, $j \in J$.

Proof. Let $\mathcal{U}_i$ and $\mathcal{V}_j$ be the ellipsoidal data uncertainty sets given in (18) and (19), respectively. Then, the robust multi-objective linear programming problem (12) can be equivalently rewritten as

V-$\min (z_1, \ldots, z_m)$  s.t. $z_i \ge \max_{c_i \in \mathcal{U}_i} c_i^\top x$, $i \in I$; $\min_{(a_j, b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} \ge 0$, $j \in J$.

Since $\max_{\|u\| \le 1} a^\top u = \|a\|$ for any $a \in \mathbb{R}^n$, for each $i \in I$ and each $j \in J$ we have

$\max_{c_i \in \mathcal{U}_i} c_i^\top x = (c_i^0)^\top x + \|((c_i^1)^\top x, \ldots, (c_i^{p_i})^\top x)\|$,  $\min_{(a_j, b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} = (a_j^0)^\top x - \|((a_j^1)^\top x, \ldots, (a_j^{q_j})^\top x)\| - \overline{b}_j$.

Therefore, the conclusion follows.
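The support-function identity behind Theorem 10 can be spot-checked numerically: over the ellipsoid $\{c^0 + \sum_k u_k c^k : \|u\| \le 1\}$, the worst-case objective value is $(c^0)^\top x + \|((c^1)^\top x, \ldots, (c^p)^\top x)\|$, with the maximum attained at $u^* = v / \|v\|$ for $v = ((c^k)^\top x)_k$. The sketch below uses made-up random data; it is a verification of the identity, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 4
c0 = rng.normal(size=n)              # ellipsoid centre c^0
C = rng.normal(size=(p, n))          # perturbation directions c^1, ..., c^p
x = rng.normal(size=n)

# Analytic support function value (self-dual Euclidean norm):
v = C @ x
analytic = c0 @ x + np.linalg.norm(v)

# The maximiser u* = v / ||v|| attains it exactly ...
u_star = v / np.linalg.norm(v)
attained = (c0 + u_star @ C) @ x

# ... and random unit vectors u never exceed it (Cauchy-Schwarz):
samples = rng.normal(size=(2000, p))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
sampled_max = ((c0 + samples @ C) @ x).max()
print(np.isclose(attained, analytic), bool(sampled_max <= analytic + 1e-9))
```

The same pattern, with the dual $s^*$-norm in place of the Euclidean norm, checks the reduction used in Theorem 9.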
We finally note that, in the case when the objective function is free of uncertainty, the characterization of robust solutions for uncertain multi-objective linear programming problems under ellipsoidal constraint data uncertainty was derived in [21].
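The worst-case reductions of this section can likewise be spot-checked for the box case: Theorem 8 rests on a linear function attaining its maximum over the box $[\underline{c}, \overline{c}]$ at one of the $2^n$ extreme points, and for boxes this further collapses to a componentwise formula. The data below are random and illustrative; this is a hedged numerical check, not part of the paper's development.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
n = 4
c_lo = rng.normal(size=n)
c_hi = c_lo + rng.random(n)          # box [c_lo, c_hi] with c_lo <= c_hi
x = rng.normal(size=n)

# Enumeration over the 2^n extreme points of the box (Theorem 8's device):
ext_max = max(np.dot(np.where(mask, c_hi, c_lo), x)
              for mask in itertools.product([True, False], repeat=n))

# Componentwise closed form: pick the upper bound where x_i >= 0, else the
# lower bound.  This avoids the exponential enumeration entirely.
closed_form = np.sum(np.where(x >= 0, c_hi, c_lo) * x)
print(bool(np.isclose(ext_max, closed_form)))
```

The componentwise form is what one would use in practice; the enumeration is kept only to mirror the extreme-point argument in the proof of Theorem 8.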

5 Radius of highly robust efficiency

From now on, we consider highly robust solutions for uncertain multi-objective linear programming problems of the form

(P)  V-$\min \{(c_1^\top x, \ldots, c_m^\top x) : a_j^\top x \ge b_j, \ j \in J\}$,

where both the objective and the constraints are uncertain, $(c_1, \ldots, c_m) \in \mathcal{U} \subseteq \mathbb{R}^{n \times m}$ and $(a_j, b_j) \in \mathcal{V}_j$, and the uncertainty sets are bounded. Recall that the robust feasible set of (P) is given by

$X := \{x \in \mathbb{R}^n : a_j^\top x \ge b_j, \ \forall (a_j, b_j) \in \mathcal{V}_j, \ j \in J\}$.  (20)

In what follows the normal cone to $X$ at $\overline{x} \in X$,

$N(X, \overline{x}) := \{w \in \mathbb{R}^n : w^\top (x - \overline{x}) \ge 0, \ \forall x \in X\}$,

will play a crucial role.

Definition 11 (Highly robust weakly efficient solution) We say that $\overline{x} \in X$ is a highly robust weakly efficient solution of the uncertain multi-objective linear programming problem (P) if, for each $(c_1, \ldots, c_m) \in \mathcal{U}$, $\overline{x}$ is a weakly efficient solution to the problem in (1), that is, if, for each $(c_1, \ldots, c_m) \in \mathcal{U}$, there exists no $x \in X$ such that $c_i^\top x < c_i^\top \overline{x}$ for all $i \in I$.

We have shown in Section 4 that $X$ does not change when the uncertainty sets $\{\mathcal{V}_j, j \in J\}$ are replaced by $\{\operatorname{cl} \operatorname{conv} \mathcal{V}_j, j \in J\}$. Next we show that any highly robust weakly efficient solution for $\mathcal{U}$ is also a highly robust weakly efficient solution for $\operatorname{cl} \mathcal{U}$. In fact, if $\overline{x} \in X$ is not a highly robust weakly efficient solution for $\operatorname{cl} \mathcal{U}$, then there exist $(c_1, \ldots, c_m) \in \operatorname{cl} \mathcal{U}$ and $\hat{x} \in X$ such that $c_i^\top \hat{x} < c_i^\top \overline{x}$ for all $i \in I$. Let $\{(c_1^k, \ldots, c_m^k)\}_{k \in \mathbb{N}}$ be a sequence in $\mathcal{U}$ converging to $(c_1, \ldots, c_m)$. Then $(c_i^k)^\top \hat{x} < (c_i^k)^\top \overline{x}$ for all $i \in I$ and $k$ large enough, which implies that $\overline{x}$ is not a highly robust weakly efficient solution for $\mathcal{U}$. Consequently, we may assume without loss of generality that $\mathcal{V}_j$ is a compact convex set, for each $j \in J$, while $\mathcal{U}$ is a compact set. We first provide a simple uncertain multi-objective linear programming problem where the set of highly robust weakly efficient solutions is nonempty.
Example 12 Consider the multi-objective linear programming problem with uncertain objectives and uncertainty-free constraints

V-$\min \{(c_1^\top x, c_2^\top x) : x \in [-1, 1]^2\}$,  (21)

where $(c_1, c_2)$ is uncertain and belongs to the uncertainty set $\mathcal{U} := \{C + \alpha M : \alpha \in [0, 1]\}$ for certain given matrices $C$ and $M$.

The set of weakly efficient solutions with respect to $C + \alpha M$ is

$([-1, 1] \times \{1\}) \cup (\{1\} \times [-1, 1])$ if $0 \le \alpha < \frac{1}{2}$; $[-1, 1]^2$ if $\alpha = \frac{1}{2}$; $([-1, 1] \times \{-1\}) \cup (\{-1\} \times [-1, 1])$ if $\frac{1}{2} < \alpha \le 1$;

and so the set of highly robust weakly efficient solutions is $\{(1, -1), (-1, 1)\}$. In this case $\mathcal{U}$ is not a Cartesian product, so that there is no minmax robust weakly efficient solution. The relationship between minmax robust solutions and highly robust solutions is established in the following proposition.

Proposition 13 Let (P) be an uncertain multi-objective linear programming problem as in (11), with $\mathcal{U} = \prod_{i=1}^{m} \mathcal{U}_i$. If $\overline{x} \in X$ is a highly robust weakly efficient solution to (P), then $\overline{x}$ is also a minmax robust weakly efficient solution to (P).

Proof. Assume that $\overline{x} \in X$ is not a minmax robust weakly efficient solution to (P). Then, $(\overline{x}, f(\overline{x}))$ is not a weakly efficient solution to (13). By the compactness assumption, for each $i \in I$, $\sigma_{\mathcal{U}_i}(\overline{x}) = \overline{c}_i^\top \overline{x}$ for certain $\overline{c}_i \in \mathcal{U}_i$. Now, since $(\overline{x}, f(\overline{x}))$ is not a weakly efficient solution to (13), there exists $(\tilde{x}, \tilde{z}) \in \mathbb{R}^n \times \mathbb{R}^m$ such that $\tilde{x} \in X$, $\tilde{z}_i \ge c_i^\top \tilde{x}$ for all $c_i \in \mathcal{U}_i$, $i \in I$, and $\tilde{z}_i < \overline{c}_i^\top \overline{x}$ for all $i \in I$. Since $\overline{c}_i^\top \tilde{x} \le \tilde{z}_i < \overline{c}_i^\top \overline{x}$ for all $i \in I$, $\overline{x}$ is not a weakly efficient solution to (1) when $(c_1, \ldots, c_m) = (\overline{c}_1, \ldots, \overline{c}_m) \in \mathcal{U}$, and so $\overline{x}$ is not a highly robust weakly efficient solution to (P).

The next example illustrates the fact that the set of highly robust weakly efficient solutions may be empty despite the existence of minmax robust weakly efficient solutions (the opposite situation holds, e.g., whenever $\mathcal{U}$ fails to be a Cartesian product of subsets of $\mathbb{R}^n$ and $X$ is a singleton).

Example 14 Consider again the multi-objective linear programming problem stated in (21) with a different uncertainty set $\mathcal{U} := \mathcal{U}_1 \times \mathcal{U}_2$, with

$\mathcal{U}_1 = \{(2\theta_1 - 1, 0)^\top : \theta_1 \in [0, 1]\}$, $\mathcal{U}_2 = \{(0, 2\theta_2 - 1)^\top : \theta_2 \in [0, 1]\}$.

Its robust counterpart can be formulated as

V-$\min \{(\sup_{\theta_1 \in [0,1]} (2\theta_1 - 1) x_1, \ \sup_{\theta_2 \in [0,1]} (2\theta_2 - 1) x_2) : x \in [-1, 1]^2\}$,

which is equivalent to V-min { (|x_1|, |x_2|) : x ∈ [−1, 1]^2 }. It can easily be checked that the set of minmax robust weakly efficient solutions is ([−1, 1] × {0}) ∪ ({0} × [−1, 1]), while the set of weakly efficient solutions is

  ([−1, 1] × {1}) ∪ ({1} × [−1, 1]),    if 0 ≤ α_1 < 1/2 and 0 ≤ α_2 < 1/2,
  ([−1, 1] × {−1}) ∪ ({1} × [−1, 1]),   if 0 ≤ α_1 < 1/2 and 1/2 < α_2 ≤ 1,
  [−1, 1]^2,                            if α_1 = 1/2 or α_2 = 1/2,
  ([−1, 1] × {1}) ∪ ({−1} × [−1, 1]),   if 1/2 < α_1 ≤ 1 and 0 ≤ α_2 < 1/2,
  ([−1, 1] × {−1}) ∪ ({−1} × [−1, 1]),  if 1/2 < α_1 ≤ 1 and 1/2 < α_2 ≤ 1,

so that there is no highly robust weakly efficient solution.

5.1 Affine objective data perturbations

The existence of highly robust weakly efficient solutions can frequently be guaranteed in the case of affine perturbations of the objective data. For this purpose, consider the parameterized uncertain linear multi-objective programming problem

(P_α) V-min (c_1^T x, …, c_m^T x) s.t. a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J,

where

(c_1, …, c_m) ∈ U_α = ∏_{i=1}^m (c̄_i + α B_n),

with c̄_i ∈ R^n, i ∈ I, and α ≥ 0. Inspired by the definition of the radius of robust feasibility, we define the radius of highly robust efficiency ρ(U) as the supremum of those α ∈ R_+ such that (P_α) has some highly robust weakly efficient solution. When X is bounded and α = 0, U_0 = {(c̄_1, …, c̄_m)} and the minimizers on X of the scalar functions x ↦ c̄_i^T x, i ∈ I, are highly robust weakly efficient solutions of (P_0). So, ρ(U) ∈ R_+ ∪ {+∞}.

Assume that X is a polytope (e.g., when the sets V_j, j ∈ J, are all polytopes; recall Proposition 2) and denote by E the set of extreme points of X. Given c ∈ R^n, the function x ↦ c^T x attains its minimum on X at some point e ∈ E, so that c^T (x − e) ≥ 0 for all x ∈ X, i.e., c ∈ N(X, e). Moreover, e is the unique minimizer of x ↦ c^T x on X for all c ∈ int N(X, e). So, the finite family of solid polyhedral convex cones {N(X, e) : e ∈ E} constitutes a tessellation of R^n.
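This tessellation can be checked numerically on a toy polytope. The sketch below uses hypothetical data (X = [−1, 1]^2, so E is the four vertices) and verifies that a randomly drawn objective c is minimized at exactly one extreme point, i.e., c falls in the interior of a single cone N(X, e):

```python
import random

# Hypothetical toy polytope: X = [-1, 1]^2 with extreme points E.
E = [(1, 1), (-1, 1), (-1, -1), (1, -1)]

def minimizing_vertices(c, tol=1e-12):
    """Extreme points of X at which c^T x attains its minimum over X."""
    vals = [c[0]*e[0] + c[1]*e[1] for e in E]
    m = min(vals)
    return [e for e, v in zip(E, vals) if v <= m + tol]

random.seed(0)
unique = 0
for _ in range(10_000):
    c = (random.uniform(-1, 1), random.uniform(-1, 1))
    # A randomly drawn c avoids the boundaries of the cones N(X, e)
    # with probability 1, so the minimizer is almost surely unique,
    # i.e. c lies in int N(X, e) for exactly one vertex e.
    if len(minimizing_vertices(c)) == 1:
        unique += 1
print(unique)  # overwhelmingly likely 10000: no sample hits a cone boundary
```

A c with a zero coordinate (a cone boundary here) would be minimized along a whole edge, which is exactly the measure-zero event the argument below excludes.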
The boundary of each cone N(X, e), e ∈ E, is contained in a finite union of hyperplanes, so that ⋃_{e ∈ E} bd N(X, e) is contained in a finite union of hyperplanes too. Thus, a vector c generated at random in R^n belongs to R^n \ ⋃_{e ∈ E} bd N(X, e) = ⋃_{e ∈ E} int N(X, e) with probability 1. This intuitive argument, together with the next result, shows that we can get a positive lower bound for ρ(U) under a mild condition.

Theorem 15 (Radius of highly robust efficiency) Let X be a polytope and let E be its set of extreme points. If there exists an index i ∈ I and a corresponding extreme point e_i ∈ E such that c̄_i ∈ int N(X, e_i), then ρ(U) > 0.

Proof. Let X = { x ∈ R^n : p_t^T x ≥ q_t, t ∈ T } be a linear representation of the polytope X such that ‖p_t‖ = 1 for all t ∈ T. The cone N(X, e) at e ∈ E is the polar of the cone of feasible directions of X at e, i.e., N(X, e) = { x ∈ R^n : p_t^T x ≥ 0, ∀t ∈ T(e) }, where T(e) = { t ∈ T : p_t^T e = q_t } is the set of active indices at e. Moreover, int N(X, e) = { x ∈ R^n : p_t^T x > 0, ∀t ∈ T(e) }. So, for any c̄ ∈ int N(X, e), the radius of the greatest ball centered at c̄ and contained in N(X, e) is

d(c̄, bd N(X, e)) = min { p_t^T c̄ : t ∈ T(e) } > 0.   (22)

The supremum of any set of positive scalars α such that some extreme point of X minimizes on X at least one objective function x ↦ c_i^T x whenever c_i ∈ c̄_i + αB_n is a lower bound for ρ(U). We now compute such a lower bound. Denote by I_0 the set of indices i ∈ I such that c̄_i lies in the interior of some element of {N(X, e) : e ∈ E}. By assumption, I_0 ≠ ∅. For each i ∈ I_0 there exists a unique extreme point e_i of X such that c̄_i ∈ int N(X, e_i). Then, from (22), one has

ρ(U) ≥ max { d(c̄_i, bd N(X, e_i)) : i ∈ I_0 } = max_{i ∈ I_0} min { p_t^T c̄_i : t ∈ T(e_i) } > 0.   (23)  □

The assumption that X is a polytope cannot be replaced by the weaker assumption that X is a compact convex set. Indeed, in this case we still have X = conv E, with {N(X, e) : e ∈ E} being a tessellation of R^n, but we may have int N(X, e) = ∅ for all e ∈ E (e.g., if X is a closed ball, {N(X, e) : e ∈ E} is formed by all rays emanating from 0_n). Observe that the lower bound for ρ(U) in (23) can be effectively computed. Below, we provide two examples. The first example illustrates how to use (23) to compute a lower bound for the radius of highly robust efficiency. The second example shows that the lower bound provided by (23) can be achieved (and so, is the best possible lower bound).
Example 16 (A lower bound for the radius of highly robust efficiency) Consider the problem V-min { (c_1^T x, c_2^T x) : x ∈ X }, where the feasible set is

X := { x ∈ R^2 : −x_1 ≥ −1, x_1 ≥ −1, −x_2 ≥ −1, x_2 ≥ −1 },

so that p_1 = (−1, 0)^T, p_2 = (1, 0)^T, p_3 = (0, −1)^T and p_4 = (0, 1)^T. The extreme points of X are e_1 = (1, 1), e_2 = (−1, 1), e_3 = (−1, −1) and e_4 = (1, −1).

(a) Let c̄_1 = (−2, −1)^T and c̄_2 = (−1, 1)^T. We have c̄_1 ∈ int N(X, e_1) and c̄_2 ∈ int N(X, e_4), with T(e_1) = {1, 3} and T(e_4) = {1, 4}, so that (23) yields

ρ(U) ≥ max { min {2, 1}, min {1, 1} } = 1.   (24)

(b) The vectors c̄_1 = (1, 0)^T and c̄_2 = (−1, 0)^T belong to ⋃_{i=1}^4 bd N(X, e_i) and do not satisfy the assumption of Theorem 15. It is easy to see that any element of bd X is a weakly efficient solution. We associate with r ∈ N the couples of perturbed vectors (1, 1/r)^T, (−1, 1/r)^T and (1, −1/r)^T, (−1, −1/r)^T, whose corresponding sets of weakly efficient solutions are conv {e_3, e_4} and conv {e_1, e_2}, respectively. Since conv {e_3, e_4} ∩ conv {e_1, e_2} = ∅, the problem (P_α) with α = 1/r has no highly robust weakly efficient solutions, and so ρ(U) < 1/r for all r ∈ N. Consequently, ρ(U) = 0.
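The arithmetic of part (a) can be reproduced mechanically. A minimal sketch, encoding the data of Example 16(a) in the representation used in the proof of Theorem 15 and evaluating the lower bound (23):

```python
# Data of Example 16(a): X = [-1,1]^2 in the representation p_t^T x >= q_t
# with unit normals; `active` lists the constraints active at each vertex.
P = {1: (-1, 0), 2: (1, 0), 3: (0, -1), 4: (0, 1)}
c1, c2 = (-2, -1), (-1, 1)             # nominal objectives cbar_1, cbar_2
active = {"e1": [1, 3], "e4": [1, 4]}  # e1 = (1, 1), e4 = (1, -1)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# Lower bound (23): max over i in I_0 of min over t in T(e_i) of p_t^T cbar_i.
bound = max(min(dot(P[t], c1) for t in active["e1"]),
            min(dot(P[t], c2) for t in active["e4"]))
print(bound)  # 1, matching (24)
```

The inner minima are exactly the sets {2, 1} and {1, 1} appearing in (24).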

Example 17 (Best possible lower bound) Consider the following multi-objective linear programming problem

(EP_ε) V-min { (c_1 x, c_2 x) : x ≥ 1, −x ≥ −2 },

where the data (c_1, c_2) is uncertain and belongs to the uncertainty set U_ε = [c̄_1 − ε, c̄_1 + ε] × [c̄_2 − ε, c̄_2 + ε], with c̄_1 = 2, c̄_2 = −1 and ε ≥ 0. The extreme points of the feasible set X = [1, 2] are e_1 = 1 and e_2 = 2, and one has c̄_1 ∈ int N(X, e_1) and c̄_2 ∈ int N(X, e_2). Here I_0 = {1, 2}, p_1 = 1 and p_2 = −1. As T(e_1) = {1} and T(e_2) = {2}, from (23) we have

ρ(U) ≥ max_{i ∈ I_0} min { p_t c̄_i : t ∈ T(e_i) } = 2.

Indeed, the obtained lower bound 2 is tight. To see this, we first note that, for all (c_1, c_2) ∈ [c̄_1 − 2, c̄_1 + 2] × [c̄_2 − 2, c̄_2 + 2], x = 1 is a highly robust weakly efficient solution for the problem (EP_2). On the other hand, for any ε > 2 there exist (c_1^j, c_2^j) ∈ [c̄_1 − ε, c̄_1 + ε] × [c̄_2 − ε, c̄_2 + ε], j = 1, 2, such that c_1^1 < 0, c_2^1 < 0, c_1^2 > 0 and c_2^2 > 0. Note that the set of weakly efficient solutions of (EP) is {1} if c_1 > 0 and c_2 > 0, and {2} if c_1 < 0 and c_2 < 0. So, if ε > 2, the highly robust weakly efficient solution set for the problem (EP_ε) is empty. This shows that ρ(U) = 2.

5.2 Radial objective data perturbations

We now associate with a given matrix C̄ := (c̄_1, …, c̄_m) ∈ R^{n×m} and given vectors u ∈ R^n \ {0_n} and v ∈ R^m_+ \ {0_m} the parameterized uncertain linear multi-objective programming problem

(P_ᾱ) V-min C^T x s.t. a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J,

where the data C is uncertain and belongs to the uncertainty set

U = { C̄ + αuv^T : α ∈ [0, ᾱ] },   (25)

with ᾱ ≥ 0. This data uncertainty set was introduced and examined in [36, Section 3] (see also [28]). We define again the radius of highly robust efficiency ρ(U) as the supremum of those ᾱ ∈ R_+ such that (P_ᾱ) has some highly robust weakly efficient solution. Obviously, ρ(U) ≠ −∞ whenever at least one of the scalar functions x ↦ c̄_i^T x, i ∈ I, attains its minimum on X. As a straightforward consequence of the next theorem we shall obtain the following lower bound for the radius of highly robust efficiency:

ρ(U) ≥ sup { ᾱ ∈ R_+ : ∃ x̄ ∈ X and λ, λ' ∈ Δ_m such that C̄λ ∈ N(X, x̄) and (C̄ + ᾱuv^T)λ' ∈ N(X, x̄) }.

Moreover, the supremum in the definition of ρ(U) is attained whenever there exist x̄ ∈ X and λ, λ' ∈ Δ_m such that C̄λ ∈ N(X, x̄) and (C̄ + ρ(U)uv^T)λ' ∈ N(X, x̄).
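That two endpoint multipliers λ, λ' control every intermediate scenario α rests on a convex-combination identity, which can be verified numerically. A hedged sketch on hypothetical random data (exact rational arithmetic avoids rounding):

```python
import random
from fractions import Fraction as F

# Sketch: for g = (1-a)(v^T lp) / ((1-a)(v^T lp) + a(v^T l)) and
# la = g*l + (1-g)*lp, one has
#   (Cbar + a*u*v^T) la = g*(Cbar l) + (1-g)*((Cbar + u*v^T) lp),
# so the endpoint conditions propagate to every scenario a in [0, 1].
def matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

random.seed(1)
n, m = 3, 2
for _ in range(200):
    Cbar = [[F(random.randint(-5, 5)) for _ in range(m)] for _ in range(n)]
    u = [F(random.randint(-5, 5)) for _ in range(n)]
    v = [F(random.randint(0, 5)) for _ in range(m)]   # v in R^m_+
    l, lp = [F(1, 2), F(1, 2)], [F(1, 4), F(3, 4)]    # simplex multipliers
    a = F(random.randint(0, 10), 10)
    vl = sum(vi * xi for vi, xi in zip(v, l))
    vlp = sum(vi * xi for vi, xi in zip(v, lp))
    den = (1 - a) * vlp + a * vl
    if den == 0:
        continue  # degenerate case, handled separately in the argument
    g = (1 - a) * vlp / den
    la = [g * li + (1 - g) * lpi for li, lpi in zip(l, lp)]
    Ca = [[Cbar[i][j] + a * u[i] * v[j] for j in range(m)] for i in range(n)]
    C1 = [[Cbar[i][j] + u[i] * v[j] for j in range(m)] for i in range(n)]
    lhs = matvec(Ca, la)
    rhs = [g * x + (1 - g) * y
           for x, y in zip(matvec(Cbar, l), matvec(C1, lp))]
    assert lhs == rhs
print("identity verified on 200 random instances")
```

Since la is again a simplex vector, membership of the right-hand side in the convex cone N(X, x̄) carries over to the left-hand side.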

Theorem 18 (Characterizing highly robust weakly efficient solutions) Consider the uncertain problem (P_ᾱ) with ᾱ = 1, that is, with uncertainty set U = { C̄ + αuv^T : α ∈ [0, 1] }, and let x̄ ∈ X. Then, x̄ is a highly robust weakly efficient solution if and only if there exist λ, λ' ∈ Δ_m such that

C̄λ ∈ N(X, x̄) and (C̄ + uv^T)λ' ∈ N(X, x̄).

Moreover, if V_j is convex for each j ∈ J and K(V) is closed, then the highly robust weak efficiency of x̄ ∈ X is equivalent to the condition that there exist λ, λ' ∈ Δ_m, (a_j, b_j), (a'_j, b'_j) ∈ V_j and γ_j, γ'_j ≥ 0, j ∈ J, such that

C̄λ = Σ_{j ∈ J} γ_j a_j and γ_j (a_j^T x̄ − b_j) = 0, j ∈ J,

and

(C̄ + uv^T)λ' = Σ_{j ∈ J} γ'_j a'_j and γ'_j (a'_j^T x̄ − b'_j) = 0, j ∈ J.

Proof. Let x̄ ∈ X be a highly robust weakly efficient solution. Then, for each C ∈ U there exists no x ∈ X such that C^T x < C^T x̄. By [20, Prop. 18 (iii)], this is equivalent to the fact that

(∀C ∈ U) (∃λ ∈ R^m_+ \ {0_m}) (Cλ ∈ N(X, x̄)).

As N(X, x̄) is a cone, by normalization we may assume that λ ∈ Δ_m, and so, x̄ is a highly robust weakly efficient solution if and only if

(∀C ∈ U) (∃λ ∈ Δ_m) (Cλ ∈ N(X, x̄)).   (26)

To see the first assertion, it suffices to show that (26) is further equivalent to

(∃λ, λ' ∈ Δ_m) (C̄λ ∈ N(X, x̄) and (C̄ + uv^T)λ' ∈ N(X, x̄)).   (27)

To see the equivalence, we only need to show that (27) implies (26) when u ≠ 0_n (otherwise U is a singleton set). To achieve this, suppose that (27) holds and fix an arbitrary C ∈ U. Then there exists α ∈ [0, 1] such that C = C̄ + αuv^T. We may assume α ∈ (0, 1), otherwise there is nothing to prove. Firstly, if λ'^T v = 0, then (uv^T)λ' = u(v^T λ') = 0_n. Hence, C̄λ' = (C̄ + uv^T)λ' ∈ N(X, x̄), which means that (27) holds with λ = λ'. So, for any α ∈ (0, 1) one has

(C̄ + αuv^T)λ' = (1 − α) C̄λ' + α (C̄ + uv^T)λ' ∈ N(X, x̄).

Consequently, we may assume λ'^T v ≠ 0. Even more, as v ∈ R^m_+ \ {0_m} and λ' ∈ Δ_m, we may assume λ'^T v > 0. In the same way, we get that λ^T v ≥ 0. Hence, one has

(1 − α) λ'^T v + α λ^T v > 0 and so, γ := (1 − α) λ'^T v / ((1 − α) λ'^T v + α λ^T v) ∈ [0, 1] and λ_α := γλ + (1 − γ)λ' ∈ Δ_m. Moreover, we have

γα (uv^T)λ − (1 − γ)(1 − α)(uv^T)λ' = γα (λ^T v) u − (1 − γ)(1 − α)(λ'^T v) u = 0_n.   (28)

Now,

(C̄ + αuv^T)λ_α = γ C̄λ + (1 − γ) C̄λ' + γα (uv^T)λ + (1 − γ)α (uv^T)λ'
 = γ C̄λ + (1 − γ)(C̄ + uv^T)λ' + γα (uv^T)λ − (1 − γ)(1 − α)(uv^T)λ'
 = γ C̄λ + (1 − γ)(C̄ + uv^T)λ' ∈ N(X, x̄),

where the last equality follows from (28) and the final inclusion follows from (27) and the convexity of N(X, x̄).

To see the second assertion, we assume that V_j is convex, j ∈ J, and K(V) is closed. We only need to show that

N(X, x̄) = { Σ_{j ∈ J} γ_j a_j : (a_j, b_j) ∈ V_j, γ_j ≥ 0 and γ_j (a_j^T x̄ − b_j) = 0, j ∈ J }.

The system { a^T x ≥ b, (a, b) ∈ T }, with T := ⋃_{j ∈ J} V_j, is a linear representation of X. Thus, w ∈ N(X, x̄) if and only if the inequality w^T x ≥ w^T x̄ is a consequence of { a^T x ≥ b, (a, b) ∈ T }, which is equivalent, according to Lemma 1, to

(w, w^T x̄) ∈ cone(T) + R_+ {(0_n, −1)}.

This is equivalent to asserting the existence of a finite subset S of T, corresponding nonnegative scalars γ_{(a,b)}, (a, b) ∈ S, and μ ≥ 0, such that

(w, w^T x̄) = Σ_{(a,b) ∈ S} γ_{(a,b)} (a, b) + μ (0_n, −1).   (29)

Multiplying both members of (29) by (x̄, −1) we get 0 = Σ_{(a,b) ∈ S} γ_{(a,b)} (a^T x̄ − b) + μ; since x̄ ∈ X makes every term nonnegative, μ = 0 and each product vanishes, so that (29) is equivalent to

w = Σ_{(a,b) ∈ S} γ_{(a,b)} a and γ_{(a,b)} (a^T x̄ − b) = 0, ∀(a, b) ∈ S.   (30)

Finally, since S ⊆ ⋃_{j ∈ J} V_j, we can write S = ⋃_{j ∈ J} S_j, with S_j ⊆ V_j, j ∈ J, and S_i ∩ S_j = ∅ when i ≠ j. Let γ_j := Σ_{(a,b) ∈ S_j} γ_{(a,b)}, j ∈ J. If γ_j ≠ 0 one has, by convexity of V_j,

(a_j, b_j) := (1/γ_j) Σ_{(a,b) ∈ S_j} γ_{(a,b)} (a, b) ∈ V_j.

Take (a_j, b_j) ∈ V_j arbitrarily when γ_j = 0. Then we get from (30) that

w = Σ_{j ∈ J} γ_j a_j and γ_j (a_j^T x̄ − b_j) = 0, j ∈ J.

Thus, the conclusion follows. □

In Theorem 18 we require that v ∈ R^m_+. The following example (inspired by [36, Example 3.3]) illustrates that this non-negativity requirement cannot be dropped.

Example 19 (Non-negativity requirement for rank-1 objective data uncertainty) Let C̄ ∈ R^{3×2}, u ∈ R^3 and v ∈ R^2 \ R^2_+, and consider the uncertain multi-objective optimization problem

V-min { C^T x : a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ {1, 2} },   (31)

where the objective data matrix C is an element of U = { C̄ + αuv^T : α ∈ [0, 1] } and the uncertainty sets V_1 and V_2 for the constraints are convex polytopes, each given as the convex hull of finitely many points of R^3 × R. Note that the robust feasible set is

X = { x ∈ R^3 : a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ {1, 2} } = { x : a_j^T x ≥ b_j, j ∈ {1, …, 5} },

where { a_j^T x ≥ b_j, j ∈ {1, …, 5} } is the system in (10). It can be checked that x̄ = (1, 1, 3/2)^T ∈ X and that N(X, x̄) is the convex cone generated by the two constraint vectors active at x̄, say

N(X, x̄) = { γ_1 g_1 + γ_2 g_2 : γ_1 ≥ 0, γ_2 ≥ 0 }.

For λ = (2/3, 1/3)^T and λ' = (1/3, 2/3)^T we then have

C̄λ ∈ N(X, x̄) and (C̄ + uv^T)λ' ∈ N(X, x̄).

On the other hand, for a suitable C = C̄ + αuv^T ∈ U and x' = (0, 0, 3)^T ∈ X, we see that

C^T x' < (11/2, 11/2)^T = C^T x̄.

So, x̄ is not a weakly efficient solution to (31). Thus, the above solution characterization fails when v has a negative component.
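A one-dimensional toy instance shows the same failure; the data below are hypothetical (not the data of Example 19) and chosen only so that the arithmetic is transparent:

```python
from fractions import Fraction as F

# Hypothetical instance with n = 1, X = [0, 1], Cbar = (-2, 1), u = 3 and
# v = (1, -1), which has a negative entry.  Scenario a in [0, 1] yields
# the objectives c1(a) = -2 + 3a and c2(a) = 1 - 3a.
def objectives(a):
    return (-2 + 3 * a, 1 - 3 * a)

def x0_weakly_efficient(a):
    # x = 0 is weakly efficient for V-min(c1*x, c2*x) on [0, 1] iff no
    # feasible x makes both objectives strictly smaller, i.e. iff
    # max(c1, c2) >= 0.
    return max(objectives(a)) >= 0

# Weakly efficient at both endpoint scenarios ...
print(x0_weakly_efficient(F(0)), x0_weakly_efficient(F(1)))  # True True
# ... but not at a = 1/2, where both objectives are negative and x = 1
# strictly dominates x = 0, so the two-endpoint test of Theorem 18 fails.
print(x0_weakly_efficient(F(1, 2)))  # False
```

With v ≥ 0 the objectives could not change sign in opposite directions like this along the segment of scenarios.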

In the case where the constraints are uncertainty-free, i.e., the sets V_j are all singletons, we obtain the following solution characterization for robust multi-objective optimization problems with rank-1 objective uncertainty.

Corollary 20 Consider the set U as in Theorem 18 and V_j = {(a_j, b_j)}, j ∈ J. For each C ∈ U, consider the uncertain multi-objective linear programming problem (1). Given x̄ ∈ X, the following statements are equivalent:

(i) x̄ is a highly robust weakly efficient solution.

(ii) There exist λ, λ' ∈ Δ_m such that C̄λ ∈ N(X, x̄) and (C̄ + uv^T)λ' ∈ N(X, x̄).

(iii) There exist λ, λ' ∈ Δ_m and γ_j, γ'_j ≥ 0, j ∈ J, such that

C̄λ = Σ_{j ∈ J} γ_j a_j and γ_j (a_j^T x̄ − b_j) = 0, j ∈ J,

and

(C̄ + uv^T)λ' = Σ_{j ∈ J} γ'_j a_j and γ'_j (a_j^T x̄ − b_j) = 0, j ∈ J.

(iv) x̄ is a weakly efficient solution to the problems

(P_0) V-min C̄^T x s.t. a_j^T x ≥ b_j, j ∈ J,

and

(P_1) V-min (C̄ + uv^T)^T x s.t. a_j^T x ≥ b_j, j ∈ J.

Proof. Let V_j = {(a_j, b_j)}, j ∈ J. The equivalences (i) ⟺ (ii) ⟺ (iii) come from Theorem 18, taking into account that all the uncertainty sets V_j are polytopes. Note that (i) ⟹ (iv) always holds because of (25) and Definition 11. Finally, the implication (iv) ⟹ (ii) is immediate by the usual characterization of weakly efficient solutions (e.g., see [20, Prop. 18 (iii)]). Thus, the conclusion follows. □

Remark 21 The equivalence (i) ⟺ (iii) in Corollary 20, on highly robust weakly efficient solutions of uncertain vector linear programming problems, can be seen as a counterpart of [36, Theorem 3.1], on robust efficient solutions of the same type of problems.
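Statement (iv) gives a finite test: check weak efficiency in just the two deterministic problems (P_0) and (P_1). A sketch on hypothetical data (certain constraints, X = [−1, 1]^2, and u, v chosen for illustration):

```python
from fractions import Fraction as F

# Hypothetical data for Corollary 20: X = [-1, 1]^2 with certain
# constraints, nominal objectives cbar1 = (1, 0), cbar2 = (0, 1) and
# rank-1 uncertainty given by u = (1, 1), v = (1, 1) >= 0.
cbar = ((F(1), F(0)), (F(0), F(1)))
u, v = (F(1), F(1)), (F(1), F(1))

def objective(i, a):
    # i-th column of Cbar + a*u*v^T, i.e. cbar_i + a*v_i*u.
    return tuple(cbar[i][k] + a * v[i] * u[k] for k in range(2))

def weakly_efficient_on_box(x, c1, c2, grid=8):
    # For m = 2, x is weakly efficient iff some convex combination
    # d = t*c1 + (1-t)*c2 is minimized over [-1, 1]^2 at x.  Scanning a
    # finite t-grid is only a sketch (it is exact for this data, whose
    # breakpoints are hit by the grid).
    for j in range(grid + 1):
        t = F(j, grid)
        d = tuple(t * c1[k] + (1 - t) * c2[k] for k in range(2))
        if all(d[k] == 0 or (d[k] > 0 and x[k] == -1)
               or (d[k] < 0 and x[k] == 1) for k in range(2)):
            return True
    return False

def highly_robust(x):
    # Corollary 20(iv): weak efficiency at the two scenarios a = 0, 1.
    return all(weakly_efficient_on_box(x, objective(0, a), objective(1, a))
               for a in (F(0), F(1)))

print(highly_robust((-1, -1)))  # True
print(highly_robust((0, -1)))   # False: weakly efficient for a = 0 only
```

For general polyhedra the membership test "d minimized at x" would be an LP rather than a coordinate check, but the two-scenario structure of (iv) is unchanged.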

6 Tractable optimality conditions for highly robust solutions

Next, we provide various classes of commonly used uncertainty sets determining the robust feasible set

X = { x ∈ R^n : a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J },

under which one can numerically check whether a robust feasible point is a highly robust weakly efficient solution or not. Throughout this section we assume that the objective function of (1) satisfies the rank-1 matrix data uncertainty, as defined in Section 5. We begin with the simple box constraint data uncertainty.

6.1 Box constraint data uncertainty

Consider the box data uncertainty set

V_j = [a_j, ā_j] × [b_j, b̄_j],   (32)

where a_j, ā_j ∈ R^n, a_j ≤ ā_j, and b_j, b̄_j ∈ R, b_j ≤ b̄_j, j ∈ J. Denote the extreme points of [a_j, ā_j] by { â_j^(1), …, â_j^(2^n) }.

Theorem 22 Consider the set U as in Theorem 18 and V_j, j ∈ J, as in (32). For each C ∈ U, consider the uncertain multi-objective linear programming problem in (1). Then, the following statements are equivalent:

(i) x̄ ∈ X is a highly robust weakly efficient solution to (P).

(ii) There exist λ, λ' ∈ Δ_m and γ_j^(l), γ'_j^(l) ≥ 0 such that

C̄λ = Σ_{j ∈ J} Σ_{l=1}^{2^n} γ_j^(l) â_j^(l) and γ_j^(l) ((â_j^(l))^T x̄ − b̄_j) = 0, j ∈ J, l = 1, …, 2^n,

and

(C̄ + uv^T)λ' = Σ_{j ∈ J} Σ_{l=1}^{2^n} γ'_j^(l) â_j^(l) and γ'_j^(l) ((â_j^(l))^T x̄ − b̄_j) = 0, j ∈ J, l = 1, …, 2^n.

(iii) x̄ is a weakly efficient solution for the following two deterministic multi-objective linear programming problems:

V-min C̄^T x s.t. (â_j^(l))^T x − b̄_j ≥ 0, l = 1, …, 2^n, j ∈ J,

and

V-min (C̄ + uv^T)^T x s.t. (â_j^(l))^T x − b̄_j ≥ 0, l = 1, …, 2^n, j ∈ J.
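The reduction behind (ii) and (iii) — that the box-uncertain constraint a^T x ≥ b for all a ∈ [a_j, ā_j] only needs checking at the 2^n extreme points â_j^(l) — is a linearity argument. A quick numerical sanity check on hypothetical random boxes:

```python
import itertools
import random

def min_over_box(x, lo, hi):
    # min of a^T x over the box [lo, hi]: choose the worse endpoint per
    # coordinate, since a -> a^T x is separable and linear in a.
    return sum(min(l * xk, h * xk) for l, h, xk in zip(lo, hi, x))

def min_over_vertices(x, lo, hi):
    # Same minimum via brute force over the 2^n extreme points.
    return min(sum(ak * xk for ak, xk in zip(a, x))
               for a in itertools.product(*zip(lo, hi)))

random.seed(0)
for _ in range(100):
    lo = [random.uniform(-2, 0) for _ in range(3)]
    hi = [l + random.uniform(0, 2) for l in lo]
    x = [random.uniform(-1, 1) for _ in range(3)]
    assert abs(min_over_box(x, lo, hi) - min_over_vertices(x, lo, hi)) < 1e-9
print("vertex reduction confirmed on 100 random boxes")
```

The same observation underlies the rewriting of X in the proof below: the robust feasible set is cut out by the finitely many vertex constraints alone.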

Proof. (i) ⟺ (ii) Let x̄ be a highly robust weakly efficient solution to (1). Note that X can be rewritten as

X = { x ∈ R^n : a_j^T x − b_j ≥ 0 for all (a_j, b_j) ∈ [a_j, ā_j] × [b_j, b̄_j], j ∈ J }
  = { x ∈ R^n : (â_j^(l))^T x − b̄_j ≥ 0, l = 1, …, 2^n, j ∈ J }.

Then, we have

N(X, x̄) = { Σ_{j ∈ J} Σ_{l=1}^{2^n} γ_j^(l) â_j^(l) : γ_j^(l) ((â_j^(l))^T x̄ − b̄_j) = 0, γ_j^(l) ≥ 0, ∀l, ∀j }.   (33)

The conclusion follows from Theorem 18.

(i) ⟹ (iii) This implication follows by the definition of a highly robust weakly efficient solution.

(iii) ⟹ (ii) By the usual characterization of weakly efficient solutions (e.g., see [20, Prop. 18 (iii)]), we see that there exist λ, λ' ∈ Δ_m such that C̄λ ∈ N(X, x̄) and (C̄ + uv^T)λ' ∈ N(X, x̄). Thus, this implication follows by (33). □

It is worth noting that one can determine from Theorem 22 whether or not a given robust feasible point x̄ under the box constraint data uncertainty is a highly robust weakly efficient solution by solving finitely many linear equalities.

6.2 Norm constraint data uncertainty

Consider the norm constraint data uncertainty set

V_j = { a_j + δ_j v_j : v_j ∈ R^n, ‖Z_j v_j‖_s ≤ 1 } × [b_j, b̄_j],   (34)

where a_j ∈ R^n, b_j, b̄_j ∈ R with b_j ≤ b̄_j, δ_j > 0, and Z_j is an invertible symmetric n × n matrix, j ∈ J. Recall that ‖·‖_s denotes the s-norm, s ∈ [1, +∞], and s* ∈ [1, +∞] is the number such that 1/s + 1/s* = 1. The following simple fact about s-norms will be used later:

∂(‖·‖_s)(u) = { v : ‖v‖_{s*} ≤ 1, v^T u = ‖u‖_s },

where ∂h(x̄) denotes the usual convex subdifferential of a convex function h : R^n → R at x̄ ∈ R^n,

∂h(x̄) = { z ∈ R^n : z^T (y − x̄) ≤ h(y) − h(x̄) ∀y ∈ R^n }.

In this case, we have the following characterization of highly robust weakly efficient solutions.

Theorem 23 Consider the set U as in Theorem 18 and V_j, j ∈ J, as in (34). For each C ∈ U, consider the uncertain multi-objective linear programming problem in (1). Suppose that there exists x_0 ∈ R^n such that

a_j^T x_0 − b̄_j − δ_j ‖Z_j^{-1} x_0‖_{s*} > 0, j ∈ J.   (35)

Then, the following statements are equivalent:

(i) x̄ ∈ X is a highly robust weakly efficient solution to (P).


More information

Solutions Chapter 5. The problem of finding the minimum distance from the origin to a line is written as. min 1 2 kxk2. subject to Ax = b.

Solutions Chapter 5. The problem of finding the minimum distance from the origin to a line is written as. min 1 2 kxk2. subject to Ax = b. Solutions Chapter 5 SECTION 5.1 5.1.4 www Throughout this exercise we will use the fact that strong duality holds for convex quadratic problems with linear constraints (cf. Section 3.4). The problem of

More information

Set, functions and Euclidean space. Seungjin Han

Set, functions and Euclidean space. Seungjin Han Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Lecture 9 Monotone VIs/CPs Properties of cones and some existence results. October 6, 2008

Lecture 9 Monotone VIs/CPs Properties of cones and some existence results. October 6, 2008 Lecture 9 Monotone VIs/CPs Properties of cones and some existence results October 6, 2008 Outline Properties of cones Existence results for monotone CPs/VIs Polyhedrality of solution sets Game theory:

More information

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS

Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

1 Topology Definition of a topology Basis (Base) of a topology The subspace topology & the product topology on X Y 3

1 Topology Definition of a topology Basis (Base) of a topology The subspace topology & the product topology on X Y 3 Index Page 1 Topology 2 1.1 Definition of a topology 2 1.2 Basis (Base) of a topology 2 1.3 The subspace topology & the product topology on X Y 3 1.4 Basic topology concepts: limit points, closed sets,

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Introduction to Linear Algebra. Tyrone L. Vincent

Introduction to Linear Algebra. Tyrone L. Vincent Introduction to Linear Algebra Tyrone L. Vincent Engineering Division, Colorado School of Mines, Golden, CO E-mail address: tvincent@mines.edu URL: http://egweb.mines.edu/~tvincent Contents Chapter. Revew

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016

Linear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016 Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources

More information

Mean-Variance Utility

Mean-Variance Utility Mean-Variance Utility Yutaka Nakamura University of Tsukuba Graduate School of Systems and Information Engineering Division of Social Systems and Management -- Tennnoudai, Tsukuba, Ibaraki 305-8573, Japan

More information

Mathematics 530. Practice Problems. n + 1 }

Mathematics 530. Practice Problems. n + 1 } Department of Mathematical Sciences University of Delaware Prof. T. Angell October 19, 2015 Mathematics 530 Practice Problems 1. Recall that an indifference relation on a partially ordered set is defined

More information

MAT-INF4110/MAT-INF9110 Mathematical optimization

MAT-INF4110/MAT-INF9110 Mathematical optimization MAT-INF4110/MAT-INF9110 Mathematical optimization Geir Dahl August 20, 2013 Convexity Part IV Chapter 4 Representation of convex sets different representations of convex sets, boundary polyhedra and polytopes:

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

On duality theory of conic linear problems

On duality theory of conic linear problems On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu

More information

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q)

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) Andreas Löhne May 2, 2005 (last update: November 22, 2005) Abstract We investigate two types of semicontinuity for set-valued

More information

Advanced Microeconomics Fall Lecture Note 1 Choice-Based Approach: Price e ects, Wealth e ects and the WARP

Advanced Microeconomics Fall Lecture Note 1 Choice-Based Approach: Price e ects, Wealth e ects and the WARP Prof. Olivier Bochet Room A.34 Phone 3 63 476 E-mail olivier.bochet@vwi.unibe.ch Webpage http//sta.vwi.unibe.ch/bochet Advanced Microeconomics Fall 2 Lecture Note Choice-Based Approach Price e ects, Wealth

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

LECTURE 6. CONTINUOUS FUNCTIONS AND BASIC TOPOLOGICAL NOTIONS

LECTURE 6. CONTINUOUS FUNCTIONS AND BASIC TOPOLOGICAL NOTIONS ANALYSIS FOR HIGH SCHOOL TEACHERS LECTURE 6. CONTINUOUS FUNCTIONS AND BASIC TOPOLOGICAL NOTIONS ROTHSCHILD CAESARIA COURSE, 2011/2 1. The idea of approximation revisited When discussing the notion of the

More information

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous:

MATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous: MATH 51H Section 4 October 16, 2015 1 Continuity Recall what it means for a function between metric spaces to be continuous: Definition. Let (X, d X ), (Y, d Y ) be metric spaces. A function f : X Y is

More information

NEW SIGNS OF ISOSCELES TRIANGLES

NEW SIGNS OF ISOSCELES TRIANGLES INTERNATIONAL JOURNAL OF GEOMETRY Vol. 2 (2013), No. 2, 56-67 NEW SIGNS OF ISOSCELES TRIANGLES MAKSIM VASKOUSKI and KANSTANTSIN KASTSEVICH Abstract. In this paper we prove new signs of isosceles triangles

More information

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality Contents Introduction v Chapter 1. Real Vector Spaces 1 1.1. Linear and Affine Spaces 1 1.2. Maps and Matrices 4 1.3. Inner Products and Norms 7 1.4. Continuous and Differentiable Functions 11 Chapter

More information

Nonlinear Programming (NLP)

Nonlinear Programming (NLP) Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume

More information

GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III

GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III CONVEX ANALYSIS NONLINEAR PROGRAMMING THEORY NONLINEAR PROGRAMMING ALGORITHMS

More information

Separation of convex polyhedral sets with uncertain data

Separation of convex polyhedral sets with uncertain data Separation of convex polyhedral sets with uncertain data Milan Hladík Department of Applied Mathematics Charles University Malostranské nám. 25 118 00 Prague Czech Republic e-mail: milan.hladik@matfyz.cz

More information

58 Appendix 1 fundamental inconsistent equation (1) can be obtained as a linear combination of the two equations in (2). This clearly implies that the

58 Appendix 1 fundamental inconsistent equation (1) can be obtained as a linear combination of the two equations in (2). This clearly implies that the Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

Springer-Verlag Berlin Heidelberg

Springer-Verlag Berlin Heidelberg SOME CHARACTERIZATIONS AND PROPERTIES OF THE \DISTANCE TO ILL-POSEDNESS" AND THE CONDITION MEASURE OF A CONIC LINEAR SYSTEM 1 Robert M. Freund 2 M.I.T. Jorge R. Vera 3 Catholic University of Chile October,

More information

Maths 212: Homework Solutions

Maths 212: Homework Solutions Maths 212: Homework Solutions 1. The definition of A ensures that x π for all x A, so π is an upper bound of A. To show it is the least upper bound, suppose x < π and consider two cases. If x < 1, then

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

AN INTRODUCTION TO CONVEXITY

AN INTRODUCTION TO CONVEXITY AN INTRODUCTION TO CONVEXITY GEIR DAHL NOVEMBER 2010 University of Oslo, Centre of Mathematics for Applications, P.O.Box 1053, Blindern, 0316 Oslo, Norway (geird@math.uio.no) Contents 1 The basic concepts

More information

Robust Estimation and Inference for Extremal Dependence in Time Series. Appendix C: Omitted Proofs and Supporting Lemmata

Robust Estimation and Inference for Extremal Dependence in Time Series. Appendix C: Omitted Proofs and Supporting Lemmata Robust Estimation and Inference for Extremal Dependence in Time Series Appendix C: Omitted Proofs and Supporting Lemmata Jonathan B. Hill Dept. of Economics University of North Carolina - Chapel Hill January

More information

94 CHAPTER 3. VECTORS AND THE GEOMETRY OF SPACE

94 CHAPTER 3. VECTORS AND THE GEOMETRY OF SPACE 94 CHAPTER 3. VECTORS AND THE GEOMETRY OF SPACE 3.3 Dot Product We haven t yet de ned a multiplication between vectors. It turns out there are di erent ways this can be done. In this section, we present

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

Microeconomics, Block I Part 1

Microeconomics, Block I Part 1 Microeconomics, Block I Part 1 Piero Gottardi EUI Sept. 26, 2016 Piero Gottardi (EUI) Microeconomics, Block I Part 1 Sept. 26, 2016 1 / 53 Choice Theory Set of alternatives: X, with generic elements x,

More information

Sets, Functions and Metric Spaces

Sets, Functions and Metric Spaces Chapter 14 Sets, Functions and Metric Spaces 14.1 Functions and sets 14.1.1 The function concept Definition 14.1 Let us consider two sets A and B whose elements may be any objects whatsoever. Suppose that

More information

Semicontinuities of Multifunctions and Functions

Semicontinuities of Multifunctions and Functions Chapter 4 Semicontinuities of Multifunctions and Functions The notion of the continuity of functions is certainly well known to the reader. This topological notion plays an important role also for multifunctions.

More information

Learning with Submodular Functions: A Convex Optimization Perspective

Learning with Submodular Functions: A Convex Optimization Perspective Foundations and Trends R in Machine Learning Vol. 6, No. 2-3 (2013) 145 373 c 2013 F. Bach DOI: 10.1561/2200000039 Learning with Submodular Functions: A Convex Optimization Perspective Francis Bach INRIA

More information

Dichotomy Of Poincare Maps And Boundedness Of Some Cauchy Sequences

Dichotomy Of Poincare Maps And Boundedness Of Some Cauchy Sequences Applied Mathematics E-Notes, 2(202), 4-22 c ISSN 607-250 Available free at mirror sites of http://www.math.nthu.edu.tw/amen/ Dichotomy Of Poincare Maps And Boundedness Of Some Cauchy Sequences Abar Zada

More information

DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO

DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO QUESTION BOOKLET EECS 227A Fall 2009 Midterm Tuesday, Ocotober 20, 11:10-12:30pm DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO You have 80 minutes to complete the midterm. The midterm consists

More information

Convex hull of two quadratic or a conic quadratic and a quadratic inequality

Convex hull of two quadratic or a conic quadratic and a quadratic inequality Noname manuscript No. (will be inserted by the editor) Convex hull of two quadratic or a conic quadratic and a quadratic inequality Sina Modaresi Juan Pablo Vielma the date of receipt and acceptance should

More information

Extreme points of compact convex sets

Extreme points of compact convex sets Extreme points of compact convex sets In this chapter, we are going to show that compact convex sets are determined by a proper subset, the set of its extreme points. Let us start with the main definition.

More information

Normal Fans of Polyhedral Convex Sets

Normal Fans of Polyhedral Convex Sets Set-Valued Analysis manuscript No. (will be inserted by the editor) Normal Fans of Polyhedral Convex Sets Structures and Connections Shu Lu Stephen M. Robinson Received: date / Accepted: date Dedicated

More information

Week 3: Faces of convex sets

Week 3: Faces of convex sets Week 3: Faces of convex sets Conic Optimisation MATH515 Semester 018 Vera Roshchina School of Mathematics and Statistics, UNSW August 9, 018 Contents 1. Faces of convex sets 1. Minkowski theorem 3 3. Minimal

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

The Skorokhod reflection problem for functions with discontinuities (contractive case)

The Skorokhod reflection problem for functions with discontinuities (contractive case) The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection

More information

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi

Real Analysis Math 131AH Rudin, Chapter #1. Dominique Abdi Real Analysis Math 3AH Rudin, Chapter # Dominique Abdi.. If r is rational (r 0) and x is irrational, prove that r + x and rx are irrational. Solution. Assume the contrary, that r+x and rx are rational.

More information

Lecture Notes on Game Theory

Lecture Notes on Game Theory Lecture Notes on Game Theory Levent Koçkesen Strategic Form Games In this part we will analyze games in which the players choose their actions simultaneously (or without the knowledge of other players

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. 35, No., May 010, pp. 84 305 issn 0364-765X eissn 156-5471 10 350 084 informs doi 10.187/moor.1090.0440 010 INFORMS On the Power of Robust Solutions in Two-Stage

More information

The Kuhn-Tucker Problem

The Kuhn-Tucker Problem Natalia Lazzati Mathematics for Economics (Part I) Note 8: Nonlinear Programming - The Kuhn-Tucker Problem Note 8 is based on de la Fuente (2000, Ch. 7) and Simon and Blume (1994, Ch. 18 and 19). The Kuhn-Tucker

More information

Multiple Temptations 1

Multiple Temptations 1 Multiple Temptations 1 John E. Stovall 2 University of Rochester JOB MARKET PAPER 3 Forthcoming in Econometrica November 9, 2009 1 I would like to thank Val Lambson, Eddie Dekel, Bart Lipman, numerous

More information