Robust Solutions to Multi-Objective Linear Programs with Uncertain Data


Robust Solutions to Multi-Objective Linear Programs with Uncertain Data

M.A. Goberna, V. Jeyakumar, G. Li, J. Vicente-Pérez

Revised Version: October 1, 2014

Abstract. In this paper we examine multi-objective linear programming problems in the face of data uncertainty in both the objective function and the constraints. First, we derive a formula for the radius of robust feasibility, which guarantees constraint feasibility for all possible scenarios within a specified uncertainty set under affine data parametrization. We then present numerically tractable optimality conditions for minmax robust weakly efficient solutions, i.e., the weakly efficient solutions of the robust counterpart. We also consider highly robust weakly efficient solutions, i.e., robust feasible solutions which are weakly efficient for every possible instance of the objective matrix within a specified uncertainty set, providing lower bounds for the radius of highly robust efficiency that guarantee the existence of this type of solution under affine and rank-1 objective data uncertainty. Finally, we provide numerically tractable optimality conditions for highly robust weakly efficient solutions.

Keywords. Robust optimization. Multi-objective linear programming. Data uncertainty. Robust feasibility. Robust weakly efficient solutions.

1 Introduction

Consider the deterministic multi-objective linear programming problem

    (P)  V-min { (c_1^T x, ..., c_m^T x) : a_j^T x ≥ b_j, j ∈ J },

where V-min stands for vector minimization, c_i ∈ R^n (interpreted as a column vector) for i ∈ I := {1, ..., m}, the symbol ^T denotes transpose, x ∈ R^n is the decision variable,

[This research was partially supported by the Australian Research Council, Discovery Project DP120100467, the MICINN of Spain, Grant MTM2011-29064-C03-02, and Generalitat Valenciana, Grant ACOMP/2013/062. Corresponding author: M.A. Goberna, Tel. +34 965903533, Fax +34 965903531. M.A. Goberna and J. Vicente-Pérez: Dept. of Statistics and Operations Research, Alicante University, 03071 Alicante, Spain. V. Jeyakumar and G. Li: Dept. of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail addresses: mgoberna@ua.es (M.A. Goberna), v.jeyakumar@unsw.edu.au (V. Jeyakumar), g.li@unsw.edu.au (G. Li), jose.vicente@ua.es (J. Vicente-Pérez).]

and (a_j, b_j) ∈ R^n × R, for j ∈ J := {1, ..., p}, are the constraint data of the problem. The real m × n matrix C whose rows are the vectors c_i, i ∈ I, is called the objective matrix. The problem (P) has been extensively studied in the literature (see, e.g., the overviews [7] and [15]), where perfect information is often assumed (that is, accurate values for the input quantities or parameters), despite the reality that such precise knowledge is rarely available for real-world optimization problems. The data of real-world optimization problems are often uncertain (that is, not known exactly at the time the decision is made) due to estimation errors, prediction errors, or lack of information. Scalar uncertain optimization problems have traditionally been treated via sensitivity analysis, which estimates the impact of small data perturbations on the optimal value, while robust optimization, which provides a deterministic framework for uncertain problems, has recently emerged as a powerful alternative approach (see, for instance, [2, 4, 17, 22, 27]). Particular types of uncertain multi-objective linear programming problems have already been studied: [38] considers changes in one objective function via sensitivity analysis, while [36] considers changes in the whole objective function x ↦ Cx, and [21] changes in the constraints, using different robustness approaches. The purpose of the present work is to study multi-objective linear programming problems in the face of data uncertainty in both the objective function and the constraints from a robustness perspective.
Following the robust optimization framework, the multi-objective problem (P) under data uncertainty in both the objective matrix and the constraint data can be captured by a parameterized multi-objective linear programming problem of the form

    (P)  V-min { (c_1^T x, ..., c_m^T x) : a_j^T x ≥ b_j, j ∈ J },

where the input data c_i, i ∈ I, and (a_j, b_j), j ∈ J, are uncertain: C := (c_1, ..., c_m) ∈ U ⊆ R^{n×m} and (a_j, b_j) ∈ V_j ⊆ R^{n+1}, j ∈ J, where U and the V_j, j ∈ J, are specified uncertainty sets that are bounded, but often infinite. By enforcing the constraints for all possible realizations within V_j, j ∈ J, the uncertain problem becomes the uncertain multi-objective linear semi-infinite programming problem

    V-min { (c_1^T x, ..., c_m^T x) : a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J },   (1)

where (c_1, ..., c_m) ∈ U. Its feasible set,

    X := {x ∈ R^n : a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J},   (2)

is called the robust feasible set of (P), and any x ∈ X is called a robust feasible solution. Following the recent work on robust linear programming (see [2]), some of the key questions of multi-objective linear programming under data uncertainty include:

I. (Guaranteeing robust feasible solutions) How can non-emptiness of the robust feasible set X be guaranteed for specified uncertainty sets V_j, j ∈ J?

II. (Guaranteeing and identifying robust efficient solutions) Which robust feasible solutions of (P) are robust efficient solutions (see the paragraph below), immune to objective data uncertainty, and what mathematical characterizations identify them? How can the existence of robust efficient solutions be guaranteed?

III. (Numerical tractability of robust efficient solutions) For which classes of uncertainty sets U and V_j, j ∈ J, can the characterizations of robust efficient solutions be checked numerically using existing multi-objective programming techniques?

In this paper, we provide some answers to the above questions for the uncertain multi-objective linear programming problem (P) by focusing on two notions of robust optimal solution. The first is the minmax robust efficient solution, or simply robust efficient solution, following the approach widely used for robust scalar optimization problems (see also [16] and [31] for recent developments); it corresponds to an efficient solution of a deterministic worst-case (minmax) multi-objective optimization problem. The second is the highly robust efficient solution, as in [26, 30] (see also [38] and [36, Section 4]); it requires the preservation of efficiency for all (c_1, ..., c_m) ∈ U. The existence of this type of solution therefore implies that the uncertainty set U is small in some sense (e.g., a Cartesian product of balls in R^n, or a segment in R^{nm} emanating from some fixed data (c_1, ..., c_m)). To compensate for the smallness of the uncertainty set, we focus our analysis on the larger class of highly robust solutions: highly robust weakly efficient solutions. By contrast, [36, Section 4] considers highly robust efficient solutions instead of highly robust weakly efficient solutions. For the convenience of the reader, other notions of robust solutions are summarized in the appendix.
Our key contributions are outlined as follows:

(1) We first introduce the concept of the radius of robust feasibility in Section 3, guaranteeing non-emptiness of the robust feasible set X of (P) under affinely parameterized data uncertainty. This concept is inspired by the notion of consistency radius used in linear semi-infinite programming to guarantee the feasibility of the nominal problem under perturbations preserving the number of constraints ([8], [9]). We derive a formula for the effective computation of the radius of robust feasibility that also applies to single-objective linear programming under the same type of uncertainty.

(2) We examine the robust weakly efficient solutions of an uncertain multi-objective linear programming problem in Section 4, and establish numerically tractable mathematical characterizations of robust weakly efficient solutions under various types of data uncertainty.

(3) We present, in Section 5, an explicit formula for the radius of highly robust efficiency, i.e., the greatest value of a certain parameter associated with two families

of uncertainty sets for the objective data such that the corresponding multi-objective linear programming problems have highly robust weakly efficient solutions. The families in question are formed by Cartesian products of balls in R^n and by segments in R^{mn} in the direction of rank-1 matrices (the same type of uncertainty considered in [36, Section 4]). Recall that rank-1 matrices are the products of non-zero column vectors by non-zero row vectors (see [35] for other characterizations). These matrices are frequently used in computational algebra (as building blocks for more complex matrices), in conic optimization (as the rank-1 matrices are the extreme rays of the semidefinite cone), and in statistics (as the singular value decomposition gives the best rank-1 approximation of a given matrix with respect to the Frobenius and spectral norms).

(4) Finally, we provide, in Section 6, numerically tractable mathematical characterizations of highly robust weakly efficient solutions under various types of data uncertainty.

2 Preliminaries

We begin this section by introducing the necessary notation and concepts of multi-objective linear programming. We denote by 0_n and ‖·‖ the zero vector and the Euclidean norm in R^n, respectively. The closed unit ball and the distance associated with this norm are denoted by B_n and d, respectively. Given Z ⊆ R^n, int Z, cl Z, bd Z, and conv Z denote the interior, the closure, the boundary, and the convex hull of Z, respectively, whereas cone Z := R_+ conv Z denotes the convex conical hull of Z ∪ {0_n}. For x, y ∈ R^m, we write x ≤ y (x < y) when x_i ≤ y_i (x_i < y_i, respectively) for all i ∈ I. The simplex Δ_m in the space of criteria R^m is defined as Δ_m := {λ ∈ R^m_+ : Σ_{i=1}^m λ_i = 1}. The following known dual characterizations of solutions of semi-infinite linear inequality systems play a key role in the next section in developing the radius of robust feasibility formula.

Lemma 1 ([23, Corollaries 3.1.1 and 3.1.2]) Let T be an arbitrary index set.
Then {x ∈ R^n : u_t^T x ≥ v_t, t ∈ T} ≠ ∅ if and only if (0_n, 1) ∉ cl cone {(u_t, v_t) : t ∈ T}. In that case, u^T x ≥ v holds for every x ∈ R^n such that u_t^T x ≥ v_t, ∀t ∈ T, if and only if

    (u, v) ∈ cl { cone {(u_t, v_t) : t ∈ T} + R_+ (0_n, −1) }.   (3)

We now apply Lemma 1 to the robust feasible set

    X := {x ∈ R^n : a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J}.

Proposition 2 (Feasibility and polyhedrality of X) Let X be as in (2). Then the following statements hold:

(i) X ≠ ∅ if and only if (0_n, 1) ∉ cl cone {∪_{j∈J} V_j}.

(ii) If X ≠ ∅ and the uncertainty sets V_j are all polyhedral, then X is a polyhedral set too.

Proof. (i) This is a straightforward consequence of Lemma 1.

(ii) Assume that X ≠ ∅. If the uncertainty sets are polyhedral, we can write V_j = conv E_j + cone D_j, with E_j and D_j finite sets, for each j ∈ J. Since the cone in (3) is

    cl { cone {∪_{j∈J} V_j} + R_+ (0_n, −1) } = cone {∪_{j∈J} (E_j ∪ D_j)} + R_+ (0_n, −1)

and, by the separation theorem, two non-empty closed convex sets coincide if and only if they have the same linear consequences, we have

    X = {x ∈ R^n : a^T x ≥ b, (a, b) ∈ ∪_{j∈J} (E_j ∪ D_j)}.

Hence, the conclusion follows.

Concerning Proposition 2, if the uncertainty set V_j contains no line, then E_j, defined as in the proof of (ii), is the set of extreme points of V_j. In particular, if V_j is a compact convex set for each j ∈ J and the strict robust feasibility condition

    {x ∈ R^n : a_j^T x > b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J} ≠ ∅   (4)

holds, then cone {∪_{j∈J} V_j} is closed [23, Theorem 5.3 (ii)], and this in turn implies [23, p. 81] that the so-called characteristic cone

    K(V) := cone {∪_{j∈J} V_j} + R_+ {(0_n, −1)}

(to be used later) is closed too. Moreover, according to [23, Theorem 9.3], X is a compact set if and only if (0_n, −1) ∈ int K(V). Particular cases of Proposition 2 (ii) can be found in the literature (see [2] and references therein).

3 Radius of robust feasibility

In this section, we first discuss the feasibility of our uncertain multi-objective model under affine constraint data perturbations. In other words, for any given vectors c_i ∈ R^n, i ∈ I, we study the feasibility of the problem

    (P_α)  V-min (c_1^T x, ..., c_m^T x)  s.t.  a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j^α, j ∈ J,

for α ≥ 0, where the uncertain set-valued mapping V_j^α, for j ∈ J := {1, ..., p}, takes the form

    V_j^α := (ā_j, b̄_j) + α B_{n+1},   (5)

and the nominal linear system {ā_j^T x ≥ b̄_j, j ∈ J} is assumed to be feasible.
The radius of robust feasibility, ρ(V), associated with V := ∏_{j=1}^p V_j^α, with V_j^α as in (5), is defined as

    ρ(V) := sup {α ∈ R_+ : (P_α) is feasible}.   (6)

By Lemma 1, we first observe that the radius of robust feasibility ρ(V) is a (finite) nonnegative real number since, given j ∈ J, (0_n, 1) ∈ (ā_j, b̄_j) + α B_{n+1} for α positive and large enough, in which case the corresponding problem (P_α) is not feasible.
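Proposition 2(i) reduces robust feasibility to a cone-membership test, and for finitely many generators the cone is finitely generated, hence closed, so the test can be carried out numerically with nonnegative least squares. A minimal sketch (the two small systems below are toy data chosen here for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import nnls  # solves min ||G @ lam - t|| subject to lam >= 0


def is_feasible(generators, tol=1e-9):
    """Check {x : a_j^T x >= b_j, j in J} != emptyset for a finite system.

    By Lemma 1 (with the cone closed, since it is finitely generated),
    the system is infeasible iff (0_n, 1) lies in cone{(a_j, b_j) : j in J},
    i.e. iff the nonnegative least-squares residual below is zero.
    """
    G = np.asarray(generators, dtype=float).T      # columns are the (a_j, b_j)
    target = np.zeros(G.shape[0])
    target[-1] = 1.0                               # the vector (0_n, 1)
    _, residual = nnls(G, target)
    return residual > tol                          # positive residual => feasible


# {x >= 0, -x >= -1}, i.e. 0 <= x <= 1: feasible
print(is_feasible([(1.0, 0.0), (-1.0, -1.0)]))     # True
# {x >= 1, -x >= 0}, i.e. x >= 1 and x <= 0: infeasible
print(is_feasible([(1.0, 1.0), (-1.0, 0.0)]))      # False
```

For polyhedral uncertainty sets V_j, the same test applies with the generators replaced by the finitely many points of E_j ∪ D_j from Proposition 2(ii).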

The next result provides a formula for the radius of robust feasibility which involves the so-called hypographical set ([8]) of the system {ā_j^T x ≥ b̄_j, j ∈ J}, defined as

    H(ā, b̄) := conv {(ā_j, b̄_j) : j ∈ J} + R_+ {(0_n, −1)},   (7)

where ā := (ā_1, ..., ā_p) ∈ (R^n)^p and b̄ := (b̄_1, ..., b̄_p) ∈ R^p. We observe that H(ā, b̄) is the sum of the polytope conv {(ā_j, b̄_j) : j ∈ J} and the closed half-line R_+ {(0_n, −1)}, so it is a polyhedral convex set.

Lemma 3 Let α ≥ 0 and (a_j, b_j) ∈ R^n × R, j ∈ J. Suppose that

    (0_n, 1) ∈ cl cone { ∪_{j∈J} [(a_j, b_j) + α B_{n+1}] }.

Then, for all ε > 0, we have

    (0_n, 1) ∈ cone { ∪_{j∈J} [(a_j, b_j) + (α + ε) B_{n+1}] }.

Proof. Let ε > 0. To see the conclusion, assume by contradiction that

    (0_n, 1) ∉ cone { ∪_{j∈J} [(a_j, b_j) + (α + ε) B_{n+1}] }.

Then the separation theorem implies that there exists (ξ, r) ∈ R^{n+1} \ {0_{n+1}} such that, for all (y, s) ∈ cone { ∪_{j∈J} [(a_j, b_j) + (α + ε) B_{n+1}] }, one has

    r = ⟨(ξ, r), (0_n, 1)⟩ ≤ 0 ≤ ⟨(ξ, r), (y, s)⟩,   (8)

where ⟨·, ·⟩ denotes the usual inner product, i.e., ⟨(ξ, r), (y, s)⟩ = ξ^T y + r s. Recall that (0_n, 1) ∈ cl cone { ∪_{j∈J} [(a_j, b_j) + α B_{n+1}] }. So, there exist sequences {(y_k, s_k)}_{k∈N} ⊆ R^n × R, {λ_j^k}_{k∈N} ⊆ R_+ and {(z_j^k, t_j^k)}_{k∈N} ⊆ B_{n+1}, j ∈ J, such that (y_k, s_k) → (0_n, 1) and

    (y_k, s_k) = Σ_{j=1}^p λ_j^k [ (a_j, b_j) + α (z_j^k, t_j^k) ].

If {Σ_{j=1}^p λ_j^k}_{k∈N} is a bounded sequence then, passing to a subsequence if necessary, we obtain

    (0_n, 1) ∈ cone { ∪_{j∈J} [(a_j, b_j) + α B_{n+1}] } ⊆ cone { ∪_{j∈J} [(a_j, b_j) + (α + ε) B_{n+1}] }.

Thus, the claim is true whenever {Σ_{j=1}^p λ_j^k}_{k∈N} is bounded, and so we may assume that Σ_{j=1}^p λ_j^k → +∞ as k → ∞. Let (y, s) ∈ B_{n+1} be such that ⟨(y, s), (ξ, r)⟩ = −‖(ξ, r)‖. Note that

    Σ_{j=1}^p λ_j^k [ (a_j, b_j) + α (z_j^k, t_j^k) ] + ε (Σ_{j=1}^p λ_j^k) (y, s) ∈ cone { ∪_{j∈J} [(a_j, b_j) + (α + ε) B_{n+1}] }.

Then, (8) implies that

    0 ≤ ⟨(ξ, r), Σ_{j=1}^p λ_j^k [ (a_j, b_j) + α (z_j^k, t_j^k) ]⟩ − ε (Σ_{j=1}^p λ_j^k) ‖(ξ, r)‖ = ⟨(ξ, r), (y_k, s_k)⟩ − ε (Σ_{j=1}^p λ_j^k) ‖(ξ, r)‖.

Passing to the limit, we arrive at a contradiction, since (ξ, r) ≠ 0_{n+1}, ε > 0, Σ_{j=1}^p λ_j^k → +∞ and (y_k, s_k) → (0_n, 1).

We now provide our promised formula for the radius of robust feasibility. Observe that, since 0_{n+1} ∉ H(ā, b̄) by Proposition 2, d(0_{n+1}, H(ā, b̄)) can be computed by minimizing ‖·‖² on H(ā, b̄) (i.e., by solving a convex quadratic program).

Theorem 4 (Radius of robust feasibility) For (P_α), let (ā_j, b̄_j) ∈ R^n × R, j ∈ J, with {x ∈ R^n : ā_j^T x ≥ b̄_j, j ∈ J} ≠ ∅. Let V_j^α := (ā_j, b̄_j) + α B_{n+1}, j ∈ J, and V := ∏_{j=1}^p V_j^α. Let ρ(V) be the radius of robust feasibility given in (6) and let H(ā, b̄) be the hypographical set given in (7). Then,

    ρ(V) = d(0_{n+1}, H(ā, b̄)).

Proof. If a given (v, w) ∈ (R^n)^p × R^p is interpreted as a perturbation of (v̄, w̄) ∈ (R^n)^p × R^p, we can measure the size of this perturbation as the supremum of the distances between the vectors of coefficients corresponding to the same index. This can be done by endowing the parameter space (R^n)^p × R^p with the metric d̃ defined by

    d̃((v, w), (p, q)) := max_{j=1,...,p} ‖(v_j, w_j) − (p_j, q_j)‖, for (v, w), (p, q) ∈ (R^n)^p × R^p.

Let ā ∈ (R^n)^p and b̄ ∈ R^p be as in (7). Denote the set consisting of all inconsistent parameters by Θ_i, that is,

    Θ_i := {(v, w) ∈ (R^n)^p × R^p : {x ∈ R^n : v_j^T x ≥ w_j, j = 1, ..., p} = ∅}.

We now show that

    d̃((ā, b̄), Θ_i) = d(0_{n+1}, H(ā, b̄)).   (9)

By Lemma 1, d(0_{n+1}, H(ā, b̄)) > 0. Let (a*, b*) ∈ H(ā, b̄) be such that ‖(a*, b*)‖ = d(0_{n+1}, H(ā, b̄)). We associate with (a*, b*) ∈ R^{n+1} the linear system formed by the inequality (a*)^T x ≥ b* repeated p times, with corresponding parameter (a*, b*) ∈ (R^n)^p × R^p (the context determines, in each case, whether (a*, b*) is to be interpreted as a vector or as a parameter). We have 0_{n+1} ∈ H_1, where

    H_1 := H(ā, b̄) − (a*, b*) = conv {(ā_j − a*, b̄_j − b*) : j = 1, ..., p} + R_+ {(0_n, −1)}.

So, there exist λ_j ≥ 0 with Σ_{j=1}^p λ_j = 1 and μ ≥ 0 such that

    0_{n+1} = Σ_{j=1}^p λ_j (ā_j − a*, b̄_j − b*) + μ (0_n, −1).

This shows that

    (0_n, μ + 1/k) = Σ_{j=1}^p λ_j (ā_j − a*, b̄_j − b* + 1/k), k ∈ N.

So, {x : (ā_j − a*)^T x ≥ b̄_j − b* + 1/k, j = 1, ..., p} = ∅, since (0_n, μ + 1/k), with μ + 1/k > 0, belongs to the cone generated by the perturbed data. Thus, (ā − a*, b̄ − b* + 1/k) ∈ Θ_i, and so (ā − a*, b̄ − b*) ∈ cl Θ_i. It follows that

    d̃((ā, b̄), Θ_i) = d̃((ā, b̄), cl Θ_i) ≤ ‖(a*, b*)‖ = d(0_{n+1}, H(ā, b̄)).

To see (9), suppose on the contrary that d̃((ā, b̄), Θ_i) < d(0_{n+1}, H(ā, b̄)). Then, there exist ε_0 > 0, with ε_0 < ‖(a*, b*)‖, and (â, b̂) ∈ bd Θ_i such that

    d̃((ā, b̄), (â, b̂)) = d̃((ā, b̄), Θ_i) < ‖(a*, b*)‖ − ε_0.

Then, one can find {(â^k, b̂^k)}_{k∈N} ⊆ Θ_i such that (â^k, b̂^k) → (â, b̂). So, Lemma 1 gives us that

    (0_n, 1) ∈ cl cone {(â_j^k, b̂_j^k) : j = 1, ..., p} = cone {(â_j^k, b̂_j^k) : j = 1, ..., p},

the cone being closed because it is finitely generated. Thus, there exist λ_j^k ≥ 0 such that (0_n, 1) = Σ_{j=1}^p λ_j^k (â_j^k, b̂_j^k). Note that Σ_{j=1}^p λ_j^k > 0, and so

    0_{n+1} = Σ_{j=1}^p (λ_j^k / Σ_{l=1}^p λ_l^k) (â_j^k, b̂_j^k) + (1 / Σ_{l=1}^p λ_l^k) (0_n, −1).

Then, as k → ∞,

    ‖ Σ_{j=1}^p (λ_j^k / Σ_{l=1}^p λ_l^k) (â_j − â_j^k, b̂_j − b̂_j^k) ‖ → 0.

So, 0_{n+1} ∈ cl H(â, b̂) = H(â, b̂). It then follows that there exist λ_j ≥ 0 with Σ_{j=1}^p λ_j = 1 and μ ≥ 0 such that

    0_{n+1} = Σ_{j=1}^p λ_j (â_j, b̂_j) + μ (0_n, −1).

Thus, we have

    ‖ Σ_{j=1}^p λ_j (ā_j, b̄_j) + μ (0_n, −1) ‖
      = ‖ Σ_{j=1}^p λ_j (ā_j, b̄_j) + μ (0_n, −1) − [ Σ_{j=1}^p λ_j (â_j, b̂_j) + μ (0_n, −1) ] ‖
      = ‖ Σ_{j=1}^p λ_j [ (ā_j, b̄_j) − (â_j, b̂_j) ] ‖
      ≤ d̃((ā, b̄), (â, b̂)) < ‖(a*, b*)‖ − ε_0,

where the first inequality follows from the definition of d̃ together with λ_j ≥ 0 and Σ_{j=1}^p λ_j = 1. Note that Σ_{j=1}^p λ_j (ā_j, b̄_j) + μ (0_n, −1) ∈ H(ā, b̄). We see that H(ā, b̄) ∩ (‖(a*, b*)‖ − ε_0) B_{n+1} ≠ ∅. This shows that d(0_{n+1}, H(ā, b̄)) ≤ ‖(a*, b*)‖ − ε_0, which contradicts the fact that d(0_{n+1}, H(ā, b̄)) = ‖(a*, b*)‖. Therefore, (9) holds.

Let α ∈ R_+ be such that (P_α) is feasible. Then, (a, b) ∈ Θ_i implies that d̃((ā, b̄), (a, b)) > α. Therefore, (9) gives us that d(0_{n+1}, H(ā, b̄)) = d̃((ā, b̄), Θ_i) ≥ α. Thus, ρ(V) ≤ d(0_{n+1}, H(ā, b̄)).

We now show that ρ(V) = d(0_{n+1}, H(ā, b̄)). To see this, we proceed by contradiction and suppose that ρ(V) < d(0_{n+1}, H(ā, b̄)). Then, there exists ε > 0 such

that ρ(V) + 2ε < d(0_{n+1}, H(ā, b̄)). Let α_0 := ρ(V) + ε. Then, by the definition of ρ(V), (P_{α_0}) is not feasible, that is,

    {x ∈ R^n : c^T x ≥ d, (c, d) ∈ ∪_{j=1}^p [(ā_j, b̄_j) + α_0 B_{n+1}]} = ∅.

Hence, it follows from Lemma 1 that

    (0_n, 1) ∈ cl cone { ∪_{j=1}^p [(ā_j, b̄_j) + α_0 B_{n+1}] }.

By applying Lemma 3, we can find λ_j ≥ 0 and (z_j, t_j) ∈ B_{n+1}, j = 1, ..., p, such that

    (0_n, 1) = Σ_{j=1}^p λ_j [ (ā_j, b̄_j) + (α_0 + ε) (z_j, t_j) ].

Let (v_j, w_j) := (ā_j, b̄_j) + (α_0 + ε) (z_j, t_j), j = 1, ..., p, v := (v_1, ..., v_p) ∈ (R^n)^p and w := (w_1, ..., w_p) ∈ R^p. Then, d̃((ā, b̄), (v, w)) ≤ α_0 + ε and

    (0_n, 1) = Σ_{j=1}^p λ_j (v_j, w_j) ∈ cone {(v_j, w_j) : j = 1, ..., p}.

So, Lemma 1 implies that {x ∈ R^n : v_j^T x ≥ w_j, j = 1, ..., p} = ∅, and hence (v, w) ∈ Θ_i. Thus,

    d̃((ā, b̄), Θ_i) ≤ d̃((ā, b̄), (v, w)) ≤ α_0 + ε = ρ(V) + 2ε.

Thus, from (9), we see that d(0_{n+1}, H(ā, b̄)) = d̃((ā, b̄), Θ_i) ≤ ρ(V) + 2ε. This contradicts the fact that ρ(V) + 2ε < d(0_{n+1}, H(ā, b̄)). So, the conclusion follows.

Remark 5 We note that we have given a self-contained proof of Theorem 4 by exploiting the finiteness of the linear inequality system. This proof is totally different from the one given in [21, Theorem 2.5], where massive use was made of the very technical stability machinery for linear semi-infinite systems developed in [8, 9].

In the following example we show how the radius of robust feasibility of (P_α) can be calculated using Theorem 4.

Example 6 (Calculating the radius of robust feasibility) Consider (P_α) with n = 3, J = {1, ..., 5} and V_j^α as in (5), with

    {(ā_j, b̄_j) : j ∈ J} = { (2, 1, 2, −6), (1, 2, 2, −6), (1, 0, 0, −3), (0, 1, 0, −3), (0, 0, 1, −3) }.   (10)
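By Theorem 4, computing ρ(V) for this data amounts to projecting the origin onto H(ā, b̄), i.e., minimizing ‖ Σ_j λ_j (ā_j, b̄_j) + μ (0_3, −1) ‖² over λ ∈ Δ_5 and μ ≥ 0, a convex quadratic program. A numerical sketch of this computation (assuming scipy is available; the data matrix encodes the reading of (10) used here):

```python
import numpy as np
from scipy.optimize import minimize

# One row per j: the data (a_j, b_j) of Example 6, as read from (10).
AB = np.array([[2, 1, 2, -6],
               [1, 2, 2, -6],
               [1, 0, 0, -3],
               [0, 1, 0, -3],
               [0, 0, 1, -3]], dtype=float)
ray = np.array([0.0, 0.0, 0.0, -1.0])    # recession direction (0_3, -1) of H(a, b)


def sq_norm(t):
    """Squared norm of the point of H(a, b) parameterized by (lambda, mu)."""
    lam, mu = t[:5], t[5]
    z = lam @ AB + mu * ray
    return z @ z


cons = ({'type': 'eq', 'fun': lambda t: np.sum(t[:5]) - 1.0},)  # lambda in the simplex
bounds = [(0.0, None)] * 6                                       # lambda >= 0, mu >= 0
t0 = np.append(np.full(5, 0.2), 0.0)                             # feasible starting point
res = minimize(sq_norm, t0, method='SLSQP', bounds=bounds, constraints=cons)

radius = float(np.sqrt(res.fun))
print(radius)   # approx 3.0551, i.e. sqrt(28/3)
```

The minimizer puts equal weight on the last three data points, recovering the projection (1/3, 1/3, 1/3, −3).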

The minimum of ‖·‖² on H(ā, b̄), whose linear representation

    { x_1 + x_2 − x_3 ≤ 1,  3x_1 + 3x_2 + 3x_3 + 4x_4 ≤ −9,  −x_1 − x_2 − x_3 ≤ −1,
      −3x_1 + x_2 + x_3 ≤ 1,  x_1 − 3x_2 + x_3 ≤ 1,  −x_1 − x_2 + 3x_3 ≤ 3 }

is obtained from (7) and (10) by Fourier–Motzkin elimination, is attained at the point (1/3, 1/3, 1/3, −3). So,

    ρ(V) = ‖(1/3, 1/3, 1/3, −3)‖ = √(28/3).

4 Tractable optimality conditions for robust solutions

In this section we deal with an uncertain linear multi-objective programming problem

    (P)  V-min (c_1^T x, ..., c_m^T x)  s.t.  a_j^T x ≥ b_j, j ∈ J,   (11)

where the constraint data (a_j, b_j) are uncertain and belong to the bounded uncertainty set V_j, for j ∈ J, and the objective data c_i are uncertain too and belong to the bounded uncertainty set U_i, for i ∈ I, so that U = ∏_{i=1}^m U_i. A (robust) decision maker would assume that, when selecting a decision x which is feasible for every possible scenario in the constraint data uncertainty sets, each objective function x ↦ c_i^T x will attain its worst possible value (risk) sup_{c_i ∈ U_i} c_i^T x. So, the robust counterpart of the above uncertain linear multi-objective programming problem is the convex linearly constrained programming problem

    V-min f(x)  s.t.  a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J,   (12)

where f(x) = (σ_{U_1}(x), ..., σ_{U_m}(x)) and σ_{U_i}(x) := sup_{c_i ∈ U_i} c_i^T x is the support function of U_i for each i ∈ I. Since σ_{U_i} = σ_{cl conv U_i}, the objective function f in (12) is the same for the uncertainty sets {U_i, i ∈ I} and {cl conv U_i, i ∈ I}. Moreover, by Lemma 1 and the separation theorem, the feasible set of (12) is also the same for the uncertainty sets {V_j, j ∈ J} and {cl conv V_j, j ∈ J}, as

    cl { cone {∪_{j∈J} cl conv V_j} + R_+ {(0_n, −1)} } = cl { cone {∪_{j∈J} V_j} + R_+ {(0_n, −1)} }.

Hence, we can assume without loss of generality that the U_i and V_j are all compact convex sets, and then σ_{U_i}(x) := max_{c_i ∈ U_i} c_i^T x is a finite-valued convex function for each i ∈ I.

Definition 7 (Robust weakly efficient solution) We say that a point x̄ ∈ X := {x ∈ R^n : a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J} is a (minmax) robust weakly efficient solution of (P) if it is a weakly efficient solution of its robust counterpart (12), that is, if there is no x̂ ∈ X such that σ_{U_i}(x̂) < σ_{U_i}(x̄) for all i ∈ I.

When X is bounded, the continuous function σ_{U_i} attains its minimum on X for each i ∈ I, and this guarantees the existence of (minmax) robust weakly efficient solutions. It is easy to see that (12) is equivalent to

    V-min (z_1, ..., z_m)  s.t.  z_i ≥ c_i^T x, ∀c_i ∈ U_i, i ∈ I;  a_j^T x ≥ b_j, ∀(a_j, b_j) ∈ V_j, j ∈ J,   (13)

in the sense that a feasible point x̄ is a weakly efficient solution of (12) if and only if (x̄, f(x̄)) ∈ R^n × R^m is a weakly efficient solution of (13). Consequently, x̄ ∈ X is a (minmax) robust weakly efficient solution of (P) if and only if (x̄, f(x̄)) is a weakly efficient solution of (13). Below, we show that robust solutions of uncertain multi-objective linear programming problems with various objective data uncertainty sets can be found by solving deterministic multi-objective linear programming problems, possibly with cone constraints, and so can be computed with the existing technology of deterministic multi-objective programming (cf. [15]). These classes of commonly used data uncertainty sets include box data uncertainty, norm data uncertainty and ellipsoidal data uncertainty. We note that these data uncertainty sets have been successfully employed in modeling uncertain scalar optimization problems arising in diverse areas such as finance [10], management science [5], statistical learning [33, 29, 32] and engineering [3, 34]. For excellent comprehensive surveys, see [2, 6].
4.1 Box data uncertainty

Consider the box data uncertainty sets

    U_i = [c̲_i, c̄_i], i ∈ I,   (14)

    V_j = [a̲_j, ā_j] × [b̲_j, b̄_j], j ∈ J,   (15)

where c̲_i, c̄_i ∈ R^n with c̲_i ≤ c̄_i, i ∈ I, and a̲_j, ā_j ∈ R^n with a̲_j ≤ ā_j, and b̲_j, b̄_j ∈ R with b̲_j ≤ b̄_j, j ∈ J. Denote the extreme points of [c̲_i, c̄_i] and [a̲_j, ā_j] by {ĉ_i^(1), ..., ĉ_i^(2^n)} and {â_j^(1), ..., â_j^(2^n)}, respectively.

Theorem 8 Consider the uncertain programming problem (P) with data uncertainty sets U_i and V_j given as in (14) and (15). Then, x̄ is a robust weakly efficient solution of (P) if and only if x̄ is a weakly efficient solution of the following deterministic multi-objective linear programming problem:

    V-min (z_1, ..., z_m)  s.t.  z_i ≥ (ĉ_i^(k))^T x, i ∈ I, k = 1, ..., 2^n;  (â_j^(k))^T x ≥ b̄_j, j ∈ J, k = 1, ..., 2^n.
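For the objective part of Theorem 8, the worst-case value max_{c ∈ [c̲, c̄]} c^T x also has the coordinatewise closed form Σ_i (c̄_i x_i if x_i ≥ 0, else c̲_i x_i), which avoids the 2^n enumeration for large n. A quick cross-check of the two computations on random data (an illustrative sketch, not part of the paper):

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
n = 4
lo = rng.standard_normal(n)          # box [lo, hi] with lo <= hi componentwise
hi = lo + rng.random(n)
x = rng.standard_normal(n)

# (a) enumerate all 2^n extreme points of the box, as in Theorem 8
corners = [np.where(np.array(s, dtype=bool), hi, lo)
           for s in itertools.product([0, 1], repeat=n)]
worst_enum = max(c @ x for c in corners)

# (b) coordinatewise closed form of the support function of the box
worst_closed = float(np.sum(np.where(x >= 0, hi * x, lo * x)))

print(abs(worst_enum - worst_closed) < 1e-9)   # True
```

The same reduction applies to the constraint part: the minimum of a_j^T x over the box [a̲_j, ā_j] is attained at an extreme point, selected coordinatewise by the sign of x.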

Proof. Let $\mathcal{U}_i$ and $\mathcal{V}_j$ be the box data uncertainty sets given in (14) and (15), respectively. Then, the robust multi-objective linear programming problem (12) can be equivalently rewritten as
$$\begin{array}{rl}
\text{V-min} & (z_1,\ldots,z_m) \\
\text{s.t.} & z_i \ge \max_{c_i \in \mathcal{U}_i} c_i^\top x,\ i \in I, \\
& \min_{(a_j,b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} \ge 0,\ j \in J.
\end{array}$$
Note that a linear function attains both its minimum and its maximum over a polytope at extreme points of the polytope. Hence, for each $i \in I$ and each $j \in J$, we get
$$\max_{c_i \in \mathcal{U}_i} c_i^\top x = \max_{1 \le k \le 2^n} (\hat{c}_i^{(k)})^\top x, \qquad
\min_{(a_j,b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} = \min_{1 \le k \le 2^n} \{(\hat{a}_j^{(k)})^\top x\} - \bar{b}_j.$$
Therefore, the conclusion follows.

4.2 Norm data uncertainty

Consider the norm data uncertainty sets
$$\mathcal{U}_i = \{\bar{c}_i + \delta_i u_i : u_i \in \mathbb{R}^n,\ \|M_i u_i\|_s \le 1\},\quad i \in I, \qquad (16)$$
$$\mathcal{V}_j = \{\bar{a}_j + \rho_j v_j : v_j \in \mathbb{R}^n,\ \|Z_j v_j\|_s \le 1\} \times [\underline{b}_j, \bar{b}_j],\quad j \in J, \qquad (17)$$
where $\bar{c}_i, \bar{a}_j \in \mathbb{R}^n$, $\underline{b}_j, \bar{b}_j \in \mathbb{R}$, $\underline{b}_j \le \bar{b}_j$, $\delta_i, \rho_j > 0$, $M_i$ and $Z_j$ are invertible symmetric $n \times n$ matrices, $i \in I$, $j \in J$, and $\|\cdot\|_s$ denotes the $s$-norm, for $s \in [1,+\infty]$, defined by
$$\|x\|_s = \begin{cases} \left(\sum_{i=1}^n |x_i|^s\right)^{1/s}, & \text{if } s \in [1,+\infty), \\ \max\{|x_i| : 1 \le i \le n\}, & \text{if } s = +\infty. \end{cases}$$
Moreover, we define $s^* \in [1,+\infty]$ to be the number such that $\frac{1}{s} + \frac{1}{s^*} = 1$.

Theorem 9 Consider the uncertain programming problem $(P)$ with data uncertainty sets $\mathcal{U}_i$ and $\mathcal{V}_j$ given as in (16) and (17). Then, $x$ is a robust weakly efficient solution to $(P)$ if and only if $x$ is a weakly efficient solution to the following deterministic multi-objective linear programming problem with $s$-order cone constraints:
$$\begin{array}{rl}
\text{V-min} & (z_1,\ldots,z_m) \\
\text{s.t.} & z_i \ge \bar{c}_i^\top x + \delta_i \|M_i^{-1} x\|_{s^*},\ i \in I, \\
& \bar{a}_j^\top x - \rho_j \|Z_j^{-1} x\|_{s^*} \ge \bar{b}_j,\ j \in J.
\end{array}$$

Proof. Let $\mathcal{U}_i$ and $\mathcal{V}_j$ be the norm data uncertainty sets given in (16) and (17), respectively. Then, the robust counterpart (12) of the uncertain multi-objective linear programming problem can be equivalently rewritten as
$$\begin{array}{rl}
\text{V-min} & (z_1,\ldots,z_m) \\
\text{s.t.} & z_i \ge \max_{c_i \in \mathcal{U}_i} c_i^\top x,\ i \in I, \\
& \min_{(a_j,b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} \ge 0,\ j \in J.
\end{array}$$

Since the dual norm of the $s$-norm is the $s^*$-norm, that is, $\max_{\|x\|_s \le 1} a^\top x = \|a\|_{s^*}$ for any $a \in \mathbb{R}^n$, for each $i \in I$ and each $j \in J$ we have
$$\max_{c_i \in \mathcal{U}_i} c_i^\top x = \bar{c}_i^\top x + \delta_i \|M_i^{-1} x\|_{s^*}, \qquad
\min_{(a_j,b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} = \bar{a}_j^\top x - \rho_j \|Z_j^{-1} x\|_{s^*} - \bar{b}_j.$$
Therefore, the conclusion follows.

4.3 Ellipsoidal data uncertainty

Consider the ellipsoidal data uncertainty sets
$$\mathcal{U}_i = \Big\{c_i^0 + \sum_{k=1}^{p_i} u_i^k c_i^k : \|(u_i^1,\ldots,u_i^{p_i})\| \le 1\Big\},\quad i \in I, \qquad (18)$$
$$\mathcal{V}_j = \Big\{a_j^0 + \sum_{l=1}^{q_j} v_j^l a_j^l : \|(v_j^1,\ldots,v_j^{q_j})\| \le 1\Big\} \times [\underline{b}_j, \bar{b}_j],\quad j \in J, \qquad (19)$$
where $c_i^k, a_j^l \in \mathbb{R}^n$, $k = 0,1,\ldots,p_i$, $l = 0,1,\ldots,q_j$, $p_i, q_j \in \mathbb{N}$ and $\underline{b}_j, \bar{b}_j \in \mathbb{R}$, $i \in I$, $j \in J$.

Theorem 10 Consider the uncertain programming problem $(P)$ with data uncertainty sets $\mathcal{U}_i$ and $\mathcal{V}_j$ given as in (18) and (19). Then, $x$ is a robust weakly efficient solution to $(P)$ if and only if $x$ is a weakly efficient solution to the following deterministic multi-objective linear programming problem with second-order cone constraints:
$$\begin{array}{rl}
\text{V-min} & (z_1,\ldots,z_m) \\
\text{s.t.} & z_i \ge (c_i^0)^\top x + \big\|\big((c_i^1)^\top x, \ldots, (c_i^{p_i})^\top x\big)\big\|,\ i \in I, \\
& (a_j^0)^\top x - \big\|\big((a_j^1)^\top x, \ldots, (a_j^{q_j})^\top x\big)\big\| \ge \bar{b}_j,\ j \in J.
\end{array}$$

Proof. Let $\mathcal{U}_i$ and $\mathcal{V}_j$ be the ellipsoidal data uncertainty sets given in (18) and (19), respectively. Then, the robust multi-objective linear programming problem (12) can be equivalently rewritten as
$$\begin{array}{rl}
\text{V-min} & (z_1,\ldots,z_m) \\
\text{s.t.} & z_i \ge \max_{c_i \in \mathcal{U}_i} c_i^\top x,\ i \in I, \\
& \min_{(a_j,b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} \ge 0,\ j \in J.
\end{array}$$
Since $\max_{\|x\| \le 1} a^\top x = \|a\|$ for any $a \in \mathbb{R}^n$, for each $i \in I$ and each $j \in J$ we have
$$\max_{c_i \in \mathcal{U}_i} c_i^\top x = (c_i^0)^\top x + \big\|\big((c_i^1)^\top x, \ldots, (c_i^{p_i})^\top x\big)\big\|, \qquad
\min_{(a_j,b_j) \in \mathcal{V}_j} \{a_j^\top x - b_j\} = (a_j^0)^\top x - \big\|\big((a_j^1)^\top x, \ldots, (a_j^{q_j})^\top x\big)\big\| - \bar{b}_j.$$
Therefore, the conclusion follows.
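Both reductions boil down to evaluating a dual norm in closed form. The following numerical sketch is our own illustration (Euclidean case, $s = 2$, so $s^* = 2$; function names and data are assumptions, not the paper's):

```python
import numpy as np

def worst_case_norm(c_bar, delta, M, x):
    """max { c^T x : c = c_bar + delta*u, ||M u||_2 <= 1 }
    = c_bar^T x + delta * ||M^{-1} x||_2   (the s = 2 case of Theorem 9)."""
    return c_bar @ x + delta * np.linalg.norm(np.linalg.solve(M, x))

def worst_case_ellipsoid(c0, c_dirs, x):
    """max { c^T x : c = c0 + sum_k u_k c_k, ||u||_2 <= 1 }
    = c0^T x + ||( c_1^T x, ..., c_p^T x )||_2   (Theorem 10)."""
    return c0 @ x + np.linalg.norm(np.array([ck @ x for ck in c_dirs]))

x = np.array([3.0, 4.0])
print(worst_case_norm(np.array([1.0, 0.0]), 2.0, np.eye(2), x))  # 3 + 2*5 = 13.0
print(worst_case_ellipsoid(np.array([1.0, 0.0]),
                           [np.array([1.0, 0.0]), np.array([0.0, 1.0])], x))  # 3 + 5 = 8.0

# No sampled scenario c = c_bar + 2u with ||u||_2 <= 1 can beat the closed form:
rng = np.random.default_rng(0)
U = rng.normal(size=(1000, 2))
U /= np.maximum(1.0, np.linalg.norm(U, axis=1, keepdims=True))
print(((np.array([1.0, 0.0]) + 2.0 * U) @ x).max() <= 13.0 + 1e-9)  # True
```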
We finally note that, in the case when the objective function is free of uncertainty, the characterization of robust solutions of uncertain multi-objective linear programming problems under ellipsoidal constraint data uncertainty was derived in [21].

5 Radius of highly robust efficiency

From now on, we consider highly robust solutions of uncertain multi-objective linear programming problems of the form
$$(P) \quad \text{V-min}\ \{(c_1^\top x, \ldots, c_m^\top x) : a_j^\top x \ge b_j,\ j \in J\},$$
where both the objective and the constraints are uncertain, $(c_1,\ldots,c_m) \in \mathcal{U} \subset \mathbb{R}^{n \times m}$ and $(a_j,b_j) \in \mathcal{V}_j$, and the uncertainty sets are bounded. Recall that the robust feasible set of $(P)$ is given by
$$X := \{x \in \mathbb{R}^n : a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in \mathcal{V}_j,\ j \in J\}. \qquad (20)$$
In what follows, the normal cone to $X$ at $\bar{x} \in X$,
$$N(X;\bar{x}) := \{w \in \mathbb{R}^n : w^\top (x - \bar{x}) \ge 0,\ \forall x \in X\},$$
will play a crucial role.

Definition 11 (Highly robust weakly efficient solution) We say that $\bar{x} \in X$ is a highly robust weakly efficient solution of the uncertain multi-objective linear programming problem $(P)$ if, for each $(c_1,\ldots,c_m) \in \mathcal{U}$, $\bar{x}$ is a weakly efficient solution to the problem in (1); that is, if, for each $(c_1,\ldots,c_m) \in \mathcal{U}$, there exists no $x \in X$ such that $c_i^\top x < c_i^\top \bar{x}$ for all $i \in I$.

We have shown in Section 4 that $X$ does not change when the uncertainty sets $\{\mathcal{V}_j,\ j \in J\}$ are replaced by $\{\operatorname{cl}\operatorname{conv} \mathcal{V}_j,\ j \in J\}$. Next we show that any highly robust weakly efficient solution for $\mathcal{U}$ is also a highly robust weakly efficient solution for $\operatorname{cl}\mathcal{U}$. In fact, if $\bar{x} \in X$ is not a highly robust weakly efficient solution for $\operatorname{cl}\mathcal{U}$, then there exist $(c_1,\ldots,c_m) \in \operatorname{cl}\mathcal{U}$ and $\hat{x} \in X$ such that $c_i^\top \hat{x} < c_i^\top \bar{x}$ for all $i \in I$. Let $\{(c_1^k,\ldots,c_m^k)\}_{k \in \mathbb{N}}$ be a sequence in $\mathcal{U}$ converging to $(c_1,\ldots,c_m)$. Then $(c_i^k)^\top \hat{x} < (c_i^k)^\top \bar{x}$ for all $i \in I$ and all $k$ large enough, which implies that $\bar{x}$ is not a highly robust weakly efficient solution for $\mathcal{U}$. Consequently, we may assume without loss of generality that $\mathcal{V}_j$ is a compact convex set, for each $j \in J$, while $\mathcal{U}$ is a compact set.

We first provide a simple uncertain multi-objective linear programming problem where the set of highly robust weakly efficient solutions is nonempty.
Example 12 Consider the multi-objective linear programming problem with uncertain objectives and uncertainty-free constraints
$$\text{V-min}\ \{(c_1^\top x, c_2^\top x) : x \in [-1,1]^2\}, \qquad (21)$$
where $(c_1, c_2)$ is uncertain and belongs to the uncertainty set $\mathcal{U} := \{C + \gamma M : \gamma \in [0,1]\}$, with
$$C = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \quad \text{and} \quad M = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}.$$

The set of weakly efficient solutions of (21) for the scenario $(c_1,c_2) = C + \gamma M$ is
$$\begin{cases}
([-1,1] \times \{1\}) \cup (\{1\} \times [-1,1]), & \text{if } 0 \le \gamma < \frac{1}{2}, \\
[-1,1]^2, & \text{if } \gamma = \frac{1}{2}, \\
([-1,1] \times \{-1\}) \cup (\{-1\} \times [-1,1]), & \text{if } \frac{1}{2} < \gamma \le 1,
\end{cases}$$
and so, the set of highly robust weakly efficient solutions is $\{(1,-1), (-1,1)\}$. In this case, $\mathcal{U}$ is not a Cartesian product, so that there is no minmax robust weakly efficient solution.

The relationship between minmax robust solutions and highly robust solutions is established in the following proposition.

Proposition 13 Let $(P)$ be an uncertain multi-objective linear programming problem as in (11), with $\mathcal{U} = \prod_{i=1}^m \mathcal{U}_i$. If $\bar{x} \in X$ is a highly robust weakly efficient solution to $(P)$, then $\bar{x}$ is also a minmax robust weakly efficient solution to $(P)$.

Proof. Assume that $\bar{x} \in X$ is not a minmax robust weakly efficient solution to $(P)$. Then, $(\bar{x}, f(\bar{x}))$ is not a weakly efficient solution to (13). By the compactness assumption, for each $i \in I$ there exists $\bar{c}_i \in \mathcal{U}_i$ such that $\max_{c_i \in \mathcal{U}_i} c_i^\top \bar{x} = \bar{c}_i^\top \bar{x}$. Now, since $(\bar{x}, f(\bar{x}))$ is not a weakly efficient solution to (13), there exists $(\tilde{x}, \tilde{z}) \in \mathbb{R}^n \times \mathbb{R}^m$ such that $\tilde{x} \in X$, $\tilde{z}_i \ge c_i^\top \tilde{x}$ for all $c_i \in \mathcal{U}_i$, $i \in I$, and
$$\tilde{z}_i < \bar{c}_i^\top \bar{x},\ \forall i \in I.$$
Since $\bar{c}_i^\top \tilde{x} \le \tilde{z}_i < \bar{c}_i^\top \bar{x}$ for all $i \in I$, $\bar{x}$ is not a weakly efficient solution to (1) when $(c_1,\ldots,c_m) = (\bar{c}_1,\ldots,\bar{c}_m) \in \mathcal{U}$, and so $\bar{x}$ is not a highly robust weakly efficient solution to $(P)$.

The next example illustrates the fact that the set of highly robust weakly efficient solutions may be empty despite the existence of minmax robust weakly efficient solutions (the opposite situation holds, e.g., whenever $\mathcal{U}$ fails to be a Cartesian product of subsets of $\mathbb{R}^n$ and $X$ is a singleton set).

Example 14 Consider again the linear multi-objective programming problem stated in (21) with a different uncertainty set $\mathcal{U} := \mathcal{U}_1 \times \mathcal{U}_2$, where
$$\mathcal{U}_1 = \Big\{\begin{pmatrix} -1 \\ 0 \end{pmatrix} + \gamma_1 \begin{pmatrix} 2 \\ 0 \end{pmatrix} : \gamma_1 \in [0,1]\Big\}, \qquad
\mathcal{U}_2 = \Big\{\begin{pmatrix} 0 \\ -1 \end{pmatrix} + \gamma_2 \begin{pmatrix} 0 \\ 2 \end{pmatrix} : \gamma_2 \in [0,1]\Big\}.$$
Its robust counterpart can be formulated as
$$\text{V-min}\ \Big\{\Big(\sup_{\gamma_1 \in [0,1]} (2\gamma_1 - 1)x_1,\ \sup_{\gamma_2 \in [0,1]} (2\gamma_2 - 1)x_2\Big) : x \in [-1,1]^2\Big\},$$

which is equivalent to $\text{V-min}\ \{(|x_1|, |x_2|) : x \in [-1,1]^2\}$. It can be easily checked that the set of minmax robust weakly efficient solutions is $([-1,1] \times \{0\}) \cup (\{0\} \times [-1,1])$, while the set of weakly efficient solutions for the scenario $(\gamma_1,\gamma_2)$ is
$$\begin{cases}
([-1,1] \times \{1\}) \cup (\{1\} \times [-1,1]), & \text{if } 0 \le \gamma_1 < \frac{1}{2},\ 0 \le \gamma_2 < \frac{1}{2}, \\
([-1,1] \times \{-1\}) \cup (\{1\} \times [-1,1]), & \text{if } 0 \le \gamma_1 < \frac{1}{2},\ \frac{1}{2} < \gamma_2 \le 1, \\
[-1,1]^2, & \text{if } \gamma_1 = \frac{1}{2} \text{ or } \gamma_2 = \frac{1}{2}, \\
([-1,1] \times \{1\}) \cup (\{-1\} \times [-1,1]), & \text{if } \frac{1}{2} < \gamma_1 \le 1,\ 0 \le \gamma_2 < \frac{1}{2}, \\
([-1,1] \times \{-1\}) \cup (\{-1\} \times [-1,1]), & \text{if } \frac{1}{2} < \gamma_1 \le 1,\ \frac{1}{2} < \gamma_2 \le 1,
\end{cases}$$
so that there is no highly robust weakly efficient solution.

5.1 Affine objective data perturbations

The existence of highly robust weakly efficient solutions can frequently be guaranteed in the case of affine data perturbations of the objectives. For this purpose, consider the parameterized uncertain linear multi-objective programming problem
$$(P_\alpha) \quad \text{V-min}\ (c_1^\top x, \ldots, c_m^\top x) \quad \text{s.t.} \quad a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in \mathcal{V}_j,\ j \in J,$$
where
$$(c_1,\ldots,c_m) \in \mathcal{U}_\alpha = \prod_{i=1}^m (\bar{c}_i + \alpha \mathbb{B}_n),$$
with $\bar{c}_i \in \mathbb{R}^n$, $i \in I$, and $\alpha \ge 0$. Inspired by the definition of the radius of feasibility, we define the radius of highly robust efficiency $\rho(\mathcal{U})$ as the supremum of those $\alpha \in \mathbb{R}_+$ such that $(P_\alpha)$ has some highly robust weakly efficient solution. When $X$ is bounded and $\alpha = 0$, $\mathcal{U}_0 = \{(\bar{c}_1,\ldots,\bar{c}_m)\}$ and the minimizers on $X$ of the scalar functions $x \mapsto \bar{c}_i^\top x$, $i \in I$, are highly robust weakly efficient solutions of $(P_0)$. So, $\rho(\mathcal{U}) \in \mathbb{R}_+ \cup \{+\infty\}$.

Assume that $X$ is a polytope (e.g., when the sets $\mathcal{V}_j$, $j \in J$, are all polytopes; recall Proposition 2) and denote by $E$ the set of extreme points of $X$. Given $c \in \mathbb{R}^n$, the function $x \mapsto c^\top x$ attains its minimum on $X$ at some point $e \in E$, so that $c^\top (x - e) \ge 0$ for all $x \in X$, i.e., $c \in N(X;e)$. Moreover, $e$ is the unique minimizer of $x \mapsto c^\top x$ on $X$ for all $c \in \operatorname{int} N(X;e)$. So, the finite family of solid polyhedral convex cones $\{N(X;e) : e \in E\}$ constitutes a tessellation of $\mathbb{R}^n$.
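A quick numerical illustration of this tessellation (our own sketch, for the square $X = [-1,1]^2$): with the normal cone $N(X;e) = \{w : w^\top(x - e) \ge 0,\ \forall x \in X\}$, a vector $c$ belongs to $N(X;e)$ exactly when $e$ minimizes $c^\top x$ over $X$, so every direction is captured by at least one vertex cone:

```python
import itertools
import numpy as np

vertices = [np.array(v, dtype=float) for v in itertools.product([-1, 1], repeat=2)]

def in_normal_cone(c, e):
    """c in N(X;e) = {w : w.(x-e) >= 0 for all x in X} iff e minimizes c.x over X;
    for a polytope it suffices to compare against the vertices."""
    return all(c @ e <= c @ v + 1e-12 for v in vertices)

# every direction c belongs to some vertex's normal cone (tessellation of R^2)
grid = [np.array([np.cos(t), np.sin(t)]) for t in np.linspace(0.0, 6.28, 100)]
covered = all(any(in_normal_cone(c, e) for e in vertices) for c in grid)
print(covered)  # True
```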
The boundary of each cone $N(X;e)$, $e \in E$, is contained in a finite union of hyperplanes, so that $\bigcup_{e \in E} \operatorname{bd} N(X;e)$ is contained in a finite union of hyperplanes too. Thus, a vector $c$ generated at random in $\mathbb{R}^n$ belongs to $\mathbb{R}^n \setminus \bigcup_{e \in E} \operatorname{bd} N(X;e) = \bigcup_{e \in E} \operatorname{int} N(X;e)$ with probability 1. This intuitive argument, together with the next result, shows that we can obtain a positive lower bound for $\rho(\mathcal{U})$ under a mild condition.

Theorem 15 (Radius of highly robust efficiency) Let $X$ be a polytope and let $E$ be its set of extreme points. If there exist an index $i \in I$ and a corresponding extreme point $e_i \in E$ such that $\bar{c}_i \in \operatorname{int} N(X;e_i)$, then $\rho(\mathcal{U}) > 0$.

Proof. Let $X = \{x \in \mathbb{R}^n : p_t^\top x \le q_t,\ t \in T\}$ be a linear representation of the polytope $X$ such that $\|p_t\| = 1$ for all $t \in T$. The normal cone $N(X;e)$ at $e \in E$ is the negative polar of the cone of feasible directions of $X$ at $e$, i.e.,
$$N(X;e) = \{x \in \mathbb{R}^n : p_t^\top x \le 0,\ \forall t \in T(e)\},$$
where $T(e) := \{t \in T : p_t^\top e = q_t\}$ is the set of active indices at $e$. Moreover, $\operatorname{int} N(X;e) = \{x \in \mathbb{R}^n : p_t^\top x < 0,\ \forall t \in T(e)\}$. So, for any $c \in \operatorname{int} N(X;e)$, the radius of the greatest ball centered at $c$ and contained in $N(X;e)$ is
$$d(c, \operatorname{bd} N(X;e)) = \min\{-p_t^\top c : t \in T(e)\} > 0. \qquad (22)$$
The supremum of the set of positive scalars $\alpha$ such that some extreme point of $X$ minimizes on $X$ at least one objective function $x \mapsto c_i^\top x$ whenever $c_i \in \bar{c}_i + \alpha \mathbb{B}_n$ is a lower bound for $\rho(\mathcal{U})$. We now compute such a lower bound. Denote by $I_0$ the set of indices $i \in I$ such that $\bar{c}_i$ lies in the interior of some element of $\{N(X;e) : e \in E\}$. By assumption, $I_0 \ne \emptyset$. For each $i \in I_0$ there exists a unique extreme point $e_i$ of $X$ such that $\bar{c}_i \in \operatorname{int} N(X;e_i)$. Then, from (22), one has
$$\rho(\mathcal{U}) \ge \max\{d(\bar{c}_i, \operatorname{bd} N(X;e_i)) : i \in I_0\} = \max_{i \in I_0} \min\{-p_t^\top \bar{c}_i : t \in T(e_i)\} > 0. \qquad (23)$$

The assumption that $X$ is a polytope cannot be replaced by the weaker assumption that $X$ is a compact convex set. Indeed, in that case we still have $X = \operatorname{conv} E$, with $\{N(X;e) : e \in E\}$ being a tessellation of $\mathbb{R}^n$, but we may have $\operatorname{int} N(X;e) = \emptyset$ for all $e \in E$ (e.g., if $X$ is a closed ball, $\{N(X;e) : e \in E\}$ is formed by all the rays emanating from $0_n$).

Observe that the lower bound for $\rho(\mathcal{U})$ in (23) can be effectively computed. Below, we provide two examples. The first illustrates how to use (23) to compute a lower bound for the radius of highly robust efficiency. The second shows that the lower bound provided by (23) can be attained (and so, it is the best possible lower bound).
Example 16 (A lower bound for the radius of highly robust efficiency) Consider the problem $\text{V-min}\ \{(c_1^\top x, c_2^\top x) : x \in X\}$, where the feasible set is
$$X := \{x \in \mathbb{R}^2 : x_1 \le 1,\ -x_1 \le 1,\ x_2 \le 1,\ -x_2 \le 1\}.$$
The extreme points of $X$ are $e_1 = (1,1)$, $e_2 = (-1,1)$, $e_3 = (-1,-1)$ and $e_4 = (1,-1)$.

(a) Let $\bar{c}_1 = (-2,-1)^\top$ and $\bar{c}_2 = (-1,1)^\top$. We have $\bar{c}_1 \in \operatorname{int} N(X;e_1)$ and $\bar{c}_2 \in \operatorname{int} N(X;e_4)$, with $T(e_1) = \{1,3\}$ and $T(e_4) = \{1,4\}$, so that (23) yields
$$\rho(\mathcal{U}) \ge \max\{\min\{2,1\}, \min\{1,1\}\} = 1. \qquad (24)$$

(b) The vectors $\bar{c}_1 = (1,0)^\top$ and $\bar{c}_2 = (-1,0)^\top$ belong to $\bigcup_{i=1}^4 \operatorname{bd} N(X;e_i)$ and do not satisfy the assumption of Theorem 15. It is easy to see that any element of $\operatorname{bd} X$ is a weakly efficient solution. We associate with $r \in \mathbb{N}$ the couples of perturbed vectors $(1, \frac{1}{r})^\top$, $(-1, \frac{1}{r})^\top$ and $(1, -\frac{1}{r})^\top$, $(-1, -\frac{1}{r})^\top$, whose corresponding sets of weakly efficient solutions are $\operatorname{conv}\{e_3, e_4\}$ and $\operatorname{conv}\{e_1, e_2\}$, respectively. Since $\operatorname{conv}\{e_3,e_4\} \cap \operatorname{conv}\{e_1,e_2\} = \emptyset$, the problem $(P_\alpha)$ with $\alpha = \frac{1}{r}$ has no highly robust weakly efficient solutions, and so $\rho(\mathcal{U}) < \frac{1}{r}$ for all $r \in \mathbb{N}$. Consequently, $\rho(\mathcal{U}) = 0$.
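The bound (23) is mechanical to evaluate; the following sketch of ours reproduces the computation of Example 16(a) (function names are assumptions):

```python
import numpy as np

# Square X = [-1,1]^2 written as p_t.x <= q_t with unit normals
P = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
q = np.ones(4)

def active(e, tol=1e-9):
    """Indices t with p_t.e = q_t (active constraints at the vertex e)."""
    return [t for t in range(len(q)) if abs(P[t] @ e - q[t]) <= tol]

def bound_23(c_bars, extreme_points):
    """max_i min_t { -p_t . c_i : t in T(e_i) }, taken over the objectives c_i
    lying in the interior of some vertex normal cone (formula (23))."""
    best = 0.0
    for c in c_bars:
        for e in extreme_points:
            vals = [-(P[t] @ c) for t in active(e)]
            if min(vals) > 0:          # c in int N(X;e)
                best = max(best, min(vals))
    return best

E = [np.array(v, dtype=float) for v in [(1, 1), (-1, 1), (-1, -1), (1, -1)]]
c_bars = [np.array([-2.0, -1.0]), np.array([-1.0, 1.0])]
print(bound_23(c_bars, E))  # max{min{2,1}, min{1,1}} = 1.0
```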

Example 17 (Best possible lower bound) Consider the following multi-objective linear programming problem
$$(EP) \quad \text{V-min}\ \{(c_1 x, c_2 x) : x \ge 1,\ x \le 2\},$$
where the data $(c_1, c_2)$ is uncertain and belongs to the uncertainty set $\mathcal{U}_\alpha = [\bar{c}_1 - \alpha, \bar{c}_1 + \alpha] \times [\bar{c}_2 - \alpha, \bar{c}_2 + \alpha]$, with $\bar{c}_1 = 2$, $\bar{c}_2 = -1$ and $\alpha \ge 0$. The extreme points of the feasible set $X = [1,2]$ are $e_1 = 1$ and $e_2 = 2$, and one has $\bar{c}_1 \in \operatorname{int} N(X;e_1)$ and $\bar{c}_2 \in \operatorname{int} N(X;e_2)$. Let $I_0 = \{1,2\}$, $p_1 = -1$, $p_2 = 1$. As $T(e_1) = \{1\}$ and $T(e_2) = \{2\}$, from (23) we have
$$\rho(\mathcal{U}) \ge \max_{i \in I_0} \min\{-p_t \bar{c}_i : t \in T(e_i)\} = 2.$$
Indeed, the obtained lower bound 2 is tight. To see this, we first note that, for all $(c_1, c_2) \in [\bar{c}_1 - 2, \bar{c}_1 + 2] \times [\bar{c}_2 - 2, \bar{c}_2 + 2]$, $\bar{x} = 1$ is a weakly efficient solution, so $\bar{x} = 1$ is a highly robust weakly efficient solution of the problem $(EP)$ with $\alpha = 2$. On the other hand, for any $\alpha > 2$ there exist $(c_1^j, c_2^j) \in [\bar{c}_1 - \alpha, \bar{c}_1 + \alpha] \times [\bar{c}_2 - \alpha, \bar{c}_2 + \alpha]$, $j = 1,2$, such that $c_1^1 < 0$, $c_2^1 < 0$, $c_1^2 > 0$ and $c_2^2 > 0$. Note that the set of weakly efficient solutions of $(EP)$ is
$$\begin{cases} \{1\}, & \text{if } c_1 > 0,\ c_2 > 0, \\ \{2\}, & \text{if } c_1 < 0,\ c_2 < 0. \end{cases}$$
So, if $\alpha > 2$, the highly robust weakly efficient solution set of $(EP)$ is empty. This shows that $\rho(\mathcal{U}) = 2$.

5.2 Radial objective data perturbations

We now associate with a given matrix $C := (c_1, \ldots, c_m) \in \mathbb{R}^{n \times m}$ and given vectors $u \in \mathbb{R}^n \setminus \{0_n\}$ and $v \in \mathbb{R}^m_+ \setminus \{0_m\}$ the parameterized uncertain linear multi-objective programming problem
$$(P_\alpha) \quad \text{V-min}\ C^\top x \quad \text{s.t.} \quad a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in \mathcal{V}_j,\ j \in J,$$
where the data $C$ is uncertain and belongs to the uncertainty set
$$\mathcal{U}_\alpha = \{C + \gamma uv^\top : \gamma \in [0,\alpha]\}, \qquad (25)$$
with $\alpha \ge 0$. This data uncertainty set was introduced and examined in [36, Section 3] (see also [28]). We define again the radius of highly robust efficiency $\rho(\mathcal{U})$ as the supremum of those $\alpha \in \mathbb{R}_+$ such that $(P_\alpha)$ has some highly robust weakly efficient solution. Obviously, $\rho(\mathcal{U}) \ne -\infty$ whenever at least one of the scalar functions $x \mapsto$
$c_i^\top x$, $i \in I$, attains its minimum on $X$. As a straightforward consequence of the next theorem, we shall obtain the following lower bound for the radius of highly robust efficiency:
$$\rho(\mathcal{U}) \ge \sup\Big\{\alpha \in \mathbb{R}_+ : \exists\, \bar{x} \in X \text{ and } \lambda, \tilde{\lambda} \in \Delta_m \text{ such that } C\lambda \in N(X;\bar{x}) \text{ and } (C + \alpha uv^\top)\tilde{\lambda} \in N(X;\bar{x})\Big\},$$
where $\Delta_m := \{\lambda \in \mathbb{R}^m_+ : \sum_{i=1}^m \lambda_i = 1\}$ denotes the unit simplex of $\mathbb{R}^m$. Moreover, the supremum in the definition of $\rho(\mathcal{U})$ is attained whenever there exist $\bar{x} \in X$ and $\lambda, \tilde{\lambda} \in \Delta_m$ such that $C\lambda \in N(X;\bar{x})$ and $(C + \rho(\mathcal{U}) uv^\top)\tilde{\lambda} \in N(X;\bar{x})$.

Theorem 18 (Characterizing highly robust weakly efficient solutions) Consider the uncertain problem $(P_\alpha)$ with $\alpha = 1$, that is, with uncertainty set $\mathcal{U} = \{C + \gamma uv^\top : \gamma \in [0,1]\}$, and let $\bar{x} \in X$. Then, $\bar{x}$ is a highly robust weakly efficient solution if and only if there exist $\lambda, \tilde{\lambda} \in \Delta_m$ such that
$$C\lambda \in N(X;\bar{x}) \quad \text{and} \quad (C + uv^\top)\tilde{\lambda} \in N(X;\bar{x}).$$
Moreover, if $\mathcal{V}_j$ is convex for each $j \in J$ and $K(\mathcal{V})$ is closed, then the highly robust weak efficiency of $\bar{x} \in X$ is equivalent to the condition that there exist $\lambda, \tilde{\lambda} \in \Delta_m$ and $(a_j,b_j), (\tilde{a}_j, \tilde{b}_j) \in \mathcal{V}_j$, $\gamma_j, \tilde{\gamma}_j \ge 0$, $j \in J$, such that
$$C\lambda = \sum_{j \in J} \gamma_j a_j \quad \text{and} \quad \gamma_j(a_j^\top \bar{x} - b_j) = 0,\ j \in J,$$
and
$$(C + uv^\top)\tilde{\lambda} = \sum_{j \in J} \tilde{\gamma}_j \tilde{a}_j \quad \text{and} \quad \tilde{\gamma}_j(\tilde{a}_j^\top \bar{x} - \tilde{b}_j) = 0,\ j \in J.$$

Proof. Let $\bar{x} \in X$ be a highly robust weakly efficient solution. Then, for each $C' \in \mathcal{U}$ there exists no $x \in X$ such that $C'^\top x < C'^\top \bar{x}$. By [20, Prop. 18(iii)], this is equivalent to the fact that
$$(\forall C' \in \mathcal{U})\ (\exists \lambda \in \mathbb{R}^m_+ \setminus \{0_m\})\ (C'\lambda \in N(X;\bar{x})).$$
As $N(X;\bar{x})$ is a cone, by normalization we may assume that $\lambda \in \Delta_m$, and so $\bar{x}$ is a highly robust weakly efficient solution if and only if
$$(\forall C' \in \mathcal{U})\ (\exists \lambda \in \Delta_m)\ (C'\lambda \in N(X;\bar{x})). \qquad (26)$$
To see the first assertion, it suffices to show that (26) is equivalent to
$$(\exists \lambda, \tilde{\lambda} \in \Delta_m)\ \big(C\lambda \in N(X;\bar{x}) \text{ and } (C + uv^\top)\tilde{\lambda} \in N(X;\bar{x})\big). \qquad (27)$$
To see this equivalence, we only need to show that (27) implies (26) when $u \ne 0_n$ (otherwise $\mathcal{U}$ is a singleton set). To this end, suppose that (27) holds and fix an arbitrary $C' \in \mathcal{U}$. Then there exists $\gamma \in [0,1]$ such that $C' = C + \gamma uv^\top$. We may assume $\gamma \in (0,1)$, since otherwise there is nothing to prove. Firstly, if $\tilde{\lambda}^\top v = 0$, then $(uv^\top)\tilde{\lambda} = u(v^\top \tilde{\lambda}) = 0_n$. Hence, $C\tilde{\lambda} = (C + uv^\top)\tilde{\lambda} \in N(X;\bar{x})$, and so, for any $\gamma \in (0,1)$, one has
$$(C + \gamma uv^\top)\tilde{\lambda} = (1-\gamma)C\tilde{\lambda} + \gamma(C + uv^\top)\tilde{\lambda} \in N(X;\bar{x}).$$
Consequently, we may assume $\tilde{\lambda}^\top v \ne 0$. Even more, as $v \in \mathbb{R}^m_+ \setminus \{0_m\}$ and $\tilde{\lambda} \in \Delta_m$, we may assume $\tilde{\lambda}^\top v > 0$. In the same way, we get that $\lambda^\top v \ge 0$. Hence, one has

$(1-\gamma)\tilde{\lambda}^\top v + \gamma\lambda^\top v > 0$, and so
$$\mu := \frac{(1-\gamma)\tilde{\lambda}^\top v}{(1-\gamma)\tilde{\lambda}^\top v + \gamma\lambda^\top v} \in [0,1] \quad \text{and} \quad \lambda' := \mu\lambda + (1-\mu)\tilde{\lambda} \in \Delta_m.$$
Moreover, we have
$$\gamma\mu(uv^\top)\lambda - (1-\gamma)(1-\mu)(uv^\top)\tilde{\lambda} = \big[\gamma\mu(\lambda^\top v) - (1-\gamma)(1-\mu)(\tilde{\lambda}^\top v)\big]u = 0_n. \qquad (28)$$
Now,
$$\begin{aligned}
(C + \gamma uv^\top)\lambda' &= \mu C\lambda + \gamma\mu(uv^\top)\lambda + (1-\mu)(C + \gamma uv^\top)\tilde{\lambda} \\
&= \mu C\lambda + \gamma\mu(uv^\top)\lambda + (1-\mu)(C + uv^\top)\tilde{\lambda} - (1-\gamma)(1-\mu)(uv^\top)\tilde{\lambda} \\
&= \mu C\lambda + (1-\mu)(C + uv^\top)\tilde{\lambda} \in N(X;\bar{x}),
\end{aligned}$$
where the last equality follows from (28) and the final relation follows from (27) and the convexity of $N(X;\bar{x})$. Hence (26) holds.

To see the second assertion, we assume that $\mathcal{V}_j$ is convex, $j \in J$, and that $K(\mathcal{V})$ is closed. We only need to show that
$$N(X;\bar{x}) = \Big\{\sum_{j \in J} \gamma_j a_j : (a_j,b_j) \in \mathcal{V}_j,\ \gamma_j \ge 0 \text{ and } \gamma_j(a_j^\top \bar{x} - b_j) = 0,\ j \in J\Big\}.$$
The system $a^\top x \ge b$, $(a,b) \in T$, with $T := \bigcup_{j \in J} \mathcal{V}_j$, is a linear representation of $X$. Thus, $w \in N(X;\bar{x})$ if and only if the inequality $w^\top x \ge w^\top \bar{x}$ is a consequence of $a^\top x \ge b$, $(a,b) \in T$, which is equivalent, according to Lemma 1, to
$$(w, w^\top \bar{x}) \in \operatorname{cone}(T) + \mathbb{R}_+\{(0_n, -1)\}.$$
This amounts to asserting the existence of a finite subset $S$ of $T$, corresponding nonnegative scalars $\mu_{(a,b)}$, $(a,b) \in S$, and $\mu \ge 0$, such that
$$(w, w^\top \bar{x}) = \sum_{(a,b) \in S} \mu_{(a,b)}(a, b) + \mu(0_n, -1). \qquad (29)$$
Multiplying both members of (29) by $(\bar{x}, -1)$ we get $0 = \sum_{(a,b) \in S} \mu_{(a,b)}(a^\top \bar{x} - b) + \mu$; since every summand is nonnegative, $\mu = 0$ and (29) is equivalent to
$$w = \sum_{(a,b) \in S} \mu_{(a,b)}\, a \quad \text{and} \quad \mu_{(a,b)}(a^\top \bar{x} - b) = 0,\ \forall (a,b) \in S. \qquad (30)$$
Finally, since $S \subset \bigcup_{j \in J} \mathcal{V}_j$, we can write $S = \bigcup_{j \in J} S_j$, with $S_j \subset \mathcal{V}_j$, $j \in J$, and $S_i \cap S_j = \emptyset$ when $i \ne j$. Let $\gamma_j := \sum_{(a,b) \in S_j} \mu_{(a,b)}$, $j \in J$. If $\gamma_j \ne 0$, one has, by the convexity of $\mathcal{V}_j$,
$$(a_j, b_j) := \gamma_j^{-1} \sum_{(a,b) \in S_j} \mu_{(a,b)}(a, b) \in \mathcal{V}_j.$$
Take $(a_j, b_j) \in \mathcal{V}_j$ arbitrarily when $\gamma_j = 0$. Then we get from (30) that
$$w = \sum_{j \in J} \gamma_j a_j \quad \text{and} \quad \gamma_j(a_j^\top \bar{x} - b_j) = 0,\ j \in J.$$

Thus, the conclusion follows.

In Theorem 18 we require that $v \in \mathbb{R}^m_+$. The following example (inspired by [36, Example 3.3]) illustrates that this non-negativity requirement cannot be dropped.

Example 19 (Non-negativity requirement for rank-1 objective data uncertainty) Let
$$C = \begin{pmatrix} 3 & 0 \\ 1 & 1 \\ 2 & 2 \end{pmatrix}, \quad v = \begin{pmatrix} 1 \\ -1 \end{pmatrix} \notin \mathbb{R}^2_+ \quad \text{and} \quad u = \begin{pmatrix} 0 \\ -3 \\ 0 \end{pmatrix}.$$
Consider the uncertain multi-objective optimization problem
$$\text{V-min}\ \{C'^\top x : a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in \mathcal{V}_j,\ j \in \{1,2\}\}, \qquad (31)$$
where the objective data matrix $C'$ is an element of
$$\{C + \gamma uv^\top : \gamma \in [0,1]\} = \left\{\begin{pmatrix} 3 & 0 \\ 1 & 1 \\ 2 & 2 \end{pmatrix} + \gamma \begin{pmatrix} 0 & 0 \\ -3 & 3 \\ 0 & 0 \end{pmatrix} : \gamma \in [0,1]\right\},$$
and the uncertainty sets $\mathcal{V}_1, \mathcal{V}_2$ for the constraints are the convex polytopes whose extreme points are the five pairs $(a_j, b_j)$ listed in (10); in particular, $(2,1,2,6)^\top$ and $(1,2,2,6)^\top$ are the extreme points of $\mathcal{V}_1$. Note that the robust feasible set is
$$X = \{x \in \mathbb{R}^3 : a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in \mathcal{V}_j,\ j \in \{1,2\}\} = \{x \in \mathbb{R}^3 : a_j^\top x \ge b_j,\ j \in \{1,\ldots,5\}\},$$
where $\{a_j^\top x \ge b_j,\ j \in \{1,\ldots,5\}\}$ is the system in (10). It can be checked that $\bar{x} = (0,0,3)^\top \in X$, that only the two constraints coming from $\mathcal{V}_1$ are active at $\bar{x}$, and so
$$N(X;\bar{x}) = \left\{\gamma_1 \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix} + \gamma_2 \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} : \gamma_1 \ge 0,\ \gamma_2 \ge 0\right\}.$$
Let $\lambda = (2/3, 1/3)^\top$ and $\tilde{\lambda} = (1/3, 2/3)^\top$. Then, we have
$$C\lambda = (2,1,2)^\top \in N(X;\bar{x}) \quad \text{and} \quad (C + uv^\top)\tilde{\lambda} = (1,2,2)^\top \in N(X;\bar{x}).$$
On the other hand, for
$$C' = C + \tfrac{1}{2}uv^\top = \begin{pmatrix} 3 & 0 \\ -1/2 & 5/2 \\ 2 & 2 \end{pmatrix} \in \mathcal{U}$$
and $\hat{x} = (1,1,3/2)^\top \in X$, we see that
$$C'^\top \hat{x} = \begin{pmatrix} 11/2 \\ 11/2 \end{pmatrix} < \begin{pmatrix} 6 \\ 6 \end{pmatrix} = C'^\top \bar{x}.$$
So, $\bar{x}$ is not a weakly efficient solution to (31), and hence not a highly robust weakly efficient solution. Thus, the above solution characterization fails when $v \notin \mathbb{R}^m_+$.
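The computations in Example 19 can be double-checked numerically; in this sketch of ours, cone membership is tested by least squares (valid here because the two generators are linearly independent):

```python
import numpy as np

def in_cone(target, generators, tol=1e-9):
    """target in cone{generators}?  (assumes linearly independent generators)"""
    A = np.column_stack(generators)
    g, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.linalg.norm(A @ g - target) <= tol and np.all(g >= -tol)

C = np.array([[3.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
u, v = np.array([0.0, -3.0, 0.0]), np.array([1.0, -1.0])
gens = [np.array([2.0, 1.0, 2.0]), np.array([1.0, 2.0, 2.0])]  # generators of N(X; xbar)

# multiplier conditions of Theorem 18 hold at xbar = (0,0,3):
lam, lam_t = np.array([2/3, 1/3]), np.array([1/3, 2/3])
print(in_cone(C @ lam, gens), in_cone((C + np.outer(u, v)) @ lam_t, gens))  # True True

# ... yet the scenario gamma = 1/2 makes xbar dominated, so the
# characterization fails when v has a negative component:
C_half = C + 0.5 * np.outer(u, v)
xbar, xhat = np.array([0.0, 0.0, 3.0]), np.array([1.0, 1.0, 1.5])
print(C_half.T @ xhat, C_half.T @ xbar)  # [5.5 5.5] strictly below [6. 6.]
```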

In the case where the constraints are uncertainty-free, i.e., the sets $\mathcal{V}_j$ are all singletons, we obtain the following solution characterization for robust multi-objective optimization problems with rank-1 objective uncertainty.

Corollary 20 Consider the set $\mathcal{U}$ as in Theorem 18 and $\mathcal{V}_j = \{(a_j, b_j)\}$, $j \in J$. For each $C' \in \mathcal{U}$, consider the uncertain multi-objective linear programming problem (1). Given $\bar{x} \in X$, the following statements are equivalent:

(i) $\bar{x}$ is a highly robust weakly efficient solution.

(ii) There exist $\lambda, \tilde{\lambda} \in \Delta_m$ such that $C\lambda \in N(X;\bar{x})$ and $(C + uv^\top)\tilde{\lambda} \in N(X;\bar{x})$.

(iii) There exist $\lambda, \tilde{\lambda} \in \Delta_m$ and $\gamma_j, \tilde{\gamma}_j \ge 0$, $j \in J$, such that
$$C\lambda = \sum_{j \in J} \gamma_j a_j \quad \text{and} \quad \gamma_j(a_j^\top \bar{x} - b_j) = 0,\ j \in J,$$
and
$$(C + uv^\top)\tilde{\lambda} = \sum_{j \in J} \tilde{\gamma}_j a_j \quad \text{and} \quad \tilde{\gamma}_j(a_j^\top \bar{x} - b_j) = 0,\ j \in J.$$

(iv) $\bar{x}$ is a weakly efficient solution to both problems
$$(P_0) \quad \text{V-min}\ C^\top x \quad \text{s.t.} \quad a_j^\top x \ge b_j,\ j \in J,$$
and
$$(P_1) \quad \text{V-min}\ (C + uv^\top)^\top x \quad \text{s.t.} \quad a_j^\top x \ge b_j,\ j \in J.$$

Proof. Let $\mathcal{V}_j = \{(a_j, b_j)\}$, $j \in J$. The equivalences (i) $\Leftrightarrow$ (ii) $\Leftrightarrow$ (iii) come from Theorem 18, taking into account that all the uncertainty sets $\mathcal{V}_j$ are polytopes. Note that (i) $\Rightarrow$ (iv) always holds because of (25) and Definition 11. Finally, the implication (iv) $\Rightarrow$ (ii) is immediate from the usual characterization of weakly efficient solutions (e.g., see [20, Prop. 18(iii)]). Thus, the conclusion follows.

Remark 21 The equivalence (i) $\Leftrightarrow$ (iii) in Corollary 20, on highly robust weakly efficient solutions of uncertain vector linear programming problems, can be seen as a counterpart of [36, Theorem 3.1], on robust efficient solutions of the same type of problems.
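Statement (iv) can be verified computationally: $\bar{x}$ is weakly efficient iff the linear program $\max\{t : C^\top x + t\mathbf{1} \le C^\top \bar{x},\ x \in X\}$ has optimal value $\le 0$. The following sketch of ours uses SciPy's linprog (the helper name and the toy data are assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def is_weakly_efficient(C, A, b, x_bar):
    """x_bar weakly efficient for V-min C^T x s.t. A x >= b  iff
    max{ t : C^T x + t <= C^T x_bar, A x >= b } has optimal value <= 0."""
    n, m = C.shape
    cost = np.r_[np.zeros(n), -1.0]                      # variables (x, t); minimize -t
    A_ub = np.vstack([np.c_[C.T, np.ones(m)],            # C^T x + t <= C^T x_bar
                      np.c_[-A, np.zeros(A.shape[0])]])  # -A x <= -b
    b_ub = np.r_[C.T @ x_bar, -b]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
    return res.status == 0 and -res.fun <= 1e-9

# X = [-1,1]^2 written as A x >= b; objectives c1 = (-1,0), c2 = (0,-1)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = -np.ones(4)
C = np.array([[-1.0, 0.0], [0.0, -1.0]])
print(is_weakly_efficient(C, A, b, np.array([1.0, 1.0])))   # vertex: True
print(is_weakly_efficient(C, A, b, np.array([0.0, 0.0])))   # interior point: False
```

Running this check on both $(P_0)$ and $(P_1)$ implements the test in (iv), assuming the feasible set is bounded so that the auxiliary LP is solvable.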

6 Tractable optimality conditions for highly robust solutions

Next, we provide various classes of commonly used uncertainty sets determining the robust feasible set
$$X = \{x \in \mathbb{R}^n : a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in \mathcal{V}_j,\ j \in J\},$$
under which one can numerically check whether or not a robust feasible point is a highly robust weakly efficient solution. Throughout this section we assume that the objective function of (1) is subject to the rank-1 matrix data uncertainty defined in Section 5. We begin with the simple box constraint data uncertainty.

6.1 Box constraint data uncertainty

Consider the box data uncertainty set
$$\mathcal{V}_j = [\underline{a}_j, \bar{a}_j] \times [\underline{b}_j, \bar{b}_j], \qquad (32)$$
where $\underline{a}_j, \bar{a}_j \in \mathbb{R}^n$, $\underline{a}_j \le \bar{a}_j$, and $\underline{b}_j, \bar{b}_j \in \mathbb{R}$, $\underline{b}_j \le \bar{b}_j$, $j \in J$. Denote the extreme points of $[\underline{a}_j, \bar{a}_j]$ by $\{\hat{a}_j^{(1)}, \ldots, \hat{a}_j^{(2^n)}\}$.

Theorem 22 Consider the set $\mathcal{U}$ as in Theorem 18 and $\mathcal{V}_j$, $j \in J$, as in (32). For each $C' \in \mathcal{U}$, consider the uncertain multi-objective linear programming problem in (1). Then, the following statements are equivalent:

(i) $\bar{x} \in X$ is a highly robust weakly efficient solution to $(P)$.

(ii) There exist $\lambda, \tilde{\lambda} \in \Delta_m$ and $\gamma_j^{(l)}, \tilde{\gamma}_j^{(l)} \ge 0$ such that
$$C\lambda = \sum_{j \in J} \sum_{l=1}^{2^n} \gamma_j^{(l)} \hat{a}_j^{(l)} \quad \text{and} \quad \gamma_j^{(l)}\big((\hat{a}_j^{(l)})^\top \bar{x} - \bar{b}_j\big) = 0,\ j \in J,\ l = 1,\ldots,2^n,$$
and
$$(C + uv^\top)\tilde{\lambda} = \sum_{j \in J} \sum_{l=1}^{2^n} \tilde{\gamma}_j^{(l)} \hat{a}_j^{(l)} \quad \text{and} \quad \tilde{\gamma}_j^{(l)}\big((\hat{a}_j^{(l)})^\top \bar{x} - \bar{b}_j\big) = 0,\ j \in J,\ l = 1,\ldots,2^n.$$

(iii) $\bar{x}$ is a weakly efficient solution to both of the following deterministic multi-objective linear programming problems:
$$\text{V-min}\ C^\top x \quad \text{s.t.} \quad (\hat{a}_j^{(l)})^\top x - \bar{b}_j \ge 0,\ l = 1,\ldots,2^n,\ j \in J,$$
and
$$\text{V-min}\ (C + uv^\top)^\top x \quad \text{s.t.} \quad (\hat{a}_j^{(l)})^\top x - \bar{b}_j \ge 0,\ l = 1,\ldots,2^n,\ j \in J.$$

Proof. (i) $\Leftrightarrow$ (ii): Let $\bar{x}$ be a highly robust weakly efficient solution to (1). Note that $X$ can be rewritten as
$$X = \{x \in \mathbb{R}^n : a_j^\top x - b_j \ge 0 \text{ for all } (a_j,b_j) \in [\underline{a}_j, \bar{a}_j] \times [\underline{b}_j, \bar{b}_j],\ j \in J\}
= \{x \in \mathbb{R}^n : (\hat{a}_j^{(l)})^\top x - \bar{b}_j \ge 0,\ l = 1,\ldots,2^n,\ j \in J\}.$$
Then, we have
$$N(X;\bar{x}) = \Big\{\sum_{j \in J} \sum_{l=1}^{2^n} \gamma_j^{(l)} \hat{a}_j^{(l)} : \gamma_j^{(l)}\big((\hat{a}_j^{(l)})^\top \bar{x} - \bar{b}_j\big) = 0,\ \gamma_j^{(l)} \ge 0,\ \forall l,\ \forall j\Big\}. \qquad (33)$$
The conclusion follows from Theorem 18.

(i) $\Rightarrow$ (iii): This implication follows from the definition of a highly robust weakly efficient solution.

(iii) $\Rightarrow$ (ii): By the usual characterization of weakly efficient solutions (e.g., see [20, Prop. 18(iii)]), there exist $\lambda, \tilde{\lambda} \in \Delta_m$ such that $C\lambda \in N(X;\bar{x})$ and $(C + uv^\top)\tilde{\lambda} \in N(X;\bar{x})$. Thus, the implication follows by (33).

It is worth noting that, by Theorem 22, one can determine whether or not a given robust feasible point $\bar{x}$ under box constraint data uncertainty is a highly robust weakly efficient solution by solving finitely many systems of linear equalities and inequalities.

6.2 Norm constraint data uncertainty

Consider the norm constraint data uncertainty set
$$\mathcal{V}_j = \{\bar{a}_j + \rho_j v_j : v_j \in \mathbb{R}^n,\ \|Z_j v_j\|_s \le 1\} \times [\underline{b}_j, \bar{b}_j], \qquad (34)$$
where $\bar{a}_j \in \mathbb{R}^n$, $\underline{b}_j, \bar{b}_j \in \mathbb{R}$, $\underline{b}_j \le \bar{b}_j$, $\rho_j > 0$ and $Z_j$ is an invertible symmetric $n \times n$ matrix, $j \in J$. Recall that $\|\cdot\|_s$ denotes the $s$-norm, $s \in [1,+\infty]$, and that $s^* \in [1,+\infty]$ is the number such that $\frac{1}{s} + \frac{1}{s^*} = 1$. The following simple fact about $s$-norms will be used later on:
$$\partial(\|\cdot\|_s)(u) = \{v \in \mathbb{R}^n : \|v\|_{s^*} \le 1,\ v^\top u = \|u\|_s\},$$
where $\partial h(\bar{x})$ denotes the usual convex subdifferential of a convex function $h : \mathbb{R}^n \to \mathbb{R}$ at $\bar{x} \in \mathbb{R}^n$, i.e.,
$$\partial h(\bar{x}) = \{z \in \mathbb{R}^n : z^\top(y - \bar{x}) \le h(y) - h(\bar{x}),\ \forall y \in \mathbb{R}^n\}.$$
In this setting, we have the following characterization of highly robust weakly efficient solutions.

Theorem 23 Consider the set $\mathcal{U}$ as in Theorem 18 and $\mathcal{V}_j$, $j \in J$, as in (34). For each $C' \in \mathcal{U}$, consider the uncertain multi-objective linear programming problem in (1).
Suppose that there exists $x_0 \in \mathbb{R}^n$ such that
$$\bar{a}_j^\top x_0 - \rho_j \|Z_j^{-1} x_0\|_{s^*} - \bar{b}_j > 0,\ j \in J. \qquad (35)$$
Then, the following statements are equivalent:

(i) $\bar{x} \in X$ is a highly robust weakly efficient solution to $(P)$.