Robust Solutions to Multi-Objective Linear Programs with Uncertain Data

arXiv:1402.3095v1 [math.OC] 13 Feb 2014

M.A. Goberna, V. Jeyakumar, G. Li, and J. Vicente-Pérez

December 9, 2013

Abstract. In this paper we examine multi-objective linear programming problems in the face of data uncertainty both in the objective function and the constraints. First, we derive a formula for the radius of robust feasibility guaranteeing constraint feasibility for all possible uncertainties within a specified uncertainty set under affine data parametrization. We then present a complete characterization of robust weakly efficient solutions that are immunized against rank-one objective matrix data uncertainty. We also provide classes of commonly used constraint data uncertainty sets under which it can be checked numerically whether or not a robust feasible solution of an uncertain multi-objective linear program is a robust weakly efficient solution.

Keywords. Robust optimization. Multi-objective linear programming. Robust feasibility. Robust weakly efficient solutions.

1 Introduction

Consider the deterministic multi-objective linear programming problem

$$(P)\qquad \text{V-min } \{\, Cx : a_j^\top x \ge b_j,\ j = 1,\dots,p \,\},$$

(This research was partially supported by the MICINN of Spain, Grant MTM2011-29064-C03-02, and by the Australian Research Council, Grant DP120100467. Corresponding author: M.A. Goberna, Tel. +34 965903533, Fax +34 965903531. Dept. of Statistics and Operations Research, Alicante University, 03071 Alicante, Spain. Dept. of Applied Mathematics, University of New South Wales, Sydney 2052, Australia. E-mail addresses: mgoberna@ua.es (M.A. Goberna), v.jeyakumar@unsw.edu.au (V. Jeyakumar), g.li@unsw.edu.au (G. Li), jose.vicente@ua.es (J. Vicente-Pérez).)

where V-min stands for vector minimization, $C$ is a real $m \times n$ matrix called the objective matrix, $x \in \mathbb{R}^n$ is the decision variable, and $(a_j, b_j) \in \mathbb{R}^{n+1}$, $j = 1,\dots,p$, are the constraint data of the problem. The problem (P) has been extensively studied in the literature (see e.g. the overviews [2] and [5]), where perfect information is often assumed, that is, accurate values for the input quantities or parameters, despite the reality that such precise knowledge is rarely available for real-world optimization problems. The data of real-world optimization problems are often uncertain, that is, they are not known exactly at the time of the decision, due to estimation errors, prediction errors or lack of information. Scalar uncertain optimization problems have traditionally been treated via sensitivity analysis, which estimates the impact of small data perturbations on the optimal value, while robust optimization, which provides a deterministic framework for uncertain problems [1], [6], has recently emerged as a powerful alternative approach. Particular types of uncertain multi-objective linear programming problems have been studied: e.g. [12] considers changes in one objective function via sensitivity analysis, [10] and [11] consider changes in the whole objective function $x \mapsto Cx$, and [8] deals with changes in the constraints, the latter three works using different robustness approaches. The purpose of the present work is to study multi-objective linear programming problems in the face of data uncertainty both in the objective function and the constraints from a robustness perspective. Following the robust optimization framework, the multi-objective problem (P) in the face of data uncertainty both in the objective matrix and in the data of the constraints can be captured by a parameterized multi-objective linear programming problem of the form

$$(P_w)\qquad \text{V-min } \{\, Cx : a_j^\top x \ge b_j,\ j = 1,\dots,p \,\},$$

where the input data, the rows of $C$ and $(a_j,b_j)$, $j = 1,\dots,p$, are uncertain vectors.
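For intuition about the deterministic problem (P), recall that a weakly efficient solution can be computed by weighted-sum scalarization: for any weight vector $\lambda$ in the simplex, any minimizer of $\lambda^\top Cx$ over the feasible set is weakly efficient. A minimal sketch follows; the two-objective data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Deterministic MOLP (illustrative data): V-min {Cx : a_j^T x >= b_j, x >= 0}.
C = np.array([[1.0, 0.0],
              [0.0, 1.0]])          # two linear objectives
A = np.array([[1.0, 1.0]])          # single constraint  x1 + x2 >= 1
b = np.array([1.0])

lam = np.array([0.5, 0.5])          # simplex weights

# Weighted-sum scalarization: min lam^T C x  s.t.  -A x <= -b, x >= 0.
res = linprog(c=lam @ C, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
assert res.success
print(res.x, res.fun)               # optimal value 0.5, attained on the facet x1 + x2 = 1
```

The returned point is a vertex of the facet $x_1 + x_2 = 1$; any point of that facet is weakly efficient for this instance.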
The sets $U$ and $V_j$, $j = 1,\dots,p$, are specified uncertainty sets that are bounded, but often infinite, sets: $C \in U \subseteq \mathbb{R}^{m \times n}$ and $(a_j,b_j) \in V_j \subseteq \mathbb{R}^{n+1}$, $j = 1,\dots,p$. So, the uncertain parameter is $w := (C, a_1, b_1, \dots, a_p, b_p) \in W := U \times \prod_{j=1}^{p} V_j$. By enforcing the constraints for all possible uncertainties within $V_j$, $j = 1,\dots,p$, the uncertain problem becomes the following uncertain multi-objective linear semi-infinite programming problem

$$(P^C)\qquad \text{V-min } \{\, Cx : a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in V_j,\ j = 1,\dots,p \,\},$$

where the data uncertainty occurs only in the objective function and

$$X := \{\, x \in \mathbb{R}^n : a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in V_j,\ j = 1,\dots,p \,\}$$

is the robust feasible set of $(P_w)$. Following the recent work on robust linear programming (see [1]), some of the key questions of multi-objective linear programming under data uncertainty include:

I. (Guaranteeing robust feasibility) How to guarantee non-emptiness of the robust feasible set $X$ for specified uncertainty sets $V_j$, $j = 1,\dots,p$?

II. (Defining and identifying robust solutions) How to define and characterize a robust solution of the uncertain multi-objective problem $(P^C)$ that is immunized against data uncertainty?

III. (Numerical tractability of robust solutions) For what classes of uncertainty sets can robust solutions be numerically checked?

In this paper, we provide some answers to the above questions for the multi-objective linear programming problem $(P^C)$ in the face of data uncertainty. In particular, we derive a formula for the radius of robust feasibility guaranteeing non-emptiness of the robust feasible set $X$ of $(P^C)$ under affinely parameterized data uncertainty. Then, we establish complete characterizations of robust weakly efficient solutions under rank-one objective matrix data uncertainty, the same type of uncertainty considered in [11] for efficient solutions of similar problems with deterministic constraints. We finally provide classes of commonly used uncertainty sets under which it can be checked numerically whether or not a robust feasible solution is a robust weakly efficient solution.

2 Radius of robust feasibility

In this section, we first discuss the feasibility of our uncertain multi-objective model under affine constraint data perturbations. In other words, for any given matrix $C \in \mathbb{R}^{m \times n}$, we study the feasibility of the problem

$$(P_\alpha)\qquad \text{V-min } Cx \quad \text{s.t.} \quad a_j^\top x \ge b_j,\ \forall (a_j,b_j) \in V_j^\alpha,\ j = 1,\dots,p,$$

for $\alpha \ge 0$, where the uncertainty set-valued mapping $V_j^\alpha$ takes the form

$$V_j^\alpha := (\bar a_j, \bar b_j) + \alpha B_{n+1},\quad j = 1,\dots,p,\qquad (1)$$
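Checking whether a given point is robust feasible under the ball uncertainty (1) reduces to a closed-form test: the worst case of $a_j^\top x - b_j$ over $(a_j,b_j) \in (\bar a_j,\bar b_j) + \alpha B_{n+1}$ equals $\bar a_j^\top x - \bar b_j - \alpha\|(x,-1)\|_2$. A minimal sketch (the constraint data are illustrative assumptions):

```python
import numpy as np

def robust_feasible(x, a_bar, b_bar, alpha):
    """Check a_j^T x >= b_j for ALL (a_j, b_j) in (a_bar_j, b_bar_j) + alpha*B_{n+1}.

    The inner minimum of a^T x - b over the ball equals
    a_bar^T x - b_bar - alpha * ||(x, -1)||_2, so one inequality per j suffices.
    """
    slack = a_bar @ x - b_bar                 # nominal slacks, one per constraint
    return bool(np.all(slack >= alpha * np.linalg.norm(np.append(x, -1.0))))

# Illustrative data: two nominal constraints in R^2 (x1 >= -1, x2 >= -1).
a_bar = np.array([[1.0, 0.0], [0.0, 1.0]])
b_bar = np.array([-1.0, -1.0])

print(robust_feasible(np.zeros(2), a_bar, b_bar, alpha=0.5))  # True  (slack 1 >= 0.5)
print(robust_feasible(np.zeros(2), a_bar, b_bar, alpha=2.0))  # False (slack 1 <  2)
```

The same dual-norm reasoning reappears for the norm and ellipsoidal uncertainty sets of Section 4.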

with $(\bar a_j, \bar b_j) \in \mathbb{R}^{n+1}$ such that $\{x \in \mathbb{R}^n : \bar a_j^\top x \ge \bar b_j,\ j = 1,\dots,p\} \ne \emptyset$, and $B_{n+1}$ denotes the closed unit ball for the Euclidean norm in $\mathbb{R}^{n+1}$. Let $V := \prod_{j=1}^p V_j$. The radius of robust feasibility associated with $V_j$, $j = 1,\dots,p$, as in (1) is defined to be

$$\rho(V) := \sup\{\alpha \in \mathbb{R}_+ : (P_\alpha) \text{ is feasible}\}.\qquad (2)$$

To establish the formula for the radius of robust feasibility, we first note a known useful characterization of the feasibility of an infinite inequality system in terms of the closure of the convex cone generated by its set of coefficient vectors.

Lemma 1 [9, Theorem 4.4]. Let $T$ be an arbitrary index set. Then, $\{x \in \mathbb{R}^n : a_t^\top x \ge b_t,\ t \in T\} \ne \emptyset$ if and only if $(0_n, 1) \notin \mathrm{cl\,cone}\{(a_t, b_t) : t \in T\}$.

Using the above lemma, we first observe that the radius of robust feasibility $\rho(V)$ is a well-defined non-negative number, and it is finite since, given $j \in \{1,\dots,p\}$, $(0_n,1) \in (\bar a_j, \bar b_j) + \alpha B_{n+1}$ for a large enough positive $\alpha$, in which case the corresponding problem $(P_\alpha)$ is not feasible.

The next result provides a formula for the radius of robust feasibility which involves the so-called hypographical set [3] of the system $\{\bar a_j^\top x \ge \bar b_j,\ j = 1,\dots,p\}$, defined as

$$H(\bar a, \bar b) := \mathrm{conv}\{(\bar a_j, \bar b_j),\ j = 1,\dots,p\} + \mathbb{R}_+\{(0_n, -1)\},\qquad (3)$$

where $\bar a := (\bar a_1,\dots,\bar a_p) \in (\mathbb{R}^n)^p$ and $\bar b := (\bar b_1,\dots,\bar b_p) \in \mathbb{R}^p$. We observe that $H(\bar a, \bar b)$ is the sum of the polytope $\mathrm{conv}\{(\bar a_j, \bar b_j),\ j = 1,\dots,p\}$ with the closed half-line $\mathbb{R}_+\{(0_n,-1)\}$, so that it is a polyhedral convex set.

Lemma 2. Let $(a_j, b_j) \in \mathbb{R}^n \times \mathbb{R}$, $j = 1,\dots,p$, and $\alpha \ge 0$. Suppose that

$$(0_n, 1) \in \mathrm{cl\,cone}\big(\{(a_j,b_j),\ j = 1,\dots,p\} + \alpha B_{n+1}\big).$$

Then, for all $\delta > 0$, we have

$$(0_n, 1) \in \mathrm{cone}\big(\{(a_j,b_j),\ j = 1,\dots,p\} + (\alpha+\delta) B_{n+1}\big).$$

Proof. Let $\delta > 0$. To see the conclusion, we assume on the contrary that $(0_n,1) \notin \mathrm{cone}(\{(a_j,b_j),\ j = 1,\dots,p\} + (\alpha+\delta)B_{n+1})$. Then, the separation theorem implies that there exists $(\xi, r) \in \mathbb{R}^{n+1} \setminus \{0_{n+1}\}$ such that, for all $(y,s) \in \mathrm{cone}(\{(a_j,b_j),\ j = 1,\dots,p\} + (\alpha+\delta)B_{n+1})$, one has

$$r = \langle (\xi,r), (0_n,1) \rangle \le 0 \le \langle (\xi,r), (y,s) \rangle,\qquad (4)$$

where $\langle \cdot,\cdot \rangle$ denotes the usual inner product, i.e. $\langle (\xi,r),(y,s) \rangle = \xi^\top y + rs$. Recall that $(0_n,1) \in \mathrm{cl\,cone}(\{(a_j,b_j),\ j = 1,\dots,p\} + \alpha B_{n+1})$. So, there exist sequences $\{(y_k,s_k)\}_{k\in\mathbb{N}} \subseteq \mathbb{R}^n \times \mathbb{R}$, $\{\mu_j^k\}_{k\in\mathbb{N}} \subseteq \mathbb{R}_+$ and $\{(z_j^k,t_j^k)\}_{k\in\mathbb{N}} \subseteq B_{n+1}$, $j = 1,\dots,p$, such that $(y_k,s_k) \to (0_n,1)$ and

$$(y_k,s_k) = \sum_{j=1}^p \mu_j^k \big( (a_j,b_j) + \alpha (z_j^k,t_j^k) \big).$$

If $\{\sum_{j=1}^p \mu_j^k\}_{k\in\mathbb{N}}$ is a bounded sequence then, by passing to subsequences if necessary, we obtain $(0_n,1) \in \mathrm{cone}(\{(a_j,b_j),\ j = 1,\dots,p\} + \alpha B_{n+1})$. Thus, the claim is true whenever $\{\sum_{j=1}^p \mu_j^k\}_{k\in\mathbb{N}}$ is a bounded sequence. So, we may assume that $\sum_{j=1}^p \mu_j^k \to +\infty$ as $k \to \infty$. Let $(y,s) \in B_{n+1}$ be such that $\langle (y,s),(\xi,r) \rangle = -\|(\xi,r)\|$. Note that

$$\sum_{j=1}^p \mu_j^k \big( (a_j,b_j) + \alpha(z_j^k,t_j^k) \big) + \Big( \sum_{j=1}^p \mu_j^k \Big) \delta (y,s) \in \mathrm{cone}\big(\{(a_j,b_j),\ j = 1,\dots,p\} + (\alpha+\delta)B_{n+1}\big).$$

Then, (4) implies that

$$0 \le \Big\langle (\xi,r), \sum_{j=1}^p \mu_j^k \big( (a_j,b_j) + \alpha(z_j^k,t_j^k) \big) \Big\rangle - \Big( \sum_{j=1}^p \mu_j^k \Big) \delta \|(\xi,r)\| = \langle (\xi,r),(y_k,s_k) \rangle - \Big( \sum_{j=1}^p \mu_j^k \Big) \delta \|(\xi,r)\|.$$

Passing to the limit, we arrive at a contradiction, as $(\xi,r) \ne 0_{n+1}$, $\delta > 0$, $\sum_{j=1}^p \mu_j^k \to +\infty$ and $(y_k,s_k) \to (0_n,1)$, so the right-hand side tends to $-\infty$ while remaining non-negative. $\square$

We now provide our promised formula for the radius of robust feasibility. Observe that, since $0_{n+1} \notin H(\bar a,\bar b)$ by Lemma 1, $d(0_{n+1}, H(\bar a,\bar b))$ can be computed by minimizing $\|\cdot\|_2$ on $H(\bar a,\bar b)$, i.e. by solving a convex quadratic program.

Theorem 3 (Radius of robust feasibility). For $(P_\alpha)$, let $(\bar a_j,\bar b_j) \in \mathbb{R}^n \times \mathbb{R}$, $j = 1,\dots,p$, with $\{x \in \mathbb{R}^n : \bar a_j^\top x \ge \bar b_j,\ j = 1,\dots,p\} \ne \emptyset$. Let $V_j^\alpha := (\bar a_j,\bar b_j) + \alpha B_{n+1}$, $j = 1,\dots,p$, and $V := \prod_{j=1}^p V_j$. Let $\rho(V)$ be the radius of robust feasibility as given in (2) and let $H(\bar a,\bar b)$ be the hypographical set as given in (3). Then,

$$\rho(V) = d\big(0_{n+1}, H(\bar a,\bar b)\big).$$
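The quadratic program just described can be sketched directly: parameterize $H(\bar a,\bar b)$ by simplex weights on the points $(\bar a_j,\bar b_j)$ plus a multiplier on the ray $(0_n,-1)$, and minimize the squared Euclidean norm. The coefficient data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Hypographical set H = conv{(a_j, b_j)} + R_+ {(0_n, -1)} for two
# illustrative coefficient vectors in R^{n+1} (here n = 2).
P = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0]])

def point(z):
    """Point of H given simplex weights z[:2] and ray multiplier z[2]."""
    lam, mu = z[:2], z[2]
    return lam @ P + mu * np.array([0.0, 0.0, -1.0])

res = minimize(lambda z: point(z) @ point(z),      # squared Euclidean norm
               x0=np.array([0.5, 0.5, 1.0]),
               bounds=[(0, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda z: z[0] + z[1] - 1.0}],
               method="SLSQP")
radius = np.sqrt(res.fun)                          # = d(0_{n+1}, H(a, b)) = rho(V)
print(round(radius, 4))                            # ~1.2247, i.e. sqrt(3/2)
```

For this toy system the closest point of $H$ to the origin is $(\tfrac12,\tfrac12,-1)$, so $\rho(V) = \sqrt{3/2}$.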

Proof. If a given $(v,w) \in (\mathbb{R}^n)^p \times \mathbb{R}^p$ is interpreted as a perturbation of $(\bar v,\bar w) \in (\mathbb{R}^n)^p \times \mathbb{R}^p$, we can measure the size of this perturbation as the supremum of the distances between the vectors of coefficients corresponding to the same index. This can be done by endowing the parameter space $(\mathbb{R}^n)^p \times \mathbb{R}^p$ with the metric $d_\infty$ defined by

$$d_\infty\big((v,w),(p,q)\big) := \sup_{j=1,\dots,p} \|(v_j,w_j) - (p_j,q_j)\|,$$

for $(v,w), (p,q) \in (\mathbb{R}^n)^p \times \mathbb{R}^p$. Let $\bar a \in (\mathbb{R}^n)^p$ and $\bar b \in \mathbb{R}^p$ be as in (3). Denote the set consisting of all inconsistent parameters by $\Theta_i$, that is,

$$\Theta_i = \big\{(v,w) \in (\mathbb{R}^n)^p \times \mathbb{R}^p : \{x \in \mathbb{R}^n : v_j^\top x \ge w_j,\ j = 1,\dots,p\} = \emptyset\big\}.$$

We now show that

$$d_\infty\big((\bar a,\bar b), \Theta_i\big) = d\big(0_{n+1}, H(\bar a,\bar b)\big).\qquad (5)$$

By Lemma 1, $d(0_{n+1}, H(\bar a,\bar b)) > 0$. Let $(a^*, b^*) \in H(\bar a,\bar b)$ be such that $\|(a^*,b^*)\| = d(0_{n+1}, H(\bar a,\bar b))$. Then $0_{n+1} \in H_1$, where

$$H_1 := H(\bar a,\bar b) - (a^*,b^*) = \mathrm{conv}\{(\bar a_j - a^*, \bar b_j - b^*),\ j = 1,\dots,p\} + \mathbb{R}_+\{(0_n,-1)\}.$$

So, there exist $\lambda_j \ge 0$ with $\sum_{j=1}^p \lambda_j = 1$ and $\mu \ge 0$ such that

$$0_{n+1} = \sum_{j=1}^p \lambda_j (\bar a_j - a^*, \bar b_j - b^*) + \mu (0_n,-1).$$

This shows that

$$(0_n, 1) = \sum_{j=1}^p \frac{\lambda_j}{\mu + \frac{1}{k}} \Big( \bar a_j - a^*,\ \bar b_j - b^* + \frac{1}{k} \Big),\quad k \in \mathbb{N}.$$

So, $\{x : (\bar a_j - a^*)^\top x \ge \bar b_j - b^* + \frac{1}{k},\ j = 1,\dots,p\} = \emptyset$. Thus, $(\bar a - a^*, \bar b - b^* + \frac{1}{k}) \in \Theta_i$, and so $(\bar a - a^*, \bar b - b^*) \in \mathrm{cl}\,\Theta_i$. It follows that

$$d_\infty\big((\bar a,\bar b),\Theta_i\big) = d_\infty\big((\bar a,\bar b), \mathrm{cl}\,\Theta_i\big) \le \|(a^*,b^*)\| = d\big(0_{n+1}, H(\bar a,\bar b)\big).$$

To see (5), we suppose on the contrary that $d_\infty((\bar a,\bar b),\Theta_i) < d(0_{n+1},H(\bar a,\bar b))$. Then, there exist $\varepsilon_0 > 0$, with $\varepsilon_0 < \|(a^*,b^*)\|$, and $(\hat a,\hat b) \in \mathrm{bd}\,\Theta_i$ such that

$$d_\infty\big((\bar a,\bar b),(\hat a,\hat b)\big) = d_\infty\big((\bar a,\bar b),\Theta_i\big) < \|(a^*,b^*)\| - \varepsilon_0.$$

Then, one can find $\{(\hat a^k, \hat b^k)\}_{k\in\mathbb{N}} \subseteq \Theta_i$ such that $(\hat a^k,\hat b^k) \to (\hat a,\hat b)$. So, Lemma 1 gives us that

$$(0_n,1) \in \mathrm{cl\,cone}\{(\hat a_j^k, \hat b_j^k) : j = 1,\dots,p\} = \mathrm{cone}\{(\hat a_j^k, \hat b_j^k) : j = 1,\dots,p\},$$

the equality holding because finitely generated convex cones are closed.

Thus, there exist $\lambda_j^k \ge 0$ such that $(0_n,1) = \sum_{j=1}^p \lambda_j^k (\hat a_j^k, \hat b_j^k)$. Note that $\sum_{j=1}^p \lambda_j^k > 0$, and so

$$0_{n+1} = \sum_{j=1}^p \frac{\lambda_j^k}{\sum_{j=1}^p \lambda_j^k} (\hat a_j^k, \hat b_j^k) + \frac{1}{\sum_{j=1}^p \lambda_j^k} (0_n,-1).$$

Then, as $k \to \infty$,

$$\Big\| \sum_{j=1}^p \frac{\lambda_j^k}{\sum_{j=1}^p \lambda_j^k} (\hat a_j, \hat b_j) + \frac{1}{\sum_{j=1}^p \lambda_j^k} (0_n,-1) \Big\| = \Big\| \sum_{j=1}^p \frac{\lambda_j^k}{\sum_{j=1}^p \lambda_j^k} \big( (\hat a_j, \hat b_j) - (\hat a_j^k, \hat b_j^k) \big) \Big\| \to 0.$$

So, $0_{n+1} \in \mathrm{cl}\,H(\hat a,\hat b) = H(\hat a,\hat b)$. It then follows that there exist $\lambda_j \ge 0$ with $\sum_{j=1}^p \lambda_j = 1$ and $\mu \ge 0$ such that $0_{n+1} = \sum_{j=1}^p \lambda_j (\hat a_j,\hat b_j) + \mu (0_n,-1)$. Thus, we have

$$\Big\| \sum_{j=1}^p \lambda_j (\bar a_j,\bar b_j) + \mu(0_n,-1) \Big\| = \Big\| \sum_{j=1}^p \lambda_j \big( (\bar a_j,\bar b_j) - (\hat a_j,\hat b_j) \big) \Big\| \le d_\infty\big((\bar a,\bar b),(\hat a,\hat b)\big) < \|(a^*,b^*)\| - \varepsilon_0,$$

where the first inequality follows from the definition of $d_\infty$ and $\lambda_j \ge 0$ with $\sum_{j=1}^p \lambda_j = 1$. Note that $\sum_{j=1}^p \lambda_j (\bar a_j,\bar b_j) + \mu(0_n,-1) \in H(\bar a,\bar b)$. We see that $H(\bar a,\bar b)$ meets the ball $(\|(a^*,b^*)\| - \varepsilon_0) B_{n+1}$. This shows that $d(0_{n+1},H(\bar a,\bar b)) \le \|(a^*,b^*)\| - \varepsilon_0$, which contradicts the fact that $d(0_{n+1},H(\bar a,\bar b)) = \|(a^*,b^*)\|$. Therefore, (5) holds.

Let $\alpha \in \mathbb{R}_+$ be such that $(P_\alpha)$ is feasible. Then, $(a,b) \in \Theta_i$ implies that $d_\infty((\bar a,\bar b),(a,b)) \ge \alpha$. Therefore, (5) gives us that $d(0_{n+1},H(\bar a,\bar b)) = d_\infty((\bar a,\bar b),\Theta_i) \ge \alpha$. Thus, $\rho(V) \le d(0_{n+1},H(\bar a,\bar b))$.

We now show that $\rho(V) = d(0_{n+1},H(\bar a,\bar b))$. To see this, we proceed by contradiction and suppose that $\rho(V) < d(0_{n+1},H(\bar a,\bar b))$. Then, there exists $\delta > 0$ such that $\rho(V) + 2\delta < d(0_{n+1},H(\bar a,\bar b))$. Let $\alpha_0 := \rho(V) + \delta$. Then, by the definition of $\rho(V)$, $(P_{\alpha_0})$ is not feasible, that is,

$$\Big\{ x \in \mathbb{R}^n : a^\top x \ge b,\ \forall (a,b) \in \bigcup_{j=1}^p \big( (\bar a_j,\bar b_j) + \alpha_0 B_{n+1} \big) \Big\} = \emptyset.$$

Hence, it follows from Lemma 1 that $(0_n,1) \in \mathrm{cl\,cone}\big( \bigcup_{j=1}^p ((\bar a_j,\bar b_j) + \alpha_0 B_{n+1}) \big)$. By applying Lemma 2, we can find $\mu_j \ge 0$ and $(z_j,t_j) \in B_{n+1}$, $j = 1,\dots,p$, such that

$$(0_n,1) = \sum_{j=1}^p \mu_j \big( (\bar a_j,\bar b_j) + (\alpha_0+\delta)(z_j,t_j) \big).$$

Let $(a_j,b_j) := (\bar a_j,\bar b_j) + (\alpha_0+\delta)(z_j,t_j)$, $j = 1,\dots,p$, $a := (a_1,\dots,a_p) \in (\mathbb{R}^n)^p$ and $b := (b_1,\dots,b_p) \in \mathbb{R}^p$. Then, $d_\infty((\bar a,\bar b),(a,b)) \le \alpha_0+\delta$ and

$$(0_n,1) = \sum_{j=1}^p \mu_j (a_j,b_j) \in \mathrm{cone}\{(a_j,b_j),\ j = 1,\dots,p\}.$$

So, Lemma 1 implies that $\{x \in \mathbb{R}^n : a_j^\top x \ge b_j,\ j = 1,\dots,p\} = \emptyset$, and hence $(a,b) \in \Theta_i$. Thus,

$$d_\infty\big((\bar a,\bar b),\Theta_i\big) \le d_\infty\big((\bar a,\bar b),(a,b)\big) \le \alpha_0 + \delta = \rho(V) + 2\delta.$$

Thus, from (5), we see that $d(0_{n+1},H(\bar a,\bar b)) = d_\infty((\bar a,\bar b),\Theta_i) \le \rho(V)+2\delta$. This contradicts the fact that $\rho(V)+2\delta < d(0_{n+1},H(\bar a,\bar b))$. So, the conclusion follows. $\square$

Remark 4. We would like to note that we have given a self-contained and simple proof of Theorem 3 by exploiting the finiteness of the linear inequality system. A semi-infinite version of Theorem 3 under a regularity condition was presented in [8, Theorem 3.3], where the proof relies on several results in [3] and [4].

In the following example we show how the radius of robust feasibility of $(P_\alpha)$ can be calculated using Theorem 3.

Example 5 (Calculating the radius of robust feasibility). Consider $(P_\alpha)$ with $n = 3$, $p = 5$ and $V_j^\alpha$ as in (1), with

$$\big\{ (\bar a_j, \bar b_j),\ j = 1,\dots,5 \big\} = \big\{ (-2,-1,-2,-6),\ (-1,-2,-2,-6),\ (-1,0,0,-3),\ (0,-1,0,-3),\ (0,0,-1,-3) \big\}.\qquad (6)$$

The minimum of $\|\cdot\|_2$ on $H(\bar a,\bar b)$, whose linear representation

$$x_1 + x_2 - x_3 \ge -1,\quad 3x_1 + 3x_2 + 3x_3 - 4x_4 \ge 9,\quad -x_1 - x_2 - x_3 \ge 1,$$
$$-3x_1 + x_2 + x_3 \ge -1,\quad x_1 - 3x_2 + x_3 \ge -1,\quad -x_1 - x_2 + 3x_3 \ge -3,$$

is obtained from (3) and (6) by Fourier–Motzkin elimination, is attained at $\big({-\tfrac13}, {-\tfrac13}, {-\tfrac13}, -3\big)$. So,

$$\rho(V) = \Big\| \Big({-\tfrac13}, {-\tfrac13}, {-\tfrac13}, -3\Big) \Big\| = \sqrt{\tfrac{28}{3}}.$$

3 Rank-One Objective Matrix Uncertainty

In this section we assume that the matrix $C$ in the objective function is uncertain and belongs to the one-dimensional compact convex uncertainty set in $\mathbb{R}^{m \times n}$ given by

$$U = \{\, \bar C + \rho uv^\top : \rho \in [0,1] \,\},$$

where $\bar C$ is a given $m \times n$ matrix while $u \in \mathbb{R}^m_+$ and $v \in \mathbb{R}^n$ are given vectors. This data uncertainty set was introduced and examined in [11, Section 3]. Recall that the normal cone of a closed convex set $X$ at $\bar x \in X$ is

$$N(X,\bar x) := \{\, u \in \mathbb{R}^n : u^\top (x - \bar x) \le 0,\ \forall x \in X \,\}.$$

Moreover, the simplex $\Delta_m$ is defined as $\Delta_m := \{\lambda \in \mathbb{R}^m_+ : \sum_{i=1}^m \lambda_i = 1\}$. Recall that, given $x,y \in \mathbb{R}^m$, we write $x \le y$ ($x < y$) when $x_i \le y_i$ ($x_i < y_i$, respectively) for all $i \in I := \{1,\dots,m\}$. Moreover, we write $x \lneq y$ when $x \le y$ and $x \ne y$.

In [12] and [11, Section 4], where the constraints are deterministic, robust efficiency means the preservation of the corresponding efficiency property for all $C \in U$. So, this concept is very restrictive unless the uncertainty set $U$ is small in some sense (e.g. segments emanating from $\bar C$). In our general framework of uncertain objectives and constraints, the following definition of robust weak efficiency is referred to the set $X$ of robust feasible solutions.

Definition 6 (Robust weakly efficient solution). We say that $\bar x \in \mathbb{R}^n$ is a robust weakly efficient solution of $(P^C)$ if, for each $C \in U$, there is no $x \in X$ such that $Cx < C\bar x$.
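Definition 6 suggests a simple (if brute-force) falsification test: scan a grid of $\rho \in [0,1]$ and a finite sample of candidate points of $X$, looking for a pair $(C, x)$ with $Cx < C\bar x$. This is a heuristic sketch, not the exact certificate of the theorems below, and the numerical data are assumptions echoing the structure of Example 8 further on:

```python
import numpy as np

def violates_robust_weak_efficiency(x_bar, candidates, C_bar, u, v, num_rho=11):
    """Search for a certificate that x_bar is NOT robust weakly efficient:
    some C = C_bar + rho*u v^T in U and some x in a finite sample of X
    with Cx < C x_bar componentwise."""
    for rho in np.linspace(0.0, 1.0, num_rho):
        C = C_bar + rho * np.outer(u, v)
        ref = C @ x_bar
        for x in candidates:
            if np.all(C @ x < ref - 1e-12):
                return True, rho, x
    return False, None, None

# Illustrative data (assumptions):
C_bar = np.array([[-3.0, -1.0, -2.0], [0.0, -1.0, -2.0]])
u = np.array([1.0, -1.0])          # note: u is NOT in R^2_+
v = np.array([0.0, 3.0, 0.0])
x_bar = np.array([1.0, 1.0, 1.5])
candidates = [np.array([0.0, 0.0, 3.0])]

found, rho, x = violates_robust_weak_efficiency(x_bar, candidates, C_bar, u, v)
print(found, rho)                  # True, with a dominating pair found near rho = 0.4
```

Since each component of $\rho \mapsto (C\bar x - Cx)$ is affine, the scan over $\rho$ could in principle be replaced by checking a subinterval of $[0,1]$ exactly.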

The next characterization of the robust weakly efficient solutions in terms of multipliers involves the so-called characteristic cone [9, p. 81] of the constraint system of $(P^C)$, defined as

$$C(V) := \mathrm{cone}\Big( \bigcup_{j=1}^p V_j \Big) + \mathbb{R}_+\{(0_n,-1)\}.$$

If $V_j$ is a polytope for all $j = 1,\dots,p$, then $C(V)$ is generated by the extreme points of the sets $V_j$, $j = 1,\dots,p$, together with the vector $(0_n,-1)$. So, $C(V)$ is a polyhedral convex cone. If $V_j$ is a compact convex set for all $j = 1,\dots,p$ and the strict robust feasibility condition

$$\{\, x \in \mathbb{R}^n : a_j^\top x > b_j,\ \forall (a_j,b_j) \in V_j,\ j = 1,\dots,p \,\} \ne \emptyset\qquad (7)$$

holds then, according to [9, Theorem 5.3(ii)], $\mathrm{cone}\big(\bigcup_{j=1}^p V_j\big)$ is closed, and this in turn implies that $C(V)$ is closed too.

Theorem 7 (Robust weakly efficient solutions). The point $\bar x \in X$ is a robust weakly efficient solution of $(P^C)$ if and only if there exist $\lambda, \tilde\lambda \in \Delta_m$ such that

$$-\bar C^\top \lambda \in N(X,\bar x) \quad\text{and}\quad -(\bar C + uv^\top)^\top \tilde\lambda \in N(X,\bar x).$$

Moreover, if $V_j$ is convex, $j = 1,\dots,p$, and $C(V)$ is closed, then the robust weak efficiency of $\bar x \in X$ is further equivalent to the condition that there exist $\lambda,\tilde\lambda \in \Delta_m$ and $(a_j,b_j), (\tilde a_j,\tilde b_j) \in V_j$, $\mu_j, \tilde\mu_j \ge 0$, $j = 1,\dots,p$, such that

$$\bar C^\top \lambda = \sum_{j=1}^p \mu_j a_j \quad\text{and}\quad \mu_j (a_j^\top \bar x - b_j) = 0,\ j = 1,\dots,p,$$

and

$$(\bar C + uv^\top)^\top \tilde\lambda = \sum_{j=1}^p \tilde\mu_j \tilde a_j \quad\text{and}\quad \tilde\mu_j (\tilde a_j^\top \bar x - \tilde b_j) = 0,\ j = 1,\dots,p.$$

Proof. Let $\bar x \in X$ be a robust weakly efficient solution. Then, for each $C \in U$, there exists no $x \in X$ such that $Cx < C\bar x$. By [7, Prop. 18(iii)], this is equivalent to the fact that

$$\forall C \in U,\ \exists \lambda \in \mathbb{R}^m_+ \setminus \{0_m\}:\quad -C^\top \lambda \in N(X,\bar x).$$

As $N(X,\bar x)$ is a cone, by normalization we may assume that $\lambda \in \Delta_m$, and so $\bar x$ is a robust weakly efficient solution if and only if

$$\forall C \in U,\ \exists \lambda \in \Delta_m:\quad -C^\top \lambda \in N(X,\bar x).\qquad (8)$$

To see the first assertion, it suffices to show that (8) is further equivalent to

$$\exists \lambda, \tilde\lambda \in \Delta_m:\quad -\bar C^\top \lambda \in N(X,\bar x) \ \text{and}\ -(\bar C + uv^\top)^\top \tilde\lambda \in N(X,\bar x).\qquad (9)$$

To see the equivalence, we only need to show that (9) implies (8) when $u \ne 0_m$ (otherwise $U$ is a singleton set). To achieve this, suppose that (9) holds and fix an arbitrary $C \in U$. Then there exists $\alpha \in [0,1]$ such that $C = \bar C + \alpha uv^\top$. Define

$$\tau := \frac{(1-\alpha)\tilde\lambda^\top u}{(1-\alpha)\tilde\lambda^\top u + \alpha \lambda^\top u} \quad\text{and}\quad \gamma := \tau\lambda + (1-\tau)\tilde\lambda.$$

As $\lambda, \tilde\lambda \in \Delta_m$ and $u \in \mathbb{R}^m_+$, we see that $\tau \in [0,1]$ and $\gamma \in \Delta_m$. Moreover, we have

$$\tau\alpha\, v u^\top \lambda - (1-\tau)(1-\alpha)\, v u^\top \tilde\lambda = \frac{\alpha(1-\alpha)(\tilde\lambda^\top u)(\lambda^\top u) - (1-\alpha)\alpha(\lambda^\top u)(\tilde\lambda^\top u)}{(1-\alpha)\tilde\lambda^\top u + \alpha\lambda^\top u}\; v = 0_n.\qquad (10)$$

Now,

$$-C^\top\gamma = -(\bar C + \alpha uv^\top)^\top (\tau\lambda + (1-\tau)\tilde\lambda) = \tau(-\bar C^\top\lambda) + (1-\tau)\big({-(\bar C+uv^\top)^\top\tilde\lambda}\big) - \big(\tau\alpha\, vu^\top\lambda - (1-\tau)(1-\alpha)\, vu^\top\tilde\lambda\big)$$
$$= \tau(-\bar C^\top\lambda) + (1-\tau)\big({-(\bar C+uv^\top)^\top\tilde\lambda}\big) \in N(X,\bar x),$$

where the last equality follows from (10) and the final relation follows from (9) and the convexity of $N(X,\bar x)$.

To see the second assertion, we assume that $V_j$ is convex, $j = 1,\dots,p$, and $C(V)$ is closed. We only need to show that

$$N(X,\bar x) = \Big\{ -\sum_{j=1}^p \mu_j a_j : (a_j,b_j) \in V_j,\ \mu_j \ge 0 \ \text{and}\ \mu_j(a_j^\top\bar x - b_j) = 0,\ j = 1,\dots,p \Big\}.$$

The system $\{ a^\top x \ge b,\ (a,b) \in T \}$, with $T = \bigcup_{j=1}^p V_j$, is a linear representation of $X$. Thus, $u \in N(X,\bar x)$ if and only if the inequality $-u^\top x \ge$

$-u^\top \bar x$ is a consequence of $\{ a^\top x \ge b,\ (a,b) \in T \}$, if and only if, by the Farkas Lemma [9, Corollary 3.1.2],

$$(-u, -u^\top\bar x) \in \mathrm{cone}(T) + \mathbb{R}_+\{(0_n,-1)\}.$$

This is equivalent to asserting the existence of a finite subset $S$ of $T$, corresponding non-negative scalars $\lambda_{(a,b)}$, $(a,b) \in S$, and $\mu \ge 0$, such that

$$(-u, -u^\top\bar x) = \sum_{(a,b)\in S} \lambda_{(a,b)}(a,b) + \mu(0_n,-1).\qquad (11)$$

Multiplying both members of (11) by $(\bar x,-1)$ we get $0 = \sum_{(a,b)\in S}\lambda_{(a,b)}(a^\top\bar x - b) + \mu$; since every summand is non-negative, $\mu = 0$, so that (11) is equivalent to

$$-u = \sum_{(a,b)\in S} \lambda_{(a,b)}\, a \quad\text{and}\quad \lambda_{(a,b)}(a^\top\bar x - b) = 0,\ (a,b)\in S.\qquad (12)$$

Finally, since $S \subseteq \bigcup_{j=1}^p V_j$, we can write $S = \bigcup_{j=1}^p S_j$, with $S_j \subseteq V_j$, $j = 1,\dots,p$, and $S_i \cap S_j = \emptyset$ when $i \ne j$. Let $\mu_j := \sum_{(a,b)\in S_j} \lambda_{(a,b)}$, $j = 1,\dots,p$. If $\mu_j \ne 0$ one has, by the convexity of $V_j$,

$$(a_j,b_j) := \frac{1}{\mu_j}\sum_{(a,b)\in S_j} \lambda_{(a,b)}(a,b) \in V_j.$$

Take $(a_j,b_j) \in V_j$ arbitrarily when $\mu_j = 0$. Then we get from (12) that

$$u = -\sum_{j=1}^p \mu_j a_j \quad\text{and}\quad \mu_j(a_j^\top\bar x - b_j) = 0,\ j = 1,\dots,p.$$

Thus, the conclusion follows. $\square$

In the definition of the rank-one objective data uncertainty set $U = \{\bar C + \rho uv^\top : \rho\in[0,1]\}$, we require that $u \in \mathbb{R}^m_+$. The following example, inspired by [11, Example 3.3], illustrates that if this non-negativity requirement is dropped, then the above solution characterization in Theorem 7 may fail.

Example 8 (Non-negativity requirement for rank-one objective data uncertainty). Let

$$\bar C = \begin{pmatrix} -3 & -1 & -2 \\ 0 & -1 & -2 \end{pmatrix},\quad u = \begin{pmatrix} 1 \\ -1 \end{pmatrix} \notin \mathbb{R}^2_+ \quad\text{and}\quad v = \begin{pmatrix} 0 \\ 3 \\ 0 \end{pmatrix}.$$

Consider the uncertain multiobjective optimization problem

$$\text{V-min } \{\, Cx : a_j^\top x \ge b_j,\ \forall(a_j,b_j) \in V_j,\ j = 1,2 \,\},\qquad (13)$$

where the objective data matrix $C$ is an element of

$$\{ \bar C + \rho uv^\top : \rho\in[0,1] \} = \Big\{ \begin{pmatrix} -3 & -1 & -2 \\ 0 & -1 & -2 \end{pmatrix} + \rho \begin{pmatrix} 0 & 3 & 0 \\ 0 & -3 & 0 \end{pmatrix} : \rho \in [0,1] \Big\}$$

and the uncertainty sets for the constraints are the convex polytopes

$$V_1 = \mathrm{conv}\{ (-2,-1,-2,-6),\ (-1,-2,-2,-6) \} \quad\text{and}\quad V_2 = \mathrm{conv}\{ (-1,0,0,-3),\ (0,-1,0,-3),\ (0,0,-1,-3) \}.$$

Note that the robust feasible set is

$$X = \{ x \in \mathbb{R}^3 : a_j^\top x \ge b_j,\ \forall(a_j,b_j)\in V_j,\ j = 1,2 \} = \{ x \in \mathbb{R}^3 : \bar a_j^\top x \ge \bar b_j,\ j = 1,\dots,5 \},$$

where $\{ \bar a_j^\top x \ge \bar b_j,\ j = 1,\dots,5 \}$ is the system with the data in (6). It can be checked that $\bar x = (1,1,3/2) \in X$, and so

$$N(X,\bar x) = \Big\{ \mu_1 \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix} + \mu_2 \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} : \mu_1 \ge 0,\ \mu_2 \ge 0 \Big\}.$$

Let $\lambda = (2/3, 1/3)$ and $\tilde\lambda = (1/3, 2/3)$. Then, we have $-\bar C^\top\lambda \in N(X,\bar x)$ and $-(\bar C + uv^\top)^\top\tilde\lambda \in N(X,\bar x)$. On the other hand, for

$$C = \bar C + \tfrac12 uv^\top = \begin{pmatrix} -3 & -1 & -2 \\ 0 & -1 & -2 \end{pmatrix} + \tfrac12 \begin{pmatrix} 0 & 3 & 0 \\ 0 & -3 & 0 \end{pmatrix} = \begin{pmatrix} -3 & 1/2 & -2 \\ 0 & -5/2 & -2 \end{pmatrix} \in U$$

and $x = (0,0,3) \in X$, we see that

$$Cx = \begin{pmatrix} -6 \\ -6 \end{pmatrix} < \begin{pmatrix} -11/2 \\ -11/2 \end{pmatrix} = C\bar x.$$

So, $\bar x$ is not a weakly efficient solution of (13) for this $C$, and hence not a robust weakly efficient solution. Thus, the above solution characterization fails.

In the case where the constraints are uncertainty-free, i.e. the sets $V_j$ are all singletons, we obtain the following solution characterization for robust multiobjective optimization problems with rank-one objective uncertainty.

Corollary 9. Let $V_j = \{(\bar a_j,\bar b_j)\}$, $j = 1,\dots,p$, and $\bar x \in X$. Then, the following statements are equivalent:

(i) $\bar x$ is a robust weakly efficient solution;

(ii) there exist $\lambda,\tilde\lambda \in \Delta_m$ such that $-\bar C^\top\lambda \in N(X,\bar x)$ and $-(\bar C+uv^\top)^\top\tilde\lambda \in N(X,\bar x)$;

(iii) there exist $\lambda,\tilde\lambda \in \Delta_m$ and $\mu_j,\tilde\mu_j \ge 0$, $j = 1,\dots,p$, such that

$$\bar C^\top\lambda = \sum_{j=1}^p \mu_j \bar a_j \quad\text{and}\quad \mu_j(\bar a_j^\top\bar x - \bar b_j) = 0,\ j = 1,\dots,p,$$

and

$$(\bar C+uv^\top)^\top\tilde\lambda = \sum_{j=1}^p \tilde\mu_j \bar a_j \quad\text{and}\quad \tilde\mu_j(\bar a_j^\top\bar x - \bar b_j) = 0,\ j = 1,\dots,p;$$

(iv) $\bar x$ is a weakly efficient solution of both problems

$$(P_0)\qquad \text{V-min } \bar Cx \quad\text{s.t.}\quad \bar a_j^\top x \ge \bar b_j,\ j = 1,\dots,p,$$

and

$$(P_1)\qquad \text{V-min } (\bar C+uv^\top)x \quad\text{s.t.}\quad \bar a_j^\top x \ge \bar b_j,\ j = 1,\dots,p.$$

Proof. Let $V_j = \{(\bar a_j,\bar b_j)\}$, $j = 1,\dots,p$. The equivalences (i) ⇔ (ii) ⇔ (iii) come from Theorem 7, taking into account that all the uncertainty sets $V_j$ are polytopes. Note that (i) ⇒ (iv) always holds. Finally, the implication (iv) ⇒ (ii) is immediate by the usual characterization of weakly efficient solutions (e.g. see [7, Prop. 18(iii)]). Thus, the conclusion follows. $\square$

Remark 10. The equivalence (i) ⇔ (iii) in Corollary 9, on robust weakly efficient solutions of uncertain vector linear programming problems, can be seen as a counterpart of [11, Theorem 3.1], on robust efficient solutions of the same type of problems.

4 Tractable Classes of Robust Multi-Objective LPs

In this section, we provide various classes of commonly used uncertainty sets determining the robust feasible set

$$X = \{\, x \in \mathbb{R}^n : a_j^\top x \ge b_j,\ \forall(a_j,b_j) \in V_j,\ j = 1,\dots,p \,\},$$

under which one can numerically check whether a robust feasible point is a robust weakly efficient solution or not. Throughout this section we assume that the objective function of $(P^C)$ is subject to the rank-one matrix data uncertainty defined in Section 3. We begin with the simple box constraint data uncertainty.

4.1 Box constraint data uncertainty

Consider

$$V_j = [\underline a_j, \overline a_j] \times [\underline b_j, \overline b_j],\qquad (14)$$

where $\underline a_j, \overline a_j \in \mathbb{R}^n$ and $\underline b_j, \overline b_j \in \mathbb{R}$, $j = 1,\dots,p$. Denote the extreme points of the box $[\underline a_j, \overline a_j]$ by $\{\hat a_j^1, \dots, \hat a_j^{2^n}\}$.

Theorem 11. Let $V_j$ be as in (14), $j = 1,\dots,p$. The point $\bar x \in X$ is a robust weakly efficient solution of $(P^C)$ if and only if there exist $\lambda,\tilde\lambda \in \Delta_m$ and $\mu_j^l, \tilde\mu_j^l \ge 0$ such that

$$\bar C^\top\lambda = \sum_{j=1}^p \sum_{l=1}^{2^n} \mu_j^l\, \hat a_j^l \quad\text{and}\quad \mu_j^l\big( (\hat a_j^l)^\top\bar x - \overline b_j \big) = 0,\ j = 1,\dots,p,\ l = 1,\dots,2^n,$$

and

$$(\bar C+uv^\top)^\top\tilde\lambda = \sum_{j=1}^p \sum_{l=1}^{2^n} \tilde\mu_j^l\, \hat a_j^l \quad\text{and}\quad \tilde\mu_j^l\big( (\hat a_j^l)^\top\bar x - \overline b_j \big) = 0,\ j = 1,\dots,p,\ l = 1,\dots,2^n.$$

Proof. Let $\bar x$ be a robust weakly efficient solution of $(P^C)$. Note that $X$ can be rewritten as

$$X = \big\{ x \in \mathbb{R}^n : a_j^\top x - b_j \ge 0 \ \text{for all}\ (a_j,b_j) \in [\underline a_j,\overline a_j]\times[\underline b_j,\overline b_j] \big\} = \big\{ x \in \mathbb{R}^n : (\hat a_j^l)^\top x - \overline b_j \ge 0,\ l = 1,\dots,2^n,\ j = 1,\dots,p \big\}.$$

Then, we have

$$N(X,\bar x) = \Big\{ -\sum_{j=1}^p \sum_{l=1}^{2^n} \mu_j^l\, \hat a_j^l : \mu_j^l\big( (\hat a_j^l)^\top\bar x - \overline b_j \big) = 0,\ \mu_j^l \ge 0,\ \forall l, j \Big\}.$$

Since $V_j$ is a convex polytope for $j = 1,\dots,p$, the conclusion follows from Theorem 7. $\square$

It is worth noting, from Theorem 11, that one can determine whether or not a given robust feasible point $\bar x$ of $(P^C)$ under box constraint data uncertainty is a robust weakly efficient solution by solving finitely many systems of linear equalities.
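The vertex reformulation used in the proof of Theorem 11 is easy to make concrete: enumerate the $2^n$ extreme points of each box and test the worst vertex against the upper endpoint of the $b$-interval. A minimal sketch with illustrative data (assumptions, not from the paper):

```python
import itertools
import numpy as np

def box_vertices(a_lo, a_hi):
    """All 2^n extreme points of the box [a_lo, a_hi] in R^n."""
    return [np.array(p) for p in itertools.product(*zip(a_lo, a_hi))]

def robust_feasible_box(x, boxes, b_hi):
    """Check a^T x >= b for every a in each box and every b <= b_hi[j]:
    equivalently min_l (a_hat^l)^T x >= b_hi[j] over the box vertices."""
    for (a_lo, a_hi), b in zip(boxes, b_hi):
        worst = min(v @ x for v in box_vertices(a_lo, a_hi))
        if worst < b:
            return False
    return True

# Illustrative data: one uncertain constraint in R^2, a in [1,2] x [1,3], b <= 2.
boxes = [(np.array([1.0, 1.0]), np.array([2.0, 3.0]))]
b_hi = [2.0]

print(robust_feasible_box(np.array([2.0, 0.0]), boxes, b_hi))  # True:  worst vertex gives 2 >= 2
print(robust_feasible_box(np.array([1.0, 0.5]), boxes, b_hi))  # False: worst vertex gives 1.5 < 2
```

Checking active vertices ($(\hat a_j^l)^\top \bar x = \overline b_j$) in the same loop yields the index sets entering the linear systems of Theorem 11.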

4.2 Norm constraint data uncertainty

Consider the constraint data uncertainty set

$$V_j = \{\, \bar a_j + \delta_j v_j : v_j \in \mathbb{R}^n,\ \|Z_j v_j\|_s \le 1 \,\} \times [\underline b_j, \overline b_j],\qquad (15)$$

where $\bar a_j \in \mathbb{R}^n$, $\underline b_j, \overline b_j \in \mathbb{R}$, $\delta_j \ge 0$, and $Z_j$ is an invertible symmetric $n \times n$ matrix, $j = 1,\dots,p$; here $\|\cdot\|_s$ denotes the $s$-norm, $s \in [1,+\infty]$, defined by

$$\|x\|_s = \Big( \sum_{i=1}^n |x_i|^s \Big)^{1/s} \ \text{if}\ s \in [1,+\infty), \qquad \|x\|_\infty = \max\{ |x_i| : 1 \le i \le n \}.$$

Moreover, we define $s^* \in [1,+\infty]$ to be the number such that $\frac{1}{s} + \frac{1}{s^*} = 1$. The following simple facts about $s$-norms will be used later on. First, the dual norm of the $s$-norm is the $s^*$-norm, that is,

$$\sup_{\|x\|_s \le 1} u^\top x = \|u\|_{s^*} \quad \text{for all } u \in \mathbb{R}^n.$$

Second,

$$\partial\|\cdot\|_s(u) = \{ v : \|v\|_{s^*} \le 1,\ v^\top u = \|u\|_s \},$$

where $\partial f(x)$ denotes the usual convex subdifferential of a convex function $f : \mathbb{R}^n \to \mathbb{R}$ at $x \in \mathbb{R}^n$, i.e. $\partial f(x) = \{ z \in \mathbb{R}^n : z^\top(y-x) \le f(y) - f(x),\ \forall y \in \mathbb{R}^n \}$. In this case, we have the following characterization of robust weakly efficient solutions.

Theorem 12. Let $V_j$ be as in (15), $j = 1,\dots,p$, and suppose that there exists $x_0 \in \mathbb{R}^n$ such that

$$\bar a_j^\top x_0 - \overline b_j - \delta_j \|Z_j^{-1}x_0\|_{s^*} > 0,\quad j = 1,\dots,p.\qquad (16)$$

Then, a point $\bar x \in X$ is a robust weakly efficient solution of $(P^C)$ if and only if there exist $\lambda,\tilde\lambda \in \Delta_m$, $\mu,\tilde\mu \in \mathbb{R}^p_+$ and $w_j,\tilde w_j \in \mathbb{R}^n$, with $\|w_j\|_s \le 1$ and $\|\tilde w_j\|_s \le 1$, such that

$$\lambda^\top \bar C\bar x = \sum_{j=1}^p \mu_j \overline b_j \quad\text{and}\quad \bar C^\top\lambda - \sum_{j=1}^p \mu_j\big( \bar a_j - \delta_j Z_j^{-1}w_j \big) = 0_n,$$

and

$$\tilde\lambda^\top (\bar C+uv^\top)\bar x = \sum_{j=1}^p \tilde\mu_j \overline b_j \quad\text{and}\quad (\bar C+uv^\top)^\top\tilde\lambda - \sum_{j=1}^p \tilde\mu_j\big( \bar a_j - \delta_j Z_j^{-1}\tilde w_j \big) = 0_n.$$
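The dual-norm identity above gives the robust counterpart of each constraint in closed form: the worst-case slack over (15) is $\bar a_j^\top x - \overline b_j - \delta_j \|Z_j^{-1}x\|_{s^*}$. A minimal sketch with illustrative data (assumptions, not from the paper):

```python
import numpy as np

def robust_slack_norm(x, a_bar, b_hi, delta, Z, s):
    """Worst-case slack of a^T x >= b over a in {a_bar + delta*v : ||Z v||_s <= 1}
    and b in [b_lo, b_hi]:

        a_bar^T x - b_hi - delta * ||Z^{-1} x||_{s*},

    where 1/s + 1/s* = 1 (dual-norm identity, Z symmetric and invertible)."""
    s_star = np.inf if s == 1 else (1.0 if s == np.inf else s / (s - 1.0))
    return a_bar @ x - b_hi - delta * np.linalg.norm(np.linalg.solve(Z, x), ord=s_star)

# Illustrative data: Z = identity, s = 2 (so s* = 2, the Euclidean case).
a_bar = np.array([3.0, 4.0])
x = np.array([1.0, 1.0])
slack = robust_slack_norm(x, a_bar, b_hi=1.0, delta=1.0, Z=np.eye(2), s=2.0)
print(slack >= 0)   # True: 7 - 1 - sqrt(2) > 0, so x is robust feasible for this constraint
```

A point is robust feasible exactly when every such slack is non-negative, which is the rewriting of $X$ used in the proof below.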

Proof. Note that $X$ can be rewritten as

$$X = \big\{ x \in \mathbb{R}^n : \bar a_j^\top x - b_j + \delta_j v_j^\top x \ge 0 \ \text{for all}\ \|Z_j v_j\|_s \le 1,\ b_j \in [\underline b_j,\overline b_j],\ j = 1,\dots,p \big\}$$
$$= \big\{ x \in \mathbb{R}^n : \bar a_j^\top x - b_j + \delta_j (Z_j^{-1}z_j)^\top x \ge 0 \ \text{for all}\ \|z_j\|_s \le 1,\ b_j \in [\underline b_j,\overline b_j],\ j = 1,\dots,p \big\}$$
$$= \big\{ x \in \mathbb{R}^n : \bar a_j^\top x - \overline b_j - \delta_j\|Z_j^{-1}x\|_{s^*} \ge 0,\ j = 1,\dots,p \big\}.$$

Since $V_j$ is a compact convex set for $j = 1,\dots,p$ and the strict robust feasibility condition (7) holds as a consequence of (16), the conclusion will follow from Theorem 7 if we show that

$$N(X,\bar x) = \Big\{ u : \exists\,\mu_j \ge 0,\ \|w_j\|_s \le 1 \ \text{s.t.}\ u^\top\bar x = -\sum_{j=1}^p \mu_j\overline b_j \ \text{and}\ u + \sum_{j=1}^p \mu_j\big( \bar a_j - \delta_j Z_j^{-1}w_j \big) = 0_n \Big\}.$$

To see this, let $u \in N(X,\bar x)$. Then, $\bar x$ is a solution of the following convex optimization problem:

$$\min \big\{ -u^\top x : \bar a_j^\top x - \overline b_j - \delta_j\|Z_j^{-1}x\|_{s^*} \ge 0,\ j = 1,\dots,p \big\}.$$

As the strict feasibility condition (16) holds, by Lagrangian duality there exist $\mu_j \ge 0$, $j = 1,\dots,p$, such that

$$-u^\top\bar x = \min_{x\in\mathbb{R}^n} \Big\{ -u^\top x + \sum_{j=1}^p \mu_j\big( -\bar a_j^\top x + \overline b_j + \delta_j\|Z_j^{-1}x\|_{s^*} \big) \Big\}.$$

As $\bar x \in X$, this implies that $\mu_j\big( -\bar a_j^\top\bar x + \overline b_j + \delta_j\|Z_j^{-1}\bar x\|_{s^*} \big) = 0$, $j = 1,\dots,p$, and so the function $h(x) := -u^\top x + \sum_{j=1}^p \mu_j\big( -\bar a_j^\top x + \overline b_j + \delta_j\|Z_j^{-1}x\|_{s^*} \big)$ attains its minimum over $\mathbb{R}^n$ at $\bar x$ and $\min_{x\in\mathbb{R}^n} h(x) = -u^\top\bar x$. This implies that $0_n \in \partial h(\bar x)$, and so there exist $w_j \in \mathbb{R}^n$ with $\|w_j\|_s \le 1$ such that

$$w_j^\top Z_j^{-1}\bar x = \|Z_j^{-1}\bar x\|_{s^*} \quad\text{and}\quad u + \sum_{j=1}^p \mu_j\big( \bar a_j - \delta_j Z_j^{-1}w_j \big) = 0_n.$$

This, together with the complementarity conditions and the equality $w_j^\top Z_j^{-1}\bar x = \|Z_j^{-1}\bar x\|_{s^*}$, gives us that $u^\top\bar x = -\sum_{j=1}^p \mu_j\overline b_j$. Then, we have the inclusion "$\subseteq$" of the claimed identity. To see the reverse inclusion, let $u \in \mathbb{R}^n$ with $u^\top\bar x = -\sum_{j=1}^p \mu_j\overline b_j$ and $u + \sum_{j=1}^p \mu_j( \bar a_j - \delta_j Z_j^{-1}w_j ) = 0_n$ for some $\mu_j \ge 0$, $\|w_j\|_s \le 1$. Then, for all

x X, u x x = u x+ µ j b j = = µ j a j δz 1 j w j x+ µ j b j µ j a j x+b j +δ Z 1 j w j x µ j a j x+b j +δ Z 1 j x s 0, wheretheinequalityfollowsfrom w j s = max u s 1wj uandhence, wj v w j s v s v s for all v R n. So, u NX,x and hence, the conclusion follows. Theorem 12 shows that one can determine whether a robust feasible point x under norm data uncertainty is a robust weakly efficient solution or not by solving finitely many sth-order cone systems that is, linear equations where the variable lies in the ball determined by the s -norm as long as the strict feasibility condition 16 is satisfied. 4.3 Ellipsoidal constraint data uncertainty In this subsection we consider the case where the constraint data are uncertain and belong to the ellipsoidal constraint data uncertainty sets q j V j = {a 0 j + l=1 v l ja l j : v 1 j,...,v q j j 1} [b j,b j ], 17 where a l j Rn, l = 0,1,...,q j, q j N and b j,b j R, j = 1,...,p. Theorem 13. Let V j, j = 1,...,p, be as in 17 and suppose that there exists x 0 R n such that a 0 j x 0 b j a 1 j x 0,...,a q j j x 0 > 0, j = 1,...,p. 18 Then, a point x X is a robust weakly efficient solution of P C if and only if there exist λ, λ m, µ, µ R p + and w, w R n with w 1 and w 1 such that λ Cx = µ j b j and C λ 18 µ j a 0 j y j = 0 m

and
\[
-\tilde{\lambda}^\top (C + uv^\top) \bar{x} = \sum_{j=1}^p \tilde{\mu}_j \underline{b}_j \quad \text{and} \quad (C + uv^\top)^\top \tilde{\lambda} + \sum_{j=1}^p \tilde{\mu}_j \big(a_j^0 + \tilde{y}_j\big) = 0_n,
\]
where $y_j = \sum_{l=1}^{q_j} w_j^l a_j^l$ and $\tilde{y}_j = \sum_{l=1}^{q_j} \tilde{w}_j^l a_j^l$.

Proof. Note that $X$ can be rewritten as
\[
\begin{aligned}
X &= \Big\{x \in \mathbb{R}^n : (a_j^0)^\top x - b_j + \sum_{l=1}^{q_j} v_j^l (a_j^l)^\top x \le 0 \text{ for all } \|(v_j^1,\ldots,v_j^{q_j})\| \le 1,\ b_j \in [\underline{b}_j,\overline{b}_j],\ j = 1,\ldots,p\Big\} \\
&= \Big\{x \in \mathbb{R}^n : (a_j^0)^\top x - \underline{b}_j + \big\|\big((a_j^1)^\top x,\ldots,(a_j^{q_j})^\top x\big)\big\| \le 0,\ j = 1,\ldots,p\Big\}.
\end{aligned}
\]
The conclusion will follow from Theorem 7 if we show that
\[
N(X,\bar{x}) = \Big\{u \in \mathbb{R}^n : \exists\, \mu_j \ge 0,\ \|w_j\| \le 1 \text{ s.t. } u^\top \bar{x} = \sum_{j=1}^p \mu_j \underline{b}_j \text{ and } -u + \sum_{j=1}^p \mu_j \big(a_j^0 + y_j\big) = 0_n\Big\},
\]
where $y_j = \sum_{l=1}^{q_j} w_j^l a_j^l$. To see this, let $u \in N(X,\bar{x})$. Then, $\bar{x}$ is a solution of the following convex optimization problem:
\[
\min\Big\{-u^\top x : (a_j^0)^\top x - \underline{b}_j + \big\|\big((a_j^1)^\top x,\ldots,(a_j^{q_j})^\top x\big)\big\| \le 0,\ j = 1,\ldots,p\Big\}.
\]
As the strict feasibility condition (18) holds, by the Lagrangian duality, there exist $\mu_j \ge 0$, $j = 1,\ldots,p$, such that
\[
-u^\top \bar{x} = \min_{x \in \mathbb{R}^n}\Big\{-u^\top x + \sum_{j=1}^p \mu_j \Big((a_j^0)^\top x - \underline{b}_j + \big\|\big((a_j^1)^\top x,\ldots,(a_j^{q_j})^\top x\big)\big\|\Big)\Big\}.
\]
As $\bar{x} \in X$, this implies that $\mu_j \big((a_j^0)^\top \bar{x} - \underline{b}_j + \|((a_j^1)^\top \bar{x},\ldots,(a_j^{q_j})^\top \bar{x})\|\big) = 0$, $j = 1,\ldots,p$, and so, the function $h(x) := -u^\top x + \sum_{j=1}^p \mu_j \big((a_j^0)^\top x - \underline{b}_j + \|((a_j^1)^\top x,\ldots,(a_j^{q_j})^\top x)\|\big)$ attains its minimum at $\bar{x}$ and $\min_{x \in \mathbb{R}^n} h(x) = -u^\top \bar{x}$. This implies that $0_n \in \partial h(\bar{x})$, and so, there exist $w_j \in \mathbb{R}^{q_j}$ with $\|w_j\| \le 1$ such that
\[
u^\top \bar{x} = \sum_{j=1}^p \mu_j \underline{b}_j \quad \text{and} \quad -u + \sum_{j=1}^p \mu_j \big(a_j^0 + y_j\big) = 0_n,
\]
where $y_j = \sum_{l=1}^{q_j} w_j^l a_j^l$. Then, we have
\[
N(X,\bar{x}) \subseteq \Big\{u \in \mathbb{R}^n : \exists\, \mu_j \ge 0,\ \|w_j\| \le 1 \text{ s.t. } u^\top \bar{x} = \sum_{j=1}^p \mu_j \underline{b}_j \text{ and } -u + \sum_{j=1}^p \mu_j \big(a_j^0 + y_j\big) = 0_n\Big\}.
\]
To see the reverse inclusion, let $u \in \mathbb{R}^n$ be such that $u^\top \bar{x} = \sum_{j=1}^p \mu_j \underline{b}_j$ and $-u + \sum_{j=1}^p \mu_j \big(a_j^0 + y_j\big) = 0_n$ for some $\mu_j \ge 0$, $j = 1,\ldots,p$, and $\|w_j\| \le 1$. Then, for all $x \in X$,
\[
u^\top (x - \bar{x}) = \sum_{j=1}^p \mu_j \big(a_j^0 + y_j\big)^\top x - \sum_{j=1}^p \mu_j \underline{b}_j \le \sum_{j=1}^p \mu_j \Big((a_j^0)^\top x - \underline{b}_j + \big\|\big((a_j^1)^\top x,\ldots,(a_j^{q_j})^\top x\big)\big\|\Big) \le 0.
\]
Thus, $u \in N(X,\bar{x})$ and so, the conclusion follows.

The above robust solution characterization under ellipsoidal constraint data uncertainty shows that one can determine whether a robust feasible point is a robust weakly efficient solution or not by solving finitely many second-order cone systems, as long as the strict robust feasibility condition (18) is satisfied.

Finally, it should be noted that there are other approaches to defining robust solutions of uncertain multi-objective optimization problems when the data uncertainty set $\mathcal{U} \subseteq \mathbb{R}^{m \times n}$ of the objective matrix is a columnwise objective data uncertainty, that is, $\mathcal{U} = \prod_{i=1}^m \mathcal{U}_i$ where $\mathcal{U}_i \subseteq \mathbb{R}^n$. In this case, one can define a robust solution of the uncertain multi-objective optimization problem as a solution of the following deterministic multi-objective optimization problem:
\[
\text{V-min}\Big\{\Big(\max_{c_1 \in \mathcal{U}_1} c_1^\top x,\ \ldots,\ \max_{c_m \in \mathcal{U}_m} c_m^\top x\Big) : a_j^\top x \le b_j\ \ \forall (a_j,b_j) \in \mathcal{V}_j,\ j = 1,\ldots,p\Big\}.
\]
This approach has recently been examined in [8] for uncertain multi-objective optimization with semi-infinite constraints under columnwise objective data uncertainty.

Acknowledgments

M.A. Goberna would like to thank the coauthors of this paper for their hospitality during his stay in June/July 2013 at the School of Mathematics and Statistics of the University of New South Wales in Sydney.

References

[1] A. Ben-Tal, L. El Ghaoui, A. Nemirovski, Robust Optimization, Princeton University Press, Princeton, 2009.

[2] J. Branke, K. Deb, K. Miettinen, R. Slowinski (eds.), Multiobjective Optimization: Interactive and Evolutionary Approaches, Lect. Notes in Computer Sci. 5252, Springer, 2008.

[3] M.J. Cánovas, M.A. López, J. Parra, F.J. Toledo, Distance to ill-posedness and the consistency value of linear semi-infinite inequality systems, Math. Programming 103A (2005) 95-126.

[4] M.J. Cánovas, M.A. López, J. Parra, F.J. Toledo, Distance to ill-posedness for linear inequality systems under block perturbations: convex and infinite-dimensional cases, Optimization 60 (2011) 925-946.

[5] M. Ehrgott, Multicriteria Optimization, 2nd ed., Springer, Berlin, 2005.

[6] L. El Ghaoui, H. Lebret, Robust solutions to least-squares problems with uncertain data, SIAM J. Matrix Anal. Appl. 18 (1997) 1035-1064.

[7] M.A. Goberna, F. Guerra-Vázquez, M.I. Todorov, Constraint qualifications in linear vector semi-infinite optimization, European J. Oper. Res. 227 (2013) 12-21.

[8] M.A. Goberna, V. Jeyakumar, G. Li, J. Vicente-Pérez, Robust solutions of uncertain multi-objective linear semi-infinite programming, Preprint, Department of Applied Mathematics, University of New South Wales, Sydney.

[9] M.A. Goberna, M.A. López, Linear Semi-Infinite Optimization, Wiley, Chichester, 1998.

[10] B.L. Gorissen, D. den Hertog, Approximating the Pareto set of multiobjective linear programs via robust optimization, Oper. Res. Letters 40 (2012) 319-324.

[11] Gr.G. Pando, D.T. Luc, P. Pardalos, Robust aspects of solutions in deterministic multiple objective linear programming, European J. Oper. Res. 229 (2013) 29-36.

[12] S. Sitarz, Postoptimal analysis in multicriteria linear programming, European J. Oper. Res. 191 (2008) 7-18.