
Robust Conic Optimization

Dimitris Bertsimas*    Melvyn Sim†

March 2004

Abstract

In earlier proposals, the robust counterpart of conic optimization problems exhibits a lateral increase in complexity, i.e., robust linear programming problems (LPs) become second order cone problems (SOCPs), robust SOCPs become semidefinite programming problems (SDPs), and robust SDPs become NP-hard. We propose a relaxed robust counterpart for general conic optimization problems that (a) preserves the computational tractability of the nominal problem; specifically, the robust conic optimization problem retains its original structure, i.e., robust LPs remain LPs, robust SOCPs remain SOCPs and robust SDPs remain SDPs, and moreover, when the data entries are independently distributed, the size of the proposed robust problem, especially under the l₂ norm, is practically the same as that of the nominal problem; and (b) allows us to provide a guarantee on the probability that the robust solution is feasible, when the uncertain coefficients obey independent and identically distributed normal distributions.

* Boeing Professor of Operations Research, Sloan School of Management and Operations Research Center, Massachusetts Institute of Technology, E53-363, Cambridge, MA 02139, dbertsim@mit.edu. The research of the author was partially supported by the Singapore-MIT alliance.
† NUS Business School, National University of Singapore, dscsimm@nus.edu.sg.

1 Introduction

The general optimization problem under parameter uncertainty is as follows:

max f₀(x, D̃₀)
s.t. f_i(x, D̃_i) ≥ 0, i ∈ I,    (1)
x ∈ X,

where f_i(x, D̃_i), i ∈ {0} ∪ I, are given functions, X is a given set and D̃_i, i ∈ {0} ∪ I, is the vector of uncertain coefficients. We define the nominal problem to be Problem (1) when the uncertain coefficients D̃_i take values equal to their expected values D⁰_i.

In order to address parameter uncertainty in Problem (1), Ben-Tal and Nemirovski [1, 3] and, independently, El Ghaoui et al. [11, 12] propose to solve the following robust optimization problem:

max min_{D₀ ∈ U₀} f₀(x, D₀)
s.t. min_{D_i ∈ U_i} f_i(x, D_i) ≥ 0, i ∈ I,    (2)
x ∈ X,

where U_i, i ∈ {0} ∪ I, are given uncertainty sets. The motivation for solving Problem (2) is to find a solution x ∈ X that "immunizes" Problem (1) against parameter uncertainty. By selecting appropriate uncertainty sets U_i, we can address the tradeoff between robustness and optimality. In designing such an approach, two criteria are important in our view:

(a) Preserving the computational tractability, both theoretically and, most importantly, practically, of the nominal problem. From a theoretical perspective it is desirable that if the nominal problem is solvable in polynomial time, then the robust problem is also polynomially solvable. More specifically, it is desirable that robust conic optimization problems retain their original structure, i.e., robust linear programming problems (LPs) remain LPs, robust second order cone problems (SOCPs) remain SOCPs and robust semidefinite programming problems (SDPs) remain SDPs.

(b) Being able to find a guarantee on the probability that the robust solution is feasible, when the uncertain coefficients obey some natural probability distributions. This is important, since from these guarantees we can select the parameters that affect the uncertainty sets U_i, which allows us to control the tradeoff between robustness and optimality.

Let us examine whether the state of the art in robust optimization has the two properties mentioned above:

1. Linear Programming: An uncertain LP constraint is of the form ã′x ≥ b̃, in which ã and b̃ are subject to uncertainty. When the corresponding uncertainty set U is a polyhedron, the robust counterpart is also an LP (see Ben-Tal and Nemirovski [3, 4] and Bertsimas and Sim [8, 9]). When U is ellipsoidal, the robust counterpart becomes an SOCP. For linear programming, probabilistic guarantees for feasibility are available ([3, 4] and [8, 9]) under reasonable probabilistic assumptions on data variation.

2. Quadratic Constrained Quadratic Programming (QCQP): An uncertain QCQP constraint is of the form ‖Ãx‖₂² + b̃′x + c̃ ≤ d, where Ã, b̃ and c̃ are subject to data uncertainty. The robust counterpart is an SDP if the uncertainty set is a simple ellipsoid, and NP-hard if the set is polyhedral (Ben-Tal and Nemirovski [1, 3]). To the best of our knowledge, there are no available probabilistic bounds.

3. Second Order Cone Programming (SOCP): An uncertain SOCP constraint is of the form ‖Ãx + b̃‖₂ ≤ c̃′x + d̃, where Ã, b̃, c̃ and d̃ are subject to data uncertainty. The robust counterpart is an SDP if Ã, b̃ belong to an ellipsoidal uncertainty set U₁ and c̃, d̃ belong to another ellipsoidal set U₂. The problem is NP-hard, however, if Ã, b̃, c̃, d̃ vary together in a common ellipsoidal set. To the best of our knowledge, there are no available probabilistic bounds.

4. Semidefinite Programming (SDP): An uncertain SDP constraint is of the form Σ_{j=1}^n Ã_j x_j ⪯ B̃, where Ã₁, ..., Ã_n and B̃ are subject to data uncertainty. The robust counterpart is NP-hard for ellipsoidal uncertainty sets, while there are no available probabilistic bounds.

5. Conic Programming: An uncertain conic programming constraint is of the form Σ_{j=1}^n Ã_j x_j ⪯_K B̃, where Ã₁, ..., Ã_n and B̃ are subject to data uncertainty. The cone K is closed, pointed and with a nonempty interior. To the best of our knowledge, there are no results available regarding tractability and probabilistic guarantees in this case.

Our goal in this paper is to address (a) and (b) above for robust conic optimization problems. Specifically, we propose a new robust counterpart of Problem (1) that has two properties: (a) it inherits the character of the nominal problem; for example, robust SOCPs remain SOCPs and robust SDPs remain SDPs; (b) under reasonable probabilistic assumptions on data variation, we establish probabilistic guarantees for feasibility that lead to explicit ways for selecting parameters that control robustness.

The structure of the paper is as follows. In Section 2, we describe the proposed robust model and in Section 3, we show that the robust model inherits the character of the nominal problem for LPs,

QCQPs, SOCPs and SDPs. In Section 4, we prove probabilistic guarantees for feasibility for these classes of problems. In Section 5, we show tractability and give explicit probabilistic bounds for general conic problems. Section 6 concludes the paper.

2 The Robust Model

In this section, we outline the ingredients of the proposed framework for robust conic optimization.

2.1 Model for parameter uncertainty

The model of data uncertainty we consider is

D̃ = D⁰ + Σ_{j∈N} ΔD^j z̃_j,    (3)

where D⁰ is the nominal value of the data, ΔD^j, j ∈ N, is a direction of data perturbation, and the z̃_j, j ∈ N, are independent and identically distributed random variables with mean equal to zero, so that E[D̃] = D⁰. The cardinality of N may be small, modeling situations involving a small collection of primitive independent uncertainties (for example, a factor model in a finance context), or large, potentially as large as the number of entries in the data. In the former case, the elements of D̃ are strongly dependent, while in the latter case the elements of D̃ are weakly dependent or even independent (when |N| is equal to the number of entries in the data). The support of z̃_j, j ∈ N, can be unbounded or bounded. Ben-Tal and Nemirovski [4] and Bertsimas and Sim [8] have considered the case that |N| is equal to the number of entries in the data.

2.2 Uncertainty sets and related norms

In the robust optimization framework of (2), we consider the uncertainty set U as follows:

U = { D | ∃ u ∈ ℝ^{|N|} : D = D⁰ + Σ_{j∈N} ΔD^j u_j, ‖u‖ ≤ Ω },    (4)

where Ω is a parameter controlling the tradeoff between robustness and optimality (robustness increases as Ω increases). We restrict the vector norm ‖·‖ we consider by imposing the condition

‖u‖ = ‖u⁺‖,    (5)

where u⁺_j = |u_j| for all j ∈ N.
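As a purely illustrative sketch of the uncertainty model (3) and the set U in (4) (Python with numpy is assumed; the nominal data, perturbation directions and value of Ω below are our own toy choices, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    # Nominal data D0 and perturbation directions dD[j], j in N (Eq. (3)).
    D0 = np.array([[1.0, 2.0], [0.5, 3.0]])
    dD = [0.1 * rng.standard_normal((2, 2)) for _ in range(4)]   # |N| = 4

    # One realization of D~ = D0 + sum_j dD[j] * z_j.
    z = rng.standard_normal(len(dD))
    D_tilde = D0 + sum(dDj * zj for dDj, zj in zip(dD, z))

    # With the l2 norm, this realization lies in U of Eq. (4) iff ||z||_2 <= Omega.
    Omega = 2.0
    print(np.linalg.norm(z) <= Omega)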

The following norms, commonly used in robust optimization, satisfy Eq. (5):

• The polynomial norms l_k, k = 1, 2, ..., ∞ (see [1, 4, 14]).

• The l₂∩l∞ norm: max{(1/Ω̄)‖u‖₂, ‖u‖∞}, Ω̄ > 0 (see [4]). This norm is used in modeling bounded and symmetrically distributed random data.

• The l₁∩l∞ norm: max{(1/Γ)‖u‖₁, ‖u‖∞}, Γ > 0 (see [8, 7]). Note that this norm is equal to l∞ if Γ = |N|, and to l₁ if Γ = 1. This norm is used in modeling bounded and symmetrically distributed random data, and has the additional property that the robust counterpart of an LP is still an LP (Bertsimas et al. [7]).

Given a norm ‖·‖, we consider the dual norm ‖·‖* defined as

‖s‖* = max_{‖x‖≤1} s′x.

We next show some basic properties of norms satisfying Eq. (5), which we will subsequently use in our development.

Proposition 1 If the norm ‖·‖ satisfies Eq. (5), then we have:
(a) ‖w‖* = ‖w⁺‖*.
(b) For all v, w such that v⁺ ≤ w⁺, ‖v‖* ≤ ‖w‖*.
(c) For all v, w such that v⁺ ≤ w⁺, ‖v‖ ≤ ‖w‖.

Proof
(a) Let y ∈ arg max_{‖x‖≤1} w′x, and for every j ∈ N let z_j = |y_j| if w_j ≥ 0 and z_j = −|y_j| otherwise. Clearly, w′z = (w⁺)′y⁺ ≥ w′y. Since ‖z‖ = ‖z⁺‖ = ‖y⁺‖ = ‖y‖ ≤ 1, and from the optimality of y, we have w′z ≤ w′y, leading to w′z = (w⁺)′y⁺ = w′y. Since ‖w‖ = ‖w⁺‖, we obtain

‖w‖* = max_{‖x‖≤1} w′x = max_{‖x‖≤1} (w⁺)′x⁺ = max_{‖x‖≤1} (w⁺)′x = ‖w⁺‖*.

(b) Note that ‖w‖* = max_{‖x‖≤1} (w⁺)′x⁺ = max_{‖x‖≤1, x≥0} (w⁺)′x. If v⁺ ≤ w⁺, then

‖v‖* = max_{‖x‖≤1, x≥0} (v⁺)′x ≤ max_{‖x‖≤1, x≥0} (w⁺)′x = ‖w‖*.

(c) We apply part (b) to the norm ‖·‖*. From the self-dual property of norms, ‖·‖** = ‖·‖, we obtain part (c).
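Condition (5) and the monotonicity in Proposition 1 are easy to check numerically; the following sketch does so for the l₁∩l∞ norm (numpy assumed; Γ, the vectors and the crude sampling estimate of the dual norm are all illustrative choices of ours):

    import numpy as np

    def l1_linf(u, gamma):
        # The l1∩l∞ norm max{(1/Gamma)||u||_1, ||u||_inf}.
        return max(np.abs(u).sum() / gamma, np.abs(u).max())

    def dual_norm(s, gamma, trials=20000, seed=1):
        # Crude estimate of ||s||* = max_{||x|| <= 1} s'x by sampling
        # random points scaled onto the unit sphere of the norm.
        rng = np.random.default_rng(seed)
        xs = rng.uniform(-1.0, 1.0, (trials, s.size))
        xs /= np.array([l1_linf(x, gamma) for x in xs])[:, None]
        return (xs @ s).max()

    gamma = 2.0
    u = np.array([1.0, -3.0, 0.5])
    print(np.isclose(l1_linf(u, gamma), l1_linf(np.abs(u), gamma)))  # Eq. (5)

    v = np.array([0.5, -1.0, 0.2])   # v+ <= w+ componentwise
    w = np.array([1.0, 2.0, 0.3])
    print(dual_norm(v, gamma) <= dual_norm(w, gamma) + 1e-6)         # Prop. 1(b)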

2.3 The class of functions f(x, D)

We impose the following restrictions on the class of functions f(x, D) in Problem (2) (we drop the index i for clarity):

Assumption 1 The function f(x, D) satisfies:
(a) f(x, D) is concave in D for all x ∈ ℝⁿ.
(b) f(x, kD) = k f(x, D) for all k ≥ 0, all D, and all x ∈ ℝⁿ.

Note that for functions f(·,·) satisfying Assumption 1 we have

f(x, A + B) = 2 f(x, (A + B)/2) ≥ 2 (½ f(x, A) + ½ f(x, B)) = f(x, A) + f(x, B).    (6)

The restrictions implied by Assumption 1 still allow us to model LPs, QCQPs, SOCPs and SDPs. Table 1 shows the function f(x, D) for these problems. Note that SOCP(1) models situations in which only A and b vary, while SOCP(2) models situations in which A, b, c and d vary. Note that for QCQP the function d − ‖Ax‖₂² − b′x − c does not satisfy the second assumption. However, by extending the dimension of the problem, it is well known that the QCQP constraint is SOCP-constraint representable (see [5]). Finally, the SDP constraint Σ_{i=1}^n A_i x_i ⪰ B is equivalent to

λ_min( Σ_{i=1}^n A_i x_i − B ) ≥ 0,

where λ_min(M) is the function that returns the smallest eigenvalue of the symmetric matrix M.

3 The proposed robust framework and its tractability

The robust framework (2) leads to a significant increase in complexity for conic optimization problems. For this reason, we propose a more restricted robust problem which, as we show in this section, retains the complexity of the nominal problem. Specifically, under the model of data uncertainty in Eq. (3), we propose the following constraint for addressing the data uncertainty in the constraint f(x, D̃) ≥ 0:

f(x, D⁰) + min_{(v,w)∈V} Σ_{j∈N} { f(x, ΔD^j) v_j + f(x, −ΔD^j) w_j } ≥ 0,    (7)

Type      Constraint                       D                  f(x, D)
LP        a′x ≥ b                          (a, b)             a′x − b
QCQP      ‖Ax‖₂² + b′x + c ≤ d             (A, b, c, d)       (d − (b′x + c))/2 − √( ‖Ax‖₂² + ((d + b′x + c)/2)² ),
                                                              with d⁰ = d, Δd^j = 0 ∀ j ∈ N
SOCP(1)   ‖Ax + b‖₂ ≤ c′x + d              (A, b, c, d)       c′x + d − ‖Ax + b‖₂, with Δc^j = 0, Δd^j = 0 ∀ j ∈ N
SOCP(2)   ‖Ax + b‖₂ ≤ c′x + d              (A, b, c, d)       c′x + d − ‖Ax + b‖₂
SDP       Σ_{i=1}^n A_i x_i − B ∈ S^m_+    (A₁, ..., A_n, B)  λ_min( Σ_{i=1}^n A_i x_i − B )

Table 1: The function f(x, D) for different conic optimization problems.

where

V = { (v, w) ∈ ℝ^{|N|}_+ × ℝ^{|N|}_+ : ‖v + w‖ ≤ Ω },    (8)

and the norm ‖·‖ satisfies Eq. (5). We next show that under Assumption 1, Eq. (7) implies the classical definition of robustness,

f(x, D) ≥ 0 ∀ D ∈ U,    (9)

where U is defined in Eq. (4). Moreover, if the function f(x, D) is linear in D, then Eq. (7) is equivalent to Eq. (9).

Proposition 2 Suppose the given norm ‖·‖ satisfies Eq. (5).
(a) If f(x, A + B) = f(x, A) + f(x, B), then x satisfies (7) if and only if x satisfies (9).
(b) Under Assumption 1, if x is feasible in Problem (7), then x is feasible in Problem (9).

Proof
(a) Under the linearity assumption, Eq. (7) is equivalent to

f(x, D⁰ + Σ_{j∈N} ΔD^j (v_j − w_j)) ≥ 0 ∀ ‖v + w‖ ≤ Ω, v, w ≥ 0,    (10)

while Eq. (9) can be written as

f(x, D⁰ + Σ_{j∈N} ΔD^j r_j) ≥ 0 ∀ ‖r‖ ≤ Ω.    (11)

Suppose x is infeasible in (11); that is, there exists r with ‖r‖ ≤ Ω such that

f(x, D⁰ + Σ_{j∈N} ΔD^j r_j) < 0.

For all j ∈ N, let v_j = max{r_j, 0} and w_j = −min{r_j, 0}. Clearly, r = v − w, and since v_j + w_j = |r_j|, we have from Eq. (5) that ‖v + w‖ = ‖r‖ ≤ Ω. Hence, x is infeasible in (10) as well. Conversely, suppose x is infeasible in (10); then there exist v, w ≥ 0 with ‖v + w‖ ≤ Ω such that

f(x, D⁰ + Σ_{j∈N} ΔD^j (v_j − w_j)) < 0.

For all j ∈ N, we let r_j = v_j − w_j and observe that |r_j| ≤ v_j + w_j. Therefore, for norms satisfying Eq. (5) we have ‖r‖ = ‖r⁺‖ ≤ ‖v + w‖ ≤ Ω, and hence x is infeasible in (11).

(b) Suppose x is feasible in Problem (7), i.e.,

f(x, D⁰) + Σ_{j∈N} { f(x, ΔD^j) v_j + f(x, −ΔD^j) w_j } ≥ 0 ∀ ‖v + w‖ ≤ Ω, v, w ≥ 0.

From Eq. (6) and Assumption 1(b),

f(x, D⁰) + Σ_{j∈N} { f(x, ΔD^j) v_j + f(x, −ΔD^j) w_j } ≤ f(x, D⁰ + Σ_{j∈N} ΔD^j (v_j − w_j))

for all ‖v + w‖ ≤ Ω, v, w ≥ 0. In the proof of part (a) we established that

f(x, D⁰ + Σ_{j∈N} ΔD^j r_j) ≥ 0 ∀ ‖r‖ ≤ Ω

is equivalent to

f(x, D⁰ + Σ_{j∈N} ΔD^j (v_j − w_j)) ≥ 0 ∀ ‖v + w‖ ≤ Ω, v, w ≥ 0,

and thus x satisfies (9).

Note that there are other proposals that relax the classical definition of robustness (9) (see, for instance, [6]) and lead to tractable solutions. However, our particular proposal in Eq. (7) combines tractability with the ability to derive probabilistic guarantees that the solution of Eq. (7) will remain feasible under reasonable assumptions on data variation.

3.1 Tractability of the proposed framework

Unlike the classical definition of robustness (9), which cannot, in general, be represented in a tractable manner, we next show that Eq. (7) can be represented in a tractable manner.

Theorem 1 For a norm satisfying Eq. (5) and a function f(x, D) satisfying Assumption 1:
(a) Constraint (7) is equivalent to

f(x, D⁰) ≥ Ω ‖s‖*,    (12)

where s_j = max{ −f(x, ΔD^j), −f(x, −ΔD^j) } for all j ∈ N.
(b) Eq. (12) can be written as

f(x, D⁰) ≥ Ω y,
f(x, ΔD^j) + t_j ≥ 0 ∀ j ∈ N,
f(x, −ΔD^j) + t_j ≥ 0 ∀ j ∈ N,    (13)
‖t‖* ≤ y,
y ∈ ℝ, t ∈ ℝ^{|N|}.

Proof
(a) We introduce the following pair of problems:

z₁ = max a′v + b′w
     s.t. ‖v + w‖ ≤ Ω, v, w ≥ 0,    (14)

and

z₂ = max Σ_{j∈N} max{a_j, b_j} r_j
     s.t. ‖r‖ ≤ Ω, r ≥ 0,    (15)

and show that z₁ = z₂. Suppose r* is an optimal solution to (15). For all j ∈ N, let

v_j = w_j = 0 if max{a_j, b_j} ≤ 0;
v_j = r*_j, w_j = 0 if a_j ≥ b_j and a_j > 0;
w_j = r*_j, v_j = 0 if b_j > a_j and b_j > 0.

Observe that a_j v_j + b_j w_j ≥ max{a_j, b_j} r*_j and v_j + w_j ≤ r*_j for all j ∈ N. From Proposition 1(c) we have ‖v + w‖ ≤ ‖r*‖ ≤ Ω, and thus v, w are feasible in Problem (14), leading to

z₁ ≥ Σ_{j∈N} (a_j v_j + b_j w_j) ≥ Σ_{j∈N} max{a_j, b_j} r*_j = z₂.

Conversely, let v*, w* be an optimal solution to Problem (14), and let r = v* + w*. Clearly ‖r‖ ≤ Ω, and observe that

max{a_j, b_j} r_j ≥ a_j v*_j + b_j w*_j ∀ j ∈ N.

Therefore, we have

z₂ ≥ Σ_{j∈N} max{a_j, b_j} r_j ≥ Σ_{j∈N} (a_j v*_j + b_j w*_j) = z₁,

leading to z₁ = z₂. We next observe that

min_{(v,w)∈V} Σ_{j∈N} { f(x, ΔD^j) v_j + f(x, −ΔD^j) w_j }
= −max_{(v,w)∈V} Σ_{j∈N} { −f(x, ΔD^j) v_j − f(x, −ΔD^j) w_j }
= −max_{‖r‖≤Ω, r≥0} Σ_{j∈N} max{ −f(x, ΔD^j), −f(x, −ΔD^j) } r_j,

and using the definition of the dual norm, ‖s‖* = max_{‖x‖≤1} s′x (since s ≥ 0, the maximization over ‖r‖ ≤ Ω may be restricted to r ≥ 0 without loss of generality, by Eq. (5) and Proposition 1), we obtain that constraint (7) is equivalent to f(x, D⁰) ≥ Ω ‖s‖*, i.e., Eq. (12) follows. Note that indeed s_j = max{ −f(x, ΔD^j), −f(x, −ΔD^j) } ≥ 0, since otherwise there would exist an x such that s_j < 0, i.e., f(x, ΔD^j) > 0 and f(x, −ΔD^j) > 0; from Assumption 1(b), f(x, 0) = 0, contradicting the concavity of f(x, D) (Assumption 1(a)).

Suppose that x is feasible in Problem (12). Defining t = s and y = ‖s‖*, we can easily check that (x, t, y) is feasible in Problem (13). Conversely, suppose x is infeasible in (12), that is,

f(x, D⁰) < Ω ‖s‖*.

Since t_j ≥ s_j = max{ −f(x, ΔD^j), −f(x, −ΔD^j) } ≥ 0, we apply Proposition 1(b) to obtain ‖t‖* ≥ ‖s‖*. Thus,

f(x, D⁰) < Ω ‖s‖* ≤ Ω ‖t‖* ≤ Ω y,

i.e., x is infeasible in (13).
(b) It is immediate that Eq. (12) can be written in the form of Eq. (13).
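To make formulation (13) concrete, the following sketch writes the robust counterpart of a single uncertain LP constraint ã′x ≥ b̃ under the l₂ norm, where, by (12), the whole block collapses to one second-order cone constraint (the cvxpy package is assumed; all data below are illustrative choices of ours):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    n, N = 5, 8                                 # variables, primitive uncertainties
    a0, b0 = rng.random(n) + 0.5, 1.0           # nominal constraint a0'x >= b0
    dA = 0.05 * rng.standard_normal((N, n))     # row j holds da^j
    db = 0.05 * rng.standard_normal(N)          # db_j
    Omega = 2.0

    x = cp.Variable(n)
    # Eq. (12) with the l2 norm: f(x, D0) >= Omega * ||s||_2, where
    # s_j = |da^j' x - db_j|; hence a single SOC constraint.
    robust = [a0 @ x - b0 >= Omega * cp.norm(dA @ x - db, 2)]
    prob = cp.Problem(cp.Minimize(a0 @ x), robust)
    prob.solve()
    print(prob.status, x.value)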

In Table 2, we list the common choices of norms, the representation of their dual norms, and the corresponding references.

Norm          ‖u‖                              Representation of ‖t‖* ≤ y                                              References
l₂            ‖u‖₂                             ‖t‖₂ ≤ y                                                                 [4]
l₁            ‖u‖₁                             t_j ≤ y ∀ j ∈ N                                                          [7]
l∞            ‖u‖∞                             Σ_{j∈N} t_j ≤ y                                                          [7]
l_p, p ≥ 1    ‖u‖_p                            ‖t‖_q ≤ y, q = p/(p − 1)                                                 [7]
l₂∩l∞         max{(1/Ω̄)‖u‖₂, ‖u‖∞}             Ω̄‖t − s‖₂ + Σ_{j∈N} s_j ≤ y, s ∈ ℝ^{|N|}_+                              [4]
l₁∩l∞         max{(1/Γ)‖u‖₁, ‖u‖∞}             Γp + Σ_{j∈N} s_j ≤ y; s_j + p ≥ t_j ∀ j ∈ N; p ∈ ℝ_+, s ∈ ℝ^{|N|}_+     [7]

Table 2: Representation of the dual norm for t ≥ 0.

3.2 Representation of the function max{−f(x, D), −f(x, −D)}

The function g(x, ΔD^j) = max{ −f(x, ΔD^j), −f(x, −ΔD^j) } arises naturally in Theorem 1. Recall that a norm satisfies ‖A‖ ≥ 0, ‖kA‖ = |k| ‖A‖, ‖A + B‖ ≤ ‖A‖ + ‖B‖, and ‖A‖ = 0 implies A = 0. We show next that the function g(x, A) satisfies all these properties except the last one, i.e., it behaves almost like a norm.

Proposition 3 Under Assumption 1, the function g(x, A) = max{ −f(x, A), −f(x, −A) } satisfies the following properties:
(a) g(x, A) ≥ 0;
(b) g(x, kA) = |k| g(x, A);
(c) g(x, A + B) ≤ g(x, A) + g(x, B).

Proof
(a) Suppose there exists x such that g(x, A) < 0, i.e., f(x, A) > 0 and f(x, −A) > 0. From Assumption 1(b), f(x, 0) = 0, contradicting the concavity of f(x, D) (Assumption 1(a)), since 0 = f(x, 0) = f(x, (A + (−A))/2) ≥ (f(x, A) + f(x, −A))/2 > 0.
(b) For k ≥ 0, we apply Assumption 1(b) and obtain

g(x, kA) = max{ −f(x, kA), −f(x, −kA) } = k max{ −f(x, A), −f(x, −A) } = k g(x, A).

Similarly, if k < 0, we have

g(x, kA) = max{ −f(x, (−k)(−A)), −f(x, (−k)A) } = −k g(x, A) = |k| g(x, A).

(c) Using Eq. (6) and part (b), we obtain

g(x, A + B) = 2 g(x, (A + B)/2) ≤ 2 ( ½ g(x, A) + ½ g(x, B) ) = g(x, A) + g(x, B).

Note that the function g(x, A) does not necessarily define a norm for A, since g(x, A) = 0 does not necessarily imply A = 0. However, for LP, QCQP, SOCP(1), SOCP(2) and SDP, and a specific direction of data perturbation ΔD^j, we can map g(x, ΔD^j) to a function of a norm, such that

g(x, ΔD^j) = ‖H(x, ΔD^j)‖_g,

where H(x, ΔD^j) is linear in ΔD^j and defined as follows (see also the summary in Table 3):

(a) LP: f(x, D) = a′x − b, where D = (a, b) and ΔD^j = (Δa^j, Δb^j). Hence,

g(x, ΔD^j) = max{ −(Δa^j)′x + Δb^j, (Δa^j)′x − Δb^j } = |(Δa^j)′x − Δb^j|.

(b) QCQP: f(x, D) = (d − (b′x + c))/2 − √( ‖Ax‖₂² + ((d + b′x + c)/2)² ), where D = (A, b, c, d) and ΔD^j = (ΔA^j, Δb^j, Δc^j, 0). Therefore,

g(x, ΔD^j) = max{ ((Δb^j)′x + Δc^j)/2 + √( ‖ΔA^j x‖₂² + (((Δb^j)′x + Δc^j)/2)² ),
               −((Δb^j)′x + Δc^j)/2 + √( ‖ΔA^j x‖₂² + (((Δb^j)′x + Δc^j)/2)² ) }
           = √( ‖ΔA^j x‖₂² + (((Δb^j)′x + Δc^j)/2)² ) + |((Δb^j)′x + Δc^j)/2|.

(c) SOCP(1): f(x, D) = c′x + d − ‖Ax + b‖₂, where D = (A, b, c, d) and ΔD^j = (ΔA^j, Δb^j, 0, 0). Therefore,

g(x, ΔD^j) = ‖ΔA^j x + Δb^j‖₂.

(d) SOCP(2): f(x, D) = c′x + d − ‖Ax + b‖₂, where D = (A, b, c, d) and ΔD^j = (ΔA^j, Δb^j, Δc^j, Δd^j). Therefore,

g(x, ΔD^j) = max{ −(Δc^j)′x − Δd^j + ‖ΔA^j x + Δb^j‖₂, (Δc^j)′x + Δd^j + ‖ΔA^j x + Δb^j‖₂ }
           = |(Δc^j)′x + Δd^j| + ‖ΔA^j x + Δb^j‖₂.

(e) SDP: f(x, D) = λ_min( Σ_{i=1}^n A_i x_i − B ), where D = (A₁, ..., A_n, B) and ΔD^j = (ΔA₁^j, ..., ΔA_n^j, ΔB^j). Therefore,

g(x, ΔD^j) = max{ −λ_min( Σ_{i=1}^n ΔA_i^j x_i − ΔB^j ), −λ_min( −(Σ_{i=1}^n ΔA_i^j x_i − ΔB^j) ) }
           = max{ λ_max( −(Σ_{i=1}^n ΔA_i^j x_i − ΔB^j) ), λ_max( Σ_{i=1}^n ΔA_i^j x_i − ΔB^j ) }
           = ‖ Σ_{i=1}^n ΔA_i^j x_i − ΔB^j ‖₂.
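The SDP case above is easy to verify numerically: for any symmetric M, max{−λ_min(M), −λ_min(−M)} equals the spectral norm ‖M‖₂ (a small numpy sketch with illustrative data):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    M = (A + A.T) / 2                    # stands in for sum_i dA_i^j x_i - dB^j

    lam_min = lambda S: np.linalg.eigvalsh(S)[0]
    g = max(-lam_min(M), -lam_min(-M))
    print(np.isclose(g, np.linalg.norm(M, 2)))   # spectral norm of symmetric M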

Type      r = H(x, ΔD^j)                                                               g(x, ΔD^j) = ‖r‖_g
LP        r = (Δa^j)′x − Δb^j                                                          |r|
QCQP      r = (r₁, r₂), r₁ = (ΔA^j x, ((Δb^j)′x + Δc^j)/2), r₂ = ((Δb^j)′x + Δc^j)/2   ‖r₁‖₂ + |r₂|
SOCP(1)   r = ΔA^j x + Δb^j                                                            ‖r‖₂
SOCP(2)   r = (r₁, r₂), r₁ = ΔA^j x + Δb^j, r₂ = (Δc^j)′x + Δd^j                       ‖r₁‖₂ + |r₂|
SDP       R = Σ_{i=1}^n ΔA_i^j x_i − ΔB^j                                              ‖R‖₂ (spectral norm)

Table 3: The function H(x, ΔD^j) and the norm ‖·‖_g for different conic optimization problems.

3.3 The nature and size of the robust problem

In this section, we discuss the nature and size of the proposed robust conic problem. Note that in the proposed robust model (13), for every uncertain conic constraint f(x, D̃) ≥ 0 we add at most |N| + 1 new variables, 2|N| conic constraints of the same nature as the nominal problem, and an additional constraint involving the dual norm. The nature of this last constraint depends on the norm we use to describe the uncertainty set U defined in Eq. (4).

When all the data entries of the problem have independent random perturbations, by exploiting the sparsity of the additional conic constraints, we can further reduce the size of the robust model. Essentially, we can express the model of uncertainty in the form of Eq. (3), in which z̃_j is the independent random variable associated with the jth data element, and ΔD^j contains mostly zeros except at the entries corresponding to that data element. As an illustration, consider the following semidefinite constraint,

[a₁ a₂; a₂ a₃] x₁ + [a₄ a₅; a₅ a₆] x₂ ⪰ [a₇ a₈; a₈ a₉],

such that each element of the data d = (a₁, ..., a₉) has an independent random perturbation, that is, ã_i = a_i + Δa_i z̃_i with the z̃_i independently distributed. Equivalently, in Eq. (3) we have

d̃ = d⁰ + Σ_{i=1}^9 Δd^i z̃_i,

where d⁰ = (a₁, ..., a₉) and Δd^i is a vector with Δa_i at the ith entry and zeros otherwise. Hence, we can simplify the conic constraints in Eq. (13). For instance, for the first data element, the constraint f(x, Δd¹) + t₁ ≥ 0, or

λ_min( [Δa₁ 0; 0 0] x₁ + [0 0; 0 0] x₂ − [0 0; 0 0] ) + t₁ ≥ 0,

becomes t₁ ≥ −min{Δa₁ x₁, 0}, or equivalently the linear constraints t₁ ≥ −Δa₁ x₁ and t₁ ≥ 0; a sketch checking this collapse follows below.

In Appendix A we derive, and in Table 4 we summarize, the number of variables and constraints and their nature when the nominal problem is an LP, QCQP, SOCP(1) (only A, b vary), SOCP(2) (A, b, c, d vary) and SDP, for various choices of norms.

                       l∞-norm   l₁-norm   l₁∩l∞-norm   l₂-norm   l₂∩l∞-norm
Num. vars.             n+1       1         2|N|+2       1         2|N|+1
Num. linear const.     2n+1      2n+1      4|N|+2       0         3|N|
Num. SOC const.        0         0         0            1         1
LP                     LP        LP        LP           SOCP      SOCP
QCQP                   SOCP      SOCP      SOCP         SOCP      SOCP
SOCP(1)                SOCP      SOCP      SOCP         SOCP      SOCP
SOCP(2)                SOCP      SOCP      SOCP         SOCP      SOCP
SDP                    SDP       SDP       SDP          SDP       SDP

Table 4: Size increase and nature of the robust formulation when each data entry has independent uncertainty.

Note that for the cases of the l₁, l∞ and l₂ norms, we are able to collate terms so that the number of variables and constraints introduced is minimal. Furthermore, using the l₂ norm results in only one additional variable and one additional SOCP type of constraint, while maintaining the nature of the original conic optimization problem for SOCP and SDP. The use of the other norms comes at the expense of additional variables and constraints of the order of |N|, which is not very appealing for large problems.
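The collapse of the conic constraints in (13) into linear ones can be checked directly for the illustration above: the matrix associated with the first data element has smallest eigenvalue min{Δa₁x₁, 0} (a numpy sketch with illustrative values):

    import numpy as np

    da1 = 0.3
    for x1 in (-2.0, 0.0, 1.5):
        M = np.array([[da1 * x1, 0.0], [0.0, 0.0]])
        # f(x, dd^1) + t1 >= 0 becomes t1 >= -min(da1 * x1, 0),
        # i.e., the linear constraints t1 >= -da1 * x1 and t1 >= 0.
        print(np.isclose(np.linalg.eigvalsh(M)[0], min(da1 * x1, 0.0)))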

4 Probabilistic Guarantees

In this section, we derive a guarantee on the probability that the robust solution is feasible when the uncertain coefficients obey some natural probability distributions. An important component of our analysis is the relation among different norms. We denote by ⟨·,·⟩ the inner product on a vector space, that is, on ℝ^m or on the space of m × m symmetric matrices, S^{m×m}. The inner product induces a norm √⟨x, x⟩. For a vector space, the natural inner product is the Euclidean inner product, ⟨x, y⟩ = x′y, and the induced norm is the Euclidean norm ‖x‖₂. For the space of symmetric matrices, the natural inner product is the trace product, ⟨X, Y⟩ = trace(XY), and the corresponding induced norm is the Frobenius norm ‖X‖_F (see [13]).

We analyze the relation of the inner product norm √⟨x, x⟩ with the norm ‖x‖_g defined in Table 3 for the conic optimization problems we consider. Since ‖x‖_g and √⟨x, x⟩ are valid norms in a finite dimensional space, there exist finite α, β > 0 such that

α ‖r‖_g ≤ √⟨r, r⟩ ≤ β ‖r‖_g    (16)

for all r in the relevant space.

Proposition 4 For the norm ‖·‖_g defined in Table 3 for the conic optimization problems we consider, Eq. (16) holds with the following parameters:
(a) LP: α = β = 1.
(b) QCQP, SOCP(2): α = 1/√2 and β = 1.
(c) SOCP(1): α = β = 1.
(d) SDP: α = 1 and β = √m.

Proof
(a) LP: For r ∈ ℝ, √⟨r, r⟩ = |r| = ‖r‖_g, leading to Eq. (16) with α = β = 1.
(b) QCQP, SOCP(2): For r = (r₁, r₂) with r₁ a vector and r₂ a scalar, let a = ‖r₁‖₂ and b = |r₂|. Since a, b ≥ 0, using the inequalities (a + b)/√2 ≤ √(a² + b²) and √(a² + b²) ≤ a + b, we have

(‖r₁‖₂ + |r₂|)/√2 ≤ √(r′r) = ‖r‖₂ ≤ ‖r₁‖₂ + |r₂|,

leading to Eq. (16) with α = 1/√2 and β = 1.
(c) SOCP(1): For all r, ‖r‖_g = ‖r‖₂ = √⟨r, r⟩, so Eq. (16) holds with α = β = 1.
(d) SDP: Let λ_j, j = 1, ..., m, be the eigenvalues of the matrix A. Since ‖A‖_F = √(trace(A²)) = √(Σ_j λ_j²) and ‖A‖₂ = max_j |λ_j|, we have

‖A‖₂ ≤ ‖A‖_F ≤ √m ‖A‖₂,

leading to Eq. (16) with α = 1 and β = √m.
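Parts (b) and (d) of Proposition 4 can be confirmed numerically as follows (a numpy sketch with random illustrative data):

    import numpy as np

    rng = np.random.default_rng(0)

    # (b): (1/sqrt(2)) (||r1||_2 + |r2|) <= ||(r1, r2)||_2 <= ||r1||_2 + |r2|.
    r1, r2 = rng.standard_normal(7), float(rng.standard_normal())
    g = np.linalg.norm(r1) + abs(r2)
    e = np.sqrt(np.linalg.norm(r1) ** 2 + r2 ** 2)
    print(g / np.sqrt(2) <= e <= g)

    # (d): ||A||_2 <= ||A||_F <= sqrt(m) ||A||_2 for a symmetric m x m matrix.
    m = 5
    A = rng.standard_normal((m, m))
    A = (A + A.T) / 2
    s, f = np.linalg.norm(A, 2), np.linalg.norm(A, 'fro')
    print(s <= f <= np.sqrt(m) * s)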

The central result of this section is as follows.

Theorem 2
(a) Under the model of uncertainty in Eq. (3), and given a feasible solution x of Eq. (7),

P( f(x, D̃) < 0 ) ≤ P( ‖Σ_{j∈N} r^j z̃_j‖_g > Ω ‖s‖* ),

where r^j = H(x, ΔD^j) and s_j = ‖r^j‖_g, j ∈ N.
(b) When we use the l₂-norm in Eq. (8), i.e., ‖s‖* = ‖s‖₂, and under the assumption that the z̃_j are normally and independently distributed with mean zero and variance one, i.e., z̃ ~ N(0, I), then

P( ‖Σ_{j∈N} r^j z̃_j‖_g > Ω √(Σ_{j∈N} ‖r^j‖_g²) ) ≤ √e γ exp(−γ²/2),    (17)

where γ = Ωα/β > 1, with α, β as derived in Proposition 4.

Proof
(a) We have

P( f(x, D̃) < 0 )
≤ P( f(x, D⁰) + f(x, Σ_{j∈N} ΔD^j z̃_j) < 0 )    (from (6))
≤ P( f(x, Σ_{j∈N} ΔD^j z̃_j) < −Ω ‖s‖* )    (from (12), s_j = ‖H(x, ΔD^j)‖_g)
≤ P( min{ f(x, Σ_{j∈N} ΔD^j z̃_j), f(x, −Σ_{j∈N} ΔD^j z̃_j) } < −Ω ‖s‖* )
= P( g(x, Σ_{j∈N} ΔD^j z̃_j) > Ω ‖s‖* )
= P( ‖H(x, Σ_{j∈N} ΔD^j z̃_j)‖_g > Ω ‖s‖* )
= P( ‖Σ_{j∈N} H(x, ΔD^j) z̃_j‖_g > Ω ‖s‖* )    (H(x, D) is linear in D)
= P( ‖Σ_{j∈N} r^j z̃_j‖_g > Ω ‖s‖* ).

(b) Using the relations ‖r‖_g ≤ √⟨r, r⟩/α and ‖r‖_g ≥ √⟨r, r⟩/β from Proposition 4, we obtain

P( ‖Σ_{j∈N} r^j z̃_j‖_g > Ω √(Σ_{j∈N} ‖r^j‖_g²) )
≤ P( √⟨Σ_j r^j z̃_j, Σ_j r^j z̃_j⟩ > γ √(Σ_j ⟨r^j, r^j⟩) )    (γ = Ωα/β)
= P( Σ_j Σ_k ⟨r^j, r^k⟩ z̃_j z̃_k > γ² Σ_j ⟨r^j, r^j⟩ )
= P( z̃′R z̃ > γ² Σ_j ⟨r^j, r^j⟩ ),

where R_{jk} = ⟨r^j, r^k⟩. Clearly, R is a symmetric positive semidefinite matrix and can be spectrally decomposed as R = Q′ΛQ, where Λ is the diagonal matrix of the eigenvalues λ_j ≥ 0 and Q is the corresponding orthonormal matrix. Let ỹ = Q z̃, so that z̃′R z̃ = ỹ′Λ ỹ = Σ_j λ_j ỹ_j². Since z̃ ~ N(0, I), we also have ỹ ~ N(0, I); that is, the ỹ_j, j ∈ N, are independent and normally distributed. Moreover,

Σ_j λ_j = trace(R) = Σ_j ⟨r^j, r^j⟩.

Therefore, with λ̄ = max_j λ_j, and for all θ > 0 such that θλ̄ < 1/2,

P( z̃′R z̃ > γ² Σ_j λ_j )
= P( Σ_j λ_j ỹ_j² > γ² Σ_j λ_j )
≤ E[ exp(θ Σ_j λ_j ỹ_j²) ] / exp(θ γ² Σ_j λ_j)    (from Markov's inequality)
= Π_j E[ exp(θ λ_j ỹ_j²) ] / exp(θ γ² Σ_j λ_j)    (the ỹ_j are independent)
≤ Π_j ( E[ exp(θ λ̄ ỹ_j²) ] )^{λ_j/λ̄} / exp(θ γ² Σ_j λ_j),

where the last inequality follows from Jensen's inequality, noting that x^{λ_j/λ̄} is a concave function of x when λ_j/λ̄ ∈ [0, 1]. Since ỹ_j ~ N(0, 1),

E[ exp(θ λ̄ ỹ_j²) ] = (1/√(2π)) ∫ exp( −y²(1 − 2θλ̄)/2 ) dy = √( 1/(1 − 2θλ̄) ).

Thus, with Λ̄ = (Σ_j λ_j)/λ̄, we obtain

Π_j ( E[ exp(θ λ̄ ỹ_j²) ] )^{λ_j/λ̄} / exp(θ γ² Σ_j λ_j) = exp( (Λ̄/2) ln( 1/(1 − 2θλ̄) ) − θ γ² λ̄ Λ̄ ).

Taking derivatives and choosing the best θ, we have θ = (1 − 1/γ²)/(2λ̄), for which θ > 0 when γ > 1. Substituting and simplifying, we have

P( z̃′R z̃ > γ² Σ_j λ_j ) ≤ ( γ² exp(1 − γ²) )^{Λ̄/2} = ( √e γ exp(−γ²/2) )^{Λ̄} ≤ √e γ exp(−γ²/2),

where the last inequality follows from Λ̄ ≥ 1 and from √e γ exp(−γ²/2) < 1 for γ > 1. This establishes (17).
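A Monte Carlo sketch of the bound (17) in the LP case (where α = β = 1, so γ = Ω, and ‖·‖_g = |·|) may be instructive; numpy is assumed and the r^j below are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    r = rng.standard_normal(10)       # r^j = H(x, dD^j); scalars in the LP case
    Omega, trials = 2.0, 200000
    z = rng.standard_normal((trials, r.size))
    freq = np.mean(np.abs(z @ r) > Omega * np.linalg.norm(r))
    bound = np.sqrt(np.e) * Omega * np.exp(-Omega ** 2 / 2)
    print(freq, bound, freq <= bound)  # e.g., ~0.046 versus ~0.45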

Note that f(x, D̃) < 0 implies that ‖z̃‖₂ > Ω. Thus, when z̃ ~ N(0, I),

P( f(x, D̃) < 0 ) ≤ P( ‖z̃‖₂ > Ω ) = 1 − F_{χ²,|N|}(Ω²),    (18)

where F_{χ²,|N|}(·) is the cdf of a chi-square distribution with |N| degrees of freedom. Note that the bound (18) does not take into account the structure of f(x, D̃), in contrast to bound (17), which depends on f(x, D̃) via the parameters α, β. To illustrate this, we substitute the values of the parameters from Proposition 4 into Eq. (17) and report the resulting bounds in Table 5.

Type     LP                    QCQP                      SOCP(1)               SOCP(2)                   SDP
Bound    √e Ω exp(−Ω²/2)       √(e/2) Ω exp(−Ω²/4)       √e Ω exp(−Ω²/2)       √(e/2) Ω exp(−Ω²/4)       √(e/m) Ω exp(−Ω²/(2m))

Table 5: Probability bounds for P(f(x, D̃) < 0) when z̃ ~ N(0, I).

To amplify the previous discussion, we show in Table 6 the value of Ω needed in order for the bound (17) to be less than or equal to ε. The last column shows the value of Ω using bound (18), which is independent of the structure of the problem. We choose |N| = 4950, which is approximately the maximum number of data entries in an SDP constraint with m = 100.

ε        LP      QCQP    SOCP(1)   SOCP(2)   SDP     Eq. (18)
10⁻¹     2.76    3.90    2.76      3.90      27.6    74.5
10⁻²     3.57    5.05    3.57      5.05      35.7    75.0
10⁻³     4.21    5.95    4.21      5.95      42.1    75.7
10⁻⁶     5.68    7.99    5.68      7.99      56.8    76.9

Table 6: Sample calculations of Ω using the probability bounds of Table 5 for m = 100 and |N| = 4950.

Although a size |N| this large is unrealistic for constraints with fewer data entries, such as LP constraints, the derived probability bounds remain valid. Note that bound (18) leads to Ω = O(√(|N| ln(1/ε))). For LP, SOCP and QCQP, bound (17) leads to Ω = O(√(ln(1/ε))), which is independent of the dimension of the problem. For SDP, it leads to Ω = O(√(m ln(1/ε))). As a result, ignoring the structure of the problem and using bound (18) leads to very conservative solutions.
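The entries of Table 6 can be reproduced by solving √e γ exp(−γ²/2) = ε for γ > 1 and scaling by β/α (a bisection sketch in Python; the ratios 1, √2 and √m = 10 correspond to the LP/SOCP(1), QCQP/SOCP(2) and SDP columns respectively):

    import numpy as np

    def omega_for(eps, ratio=1.0):
        # Solve sqrt(e) * g * exp(-g^2/2) = eps for g > 1 (the left side
        # is decreasing there); return Omega = ratio * g, ratio = beta/alpha.
        lo, hi = 1.0, 20.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if np.sqrt(np.e) * mid * np.exp(-mid ** 2 / 2) < eps:
                hi = mid
            else:
                lo = mid
        return ratio * (lo + hi) / 2

    for eps in (1e-1, 1e-2, 1e-3, 1e-6):
        print(eps, omega_for(eps), omega_for(eps, np.sqrt(2)), omega_for(eps, 10.0))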

5 General cones

In this section, we generalize the results of Sections 2-4 to arbitrary conic constraints of the form

Σ_{j=1}^n Ã_j x_j ⪰_K B̃,    (19)

where {Ã₁, ..., Ã_n, B̃} = D̃ constitutes the set of data that is subject to uncertainty, and K is a closed, convex, pointed cone with nonempty interior. For notational simplicity, we define

A(x, D̃) = Σ_{j=1}^n Ã_j x_j − B̃,

so that Eq. (19) is equivalent to

A(x, D̃) ⪰_K 0.    (20)

We assume that the model for data uncertainty is given in Eq. (3) with z̃ ~ N(0, I). The uncertainty set U satisfies Eq. (4), with the given norm satisfying ‖u‖ = ‖u⁺‖. Paralleling the earlier development, starting with a cone K and constraint (20), we define the function f(·,·) as follows, so that f(x, D) ≥ 0 if and only if A(x, D) ⪰_K 0.

Proposition 5 For any V ≻_K 0, the function

f(x, D) = max{ θ : A(x, D) ⪰_K θV }    (21)

satisfies the following properties:
(a) f(x, D) is bounded and concave in x and D.
(b) f(x, kD) = k f(x, D) for all k ≥ 0.
(c) f(x, D) ≥ y if and only if A(x, D) ⪰_K yV.
(d) f(x, D) > y if and only if A(x, D) ≻_K yV.

Proof
(a) Consider the dual of Problem (21):

z = min ⟨u, A(x, D)⟩
    s.t. ⟨u, V⟩ = 1, u ⪰_{K*} 0,

where K* is the dual cone of K. Since K is a closed, convex, pointed cone with nonempty interior, so is K* (see [5]). As V ≻_K 0, for all u ⪰_{K*} 0 with u ≠ 0 we have ⟨u, V⟩ > 0; hence, the dual problem is bounded. Furthermore, since K* has a nonempty interior, the dual problem is strictly feasible, i.e., there exists u ≻_{K*} 0 with ⟨u, V⟩ = 1. Therefore, by conic duality, the dual objective z attains the same finite value as the primal objective f(x, D). Since A(x, D) is a linear mapping of D and an

affine mapping of x, it follows that f(x, D) is concave in x and D.
(b) Using the dual expression of f(x, D), and the fact that A(x, kD) = k A(x, D), the result follows.
(c) If θ = y is feasible in Problem (21), we have f(x, D) ≥ y. Conversely, if f(x, D) ≥ y, then A(x, D) ⪰_K f(x, D) V ⪰_K yV.
(d) Suppose A(x, D) ≻_K yV; then there exists ε > 0 such that A(x, D) − yV ⪰_K εV, or A(x, D) ⪰_K (ε + y)V. Hence, f(x, D) ≥ ε + y > y. Conversely, since V ≻_K 0, if f(x, D) > y then (f(x, D) − y)V ≻_K 0. Hence, A(x, D) ⪰_K f(x, D)V ≻_K yV.

Remark: With y = 0, (c) establishes that A(x, D) ⪰_K 0 if and only if f(x, D) ≥ 0, and (d) establishes that A(x, D) ≻_K 0 if and only if f(x, D) > 0.

The proposed robust model is given in Eqs. (7) and (8). We next derive an expression for g(x, D) = max{ −f(x, D), −f(x, −D) }.

Proposition 6 Let g(x, D) = max{ −f(x, D), −f(x, −D) }. Then

g(x, D) = ‖H(x, D)‖_g,

where H(x, D) = A(x, D) and

‖S‖_g = min{ y : yV ⪰_K S ⪰_K −yV }.

Proof We observe that

g(x, D) = max{ −f(x, D), −f(x, −D) }
        = min{ y : −f(x, D) ≤ y, −f(x, −D) ≤ y }
        = min{ y : A(x, D) ⪰_K −yV, −A(x, D) ⪰_K −yV }    (from Proposition 5(c), since A(x, −D) = −A(x, D))
        = ‖A(x, D)‖_g.

We also need to show that ‖·‖_g is indeed a valid norm. Since V ≻_K 0, we have ‖S‖_g ≥ 0. Clearly, ‖0‖_g = 0, and if ‖S‖_g = 0, then 0 ⪰_K S ⪰_K 0, which implies that S = 0. To show that ‖kS‖_g = |k| ‖S‖_g, we observe that for k > 0,

‖kS‖_g = min{ y : yV ⪰_K kS ⪰_K −yV }
       = k min{ y/k : (y/k)V ⪰_K S ⪰_K −(y/k)V }
       = k ‖S‖_g.

Likewise, if k < 0,

‖kS‖_g = min{ y : yV ⪰_K kS ⪰_K −yV }
       = min{ y : yV ⪰_K (−k)(−S) ⪰_K −yV }
       = −k ‖−S‖_g = −k ‖S‖_g = |k| ‖S‖_g.

Finally, to verify the triangle inequality,

‖S‖_g + ‖T‖_g = min{ y : yV ⪰_K S ⪰_K −yV } + min{ z : zV ⪰_K T ⪰_K −zV }
             = min{ y + z : yV ⪰_K S ⪰_K −yV, zV ⪰_K T ⪰_K −zV }
             ≥ min{ y + z : (y + z)V ⪰_K S + T ⪰_K −(y + z)V }
             = ‖S + T‖_g.

For the general conic constraint, the norm ‖·‖_g depends on the cone K and on a point V in the interior of the cone. Hence, we define ‖·‖_{K,V} := ‖·‖_g. Using Proposition 5 and Theorem 1, we next show that the robust counterpart of the conic constraint (20) is tractable, and we provide a bound on the probability that the constraint is feasible.

Theorem 3
(a) (Tractability) For a norm satisfying Eq. (5), constraint (7) for general cones is equivalent to

A(x, D⁰) ⪰_K Ω y V,
t_j V ⪰_K A(x, ΔD^j) ⪰_K −t_j V ∀ j ∈ N,
‖t‖* ≤ y,
y ∈ ℝ, t ∈ ℝ^{|N|}.

(b) (Probabilistic guarantee) When we use the l₂-norm in Eq. (8), i.e., ‖s‖* = ‖s‖₂, and under the assumption that z̃ ~ N(0, I), then, given a feasible solution x of Eq. (7), for all V ≻_K 0 we have

P( A(x, D̃) ⋡_K 0 ) ≤ √e (Ω/η_{K,V}) exp( −Ω²/(2η_{K,V}²) ),

where

η_{K,V} = ( max_{⟨S,S⟩=1} ‖S‖_{K,V} ) ( max_{‖S‖_{K,V}=1} √⟨S, S⟩ )

and ‖S‖_{K,V} = min{ y : yV ⪰_K S ⪰_K −yV }.

Proof The theorem follows directly from Propositions 5 and 6 and Theorems 1 and 2.

From Theorem 3, for any cone K we may select V in order to minimize η_{K,V}, i.e.,

η_K = min_{V ≻_K 0} η_{K,V}.

We next show that the smallest parameter η is √2 and √m for the second order cone and the semidefinite cone, respectively. For the second order cone, K = L^{n+1},

L^{n+1} = { x ∈ ℝ^{n+1} : ‖x̄‖₂ ≤ x_{n+1} },

where x̄ = (x₁, ..., x_n). The induced norm is given by

‖x‖_{L^{n+1},v} = min{ y : yv ⪰_{L^{n+1}} x ⪰_{L^{n+1}} −yv }
              = min{ y : ‖x̄ + v̄y‖₂ ≤ v_{n+1}y + x_{n+1}, ‖x̄ − v̄y‖₂ ≤ v_{n+1}y − x_{n+1} },

and

η_{L^{n+1},v} = ( max_{‖x‖₂=1} ‖x‖_{L^{n+1},v} ) ( max_{‖x‖_{L^{n+1},v}=1} ‖x‖₂ ).

For the symmetric positive semidefinite cone, K = S^m_+,

Proof The Theorem follows directly from Propositions 5, 6, Theorems,. From Theorem 3, for any cone K, we select V in order to minimize K V, i.e., K = min V K V : We next show that the smallest parameter is p and p m for SOCP and SDP respectively. For the second order cone, K = L n+, L n+ = fx < n+ : kx n k x n+ g where x n =(x ::: x n ). The induced norm is given by kxk L n+ v =minfy : yv L n+ x L n+ ;yvg =minfy : kx n + v n yk v n+ y + x n+ kx n ; v n yk v n+ y ; x n+ g and L n+ v = max kxk=! kxk L n+ v @ max kxk L n+ v = kxk A : For the symmetric positive semidenite cone, K = S m +, Proposition 7 We have S m + V = kxk S m + V = min fy : yv X ;yvg @ p max kxk S m + V A @ hx Xi= max q hx XiA : kxk S m + V = (a) (b) For the second order cone, L n+ v p for all v L n+ (kv n k <v n+ )withequality holding for v =( ). For the symmetric positive semidenite cone, S m + V p m for all V with equality hoding for V = I. Proof For any V K,we observe that kv k K V = min fy : yv K V K ;yv g =: 3

Otherwise, if kv k K V <, there exist y < such that yv K V, which implies that ;V K, contradicting V K. Hence, kvk L n+ v =andwe obtain! max kxk kvk : kxk L n+ v = Likewise, when x n =(v n )=( p kv n k )andx n+ = ;=( p ), so that kxk =,we can also verify that the inequalities v n kp + v n yk v n+ y ; p kvn k v n kp ; v n yk v n+ y + p kvn k hold if and only if y p =(v n+ ;kv n k ). Hence, kxk L n+ v = p =(v n+ ;kv n k ) and we obtain max kxk L n+ v kxk= Therefore, since <v n+ ;kv n k v n+ kvk, wehave! L n+ v = max kxk L n+ v kxk= When v =( ),wehave p v n+ ;kv n k :! max kxk kxk L n+ v = kxk L n+ v = kx n k + jx n+ j and from Proposition 4(b), the bound is achieved. Hence, L n+ = p. (b) Since V is an invertible matrix, we observe that p kvk v n+ ;kv n k p : kxk S m + V =minfy : yv X ;yvg =min ny : yi V ; XV ; o ;yi = kv ; XV ; k : For any V, letx = V,wehave kxk S m + V =and hx Xi = trace(vv)=kk where < m is a vector corresponding to all the eigenvalues of the matrix V. Hence, we obtain @ max hx XiA kk : kxk S m + V = q 4

Without loss of generality, let be the smallest eigenvalue of V with corresponding normalized eigenvector, q.now, let X = q q. Observe that hx Xi = trace(xx) = trace(q q q q ) = trace(q q q q ) =: We can express the matrix, V in its spectral decomposition, so that V = P j q j q j j. Hence, Therefore, we establish that Combining the results, we have S m V = When V = I, wehave @ kxk S m + V = kv ; XV ; k @ = P k j q j q j ; j q q Pj q j q j ; j k = k ; q q k = ; : q max hx Xi kxk S m + V = p max kxk S m + V hx Xi= A @ A ; : p max kxk S m + V hx Xi= kxk S m V = kxk and from Proposition 4(d), the bound is achieved. Hence, S m = p m. A kk p m: 6 Conclusions We proposed a relaxed robust counterpart for general conic optimization problems that we believe achieves the objectives outlined in the introduction, namely: (a) (b) It preserves the computational tractability of the nominal problem. Specically the robust conic optimization problem retains its original structure, i.e., robust LPs remain LPs, robust SOCPs remain SOCPs and robust SDPs remain SDPs. Moreover, the size of the proposed robust problem especially under the l norm is practically the same as the nominal problem. It allows us to provide a guarantee on the probability that the robust solution is feasible, when the uncertain coecients obey independent and identically distributed normal distributions. 5

A Simplified formulation under independent uncertainties

In this section, we show that if each data entry of the model has independent uncertainty, we can substantially reduce the size of the robust formulation (13). We focus on the equivalent representation (12),

f(x, D⁰) ≥ Ω y, ‖s‖* ≤ y,

where s_j = max{ −f(x, ΔD^j), −f(x, −ΔD^j) } = g(x, ΔD^j) for j ∈ N.

Proposition 8 For LP, QCQP, SOCP(1), SOCP(2) and SDP, we can express

s_j = |Δd_j x_{i(j)}|,

where the Δd_j, j ∈ N, are constants and the function i : N → {0, 1, ..., n} maps j ∈ N to the index of the corresponding variable. We define x₀ = 1 to address the case when s_j is not variable dependent.

Proof We associate the jth data entry, j ∈ N, with an i.i.d. random variable z̃_j. The corresponding expression of g(x, ΔD^j) is shown in Table 3.

(a) LP: Uncertain LP data is represented as D̃ = (ã, b̃), where

ã_j = a_j + Δa_j z̃_j, j = 1, ..., n;   b̃ = b + Δb z̃_{n+1}.

We have |N| = n + 1 and

s_j = |Δa_j x_j|, j = 1, ..., n;   s_{n+1} = |Δb|.

(b) QCQP: Uncertain QCQP data is represented as D̃ = (Ã, b̃, c̃), where

Ã_{kj} = A_{kj} + ΔA_{kj} z̃_{n(k−1)+j}, j = 1, ..., n, k = 1, ..., l;
b̃_j = b_j + Δb_j z̃_{nl+j}, j = 1, ..., n;
c̃ = c + Δc z̃_{n(l+1)+1}.

We have |N| = n(l + 1) + 1 and

s_{n(k−1)+j} = |ΔA_{kj} x_j|, j = 1, ..., n, k = 1, ..., l;
s_{nl+j} = |Δb_j x_j|, j = 1, ..., n;
s_{n(l+1)+1} = |Δc|.

(c) SOCP(1)/SOCP(2): Uncertain SOCP(2) data is represented as D̃ = (Ã, b̃, c̃, d̃), where

Ã_{kj} = A_{kj} + ΔA_{kj} z̃_{n(k−1)+j}, j = 1, ..., n, k = 1, ..., l;
b̃_k = b_k + Δb_k z̃_{nl+k}, k = 1, ..., l;
c̃_j = c_j + Δc_j z̃_{(n+1)l+j}, j = 1, ..., n;
d̃ = d + Δd z̃_{(n+1)l+n+1}.

We have |N| = (n + 1)l + n + 1 and

s_{n(k−1)+j} = |ΔA_{kj} x_j|, j = 1, ..., n, k = 1, ..., l;
s_{nl+k} = |Δb_k|, k = 1, ..., l;
s_{(n+1)l+j} = |Δc_j x_j|, j = 1, ..., n;
s_{(n+1)l+n+1} = |Δd|.

Note that SOCP(1) is a special case of SOCP(2) for which |N| = (n + 1)l, that is, s_j = 0 for all j > (n + 1)l.

(d) SDP: Uncertain SDP data is represented as D̃ = (Ã₁, ..., Ã_n, B̃), where

Ã_i = A_i + Σ_{k=1}^m Σ_{j=1}^k [ΔA_i]_{jk} I^{jk} z̃_{p(i,j,k)}, i = 1, ..., n;
B̃ = B + Σ_{k=1}^m Σ_{j=1}^k [ΔB]_{jk} I^{jk} z̃_{p(n+1,j,k)},

where the index function is p(i, j, k) = (i − 1) m(m + 1)/2 + k(k − 1)/2 + j, and the symmetric matrix I^{jk} ∈ ℝ^{m×m} satisfies

I^{jk} = e_j e_k′ + e_k e_j′ if j ≠ k;   I^{jk} = e_k e_k′ if j = k,

e_k being the kth unit vector. Hence, |N| = (n + 1) m(m + 1)/2. Note that if j = k, ‖I^{jk}‖₂ = 1. Otherwise, I^{jk} has rank two, and (e_j + e_k)/√2 and (e_j − e_k)/√2 are two eigenvectors of I^{jk} with corresponding eigenvalues 1 and −1. Hence, ‖I^{jk}‖₂ = 1 for all valid indices j and k. Therefore, we have

s_{p(i,j,k)} = |[ΔA_i]_{jk} x_i| ∀ i ∈ {1, ..., n}, j, k ∈ {1, ..., m}, j ≤ k;
s_{p(n+1,j,k)} = |[ΔB]_{jk}| ∀ j, k ∈ {1, ..., m}, j ≤ k.

We define the set J(l) = { j : i(j) = l, j ∈ N } for l ∈ {0, 1, ..., n}. From Table 2, we have the following robust formulations under the different norms in the restriction set V of Eq. (8).

(a) l∞-norm: The constraint ‖s‖* ≤ y for the l∞-norm is equivalent to

Σ_{j∈N} |Δd_j x_{i(j)}| ≤ y, i.e., Σ_{j∈J(0)} |Δd_j| + Σ_{l=1}^n ( Σ_{j∈J(l)} |Δd_j| ) |x_l| ≤ y,

or

Σ_{j∈J(0)} |Δd_j| + Σ_{l=1}^n ( Σ_{j∈J(l)} |Δd_j| ) t_l ≤ y,
t ≥ x, t ≥ −x, t ∈ ℝⁿ.

We introduce an additional n + 1 variables, including the variable y, and 2n + 1 linear constraints to the nominal problem.

(b) l₁-norm: The constraint ‖s‖* ≤ y for the l₁-norm is equivalent to

max_{j∈N} |Δd_j x_{i(j)}| ≤ y, i.e., max_{l∈{0,...,n}} ( max_{j∈J(l)} |Δd_j| ) |x_l| ≤ y,

or

max_{j∈J(0)} |Δd_j| ≤ y,
( max_{j∈J(l)} |Δd_j| ) x_l ≤ y, l = 1, ..., n,
−( max_{j∈J(l)} |Δd_j| ) x_l ≤ y, l = 1, ..., n.

We introduce an additional variable and 2n + 1 linear constraints to the nominal problem.

(c) l₁∩l∞-norm: The constraint ‖s‖* ≤ y for the l₁∩l∞-norm is equivalent to

t_j ≥ |Δd_j| x_{i(j)}, t_j ≥ −|Δd_j| x_{i(j)}, j ∈ N,
Γp + Σ_{j∈N} r_j ≤ y,
r_j + p ≥ t_j ∀ j ∈ N,
r ∈ ℝ^{|N|}_+, t ∈ ℝ^{|N|}, p ∈ ℝ_+,

leading to an additional 2|N| + 2 variables and 4|N| + 2 linear constraints, including non-negativity constraints, to the nominal problem.

(d) l₂-norm: The constraint ‖s‖* ≤ y for the l₂-norm is equivalent to

√( Σ_{j∈N} (Δd_j x_{i(j)})² ) ≤ y, i.e., √( Σ_{j∈J(0)} Δd_j² + Σ_{l=1}^n ( Σ_{j∈J(l)} Δd_j² ) x_l² ) ≤ y.

We introduce only an additional variable y and one SOCP constraint to the nominal problem.

(e) l₂∩l∞-norm: The constraint ‖s‖* ≤ y for the l₂∩l∞-norm is equivalent to

t_j ≥ |Δd_j| x_{i(j)}, t_j ≥ −|Δd_j| x_{i(j)}, j ∈ N,
Ω̄ ‖t − r‖₂ + Σ_{j∈N} r_j ≤ y,
t ∈ ℝ^{|N|}, r ∈ ℝ^{|N|}_+.

We introduce 2|N| + 1 variables, one SOCP constraint and 3|N| linear constraints, including non-negativity constraints, to the nominal problem.

In Table 4, we summarize the size increase and the nature of the robust model for the different choices of the given norm.
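As an illustration of item (d), the aggregated coefficients of the single second-order cone constraint can be computed directly from the entrywise perturbations (a numpy sketch with toy data of ours):

    import numpy as np

    # Toy LP row a'x >= b with entrywise perturbations da_j of a and db of b,
    # so that J(l) = {l} for l = 1, ..., n and J(0) indexes the b-term.
    da = np.array([0.10, 0.00, 0.25])
    db = 0.05
    x = np.array([1.0, -2.0, 0.5])

    # ||s||_2 <= y with s_j = |dd_j x_i(j)| collapses to one SOC constraint:
    lhs = np.sqrt(db ** 2 + np.sum((da * x) ** 2))
    print(lhs)   # compare with y in f(x, D0) >= Omega * y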

References

[1] Ben-Tal, A., Nemirovski, A. (1998): Robust convex optimization, Math. Oper. Res., 23, 769-805.

[2] Ben-Tal, A., Nemirovski, A. (1998): On the quality of SDP approximations of uncertain SDP programs, Research Report #4/98, Optimization Laboratory, Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology, Israel.

[3] Ben-Tal, A., Nemirovski, A. (1999): Robust solutions to uncertain programs, Oper. Res. Lett., 25, 1-13.

[4] Ben-Tal, A., Nemirovski, A. (2000): Robust solutions of Linear Programming problems contaminated with uncertain data, Math. Progr., 88, 411-424.

[5] Ben-Tal, A., Nemirovski, A. (2001): Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, MPS-SIAM Series on Optimization, SIAM, Philadelphia.

[6] Ben-Tal, A., El-Ghaoui, L., Nemirovski, A. (2000): Robust semidefinite programming, in Saigal, R., Vandenberghe, L., Wolkowicz, H., eds., Semidefinite Programming and Applications, Kluwer Academic Publishers.

[7] Bertsimas, D., Pachamanova, D., Sim, M. (2003): Robust Linear Optimization under General Norms, to appear in Oper. Res. Lett.

[8] Bertsimas, D., Sim, M. (2004): The Price of Robustness, Oper. Res., 52(1), 35-53.

[9] Bertsimas, D., Sim, M. (2003): Robust Discrete Optimization and Network Flows, Math. Progr., 98, 49-71.

[10] Birge, J. R., Louveaux, F. (1997): Introduction to Stochastic Programming, Springer, New York.

[11] El-Ghaoui, L., Lebret, H. (1997): Robust solutions to least-squares problems with uncertain data matrices, SIAM J. Matrix Anal. Appl., 18, 1035-1064.

[12] El-Ghaoui, L., Oustry, F., Lebret, H. (1998): Robust solutions to uncertain semidefinite programs, SIAM J. Optim., 9, 33-52.

[13] Renegar, J. (2001): A Mathematical View of Interior-Point Methods in Convex Optimization, MPS-SIAM Series on Optimization, SIAM, Philadelphia.

[14] Soyster, A.L. (1973): Convex programming with set-inclusive constraints and applications to inexact linear programming, Oper. Res., 21, 1154-1157.