
Operations Research Letters 37 (2009) 1–6

Duality in robust optimization: Primal worst equals dual best

Amir Beck, Aharon Ben-Tal
Department of Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa 32000, Israel
E-mail addresses: becka@ie.technion.ac.il (A. Beck), abental@ie.technion.ac.il (A. Ben-Tal)

Article history: Received 1 April 2008; Accepted 26 September 2008; Available online 17 October 2008.

Abstract. We study the dual problems associated with the robust counterparts of uncertain convex programs. We show that while the primal robust problem corresponds to a decision maker operating under the worst possible data, the dual problem corresponds to a decision maker operating under the best possible data. © 2008 Elsevier B.V. All rights reserved.

Keywords: Robust optimization; Optimistic counterpart; Convex programming duality

1. Introduction

Consider a general uncertain optimization problem

(P)   $\min_x \; g(x; u)$  s.t.  $f_i(x; v_i) \le 0, \; i = 1, \ldots, m, \; x \in \mathbb{R}^n$,   (1.1)

where $g$ and $f_i$ are convex functions with respect to the decision variable $x$, and $u \in \mathbb{R}^p$, $v_i \in \mathbb{R}^{q_i}$ are the uncertain parameters of the problem. Robust optimization (RO) [1,2] is one of the basic methodologies for dealing with the case in which the parameters $u, v_i$ are not exactly known. The setting in RO is that the information available on the uncertainties is crude: $u$ and $v_i$ are only known to reside in certain convex compact uncertainty sets: $u \in U$, $v_i \in V_i$, $i = 1, \ldots, m$. A vector $x$ is a robust feasible solution of (P) if it satisfies the constraints for every possible realization of the parameters; that is, if for every $i = 1, \ldots, m$:

$f_i(x; v_i) \le 0$ for every $v_i \in V_i$.

We emphasize that RO deals exclusively with problems modelled as (1.1), i.e., all constraints are inequalities and the uncertainty is constraint-wise. The constraints in problem (1.1) can be written as

$F_i(x) \le 0, \quad i = 1, \ldots, m$,

where

$F_i(x) = \max_{v_i \in V_i} f_i(x; v_i)$.   (1.2)

If we also denote

$G(x) = \max_{u \in U} g(x; u)$,   (1.3)

then the robust counterpart (RC) of the original problem is given by

(R-P)   $\min_x \; G(x)$  s.t.  $F_i(x) \le 0, \; i = 1, \ldots, m, \; x \in \mathbb{R}^n$.   (1.4)

The functions $G$ and $F_i$ are convex as pointwise maxima of convex functions [6]. Therefore, the robust counterpart (1.4) is always a convex optimization problem, and thus it has a dual convex problem (call it (DR-P)), which under some regularity conditions has the same value as the primal problem. At the same time, the primal uncertain problem (P) itself has an uncertain dual problem (call it (D-P)), with the same uncertain parameters. In this paper we study the relation between (DR-P) and the uncertain dual problem (D-P). The relation involves the notion of an optimistic counterpart, which is introduced in Section 2. We then show in Section 3 that for linear programming (LP) problems the dual of the robust counterpart is the optimistic counterpart of the uncertain dual problem. For LP problems the dual of the robust counterpart can be computed explicitly, and so the above relation is explicitly revealed. In the last section we study the same relation for a general uncertain convex program.
Although there the robust counterpart cannot be computed explicitly, we employ a minimax theorem to show that the relation "primal worst equals dual best" remains valid; so, while in the primal robust problem we have a decision maker operating under the worst possible data, in the dual problem we have a decision maker operating under the best possible data.

2. The optimistic counterpart

A vector $x$ is an optimistic feasible solution of (P) if it satisfies the constraints for at least one realization of the uncertain parameters. That is, $x$ is an optimistic feasible solution if and only if for every $i = 1, \ldots, m$,

$f_i(x; v_i) \le 0$ for some $v_i \in V_i$.

The optimistic counterpart of problem (P) consists of minimizing the best possible objective function (i.e., minimal with respect to the parameters) over the set of optimistic feasible solutions:

$\min_x \left\{ \min_{u \in U} g(x; u) : f_i(x; v_i) \le 0 \text{ for some } v_i \in V_i, \; i = 1, \ldots, m, \; x \in \mathbb{R}^n \right\}$.

Denoting $\hat{G}(x) = \min_{u \in U} g(x; u)$ and $\hat{F}_i(x) = \min_{v_i \in V_i} f_i(x; v_i)$, the above problem can be equivalently written as

(OC)   $\min_x \; \hat{G}(x)$  s.t.  $\hat{F}_i(x) \le 0, \; i = 1, \ldots, m, \; x \in \mathbb{R}^n$.

As opposed to the robust counterpart, the above problem is in general not convex: the objective function $\hat{G}$ and the constraint functions $\hat{F}_i$ are pointwise minima of convex functions and as such do not necessarily possess any convexity/concavity properties.

Example 2.1 (Linear Programming). Consider a linear programming (LP) problem

$\min_x \; c^T x$  s.t.  $a_i^T x \le b_i, \; i = 1, \ldots, m, \; x \in \mathbb{R}^n$,   (2.1)

where $c, a_i \in \mathbb{R}^n$ and $b_i \in \mathbb{R}$ for every $i = 1, \ldots, m$. Suppose now that the vectors $a_i$ are not fixed but rather known to reside in uncertainty sets $V_i$ which are $\ell_\infty$ balls of the form

$V_i = \{ \tilde{a}_i + v_i : \|v_i\|_\infty \le \rho \}$,

where $\tilde{a}_i$ is the nominal value of $a_i$ and $\rho > 0$. This means that the coefficients $a_{ij}$ (the $j$th component of $a_i$) have an interval uncertainty: $|a_{ij} - \tilde{a}_{ij}| \le \rho$. The $i$th constraint function is

$F_i(x) = \max_{\|v_i\|_\infty \le \rho} (\tilde{a}_i + v_i)^T x = \tilde{a}_i^T x + \rho \|x\|_1$,

and consequently the robust counterpart becomes

$\min_x \; c^T x$  s.t.  $\tilde{a}_i^T x + \rho \|x\|_1 \le b_i, \; i = 1, \ldots, m, \; x \in \mathbb{R}^n$.

The above problem is of course a convex optimization problem and can be cast as an LP, thus rendering it tractable. On the other hand, the $i$th constraint in the optimistic counterpart is given by

$\hat{F}_i(x) = \min_{\|v_i\|_\infty \le \rho} (\tilde{a}_i + v_i)^T x = \tilde{a}_i^T x - \rho \|x\|_1$.

Consequently, the optimistic counterpart of (2.1) is

$\min_x \; c^T x$  s.t.  $\tilde{a}_i^T x - \rho \|x\|_1 \le b_i, \; i = 1, \ldots, m, \; x \in \mathbb{R}^n$,

which is clearly not a convex problem. Note however that if instead of (2.1) the nominal LP is

$\min_x \; c^T x$  s.t.  $a_i^T x \le b_i, \; i = 1, \ldots, m, \; x \ge 0$,   (2.2)

then the corresponding optimistic counterpart associated with the same uncertainty sets $V_i$ is

$\min_x \; c^T x$  s.t.  $\tilde{a}_i^T x - \rho \sum_{j=1}^n x_j \le b_i, \; i = 1, \ldots, m, \; x \ge 0$,

which is a convex (in fact linear) program. Thus we observe that the convexity status of the optimistic counterpart depends, among other things, on the representation of the nominal problem. It also depends on the choice of the uncertainty set; for example, for the same nominal problem (2.2), if the uncertainty sets are Euclidean balls

$V_i = \{ \tilde{a}_i + v_i : \|v_i\|_2 \le \rho \}, \quad i = 1, \ldots, m$,

then the optimistic counterpart is again a nonconvex problem:

$\min_x \; c^T x$  s.t.  $\tilde{a}_i^T x - \rho \|x\|_2 \le b_i, \; i = 1, \ldots, m, \; x \ge 0$.
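The two counterparts in Example 2.1 are easy to experiment with. The following minimal sketch (not part of the paper) assumes the cvxpy modeling package and uses made-up data for the nonnegative variant (2.2); since $\|x\|_1 = \sum_j x_j$ when $x \ge 0$, both the robust and the optimistic counterparts are LPs here.

```python
# A small numerical sketch of Example 2.1 (not from the paper): for the
# nonnegative variant (2.2) with interval uncertainty, both counterparts
# are LPs, since ||x||_1 = sum(x) when x >= 0. The data below is made up.
import cvxpy as cp
import numpy as np

a_tilde = np.array([[1.0, 1.0],
                    [1.0, 0.0],
                    [0.0, 1.0]])          # nominal vectors a_i
b = np.array([1.0, 0.8, 0.8])
c = np.array([-1.0, -2.0])
rho = 0.1

def lp_value(sign):
    # sign = +1 gives the robust counterpart, -1 the optimistic counterpart
    x = cp.Variable(2, nonneg=True)
    cons = [a_tilde[i] @ x + sign * rho * cp.sum(x) <= b[i] for i in range(3)]
    prob = cp.Problem(cp.Minimize(c @ x), cons)
    prob.solve()
    return prob.value

print("robust value    :", lp_value(+1))   # worst-case data: larger value
print("optimistic value:", lp_value(-1))   # best-case data: smaller value
```

On such a minimization problem the robust value can only be larger than the nominal value (the feasible set shrinks) and the optimistic value can only be smaller (the feasible set grows).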
Example 2.2 (Robust and Optimistic Least Squares). Given a matrix $A \in \mathbb{R}^{m \times n}$ and a vector $b \in \mathbb{R}^m$, the celebrated least squares problem [3] consists of minimizing the data error over the entire space:

(LS)   $\min_x \; \|Ax - b\|_2$.

Now, assume that the matrix $A$ is not fixed but is rather known to reside in the uncertainty set

$U = \{ \tilde{A} + \Delta : \|\Delta\|_F \le \rho \}$,

where $\|\cdot\|_F$ stands for the Frobenius norm and $\tilde{A}$ is the fixed nominal matrix. The robust counterpart of (LS) is given by

(R-LS)   $\min_x \max_{A \in U} \|Ax - b\|_2$.

This problem was introduced and studied in [5], where it was called robust least squares. By explicitly solving the inner maximization problem (see [5]), (R-LS) becomes

$\min_x \; \|\tilde{A}x - b\|_2 + \rho \|x\|_2$.

The above is of course a convex problem, and more specifically a conic quadratic problem, that can be solved efficiently. The optimistic counterpart of (LS) is given by

(O-LS)   $\min_x \min_{A \in U} \|Ax - b\|_2$.

Using the variational form of the Euclidean norm, the inner minimization problem becomes

$\min_{\|\Delta\|_F \le \rho} \|\tilde{A}x - b + \Delta x\|_2 = \min_{\|\Delta\|_F \le \rho} \; \max_{y : \|y\|_2 \le 1} y^T (\tilde{A}x - b + \Delta x) = \max_{y : \|y\|_2 \le 1} \; \min_{\|\Delta\|_F \le \rho} y^T (\tilde{A}x - b + \Delta x) = \max_{y : \|y\|_2 \le 1} \left\{ y^T (\tilde{A}x - b) - \rho \|x\|_2 \|y\|_2 \right\}$,   (2.3)

where the second equality follows from the equality between min max and max min for convex–concave functions [6, Corollary 37.3.2]. An explicit expression for (2.3) can be found as follows:

$\max_{y : \|y\| \le 1} \left\{ y^T (\tilde{A}x - b) - \rho \|x\| \|y\| \right\} = \max_{0 \le \alpha \le 1} \; \max_{y : \|y\| = \alpha} \left\{ y^T (\tilde{A}x - b) - \rho \|x\| \alpha \right\} = \max_{0 \le \alpha \le 1} \alpha \left( \|\tilde{A}x - b\| - \rho \|x\| \right) = \left[ \|\tilde{A}x - b\| - \rho \|x\| \right]_+$,

where for a scalar $t$, $[t]_+$ denotes the positive part of $t$: $[t]_+ = t$ if $t > 0$ and $[t]_+ = 0$ if $t \le 0$. Therefore, (O-LS) simplifies to

$\min_x \left[ \|\tilde{A}x - b\|_2 - \rho \|x\|_2 \right]_+$,

which is not a convex optimization problem. It is interesting to note that, as was shown in [5], the solution of (R-LS) is of the form $\hat{x} = (\tilde{A}^T \tilde{A} + \alpha I)^{-1} \tilde{A}^T b$ with $\alpha \ge 0$, while in [4] it was shown, under the assumption that $\rho$ is small enough, that the solution of (O-LS) is of the form $\hat{x} = (\tilde{A}^T \tilde{A} - \beta I)^{-1} \tilde{A}^T b$ with $\beta \ge 0$. Both $\alpha$ and $\beta$ are solutions of certain one-dimensional secular equations. Therefore, the optimal solution of (R-LS) is a regularization of the least squares solution, while the optimal solution of (O-LS) has an opposite de-regularization effect.
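As a quick illustration of this regularization/de-regularization contrast (not from the paper), the sketch below compares the plain, regularized, and de-regularized solutions on random data. The shrinkage parameters alpha and beta are fixed illustrative values, not the solutions of the secular equations mentioned above; the worst-case and best-case residuals are evaluated with the closed forms just derived.

```python
# A sketch (not from the paper) contrasting the robust- and optimistic-style
# least squares solutions of Example 2.2; alpha and beta are illustrative.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))   # nominal matrix \tilde{A}
b = rng.standard_normal(8)
rho = 0.3

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]                  # plain LS
alpha, beta = 0.5, 0.5                                       # illustrative
G = A.T @ A
x_rob = np.linalg.solve(G + alpha * np.eye(3), A.T @ b)      # regularized
x_opt = np.linalg.solve(G - beta * np.eye(3), A.T @ b)       # de-regularized

def worst_case(x):   # closed-form inner max: ||Ax-b|| + rho*||x||
    return np.linalg.norm(A @ x - b) + rho * np.linalg.norm(x)

def best_case(x):    # closed-form inner min: [||Ax-b|| - rho*||x||]_+
    return max(np.linalg.norm(A @ x - b) - rho * np.linalg.norm(x), 0.0)

for name, x in [("LS", x_ls), ("robust", x_rob), ("optimistic", x_opt)]:
    print(f"{name:10s} ||x|| = {np.linalg.norm(x):.3f}  "
          f"worst = {worst_case(x):.3f}  best = {best_case(x):.3f}")
```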

We end this section by defining the optimistic counterpart for a more general optimization model in which equality constraints are also present and the uncertain parameters are not necessarily constraint-wise. Specifically, consider the general model

(G)   $\min_x \; g(x; u)$  s.t.  $H(x; v_1) \le 0, \; K(x; v_2) = 0, \; x \in \mathbb{R}^n$,

where $H : \mathbb{R}^n \to \mathbb{R}^{m_1}$ and $K : \mathbb{R}^n \to \mathbb{R}^{m_2}$ are vector functions and $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar function of $n$ variables. The parameters $u \in \mathbb{R}^p$, $v_1 \in \mathbb{R}^{q_1}$, $v_2 \in \mathbb{R}^{q_2}$ are uncertain and only known to reside in compact sets $U$, $V_1$ and $V_2$ respectively. The optimistic counterpart of problem (G) is defined as

(O-G)   $\min_{x, \, u \in U} \; g(x; u)$  s.t.  $H(x; v_1) \le 0$ for some $v_1 \in V_1$, $K(x; v_2) = 0$ for some $v_2 \in V_2$, $x \in \mathbb{R}^n$.

Note that, as opposed to the robust counterpart, in the context of the optimistic counterpart it does make sense to deal with equality constraints (requiring an equality to hold for every realization of a continuum of parameters would typically render the problem infeasible).

3. The dual of robust linear programming problems

In this section we consider an LP of the following general form:

(LP)   $\max_x \; c^T x$  s.t.  $a_i^T x \le b_i, \; i = 1, \ldots, m, \; x \ge 0$,   (3.1)

where $c, a_i \in \mathbb{R}^n$. Assume now that for every $i = 1, \ldots, m$, the vector $a_i$ is not fixed but rather known to reside in some uncertainty set $U_i$: $a_i \in U_i$, where $U_i$ is a nonempty compact set. We assume that $b$ and $c$ are certain data vectors for the sake of exposition; indeed, a problem with uncertainty in the objective function and in the right-hand side of the constraints can be reduced to an equivalent one with uncertainty only in the left-hand side of the constraints.

Throughout, the letters D, R, O stand for dual, robust and optimistic respectively. In particular, (R-LP) is the robust counterpart of (LP), (DR-LP) is the dual of (R-LP), (D-LP) is the dual of (LP) and (OD-LP) is the optimistic counterpart of (D-LP).

Now, consider (DR-LP), the dual of the robust counterpart of (LP). A very natural question is whether or not the dual of the robust counterpart is the same as the robust counterpart of the dual problem. The answer is certainly not. On the contrary, we will show in this section (Theorem 3.1) that regardless of the choice of the uncertainty set, problem (DR-LP), the dual of the robust counterpart of (LP), is the same as the optimistic counterpart of the dual problem of (LP), that is, problem (OD-LP). The following diagram illustrates this relation:

    (LP) --robust--> (R-LP)
      |                 |
     dual              dual
      |                 |
      v                 v
    (D-LP) --optimistic--> (OD-LP) = (DR-LP)

We use the following notation: for a compact nonempty set $S$, the support function of $S$ at a point $y$ is denoted by $\sigma_S(y) = \max\{ y^T z : z \in S \}$.

Theorem 3.1. The dual of the robust counterpart of (LP) and the optimistic counterpart of the dual problem of (LP) are both given by the convex program

(DR-LP), (OD-LP)   $\min_y \; b^T y$  s.t.  $g(y) \ge 0, \; y \ge 0$,   (3.2)

where

$g(y) = \min_{x \ge 0} \left\{ \sum_{i=1}^m y_i \sigma_{U_i}(x) - c^T x \right\}$.   (3.3)
Proof. The $i$th constraint of the robust counterpart of (3.1) is

$\max\{ a_i^T x : a_i \in U_i \} \le b_i$,

so the robust counterpart can be written as

(R-LP)   $\max_x \; c^T x$  s.t.  $\sigma_{U_i}(x) \le b_i, \; i = 1, \ldots, m, \; x \ge 0$.

To construct the dual, let us write the Lagrangian:

$L(x, y) = c^T x - \sum_{i=1}^m y_i \left( \sigma_{U_i}(x) - b_i \right)$.

By the positive homogeneity of the support function we have

$\max_{x \ge 0} L(x, y) = \begin{cases} b^T y, & g(y) \ge 0, \\ +\infty, & \text{else}, \end{cases}$

where $g$ is given in (3.3). Therefore, the dual problem of (R-LP) is given by

(DR-LP)   $\min_y \; b^T y$  s.t.  $g(y) \ge 0, \; y \ge 0$,

with $g$ as in (3.3). Let us now consider the uncertain dual problem of (LP):

(D-LP)   $\min_y \; b^T y$  s.t.  $\sum_{i=1}^m y_i a_i \ge c, \; y \ge 0$.   (3.4)

A vector $y \ge 0$ is an optimistic-feasible solution of (D-LP) if and only if

$\exists \, a_i \in U_i : \; \sum_{i=1}^m y_i a_i \ge c$.

The latter is satisfied if and only if

$\min_{a_i \in U_i} \max_{z \ge 0} \; z^T \left( c - \sum_{i=1}^m y_i a_i \right) \le 0$.

Using the equality of max min and min max for convex–concave functions [6, Corollary 37.3.2], we can exchange the order of the min and the max, resulting in

$\max_{z \ge 0} \min_{a_i \in U_i} \left\{ c^T z - \sum_{i=1}^m y_i a_i^T z \right\} \le 0$,

which is the same as

$g(y) = \min_{z \ge 0} \left\{ \sum_{i=1}^m y_i \sigma_{U_i}(z) - c^T z \right\} \ge 0$.

Therefore, the optimistic counterpart of the dual problem (D-LP) coincides with (DR-LP), the dual of the robust counterpart of the primal problem.

Example 3.1. Suppose that the uncertainty sets $U_i$ are ellipsoidal sets given by

$U_i = \{ a_i = \tilde{a}_i + v_i : \|v_i\|_2 \le \rho \}, \quad i = 1, \ldots, m$.

In this case

$\sigma_{U_i}(x) = \max_{z \in U_i} x^T z = \max_{\|v_i\|_2 \le \rho} \left( x^T \tilde{a}_i + x^T v_i \right) = \tilde{a}_i^T x + \rho \|x\|_2$.

Therefore,

$g(y) = \min_{x \ge 0} \left\{ \sum_{i=1}^m y_i \sigma_{U_i}(x) - c^T x \right\} = \min_{x \ge 0} \left\{ \left( \sum_{i=1}^m y_i \tilde{a}_i - c \right)^T x + \rho \left( \sum_{i=1}^m y_i \right) \|x\|_2 \right\}$.

The function being minimized is positively homogeneous in $x$, so the minimum over $x \ge 0$ is either $0$ or $-\infty$; it equals $0$ exactly when

$\left\| \left[ c - \sum_{i=1}^m y_i \tilde{a}_i \right]_+ \right\|_2 \le \rho \sum_{i=1}^m y_i$

(with the positive part taken componentwise), and $-\infty$ otherwise. Plugging this into (3.2), we conclude that the dual of the robust counterpart of (LP), which is the same as the optimistic counterpart of the dual of (LP), is given by the conic quadratic problem

$\min_y \; b^T y$  s.t.  $\left\| \left[ c - \sum_{i=1}^m y_i \tilde{a}_i \right]_+ \right\|_2 \le \rho \sum_{i=1}^m y_i, \; y \ge 0$.

Theorem 3.1 shows that val(DR-LP) = val(OD-LP). In addition, if (R-LP), the robust counterpart of (LP), has a finite value and satisfies a regularity condition such as Slater's condition, then strong duality holds [6] and val(R-LP) = val(DR-LP). From this we conclude that

val(R-LP) = val(OD-LP).   (3.5)

Namely, the value of the robust counterpart of (LP) ("primal worst") is equal to the value of the optimistic counterpart of the dual problem ("dual best").

To interpret this result, assume that problem (LP) models a standard production problem. In this context $x_j$, the decision variable, stands for the amount to be produced of item $j$; $c_j$ is the profit from the sale of one unit of item $j$; $a_{ij}$ (the $j$th component of the vector $a_i$) is the quantity of resource $i$ needed to produce one unit of item $j$; and $b_i$ is the available supply of resource $i$. The dual problem of (3.1) is

$\min_y \left\{ b^T y : \sum_{i=1}^m y_i a_i \ge c, \; y \ge 0 \right\}$.   (3.6)

A common interpretation of the dual is as follows: suppose that a merchant wants to buy the resources of the manufacturer. For every $i$, his decision variable $y_i$ stands for the price he offers to pay for one unit of resource $i$. Of course he wants to pay as little as possible, and thus his objective is to minimize $b^T y$. The set of constraints $\sum_{i=1}^m a_{ij} y_i \ge c_j$ indicates that the merchant has to offer competitive prices (so that the profit $c_j$ the manufacturer can get from selling a unit of item $j$ is no more than the value he can get from what the merchant pays for the resources needed to produce a unit of item $j$). Now suppose that the vectors $a_i$ are uncertain. The robust counterpart of the production problem (3.1) reflects, in a sense, a pessimistic manufacturer who wishes to maximize his profit under a worst-case scenario. The equality (3.5) states that what is worst for the manufacturer is best for the merchant. Specifically, since the manufacturer underestimates the value of the resources, the merchant needs to be less competitive and can offer lower prices for the resources, thus reducing his total payment.
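The equality (3.5) can be checked numerically. The sketch below (not from the paper) assumes cvxpy and made-up data; (OD-LP) is modeled through the substitution $w_i = y_i v_i$, which turns the optimistic feasibility condition "$\sum_i y_i(\tilde{a}_i + v_i) \ge c$ for some $\|v_i\|_2 \le \rho$" into the second-order cone constraints $\|w_i\|_2 \le \rho y_i$.

```python
# A numerical check (not from the paper) of "primal worst equals dual best"
# for an LP with ellipsoidal uncertainty as in Example 3.1. Data is made up.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
m, n, rho = 4, 3, 0.2
a_tilde = np.abs(rng.standard_normal((m, n))) + 0.5
b = np.ones(m)
c = np.ones(n)

# (R-LP): max c^T x  s.t.  a_i^T x + rho*||x||_2 <= b_i,  x >= 0
x = cp.Variable(n, nonneg=True)
rlp = cp.Problem(cp.Maximize(c @ x),
                 [a_tilde[i] @ x + rho * cp.norm(x, 2) <= b[i]
                  for i in range(m)])
rlp.solve()

# (OD-LP): min b^T y  s.t.  sum_i y_i*a_i + sum_i w_i >= c, ||w_i|| <= rho*y_i
y = cp.Variable(m, nonneg=True)
w = cp.Variable((m, n))
odlp = cp.Problem(cp.Minimize(b @ y),
                  [a_tilde.T @ y + cp.sum(w, axis=0) >= c] +
                  [cp.norm(w[i], 2) <= rho * y[i] for i in range(m)])
odlp.solve()

print("val(R-LP) :", rlp.value)    # primal worst
print("val(OD-LP):", odlp.value)   # dual best -- should coincide
```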
Remark 3.1. In Example 2.1 we showed that the optimistic counterpart of the LP (2.1) is nonconvex, and thus it cannot be equivalent to the dual of a robust LP (which is always a convex problem). How does this agree with the duality result in Theorem 3.1? The answer is that the LP in (2.1) is the dual of the LP

$\max_y \; b^T y$  s.t.  $\sum_{i=1}^m y_i a_i = c, \; y \ge 0$,

which is not of the general form (3.1), where the constraints are assumed to be inequalities and the uncertainty must be constraint-wise.

4. Primal worst equals dual best

Consider now the general model (P) given in (1.1), whose robust counterpart is the problem (R-P) given in (1.4). The dual of (R-P) is given by

$\max_{\lambda \ge 0} \min_x \left\{ G(x) + \sum_{i=1}^m \lambda_i F_i(x) \right\}$.

Recalling the definitions of $G$ and $F_i$ (see (1.2) and (1.3)), the problem becomes

(DR-P)   $\max_{\lambda \ge 0} \min_x \max_{u \in U, \, v_i \in V_i} \left\{ g(x; u) + \sum_{i=1}^m \lambda_i f_i(x; v_i) \right\}$.

We will assume that the Slater constraint qualification is satisfied for problem (R-P) and that (R-P) is bounded below. Under these conditions it is well known that val(R-P) = val(DR-P) [6]. On the other hand, the dual of (P) is given by

(D-P)   $\max_{\lambda \ge 0} \; q(\lambda; u, v_i)$,

where

$q(\lambda; u, v_i) = \min_x \left\{ g(x; u) + \sum_{i=1}^m \lambda_i f_i(x; v_i) \right\}$.   (4.1)

The optimistic counterpart of (D-P) is

$\max_{\lambda \ge 0} \; \max_{u \in U, \, v_i \in V_i} q(\lambda; u, v_i)$.

Plugging the expression (4.1) for $q$ into the above problem, we arrive at the following formulation of the optimistic counterpart of the dual:

(OD-P)   $\max_{\lambda \ge 0} \; \max_{u \in U, \, v_i \in V_i} \; \min_x \left\{ g(x; u) + \sum_{i=1}^m \lambda_i f_i(x; v_i) \right\}$.

The LP case considered in Section 3 suggests that under some conditions the optimal values of (OD-P) and (DR-P) are equal. The following result shows that val(OD-P) never exceeds val(DR-P) and that, under suitable concavity assumptions, equality holds.

Theorem 4.1. Consider the general convex problem (P) (problem (1.1)). Then

val(OD-P) ≤ val(DR-P).   (4.2)

If in addition the functions $g, f_i$ are concave with respect to the uncertain parameters $u, v_i$, then the following equality holds:

val(OD-P) = val(DR-P).   (4.3)

Proof. Since max min is always less than or equal to min max, we have

val(OD-P) $= \max_{\lambda \ge 0} \max_{u \in U, \, v_i \in V_i} \min_x \left\{ g(x; u) + \sum_{i=1}^m \lambda_i f_i(x; v_i) \right\} \le \max_{\lambda \ge 0} \min_x \max_{u \in U, \, v_i \in V_i} \left\{ g(x; u) + \sum_{i=1}^m \lambda_i f_i(x; v_i) \right\}$ = val(DR-P).

If $g, f_i$ are concave with respect to $u, v_i$ (in addition to the convexity with respect to $x$), and since $U, V_i$ are convex compact sets, then by [6, Corollary 37.3.2] equality (4.3) holds.

Clearly, Theorem 4.1 generalizes the result obtained in Section 3. As was noted in the LP case, by the strong duality result for convex programming we know that val(R-P) = val(DR-P). We therefore conclude that if the functions are all convex with respect to $x$ and concave with respect to the uncertain parameters $u, v_i$, then

val(R-P) = val(OD-P).   (4.4)

Loosely speaking, relation (4.4) states that optimizing under the worst-case scenario in the primal is the same as optimizing under the best-case scenario in the dual ("primal worst equals dual best").

Theorem 4.1 is not restricted to LP problems. Another interesting example is that of a convex quadratically constrained quadratic programming (QCQP) problem.

Example 4.1. Consider the QCQP problem

(QCQP)   $\min_x \; x^T A_0 x + 2 b_0^T x + c_0$  s.t.  $x^T A_i x + 2 b_i^T x + c_i \le 0, \; i = 1, \ldots, m, \; x \in \mathbb{R}^n$.

For each $i = 0, 1, \ldots, m$, the matrix $A_i$ is uncertain and resides in the uncertainty set

$U_i = \{ \tilde{A}_i + \Delta_i : \Delta_i = \Delta_i^T, \; \|\Delta_i\|_2 \le \rho_i \}$;

here $\|S\|_2$ is the spectral norm of a matrix $S$, i.e., $\|S\|_2 = \sqrt{\lambda_{\max}(S^T S)}$. We assume that the nominal matrices $\tilde{A}_i$ are positive definite and that $\rho_i < \lambda_{\min}(\tilde{A}_i)$ for every $i = 0, 1, \ldots, m$. Under this condition the objective and constraint functions are strictly convex for every possible realization of the uncertain parameters. Moreover, the objective function and all the constraint functions are convex with respect to $x$ and concave (in fact, linear) with respect to $\Delta_i$. By Theorem 4.1 this implies that val(DR-QCQP) = val(OD-QCQP). Let us find explicit expressions for these problems and verify the result. Since for every $i = 0, 1, \ldots, m$

$\max_{A_i \in U_i} \left\{ x^T A_i x + 2 b_i^T x + c_i \right\} = x^T (\tilde{A}_i + \rho_i I) x + 2 b_i^T x + c_i$,

the robust counterpart of (QCQP) is given by

(R-QCQP)   $\min_x \; x^T (\tilde{A}_0 + \rho_0 I) x + 2 b_0^T x + c_0$  s.t.  $x^T (\tilde{A}_i + \rho_i I) x + 2 b_i^T x + c_i \le 0, \; i = 1, \ldots, m$.

The dual problem of (R-QCQP) is

(DR-QCQP)   $\max_{\lambda \ge 0} \left\{ - \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right)^T \left( \tilde{A}_0 + \rho_0 I + \sum_{i=1}^m \lambda_i (\tilde{A}_i + \rho_i I) \right)^{-1} \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right) + c_0 + \sum_{i=1}^m c_i \lambda_i \right\}$.

Now consider the dual problem of (QCQP):

(D-QCQP)   $\max_{\lambda \ge 0} \left\{ - \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right)^T \left( A_0 + \sum_{i=1}^m \lambda_i A_i \right)^{-1} \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right) + c_0 + \sum_{i=1}^m c_i \lambda_i \right\}$.

The optimistic counterpart of (D-QCQP) is given by

$\max_{\lambda \ge 0} \; \max_{\|\Delta_i\|_2 \le \rho_i} \left\{ - \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right)^T \left( \tilde{A}_0 + \Delta_0 + \sum_{i=1}^m \lambda_i (\tilde{A}_i + \Delta_i) \right)^{-1} \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right) + c_0 + \sum_{i=1}^m c_i \lambda_i \right\}$.   (4.5)

To solve the inner maximization, we will use the following property: if two positive definite matrices satisfy $A \succeq B$, then $A^{-1} \preceq B^{-1}$.
Therefore, since $\Delta_i \preceq \rho_i I$ for every $i = 0, 1, \ldots, m$, we conclude that

$- \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right)^T \left( \tilde{A}_0 + \Delta_0 + \sum_{i=1}^m \lambda_i (\tilde{A}_i + \Delta_i) \right)^{-1} \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right) \le - \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right)^T \left( \tilde{A}_0 + \rho_0 I + \sum_{i=1}^m \lambda_i (\tilde{A}_i + \rho_i I) \right)^{-1} \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right)$.

Since the upper bound is attained at $\Delta_i = \rho_i I$, the solution of the inner maximization is $\Delta_i = \rho_i I$, thus simplifying (4.5) to

(OD-QCQP)   $\max_{\lambda \ge 0} \left\{ - \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right)^T \left( \tilde{A}_0 + \rho_0 I + \sum_{i=1}^m \lambda_i (\tilde{A}_i + \rho_i I) \right)^{-1} \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right) + c_0 + \sum_{i=1}^m c_i \lambda_i \right\}$,

which is the same as (DR-QCQP).
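The coincidence of the values of (R-QCQP) and (DR-QCQP) = (OD-QCQP) can be checked numerically. The sketch below (not from the paper) uses cvxpy on a made-up two-dimensional instance with one constraint; cvxpy's matrix_frac atom, which represents $z^T P^{-1} z$ via a Schur-complement construction (cf. the semidefinite reformulation below), is assumed available to encode the dual objective.

```python
# A numerical sketch (not from the paper) verifying Example 4.1 on a tiny
# made-up instance: val(R-QCQP) should equal val(DR-QCQP) = val(OD-QCQP).
import cvxpy as cp
import numpy as np

A0 = np.diag([2.0, 1.5]); b0 = np.array([1.0, 1.0]); c0 = 0.0
A1 = np.eye(2);           b1 = np.zeros(2);          c1 = -1.0
rho0, rho1 = 0.1, 0.1     # rho_i < lambda_min(A_i), as the example requires
A0r, A1r = A0 + rho0 * np.eye(2), A1 + rho1 * np.eye(2)

# (R-QCQP): worst-case matrices A_i + rho_i * I
x = cp.Variable(2)
rqcqp = cp.Problem(
    cp.Minimize(cp.quad_form(x, A0r) + 2 * b0 @ x + c0),
    [cp.quad_form(x, A1r) + 2 * b1 @ x + c1 <= 0])
rqcqp.solve()

# (DR-QCQP) = (OD-QCQP): max over lam >= 0 of
#   -(b0+lam*b1)^T (A0r + lam*A1r)^{-1} (b0+lam*b1) + c0 + c1*lam
lam = cp.Variable(nonneg=True)
obj = c0 + c1 * lam - cp.matrix_frac(b0 + lam * b1, A0r + lam * A1r)
drqcqp = cp.Problem(cp.Maximize(obj))
drqcqp.solve()

print("val(R-QCQP) :", rqcqp.value)
print("val(DR-QCQP):", drqcqp.value)   # should match val(R-QCQP)
```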

Also note that, by using the Schur complement, problem (DR-QCQP) can be cast as the linear semidefinite program

$\max_{\lambda \ge 0, \, t} \; t + c_0 + \sum_{i=1}^m c_i \lambda_i$  s.t.  $\begin{pmatrix} \tilde{A}_0 + \rho_0 I + \sum_{i=1}^m \lambda_i (\tilde{A}_i + \rho_i I) & b_0 + \sum_{i=1}^m \lambda_i b_i \\ \left( b_0 + \sum_{i=1}^m \lambda_i b_i \right)^T & -t \end{pmatrix} \succeq 0$.

The next and last example illustrates that when the concavity of the functions with respect to the uncertain parameters fails, strict inequality may occur in (4.2).

Example 4.2. Consider the least squares problem (LS): $\min_x \|Ax - b\|_2$. Let us assume that $A$ is uncertain and known to reside in a ball: $A \in U = \{ A_0 + \Delta : \|\Delta\|_F \le \rho \}$. The robust counterpart of (LS) is

$\min_x \max_{\|\Delta\|_F \le \rho} \|(A_0 + \Delta) x - b\|_2$.

Solving the inner maximization problem, the above problem reduces to

(R-LS)   $\min_x \; \|A_0 x - b\|_2 + \rho \|x\|_2$.

The dual problem of (R-LS) is

(DR-LS)   $\max_\lambda \; b^T \lambda$  s.t.  $\|\lambda\|_2 \le 1, \; \|A_0^T \lambda\|_2 \le \rho$.

In order to write a dual problem of (LS), let us first rewrite it as

(LS')   $\min_{x, y} \left\{ \|y\|_2 : y = Ax - b \right\}$.

The dual problem of (LS') is

(D-LS)   $\max_\lambda \; b^T \lambda$  s.t.  $\|\lambda\|_2 \le 1, \; A^T \lambda = 0$.

To find the optimistic counterpart of the dual problem, we need to write explicitly (i.e., in terms of $\lambda$) the constraint "$A^T \lambda = 0$ for some $A \in U$". This constraint is the same as

$\|A_0^T \lambda\|_2 \le \rho \|\lambda\|_2$,

and thus the optimistic counterpart of (D-LS) is

(OD-LS)   $\max_\lambda \; b^T \lambda$  s.t.  $\|\lambda\|_2 \le 1, \; \|A_0^T \lambda\|_2 \le \rho \|\lambda\|_2$.

Since $\|\lambda\|_2 \le 1$, the feasible set of (OD-LS) is contained in the feasible set of (DR-LS), thus verifying the inequality val(OD-LS) ≤ val(DR-LS). Strict inequality may occur. For example, take $m = n = 1$ and $A_0 = 2$, $b = 2$, $\rho = 1$. Then (DR-LS) is just the problem

$\max_\lambda \left\{ 2\lambda : |\lambda| \le 1, \; |2\lambda| \le 1 \right\}$,

whose optimal solution is $\lambda = 0.5$ with an optimal value of 1. On the other hand, (OD-LS) is

$\max_\lambda \left\{ 2\lambda : |\lambda| \le 1, \; |2\lambda| \le |\lambda| \right\}$.

The only feasible point of (OD-LS) is $\lambda = 0$. Thus, in this case the optimal value of (OD-LS) is zero and is strictly smaller than the optimal value of (DR-LS).

References

[1] A. Ben-Tal, A. Nemirovski, Robust convex optimization, Math. Oper. Res. 23 (4) (1998) 769–805.
[2] A. Ben-Tal, A. Nemirovski, Robust optimization – methodology and applications, Math. Program. Ser. B 92 (3) (2002) 453–480. ISMP 2000, Part 2 (Atlanta, GA).
[3] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
[4] S. Chandrasekaran, G.H. Golub, M. Gu, A.H. Sayed, Efficient algorithms for least squares type problems with bounded uncertainties, in: Recent Advances in Total Least Squares Techniques and Errors-in-Variables Modeling (Leuven, 1996), SIAM, Philadelphia, PA, 1997, pp. 171–180.
[5] L. El Ghaoui, H. Lebret, Robust solutions to least-squares problems with uncertain data, SIAM J. Matrix Anal. Appl. 18 (4) (1997) 1035–1064.
[6] R.T. Rockafellar, Convex Analysis, Princeton Univ. Press, Princeton, NJ, 1970.