Operations Research Letters


Operations Research Letters 37 (2009) 1-6

Duality in robust optimization: Primal worst equals dual best

Amir Beck, Aharon Ben-Tal
Department of Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa 32000, Israel
Corresponding author. E-mail addresses: becka@ie.technion.ac.il (A. Beck), abental@ie.technion.ac.il (A. Ben-Tal).

Article history: Received 1 April 2008; Accepted 26 September 2008; Available online 17 October 2008.

Abstract: We study the dual problems associated with the robust counterparts of uncertain convex programs. We show that while the primal robust problem corresponds to a decision maker operating under the worst possible data, the dual problem corresponds to a decision maker operating under the best possible data. © 2008 Elsevier B.V. All rights reserved.

Keywords: Robust optimization; Optimistic counterpart; Convex programming duality

1. Introduction

Consider a general uncertain optimization problem

(P)  min_x g(x; u)
     s.t. f_i(x; v_i) ≤ 0, i = 1, ..., m,
          x ∈ R^n,  (1.1)

where g and f_i are convex functions with respect to the decision variable x, and u ∈ R^p, v_i ∈ R^{q_i} are the uncertain parameters of the problem. Robust optimization (RO) [1,2] is one of the basic methodologies that deals with the case in which the parameters u, v_i are not exactly known. The setting in RO is that the information available on the uncertainties is crude: u and v_i are only known to reside in certain convex compact uncertainty sets: u ∈ U, v_i ∈ V_i, i = 1, ..., m. A vector x is a robust feasible solution of (P) if it satisfies the constraints for every possible realization of the parameters; that is, if for every i = 1, ..., m,

f_i(x; v_i) ≤ 0 for every v_i ∈ V_i.

We emphasize that RO deals exclusively with problems modelled as (1.1), i.e., all constraints are inequalities and the uncertainty is constraint-wise. The constraints in problem (1.1) can be written as

F_i(x) ≤ 0, i = 1, ..., m,

where

F_i(x) = max_{v_i ∈ V_i} f_i(x; v_i).  (1.2)

If we also denote

G(x) = max_{u ∈ U} g(x; u),  (1.3)

then the robust counterpart (RC) of the original problem is given by

min_x G(x)
s.t. F_i(x) ≤ 0, i = 1, ..., m,
     x ∈ R^n.  (1.4)

The functions G and F_i are convex as pointwise maxima of convex functions [6]. Therefore, the robust counterpart (1.4) is always a convex optimization problem, and thus it has a dual convex problem (call it (DR-P)), which under some regularity conditions has the same value as the primal problem. At the same time, the primal uncertain problem (P) itself has an uncertain dual problem (call it (D)), with the same uncertain parameters. In this paper we study the relation between (DR-P) and the uncertain dual problem (D). The relation involves the notion of an optimistic counterpart, which is introduced in Section 2. We then show in Section 3 that for linear programming (LP) problems the dual of the robust counterpart is the optimistic counterpart of the uncertain dual problem. For LP problems the dual of the robust counterpart can be computed explicitly, and so the above relation is explicitly revealed. In the last section we study the same relation for a general uncertain convex program. Although here the robust counterpart cannot be computed explicitly, we employ a minimax theorem to show that the relation "primal worst equals dual best" is valid; so, while in the primal robust problem we have a decision maker operating under the worst possible data, in the dual problem we have a decision maker operating under the best possible data.
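As a quick numerical illustration of why the robust counterpart (1.4) is convex, the sketch below (Python with NumPy; the function f(x; v) = v·x and the set V = [-1, 1] are made-up illustrative choices, not taken from the paper) builds the pointwise maximum F(x) = max_{v ∈ V} f(x; v) over a finite sample of V and checks midpoint convexity. Analytically F(x) = |x|, a convex function, even though each f(·; v) is merely linear.

```python
import numpy as np

def F(x, V=np.linspace(-1.0, 1.0, 201)):
    # Robust constraint function F(x) = max_{v in V} f(x; v) with f(x; v) = v * x.
    # As a pointwise maximum of linear (hence convex) functions, F is convex;
    # here it equals |x| because V contains the endpoints -1 and 1.
    return np.max(V * x)

# Midpoint-convexity check on random pairs: F((a+b)/2) <= (F(a) + F(b)) / 2.
rng = np.random.default_rng(0)
for _ in range(100):
    a, b = rng.uniform(-5, 5, size=2)
    assert F(0.5 * (a + b)) <= 0.5 * (F(a) + F(b)) + 1e-12
```

The same sampling trick applies to any of the functions F_i, G in (1.2)-(1.3): convexity in x is inherited from the inner functions regardless of the structure of the uncertainty set.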

2. The optimistic counterpart

A vector x is an optimistic feasible solution of (P) if it satisfies the constraints for at least one realization of the uncertainty; that is, x is an optimistic feasible solution if and only if for every i = 1, ..., m,

f_i(x; v_i) ≤ 0 for some v_i ∈ V_i.

The optimistic counterpart of problem (P) consists of minimizing the best possible objective function (i.e., minimal with respect to the parameters) over the set of optimistic feasible solutions:

min_x min_{u ∈ U} g(x; u)
s.t. f_i(x; v_i) ≤ 0 for some v_i ∈ V_i, i = 1, ..., m,
     x ∈ R^n.

Denoting Ĝ(x) = min_{u ∈ U} g(x; u) and F̂_i(x) = min_{v_i ∈ V_i} f_i(x; v_i), the above problem can be equivalently written as

(OC) min_x Ĝ(x)
     s.t. F̂_i(x) ≤ 0, i = 1, ..., m,
          x ∈ R^n.

As opposed to the robust counterpart, the above problem is in general not convex: the objective function Ĝ and the constraint functions F̂_i are pointwise minima of convex functions and as such do not necessarily possess any convexity/concavity properties.

Example 2.1 (Linear Programming). Consider a linear programming (LP) problem

min_x c^T x
s.t. a_i^T x ≤ b_i, i = 1, ..., m,
     x ∈ R^n,  (2.1)

where c, a_i ∈ R^n, i = 1, ..., m, and b_i ∈ R for every i = 1, ..., m. Suppose now that the vectors a_i are not fixed but rather known to reside in uncertainty sets V_i which are ℓ_∞ balls of the form

V_i = { ã_i + v_i : ‖v_i‖_∞ ≤ ρ },

where ã_i is the nominal value of a_i and ρ > 0. This means that the coefficients a_{ij} (the jth component of a_i) have an interval uncertainty: |a_{ij} - ã_{ij}| ≤ ρ. The ith constraint function is

F_i(x) = max_{‖v_i‖_∞ ≤ ρ} (ã_i + v_i)^T x = ã_i^T x + ρ‖x‖_1,

and consequently the robust counterpart becomes

min_x c^T x
s.t. ã_i^T x + ρ‖x‖_1 ≤ b_i, i = 1, ..., m,
     x ∈ R^n.

The above problem is of course a convex optimization problem and can be cast as an LP, thus rendering it tractable. On the other hand, the ith constraint in the optimistic counterpart is given by

F̂_i(x) = min_{‖v_i‖_∞ ≤ ρ} (ã_i + v_i)^T x = ã_i^T x - ρ‖x‖_1.

Consequently, the optimistic counterpart of (2.1) is

min_x c^T x
s.t. ã_i^T x - ρ‖x‖_1 ≤ b_i, i = 1, ..., m,
     x ∈ R^n,

which is clearly not a convex problem.
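The closed-form robust and optimistic constraint values for interval (box) uncertainty can be verified numerically: a linear function attains its extrema over a box at the vertices. A minimal check (NumPy assumed; the data ã_i, ρ and the point x below are made up for illustration):

```python
import itertools
import numpy as np

# Example 2.1 check: for a_i = a_tilde + v with ||v||_inf <= rho,
#   worst case of (a_tilde + v)^T x is a_tilde^T x + rho * ||x||_1,
#   best case  of (a_tilde + v)^T x is a_tilde^T x - rho * ||x||_1.
# All data below is hypothetical.
a_tilde = np.array([1.0, -2.0, 0.5])
rho = 0.3
x = np.array([2.0, 1.0, -4.0])

# A linear function attains its max/min over a box at the 2^n vertices.
vertices = [rho * np.array(s) for s in itertools.product([-1.0, 1.0], repeat=3)]
worst = max((a_tilde + v) @ x for v in vertices)
best = min((a_tilde + v) @ x for v in vertices)

assert np.isclose(worst, a_tilde @ x + rho * np.linalg.norm(x, 1))
assert np.isclose(best, a_tilde @ x - rho * np.linalg.norm(x, 1))
```

The worst case is attained at v = ρ·sign(x) and the best case at v = -ρ·sign(x), which is exactly where the ‖x‖_1 terms come from.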
Note, however, that if instead of (2.1) the nominal LP is

min_x c^T x
s.t. a_i^T x ≤ b_i, i = 1, ..., m,
     x ≥ 0,  (2.2)

then the corresponding optimistic counterpart associated with the same uncertainty sets V_i is

min_x c^T x
s.t. ã_i^T x - ρ Σ_{j=1}^n x_j ≤ b_i, i = 1, ..., m,
     x ≥ 0,

which is a convex (in fact linear) program. Thus we observe that the convexity status of the optimistic counterpart depends, among other things, on the representation of the nominal problem. It also depends on the choice of the uncertainty set; for example, for the same nominal problem (2.2), if the uncertainty sets U_i are

U_i = { ã_i + v_i : ‖v_i‖_2 ≤ ρ }, i = 1, ..., m,

then the optimistic counterpart is again a nonconvex problem:

min_x c^T x
s.t. ã_i^T x - ρ‖x‖_2 ≤ b_i, i = 1, ..., m,
     x ≥ 0.

Example 2.2 (Robust and Optimistic Least Squares). Given a matrix A ∈ R^{m×n} and a vector b ∈ R^m, the celebrated least squares problem [3] consists of minimizing the data error over the entire space:

(LS) min_x ‖Ax - b‖_2.

Now assume that the matrix A is not fixed but is rather known to reside in the uncertainty set

U = { Ã + Δ : ‖Δ‖_F ≤ ρ },

where ‖·‖_F stands for the Frobenius norm and Ã is the fixed nominal matrix. The robust counterpart of (LS) is given by

(R-LS) min_x max_{A ∈ U} ‖Ax - b‖_2.

This problem was introduced and studied in [5], where it was called robust least squares. By explicitly solving the inner maximization problem (see [5]), (R-LS) becomes

min_x ‖Ãx - b‖_2 + ρ‖x‖_2.

The above is of course a convex problem, and more specifically a conic quadratic problem, that can be solved efficiently. Here the optimistic counterpart of (LS) is given by

(O-LS) min_x min_{A ∈ U} ‖Ax - b‖_2.

Using the variational form of the Euclidean norm, the inner minimization problem becomes

min_{‖Δ‖_F ≤ ρ} ‖Ãx - b + Δx‖_2 = min_{‖Δ‖_F ≤ ρ} max_{y : ‖y‖_2 ≤ 1} y^T (Ãx - b + Δx)
  = max_{y : ‖y‖_2 ≤ 1} min_{‖Δ‖_F ≤ ρ} y^T (Ãx - b + Δx)
  = max_{y : ‖y‖_2 ≤ 1} { y^T (Ãx - b) - ρ‖x‖_2 ‖y‖_2 },  (2.3)

where the second equality follows from the equality between min max and max min for convex-concave functions [6, Corollary]. An explicit expression for (2.3) can be found as follows:

max_{y : ‖y‖_2 ≤ 1} { y^T (Ãx - b) - ρ‖x‖_2 ‖y‖_2 }
  = max_{0 ≤ α ≤ 1} max_{‖y‖_2 = α} { y^T (Ãx - b) - ρ‖x‖_2 α }
  = max_{0 ≤ α ≤ 1} α ( ‖Ãx - b‖_2 - ρ‖x‖_2 )
  = [ ‖Ãx - b‖_2 - ρ‖x‖_2 ]_+,

where for a scalar t, [t]_+ denotes its positive part:

[t]_+ = t if t > 0, and 0 if t ≤ 0.

Therefore, (O-LS) simplifies to

min_x [ ‖Ãx - b‖_2 - ρ‖x‖_2 ]_+,

which is not a convex optimization problem. It is interesting to note that, as was shown in [5], the solution of (R-LS) is of the form x̂ = (Ã^T Ã + αI)^{-1} Ã^T b with α ≥ 0, while in [4] it was shown, under the assumption that ρ is small enough, that the solution of (O-LS) is of the form x̂ = (Ã^T Ã - βI)^{-1} Ã^T b with β ≥ 0. Both α and β are solutions of certain one-dimensional secular equations. Therefore, the optimal solution of (R-LS) is a regularization of the least squares solution, while the optimal solution of (O-LS) has an opposite de-regularization effect.

We end this section by defining the optimistic counterpart for a more general optimization model in which equality constraints are also present and the uncertain parameters are not necessarily constraint-wise. Specifically, consider the general model

(G) min_x g(x; u)
    s.t. H(x; v_1) ≤ 0,
         K(x; v_2) = 0,
         x ∈ R^n,

where H : R^n → R^{m_1} and K : R^n → R^{m_2} are vector functions and g : R^n → R is a scalar function of n variables. The parameters u ∈ R^p, v_1 ∈ R^{q_1}, v_2 ∈ R^{q_2} are uncertain and only known to reside in compact sets U, V_1 and V_2, respectively. The optimistic counterpart of problem (G) is defined as

(O-G) min_x min_{u ∈ U} g(x; u)
      s.t. H(x; v_1) ≤ 0 for some v_1 ∈ V_1,
           K(x; v_2) = 0 for some v_2 ∈ V_2,
           x ∈ R^n.

Note that, as opposed to the robust counterpart, in the context of the optimistic counterpart it does make sense to deal with equality constraints.

3. The dual of robust linear programming problems

In this section we consider an LP of the following general form:

(LP) max_x c^T x
     s.t. a_i^T x ≤ b_i, i = 1, ..., m,
          x ≥ 0,  (3.1)

where c, a_i ∈ R^n. Assume now that for every i = 1, ..., m, the vector a_i is not fixed but rather known to reside in some uncertainty set U_i: a_i ∈ U_i, where U_i is a nonempty compact set. We assume that b and c are certain data vectors for the sake of exposition.
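Returning for a moment to Example 2.2: the worst-case and best-case values ‖Ãx - b‖_2 ± ρ‖x‖_2 are attained at the rank-one perturbations Δ = ±ρ r x^T / (‖r‖_2 ‖x‖_2), where r = Ãx - b, and no other feasible Δ can do better. A numerical sanity check of these closed forms (NumPy assumed; all data below is randomly generated for illustration, and ρ is deliberately chosen small enough that ‖r‖_2 > ρ‖x‖_2, so the positive-part operator is inactive):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))      # nominal matrix (made-up data)
b = rng.standard_normal(5)
x = rng.standard_normal(3)

r = A @ x - b                        # nominal residual
# Choose rho so that ||r|| > rho * ||x||; then the best case stays positive.
rho = 0.4 * np.linalg.norm(r) / np.linalg.norm(x)

# Extremal rank-one perturbations, aligned with / against the residual.
D_worst = rho * np.outer(r, x) / (np.linalg.norm(r) * np.linalg.norm(x))
D_best = -D_worst
assert np.linalg.norm(D_worst) <= rho + 1e-12        # Frobenius-feasible

worst = np.linalg.norm((A + D_worst) @ x - b)
best = np.linalg.norm((A + D_best) @ x - b)

# Closed forms from Example 2.2.
assert np.isclose(worst, np.linalg.norm(r) + rho * np.linalg.norm(x))
assert np.isclose(best, np.linalg.norm(r) - rho * np.linalg.norm(x))

# No feasible perturbation beats the extremal ones (triangle inequality).
for _ in range(200):
    D = rng.standard_normal((5, 3))
    D *= rho / np.linalg.norm(D)     # scale onto the Frobenius sphere
    val = np.linalg.norm((A + D) @ x - b)
    assert best - 1e-9 <= val <= worst + 1e-9
```

The two-sided bound in the loop is exactly the inequality ‖r‖ - ‖Δ‖_F ‖x‖ ≤ ‖(Ã + Δ)x - b‖ ≤ ‖r‖ + ‖Δ‖_F ‖x‖ underlying the closed forms.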
Indeed, a problem with uncertainty in the objective function and in the right-hand side of the constraints can be reduced to an equivalent one with uncertainty only in the left-hand side of the constraints.

Throughout, the letters D, R, O stand for dual, robust and optimistic, respectively. In particular, (R-LP) is the robust counterpart of (LP), (DR-LP) is the dual of (R-LP), (D-LP) is the dual of (LP), and (OD-LP) is the optimistic counterpart of (D-LP). Now consider (DR-LP), the dual of the robust counterpart of (LP). A very natural question is whether or not the dual of the robust counterpart is the same as the robust counterpart of the dual problem. The answer is certainly not. On the contrary, we will show in this section (Theorem 3.1) that regardless of the choice of the uncertainty set, problem (DR-LP), the dual of the robust counterpart of (LP), is the same as the optimistic counterpart of the dual problem of (LP), that is, problem (OD-LP). The following sketch illustrates this relation:

(LP) --robust--> (R-LP) --dual-------> (DR-LP)
(LP) --dual----> (D-LP) --optimistic-> (OD-LP)
                                       (DR-LP) = (OD-LP)

We use the following notation: for a compact nonempty set S, the support function of S at a point y is denoted by σ_S(y) = max{ y^T z : z ∈ S }.

Theorem 3.1. The dual of the robust counterpart of (LP) and the optimistic counterpart of the dual problem of (LP) are both given by the convex program

(DR-LP), (OD-LP)  min_y b^T y
                  s.t. g(y) ≥ 0, y ≥ 0,  (3.2)

where

g(y) = min_{x ≥ 0} { Σ_{i=1}^m y_i σ_{U_i}(x) - c^T x }.  (3.3)

Proof. The ith constraint of the robust counterpart of (3.1) is max{ a_i^T x : a_i ∈ U_i } ≤ b_i, which can also be written as σ_{U_i}(x) ≤ b_i, so that

(R-LP) max_x c^T x
       s.t. σ_{U_i}(x) ≤ b_i, i = 1, ..., m,
            x ≥ 0.

To construct the dual, let us write the Lagrangian:

L(x, y) = c^T x - Σ_{i=1}^m y_i ( σ_{U_i}(x) - b_i ).

By the homogeneity of the support function we have

max_{x ≥ 0} L(x, y) = b^T y if g(y) ≥ 0, and +∞ otherwise,

where g is given in (3.3). Therefore, the dual problem of (R-LP) is given by

(DR-LP) min_y b^T y
        s.t. g(y) ≥ 0, y ≥ 0,

where g is given in (3.3). Let us now consider the uncertain dual problem of (LP):

(D-LP) min_y b^T y
       s.t. Σ_{i=1}^m y_i a_i ≥ c, y ≥ 0.  (3.4)

A vector y ≥ 0 is an optimistic feasible solution of (D-LP) if and only if there exist a_i ∈ U_i such that Σ_{i=1}^m y_i a_i ≥ c.

The latter is satisfied if and only if

max_{a_i ∈ U_i} min_{z ≥ 0} z^T ( Σ_{i=1}^m y_i a_i - c ) ≥ 0.

Using the equality of min max and max min for convex-concave functions [6, Corollary], we can exchange the order of the min and the max, resulting in

min_{z ≥ 0} max_{a_i ∈ U_i} { Σ_{i=1}^m y_i a_i^T z - c^T z } ≥ 0,

which is the same as

g(y) = min_{z ≥ 0} { Σ_{i=1}^m y_i σ_{U_i}(z) - c^T z } ≥ 0.

Therefore, the optimistic counterpart of the dual problem (D-LP) coincides with (DR-LP), the dual of the robust counterpart of the primal problem.

Example 3.1. Suppose that the uncertainty sets U_i are ellipsoidal sets given by

U_i = { a_i = ã_i + v_i : ‖v_i‖_2 ≤ ρ }, i = 1, ..., m.

In this case

σ_{U_i}(x) = max_{z ∈ U_i} x^T z = max_{‖v_i‖_2 ≤ ρ} ( x^T ã_i + x^T v_i ) = ã_i^T x + ρ‖x‖_2.

Therefore,

g(y) = min_{x ≥ 0} { Σ_{i=1}^m y_i σ_{U_i}(x) - c^T x }
     = min_{x ≥ 0} { ( Σ_{i=1}^m y_i ã_i - c )^T x + ρ ( Σ_{i=1}^m y_i ) ‖x‖_2 }
     = 0 if ‖ Σ_{i=1}^m y_i ã_i - c ‖_2 ≤ ρ Σ_{i=1}^m y_i, and -∞ otherwise.

Plugging this into (3.2), we conclude that the dual of the robust counterpart of (LP), which is the same as the optimistic counterpart of the dual of (LP), is given by the conic quadratic problem

min_y b^T y
s.t. ‖ Σ_{i=1}^m y_i ã_i - c ‖_2 ≤ ρ Σ_{i=1}^m y_i,
     y ≥ 0.

Theorem 3.1 shows that val(DR-LP) = val(OD-LP). In addition, if (R-LP), the robust counterpart of (LP), is bounded above and satisfies a regularity condition such as Slater's condition, then strong duality holds [6] and val(R-LP) = val(DR-LP). From this we conclude that

val(R-LP) = val(OD-LP).  (3.5)

Namely, the value of the robust counterpart of (LP) ("primal worst") is equal to the value of the optimistic counterpart of the dual problem ("dual best"). To interpret this result, assume that problem (LP) models a standard production problem. In this context x_j, the decision variable, stands for the amount to be produced of item j; c_j is the profit from the sale of one unit of item j; a_{ij} (the jth component of the vector a_i) is the quantity of resource i needed to produce one unit of item j; and b_i is the available supply of resource i. The dual problem of (3.1) is

min_y { b^T y : Σ_{i=1}^m y_i a_i ≥ c, y ≥ 0 }.  (3.6)
A common interpretation of the dual is as follows. Suppose that a merchant wants to buy the resources of the manufacturer. For every i, his decision variable y_i stands for the price he offers to pay for one unit of resource i. Of course he wants to pay as little as possible, and thus his objective is to minimize b^T y. The set of constraints Σ_{i=1}^m a_{ij} y_i ≥ c_j indicates that the merchant has to offer competitive prices (so that the profit c_j the manufacturer can get from selling a unit of item j is no more than the value he can get from what the merchant is paying for the resources needed to produce a unit of item j). Now suppose that the vectors a_i are uncertain. The robust counterpart of the production problem (3.1) reflects, in a sense, a pessimistic manufacturer who wishes to maximize his profit under a worst-case scenario. The equality (3.5) states that what is worst for the factory is best for the merchant. Specifically, since the manufacturer underestimates the value of the resources, the merchant needs to be less competitive and can offer lower prices for the resources, thus reducing his total payment.

Remark 3.1. In Example 2.1 we showed that the optimistic counterpart of the LP (2.1) is nonconvex, and thus it cannot be equivalent to the dual of a robust LP (which is always a convex problem). How does this agree with the duality result in Theorem 3.1? The answer is that the LP in (2.1) is the dual of an LP with equality constraints,

max_y b^T y
s.t. Σ_{i=1}^m y_i a_i = c,
     y ≤ 0,

which is not of the general form (3.1), where the constraints are assumed to be inequalities and the uncertainties must be constraint-wise.

4. Primal worst equals dual best

Consider now the general model (P) given in (1.1). The robust counterpart of (P) is the problem (R-P) given in (1.4). The dual of (R-P) is given by

max_{λ ≥ 0} min_x { G(x) + Σ_{i=1}^m λ_i F_i(x) }.

Recalling the definitions of F_i and G (see (1.2) and (1.3)), the problem becomes

(DR-P) max_{λ ≥ 0} min_x max_{u ∈ U, v_i ∈ V_i} { g(x; u) + Σ_{i=1}^m λ_i f_i(x; v_i) }.

We will assume that the Slater constraint qualification is satisfied for problem (R-P) and that (R-P) is bounded below. Under these conditions it is well known that val(R-P) = val(DR-P) [6]. On the other hand, the dual of (P) is given by

(D-P) max_{λ ≥ 0} q(λ; u, v_i),

where

q(λ; u, v_i) = min_x { g(x; u) + Σ_{i=1}^m λ_i f_i(x; v_i) }.  (4.1)
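The inequality behind the result that follows, max min ≤ min max, is easy to observe numerically: on any finite grid, the best-case (max-min) value never exceeds the worst-case (min-max) value. A toy check in NumPy with a made-up payoff f(x, u) = (x - u)^2, which is convex in x but not concave in u, so the gap is typically strict:

```python
import numpy as np

# Weak minimax inequality on a finite grid:
#   max_u min_x f(x, u)  <=  min_x max_u f(x, u).
# f(x, u) = (x - u)**2 is convex in x but NOT concave in u,
# so the inequality is strict here (illustrative choice).
xs = np.linspace(-1.0, 1.0, 101)
us = np.linspace(-1.0, 1.0, 101)
f = (xs[:, None] - us[None, :]) ** 2   # f[i, j] = f(xs[i], us[j])

max_min = f.min(axis=0).max()   # best-case ("optimistic") value: 0
min_max = f.max(axis=1).min()   # worst-case ("robust") value: about 1

assert max_min <= min_max
```

With a payoff that is concave in u (e.g. linear), the two grid values coincide up to discretization error, mirroring the equality case of the theorem below.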

The optimistic counterpart of (D-P) is

max_{λ ≥ 0} max_{u ∈ U, v_i ∈ V_i} q(λ; u, v_i).

Plugging the expression (4.1) for q into the above problem, we arrive at the following formulation of the optimistic counterpart of the dual:

(OD-P) max_{λ ≥ 0} max_{u ∈ U, v_i ∈ V_i} min_x { g(x; u) + Σ_{i=1}^m λ_i f_i(x; v_i) }.

The LP case considered in Section 3 suggests that under some conditions the optimal values of (OD-P) and (DR-P) are equal. The following result shows that val(OD-P) is always less than or equal to val(DR-P), and that under suitable concavity assumptions equality holds.

Theorem 4.1. Consider the general convex problem (P) (problem (1.1)). Then

val(OD-P) ≤ val(DR-P).  (4.2)

If, in addition, the functions g, f_i are concave with respect to the uncertain parameters u, v_i, then the following equality holds:

val(OD-P) = val(DR-P).  (4.3)

Proof. Since min max is always greater than or equal to max min, we have

val(OD-P) = max_{λ ≥ 0} max_{u ∈ U, v_i ∈ V_i} min_x { g(x; u) + Σ_{i=1}^m λ_i f_i(x; v_i) }
          ≤ max_{λ ≥ 0} min_x max_{u ∈ U, v_i ∈ V_i} { g(x; u) + Σ_{i=1}^m λ_i f_i(x; v_i) } = val(DR-P).

If g, f_i are concave with respect to u, v_i (in addition to being convex with respect to x), and since U, V_i are convex compact sets, then by [6, Corollary] equality (4.3) holds.

Clearly, Theorem 4.1 generalizes the result obtained in Section 3. As was noted in the LP case, by the strong duality result for convex programming we know that val(R-P) = val(DR-P). We therefore conclude that if the functions are all convex with respect to x and concave with respect to the uncertain parameters u, v_i, then

val(R-P) = val(OD-P).  (4.4)

Loosely speaking, relation (4.4) states that optimizing under the worst-case scenario in the primal is the same as optimizing under the best-case scenario in the dual ("primal worst equals dual best"). Theorem 4.1 is not restricted to LP problems; another interesting example is that of a convex quadratically constrained quadratic programming (QCQP) problem.

Example 4.1.
Consider the QCQP problem

(QCQP) min_x x^T A_0 x + 2 b_0^T x + c_0
       s.t. x^T A_i x + 2 b_i^T x + c_i ≤ 0, i = 1, ..., m,
            x ∈ R^n.

For each i = 0, 1, ..., m, the matrix A_i is uncertain and resides in the uncertainty set

U_i = { Ã_i + Δ_i : Δ_i = Δ_i^T, ‖Δ_i‖_2 ≤ ρ_i },

where ‖S‖_2 is the spectral norm of a matrix S, i.e., ‖S‖_2 = sqrt(λ_max(S^T S)). We assume that the nominal matrices Ã_i are positive definite and that ρ_i < λ_min(Ã_i) for every i = 0, 1, ..., m. Under this condition the objective function and the constraint functions are strictly convex for every possible realization of the uncertain parameters. Therefore, the objective function and all the constraint functions are convex with respect to x and concave (in fact, linear) with respect to Δ_i. By Theorem 4.1 this implies that val(DR-P) = val(OD-P). Let us find explicit expressions for these problems and verify the result. Since for every i = 0, 1, ..., m

max_{A_i ∈ U_i} { x^T A_i x + 2 b_i^T x + c_i } = x^T ( Ã_i + ρ_i I ) x + 2 b_i^T x + c_i,

the robust counterpart of (QCQP) is given by

(R-QCQP) min_x x^T ( Ã_0 + ρ_0 I ) x + 2 b_0^T x + c_0
         s.t. x^T ( Ã_i + ρ_i I ) x + 2 b_i^T x + c_i ≤ 0, i = 1, ..., m,
              x ∈ R^n.

The dual problem of (R-QCQP) is

(DR-QCQP) max_{λ ≥ 0} - ( b_0 + Σ_{i=1}^m λ_i b_i )^T ( Ã_0 + ρ_0 I + Σ_{i=1}^m λ_i ( Ã_i + ρ_i I ) )^{-1} ( b_0 + Σ_{i=1}^m λ_i b_i ) + c_0 + Σ_{i=1}^m c_i λ_i.

Now consider the dual problem of (QCQP):

(D-QCQP) max_{λ ≥ 0} - ( b_0 + Σ_{i=1}^m λ_i b_i )^T ( A_0 + Σ_{i=1}^m λ_i A_i )^{-1} ( b_0 + Σ_{i=1}^m λ_i b_i ) + c_0 + Σ_{i=1}^m c_i λ_i.

The optimistic counterpart of (D-QCQP) is given by

max_{λ ≥ 0} max_{‖Δ_i‖_2 ≤ ρ_i} - ( b_0 + Σ_{i=1}^m λ_i b_i )^T ( Ã_0 + Δ_0 + Σ_{i=1}^m λ_i ( Ã_i + Δ_i ) )^{-1} ( b_0 + Σ_{i=1}^m λ_i b_i ) + c_0 + Σ_{i=1}^m c_i λ_i.  (4.5)

To solve the inner maximization, we will use the following property: if two positive definite matrices satisfy A ⪰ B, then A^{-1} ⪯ B^{-1}. Therefore, since Δ_i ⪯ ρ_i I for every i = 0, 1, ..., m, we conclude that

( b_0 + Σ_{i=1}^m λ_i b_i )^T ( Ã_0 + Δ_0 + Σ_{i=1}^m λ_i ( Ã_i + Δ_i ) )^{-1} ( b_0 + Σ_{i=1}^m λ_i b_i )
  ≥ ( b_0 + Σ_{i=1}^m λ_i b_i )^T ( Ã_0 + ρ_0 I + Σ_{i=1}^m λ_i ( Ã_i + ρ_i I ) )^{-1} ( b_0 + Σ_{i=1}^m λ_i b_i ).

Since this upper bound on the objective of (4.5) is attained at Δ_i = ρ_i I, we conclude that the solution of the inner maximization is Δ_i = ρ_i I, thus simplifying (4.5) to

(OD-QCQP) max_{λ ≥ 0} - ( b_0 + Σ_{i=1}^m λ_i b_i )^T ( Ã_0 + ρ_0 I + Σ_{i=1}^m λ_i ( Ã_i + ρ_i I ) )^{-1} ( b_0 + Σ_{i=1}^m λ_i b_i ) + c_0 + Σ_{i=1}^m c_i λ_i,

which is the same as (DR-QCQP). Also note that by using the Schur complement, problem (DR-QCQP) can be cast as the linear semidefinite program

max_{λ ≥ 0, t} t + c_0 + Σ_{i=1}^m c_i λ_i
s.t. [ Ã_0 + ρ_0 I + Σ_{i=1}^m λ_i ( Ã_i + ρ_i I )    b_0 + Σ_{i=1}^m λ_i b_i ]
     [ ( b_0 + Σ_{i=1}^m λ_i b_i )^T                  -t                      ] ⪰ 0.

The next and last example illustrates that when the concavity assumption of the functions with respect to the uncertain parameters fails, strict inequality may occur in (4.2).

Example 4.2. Consider the least squares problem (LS): min_x ‖Ax - b‖_2. Assume that A is uncertain and is known to reside in a ball: A ∈ U = { A_0 + Δ : ‖Δ‖_F ≤ ρ }. The robust counterpart of (LS) is

min_x max_{‖Δ‖_F ≤ ρ} ‖ ( A_0 + Δ ) x - b ‖_2.

Solving the inner maximization problem, the above problem reduces to

(R-LS) min_x ‖ A_0 x - b ‖_2 + ρ ‖x‖_2.

The dual problem of (R-LS) is

(DR-LS) max_λ b^T λ
        s.t. ‖λ‖_2 ≤ 1, ‖ A_0^T λ ‖_2 ≤ ρ.

In order to write a dual problem of (LS), let us first rewrite it as

(LS') min_{x, y} { ‖y‖_2 : y = Ax - b }.

The dual problem of (LS') is

(D-LS) max_λ b^T λ
       s.t. ‖λ‖_2 ≤ 1, A^T λ = 0.

To find the optimistic counterpart of the dual problem, we need to write explicitly (i.e., in terms of λ) the constraint

A^T λ = 0 for some A ∈ U.

The above constraint is the same as ‖ A_0^T λ ‖_2 ≤ ρ ‖λ‖_2, and thus the optimistic counterpart of (D-LS) is

(OD-LS) max_λ b^T λ
        s.t. ‖λ‖_2 ≤ 1, ‖ A_0^T λ ‖_2 ≤ ρ ‖λ‖_2.

Since ‖λ‖_2 ≤ 1, the feasible set of (OD-LS) is contained in the feasible set of (DR-LS), thus verifying the inequality val(OD-LS) ≤ val(DR-LS). Strict inequality may occur. For example, let us take m = n = 1 and A_0 = 2, b = 2, ρ = 1. Then (DR-LS) is just the problem max{ 2λ : |λ| ≤ 1, |2λ| ≤ 1 }, whose optimal solution λ = 0.5 has optimal value 1. On the other hand, (OD-LS) is

(OD-LS) max{ 2λ : |λ| ≤ 1, |2λ| ≤ |λ| }.

The only feasible point of (OD-LS) is λ = 0. Thus, in this case the optimal value of (OD-LS) is zero and is strictly smaller than the optimal value of (DR-LS).

References

[1] A. Ben-Tal, A. Nemirovski, Robust convex optimization, Math. Oper. Res. 23 (4) (1998).
[2] A. Ben-Tal, A. Nemirovski, Robust optimization - methodology and applications, Math. Program. Ser. B 92 (3) (2002), ISMP 2000, Part 2 (Atlanta, GA).
[3] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
[4] S. Chandrasekaran, G.H. Golub, M. Gu, A.H. Sayed, Efficient algorithms for least squares type problems with bounded uncertainties, in: Recent Advances in Total Least Squares Techniques and Errors-in-Variables Modeling (Leuven, 1996), SIAM, Philadelphia, PA, 1997.
[5] L. El Ghaoui, H. Lebret, Robust solutions to least-squares problems with uncertain data, SIAM J. Matrix Anal. Appl. 18 (4) (1997).
[6] R.T. Rockafellar, Convex Analysis, Princeton Univ. Press, Princeton, NJ, 1970.


More information

Convex Optimization. 4. Convex Optimization Problems. Prof. Ying Cui. Department of Electrical Engineering Shanghai Jiao Tong University

Convex Optimization. 4. Convex Optimization Problems. Prof. Ying Cui. Department of Electrical Engineering Shanghai Jiao Tong University Conve Optimization 4. Conve Optimization Problems Prof. Ying Cui Department of Electrical Engineering Shanghai Jiao Tong University 2017 Autumn Semester SJTU Ying Cui 1 / 58 Outline Optimization problems

More information

Review of Optimization Basics

Review of Optimization Basics Review of Optimization Basics. Introduction Electricity markets throughout the US are said to have a two-settlement structure. The reason for this is that the structure includes two different markets:

More information

Lecture 13: Duality Uses and Correspondences

Lecture 13: Duality Uses and Correspondences 10-725/36-725: Conve Optimization Fall 2016 Lectre 13: Dality Uses and Correspondences Lectrer: Ryan Tibshirani Scribes: Yichong X, Yany Liang, Yanning Li Note: LaTeX template cortesy of UC Berkeley EECS

More information

A Note on Robustness of the Min-Max Solution to Multiobjective Linear Programs

A Note on Robustness of the Min-Max Solution to Multiobjective Linear Programs A Note on Robustness of the Min-Max Solution to Multiobjective Linear Programs Erin K. Doolittle, Karyn Muir, and Margaret M. Wiecek Department of Mathematical Sciences Clemson University Clemson, SC January

More information

Lecture 8. Strong Duality Results. September 22, 2008

Lecture 8. Strong Duality Results. September 22, 2008 Strong Duality Results September 22, 2008 Outline Lecture 8 Slater Condition and its Variations Convex Objective with Linear Inequality Constraints Quadratic Objective over Quadratic Constraints Representation

More information

Using quadratic convex reformulation to tighten the convex relaxation of a quadratic program with complementarity constraints

Using quadratic convex reformulation to tighten the convex relaxation of a quadratic program with complementarity constraints Noname manuscript No. (will be inserted by the editor) Using quadratic conve reformulation to tighten the conve relaation of a quadratic program with complementarity constraints Lijie Bai John E. Mitchell

More information

Lecture: Cone programming. Approximating the Lorentz cone.

Lecture: Cone programming. Approximating the Lorentz cone. Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is

More information

EE/AA 578, Univ of Washington, Fall Duality

EE/AA 578, Univ of Washington, Fall Duality 7. Duality EE/AA 578, Univ of Washington, Fall 2016 Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

ROBUST CONVEX OPTIMIZATION

ROBUST CONVEX OPTIMIZATION ROBUST CONVEX OPTIMIZATION A. BEN-TAL AND A. NEMIROVSKI We study convex optimization problems for which the data is not specified exactly and it is only known to belong to a given uncertainty set U, yet

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Additional Homework Problems

Additional Homework Problems Additional Homework Problems Robert M. Freund April, 2004 2004 Massachusetts Institute of Technology. 1 2 1 Exercises 1. Let IR n + denote the nonnegative orthant, namely IR + n = {x IR n x j ( ) 0,j =1,...,n}.

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

Characterizing Robust Solution Sets of Convex Programs under Data Uncertainty

Characterizing Robust Solution Sets of Convex Programs under Data Uncertainty Characterizing Robust Solution Sets of Convex Programs under Data Uncertainty V. Jeyakumar, G. M. Lee and G. Li Communicated by Sándor Zoltán Németh Abstract This paper deals with convex optimization problems

More information

Adjustable Robust Solutions of Uncertain Linear Programs 1

Adjustable Robust Solutions of Uncertain Linear Programs 1 Adjustable Robust Solutions of Uncertain Linear Programs 1 A. Ben-Tal, A. Goryashko, E. Guslitzer, A. Nemirovski Minerva Optimization Center, Faculty of Industrial Engineering and Management, Technion,

More information

Lecture 23: November 19

Lecture 23: November 19 10-725/36-725: Conve Optimization Fall 2018 Lecturer: Ryan Tibshirani Lecture 23: November 19 Scribes: Charvi Rastogi, George Stoica, Shuo Li Charvi Rastogi: 23.1-23.4.2, George Stoica: 23.4.3-23.8, Shuo

More information

Robust goal programming

Robust goal programming Control and Cybernetics vol. 33 (2004) No. 3 Robust goal programming by Dorota Kuchta Institute of Industrial Engineering Wroclaw University of Technology Smoluchowskiego 25, 50-371 Wroc law, Poland Abstract:

More information

MODELS FOR ROBUST ESTIMATION AND IDENTIFICATION

MODELS FOR ROBUST ESTIMATION AND IDENTIFICATION MODELS FOR ROBUST ESTIMATION AND IDENTIFICATION S Chandrasekaran 1 K E Schubert 1 Department of Department of Electrical and Computer Engineering Computer Science University of California California State

More information

Convex Optimization Problems. Prof. Daniel P. Palomar

Convex Optimization Problems. Prof. Daniel P. Palomar Conve Optimization Problems Prof. Daniel P. Palomar The Hong Kong University of Science and Technology (HKUST) MAFS6010R- Portfolio Optimization with R MSc in Financial Mathematics Fall 2018-19, HKUST,

More information

1. Introduction. Consider the deterministic multi-objective linear semi-infinite program of the form (P ) V-min c (1.1)

1. Introduction. Consider the deterministic multi-objective linear semi-infinite program of the form (P ) V-min c (1.1) ROBUST SOLUTIONS OF MULTI-OBJECTIVE LINEAR SEMI-INFINITE PROGRAMS UNDER CONSTRAINT DATA UNCERTAINTY M.A. GOBERNA, V. JEYAKUMAR, G. LI, AND J. VICENTE-PÉREZ Abstract. The multi-objective optimization model

More information

LIGHT ROBUSTNESS. Matteo Fischetti, Michele Monaci. DEI, University of Padova. 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007

LIGHT ROBUSTNESS. Matteo Fischetti, Michele Monaci. DEI, University of Padova. 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007 LIGHT ROBUSTNESS Matteo Fischetti, Michele Monaci DEI, University of Padova 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007 Work supported by the Future and Emerging Technologies

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming

E5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

On deterministic reformulations of distributionally robust joint chance constrained optimization problems

On deterministic reformulations of distributionally robust joint chance constrained optimization problems On deterministic reformulations of distributionally robust joint chance constrained optimization problems Weijun Xie and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology,

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course otes for EE7C (Spring 018): Conve Optimization and Approimation Instructor: Moritz Hardt Email: hardt+ee7c@berkeley.edu Graduate Instructor: Ma Simchowitz Email: msimchow+ee7c@berkeley.edu October

More information

Duality. Geoff Gordon & Ryan Tibshirani Optimization /

Duality. Geoff Gordon & Ryan Tibshirani Optimization / Duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Duality in linear programs Suppose we want to find lower bound on the optimal value in our convex problem, B min x C f(x) E.g., consider

More information

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3

MIT Algebraic techniques and semidefinite optimization February 14, Lecture 3 MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications

More information

UNCONSTRAINED OPTIMIZATION PAUL SCHRIMPF OCTOBER 24, 2013

UNCONSTRAINED OPTIMIZATION PAUL SCHRIMPF OCTOBER 24, 2013 PAUL SCHRIMPF OCTOBER 24, 213 UNIVERSITY OF BRITISH COLUMBIA ECONOMICS 26 Today s lecture is about unconstrained optimization. If you re following along in the syllabus, you ll notice that we ve skipped

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

Lecture 4: September 12

Lecture 4: September 12 10-725/36-725: Conve Optimization Fall 2016 Lecture 4: September 12 Lecturer: Ryan Tibshirani Scribes: Jay Hennig, Yifeng Tao, Sriram Vasudevan Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information

Sensitivity Analysis and Duality

Sensitivity Analysis and Duality Sensitivity Analysis and Duality Part II Duality Based on Chapter 6 Introduction to Mathematical Programming: Operations Research, Volume 1 4th edition, by Wayne L. Winston and Munirpallam Venkataramanan

More information

Robust Fisher Discriminant Analysis

Robust Fisher Discriminant Analysis Robust Fisher Discriminant Analysis Seung-Jean Kim Alessandro Magnani Stephen P. Boyd Information Systems Laboratory Electrical Engineering Department, Stanford University Stanford, CA 94305-9510 sjkim@stanford.edu

More information

Economics 205 Exercises

Economics 205 Exercises Economics 05 Eercises Prof. Watson, Fall 006 (Includes eaminations through Fall 003) Part 1: Basic Analysis 1. Using ε and δ, write in formal terms the meaning of lim a f() = c, where f : R R.. Write the

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1

More information

Minimax Problems. Daniel P. Palomar. Hong Kong University of Science and Technolgy (HKUST)

Minimax Problems. Daniel P. Palomar. Hong Kong University of Science and Technolgy (HKUST) Mini Problems Daniel P. Palomar Hong Kong University of Science and Technolgy (HKUST) ELEC547 - Convex Optimization Fall 2009-10, HKUST, Hong Kong Outline of Lecture Introduction Matrix games Bilinear

More information

Learning the Kernel Matrix with Semidefinite Programming

Learning the Kernel Matrix with Semidefinite Programming Journal of Machine Learning Research 5 (2004) 27-72 Submitted 10/02; Revised 8/03; Published 1/04 Learning the Kernel Matrix with Semidefinite Programg Gert R.G. Lanckriet Department of Electrical Engineering

More information

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as

Geometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.

More information

Lagrangian Duality for Dummies

Lagrangian Duality for Dummies Lagrangian Duality for Dummies David Knowles November 13, 2010 We want to solve the following optimisation problem: f 0 () (1) such that f i () 0 i 1,..., m (2) For now we do not need to assume conveity.

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Convex Optimization in Classification Problems

Convex Optimization in Classification Problems New Trends in Optimization and Computational Algorithms December 9 13, 2001 Convex Optimization in Classification Problems Laurent El Ghaoui Department of EECS, UC Berkeley elghaoui@eecs.berkeley.edu 1

More information

Robust Portfolio Selection Based on a Multi-stage Scenario Tree

Robust Portfolio Selection Based on a Multi-stage Scenario Tree Robust Portfolio Selection Based on a Multi-stage Scenario Tree Ruijun Shen Shuzhong Zhang February 2006; revised November 2006 Abstract The aim of this paper is to apply the concept of robust optimization

More information

Keywords. quadratic programming, nonconvex optimization, strong duality, quadratic mappings. AMS subject classifications. 90C20, 90C26, 90C46

Keywords. quadratic programming, nonconvex optimization, strong duality, quadratic mappings. AMS subject classifications. 90C20, 90C26, 90C46 STRONG DUALITY IN NONCONVEX QUADRATIC OPTIMIZATION WITH TWO QUADRATIC CONSTRAINTS AMIR BECK AND YONINA C. ELDAR Abstract. We consider the problem of minimizing an indefinite quadratic function subject

More information

How to Take the Dual of a Linear Program

How to Take the Dual of a Linear Program How to Take the Dual of a Linear Program Sébastien Lahaie January 12, 2015 This is a revised version of notes I wrote several years ago on taking the dual of a linear program (LP), with some bug and typo

More information

Introduction to Alternating Direction Method of Multipliers

Introduction to Alternating Direction Method of Multipliers Introduction to Alternating Direction Method of Multipliers Yale Chang Machine Learning Group Meeting September 29, 2016 Yale Chang (Machine Learning Group Meeting) Introduction to Alternating Direction

More information

where u is the decision-maker s payoff function over her actions and S is the set of her feasible actions.

where u is the decision-maker s payoff function over her actions and S is the set of her feasible actions. Seminars on Mathematics for Economics and Finance Topic 3: Optimization - interior optima 1 Session: 11-12 Aug 2015 (Thu/Fri) 10:00am 1:00pm I. Optimization: introduction Decision-makers (e.g. consumers,

More information

Robust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty

Robust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty EURO Journal on Computational Optimization manuscript No. (will be inserted by the editor) Robust Combinatorial Optimization under Budgeted-Ellipsoidal Uncertainty Jannis Kurtz Received: date / Accepted:

More information

Lecture 16: October 22

Lecture 16: October 22 0-725/36-725: Conve Optimization Fall 208 Lecturer: Ryan Tibshirani Lecture 6: October 22 Scribes: Nic Dalmasso, Alan Mishler, Benja LeRoy Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

10-725/36-725: Convex Optimization Spring Lecture 21: April 6

10-725/36-725: Convex Optimization Spring Lecture 21: April 6 10-725/36-725: Conve Optimization Spring 2015 Lecturer: Ryan Tibshirani Lecture 21: April 6 Scribes: Chiqun Zhang, Hanqi Cheng, Waleed Ammar Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

The Value of Adaptability

The Value of Adaptability The Value of Adaptability Dimitris Bertsimas Constantine Caramanis September 30, 2005 Abstract We consider linear optimization problems with deterministic parameter uncertainty. We consider a departure

More information

minimize x x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x 2 u 2, 5x 1 +76x 2 1,

minimize x x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x 2 u 2, 5x 1 +76x 2 1, 4 Duality 4.1 Numerical perturbation analysis example. Consider the quadratic program with variables x 1, x 2, and parameters u 1, u 2. minimize x 2 1 +2x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. 35, No., May 010, pp. 84 305 issn 0364-765X eissn 156-5471 10 350 084 informs doi 10.187/moor.1090.0440 010 INFORMS On the Power of Robust Solutions in Two-Stage

More information

Computation of the Joint Spectral Radius with Optimization Techniques

Computation of the Joint Spectral Radius with Optimization Techniques Computation of the Joint Spectral Radius with Optimization Techniques Amir Ali Ahmadi Goldstine Fellow, IBM Watson Research Center Joint work with: Raphaël Jungers (UC Louvain) Pablo A. Parrilo (MIT) R.

More information

Pessimistic Bi-Level Optimization

Pessimistic Bi-Level Optimization Pessimistic Bi-Level Optimization Wolfram Wiesemann 1, Angelos Tsoukalas,2, Polyeni-Margarita Kleniati 3, and Berç Rustem 1 1 Department of Computing, Imperial College London, UK 2 Department of Mechanical

More information

II. Analysis of Linear Programming Solutions

II. Analysis of Linear Programming Solutions Optimization Methods Draft of August 26, 2005 II. Analysis of Linear Programming Solutions Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston, Illinois

More information

Computational Optimization. Constrained Optimization Part 2

Computational Optimization. Constrained Optimization Part 2 Computational Optimization Constrained Optimization Part Optimality Conditions Unconstrained Case X* is global min Conve f X* is local min SOSC f ( *) = SONC Easiest Problem Linear equality constraints

More information

Convexity Properties Associated with Nonconvex Quadratic Matrix Functions and Applications to Quadratic Programming

Convexity Properties Associated with Nonconvex Quadratic Matrix Functions and Applications to Quadratic Programming Convexity Properties Associated with Nonconvex Quadratic Matrix Functions and Applications to Quadratic Programming Amir Beck March 26,2008 Abstract We establish several convexity results which are concerned

More information

Lecture 26: April 22nd

Lecture 26: April 22nd 10-725/36-725: Conve Optimization Spring 2015 Lecture 26: April 22nd Lecturer: Ryan Tibshirani Scribes: Eric Wong, Jerzy Wieczorek, Pengcheng Zhou Note: LaTeX template courtesy of UC Berkeley EECS dept.

More information

Robust combinatorial optimization with variable budgeted uncertainty

Robust combinatorial optimization with variable budgeted uncertainty Noname manuscript No. (will be inserted by the editor) Robust combinatorial optimization with variable budgeted uncertainty Michael Poss Received: date / Accepted: date Abstract We introduce a new model

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

Learning the Kernel Matrix with Semi-Definite Programming

Learning the Kernel Matrix with Semi-Definite Programming Learning the Kernel Matrix with Semi-Definite Programg Gert R.G. Lanckriet gert@cs.berkeley.edu Department of Electrical Engineering and Computer Science University of California, Berkeley, CA 94720, USA

More information

NEW ROBUST UNSUPERVISED SUPPORT VECTOR MACHINES

NEW ROBUST UNSUPERVISED SUPPORT VECTOR MACHINES J Syst Sci Complex () 24: 466 476 NEW ROBUST UNSUPERVISED SUPPORT VECTOR MACHINES Kun ZHAO Mingyu ZHANG Naiyang DENG DOI:.7/s424--82-8 Received: March 8 / Revised: 5 February 9 c The Editorial Office of

More information

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection Ruken Düzgün Aurélie Thiele July 2012 Abstract This paper describes a multi-range robust optimization approach

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints

On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints On the Adaptivity Gap in Two-Stage Robust Linear Optimization under Uncertain Constraints Pranjal Awasthi Vineet Goyal Brian Y. Lu July 15, 2015 Abstract In this paper, we study the performance of static

More information

Pareto Efficiency in Robust Optimization

Pareto Efficiency in Robust Optimization Pareto Efficiency in Robust Optimization Dan Iancu Graduate School of Business Stanford University joint work with Nikolaos Trichakis (HBS) 1/26 Classical Robust Optimization Typical linear optimization

More information

On State Estimation with Bad Data Detection

On State Estimation with Bad Data Detection On State Estimation with Bad Data Detection Weiyu Xu, Meng Wang, and Ao Tang School of ECE, Cornell University, Ithaca, NY 4853 Abstract We consider the problem of state estimation through observations

More information

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs

A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs Raphael Louca Eilyan Bitar Abstract Robust semidefinite programs are NP-hard in general In contrast, robust linear programs admit

More information