Robust Conic Optimization

Dimitris Bertsimas* and Melvyn Sim†

March 2004

Abstract

In earlier proposals, the robust counterpart of conic optimization problems exhibits a lateral increase in complexity, i.e., robust linear programming problems (LPs) become second order cone problems (SOCPs), robust SOCPs become semidefinite programming problems (SDPs), and robust SDPs become NP-hard. We propose a relaxed robust counterpart for general conic optimization problems that (a) preserves the computational tractability of the nominal problem; specifically, the robust conic optimization problem retains its original structure, i.e., robust LPs remain LPs, robust SOCPs remain SOCPs and robust SDPs remain SDPs, and moreover, when the data entries are independently distributed, the size of the proposed robust problem, especially under the $l_2$ norm, is practically the same as that of the nominal problem; and (b) allows us to provide a guarantee on the probability that the robust solution is feasible when the uncertain coefficients obey independent and identically distributed normal distributions.

*Boeing Professor of Operations Research, Sloan School of Management and Operations Research Center, Massachusetts Institute of Technology, E53-363, Cambridge, MA 02139, dbertsim@mit.edu. The research of the author was partially supported by the Singapore-MIT alliance.

†NUS Business School, National University of Singapore, dscsimm@nus.edu.sg.

1 Introduction

The general optimization problem under parameter uncertainty is as follows:

$$\begin{array}{rl} \max & f_0(x, \tilde D_0) \\ \text{s.t.} & f_i(x, \tilde D_i) \ge 0, \quad i \in I, \\ & x \in X, \end{array} \qquad (1)$$

where $f_i(x, \tilde D_i)$, $i \in \{0\} \cup I$, are given functions, $X$ is a given set and $\tilde D_i$, $i \in \{0\} \cup I$, is the vector of uncertain coefficients. We define the nominal problem to be Problem (1) when the uncertain coefficients $\tilde D_i$ take values equal to their expected values $D_i^0$.

In order to address parameter uncertainty in Problem (1), Ben-Tal and Nemirovski [1, 3] and independently El Ghaoui et al. [11, 12] propose to solve the following robust optimization problem:

$$\begin{array}{rl} \max & \min_{D_0 \in \mathcal U_0} f_0(x, D_0) \\ \text{s.t.} & \min_{D_i \in \mathcal U_i} f_i(x, D_i) \ge 0, \quad i \in I, \\ & x \in X, \end{array} \qquad (2)$$

where $\mathcal U_i$, $i \in \{0\} \cup I$, are given uncertainty sets. The motivation for solving Problem (2) is to find a solution $x \in X$ that "immunizes" Problem (1) against parameter uncertainty. By selecting appropriate uncertainty sets $\mathcal U_i$, we can address the tradeoff between robustness and optimality. In designing such an approach, two criteria are important in our view:

(a) Preserving the computational tractability, both theoretically and, most importantly, practically, of the nominal problem. From a theoretical perspective it is desirable that if the nominal problem is solvable in polynomial time, then the robust problem is also polynomially solvable. More specifically, it is desirable that robust conic optimization problems retain their original structure, i.e., robust linear programming problems (LPs) remain LPs, robust second order cone problems (SOCPs) remain SOCPs and robust semidefinite programming problems (SDPs) remain SDPs.

(b) Being able to find a guarantee on the probability that the robust solution is feasible, when the uncertain coefficients obey some natural probability distributions. This is important, since from these guarantees we can select the parameters that affect the uncertainty sets $\mathcal U_i$, which allows us to control the tradeoff between robustness and optimality.

Let us examine whether the state of the art in robust optimization has the two properties mentioned above:

1. Linear Programming: An uncertain LP constraint is of the form $\tilde a'x \le \tilde b$, where $\tilde a$ and $\tilde b$ are subject to uncertainty. When the corresponding uncertainty set $\mathcal U$ is a polyhedron, the robust counterpart is also an LP (see Ben-Tal and Nemirovski [3, 4] and Bertsimas and Sim [8, 9]). When $\mathcal U$ is ellipsoidal, the robust counterpart becomes an SOCP. For linear programming, probabilistic guarantees for feasibility are available ([3, 4] and [8, 9]) under reasonable probabilistic assumptions on the data variation.

2. Quadratically Constrained Quadratic Programming (QCQP): An uncertain QCQP constraint is of the form $\|\tilde A x\|_2^2 + \tilde b'x + \tilde c \le 0$, where $\tilde A$, $\tilde b$ and $\tilde c$ are subject to data uncertainty. The robust counterpart is an SDP if the uncertainty set is a simple ellipsoid, and NP-hard if the set is polyhedral (Ben-Tal and Nemirovski [1, 3]). To the best of our knowledge, there are no available probabilistic bounds.

3. Second Order Cone Programming (SOCP): An uncertain SOCP constraint is of the form $\|\tilde A x + \tilde b\|_2 \le \tilde c'x + \tilde d$, where $\tilde A$, $\tilde b$, $\tilde c$ and $\tilde d$ are subject to data uncertainty. The robust counterpart is an SDP if $\tilde A$, $\tilde b$ belong to an ellipsoidal uncertainty set $\mathcal U_1$ and $\tilde c$, $\tilde d$ belong to another ellipsoidal set $\mathcal U_2$. The problem is NP-hard, however, if $\tilde A$, $\tilde b$, $\tilde c$, $\tilde d$ vary together in a common ellipsoidal set. To the best of our knowledge, there are no available probabilistic bounds.

4. Semidefinite Programming (SDP): An uncertain SDP constraint is of the form $\sum_{j=1}^n \tilde A_j x_j \succeq \tilde B$, where $\tilde A_1, \ldots, \tilde A_n$ and $\tilde B$ are subject to data uncertainty. The robust counterpart is NP-hard for ellipsoidal uncertainty sets, and there are no available probabilistic bounds.

5. Conic Programming: An uncertain conic programming constraint is of the form $\sum_{j=1}^n \tilde A_j x_j \succeq_K \tilde B$, where $\tilde A_1, \ldots, \tilde A_n$ and $\tilde B$ are subject to data uncertainty, and the cone $K$ is closed, pointed and has a nonempty interior. To the best of our knowledge, there are no results available regarding tractability and probabilistic guarantees in this case.

Our goal in this paper is to address (a) and (b) above for robust conic optimization problems. Specifically, we propose a new robust counterpart of Problem (1) that has two properties: (a) it inherits the character of the nominal problem; for example, robust SOCPs remain SOCPs and robust SDPs remain SDPs; (b) under reasonable probabilistic assumptions on the data variation, we establish probabilistic guarantees for feasibility that lead to explicit ways of selecting the parameters that control robustness.

The structure of the paper is as follows. In Section 2, we describe the proposed robust model, and in Section 3, we show that the robust model inherits the character of the nominal problem for LPs,

QCQPs, SOCPs and SDPs. In Section 4, we prove probabilistic guarantees for feasibility for these classes of problems. In Section 5, we show tractability and give explicit probabilistic bounds for general conic problems. Section 6 concludes the paper.

2 The Robust Model

In this section, we outline the ingredients of the proposed framework for robust conic optimization.

2.1 Model for parameter uncertainty

The model of data uncertainty we consider is

$$\tilde D = D^0 + \sum_{j \in N} \Delta D^j \tilde z_j, \qquad (3)$$

where $D^0$ is the nominal value of the data, $\Delta D^j$, $j \in N$, is a direction of data perturbation, and the $\tilde z_j$, $j \in N$, are independent and identically distributed random variables with mean equal to zero, so that $E[\tilde D] = D^0$. The cardinality of $N$ may be small, modeling situations involving a small collection of primitive independent uncertainties (for example, a factor model in a finance context), or large, potentially as large as the number of entries in the data. In the former case, the elements of $\tilde D$ are strongly dependent, while in the latter case the elements of $\tilde D$ are weakly dependent or even independent (when $|N|$ is equal to the number of entries in the data). The support of $\tilde z_j$, $j \in N$, can be unbounded or bounded. Ben-Tal and Nemirovski [4] and Bertsimas and Sim [8] have considered the case in which $|N|$ is equal to the number of entries in the data.

2.2 Uncertainty sets and related norms

In the robust optimization framework of (2), we consider the uncertainty set

$$\mathcal U = \left\{ D \;\middle|\; \exists u \in \Re^{|N|}:\ D = D^0 + \sum_{j\in N} \Delta D^j u_j,\ \|u\| \le \Omega \right\}, \qquad (4)$$

where $\Omega$ is a parameter controlling the tradeoff between robustness and optimality (robustness increases as $\Omega$ increases). We restrict the vector norm $\|\cdot\|$ we consider by imposing the condition

$$\|u\| = \|u^+\|, \qquad (5)$$

where $u^+_j = |u_j|$ for all $j \in N$. The following norms commonly used in robust optimization satisfy Eq. (5):

- The $l_k$ norms, $k = 1, 2, \infty$ (see [1, 4, 14]).
- The $l_2 \cap l_\infty$ norm: $\max\left\{\|u\|_2, \frac{1}{\Delta}\|u\|_\infty\right\}$, $\Delta > 0$ (see [4]). This norm is used in modeling bounded and symmetrically distributed random data.
- The $l_1 \cap l_\infty$ norm: $\max\left\{\frac{1}{\Gamma}\|u\|_1, \|u\|_\infty\right\}$, $\Gamma > 0$ (see [8, 7]). Note that this norm is equal to $l_\infty$ if $\Gamma = |N|$, and to $l_1$ if $\Gamma = 1$. This norm is used in modeling bounded and symmetrically distributed random data, and has the additional property that the robust counterpart of an LP is still an LP (Bertsimas et al. [7]).

Given a norm $\|\cdot\|$, we consider the dual norm $\|\cdot\|^*$ defined as

$$\|s\|^* = \max_{\|x\| \le 1} s'x.$$

We next show some basic properties of norms satisfying Eq. (5), which we will subsequently use in our development.

Proposition 1. If the norm $\|\cdot\|$ satisfies Eq. (5), then we have:
(a) $\|w\|^* = \|w^+\|^*$.
(b) For all $v, w$ such that $v^+ \le w^+$, $\|v\|^* \le \|w\|^*$.
(c) For all $v, w$ such that $v^+ \le w^+$, $\|v\| \le \|w\|$.

Proof. (a) Let $y \in \arg\max_{\|x\|\le 1} w'x$, and for every $j \in N$ let $z_j = |y_j|$ if $w_j \ge 0$ and $z_j = -|y_j|$ otherwise. Clearly, $w'z = (w^+)'y^+ \ge w'y$. Since $\|z\| = \|z^+\| = \|y^+\| = \|y\| \le 1$, and from the optimality of $y$, we have $w'z \le w'y$, leading to $w'z = (w^+)'y^+ = w'y$. Hence,

$$\|w\|^* = \max_{\|x\|\le 1} w'x = \max_{\|x\|\le 1} (w^+)'x^+ = \max_{\|x\|\le 1} (w^+)'x = \|w^+\|^*.$$

(b) Note that

$$\|w\|^* = \max_{\|x\|\le 1} (w^+)'x^+ = \max_{\|x\|\le 1,\ x \ge 0} (w^+)'x.$$

If $v^+ \le w^+$, then

$$\|v\|^* = \max_{\|x\|\le 1,\ x \ge 0} (v^+)'x \le \max_{\|x\|\le 1,\ x \ge 0} (w^+)'x = \|w\|^*.$$

(c) We apply part (b) to the norm $\|\cdot\|^*$. From the self-duality property of norms, $(\|\cdot\|^*)^* = \|\cdot\|$, we obtain part (c). □
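To make the norm condition (5) and the dual norm concrete, here is a small numerical sketch (Python with numpy and cvxpy; the data, $\Gamma$, and the closed-form check are illustrative assumptions, not part of the paper). It evaluates the $l_1 \cap l_\infty$ norm, checks $\|u\| = \|u^+\|$, and computes the dual norm $\|t\|^* = \max_{\|x\| \le 1} t'x$ directly; for integer $\Gamma$ this equals the sum of the $\Gamma$ largest entries of $|t|$.

```python
# A small numerical sketch (illustrative, not from the paper): the l1-intersect-linf
# norm ||u|| = max{(1/Gamma)||u||_1, ||u||_inf}, the symmetry condition (5), and its
# dual norm ||t||* = max{t'x : ||x|| <= 1}.  For integer Gamma, the dual norm is the
# sum of the Gamma largest entries of |t|.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, Gamma = 8, 3
norm = lambda u: max(np.abs(u).sum() / Gamma, np.abs(u).max())

u = rng.standard_normal(N)
assert np.isclose(norm(u), norm(np.abs(u)))       # condition (5): ||u|| = ||u+||

t = rng.standard_normal(N)
x = cp.Variable(N)
# the unit ball of this norm is {x : ||x||_1 <= Gamma, ||x||_inf <= 1}
dual = cp.Problem(cp.Maximize(t @ x),
                  [cp.norm1(x) <= Gamma, cp.norm_inf(x) <= 1]).solve()
print(dual, np.sort(np.abs(t))[::-1][:Gamma].sum())   # the two values agree
```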

2.3 The class of functions f(x, D)

We impose the following restrictions on the class of functions $f(x, D)$ in Problem (2) (we drop the index $i$ for clarity):

Assumption 1. The function $f(x, D)$ satisfies:
(a) $f(x, D)$ is concave in $D$ for all $x \in \Re^n$.
(b) $f(x, kD) = k f(x, D)$ for all $k \ge 0$, all $D$ and all $x \in \Re^n$.

Note that for functions $f(\cdot, \cdot)$ satisfying Assumption 1 we have:

$$f(x, A + B) \ge \frac{1}{2} f(x, 2A) + \frac{1}{2} f(x, 2B) = f(x, A) + f(x, B). \qquad (6)$$

The restrictions implied by Assumption 1 still allow us to model LPs, QCQPs, SOCPs and SDPs. Table 1 shows the function $f(x, D)$ for these problems. Note that SOCP(1) models situations in which only $A$ and $b$ vary, while SOCP(2) models situations in which $A$, $b$, $c$ and $d$ vary. Note that for QCQP, the function $-\|Ax\|_2^2 - b'x - c$ does not satisfy the second assumption. However, by extending the dimension of the problem, it is well known that the QCQP constraint is representable as an SOCP constraint (see [5]); this is the role of the artificial datum $d$ in Table 1. Finally, the SDP constraint

$$\sum_{j=1}^n A_j x_j \succeq B$$

is equivalent to

$$\lambda_{\min}\left( \sum_{j=1}^n A_j x_j - B \right) \ge 0,$$

where $\lambda_{\min}(M)$ is the function that returns the smallest eigenvalue of the symmetric matrix $M$.
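As a quick sanity check of Assumption 1 for the SDP entry of the table below, the following sketch (numpy, with random symmetric matrices as illustrative data) verifies positive homogeneity of $\lambda_{\min}$ in the data and the superadditivity property (6), which for $\lambda_{\min}$ is Weyl's inequality.

```python
# A quick sanity check (numpy, random symmetric matrices as illustrative data) of
# Assumption 1 for the SDP function f(x, D) = lambda_min(sum_j A_j x_j - B):
# positive homogeneity in the data, and property (6), which for lambda_min reads
# lambda_min(M + N) >= lambda_min(M) + lambda_min(N) (Weyl's inequality).
import numpy as np

rng = np.random.default_rng(1)
m = 5
sym = lambda: (lambda Z: (Z + Z.T) / 2)(rng.standard_normal((m, m)))
lam_min = lambda M: np.linalg.eigvalsh(M)[0]

M, N = sym(), sym()
assert np.isclose(lam_min(2.7 * M), 2.7 * lam_min(M))     # f(x, kD) = k f(x, D), k >= 0
assert lam_min(M + N) >= lam_min(M) + lam_min(N) - 1e-9   # property (6)
```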

Table 1: The function $f(x, D)$ for different conic optimization problems.

| Type | Constraint | $D$ | $f(x, D)$ |
|---|---|---|---|
| LP | $a'x \ge b$ | $(a, b)$ | $a'x - b$ |
| QCQP | $\|Ax\|_2^2 + b'x + c \le 0$ | $(A, b, c, d)$, $d^0 = 1$, $\Delta d^j = 0\ \forall j \in N$ | $\frac{d - (b'x + c)}{2} - \sqrt{\|Ax\|_2^2 + \left(\frac{d + b'x + c}{2}\right)^2}$ |
| SOCP(1) | $\|Ax + b\|_2 \le c'x + d$ | $(A, b, c, d)$, $\Delta c^j = 0$, $\Delta d^j = 0\ \forall j \in N$ | $c'x + d - \|Ax + b\|_2$ |
| SOCP(2) | $\|Ax + b\|_2 \le c'x + d$ | $(A, b, c, d)$ | $c'x + d - \|Ax + b\|_2$ |
| SDP | $\sum_{j=1}^n A_j x_j - B \in S^m_+$ | $(A_1, \ldots, A_n, B)$ | $\lambda_{\min}\left(\sum_{j=1}^n A_j x_j - B\right)$ |

3 The proposed robust framework and its tractability

The robust framework (2) leads to a significant increase in complexity for conic optimization problems. For this reason, we propose a more restricted robust problem which, as we show in this section, retains the complexity of the nominal problem. Specifically, under the model of data uncertainty in Eq. (3), we propose the following constraint for addressing the data uncertainty in the constraint $f(x, \tilde D) \ge 0$:

$$\min_{(v,w)\in\mathcal V} \left\{ f(x, D^0) + \sum_{j\in N} \left( f(x, \Delta D^j)\, v_j + f(x, -\Delta D^j)\, w_j \right) \right\} \ge 0, \qquad (7)$$

where

$$\mathcal V = \left\{ (v, w) \in \Re_+^{|N|} \times \Re_+^{|N|} \;:\; \|v + w\| \le \Omega \right\} \qquad (8)$$

and the norm $\|\cdot\|$ satisfies Eq. (5). We next show that under Assumption 1, Eq. (7) implies the classical definition of robustness,

$$f(x, D) \ge 0 \quad \forall D \in \mathcal U, \qquad (9)$$

where $\mathcal U$ is defined in Eq. (4). Moreover, if the function $f(x, D)$ is linear in $D$, then Eq. (7) is equivalent to Eq. (9).

Proposition 2. Suppose the given norm $\|\cdot\|$ satisfies Eq. (5).
(a) If $f(x, A + B) = f(x, A) + f(x, B)$, then $x$ satisfies (7) if and only if $x$ satisfies (9).
(b) Under Assumption 1, if $x$ is feasible in Problem (7), then $x$ is feasible in Problem (9).

Proof. (a) Under the linearity assumption, Eq. (7) is equivalent to

$$f\left(x,\; D^0 + \sum_{j\in N} \Delta D^j (v_j - w_j)\right) \ge 0 \quad \forall\, \|v + w\| \le \Omega,\ v, w \ge 0, \qquad (10)$$

while Eq. (9) can be written as

$$f\left(x,\; D^0 + \sum_{j\in N} \Delta D^j r_j\right) \ge 0 \quad \forall\, \|r\| \le \Omega. \qquad (11)$$

Suppose $x$ is infeasible in (11); that is, there exists $r$ with $\|r\| \le \Omega$ such that

$$f\left(x,\; D^0 + \sum_{j\in N} \Delta D^j r_j\right) < 0.$$

For all $j \in N$, let $v_j = \max\{r_j, 0\}$ and $w_j = -\min\{r_j, 0\}$. Clearly $r = v - w$, and since $v_j + w_j = |r_j|$, we have from Eq. (5) that $\|v + w\| = \|r\| \le \Omega$. Hence, $x$ is infeasible in (10) as well. Conversely, suppose $x$ is infeasible in (10); then there exist $v, w \ge 0$ with $\|v + w\| \le \Omega$ such that

$$f\left(x,\; D^0 + \sum_{j\in N} \Delta D^j (v_j - w_j)\right) < 0.$$

For all $j \in N$, let $r_j = v_j - w_j$ and observe that $|r_j| \le v_j + w_j$. Therefore, for norms satisfying Eq. (5) we have $\|r\| = \|r^+\| \le \|v + w\| \le \Omega$, and hence $x$ is infeasible in (11).

(b) Suppose $x$ is feasible in Problem (7), i.e.,

$$f(x, D^0) + \sum_{j\in N} \left( f(x, \Delta D^j)\, v_j + f(x, -\Delta D^j)\, w_j \right) \ge 0 \quad \forall\, \|v + w\| \le \Omega,\ v, w \ge 0.$$

From Eq. (6) and Assumption 1(b),

$$f(x, D^0) + \sum_{j\in N} \left( f(x, \Delta D^j)\, v_j + f(x, -\Delta D^j)\, w_j \right) \le f\left(x,\; D^0 + \sum_{j\in N} \Delta D^j (v_j - w_j)\right)$$

for all $\|v + w\| \le \Omega$, $v, w \ge 0$. In the proof of part (a) we established that

$$f\left(x,\; D^0 + \sum_{j\in N} \Delta D^j r_j\right) \ge 0 \quad \forall\, \|r\| \le \Omega$$

is equivalent to

$$f\left(x,\; D^0 + \sum_{j\in N} \Delta D^j (v_j - w_j)\right) \ge 0 \quad \forall\, \|v + w\| \le \Omega,\ v, w \ge 0,$$

and thus $x$ satisfies (9). □

Note that there are other proposals that relax the classical definition of robustness (9) (see for instance [2]) and lead to tractable solutions. However, our particular proposal in Eq. (7) combines tractability with the ability to derive probabilistic guarantees that the solution of Eq. (7) remains feasible under reasonable assumptions on the data variation.

3.1 Tractability of the proposed framework

Unlike the classical definition of robustness (9), which in general cannot be represented in a tractable manner, we next show that Eq. (7) can.

Theorem 1. For a norm satisfying Eq. (5) and a function $f(x, D)$ satisfying Assumption 1:

(a) Constraint (7) is equivalent to

$$f(x, D^0) \ge \Omega \|s\|^*, \qquad (12)$$

where $s_j = \max\{-f(x, \Delta D^j), -f(x, -\Delta D^j)\}$ for all $j \in N$.

(b) Eq. (12) can be written as:

$$\begin{array}{ll} f(x, D^0) \ge \Omega y, & \\ f(x, \Delta D^j) + t_j \ge 0, & \forall j \in N, \\ f(x, -\Delta D^j) + t_j \ge 0, & \forall j \in N, \\ \|t\|^* \le y, & \\ y \in \Re,\ t \in \Re^{|N|}. & \end{array} \qquad (13)$$

Proof. (a) We introduce the following problems:

$$z_1 = \max\ a'v + b'w \quad \text{s.t.}\ \|v + w\| \le 1,\ v, w \ge 0, \qquad (14)$$

and

$$z_2 = \max\ \sum_{j\in N} \max\{a_j, b_j\}\, r_j \quad \text{s.t.}\ \|r\| \le 1,\ r \ge 0, \qquad (15)$$

and show that $z_1 = z_2$. Suppose $r^*$ is an optimal solution of (15). For all $j \in N$, let

$$\begin{aligned} v_j = w_j &= 0 &&\text{if } \max\{a_j, b_j\} \le 0, \\ v_j = r^*_j,\ w_j &= 0 &&\text{if } a_j \ge b_j,\ a_j > 0, \\ w_j = r^*_j,\ v_j &= 0 &&\text{if } b_j > a_j,\ b_j > 0. \end{aligned}$$

Observe that $a_j v_j + b_j w_j \ge \max\{a_j, b_j\}\, r^*_j$ and $v_j + w_j \le |r^*_j|$ for all $j \in N$. From Proposition 1(c) we have $\|v + w\| \le \|r^*\| \le 1$, and thus $(v, w)$ is feasible in Problem (14), leading to

$$z_1 \ge \sum_{j\in N} (a_j v_j + b_j w_j) \ge \sum_{j\in N} \max\{a_j, b_j\}\, r^*_j = z_2.$$

Conversely, let $(v^*, w^*)$ be an optimal solution of Problem (14). Let $r = v^* + w^*$. Clearly $\|r\| \le 1$, and observe that

$$\max\{a_j, b_j\}\, r_j \ge a_j v^*_j + b_j w^*_j \quad \forall j \in N.$$

Therefore, we have

$$z_2 \ge \sum_{j\in N} \max\{a_j, b_j\}\, r_j \ge \sum_{j\in N} (a_j v^*_j + b_j w^*_j) = z_1,$$

leading to $z_1 = z_2$. We next observe that

$$\begin{aligned} \min_{(v,w)\in\mathcal V} \sum_{j\in N} \left\{ f(x, \Delta D^j)\, v_j + f(x, -\Delta D^j)\, w_j \right\} &= -\max_{(v,w)\in\mathcal V} \sum_{j\in N} \left\{ -f(x, \Delta D^j)\, v_j - f(x, -\Delta D^j)\, w_j \right\} \\ &= -\max_{\|r\| \le \Omega,\ r \ge 0}\ \sum_{j\in N} \max\{-f(x, \Delta D^j), -f(x, -\Delta D^j)\}\, r_j, \end{aligned}$$

and using the definition of the dual norm, $\|s\|^* = \max_{\|x\| \le 1} s'x$ (the restriction $r \ge 0$ can be dropped since $s \ge 0$, by Proposition 1(a)), we obtain that constraint (7) is equivalent to $f(x, D^0) \ge \Omega\|s\|^*$, i.e., Eq. (12) follows. Note that $s_j = \max\{-f(x, \Delta D^j), -f(x, -\Delta D^j)\} \ge 0$, since otherwise there would exist $x$ such that $s_j < 0$, i.e., $f(x, \Delta D^j) > 0$ and $f(x, -\Delta D^j) > 0$; from Assumption 1(b), $f(x, 0) = 0$, contradicting the concavity of $f(x, D)$ (Assumption 1(a)).

(b) It is immediate that Eq. (12) can be written in the form of Eq. (13). Suppose that $x$ is feasible in Problem (12). Defining $t = s$ and $y = \|s\|^*$, we can easily check that $(x, t, y)$ is feasible in Problem (13). Conversely, suppose $x$ is infeasible in (12), that is, $f(x, D^0) < \Omega\|s\|^*$. For any $(t, y)$ satisfying the remaining constraints of (13), we have $t_j \ge s_j = \max\{-f(x, \Delta D^j), -f(x, -\Delta D^j)\} \ge 0$, so we can apply Proposition 1(b) to obtain $\|t\|^* \ge \|s\|^*$. Thus,

$$f(x, D^0) < \Omega\|s\|^* \le \Omega\|t\|^* \le \Omega y,$$

i.e., $x$ is infeasible in (13). □

In Table 2, we list the common choices of norms, the representations of their dual norms, and the corresponding references.

Table 2: Representation of the dual norm constraint $\|t\|^* \le y$.

| Norm | $\|u\|$ | $\|t\|^* \le y$ | Reference |
|---|---|---|---|
| $l_2$ | $\|u\|_2$ | $\|t\|_2 \le y$ | [4] |
| $l_1$ | $\|u\|_1$ | $t_j \le y \;\; \forall j \in N$ | [7] |
| $l_\infty$ | $\|u\|_\infty$ | $\sum_{j\in N} t_j \le y$ | [7] |
| $l_p$, $p \ge 1$ | $\|u\|_p$ | $\left(\sum_{j\in N} t_j^q\right)^{1/q} \le y$, $q = \frac{p}{p-1}$ | [7] |
| $l_2 \cap l_\infty$ | $\max\left\{\|u\|_2, \frac{1}{\Delta}\|u\|_\infty\right\}$ | $\|s - t\|_2 + \Delta \sum_{j\in N} s_j \le y$, $s \in \Re_+^{|N|}$ | [4] |
| $l_1 \cap l_\infty$ | $\max\left\{\frac{1}{\Gamma}\|u\|_1, \|u\|_\infty\right\}$ | $\Gamma p + \sum_{j\in N} s_j \le y$, $s_j + p \ge t_j \;\forall j \in N$, $p \in \Re_+$, $s \in \Re_+^{|N|}$ | [7] |

3.2 Representation of the function max{−f(x, D), −f(x, −D)}

The function $g(x, \Delta D^j) = \max\{-f(x, \Delta D^j), -f(x, -\Delta D^j)\}$ arises naturally in Theorem 1. Recall that a norm satisfies $\|kA\| = |k|\,\|A\|$, $\|A + B\| \le \|A\| + \|B\|$, and $\|A\| = 0$ implies that $A = 0$.

We show next that the function $g(x, A)$ satisfies all these properties except the last one, i.e., it behaves almost like a norm.

Proposition 3. Under Assumption 1, the function $g(x, A) = \max\{-f(x, A), -f(x, -A)\}$ satisfies the following properties:
(a) $g(x, A) \ge 0$,
(b) $g(x, kA) = |k|\, g(x, A)$,
(c) $g(x, A + B) \le g(x, A) + g(x, B)$.

Proof. (a) Suppose there exists $x$ such that $g(x, A) < 0$, i.e., $f(x, A) > 0$ and $f(x, -A) > 0$. From Assumption 1(b), $f(x, 0) = 0$, contradicting the concavity of $f(x, A)$ (Assumption 1(a)).

(b) For $k \ge 0$, we apply Assumption 1(b) and obtain

$$g(x, kA) = \max\{-f(x, kA), -f(x, -kA)\} = k \max\{-f(x, A), -f(x, -A)\} = k\, g(x, A).$$

Similarly, if $k < 0$ we have

$$g(x, kA) = \max\{-f(x, (-k)(-A)), -f(x, (-k)A)\} = -k\, g(x, A).$$

(c) Using Eq. (6) we obtain

$$g(x, A + B) = g\left(x, \tfrac{1}{2}(2A) + \tfrac{1}{2}(2B)\right) \le \tfrac{1}{2}\, g(x, 2A) + \tfrac{1}{2}\, g(x, 2B) = g(x, A) + g(x, B). \qquad \square$$
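To illustrate how Theorem 1(b) and Table 2 combine in practice, the following sketch (cvxpy, with small random illustrative data and an arbitrary objective; the variable names are ours) builds formulation (13) for a single uncertain LP constraint $a'x \ge b$ under the $l_1 \cap l_\infty$ norm. Every added constraint is linear, so the robust LP remains an LP.

```python
# A sketch (cvxpy, random illustrative data, arbitrary objective) of formulation (13)
# for one uncertain LP constraint a'x >= b under the l1-intersect-linf norm, using the
# dual-norm representation of Table 2.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, N = 4, 6                              # decision variables / primitive uncertainties
Omega, Gamma = 2.0, 3.0
a0, b0 = rng.standard_normal(n), -5.0    # nominal data D^0
dA = 0.1 * rng.standard_normal((N, n))   # rows are Delta a^j
db = 0.1 * rng.standard_normal(N)        # Delta b^j

x, t, y = cp.Variable(n), cp.Variable(N), cp.Variable()
s, p = cp.Variable(N, nonneg=True), cp.Variable(nonneg=True)
robust = [a0 @ x - b0 >= Omega * y,      # f(x, D^0) >= Omega * y
          dA @ x - db + t >= 0,          # f(x, +Delta D^j) + t_j >= 0
          db - dA @ x + t >= 0,          # f(x, -Delta D^j) + t_j >= 0
          Gamma * p + cp.sum(s) <= y,    # ||t||* <= y  (Table 2, l1-intersect-linf row)
          s + p >= t]
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - 1)), robust)
prob.solve()
print(x.value)
```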

Note that the function $g(x, A)$ does not necessarily define a norm for $A$, since $g(x, A) = 0$ does not necessarily imply $A = 0$. However, for LP, QCQP, SOCP(1), SOCP(2) and SDP, and a specific direction of data perturbation $\Delta D^j$, we can map $g(x, \Delta D^j)$ to a function of a norm such that

$$g(x, \Delta D^j) = \|H(x, \Delta D^j)\|_g,$$

where $H(x, \Delta D^j)$ is linear in $\Delta D^j$ and defined as follows (see also the summary in Table 3):

(a) LP: $f(x, D) = a'x - b$, where $D = (a, b)$ and $\Delta D^j = (\Delta a^j, \Delta b^j)$. Hence,

$$g(x, \Delta D^j) = \max\{-(\Delta a^j)'x + \Delta b^j,\; (\Delta a^j)'x - \Delta b^j\} = |(\Delta a^j)'x - \Delta b^j|.$$

(b) QCQP: $f(x, D) = \frac{d - (b'x + c)}{2} - \sqrt{\|Ax\|_2^2 + \left(\frac{d + b'x + c}{2}\right)^2}$, where $D = (A, b, c, d)$ and $\Delta D^j = (\Delta A^j, \Delta b^j, \Delta c^j, 0)$. Therefore,

$$g(x, \Delta D^j) = \max_{\pm} \left\{ \pm\,\frac{(\Delta b^j)'x + \Delta c^j}{2} + \sqrt{\|\Delta A^j x\|_2^2 + \left(\frac{(\Delta b^j)'x + \Delta c^j}{2}\right)^2} \right\} = \sqrt{\|\Delta A^j x\|_2^2 + \left(\frac{(\Delta b^j)'x + \Delta c^j}{2}\right)^2} + \left|\frac{(\Delta b^j)'x + \Delta c^j}{2}\right|.$$

(c) SOCP(1): $f(x, D) = c'x + d - \|Ax + b\|_2$, where $D = (A, b, c, d)$ and $\Delta D^j = (\Delta A^j, \Delta b^j, 0, 0)$. Therefore,

$$g(x, \Delta D^j) = \|\Delta A^j x + \Delta b^j\|_2.$$

(d) SOCP(2): $f(x, D) = c'x + d - \|Ax + b\|_2$, where $D = (A, b, c, d)$ and $\Delta D^j = (\Delta A^j, \Delta b^j, \Delta c^j, \Delta d^j)$. Therefore,

$$g(x, \Delta D^j) = \max\left\{ -(\Delta c^j)'x - \Delta d^j + \|\Delta A^j x + \Delta b^j\|_2,\; (\Delta c^j)'x + \Delta d^j + \|\Delta A^j x + \Delta b^j\|_2 \right\} = |(\Delta c^j)'x + \Delta d^j| + \|\Delta A^j x + \Delta b^j\|_2.$$

(e) SDP: $f(x, D) = \lambda_{\min}\left(\sum_{i=1}^n A_i x_i - B\right)$, where $D = (A_1, \ldots, A_n, B)$ and $\Delta D^j = (\Delta A^j_1, \ldots, \Delta A^j_n, \Delta B^j)$.

Therefore,

$$\begin{aligned} g(x, \Delta D^j) &= \max\left\{ -\lambda_{\min}\left(\sum_{i=1}^n \Delta A^j_i x_i - \Delta B^j\right),\; -\lambda_{\min}\left(-\left(\sum_{i=1}^n \Delta A^j_i x_i - \Delta B^j\right)\right) \right\} \\ &= \max\left\{ \lambda_{\max}\left(-\sum_{i=1}^n \Delta A^j_i x_i + \Delta B^j\right),\; \lambda_{\max}\left(\sum_{i=1}^n \Delta A^j_i x_i - \Delta B^j\right) \right\} \\ &= \left\| \sum_{i=1}^n \Delta A^j_i x_i - \Delta B^j \right\|_2, \end{aligned}$$

where $\|\cdot\|_2$ applied to a symmetric matrix denotes the spectral norm.

Table 3: The function $H(x, \Delta D^j)$ and the norm $\|\cdot\|_g$ for different conic optimization problems.

| Type | $r = H(x, \Delta D^j)$ | $g(x, \Delta D^j) = \|r\|_g$ |
|---|---|---|
| LP | $r = (\Delta a^j)'x - \Delta b^j$ | $|r|$ |
| QCQP | $r = (r_1, r_2)$, $r_1 = \left[\Delta A^j x;\ \frac{(\Delta b^j)'x + \Delta c^j}{2}\right]$, $r_2 = \frac{(\Delta b^j)'x + \Delta c^j}{2}$ | $\|r_1\|_2 + |r_2|$ |
| SOCP(1) | $r = \Delta A^j x + \Delta b^j$ | $\|r\|_2$ |
| SOCP(2) | $r = (r_1, r_2)$, $r_1 = \Delta A^j x + \Delta b^j$, $r_2 = (\Delta c^j)'x + \Delta d^j$ | $\|r_1\|_2 + |r_2|$ |
| SDP | $R = \sum_{i=1}^n \Delta A^j_i x_i - \Delta B^j$ | $\|R\|_2$ |
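As a sanity check of the SDP row just derived, the following sketch (numpy, with random symmetric matrices as illustrative data) verifies numerically that $\max\{-\lambda_{\min}(R), -\lambda_{\min}(-R)\}$ equals the spectral norm of $R$.

```python
# A sanity check (numpy, random symmetric matrices as illustrative data) of the SDP
# row of Table 3: for R = sum_i Delta A_i^j x_i - Delta B^j, the quantity
# max{-lambda_min(R), -lambda_min(-R)} equals the spectral norm ||R||_2.
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 3
sym = lambda: (lambda Z: (Z + Z.T) / 2)(rng.standard_normal((m, m)))
dA, dB = [sym() for _ in range(n)], sym()
x = rng.standard_normal(n)

R = sum(xi * Ai for xi, Ai in zip(x, dA)) - dB
lam_min = lambda M: np.linalg.eigvalsh(M)[0]
g = max(-lam_min(R), -lam_min(-R))
assert np.isclose(g, np.linalg.norm(R, 2))    # spectral norm of the symmetric matrix R
```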

3.3 The nature and size of the robust problem

In this section, we discuss the nature and size of the proposed robust conic problem. Note that in the proposed robust model (13), for every uncertain conic constraint $f(x, \tilde D) \ge 0$ we add at most $|N| + 1$ new variables, $2|N|$ conic constraints of the same nature as the nominal problem, and an additional constraint involving the dual norm. The nature of this last constraint depends on the norm we use to describe the uncertainty set $\mathcal U$ defined in Eq. (4).

When all the data entries of the problem have independent random perturbations, by exploiting the sparsity of the additional conic constraints, we can further reduce the size of the robust model. Essentially, we can express the model of uncertainty in the form of Eq. (3), for which $\tilde z_j$ is the independent random variable associated with the $j$th data element, and $\Delta D^j$ contains zeros except at the entries corresponding to that data element. As an illustration, consider the following semidefinite constraint,

$$\begin{pmatrix} a_1 & a_2 \\ a_2 & a_3 \end{pmatrix} x_1 + \begin{pmatrix} a_4 & a_5 \\ a_5 & a_6 \end{pmatrix} x_2 \succeq \begin{pmatrix} a_7 & a_8 \\ a_8 & a_9 \end{pmatrix},$$

such that each element of the data $d = (a_1, \ldots, a_9)$ has an independent random perturbation, that is, $\tilde a_i = a_i + \Delta a_i \tilde z_i$ with the $\tilde z_i$ independently distributed. Equivalently, in Eq. (3) we have

$$\tilde d = d^0 + \sum_{i=1}^9 \Delta d^i \tilde z_i,$$

where $d^0 = (a_1, \ldots, a_9)$ and $\Delta d^i$ is a vector with $\Delta a_i$ at the $i$th entry and zero otherwise. Hence, we can simplify the pair of conic constraints of Eq. (13) associated with, say, $\Delta d^1$,

$$f(x, \Delta d^1) + t_1 \ge 0 \quad\text{and}\quad f(x, -\Delta d^1) + t_1 \ge 0, \quad \text{i.e.,} \quad \lambda_{\min}\left( \pm\begin{pmatrix} \Delta a_1 & 0 \\ 0 & 0 \end{pmatrix} x_1 \right) + t_1 \ge 0,$$

to $t_1 \ge -\min\{\Delta a_1 x_1, 0\}$ and $t_1 \ge -\min\{-\Delta a_1 x_1, 0\}$, or equivalently to the linear constraints $t_1 \ge \Delta a_1 x_1$ and $t_1 \ge -\Delta a_1 x_1$.

In Appendix A we derive, and in Table 4 we summarize, the number of variables and constraints and their nature when the nominal problem is an LP, QCQP, SOCP(1) (only $A$, $b$ vary), SOCP(2) ($A$, $b$, $c$, $d$ vary) or SDP, for various choices of norms. Note that for the cases of the $l_1$, $l_\infty$ and $l_2$ norms, we are able to collate terms so that the number of variables and constraints introduced is minimal. Furthermore, using the $l_2$ norm results in only one additional variable and one additional SOCP type of constraint, while maintaining the nature of the original conic optimization problem for SOCP and SDP. The use of the other norms comes at the expense of more variables and constraints, of the order of $|N|$, which is not very appealing for large problems.

Table 4: Size increase and nature of the robust formulation when each data entry has independent uncertainty.

| | $l_1$-norm | $l_\infty$-norm | $l_1 \cap l_\infty$-norm | $l_2$-norm | $l_2 \cap l_\infty$-norm |
|---|---|---|---|---|---|
| Num. vars. | 1 | $n+1$ | $2|N|+2$ | 1 | $2|N|+1$ |
| Num. linear const. | $2n+1$ | $2n+1$ | $4|N|+2$ | 0 | $3|N|$ |
| Num. SOC const. | 0 | 0 | 0 | 1 | 1 |
| LP | LP | LP | LP | SOCP | SOCP |
| QCQP | SOCP | SOCP | SOCP | SOCP | SOCP |
| SOCP(1) | SOCP | SOCP | SOCP | SOCP | SOCP |
| SOCP(2) | SOCP | SOCP | SOCP | SOCP | SOCP |
| SDP | SDP | SDP | SDP | SDP | SDP |

4 Probabilistic Guarantees

In this section, we derive a guarantee on the probability that the robust solution is feasible when the uncertain coefficients obey some natural probability distributions. An important component of our analysis is the relation among different norms. We denote by $\langle \cdot, \cdot \rangle$ the inner product on a vector space, either $\Re^m$ or the space of $m \times m$ symmetric matrices, $S^{m \times m}$. The inner product induces a norm $\sqrt{\langle x, x \rangle}$. For a vector space, the natural inner product is the Euclidean inner product, $\langle x, y \rangle = x'y$, and the induced norm is the Euclidean norm $\|x\|_2$. For the space of symmetric matrices, the natural inner product is the trace product, $\langle X, Y \rangle = \operatorname{trace}(XY)$, and the corresponding induced norm is the Frobenius norm, $\|X\|_F$ (see [13]).

We analyze the relation of the inner product norm $\sqrt{\langle x, x \rangle}$ to the norm $\|x\|_g$ defined in Table 3 for the conic optimization problems we consider. Since $\|x\|_g$ and $\sqrt{\langle x, x \rangle}$ are valid norms on a finite dimensional space, there exist finite $\alpha, \beta > 0$ such that

$$\alpha \|r\|_g \le \sqrt{\langle r, r \rangle} \le \beta \|r\|_g \qquad (16)$$

for all $r$ in the relevant space.

Proposition 4. For the norm $\|\cdot\|_g$ defined in Table 3 for the conic optimization problems we consider, Eq. (16) holds with the following parameters:
(a) LP: $\alpha = \beta = 1$.
(b) QCQP, SOCP(2): $\alpha = 1/\sqrt 2$ and $\beta = 1$.
(c) SOCP(1): $\alpha = \beta = 1$.
(d) SDP: $\alpha = 1$ and $\beta = \sqrt m$.

Proof. (a) LP: For $r \in \Re$, $\|r\|_g = |r| = \sqrt{\langle r, r\rangle}$, leading to Eq. (16) with $\alpha = \beta = 1$.

(b) QCQP, SOCP(2): For $r = (r_1, r_2) \in \Re^{l+1}$, let $a = \|r_1\|_2$ and $b = |r_2|$. Since $a, b \ge 0$, using the inequalities $\frac{a + b}{\sqrt 2} \le \sqrt{a^2 + b^2}$ and $\sqrt{a^2 + b^2} \le a + b$, we have

$$\frac{1}{\sqrt 2}\left( \|r_1\|_2 + |r_2| \right) \le \sqrt{r'r} = \|r\|_2 \le \|r_1\|_2 + |r_2|,$$

leading to Eq. (16) with $\alpha = 1/\sqrt 2$ and $\beta = 1$.

(c) SOCP(1): For all $r$, $\|r\|_g = \|r\|_2$, so Eq. (16) holds with $\alpha = \beta = 1$.

(d) Let $\lambda_j$, $j = 1, \ldots, m$, be the eigenvalues of the symmetric matrix $A$. Since $\|A\|_F = \sqrt{\operatorname{trace}(A^2)} = \sqrt{\sum_j \lambda_j^2}$ and $\|A\|_2 = \max_j |\lambda_j|$, we have

$$\|A\|_2 \le \|A\|_F \le \sqrt m\, \|A\|_2,$$

leading to Eq. (16) with $\alpha = 1$ and $\beta = \sqrt m$. □
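A small numerical illustration of part (d) (numpy, with a random symmetric matrix as illustrative data): the Frobenius and spectral norms satisfy $\|A\|_2 \le \|A\|_F \le \sqrt m\, \|A\|_2$, with the upper bound tight at $A = I$.

```python
# A numerical illustration (numpy, random data) of Proposition 4(d): for a symmetric
# matrix A, ||A||_2 <= ||A||_F <= sqrt(m) ||A||_2, with the upper bound tight at A = I.
import numpy as np

rng = np.random.default_rng(4)
m = 6
Z = rng.standard_normal((m, m)); A = (Z + Z.T) / 2
spec, frob = np.linalg.norm(A, 2), np.linalg.norm(A, 'fro')
assert spec <= frob <= np.sqrt(m) * spec
I = np.eye(m)
assert np.isclose(np.linalg.norm(I, 'fro'), np.sqrt(m) * np.linalg.norm(I, 2))
```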

The central result of the section is as follows.

Theorem 2. (a) Under the model of uncertainty in Eq. (3), and given a feasible solution $x$ of Eq. (7),

$$P\left( f(x, \tilde D) < 0 \right) \le P\left( \left\| \sum_{j\in N} r^j \tilde z_j \right\|_g > \Omega \|s\|^* \right),$$

where $r^j = H(x, \Delta D^j)$ and $s_j = \|r^j\|_g$, $j \in N$.

(b) When we use the $l_2$-norm in Eq. (8), i.e., $\|s\|^* = \|s\|_2$, and under the assumption that the $\tilde z_j$ are normally and independently distributed with mean zero and variance one, i.e., $\tilde z \sim \mathcal N(0, I)$, then

$$P\left( \left\| \sum_{j\in N} r^j \tilde z_j \right\|_g > \Omega \sqrt{\sum_{j\in N} \|r^j\|_g^2} \right) \le \frac{\sqrt e\, \Omega}{\gamma} \exp\left( -\frac{\Omega^2}{2\gamma^2} \right), \qquad (17)$$

where $\gamma = \beta/\alpha$, with $\alpha$, $\beta$ derived in Proposition 4, and $\Omega > \gamma$.

Proof. (a) We have

$$\begin{aligned} P(f(x, \tilde D) < 0) &\le P\left( f(x, D^0) + f\left(x, \sum_{j\in N} \Delta D^j \tilde z_j\right) < 0 \right) &&\text{(from (6))} \\ &\le P\left( f\left(x, \sum_{j\in N} \Delta D^j \tilde z_j\right) < -\Omega\|s\|^* \right) &&\text{(from (12), } s_j = \|H(x, \Delta D^j)\|_g) \\ &\le P\left( \min\left\{ f\left(x, \sum_{j\in N} \Delta D^j \tilde z_j\right),\; f\left(x, -\sum_{j\in N} \Delta D^j \tilde z_j\right) \right\} < -\Omega\|s\|^* \right) \\ &= P\left( g\left(x, \sum_{j\in N} \Delta D^j \tilde z_j\right) > \Omega\|s\|^* \right) \\ &= P\left( \left\| H\left(x, \sum_{j\in N} \Delta D^j \tilde z_j\right) \right\|_g > \Omega\|s\|^* \right) \\ &= P\left( \left\| \sum_{j\in N} H(x, \Delta D^j)\, \tilde z_j \right\|_g > \Omega\|s\|^* \right) &&\text{($H(x, D)$ is linear in $D$)} \\ &= P\left( \left\| \sum_{j\in N} r^j \tilde z_j \right\|_g > \Omega\|s\|^* \right). \end{aligned}$$

(b) Using the relations $\|r\|_g \le \frac{1}{\alpha}\sqrt{\langle r, r \rangle}$ and $\|r\|_g \ge \frac{1}{\beta}\sqrt{\langle r, r \rangle}$ from Proposition 4, we obtain

$$\begin{aligned} P\left( \left\| \sum_{j\in N} r^j \tilde z_j \right\|_g > \Omega \sqrt{\sum_{j\in N} \|r^j\|_g^2} \right) &\le P\left( \frac{1}{\alpha}\sqrt{\left\langle \sum_{j\in N} r^j \tilde z_j,\; \sum_{k\in N} r^k \tilde z_k \right\rangle} > \frac{\Omega}{\beta} \sqrt{\sum_{j\in N} \langle r^j, r^j \rangle} \right) \\ &= P\left( \sum_{j,k \in N} \langle r^j, r^k \rangle\, \tilde z_j \tilde z_k > \frac{\Omega^2}{\gamma^2} \sum_{j\in N} \langle r^j, r^j \rangle \right) \\ &= P\left( \tilde z' R\, \tilde z > \frac{\Omega^2}{\gamma^2} \sum_{j\in N} \langle r^j, r^j \rangle \right), \end{aligned}$$

where $R_{jk} = \langle r^j, r^k \rangle$. Clearly, $R$ is a symmetric positive semidefinite matrix and can be spectrally decomposed as $R = Q' \Lambda Q$, where $\Lambda$ is the diagonal matrix of the eigenvalues $\lambda_j$ and $Q$ is the corresponding orthonormal matrix. Let $\tilde y = Q \tilde z$, so that $\tilde z' R \tilde z = \tilde y' \Lambda \tilde y = \sum_j \lambda_j \tilde y_j^2$. Since $\tilde z \sim \mathcal N(0, I)$, we also have $\tilde y \sim \mathcal N(0, I)$, that is, the $\tilde y_j$, $j \in N$, are independent and normally distributed. Moreover,

$$\sum_j \lambda_j = \operatorname{trace}(R) = \sum_{j\in N} \langle r^j, r^j \rangle.$$

Therefore,

$$\begin{aligned} P\left( \tilde z' R \tilde z > \frac{\Omega^2}{\gamma^2} \sum_j \lambda_j \right) &= P\left( \sum_j \lambda_j \tilde y_j^2 > \frac{\Omega^2}{\gamma^2} \sum_j \lambda_j \right) \\ &\le \frac{E\left[ \exp\left( \theta \sum_j \lambda_j \tilde y_j^2 \right) \right]}{\exp\left( \theta \frac{\Omega^2}{\gamma^2} \sum_j \lambda_j \right)} &&\text{(from Markov's inequality, } \theta > 0\text{)} \\ &= \frac{\prod_j E\left[ \exp\left( \theta \lambda_j \tilde y_j^2 \right) \right]}{\exp\left( \theta \frac{\Omega^2}{\gamma^2} \sum_j \lambda_j \right)} &&\text{(the } \tilde y_j \text{ are independent)} \\ &\le \frac{\prod_j \left( E\left[ \exp\left( \theta \bar\lambda\, \tilde y_j^2 \right) \right] \right)^{\lambda_j/\bar\lambda}}{\exp\left( \theta \frac{\Omega^2}{\gamma^2} \sum_j \lambda_j \right)} \end{aligned}$$

for all $\theta > 0$ with $2\theta\bar\lambda < 1$, where $\bar\lambda = \max_j \lambda_j$,

where the last inequality follows from Jensen's inequality, noting that $x^{\lambda_j/\bar\lambda}$ is a concave function of $x$ for $\lambda_j/\bar\lambda \in [0, 1]$. Since $\tilde y_j \sim \mathcal N(0, 1)$,

$$E\left[ \exp\left( \theta\bar\lambda\, \tilde y_j^2 \right) \right] = \frac{1}{\sqrt{2\pi}} \int \exp\left( \theta\bar\lambda y^2 - \frac{y^2}{2} \right) dy = \sqrt{\frac{1}{1 - 2\theta\bar\lambda}}.$$

Thus, we obtain

$$\frac{\prod_j \left( E\left[ \exp(\theta\bar\lambda \tilde y_j^2) \right] \right)^{\lambda_j/\bar\lambda}}{\exp\left( \theta\frac{\Omega^2}{\gamma^2}\sum_j \lambda_j \right)} = \exp\left( -\frac{\eta}{2} \ln\left( 1 - 2\theta\bar\lambda \right) - \theta \frac{\Omega^2}{\gamma^2}\, \bar\lambda\, \eta \right), \quad \text{where } \eta = \frac{\sum_j \lambda_j}{\bar\lambda} \ge 1.$$

Taking derivatives with respect to $\theta$ and choosing the best $\theta$, we have $\theta = \frac{1}{2\bar\lambda}\left( 1 - \frac{\gamma^2}{\Omega^2} \right)$, for which $\theta > 0$ provided that $\Omega > \gamma$. Substituting and simplifying, we have

$$\exp\left( -\frac{\eta}{2}\left( \frac{\Omega^2}{\gamma^2} - 1 - 2\ln\frac{\Omega}{\gamma} \right) \right) = \left( \frac{\sqrt e\,\Omega}{\gamma} \exp\left( -\frac{\Omega^2}{2\gamma^2} \right) \right)^{\eta} \le \frac{\sqrt e\,\Omega}{\gamma} \exp\left( -\frac{\Omega^2}{2\gamma^2} \right),$$

where the last inequality follows from $\eta \ge 1$ and from the fact that

$$\frac{\sqrt e\,\Omega}{\gamma} \exp\left( -\frac{\Omega^2}{2\gamma^2} \right) < 1 \quad \text{for } \Omega > \gamma. \qquad \square$$

Note that $f(x, \tilde D) < 0$ implies that $\|\tilde z\|_2 > \Omega$, since $x$ is feasible in (7) and hence in (9). Thus, when $\tilde z \sim \mathcal N(0, I)$,

$$P(f(x, \tilde D) < 0) \le P(\|\tilde z\|_2 > \Omega) = 1 - \chi^2_{|N|}(\Omega^2), \qquad (18)$$

where $\chi^2_{|N|}(\cdot)$ is the cdf of a chi-square distribution with $|N|$ degrees of freedom. Note that the bound (18) does not take into account the structure of $f(x, \tilde D)$, in contrast to bound (17), which depends on $f(x, \tilde D)$ via the parameter $\gamma$.
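Before turning to the numerical comparison, here is a sketch (Python with numpy/scipy; the helper names are ours) of how such comparisons can be reproduced: it solves bound (17) for $\Omega$ by root finding and evaluates the structure-independent bound (18) via the chi-square quantile.

```python
# A sketch (numpy/scipy; helper names are ours): solve bound (17),
# sqrt(e)*(Omega/gamma)*exp(-Omega^2/(2 gamma^2)) = eps, for Omega > gamma, and
# evaluate the structure-independent bound (18) via the chi-square quantile.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def omega_bound17(eps, gamma):
    f = lambda om: np.sqrt(np.e) * (om / gamma) * np.exp(-om**2 / (2 * gamma**2)) - eps
    return brentq(f, gamma * (1 + 1e-9), 100 * gamma)   # the bound equals 1 at Omega = gamma

def omega_bound18(eps, N):
    return np.sqrt(chi2.ppf(1 - eps, df=N))             # P(||z||_2 > Omega) = eps

for eps in [1e-1, 1e-2, 1e-3, 1e-6]:
    print(eps,
          omega_bound17(eps, gamma=1.0),                # LP, SOCP(1); e.g. ~2.76 at eps = 0.1
          omega_bound17(eps, gamma=np.sqrt(2)),         # QCQP, SOCP(2)
          omega_bound17(eps, gamma=10.0),               # SDP with m = 100
          omega_bound18(eps, N=4950))
```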

To illustrate this, we substitute the values of the parameter $\gamma$ from Proposition 4 into Eq. (17) and report the resulting bounds in Table 5. To amplify the previous discussion, we show in Table 6 the value of $\Omega$ needed in order for the bound (17) to be less than or equal to $\epsilon$. The last column shows the value of $\Omega$ obtained using bound (18), which is independent of the structure of the problem. We choose $|N| = 4950$, which is approximately the maximum number of data entries of an SDP constraint of the size considered in Table 6. Although a size of this order is unrealistic for constraints with fewer data entries, such as LPs, the derived probability bounds remain valid. Note that bound (18) leads to $\Omega = O(\sqrt{|N| \ln(1/\epsilon)})$. For LP, SOCP and QCQP, bound (17) leads to $\Omega = O(\sqrt{\ln(1/\epsilon)})$, which is independent of the dimension of the problem, while for SDP it leads to $\Omega = O(\sqrt{m \ln(1/\epsilon)})$. As a result, ignoring the structure of the problem and using bound (18) leads to very conservative solutions.

Table 5: Probability bounds on $P(f(x, \tilde D) < 0)$ for $\tilde z \sim \mathcal N(0, I)$.

| Type | LP | QCQP | SOCP(1) | SOCP(2) | SDP |
|---|---|---|---|---|---|
| Bound | $\sqrt e\,\Omega\, e^{-\Omega^2/2}$ | $\sqrt{\frac e 2}\,\Omega\, e^{-\Omega^2/4}$ | $\sqrt e\,\Omega\, e^{-\Omega^2/2}$ | $\sqrt{\frac e 2}\,\Omega\, e^{-\Omega^2/4}$ | $\sqrt e\,\frac{\Omega}{\sqrt m}\, e^{-\Omega^2/(2m)}$ |

Table 6: Sample calculations of $\Omega$ using the probability bounds of Table 5 for $m = 100$, $|N| = 4950$.

| $\epsilon$ | LP | QCQP | SOCP(1) | SOCP(2) | SDP | Eq. (18) |
|---|---|---|---|---|---|---|
| $10^{-1}$ | 2.76 | 3.90 | 2.76 | 3.90 | 27.6 | 74.5 |
| $10^{-2}$ | 3.57 | 5.05 | 3.57 | 5.05 | 35.7 | 75.0 |
| $10^{-3}$ | 4.21 | 5.95 | 4.21 | 5.95 | 42.1 | 75.7 |
| $10^{-6}$ | 5.68 | 7.99 | 5.68 | 7.99 | 56.8 | 76.9 |

5 General cones

In this section, we generalize the results in Sections 2-4 to arbitrary conic constraints of the form

$$\sum_{j=1}^n \tilde A_j x_j \succeq_K \tilde B, \qquad (19)$$

where $\{\tilde A_1, \ldots, \tilde A_n, \tilde B\} = \tilde D$ constitutes the set of data that is subject to uncertainty, and $K$ is a closed, convex, pointed cone with nonempty interior. For notational simplicity, we define

$$A(x, \tilde D) = \sum_{j=1}^n \tilde A_j x_j - \tilde B,$$

so that Eq. (19) is equivalent to

$$A(x, \tilde D) \succeq_K 0. \qquad (20)$$

We assume that the model for data uncertainty is given in Eq. (3) with $\tilde z \sim \mathcal N(0, I)$. The uncertainty set $\mathcal U$ satisfies Eq. (4) with the given norm satisfying $\|u\| = \|u^+\|$. Paralleling the earlier development, starting with a cone $K$ and constraint (20), we define the function $f(\cdot, \cdot)$ as follows, so that $f(x, D) > 0$ if and only if $A(x, D) \succ_K 0$.

Proposition 5. For any $V \succ_K 0$, the function

$$f(x, D) = \max\ \theta \quad \text{s.t.}\ A(x, D) \succeq_K \theta V \qquad (21)$$

satisfies the properties:
(a) $f(x, D)$ is bounded and concave in $x$ and $D$.
(b) $f(x, kD) = k f(x, D)$ for all $k \ge 0$.
(c) $f(x, D) \ge y$ if and only if $A(x, D) \succeq_K yV$.
(d) $f(x, D) > y$ if and only if $A(x, D) \succ_K yV$.

Proof. (a) Consider the dual of Problem (21):

$$z = \min\ \langle u, A(x, D) \rangle \quad \text{s.t.}\ \langle u, V \rangle = 1,\ u \succeq_{K^*} 0,$$

where $K^*$ is the dual cone of $K$. Since $K$ is a closed, convex, pointed cone with nonempty interior, so is $K^*$ (see [5]). As $V \succ_K 0$, for all $u \succeq_{K^*} 0$ with $u \ne 0$ we have $\langle u, V \rangle > 0$; hence, the dual problem is bounded. Furthermore, since $K^*$ has a nonempty interior, the dual problem is strictly feasible, i.e., there exists $u \succ_{K^*} 0$ with $\langle u, V \rangle = 1$. Therefore, by conic duality, the dual objective $z$ attains the same finite value as the primal objective function $f(x, D)$.

Since $A(x, D)$ is a linear mapping of $D$ and an affine mapping of $x$, it follows that $f(x, D)$ is concave in $x$ and $D$.

(b) Using the dual expression for $f(x, D)$, and the fact that $A(x, kD) = kA(x, D)$, the result follows.

(c) If $\theta = y$ is feasible in Problem (21), we have $f(x, D) \ge y$. Conversely, if $f(x, D) \ge y$, then $A(x, D) \succeq_K f(x, D)V \succeq_K yV$.

(d) Suppose $A(x, D) \succ_K yV$; then there exists $\epsilon > 0$ such that $A(x, D) - yV \succeq_K \epsilon V$, or $A(x, D) \succeq_K (\epsilon + y)V$. Hence, $f(x, D) \ge \epsilon + y > y$. Conversely, since $V \succ_K 0$, if $f(x, D) > y$ then $(f(x, D) - y)V \succ_K 0$. Hence, $A(x, D) \succeq_K f(x, D)V \succ_K yV$. □

Remark. With $y = 0$, (c) establishes that $A(x, D) \succeq_K 0$ if and only if $f(x, D) \ge 0$, and (d) establishes that $A(x, D) \succ_K 0$ if and only if $f(x, D) > 0$.

The proposed robust model is given by Eqs. (7) and (8). We next derive an expression for $g(x, D) = \max\{-f(x, D), -f(x, -D)\}$.

Proposition 6. Let $g(x, D) = \max\{-f(x, D), -f(x, -D)\}$. Then $g(x, D) = \|H(x, D)\|_g$, where $H(x, D) = A(x, D)$ and

$$\|S\|_g = \min\{ y : yV \succeq_K S \succeq_K -yV \}.$$

Proof. We observe that

$$\begin{aligned} g(x, D) &= \max\{-f(x, D), -f(x, -D)\} \\ &= \min\{ y \mid -f(x, D) \le y,\; -f(x, -D) \le y \} \\ &= \min\{ y \mid A(x, D) \succeq_K -yV,\; -A(x, D) \succeq_K -yV \} &&\text{(from Proposition 5(c))} \\ &= \|A(x, D)\|_g. \end{aligned}$$

We also need to show that $\|\cdot\|_g$ is indeed a valid norm. Since $V \succ_K 0$, we have $\|S\|_g \ge 0$. Clearly $\|0\|_g = 0$, and if $\|S\|_g = 0$, then $0 \succeq_K S \succeq_K 0$, which implies that $S = 0$. To show that $\|kS\|_g = |k|\,\|S\|_g$, we observe that for $k > 0$,

$$\|kS\|_g = \min\{ y \mid yV \succeq_K kS \succeq_K -yV \} = k \min\left\{ \frac{y}{k} \;\middle|\; \frac{y}{k}V \succeq_K S \succeq_K -\frac{y}{k}V \right\} = k\,\|S\|_g.$$

Likewise, if $k < 0$,

$$\|kS\|_g = \min\{ y \mid yV \succeq_K kS \succeq_K -yV \} = \min\{ y \mid yV \succeq_K (-k)S \succeq_K -yV \} = \|(-k)S\|_g = -k\,\|S\|_g,$$

since the sandwich condition is symmetric in $\pm kS$. Finally, to verify the triangle inequality,

$$\begin{aligned} \|S\|_g + \|T\|_g &= \min\{ y \mid yV \succeq_K S \succeq_K -yV \} + \min\{ z \mid zV \succeq_K T \succeq_K -zV \} \\ &= \min\{ y + z \mid yV \succeq_K S \succeq_K -yV,\; zV \succeq_K T \succeq_K -zV \} \\ &\ge \min\{ y + z \mid (y + z)V \succeq_K S + T \succeq_K -(y + z)V \} \\ &= \|S + T\|_g. \qquad \square \end{aligned}$$

For the general conic constraint, the norm $\|\cdot\|_g$ depends on the cone $K$ and on a point $V$ in the interior of the cone. Hence, we define $\|\cdot\|_{K,V} := \|\cdot\|_g$. Using Propositions 5 and 6 and Theorems 1 and 2, we next show that the robust counterpart of the conic constraint (20) is tractable, and we provide a bound on the probability that the constraint is feasible.

Theorem 3. (a) (Tractability) For a norm satisfying Eq. (5), constraint (7) for general cones is equivalent to

$$\begin{array}{ll} A(x, D^0) \succeq_K \Omega\, y\, V, & \\ t_j V \succeq_K A(x, \Delta D^j) \succeq_K -t_j V, & j \in N, \\ \|t\|^* \le y, & \\ y \in \Re,\ t \in \Re^{|N|}. & \end{array}$$

(b) (Probabilistic guarantee) When we use the $l_2$-norm in Eq. (8), i.e., $\|s\|^* = \|s\|_2$, and under the assumption that $\tilde z \sim \mathcal N(0, I)$, then for all $V \succ_K 0$ we have

$$P\left( A(x, \tilde D) \not\succeq_K 0 \right) \le \frac{\sqrt e\, \Omega}{\gamma_{K,V}} \exp\left( -\frac{\Omega^2}{2\gamma_{K,V}^2} \right),$$

where

$$\gamma_{K,V} = \left( \max_{\langle S, S \rangle = 1} \|S\|_{K,V} \right) \left( \max_{\|S\|_{K,V} = 1} \sqrt{\langle S, S \rangle} \right) \quad \text{and} \quad \|S\|_{K,V} = \min\{ y : yV \succeq_K S \succeq_K -yV \}.$$

Proof. The theorem follows directly from Propositions 5 and 6 and Theorems 1 and 2. □

From Theorem 3, for any cone $K$, we select $V$ in order to minimize $\gamma_{K,V}$, i.e.,

$$\gamma_K = \min_{V \succ_K 0} \gamma_{K,V}.$$

We next show that the smallest parameter $\gamma$ is $\sqrt 2$ and $\sqrt m$ for SOCP and SDP, respectively. For the second order cone, $K = L^{n+1}$,

$$L^{n+1} = \left\{ x \in \Re^{n+1} : \|\bar x\|_2 \le x_{n+1} \right\},$$

where $\bar x = (x_1, \ldots, x_n)$. The induced norm is given by

$$\begin{aligned} \|x\|_{L^{n+1}, v} &= \min\{ y : yv \succeq_{L^{n+1}} x \succeq_{L^{n+1}} -yv \} \\ &= \min\{ y : \|\bar x + \bar v y\|_2 \le v_{n+1} y + x_{n+1},\; \|\bar x - \bar v y\|_2 \le v_{n+1} y - x_{n+1} \} \end{aligned}$$

and

$$\gamma_{L^{n+1}, v} = \left( \max_{\|x\|_2 = 1} \|x\|_{L^{n+1}, v} \right) \left( \max_{\|x\|_{L^{n+1}, v} = 1} \|x\|_2 \right).$$

For the symmetric positive semidefinite cone, $K = S^m_+$,

$$\|X\|_{S^m_+, V} = \min\{ y : yV \succeq X \succeq -yV \}, \qquad \gamma_{S^m_+, V} = \left( \max_{\langle X, X \rangle = 1} \|X\|_{S^m_+, V} \right) \left( \max_{\|X\|_{S^m_+, V} = 1} \sqrt{\langle X, X \rangle} \right).$$
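A small numerical sketch (numpy, with random illustrative data) of the induced norm on the positive semidefinite cone just defined: it uses the identity $\|X\|_{S^m_+, V} = \|V^{-1/2} X V^{-1/2}\|_2$, established in the proof of Proposition 7 below, together with the fact $\|V\|_{K,V} = 1$ used there.

```python
# A sketch (numpy, random illustrative data) of the induced norm on S^m_+:
# ||X||_{S^m_+, V} = min{y : yV >= X >= -yV} = ||V^{-1/2} X V^{-1/2}||_2,
# and the fact ||V||_{K,V} = 1.
import numpy as np

rng = np.random.default_rng(5)
m = 5
Q = np.linalg.qr(rng.standard_normal((m, m)))[0]
V = Q @ np.diag(rng.uniform(0.5, 2.0, m)) @ Q.T     # a point in the interior of S^m_+
w, U = np.linalg.eigh(V)
W = U @ np.diag(w ** -0.5) @ U.T                    # W = V^{-1/2}

Z = rng.standard_normal((m, m)); X = (Z + Z.T) / 2
norm_KV = lambda S: np.linalg.norm(W @ S @ W, 2)
print(norm_KV(X))
assert np.isclose(norm_KV(V), 1.0)
```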

Proposition 7. We have:
(a) For the second order cone, $\gamma_{L^{n+1}, v} \ge \sqrt 2$ for all $v \succ_{L^{n+1}} 0$ (i.e., $\|\bar v\|_2 < v_{n+1}$), with equality holding for $v = (0, \ldots, 0, 1)$.
(b) For the symmetric positive semidefinite cone, $\gamma_{S^m_+, V} \ge \sqrt m$ for all $V \succ 0$, with equality holding for $V = I$.

Proof. (a) For any $V \succ_K 0$, we observe that

$$\|V\|_{K, V} = \min\{ y : yV \succeq_K V \succeq_K -yV \} = 1.$$

Otherwise, if $\|V\|_{K,V} < 1$, there would exist $y < 1$ such that $yV \succeq_K V$, which implies that $-(1 - y)V \succeq_K 0$, contradicting $V \succ_K 0$. Hence, $\|v\|_{L^{n+1}, v} = 1$ and we obtain

$$\max_{\|x\|_{L^{n+1}, v} = 1} \|x\|_2 \ge \|v\|_2.$$

Likewise, when $\bar x = \bar v / (\sqrt 2\, \|\bar v\|_2)$ and $x_{n+1} = -1/\sqrt 2$, so that $\|x\|_2 = 1$, we can verify that the inequalities

$$\left\| \frac{\bar v}{\sqrt 2\, \|\bar v\|_2} + \bar v y \right\|_2 \le v_{n+1} y - \frac{1}{\sqrt 2}, \qquad \left\| \frac{\bar v}{\sqrt 2\, \|\bar v\|_2} - \bar v y \right\|_2 \le v_{n+1} y + \frac{1}{\sqrt 2}$$

hold if and only if $y \ge \sqrt 2 / (v_{n+1} - \|\bar v\|_2)$. Hence, $\|x\|_{L^{n+1}, v} = \sqrt 2/(v_{n+1} - \|\bar v\|_2)$ and we obtain

$$\max_{\|x\|_2 = 1} \|x\|_{L^{n+1}, v} \ge \frac{\sqrt 2}{v_{n+1} - \|\bar v\|_2}.$$

Therefore, since $0 < v_{n+1} - \|\bar v\|_2 \le v_{n+1} \le \|v\|_2$, we have

$$\gamma_{L^{n+1}, v} = \left( \max_{\|x\|_2 = 1} \|x\|_{L^{n+1}, v} \right)\left( \max_{\|x\|_{L^{n+1}, v} = 1} \|x\|_2 \right) \ge \frac{\sqrt 2\, \|v\|_2}{v_{n+1} - \|\bar v\|_2} \ge \sqrt 2.$$

When $v = (0, \ldots, 0, 1)$, we have $\|x\|_{L^{n+1}, v} = \|\bar x\|_2 + |x_{n+1}|$, and from Proposition 4(b) the bound is achieved. Hence, $\gamma_{L^{n+1}} = \sqrt 2$.

(b) Since $V \succ 0$ is an invertible matrix, we observe that

$$\|X\|_{S^m_+, V} = \min\{ y : yV \succeq X \succeq -yV \} = \min\left\{ y : yI \succeq V^{-1/2} X V^{-1/2} \succeq -yI \right\} = \left\| V^{-1/2} X V^{-1/2} \right\|_2.$$

For any $V \succ 0$, let $X = V$; then $\|X\|_{S^m_+, V} = 1$ and $\langle X, X \rangle = \operatorname{trace}(VV) = \|\lambda\|_2^2$, where $\lambda \in \Re^m$ is the vector of eigenvalues of the matrix $V$. Hence,

$$\max_{\|X\|_{S^m_+, V} = 1} \sqrt{\langle X, X \rangle} \ge \|\lambda\|_2.$$

Without loss of generality, let $\lambda_1$ be the smallest eigenvalue of $V$, with corresponding normalized eigenvector $q_1$. Now let $X = q_1 q_1'$. Observe that

$$\langle X, X \rangle = \operatorname{trace}(q_1 q_1' q_1 q_1') = \operatorname{trace}(q_1 q_1') = 1.$$

We can express the matrix $V$ in its spectral decomposition, $V = \sum_j \lambda_j q_j q_j'$. Hence,

$$\|X\|_{S^m_+, V} = \left\| \left( \sum_j \lambda_j^{-1/2} q_j q_j' \right) q_1 q_1' \left( \sum_j \lambda_j^{-1/2} q_j q_j' \right) \right\|_2 = \left\| \lambda_1^{-1} q_1 q_1' \right\|_2 = \frac{1}{\lambda_1}.$$

Therefore, we establish that

$$\max_{\langle X, X \rangle = 1} \|X\|_{S^m_+, V} \ge \frac{1}{\lambda_1}.$$

Combining the results, we have

$$\gamma_{S^m_+, V} = \left( \max_{\langle X, X \rangle = 1} \|X\|_{S^m_+, V} \right)\left( \max_{\|X\|_{S^m_+, V} = 1} \sqrt{\langle X, X \rangle} \right) \ge \frac{\|\lambda\|_2}{\lambda_1} \ge \sqrt m,$$

where the last inequality follows from $\|\lambda\|_2 \ge \sqrt m\, \lambda_1$. When $V = I$, $\|X\|_{S^m_+, I} = \|X\|_2$, and from Proposition 4(d) the bound is achieved. Hence, $\gamma_{S^m_+} = \sqrt m$. □

6 Conclusions

We proposed a relaxed robust counterpart for general conic optimization problems that we believe achieves the objectives outlined in the introduction, namely:

(a) It preserves the computational tractability of the nominal problem. Specifically, the robust conic optimization problem retains its original structure, i.e., robust LPs remain LPs, robust SOCPs remain SOCPs and robust SDPs remain SDPs. Moreover, the size of the proposed robust problem, especially under the $l_2$ norm, is practically the same as that of the nominal problem.

(b) It allows us to provide a guarantee on the probability that the robust solution is feasible, when the uncertain coefficients obey independent and identically distributed normal distributions.

A Simplified formulation under independent uncertainties

In this section, we show that if each data entry of the model has independent uncertainty, we can substantially reduce the size of the robust formulation (13). We focus on the equivalent representation (12),

$$f(x, D^0) \ge \Omega y, \qquad \|s\|^* \le y,$$

where $s_j = \max\{-f(x, \Delta D^j), -f(x, -\Delta D^j)\} = g(x, \Delta D^j)$ for $j \in N$.

Proposition 8. For LP, QCQP, SOCP(1), SOCP(2) and SDP, we can express $s_j = |d_j x_{i(j)}|$, where the $d_j$, $j \in N$, are constants and the function $i : N \to \{0, 1, \ldots, n\}$ maps $j \in N$ to the index of the corresponding variable. We define $x_0 = 1$ to address the case in which $s_j$ is not variable dependent.

Proof. We associate the $j$th data entry, $j \in N$, with an iid random variable $\tilde z_j$. The corresponding expression of $g(x, \Delta D^j)$ is shown in Table 3.

(a) LP: Uncertain LP data is represented as $\tilde D = (\tilde a, \tilde b)$, where

$$\tilde a_j = a_j + \Delta a_j \tilde z_j, \quad j = 1, \ldots, n; \qquad \tilde b = b + \Delta b\, \tilde z_{n+1}.$$

We have $|N| = n + 1$ and

$$s_j = |\Delta a_j x_j|, \quad j = 1, \ldots, n; \qquad s_{n+1} = |\Delta b|.$$

(b) QCQP: Uncertain QCQP data is represented as $\tilde D = (\tilde A, \tilde b, \tilde c)$, where

$$\begin{aligned} \tilde A_{kj} &= A_{kj} + \Delta A_{kj}\, \tilde z_{n(k-1)+j}, && j = 1, \ldots, n,\ k = 1, \ldots, l, \\ \tilde b_j &= b_j + \Delta b_j\, \tilde z_{nl+j}, && j = 1, \ldots, n, \\ \tilde c &= c + \Delta c\, \tilde z_{n(l+1)+1}. && \end{aligned}$$

We have $|N| = n(l+1) + 1$ and

$$s_{n(k-1)+j} = |\Delta A_{kj} x_j|,\ \ j = 1, \ldots, n,\ k = 1, \ldots, l; \qquad s_{nl+j} = |\Delta b_j x_j|,\ \ j = 1, \ldots, n; \qquad s_{n(l+1)+1} = |\Delta c|.$$

(c) SOCP(1)/SOCP(2): Uncertain SOCP(2) data is represented as $\tilde D = (\tilde A, \tilde b, \tilde c, \tilde d)$, where

$$\begin{aligned} \tilde A_{kj} &= A_{kj} + \Delta A_{kj}\, \tilde z_{n(k-1)+j}, && j = 1, \ldots, n,\ k = 1, \ldots, l, \\ \tilde b_k &= b_k + \Delta b_k\, \tilde z_{nl+k}, && k = 1, \ldots, l, \\ \tilde c_j &= c_j + \Delta c_j\, \tilde z_{(n+1)l+j}, && j = 1, \ldots, n, \\ \tilde d &= d + \Delta d\, \tilde z_{(n+1)l+n+1}. && \end{aligned}$$

We have $|N| = (n+1)l + n + 1$ and

$$\begin{aligned} s_{n(k-1)+j} &= |\Delta A_{kj} x_j|, && j = 1, \ldots, n,\ k = 1, \ldots, l, \\ s_{nl+k} &= |\Delta b_k|, && k = 1, \ldots, l, \\ s_{(n+1)l+j} &= |\Delta c_j x_j|, && j = 1, \ldots, n, \\ s_{(n+1)l+n+1} &= |\Delta d|. && \end{aligned}$$

Note that SOCP(1) is a special case of SOCP(2), for which $|N| = (n+1)l$, that is, $s_j = 0$ for all $j > (n+1)l$.

(d) SDP: Uncertain SDP data is represented as $\tilde D = (\tilde A_1, \ldots, \tilde A_n, \tilde B)$, where

$$\tilde A_i = A_i + \sum_{k=1}^m \sum_{j=1}^k [\Delta A_i]_{jk}\, I^{jk}\, \tilde z_{p(i,j,k)},\ \ i = 1, \ldots, n, \qquad \tilde B = B + \sum_{k=1}^m \sum_{j=1}^k [\Delta B]_{jk}\, I^{jk}\, \tilde z_{p(n+1,j,k)},$$

where the index function $p(i, j, k) = (i-1)\frac{m(m+1)}{2} + \frac{k(k-1)}{2} + j$, and the symmetric matrix $I^{jk} \in \Re^{m \times m}$ satisfies

$$I^{jk} = \begin{cases} e_j e_k' + e_k e_j' & \text{if } j \ne k, \\ e_k e_k' & \text{if } j = k, \end{cases}$$

$e_k$ being the $k$th unit vector. Hence, $|N| = (n+1)\frac{m(m+1)}{2}$. Note that if $j = k$, $\|I^{jk}\|_2 = 1$. Otherwise, $I^{jk}$ has rank 2, and $(e_j + e_k)/\sqrt 2$ and $(e_j - e_k)/\sqrt 2$ are two eigenvectors of $I^{jk}$ with corresponding eigenvalues $1$ and $-1$. Hence, $\|I^{jk}\|_2 = 1$ for all valid indices $j$ and $k$. Therefore, we have

$$s_{p(i,j,k)} = |[\Delta A_i]_{jk}\, x_i| \quad \forall i \in \{1, \ldots, n\},\ j, k \in \{1, \ldots, m\},\ j \le k; \qquad s_{p(n+1,j,k)} = |[\Delta B]_{jk}| \quad \forall j, k \in \{1, \ldots, m\},\ j \le k. \qquad \square$$

We define the set $J(l) = \{ j \in N : i(j) = l \}$ for $l \in \{0, 1, \ldots, n\}$. Using Table 2, we obtain the following robust formulations under the different norms defining the restriction set $\mathcal V$ of Eq. (8).

(a) $l_\infty$-norm: The constraint $\|s\|^* \le y$ for the $l_\infty$-norm is equivalent to

$$\sum_{j\in N} |d_j x_{i(j)}| \le y,$$

or

$$\sum_{j \in J(0)} |d_j| + \sum_{l=1}^n \left( \sum_{j \in J(l)} |d_j| \right) t_l \le y, \qquad t \ge x,\quad t \ge -x,\quad t \in \Re^n.$$

We introduce an additional $n + 1$ variables, including the variable $y$, and $2n + 1$ linear constraints to the nominal problem.

(b) $l_1$-norm: The constraint $\|s\|^* \le y$ for the $l_1$-norm is equivalent to

$$\max_{j\in N} |d_j x_{i(j)}| \le y,$$

or

$$\max_{j \in J(0)} |d_j| \le y, \qquad \left( \max_{j\in J(l)} |d_j| \right) x_l \le y, \quad -\left( \max_{j\in J(l)} |d_j| \right) x_l \le y, \quad l = 1, \ldots, n.$$

We introduce an additional variable and $2n + 1$ linear constraints to the nominal problem.

(c) $l_1 \cap l_\infty$-norm: The constraint $\|s\|^* \le y$ for the $l_1 \cap l_\infty$-norm is equivalent to

$$\begin{array}{l} t_j \ge |d_j|\, x_{i(j)}, \quad t_j \ge -|d_j|\, x_{i(j)}, \quad j \in N, \\ \Gamma p + \sum_{j\in N} r_j \le y, \qquad r_j + p \ge t_j \quad \forall j \in N, \\ r \in \Re_+^{|N|}, \quad t \in \Re^{|N|}, \quad p \in \Re_+, \end{array}$$

leading to an additional $2|N| + 2$ variables and $4|N| + 2$ linear constraints, including the nonnegativity constraints, to the nominal problem.

(d) $l_2$-norm: The constraint $\|s\|^* \le y$ for the $l_2$-norm is equivalent to

$$\sqrt{\sum_{j\in N} \left( d_j x_{i(j)} \right)^2} \le y, \qquad \text{or} \qquad \sqrt{\; \sum_{j \in J(0)} d_j^2 + \sum_{l=1}^n \left( \sum_{j\in J(l)} d_j^2 \right) x_l^2 } \le y.$$

We only introduce an additional variable, $y$, and one SOCP constraint to the nominal problem.

(e) $l_2 \cap l_\infty$-norm: The constraint $\|s\|^* \le y$ for the $l_2 \cap l_\infty$-norm is equivalent to

$$\begin{array}{l} t_j \ge |d_j|\, x_{i(j)}, \quad t_j \ge -|d_j|\, x_{i(j)}, \quad j \in N, \\ \|r - t\|_2 + \Delta \sum_{j\in N} r_j \le y, \qquad t \in \Re^{|N|}, \quad r \in \Re_+^{|N|}. \end{array}$$

We introduce $2|N| + 1$ additional variables, one SOCP constraint and $3|N|$ linear constraints, including the nonnegativity constraints, to the nominal problem.

In Table 4, we summarize the size increase and the nature of the robust model for the different choices of the given norm.
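As an illustration of case (d), the following sketch (cvxpy, with small random illustrative data and an arbitrary objective) imposes the robust counterpart of a single uncertain LP constraint with independent entrywise perturbations as one second order cone constraint.

```python
# An illustration of case (d) (cvxpy, random illustrative data, arbitrary objective):
# for a'x >= b with independent entrywise perturbations a_j + Delta a_j z_j and
# b + Delta b z_{n+1}, representation (12) collapses to the single SOC constraint
# a0'x - b0 >= Omega * sqrt(sum_j (Delta a_j x_j)^2 + Delta b^2).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n, Omega = 4, 2.0
a0, b0 = rng.standard_normal(n), -5.0
da, db = 0.1 * np.abs(rng.standard_normal(n)), 0.1   # Delta a, Delta b

x = cp.Variable(n)
soc = a0 @ x - b0 >= Omega * cp.norm2(cp.hstack([cp.multiply(da, x), np.array([db])]))
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - 1)), [soc])
prob.solve()
print(x.value)
```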

References

[1] Ben-Tal, A., Nemirovski, A. (1998): Robust convex optimization, Math. Oper. Res., 23, 769-805.

[2] Ben-Tal, A., Nemirovski, A. (1998): On the quality of SDP approximations of uncertain SDP programs, Research Report #4/98, Optimization Laboratory, Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology, Israel.

[3] Ben-Tal, A., Nemirovski, A. (1999): Robust solutions to uncertain programs, Oper. Res. Lett., 25, 1-13.

[4] Ben-Tal, A., Nemirovski, A. (2000): Robust solutions of linear programming problems contaminated with uncertain data, Math. Progr., 88, 411-424.

[5] Ben-Tal, A., Nemirovski, A. (2001): Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, MPS-SIAM Series on Optimization, SIAM, Philadelphia.

[6] Ben-Tal, A., El Ghaoui, L., Nemirovski, A. (2000): Robust semidefinite programming, in Saigal, R., Vandenberghe, L., Wolkowicz, H., eds., Semidefinite Programming and Applications, Kluwer Academic Publishers.

[7] Bertsimas, D., Pachamanova, D., Sim, M. (2003): Robust linear optimization under general norms, to appear in Oper. Res. Lett.

[8] Bertsimas, D., Sim, M. (2004): The price of robustness, Oper. Res., 52(1), 35-53.

[9] Bertsimas, D., Sim, M. (2003): Robust discrete optimization and network flows, Math. Progr., 98, 49-71.

[10] Birge, J. R., Louveaux, F. (1997): Introduction to Stochastic Programming, Springer, New York.

[11] El Ghaoui, L., Lebret, H. (1997): Robust solutions to least-squares problems with uncertain data matrices, SIAM J. Matrix Anal. Appl., 18, 1035-1064.

[12] El Ghaoui, L., Oustry, F., Lebret, H. (1998): Robust solutions to uncertain semidefinite programs, SIAM J. Optim., 9, 33-52.

[13] Renegar, J. (2001): A Mathematical View of Interior-Point Methods in Convex Optimization, MPS-SIAM Series on Optimization, SIAM, Philadelphia.

[14] Soyster, A.L. (1973): Convex programming with set-inclusive constraints and applications to inexact linear programming, Oper. Res., 21, 1154-1157.


15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

4. Convex optimization problems

4. Convex optimization problems Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization

More information

APPROXIMATING THE COMPLEXITY MEASURE OF. Levent Tuncel. November 10, C&O Research Report: 98{51. Abstract

APPROXIMATING THE COMPLEXITY MEASURE OF. Levent Tuncel. November 10, C&O Research Report: 98{51. Abstract APPROXIMATING THE COMPLEXITY MEASURE OF VAVASIS-YE ALGORITHM IS NP-HARD Levent Tuncel November 0, 998 C&O Research Report: 98{5 Abstract Given an m n integer matrix A of full row rank, we consider the

More information

BCOL RESEARCH REPORT 07.04

BCOL RESEARCH REPORT 07.04 BCOL RESEARCH REPORT 07.04 Industrial Engineering & Operations Research University of California, Berkeley, CA 94720-1777 LIFTING FOR CONIC MIXED-INTEGER PROGRAMMING ALPER ATAMTÜRK AND VISHNU NARAYANAN

More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem

A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem Dimitris J. Bertsimas Dan A. Iancu Pablo A. Parrilo Sloan School of Management and Operations Research Center,

More information

Optimization in. Stephen Boyd. 3rd SIAM Conf. Control & Applications. and Control Theory. System. Convex

Optimization in. Stephen Boyd. 3rd SIAM Conf. Control & Applications. and Control Theory. System. Convex Optimization in Convex and Control Theory System Stephen Boyd Engineering Department Electrical University Stanford 3rd SIAM Conf. Control & Applications 1 Basic idea Many problems arising in system and

More information

j=1 [We will show that the triangle inequality holds for each p-norm in Chapter 3 Section 6.] The 1-norm is A F = tr(a H A).

j=1 [We will show that the triangle inequality holds for each p-norm in Chapter 3 Section 6.] The 1-norm is A F = tr(a H A). Math 344 Lecture #19 3.5 Normed Linear Spaces Definition 3.5.1. A seminorm on a vector space V over F is a map : V R that for all x, y V and for all α F satisfies (i) x 0 (positivity), (ii) αx = α x (scale

More information

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems

On the Power of Robust Solutions in Two-Stage Stochastic and Adaptive Optimization Problems MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 00x, pp. xxx xxx ISSN 0364-765X EISSN 156-5471 0x xx0x 0xxx informs DOI 10.187/moor.xxxx.xxxx c 00x INFORMS On the Power of Robust Solutions in

More information

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

The Value of Adaptability

The Value of Adaptability The Value of Adaptability Dimitris Bertsimas Constantine Caramanis September 30, 2005 Abstract We consider linear optimization problems with deterministic parameter uncertainty. We consider a departure

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Convex Optimization Fourth lecture, 05.05.2010 Jun.-Prof. Matthias Hein Reminder from last time Convex functions: first-order condition: f(y) f(x) + f x,y x, second-order

More information

Convex Optimization in Classification Problems

Convex Optimization in Classification Problems New Trends in Optimization and Computational Algorithms December 9 13, 2001 Convex Optimization in Classification Problems Laurent El Ghaoui Department of EECS, UC Berkeley elghaoui@eecs.berkeley.edu 1

More information

l p -Norm Constrained Quadratic Programming: Conic Approximation Methods

l p -Norm Constrained Quadratic Programming: Conic Approximation Methods OUTLINE l p -Norm Constrained Quadratic Programming: Conic Approximation Methods Wenxun Xing Department of Mathematical Sciences Tsinghua University, Beijing Email: wxing@math.tsinghua.edu.cn OUTLINE OUTLINE

More information

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad

1 Introduction Semidenite programming (SDP) has been an active research area following the seminal work of Nesterov and Nemirovski [9] see also Alizad Quadratic Maximization and Semidenite Relaxation Shuzhong Zhang Econometric Institute Erasmus University P.O. Box 1738 3000 DR Rotterdam The Netherlands email: zhang@few.eur.nl fax: +31-10-408916 August,

More information

LIGHT ROBUSTNESS. Matteo Fischetti, Michele Monaci. DEI, University of Padova. 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007

LIGHT ROBUSTNESS. Matteo Fischetti, Michele Monaci. DEI, University of Padova. 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007 LIGHT ROBUSTNESS Matteo Fischetti, Michele Monaci DEI, University of Padova 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007 Work supported by the Future and Emerging Technologies

More information

Introduction to Linear Algebra. Tyrone L. Vincent

Introduction to Linear Algebra. Tyrone L. Vincent Introduction to Linear Algebra Tyrone L. Vincent Engineering Division, Colorado School of Mines, Golden, CO E-mail address: tvincent@mines.edu URL: http://egweb.mines.edu/~tvincent Contents Chapter. Revew

More information

An E cient A ne-scaling Algorithm for Hyperbolic Programming

An E cient A ne-scaling Algorithm for Hyperbolic Programming An E cient A ne-scaling Algorithm for Hyperbolic Programming Jim Renegar joint work with Mutiara Sondjaja 1 Euclidean space A homogeneous polynomial p : E!R is hyperbolic if there is a vector e 2E such

More information

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION   henrion COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F

More information

Chater Matrix Norms and Singular Value Decomosition Introduction In this lecture, we introduce the notion of a norm for matrices The singular value de

Chater Matrix Norms and Singular Value Decomosition Introduction In this lecture, we introduce the notion of a norm for matrices The singular value de Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A Dahleh George Verghese Deartment of Electrical Engineering and Comuter Science Massachuasetts Institute of Technology c Chater Matrix Norms

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

Robust Least Squares and Applications Laurent El Ghaoui and Herve Lebret Ecole Nationale Superieure de Techniques Avancees 3, Bd. Victor, 7739 Paris, France (elghaoui, lebret)@ensta.fr Abstract We consider

More information

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs 2015 American Control Conference Palmer House Hilton July 1-3, 2015. Chicago, IL, USA Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca and Eilyan Bitar

More information

Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems

Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems V. Jeyakumar and G. Li Revised Version:August 31, 2012 Abstract An exact semidefinite linear programming (SDP) relaxation

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials G. Y. Li Communicated by Harold P. Benson Abstract The minimax theorem for a convex-concave bifunction is a fundamental theorem

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization

Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region

More information

Assignment 1: From the Definition of Convexity to Helley Theorem

Assignment 1: From the Definition of Convexity to Helley Theorem Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x

More information

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992.

Perturbation results for nearly uncoupled Markov. chains with applications to iterative methods. Jesse L. Barlow. December 9, 1992. Perturbation results for nearly uncoupled Markov chains with applications to iterative methods Jesse L. Barlow December 9, 992 Abstract The standard perturbation theory for linear equations states that

More information

Theory and applications of Robust Optimization

Theory and applications of Robust Optimization Theory and applications of Robust Optimization Dimitris Bertsimas, David B. Brown, Constantine Caramanis May 31, 2007 Abstract In this paper we survey the primary research, both theoretical and applied,

More information

Robust Scheduling with Logic-Based Benders Decomposition

Robust Scheduling with Logic-Based Benders Decomposition Robust Scheduling with Logic-Based Benders Decomposition Elvin Çoban and Aliza Heching and J N Hooker and Alan Scheller-Wolf Abstract We study project scheduling at a large IT services delivery center

More information

LOWELL JOURNAL. LOWSLL, MICK., WSSDNaSDAY", FEB 1% 1894:

LOWELL JOURNAL. LOWSLL, MICK., WSSDNaSDAY, FEB 1% 1894: ? jpv J V# 33 K DDY % 9 D Y D z K x p * pp J - jj Q D V Q< - j «K * j G»D K* G 3 zz DD V V K Y p K K 9 G 3 - G p! ( - p j p p p p Q 3 p ZZ - p- pp p- D p pp z p p pp G pp G pp J D p G z p _ YK G D xp p-

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Department of Mathematics Technical Report May 2000 ABSTRACT. for any matrix norm that is reduced by a pinching. In addition to known

Department of Mathematics Technical Report May 2000 ABSTRACT. for any matrix norm that is reduced by a pinching. In addition to known University of Kentucky Lexington Department of Mathematics Technical Report 2000-23 Pinchings and Norms of Scaled Triangular Matrices 1 Rajendra Bhatia 2 William Kahan 3 Ren-Cang Li 4 May 2000 ABSTRACT

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Lifting for conic mixed-integer programming

Lifting for conic mixed-integer programming Math. Program., Ser. A DOI 1.17/s117-9-282-9 FULL LENGTH PAPER Lifting for conic mixed-integer programming Alper Atamtürk Vishnu Narayanan Received: 13 March 28 / Accepted: 28 January 29 The Author(s)

More information

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection Ruken Düzgün Aurélie Thiele July 2012 Abstract This paper describes a multi-range robust optimization approach

More information

IE 521 Convex Optimization

IE 521 Convex Optimization Lecture 14: and Applications 11th March 2019 Outline LP SOCP SDP LP SOCP SDP 1 / 21 Conic LP SOCP SDP Primal Conic Program: min c T x s.t. Ax K b (CP) : b T y s.t. A T y = c (CD) y K 0 Theorem. (Strong

More information

15-780: LinearProgramming

15-780: LinearProgramming 15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear

More information

Robust l 1 and l Solutions of Linear Inequalities

Robust l 1 and l Solutions of Linear Inequalities Available at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Applications and Applied Mathematics: An International Journal (AAM) Vol. 6, Issue 2 (December 2011), pp. 522-528 Robust l 1 and l Solutions

More information

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE BRANKO CURGUS and BRANKO NAJMAN Denitizable operators in Krein spaces have spectral properties similar to those of selfadjoint operators in Hilbert spaces.

More information

CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu

CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu CS598 Machine Learning in Computational Biology (Lecture 5: Matrix - part 2) Professor Jian Peng Teaching Assistant: Rongda Zhu Feature engineering is hard 1. Extract informative features from domain knowledge

More information

Scientiae Mathematicae Vol. 2, No. 3(1999), 263{ OPERATOR INEQUALITIES FOR SCHWARZ AND HUA JUN ICHI FUJII Received July 1, 1999 Abstract. The g

Scientiae Mathematicae Vol. 2, No. 3(1999), 263{ OPERATOR INEQUALITIES FOR SCHWARZ AND HUA JUN ICHI FUJII Received July 1, 1999 Abstract. The g Scientiae Mathematicae Vol. 2, No. 3(1999), 263{268 263 OPERATOR INEQUALITIES FOR SCHWARZ AND HUA JUN ICHI FUJII Received July 1, 1999 Abstract. The geometric operator mean gives us an operator version

More information

Lecture 2: Review of Prerequisites. Table of contents

Lecture 2: Review of Prerequisites. Table of contents Math 348 Fall 217 Lecture 2: Review of Prerequisites Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In this

More information

A Control Methodology for Constrained Linear Systems Based on Positive Invariance of Polyhedra

A Control Methodology for Constrained Linear Systems Based on Positive Invariance of Polyhedra A Control Methodology for Constrained Linear Systems Based on Positive Invariance of Polyhedra Jean-Claude HENNET LAAS-CNRS Toulouse, France Co-workers: Marina VASSILAKI University of Patras, GREECE Jean-Paul

More information

A Robust von Neumann Minimax Theorem for Zero-Sum Games under Bounded Payoff Uncertainty

A Robust von Neumann Minimax Theorem for Zero-Sum Games under Bounded Payoff Uncertainty A Robust von Neumann Minimax Theorem for Zero-Sum Games under Bounded Payoff Uncertainty V. Jeyakumar, G.Y. Li and G. M. Lee Revised Version: January 20, 2011 Abstract The celebrated von Neumann minimax

More information