Supplementary Material


EC.1. Measures in $D(S, \mu, \Psi)$ Are Less Dispersed

Lemma EC.1. Given a distributional set $D(S, \mu, \Psi)$ with $S$ convex, and a random vector $\xi$ whose distribution $F \in D(S, \mu, \Psi)$, for all $0 \le \alpha \le 1$ the random vector $\zeta := \alpha(\xi - \mu) + \mu$ also has a distribution that lies in $D(S, \mu, \Psi)$.

Proof: We simply need to verify systematically the conditions imposed on the members of $D(S, \mu, \Psi)$. First,
$$P_F(\zeta \in S) = P_F(\alpha\xi + (1-\alpha)\mu \in S) = 1,$$
since $P_F(\xi \in S) = 1$, $\mu \in S$, and $S$ is a convex set. Second, it is easy to see that $E_F[\zeta] = E_F[\alpha\xi + (1-\alpha)\mu] = \mu$. Finally, for any $\psi(\cdot) \in \Psi$, we have that
$$E_F[\psi(\alpha\xi + (1-\alpha)\mu)] \le E_F[\alpha\psi(\xi) + (1-\alpha)\psi(\mu)] = \alpha E_F[\psi(\xi)] + (1-\alpha)\psi(\mu) \le E_F[\psi(\xi)] \le 0,$$
where we applied Jensen's inequality in the first two upper-bounding steps.

EC.2. Concavity of $h(x, \xi)$ in $\xi$

Consider the second-stage stochastic minimization problem
$$h(x, \xi) := \min_{y} \; f(x, y, \xi) \quad \text{s.t.} \quad y \in Y(x),$$
where $Y(x)$ is any given closed and bounded set-valued function of $x$. Here we make no assumption that the set $Y(x)$ is a polyhedral or even a convex set. We assume that $f(x, y, \xi)$ is a concave function of the uncertain parameter vector $\xi$. Note that this assumption has been made in most previous stochastic and robust optimization models (see Ben-Tal and Nemirovski (1998)).

Lemma EC.2. The minimum value function $h(x, \xi)$ is a concave function of $\xi$.

This lemma implies that our Proposition 1 is also applicable to most linear and nonlinear two-stage optimization problems. We omit a detailed proof of this lemma, as it is well known that the pointwise infimum of an arbitrary family of concave functions is itself concave (see Boyd and Vandenberghe (2004) for more details).
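To see the three conditions of Lemma EC.1 in action, here is a minimal Monte Carlo sketch; the box support, the mean, and the convex test function $\psi$ below are illustrative assumptions rather than data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, alpha = 3, 200_000, 0.5
mu = np.zeros(d)
# S = [-1, 1]^3 is convex and contains mu; sample xi uniformly on S
xi = rng.uniform(-1.0, 1.0, size=(n, d))
zeta = alpha * (xi - mu) + mu

# a convex test function with psi(mu) <= 0 (illustrative choice)
psi = lambda z: np.sum(z**2, axis=1) - 1.0

assert np.all(np.abs(zeta) <= 1.0)            # support: zeta stays in S
print(np.abs(zeta.mean(axis=0)).max())        # ~0: the mean is preserved
print(psi(xi).mean(), psi(zeta).mean())       # E[psi(zeta)] <= E[psi(xi)] <= 0
```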

EC.3. Proof of Proposition 2

We follow steps similar to those in the proof of Proposition 1. We first underline the fact that implementing the MVP solution, a policy that does not adapt to the sequence of observable $\xi_t$, leads to a worst-case expected cost that is equal to the optimal value of the MVP problem, which we refer to as $V_{MVP}$:
$$\sup_{F \in D(\mu_{[1:T]})} E_F\left[\sum_{t=1}^T \xi_t^T C_t x_t\right] = \sup_{F \in D(\mu_{[1:T]})} \sum_{t=1}^T E_F[\xi_t]^T C_t x_t = \sum_{t=1}^T \mu_t^T C_t x_t,$$
where $D(\mu_{[1:T]})$ is short for $D(S, [\mu_1, \mu_2, \ldots, \mu_T], \Psi)$. Here, we used the fact that $x_t$ does not adapt to the observed uncertain parameters and the fact that all the distributions in the set that is considered lead to the same expected value for the random vectors. We can therefore say that the optimal value of the distributionally robust multi-stage stochastic program must be no larger than $V_{MVP}$.

Secondly, after verifying that the Dirac measure $\delta_{\mu_{[1:T]}}$ lies in the set $D(\mu_{[1:T]})$ (the argument being the same as in the proof of Proposition 1), one can show that $V_{MVP}$ is actually also a lower bound for the same distributionally robust problem:
$$\min_{(x_1, x_2, \ldots, x_T) \in X \text{ a.s.}} \; \sup_{F \in D(\mu_{[1:T]})} E_F\left[\sum_{t=1}^T \xi_t^T C_t x_t(\xi_{[1:t]})\right] \ge \min_{(x_1, x_2, \ldots, x_T) \in X \text{ a.s.}} \; E_{\delta_{\mu_{[1:T]}}}\left[\sum_{t=1}^T \xi_t^T C_t x_t(\xi_{[1:t]})\right]$$
$$= \min_{(x_1, x_2, \ldots, x_T) \in X \text{ a.s.}} \; \sum_{t=1}^T \mu_t^T C_t x_t(\mu_{[1:t]}) = V_{MVP}.$$

Hence, we conclude that the MVP solution is an optimal solution for the distributionally robust multi-stage stochastic program under $D(\mu_{[1:T]})$.

EC.4. Proof of Proposition 4

The proof consists of showing that the sequence of distributions $F_1, F_2, \ldots$, with
$$F_k(\xi) = (1 - (1+\beta_k)^{-1})\,\delta_\mu(\xi) + (1+\beta_k)^{-1}\,G_k(\xi),$$
satisfies $E_{F_k}[\psi(\xi)] = 0$ for all $\psi \in \Psi$, and that given any $\epsilon > 0$, one can choose $k$ large enough so that $F_k$ satisfies:
$$P_{F_k}(\xi \in S) \ge 1 - \epsilon, \qquad \|E_{F_k}[\xi] - \mu\| \le \epsilon.$$
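The first step of this proof, that a non-adaptive policy's expected cost depends on $F$ only through the means, is easy to confirm numerically; the following sketch uses made-up stage costs and two deliberately different distributions sharing the same means.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, n = 4, 3, 400_000
C = rng.normal(size=(T, d, d))       # made-up stage cost matrices C_t
x = rng.normal(size=(T, d))          # a fixed policy that ignores xi
mu = rng.normal(size=(T, d))         # stage means mu_t

def avg_cost(xi):
    # Monte Carlo estimate of E[sum_t xi_t' C_t x_t]
    return sum(xi[:, t] @ C[t] @ x[t] for t in range(T)).mean()

xi_gauss = mu + rng.normal(size=(n, T, d))           # Gaussian around mu
xi_unif = mu + rng.uniform(-3, 3, size=(n, T, d))    # uniform around mu
exact = sum(mu[t] @ C[t] @ x[t] for t in range(T))
print(avg_cost(xi_gauss), avg_cost(xi_unif), exact)  # all three agree
```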

Based on Condition 1, we can easily show the first part:
$$E_{F_k}[\psi(\xi)] = (1 - (1+\beta_k)^{-1}) E_{\delta_\mu}[\psi(\xi)] + (1+\beta_k)^{-1} E_{G_k}[\psi(\xi)] = (1 - (1+\beta_k)^{-1})\psi(\mu) + (1+\beta_k)^{-1}\beta_k = 0.$$

Based on Condition 2, we also have the property that $E_{G_k}[\|\xi\|] = O(\beta_k^{1/\gamma})$ for some $\gamma > 1$. This is due to the fact that there exist $a > 0$, $b \in \mathbb{R}$, and $\gamma > 1$ such that
$$E_{G_k}[\|\xi\|]^\gamma \le E_{G_k}[\|\xi\|^\gamma] = (1/a)\left(E_{G_k}[b + a\|\xi\|^\gamma] - b\right) \le (1/a)\left(E_{G_k}[\psi_0(\xi)] - b\right) = (1/a)(\beta_k - b),$$
where we used Jensen's inequality and the fact that $\psi_0(\xi) = \sum_i \theta_i \psi_i(\xi)$ for some conical combination of $\psi_i \in \Psi$, allowing us to derive
$$E_{G_k}[\psi_0(\xi)] = E_{G_k}\left[\sum_i \theta_i \psi_i(\xi)\right] = \sum_i \theta_i \beta_k = \beta_k.$$
Hence, we have that
$$E_{G_k}[\|\xi\|] = \left(E_{G_k}[\|\xi\|]^\gamma\right)^{1/\gamma} \le ((1/a)(\beta_k - b))^{1/\gamma} = O(\beta_k^{1/\gamma}).$$

Thus, we can demonstrate that both $P_{F_k}(\xi \in S)$ and $E_{F_k}[\xi]$ will become feasible for $k$ large enough:
$$P_{F_k}(\xi \in S) = (1 - (1+\beta_k)^{-1}) P_{\delta_\mu}(\xi \in S) + (1+\beta_k)^{-1} P_{G_k}(\xi \in S) \ge (1 - (1+\beta_k)^{-1}) P_{\delta_\mu}(\xi \in S) = 1 - (1+\beta_k)^{-1},$$
$$\|E_{F_k}[\xi] - \mu\| \le (1 - (1+\beta_k)^{-1})\|E_{\delta_\mu}[\xi] - \mu\| + (1+\beta_k)^{-1}\|E_{G_k}[\xi] - \mu\| \le (1+\beta_k)^{-1}\left(\|E_{G_k}[\xi]\| + \|\mu\|\right) \le (1+\beta_k)^{-1}\left(E_{G_k}[\|\xi\|] + \|\mu\|\right) = O(\beta_k^{-(1-1/\gamma)}).$$

Ultimately, based on the Lipschitz property of $h(x, \cdot)$ demonstrated in Lemma 1, we can verify that
$$\sup_{F \in D(S,\mu,\Psi,\epsilon)} E_F[h(x, \xi)] \ge \sup_{\{k \,:\, F_k \in D(S,\mu,\Psi,\epsilon)\}} E_{F_k}[h(x, \xi)]$$
$$= \sup_{\{k \,:\, F_k \in D(S,\mu,\Psi,\epsilon)\}} (1 - (1+\beta_k)^{-1}) h(x, \mu) + (1+\beta_k)^{-1} E_{G_k}[h(x, \xi)]$$
$$\ge \sup_{\{k \,:\, F_k \in D(S,\mu,\Psi,\epsilon)\}} (1 - (1+\beta_k)^{-1}) h(x, \mu) + (1+\beta_k)^{-1} E_{G_k}\left[h(x, \mu) - R\,C_2 \|\xi - \mu\|\right]$$
$$\ge \sup_{\{k \,:\, F_k \in D(S,\mu,\Psi,\epsilon)\}} h(x, \mu) - (1+\beta_k)^{-1} R\,C_2 \left(E_{G_k}[\|\xi\|] + \|\mu\|\right)$$
$$= \sup_{\{k \,:\, F_k \in D(S,\mu,\Psi,\epsilon)\}} h(x, \mu) - (1+\beta_k)^{-1} R\,C_2 \|\mu\| - (1+\beta_k)^{-1} R\,C_2\, O(\beta_k^{1/\gamma}) = h(x, \mu),$$
where the last equality follows from letting $k$ grow to infinity, since $\gamma > 1$ ensures that both correction terms vanish.
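The behavior of the mixture $F_k$ can be seen in a short simulation; the choice of $G_k$ (a Gaussian whose spread grows like $\beta_k^{1/\gamma}$ with $\gamma = 2$) and all numbers below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, mu = 2, 500_000, np.array([0.5, -0.5])
inside = lambda z: np.all(np.abs(z) <= 1.0, axis=1)   # S = [-1, 1]^2

for beta in [1.0, 10.0, 100.0, 1000.0]:
    p = 1.0 / (1.0 + beta)                 # weight placed on G_k
    pick = rng.random(n) < p
    xi = np.tile(mu, (n, 1))               # mass (1 - p) sits at mu
    # G_k: a wide Gaussian whose spread grows like beta**(1/gamma), gamma = 2
    xi[pick] = rng.normal(scale=beta**0.5, size=(int(pick.sum()), d))
    # P(xi in S) -> 1 and ||E[xi] - mu|| -> 0 as beta grows
    print(beta, inside(xi).mean(), np.linalg.norm(xi.mean(axis=0) - mu))
```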

We can then derive an approximate upper bound using Proposition 1 and the Lipschitz property of $h(x, \cdot)$:
$$\sup_{F \in D(S,\mu,\Psi,\epsilon)} E_F[h(x, \xi)] \le \sup_{\{z \,:\, \|z - \mu\| \le \epsilon\}} \; \sup_{F \in D(\mathbb{R}^d, z, \Psi)} E_F[h(x, \xi)] = \sup_{\{z \,:\, \|z - \mu\| \le \epsilon\}} h(x, z) \le h(x, \mu) + O(\epsilon).$$

We are left with demonstrating near-optimality of the MVP solution. This is easily done using the following argument, where for conciseness we let $x_{MVP}$ refer to the optimal solution of the MVP problem, $g(x)$ be equal to $\sup_{F \in D(S,\mu,\Psi,\epsilon)} c_1^T x + E_F[h(x, \xi)]$, and $x_g$ be the member of $X$ that minimizes $g(x)$:
$$g(x_{MVP}) - g(x_g) \le c_1^T x_{MVP} + h(x_{MVP}, \mu) + O(\epsilon) - c_1^T x_g - h(x_g, \mu) = \min_{x \in X} \left\{c_1^T x + h(x, \mu)\right\} - \left(c_1^T x_g + h(x_g, \mu)\right) + O(\epsilon) \le O(\epsilon),$$
so that $g(x_{MVP}) \le g(x_g) + O(\epsilon)$.

EC.5. Proof of Lemma 2

We first re-parametrize as $z(k, \delta) := 2\mu + \delta k$. We therefore need to show that for any $k \in \mathbb{R}^d$,
$$g(\delta) := \|\mu + \delta k\|_\alpha^\gamma - (1/2)^{\gamma - 1}\|2\mu + \delta k\|_\alpha^\gamma + \|\mu\|_\alpha^\gamma \ge 0, \quad \forall \delta \in \mathbb{R}.$$
This is done by showing that $g(\delta)$ is decreasing for negative $\delta$'s, achieves the value of zero at $\delta = 0$, and is increasing for positive $\delta$'s. The function $g(\delta)$ therefore achieves its minimum value of zero at $\delta = 0$ for any $k$.

The simplest step is to show that $g(0) = 0$:
$$g(0) = \|\mu\|_\alpha^\gamma - (1/2)^{\gamma - 1}\|2\mu\|_\alpha^\gamma + \|\mu\|_\alpha^\gamma = \|\mu\|_\alpha^\gamma - 2\|\mu\|_\alpha^\gamma + \|\mu\|_\alpha^\gamma = 0.$$

The second step requires us to realize that $g(\delta)$ can be represented in terms of the convex function $g_2(\delta) = \|\mu + \delta k\|_\alpha^\gamma$:
$$g(\delta) = g_2(\delta) - 2 g_2(\delta/2) + \|\mu\|_\alpha^\gamma.$$
One can verify that $g_2(\delta)$ is convex using the fact that it is the composition of the function $y^\gamma$, which is convex and increasing over $y \ge 0$ for $\gamma \ge 1$, with the convex function $\|\mu + \delta k\|_\alpha$. The convexity of $g_2(\delta)$ tells us that $g_2'(\delta) \ge g_2'(\delta/2)$ for $\delta \ge 0$ and that $g_2'(\delta) \le g_2'(\delta/2)$ for $\delta \le 0$. Thus, we can easily verify the properties of the derivative of $g(\delta)$: for $\delta \ge 0$, we have $g'(\delta) = g_2'(\delta) - g_2'(\delta/2) \ge 0$, while for $\delta \le 0$, we have $g'(\delta) = g_2'(\delta) - g_2'(\delta/2) \le 0$. This completes our proof.
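The inequality at the heart of this proof is easy to stress-test numerically; the sketch below samples random instances of $\mu$, $k$, $\gamma$, and $\alpha$ (all made up) and checks $g(\delta) \ge 0$ on a grid.

```python
import numpy as np

rng = np.random.default_rng(3)
deltas = np.linspace(-5.0, 5.0, 2001)    # includes delta = 0 at index 1000
for _ in range(100):
    d = rng.integers(1, 6)
    mu, k = rng.normal(size=d), rng.normal(size=d)
    gamma, alpha = 1.0 + 3.0 * rng.random(), 1.0 + 3.0 * rng.random()
    g2 = lambda t: np.linalg.norm(mu + t * k, ord=alpha) ** gamma
    g = np.array([g2(t) - 2.0 * g2(t / 2.0) + g2(0.0) for t in deltas])
    assert g.min() >= -1e-9 and abs(g[1000]) <= 1e-9    # g >= 0 and g(0) = 0
print("g(delta) >= 0 held on all sampled instances")
```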

EC.6. Proof of Lemma 3

To make this demonstration, we use the fact that $h(x_2, \xi) = x_2 h(1, \xi)$. First, in the case where $x_2 = 0$, we already verified that $h(0, \xi) = 0$. When $x_2 > 0$, one can easily show that $h(x_2, \xi) = x_2 h(1, \xi)$ for all $\xi \in \mathbb{R}^d$ by replacing the inner variable $y$ by $z = y/x_2$. Thus, for all $x_2 \ge 0$, the objective of problem (6) reduces to
$$\sup_{F \in D(\mathbb{R}^d, 0_d, I)} E_F[-c x_2 - h(x_2, \xi)] = \sup_{F \in D(\mathbb{R}^d, 0_d, I)} E_F[(-c - h(1, \xi)) x_2] = \left(\sup_{F \in D(\mathbb{R}^d, 0_d, I)} E_F[-c - h(1, \xi)]\right) x_2.$$

EC.7. Proof of Lemma 4

Based on Lemma 1 of Delage and Ye (2010), and considering that $h(1, \xi)$ is $F$-integrable for all $F \in D(\mathbb{R}^d, 0_d, I)$ since
$$E_F[h(1, \xi)] \le E_F[\|\xi\|_1] \le \sqrt{d}\, E_F[\|\xi\|_2] \le \sqrt{d}\sqrt{E_F[\|\xi\|_2^2]} = d,$$
by duality theory we can say that evaluating the supremum of such an expression is equivalent to finding the optimal value of
$$\begin{array}{rl} \min_{Q, q, t} & t + I \bullet Q \\ \text{s.t.} & t \ge -c + \xi^T y - \xi^T Q \xi - q^T \xi, \quad \forall \xi \in \mathbb{R}^d, \; \forall y \in Y(1), \end{array}$$
where $Y(1) = \{y \in \mathbb{R}^d \mid a^T y = 0 \;\&\; -1 \le y_i \le 1, \; \forall i\}$. This is necessarily a convex optimization problem with a linear objective function, since each constraint is jointly convex in $Q$, $q$, and $t$. We now show that the separation problem associated with the only constraint of this problem is NP-hard. Thus, by the equivalence of optimization and separation (see Grötschel et al. (1981)), finding the optimal value of this problem can be shown to be NP-hard.

Consider separating the solution $Q = (1/4)I$, $q = 0_d$, and $t = d - c$ from the feasible set. In this case, we must be able to verify its feasibility with respect to the only constraint. This can be shown to reduce to verifying whether
$$\xi^T y - (1/4)\xi^T \xi \le d, \quad \forall \xi \in \mathbb{R}^d, \; \forall y \in Y(1).$$
After solving in terms of $\xi$, we get that we need to verify whether the optimal value of the problem
$$\begin{array}{rl} \max_y & y^T y \\ \text{s.t.} & -1 \le y_i \le 1, \; \forall i \in \{1, 2, \ldots, d\} \\ & a^T y = 0 \end{array}$$
is greater than or equal to $d$.
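The substitution $z = y/x_2$ underlying Lemma 3 can be illustrated on a small linear program; the recourse objective below is hypothetical, while the feasible set mirrors the $Y(1)$ used in EC.7 (assuming scipy is available).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
d = 5
a, xi = rng.normal(size=d), rng.normal(size=d)   # hypothetical data

def h(x2):
    # h(x2, xi) = min { xi'y : a'y = 0, -x2 <= y_i <= x2 }, i.e. Y(x2) = x2 * Y(1)
    res = linprog(xi, A_eq=a.reshape(1, -1), b_eq=[0.0],
                  bounds=[(-x2, x2)] * d)
    return res.fun

for x2 in [0.5, 1.0, 2.0, 7.5]:
    print(x2, h(x2), x2 * h(1.0))   # the last two columns agree
```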

This is equivalent to deciding whether the set
$$\Upsilon(a) = \{y \in \{-1, 1\}^d \mid a^T y = 0\}$$
is empty or not for the given $a \in \mathbb{R}^d$, since the extreme points of the unit box are the only points for which $\|y\|^2 \ge d$. Finally, one can easily confirm that the NP-complete Partition problem can be reduced to verifying whether some $\Upsilon(a)$ is empty or not.

Partition Problem: Given a set of positive integers $\{s_i\}_{i=1}^d$ with index set $A = \{1, 2, \ldots, d\}$, is there a partition $(B_1, B_2)$ of $A$ such that $\sum_{i \in B_1} s_i = \sum_{i \in B_2} s_i$?

The reduction is obtained by casting $a_i := s_i$ for all $i$, and considering that a feasible solution $y \in \Upsilon(a)$ only exists if such a partition exists, in which case it identifies the partition through $B_1 = \{i \in A \mid y_i = 1\}$. This completes our proof.

EC.8. Proof of Proposition 6

We first represent $D(S, \mu, I)$ in the form proposed in Example 1, i.e., using
$$\Psi = \{\psi : \mathbb{R}^d \to \mathbb{R} \mid \exists Q \succeq 0, \; \psi(\xi) = Q \bullet ((\xi - \mu)(\xi - \mu)^T - I)\}.$$
It is indeed the case that this set $\Psi$ satisfies Assumption 3, since the positive semi-definite cone is one for which it is relatively easy to verify feasibility using singular value decomposition. In formulating problem (8), we get that constraint (8b) takes the shape
$$t \ge \sup_{\xi \in [\nu_i - \tau_i, \nu_i + \tau_i]} w_{ik} + \xi v_{ik} - \alpha_{ik} - \beta_{ik}(\xi - \bar{\xi}_{ik}) - q_i(\xi - \mu_i) - Q \bullet \left((\xi - \mu_i)^2 e_i e_i^T - I\right), \quad \forall i \in \{1, \ldots, d\}, \; \forall k \in \{1, \ldots, K\}.$$
A simple replacement of $t$ by $t + I \bullet Q$ leads to the equivalent formulation of problem (8):
$$\begin{array}{rl} \min_{t, q, Q, \{w_k, v_k\}_{k=1}^K} & t + I \bullet Q \\ \text{s.t.} & t \ge \sup_{\xi \in [\nu_i - \tau_i, \nu_i + \tau_i]} w_{ik} + \xi v_{ik} - \alpha_{ik} - \beta_{ik}(\xi - \bar{\xi}_{ik}) - q_i(\xi - \mu_i) - (\xi - \mu_i)^2 Q_{i,i}, \quad \forall i, \; \forall k \\ & w_{ik} + v_{ik}\bar{\xi}_{im} \ge c_1^T x_1 + h(x_1, \mu + (\bar{\xi}_{im} - \mu_i)e_i), \quad \forall i \in \{1, \ldots, d\}, \; \forall m, k \in \{1, \ldots, K\} \\ & Q \succeq 0. \end{array}$$
Since only the diagonal terms of $Q$ are involved in both the objective function and the constraints, one can arbitrarily set all off-diagonal terms of $Q$ to zero. Thus, we are left with
$$\min_{t, q, r, \{w_k, v_k\}_{k=1}^K} \; t + e^T r$$
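For small instances, the reduction can be checked by brute force; the sketch below enumerates $\{-1, 1\}^d$, which is of course exponential and only meant to illustrate the equivalence.

```python
import itertools
import numpy as np

def upsilon_nonempty(a):
    """True iff some y in {-1, 1}^d satisfies a'y = 0."""
    return any(np.dot(a, y) == 0
               for y in itertools.product((-1, 1), repeat=len(a)))

# a'y = 0 splits the indices into B1 = {i : y_i = 1} and its complement,
# so Upsilon(a) is nonempty exactly when {s_i} admits an even partition.
print(upsilon_nonempty([3, 1, 1, 2, 2, 1]))   # True:  {3, 2} vs {1, 1, 2, 1}
print(upsilon_nonempty([5, 1, 1, 1]))         # False: 5 > 1 + 1 + 1
```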

$$\begin{array}{rl} \text{s.t.} & t \ge \sup_{\xi \in [\nu_i - \tau_i, \nu_i + \tau_i]} w_{ik} + \xi v_{ik} - \alpha_{ik} - \beta_{ik}(\xi - \bar{\xi}_{ik}) - q_i(\xi - \mu_i) - (\xi - \mu_i)^2 r_i, \quad \forall i \in \{1, \ldots, d\}, \; \forall k \in \{1, \ldots, K\} \\ & w_{ik} + v_{ik}\bar{\xi}_{im} \ge c_1^T x_1 + h(x_1, \mu + (\bar{\xi}_{im} - \mu_i)e_i), \quad \forall i \in \{1, \ldots, d\}, \; \forall m, k \in \{1, \ldots, K\} \\ & r \ge 0. \end{array}$$

We finally apply the S-Lemma (cf. Theorem 2.2 in Pólik and Terlaky (2007)) to replace, for each fixed $i$ and $k$, the set of constraints indexed over the interval $[\nu_i - \tau_i, \nu_i + \tau_i]$ by an equivalent linear matrix inequality:
$$\begin{bmatrix} r_i & \frac{\beta_{ik} - v_{ik} + q_i - 2\mu_i r_i}{2} \\ \frac{\beta_{ik} - v_{ik} + q_i - 2\mu_i r_i}{2} & t - w_{ik} + \alpha_{ik} - \beta_{ik}\bar{\xi}_{ik} - \mu_i q_i + \mu_i^2 r_i \end{bmatrix} + s_{ik}\begin{bmatrix} 1 & -\nu_i \\ -\nu_i & \nu_i^2 - \tau_i^2 \end{bmatrix} \succeq 0, \qquad s_{ik} \ge 0.$$

Now, regarding the complexity of solving this semi-definite programming problem, it is well known that in the standard form
$$\min_{x \in \mathbb{R}^{\tilde{n}}} \; \tilde{c}^T x \quad \text{s.t.} \quad A_i(x) \succeq 0, \; i = 1, 2, \ldots, \tilde{K},$$
the problem can be solved in $O\left(\left(\sum_{i=1}^{\tilde{K}} m_i\right)^{0.5}\left(\tilde{n}^2 \sum_{i=1}^{\tilde{K}} m_i^2 + \tilde{n} \sum_{i=1}^{\tilde{K}} m_i^3\right)\right)$ operations, where $m_i$ stands for the dimension of the $i$-th positive semi-definite cone (i.e., $A_i(x) \in \mathbb{R}^{m_i \times m_i}$) (see Nesterov and Nemirovski (1994)). In the SDP that interests us, one can show that $\tilde{n} = 1 + (3K + 2)d$ and that both problems can be solved in $O(K^5 d^{3.5})$ operations, with $K$ being the number of scenarios for each random variable that composes $\xi$. In calculating the total complexity of evaluating $LB(x_1, X_2, \{\bar{\xi}_{ik}\})$ under such a $\Psi$, we also need to account for $O(K d\, T_{MVP})$ operations for evaluating the required $\alpha_{ik}$ and $\beta_{ik}$. This completes our proof.
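The S-Lemma step can be reproduced for a single fixed pair $(i, k)$ with an off-the-shelf SDP solver; the following cvxpy sketch (all numbers hypothetical, and assuming cvxpy with an SDP-capable solver such as SCS is installed) minimizes $t$ subject to the LMI and compares the result against a grid evaluation of the one-dimensional semi-infinite constraint, which the S-Lemma renders exactly.

```python
import cvxpy as cp
import numpy as np

# hypothetical data for one fixed pair (i, k); mu, nu, tau are scalars here
w, v, alpha, beta, xibar = 0.3, 1.2, 0.5, 0.8, 0.1
q, r, mu, nu, tau = -0.4, 0.6, 0.0, 0.2, 1.0

t = cp.Variable()
s = cp.Variable(nonneg=True)
off = (beta - v + q - 2 * mu * r) / 2
c0 = t - w + alpha - beta * xibar - mu * q + mu**2 * r
M1 = cp.bmat([[r, off], [off, c0]])               # quadratic form of t - rhs(xi)
P = np.array([[1.0, -nu], [-nu, nu**2 - tau**2]]) # encodes (xi - nu)^2 - tau^2 <= 0
prob = cp.Problem(cp.Minimize(t), [M1 + s * P >> 0])
prob.solve(solver=cp.SCS)

# compare with a direct grid evaluation of the semi-infinite constraint
xi = np.linspace(nu - tau, nu + tau, 10001)
rhs = w + xi * v - alpha - beta * (xi - xibar) - q * (xi - mu) - (xi - mu)**2 * r
print(t.value, rhs.max())   # the two values agree up to solver tolerance
```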

EC.9. Proof of Proposition 7

We first present, without proof, a lemma that describes how to approximate a concave function on the real line to any level of accuracy by containing it between an outer and an inner piecewise linear concave function.

Lemma EC.3. Given any set of points $\{z_k\}_{k=1}^K$, a concave function $g : \mathbb{R} \to \mathbb{R}$ is contained between two piecewise linear concave functions. Specifically, $g_{inner}(z) \le g(z) \le g_{outer}(z)$, where
$$g_{outer}(z) := \min_{k \in \{1, 2, \ldots, K\}} g(z_k) + (z - z_k) g'(z_k),$$
with $g'(z_k) \in \mathbb{R}$ any super-gradient of $g(z)$ at $z_k$, and
$$g_{inner}(z) := \min_{w, v} \; w + zv \quad \text{s.t.} \quad w + z_k v \ge g(z_k), \; \forall k \in \{1, 2, \ldots, K\}.$$

We then reduce the intractable task of evaluating $MVSM(x_1)$ to the search for a distribution in the more manageable set $D(S_0, \mu, \Psi)$. The analysis is also limited to the potential regret of having committed to $x_1$ instead of $\hat{x}_2$. This naturally leads to a lower bound for the MVSM, which considers distributions in the larger set $D(S, \mu, \Psi)$ and any alternative decision in $X_2$:
$$MVSM(x_1) \ge \sup_{F \in D(S_0, \mu, \Psi)} E_F\left[c_1^T(x_1 - \hat{x}_2) + h(x_1, \xi) - h(\hat{x}_2, \xi)\right].$$

We then apply duality theory to the $\sup_{F \in D(S_0, \mu, \Psi)}$ operation. Considering the primal problem as a semi-infinite linear conic problem
$$\begin{array}{rll} \sup_{F \in M} & E_F[c_1^T(x_1 - \hat{x}_2) + h(x_1, \xi) - h(\hat{x}_2, \xi)] & \text{(EC.1a)} \\ \text{s.t.} & E_F[1\{\xi \in S_0\}] = 1 & \text{(EC.1b)} \\ & E_F[\xi] = \mu & \text{(EC.1c)} \\ & E_F[r^T \psi(\xi)] \le 0, \; \forall r \in K, & \text{(EC.1d)} \end{array}$$
the dual form takes the shape
$$\begin{array}{rll} \min_{t, q, r} & t & \text{(EC.2a)} \\ \text{s.t.} & t \ge c_1^T(x_1 - \hat{x}_2) + h(x_1, \xi) - h(\hat{x}_2, \xi) - (\xi - \mu)^T q - r^T \psi(\xi), \; \forall \xi \in S_0 & \text{(EC.2b)} \\ & r \in K, & \text{(EC.2c)} \end{array}$$
where $t$, $q$, and $r$ are the dual variables associated respectively with constraints (EC.1b), (EC.1c), and (EC.1d). One can verify that there is no duality gap between the primal and dual problems using the weaker version of Proposition 3.4 in Shapiro (2001) and the fact that the Dirac measure $\delta_\mu$ lies in the relative interior of $D(S_0, \mu, \Psi)$.

Exploiting the structure of $S_0 = \bigcup_{i=1}^d \{\xi \mid \xi_i \in [\nu_i - \tau_i, \nu_i + \tau_i], \; \xi = \mu + (\xi_i - \mu_i)e_i\}$, we can decompose constraint (EC.2b) into $d$ simpler constraints:
$$t \ge g_i(x_1, \xi) - g_i(\hat{x}_2, \xi) - q_i(\xi - \mu_i) - r^T \psi(\mu + (\xi - \mu_i)e_i), \quad \forall \xi \in [\nu_i - \tau_i, \nu_i + \tau_i], \; \forall i \in \{1, 2, \ldots, d\}, \quad \text{(EC.3)}$$
where $g_i(x, \xi) = c_1^T x + h(x, \mu + (\xi - \mu_i)e_i)$. Note that if $S_0 = S \cap B(\nu, \tau)$, then this set of constraints is entirely equivalent to the original one.

For any fixed $i$, we now go through a sequence of relaxation steps for the right-hand side of constraint (EC.3):
$$\sup_{\xi \in [\nu_i - \tau_i, \nu_i + \tau_i]} g_i(x_1, \xi) - g_i(\hat{x}_2, \xi) - q_i(\xi - \mu_i) - r^T \psi(\mu + (\xi - \mu_i)e_i)$$
$$\ge \sup_{\xi} \; g_i(x_1, \xi) - \min_k \{\alpha_{ik} + (\xi - \bar{\xi}_{ik})\beta_{ik}\} - q_i(\xi - \mu_i) - r^T \psi(\mu + (\xi - \mu_i)e_i)$$
$$= \max_k \; \sup_{\xi} \; g_i(x_1, \xi) - \alpha_{ik} - (\xi - \bar{\xi}_{ik})\beta_{ik} - q_i(\xi - \mu_i) - r^T \psi(\mu + (\xi - \mu_i)e_i)$$
$$\ge \max_k \; \sup_{\xi} \; \min_{(w,v) \in W_i} \{w + \xi v\} - \alpha_{ik} - \beta_{ik}(\xi - \bar{\xi}_{ik}) - q_i(\xi - \mu_i) - r^T \psi(\mu + (\xi - \mu_i)e_i)$$
$$= \max_k \; \min_{(w,v) \in W_i} \; \sup_{\xi} \; w + \xi v - \alpha_{ik} - \beta_{ik}(\xi - \bar{\xi}_{ik}) - q_i(\xi - \mu_i) - r^T \psi(\mu + (\xi - \mu_i)e_i),$$
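Lemma EC.3's sandwich is straightforward to compute; the sketch below (with $g = \log$ and made-up breakpoints, assuming scipy is available) evaluates $g_{outer}$ as a minimum of tangent lines and $g_{inner}$ by solving the small LP in its definition.

```python
import numpy as np
from scipy.optimize import linprog

g = np.log                            # a concave function on (0, inf)
dg = lambda z: 1.0 / z                # its gradient (= super-gradient here)
zk = np.array([0.5, 1.0, 2.0, 4.0])   # made-up breakpoints z_k

def g_outer(z):
    # minimum of the tangent lines of g at the breakpoints
    return np.min(g(zk) + (z - zk) * dg(zk))

def g_inner(z):
    # min w + z v  s.t.  w + z_k v >= g(z_k) for all k
    res = linprog([1.0, z],
                  A_ub=np.column_stack([-np.ones_like(zk), -zk]),
                  b_ub=-g(zk), bounds=[(None, None)] * 2)
    return res.fun

for z in [0.7, 1.5, 3.0]:
    print(g_inner(z), g(z), g_outer(z))   # nondecreasing from left to right
```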

where $W_i$ is short for $\{(w, v) \in \mathbb{R}^2 \mid w + \bar{\xi}_{im} v \ge g_i(x_1, \bar{\xi}_{im}), \; \forall m \in \{1, 2, \ldots, K\}\}$. The two relaxation steps are obtained through the application of Lemma EC.3 to first replace $g_i(\hat{x}_2, \xi)$ by its outer approximation, and then $g_i(x_1, \xi)$ by its inner approximation. The last equality is obtained by inverting the order of $\sup_{\xi \in [\nu_i - \tau_i, \nu_i + \tau_i]}$ and $\min_{(w,v) \in W_i}$ using Sion's minimax theorem (see Sion (1958)). Thus, we obtain the optimization problem presented in Definition 1.

References

Ben-Tal, A., A. Nemirovski. 1998. Robust convex optimization. Mathematics of Operations Research 23(4).

Boyd, S., L. Vandenberghe. 2004. Convex Optimization. 1st ed. Cambridge University Press.

Delage, E., Y. Ye. 2010. Distributionally robust optimization under moment uncertainty with application to data-driven problems. Operations Research 58(3).

Grötschel, M., L. Lovász, A. Schrijver. 1981. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica 1(2).

Nesterov, Y., A. Nemirovski. 1994. Interior-Point Polynomial Methods in Convex Programming. Studies in Applied Mathematics, vol. 13. SIAM, Philadelphia.

Pólik, I., T. Terlaky. 2007. A survey of the S-Lemma. SIAM Review 49(3).

Shapiro, A. 2001. On duality theory of conic linear problems. M. A. Goberna, M. A. López, eds., Semi-Infinite Programming: Recent Advances. Kluwer Academic Publishers, Dordrecht.

Sion, M. 1958. On general minimax theorems. Pacific Journal of Mathematics 8(1).
