1. Selected Examples of CONIC DUALITY AT WORK: Robust Linear Optimization, Synthesis of Linear Controllers, Matrix Cube Theorem. A. Nemirovski
2 Linear Optimization Problem, its Data and Structure
Linear Optimization problem:
min_x { c^T x + d : Ax ≤ b }   (LO)
x ∈ R^n: vector of decision variables; c ∈ R^n and d ∈ R form the objective; A: an m×n constraint matrix; b ∈ R^m: right hand side.
Problem's structure: its sizes m, n. Problem's data: (c, d, A, b).
Data Uncertainty. The data of typical real-world LOs are partially uncertain, that is, not known exactly when the problem is being solved. Sources of data uncertainty:
Prediction errors: some of the data entries (future demands, returns, etc.) do not exist when the problem is solved and hence are replaced with their forecasts.
3 Measurement errors: some of the data (parameters of technological devices and processes, contents associated with raw materials, etc.) cannot be measured exactly, and their true values drift around the measured nominal values.
Implementation errors: some of the decision variables (planned intensities of technological processes, parameters of physical devices we are designing, etc.) cannot be implemented exactly as computed. Implementation errors are equivalent to artificial data uncertainties. Indeed, the impact of implementation errors x_j → (1 + ε_j) x_j + δ_j on the validity of the constraint
a_i1 x_1 + ... + a_in x_n ≤ b_i
is as if there were no implementation errors, but the data of the constraint were subject to the perturbations
a_ij → (1 + ε_j) a_ij,   b_i → b_i − Σ_j a_ij δ_j.
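This equivalence is easy to confirm numerically. A minimal sketch with made-up data (the vector a, the decision x, and the error magnitudes are all invented for illustration): the constraint margin at the actually implemented decision coincides with the margin of the nominal decision under the equivalent data perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=5)                     # row a_i of the constraint matrix (made up)
b = 1.0                                    # right hand side
x = rng.normal(size=5)                     # computed decision
eps = 0.01 * rng.uniform(-1, 1, size=5)    # multiplicative implementation errors
delta = 0.01 * rng.uniform(-1, 1, size=5)  # additive implementation errors

# constraint value at the actually implemented decision (1 + eps)*x + delta
lhs_implemented = a @ ((1 + eps) * x + delta)

# the same via perturbed data: a_ij -> (1 + eps_j) a_ij, b_i -> b_i - sum_j a_ij delta_j
a_pert = (1 + eps) * a
b_pert = b - a @ delta
lhs_nominal_x = a_pert @ x

# both descriptions violate/satisfy the constraint by exactly the same margin
assert np.isclose(lhs_implemented - b, lhs_nominal_x - b_pert)
```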
4 Data Uncertainty: Traditional Treatment and Dangers
Traditionally, small (fractions of a percent) data uncertainty is just ignored: the problem is solved as it is with the nominal data, and the resulting nominal optimal solution is forwarded to the end user. Large data uncertainty is assigned a probability distribution and is treated via Stochastic Programming techniques.
Fact: in many cases, even small data uncertainty can make the nominal solution heavily infeasible and thus practically meaningless.
5 Example: Antenna Design
[Physics:] The directional density of energy transmitted by a monochromatic antenna placed at the origin is proportional to |D(δ)|^2, where the antenna's diagram D(δ) is a complex-valued function of the 3-D direction (unit 3-D vector) δ.
[Physics:] For an antenna array, a complex antenna comprised of a number of antenna elements, the diagram is
D(δ) = Σ_j x_j D_j(δ),   (*)
where D_j(·) are the diagrams of the elements and x_j are complex weights, the design parameters responsible for how the elements of the array are invoked.
Antenna Design problem: Given diagrams D_1(·), ..., D_k(·) and a target diagram D*(·), find the weights x_j ∈ C such that the synthesized diagram (*) is as close as possible to the target diagram D*(·).
When D_j(·), D*(·), same as the weights, are real and the closeness is quantified by the uniform norm on a finite grid Γ of directions, Antenna Design becomes the LO problem
min_{x ∈ R^n, τ} { τ : −τ ≤ D*(δ) − Σ_j x_j D_j(δ) ≤ τ  ∀δ ∈ Γ }.
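The uniform-fit problem above can be handed to any LP solver. A minimal sketch using scipy's linprog; the element diagrams D_j, the target D*, and the grid below are all invented for illustration, not the slides' antenna data.

```python
import numpy as np
from scipy.optimize import linprog

# made-up element diagrams D_j and target D* on a finite grid Gamma
theta = np.linspace(0.0, np.pi / 2, 60)
D = np.stack([np.cos((j + 1) * theta) for j in range(4)], axis=1)  # columns: D_j(theta)
D_star = np.exp(-8.0 * (theta - 0.3) ** 2)                         # target diagram

m, n = D.shape
# variables [x_1, ..., x_n, tau]; minimize tau s.t. |D* - D x| <= tau on the grid
c = np.r_[np.zeros(n), 1.0]
A_ub = np.block([[ D, -np.ones((m, 1))],    #  D x - tau <= D*
                 [-D, -np.ones((m, 1))]])   # -D x - tau <= -D*
b_ub = np.r_[D_star, -D_star]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
assert res.status == 0
x_opt, tau_opt = res.x[:n], res.x[-1]
print("uniform-norm fit error tau* =", tau_opt)
```

The two row blocks of A_ub encode the two sides of the absolute-value constraint, which is the standard LP reformulation of a Chebyshev fit.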
6 Example: Consider a planar antenna array comprised of 10 elements (a circle surrounded by 9 rings of equal areas) in the plane XY (the Earth's surface); our goal is to send most of the energy up, along the 12° cone around the Z-axis.
Diagram of a ring {z = 0, a ≤ sqrt(x^2 + y^2) ≤ b}:
D_{a,b}(θ) = (1/2) ∫_a^b r [ ∫_0^{2π} cos( 2π r λ^{−1} cos(θ) cos(ϕ) ) dϕ ] dr
θ: altitude angle; λ: wavelength.
[Figure: 10 antenna elements of equal areas, outer radius 1 m; diagrams of the elements vs the altitude angle θ, λ = 50 cm.]
Nominal design problem:
τ* = min_{x ∈ R^10, τ} { τ : −τ ≤ D*(θ_i) − Σ_{j=1}^{10} x_j D_j(θ_i) ≤ τ, 1 ≤ i ≤ 240 },  θ_i = iπ/480.
[Figure: target (blue) and nominal optimal (magenta) diagrams.]
7 But: The design variables are characteristics of physical devices, and as such they cannot be implemented exactly as computed. What happens when there are implementation errors
x_j^fact = (1 + ξ_j) x_j^comp,  ξ_j ~ Uniform[−ρ, ρ],
with small ρ?
[Figure: dream and reality of the nominal optimal design: samples of 100 actual diagrams (red) for several uncertainty levels up to ρ = 0.01; blue: the target diagram.]
[Table: quality of the nominal antenna design, dream vs reality: ∞-distance to target and energy concentration, min/mean/max over 100 samples of actuation errors per uncertainty level ρ. At every nonzero uncertainty level tried, the mean energy concentration is only about 15-16%, with the worst samples at 0.1-0.5% and the best at 47-51%.]
Conclusion: The nominal optimal design is completely meaningless...
8 Example: NETLIB Case Study. NETLIB is a collection of LO problems for testing LO algorithms.
Constraint #372 of the NETLIB problem PILOT4 has the form
a^T x ≥ b,   (C)
a long linear inequality involving some 30 of the variables x_826 through x_916, with a single coefficient 1 at x_880.
The related nonzero coordinates of the optimal solution x* of the problem, as reported by CPLEX, are x*_826, x*_827, x*_828, x*_829, x*_849, x*_870, x*_871, x*_880. This solution makes (C) an equality within machine precision.
Note: The coefficients in a, except for the coefficient 1 at x_880, are "ugly" reals given to many decimal digits. Ugly coefficients characterize certain technological devices and processes; as such they could hardly be known to high accuracy, and they coincide with the true data within an accuracy of 3-4 digits, not more.
Question: Assuming that the ugly entries of a are 0.1%-accurate approximations of the true data ã, what is the effect of this uncertainty on the validity of the true constraint ã^T x ≥ b as evaluated at x*?
9 Answer: The minimum, over all 0.1% perturbations a → ã of the ugly entries of a, of the value of ã^T x* − b is less than −104.9; that is, with 0.1% perturbations of the ugly coefficients, the violation of the constraint, as evaluated at the nominal solution, can be as large as 450% of the right hand side!
With independent random 0.1%-perturbations of the ugly coefficients, the violation of the constraint is, on average, as large as 125% of the right hand side, and the probability of violating the constraint by at least 150% of the right hand side is considerable.
Among the 90 NETLIB problems, perturbing the ugly coefficients by just 0.01% results in violating some of the constraints, as evaluated at the nominal optimal solutions, by more than 50% in 13 problems, by more than 100% in 6 problems, and by 210,000% in PILOT4.
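An experiment of this kind is easy to reproduce. A sketch on synthetic stand-in data (the coefficients and solution below are made up, not the PILOT4 data): start from a constraint a^T x ≥ b that is tight at the nominal solution, then compare the worst-case and random effects of 0.1% coefficient perturbations.

```python
import numpy as np

rng = np.random.default_rng(1)
# made-up "ugly" coefficients and a nominal solution sitting on the constraint
a = rng.normal(size=8)
x = rng.uniform(1.0, 10.0, size=8)
b = a @ x                      # a^T x >= b holds as an equality at the nominal solution

rho = 0.001                    # 0.1% coefficient uncertainty
# worst case of a_tilde^T x - b over |a_tilde_j - a_j| <= rho*|a_j|:
# push every coefficient against the sign of x_j (here x > 0, so downward)
worst = (a - rho * np.abs(a) * np.sign(x)) @ x - b   # most negative achievable slack

# independent random 0.1%-perturbations of the coefficients
samples = np.array([(a * (1 + rho * rng.uniform(-1, 1, size=8))) @ x - b
                    for _ in range(2000)])
print("worst-case slack:", worst, " worst sampled slack:", samples.min())
```

Already at a tight constraint, the worst-case slack is strictly negative, i.e., the "true" constraint is violated; random sampling typically understates the worst case.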
10 Conclusion: In applications of LO, there exists a real need for a technique capable of detecting cases when data uncertainty can heavily affect the quality of the nominal solution, and in these cases of generating a reliable solution, one that is immunized against uncertainty.
Robust Optimization is aimed at satisfying the above need.
Uncertain Linear Optimization Problems
Definition: An uncertain LO problem is a collection
{ min_x { c^T x + d : Ax ≤ b } }_{(c,d,A,b) ∈ U}   (LO_U)
of LO problems (instances) min_x { c^T x + d : Ax ≤ b } of common structure (i.e., with common numbers m of constraints and n of variables) with the data varying in a given uncertainty set U ⊂ R^{(m+1)×(n+1)}.
Usually we assume that the uncertainty set is parameterized, in an affine fashion, by a perturbation vector ζ varying in a given perturbation set Z:
U = { [c^T, d; A, b] = [c_0^T, d_0; A_0, b_0] + Σ_{l=1}^L ζ_l [c_l^T, d_l; A_l, b_l] : ζ ∈ Z ⊂ R^L },
where D_0 = [c_0^T, d_0; A_0, b_0] is the nominal data and D_l = [c_l^T, d_l; A_l, b_l] are the basic shifts.
11 Example: When speaking about PILOT4, we tacitly used the following model of uncertainty:
Uncertainty affects only the ugly coefficients {a_ij : (i,j) ∈ J} of the constraint matrix, and every one of them is allowed to run, independently of all the other coefficients, through the interval
[a^n_ij − ρ_ij |a^n_ij|, a^n_ij + ρ_ij |a^n_ij|],
where a^n_ij are the nominal values of the data and ρ_ij are the perturbation levels (in the experiment, all set to ρ = 0.001).
Perturbation set: the box
Z = { ζ = {ζ_ij}_{(i,j) ∈ J} : −ρ_ij ≤ ζ_ij ≤ ρ_ij }.
Parameterization of the data by the perturbation vector:
[c^T, d; A, b] = [(c^n)^T, d^n; A^n, b^n] + Σ_{(i,j) ∈ J} ζ_ij |a^n_ij| [0, 0; e_i e_j^T, 0].
12 { min_x { c^T x + d : Ax ≤ b } }_{(c,d,A,b) ∈ U}   (LO_U)
U = { D_0 + Σ_{l=1}^L ζ_l D_l : ζ ∈ Z ⊂ R^L }   (D_0: nominal data, D_l: basic shifts)
There is no universally defined notion of a solution to a family of optimization problems like (LO_U). Consider the following decision environment:
A.1. All decision variables in (LO_U) represent "here and now" decisions; they should be assigned specific numerical values, as a result of solving the problem, before the actual data reveals itself.
A.2. The decision maker is fully responsible for the consequences of the decisions to be made when, and only when, the actual data is within the prespecified uncertainty set U.
A.3. The constraints in (LO_U) are hard: we cannot tolerate violations of the constraints, even small ones, when the data is in U.
13 { min_x { c^T x + d : Ax ≤ b } }_{(c,d,A,b) ∈ U}   (LO_U)
In the above decision environment, the only meaningful candidate solutions to (LO_U) are the robust feasible ones.
Definition: x ∈ R^n is called a robust feasible solution to (LO_U) if x is feasible for all instances:
Ax ≤ b  ∀(c, d, A, b) ∈ U.
Indeed, by A.1 a meaningful candidate solution should be independent of the data, i.e., it should be just a fixed vector x. By A.2-3, it should satisfy the constraints whatever the realization of the data from U.
Acting in the same worst-case-oriented fashion, it makes sense to quantify the quality of a candidate solution x by the guaranteed (the worst, over the data from U) value of the objective:
sup { c^T x + d : (c, d, A, b) ∈ U }.
14 { min_x { c^T x + d : Ax ≤ b } }_{(c,d,A,b) ∈ U}   (LO_U)
Now we can associate with (LO_U) the problem of finding the best, in terms of the guaranteed value of the objective, among the robust feasible solutions:
min_{t,x} { t : c^T x + d ≤ t, Ax ≤ b  ∀(c, d, A, b) ∈ U }.   (RC)
This problem is called the Robust Counterpart of (LO_U).
Note: Passing from LOs of the form min_x { c^T x + d : Ax ≤ b } to their equivalents min_{t,x} { t : c^T x + d ≤ t, Ax ≤ b }, we always may assume that the objective is certain, and the RC respects this equivalence. Thus we lose nothing by assuming the objective in (LO_U) certain, in which case we can think of U as a set in the space R^{m×(n+1)} of the [A, b]-data, and the RC reads
min_x { c^T x : Ax ≤ b  ∀[A, b] ∈ U }.   (RC)
15 { min_x { c^T x : Ax ≤ b } }_{[A,b] ∈ U}   (LO_U)    min_x { c^T x : Ax ≤ b ∀[A, b] ∈ U }   (RC)
Fact I: The RC of an uncertain LO with certain objective is a purely constraint-wise construction: when building the RC, we replace every constraint a_i^T x ≤ b_i of the instances with its RC
a_i^T x ≤ b_i  ∀[a_i^T, b_i] ∈ U_i,   (RC_i)
where U_i is the projection of the uncertainty set U onto the space of the data [a_i^T, b_i] of the i-th constraint.
Fact II: The RC remains intact when the uncertainty set U is extended to its closed convex hull. When (LO_U) has a certain objective, the RC remains intact when U is extended to the direct product of the closed convex hulls of the U_i. Thus, the transformation
U → U^+ = [cl Conv(U_1)] × ... × [cl Conv(U_m)]
keeps the RC intact.
From now on, we always assume the uncertainty set U convex, and the perturbation set Z convex and closed.
16 { min_x { c^T x : Ax ≤ b } }_{[A,b] ∈ U}   (LO_U)    min_x { c^T x : Ax ≤ b ∀[A, b] ∈ U }   (RC)
The central questions associated with the concept of the RC are:
A. What is the computational status of the RC? When is it possible to process the RC efficiently? This question is addressed in depth below.
B. How to come up with meaningful uncertainty sets? This is a modeling issue NOT addressed in this talk.
17 min_x { c^T x : Ax ≤ b ∀[A, b] ∈ U }   (RC)
Potentially bad news: The RC is a semi-infinite optimization problem (finitely many variables, infinitely many constraints) and as such can be computationally intractable.
Example: Consider the "nearly linear" semi-infinite constraint
‖P x − p‖_1 ≤ 1  ∀[P, p] ∈ U,   U = { [P, p] : p = Bζ, ‖ζ‖_2 ≤ 1 }.
Checking whether x = 0 is robust feasible amounts to checking whether max_{‖ζ‖_2 ≤ 1} ‖Bζ‖_1 ≤ 1, and this is, in general, NP-hard!
18 min_x { c^T x : Ax ≤ b ∀[A, b] ∈ U }   (RC)
Good news: The RC of an uncertain LO problem is computationally tractable, provided the uncertainty set U is so.
Explanation, I: The RC can be written down as the optimization problem
min_x { c^T x : f_i(x) ≤ 0, i = 1, ..., m },   f_i(x) = sup_{[A,b] ∈ U} [ a_i^T x − b_i ].
The functions f_i(x) are convex (due to their origin) and efficiently computable (as maxima of affine functions over computationally tractable convex sets). Thus, the RC is a Convex Programming program with efficiently computable objective and constraints, and problems of this type are efficiently solvable.
19 Recalling that the RC is a constraint-wise construction, all we need is to reformulate in a tractable form a single semi-infinite constraint
∀ α = [a; b] ∈ { α^n + Aζ : ζ ∈ Z } ⊂ R^{n+1} :  α^T [x; 1] ≡ a^T x + b ≤ 0.   (*)
Consider several instructive cases when a tractable reformulation of (*) is easy and does not require any theory.
1. Scenario uncertainty Z = Conv{ζ^1, ..., ζ^N}. Setting α^j = α^n + Aζ^j, 1 ≤ j ≤ N, we get
(*) ⇔ { (α^j)^T [x; 1] ≤ 0, 1 ≤ j ≤ N }.
2. ℓ_p-uncertainty Z = { ζ ∈ R^L : ‖ζ‖_p ≤ 1 }, where ‖ζ‖_p = (Σ_{l=1}^L |ζ_l|^p)^{1/p} for 1 ≤ p < ∞ and ‖ζ‖_∞ = max_l |ζ_l|. We have
(*) ⇔ [α^n]^T [x; 1] + ‖A^T [x; 1]‖_{p*} ≤ 0,   1/p + 1/p* = 1.
In particular, interval uncertainty
{ α : α^n_j − δ_j ≤ α_j ≤ α^n_j + δ_j, 1 ≤ j ≤ n+1 }
leads to
(*) ⇔ [α^n]^T [x; 1] + Σ_{i=1}^n δ_i |x_i| + δ_{n+1} ≤ 0.
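The interval-uncertainty formula is easy to verify numerically: the supremum of α^T [x; 1] over the box is attained at a vertex, and maximizing coordinatewise yields exactly the absolute-value expression. A standalone check with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
alpha_n = rng.normal(size=n + 1)            # nominal [a; b]-data of one constraint
delta = rng.uniform(0.05, 0.2, size=n + 1)  # entrywise interval half-widths
x1 = np.r_[rng.normal(size=n), 1.0]         # the vector [x; 1]

# closed-form worst-case value of alpha^T [x; 1] over the box
robust_value = alpha_n @ x1 + delta @ np.abs(x1)

# brute force over all 2^(n+1) vertices of the box
signs = np.array(np.meshgrid(*([[-1.0, 1.0]] * (n + 1)))).reshape(n + 1, -1).T
vertex_values = (alpha_n + signs * delta) @ x1
assert np.isclose(vertex_values.max(), robust_value)
```

The last δ-term hits the coordinate of [x; 1] that equals 1, reproducing the δ_{n+1} constant in the formula.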
20 General Well-Structured Case
Definition: Let us say that a set X ⊂ R^N is well-structured if it admits a well-structured representation, i.e., a representation of the form
X = { x ∈ R^N : ∃u ∈ R^M :  A_0 x + B_0 u + c_0 = 0,  A_k x + B_k u + c_k ∈ K_k, 1 ≤ k ≤ K },
where every K_k is a simple cone, specifically either
a nonnegative orthant R^m_+ = { x ∈ R^m : x ≥ 0 }, m = m_k, or
a Lorentz cone L^m = { x ∈ R^m : x_m ≥ sqrt(x_1^2 + ... + x_{m−1}^2) }, m = m_k, or
a semidefinite cone S^m_+, the cone of positive semidefinite matrices in the space S^m of real symmetric m×m matrices, m = m_k.
Example 1: The set X = { x ∈ R^N : ‖x‖_1 ≤ 1 } admits the polyhedral representation
X = { x ∈ R^N : ∃u ∈ R^N : −u_i ≤ x_i ≤ u_i ∀i, Σ_i u_i ≤ 1 }
  = { x ∈ R^N : ∃u ∈ R^N : [u_1 − x_1; u_1 + x_1; ...; u_N − x_N; u_N + x_N; 1 − Σ_i u_i] ∈ R^{2N+1}_+ }.
21 Example 2: The set X = { x ∈ R^4_+ : x_1 x_2 x_3 x_4 ≥ 1 } admits the conic quadratic representation
X = { x ∈ R^4_+ : ∃u ∈ R^3 : 0 ≤ u_1 ≤ sqrt(x_1 x_2), 0 ≤ u_2 ≤ sqrt(x_3 x_4), 1 ≤ u_3 ≤ sqrt(u_1 u_2) }
  = { x ∈ R^4 : ∃u ∈ R^3 : [x_1; x_2; x_3; x_4; u_1; u_2; u_3 − 1] ∈ R^7_+,
      [2u_1; x_1 − x_2; x_1 + x_2] ∈ L^3, [2u_2; x_3 − x_4; x_3 + x_4] ∈ L^3, [2u_3; u_1 − u_2; u_1 + u_2] ∈ L^3 }.
Example 3: The set X of m×n matrices X with nuclear norm (sum of singular values) ≤ 1 admits the semidefinite representation
X = { X ∈ R^{m×n} : ∃(U ∈ S^m, V ∈ S^n) : Tr(U) + Tr(V) ≤ 2, [U, X; X^T, V] ⪰ 0 }.
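For Example 3, the SVD supplies explicit witnesses: with X = U_s diag(s) V_s^T, taking U = U_s diag(s) U_s^T and V = V_s diag(s) V_s^T makes the block matrix positive semidefinite with Tr(U) + Tr(V) = 2 ‖X‖_*. A numerical check on a made-up matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(3, 4))
Us, s, Vt = np.linalg.svd(X, full_matrices=False)

# witnesses for the semidefinite representation of the nuclear norm
U = Us @ np.diag(s) @ Us.T
V = Vt.T @ np.diag(s) @ Vt
M = np.block([[U, X], [X.T, V]])   # equals W diag(s) W^T with W = [Us; Vt^T], hence PSD

nuclear_norm = s.sum()
assert np.isclose(np.trace(U) + np.trace(V), 2 * nuclear_norm)
assert np.linalg.eigvalsh(M).min() >= -1e-9
```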
22 X = { x ∈ R^N : ∃u ∈ R^M : A_0 x + B_0 u + c_0 = 0, A_k x + B_k u + c_k ∈ K_k, 1 ≤ k ≤ K }   (*)
Good news on well-structured representations:
Computational tractability: Minimizing a linear objective over a set given by (*) reduces to solving the well-structured conic program
min_{x,u} { c^T x : A_0 x + B_0 u + c_0 = 0, A_k x + B_k u + c_k ∈ K_k, 1 ≤ k ≤ K }
and thus can be done in a theoretically (and to some extent also practically) efficient manner by polynomial time interior point algorithms.
Extremely powerful expressive abilities: w.-s.r.'s admit a simple, fully algorithmic calculus which makes it easy to build a w.-s.r. for the result of a convexity-preserving operation on convex sets (taking intersections, direct sums, affine images, inverse affine images, polars, etc.) via w.-s.r.'s of the operands. As a result, for all practical purposes, all computationally tractable convex sets arising in Optimization admit explicit w.-s.r.'s.
23 Example: The set given by the constraints
(a) Ax = b,
(b) x ≥ 0,
(c) inequalities between sums of fractional powers and geometric means of the variables x_1, ..., x_8,
(d) requirements that certain symmetric matrices whose entries are built from the variables be positive semidefinite,
(e) x_1 + x_2 sin(ϕ) + x_3 sin(2ϕ) + x_4 sin(4ϕ) ≥ 0  ∀ϕ ∈ [0, π/2]
admits an explicit w.-s.r. This w.-s.r. can be built fully automatically, by a compiler, and this is how CVX works.
24 Coming back to the Robust Counterpart of a scalar linear inequality, we want to answer the following
Question: How to describe in a tractable form the fact that a vector x satisfies the semi-infinite constraint
∀ α = [a; b] ∈ { α^n + Aζ : ζ ∈ Z } ⊂ R^{n+1} :  α^T [x; 1] ≡ a^T x + b ≤ 0.   (*)
We are about to give a nice answer in the case when the perturbation set Z is well structured.
Substituting α = α^n + Aζ into α^T [x; 1] ≤ 0, we arrive at the inequality
ζ^T A^T [x; 1] + [α^n]^T [x; 1] ≤ 0,
or, which is the same,
(p(x))^T ζ ≤ q(x),  where p(x) = A^T [x; 1] and q(x) = −[α^n]^T [x; 1].   (!)
Note that p(x) and q(x) are affine in x and that x satisfies (*) if and only if (!) holds true for every ζ ∈ Z, that is, if and only if the relation (!), considered as a scalar linear inequality in the variables ζ, is a consequence of the constraints defining Z.
Thus, our goal is to understand when a scalar linear inequality is a consequence of the system of constraints defining a well-structured set. The answer is given by Conic Duality.
25 Excursion to Conic Duality
Situation: We are given a system of linear equations and conic constraints
A_0 w − b_0 = 0  &  A_i w − b_i ∈ K_i, 1 ≤ i ≤ I,   (C)
w: a variable from a Euclidean space E with inner product ⟨·,·⟩;
K_i: closed convex pointed cones with nonempty interiors in Euclidean spaces E_i with inner products ⟨·,·⟩_i;
w → A_i w − b_i : E → E_i: affine mappings.
Case of primary interest: every K_i is either
a nonnegative orthant R^{m_i}_+ = { y ∈ R^{m_i} : y ≥ 0 } (E_i = R^{m_i}, ⟨y, z⟩_i = y^T z), or
a Lorentz cone L^{m_i} = { y ∈ R^{m_i} : y_{m_i} ≥ ‖[y_1; ...; y_{m_i−1}]‖_2 } (E_i = R^{m_i}, ⟨y, z⟩_i = y^T z), or
a semidefinite cone S^{m_i}_+ (E_i = S^{m_i}, the space of m_i × m_i symmetric matrices with the Frobenius inner product ⟨y, z⟩_i = Tr(yz) = Σ_{i,j} y_ij z_ij).
Goal: To understand when a scalar linear inequality
⟨p, w⟩ ≥ q   (*)
is a consequence of (C), meaning that (*) is satisfied at every solution to (C).
26 Given a system
A_0 w − b_0 = 0  &  A_i w − b_i ∈ K_i, 1 ≤ i ≤ I   (C)
(w: a variable from a Euclidean space (E, ⟨·,·⟩); K_i: closed convex pointed cones with nonempty interiors in Euclidean spaces (E_i, ⟨·,·⟩_i)) of linear equations and conic constraints, we want to understand when a scalar linear inequality ⟨p, w⟩ ≥ q is a consequence of the system.
Linear aggregation: Choose Lagrange multipliers y_i, 0 ≤ i ≤ I, such that for i ≥ 1, y_i belongs to the cone dual to K_i:
i ≥ 1 :  y_i ∈ K_i^* := { y : ⟨y, u⟩_i ≥ 0 ∀u ∈ K_i }.
Since y_i ∈ K_i^*, the conic constraint A_i w − b_i ∈ K_i implies that ⟨y_i, A_i w − b_i⟩_i ≥ 0, that is,
⟨A_i^* y_i, w⟩ ≥ ⟨b_i, y_i⟩_i,   (+)
where A^* : F → E is the conjugate of A : E → F: ⟨A^* y, x⟩_E ≡ ⟨y, Ax⟩_F. The equation A_0 w − b_0 = 0 implies (+) for i = 0 with any y_0.
Summing up the inequalities (+) over i = 0, 1, ..., I, we arrive at the following conclusion:
Whenever y_i, 0 ≤ i ≤ I, are such that y_i ∈ K_i^* for i ≥ 1 and
Σ_{i=0}^I A_i^* y_i = p  and  Σ_{i=0}^I ⟨b_i, y_i⟩_i ≥ q,
the scalar linear inequality (*) is a consequence of (C).
27 A_0 w − b_0 = 0  &  A_i w − b_i ∈ K_i, 1 ≤ i ≤ I   (C)
(w: a variable from a Euclidean space (E, ⟨·,·⟩); K_i: closed convex pointed cones with nonempty interiors in Euclidean spaces (E_i, ⟨·,·⟩_i))
⟨p, w⟩ ≥ q   (*)
Conic Farkas Lemma: Let (C) be strictly feasible:
∃ w̄ :  A_0 w̄ − b_0 = 0  &  A_i w̄ − b_i ∈ int K_i, 1 ≤ i ≤ I.
Then (*) is a consequence of (C) if and only if (*) can be obtained from (C) by linear aggregation, i.e., if and only if
∃ y_0, { y_i ∈ K_i^* }_{i=1}^I :  Σ_{i=0}^I A_i^* y_i = p  and  Σ_{i=0}^I ⟨b_i, y_i⟩_i ≥ q.
When all K_i are nonnegative orthants, the statement remains true when strict feasibility is replaced with feasibility.
28 Corollary [Conic Duality]: Consider a conic problem
Opt(P) = min_w { ⟨c, w⟩ : A_0 w − b_0 = 0  &  A_i w − b_i ∈ K_i, 1 ≤ i ≤ I }   (P)
along with its dual problem
Opt(D) = max_{y_0,...,y_I} { Σ_{i=0}^I ⟨b_i, y_i⟩_i : y_i ∈ K_i^* for i ≥ 1, Σ_{i=0}^I A_i^* y_i = c }.   (D)
Weak Duality: One always has Opt(P) ≥ Opt(D).
Strong Duality: Assuming that (P) is strictly feasible and bounded below, (D) is solvable and Opt(P) = Opt(D). When all K_i are nonnegative orthants, Strong Duality holds with strict feasibility of (P) relaxed to feasibility.
Proof of Strong Duality: Apply the Conic Farkas Lemma to the inequality ⟨c, w⟩ ≥ Opt(P), which clearly is a consequence of the system of constraints of (P).
Note: Nonnegative orthants, Lorentz cones, and semidefinite cones are self-dual. Hence, when all K_i are nonnegative orthants, Lorentz cones, or semidefinite cones, the constraints y_i ∈ K_i^* in (D) take the form y_i ∈ K_i.
29 The Conic Farkas Lemma is really deep already in the case when all conic inequalities in question are scalar: K_i = R_+ for all i. In this case it becomes the usual
Farkas Lemma: A linear inequality
p^T x ≤ q   (*)
is a consequence of a solvable system of linear inequalities Ax ≤ b if and only if there exists y ≥ 0 such that
A^T y = p  &  b^T y ≤ q,
or, which is the same, if and only if the target inequality (*) can be obtained by linear aggregation (taking a weighted sum with nonnegative weights) of the inequalities from the system and the identically true inequality 0^T x ≤ 1.
While simply-looking, this fact is really deep. Indeed, consider the following derivation:
{ −1 ≤ x ≤ 1, −1 ≤ y ≤ 1 }  ⟹  x^2 ≤ 1, y^2 ≤ 1  ⟹  x^2 + y^2 ≤ 2
⟹  x + y = 1·x + 1·y ≤ sqrt(1^2 + 1^2) · sqrt(x^2 + y^2) ≤ 2
[we have used the Cauchy Inequality].
In this chain of implications, the starting point is a solvable system of linear inequalities, and the conclusion x + y ≤ 2 is a linear inequality as well; however, the steps use highly nonlinear arguments. According to the Farkas Lemma, every chain of this type, however long and with however complicated steps, can be replaced with linear aggregation. A statement with that huge predictive power indeed must be deep!
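For this example the aggregation weights can even be found mechanically: searching for Farkas multipliers is itself an LP. A sketch using scipy's linprog for the system −1 ≤ x ≤ 1, −1 ≤ y ≤ 1 and the target x + y ≤ q:

```python
import numpy as np
from scipy.optimize import linprog

# the system -1 <= x <= 1, -1 <= y <= 1, written as A z <= b for z = (x, y)
A = np.array([[ 1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
p = np.array([1.0, 1.0])   # target inequality p^T z <= q, i.e. x + y <= q

# Farkas: multipliers lam >= 0 with A^T lam = p certify x + y <= b^T lam;
# minimizing b^T lam gives the best bound obtainable by linear aggregation
res = linprog(b, A_eq=A.T, b_eq=p, bounds=[(0, None)] * 4)
assert res.status == 0
print("best aggregated bound on x + y:", res.fun)   # certifies x + y <= 2
```

The optimal multipliers simply add the inequalities x ≤ 1 and y ≤ 1, replacing the whole nonlinear chain above by one weighted sum.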
30 Back to Robust Linear Optimization
Now we are ready to answer the question of when the scalar linear inequality in the variables ζ
(p(x))^T ζ ≤ q(x),   (!)
with p(x), q(x) affine in x, holds true for all ζ ∈ Z, where Z is well structured:
Z = { ζ ∈ R^L : ∃u ∈ R^M : A_0 ζ + B_0 u + c_0 = 0, A_k ζ + B_k u + c_k ∈ K_k, 1 ≤ k ≤ K },   (*)
the K_k being nonnegative orthants, Lorentz cones, or semidefinite cones.
Assume that either all K_k are nonnegative orthants, or (*) is strictly feasible. We ask when the scalar linear inequality
(p(x))^T ζ + 0^T u ≤ q(x)
in the variables w = [ζ; u] holds true for all [ζ; u] satisfying the constraints in (*). By the Conic Farkas Lemma, this is so if and only if there exist y_0, y_1, ..., y_K such that
A_0^T y_0 + Σ_{k=1}^K A_k^T y_k = −p(x),  B_0^T y_0 + Σ_{k=1}^K B_k^T y_k = 0,
c_0^T y_0 + Σ_{k=1}^K c_k^T y_k ≤ q(x),  y_k ∈ K_k^* = K_k, 1 ≤ k ≤ K.   (**)
Since p(x) and q(x) are affine in x, (**) is a system of linear equations and conic constraints in the variables x, y_0, y_1, ..., y_K. We conclude that
the set X of those x for which (!) holds true for all ζ ∈ Z is well-structured:
X = { x : ∃ y_0, y_1, ..., y_K : (x, y_0, ..., y_K) satisfies (**) }.
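In the simplest non-polyhedral case, Z the Euclidean unit ball, the recipe collapses to the explicit ℓ2 formula from the ℓ_p-uncertainty case: the worst-case value of the constraint is the nominal value plus ‖A^T [x; 1]‖_2. A numerical sanity check with made-up data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, L = 4, 3
alpha_n = rng.normal(size=n + 1)        # nominal constraint data [a; b]
A = 0.1 * rng.normal(size=(n + 1, L))   # perturbation directions
x1 = np.r_[rng.normal(size=n), 1.0]     # the vector [x; 1]

# worst case of alpha^T [x; 1] over the ellipsoid {alpha_n + A zeta : ||zeta||_2 <= 1}
robust_value = alpha_n @ x1 + np.linalg.norm(A.T @ x1)

# the maximizing perturbation is zeta* = A^T [x; 1] / ||A^T [x; 1]||_2 ...
zeta_star = A.T @ x1 / np.linalg.norm(A.T @ x1)
assert np.isclose((alpha_n + A @ zeta_star) @ x1, robust_value)

# ... and no sampled zeta on the unit sphere does better
zeta = rng.normal(size=(10000, L))
zeta /= np.linalg.norm(zeta, axis=1, keepdims=True)
assert ((alpha_n + zeta @ A.T) @ x1).max() <= robust_value + 1e-9
```

Requiring robust_value ≤ 0 for every constraint turns the robust LP into a second-order cone program, a special case of the well-structured system (**) above.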
More informationStructure of Valid Inequalities for Mixed Integer Conic Programs
Structure of Valid Inequalities for Mixed Integer Conic Programs Fatma Kılınç-Karzan Tepper School of Business Carnegie Mellon University 18 th Combinatorial Optimization Workshop Aussois, France January
More informationGeometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as
Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.
More informationRobust linear optimization under general norms
Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn
More informationLargest dual ellipsoids inscribed in dual cones
Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that
More informationLecture 5. The Dual Cone and Dual Problem
IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the
More informationTrust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization
Trust Region Problems with Linear Inequality Constraints: Exact SDP Relaxation, Global Optimality and Robust Optimization V. Jeyakumar and G. Y. Li Revised Version: September 11, 2013 Abstract The trust-region
More informationLecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016
Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More informationExample: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma
4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid
More informationSelected Topics in Robust Convex Optimization
Selected Topics in Robust Convex Optimization Arkadi Nemirovski School of Industrial and Systems Engineering Georgia Institute of Technology Optimization programs with uncertain data and their Robust Counterparts
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More information1. Introduction. Consider the deterministic multi-objective linear semi-infinite program of the form (P ) V-min c (1.1)
ROBUST SOLUTIONS OF MULTI-OBJECTIVE LINEAR SEMI-INFINITE PROGRAMS UNDER CONSTRAINT DATA UNCERTAINTY M.A. GOBERNA, V. JEYAKUMAR, G. LI, AND J. VICENTE-PÉREZ Abstract. The multi-objective optimization model
More informationGENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION
Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124
More informationDate: July 5, Contents
2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........
More informationAdditional Homework Problems
Additional Homework Problems Robert M. Freund April, 2004 2004 Massachusetts Institute of Technology. 1 2 1 Exercises 1. Let IR n + denote the nonnegative orthant, namely IR + n = {x IR n x j ( ) 0,j =1,...,n}.
More informationDuality Theory of Constrained Optimization
Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive
More informationRobust Solutions to Multi-Objective Linear Programs with Uncertain Data
Robust Solutions to Multi-Objective Linear Programs with Uncertain Data M.A. Goberna yz V. Jeyakumar x G. Li x J. Vicente-Pérez x Revised Version: October 1, 2014 Abstract In this paper we examine multi-objective
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationA Two-Stage Moment Robust Optimization Model and its Solution Using Decomposition
A Two-Stage Moment Robust Optimization Model and its Solution Using Decomposition Sanjay Mehrotra and He Zhang July 23, 2013 Abstract Moment robust optimization models formulate a stochastic problem with
More informationCHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.
1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function
More informationPOLARS AND DUAL CONES
POLARS AND DUAL CONES VERA ROSHCHINA Abstract. The goal of this note is to remind the basic definitions of convex sets and their polars. For more details see the classic references [1, 2] and [3] for polytopes.
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationLecture 7 Monotonicity. September 21, 2008
Lecture 7 Monotonicity September 21, 2008 Outline Introduce several monotonicity properties of vector functions Are satisfied immediately by gradient maps of convex functions In a sense, role of monotonicity
More informationLMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009
LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix
More informationExtending the Scope of Robust Quadratic Optimization
Extending the Scope of Robust Quadratic Optimization Ahmadreza Marandi Aharon Ben-Tal Dick den Hertog Bertrand Melenberg June 1, 017 Abstract In this paper, we derive tractable reformulations of the robust
More informationRobust combinatorial optimization with variable budgeted uncertainty
Noname manuscript No. (will be inserted by the editor) Robust combinatorial optimization with variable budgeted uncertainty Michael Poss Received: date / Accepted: date Abstract We introduce a new model
More informationLecture: Cone programming. Approximating the Lorentz cone.
Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is
More informationc 2000 Society for Industrial and Applied Mathematics
SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.
More informationC.O.R.E. Summer School on Modern Convex Optimization August 26-30, 2002 FIVE LECTURES ON MODERN CONVEX OPTIMIZATION
C.O.R.E. Summer School on Modern Convex Optimization August 26-30, 2002 FIVE LECTURES ON MODERN CONVEX OPTIMIZATION Arkadi Nemirovski nemirovs@ie.technion.ac.il, http://iew3.technion.ac.il/home/users/nemirovski.phtml
More informationIntroduction and Math Preliminaries
Introduction and Math Preliminaries Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Appendices A, B, and C, Chapter
More informationThe Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1
October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationOptimality Conditions for Constrained Optimization
72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)
More informationOnline First-Order Framework for Robust Convex Optimization
Online First-Order Framework for Robust Convex Optimization Nam Ho-Nguyen 1 and Fatma Kılınç-Karzan 1 1 Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, 15213, USA. July 20, 2016;
More informationAcyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs
2015 American Control Conference Palmer House Hilton July 1-3, 2015. Chicago, IL, USA Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca and Eilyan Bitar
More informationOptimization Theory. A Concise Introduction. Jiongmin Yong
October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization
More informationChapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.
Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space
More informationConic optimization: an elegant framework for convex optimization
Conic optimization: an elegant framework for convex optimization François Glineur Service de Mathématique et de Recherche Opérationnelle, Faculté Polytechnique de Mons, Rue de Houdain, 9, B-7000 Mons,
More informationAdjustable Robust Solutions of Uncertain Linear Programs 1
Adjustable Robust Solutions of Uncertain Linear Programs 1 A. Ben-Tal, A. Goryashko, E. Guslitzer, A. Nemirovski Minerva Optimization Center, Faculty of Industrial Engineering and Management, Technion,
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationOn John type ellipsoids
On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to
More informationAdvances in Convex Optimization: Conic Programming
Advances in Convex Optimization: Conic Programming Arkadi Nemirovski Abstract. During the last two decades, major developments in Convex Optimization were focusing on Conic Programming, primarily, on Linear,
More information1 Quantum states and von Neumann entropy
Lecture 9: Quantum entropy maximization CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: February 15, 2016 1 Quantum states and von Neumann entropy Recall that S sym n n
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016
U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest
More informationMachine Learning. Support Vector Machines. Manfred Huber
Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data
More informationAN INTRODUCTION TO CONVEXITY
AN INTRODUCTION TO CONVEXITY GEIR DAHL NOVEMBER 2010 University of Oslo, Centre of Mathematics for Applications, P.O.Box 1053, Blindern, 0316 Oslo, Norway (geird@math.uio.no) Contents 1 The basic concepts
More informationDO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO
QUESTION BOOKLET EECS 227A Fall 2009 Midterm Tuesday, Ocotober 20, 11:10-12:30pm DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO You have 80 minutes to complete the midterm. The midterm consists
More informationLIGHT ROBUSTNESS. Matteo Fischetti, Michele Monaci. DEI, University of Padova. 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007
LIGHT ROBUSTNESS Matteo Fischetti, Michele Monaci DEI, University of Padova 1st ARRIVAL Annual Workshop and Review Meeting, Utrecht, April 19, 2007 Work supported by the Future and Emerging Technologies
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationLinear and non-linear programming
Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationLifting for conic mixed-integer programming
Math. Program., Ser. A DOI 1.17/s117-9-282-9 FULL LENGTH PAPER Lifting for conic mixed-integer programming Alper Atamtürk Vishnu Narayanan Received: 13 March 28 / Accepted: 28 January 29 The Author(s)
More informationA Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem
A Hierarchy of Suboptimal Policies for the Multi-period, Multi-echelon, Robust Inventory Problem Dimitris J. Bertsimas Dan A. Iancu Pablo A. Parrilo Sloan School of Management and Operations Research Center,
More informationStrong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty
Strong Duality in Robust Semi-Definite Linear Programming under Data Uncertainty V. Jeyakumar and G. Y. Li March 1, 2012 Abstract This paper develops the deterministic approach to duality for semi-definite
More informationRelations between Semidefinite, Copositive, Semi-infinite and Integer Programming
Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming Author: Faizan Ahmed Supervisor: Dr. Georg Still Master Thesis University of Twente the Netherlands May 2010 Relations
More informationRobust Farkas Lemma for Uncertain Linear Systems with Applications
Robust Farkas Lemma for Uncertain Linear Systems with Applications V. Jeyakumar and G. Li Revised Version: July 8, 2010 Abstract We present a robust Farkas lemma, which provides a new generalization of
More informationDistributionally Robust Convex Optimization
Submitted to Operations Research manuscript OPRE-2013-02-060 Authors are encouraged to submit new papers to INFORMS journals by means of a style file template, which includes the journal title. However,
More informationMIT Algebraic techniques and semidefinite optimization February 14, Lecture 3
MI 6.97 Algebraic techniques and semidefinite optimization February 4, 6 Lecture 3 Lecturer: Pablo A. Parrilo Scribe: Pablo A. Parrilo In this lecture, we will discuss one of the most important applications
More information