Primal-Dual Geometry of Level Sets and their Explanatory Value of the Practical Performance of Interior-Point Methods for Conic Optimization

1 Primal-Dual Geometry of Level Sets and their Explanatory Value of the Practical Performance of Interior-Point Methods for Conic Optimization
Robert M. Freund, M.I.T., June 2010
Based on papers in SIOPT, Mathematics of Operations Research, and Mathematical Programming

2 Outline
- Primal-Dual Geometry of Level Sets in Conic Optimization
- A Geometric Measure of Feasible Regions and Interior-Point Method (IPM) Complexity Theory
- Geometric Measures and their Explanatory Value of the Practical Performance of IPMs

3 Primal and Dual Linear Optimization Problems
P: VAL := min_x c^T x   s.t. Ax = b, x ≥ 0
D: VAL := max_{y,z} b^T y   s.t. A^T y + z = c, z ≥ 0
Here A ∈ IR^{m×n}, and x ≥ 0 means x ∈ IR^n_+.

4 Primal and Dual Conic Problem
P: VAL := min_x c^T x   s.t. Ax = b, x ∈ C
D: VAL := max_{y,z} b^T y   s.t. A^T y + z = c, z ∈ C^*
C ⊂ X is a regular cone: closed, convex, pointed, with nonempty interior.
C^* := {z : z^T x ≥ 0 for all x ∈ C}

5 The Semidefinite Cone
S^k denotes the set of symmetric k×k matrices.
S^k_+ denotes the set of positive semidefinite symmetric k×k matrices.
S^k_{++} denotes the set of positive definite symmetric k×k matrices.
X ⪰ 0 denotes that X is symmetric positive semidefinite, X ⪰ Y denotes that X − Y ⪰ 0, X ≻ 0 denotes that X is symmetric positive definite, etc.
Remark: S^k_+ = {X ∈ S^k : X ⪰ 0} is a regular convex cone. Furthermore, (S^k_+)^* = S^k_+, i.e., S^k_+ is self-dual.
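As a quick numerical illustration of the Remark (a minimal sketch, not part of the original slides; numpy and the random sampling are assumptions of the example), the trace inner product of any two positive semidefinite matrices is nonnegative, which is the easy half of self-duality:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4

def random_psd(k):
    # B @ B.T is always symmetric positive semidefinite
    B = rng.standard_normal((k, k))
    return B @ B.T

# Spot-check that <X, Z> = trace(X Z) >= 0 for PSD X and Z.
for _ in range(1000):
    X, Z = random_psd(k), random_psd(k)
    assert np.trace(X @ Z) >= -1e-9
print("trace inner product is nonnegative on all sampled PSD pairs")
```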

6 Primal and Dual Semidefinite Optimization
P: VAL := min_x c^T x   s.t. Ax = b, x ∈ S^k_+
D: VAL := max_{y,z} b^T y   s.t. A^T y + z = c, z ∈ S^k_+
This is slightly awkward, as we are not used to conceptualizing x as a matrix.

7 Primal and Dual Semidefinite Optimization, again
P: min_X C • X   s.t. A_i • X = b_i, i = 1, ..., m, X ⪰ 0
D: max_{y,Z} Σ_{i=1}^m y_i b_i   s.t. Σ_{i=1}^m y_i A_i + Z = C, Z ⪰ 0
Here C • X := Σ_{i=1}^k Σ_{j=1}^k C_ij X_ij is the trace inner product, since C • X = trace(C^T X).
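To make the trace form concrete, here is a minimal sketch (not from the slides) that builds and solves a small instance of the primal in exactly this form; cvxpy, the random data, and the choice to make X = I feasible are assumptions of the example.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
k, m = 5, 3
sym = lambda M: (M + M.T) / 2

B = rng.standard_normal((k, k))
C = B @ B.T + np.eye(k)                      # C positive definite => objective bounded below on the PSD cone
A = [sym(rng.standard_normal((k, k))) for _ in range(m)]
b = np.array([np.trace(Ai) for Ai in A])     # chosen so that X = I satisfies A_i • X = b_i

X = cp.Variable((k, k), symmetric=True)
equalities = [cp.trace(A[i] @ X) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), equalities + [X >> 0])
prob.solve()

print("VAL ≈", prob.value)
print("y ≈", [float(con.dual_value) for con in equalities])
```

The dual multipliers of the equality constraints recover the y variables of the dual problem above.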

8 Back to Linear Optimization
P: VAL := min_x c^T x   s.t. Ax = b, x ≥ 0
D: VAL := max_{y,z} b^T y   s.t. A^T y + z = c, z ≥ 0

9 Five Meta-Lessons from Interior-Point Theory/Methods
1. Linear optimization is not much more special than conic convex optimization.
2. A problem is ill-conditioned if VAL is finite but the primal or dual objective function level sets are unbounded.
3. ε-optimal solutions are important objects in their own right.
4. Choice of norm is important; some norms are more natural for certain settings.

10 Meta-Lessons from Interior-Point Theory/Methods, continued
5. All the important activity is in the (regular) cones.
Indeed, we could eliminate the y-variable and re-write P and D as:
P: min_x c^T x   s.t. x − x^0 ∈ L, x ≥ 0
D: VAL := min_z (x^0)^T z   s.t. z − c ∈ L^⊥, z ≥ 0
where x^0 satisfies Ax^0 = b and L = null(A). But we won't.

11 Primal and Dual Near-Optimal Level Sets
P: VAL := min_x c^T x s.t. Ax = b, x ≥ 0;   D: VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ≥ 0
P_ε := {x : Ax = b, x ≥ 0, c^T x ≤ VAL + ε}
D_δ := {z : ∃ y satisfying A^T y + z = c, z ≥ 0, b^T y ≥ VAL − δ}

12 Level Set Geometry Measures
Let e := (1, 1, ..., 1)^T. Define for ε, δ > 0:
R^P_ε := max_x ‖x‖_1   s.t. Ax = b, x ≥ 0, c^T x ≤ VAL + ε
r^D_δ := max_{y,z,r} r   s.t. A^T y + z = c, z ≥ 0, b^T y ≥ VAL − δ, z ≥ r e
R^P_ε is the norm of the largest primal ε-optimal solution.
r^D_δ measures the largest distance to the boundary of IR^n_+ among all dual δ-optimal solutions z.

13 Level Set Geometry Measures, continued
R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε;   r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r e
P_ε := {x : Ax = b, x ≥ 0, c^T x ≤ VAL + ε}
D_δ := {z : ∃ y satisfying A^T y + z = c, z ≥ 0, b^T y ≥ VAL − δ}
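Because ‖x‖_1 = e^T x on IR^n_+, both measures are themselves linear programs and can be computed directly. A minimal sketch (not from the slides; cvxpy and the tiny instance are assumptions of the example) computes VAL, R^P_ε and r^D_δ for a 2-variable LP and checks the reciprocal relation stated in the Main Theorem below:

```python
import cvxpy as cp
import numpy as np

# Toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0   (VAL = 1)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
eps = delta = 0.1

# VAL: solve the primal LP
x = cp.Variable(2, nonneg=True)
cp.Problem(cp.Minimize(c @ x), [A @ x == b]).solve()
VAL = float(c @ x.value)

# R^P_eps = max ||x||_1 = max e^T x over the eps-optimal primal level set P_eps
x = cp.Variable(2, nonneg=True)
R = cp.Problem(cp.Maximize(cp.sum(x)),
               [A @ x == b, c @ x <= VAL + eps]).solve()

# r^D_delta = max r  s.t.  A^T y + z = c, z >= r*e, b^T y >= VAL - delta
y, z, r = cp.Variable(1), cp.Variable(2, nonneg=True), cp.Variable()
rD = cp.Problem(cp.Maximize(r),
                [A.T @ y + z == c, z >= r, b @ y >= VAL - delta]).solve()

print(f"VAL={VAL:.3f}  R^P_eps={R:.3f}  r^D_delta={rD:.3f}")
print("min(eps,delta) <= R*r <= eps+delta:",
      min(eps, delta) - 1e-6 <= R * rD <= eps + delta + 1e-6)
```

For this instance R^P_ε = 1 and r^D_δ = δ, so the product sits at the lower end of the interval [min{ε, δ}, ε + δ].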

14 R^P_ε Measures Large Near-Optimal Solutions [figure slides 14-15]

16 r^D_δ Measures Nicely Interior Near-Optimal Solutions [figure slides 16-18]

19 Main Result: R^P_ε and r^D_δ are Reciprocally Related
R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε;   r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r e
Main Theorem: Suppose VAL is finite. If R^P_ε is positive and finite, then
min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ.
Otherwise {R^P_ε, r^D_δ} = {∞, 0}.

20 Comments
min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ
- R^P_ε, r^D_δ each involves primal and dual information
- each inequality can be tight (and cannot be improved)
- setting δ = ε, we obtain ε ≤ R^P_ε · r^D_ε ≤ 2ε, showing these two measures are inversely proportional (to within a factor of 2)

21 Comments, continued
min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ
- exchanging the roles of P and D
- how to prove

22 Comments, continued
R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε;   r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r e
ε ≤ R^P_ε · r^D_ε ≤ 2ε
The maximum norms of the primal objective level sets are almost exactly inversely proportional to the maximum distances to the boundary of the dual objective level sets.

23 Relation to LP Non-Regularity Property
Standard LP Non-Regularity Property: If VAL is finite, the set of primal optimal solutions is unbounded iff every dual feasible z lies in the boundary of IR^n_+.
R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε;   r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r e
In our notation, this is R^P_ε = ∞ iff r^D_δ = 0, which is the second part of the Main Theorem.

24 Relation to LP Non-Regularity, continued
R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε;   r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r e
The first part of the Main Theorem is: if R^P_ε is finite and positive, then
min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ
This then is a generalization to nearly-non-regular problems, where R^P_ε is finite and r^D_δ is non-zero.

25 Question about Main Result
R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε;   r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r e
Q: Why the 1-norm?
A: Because f(x) := ‖x‖_1 is a linear function on the cone IR^n_+. The linearity gives R^P_ε nice properties. If ‖·‖ is not linear on IR^n_+ then we have to slightly weaken the main theorem, as we will see.

26 Primal and Dual Conic Problem
P: VAL := min_x c^T x   s.t. Ax = b, x ∈ C
D: VAL := max_{y,z} b^T y   s.t. A^T y + z = c, z ∈ C^*
C ⊂ X is a regular cone: closed, convex, pointed, with nonempty interior.
C^* := {z : z^T x ≥ 0 for all x ∈ C}

27 Primal and Dual Level Sets
P: VAL := min_x c^T x s.t. Ax = b, x ∈ C;   D: VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ∈ C^*
P_ε := {x : Ax = b, x ∈ C, c^T x ≤ VAL + ε}
D_δ := {z : ∃ y satisfying A^T y + z = c, z ∈ C^*, b^T y ≥ VAL − δ}

28 Level Set Geometry Measures
Fix a norm ‖x‖ for the space of the x variables. The dual norm for the z variables is ‖z‖^* := max{z^T x : ‖x‖ ≤ 1}. Define for ε, δ > 0:
R^P_ε := max_x ‖x‖   s.t. Ax = b, x ∈ C, c^T x ≤ VAL + ε
r^D_δ := max_{y,z} dist^*(z, ∂C^*)   s.t. A^T y + z = c, z ∈ C^*, b^T y ≥ VAL − δ
dist^*(z, ∂C^*) denotes the distance from z to the boundary of C^* in the dual norm ‖·‖^*.
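For the semidefinite cone these quantities take a concrete form under one natural norm pairing (an assumption of this sketch, not a choice made on the slides): give the x-space the nuclear norm ‖X‖ = Σ_j |λ_j(X)|; then the dual norm is the spectral norm, and for Z ⪰ 0 the dual-norm distance to ∂S^k_+ is simply λ_min(Z), mirroring the role of the constraint z ≥ r·e in the LP case.

```python
import numpy as np

def nuclear_norm(X):
    # primal norm: sum of absolute eigenvalues of a symmetric matrix
    return float(np.sum(np.abs(np.linalg.eigvalsh(X))))

def dist_to_boundary_psd(Z):
    # For Z in S^k_+, the nearest matrix on the boundary of S^k_+ in the
    # spectral norm is Z - lambda_min * v v^T, so the distance is lambda_min(Z).
    return float(np.min(np.linalg.eigvalsh(Z)))

Z = np.diag([3.0, 1.0, 0.25])                       # a PSD example
print(nuclear_norm(Z), dist_to_boundary_psd(Z))     # 4.25 0.25
```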

29 Level Set Geometry Measures, continued
R^P_ε := max_x ‖x‖   s.t. Ax = b, x ∈ C, c^T x ≤ VAL + ε
r^D_δ := max_{y,z} dist^*(z, ∂C^*)   s.t. A^T y + z = c, z ∈ C^*, b^T y ≥ VAL − δ
R^P_ε is the norm of the largest primal ε-optimal solution.
r^D_δ measures the largest distance to the boundary of C^* among all dual δ-optimal solutions z.

30 Level Set Geometry Measures, continued
R^P_ε := max_x ‖x‖ s.t. x ∈ P_ε;   r^D_δ := max_z dist^*(z, ∂C^*) s.t. z ∈ D_δ
P_ε := {x : Ax = b, x ∈ C, c^T x ≤ VAL + ε}
D_δ := {z : ∃ y satisfying A^T y + z = c, z ∈ C^*, b^T y ≥ VAL − δ}

31 R^P_ε Measures Large Near-Optimal Solutions [figure slides 31-32]

33 r^D_δ Measures Nicely Interior Near-Optimal Solutions [figure slides 33-35]

36 Main Result, Again: R^P_ε and r^D_δ are Reciprocally Related
R^P_ε := max_x ‖x‖ s.t. x ∈ P_ε;   r^D_δ := max_z dist^*(z, ∂C^*) s.t. z ∈ D_δ
Main Theorem: Suppose VAL is finite. If R^P_ε is positive and finite, then
τ_{C^*} · min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ.
If R^P_ε = 0, then r^D_δ = ∞. If R^P_ε = ∞ and VAL is finite, then r^D_δ = 0.
Here τ_{C^*} denotes the width of the cone C^*.

37 On the Width of a Cone
Let K be a convex cone with nonempty interior.
τ_K := max{dist(x, ∂K) : x ∈ K, ‖x‖ ≤ 1}
If K is a regular cone, then τ_K ∈ (0, 1].
τ_K generalizes Goffin's inner measure.

38 A Cone with Small Width: τ_K << 1 [figure]

39 Equivalence of Norm Linearity and Width of Polar Cone
Proposition: Let K be a regular cone. The following statements are equivalent:
- τ_{K^*} ≥ α, and
- there exists w for which α‖x‖ ≤ w^T x ≤ ‖x‖ for all x ∈ K.
Corollary: τ_{K^*} = 1 implies f(x) := ‖x‖ is linear on K.

40 Main Result, Again: R^P_ε and r^D_δ are Reciprocally Related
R^P_ε := max_x ‖x‖ s.t. x ∈ P_ε;   r^D_δ := max_z dist^*(z, ∂C^*) s.t. z ∈ D_δ
Main Theorem: Suppose VAL is finite. If R^P_ε is positive and finite, then
τ_{C^*} · min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ.
If R^P_ε = 0, then r^D_δ = ∞. If R^P_ε = ∞ and VAL is finite, then r^D_δ = 0.

41 Comments
τ_{C^*} · min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ
- R^P_ε, r^D_δ each involves primal and dual information
- each inequality can be tight (and cannot be improved)
- many naturally arising norms have τ_{C^*} = 1
- setting δ = ε, we obtain ε ≤ R^P_ε · r^D_ε ≤ 2ε, showing these two measures are inversely proportional (to within a factor of 2)

42 Application: Robust Optimization [J. Vera]
Amended format:
P: z^*(b) := max_x c^T x   s.t. b − Ax ∈ K
D: min_y b^T y   s.t. A^T y = c, y ∈ K^*
For a given tolerance ε > 0, what is the limit on the size of a perturbation Δb so that z^*(b + Δb) ≥ z^*(b) − ε?

43 Application: Robust Optimization, continued
P: z^*(b) := max_x c^T x s.t. b − Ax ∈ K;   D: min_y b^T y s.t. A^T y = c, y ∈ K^*
Theorem [Vera]: Let ε > 0 and Δb satisfy ‖Δb‖ ≤ τ_K · (ε / R^D_ε). Then z^*(b + Δb) ≥ z^*(b) − ε.
The result says that τ_K · ε / R^D_ε is the required bound on the perturbation of the RHS needed to guarantee a change of no more than ε in the value of the problem.

44 τ_{K^*} for Self-Scaled Cones
Nonnegative Orthant: K = K^* = IR^n_+; define ‖x‖_p := (Σ_{j=1}^n |x_j|^p)^{1/p}. Then τ_{K^*} = n^{1/p − 1}, whereby τ_{K^*} = 1 for p = 1.
Semidefinite Cone: K = K^* = S^k_+; define ‖X‖_p := ‖λ(X)‖_p := (Σ_{j=1}^k |λ_j(X)|^p)^{1/p}. Then τ_{K^*} = k^{1/p − 1}, whereby τ_{K^*} = 1 for p = 1.
Second-Order Cone: K = K^* = {x ∈ IR^n : ‖(x_1, ..., x_{n−1})‖_2 ≤ x_n}; define ‖x‖ := max{‖(x_1, ..., x_{n−1})‖_2, |x_n|}. Then τ_{K^*} = 1.

45 Some Relations with Renegar's Condition Number
For ε ≤ ‖c‖ it holds that:
R^P_ε ≤ C(d)² + C(d) · ε/‖c‖
r^P_ε ≥ ε · τ_C / ( 3‖c‖ · (C(d)² + C(d)) )

46 Outline, again
- Primal-Dual Geometry of Level Sets in Conic Optimization
- A Geometric Measure of Feasible Regions and Interior-Point Method (IPM) Complexity Theory
- Geometric Measures and their Explanatory Value of the Practical Performance of IPMs

47 Primal and Dual Conic Problem
P: VAL := min_x c^T x   s.t. Ax = b, x ∈ C
D: VAL := max_{y,z} b^T y   s.t. A^T y + z = c, z ∈ C^*
C ⊂ X is a regular cone: closed, convex, pointed, with nonempty interior.
C^* := {z : z^T x ≥ 0 for all x ∈ C}

48 Geometric Measure of Primal Feasible Region
G^P := min_x max{ ‖x‖ / dist(x, ∂C), ‖x‖, 1 / dist(x, ∂C) }   s.t. Ax = b, x ∈ C
G^P is smaller to the extent that there is a feasible solution that is not too large and that is not too close to ∂C.
G^P is smaller if the primal has a well-conditioned feasible solution.
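For intuition, in the LP case (C = IR^n_+ with the 1-norm, and taking dist(x, ∂IR^n_+) = min_j x_j, an assumption of this sketch rather than the slides' prescription) G^P is quasiconvex and can be computed by bisection on its value, with each feasibility test a linear program:

```python
import cvxpy as cp
import numpy as np

def G_primal(A, b, lo=1e-3, hi=1e6, tol=1e-4):
    """Bisect on t: is there x with Ax = b, x >= s*e, sum(x) <= t,
    s >= 1/t and sum(x) <= t*s?  This encodes
    max{ ||x||_1 / min_j x_j, ||x||_1, 1 / min_j x_j } <= t
    for the orthant with the 1-norm."""
    n = A.shape[1]

    def feasible(t):
        x, s = cp.Variable(n), cp.Variable()
        cons = [A @ x == b, x >= s, s >= 1.0 / t,
                cp.sum(x) <= t, cp.sum(x) <= t * s]
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve()
        return prob.status == cp.OPTIMAL

    if not feasible(hi):
        return np.inf
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print("G^P ≈", G_primal(A, b))   # the best-conditioned x is (1/2, 1/2), giving G^P = 2
```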

49 Geometric Complexity Theory of Conic Optimization
G^P := min_x max{ ‖x‖ / dist(x, ∂C), ‖x‖, 1 / dist(x, ∂C) }   s.t. Ax = b, x ∈ C
[F 04] Using a (theoretical) interior-point method that solves a primal-phase-I followed by a primal-phase-II, one can bound the IPM iterations needed to compute an ε-optimal solution by
O( √ϑ_C · ( ln(R^P_ε) + ln(G^P) + ln(1/ε) ) )

50 Geometric Complexity Theory of Conic Optimization
The computational complexity of solving the primal problem P is:
O( √ϑ_C · ( ln(R^P_ε) + ln(G^P) + ln(1/ε) ) )
Is this just a pretty theory? Are R^P_ε and G^P correlated with the performance of IPMs on conic problems in practice, say from the SDPLIB suite of SDP problems?
IPMs in practice are interchangeable insofar as the roles of primal versus dual are concerned. Therefore let us replicate the above theory for the dual problem.

51 Geometric Measure of Dual Feasible Region
G^D := min_{y,z} max{ ‖z‖^* / dist^*(z, ∂C^*), ‖z‖^*, 1 / dist^*(z, ∂C^*) }   s.t. A^T y + z = c, z ∈ C^*
G^D is smaller to the extent that there is a feasible dual solution that is not too large and that is not too close to ∂C^*.
G^D is smaller if the dual has a well-conditioned feasible solution.

52 Geometric Complexity Theory of Conic Optimization
G^D := min_{y,z} max{ ‖z‖^* / dist^*(z, ∂C^*), ‖z‖^*, 1 / dist^*(z, ∂C^*) }   s.t. A^T y + z = c, z ∈ C^*
[F 04] Using a (theoretical) interior-point method that solves a dual-phase-I followed by a dual-phase-II, one can bound the IPM iterations needed to compute an ε-optimal solution by
O( √ϑ_C · ( ln(R^D_ε) + ln(G^D) + ln(1/ε) ) )

53 Aggregate Geometry Measure for Primal and Dual
Define the aggregate geometry measure:
G^A := ( R^P_ε · R^D_ε · G^P · G^D )^{1/4}
G^A aggregates the primal and dual level set measures and the primal and dual feasible region geometry measures.
Let us compute G^A for the SDPLIB suite and see if G^A is correlated with IPM iterations.
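Since the four factors can each span many orders of magnitude on SDPLIB instances, it is natural to form G^A in log space (a trivial sketch; the numbers fed in below are placeholders, not SDPLIB values):

```python
import numpy as np

def aggregate_geometry_measure(R_P, R_D, G_P, G_D):
    # geometric mean of the four measures, computed in log space;
    # infinite whenever any one factor is infinite
    vals = np.array([R_P, R_D, G_P, G_D], dtype=float)
    if np.any(np.isinf(vals)):
        return np.inf
    return float(np.exp(np.mean(np.log(vals))))

print(aggregate_geometry_measure(1e3, 1e5, 1e2, 1e4))   # 10**3.5 ≈ 3162.3
```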

54 Outline, again
- Primal-Dual Geometry of Level Sets in Conic Optimization
- A Geometric Measure of Feasible Regions and Interior-Point Method (IPM) Complexity Theory
- Geometric Measures and their Explanatory Value of the Practical Performance of IPMs

55 Semi-Definite Programming (SDP)
- broad generalization of LP
- emerged in the early 1990s as the most significant computationally tractable generalization of LP
- independently discovered by Alizadeh and Nesterov-Nemirovskii
- applications of SDP are vast, encompassing such diverse areas as integer programming and control theory

56 Partial List of Applications of SDP
- LP, convex QP, convex QCQP
- tighter relaxations of IP (within ≈12% of optimality for MAXCUT)
- static structural (truss) design, dynamic truss design, antenna array filter design, other engineered-systems problems
- control theory
- shape optimization, geometric design, volume optimization problems
- D-optimal experimental design, outlier identification, data mining, robust regression
- eigenvalue problems, matrix scaling/design
- sensor network localization
- optimization or near-optimization with large classes of non-convex polynomial constraints and objectives (Parrilo, Lasserre, SOS methods)
- robust optimization methods for standard LP, QCQP

57 IPM Set-up for SDP
P: min_X C • X   s.t. A_i • X = b_i, i = 1, ..., m, X ⪰ 0
D: max_{y,Z} Σ_{i=1}^m y_i b_i   s.t. Σ_{i=1}^m y_i A_i + Z = C, Z ⪰ 0

58 IPM for SDP, Central Path
Central path: X(µ) := argmin_X C • X − µ Σ_{j=1}^n ln(λ_j(X))   s.t. A_i • X = b_i, i = 1, ..., m, X ≻ 0
Equivalently: X(µ) := argmin_X C • X − µ ln(det(X))   s.t. A_i • X = b_i, i = 1, ..., m, X ≻ 0

59 IPM for SDP, Central Path, continued
Central path: X(µ) := argmin_X C • X − µ ln(det(X))   s.t. A_i • X = b_i, i = 1, ..., m, X ≻ 0

60 IPM for SDP, Central Path, continued
Central path: X(µ) := argmin_X C • X − µ ln(det(X))   s.t. A_i • X = b_i, i = 1, ..., m, X ≻ 0
Optimality gap property of the central path: C • X(µ) − VAL ≤ n µ
Algorithm strategy: trace the central path X(µ) for a decreasing sequence of values µ → 0
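A minimal sketch of the barrier subproblem (not the slides' algorithm and not a path-following method: it simply re-solves the barrier problem from scratch for a few values of µ with cvxpy, on an assumed random-but-strictly-feasible instance), watching the gap C • X(µ) − VAL shrink with µ:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
k, m = 4, 2
sym = lambda M: (M + M.T) / 2

B = rng.standard_normal((k, k))
C = B @ B.T + np.eye(k)                      # C positive definite keeps the SDP bounded below
A = [sym(rng.standard_normal((k, k))) for _ in range(m)]
b = np.array([np.trace(Ai) for Ai in A])     # X = I is strictly feasible

X = cp.Variable((k, k), symmetric=True)
eqs = [cp.trace(A[i] @ X) == b[i] for i in range(m)]

# VAL: the SDP itself
VAL = cp.Problem(cp.Minimize(cp.trace(C @ X)), eqs + [X >> 0]).solve()

# Central path points: minimize C•X - mu*log det(X) over the same equalities
for mu in [1.0, 0.1, 0.01]:
    cp.Problem(cp.Minimize(cp.trace(C @ X) - mu * cp.log_det(X)), eqs).solve()
    gap = float(np.trace(C @ X.value)) - VAL
    print(f"mu={mu:<5}  gap={gap:.4f}  bound k*mu={k * mu:.4f}")
```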

61 IPM Strategy for SDP [figure]

62 IPM for SDP: Computational Reality
- Alizadeh, Nesterov and Nemirovski: IPM theory for SDP → software for SOCP, SDP
- iterations on the SDPLIB suite: typically ≈30 iterations
- Each IPM iteration is expensive to solve:
  [ H(x_k)  A^T ] [ Δx ]   [ r_1 ]
  [ A        0  ] [ Δy ] = [ r_2 ]
- O(n^6) work per iteration; managing sparsity and numerical stability are tougher bottlenecks
- most IPM computational research since 1996 has focused on work per iteration, sparsity, numerical stability, lower-order methods, etc.
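The block structure of that Newton system is what each iteration exploits. A minimal dense numpy sketch (an illustration assuming H is symmetric positive definite and A has full row rank; it is not SDPT3's or any production solver's implementation) solves it by block elimination through the Schur complement:

```python
import numpy as np

def solve_newton_system(H, A, r1, r2):
    """Solve [[H, A^T], [A, 0]] [dx; dy] = [r1; r2] by block elimination:
    dx = H^{-1}(r1 - A^T dy), and dy solves the Schur complement system
    (A H^{-1} A^T) dy = A H^{-1} r1 - r2."""
    Hinv_AT = np.linalg.solve(H, A.T)
    Hinv_r1 = np.linalg.solve(H, r1)
    S = A @ Hinv_AT                              # Schur complement, m x m
    dy = np.linalg.solve(S, A @ Hinv_r1 - r2)
    dx = Hinv_r1 - Hinv_AT @ dy
    return dx, dy

# consistency check on random data
rng = np.random.default_rng(3)
n, m = 6, 2
M = rng.standard_normal((n, n))
H = M @ M.T + np.eye(n)
A = rng.standard_normal((m, n))
r1, r2 = rng.standard_normal(n), rng.standard_normal(m)
dx, dy = solve_newton_system(H, A, r1, r2)
print(np.allclose(H @ dx + A.T @ dy, r1), np.allclose(A @ dx, r2))
```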

63 SDPLIB Suite
- 92 problems; standard equality block form; SDP variables and LP variables; no linearly dependent equations
- Work with 85 problems:
  - removed 4 infeasible problems: infd1, infd2, infp1, infp2
  - removed 3 very large problems: maxg55, maxg60, thetag51
- m: …, n: …

64 Histogram of IPM Iterations for SDPLIB problems [histogram: frequency vs. IPM iterations]
SDPT3-3.1 (default settings used throughout) solves the 85 SDPLIB problem instances in … iterations.

65 IPM Iterations versus n [scatter plot of IPM iterations and n := n_s + n_l]

66 IPM Iterations versus m [scatter plot of IPM iterations and m]

67 Computing the Aggregate Geometry Measure G^A
G^A := ( R^P_ε · R^D_ε · G^P · G^D )^{1/4}
R^P_ε, R^D_ε are maximum-norm problems, so they are generally non-convex.
Computing G^P, G^D involves working with dist(x, ∂C) and dist^*(z, ∂C^*), which are not efficiently computable in general.
A judicious choice of norms allows us to compute all four quantities efficiently via one associated SDP for each quantity.

68 Geometry Measure Results
G^A was computed for the 85 SDPLIB problems: [table of finite/infinite counts for G^A, R^P_ε, R^D_ε, G^P, G^D omitted]
62% of the problems have finite G^A.
The pattern in the table is no coincidence: G^P = ∞ exactly when R^D_ε = ∞, and G^D = ∞ exactly when R^P_ε = ∞.

69 IPM iterations versus log(G^A) [scatter plot]
CORR( log(G^A), IPM Iterations ) = … (53 problems)
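The correlation statistic reported on these slides is an ordinary Pearson correlation between log(G^A) and the iteration counts; the numerical value did not survive this transcription. A minimal sketch of the computation (the arrays are made-up placeholders, not SDPLIB data):

```python
import numpy as np

# Placeholder arrays, one entry per instance with finite G^A; in the study
# these would be the computed G^A values and the SDPT3-3.1 iteration counts.
G_A = np.array([3.2e2, 1.5e4, 8.0e1, 2.7e3, 6.1e5])
iters = np.array([17, 28, 14, 23, 39])

corr = np.corrcoef(np.log(G_A), iters)[0, 1]
print(f"CORR(log(G^A), IPM iterations) = {corr:.3f}")
```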

70 What About Other Behavioral Measures?
How well does Renegar's condition measure C(d) explain the practical performance of IPMs on the SDPLIB suite?

71 C(d): Renegar's Condition Measure
(P_d): min c^T x   s.t. Ax = b, x ∈ C
(D_d): max b^T y   s.t. A^T y + z = c, z ∈ C^*
d = (A, b, c) is the data for the instance P_d and D_d
‖d‖ = max{ ‖A‖, ‖b‖, ‖c‖ }

72 Distance to Primal and Dual Infeasibility
(P_d): min c^T x s.t. Ax = b, x ∈ C;   (D_d): max b^T y s.t. A^T y + z = c, z ∈ C^*
Distance to primal infeasibility: ρ_P(d) = min{ ‖Δd‖ : P_{d+Δd} is infeasible }
Distance to dual infeasibility: ρ_D(d) = min{ ‖Δd‖ : D_{d+Δd} is infeasible }
The condition measure is: C(d) = ‖d‖ / min{ ρ_P(d), ρ_D(d) }

73 C(d): Renegar's Condition Measure
C(d) = ‖d‖ / min{ ρ_P(d), ρ_D(d) }
In theory, C(d) has been shown to be connected to:
- bounds on sizes of feasible solutions and aspect ratios of inscribed balls in feasible regions
- bounds on sizes of optimal solutions and objective values
- bounds on rates of deformation of feasible regions as data is modified
- bounds on deformation of optimal solutions as data is modified
- bounds on the complexity of a variety of algorithms

74 [Renegar 95] Using a (theoretical) IPM that solves a primal-phase-I followed by a primal-phase-II, one can bound the IPM iterations needed to compute an ε-optimal solution by
O( √ϑ_C · ( ln(C(d)) + ln(1/ε) ) )
ϑ_C = n_s + n_l for SDP
Is C(d) just really nice theory? Is C(d) correlated with IPM iterations among SDPLIB problems?

75 Condition Measure Results
Computed C(d) for 80 (out of 85) problems. Unable to compute ρ_P(d) for 5 problems: control11, equalg51, maxg32, theta6, thetag11 (m = 1596, 1001, 2000, 4375, 2401, respectively).
60% are well-posed; 40% are almost primal infeasible.
[2×2 table of problem counts by ρ_P(d) ∈ {0, > 0} and ρ_D(d) ∈ {0, > 0} omitted]

76 IPM iterations versus log(C(d)) [scatter plot]
CORR( log(C(d)), IPM Iterations ) = … (48 problems)

77 Some Conclusions
- 62% of the 85 SDPLIB problems have finite aggregate geometry measure G^A
- CORR( log(G^A), IPM Iterations ) = … among the SDPLIB problems with finite geometry measure G^A
- 32 of 80 SDPLIB problems are almost primal infeasible, i.e., C(d) = +∞
- CORR( log(C(d)), IPM Iterations ) = … among the 42 problems with finite C(d)

78 IPM iterations versus log(G^A), repeated [scatter plot]
CORR( log(G^A), IPM Iterations ) = … (53 problems)
