Primal-Dual Geometry of Level Sets and their Explanatory Value of the Practical Performance of Interior-Point Methods for Conic Optimization


Robert M. Freund, M.I.T., June 2010
From papers in SIOPT, Mathematics of Operations Research, and Mathematical Programming

Outline

- Primal-Dual Geometry of Level Sets in Conic Optimization
- A Geometric Measure of Feasible Regions and Interior-Point Method (IPM) Complexity Theory
- Geometric Measures and their Explanatory Value of the Practical Performance of IPMs

Primal and Dual Linear Optimization Problems

P : VAL := min_x c^T x s.t. Ax = b, x ≥ 0
D : VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ≥ 0

A ∈ R^{m×n}; x ≥ 0 means x ∈ R^n_+

Primal and Dual Conic Problem

P : VAL := min_x c^T x s.t. Ax = b, x ∈ C
D : VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ∈ C^*

C ⊂ X is a regular cone: closed, convex, pointed, with nonempty interior
C^* := {z : z^T x ≥ 0 for all x ∈ C}

The Semidefinite Cone

- S^k denotes the set of symmetric k×k matrices
- S^k_+ denotes the set of positive semidefinite symmetric k×k matrices
- S^k_{++} denotes the set of positive definite symmetric k×k matrices
- X ⪰ 0 denotes that X is symmetric positive semidefinite
- X ⪰ Y denotes that X − Y ⪰ 0
- X ≻ 0 denotes that X is symmetric positive definite, etc.

Remark: S^k_+ = {X ∈ S^k : X ⪰ 0} is a regular convex cone. Furthermore, (S^k_+)^* = S^k_+, i.e., S^k_+ is self-dual.

Primal and Dual Semidefinite Optimization

P : VAL := min_x c^T x s.t. Ax = b, x ∈ S^k_+
D : VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ∈ S^k_+

This is slightly awkward, as we are not used to conceptualizing x as a matrix.

Primal and Dual Semidefinite Optimization, again

min_X C • X s.t. A_i • X = b_i, i = 1,...,m; X ⪰ 0
max_{y,Z} Σ_{i=1}^m y_i b_i s.t. Σ_{i=1}^m y_i A_i + Z = C; Z ⪰ 0

Here C • X := Σ_{i=1}^k Σ_{j=1}^k C_ij X_ij is the trace inner product, since C • X = trace(C^T X)
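
To make the block form concrete, here is a minimal cvxpy sketch (not part of the talk); the data C, A_i, b_i is a toy instance of my own choosing, with A_i = e_i e_i^T so that the constraints fix diag(X) = e and the problem is bounded.

```python
# A minimal cvxpy sketch of the primal SDP above (toy data, not from the talk).
import cvxpy as cp
import numpy as np

k = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((k, k)); C = (C + C.T) / 2             # symmetric cost matrix
As = [np.outer(np.eye(k)[i], np.eye(k)[i]) for i in range(k)]  # A_i = e_i e_i^T
bs = [1.0] * k                                                 # A_i . X = X_ii = 1

X = cp.Variable((k, k), symmetric=True)
constraints = [cp.trace(As[i] @ X) == bs[i] for i in range(k)] + [X >> 0]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print(prob.value)  # VAL; note C . X = trace(C^T X) = trace(C X) since C is symmetric
```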

Back to Linear Optimization

P : VAL := min_x c^T x s.t. Ax = b, x ≥ 0
D : VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ≥ 0

Five Meta-Lessons from Interior-Point Theory/Methods

1. Linear optimization is not much more special than conic convex optimization
2. A problem is ill-conditioned if VAL is finite but the primal or dual objective function level sets are unbounded
3. ε-optimal solutions are important objects in their own right
4. Choice of norm is important; some norms are more natural for certain settings

Meta-Lessons from Interior-Point Theory/Methods, continued

5. All the important activity is in the (regular) cones

Indeed, we could eliminate the y-variables and re-write P and D as:

P : min_x c^T x s.t. x − x^0 ∈ L, x ≥ 0
D : VAL := min_z (x^0)^T z s.t. z − c ∈ L^⊥, z ≥ 0

where x^0 satisfies Ax^0 = b and L = null(A). But we won't.

Primal and Dual Near-Optimal Level Sets

P : VAL := min_x c^T x s.t. Ax = b, x ≥ 0
D : VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ≥ 0

P_ε := {x : Ax = b, x ≥ 0, c^T x ≤ VAL + ε}
D_δ := {z : ∃y satisfying A^T y + z = c, z ≥ 0, b^T y ≥ VAL − δ}

Level Set Geometry Measures

Let e := (1, 1, ..., 1)^T. Define for ε, δ > 0:

R^P_ε := max_x ‖x‖_1 s.t. Ax = b, x ≥ 0, c^T x ≤ VAL + ε
r^D_δ := max_{y,z,r} r s.t. A^T y + z = c, z ≥ 0, b^T y ≥ VAL − δ, z ≥ r·e

R^P_ε is the norm of the largest primal ε-optimal solution
r^D_δ measures the largest distance to the boundary of R^n_+ among all dual δ-optimal solutions z

Level Set Geometry Measures, continued

R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε
r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r·e

P_ε := {x : Ax = b, x ≥ 0, c^T x ≤ VAL + ε}
D_δ := {z : ∃y satisfying A^T y + z = c, z ≥ 0, b^T y ≥ VAL − δ}

R^P_ε Measures Large Near-Optimal Solutions
(figure slides)

r^D_δ Measures Nicely Interior Near-Optimal Solutions
(figure slides)

Main Result: R^P_ε and r^D_δ are Reciprocally Related

R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε
r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r·e

Main Theorem: Suppose VAL is finite. If R^P_ε is positive and finite, then

min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ.

Otherwise {R^P_ε, r^D_δ} = {∞, 0}.

Comments

min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ

- R^P_ε and r^D_δ each involve both primal and dual information
- each inequality can be tight (and cannot be improved)
- setting δ = ε, we obtain ε ≤ R^P_ε · r^D_ε ≤ 2ε, showing these two measures are inversely proportional (to within a factor of 2)
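
Since ‖x‖_1 = e^T x on R^n_+, both measures are themselves LPs, so the theorem is easy to check numerically. A sketch follows; the instance (A, b, c) is a toy example of my own, not from the talk.

```python
# Compute R^P_eps and r^D_delta for a tiny LP and check
#   min{eps, delta} <= R^P_eps * r^D_delta <= eps + delta.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
m, n = A.shape
eps, delta = 0.1, 0.05

x = cp.Variable(n)
val = cp.Problem(cp.Minimize(c @ x), [A @ x == b, x >= 0]).solve()   # VAL

# R^P_eps: maximize ||x||_1 = e^T x (linear, since x >= 0) over P_eps.
R = cp.Problem(cp.Maximize(cp.sum(x)),
               [A @ x == b, x >= 0, c @ x <= val + eps]).solve()

# r^D_delta: maximize r subject to z >= r*e over the dual level set D_delta.
y = cp.Variable(m); z = cp.Variable(n); r = cp.Variable()
rD = cp.Problem(cp.Maximize(r),
                [A.T @ y + z == c, z >= 0, b @ y >= val - delta, z >= r]).solve()

print(min(eps, delta), R * rD, eps + delta)  # here: 0.05 <= 0.05 <= 0.15
```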

Comments, continued

min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ

- exchanging the roles of P and D
- how to prove

Comments, continued

R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε
r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r·e

ε ≤ R^P_ε · r^D_ε ≤ 2ε

The maximum norms of the primal objective level sets are almost exactly inversely proportional to the maximum distances to the boundary of the dual objective level sets.

Relation to the LP Non-Regularity Property

Standard LP non-regularity property: if VAL is finite, the set of primal optimal solutions is unbounded iff every dual feasible z lies in the boundary of R^n_+.

R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε
r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r·e

In our notation, this is R^P_ε = ∞ iff r^D_δ = 0, which is the second part of the Main Theorem.

Relation to LP Non-Regularity, continued

R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε
r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r·e

The first part of the Main Theorem is: if R^P_ε is finite and positive, then

min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ

This then is a generalization to nearly-non-regular problems, where R^P_ε is finite and r^D_δ is non-zero.

Question about the Main Result

R^P_ε := max_x ‖x‖_1 s.t. x ∈ P_ε
r^D_δ := max_{z,r} r s.t. z ∈ D_δ, z ≥ r·e

Q: Why the 1-norm?

A: Because f(x) := ‖x‖_1 is a linear function on the cone R^n_+. The linearity gives R^P_ε nice properties. If ‖·‖ is not linear on R^n_+, then we have to slightly weaken the Main Theorem, as we will see...
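
Spelling out the linearity claim (a one-line check, added here for completeness):

```latex
\|x\|_1 = \sum_{j=1}^n |x_j| = \sum_{j=1}^n x_j = e^T x
\quad \text{for all } x \in \mathbb{R}^n_+ ,
```

so R^P_ε = max{e^T x : x ∈ P_ε} is itself a linear program.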

Primal and Dual Conic Problem

P : VAL := min_x c^T x s.t. Ax = b, x ∈ C
D : VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ∈ C^*

C ⊂ X is a regular cone: closed, convex, pointed, with nonempty interior
C^* := {z : z^T x ≥ 0 for all x ∈ C}

Primal and Dual Level Sets

P : VAL := min_x c^T x s.t. Ax = b, x ∈ C
D : VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ∈ C^*

P_ε := {x : Ax = b, x ∈ C, c^T x ≤ VAL + ε}
D_δ := {z : ∃y satisfying A^T y + z = c, z ∈ C^*, b^T y ≥ VAL − δ}

Level Set Geometry Measures

Fix a norm ‖x‖ for the space of the x variables. The dual norm for the z variables is ‖z‖^* := max{z^T x : ‖x‖ ≤ 1}. Define for ε, δ > 0:

R^P_ε := max_x ‖x‖ s.t. Ax = b, x ∈ C, c^T x ≤ VAL + ε
r^D_δ := max_{y,z} dist^*(z, ∂C^*) s.t. A^T y + z = c, z ∈ C^*, b^T y ≥ VAL − δ

dist^*(z, ∂C^*) denotes the distance from z to the boundary of C^* in the dual norm ‖·‖^*

Level Set Geometry Measures, continued

R^P_ε := max_x ‖x‖ s.t. Ax = b, x ∈ C, c^T x ≤ VAL + ε
r^D_δ := max_{y,z} dist^*(z, ∂C^*) s.t. A^T y + z = c, z ∈ C^*, b^T y ≥ VAL − δ

R^P_ε is the norm of the largest primal ε-optimal solution
r^D_δ measures the largest distance to the boundary of C^* among all dual δ-optimal solutions z

Level Set Geometry Measures, continued

R^P_ε := max_x ‖x‖ s.t. x ∈ P_ε
r^D_δ := max_z dist^*(z, ∂C^*) s.t. z ∈ D_δ

P_ε := {x : Ax = b, x ∈ C, c^T x ≤ VAL + ε}
D_δ := {z : ∃y satisfying A^T y + z = c, z ∈ C^*, b^T y ≥ VAL − δ}

R^P_ε Measures Large Near-Optimal Solutions
(figure slides)

r^D_δ Measures Nicely Interior Near-Optimal Solutions
(figure slides)

Main Result, Again: R^P_ε and r^D_δ are Reciprocally Related

R^P_ε := max_x ‖x‖ s.t. x ∈ P_ε
r^D_δ := max_z dist^*(z, ∂C^*) s.t. z ∈ D_δ

Main Theorem: Suppose VAL is finite. If R^P_ε is positive and finite, then

τ_{C^*} · min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ.

If R^P_ε = 0, then r^D_δ = ∞. If R^P_ε = ∞ and VAL is finite, then r^D_δ = 0.

Here τ_{C^*} denotes the width of the dual cone C^*...

On the Width of a Cone

Let K be a convex cone with nonempty interior.

τ_K := max{dist(x, ∂K) : x ∈ K, ‖x‖ ≤ 1}

If K is a regular cone, then τ_K ∈ (0, 1]
τ_K generalizes Goffin's inner measure

A Cone with Small Width
(figure slide: τ_K ≪ 1)

Equivalence of Norm Linearity and Width of the Polar Cone

Proposition: Let K be a regular cone. The following statements are equivalent:

- τ_{K^*} ≥ α
- there exists w̄ for which α · w̄^T x ≤ ‖x‖ ≤ w̄^T x for all x ∈ K

Corollary: τ_{K^*} = 1 implies f(x) := ‖x‖ is linear on K

Main Result, Again: R^P_ε and r^D_δ are Reciprocally Related

R^P_ε := max_x ‖x‖ s.t. x ∈ P_ε
r^D_δ := max_z dist^*(z, ∂C^*) s.t. z ∈ D_δ

Main Theorem: Suppose VAL is finite. If R^P_ε is positive and finite, then

τ_{C^*} · min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ.

If R^P_ε = 0, then r^D_δ = ∞. If R^P_ε = ∞ and VAL is finite, then r^D_δ = 0.

Comments

τ_{C^*} · min{ε, δ} ≤ R^P_ε · r^D_δ ≤ ε + δ

- R^P_ε and r^D_δ each involve both primal and dual information
- each inequality can be tight (and cannot be improved)
- many naturally arising norms have τ_{C^*} = 1
- setting δ = ε, we obtain ε ≤ R^P_ε · r^D_ε ≤ 2ε, showing these two measures are inversely proportional (to within a factor of 2)

Application: Robust Optimization [J. Vera]

Amended format:

P : z^*(b) := max_x c^T x s.t. b − Ax ∈ K
D : min_y b^T y s.t. A^T y = c, y ∈ K^*

For a given tolerance ε > 0, what is the limit on the size of a perturbation Δb so that |z^*(b + Δb) − z^*(b)| ≤ ε?

Application: Robust Optimization, continued

P : z^*(b) := max_x c^T x s.t. b − Ax ∈ K
D : min_y b^T y s.t. A^T y = c, y ∈ K^*

Theorem [Vera]: Let ε > 0 and Δb satisfy ‖Δb‖ ≤ τ_K · (ε / R^D_ε). Then |z^*(b + Δb) − z^*(b)| ≤ ε.

The result says that τ_K · ε / R^D_ε is the required bound on the perturbation of the RHS needed to guarantee a change of no more than ε in the value of the problem.

τ_K for Self-Scaled Cones

Nonnegative Orthant: K = K^* = R^n_+; define ‖x‖_p := (Σ_{j=1}^n |x_j|^p)^{1/p}. Then τ_K = n^{1/p − 1}, whereby τ_K = 1 for p = 1.

Semidefinite Cone: K = K^* = S^k_+; define ‖X‖_p := ‖λ(X)‖_p := (Σ_{j=1}^k |λ_j(X)|^p)^{1/p}. Then τ_K = k^{1/p − 1}, whereby τ_K = 1 for p = 1.

Second-Order Cone: K = K^* = {x ∈ R^n : ‖(x_1, ..., x_{n−1})‖_2 ≤ x_n}; define ‖x‖ := max{‖(x_1, ..., x_{n−1})‖_2, |x_n|}. Then τ_K = 1.
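
The orthant formula is easy to verify numerically. In the sketch below (my own, not from the talk) I read the slide as measuring the width of K = R^n_+ in the norm dual to ‖·‖_p, i.e. in ‖·‖_q with 1/p + 1/q = 1; for x ≥ 0 the distance to ∂R^n_+ in any q-norm is min_j x_j, so the width reduces to a small convex program.

```python
# Numerically check tau_K = n^(1/p - 1) for K = R^n_+ (width measured in the
# dual q-norm, 1/p + 1/q = 1 -- my reading of the slide's convention).
import cvxpy as cp
import numpy as np

def orthant_width(n: int, q) -> float:
    x = cp.Variable(n)
    t = cp.Variable()
    # dist(x, bd R^n_+) = min_j x_j, so maximize t subject to x >= t*e.
    cp.Problem(cp.Maximize(t), [x >= t, cp.norm(x, q) <= 1]).solve()
    return t.value

n = 5
for p, q in [(1.0, np.inf), (2.0, 2.0), (4.0, 4.0 / 3.0)]:
    print(p, orthant_width(n, q), n ** (1.0 / p - 1.0))  # computed vs. slide formula
```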

Some Relations with Renegar's Condition Measure

For ε ≤ ‖c‖ it holds that:

R^P_ε ≤ C(d)² + C(d) · (ε / ‖c‖)

r^P_ε ≥ (ε · τ_C) / (3‖c‖ · (C(d)² + C(d)))

Outline, again

- Primal-Dual Geometry of Level Sets in Conic Optimization
- A Geometric Measure of Feasible Regions and Interior-Point Method (IPM) Complexity Theory
- Geometric Measures and their Explanatory Value of the Practical Performance of IPMs

Primal and Dual Conic Problem

P : VAL := min_x c^T x s.t. Ax = b, x ∈ C
D : VAL := max_{y,z} b^T y s.t. A^T y + z = c, z ∈ C^*

C ⊂ X is a regular cone: closed, convex, pointed, with nonempty interior
C^* := {z : z^T x ≥ 0 for all x ∈ C}

Geometric Measure of the Primal Feasible Region

G^P := min_x max{ ‖x‖ / dist(x, ∂C), ‖x‖, 1 / dist(x, ∂C) } s.t. Ax = b, x ∈ C

G^P is smaller to the extent that there is a feasible solution that is not too large and not too close to ∂C
G^P is smaller if the primal has a well-conditioned feasible solution

Geometric Complexity Theory of Conic Optimization

G^P := min_x max{ ‖x‖ / dist(x, ∂C), ‖x‖, 1 / dist(x, ∂C) } s.t. Ax = b, x ∈ C

[F 04] Using a (theoretical) interior-point method that solves a primal phase-I followed by a primal phase-II, one can bound the IPM iterations needed to compute an ε-optimal solution by

O( √ϑ_C · ( ln(R^P_ε) + ln(G^P) + ln(1/ε) ) )

Geometric Complexity Theory of Conic Optimization

The computational complexity of solving the primal problem P is:

O( √ϑ_C · ( ln(R^P_ε) + ln(G^P) + ln(1/ε) ) )

Is this just a pretty theory? Are R^P_ε and G^P correlated with the performance of IPMs on conic problems in practice, say from the SDPLIB suite of SDP problems?

IPMs in practice are interchangeable insofar as the roles of primal versus dual are concerned. Therefore let us replicate the above theory for the dual problem.

Geometric Measure of the Dual Feasible Region

G^D := min_{y,z} max{ ‖z‖^* / dist^*(z, ∂C^*), ‖z‖^*, 1 / dist^*(z, ∂C^*) } s.t. A^T y + z = c, z ∈ C^*

G^D is smaller to the extent that there is a feasible dual solution that is not too large and not too close to ∂C^*
G^D is smaller if the dual has a well-conditioned feasible solution

Geometric Complexity Theory of Conic Optimization

G^D := min_{y,z} max{ ‖z‖^* / dist^*(z, ∂C^*), ‖z‖^*, 1 / dist^*(z, ∂C^*) } s.t. A^T y + z = c, z ∈ C^*

[F 04] Using a (theoretical) interior-point method that solves a dual phase-I followed by a dual phase-II, one can bound the IPM iterations needed to compute an ε-optimal solution by

O( √ϑ_C · ( ln(R^D_ε) + ln(G^D) + ln(1/ε) ) )

Aggregate Geometry Measure for Primal and Dual

Define the aggregate geometry measure:

G^A := ( R^P_ε · R^D_ε · G^P · G^D )^{1/4}

G^A aggregates the primal and dual level-set measures and the primal and dual feasible-region geometry measures.

Let us compute G^A for the SDPLIB suite and see if G^A is correlated with IPM iterations.

Outline, again

- Primal-Dual Geometry of Level Sets in Conic Optimization
- A Geometric Measure of Feasible Regions and Interior-Point Method (IPM) Complexity Theory
- Geometric Measures and their Explanatory Value of the Practical Performance of IPMs

Semidefinite Programming (SDP)

- broad generalization of LP
- emerged in the early 1990s as the most significant computationally tractable generalization of LP
- independently discovered by Alizadeh and Nesterov-Nemirovskii
- applications of SDP are vast, encompassing such diverse areas as integer programming and control theory

Partial List of Applications of SDP

- LP, convex QP, convex QCQP
- tighter relaxations of IP (within 12% of optimality for MAXCUT)
- static structural (truss) design, dynamic truss design, antenna array filter design, other engineered-systems problems
- control theory
- shape optimization, geometric design, volume optimization problems
- D-optimal experimental design, outlier identification, data mining, robust regression
- eigenvalue problems, matrix scaling/design
- sensor network localization
- optimization or near-optimization with large classes of non-convex polynomial constraints and objectives (Parrilo, Lasserre, SOS methods)
- robust optimization methods for standard LP, QCQP

IPM Set-up for SDP

min_X C • X s.t. A_i • X = b_i, i = 1,...,m; X ⪰ 0
max_{y,Z} Σ_{i=1}^m y_i b_i s.t. Σ_{i=1}^m y_i A_i + Z = C; Z ⪰ 0

IPM for SDP, Central Path

Central path:

X(μ) := argmin_X C • X − μ · Σ_{j=1}^n ln(λ_j(X)) s.t. A_i • X = b_i, i = 1,...,m; X ≻ 0

Equivalently (since Σ_j ln(λ_j(X)) = ln(det(X))):

X(μ) := argmin_X C • X − μ · ln(det(X)) s.t. A_i • X = b_i, i = 1,...,m; X ≻ 0

IPM for SDP, Central Path, continued

Central path: X(μ) := argmin_X C • X − μ · ln(det(X)) s.t. A_i • X = b_i, i = 1,...,m; X ≻ 0

IPM for SDP, Central Path, continued

Central path: X(μ) := argmin_X C • X − μ · ln(det(X)) s.t. A_i • X = b_i, i = 1,...,m; X ≻ 0

Optimality-gap property of the central path: C • X(μ) − VAL ≤ n·μ

Algorithm strategy: trace the central path X(μ) for a decreasing sequence of values μ → 0
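
A minimal way to see the strategy and the gap property in action is to re-solve the barrier problem for a few values of μ (a sketch with toy data of my own; cvxpy's log_det supplies the barrier, whereas a real IPM would instead take Newton steps between updates of μ).

```python
# Trace the SDP central path X(mu) by solving the log-det barrier problem
# for decreasing mu, and check the gap property C . X(mu) - VAL <= n*mu.
import cvxpy as cp
import numpy as np

k = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((k, k)); C = (C + C.T) / 2
As = [np.outer(np.eye(k)[i], np.eye(k)[i]) for i in range(k)]  # constraints: diag(X) = e
b = np.ones(k)

X = cp.Variable((k, k), symmetric=True)
cons = [cp.trace(As[i] @ X) == b[i] for i in range(k)]
val = cp.Problem(cp.Minimize(cp.trace(C @ X)), cons + [X >> 0]).solve()

for mu in [1.0, 0.1, 0.01]:
    # log_det(X) is -inf unless X is positive definite, so no explicit X >> 0 is needed.
    cp.Problem(cp.Minimize(cp.trace(C @ X) - mu * cp.log_det(X)), cons).solve()
    print(mu, float(np.trace(C @ X.value)) - val, k * mu)  # gap <= n*mu (n = k here)
```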

IPM Strategy for SDP
(figure slide)

IPM for SDP: Computational Reality

- 1991-94: Alizadeh, Nesterov and Nemirovski develop IPM theory for SDP
- 1996: software for SOCP, SDP; 10-60 iterations on the SDPLIB suite, typically 30 iterations

Each IPM iteration is expensive to solve:

( H(x^k)  A^T ) ( Δx )   ( r_1 )
(   A      0  ) ( Δy ) = ( r_2 )

- O(n^6) work per iteration; managing sparsity and numerical stability are tougher bottlenecks
- most IPM computational research since 1996 has focused on work per iteration, sparsity, numerical stability, lower-order methods, etc.
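
For the LP log-barrier min c^T x − μ·Σ_j ln(x_j), the displayed system takes a concrete form with H(x) = μ·diag(1/x²); here is a schematic numpy sketch of one Newton step (my own illustration of the system above, not SDPT3's actual search direction).

```python
# Assemble and solve the IPM Newton system  [H A^T; A 0][dx; dy] = [r1; r2]
# for the LP barrier, where H(x) = mu * diag(1/x^2).
import numpy as np

def newton_step(A, b, c, x, mu):
    m, n = A.shape
    H = np.diag(mu / x**2)                 # Hessian of the barrier objective
    r1 = -(c - mu / x)                     # minus the gradient of the barrier objective
    r2 = b - A @ x                         # primal feasibility residual
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([r1, r2]))
    return sol[:n], sol[n:]

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0, 3.0])
dx, dy = newton_step(A, b, c, x=np.ones(3) / 3, mu=0.1)
print(dx, dy)  # the dense solve is cubic; exploiting sparsity is the real bottleneck
```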

SDPLIB Suite

http://www.nmt.edu/~sdplib/

- 92 problems, standard equality block form, SDP variables and LP variables, no linearly dependent equations

Work with 85 problems:
- removed 4 infeasible problems: infd1, infd2, infp1, infp2
- removed 3 very large problems: maxg55 (5000×5000), maxg60 (7000×7000), thetag51 (6910×1001)
- m: 6 to 4375; n: 13 to 2000

Histogram of IPM Iterations for SDPLIB Problems

(histogram: frequency versus IPM iterations, 0-60)

SDPT3-3.1 default settings used throughout. SDPT3-3.1 solves the 85 SDPLIB problem instances in 10-60 iterations.

IPM Iterations versus n

(scatter plot of IPM iterations against n := n_s + n_l, with n on a log scale)

IPM Iterations versus m

(scatter plot of IPM iterations against m, with m on a log scale)

Computing the Aggregate Geometry Measure G^A

G^A := ( R^P_ε · R^D_ε · G^P · G^D )^{1/4}

R^P_ε and R^D_ε are maximum-norm problems, so they are generally non-convex. Computing G^P and G^D involves working with dist(x, ∂C) and dist^*(z, ∂C^*), which are not efficiently computable in general.

A judicious choice of norms allows us to compute all four quantities efficiently, via one associated SDP for each quantity.
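
For the SDP cone one such judicious choice is explicit: with the nuclear norm ‖X‖ = Σ_j |λ_j(X)| on the primal side, ‖X‖ = trace(X) is linear on S^k_+, and in its dual (spectral) norm dist^*(Z, ∂S^k_+) = λ_min(Z), so both level-set measures become SDPs. A sketch with the same toy data as earlier (my own illustration; the transcript does not spell out the talk's computational recipe):

```python
# R^P_eps and r^D_delta for an SDP under the nuclear/spectral norm pairing.
import cvxpy as cp
import numpy as np

k = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((k, k)); C = (C + C.T) / 2
As = [np.outer(np.eye(k)[i], np.eye(k)[i]) for i in range(k)]  # constraints: diag(X) = e
b = np.ones(k)
eps = delta = 0.1

X = cp.Variable((k, k), symmetric=True)
feas = [cp.trace(As[i] @ X) == b[i] for i in range(k)] + [X >> 0]
val = cp.Problem(cp.Minimize(cp.trace(C @ X)), feas).solve()

# R^P_eps: on S^k_+ the nuclear norm equals trace(X), a linear objective.
R = cp.Problem(cp.Maximize(cp.trace(X)),
               feas + [cp.trace(C @ X) <= val + eps]).solve()

# r^D_delta: spectral-norm distance to bd S^k_+ is lambda_min(Z), i.e. max t with Z - t*I >= 0.
y = cp.Variable(k); Z = cp.Variable((k, k), symmetric=True); t = cp.Variable()
rD = cp.Problem(cp.Maximize(t),
                [sum(y[i] * As[i] for i in range(k)) + Z == C,
                 Z >> 0, b @ y >= val - delta,
                 Z - t * np.eye(k) >> 0]).solve()
print(R, rD)
```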

Geometry Measure Results

G^A was computed for 85 SDPLIB problems:

            G^A   R^P_ε   R^D_ε   G^P   G^D
Finite       53     85      53     53    85
Infinite     32      -      32     32     -
Total        85     85      85     85    85

62% of the problems have finite G^A. The pattern in the table is no coincidence: G^P = ∞ exactly when R^D_ε = ∞, and G^D = ∞ exactly when R^P_ε = ∞.

IPM Iterations versus log(G^A)

(scatter plot)

CORR( log(G^A), IPM Iterations ) = 0.901 (53 problems)

What About Other Behavioral Measures?

How well does Renegar's condition measure C(d) explain the practical performance of IPMs on the SDPLIB suite?

C(d): Renegar's Condition Measure

(P_d) : min c^T x s.t. Ax = b, x ∈ C
(D_d) : max b^T y s.t. A^T y + z = c, z ∈ C^*

d = (A, b, c) is the data for the instances P_d and D_d
‖d‖ = max{ ‖A‖, ‖b‖, ‖c‖ }

Distance to Primal and Dual Infeasibility

(P_d) : min c^T x s.t. Ax = b, x ∈ C
(D_d) : max b^T y s.t. A^T y + z = c, z ∈ C^*

Distance to primal infeasibility: ρ_P(d) = min{ ‖Δd‖ : P_{d+Δd} is infeasible }
Distance to dual infeasibility: ρ_D(d) = min{ ‖Δd‖ : D_{d+Δd} is infeasible }

The condition measure is: C(d) = ‖d‖ / min{ ρ_P(d), ρ_D(d) }

C(d): Renegar's Condition Measure

C(d) = ‖d‖ / min{ ρ_P(d), ρ_D(d) }

In theory, C(d) has been shown to be connected to:

- bounds on sizes of feasible solutions and aspect ratios of inscribed balls in feasible regions
- bounds on sizes of optimal solutions and objective values
- bounds on rates of deformation of feasible regions as the data is modified
- bounds on deformation of optimal solutions as the data is modified
- bounds on the complexity of a variety of algorithms

[Renegar 95] Using a (theoretical) IPM that solves a primal phase-I followed by a primal phase-II, one can bound the IPM iterations needed to compute an ε-optimal solution by

O( √ϑ_C · ( ln(C(d)) + ln(1/ε) ) )

ϑ_C = n_s + n_l for SDP

Is C(d) just really nice theory? Is C(d) correlated with IPM iterations among the SDPLIB problems?

Condition Measure Results

Computed C(d) for 80 (out of 85) problems. Unable to compute ρ_P(d) for 5 problems: control11, equalg51, maxg32, theta6, thetag11 (m = 1596, 1001, 2000, 4375, 2401, respectively)

60% are well-posed; 40% are almost primal infeasible

              ρ_D(d) = 0   ρ_D(d) > 0   Total
ρ_P(d) = 0         0           32         32
ρ_P(d) > 0         0           48         48
Total              0           80         80

IPM Iterations versus log(C(d))

(scatter plot)

CORR( log(C(d)), IPM Iterations ) = 0.630 (48 problems)

Some Conclusions

- 62% of the 85 SDPLIB problems have a finite aggregate geometry measure G^A
- CORR( log(G^A), IPM Iterations ) = 0.901 among the SDPLIB problems with finite geometry measure G^A
- 32 of 80 SDPLIB problems are almost primal infeasible, i.e., C(d) = +∞
- CORR( log(C(d)), IPM Iterations ) = 0.630 among the 48 problems with finite C(d)
