Analytic Center Cutting-Plane Method

S. Boyd, L. Vandenberghe, and J. Skaf
April 14, 2011

Contents

1  Analytic center cutting-plane method
2  Computing the analytic center
3  Pruning constraints
4  Lower bound and stopping criterion
5  Convergence proof
6  Numerical examples
   6.1  Basic ACCPM
   6.2  ACCPM with constraint dropping
   6.3  Epigraph ACCPM

In these notes we describe in more detail the analytic center cutting-plane method (ACCPM) for non-differentiable convex optimization, prove its convergence, and give some numerical examples. ACCPM was developed by Goffin and Vial [GV93] and analyzed by Nesterov [Nes95] and Atkinson and Vaidya [AV95]. These notes assume a basic familiarity with convex optimization (see [BV04]), cutting-plane methods (see the EE364b notes Localization and Cutting-Plane Methods), and subgradients (see the EE364b notes Subgradients).

1 Analytic center cutting-plane method

The basic ACCPM algorithm is:

Analytic center cutting-plane method (ACCPM)

    given an initial polyhedron $P_0$ known to contain $X$.
    $k := 0$.
    repeat
        Compute $x^{(k+1)}$, the analytic center of $P_k$.
        Query the cutting-plane oracle at $x^{(k+1)}$.
        If the oracle determines that $x^{(k+1)} \in X$, quit.
        Else, add the returned cutting-plane inequality to $P_k$:
            $P_{k+1} := P_k \cap \{z \mid a^T z \le b\}$.
        If $P_{k+1} = \emptyset$, quit.
        $k := k + 1$.

There are several variations on this basic algorithm. For example, at each step we can add multiple cuts, instead of just one. We can also prune or drop constraints, for example, after computing the analytic center of $P_k$. Later we will see a simple but non-heuristic stopping criterion.

We can construct a cutting-plane $a^T z \le b$ at $x^{(k)}$, for the standard convex problem

    minimize    $f_0(x)$
    subject to  $f_i(x) \le 0, \quad i = 1,\dots,m$,

as follows. If $x^{(k)}$ violates the $i$th constraint, i.e., $f_i(x^{(k)}) > 0$, we can take

    $a = g_i, \qquad b = g_i^T x^{(k)} - f_i(x^{(k)})$,    (1)

where $g_i \in \partial f_i(x^{(k)})$. If $x^{(k)}$ is feasible, we can take

    $a = g_0, \qquad b = g_0^T x^{(k)} - f_0(x^{(k)}) + f^{(k)}_{\rm best}$,    (2)

where $g_0 \in \partial f_0(x^{(k)})$, and $f^{(k)}_{\rm best}$ is the best (lowest) objective value encountered for a feasible iterate.
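To make the loop above concrete, here is a minimal Python sketch of the basic ACCPM iteration (an illustration, not the notes' own code). The names `analytic_center` and `oracle` are assumptions: `analytic_center` stands for the centering routine developed in Section 2, and `oracle(x)` is assumed to return `None` when $x \in X$, and otherwise a cut $(a, b)$ representing $a^T z \le b$.

```python
import numpy as np

def accpm(oracle, A0, b0, max_iters=200):
    """Basic ACCPM sketch. P_k = {z : A z <= b} is stored as (A, b);
    at each step we center, query the oracle, and append the returned cut."""
    A, b = np.array(A0, dtype=float), np.array(b0, dtype=float)
    x = None
    for k in range(max_iters):
        x = analytic_center(A, b)      # placeholder for the routine in Section 2
        if x is None:                  # centering failed: P_k is (numerically) empty
            break
        cut = oracle(x)                # None signals that x is in the target set X
        if cut is None:
            return x, A, b
        a_new, b_new = cut             # cutting plane a_new^T z <= b_new
        A = np.vstack([A, a_new])
        b = np.append(b, b_new)
    return x, A, b
```

For the standard convex problem above, the oracle would return the cut (1) at an infeasible point and the deep objective cut (2) at a feasible one.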

2 Computing the analytic center

Each iteration of ACCPM requires computing the analytic center of a set of linear inequalities (and, possibly, determining whether the set of linear inequalities is feasible),

    $a_i^T x \le b_i, \quad i = 1,\dots,m$,

that define the current localization polyhedron $P$. In this section we describe some methods that can be used to do this. We note that the inequalities defined by $a_i$ and $b_i$, as well as their number $m$, can change at each iteration of ACCPM, as we add new cutting-planes and possibly prune others. In addition, these inequalities can include some of the original inequalities that define $P_0$.

To find the analytic center, we must solve the problem

    minimize    $\Phi(x) = -\sum_{i=1}^m \log(b_i - a_i^T x)$.    (3)

This is an unconstrained problem, but the domain of the objective function is the open polyhedron

    $\mathop{\bf dom}\Phi = \{x \mid a_i^T x < b_i, \ i = 1,\dots,m\}$,

i.e., the interior of the polyhedron. Part of the challenge of computing the analytic center is that we are not given a point in the domain. One simple approach is to use a phase I optimization method (see [BV04, §11.4]) to find a point in $\mathop{\bf dom}\Phi$ (or determine that $\mathop{\bf dom}\Phi = \emptyset$). Once we find such a point, we can use a standard Newton method to compute the analytic center (see [BV04, §9.5]).

A simple alternative is to use an infeasible start Newton method (see [BV04, §10.3]) to solve (3), starting from a point that is outside $\mathop{\bf dom}\Phi$, as suggested by Goffin and Sharifi-Mokhtarian in [GSM99]. We reformulate the problem as

    minimize    $-\sum_{i=1}^m \log y_i$
    subject to  $y = b - Ax$,    (4)

with variables $x$ and $y$. The infeasible start Newton method can be started from any $x$ and any $y \succ 0$. We can, for example, take the initial $x$ to be the previous point $x_{\rm prev}$, and choose $y$ as

    $y_i = \begin{cases} b_i - a_i^T x & \text{if } b_i - a_i^T x > 0 \\ 1 & \text{otherwise.} \end{cases}$

In the basic form of ACCPM, $x_{\rm prev}$ strictly satisfies all of the inequalities except the last one added; in this case, $y_i = b_i - a_i^T x$ holds for all but one index.
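As a small illustration (not from the notes), the starting point just described might be constructed as follows; `x_prev` stands for the previous analytic center, or any convenient point.

```python
import numpy as np

def infeasible_start_point(A, b, x_prev):
    """Starting point for the infeasible start Newton method applied to (4):
    keep the previous center; set y_i to the slack when positive, else 1."""
    x = np.array(x_prev, dtype=float)
    slack = b - A @ x                     # b_i - a_i^T x
    y = np.where(slack > 0, slack, 1.0)   # y must be strictly positive
    return x, y
```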

We define the primal and dual residuals for (4) as

    $r_p = y + Ax - b, \qquad r_d = \begin{bmatrix} A^T\nu \\ g + \nu \end{bmatrix}$,    (5)

where $g = -\mathop{\bf diag}(1/y_i)\mathbf 1$ is the gradient of the objective. We also define $r$ to be $(r_d, r_p)$. The Newton step at a point $(x, y, \nu)$ is defined by the system of linear equations

    $\begin{bmatrix} 0 & 0 & A^T \\ 0 & H & I \\ A & I & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta \nu \end{bmatrix} = -\begin{bmatrix} r_d \\ r_p \end{bmatrix}$,

where $H = \mathop{\bf diag}(1/y_i^2)$ is the Hessian of the objective. We can solve this system by block elimination (see [BV04, §10.4]), using the expressions

    $\Delta x = (A^T H A)^{-1}(A^T g - A^T H r_p)$,
    $\Delta y = -A\Delta x - r_p$,    (6)
    $\Delta \nu = -H\Delta y - g - \nu$.

We can compute $\Delta x$ from the first equation in several ways. We can, for example, form $A^T H A$, then compute its Cholesky factorization, then carry out backward and forward substitution. Another option is to compute $\Delta x$ by solving the least-squares problem

    $\Delta x = \mathop{\rm argmin}_z \left\| H^{1/2}(Az + r_p) - H^{-1/2}g \right\|_2$.

The infeasible start Newton method is:

Infeasible start Newton method.

    given starting point $x$, $y \succ 0$, tolerance $\epsilon > 0$, $\alpha \in (0, 1/2)$, $\beta \in (0, 1)$.
    $\nu := 0$. Compute residuals from (5).
    repeat
        1. Compute Newton step $(\Delta x, \Delta y, \Delta\nu)$ using (6).
        2. Backtracking line search on $\|r\|_2$.
           $t := 1$.
           while $y + t\Delta y \not\succ 0$, $t := \beta t$.
           while $\|r(x + t\Delta x, y + t\Delta y, \nu + t\Delta\nu)\|_2 > (1 - \alpha t)\|r(x, y, \nu)\|_2$, $t := \beta t$.
        3. Update. $x := x + t\Delta x$, $y := y + t\Delta y$, $\nu := \nu + t\Delta\nu$.
    until $y = b - Ax$ and $\|r(x, y, \nu)\|_2 \le \epsilon$.

This method works quite well, unless the polyhedron is empty (or, in practice, very small), in which case the algorithm does not converge. To guard against this, we fix a maximum number of iterations. Typical parameter values are $\beta = 0.5$, $\alpha = 0.01$, with the maximum number of iterations set to 50.
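The following Python sketch (an illustration, not the notes' implementation) puts the pieces together: the residuals (5), the block-elimination Newton step (6), and the backtracking line search. The starting $y$ may be any positive vector, for example the one constructed in the previous snippet.

```python
import numpy as np

def analytic_center(A, b, x0=None, y0=None, eps=1e-8,
                    alpha=0.01, beta=0.5, max_iters=50):
    """Infeasible start Newton method for (4):
    minimize -sum(log y) subject to y = b - A x.
    Returns the analytic center, or None if it fails to converge
    (which in practice suggests the polyhedron is empty or very small)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    y = np.ones(m) if y0 is None else np.array(y0, dtype=float)
    nu = np.zeros(m)

    def resid(x, y, nu):
        g = -1.0 / y                                              # gradient of -sum(log y)
        return np.concatenate([A.T @ nu, g + nu, y + A @ x - b])  # (r_d, r_p)

    for _ in range(max_iters):
        g = -1.0 / y
        h = 1.0 / y**2                                 # diagonal of H
        r_p = y + A @ x - b
        # block elimination (6)
        dx = np.linalg.solve(A.T @ (h[:, None] * A), A.T @ g - A.T @ (h * r_p))
        dy = -A @ dx - r_p
        dnu = -h * dy - g - nu
        # backtracking line search on ||r||_2, keeping y strictly positive
        t = 1.0
        while np.min(y + t * dy) <= 0:
            t *= beta
        r_norm = np.linalg.norm(resid(x, y, nu))
        while np.linalg.norm(resid(x + t*dx, y + t*dy, nu + t*dnu)) > (1 - alpha*t) * r_norm:
            t *= beta
        x, y, nu = x + t*dx, y + t*dy, nu + t*dnu
        if np.allclose(y, b - A @ x) and np.linalg.norm(resid(x, y, nu)) <= eps:
            return x
    return None
```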

3 Pruning constraints

There is a simple method for ranking the relevance of the inequalities $a_i^T x \le b_i$, $i = 1,\dots,m$ that define a polyhedron $P$, once we have computed the analytic center $x^\star$. Let

    $H = \nabla^2\Phi(x^\star) = \sum_{i=1}^m (b_i - a_i^T x^\star)^{-2} a_i a_i^T$.

Then the ellipsoid

    $E_{\rm in} = \{z \mid (z - x^\star)^T H (z - x^\star) \le 1\}$

lies inside $P$, and the ellipsoid

    $E_{\rm out} = \{z \mid (z - x^\star)^T H (z - x^\star) \le m^2\}$,

which is $E_{\rm in}$ scaled by a factor $m$ about its center, contains $P$. Thus the ellipsoid $E_{\rm in}$ at least grossly (within a factor $m$) approximates the shape of $P$. This suggests the (ir)relevance measure

    $\eta_i = \dfrac{b_i - a_i^T x^\star}{\|H^{-1/2} a_i\|_2} = \dfrac{b_i - a_i^T x^\star}{\sqrt{a_i^T H^{-1} a_i}}$

for the inequality $a_i^T x \le b_i$. This factor is always at least one; if it is $m$ or larger, then the inequality is certainly redundant. These factors (which are easily computed from the computations involved in the Newton method) can be used to decide which constraints to drop or prune. We simply drop the constraints with the largest values of $\eta_i$; we keep the constraints with smaller values. One typical scheme is to keep some fixed number $N$ of constraints, where $N$ is usually chosen to be between $3n$ and $5n$. When this is done, the computational effort per iteration (i.e., centering) does not grow as ACCPM proceeds, as it does when no pruning is done.
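A small sketch (illustrative, not from the notes) of this pruning rule: given the cuts and the analytic center, compute $\eta_i$ and keep the $N$ constraints with the smallest values. The Hessian is formed explicitly here for clarity; in practice one would reuse quantities already computed in the centering Newton method.

```python
import numpy as np

def prune_constraints(A, b, x_star, N):
    """Keep the N most relevant inequalities a_i^T x <= b_i, ranked by the
    (ir)relevance measure eta_i = (b_i - a_i^T x*) / sqrt(a_i^T H^{-1} a_i)."""
    s = b - A @ x_star                         # slacks at the analytic center
    H = (A / s[:, None]**2).T @ A              # sum_i a_i a_i^T / s_i^2
    Hinv_At = np.linalg.solve(H, A.T)          # columns are H^{-1} a_i
    eta = s / np.sqrt(np.sum(A.T * Hinv_At, axis=0))
    keep = np.argsort(eta)[:N]                 # smallest eta = most relevant
    return A[keep], b[keep]
```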

4 Lower bound and stopping criterion

In §7 of the notes Localization and Cutting-Plane Methods we described a general method for constructing a lower bound on the optimal value $p^\star$ of the convex problem

    minimize    $f_0(x)$
    subject to  $f_1(x) \le 0, \quad Cx \preceq d$,    (7)

assuming we have evaluated the value and a subgradient of its objective and constraint functions at some points. For notational simplicity we lump multiple constraints into one by forming the maximum of the constraint functions. We will re-order the iterates so that at $x^{(1)},\dots,x^{(q)}$, we have evaluated the value and a subgradient of the objective function $f_0$. This gives us the piecewise-linear underestimator $\hat f_0$ of $f_0$, defined as

    $\hat f_0(z) = \max_{i=1,\dots,q}\left( f_0(x^{(i)}) + g_0^{(i)T}(z - x^{(i)}) \right) \le f_0(z)$.

We assume that at $x^{(q+1)},\dots,x^{(k)}$, we have evaluated the value and a subgradient of the constraint function $f_1$. This gives us the piecewise-linear underestimator $\hat f_1$, given by

    $\hat f_1(z) = \max_{i=q+1,\dots,k}\left( f_1(x^{(i)}) + g_1^{(i)T}(z - x^{(i)}) \right) \le f_1(z)$.

Now we form the problem

    minimize    $\hat f_0(x)$
    subject to  $\hat f_1(x) \le 0, \quad Cx \preceq d$,    (8)

whose optimal value is a lower bound on $p^\star$. In ACCPM we can easily construct a lower bound on the problem (8), as a by-product of the analytic centering computation, which in turn gives a lower bound on the original problem (7). We first write (8) as the LP

    minimize    $t$
    subject to  $f_0(x^{(i)}) + g_0^{(i)T}(x - x^{(i)}) \le t, \quad i = 1,\dots,q$
                $f_1(x^{(i)}) + g_1^{(i)T}(x - x^{(i)}) \le 0, \quad i = q+1,\dots,k$
                $Cx \preceq d$,    (9)

with variables $x$ and $t$. The dual problem is

    maximize    $\sum_{i=1}^q \lambda_i\left(f_0(x^{(i)}) - g_0^{(i)T}x^{(i)}\right) + \sum_{i=q+1}^k \lambda_i\left(f_1(x^{(i)}) - g_1^{(i)T}x^{(i)}\right) - d^T\mu$
    subject to  $\sum_{i=1}^q \lambda_i g_0^{(i)} + \sum_{i=q+1}^k \lambda_i g_1^{(i)} + C^T\mu = 0$
                $\mu \succeq 0, \quad \lambda \succeq 0, \quad \sum_{i=1}^q \lambda_i = 1$,    (10)

with variables $\lambda$, $\mu$. We can compute at modest cost a lower bound to the optimal value of (9), and hence to $p^\star$, by finding a dual feasible point for (10). The optimality condition for $x^{(k+1)}$, the analytic center of the inequalities

    $f_0(x^{(i)}) + g_0^{(i)T}(x - x^{(i)}) \le f^{(i)}_{\rm best}, \quad i = 1,\dots,q$,
    $f_1(x^{(i)}) + g_1^{(i)T}(x - x^{(i)}) \le 0, \quad i = q+1,\dots,k$,
    $c_i^T x \le d_i, \quad i = 1,\dots,m$,

is

    $\sum_{i=1}^q \dfrac{g_0^{(i)}}{f^{(i)}_{\rm best} - f_0(x^{(i)}) - g_0^{(i)T}(x^{(k+1)} - x^{(i)})} + \sum_{i=q+1}^k \dfrac{g_1^{(i)}}{-f_1(x^{(i)}) - g_1^{(i)T}(x^{(k+1)} - x^{(i)})} + \sum_{i=1}^m \dfrac{c_i}{d_i - c_i^T x^{(k+1)}} = 0$.    (11)

Let $\tau_i = 1/\left(f^{(i)}_{\rm best} - f_0(x^{(i)}) - g_0^{(i)T}(x^{(k+1)} - x^{(i)})\right)$ for $i = 1,\dots,q$. Comparing (11) with the equality constraint in (10), we can construct a dual feasible point by taking

    $\lambda_i = \begin{cases} \tau_i/\mathbf 1^T\tau & \text{for } i = 1,\dots,q \\ 1/\left(\left(-f_1(x^{(i)}) - g_1^{(i)T}(x^{(k+1)} - x^{(i)})\right)\mathbf 1^T\tau\right) & \text{for } i = q+1,\dots,k, \end{cases}$

    $\mu_i = 1/\left(\left(d_i - c_i^T x^{(k+1)}\right)\mathbf 1^T\tau\right), \quad i = 1,\dots,m$.

Using these values of $\lambda$ and $\mu$, we conclude that $p^\star \ge l^{(k+1)}$, where

    $l^{(k+1)} = \sum_{i=1}^q \lambda_i\left(f_0(x^{(i)}) - g_0^{(i)T}x^{(i)}\right) + \sum_{i=q+1}^k \lambda_i\left(f_1(x^{(i)}) - g_1^{(i)T}x^{(i)}\right) - d^T\mu$.

Let $l^{(k)}_{\rm best}$ be the best lower bound found after $k$ iterations. The ACCPM algorithm can be stopped once the gap $f^{(k)}_{\rm best} - l^{(k)}_{\rm best}$ is less than a desired value $\epsilon > 0$. This guarantees that $x^{(k)}$ is, at most, $\epsilon$-suboptimal.
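As an illustration (not from the notes), the lower bound $l^{(k+1)}$ can be assembled directly from the stored cut data and the analytic center. The sketch below assumes the objective cuts are stored as values $f_0(x^{(i)})$, subgradients $g_0^{(i)}$, query points $x^{(i)}$, and right-hand sides $f^{(i)}_{\rm best}$, and similarly for the constraint cuts and the fixed inequalities $Cx \preceq d$; all names are my own.

```python
import numpy as np

def accpm_lower_bound(x_center,
                      f0_vals, g0, X0, fbest,   # objective cuts, i = 1,...,q
                      f1_vals, g1, X1,          # constraint cuts, i = q+1,...,k
                      C, d):                    # fixed inequalities C x <= d
    """Lower bound l^(k+1) from the dual feasible point built out of the
    analytic-center optimality condition (11). Rows of g0/X0 (resp. g1/X1)
    hold the subgradients/query points of the objective (resp. constraint) cuts."""
    # positive slacks of the three groups of inequalities at the analytic center
    tau = 1.0 / (fbest - f0_vals - np.sum(g0 * (x_center - X0), axis=1))
    s1 = -f1_vals - np.sum(g1 * (x_center - X1), axis=1)
    s2 = d - C @ x_center
    T = tau.sum()
    lam0 = tau / T                    # lambda_i, i = 1,...,q
    lam1 = 1.0 / (s1 * T)             # lambda_i, i = q+1,...,k
    mu = 1.0 / (s2 * T)               # mu_i
    return (lam0 @ (f0_vals - np.sum(g0 * X0, axis=1))
            + lam1 @ (f1_vals - np.sum(g1 * X1, axis=1))
            - d @ mu)
```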

5 Convergence proof

In this section we give a convergence proof for ACCPM, adapted from Ye [Ye97, Chap. 6]. We take the initial polyhedron as the unit box, centered at the origin, with unit length sides, i.e., the initial set of linear inequalities is $-(1/2)\mathbf 1 \preceq z \preceq (1/2)\mathbf 1$, so the first analytic center is $x^{(0)} = 0$. (Throughout this section we index the iterates so that $x^{(k)}$ denotes the analytic center of the polyhedron defined by the box and the first $k$ cuts.) We assume the target set $X$ contains a ball with radius $r < 1/2$, and show that the number of iterations is no more than a constant times $n^2/r^2$.

Assuming the algorithm has not terminated, the set of inequalities after $k$ iterations is

    $-(1/2)\mathbf 1 \preceq z \preceq (1/2)\mathbf 1, \qquad a_i^T z \le b_i, \quad i = 1,\dots,k$.    (12)

We assume the cuts are neutral, so $b_i = a_i^T x^{(i-1)}$ for $i = 1,\dots,k$, since the $i$th cut is generated at the analytic center $x^{(i-1)}$. Without loss of generality we normalize the vectors $a_i$ so that $\|a_i\|_2 = 1$. We will let $\phi_k : \mathbf R^n \to \mathbf R$ be the logarithmic barrier function associated with the inequalities (12),

    $\phi_k(z) = -\sum_{i=1}^n \log(1/2 + z_i) - \sum_{i=1}^n \log(1/2 - z_i) - \sum_{i=1}^k \log(b_i - a_i^T z)$.

The iterate $x^{(k)}$ is the minimizer of this logarithmic barrier function. Since the algorithm has not terminated, the polyhedron $P_k$ defined by (12) still contains the target set $X$, and hence also a ball with radius $r$ and (unknown) center $x_c$. We have

    $(-1/2 + r)\mathbf 1 \preceq x_c \preceq (1/2 - r)\mathbf 1$,

and the slacks of the inequalities $a_i^T z \le b_i$ evaluated at $x_c$ also exceed $r$:

    $b_i - \sup_{\|v\|_2 \le 1} a_i^T(x_c + rv) = b_i - a_i^T x_c - r\|a_i\|_2 = b_i - a_i^T x_c - r \ge 0$.

Therefore $\phi_k(x_c) \le -(2n + k)\log r$ and, since $x^{(k)}$ is the minimizer of $\phi_k$,

    $\phi_k(x^{(k)}) = \inf_z \phi_k(z) \le \phi_k(x_c) \le (2n + k)\log(1/r)$.    (13)

We can also derive a lower bound on $\phi_k(x^{(k)})$ by noting that the functions $\phi_j$ are self-concordant for $j = 1,\dots,k$. Using the inequality (9.48) of [BV04, page 502], we have

    $\phi_j(x) \ge \phi_j(x^{(j)}) + \sqrt{(x - x^{(j)})^T H_j (x - x^{(j)})} - \log\left(1 + \sqrt{(x - x^{(j)})^T H_j (x - x^{(j)})}\right)$

for all $x \in \mathop{\bf dom}\phi_j$, where $H_j$ is the Hessian of $\phi_j$ at $x^{(j)}$. If we apply this inequality to $\phi_{k-1}$ we obtain

    $\phi_k(x^{(k)}) = \inf_x \phi_k(x) = \inf_x\left( \phi_{k-1}(x) - \log(-a_k^T(x - x^{(k-1)})) \right)$
    $\qquad \ge \inf_v\left( \phi_{k-1}(x^{(k-1)}) + \sqrt{v^T H_{k-1} v} - \log\left(1 + \sqrt{v^T H_{k-1} v}\right) - \log(-a_k^T v) \right)$.

By setting the gradient of the righthand side equal to zero, we find that it is minimized at

    $\hat v = -\dfrac{1 + \sqrt 5}{2\sqrt{a_k^T H_{k-1}^{-1} a_k}}\, H_{k-1}^{-1} a_k$,

which yields

    $\phi_k(x^{(k)}) \ge \phi_{k-1}(x^{(k-1)}) + \sqrt{\hat v^T H_{k-1}\hat v} - \log\left(1 + \sqrt{\hat v^T H_{k-1}\hat v}\right) - \log(-a_k^T\hat v)$
    $\qquad = \phi_{k-1}(x^{(k-1)}) + 0.1744 - \tfrac{1}{2}\log(a_k^T H_{k-1}^{-1} a_k)$
    $\qquad \ge 0.1744\,k - \tfrac{1}{2}\sum_{i=1}^k \log(a_i^T H_{i-1}^{-1} a_i) + 2n\log 2$
    $\qquad \ge 0.1744\,k - \tfrac{k}{2}\log\left(\tfrac{1}{k}\sum_{i=1}^k a_i^T H_{i-1}^{-1} a_i\right) + 2n\log 2$,    (14)

because $\phi_0(x^{(0)}) = 2n\log 2$ (the last step uses the concavity of $\log$). We can further bound the second term on the righthand side by noting that

    $H_i = 4\mathop{\bf diag}(\mathbf 1 + 2x^{(i)})^{-2} + 4\mathop{\bf diag}(\mathbf 1 - 2x^{(i)})^{-2} + \sum_{j=1}^i \dfrac{1}{(b_j - a_j^T x^{(i)})^2}\, a_j a_j^T \succeq I + \dfrac{1}{n}\sum_{j=1}^i a_j a_j^T$,

because $-(1/2)\mathbf 1 \preceq x^{(i)} \preceq (1/2)\mathbf 1$ and

    $b_i - a_i^T x^{(k)} = a_i^T(x^{(i-1)} - x^{(k)}) \le \|a_i\|_2\|x^{(i-1)} - x^{(k)}\|_2 \le \sqrt n$.

Define $B_0 = I$ and $B_i = I + (1/n)\sum_{j=1}^i a_j a_j^T$ for $i \ge 1$. Then

= logdetb k 1 +log(1+ 1 n at kb 1 k 1 a k) logdetb k 1 + 1 2n at kb 1 1 k a T i B 1 2n i 1a i. k 1 a k (The second inequality follows from the fact that a T kb 1 k 1 a k 1, and log(1+x) (log2)x x/2 for 0 x 1.) Therefore k and we can simplify (14) as a T i H 1 i 1a i k a T i B 1 i 1a i 2n 2 log(1+ k n 2), φ k (x (k) ) k ( ) 2 log 2 log(1+k/n2 ) +2nlog2 k/n 2 = (2n k 2 )log2+ k ( 2 log k/n 2 log(1+k/n 2 ) Combining this lower bound with the upper bound (13) we find ). (15) k 2 log2+ k ( 2 log k/n 2 ) klog(1/r)+2nlog(1/(2r)). (16) log(1+k/n 2 ) Fromthisitisclearthatthealgorithmterminatesafterafinitek: sincetheratio(k/n 2 )/log(1+ k/n 2 ) goes to infinity, the left hand side grows faster than linearly as k increases. We can derive an explicit bound on k as follows. Let α(r) be the solution of the nonlinear equation α/log(1+α) = 1/(2r 4 ). Suppose k > max{2n,n 2 α(r)}. Then we have a contradiction in (16): klog(1/(2r 2 )) k 2 log 2+ k ( 2 log k/n 2 ) klog(1/r)+2nlog(1/(2r)), log(1+k/n 2 ) i.e., k log(1/(2r)) 2n log(1/(2r)). We conclude that max{2n,n 2 α(r)} is an upper bound on the number of iterations. 9

6 Numerical examples

We consider the problem of minimizing a piecewise-linear function:

    minimize    $f(x) = \max_{i=1,\dots,m}(a_i^T x + b_i)$,

with variable $x \in \mathbf R^n$. The particular problem instance we use to illustrate the different methods has $n = 20$ variables and $m = 100$ terms, with problem data $a_i$ and $b_i$ generated from a unit normal distribution. Its optimal value (which is easily computed via linear programming) is $f^\star \approx 1.1$.
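For concreteness, here is an illustrative sketch (not the authors' code) of how such a problem instance and its cutting-plane oracle might be set up; the variable names and random seed are my own. Since the problem is unconstrained, every query point is feasible and the oracle always returns the deep objective cut (2).

```python
import numpy as np

# Problem data: f(x) = max_i (a_i^T x + b_i), n = 20 variables, m = 100 terms,
# entries drawn from a unit normal distribution (the seed is arbitrary).
rng = np.random.default_rng(0)
n, m = 20, 100
A_pwl = rng.standard_normal((m, n))
b_pwl = rng.standard_normal(m)

f_best = np.inf

def pwl_cut_oracle(x):
    """Deep objective cut (2): a = g, b = g^T x - f(x) + f_best,
    where g = a_j for a maximizing term j (a subgradient of the max)."""
    global f_best
    vals = A_pwl @ x + b_pwl
    j = int(np.argmax(vals))
    f, g = vals[j], A_pwl[j]          # objective value and a subgradient at x
    f_best = min(f_best, f)           # every point is feasible here
    return g, g @ x - f + f_best
```

Running the `accpm` sketch from Section 1 with this oracle and the box $P_0 = \{z \mid \|z\|_\infty \le 1\}$ (i.e., $A_0 = [I; -I]$, $b_0 = \mathbf 1$) reproduces the setup described above; in practice one stops after a fixed number of iterations or when the gap of Section 4 is small.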

6.1 Basic ACCPM

We use the basic ACCPM algorithm described in §1, with the infeasible start Newton method used to carry out the centering steps. We take $P_0$ to be the unit box $\{z \mid \|z\|_\infty \le 1\}$. We keep track of $f_{\rm best}$, the best objective value found, and use this to generate deep objective cuts.

Figure 1 shows the convergence of $f^{(k)} - f^\star$, which is nearly linear (on a semi-log scale). Figure 2 shows the convergence of the true suboptimality $f^{(k)}_{\rm best} - f^\star$ (which is not available as the algorithm is running), along with the upper bound on suboptimality $f^{(k)}_{\rm best} - l^{(k)}_{\rm best}$ (which is available as the algorithm runs). Figure 3 shows $f^{(k)}_{\rm best} - f^\star$ versus the cumulative number of Newton steps required by the infeasible start Newton method in the centering steps. This plot shows that around 10 Newton steps are needed, on average, to carry out the centering. We can see that as the algorithm progresses (and $P^{(k)}$ gets small), there is a small increase in the number of Newton steps required to achieve the same factor increase in accuracy.

6.2 ACCPM with constraint dropping

We illustrate ACCPM with constraint dropping, keeping at most $N = 3n$ constraints, using the constraint dropping scheme described in §3. Figure 4 shows the convergence of $f^{(k)} - f^\star$ with and without constraint dropping. The plots show that keeping only $3n$ constraints has almost no effect on the progress of the algorithm, as measured in iterations. At $k = 200$ iterations, the pruned polyhedron has 60 constraints, whereas the unpruned polyhedron has 240 constraints. The number of flops per Newton step is, to first order, $m_k n^2$, where $m_k$ is the number of constraints at iteration $k$, so the total flop count of iteration $k$ can be estimated as $N_k m_k n^2$, where $N_k$ is the number of Newton steps required in iteration $k$. Figure 5 shows the convergence of $f^{(k)}_{\rm best} - f^\star$ versus the (estimated) cumulative flop count.

Figure 1: The value of $f^{(k)} - f^\star$ versus iteration number $k$, for the basic ACCPM.

Figure 2: The value of $f^{(k)}_{\rm best} - f^\star$ (in blue) and the value of $f^{(k)}_{\rm best} - l^{(k)}_{\rm best}$ (in red) versus iteration number $k$, for the basic ACCPM.

Figure 3: The value of $f^{(k)}_{\rm best} - f^\star$ versus the cumulative number of Newton steps, for the basic ACCPM.

Figure 4: The value of $f^{(k)} - f^\star$ versus iteration number $k$ in the case where all constraints are kept (blue) and only $3n$ constraints are kept (red).

Figure 5: The value of $f^{(k)}_{\rm best} - f^\star$ versus estimated cumulative flop count in the case where all constraints are kept (blue) and only $3n$ constraints are kept (red).

6.3 Epigraph ACCPM

Figure 6 shows the convergence of $f^{(k)} - f^\star$ for the epigraph ACCPM. (See §6 of the EE364b notes Localization and Cutting-Plane Methods for details.) Epigraph ACCPM requires only 50 iterations to reach the same accuracy that was reached by basic ACCPM in 200 iterations. The convergence of $f^{(k)}_{\rm best} - f^\star$ versus the cumulative number of Newton steps is shown in Figure 7. We see that in epigraph ACCPM the average number of Newton steps per iteration is a bit higher than for basic ACCPM, but a substantial advantage remains.

Figure 6: The value of $f^{(k)} - f^\star$ versus iteration number $k$, for epigraph ACCPM.

Figure 7: The value of $f^{(k)}_{\rm best} - f^\star$ versus the cumulative number of Newton steps, for epigraph ACCPM.

References

[AV95] D. Atkinson and P. Vaidya. A cutting plane algorithm for convex programming that uses analytic centers. Mathematical Programming, 69:1-43, 1995.

[BV04] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[GSM99] J. Goffin and F. Sharifi-Mokhtarian. Using the primal-dual infeasible Newton method in the analytic center method for problems defined by deep cutting planes. Journal of Optimization Theory and Applications, 101:35-58, April 1999.

[GV93] J. Goffin and J. Vial. On the computation of weighted analytic centers and dual ellipsoids with the projective algorithm. Mathematical Programming, 60:81-92, 1993.

[Nes95] Y. Nesterov. Cutting-plane algorithms from analytic centers: efficiency estimates. Mathematical Programming, 69:149-176, 1995.

[Ye97] Y. Ye. Complexity analysis of the analytic center cutting plane method that uses multiple cuts. Mathematical Programming, 78:85-104, 1997.