An interior-point trust-region polynomial algorithm for convex programming

Ye LU (Department of Mathematics, University of Notre Dame; email: ylu4@nd.edu) and Ya-xiang YUAN (LSEC, ICMSEC, AMSS, Chinese Academy of Sciences, partially supported by NSFC grant 10231060; email: yyx@lsec.cc.ac.cn)

Abstract. An interior-point trust-region algorithm is proposed for minimization of a convex quadratic objective function over a general convex set. The algorithm uses a trust-region model to ensure descent on a suitable merit function. The complexity of our algorithm is proved to be as good as that of interior-point polynomial algorithms.

Key words. interior-point algorithm, self-concordant barrier, trust-region subproblem.

1 Introduction

The idea of interior-point trust-region algorithms can be traced back to Dikin (1967), where an interior ellipsoid method was developed for linear problems. Recently, Tseng (2004) gave a global and local convergence analysis of Dikin's algorithm for indefinite quadratic programming; we also refer to Absil and Tits (2005) for this line of research. Ye (1992) developed an affine scaling algorithm for indefinite quadratic programming by solving a sequence of trust-region subproblems. Global first-order and second-order convergence results were proved, and later enhanced by Sun (1993) in the convex case. An affine-scaling potential-reduction interior-point trust-region algorithm for indefinite quadratic programming was developed in chapter 9 of Ye (1997); it has recently been extended to symmetric cone programming by Faybusovich and Lu (2004). In the trust-region literature, Conn, Gould and Toint (2000) developed a primal barrier trust-region algorithm, which has recently been extended to symmetric cone

programming by Lu and Yuan (2005). In this paper, we present an affine-scaling primal barrier interior-point trust-region algorithm. Our algorithm minimizes a convex quadratic objective function over a general convex set and is, to our knowledge, the first interior-point trust-region algorithm covering essentially the whole of convex programming. Moreover, by using techniques and properties from both the interior-point and the trust-region literature, we show that the complexity of our algorithm is as good as that of interior-point polynomial algorithms. This gives strong theoretical support for the good practical performance of the interior-point trust-region algorithm of Lu and Yuan (2005). Although our analysis is based on a fixed trust-region radius and on solving the trust-region subproblem exactly in each step, the framework of the interior-point trust-region algorithm allows a flexible trust-region radius and the use of iterative methods that solve the trust-region subproblem only approximately in practical implementations. This advantage makes the interior-point trust-region algorithm competitive with pure interior-point algorithms for solving large-scale problems. The goal of this paper is to show that the complexity of the interior-point trust-region algorithm is as good as the complexity of pure interior-point algorithms in convex programming.

2 Self-concordant barrier and its properties

In this section, we present the concept of a self-concordant barrier and those of its properties that play an important role in our analysis of section 3. We assume that $K$ is a convex set in a finite-dimensional real vector space $E$. The following definition of a self-concordant barrier is due to Nesterov and Nemirovskii (1994).

Definition 2.1. Let $F: \operatorname{int} K \to \mathbb{R}$ be a $C^3$-smooth convex function such that $F(x) \to \infty$ as $x$ approaches the boundary of $K$, and

$$F'''(x)[h, h, h] \le 2\,\langle F''(x)h, h\rangle^{3/2} \qquad (2.1)$$

for all $x \in \operatorname{int} K$ and all $h \in E$. Then $F$ is called a self-concordant function for $K$. $F$ is called a self-concordant barrier if $F$ is a self-concordant function and

$$\vartheta := \sup_{x \in \operatorname{int} K}\, \langle F'(x), F''(x)^{-1} F'(x)\rangle < \infty. \qquad (2.2)$$

$\vartheta$ is called the barrier parameter of $F$.
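To make Definition 2.1 concrete: the logarithmic barrier $F(x) = -\sum_{i=1}^n \ln x_i$ for the nonnegative orthant satisfies (2.1) and (2.2) with barrier parameter $\vartheta = n$. The following numerical check is our illustration, not part of the original paper.

```python
import numpy as np

# Our illustration (not from the paper): F(x) = -sum(log x_i) is a
# self-concordant barrier for the nonnegative orthant, with parameter n.
rng = np.random.default_rng(0)
n = 5
for _ in range(1000):
    x = rng.uniform(0.1, 10.0, n)            # interior point of R^n_+
    h = rng.normal(size=n)                   # arbitrary direction
    # F'(x) = -1/x, F''(x) = diag(1/x^2), F'''(x)[h,h,h] = -2 sum h_i^3/x_i^3
    d3 = -2.0 * np.sum(h**3 / x**3)
    hq = np.sum(h**2 / x**2)                 # <F''(x) h, h>
    assert abs(d3) <= 2.0 * hq**1.5 + 1e-9   # inequality (2.1)
    grad = -1.0 / x
    assert abs(grad @ (x**2 * grad) - n) < 1e-9   # (2.2): value is exactly n
print("checks passed: theta =", n)
```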

Let $F''(x)$ denote the Hessian of a self-concordant function $F$ at $x$. Since $F''(x)$ is positive definite for every $x \in \operatorname{int} K$,

$$\|v\|_x = \langle v, F''(x)v\rangle^{1/2}$$

is a norm on $E$ induced by $F''(x)$. Let $B_x(y, r)$ denote the open ball of radius $r$ centered at $y$, where the radius is measured with respect to $\|\cdot\|_x$; this ball is called the Dikin ball. The following lemmas are crucial for the analysis of our algorithm in the next section. For the proofs see, e.g., chapter 2 of Renegar (2001).

Lemma 2.1. Assume $F$ is a self-concordant function for $K$. Then for all $x \in \operatorname{int} K$ we have $B_x(x, 1) \subseteq \operatorname{int} K$, and whenever $y \in B_x(x, 1)$ we have

$$\|v\|_y \le \frac{\|v\|_x}{1 - \|y - x\|_x} \quad \text{for all } v \ne 0. \qquad (2.3)$$

Lemma 2.2. Assume $F$ is a self-concordant function for $K$, $x \in \operatorname{int} K$ and $y \in B_x(x, 1)$. Then

$$F(y) \le F(x) + \langle F'(x), y - x\rangle + \frac{1}{2}\,\langle y - x, F''(x)(y - x)\rangle + \frac{\|y - x\|_x^3}{3\,(1 - \|y - x\|_x)}. \qquad (2.4)$$

Let $n(x) := -F''(x)^{-1} F'(x)$ be the Newton step of $F$ at $x$.

Lemma 2.3. Assume $F$ is a self-concordant function. If $\|n(x)\|_x \le \frac{1}{4}$, then $F$ has a minimizer $z$ and

$$\|z - x\|_x \le \|n(x)\|_x + \frac{3\,\|n(x)\|_x^2}{(1 - \|n(x)\|_x)^3}. \qquad (2.5)$$

Lemma 2.4. Assume $F$ is a self-concordant barrier with barrier parameter $\vartheta$. If $x, y \in \operatorname{int} K$, then

$$\langle F'(x), y - x\rangle \le \vartheta. \qquad (2.6)$$
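Lemma 2.1 is what makes the trust-region steps of section 3 feasible: any step of local norm less than one keeps the iterate inside $K$. As a quick sanity check (ours, again for the orthant barrier, where $\|y - x\|_x < 1$ forces $|y_i - x_i| < x_i$ componentwise):

```python
import numpy as np

# Our illustration of Lemma 2.1 for F(x) = -sum(log x_i) on R^n_+:
# every point of the Dikin ball B_x(x, 1) stays strictly inside the cone.
rng = np.random.default_rng(1)
n = 5
for _ in range(10000):
    x = rng.uniform(0.1, 10.0, n)
    d = rng.normal(size=n)
    d *= rng.uniform(0.0, 1.0) / np.sqrt(np.sum(d**2 / x**2))  # ||d||_x < 1
    assert np.all(x + d > 0), "Dikin ball left the cone"
print("all sampled Dikin-ball points remained interior")
```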

3 The interior-point trust-region algorithm

In this section, we present our algorithm and give its complexity analysis. We consider the following optimization problem:

$$\min \; q(x) = \frac{1}{2}\,\langle x, Qx\rangle + \langle c, x\rangle \qquad (3.1)$$
$$\text{subject to } x \in K. \qquad (3.2)$$

Here $Q: E \to E$ is a positive definite or positive semidefinite linear operator and $c \in E$; $K$ is a bounded convex set with nonempty interior. We assume $F(x)$ is a self-concordant barrier for the convex set $K$ and define the merit function

$$f_k(x) = \mu_k\, q(x) + F(x). \qquad (3.3)$$

We should mention here that $f_k(x)$ is a self-concordant function by Definition 2.1. We want to decrease the value of $f_k(x)$ for fixed $\mu_k$ in each inner iteration, and to increase $\mu_k$ to positive infinity across the outer iterations. From Lemma 2.1, for any $x_{k,j} \in \operatorname{int} K$ and $d \in E$, we have $x_{k,j} + d \in \operatorname{int} K$ provided that $\|F''(x_{k,j})^{1/2} d\| = \|d\|_{x_{k,j}} \le \alpha_{k,j} < 1$. It follows from Lemma 2.2 that

$$F(x_{k,j} + d) - F(x_{k,j}) \le \langle F'(x_{k,j}), d\rangle + \frac{1}{2}\,\langle d, F''(x_{k,j})d\rangle + \frac{\|d\|_{x_{k,j}}^3}{3\,(1 - \|d\|_{x_{k,j}})} \le \langle F'(x_{k,j}), d\rangle + \frac{1}{2}\,\langle d, F''(x_{k,j})d\rangle + \frac{\alpha_{k,j}^3}{3\,(1 - \alpha_{k,j})}. \qquad (3.4)$$

Therefore, we get

$$f_k(x_{k,j} + d) - f_k(x_{k,j}) \le \frac{1}{2}\,\langle d, (\mu_k Q + F''(x_{k,j}))d\rangle + \langle \mu_k(Qx_{k,j} + c) + F'(x_{k,j}), d\rangle + \frac{\alpha_{k,j}^3}{3\,(1 - \alpha_{k,j})}. \qquad (3.5)$$

Thus, to decrease $f_k(x)$, it is natural to solve the following trust-region subproblem:

$$\min \; m_{k,j}(d) = \frac{1}{2}\,\langle d, (\mu_k Q + F''(x_{k,j}))d\rangle + \langle \mu_k(Qx_{k,j} + c) + F'(x_{k,j}), d\rangle \qquad (3.6)$$
$$\text{s.t. } \|F''(x_{k,j})^{1/2} d\| \le \alpha_{k,j}. \qquad (3.7)$$

Define

$$Q_{k,j} = \mu_k\, F''(x_{k,j})^{-1/2}\, Q\, F''(x_{k,j})^{-1/2} + I, \qquad (3.8)$$
$$c_{k,j} = F''(x_{k,j})^{-1/2}\big(\mu_k(Qx_{k,j} + c) + F'(x_{k,j})\big). \qquad (3.9)$$
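In matrix terms, the scaling above can be carried out with a symmetric square root of the barrier Hessian. The sketch below uses our own hypothetical interface (`barrier_grad` and `barrier_hess` supply $F'$ and $F''$); it is an illustration, not the authors' code.

```python
import numpy as np

def scaled_trs_data(Q, c, x, mu, barrier_grad, barrier_hess):
    """Assemble Q_kj and c_kj of (3.8)-(3.9).  `barrier_grad` and
    `barrier_hess` return F'(x) and F''(x) for the chosen barrier;
    the interface and names are ours, for illustration only."""
    H = barrier_hess(x)                      # F''(x), positive definite
    w, V = np.linalg.eigh(H)
    H_inv_sqrt = (V / np.sqrt(w)) @ V.T      # symmetric F''(x)^{-1/2}
    g = mu * (Q @ x + c) + barrier_grad(x)   # gradient of f_mu at x
    Q_kj = mu * H_inv_sqrt @ Q @ H_inv_sqrt + np.eye(len(x))   # (3.8)
    c_kj = H_inv_sqrt @ g                                      # (3.9)
    return Q_kj, c_kj, H_inv_sqrt            # last factor recovers d via (3.13)
```

Note that $Q_{k,j} \succeq I$ by construction, which is what keeps the transformed trust-region subproblem below well conditioned.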

Using the transformation

$$d' = F''(x_{k,j})^{1/2}\, d, \qquad (3.10)$$

problem (3.6)-(3.7) can be rewritten as

$$\min \; q_{k,j}(d') = \frac{1}{2}\,\langle d', Q_{k,j} d'\rangle + \langle c_{k,j}, d'\rangle \qquad (3.11)$$
$$\text{s.t. } \|d'\| \le \alpha_{k,j}. \qquad (3.12)$$

Once $d'_{k,j}$ is computed, we obtain the step

$$d_{k,j} = F''(x_{k,j})^{-1/2}\, d'_{k,j}, \qquad (3.13)$$

and it follows from inequality (3.5) that

$$f_k(x_{k,j} + d_{k,j}) - f_k(x_{k,j}) \le q_{k,j}(d'_{k,j}) + \frac{\alpha_{k,j}^3}{3\,(1 - \alpha_{k,j})}. \qquad (3.14)$$

Algorithm 3.1 (An Interior-Point Trust-Region Algorithm)

Step 0. Initialization. An initial point $x_{0,0} \in \operatorname{int} K$ and an initial parameter $\mu_0 > 0$ are given. Set $\alpha_{k,j} = \alpha < 1$ for some constant $\alpha$. Set $k = 0$ and $j = 0$.

Step 1. Test inner iteration termination. If

$$\langle c_{k,j}, Q_{k,j}^{-1} c_{k,j}\rangle \le \frac{1}{9}, \qquad (3.15)$$

set $x_{k+1,0} = x_{k,j}$ and go to Step 3.

Step 2. Step calculation. Solve problem (3.11)-(3.12), obtaining $d'_{k,j}$ exactly; set $d_{k,j}$ by (3.13) and $x_{k,j+1} = x_{k,j} + d_{k,j}$. Increase $j$ by 1 and go to Step 1.

Step 3. Update parameter. Set $\mu_{k+1} = \theta\,\mu_k$ for some constant $\theta > 1$. Increase $k$ by 1, set $j = 0$, and go to Step 1.

Theorem 3.1. a) If we choose $\alpha = \frac{1}{4}$, then

$$f_k(x_{k,j+1}) - f_k(x_{k,j}) < -\frac{1}{48}, \qquad (3.16)$$

which is independent of $k$ and $j$.

b) If the initial point $x_{0,0}$ satisfies condition (3.15), then for any $\epsilon > 0$ we will obtain a solution $x$ such that $q(x) - q(x^*) < \epsilon$ in at most

$$48\,\theta\,(\vartheta + \sqrt{\vartheta})\; \frac{\ln \frac{\vartheta + \sqrt{\vartheta}}{\epsilon\,\mu_0}}{\ln \theta} \qquad (3.17)$$

steps, where $x^* = \operatorname{argmin}_{x \in K}\, q(x)$.
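Before turning to the proof, here is a minimal end-to-end sketch of Algorithm 3.1 under assumptions of our choosing: $K = [0,1]^n$ with the barrier $F(x) = -\sum_i(\ln x_i + \ln(1 - x_i))$, $\alpha = 1/4$, and an exact trust-region solve whose case analysis is justified by Lemma 3.1 below. All names are ours; this illustrates the method, it is not the authors' implementation.

```python
import numpy as np

def solve_trs_pd(Qm, cv, alpha):
    """Exact solution of min 0.5*<d,Qm d> + <cv,d> s.t. ||d|| <= alpha for
    positive definite Qm (here Qm >= I by (3.8)).  Lemma 3.1 logic: either
    the unconstrained minimizer is interior, or there is mu > 0 with
    ||(Qm + mu*I)^{-1} cv|| = alpha, found here by bisection."""
    w, V = np.linalg.eigh(Qm)
    b = V.T @ cv
    step_norm = lambda mu: np.sqrt(np.sum((b / (w + mu)) ** 2))
    if step_norm(0.0) <= alpha:
        return V @ (-b / w)                    # interior case, mu = 0
    lo, hi = 0.0, 1.0
    while step_norm(hi) > alpha:               # bracket the multiplier
        hi *= 2.0
    for _ in range(100):                       # bisection on the secular eqn
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if step_norm(mid) > alpha else (lo, mid)
    return V @ (-b / (w + hi))                 # boundary case, ||d|| ~ alpha

def ipm_trust_region(Q, c, x0, mu0=1.0, theta=2.0, alpha=0.25, outer=40):
    """Sketch of Algorithm 3.1 on the box K = [0,1]^n with the barrier
    F(x) = -sum(log x_i + log(1 - x_i)); set-up and names are ours."""
    n, x, mu = len(x0), x0.copy(), mu0
    for _ in range(outer):
        for _ in range(500):                   # inner iterations
            grad_F = -1.0 / x + 1.0 / (1.0 - x)
            Hs = 1.0 / np.sqrt(1.0 / x**2 + 1.0 / (1.0 - x)**2)  # F''^{-1/2}
            g = mu * (Q @ x + c) + grad_F      # gradient of f_mu at x
            Q_kj = mu * (Hs[:, None] * Q * Hs[None, :]) + np.eye(n)  # (3.8)
            c_kj = Hs * g                                            # (3.9)
            if c_kj @ np.linalg.solve(Q_kj, c_kj) <= 1.0 / 9.0:      # (3.15)
                break
            x = x + Hs * solve_trs_pd(Q_kj, c_kj, alpha)   # (3.11)-(3.13)
        mu *= theta                            # Step 3: mu_{k+1} = theta*mu_k
    return x

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
Q = A.T @ A + np.eye(n)                        # symmetric positive definite
c = rng.normal(size=n)
print(np.round(ipm_trust_region(Q, c, np.full(n, 0.5)), 4))
```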

To prove part a) of this theorem, we need the following lemma, which is well known in the trust-region literature.

Lemma 3.1. Any global minimizer $d'_{k,j}$ of problem (3.11)-(3.12) satisfies the equation

$$(Q_{k,j} + \mu_{k,j} I)\, d'_{k,j} = -c_{k,j}, \qquad (3.18)$$

where $Q_{k,j} + \mu_{k,j} I$ is positive semidefinite, $\mu_{k,j} \ge 0$ and $\mu_{k,j}\,(\|d'_{k,j}\| - \alpha_{k,j}) = 0$.

For a proof see, e.g., Section 7.2 of Conn, Gould and Toint (2000).

Proof of Theorem 3.1, part a). If the solution of (3.11)-(3.12) lies on the boundary of the trust region, that is, $\|d'_{k,j}\| = \alpha_{k,j}$, then

$$q_{k,j}(d'_{k,j}) = \frac{1}{2}\,\langle d'_{k,j}, Q_{k,j} d'_{k,j}\rangle + \langle c_{k,j}, d'_{k,j}\rangle = \langle d'_{k,j}, Q_{k,j} d'_{k,j} + c_{k,j}\rangle - \frac{1}{2}\,\langle d'_{k,j}, Q_{k,j} d'_{k,j}\rangle$$
$$= -\mu_{k,j}\,\langle d'_{k,j}, d'_{k,j}\rangle - \frac{1}{2}\,\langle d'_{k,j}, (\mu_k F''(x_{k,j})^{-1/2} Q F''(x_{k,j})^{-1/2} + I)\, d'_{k,j}\rangle$$
$$= -\mu_{k,j}\,\alpha_{k,j}^2 - \frac{\mu_k}{2}\,\langle d'_{k,j}, F''(x_{k,j})^{-1/2} Q F''(x_{k,j})^{-1/2} d'_{k,j}\rangle - \frac{1}{2}\,\alpha_{k,j}^2 \le -\frac{1}{2}\,\alpha_{k,j}^2 = -\frac{1}{32}, \qquad (3.19)$$

where the third equality follows from equalities (3.8) and (3.18), and the inequality follows from the fact that $F''(x_{k,j})^{-1/2} Q F''(x_{k,j})^{-1/2}$ is positive definite or positive semidefinite. Therefore, it follows from inequality (3.14) that

$$f_k(x_{k,j+1}) - f_k(x_{k,j}) \le -\frac{1}{32} + \frac{(1/4)^3}{3\,(1 - 1/4)} < -\frac{1}{48}.$$

If the solution of (3.11)-(3.12) lies in the interior of the trust region, that is, $\|d'_{k,j}\| < \alpha_{k,j}$, then from Lemma 3.1 we know $\mu_{k,j} = 0$, consequently $d'_{k,j} = -Q_{k,j}^{-1} c_{k,j}$, and

$$q_{k,j}(d'_{k,j}) = \frac{1}{2}\,\langle d'_{k,j}, Q_{k,j} d'_{k,j}\rangle + \langle c_{k,j}, d'_{k,j}\rangle = -\frac{1}{2}\,\langle c_{k,j}, Q_{k,j}^{-1} c_{k,j}\rangle. \qquad (3.20)$$

By the mechanism of our algorithm, we know that $\langle c_{k,j}, Q_{k,j}^{-1} c_{k,j}\rangle > \frac{1}{9}$ for all $k$ and $j$ at which a step is taken. Therefore,

$$f_k(x_{k,j+1}) - f_k(x_{k,j}) \le -\frac{1}{18} + \frac{(1/4)^3}{3\,(1 - 1/4)} < -\frac{1}{48}.$$

This completes the proof of part a).
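The constants in this proof can be verified in exact arithmetic; the snippet below (ours) confirms that both cases yield a decrease strictly below $-\frac{1}{48}$.

```python
from fractions import Fraction as Fr

# Exact-arithmetic check of the constants in the proof of Theorem 3.1 a).
alpha = Fr(1, 4)
cubic = alpha**3 / (3 * (1 - alpha))     # alpha^3/(3(1-alpha)) = 1/144
boundary = -alpha**2 / 2 + cubic         # boundary case: -1/32 + 1/144
interior = -Fr(1, 18) + cubic            # interior case: -1/18 + 1/144
assert boundary == Fr(-7, 288) and interior == Fr(-7, 144)
assert boundary < Fr(-1, 48) and interior < Fr(-1, 48)
print(boundary, interior)                # -7/288 and -7/144, both < -1/48
```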

Let $n_k(x_{k,j})$ be the Newton step of $f_k(x)$ at the point $x_{k,j}$. We point out that

$$\|n_k(x_{k,j})\|_{x_{k,j}}^2 = \langle f_k'(x_{k,j}), f_k''(x_{k,j})^{-1} f_k'(x_{k,j})\rangle = \langle \mu_k(Qx_{k,j} + c) + F'(x_{k,j}),\; (\mu_k Q + F''(x_{k,j}))^{-1}\,(\mu_k(Qx_{k,j} + c) + F'(x_{k,j}))\rangle = \langle c_{k,j}, Q_{k,j}^{-1} c_{k,j}\rangle, \qquad (3.21)$$

where the last equality follows from equalities (3.8) and (3.9). This identity connects the reduction (3.20) and the termination test (3.15) with the assumptions of the following two lemmas: we can stop the inner iteration once the reduction of the objective function attainable by an interior solution is smaller than some constant. The following two lemmas are extensions of Renegar's (2001) results on minimizing a linear objective function over a convex set.

Lemma 3.2. Let $x^* = \operatorname{argmin}_{x \in K}\, q(x)$, and let $n_\mu(x)$ denote the Newton step of $f_\mu(x) = \mu\, q(x) + F(x)$ at $x$. If $\|n_\mu(x)\|_x \le \frac{1}{9}$, then

$$q(x) - q(x^*) \le \frac{\vartheta + \sqrt{\vartheta}}{\mu}. \qquad (3.22)$$

Proof. Let $x(\mu) = \operatorname{argmin}_{x \in K}\, f_\mu(x)$. Then

$$q(x(\mu)) - q(x^*) \le \langle q'(x(\mu)), x(\mu) - x^*\rangle = \frac{1}{\mu}\,\langle F'(x(\mu)), x^* - x(\mu)\rangle \le \frac{\vartheta}{\mu}. \qquad (3.23)$$

The first inequality is by the convexity of $q(x)$; the equality follows from the fact that $f_\mu'(x(\mu)) = 0$; and the last inequality follows from Lemma 2.4. Applying Lemma 2.3 to the self-concordant function $f_\mu$ (so that the local norms in (3.24) and (3.25) below are those induced by $f_\mu''$), it easily follows that

$$\|x - x(\mu)\|_x \le \frac{1}{9} + \frac{3\,(1/9)^2}{(1 - 1/9)^3} < \frac{1}{4}, \qquad (3.24)$$

and consequently from Lemma 2.1 that

$$\|x - x(\mu)\|_{x(\mu)} \le \frac{\|x - x(\mu)\|_x}{1 - \|x - x(\mu)\|_x} < \frac{1}{3}. \qquad (3.25)$$

Then we have

$$q(x) - q(x(\mu)) = \langle q'(x(\mu)), x - x(\mu)\rangle + \frac{1}{2}\,\langle x - x(\mu), Q(x - x(\mu))\rangle$$
$$= \frac{1}{\mu}\,\langle -F''(x(\mu))^{-1/2} F'(x(\mu)),\; F''(x(\mu))^{1/2}(x - x(\mu))\rangle + \frac{1}{2}\,\langle x - x(\mu), Q(x - x(\mu))\rangle$$
$$\le \frac{1}{\mu}\,\|F''(x(\mu))^{-1/2} F'(x(\mu))\|\;\|F''(x(\mu))^{1/2}(x - x(\mu))\| + \frac{1}{2\mu}\,\langle x - x(\mu), (\mu Q + F''(x(\mu)))(x - x(\mu))\rangle$$
$$\le \frac{\sqrt{\vartheta}}{\mu}\,\|x - x(\mu)\|_{x(\mu)} + \frac{1}{18\mu} \le \frac{\sqrt{\vartheta}}{3\mu} + \frac{1}{18\mu} \le \frac{\sqrt{\vartheta}}{\mu}, \qquad (3.26)$$

where the second equality uses $q'(x(\mu)) = -\frac{1}{\mu} F'(x(\mu))$, the second-to-last inequality follows from the definition (2.2) of $\vartheta$ and from (3.25) applied in the norm induced by $f_\mu''(x(\mu))$, and the last inequality follows from the fact that $\vartheta$ is always greater than or equal to 1. By adding the inequalities (3.23) and (3.26), we get inequality (3.22). This completes the proof.

This lemma tells us that to get an $\epsilon$-solution, we only need

$$\mu_k = \mu_0\,\theta^k \ge \frac{\vartheta + \sqrt{\vartheta}}{\epsilon}, \qquad (3.27)$$

and consequently at most

$$\frac{\ln \frac{\vartheta + \sqrt{\vartheta}}{\epsilon\,\mu_0}}{\ln \theta} \qquad (3.28)$$

outer iterations.

Lemma 3.3. If $\|n_k(x)\|_x \le \frac{1}{9}$, then

$$f_{k+1}(x) - f_{k+1}(x(\mu_{k+1})) \le \theta\,(\vartheta + \sqrt{\vartheta}). \qquad (3.29)$$

Proof. First, we have

$$f_{k+1}(x) - f_{k+1}(x(\mu_k)) \le \langle f_{k+1}'(x), x - x(\mu_k)\rangle = \langle \mu_{k+1}(Qx + c) + F'(x), x - x(\mu_k)\rangle$$
$$= \theta\,\langle \mu_k(Qx + c) + F'(x), x - x(\mu_k)\rangle + (\theta - 1)\,\langle F'(x), x(\mu_k) - x\rangle$$
$$= \theta\,\langle f_k''(x)^{-1/2} f_k'(x),\; f_k''(x)^{1/2}(x - x(\mu_k))\rangle + (\theta - 1)\,\langle F''(x)^{-1/2} F'(x),\; F''(x)^{1/2}(x(\mu_k) - x)\rangle$$
$$\le \theta\,\|n_k(x)\|_x\,\|x - x(\mu_k)\|_x + (\theta - 1)\,\sqrt{\vartheta}\,\|x(\mu_k) - x\|_x$$
$$\le \theta\cdot\frac{1}{9}\cdot\frac{1}{4} + (\theta - 1)\,\sqrt{\vartheta}\cdot\frac{1}{4} \le \theta\,\sqrt{\vartheta}, \qquad (3.30)$$

where the first inequality follows from the convexity of $f_{k+1}(x)$ and the second-to-last inequality follows from inequality (3.24). Similarly, we have

$$f_{k+1}(x(\mu_k)) - f_{k+1}(x(\mu_{k+1})) \le \langle f_{k+1}'(x(\mu_k)), x(\mu_k) - x(\mu_{k+1})\rangle$$
$$= \theta\,\langle f_k'(x(\mu_k)), x(\mu_k) - x(\mu_{k+1})\rangle + (\theta - 1)\,\langle F'(x(\mu_k)), x(\mu_{k+1}) - x(\mu_k)\rangle$$
$$= (\theta - 1)\,\langle F'(x(\mu_k)), x(\mu_{k+1}) - x(\mu_k)\rangle < \theta\,\vartheta, \qquad (3.31)$$

where the last inequality follows from Lemma 2.4 and the second-to-last equality holds because $x(\mu_k)$ minimizes $f_k(x)$ and hence $f_k'(x(\mu_k)) = 0$. By adding inequality (3.30) and inequality (3.31), we get inequality (3.29). This completes the proof.

This lemma and part a) of Theorem 3.1 tell us that we need at most

$$48\,\theta\,(\vartheta + \sqrt{\vartheta}) \qquad (3.32)$$

steps in each inner iteration.
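Combining (3.28) with (3.32) yields the total bound (3.17). The helper below (our naming, for illustration only) evaluates the bound numerically.

```python
import math

def iteration_bound(vartheta, eps, mu0=1.0, theta=2.0):
    """Total step count (3.17): (3.28) outer iterations, each costing
    at most (3.32) inner steps.  Helper and names are ours."""
    v = vartheta + math.sqrt(vartheta)
    outer = math.ceil(math.log(v / (eps * mu0)) / math.log(theta))  # (3.28)
    inner = 48.0 * theta * v                                        # (3.32)
    return outer, math.ceil(outer * inner)                          # (3.17)

# e.g. the box [0,1]^n with n = 100 has vartheta = 200:
print(iteration_bound(vartheta=200, eps=1e-6))   # -> (28, 575615)
```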

Proof of Theorem 3.1, part b). It follows from (3.28) and (3.32). This completes the proof.

We want to mention that even if the initial point does not satisfy condition (3.15), our algorithm will produce a point satisfying it in polynomial time, which follows from part a) of Theorem 3.1.

4 Concluding remarks

In this paper, we have shown that the complexity of the interior-point trust-region algorithm is as good as that of the pure interior-point algorithm in convex programming. This builds a bridge connecting interior-point methods and trust-region methods. It is our belief that more efficient algorithms can be built by taking advantage of both classes of methods. The bottom line is that, by the result of this paper, nothing is lost in terms of complexity guarantees.

References

[1] P.-A. Absil and A. L. Tits (2005): Newton-KKT interior-point methods for indefinite quadratic programming, to appear in Computational Optimization and Applications.

[2] A. R. Conn, N. I. M. Gould and Ph. L. Toint (2000): Trust-Region Methods, SIAM Publications, Philadelphia, Pennsylvania.

[3] I. I. Dikin (1967): Iterative solution of problems of linear and quadratic programming, Soviet Math. Doklady 8: 674-675.

[4] L. Faybusovich and Y. Lu (2004): Jordan-algebraic aspects of nonconvex optimization over symmetric cones, accepted by Applied Mathematics and Optimization.

[5] Y. Lu and Y. Yuan (2005): An interior-point trust-region algorithm for general symmetric cone programming, submitted.

[6] Y. E. Nesterov and A. S. Nemirovskii (1994): Interior-Point Polynomial Algorithms in Convex Programming, SIAM Publications, Philadelphia, Pennsylvania.

[7] J. Renegar (2001): A Mathematical View of Interior-Point Methods in Convex Optimization, SIAM Publications, Philadelphia, Pennsylvania.

[8] J. Sun (1993): A convergence proof for an affine-scaling algorithm for convex quadratic programming without nondegeneracy assumptions, Math. Programming 60: 69-79.

[9] P. Tseng (2004): Convergence properties of Dikin's affine scaling algorithm for nonconvex quadratic minimization, J. Global Optim. 30, no. 2: 285-300.

[10] Y. Ye (1992): On an affine scaling algorithm for nonconvex quadratic programming, Math. Programming 56: 285-300.

[11] Y. Ye (1997): Interior Point Algorithms: Theory and Analysis, John Wiley and Sons, New York.