A new primal-dual path-following method for convex quadratic programming
Volume 25, N. 1, pp. 97–110, 2006. Copyright © 2006 SBMAC.

MOHAMED ACHACHE
Département de Mathématiques, Faculté des Sciences, University Ferhat Abbas, Sétif 19000, Algérie
achache_m@yahoo.fr

Abstract. In this paper, we describe a new primal-dual path-following method to solve a convex quadratic program (QP). The derived algorithm is based on new techniques for finding a new class of search directions similar to the ones developed in a recent paper by Darvay for linear programs. We prove that the short-update algorithm finds an ε-solution of (QP) in polynomial time.

Mathematical subject classification: 90C20, 90C25, 90C60.

Key words: convex quadratic programming, interior point methods, primal-dual path-following methods, convergence of algorithms.

1 Introduction

In this paper we consider the following convex quadratic program in its classical standard (QP) form, called the primal:

    minimize  c^T x + (1/2) x^T Q x
    s.t.  Ax = b,  x ≥ 0,                                              (1)

and its dual form:

    maximize over (y, x, z)  b^T y − (1/2) x^T Q x
    s.t.  A^T y + z − Qx = c,  z ≥ 0,                                  (2)

#636/05. Received: /VI/05. Accepted: 6/X/05.
where Q is a given (n × n) matrix, c ∈ R^n, b ∈ R^m and A is a given (m × n) matrix.

Recently, other important formulations have been used to study convex quadratic programs, such as the (second-order) cone optimization problem (SOCP); these are currently the formulations most often used for convex quadratic programs. See, e.g., [5].

Quadratic programming is a special case of nonlinear programming and has a wide range of applications. A classical method to solve a convex (QP) is the Wolfe algorithm, which can be considered a direct extension of the well-known simplex method; in the worst case its complexity is exponential. Therefore, there is growing interest in developing efficient, robust, polynomial-time algorithms to solve the problem. Primal-dual path-following methods are among the most efficient interior point methods (IPM) for solving linear and quadratic programming, complementarity problems (CP) and general conic optimization problems (COP). These algorithms are, on the one hand, of Newton type and thus lead to efficient practical implementations, and on the other hand they have polynomial time complexity. Recently, many interior point methods have been proposed for solving (QP) problems in standard format or via (SOCP). These methods are based on new search directions with good theoretical properties, such as polynomial time complexity, and on efficient practical implementations. See, e.g., [5, 6, 7].

In this paper, we extend a recent approach developed by Darvay [6] for linear programs to the (QP) case. This approach is based on the introduction of a monotonic function ϕ of one real variable. The equations x_i z_i = µ in the nonlinear system which defines the central path (see Wright [8]) are replaced by

    ϕ(x_i z_i / µ) = ϕ(1).

Of course, if ϕ is the identity function we recover the classical path-following method.
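The effect of the reparameterization can be sketched numerically: for any strictly increasing ϕ, the residual ϕ(x_i z_i / µ) − ϕ(1) vanishes exactly on the central path, but off the path different choices of ϕ weigh the deviation differently, which is what changes the Newton linearization. A minimal sketch (the numerical values below are illustrative, not from the paper):

```python
import math

def residual(x, z, mu, phi):
    # componentwise residual phi(x_i z_i / mu) - phi(1)
    return [phi(xi * zi / mu) - phi(1.0) for xi, zi in zip(x, z)]

mu = 2.0
on_path = ([1.0, 2.0], [2.0, 1.0])    # x_i z_i = mu for all i
off_path = ([1.0, 2.0], [2.0, 3.0])   # x_2 z_2 = 6 != mu

ident = lambda t: t                    # classical choice phi(t) = t
sqrt_ = math.sqrt                      # the choice phi(t) = sqrt(t) used below

# On the central path both reparameterizations vanish ...
print(residual(*on_path, mu, ident))   # -> [0.0, 0.0]
print(residual(*on_path, mu, sqrt_))   # -> [0.0, 0.0]
# ... off the path they measure the deviation differently.
print(residual(*off_path, mu, ident))  # -> [0.0, 2.0]
print(residual(*off_path, mu, sqrt_))  # second component is sqrt(3) - 1
```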
For ϕ(t) = √t we define a new class of search directions and hence a new primal-dual path-following method for approximating the central path. We prove that the related short-update algorithm solves (QP) locally quadratically and in polynomial time. We also mention some related and interesting IPM approaches studied recently by Bai et al. [4]. They have also defined a new class of search directions for linear optimization by using the so-called kernel
function, that is, any univariate strictly convex function k(t) that is minimal at t = 1 and such that k(1) = 0. By defining

    k(t) = ∫_1^t [ϕ(ξ²) − ϕ(1)] / (ξ ϕ'(ξ²)) dξ,

one obtains, for ϕ(t) = √t, k(t) = (t − 1)², the corresponding kernel function, which defines exactly the same search direction and complexity. For ϕ(t) = t, it corresponds to the kernel function k(t) = (t² − 1)/2 − log t of the logarithmic barrier function, which gives the classical search direction. This new search direction is also analyzed by the authors (Peng, Roos and Terlaky) in the more general context of self-regular proximity measures for linear and (second-order) cone optimization problems. See, e.g., [7].

The paper is organized as follows. In the next section, the problem is presented. Section 3 deals with the new search directions. The description of the algorithm and its convergence analysis are presented in Section 4. Section 5 ends the paper with a conclusion.

Our notation is the classical one. In particular, R^n denotes the space of real n-dimensional vectors. Given u, v ∈ R^n, u^T v is their inner product, ‖u‖ is the Euclidean norm, and ‖u‖_∞ is the l_∞-norm. Given a vector u in R^n, U = diag(u) is the n × n diagonal matrix with U_ii = u_i for all i.

2 The statement of the problem

Throughout the paper the following assumptions hold.

Positive semidefiniteness (PSD). The matrix Q is symmetric and positive semidefinite.

Interior point condition (IPC). There exists (x^0, y^0, z^0) such that Ax^0 = b, A^T y^0 + z^0 − Qx^0 = c, x^0 > 0, z^0 > 0.

The full rank condition (FR). The matrix A is of rank m (m ≤ n).
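These assumptions are easy to satisfy constructively: pick any symmetric PSD Q and full-row-rank A, choose x^0 > 0, z^0 > 0 and y^0 freely, and then define b := Ax^0 and c := A^T y^0 + z^0 − Qx^0, so that (IPC) holds by construction. A minimal sketch (the particular matrices are illustrative assumptions, not data from the paper):

```python
import numpy as np

# A PSD matrix Q (built as M^T M) and a full-row-rank A with m <= n
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
Q = M.T @ M                        # symmetric positive semidefinite
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0]])    # m = 2, n = 3, rank 2

# Strictly positive interior point; then define b and c to make it feasible
x0 = np.array([1.0, 1.0, 1.0])
z0 = np.array([2.0, 1.0, 2.0])
y0 = np.array([0.5, -0.5])
b = A @ x0                          # primal feasibility: A x0 = b
c = A.T @ y0 + z0 - Q @ x0          # dual feasibility: A^T y0 + z0 - Q x0 = c

# (PSD): eigenvalues of Q nonnegative; (FR): rank(A) = m; (IPC): strict feasibility
assert np.all(np.linalg.eigvalsh(Q) >= -1e-12)
assert np.linalg.matrix_rank(A) == A.shape[0]
assert np.allclose(A @ x0, b) and np.all(x0 > 0) and np.all(z0 > 0)
```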
The optimal solutions of the primal and the dual are the solutions of the nonlinear system of (2n + m) equations:

    Ax = b,  x ≥ 0,
    A^T y + z − Qx = c,  z ≥ 0,                                        (3)
    x_i z_i = 0,  for i = 1, ..., n.

The first equation of system (3) represents primal feasibility, the second dual feasibility, and the last the complementarity condition.

The classical path-following method consists in introducing a positive parameter µ. One considers the nonlinear system parameterized by µ:

    Ax = b,  x > 0,
    A^T y + z − Qx = c,  z > 0,                                        (4)
    x_i z_i = µ,  for i = 1, ..., n.

It is shown, under our assumptions, that this system has a unique solution (x(µ), y(µ), z(µ)). The path µ ↦ (x(µ), y(µ), z(µ)) is called the central path (see for instance Wright [8]). It is known that as µ → 0, (x(µ), y(µ), z(µ)) goes to a solution of (3). Applying Newton's method to system (4), we obtain the classical primal-dual path-following algorithm. In the next section, we shall present a new method for approximating the central path.

3 New search directions

In the proposed method, we replace the n equations x_i z_i = µ in (4) by the n equivalent equations ϕ(x_i z_i / µ) = ϕ(1), where ϕ is a real-valued function on [0, +∞), differentiable on (0, +∞) and such that ϕ'(t) > 0 for all t > 0. Given µ > 0 and (x, y, z) such that A^T y + z − Qx = c, Ax = b, x > 0 and z > 0, but not such that ϕ(x_i z_i / µ) = ϕ(1) for all i, a new triple (x + ∆x, y + ∆y, z + ∆z) is obtained thanks to Newton's method for solving the nonlinear
system. One obtains

    A∆x = 0,
    A^T ∆y + ∆z − Q∆x = 0,                                             (5)
    (z_i ∆x_i + x_i ∆z_i) ϕ'(x_i z_i / µ) = µ [ϕ(1) − ϕ(x_i z_i / µ)],  for i = 1, ..., n.

For convenience, let us introduce the vectors v and p of R^n defined by

    v_i = √(x_i z_i / µ)   and   p_i = [ϕ(1) − ϕ(v_i²)] / (v_i ϕ'(v_i²)),  for i = 1, ..., n.

Next let us introduce the vectors and the matrices

    d_x = V X⁻¹ ∆x,   d_z = V Z⁻¹ ∆z,   d_y = ∆y / µ,                  (6)
    Ā = A V⁻¹ X   and   Q̄ = (1/µ) V⁻¹ X Q V⁻¹ X;

then the system reduces to

    Ā d_x = 0,
    Ā^T d_y + d_z − Q̄ d_x = 0,
    d_x + d_z = p,

which leads to the system

    [ I + Q̄   −Ā^T ] [ d_x ]   [ p ]
    [   Ā       0  ] [ d_y ] = [ 0 ],

which is nonsingular, since Q̄ is positive semidefinite (hence I + Q̄ is positive definite) and Ā has full row rank. Then d_z is computed by the formula d_z = p − d_x.

Remark 3.1. If we adopt ϕ(t) = t, then we recover the classical primal-dual path-following method.

In this paper, we shall consider ϕ(t) = √t. Then ϕ'(t) = 1/(2√t), and it follows that

    p_i = 2 (1 − √(x_i z_i / µ)) = 2 (1 − v_i),  for all i,
or

    p = 2 (e − v),                                                     (7)

where e = (1, ..., 1)^T, and the system (5) becomes

    A∆x = 0,
    A^T ∆y + ∆z − Q∆x = 0,                                             (8)
    z_i ∆x_i + x_i ∆z_i = µ v_i p_i,  for i = 1, ..., n.

In addition, with the notation in (6), we have

    z_i ∆x_i + x_i ∆z_i = µ v_i (d_{x_i} + d_{z_i}),  for i = 1, ..., n,    (9)

and

    µ d_{x_i} d_{z_i} = ∆x_i ∆z_i,  for i = 1, ..., n.                 (10)

4 Description of the algorithm and its convergence analysis

Now we define a proximity measure for the central path as

    δ(XZe, µ) = ‖p‖ / 2 = ‖e − v‖.                                     (11)

In addition, we define the vector q by

    q_i = d_{x_i} − d_{z_i}.

Then

    d_{x_i} d_{z_i} = (p_i² − q_i²) / 4,   and   ‖q‖ ≤ ‖p‖.            (12)

This last inequality follows from ‖p‖² = ‖q‖² + 4 d_x^T d_z, since d_x^T d_z = d_x^T (Q̄ d_x − Ā^T d_y) = d_x^T Q̄ d_x ≥ 0, with Ā d_x = 0 and Q̄ a positive semidefinite matrix.
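For ϕ(t) = √t the direction computation of this section amounts to one linear solve per iteration. The sketch below (the data point is an illustrative assumption, and the sign convention for d_y follows the reduced system above) forms v and p = 2(e − v), builds Ā and Q̄, solves the reduced system for (d_x, d_y), recovers d_z = p − d_x, and checks the identities Ā d_x = 0 and ‖q‖ ≤ ‖p‖ used in (12):

```python
import numpy as np

def directions(A, Q, x, z, mu):
    """Scaled search directions for phi(t) = sqrt(t) (Darvay-type step)."""
    n = x.size
    m = A.shape[0]
    v = np.sqrt(x * z / mu)
    p = 2.0 * (np.ones(n) - v)               # (7): p = 2(e - v)
    D = np.diag(x / v)                        # X V^{-1} (diagonal, positive)
    Abar = A @ D                              # scaled constraint matrix
    Qbar = D @ Q @ D / mu                     # scaled Hessian, still PSD
    # Reduced system: [[I + Qbar, -Abar^T], [Abar, 0]] [dx; dy] = [p; 0]
    K = np.block([[np.eye(n) + Qbar, -Abar.T],
                  [Abar, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([p, np.zeros(m)]))
    dx, dy = sol[:n], sol[n:]
    dz = p - dx                               # third equation: dx + dz = p
    # Unscale: Delta x = X V^{-1} dx, Delta z = Z V^{-1} dz
    return dx, dy, dz, (x / v) * dx, (z / v) * dz, p

# Tiny illustrative instance (assumed data)
A = np.array([[1.0, 1.0, 1.0]])
Q = np.eye(3)
x = np.array([1.0, 0.5, 1.5]); z = np.array([0.8, 1.2, 1.0])
mu = x @ z / 3.0
dx, dy, dz, Dx, Dz, p = directions(A, Q, x, z, mu)

q = dx - dz
assert np.allclose(A @ Dx, 0.0)               # Delta x lies in the null space of A
assert np.allclose(dx + dz, p)                # scaled centering equation
assert np.linalg.norm(q) <= np.linalg.norm(p) + 1e-12   # (12)
assert float(dx @ dz) >= -1e-12               # d_x^T d_z = d_x^T Qbar d_x >= 0
```

The last two assertions are exactly the facts used in the complexity analysis below: the scaled directions are not orthogonal (unlike the LP case), but their inner product is nonnegative, so ‖q‖ ≤ ‖p‖ still holds.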
4.1 The algorithm

Let ε > 0 be the given tolerance, 0 < θ < 1 the update parameter (default θ = 1/(2√n)), and 0 < τ < 1 the proximity parameter (default τ = 1/2). Assume that the triple (x^0, y^0, z^0) satisfies the IPC, and let µ^0 = (x^0)^T z^0 / n. In addition, we assume that δ(X^0 Z^0 e, µ^0) < τ.

begin
    x := x^0; y := y^0; z := z^0; µ := µ^0;
    while x^T z > ε do
    begin
        µ := (1 − θ)µ;
        compute (∆x, ∆y, ∆z) from (8);
        x := x + ∆x; y := y + ∆y; z := z + ∆z;
    end
end

In the next subsection we prove that the algorithm converges locally quadratically and in polynomial time.

4.2 Complexity analysis

In the following lemma, we state a condition which ensures the feasibility of the full Newton step. Let x̃ = x + ∆x and z̃ = z + ∆z be the new iterate after a full Newton step.

Lemma 4.1. Let δ := δ(XZe, µ) < 1. Then the full Newton step is strictly feasible; hence x̃ > 0 and z̃ > 0.

Proof. For each 0 ≤ α ≤ 1, let x(α) = x + α∆x and z(α) = z + α∆z. Hence

    x_i(α) z_i(α) = x_i z_i + α (z_i ∆x_i + x_i ∆z_i) + α² ∆x_i ∆z_i,  for all i = 1, ..., n.
Now, in view of (9) and (10), we have

    x_i(α) z_i(α) = µ (v_i² + α v_i (d_{x_i} + d_{z_i}) + α² d_{x_i} d_{z_i}),  for all i = 1, ..., n.

In addition, from (7) we have v_i + p_i/2 = 1, and thus

    v_i p_i = 1 − v_i² − p_i²/4,  for i = 1, ..., n.

Thereby

    x_i(α) z_i(α) / µ = (1 − α) v_i² + α (1 − (1 − α) p_i²/4 − α q_i²/4),  for i = 1, ..., n.    (13)

Thus the inequality x_i(α) z_i(α) > 0 holds if

    max_i ((1 − α) p_i²/4 + α q_i²/4) < 1.

Using (11) and (12) we get

    max_i ((1 − α) p_i²/4 + α q_i²/4) ≤ (1 − α) ‖p‖²/4 + α ‖q‖²/4 ≤ ‖p‖²/4 = δ² < 1.

Hence x_i(α) z_i(α) > 0 for each 0 ≤ α ≤ 1. Since x_i(α) and z_i(α) are linear functions of α, they do not change sign on the interval [0, 1]; for α = 0 we have x_i(0) > 0 and z_i(0) > 0, which leads to x_i(1) > 0 and z_i(1) > 0 for all i. □

In the next lemma we state that, under the condition δ(XZe, µ) < 1, the full Newton step is quadratically convergent.
Lemma 4.2. Let x̃ = x + ∆x and z̃ = z + ∆z be the iterate obtained after a full Newton step, and suppose δ := δ(XZe, µ) < 1. Then

    δ(X̃Z̃e, µ) ≤ δ² / (1 + √(1 − δ²)).

Thus δ(X̃Z̃e, µ) ≤ δ², which means quadratic convergence of the full Newton step.

Proof. Setting α = 1 in (13), we get

    ṽ_i² = x̃_i z̃_i / µ = 1 − q_i²/4,  for i = 1, ..., n.              (14)

Hence

    min_i ṽ_i² ≥ 1 − ‖q‖²/4,

and thereby, since ‖q‖ ≤ ‖p‖ = 2δ,

    min_i ṽ_i ≥ √(1 − δ²).                                             (15)

On the other hand, we have

    δ²(X̃Z̃e, µ) = Σ_i (1 − ṽ_i)²
                = Σ_i (1 − ṽ_i²)² / (1 + ṽ_i)²
                ≤ (1 / (1 + min_i ṽ_i)²) Σ_i (1 − ṽ_i²)²,

and using (15) we get

    δ²(X̃Z̃e, µ) ≤ (1 / (1 + √(1 − δ²))²) Σ_i (1 − ṽ_i²)².
Using (12) and (14) we get

    δ²(X̃Z̃e, µ) ≤ (1 / (1 + √(1 − δ²))²) Σ_i (q_i²/4)²
                ≤ (1 / (1 + √(1 − δ²))²) (Σ_i q_i²/4)²
                = (‖q‖²/4)² / (1 + √(1 − δ²))²
                ≤ δ⁴ / (1 + √(1 − δ²))².

Hence

    δ(X̃Z̃e, µ) ≤ δ² / (1 + √(1 − δ²)).

This proves the lemma. □

The next lemma gives an upper bound for the duality gap after a full Newton step.

Lemma 4.3. Let x̃ = x + ∆x and z̃ = z + ∆z. Then the duality gap is

    x̃^T z̃ = µ (n − ‖q‖²/4);

hence x̃^T z̃ ≤ µn.

Proof. We have seen from the previous lemmas that

    ṽ_i² = 1 − q_i²/4,  for i = 1, ..., n.

Hence

    Σ_i ṽ_i² = n − ‖q‖²/4,

but since

    x̃^T z̃ = µ Σ_i ṽ_i²,                                               (16)
it follows that

    x̃^T z̃ = µ (n − ‖q‖²/4) ≤ µn.

This completes the proof. □

The next lemma discusses the influence on the proximity measure of the Newton process followed by a step along the central path.

Lemma 4.4. Let δ := δ(XZe, µ) < 1 and µ⁺ = (1 − θ)µ, where 0 < θ < 1. Then

    δ(X̃Z̃e, µ⁺) ≤ (θ√n + δ²) / ((1 − θ) + √((1 − θ)(1 − δ²))).

Furthermore, if δ ≤ 1/2, θ = 1/(2√n) and n ≥ 4, then we get δ(X̃Z̃e, µ⁺) ≤ 1/2.

Proof. We have

    δ²(X̃Z̃e, µ⁺) = ‖e − ṽ/√(1 − θ)‖²
                 = (1/(1 − θ)) ‖√(1 − θ) e − ṽ‖²
                 = (1/(1 − θ)) Σ_i ((1 − θ) − ṽ_i²)² / (√(1 − θ) + ṽ_i)²
                 ≤ (1/(1 − θ)) (1 / (√(1 − θ) + min_i ṽ_i)²) Σ_i ((1 − θ) − ṽ_i²)².
Hence, by using (14) and (15), we get

    δ(X̃Z̃e, µ⁺) ≤ (1 / (√(1 − θ)(√(1 − θ) + √(1 − δ²)))) ‖θe − q²/4‖,

where q² denotes the vector with components q_i². Now, in view of (11) and (12), we deduce that

    δ(X̃Z̃e, µ⁺) ≤ (θ√n + ‖q‖²/4) / ((1 − θ) + √((1 − θ)(1 − δ²)))
                ≤ (θ√n + ‖p‖²/4) / ((1 − θ) + √((1 − θ)(1 − δ²))).

Finally, we get

    δ(X̃Z̃e, µ⁺) ≤ (θ√n + δ²) / ((1 − θ) + √((1 − θ)(1 − δ²))).

This completes the proof. □

It results from Lemma 4.4 that, for the default θ = 1/(2√n), the conditions (x, z) > 0 and δ(XZe, µ) ≤ 1/2 are maintained during the algorithm. Hence the algorithm is well defined.

In the next lemma we compute an upper bound for the total number of iterations produced by the algorithm.

Lemma 4.5. Assume that x^0 and z^0 are strictly feasible, µ^0 = (x^0)^T z^0 / n and δ(X^0 Z^0 e, µ^0) < 1/2. Moreover, let x^k and z^k be the vectors obtained after k iterations. Then the inequality (x^k)^T z^k ≤ ε is satisfied for

    k ≥ (1/θ) log ((x^0)^T z^0 / ε).

Proof. We have after k iterations that µ^k = (1 − θ)^k µ^0. Using Lemma 4.3, it follows that

    (x^k)^T z^k ≤ nµ^k = (1 − θ)^k n µ^0 = (1 − θ)^k (x^0)^T z^0.

Hence the inequality (x^k)^T z^k ≤ ε holds if (1 − θ)^k (x^0)^T z^0 ≤ ε. By taking logarithms of both sides, we obtain

    k log(1 − θ) + log((x^0)^T z^0) ≤ log ε.
The estimate log(1 − θ) ≤ −θ for all 0 ≤ θ < 1 then shows that the inequality (x^k)^T z^k ≤ ε is fulfilled if

    k ≥ (1/θ) log ((x^0)^T z^0 / ε).

This completes the proof. □

For the default θ = 1/(2√n), we obtain the following theorem.

Theorem 4.6. Suppose that the pair (x^0, z^0) is strictly feasible and let µ^0 = (x^0)^T z^0 / n. If θ = 1/(2√n), then the algorithm requires at most

    2√n log ((x^0)^T z^0 / ε)

iterations. For the resulting vectors we have (x^k)^T z^k ≤ ε.

5 Conclusion

In this paper, we have described a new primal-dual path-following method to solve convex quadratic programs. We have proved that the short-update algorithm can find an ε-solution for (QP) in 2√n log ((x^0)^T z^0 / ε) iterations, where x^0 and z^0 are the initial positive starting points. The initialization of the algorithm can be achieved, as in linear optimization, by using self-dual embedding techniques. Hence, if x^0 = z^0 = e, we get the best well-known short-step complexity

    O(√n log (n/ε)).

REFERENCES

[1] I. Adler and R.D.C. Monteiro, Interior path following primal-dual algorithms, Part II: Convex quadratic programming. Mathematical Programming (1989).

[2] F. Alizadeh and D. Goldfarb, Second-order cone programming. Mathematical Programming, 95 (2003), 3–51.

[3] K.M. Anstreicher, D. den Hertog, C. Roos and T. Terlaky, A long step barrier method for convex quadratic programming. Delft University of Technology, Report (1990).
[4] Y.Q. Bai, M. El Ghami and C. Roos, A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM Journal on Optimization, 13(3) (2003).

[5] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications. MPS-SIAM Series on Optimization, SIAM, Philadelphia, USA (2001).

[6] Zs. Darvay, A new algorithm for solving self-dual linear programming problems. Studia Universitatis Babeş-Bolyai, Series Informatica, 47(1) (2002), 15–26.

[7] J. Peng, C. Roos and T. Terlaky, Primal-dual interior point methods for second-order conic optimization based on self-regular proximities. SIAM Journal on Optimization, 13(1) (2002).

[8] S.J. Wright, Primal-Dual Interior Point Methods. SIAM, Philadelphia, USA (1997).
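As a closing numerical illustration (not part of the original paper), the short-update algorithm of Section 4 with the ϕ(t) = √t directions can be sketched end to end. The instance, the starting point, and the helper name darvay_step are assumptions for the sketch; the run checks that the final duality gap is below ε and that the iteration count respects the bound of Theorem 4.6:

```python
import math
import numpy as np

def darvay_step(A, Q, x, z, mu):
    # One full Newton step of the new method (phi(t) = sqrt(t)).
    n, m = x.size, A.shape[0]
    v = np.sqrt(x * z / mu)
    p = 2.0 * (1.0 - v)                       # (7)
    D = np.diag(x / v)                        # X V^{-1}
    Abar, Qbar = A @ D, D @ Q @ D / mu
    K = np.block([[np.eye(n) + Qbar, -Abar.T],
                  [Abar, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([p, np.zeros(m)]))
    dx, dy = sol[:n], sol[n:]
    dz = p - dx
    return (x / v) * dx, mu * dy, (z / v) * dz  # Delta x, Delta y, Delta z

# Tiny illustrative QP (assumed data): start on the central path with mu0 = 1
A = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 3.0]])
Q = 2.0 * np.eye(3)                            # symmetric PSD
x, z, y = np.ones(3), np.ones(3), np.zeros(2)
b = A @ x
c = A.T @ y + z - Q @ x                        # makes (x, y, z) strictly feasible

n, eps = x.size, 1e-6
mu = (x @ z) / n                               # = 1, and delta(XZe, mu) = 0 < 1/2
theta = 1.0 / (2.0 * math.sqrt(n))             # default short-step update
iters = 0
while x @ z > eps:
    mu = (1.0 - theta) * mu
    Dx, Dy, Dz = darvay_step(A, Q, x, z, mu)
    x, y, z = x + Dx, y + Dy, z + Dz
    iters += 1

bound = math.ceil(2.0 * math.sqrt(n) * math.log(3.0 / eps))  # Theorem 4.6
assert x @ z <= eps and iters <= bound
assert np.allclose(A @ x, b)                   # primal feasibility preserved
assert np.allclose(A.T @ y + z - Q @ x, c)     # dual feasibility preserved
```

Since the Newton steps are feasible steps (A∆x = 0 and A^T∆y + ∆z − Q∆x = 0), primal and dual feasibility are preserved exactly at every iteration, which the final assertions confirm.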
More informationA New Self-Dual Embedding Method for Convex Programming
A New Self-Dual Embedding Method for Convex Programming Shuzhong Zhang October 2001; revised October 2002 Abstract In this paper we introduce a conic optimization formulation to solve constrained convex
More informationPrimal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond
Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond Tor Myklebust Levent Tunçel September 26, 2014 Convex Optimization in Conic Form (P) inf c, x A(x) = b, x
More informationPrimal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725
Primal-Dual Interior-Point Methods Javier Peña Convex Optimization 10-725/36-725 Last time: duality revisited Consider the problem min x subject to f(x) Ax = b h(x) 0 Lagrangian L(x, u, v) = f(x) + u T
More informationSemidefinite Programming
Semidefinite Programming Basics and SOS Fernando Mário de Oliveira Filho Campos do Jordão, 2 November 23 Available at: www.ime.usp.br/~fmario under talks Conic programming V is a real vector space h, i
More informationOn the Sandwich Theorem and a approximation algorithm for MAX CUT
On the Sandwich Theorem and a 0.878-approximation algorithm for MAX CUT Kees Roos Technische Universiteit Delft Faculteit Electrotechniek. Wiskunde en Informatica e-mail: C.Roos@its.tudelft.nl URL: http://ssor.twi.tudelft.nl/
More informationNonsymmetric potential-reduction methods for general cones
CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction
More informationPOLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS
POLYNOMIAL OPTIMIZATION WITH SUMS-OF-SQUARES INTERPOLANTS Sercan Yıldız syildiz@samsi.info in collaboration with Dávid Papp (NCSU) OPT Transition Workshop May 02, 2017 OUTLINE Polynomial optimization and
More informationA sensitivity result for quadratic semidefinite programs with an application to a sequential quadratic semidefinite programming algorithm
Volume 31, N. 1, pp. 205 218, 2012 Copyright 2012 SBMAC ISSN 0101-8205 / ISSN 1807-0302 (Online) www.scielo.br/cam A sensitivity result for quadratic semidefinite programs with an application to a sequential
More informationA CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING
A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING Kartik Krishnan Advanced Optimization Laboratory McMaster University Joint work with Gema Plaza Martinez and Tamás
More informationDEPARTMENT OF MATHEMATICS
A ISRN KTH/OPT SYST/FR 02/12 SE Coden: TRITA/MAT-02-OS12 ISSN 1401-2294 Characterization of the limit point of the central path in semidefinite programming by Göran Sporre and Anders Forsgren Optimization
More informationA smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister function
Volume 30, N. 3, pp. 569 588, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More informationCONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION
CONVERGENCE OF A SHORT-STEP PRIMAL-DUAL ALGORITHM BASED ON THE GAUSS-NEWTON DIRECTION SERGE KRUK AND HENRY WOLKOWICZ Received 3 January 3 and in revised form 9 April 3 We prove the theoretical convergence
More informationPRIMAL-DUAL ALGORITHMS FOR SEMIDEFINIT OPTIMIZATION PROBLEMS BASED ON GENERALIZED TRIGONOMETRIC BARRIER FUNCTION
International Journal of Pure and Applied Mathematics Volume 4 No. 4 07, 797-88 ISSN: 3-8080 printed version); ISSN: 34-3395 on-line version) url: http://www.ijpam.eu doi: 0.73/ijpam.v4i4.0 PAijpam.eu
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationAgenda. 1 Cone programming. 2 Convex cones. 3 Generalized inequalities. 4 Linear programming (LP) 5 Second-order cone programming (SOCP)
Agenda 1 Cone programming 2 Convex cones 3 Generalized inequalities 4 Linear programming (LP) 5 Second-order cone programming (SOCP) 6 Semidefinite programming (SDP) 7 Examples Optimization problem in
More informationThe Q-parametrization (Youla) Lecture 13: Synthesis by Convex Optimization. Lecture 13: Synthesis by Convex Optimization. Example: Spring-mass System
The Q-parametrization (Youla) Lecture 3: Synthesis by Convex Optimization controlled variables z Plant distubances w Example: Spring-mass system measurements y Controller control inputs u Idea for lecture
More informationSecond-Order Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software
and Second-Order Cone Program () and Algorithms for Optimization Software Jared Erickson JaredErickson2012@u.northwestern.edu Robert 4er@northwestern.edu Northwestern University INFORMS Annual Meeting,
More informationRobust linear optimization under general norms
Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn
More informationOn Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming
On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationInterior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems
AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss
More informationPrimal-dual IPM with Asymmetric Barrier
Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers
More informationMore First-Order Optimization Algorithms
More First-Order Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM
More informationFull-Newton-Step Interior-Point Method for the Linear Complementarity Problems
Georgia Southern University Digital Commons@Georgia Southern Electronic Theses and Dissertations Graduate Studies, Jack N. Averitt College of Summer 2011 Full-Newton-Step Interior-Point Method for the
More informationLecture: Cone programming. Approximating the Lorentz cone.
Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is
More informationSupplement: Hoffman s Error Bounds
IE 8534 1 Supplement: Hoffman s Error Bounds IE 8534 2 In Lecture 1 we learned that linear program and its dual problem (P ) min c T x s.t. (D) max b T y s.t. Ax = b x 0, A T y + s = c s 0 under the Slater
More informationLecture: Examples of LP, SOCP and SDP
1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality
More informationNew Primal-dual Interior-point Methods Based on Kernel Functions
New Primal-dual Interior-point Methods Based on Kernel Functions New Primal-dual Interior-point Methods Based on Kernel Functions PROEFSCHRIFT ter verkrijging van de graad van doctor aan de Technische
More informationFRTN10 Multivariable Control, Lecture 13. Course outline. The Q-parametrization (Youla) Example: Spring-mass System
FRTN Multivariable Control, Lecture 3 Anders Robertsson Automatic Control LTH, Lund University Course outline The Q-parametrization (Youla) L-L5 Purpose, models and loop-shaping by hand L6-L8 Limitations
More informationChapter 6 Interior-Point Approach to Linear Programming
Chapter 6 Interior-Point Approach to Linear Programming Objectives: Introduce Basic Ideas of Interior-Point Methods. Motivate further research and applications. Slide#1 Linear Programming Problem Minimize
More informationAn O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015
An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv:1506.06365v [math.oc] 9 Jun 015 Yuagang Yang and Makoto Yamashita September 8, 018 Abstract In this paper, we propose an arc-search
More informationImproved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems
Georgia Southern University Digital Commons@Georgia Southern Electronic Theses & Dissertations Graduate Studies, Jack N. Averitt College of Summer 2015 Improved Full-Newton-Step Infeasible Interior- Point
More informationPrimal-Dual Interior-Point Methods for Linear Programming based on Newton s Method
Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More informationApproximate Farkas Lemmas in Convex Optimization
Approximate Farkas Lemmas in Convex Optimization Imre McMaster University Advanced Optimization Lab AdvOL Graduate Student Seminar October 25, 2004 1 Exact Farkas Lemma Motivation 2 3 Future plans The
More informationSEMIDEFINITE PROGRAM BASICS. Contents
SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n
More informationFinding a Strict Feasible Dual Initial Solution for Quadratic Semidefinite Programming
Applied Mathematical Sciences, Vol 6, 22, no 4, 229-234 Finding a Strict Feasible Dual Initial Solution for Quadratic Semidefinite Programming El Amir Djeffal Departement of Mathematics University of Batna,
More information