Interior-point algorithm for linear optimization based on a new trigonometric kernel function


Accepted Manuscript

Interior-point algorithm for linear optimization based on a new trigonometric kernel function

Xin Li, Mingwang Zhang

To appear in: Operations Research Letters. Received date: February; revised date: June; accepted date: June.

Please cite this article as: X. Li, M. Zhang, Interior-point algorithm for linear optimization based on a new trigonometric kernel function, Operations Research Letters. This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Interior-point Algorithm for Linear Optimization Based on a New Trigonometric Kernel Function

Xin Li, Mingwang Zhang

College of Science, China Three Gorges University, Yichang, P. R. China

Abstract. In this paper, we present a new primal-dual interior-point algorithm for linear optimization based on a trigonometric kernel function. By simple analysis, we derive the worst-case complexity of a large-update primal-dual interior-point method based on this kernel function. This complexity estimate improves a result from [] and matches the one obtained in [].

Keywords: Linear optimization, Kernel function, Interior-point algorithm, Large-update, Polynomial complexity.

1. Introduction

After the landmark paper of Karmarkar [], linear optimization (LO) became an active area of research, due to its wide applicability to real-world problems. The resulting interior-point methods (IPMs) are now among the most effective methods for solving LO problems. A number of IPMs have been proposed and analyzed; for these, we refer the reader to [-]. Primal-dual IPMs for LO problems were first introduced by Megiddo []. Peng et al. [] introduced a class of self-regular kernel functions and designed primal-dual IPMs based on this class of functions for LO and SDO. They obtained an O(√n (log n) log(n/ε)) complexity bound for large-update primal-dual IPMs for LO. Later on, Qian et al. [] proposed a new kernel function with a simple algebraic expression for SDO and established an iteration complexity of O(n^(3/4) log(n/ε)). Recently, M. El Ghami et al. [] presented a large-update IPM based on a kernel function with a trigonometric barrier term for LO and obtained the same iteration bound as []. Very recently, M. Reza Peyghami et al. [] proposed a large-update IPM based on a trigonometric kernel function and derived the polynomial complexity O(n^(2/3) log(n/ε)), which improved the complexity result for trigonometric kernel functions of [].
Motivated by their work, in this paper we introduce a new trigonometric kernel function, which is neither a self-regular function nor one of those proposed in [] and [], and we propose an IPM for LO based on this kernel function. We develop some new analytic tools that are used in the complexity analysis of the algorithm. Finally, we obtain the same complexity result as [] for the large-update primal-dual IPM.

The paper is organized as follows. In Section 2, we briefly recall the basic concepts of IPMs for LO. The generic primal-dual IPM for LO is presented in Section 3. In Section 4, we introduce the new kernel function and study its properties. Finally, we analyze the algorithm and obtain the worst-case complexity result in Section 5.

2. Preliminaries

In this section, we briefly recall the basic concepts of IPMs for LO. The standard LO problem is as follows:

(P)  min { c^T x : Ax = b, x ≥ 0 },

where A ∈ R^(m×n) with rank(A) = m ≤ n, x, c ∈ R^n and b ∈ R^m. The dual problem of (P) is given by

(D)  max { b^T y : A^T y + s = c, s ≥ 0 },

where y ∈ R^m and s ∈ R^n. Without loss of generality, we may assume that the problems (P) and (D) satisfy the interior-point condition (IPC) [], i.e., there exist x^0 and (y^0, s^0) such that

Ax^0 = b, x^0 > 0,  A^T y^0 + s^0 = c, s^0 > 0.

It is well known that finding an optimal solution of (P) and (D) is equivalent to solving the following system:

Ax = b, x ≥ 0,
A^T y + s = c, s ≥ 0,
xs = 0.

The basic idea of primal-dual IPMs is to replace the third equation, the complementarity condition, by the parametric equation xs = µe, where µ is a positive parameter and e denotes the all-one vector, i.e.,

Ax = b, x > 0,
A^T y + s = c, s > 0,
xs = µe.

Surprisingly enough, if the IPC is satisfied, the parameterized system has a unique solution for each µ > 0. It is denoted by (x(µ), y(µ), s(µ)); we call x(µ) the µ-center of (P) and (y(µ), s(µ)) the µ-center of (D). The set of µ-centers, with µ running through all positive real numbers, forms a homotopy path, which is called the central path of (P) and (D). The relevance of the central path for LO was recognized first by Sonnevend [] and Megiddo [].
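For illustration, the µ-centers can be computed explicitly for a tiny example. The sketch below (hypothetical data, not from the paper) solves the parameterized system for min{x1 + 2·x2 : x1 + x2 = 2, x ≥ 0}: for fixed µ the system xs = µe with s = c − A^T y reduces to one scalar equation in y, solved here by bisection. As µ → 0, the µ-center tends to the optimal vertex (2, 0).

```python
import numpy as np

# mu-centers of the toy LP  min{x1 + 2*x2 : x1 + x2 = 2, x >= 0}
# (illustrative example data; A = [1 1], b = 2, c = (1, 2))
c = np.array([1.0, 2.0])

def mu_center(mu, lo=-50.0, hi=1.0 - 1e-12):
    # find y < 1 with  mu/(1-y) + mu/(2-y) = 2  (primal feasibility x1+x2 = 2);
    # the left-hand side is increasing in y, so plain bisection works
    for _ in range(200):
        y = (lo + hi) / 2
        if mu / (1 - y) + mu / (2 - y) < 2:
            lo = y
        else:
            hi = y
    y = (lo + hi) / 2
    s = c - y          # dual slack s = c - A^T y with A = [1 1]
    x = mu / s         # relaxed complementarity: x_i * s_i = mu
    return x, y, s

x, y, s = mu_center(1e-6)
# x is primal feasible, x*s = mu*e, and x approaches the optimal vertex (2, 0)
```

The bisection bracket (lo, hi) is an ad-hoc choice for this instance; uniqueness of the solution for each µ > 0 is exactly the central-path property recalled above.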
If µ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition, the limit yields optimal solutions for (P) and (D). For fixed µ > 0, a direct application of Newton's method to the system yields the following system for the search direction (Δx, Δy, Δs):

AΔx = 0,
A^T Δy + Δs = 0,
sΔx + xΔs = µe − xs.

Corresponding author: College of Science, China Three Gorges University, Yichang, P. R. China. E-mail: zmwang@ctgu.edu.cn, sxlixin@.com

Since A has full row rank, the system has a unique solution (Δx, Δy, Δs), which defines the search direction. By taking a step along the search direction, one constructs a new iterate

x_+ := x + αΔx,  y_+ := y + αΔy,  s_+ := s + αΔs,

where α ∈ (0, 1] is obtained by using some rule so that the new iterate satisfies x_+ > 0 and s_+ > 0. For the motivation of the new method, let us define the scaled vector v as

v := √(xs/µ).

Note that the pair (x, s) coincides with the µ-center (x(µ), s(µ)) if and only if v = e. Using the scaled vector v, the Newton system can be rewritten as

Ā d_x = 0,
Ā^T Δy + d_s = 0,
d_x + d_s = v^(−1) − v,

where

Ā := (1/µ)AV^(−1)X = AS^(−1)V,  d_x := vΔx/x,  d_s := vΔs/s.

A crucial observation is that the right-hand side v^(−1) − v in the third equation equals minus the gradient of the barrier function Ψ_c(v) = Σ_{i=1}^n ψ_c(v_i), where

ψ_c(t) = (t^2 − 1)/2 − log t,  t > 0.

It can easily be seen that ψ_c(t) is a strictly convex, differentiable function on (0, ∞) with ψ_c(1) = ψ_c'(1) = 0, i.e., it attains its minimal value at t = 1. In this paper, we replace the barrier function Ψ_c(v) by a barrier function Ψ(v) = Σ_{i=1}^n ψ(v_i), where ψ(t) is any strictly convex, differentiable function on (0, ∞) with ψ(1) = ψ'(1) = 0; the system is then converted to

Ā d_x = 0,
Ā^T Δy + d_s = 0,
d_x + d_s = −∇Ψ(v).

3. A generic primal-dual interior-point algorithm

The generic form of the algorithm is shown in Fig. 1.

Algorithm: Generic Primal-Dual Algorithm for LO
Input: a barrier function Ψ(v); a threshold parameter τ > 0; a barrier update parameter θ, 0 < θ < 1; an accuracy parameter ε > 0.
begin
  x := e; s := e; µ := 1;
  while nµ > ε do
  begin
    µ := (1 − θ)µ;
    v := √(xs/µ);
    while Ψ(v) > τ do
    begin
      x := x + αΔx; y := y + αΔy; s := s + αΔs;
      v := √(xs/µ);
    end
  end
end

Fig. 1.

Remark 1. The choice of the barrier update parameter θ plays an important role in both the theory and practice of IPMs. Usually, if θ is a constant independent of the dimension n of the problem, for instance θ = 1/2, then we call the algorithm a large-update (or long-step) method.
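The generic algorithm of Fig. 1 can be sketched in runnable form. The sketch below makes two simplifying assumptions, stated here to be explicit: it uses the classical logarithmic kernel ψ_c(t) = (t^2 − 1)/2 − log t instead of the paper's trigonometric kernel, and a backtracking line search instead of the default step size analyzed later; the toy problem data are hypothetical.

```python
import numpy as np

# Minimal sketch of the generic primal-dual algorithm (Fig. 1), with the
# classical kernel psi_c(t) = (t^2 - 1)/2 - log t as a stand-in barrier.
def Psi(v):     return float(np.sum((v**2 - 1) / 2 - np.log(v)))
def gradPsi(v): return v - 1.0 / v          # componentwise psi_c'(v_i)

def generic_ipm(A, b, c, x, y, s, theta=0.5, tau=3.0, eps=1e-6):
    n = len(x)
    mu = float(x @ s) / n
    while n * mu > eps:
        mu *= 1 - theta                     # barrier parameter update
        v = np.sqrt(x * s / mu)
        while Psi(v) > tau:                 # inner centering loop
            # scaled directions: A_bar dx = 0, A_bar^T dy + ds = 0,
            #                    dx + ds = -grad Psi(v)
            Ab = A * (x / v)                # A_bar up to the factor 1/mu
            g = -gradPsi(v)
            w = np.linalg.solve(Ab @ Ab.T, Ab @ g)
            ds = Ab.T @ w                   # component in the row space
            dx = g - ds                     # component in the null space
            dy = -mu * w
            a = 1.0                         # backtracking line search:
            while True:                     # keep x, s > 0 and decrease Psi
                xn = x + a * (x / v) * dx
                sn = s + a * (s / v) * ds
                if xn.min() > 0 and sn.min() > 0 and \
                        Psi(np.sqrt(xn * sn / mu)) < Psi(v):
                    break
                a *= 0.5
            x, y, s = xn, y + a * dy, sn
            v = np.sqrt(x * s / mu)
    return x, y, s, n * mu

# toy LP: min x1 + 2*x2 + 3*x3  s.t.  x1 + x2 + x3 = 3, x >= 0
A = np.array([[1.0, 1.0, 1.0]])
x0, y0, s0 = np.ones(3), np.zeros(1), np.array([1.0, 2.0, 3.0])
# with y0 = 0 the cost vector c = A^T y0 + s0 equals s0
x, y, s, gap = generic_ipm(A, A @ x0, s0, x0, y0, s0)
```

Because the scaled directions satisfy Ā d_x = 0 and Ā^T Δy + d_s = 0 exactly, the iterates stay primal and dual feasible throughout, and the loop terminates once the duality gap nµ drops below ε.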
If θ depends on the dimension of the problem, for instance θ = 1/√n, then the algorithm is called a small-update (or short-step) method.

Remark 2. The choice of the step size α (α > 0) is another crucial issue in the analysis of the algorithm. In the theoretical analysis the step size α is usually given a value that depends on the closeness of the current iterates to the µ-center. Hence it has to be made sure that the closeness of the iterates to the current µ-center improves by a sufficient amount in each step.

4. The new kernel function and its properties

This section is devoted to introducing a new kernel function and studying its properties, which are used in the complexity analysis of the algorithm. In this paper, we consider the following new univariate function:

ψ(t) = (t − 1)^2/(t + 1) + (t − 1)^2/2 + tan^2(h(t))/2,  where  h(t) = π(1 − t)/(2t + 2).

This kernel function has a trigonometric term which differs from the one proposed in [] and from the trigonometric kernels in [], []. The first three derivatives of the function ψ(t) are

ψ'(t) = (t − 1)(t + 3)/(t + 1)^2 + (t − 1) + h'(t)(tan(h(t)) + tan^3(h(t))),

ψ''(t) = 1 + 8/(t + 1)^3 + (1 + tan^2(h(t)))[h''(t)tan(h(t)) + h'(t)^2(1 + 3tan^2(h(t)))],

ψ'''(t) = −24/(t + 1)^4 + (1 + tan^2(h(t)))k(t),

where

h'(t) = −π/(1 + t)^2 < 0,  h''(t) = 2π/(1 + t)^3 > 0,  h'''(t) = −6π/(1 + t)^4 < 0,

k(t) := 3h'(t)h''(t)(1 + 3tan^2(h(t))) + 4h'(t)^3 tan(h(t))(2 + 3tan^2(h(t))) + h'''(t)tan(h(t)).

In order to study the properties of our kernel function, we need the following technical lemmas.

Lemma 4.1 (cf. Lemma in []). For the function h(t) defined above, one has tan(h(t)) ≥ 1/(πt) > 0 for 0 < t ≤ 2/5.

Lemma 4.2. Let ψ(t) be as defined above. Then
(i) ψ''(t) > 1, for t > 0,
(ii) tψ''(t) + ψ'(t) > 0, for t > 0,
(iii) tψ''(t) − ψ'(t) > 0, for t > 0,
(iv) ψ'''(t) < 0, for t > 0.

Proof. See the Appendix.

Since ψ(1) = ψ'(1) = 0, the function ψ(t) is completely described by its second derivative:

ψ(t) = ∫_1^t ∫_1^ξ ψ''(ζ) dζ dξ.

The following lemma provides equivalent forms of the e-convexity property of a kernel function [].

Lemma 4.3 (Lemma in []). Let ψ(t) be a twice differentiable function for t > 0. Then the following three properties are equivalent:
(i) ψ(√(t1 t2)) ≤ (ψ(t1) + ψ(t2))/2, for t1, t2 > 0,
(ii) ψ'(t) + tψ''(t) ≥ 0, for t > 0,
(iii) ψ(e^ξ) is a convex function.

By Lemmas 4.2 and 4.3, our new kernel function has the e-convexity property. In the sequel, we provide some further results related to the new kernel function. We first define the norm-based proximity measure δ(v) by

δ(v) := ||∇Ψ(v)||/2,  v ∈ R^n with v > 0.

Next, we establish a lower bound on δ(v) in terms of Ψ(v).

Lemma 4.4. Let ψ(t) be as defined above. Then ψ(t) < ψ''(1)(t − 1)^2/2 if t > 1.

Proof. By Taylor's theorem and ψ(1) = ψ'(1) = 0, we obtain

ψ(t) = ψ''(1)(t − 1)^2/2 + ψ'''(ξ)(t − 1)^3/6,

where 1 < ξ < t if t > 1. Since ψ'''(ξ) < 0, the lemma follows.

Lemma 4.5. Let ψ(t) be as defined above. Then tψ'(t) ≥ ψ(t) if t ≥ 1.

Proof. Define f(t) := tψ'(t) − ψ(t) for t ≥ 1. One has f(1) = 0 and f'(t) = tψ''(t) ≥ 0. Hence f(t) ≥ 0 and the lemma follows.

Theorem 4.6 (Theorem in []). Let ϱ : [0, ∞) → [1, ∞) be the inverse function of ψ(t) on [1, ∞). One has

δ(v) ≥ ψ'(ϱ(Ψ(v)))/2.

Corollary 4.7. Let ϱ be as defined in Theorem 4.6. Then

δ(v) ≥ Ψ(v)/(2ϱ(Ψ(v))).

Proof. Using Theorem 4.6, i.e., δ(v) ≥ ψ'(ϱ(Ψ(v)))/2, and Lemma 4.5 we obtain

δ(v) ≥ ψ(ϱ(Ψ(v)))/(2ϱ(Ψ(v))) = Ψ(v)/(2ϱ(Ψ(v))).

This proves the corollary.

Theorem 4.8. If Ψ(v) ≥ 1, then

δ(v) ≥ Ψ(v)^(1/2)/(4√2).

Proof. The inverse function of ψ(t) for t ∈ [1, ∞) is obtained by solving t from ψ(t) = s with t ≥ 1. We derive an upper bound for t, as this suffices for our goal. Since ψ''(ζ) > 1, one has

s = ψ(t) = ∫_1^t ∫_1^ξ ψ''(ζ) dζ dξ ≥ ∫_1^t ∫_1^ξ dζ dξ = (t − 1)^2/2,

which implies t = ϱ(s) ≤ 1 + √(2s). Assuming s ≥ 1, we get

t = ϱ(s) ≤ 1 + √(2s) ≤ √(2s) + √(2s) = 2√(2s).

Omitting the argument v and assuming Ψ(v) ≥ 1, we have ϱ(Ψ(v)) ≤ 2√(2Ψ(v)). Now, using the corollary above, we obtain

δ(v) ≥ Ψ(v)/(2ϱ(Ψ(v))) ≥ Ψ(v)^(1/2)/(4√2).

This proves the theorem. Note that, if Ψ(v) ≥ 1, substitution gives δ(v) ≥ 1/(4√2).

5. Analysis of the algorithm

5.1. Growth behavior of the barrier function

By Lemmas 4.2 and 4.3, our kernel function is an eligible kernel function. So, analogously to [], we have the following results.

Theorem 5.1 (Theorem in []). Let ϱ : [0, ∞) → [1, ∞) be the inverse function of ψ(t) on [1, ∞). Then for any positive vector v and any β ≥ 1 we have

Ψ(βv) ≤ nψ(βϱ(Ψ(v)/n)).

Corollary 5.2. Let 0 < θ < 1 and v_+ = v/√(1 − θ). Then

Ψ(v_+) ≤ nψ(ϱ(Ψ(v)/n)/√(1 − θ)).

Proof. Substitute β := 1/√(1 − θ) into Theorem 5.1.

In the sequel, we define

L := L(n, θ, τ) := nψ(ϱ(τ/n)/√(1 − θ)).

Obviously, L is an upper bound on Ψ(v_+), the value of Ψ(v) after the µ-update, since Ψ(v) ≤ τ when the µ-update takes place.

5.2. Decrease of the proximity during a damped Newton step

After a damped step with step size α, we have

x_+ := x + αΔx = (x/v)(v + αd_x),  y_+ := y + αΔy,  s_+ := s + αΔs = (s/v)(v + αd_s),

where α is a step size obtained by using a line-search strategy. Thus we obtain

v_+ := √(x_+ s_+/µ) = √((v + αd_x)(v + αd_s)).

Let

f(α) := Ψ(v_+) − Ψ(v) = Ψ(√((v + αd_x)(v + αd_s))) − Ψ(v).

Our aim is to find an upper bound for f(α). Since ψ(t) satisfies the e-convexity property of Lemma 4.3, we get

Ψ(v_+) = Ψ(√((v + αd_x)(v + αd_s))) ≤ (Ψ(v + αd_x) + Ψ(v + αd_s))/2.

Thus we have f(α) ≤ f1(α), where

f1(α) := (Ψ(v + αd_x) + Ψ(v + αd_s))/2 − Ψ(v),

is a convex function of α, since Ψ(v) is convex. Obviously, f(0) = f1(0) = 0. Taking the derivative with respect to α, we get

f1'(α) = (1/2) Σ_{i=1}^n [ψ'(v_i + αd_xi)d_xi + ψ'(v_i + αd_si)d_si],

and hence

f1'(0) = (1/2)∇Ψ(v)^T (d_x + d_s) = −(1/2)∇Ψ(v)^T ∇Ψ(v) = −2δ(v)^2.

Differentiating f1'(α) once more with respect to α, we obtain

f1''(α) = (1/2) Σ_{i=1}^n [ψ''(v_i + αd_xi)d_xi^2 + ψ''(v_i + αd_si)d_si^2].

For simplicity, in the sequel we use the notation v_min := min(v), δ := δ(v).

Lemma 5.3 (Lemma in []). Let f1(α) be as defined above. Then

f1''(α) ≤ 2δ^2 ψ''(v_min − 2αδ).

Lemma 5.4 (Lemma in []). One has f1'(α) ≤ 0 if α satisfies

−ψ'(v_min − 2αδ) + ψ'(v_min) ≤ 2δ.

Lemma 5.5 (Lemma in []). Let ρ : [0, ∞) → (0, 1] denote the inverse function of −ψ'(t)/2 restricted to the interval (0, 1]. Then the step size

ᾱ := (ρ(δ) − ρ(2δ))/(2δ)

is the largest possible step size satisfying the condition of Lemma 5.4, and moreover

ᾱ ≥ 1/ψ''(ρ(2δ)).

In what follows, we use

α* := 1/ψ''(ρ(2δ))

as the default value for the step size during an inner iteration.

Lemma 5.6 (Lemma in []). Let h be a twice differentiable convex function with h(0) = 0 and h'(0) < 0, which attains its (global) minimum at t* > 0. If h'' is increasing for t ∈ [0, t*], then

h(t) ≤ th'(0)/2,  0 ≤ t ≤ t*.

Lemma 5.7 (Lemma in []). If the step size α satisfies α ≤ ᾱ, then

f(α) ≤ −αδ^2.

Theorem 5.8. Let ρ be as defined in Lemma 5.5, let α* = 1/ψ''(ρ(2δ)), and let Ψ(v) ≥ 1. Then

f(α*) ≤ −δ^2/ψ''(ρ(2δ)) ≤ −Θ(δ^(2/3)) = −Θ(Ψ(v)^(1/3)).

Proof. Since α* ≤ ᾱ, Lemma 5.7 gives f(α*) ≤ −α*δ^2, and the first inequality follows. Next, we prove the second inequality. To obtain the inverse function t = ρ(s) of −ψ'(t)/2 for t ∈ (0, 1], we need to solve t from the equation −ψ'(t) = 2s. Put t = ρ(2δ), which is equivalent to −ψ'(t) = 4δ. Since −(t − 1)(t + 3)/(t + 1)^2 − (t − 1) ≤ 4 for t ∈ (0, 1], this yields

−h'(t)tan(h(t))(1 + tan^2(h(t))) = 4δ + (t − 1)(t + 3)/(t + 1)^2 + (t − 1) ≤ 4δ,

and, since −h'(t) = π/(1 + t)^2 ≥ π/4 for t ∈ (0, 1],

tan^3(h(t)) ≤ tan(h(t))(1 + tan^2(h(t))) ≤ 16δ/π,

so that tan(h(t)) ≤ (16δ/π)^(1/3). On the other hand, for t > 0 one has h'(t)^2 ≤ π^2 and h''(t) ≤ 2π, hence

ψ''(t) ≤ 9 + (1 + tan^2(h(t)))[2π tan(h(t)) + π^2(1 + 3tan^2(h(t)))].

Since Ψ(v) ≥ 1 implies δ ≥ Ψ(v)^(1/2)/(4√2) ≥ 1/(4√2), combining these bounds shows that ψ''(ρ(2δ)) ≤ cδ^(4/3) for some constant c > 0.
Therefore we get α* ≥ δ^(−4/3)/c, which implies that

f(α*) ≤ −α*δ^2 ≤ −δ^(2/3)/c = −Θ(Ψ(v)^(1/3)),

where the last step uses δ ≥ Ψ(v)^(1/2)/(4√2). This completes the proof.

5.3. Iteration complexity

In the present section, we derive the iteration complexity bound for large-update IPMs.

Lemma 5.9 (Proposition in []). Let t_0, t_1, ..., t_K be a sequence of positive numbers such that

t_{k+1} ≤ t_k − βt_k^(1−γ),  k = 0, 1, ..., K − 1,

where β > 0 and 0 < γ ≤ 1. Then K ≤ t_0^γ/(βγ).

Lemma 5.10. If K denotes the number of inner iterations per outer iteration, then

K ≤ 3Ψ_0^(2/3)/(2β) = O(n^(2/3)),

where β is the constant hidden in the Θ(·) of Theorem 5.8 and Ψ_0 ≤ L denotes the value of Ψ(v) after the µ-update; for large-update methods (τ = O(n), θ = Θ(1)) one has L = O(n).

Theorem 5.11. Since the number of outer iterations is bounded by (1/θ)log(n/ε), the total number of iterations required by the algorithm is at most

O(n^(2/3) log(n/ε))

for the large-update primal-dual IPM.

The above iteration bound coincides with the bound of the large-update primal-dual interior-point method in [], which is based on a trigonometric kernel function as well.

References

[] M. El Ghami, Z.A. Guennoun, S. Bouali, T. Steihaug, Interior-point methods for linear optimization based on a kernel function with a trigonometric barrier term, Journal of Computational and Applied Mathematics (2012).
[] M. Reza Peyghami, S. Fathi Hafshejani, L. Shirvani, Complexity of interior-point methods for linear optimization based on a new trigonometric kernel function, Journal of Computational and Applied Mathematics (2014).
[] N.K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4 (1984) 373-395.
[] C. Roos, T. Terlaky, J.-P. Vial, Interior Point Methods for Linear Optimization, Springer, New York, 2005.
[] G. Sonnevend, An "analytic centre" for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming, in: A. Prekopa, J. Szelezsan, B. Strazicky (Eds.), Lecture Notes in Control and Information Sciences, Springer-Verlag, Berlin, 1986.
[] N. Megiddo, Pathways to the optimal set in linear programming, in: N. Megiddo (Ed.), Progress in Mathematical Programming: Interior Point and Related Methods, Springer-Verlag, New York, 1989.
[] S. Mehrotra, On the implementation of a primal-dual interior point method, SIAM Journal on Optimization 2 (1992) 575-601.
[] Y.Q. Bai, C. Roos, A primal-dual interior-point method based on a new kernel function with linear growth rate, in: Proceedings of Industrial Optimization Symposium and Optimization Day, Nov. 2002.
[] Y.Q. Bai, M. El Ghami, C. Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM Journal on Optimization (2004).
[] J. Peng, C. Roos, T. Terlaky, Self-regular functions and new search directions for linear and semidefinite optimization, Mathematical Programming 93 (2002) 129-171.
[] Z.G. Qian, Y.Q. Bai, G.Q. Wang, Complexity analysis of an interior-point algorithm based on a new kernel function for semi-definite optimization, Journal of Shanghai University (English Edition).
[] J. Peng, C. Roos, T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press, 2002.

Appendix

Proof of properties (i)-(iv) of the kernel function ψ. We first prove (i). Recall that

ψ''(t) = 1 + 8/(1 + t)^3 + (1 + tan^2(h(t)))[h''(t)tan(h(t)) + h'(t)^2(1 + 3tan^2(h(t)))].

We consider two cases.

Case 1: If 0 < t < 1, then tan(h(t)) > 0, h(t) = π(1 − t)/(2t + 2) > 0 and h''(t) = 2π/(1 + t)^3 > 0. Hence every term above is nonnegative, and (i) holds.

Case 2: If t ≥ 1, in order to prove inequality (i) it is sufficient to prove that

η(t) := 8/(1 + t)^3 + (1 + tan^2(h(t)))[h''(t)tan(h(t)) + h'(t)^2(1 + 3tan^2(h(t)))] > 0.

Using the fact that tan(h(t)) ∈ [−1, 0] for 1 ≤ t ≤ 3 and tan(h(t)) < −1 for t > 3, together with h'(t) = −π/(1 + t)^2 and h''(t) = 2π/(1 + t)^3, a short calculation shows that in both ranges the positive term h'(t)^2(1 + 3tan^2(h(t))) dominates h''(t)|tan(h(t))|, where the last inequality is obtained by using simple calculus. So η(t) > 0, and (i) holds.

Next, we prove that (ii) holds. For this purpose, we write

tψ''(t) + ψ'(t) = t + 8t/(1 + t)^3 + (t − 1)(t + 3)/(t + 1)^2 + (t − 1) + (1 + tan^2(h(t)))[(th''(t) + h'(t))tan(h(t)) + th'(t)^2(1 + 3tan^2(h(t)))].

We consider the following cases.

Case 1: If t ≥ 1, one has tan(h(t)) ≤ 0 and h'(t) < 0, so that ψ'(t) ≥ 0; since also ψ''(t) > 0, (ii) holds in this case.

Case 2: If 2/5 < t < 1, then 0 < tan(h(t)) < 1 and th''(t) + h'(t) = π(t − 1)/(1 + t)^3, and the positivity of tψ''(t) + ψ'(t) follows by simple calculus.

Case 3: If 0 < t ≤ 2/5, Lemma 4.1 gives tan(h(t)) ≥ 1/(πt), and hence

3th'(t)^2 tan^2(h(t)) ≥ 3π tan(h(t))/(1 + t)^4 > |th''(t) + h'(t)| tan(h(t)),

so the bracketed term is positive; multiplied by 1 + tan^2(h(t)) ≥ 1 + 1/(πt)^2, it dominates the (possibly negative) algebraic part for small t. So the three cases together prove (ii).

In order to prove (iii), we consider two cases.

Case 1: If t ∈ (0, 1), we have ψ''(t) > 0 and ψ'(t) < 0, therefore tψ''(t) − ψ'(t) > 0.

Case 2: If t ≥ 1, using the facts that th''(t) − h'(t) > 0 and tan(h(t)) ≤ 0, we write

tψ''(t) − ψ'(t) = t + 8t/(1 + t)^3 − (t − 1)(t + 3)/(t + 1)^2 − (t − 1) + (1 + tan^2(h(t)))[(th''(t) − h'(t))tan(h(t)) + th'(t)^2(1 + 3tan^2(h(t)))],

and the desired inequality follows, where the last step is obtained by using simple calculus.

To complete the proof of the lemma, we need to show that (iv) holds for all t > 0. Recall that

ψ'''(t) = −24/(1 + t)^4 + (1 + tan^2(h(t)))k(t),

k(t) = 3h'(t)h''(t)(1 + 3tan^2(h(t))) + 4h'(t)^3 tan(h(t))(2 + 3tan^2(h(t))) + h'''(t)tan(h(t)).

Case 1: If 0 < t ≤ 1, then h'''(t) < 0, h'(t)h''(t) < 0, h'(t) < 0 and tan(h(t)) > 0, which imply that k(t) < 0 and hence ψ'''(t) < 0 for any t ∈ (0, 1].

Case 2: If 1 < t ≤ 3, we have tan(h(t)) ∈ [−1, 0], so the terms 4h'(t)^3 tan(h(t))(2 + 3tan^2(h(t))) and h'''(t)tan(h(t)) in k(t) are nonnegative; however, the negative terms −24/(1 + t)^4 and (1 + tan^2(h(t)))·3h'(t)h''(t)(1 + 3tan^2(h(t))) = −(1 + tan^2(h(t)))·6π^2(1 + 3tan^2(h(t)))/(1 + t)^5 outweigh them, where the last inequality is obtained by using simple calculus. Thus for all t ∈ (1, 3] we have ψ'''(t) < 0.

Case 3: If t > 3, we have tan(h(t)) < −1 and t < 1 + t, and the same grouping of terms, combined with tan^2(h(t)) > 1, again yields ψ'''(t) < 0.

This completes the proof of the lemma.
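The sign conditions established in this appendix can also be sanity-checked numerically. The sketch below verifies conditions (i)-(iv) and the e-convexity inequality for the classical logarithmic kernel ψ_c(t) = (t^2 − 1)/2 − log t, used here purely as a stand-in eligible kernel (an assumption: it is not the paper's kernel, but it satisfies the same four conditions, with derivatives available in closed form).

```python
import numpy as np

# Numerical check of the eligibility conditions (i)-(iv) and e-convexity
# for the classical kernel psi_c(t) = (t^2 - 1)/2 - log t (stand-in kernel).
psi   = lambda t: (t**2 - 1) / 2 - np.log(t)
dpsi  = lambda t: t - 1 / t            # psi'
d2psi = lambda t: 1 + 1 / t**2         # psi''
d3psi = lambda t: -2 / t**3            # psi'''

ts = np.linspace(0.05, 10.0, 400)
assert np.all(d2psi(ts) > 1)                    # (i)   psi'' > 1
assert np.all(ts * d2psi(ts) + dpsi(ts) > 0)    # (ii)  t psi'' + psi' > 0
assert np.all(ts * d2psi(ts) - dpsi(ts) > 0)    # (iii) t psi'' - psi' > 0
assert np.all(d3psi(ts) < 0)                    # (iv)  psi''' < 0

# e-convexity: psi(sqrt(t1*t2)) <= (psi(t1) + psi(t2)) / 2 on a grid
T1, T2 = np.meshgrid(ts, ts)
assert np.all(psi(np.sqrt(T1 * T2)) <= (psi(T1) + psi(T2)) / 2 + 1e-12)
```

The grid check of e-convexity mirrors property (i) of the equivalence lemma in Section 4; condition (ii) of that lemma holds here in closed form, since t·ψ_c''(t) + ψ_c'(t) = 2t > 0.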


More information

New Interior Point Algorithms in Linear Programming

New Interior Point Algorithms in Linear Programming AMO - Advanced Modeling and Optimization, Volume 5, Number 1, 2003 New Interior Point Algorithms in Linear Programming Zsolt Darvay Abstract In this paper the abstract of the thesis New Interior Point

More information

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS By Xiaohang Zhu A thesis submitted to the School of Graduate Studies in Partial Fulfillment

More information

A Weighted-Path-Following Interior-Point Algorithm for Second-Order Cone Optimization

A Weighted-Path-Following Interior-Point Algorithm for Second-Order Cone Optimization Appl Math Inf Sci 9, o, 973-980 (015) 973 Applied Mathematics & Information Sciences An International Journal http://dxdoiorg/101785/amis/0908 A Weighted-Path-Following Interior-Point Algorithm for Second-Order

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

Lecture 17: Primal-dual interior-point methods part II

Lecture 17: Primal-dual interior-point methods part II 10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS

More information

Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond

Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond Tor Myklebust Levent Tunçel September 26, 2014 Convex Optimization in Conic Form (P) inf c, x A(x) = b, x

More information

Interior Point Algorithms for Constrained Convex Optimization

Interior Point Algorithms for Constrained Convex Optimization Interior Point Algorithms for Constrained Convex Optimization Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Inequality constrained minimization problems

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

Lecture 10. Primal-Dual Interior Point Method for LP

Lecture 10. Primal-Dual Interior Point Method for LP IE 8534 1 Lecture 10. Primal-Dual Interior Point Method for LP IE 8534 2 Consider a linear program (P ) minimize c T x subject to Ax = b x 0 and its dual (D) maximize b T y subject to A T y + s = c s 0.

More information

A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization

A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization Yinyu Ye Department is Management Science & Engineering and Institute of Computational & Mathematical Engineering Stanford

More information

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION

A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION J Syst Sci Complex (01) 5: 1108 111 A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION Mingwang ZHANG DOI: 10.1007/s1144-01-0317-9 Received: 3 December 010 / Revised:

More information

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence

More information

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Int. Journal of Math. Analysis, Vol. 1, 2007, no. 17, 841-849 An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Z. Kebbiche 1 and A. Keraghel Department of Mathematics,

More information

Unfolding the Skorohod reflection of a semimartingale

Unfolding the Skorohod reflection of a semimartingale Unfolding the Skorohod reflection of a semimartingale Vilmos Prokaj To cite this version: Vilmos Prokaj. Unfolding the Skorohod reflection of a semimartingale. Statistics and Probability Letters, Elsevier,

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

McMaster University. Advanced Optimization Laboratory. Title: Computational Experience with Self-Regular Based Interior Point Methods

McMaster University. Advanced Optimization Laboratory. Title: Computational Experience with Self-Regular Based Interior Point Methods McMaster University Advanced Optimization Laboratory Title: Computational Experience with Self-Regular Based Interior Point Methods Authors: Guoqing Zhang, Jiming Peng, Tamás Terlaky, Lois Zhu AdvOl-Report

More information

IBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel

IBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel and Nimrod Megiddo IBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia 95120-6099 and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel Submitted by Richard Tapia ABSTRACT

More information

This is a repository copy of Aggregation of growing crystals in suspension: III. Accounting for adhesion and repulsion.

This is a repository copy of Aggregation of growing crystals in suspension: III. Accounting for adhesion and repulsion. This is a repository copy of Aggregation of growing crystals in suspension: III. Accounting for adhesion and repulsion. White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/88281/

More information

LP. Kap. 17: Interior-point methods

LP. Kap. 17: Interior-point methods LP. Kap. 17: Interior-point methods the simplex algorithm moves along the boundary of the polyhedron P of feasible solutions an alternative is interior-point methods they find a path in the interior of

More information

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems Georgia Southern University Digital Commons@Georgia Southern Mathematical Sciences Faculty Publications Mathematical Sciences, Department of 4-2016 Improved Full-Newton-Step Infeasible Interior- Point

More information

Second-order cone programming

Second-order cone programming Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The

More information

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015 An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv:1506.06365v [math.oc] 9 Jun 015 Yuagang Yang and Makoto Yamashita September 8, 018 Abstract In this paper, we propose an arc-search

More information

The continuous d-step conjecture for polytopes

The continuous d-step conjecture for polytopes The continuous d-step conjecture for polytopes Antoine Deza, Tamás Terlaky and Yuriy Zinchenko September, 2007 Abstract The curvature of a polytope, defined as the largest possible total curvature of the

More information

CS711008Z Algorithm Design and Analysis

CS711008Z Algorithm Design and Analysis CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief

More information

On Mehrotra-Type Predictor-Corrector Algorithms

On Mehrotra-Type Predictor-Corrector Algorithms On Mehrotra-Type Predictor-Corrector Algorithms M. Salahi, J. Peng, T. Terlaky October 10, 006 (Revised) Abstract In this paper we discuss the polynomiality of a feasible version of Mehrotra s predictor-corrector

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

Introduction to Nonlinear Stochastic Programming

Introduction to Nonlinear Stochastic Programming School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS

More information

Asymptotic Convergence of the Steepest Descent Method for the Exponential Penalty in Linear Programming

Asymptotic Convergence of the Steepest Descent Method for the Exponential Penalty in Linear Programming Journal of Convex Analysis Volume 2 (1995), No.1/2, 145 152 Asymptotic Convergence of the Steepest Descent Method for the Exponential Penalty in Linear Programming R. Cominetti 1 Universidad de Chile,

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

Research Article The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity Problem

Research Article The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity Problem Journal of Applied Mathematics Volume 2012, Article ID 219478, 15 pages doi:10.1155/2012/219478 Research Article The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity

More information

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss

More information

A Continuation Method for the Solution of Monotone Variational Inequality Problems

A Continuation Method for the Solution of Monotone Variational Inequality Problems A Continuation Method for the Solution of Monotone Variational Inequality Problems Christian Kanzow Institute of Applied Mathematics University of Hamburg Bundesstrasse 55 D 20146 Hamburg Germany e-mail:

More information

An E cient A ne-scaling Algorithm for Hyperbolic Programming

An E cient A ne-scaling Algorithm for Hyperbolic Programming An E cient A ne-scaling Algorithm for Hyperbolic Programming Jim Renegar joint work with Mutiara Sondjaja 1 Euclidean space A homogeneous polynomial p : E!R is hyperbolic if there is a vector e 2E such

More information

Interior Point Methods in Mathematical Programming

Interior Point Methods in Mathematical Programming Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000

More information

Primal-dual IPM with Asymmetric Barrier

Primal-dual IPM with Asymmetric Barrier Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers

More information

Curvature as a Complexity Bound in Interior-Point Methods

Curvature as a Complexity Bound in Interior-Point Methods Lehigh University Lehigh Preserve Theses and Dissertations 2014 Curvature as a Complexity Bound in Interior-Point Methods Murat Mut Lehigh University Follow this and additional works at: http://preserve.lehigh.edu/etd

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

Conic Linear Optimization and its Dual. yyye

Conic Linear Optimization and its Dual.   yyye Conic Linear Optimization and Appl. MS&E314 Lecture Note #04 1 Conic Linear Optimization and its Dual Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

More information

Supplement: Hoffman s Error Bounds

Supplement: Hoffman s Error Bounds IE 8534 1 Supplement: Hoffman s Error Bounds IE 8534 2 In Lecture 1 we learned that linear program and its dual problem (P ) min c T x s.t. (D) max b T y s.t. Ax = b x 0, A T y + s = c s 0 under the Slater

More information

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems

Improved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems Georgia Southern University Digital Commons@Georgia Southern Electronic Theses & Dissertations Graduate Studies, Jack N. Averitt College of Summer 2015 Improved Full-Newton-Step Infeasible Interior- Point

More information

Cubic regularization of Newton s method for convex problems with constraints

Cubic regularization of Newton s method for convex problems with constraints CORE DISCUSSION PAPER 006/39 Cubic regularization of Newton s method for convex problems with constraints Yu. Nesterov March 31, 006 Abstract In this paper we derive efficiency estimates of the regularized

More information

Lecture 18 Oct. 30, 2014

Lecture 18 Oct. 30, 2014 CS 224: Advanced Algorithms Fall 214 Lecture 18 Oct. 3, 214 Prof. Jelani Nelson Scribe: Xiaoyu He 1 Overview In this lecture we will describe a path-following implementation of the Interior Point Method

More information

Interior Point Methods for Mathematical Programming

Interior Point Methods for Mathematical Programming Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint

A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite

More information

Tamás Terlaky George N. and Soteria Kledaras 87 Endowed Chair Professor. Chair, Department of Industrial and Systems Engineering Lehigh University

Tamás Terlaky George N. and Soteria Kledaras 87 Endowed Chair Professor. Chair, Department of Industrial and Systems Engineering Lehigh University 5th SJOM Bejing, 2011 Cone Linear Optimization (CLO) From LO, SOCO and SDO Towards Mixed-Integer CLO Tamás Terlaky George N. and Soteria Kledaras 87 Endowed Chair Professor. Chair, Department of Industrial

More information

JUNXIA MENG. 2. Preliminaries. 1/k. x = max x(t), t [0,T ] x (t), x k = x(t) dt) k

JUNXIA MENG. 2. Preliminaries. 1/k. x = max x(t), t [0,T ] x (t), x k = x(t) dt) k Electronic Journal of Differential Equations, Vol. 29(29), No. 39, pp. 1 7. ISSN: 172-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu POSIIVE PERIODIC SOLUIONS

More information

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa Iowa City, IA 52242

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, Jos F. Sturm 1 and Shuzhong Zhang 2. Erasmus University Rotterdam ABSTRACT

AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, Jos F. Sturm 1 and Shuzhong Zhang 2. Erasmus University Rotterdam ABSTRACT October 13, 1995. Revised November 1996. AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, FOR LINEAR PROGRAMMING Jos F. Sturm 1 Shuzhong Zhang Report 9546/A, Econometric Institute Erasmus University

More information

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization

Convergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9

More information

Chapter 1. Preliminaries

Chapter 1. Preliminaries Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method

Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach

More information

Introduction to optimization

Introduction to optimization Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)

More information

Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications

Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications L. Faybusovich T. Mouktonglang Department of Mathematics, University of Notre Dame, Notre Dame, IN

More information

Primal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725

Primal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725 Primal-Dual Interior-Point Methods Javier Peña Convex Optimization 10-725/36-725 Last time: duality revisited Consider the problem min x subject to f(x) Ax = b h(x) 0 Lagrangian L(x, u, v) = f(x) + u T

More information

A Polynomial Column-wise Rescaling von Neumann Algorithm

A Polynomial Column-wise Rescaling von Neumann Algorithm A Polynomial Column-wise Rescaling von Neumann Algorithm Dan Li Department of Industrial and Systems Engineering, Lehigh University, USA Cornelis Roos Department of Information Systems and Algorithms,

More information

Author(s) Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 41(1)

Author(s) Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 41(1) Title On the stability of contact Navier-Stokes equations with discont free b Authors Huang, Feimin; Matsumura, Akitaka; Citation Osaka Journal of Mathematics. 4 Issue 4-3 Date Text Version publisher URL

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

Lecture 5. The Dual Cone and Dual Problem

Lecture 5. The Dual Cone and Dual Problem IE 8534 1 Lecture 5. The Dual Cone and Dual Problem IE 8534 2 For a convex cone K, its dual cone is defined as K = {y x, y 0, x K}. The inner-product can be replaced by x T y if the coordinates of the

More information