Interior-point algorithm for linear optimization based on a new trigonometric kernel function

Accepted Manuscript

Interior-point algorithm for linear optimization based on a new trigonometric kernel function
Xin Li, Mingwang Zhang
To appear in: Operations Research Letters

Please cite this article as: X. Li, M. Zhang, Interior-point algorithm for linear optimization based on a new trigonometric kernel function, Operations Research Letters.

Interior-point Algorithm for Linear Optimization Based on a New Trigonometric Kernel Function

Xin Li, Mingwang Zhang
College of Science, China Three Gorges University, Yichang, P. R. China

Abstract. In this paper, we present a new primal-dual interior-point algorithm for linear optimization based on a trigonometric kernel function. By simple analysis, we derive the worst-case complexity of a large-update primal-dual interior-point method based on this kernel function. This complexity estimate improves a result from [1] and matches the one obtained in [2].

Keywords: Linear optimization; Kernel function; Interior-point algorithm; Large-update; Polynomial complexity.

1. Introduction

After the landmark paper of Karmarkar [3], linear optimization (LO) became an active area of research, due to its wide applications to real-world problems. The resulting interior-point methods (IPMs) are now among the most effective methods for solving LO problems. A number of IPMs have been proposed and analyzed; for these we refer the reader to, e.g., the monographs [4] and [12]. Primal-dual IPMs for LO problems were first introduced by Megiddo [6]. Peng et al. [10] introduced a class of self-regular kernel functions and designed primal-dual IPMs based on this class of functions for LO and semidefinite optimization (SDO). They obtained the $O(\sqrt{n}\,(\log n)\log(n/\varepsilon))$ complexity bound for large-update primal-dual IPMs for LO. Later on, Qian et al. [11] proposed a new kernel function with a simple algebraic expression for SDO and established an iteration complexity of $O(n^{3/4}\log(n/\varepsilon))$. Recently, El Ghami et al. [1] presented a large-update IPM based on a kernel function with a trigonometric barrier term for LO and obtained the same iteration bound as [11]. Very recently, Peyghami et al. [2] proposed a large-update IPM based on a trigonometric kernel function and derived the polynomial complexity $O(n^{2/3}\log(n/\varepsilon))$, which improves the complexity result of [1] for trigonometric kernel functions.
Motivated by their work, in this paper we introduce a new trigonometric kernel function, which is neither self-regular nor of the types proposed in [1] and [2], and we propose an IPM for LO based on this kernel function. We develop some new analytic tools that are used in the complexity analysis of the algorithm. Finally, we obtain the same complexity result as [2] for the large-update primal-dual IPM.

The paper is organized as follows. In Section 2, we briefly recall the basic concepts of IPMs for LO. The generic primal-dual IPM for LO is presented in Section 3. In Section 4, we introduce the new kernel function and study its properties. Finally, we analyze the algorithm and obtain the worst-case complexity result in Section 5.

2. Preliminaries

In this section, we briefly recall the basic concepts of IPMs for LO. The standard LO problem is as follows:

(P) $\min\{c^T x : Ax = b,\ x \ge 0\}$,

where $A \in \mathbb{R}^{m\times n}$ with $\operatorname{rank}(A) = m \le n$, $x, c \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$. The dual problem of (P) is given by

(D) $\max\{b^T y : A^T y + s = c,\ s \ge 0\}$,

where $y \in \mathbb{R}^m$ and $s \in \mathbb{R}^n$. Without loss of generality, we may assume that the problems (P) and (D) satisfy the interior-point condition (IPC) [4], i.e., there exist $x^0$ and $(y^0, s^0)$ such that

$Ax^0 = b,\ x^0 > 0,\qquad A^T y^0 + s^0 = c,\ s^0 > 0.$

It is well known that finding an optimal solution of (P) and (D) is equivalent to solving the following system:

$Ax = b,\ x \ge 0,\qquad A^T y + s = c,\ s \ge 0,\qquad xs = 0,$

where $xs$ denotes the componentwise product. The basic idea of primal-dual IPMs is to replace the third equation by the parameterized equation $xs = \mu e$, where $\mu$ is a positive parameter and $e$ is the all-one vector, i.e.,

$Ax = b,\ x \ge 0,\qquad A^T y + s = c,\ s \ge 0,\qquad xs = \mu e.$

Surprisingly enough, if the IPC is satisfied, then the parameterized system has a unique solution for each $\mu > 0$. It is denoted by $(x(\mu), y(\mu), s(\mu))$; we call $x(\mu)$ the $\mu$-center of (P) and $(y(\mu), s(\mu))$ the $\mu$-center of (D). The set of $\mu$-centers, with $\mu$ running through all positive real numbers, forms a homotopy path, which is called the central path of (P) and (D). The relevance of the central path for LO was recognized first by Sonnevend [5] and Megiddo [6].
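The optimality system above can be illustrated on a tiny made-up instance (this sketch is illustrative and not part of the paper; numpy is assumed to be available). For a one-constraint problem, the optimal basic solution together with the dual multiplier recovered from the basic column satisfies primal feasibility, dual feasibility, and complementarity $xs = 0$:

```python
import numpy as np

# Tiny instance of (P): min{c^T x : Ax = b, x >= 0} with an obvious solution.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
c = np.array([1.0, 2.0])

# Enumerate the two basic feasible solutions of this one-constraint problem
# and pick the cheaper one: here c @ (2,0) = 2 beats c @ (0,2) = 4.
candidates = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
x = min(candidates, key=lambda v: c @ v)

# Recover the dual multiplier from the basic column (index 0, since x = (2,0)):
# A_B^T y = c_B, then s = c - A^T y.
y = np.linalg.solve(A[:, [0]].T, c[[0]])
s = c - A.T @ y

# The optimality system of Section 2 holds:
assert np.allclose(A @ x, b) and np.all(x >= 0)   # primal feasibility
assert np.allclose(A.T @ y + s, c) and np.all(s >= 0)  # dual feasibility
assert np.allclose(x * s, 0)                      # complementarity
```

The same point is the limit of the $\mu$-centers as $\mu \to 0$, which is the observation the path-following methods below exploit.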
If $\mu \to 0$, then the limit of the central path exists, and since the limit points satisfy the complementarity condition, the limit yields optimal solutions of (P) and (D). For fixed $\mu > 0$, a direct application of Newton's method to the parameterized system yields the following linear system for the search direction $(\Delta x, \Delta y, \Delta s)$:

$A\Delta x = 0,\qquad A^T\Delta y + \Delta s = 0,\qquad s\Delta x + x\Delta s = \mu e - xs.$

Corresponding author: College of Science, China Three Gorges University, Yichang, P. R. China. E-mail: zmwang@ctgu.edu.cn, sxlixin@.com
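The Newton system above, combined with a damped step and a shrinking $\mu$, already gives a working path-following sketch. The following illustration (made-up two-variable data, numpy assumed; not the paper's algorithm, which uses a kernel-based direction) re-centers with a few Newton steps after each reduction of $\mu$ and drives the duality gap, which equals $n\mu$ on the central path, toward zero:

```python
import numpy as np

# min c^T x s.t. Ax = b, x >= 0; data chosen so x = e, y = 0, s = c is strictly feasible.
A = np.array([[1.0, 1.0]])
x = np.ones(2); y = np.zeros(1); s = np.array([1.0, 2.0])
b = A @ x                  # primal feasibility holds at the start
c = A.T @ y + s            # dual feasibility holds at the start
n, m = 2, 1

mu = 1.0
while mu > 1e-9:
    mu *= 0.8                                   # shrink the path parameter
    for _ in range(4):                          # re-center: Newton on the perturbed system
        K = np.block([
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([b - A @ x, c - A.T @ y - s, mu * np.ones(n) - x * s])
        dx, dy, ds = np.split(np.linalg.solve(K, rhs), [n, n + m])
        alpha = 1.0                             # damp so that x and s stay positive
        while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
            alpha /= 2
        x += alpha * dx; y += alpha * dy; s += alpha * ds

gap = c @ x - b @ y       # approaches 0; equals n*mu exactly on the central path
```

For this instance the iterates converge to the vertex $x = (2, 0)$ with dual value $y = 1$, matching the enumeration in the previous sketch.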

Since $A$ has full row rank, this system has a unique solution $(\Delta x, \Delta y, \Delta s)$, which defines the search direction. By taking a step along the search direction, one constructs a new iterate

$x_+ := x + \alpha\Delta x,\qquad y_+ := y + \alpha\Delta y,\qquad s_+ := s + \alpha\Delta s,$

where the step size $\alpha \in (0, 1]$ is chosen by some rule so that the new iterate satisfies $x_+ > 0$ and $s_+ > 0$. For the motivation of the new method, let us define the scaled vector $v$ as

$v := \sqrt{\frac{xs}{\mu}}.$

Note that the pair $(x, s)$ coincides with the $\mu$-center $(x(\mu), s(\mu))$ if and only if $v = e$. Using the scaled vector $v$, the Newton system can be rewritten as

$\bar{A}d_x = 0,\qquad \bar{A}^T\Delta y + d_s = 0,\qquad d_x + d_s = v^{-1} - v,$

where $\bar{A} := \frac{1}{\mu}AV^{-1}X = AS^{-1}V$, $d_x := \frac{v\Delta x}{x}$, $d_s := \frac{v\Delta s}{s}$, with $X := \operatorname{diag}(x)$, $S := \operatorname{diag}(s)$, $V := \operatorname{diag}(v)$. A crucial observation is that the right-hand side $v^{-1} - v$ in the third equation equals minus the gradient of the barrier function $\Psi_c(v) = \sum_{i=1}^n \psi_c(v_i)$, where

$\psi_c(t) = \frac{t^2 - 1}{2} - \log t,\qquad t > 0,$

is the classical logarithmic kernel function. It is easily seen that $\psi_c$ is a strictly convex, twice differentiable function on $(0,\infty)$ with $\psi_c(1) = \psi_c'(1) = 0$, i.e., it attains its minimal value at $t = 1$. In this paper, we replace the barrier function $\Psi_c(v)$ by a barrier function $\Psi(v) = \sum_{i=1}^n \psi(v_i)$, where $\psi$ is any strictly convex, twice differentiable function on $(0,\infty)$ with $\psi(1) = \psi'(1) = 0$; the system is then converted to

$\bar{A}d_x = 0,\qquad \bar{A}^T\Delta y + d_s = 0,\qquad d_x + d_s = -\nabla\Psi(v).$

3. A generic primal-dual interior-point algorithm

The generic form of the algorithm is shown in Fig. 1.

Algorithm 1: Generic Primal-Dual Algorithm for LO
Input: a barrier function $\Psi(v)$; a threshold parameter $\tau > 0$; a barrier update parameter $\theta$, $0 < \theta < 1$; an accuracy parameter $\varepsilon > 0$.
begin
  $x := e$; $s := e$; $\mu := 1$;
  while $n\mu > \varepsilon$ do
  begin
    $\mu := (1 - \theta)\mu$;
    while $\Psi(v) > \tau$ do
    begin
      $x := x + \alpha\Delta x$; $y := y + \alpha\Delta y$; $s := s + \alpha\Delta s$;
      $v := \sqrt{xs/\mu}$;
    end
  end
end
Fig. 1.

Remark 1. The choice of the barrier update parameter $\theta$ plays an important role in both the theory and practice of IPMs. Usually, if $\theta$ is a constant independent of the dimension $n$ of the problem, for instance $\theta = 1/2$, then we call the algorithm a large-update (or long-step) method.
If $\theta$ depends on the dimension of the problem, such as $\theta = 1/\sqrt{n}$, then the algorithm is called a small-update (or short-step) method.

Remark 2. The choice of the step size $\alpha$ ($\alpha > 0$) is another crucial issue in the analysis of the algorithm. In the theoretical analysis the step size $\alpha$ is usually given a value that depends on the closeness of the current iterates to the $\mu$-center. Hence it has to be made sure that the closeness of the iterates to the current $\mu$-center improves by a sufficient amount in each step.

4. The new kernel function and its properties

This section is devoted to introducing the new kernel function and studying its properties, which are used in the complexity analysis of Algorithm 1. In this paper, we consider the new univariate function

$\psi(t) = \frac{t^2 - 1}{2} + \frac{1}{t} - 1 + \frac{\tan^2(h(t))}{8},\qquad h(t) = \frac{\pi(1-t)}{4t+2},\qquad t > 0.$

This kernel function has a trigonometric term which differs from the one proposed in [1] and from the trigonometric kernel in [2]. The first three derivatives of $\psi$ are

$\psi'(t) = t - \frac{1}{t^2} + \frac{h'(t)\tan(h(t))\left(1+\tan^2(h(t))\right)}{4},$

$\psi''(t) = 1 + \frac{2}{t^3} + \frac{1+\tan^2(h(t))}{4}\left[h''(t)\tan(h(t)) + (h'(t))^2\left(1+3\tan^2(h(t))\right)\right],$

$\psi'''(t) = -\frac{6}{t^4} + \frac{1+\tan^2(h(t))}{4}\,k(t),$

where

$h'(t) = -\frac{3\pi}{2(2t+1)^2} < 0,\qquad h''(t) = \frac{6\pi}{(2t+1)^3} > 0,\qquad h'''(t) = -\frac{36\pi}{(2t+1)^4} < 0,$

$k(t) := h'''(t)\tan(h(t)) + 3h'(t)h''(t)\left(1+3\tan^2(h(t))\right) + (h'(t))^3\tan(h(t))\left(8+12\tan^2(h(t))\right).$

In order to study the properties of our kernel function, we need the following technical lemmas.

Lemma 4.1 (cf. [1]). For the function $h(t)$ defined above, one has $\tan(h(t)) \ge \frac{1}{2\pi t} > 0$ for $0 < t \le \frac{1}{2}$.

Lemma 4.2. Let $\psi(t)$ be as defined above. Then
(i) $\psi''(t) > 1$ for all $t > 0$;
(ii) $t\psi''(t) + \psi'(t) > 0$ for all $t > 0$;
(iii) $t\psi''(t) - \psi'(t) > 0$ for all $t > 0$;
(iv) $\psi'''(t) < 0$ for all $t > 0$.

Proof. For a detailed proof see http://wenku.baidu.com/view/acdbe or see the Appendix.

Since $\psi(1) = \psi'(1) = 0$, the function $\psi$ is completely described by its second derivative:

$\psi(t) = \int_1^t\!\!\int_1^\xi \psi''(\zeta)\,d\zeta\,d\xi.$

The following lemma provides equivalent forms of the e-convexity property of a kernel function.

Lemma 4.3 (cf. [12]). Let $\psi(t)$ be a twice differentiable function for $t > 0$. Then the following three properties are equivalent:
(i) $\psi(\sqrt{t_1 t_2}) \le \frac{1}{2}\left(\psi(t_1) + \psi(t_2)\right)$ for all $t_1, t_2 > 0$;
(ii) $\psi'(t) + t\psi''(t) \ge 0$ for all $t > 0$;
(iii) $\psi(e^{\xi})$ is a convex function of $\xi$.

By Lemmas 4.2 and 4.3, our new kernel function has the e-convexity property. In the sequel, we provide some further results related to the new kernel function. We first define the norm-based proximity measure $\delta(v)$ by

$\delta(v) := \tfrac{1}{2}\,\|\nabla\Psi(v)\|,\qquad v \in \mathbb{R}^n_{++}.$

Next, we establish a lower bound on $\delta(v)$ in terms of $\Psi(v)$.

Lemma 4.4. Let $\psi(t)$ be as defined above. Then $\psi(t) < \tfrac{1}{2}\psi''(1)(t-1)^2$ if $t > 1$.

Proof. By Taylor's theorem and $\psi(1) = \psi'(1) = 0$, we obtain

$\psi(t) = \tfrac{1}{2}\psi''(1)(t-1)^2 + \tfrac{1}{6}\psi'''(\xi)(t-1)^3,$

where $1 < \xi < t$ if $t > 1$. Since $\psi'''(\xi) < 0$, the lemma follows.

Lemma 4.5. Let $\psi(t)$ be as defined above. Then $t\psi'(t) \ge \psi(t)$ if $t \ge 1$.

Proof. Define $f(t) := t\psi'(t) - \psi(t)$ for $t \ge 1$. Then $f(1) = 0$ and $f'(t) = t\psi''(t) \ge 0$. Hence $f(t) \ge 0$ and the lemma follows.

Theorem 4.6 (cf. [9]). Let $\varrho : [0,\infty) \to [1,\infty)$ be the inverse function of $\psi(t)$ on $[1,\infty)$. One has $\delta(v) \ge \tfrac{1}{2}\psi'\!\left(\varrho(\Psi(v))\right)$.

Corollary 4.7. Let $\varrho$ be as defined in Theorem 4.6. Then $\delta(v) \ge \frac{\Psi(v)}{2\varrho(\Psi(v))}$.

Proof. Using Theorem 4.6, i.e., $\delta(v) \ge \tfrac{1}{2}\psi'(\varrho(\Psi(v)))$, and Lemma 4.5 we obtain

$\delta(v) \ge \frac{\psi(\varrho(\Psi(v)))}{2\varrho(\Psi(v))} = \frac{\Psi(v)}{2\varrho(\Psi(v))}.$

This proves the corollary.

Theorem 4.8. If $\Psi(v) \ge 1$, then $\delta(v) \ge \tfrac{1}{6}\Psi(v)^{1/2}$.

Proof. The inverse function of $\psi(t)$ for $t \in [1,\infty)$ is obtained by solving $t$ from $\psi(t) = s$, $t \ge 1$. We derive an upper bound for $t$, as this suffices for our goal. By Lemma 4.2(i), one has

$s = \psi(t) = \int_1^t\!\!\int_1^\xi \psi''(\zeta)\,d\zeta\,d\xi \ge \int_1^t\!\!\int_1^\xi d\zeta\,d\xi = \tfrac{1}{2}(t-1)^2,$

which implies $t = \varrho(s) \le 1 + \sqrt{2s}$. Assuming $s \ge 1$, we get $t = \varrho(s) \le \sqrt{s} + \sqrt{2s} \le 3\sqrt{s}$. Omitting the argument $v$ and assuming $\Psi(v) \ge 1$, we have $\varrho(\Psi(v)) \le 3\Psi(v)^{1/2}$. Now, using Corollary 4.7, we have

$\delta(v) \ge \frac{\Psi(v)}{2\varrho(\Psi(v))} \ge \frac{\Psi(v)^{1/2}}{6}.$

This proves the theorem. Note that if $\Psi(v) \ge 1$, substitution gives $\delta(v) \ge \tfrac{1}{6}$.

5. Analysis of the algorithm

5.1. Growth behavior of the barrier function

By Lemmas 4.2 and 4.3, our kernel function is an eligible kernel function. Analogously to [9], we therefore have the following results.

Theorem 5.1 (cf. [9]). Let $\varrho : [0,\infty) \to [1,\infty)$ be the inverse function of $\psi(t)$ on $[1,\infty)$. Then for any positive vector $v$ and any $\beta \ge 1$ we have

$\Psi(\beta v) \le n\,\psi\!\left(\beta\varrho\!\left(\frac{\Psi(v)}{n}\right)\right).$

Corollary 5.2. Let $0 < \theta < 1$ and $v_+ = \frac{v}{\sqrt{1-\theta}}$. Then

$\Psi(v_+) \le n\,\psi\!\left(\frac{\varrho(\Psi(v)/n)}{\sqrt{1-\theta}}\right).$

Proof. Substitute $\beta := \frac{1}{\sqrt{1-\theta}}$ into Theorem 5.1.

In the sequel, we define

$L := L(n,\theta,\tau) := n\,\psi\!\left(\frac{\varrho(\tau/n)}{\sqrt{1-\theta}}\right).$

Obviously, $L$ is an upper bound for $\Psi(v_+)$, the value of $\Psi(v)$ just after the $\mu$-update.

5.2. Decrease of the proximity during a damped Newton step

After a damped step with step size $\alpha$ we have

$x_+ := x + \alpha\Delta x = \frac{x}{v}(v + \alpha d_x),\qquad y_+ := y + \alpha\Delta y,\qquad s_+ := s + \alpha\Delta s = \frac{s}{v}(v + \alpha d_s),$

where $\alpha$ is obtained by using a line-search strategy. Thus we obtain

$v_+ := \sqrt{\frac{x_+ s_+}{\mu}} = \sqrt{(v + \alpha d_x)(v + \alpha d_s)}.$

Let

$f(\alpha) := \Psi(v_+) - \Psi(v).$

Our aim is to find an upper bound for $f(\alpha)$. Using the e-convexity of $\psi$ (Lemma 4.3), we get

$\Psi(v_+) = \Psi\!\left(\sqrt{(v+\alpha d_x)(v+\alpha d_s)}\right) \le \tfrac{1}{2}\left[\Psi(v+\alpha d_x) + \Psi(v+\alpha d_s)\right].$

Thus we have $f(\alpha) \le f_1(\alpha)$, where

$f_1(\alpha) := \tfrac{1}{2}\left[\Psi(v+\alpha d_x) + \Psi(v+\alpha d_s)\right] - \Psi(v)$

is a convex function of $\alpha$, since $\Psi(v)$ is convex. Obviously, $f(0) = f_1(0) = 0$. Taking the derivative with respect to $\alpha$, we get

$f_1'(\alpha) = \tfrac{1}{2}\sum_{i=1}^n\left[\psi'(v_i + \alpha d_{xi})\,d_{xi} + \psi'(v_i + \alpha d_{si})\,d_{si}\right],$

and hence

$f_1'(0) = \tfrac{1}{2}\nabla\Psi(v)^T(d_x + d_s) = -\tfrac{1}{2}\nabla\Psi(v)^T\nabla\Psi(v) = -2\delta(v)^2.$

Differentiating once more, we obtain

$f_1''(\alpha) = \tfrac{1}{2}\sum_{i=1}^n\left[\psi''(v_i + \alpha d_{xi})\,d_{xi}^2 + \psi''(v_i + \alpha d_{si})\,d_{si}^2\right].$

For simplicity, in the sequel we use the notation $v_{\min} := \min_i v_i$ and $\delta := \delta(v)$.

Lemma 5.3 (cf. [9]). Let $f_1(\alpha)$ be as defined above. Then $f_1''(\alpha) \le 2\delta^2\,\psi''(v_{\min} - 2\alpha\delta)$.

Lemma 5.4 (cf. [9]). One has $f_1'(\alpha) \le 0$ if $\alpha$ satisfies

$-\psi'(v_{\min} - 2\alpha\delta) + \psi'(v_{\min}) \le 2\delta.$

Lemma 5.5 (cf. [9]). Let $\rho : [0,\infty) \to (0,1]$ denote the inverse function of $-\tfrac{1}{2}\psi'(t)$ restricted to the interval $(0,1]$. Then the step size

$\bar{\alpha} := \frac{\rho(\delta) - \rho(2\delta)}{2\delta}$

is the largest possible solution of the above inequality, and moreover $\bar{\alpha} \ge \frac{1}{\psi''(\rho(2\delta))}$.

In what follows we use the notation

$\tilde{\alpha} := \frac{1}{\psi''(\rho(2\delta))}$

as a default value for the step size during an inner iteration.

Lemma 5.6 (cf. [12]). Let $h$ be a twice differentiable convex function with $h(0) = 0$, $h'(0) < 0$, which attains its minimum at $t^* > 0$. If $h''$ is increasing for $t \in [0, t^*]$, then $h(t) \le \tfrac{1}{2}\,t\,h'(0)$ for $0 \le t \le t^*$.

Lemma 5.7 (cf. [9]). If the step size $\alpha$ satisfies $\alpha \le \bar{\alpha}$, then $f(\alpha) \le -\alpha\delta^2$.

Theorem 5.8. Let $\rho$ be as defined in Lemma 5.5, let $\tilde{\alpha}$ be as above, and let $\Psi(v) \ge 1$. Then

$f(\tilde{\alpha}) \le -\frac{\delta^2}{\psi''(\rho(2\delta))} \le -\kappa\,\delta^{2/3} = -\Theta\!\left(\Psi(v)^{1/3}\right)$

for some absolute constant $\kappa > 0$.

Proof. Since $\tilde{\alpha} \le \bar{\alpha}$, Lemma 5.7 gives $f(\tilde{\alpha}) \le -\tilde{\alpha}\delta^2$, and the first inequality follows. Next, we prove that the second inequality holds. Put $t = \rho(2\delta) \in (0,1]$, i.e., $-\tfrac{1}{2}\psi'(t) = 2\delta$. Since $\tfrac{1}{t^2} - t \ge 0$ and $-h'(t) \ge \tfrac{\pi}{6}$ on $(0,1]$, we obtain

$4\delta = -\psi'(t) \ge \frac{-h'(t)\tan(h(t))\left(1+\tan^2(h(t))\right)}{4} \ge \frac{\pi}{24}\tan^3(h(t)),$

hence $\tan(h(t)) \le (96\delta/\pi)^{1/3}$. By Lemma 4.1 we have $\tan(h(t)) \ge \frac{1}{2\pi t}$ for $t \in (0, \tfrac12]$, so $\frac{1}{t} \le 2\pi\tan(h(t)) \le 2\pi(96\delta/\pi)^{1/3}$ (the same bound holds trivially for $t \in (\tfrac12, 1]$, where $\tfrac1t < 2$). Using $h''(t) \le 6\pi$, $(h'(t))^2 \le \tfrac{9\pi^2}{4}$, $1+\tan^2(h(t)) \le 1 + (96\delta/\pi)^{2/3}$, and $\delta \ge \tfrac16$ (Theorem 4.8), a direct computation yields

$\psi''(\rho(2\delta)) = \psi''(t) \le c_1\,\delta^{4/3}$

for some absolute constant $c_1 > 0$.
Therefore, $\tilde{\alpha} = \frac{1}{\psi''(\rho(2\delta))} \ge \frac{1}{c_1\delta^{4/3}}$, which implies that

$f(\tilde{\alpha}) \le -\tilde{\alpha}\delta^2 \le -\frac{\delta^{2/3}}{c_1} = -\Theta\!\left(\Psi(v)^{1/3}\right),$

where the last step uses $\delta \ge \tfrac16\Psi(v)^{1/2}$ (Theorem 4.8). This completes the proof.

5.3. Iteration complexity

In the present section, we derive the iteration complexity bound for large-update IPMs.

Lemma 5.9 (cf. [12]). Let $t_0, t_1, \ldots, t_K$ be a sequence of positive numbers such that

$t_{k+1} \le t_k - \beta t_k^{1-\gamma},\qquad k = 0, 1, \ldots, K-1,$

where $\beta > 0$ and $0 < \gamma \le 1$. Then $K \le \dfrac{t_0^{\gamma}}{\beta\gamma}$.

Lemma 5.10. If $K$ denotes the number of inner iterations between two successive barrier parameter updates, then, applying Lemma 5.9 with $\gamma = \tfrac23$ and the decrease of Theorem 5.8,

$K \le \frac{3\,\Psi_0^{2/3}}{2\kappa} = O\!\left(n^{2/3}\right),$

where $\Psi_0 \le L = O(n)$ denotes the value of $\Psi(v)$ just after the $\mu$-update (for the large-update choices $\tau = O(n)$ and $\theta = \Theta(1)$).

Theorem 5.11. The total number of iterations required by the algorithm is at most

$O\!\left(n^{2/3}\log\frac{n}{\varepsilon}\right)$

for the large-update primal-dual IPM, since the number of outer iterations is bounded by $\frac{1}{\theta}\log\frac{n}{\varepsilon}$.

The above iteration bound coincides with the bound of the large-update primal-dual interior-point method in [2], which is also based on a trigonometric kernel function.

References

[1] M. El Ghami, Z.A. Guennoun, S. Bouali, T. Steihaug, Interior-point methods for linear optimization based on a kernel function with a trigonometric barrier term, J. Comput. Appl. Math. 236 (2012) 3613-3623.
[2] M. Reza Peyghami, S. Fathi Hafshejani, L. Shirvani, Complexity of interior-point methods for linear optimization based on a new trigonometric kernel function, J. Comput. Appl. Math. 255 (2014) 74-85.
[3] N.K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4 (1984) 373-395.
[4] C. Roos, T. Terlaky, J.-Ph. Vial, Interior Point Methods for Linear Optimization, Springer, New York, 2005.
[5] G. Sonnevend, An "analytic centre" for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming, in: A. Prékopa, J. Szelezsán, B. Strazicky (Eds.), Lecture Notes in Control and Information Sciences, vol. 84, Springer-Verlag, Berlin, 1986, pp. 866-875.
[6] N. Megiddo, Pathways to the optimal set in linear programming, in: N. Megiddo (Ed.), Progress in Mathematical Programming: Interior Point and Related Methods, Springer-Verlag, New York, 1989, pp. 131-158.
[7] S. Mehrotra, On the implementation of a primal-dual interior point method, SIAM J. Optim. 2 (1992) 575-601.
[8] Y.Q. Bai, C. Roos, A primal-dual interior-point method based on a new kernel function with linear growth rate, in: Proceedings of the Industrial Optimization Symposium and Optimization Day.
[9] Y.Q. Bai, M. El Ghami, C. Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM J. Optim. 15 (2004) 101-128.
[10] J. Peng, C. Roos, T. Terlaky, Self-regular functions and new search directions for linear and semidefinite optimization, Math. Program. 93 (2002) 129-171.
[11] Z.G. Qian, Y.Q. Bai, G.Q. Wang, Complexity analysis of an interior-point algorithm based on a new kernel function for semi-definite optimization, J. Shanghai Univ. (Engl. Ed.).
[12] J. Peng, C. Roos, T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press, Princeton, NJ, 2002.
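The sequence lemma used in the iteration-complexity analysis (if $t_{k+1} \le t_k - \beta t_k^{1-\gamma}$ with $\beta > 0$, $0 < \gamma \le 1$, then the sequence can contain at most $t_0^{\gamma}/(\beta\gamma)$ steps) can be sanity-checked numerically. The sketch below is illustrative only; it simulates the worst case, where the inequality holds with equality, and compares the step count against the bound:

```python
# Numerical sanity check of the sequence lemma: simulate t_{k+1} = t_k - beta*t_k^(1-gamma)
# and count steps until the next value would be nonpositive; the count must not
# exceed t_0^gamma / (beta * gamma).
def steps(t0, beta, gamma):
    t, k = t0, 0
    while t - beta * t ** (1 - gamma) > 0:
        t -= beta * t ** (1 - gamma)
        k += 1
    return k

for t0, beta, gamma in [(100.0, 0.5, 0.5), (1000.0, 0.1, 1.0), (50.0, 0.25, 2 / 3)]:
    bound = t0 ** gamma / (beta * gamma)
    assert steps(t0, beta, gamma) <= bound
```

With $\gamma = 1$ the recursion is a plain arithmetic decrease and the bound $t_0/\beta$ is essentially tight; for $\gamma < 1$ the decrease accelerates in relative terms, matching the continuous estimate $K \approx t_0^{\gamma}/(\beta\gamma)$.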

Appendix

Proof of Lemma 4.2. We first prove (i). Recall that

$\psi''(t) = 1 + \frac{2}{t^3} + \frac{1+\tan^2(h(t))}{4}\left[h''(t)\tan(h(t)) + (h'(t))^2\left(1+3\tan^2(h(t))\right)\right].$

We consider two cases.

Case 1: If $0 < t < 1$, then $\tan(h(t)) > 0$, $h''(t) = \frac{6\pi}{(2t+1)^3} > 0$ and $(h'(t))^2 > 0$, so every term above is positive and (i) holds.

Case 2: If $t \ge 1$, in order to prove inequality (i) it is sufficient to prove that

$\eta(t) := \frac{2}{t^3} + \frac{1+\tan^2(h(t))}{4}\left[h''(t)\tan(h(t)) + (h'(t))^2\left(1+3\tan^2(h(t))\right)\right] > 0.$

Using the fact that $h(t) \in (-\tfrac{\pi}{4}, 0]$ and hence $\tan(h(t)) \in (-1, 0]$ for $t \ge 1$, we have $1+\tan^2(h(t)) \le 2$, so

$\eta(t) \ge \frac{2}{t^3} - \frac{1}{2}\cdot\frac{6\pi}{(2t+1)^3} = \frac{2}{t^3} - \frac{3\pi}{(2t+1)^3} > 0,$

where the last inequality holds since $2(2t+1)^3 \ge 16t^3 > 3\pi t^3$. So the two cases together prove (i).

Next, we prove that (ii) holds. One has

$t\psi''(t) + \psi'(t) = 2t + \frac{1}{t^2} + \frac{1+\tan^2(h(t))}{4}\left[\left(th''(t)+h'(t)\right)\tan(h(t)) + t(h'(t))^2\left(1+3\tan^2(h(t))\right)\right],$

with $th''(t)+h'(t) = \frac{3\pi(2t-1)}{2(2t+1)^3}$. Since $2t + \frac{1}{t^2} > 0$ and $t(h'(t))^2(1+3\tan^2(h(t))) > 0$ for any $t > 0$, we consider the following cases only.

Case 1: If $t \ge 1$, then $\tan(h(t)) \in (-1, 0]$ and $th''(t)+h'(t) > 0$, so the first bracketed term is bounded in absolute value by $\frac{1}{2}\cdot\frac{3\pi(2t-1)}{2(2t+1)^3} \le \frac{3\pi}{16t^2} < 2t$, and (ii) holds in this case.

Case 2: If $\tfrac12 \le t < 1$, then $\tan(h(t)) \ge 0$ and $th''(t)+h'(t) \ge 0$, so all terms are nonnegative and $t\psi''(t) + \psi'(t) > 0$.

Case 3: If $0 < t < \tfrac12$, then $th''(t)+h'(t) < 0$ and $\tan(h(t)) > 0$. By Lemma 4.1, $\tan(h(t)) \ge \frac{1}{2\pi t}$, hence

$3t(h'(t))^2\tan^2(h(t)) + \left(th''(t)+h'(t)\right)\tan(h(t)) \ge \frac{3\pi\tan(h(t))}{2(2t+1)^3}\left[\frac{9}{4(2t+1)} - (1-2t)\right] > 0,$

since $4(1-2t)(2t+1) = 4(1-4t^2) \le 4 < 9$. So the three cases together prove (ii).

In order to prove (iii), we consider two cases.

Case 1: If $t \in (0,1)$, we have $\psi''(t) > 0$ and $\psi'(t) < 0$ (indeed $t < \frac{1}{t^2}$, $h'(t) < 0$ and $\tan(h(t)) > 0$), therefore $t\psi''(t) - \psi'(t) > 0$ holds.

Case 2: If $t \ge 1$, using the fact that $th''(t) - h'(t) = \frac{3\pi(6t+1)}{2(2t+1)^3} > 0$ and $\tan(h(t)) \in (-1, 0]$, we have

$t\psi''(t) - \psi'(t) = \frac{3}{t^2} + \frac{1+\tan^2(h(t))}{4}\left[\left(th''(t)-h'(t)\right)\tan(h(t)) + t(h'(t))^2\left(1+3\tan^2(h(t))\right)\right] \ge \frac{3}{t^2} - \frac{3\pi(6t+1)}{4(2t+1)^3} > 0,$

where the last inequality holds by simple calculus (one checks $\pi(6t+1)t^2 < 4(2t+1)^3$ for $t \ge 1$).

To complete the proof of the lemma, we need to show that (iv) holds for all $t > 0$. Recall that

$\psi'''(t) = -\frac{6}{t^4} + \frac{1+\tan^2(h(t))}{4}\,k(t),$

with $k(t)$ as in Section 4.

Case 1: If $0 < t \le 1$, then $\tan(h(t)) \ge 0$, while $h'(t) < 0$, $h'(t)h''(t) < 0$ and $h'''(t) < 0$. Hence every group of terms in $k(t)$ is nonpositive, which implies that $\psi'''(t) < 0$ for any $t \in (0, 1]$.

Case 2: If $t > 1$, we have $\tan(h(t)) \in (-1, 0)$, so $3h'(t)h''(t)\left(1+3\tan^2(h(t))\right) < 0$, while

$h'''(t)\tan(h(t)) \le |h'''(t)| = \frac{36\pi}{(2t+1)^4},\qquad (h'(t))^3\tan(h(t))\left(8+12\tan^2(h(t))\right) \le 20\,|h'(t)|^3 = \frac{67.5\,\pi^3}{(2t+1)^6}.$

Therefore, using $1+\tan^2(h(t)) \le 2$, we obtain

$\psi'''(t) \le -\frac{6}{t^4} + \frac{18\pi}{(2t+1)^4} + \frac{33.75\,\pi^3}{(2t+1)^6}.$

Multiplying by $t^4$ and using $(2t+1)^4 \ge 16t^4$ and $(2t+1)^6 \ge 729\,t^4$ (the latter since $(2t+1)^3 \ge 27t^2$ for $t \ge 1$), we get

$t^4\psi'''(t) \le -6 + \frac{18\pi}{16} + \frac{33.75\,\pi^3}{729} < 0.$

Thus $\psi'''(t) < 0$ for all $t > 1$. This completes the proof of the lemma.
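The four conditions of Lemma 4.2 can also be spot-checked numerically. The sketch below is illustrative only: since the printed coefficients of $\psi$ and $h$ are partly reconstructed in this copy, treat the exact constants as assumptions. Derivatives are approximated by central finite differences, so no hand-derived formulas are needed:

```python
import math

# Kernel and inner function as stated in Section 4 (coefficients are reconstructions,
# hence assumptions for this check).
def h(t):
    return math.pi * (1 - t) / (4 * t + 2)

def psi(t):
    return (t * t - 1) / 2 + 1 / t - 1 + math.tan(h(t)) ** 2 / 8

# Central finite differences for the first three derivatives.
def d1(f, t, e=1e-5):
    return (f(t + e) - f(t - e)) / (2 * e)

def d2(f, t, e=1e-4):
    return (f(t + e) - 2 * f(t) + f(t - e)) / (e * e)

def d3(f, t, e=1e-3):
    return (f(t + 2 * e) - 2 * f(t + e) + 2 * f(t - e) - f(t - 2 * e)) / (2 * e ** 3)

# Kernel normalization: psi(1) = psi'(1) = 0.
assert abs(psi(1.0)) < 1e-12 and abs(d1(psi, 1.0)) < 1e-8

for t in [0.05, 0.1, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 5.0, 10.0]:
    assert d2(psi, t) > 1                    # (i)   psi'' > 1
    assert t * d2(psi, t) + d1(psi, t) > 0   # (ii)  e-convexity condition
    assert t * d2(psi, t) - d1(psi, t) > 0   # (iii)
    assert d3(psi, t) < 0                    # (iv)  psi''' < 0
```

A check of this kind is a useful companion to the case analysis in the Appendix: it does not prove the inequalities, but it catches sign or coefficient errors on a grid of sample points.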