A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS


J. Nonlinear Funct. Anal. 2018 (2018), Article ID 3, https://doi.org/0395/jnfa083

A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS

BEIBEI YUAN, MINGWANG ZHANG, ZHENGWEI HUANG
Three Gorges Mathematical Research Center, China Three Gorges University, Yichang 443002, China
College of Economics and Management, China Three Gorges University, Yichang 443002, China

Abstract. In this paper, ellipsoidal estimations are used to track the central path of the linear complementarity problem (LCP). A wide neighborhood primal-dual interior-point algorithm is devised to search for an ε-approximate solution of the LCP along the ellipse. The algorithm is proved to be polynomial, with the complexity bound O(n log((x^0)^T s^0/ε)), which is as good as its linear programming analogue. The numerical results show that the proposed algorithm is efficient and reliable.

Keywords. Interior-point algorithm; Ellipsoidal approximation; Polynomial complexity; Linear complementarity problem.

2010 Mathematics Subject Classification. 90C33, 90C51.

1. INTRODUCTION

After Karmarkar's landmark paper [1], primal-dual interior-point methods (IPMs) have shown their power in solving linear programming (LP), quadratic programming (QP), linear complementarity problems (LCPs) and other optimization problems [2, 3, 4, 5]. LCPs [6, 7] are a general class of mathematical problems with important applications in mathematical programming and equilibrium problems; IPMs for LCPs therefore deserve close attention. Existing primal-dual IPMs usually search for the optimizers along a straight line determined by the first- and higher-order derivatives associated with the central path [8, 9]. Although the central path plays an important theoretical role in primal-dual IPMs, following it exactly is difficult in practice. As a remedy, Yang [10, 11, 12, 13] developed the idea of arc-search in LP, which searches for the optimizers along an ellipse that is an approximation
of the central path. Yang [11] showed that arc-search along an ellipse may be better than one-dimensional search methods, because the resulting algorithm is polynomial with a bound better than those of all existing higher-order algorithms. The arc-search technique was then applied to a primal-dual path-following IPM for convex quadratic programming (CQP) by Yang [14]; that algorithm enjoys the currently best known polynomial complexity bound O(√n log(1/ε)). Later, Yang et al. [15, 16] introduced two wide neighborhood infeasible

Corresponding author. E-mail addresses: 55770065@163.com (B. Yuan), zmwang@ctgu.edu.cn (M. Zhang), zhengweihuang@ctgu.edu.cn (Z. Huang). Received January 6, 2018; Accepted August, 2018.

IPMs with arc-search for LP and symmetric optimization (SO), with the complexity bounds O(n^{5/4} log(1/ε)) and O(r^{5/4} log(1/ε)), respectively. Pirhaji et al. [17] explored an ℓ2-neighborhood infeasible IPM for the LCP with the iteration bound O(n log(1/ε)). Here, our main purpose is to generalize the arc-search feasible IPM for LP [11, 18] to the LCP. Compared with [16, 17], the new algorithm has a weaker restriction on the step size and works in the negative infinity neighborhood of the central path, which is the popular wide neighborhood in which software packages run. Moreover, the algorithm enjoys the iteration complexity O(n log((x^0)^T s^0/ε)), which is a bound at least as good as the best bound of existing wide neighborhood IPMs. The numerical tests show that the proposed algorithm tends to solve problems faster than some other IPMs.

The outline of this paper is as follows. In Section 2, we recall some fundamental concepts of the LCP, such as the central path and its ellipsoidal estimation. In Section 3, we introduce a wide neighborhood primal-dual IPM with arc-search for the LCP. In Section 4, we give the technical results required in the convergence analysis and obtain the complexity bound of the algorithm. In Section 5, we provide some preliminary numerical results. Finally, we close the paper with some conclusions in Section 6.

We use the following notation throughout the paper. All vectors are column vectors. Denote by e the vector of all ones with appropriate dimension, by R^{n×n} the space of n×n matrices, by R^n_+ the non-negative orthant of R^n, by xs the Hadamard product of two vectors x ∈ R^n and s ∈ R^n, by x_i the ith component of x, by ‖x‖ and ‖x‖_1 the Euclidean norm and the 1-norm of x, and by A^T the transpose of a matrix A. If x ∈ R^n, then X := diag(x) represents the diagonal matrix having the elements of x as diagonal entries. Finally, we denote the initial point of any algorithm by (x^0, s^0) and the point after the kth iteration by (x^k, s^k).

2. THE CENTRAL PATH
AND ITS ELLIPSOIDAL APPROXIMATION

The LCP is to find a pair of vectors (x, s) ∈ R^{2n} such that

s = Mx + q, xs = 0, x, s ≥ 0, (2.1)

where q ∈ R^n and M is an n×n positive semi-definite matrix, that is, x^T Mx ≥ 0 for all x ∈ R^n. Denote the feasible set of problem (2.1) by

F = {(x, s) ∈ R^n × R^n : s = Mx + q, (x, s) ≥ 0}, (2.2)

and the interior feasible set by

F^0 = {(x, s) ∈ R^n × R^n : s = Mx + q, (x, s) > 0}. (2.3)

Throughout the paper, we assume that the interior-point condition (IPC) holds [19], i.e., F^0 is not empty. The basic idea of feasible IPMs is to replace the second equation in (2.1) by the perturbed equation xs = µe, with µ > 0, to get the following parameterized system:

s = Mx + q, xs = µe, x, s ≥ 0. (2.4)
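As a concrete illustration of the perturbed system above, the following Python sketch (with made-up data M, q, x, not one of the paper's test problems) forms s = Mx + q, checks strict feasibility, and measures how far xs is from µe:

```python
import numpy as np

# Hypothetical toy monotone LCP data (for illustration only):
# M is symmetric positive semi-definite (eigenvalues 1 and 3).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([1.0, 1.0])

x = np.ones(2)          # a candidate interior point
s = M @ x + q           # first equation: s = Mx + q, here s = (4, 4)

mu = x @ s / x.size     # duality-gap measure mu = x^T s / n

# (x, s) lies in the interior feasible set F^0 iff s = Mx + q and (x, s) > 0
assert np.all(x > 0) and np.all(s > 0)

print(mu)               # 4.0
print(x * s - mu)       # [0. 0.]: this particular x happens to lie on the central path
```

On the central path the residual xs − µe vanishes identically; a path-following method only keeps it small relative to µ, which is exactly what the wide neighborhood introduced below controls.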

If the LCP satisfies the IPC, then system (2.4) has a unique solution for every µ > 0, denoted by (x(µ), s(µ)). The set of all such solutions forms a homotopy path, which is called the central path of the LCP and is used as a guideline to the solution of the LCP [20]. Thus, the central path is an arc in R^{2n} parameterized by µ:

C = {(x(µ), s(µ)) : µ > 0}. (2.5)

As µ → 0, the limit of the central path exists and is an optimal solution of the LCP. Recently, Yang [10, 11] defined an ellipse ξ(α) in the 2n-dimensional space to approximate the central path C, as follows:

ξ(α) = {(x(α), s(α)) : (x(α), s(α)) = a cos(α) + b sin(α) + c}, (2.6)

where a, b ∈ R^{2n} are the axes of the ellipse, orthogonal to each other, and c ∈ R^{2n} is the center of the ellipse. Let z = (x(µ), s(µ)) = (x(α_0), s(α_0)) ∈ ξ(α) be close to or on the central path. Using the approach of Yang [11], we define the first and second derivatives at (x(α_0), s(α_0)) to have the form they would have on the central path; they satisfy

[M  −I; S  X][ẋ; ṡ] = [0; xs − σµe], (2.7)

[M  −I; S  X][ẍ; s̈] = [0; −2ẋṡ], (2.8)

where µ = x^T s/n and σ ∈ (0, 1/4) is the centering parameter. Let α ∈ [0, π/2] and let (x(α), s(α)) be the update of (x, s) after the search along the ellipse. Similar to the proof of Theorem 3.1 in [11], we have the following theorem.

Theorem 2.1. Let (x(α), s(α)) be an arc defined by (2.6) passing through a point (x, s) ∈ ξ(α), and let its first and second derivatives at (x, s) be (ẋ, ṡ) and (ẍ, s̈), which are defined by (2.7) and (2.8). Then an ellipse approximation of the central path is given by

x(α) = x − sin(α)ẋ + (1 − cos(α))ẍ, (2.9)

s(α) = s − sin(α)ṡ + (1 − cos(α))s̈. (2.10)

For convenience of reference, let g(α) := 1 − cos(α). Using Theorem 2.1, (2.7) and (2.8), we get

x(α)s(α) = xs − sin(α)(xs − σµe) + χ(α), (2.11)

where

χ(α) := −g²(α)ẋṡ − sin(α)g(α)(ẋs̈ + ṡẍ) + g²(α)ẍs̈. (2.12)

We will use the negative infinity neighborhood of the central path from the literature, defined as

N_∞^−(γ) = {(x, s) ∈ F^0 : min(xs) ≥ γµ}, (2.13)

where γ ∈ (0, 1) is a
constant independent of n. This neighborhood is widely used in implementations as well. In order to facilitate the analysis of the algorithm, we define

sin(α̂) := max{sin(α) : (x(α), s(α)) ∈ N_∞^−(γ), sin(α) ∈ [0, 1], α ∈ [0, π/2]}. (2.14)
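The derivative pairs (ẋ, ṡ) and (ẍ, s̈) and the arc point are cheap to compute once the block coefficient matrix is factored. The sketch below (NumPy; the helper names and toy data are ours, not the paper's) solves the two linearized systems and evaluates a point on the ellipse; by construction every such point stays on the affine set s = Mx + q:

```python
import numpy as np

def arc_directions(M, x, s, sigma):
    """Solve the two systems defining (x_dot, s_dot) and (x_ddot, s_ddot)."""
    n = x.size
    mu = x @ s / n
    K = np.block([[M, -np.eye(n)],
                  [np.diag(s), np.diag(x)]])          # coefficient matrix [[M, -I], [S, X]]
    rhs1 = np.concatenate([np.zeros(n), x * s - sigma * mu * np.ones(n)])
    xd, sd = np.split(np.linalg.solve(K, rhs1), 2)    # first derivatives
    rhs2 = np.concatenate([np.zeros(n), -2.0 * xd * sd])
    xdd, sdd = np.split(np.linalg.solve(K, rhs2), 2)  # second derivatives
    return xd, sd, xdd, sdd

def arc_point(x, s, dirs, alpha):
    """Point on the approximating ellipse at angle alpha."""
    xd, sd, xdd, sdd = dirs
    g = 1.0 - np.cos(alpha)
    return x - np.sin(alpha) * xd + g * xdd, s - np.sin(alpha) * sd + g * sdd

# hypothetical toy data, for illustration only
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
x = np.ones(2)
s = M @ x + q

dirs = arc_directions(M, x, s, sigma=0.2)
xa, sa = arc_point(x, s, dirs, alpha=0.3)
print(np.max(np.abs(M @ xa + q - sa)))   # ~0: the arc stays feasible for s = Mx + q
```

At alpha = 0 the arc point reduces to (x, s) itself, and increasing alpha moves the iterate along the ellipse toward a smaller duality gap.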

3. THE ARC-SEARCH INTERIOR-POINT ALGORITHM

The proposed algorithm starts from an arbitrary initial point (x^0, s^0) ∈ N_∞^−(γ), which is close to or on the central path C. Using an ellipse that passes through the point (x^0, s^0), the algorithm approximates the central path and obtains a new iterate (x(α), s(α)), which achieves a certain decrease of the duality gap. The procedure is repeated until an ε-approximate solution of the LCP is found. A more formal description of the algorithm is presented next.

Algorithm 3.1 (Arc-search IPM for LCP).
Input: an accuracy parameter ε > 0, a neighborhood parameter γ ∈ (0, 1), a centering parameter σ ∈ (0, 1/4), an initial point (x^0, s^0) ∈ N_∞^−(γ) with µ_0 = (x^0)^T s^0/n, and k := 0.
for iteration k = 0, 1, 2, . . .
  If (x^k)^T s^k ≤ ε, then stop.
  Step 1. Solve the systems (2.7) and (2.8) to get (ẋ, ṡ) and (ẍ, s̈);
  Step 2. Compute sin(α̂_k) by (2.14), and let (x^{k+1}, s^{k+1}) = (x(α̂_k), s(α̂_k));
  Step 3. Calculate µ_{k+1} = (x^{k+1})^T s^{k+1}/n, set k := k + 1, and go to Step 1.
end (for)

Based on the previous discussion, we state several lemmas which are necessary for the convergence analysis of the algorithm.

Lemma 3.1. Let (x, s) be a strictly feasible point of the LCP, let (ẋ, ṡ) and (ẍ, s̈) satisfy (2.7) and (2.8), respectively, and let (x(α), s(α)) be calculated using (2.9) and (2.10). Then Mx(α) − s(α) = −q.

Proof. Since (x, s) is a strictly feasible point, we find from Theorem 2.1, (2.7) and (2.8) that

Mx(α) − s(α) = M[x − sin(α)ẋ + (1 − cos(α))ẍ] − [s − sin(α)ṡ + (1 − cos(α))s̈]
= Mx − s − sin(α)(Mẋ − ṡ) + (1 − cos(α))(Mẍ − s̈)
= −q.

This completes the proof.

Lemma 3.2. Let ẋ, ṡ, ẍ and s̈ be defined by (2.7) and (2.8), and let M be a positive semi-definite matrix. Then the following relations hold:

ẋ^T ṡ = ẋ^T Mẋ ≥ 0, ẍ^T s̈ = ẍ^T Mẍ ≥ 0, ẍ^T ṡ = ẋ^T s̈ = ẋ^T Mẍ, (3.1)

−(ẋ^T ṡ)(1 − cos(α))² − (ẍ^T s̈) sin²(α) ≤ (ẍ^T ṡ + ẋ^T s̈) sin(α)(1 − cos(α)) ≤ (ẋ^T ṡ)(1 − cos(α))² + (ẍ^T s̈) sin²(α), (3.2)

−(ẋ^T ṡ) sin²(α) − (ẍ^T s̈)(1 − cos(α))² ≤ (ẍ^T ṡ + ẋ^T s̈) sin(α)(1 − cos(α)) ≤ (ẋ^T ṡ) sin²(α) + (ẍ^T s̈)(1 − cos(α))². (3.3)

Proof. Pre-multiplying the first rows of (2.7) and (2.8) by ẋ^T and ẍ^T, respectively, we have ẋ^T ṡ = ẋ^T Mẋ and ẍ^T s̈ = ẍ^T Mẍ. The first two inequalities of (3.1) follow from the fact that M is positive semi-definite. Similarly, we also have ẍ^T Mẋ = ẍ^T ṡ and ẋ^T Mẍ = ẋ^T s̈, which means ẍ^T ṡ = ẋ^T s̈ = ẋ^T Mẍ. Since M is a positive semi-definite matrix, we have

[(1 − cos(α))ẋ + sin(α)ẍ]^T M[(1 − cos(α))ẋ + sin(α)ẍ]
= (ẋ^T Mẋ)(1 − cos(α))² + 2(ẋ^T Mẍ) sin(α)(1 − cos(α)) + (ẍ^T Mẍ) sin²(α)
= (ẋ^T ṡ)(1 − cos(α))² + (ẍ^T s̈) sin²(α) + (ẍ^T ṡ + ẋ^T s̈) sin(α)(1 − cos(α)) ≥ 0.

From this result, we get the first inequality of (3.2). Similarly, we also have

[(1 − cos(α))ẋ − sin(α)ẍ]^T M[(1 − cos(α))ẋ − sin(α)ẍ]
= (ẋ^T Mẋ)(1 − cos(α))² − 2(ẋ^T Mẍ) sin(α)(1 − cos(α)) + (ẍ^T Mẍ) sin²(α)
= (ẋ^T ṡ)(1 − cos(α))² + (ẍ^T s̈) sin²(α) − (ẍ^T ṡ + ẋ^T s̈) sin(α)(1 − cos(α)) ≥ 0,

which implies the second inequality of (3.2). Replacing (1 − cos(α))ẋ and sin(α)ẍ by sin(α)ẋ and (1 − cos(α))ẍ, respectively, and proceeding in the same way, we obtain (3.3) immediately.

Remark 3.3. In the LCP case, (3.1) implies that the iterative directions are not orthogonal, while orthogonality holds in the LP case. This causes some difficulties in the analysis of the algorithm for the LCP. In what follows, we develop some new results on the iterative directions.

Lemma 3.4 ([11]). For α ∈ [0, π/2], sin(α) ≥ sin²(α) = 1 − cos²(α) ≥ 1 − cos(α).

To estimate the upper bound of µ(α), we give the following lemma.

Lemma 3.5. Let (x, s) ∈ N_∞^−(γ), let (ẋ, ṡ) and (ẍ, s̈) be calculated from (2.7) and (2.8), and let x(α) and s(α) be defined by (2.9) and (2.10), respectively. Then

µ(α) = µ[1 − (1 − σ) sin(α)] + (1/n)[(1 − cos(α))²(ẍ^T s̈ − ẋ^T ṡ) − sin(α)(1 − cos(α))(ẋ^T s̈ + ṡ^T ẍ)]
≤ µ[1 − (1 − σ) sin(α)] + (1/n)[ẍ^T s̈ (sin⁴(α) + sin²(α))].

Proof. Using (2.9), (2.10), (2.7) and (2.8), we get

nµ(α) = x(α)^T s(α)
= [x − ẋ sin(α) + ẍ(1 − cos(α))]^T [s − ṡ sin(α) + s̈(1 − cos(α))]
= x^T s − (x^T ṡ + s^T ẋ) sin(α) + (x^T s̈ + s^T ẍ)(1 − cos(α)) − (ẋ^T s̈ + ṡ^T ẍ) sin(α)(1 − cos(α)) + ẋ^T ṡ sin²(α) + ẍ^T s̈ (1 − cos(α))²
= nµ[1 − (1 − σ) sin(α)] + (1 − cos(α))²(ẍ^T s̈ − ẋ^T ṡ) − (ẋ^T s̈ + ṡ^T ẍ) sin(α)(1 − cos(α))
≤ nµ[1 − (1 − σ) sin(α)] + (1 − cos(α))²(ẍ^T s̈ − ẋ^T ṡ) + (1 − cos(α))² ẋ^T ṡ + sin²(α) ẍ^T s̈
= nµ[1 − (1 − σ) sin(α)] + (1 − cos(α))² ẍ^T s̈ + sin²(α) ẍ^T s̈
≤ nµ[1 − (1 − σ) sin(α)] + (sin⁴(α) + sin²(α))(ẍ^T s̈),

where the third equality uses x^T ṡ + s^T ẋ = nµ(1 − σ) and x^T s̈ + s^T ẍ = −2ẋ^T ṡ, which follow from (2.7) and (2.8), together with sin²(α) − 2(1 − cos(α)) = −(1 − cos(α))²; the first inequality follows from Lemma 3.2, and the second from Lemma 3.4. This proves the lemma.

4. THE COMPLEXITY ANALYSIS

In the following, we present some technical results which are required for establishing the main results of the convergence analysis. For simplicity in the analysis, we omit the index k and set D = X^{−1/2} S^{1/2}, where x, s ∈ R^n_+.

Lemma 4.1 ([21]). Let u, v ∈ R^n. Then ‖uv‖ ≤ ‖Du‖ ‖D^{−1}v‖ ≤ (1/2)(‖Du‖² + ‖D^{−1}v‖²).

In what follows, we give upper bounds for ‖Dẋ‖, ‖D^{−1}ṡ‖, ‖Dẍ‖ and ‖D^{−1}s̈‖, which are very important for the subsequent analysis.

Lemma 4.2. If (x, s) ∈ N_∞^−(γ), D = X^{−1/2} S^{1/2} and (ẋ, ṡ) is the solution of (2.7), then

‖Dẋ‖² + ‖D^{−1}ṡ‖² ≤ β₁²µn, ‖ẋṡ‖ ≤ (1/2)β₁²µn, ‖Dẋ‖ ≤ β₁√(µn), ‖D^{−1}ṡ‖ ≤ β₁√(µn),

where β₁² = 1 − 2σ + σ²/γ.

Proof. Pre-multiplying the second equation in (2.7) by (XS)^{−1/2}, we get

Dẋ + D^{−1}ṡ = (XS)^{−1/2}(xs − σµe).

Then, taking the squared norm on both sides yields

‖Dẋ‖² + 2ẋ^T ṡ + ‖D^{−1}ṡ‖² = ‖(xs)^{1/2}‖² − 2σµn + σ²µ²‖(XS)^{−1/2}e‖².

From ẋ^T ṡ ≥ 0, we have

‖Dẋ‖² + ‖D^{−1}ṡ‖² ≤ x^T s − 2σµn + σ²µ² (n/min(xs)) ≤ µn − 2σµn + σ²µn/γ = β₁²µn.

Hence ‖Dẋ‖ ≤ β₁√(µn) and ‖D^{−1}ṡ‖ ≤ β₁√(µn), and, using this result and Lemma 4.1, ‖ẋṡ‖ ≤ (1/2)β₁²µn. This proves the lemma.

Lemma 4.3. If (x, s) ∈ N_∞^−(γ), D = X^{−1/2} S^{1/2} and (ẍ, s̈) is the solution of (2.8), then

‖Dẍ‖² + ‖D^{−1}s̈‖² ≤ β₂²µn², ‖ẍs̈‖ ≤ (1/2)β₂²µn², ‖Dẍ‖ ≤ β₂√µ n, ‖D^{−1}s̈‖ ≤ β₂√µ n,

where β₂ = β₁²/√γ.

Proof. Pre-multiplying the second equation in (2.8) by (XS)^{−1/2}, taking the squared norm on both sides, and using ẍ^T s̈ ≥ 0, we obtain

‖Dẍ‖² + ‖D^{−1}s̈‖² ≤ ‖(XS)^{−1/2}(−2ẋṡ)‖².

From this inequality and Lemma 4.2, we get

‖Dẍ‖² + ‖D^{−1}s̈‖² ≤ 4‖ẋṡ‖²/min(xs) ≤ 4‖ẋṡ‖²/(γµ) ≤ β₁⁴µn²/γ = β₂²µn²,

and the remaining bounds, including ‖ẍs̈‖ ≤ (1/2)β₂²µn², follow as in Lemma 4.2. This completes the proof.

Using Lemma 4.2 and Lemma 4.3, we have the following lemma.

Lemma 4.4. If β₃ = β₁β₂, then

‖ẋs̈‖ ≤ ‖Dẋ‖ ‖D^{−1}s̈‖ ≤ β₃µn^{3/2}, ‖ẍṡ‖ ≤ ‖Dẍ‖ ‖D^{−1}ṡ‖ ≤ β₃µn^{3/2}.
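These norm bounds are easy to sanity-check numerically. The snippet below (NumPy, with our own toy data and parameter choices) verifies the first two bounds of Lemma 4.2 at a strictly feasible point:

```python
import numpy as np

# toy data (ours): M positive semi-definite, (x, s) strictly feasible
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
x = np.ones(2)
s = M @ x + q
n = x.size
mu = x @ s / n
sigma, gamma = 0.2, 0.1           # here min(xs) = mu, so (x, s) is in the neighborhood

# solve the first-derivative system for (x_dot, s_dot)
K = np.block([[M, -np.eye(n)], [np.diag(s), np.diag(x)]])
rhs = np.concatenate([np.zeros(n), x * s - sigma * mu * np.ones(n)])
xd, sd = np.split(np.linalg.solve(K, rhs), 2)

D = np.diag(np.sqrt(s / x))        # scaling matrix D = X^{-1/2} S^{1/2}
Dinv = np.diag(np.sqrt(x / s))
lhs = np.linalg.norm(D @ xd) ** 2 + np.linalg.norm(Dinv @ sd) ** 2
beta1_sq = 1.0 - 2.0 * sigma + sigma ** 2 / gamma

print(lhs <= beta1_sq * mu * n)                          # True
print(np.linalg.norm(xd * sd) <= 0.5 * beta1_sq * mu * n)  # True
```

The check is of course not a proof, but it is a quick way to catch sign or scaling mistakes when implementing the analysis.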

Let sin(α̂₀) = β₄/n, where β₄ = σ(1 − γ)/(β₁² + β₂² + 4β₃). Then we get the next lemmas.

Lemma 4.5. Let χ(α) be defined by (2.12). For all α ∈ (0, α̂₀], we have

χ(α) ≥ −(1/2) sin(α)µ(1 − γ)σe.

Proof. Using Lemma 4.2, Lemma 4.3, Lemma 4.4 and 1 − cos(α) ≤ sin²(α), we have

χ(α) = −g²(α)ẋṡ − sin(α)g(α)(ẋs̈ + ṡẍ) + g²(α)ẍs̈
≥ −[sin⁴(α)‖ẋṡ‖ + sin³(α)‖ẋs̈ + ṡẍ‖ + sin⁴(α)‖ẍs̈‖]e
≥ −[(1/2)β₁²µn sin⁴(α) + 2β₃µn^{3/2} sin³(α) + (1/2)β₂²µn² sin⁴(α)]e
≥ −sin(α)µ[(1/2)β₁²n sin³(α̂₀) + 2β₃n^{3/2} sin²(α̂₀) + (1/2)β₂²n² sin³(α̂₀)]e
= −sin(α)µ[β₁²β₄³/(2n²) + 2β₃β₄²/√n + β₂²β₄³/(2n)]e
≥ −sin(α)µ (β₄/2)[β₁² + β₂² + 4β₃]e
= −(1/2) sin(α)µ(1 − γ)σe,

where the last inequality uses β₄ < 1. This completes the proof.

Lemma 4.6. Let (x, s) ∈ N_∞^−(γ) and let sin(α̂) be defined by (2.14). Then, for all sin(α) ∈ [0, sin(α̂₀)], we have (x(α), s(α)) ∈ N_∞^−(γ) and sin(α̂) ≥ sin(α̂₀).

Proof. Using (2.11), (2.13), Lemma 4.5 and Lemma 3.5, we have

min(x(α)s(α)) = min[xs − sin(α)(xs − σµe) + χ(α)]
≥ min(xs)(1 − sin(α)) + sin(α)σµ + min(χ(α))
≥ (1 − sin(α))γµ + sin(α)σµ − (1/2) sin(α)(1 − γ)σµ
= γµ(α) − σγµ sin(α) + sin(α)σµ − (1/2) sin(α)(1 − γ)σµ − (γ/n)[(1 − cos(α))²(ẍ^T s̈ − ẋ^T ṡ) − sin(α)(1 − cos(α))(ẋ^T s̈ + ṡ^T ẍ)]
= γµ(α) + (1/2) sin(α)(1 − γ)σµ − (γ/n)(1 − cos(α))²(ẍ^T s̈ − ẋ^T ṡ) + (γ/n) sin(α)(1 − cos(α))(ẋ^T s̈ + ṡ^T ẍ)
≥ γµ(α) + (1/2) sin(α)(1 − γ)σµ − (γ/n)(1 − cos(α))²(ẍ^T s̈ + ẋ^T ṡ) + (γ/n) sin(α)(1 − cos(α))(ẋ^T s̈ + ṡ^T ẍ)
≥ γµ(α) + (1/2) sin(α)(1 − γ)σµ − (γ/n) sin²(α)(ẋ^T ṡ) − (γ/n)(1 − cos(α))²(ẍ^T s̈) + (γ/n) sin(α)(1 − cos(α))(ẋ^T s̈ + ṡ^T ẍ)
≥ γµ(α),

where the last three inequalities follow from the facts ẋ^T ṡ ≥ 0, Lemma 3.4 and (3.3), respectively. This implies x(α)s(α) > 0. Since x(α) and s(α) are continuous on [0, α̂₀] and (x, s) > 0, hence x(α) >

0 and s(α) > 0. By (2.13) and (2.14), we have (x(α), s(α)) ∈ N_∞^−(γ). Using (2.14), we obtain sin(α̂) ≥ sin(α̂₀). The proof is completed.

Now we are ready to present the iteration complexity bound of Algorithm 3.1. To this end, we need a new upper bound for µ(α). From Lemma 3.5 and Lemma 4.3, it follows that, for α ∈ [0, α̂₀],

µ(α) ≤ µ[1 − (1 − σ) sin(α)] + (1/n)[ẍ^T s̈ (sin⁴(α) + sin²(α))]
≤ µ[1 − (1 − σ) sin(α)] + (1/n)[(1/2)β₂²µn² (sin⁴(α) + sin²(α))]
= µ[1 − (1 − σ − (1/2)β₂²n (sin³(α) + sin(α))) sin(α)]
≤ µ[1 − (1 − σ − (1/2)β₂²β₄³/n² − (1/2)β₂²β₄) sin(α)]
≤ (1 − (1/4) sin(α))µ,

where the last two inequalities come from sin(α̂₀) = β₄/n, β₂²β₄ < 1/2 and 0 < σ < 1/4.

Theorem 4.7. Let (x^0, s^0) ∈ N_∞^−(γ) and sin(α̂₀) = β₄/n. Then Algorithm 3.1 terminates in at most O(n log((x^0)^T s^0/ε)) iterations.

Proof. Using the relation sin(α̂) ≥ sin(α̂₀), we have

µ(α̂) ≤ (1 − (1/4) sin(α̂))µ ≤ (1 − (1/4) sin(α̂₀))µ = (1 − β₄/(4n))µ.

Then we obtain (x^k)^T s^k ≤ (1 − β₄/(4n))^k (x^0)^T s^0. Thus the inequality (x^k)^T s^k ≤ ε is satisfied if (1 − β₄/(4n))^k (x^0)^T s^0 ≤ ε. Taking logarithms, we get

k log(1 − β₄/(4n)) ≤ log(ε/((x^0)^T s^0)).

Note that log(1 − β₄/(4n)) ≤ −β₄/(4n) < 0. Thus, for k ≥ (4n/β₄) log((x^0)^T s^0/ε), we have (x^k)^T s^k ≤ ε. This completes the proof.
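Putting Sections 3 and 4 together, the whole method fits in a few dozen lines. The Python sketch below runs the arc-search iteration on a made-up toy monotone LCP; the backtracking line search is a simple practical stand-in for the exact maximization of sin(α) over the neighborhood, and all data and parameter choices here are ours, not the paper's:

```python
import numpy as np

def arc_search_ipm(M, q, x, s, gamma=0.1, sigma=0.2, eps=1e-8, max_iter=200):
    """Sketch of the arc-search IPM; backtracking replaces the exact step rule."""
    n = x.size
    for _ in range(max_iter):
        if x @ s <= eps:                 # eps-approximate solution reached
            break
        mu = x @ s / n
        K = np.block([[M, -np.eye(n)], [np.diag(s), np.diag(x)]])
        rhs1 = np.concatenate([np.zeros(n), x * s - sigma * mu * np.ones(n)])
        xd, sd = np.split(np.linalg.solve(K, rhs1), 2)
        rhs2 = np.concatenate([np.zeros(n), -2.0 * xd * sd])
        xdd, sdd = np.split(np.linalg.solve(K, rhs2), 2)
        sin_a = 1.0                      # try the longest arc step first
        while sin_a > 1e-12:
            g = 1.0 - np.sqrt(1.0 - sin_a ** 2)      # g = 1 - cos(alpha)
            xa = x - sin_a * xd + g * xdd
            sa = s - sin_a * sd + g * sdd
            mua = xa @ sa / n
            # accept the largest trial step that stays in the wide neighborhood
            if np.all(xa > 0) and np.all(sa > 0) and np.min(xa * sa) >= gamma * mua:
                x, s = xa, sa
                break
            sin_a *= 0.5
    return x, s

# hypothetical toy monotone LCP: since q >= 0, x = 0, s = q solves it exactly
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
x0 = np.ones(2)
s0 = M @ x0 + q                          # strictly feasible starting point

x, s = arc_search_ipm(M, q, x0, s0)
print(x @ s)                             # below eps: an eps-approximate solution
```

On this example the duality gap shrinks by a roughly constant factor per iteration, in the spirit of the linear rate established in Theorem 4.7, although the steps accepted by backtracking are far longer than the worst-case bound.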

5. NUMERICAL RESULTS

In this section, we present some numerical results for our new Algorithm 3.1 and for the large-update algorithms of [22] based on the trigonometric kernel function (ψ(t) = (t² − 1)/2 + (6/π) tan(h(t)), with h(t) = π(1 − t)/(4t + 2)) and on the classical kernel function (ψ(t) = (t² − 1)/2 − log t), applied to the same problems. Moreover, the results show that our algorithm is more effective. We list the number of iterations (Iter), the duality gap (Gap) and the total CPU time (Time) when the algorithms terminate. We consider the following test problems.

5.1. Test problems.

Problem 1. M = 0 4 0 0 0, q = / 4 3/, x^0 = 5 5.

Problem 2 [23]. M = 5 6 6 6 9 0 6 0 … 4n 3, q = , x^0 = .

Problem 3 [24]. M = 4 0 0 4 0 0 4 0 0 0 0 0 0 0 4, q = , x^0 = 5.

5.2. The number of iterations for our algorithm. In the following, we select different optimization parameters γ, σ and the accuracy parameter ε = 10⁻⁶ for Algorithm 3.1. The numbers of iterations for the test problems are stated in Tables 1–3.

TABLE 1. The number of iterations for Problem 1.

σ     γ     Gap               Iter  Time
1/6   1/2   776478094e−007    9     709
1/8   1/2   657336555e−007    8     0908
1/8   1/5   657336555e−007    8     0965
1/10  1/10  740538e−007       7     06

Taking different dimensions and parameters σ, γ, the results are displayed in Tables 1–3. We find that σ has a more obvious effect on the iteration count than γ. For Table 1 and Table 3, the smaller σ is, the better the iteration count; for Table 2, the larger σ is, the better the iteration count.

TABLE 2. The number of iterations for Problem 2 (each cell: Iter, Gap, Time).

σ     γ     | n=10                | n=15
1/6   1/2   | 9, 5084e−007, 4537  | 3, 9346e−007, 509
1/10  1/10  | 9, 5608e−007, 4540  | 4, 43e−007, 5959
1/5   1/10  | 0, 4493e−007, 6667  | 6, 465e−007, 7488
1/5   1/30  | 0, 7447e−007, 5874  | 6, 706e−007, 796

(rows in the same (σ, γ) order)
n=20                 | n=25                 | n=30
7, 63e−007, 6540     | 3, 37743e−007, 8908  | 33, 3009e−007, 0345
8, 558e−007, 6358    | 3, 655e−007, 890     | 35, 55948e−007, 8678
3, 7983e−008, 8066   | 35, 807e−007, 0960   | 39, 938e−007, 93
30, 8733e−007, 9950  | 35, 67943e−007, 96   | 38, 34859e−007, 4455

TABLE 3. The number of iterations for Problem 3 (each cell: Iter, Gap, Time).

σ     γ     | n=10               | n=50                | n=100
1/6   1/2   | 73e−007, 4038      | 3, 79530e−007, 5689 | 5, 388e−007, 757
1/10  1/10  | 9, 43594e−007, 3646| 078e−007, 5790      | 3, 4934e−007, 7395
1/10  1/5   | 9, 46934e−007, 3444| 5584e−007, 544      | 3, 5948e−007, 7867
1/5   1/30  | 9, 68965e−008, 35  | 03e−007, 4850       | 7345e−007, 73

(rows in the same (σ, γ) order)
n=200                | n=500               | n=1000
7, 36773e−007, 3     | 0, 6807e−007, 580   | 4, 7479e−007, 876
5, 33748e−007, 6358  | 8, 9434e−007, 5598  | 503e−007, 068
5, 50650e−007, 649   | 9, 0488e−007, 55357 | 7784e−007, 90403
4, 5970e−007, 75     | 8, 7709e−007, 5088  | 359e−007, 9095

5.3. The number of iterations for the large-update algorithms based on the kernel functions in [22]. We take different barrier update parameters θ, the threshold parameter τ = 5, and the accuracy ε = 10⁻⁴. The iteration numbers for the test problems based on the trigonometric kernel function and the classical kernel function are presented in Tables 4–9, respectively. Comparing the results in Tables 4–9 suggests that, even at higher accuracy, our algorithm needs fewer iterations. This shows that our algorithm is more efficient in practice and that its iteration count is insensitive to the size of the problem.

TABLE 4. The number of iterations for Problem 1 based on the trigonometric kernel function.

θ     Gap              Iter
0.3   888905766e−004   46
0.5   6767733e−004     3
0.7   479478804e−004   3

TABLE 5. The number of iterations for Problem 1 based on the classical kernel function.

θ     Gap              Iter
0.3   7655850e−004     43
0.5   65537368e−004    8
0.7   339090763e−004   0

TABLE 6. The number of iterations for Problem 2 based on the trigonometric kernel function.

θ     n=10  n=15  n=20  n=25  n=30
0.3   73    83    89    98    05
0.5   49    54    57    60    63
0.7   39    4     46    48    5

TABLE 7. The number of iterations for Problem 2 based on the classical kernel function.

θ     n=10  n=15  n=20  n=25  n=30
0.3   76    86    95    05
0.5   5     56    60    65    68
0.7   4     45    48    49    53

TABLE 8. The number of iterations for Problem 3 based on the trigonometric kernel function.

θ     n=10  n=50  n=100  n=200  n=500  n=1000
0.3   63    83    98     3      76
0.5   4     57    69     84     9      0
0.7   34    44    54     65     66     84

TABLE 9. The number of iterations for Problem 3 based on the classical kernel function.

θ     n=10  n=50  n=100  n=200  n=500  n=1000
0.3   49    73    78     8      9      9
0.5   33    39    47     58     69     7
0.7   3     33    36     39     4      56

We can intuitively see that searching along an ellipse that approximates the central path is promising and attractive. Furthermore, the results for our algorithm show very slow growth as n increases, which is precisely what one hopes for in interior-point algorithms.

6. CONCLUDING REMARKS

This paper generalized a wide neighborhood IPM with arc-search from LP to the LCP; the method searches for the optimizers along ellipses that approximate the central path. Moreover, the iteration complexity bound of the proposed algorithm was derived, namely O(n log((x^0)^T s^0/ε)). The numerical results show that the newly proposed algorithm is promising in practice in comparison with some other IPMs. We expect that the same idea can be shown to be efficient for the P*(κ)-LCP.

Acknowledgements. The authors were supported by the Natural Science Foundation of China (No. 7470) and the Excellent Foundation of Graduate Students of China Three Gorges University (07YPY08).

REFERENCES

[1] N.K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4 (1984), 373-395.
[2] M. Kojima, S. Mizuno, A. Yoshise, A primal-dual interior-point algorithm for linear programming, in: Progress in Mathematical Programming: Interior Point and Related Methods, Springer-Verlag, New York, (1989), 29-47.
[3] Y.Q. Bai, M. El Ghami, C. Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM J. Optim. 15 (2004), 101-128.
[4] S.J. Wright, A path-following interior-point algorithm for linear and quadratic problems, Ann. Oper. Res. 62 (1996), 103-130.
[5] Y.E. Nesterov, Interior-point methods in convex programming: theory and applications, Society for Industrial and Applied Mathematics, Philadelphia, 1994.
[6] H. Mansouri, M. Pirhaji, A polynomial interior-point algorithm for monotone linear complementarity problems, J. Optim. Theory Appl. 157 (2013), 451-461.
[7] R.W. Cottle, J.S. Pang, R.E. Stone, The Linear Complementarity Problem, Academic Press, New York, 1992.
[8] P.F. Hung, Y. Ye, An asymptotical O(√n L)-iteration path-following linear programming algorithm that uses wide neighborhoods, SIAM J. Optim. 6 (1996), 570-586.
[9] B. Jansen, C. Roos, T. Terlaky, Improved complexity using higher-order correctors for primal-dual
Dikin affine scaling, Math. Program. 76 (1996), 117-130.
[10] Y. Yang, Arc-search path-following interior-point algorithms for linear programming, Optimization Online, 2009.
[11] Y. Yang, A polynomial arc-search interior-point algorithm for linear programming, J. Optim. Theory Appl. 158 (2013), 859-873.
[12] Y. Yang, CurveLP, a MATLAB implementation of an infeasible interior-point algorithm for linear programming, Numer. Algorithms 74 (2016), 1-30.
[13] Y. Yang, M. Yamashita, An arc-search O(nL) infeasible-interior-point algorithm for linear programming, Optim. Lett. 12 (2018), 781-798.
[14] Y. Yang, A polynomial arc-search interior-point algorithm for convex quadratic programming, Eur. J. Oper. Res. 215 (2011), 25-38.
[15] X. Yang, Y. Zhang, H. Liu, A wide neighborhood infeasible-interior-point method with arc-search for linear programming, J. Comput. Math. Appl. 51 (2016), 209-225.
[16] X. Yang, H. Liu, Y. Zhang, An arc-search infeasible-interior-point method for symmetric optimization in a wide neighborhood of the central path, Optim. Lett. 11 (2017), 135-152.
[17] M. Pirhaji, M. Zangiabadi, H. Mansouri, An ℓ2-neighborhood infeasible interior-point algorithm for linear complementarity problems, 4OR 15 (2017), 111-131.
[18] X.M. Yang, H.W. Liu, C.H. Liu, Arc-search interior-point algorithm, J. Jilin Univ. (Science Edition) 52 (2014), 693-697.
[19] C. Roos, T. Terlaky, J.-Ph. Vial, Interior-Point Methods for Linear Optimization, Springer, New York, 2006.
[20] M. Kojima, N. Megiddo, T. Noma, A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Lecture Notes in Computer Science, vol. 538, Springer, Berlin, 1991.

[21] Y. Zhang, D.T. Zhang, On polynomiality of the Mehrotra-type predictor-corrector interior-point algorithms, Math. Program. 68 (1995), 303-318.
[22] M. El Ghami, Primal-dual algorithms for P*(κ) linear complementarity problems based on kernel functions with trigonometric barrier term, Optim. Theory Decis. Mak. Oper. Res. Appl. 31 (2013), 331-349.
[23] P.T. Harker, J.S. Pang, A damped Newton method for the linear complementarity problem, Lect. Appl. Math. 26 (1990), 265-284.
[24] C. Geiger, C. Kanzow, On the resolution of monotone complementarity problems, Comput. Optim. Appl. 5 (1996), 155-173.