Predictor-corrector methods for sufficient linear complementarity problems in a wide neighborhood of the central path

Florian A. Potra and Xing Liu
Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, MD 21250, USA

Two predictor-corrector methods of order $m = \Omega(\log n)$ are proposed for solving sufficient linear complementarity problems. The methods produce a sequence of iterates in the $\mathcal{N}_\infty^-$ neighborhood of the central path. The first method requires an explicit upper bound $\kappa$ of the handicap of the problem, while the second method does not. Both methods have $O\!\left((1+\kappa)^{1+1/m}\sqrt{n}\,L\right)$ iteration complexity. They are superlinearly convergent of order $m+1$ for nondegenerate problems and of order $(m+1)/2$ for degenerate problems. The cost of implementing one iteration is at most $O(n^3)$ arithmetic operations.

Keywords: linear complementarity, interior-point, path-following, predictor-corrector

NOTE: in this report we correct an error from the paper with the same title published in Optimization Methods and Software, 20(1) (2005), 145-168 (see the Erratum at http://www.math.umbc.edu/~potra/erratum.pdf).

(Work supported in part by the National Science Foundation, Grant No. 0139701.)

1 Introduction

Primal-dual interior-point methods play an important role in modern mathematical programming. They have been widely used to obtain strong theoretical results and they have been successfully implemented in software packages for solving linear programming (LP), quadratic programming (QP), semidefinite programming (SDP), and many other problems. For an excellent analysis of primal-dual interior-point methods and their implementation, see the monograph of Wright [24].

The MTY predictor-corrector algorithm proposed by Mizuno, Todd and Ye [12] is a typical representative of a primal-dual interior-point method for LP. It has $O(\sqrt{n}L)$ iteration complexity, which is the best iteration complexity obtained so far for any interior-point method. Moreover, the duality gap of the sequence generated by the MTY algorithm converges to zero quadratically [27]. The MTY algorithm was the first algorithm for LP having both polynomial complexity and

superlinear convergence. Ji, Potra and Huang [8] generalized the MTY algorithm to monotone linear complementarity problems (LCP). The method has $O(\sqrt{n}L)$ iteration complexity and superlinear convergence, under the assumption that the LCP is nondegenerate and the iteration sequence converges. It turns out that these assumptions are not restrictive: the convergence of the iteration sequence follows from a general result of [4] and, according to [13], nondegeneracy (i.e. the existence of a strictly complementary solution) is a necessary condition for superlinear convergence. We note that in [13] it is shown that in the degenerate case a large class of first order interior-point methods, which contains the MTY algorithm, can only achieve linear convergence with factor at most 0.25. A direct proof of the superlinear convergence of the MTY algorithm for nondegenerate LCPs, without using the convergence of the iteration sequence, is contained in [26].

The MTY algorithm operates in an $l_2$ neighborhood of the central path. It is well known, however, that primal-dual interior-point methods have better practical performance in a wider neighborhood of the central path. The most efficient primal-dual interior-point methods operate in the so-called $\mathcal{N}_\infty^-$ neighborhood, to be defined later in the present paper. Unfortunately, the iteration complexity of predictor-corrector methods that use wide neighborhoods is worse than the complexity of the corresponding methods for small neighborhoods. From the analysis of Anstreicher and Bosch [3] it follows that the iteration complexity of a straightforward implementation of a predictor-corrector method in the large neighborhood of the central path would be $O(n^{3/2}L)$. Gonzaga [5] proposed a predictor-corrector method using the $\mathcal{N}_\infty^-$ neighborhood of the central path that has $O(nL)$ iteration complexity. In contrast with the MTY algorithm, which uses a predictor step followed by a corrector step at each iteration, Gonzaga's algorithm uses a predictor step followed by a variable number of corrector steps at each iteration. There are no sharp estimates of the number of corrector steps needed at each iteration; however, by a very elegant analysis Gonzaga was able to prove that his algorithm needs at most $O(nL)$ predictor and corrector steps. The results of [3, 5] show that it is more difficult to develop and analyze MTY-type predictor-corrector methods in large neighborhoods. The best iteration complexity achieved by any known interior-point method in the large neighborhood using first order information is $O(nL)$. As shown in [6, 28], the iteration complexity can be reduced to $O(\sqrt{n}L)$ by using higher order information. However, the algorithms presented in those papers are not of MTY type and it appears that they are not superlinearly convergent. A higher order algorithm of MTY type in the $\mathcal{N}_\infty^-$ neighborhood with $O(\sqrt{n}L)$ complexity and superlinear convergence has recently been proposed in [15].

The existence of a central path is crucial for interior-point methods. An important result of the 1991 monograph of Kojima et al. [9] shows that the central path exists for any $P_*$ linear complementarity problem, provided that the relative interior of its feasible set is nonempty. We recall that every $P_*$ linear complementarity problem is a $P_*(\kappa)$ problem for some $\kappa\ge0$, i.e. $P_* = \bigcup_{\kappa\ge0}P_*(\kappa)$. The class $P_*(0)$ coincides with the class of monotone linear complementarity prob-

lems, and $0\le\kappa_1\le\kappa_2$ implies $P_*(\kappa_1)\subset P_*(\kappa_2)$. A surprising result of Väliaho [22] from 1996 shows that the class of $P_*$ matrices coincides with the class of sufficient matrices. Therefore, the interior-point methods of [9] can solve any sufficient linear complementarity problem. Of course, the computational complexity of the algorithms depends on the parameter $\kappa$. The best known iteration complexity of an interior-point method for a $P_*(\kappa)$ problem is $O((1+\kappa)\sqrt{n}L)$. No superlinear convergence results were given for the interior-point methods of [9].

In 1995 Miao [10] extended the MTY predictor-corrector method to $P_*(\kappa)$ linear complementarity problems. His algorithm uses the $l_2$ neighborhood of the central path, has $O((1+\kappa)\sqrt{n}L)$ iteration complexity, and is quadratically convergent for nondegenerate problems. However, the constant $\kappa$ is explicitly used in the construction of the algorithm, and it is well known that this constant is very difficult to estimate for many sufficient linear complementarity problems. The predictor-corrector methods described in [16] improve on Miao's algorithm in several ways. First, the algorithms do not depend on the constant $\kappa$, so that the same algorithm is used for any sufficient complementarity problem. Second, the neighborhoods used by the algorithms of [16] are slightly larger than those considered in [10]. Third, by employing a higher order predictor, the algorithms of [16] may attain arbitrarily high orders of convergence on nondegenerate problems. Finally, by using the fast-safe-improve strategy of Wright and Zhang [25], the algorithms of [16] require asymptotically only one matrix factorization per iteration, while Miao's algorithm, as well as the original MTY algorithm, requires two matrix factorizations at every iteration. While the algorithms of [16] do not depend on the constant $\kappa$, their computational complexity does: if the problem is a $P_*(\kappa)$ linear complementarity problem, they terminate in at most $O((1+\kappa)\sqrt{n}L)$ iterations.

The predictor-corrector algorithms presented in [17, 18] are superlinearly convergent even for degenerate problems. More precisely, the Q-order of convergence of the complementarity gap is 2 for nondegenerate problems and 1.5 for degenerate problems. The algorithms of [17, 18] are first order methods that do not belong to the class of interior-point methods considered in [13], so the fact that they are superlinearly convergent for degenerate problems does not contradict the result of that paper. In the degenerate case superlinear convergence is achieved by employing an idea of Mizuno [11], which consists in identifying the indices for which strict complementarity does not hold (possible once the complementarity gap is small enough) and in using an extra backsolve to accelerate the convergence of the corresponding variables. Predictor-corrector algorithms with arbitrarily high order of convergence for degenerate sufficient linear complementarity problems were given in [20]. The algorithms depend on the constant $\kappa$, use an $l_2$ neighborhood of the central path and, as shown in [19], have $O((1+\kappa)\sqrt{n}L)$ iteration complexity for $P_*(\kappa)$ linear complementarity problems. A general local analysis of higher order predictor-corrector methods in an $l_2$ neighborhood for degenerate sufficient linear complementarity problems is given by Zhao and Sun [29], who also propose a new algorithm that does not need a corrector step. The latter algorithm does not follow the traditional central path. Instead, a new analytic path is used at each iteration. No complexity results are given.

All the above mentioned interior-point methods for sufficient linear complementarity problems use small neighborhoods of the central path. In a recent monograph [7], Peng, Roos and Terlaky propose the use of larger neighborhoods of the central path defined by means of self-regular functions, and give an interior-point algorithm for solving a class of $P_*(\kappa)$ nonlinear complementarity problems based on such neighborhoods. Their algorithm does not depend on $\kappa$. They establish the complexity of the algorithm in terms of $n$ only, by tacitly assuming that $\kappa$ is a finite constant. By analyzing the proof of Theorem 4.4.10 of the monograph it follows that the iteration complexity of their algorithm with $q = \log n$, when applied to a $P_*(\kappa)$ linear complementarity problem, is $O((1+\kappa)\sqrt{n}(\log n)L)$. Since at each main iteration of their algorithm the complementarity gap is reduced by a given constant, their algorithm is only linearly convergent. A superlinear interior-point algorithm for sufficient linear complementarity problems in the $\mathcal{N}_\infty^-$ neighborhood has been proposed by Stoer [21]. This algorithm is an adaptation of the second algorithm of [9] to the large neighborhood. No complexity results are proved.

In the present paper we propose several predictor-corrector methods for sufficient horizontal linear complementarity problems (HLCP) in the $\mathcal{N}_\infty^-$ neighborhood of the central path that extend the algorithm of [15]. The HLCP is a slight generalization of the standard linear complementarity problem (LCP). As shown by Anitescu et al. [1], different variants of the $P_*(\kappa)$ linear complementarity problem, including LCP, HLCP, mixed LCP and geometric LCP, are equivalent in the sense that any complexity or superlinear convergence result proved for one of the formulations is valid for all formulations. We choose to work with the $P_*(\kappa)$ HLCP because of its symmetry.

We start by describing a first order predictor-corrector method for the $P_*(\kappa)$ HLCP that depends explicitly on the constant $\kappa$. We prove that the algorithm has $O((1+\kappa)nL)$ iteration complexity for general $P_*(\kappa)$ problems and is quadratically convergent for nondegenerate problems. If $\kappa$ is not known, we propose a first order predictor-corrector method with $O((1+\chi)nL)$ iteration complexity, where $\chi$ is the handicap of the sufficient linear complementarity problem, i.e. the smallest $\kappa\ge0$ for which the problem is a $P_*(\kappa)$ problem. Both algorithms belong to the class of interior-point methods considered in [13], so they are not superlinearly convergent on degenerate problems. Of course, we could modify them using the ideas of [11], [17], [18] to obtain algorithms with Q-order 1.5 on degenerate problems; however, this would depend on an efficient identification of the indices for which strict complementarity does not hold and would require an extra backsolve. Alternatively, with an extra backsolve we can construct a second order method that has Q-order 1.5 on degenerate problems and does not depend on the identification of the indices for which strict complementarity fails. More generally, by using a predictor of order $m$ we obtain algorithms with $O((1+\kappa)^{1+1/m}n^{1/2+1/(m+1)}L)$ iteration complexity if $\kappa$ is given, and $O((1+\chi)^{1+1/m}n^{1/2+1/(m+1)}L)$ iteration complexity if $\kappa$ is not given. The higher order methods are superlinearly convergent even for degenerate $P_*(\kappa)$ HLCPs. More precisely, the Q-order of convergence of the complementarity gap is $m+1$ for nondegenerate problems and $(m+1)/2$ for degenerate problems. By choosing $m = \Omega(\log n)$, the iteration complexity of our

algorithms reduces to $O((1+\kappa)^{1+1/m}\sqrt{n}L)$ and $O((1+\chi)^{1+1/m}\sqrt{n}L)$, respectively. The results of the present paper represent a nontrivial generalization of the results of [15], since some of the proof techniques do not carry over from the monotone case, and since the algorithms of [15] had to be modified so that they do not depend on $\kappa$. To our knowledge, the algorithms presented in this paper have the lowest complexity bounds of any interior-point methods for sufficient linear complementarity problems acting in the $\mathcal{N}_\infty^-$ neighborhood of the central path. Moreover, they are predictor-corrector methods that are superlinearly convergent even for degenerate problems.

Conventions. We denote by $\mathbb{N}$ the set of all nonnegative integers; $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{R}_{++}$ denote the sets of real, nonnegative real, and positive real numbers, respectively. Given a vector $x$, the corresponding upper case symbol $X$ denotes the diagonal matrix defined by the vector. We denote componentwise operations on vectors by the usual notation for real numbers: given two vectors $u, v$ of the same dimension, $uv$, $u/v$, etc. denote the vectors with components $u_iv_i$, $u_i/v_i$, etc., so that $uv\equiv Uv$. If $f$ is a scalar function and $v$ is a vector, then $f(v)$ denotes the vector with components $f(v_i)$; for example, if $v\in\mathbb{R}^n_+$, then $\sqrt{v}$ denotes the vector with components $\sqrt{v_i}$, $v^2$ the vector with components $v_i^2$, and $1-v$ the vector with components $1-v_i$ (traditionally written as $e-v$, where $e$ is the vector of all ones). The inequalities $v\ge0$ and $v>0$ are understood componentwise. $[v]^-$ denotes the negative part of the vector $v$, $[v]^- = \max\{-v, 0\}$. If $x, s\in\mathbb{R}^n$, then the vector $z\in\mathbb{R}^{2n}$ obtained by concatenating $x$ and $s$ is denoted by

$$z = (x, s) = \begin{bmatrix}x\\ s\end{bmatrix} = [x^T, s^T]^T. \tag{1.1}$$

Throughout this paper the mean value of $xs$ is denoted by

$$\mu(z) = \frac{x^Ts}{n}. \tag{1.2}$$

If $\|\cdot\|$ is a vector norm on $\mathbb{R}^n$ and $A$ is a matrix, then the operator norm induced by $\|\cdot\|$ is defined in the usual manner by $\|A\| = \max\{\|Ax\| : \|x\| = 1\}$. We use the notations $O(\cdot)$, $\Omega(\cdot)$, $\Theta(\cdot)$, and $o(\cdot)$ in the standard way: if $\{\tau_k\}$ is a sequence of positive numbers tending to 0 or $\infty$, and $\{x_k\}$ is a sequence of vectors, then $x_k = O(\tau_k)$ means that there is a constant $\vartheta$ such that $\|x_k\|\le\vartheta\tau_k$ for every $k\in\mathbb{N}$; if $x_k>0$, $x_k = \Omega(\tau_k)$ means that $(x_k)^{-1} = O(1/\tau_k)$; if both $x_k = O(\tau_k)$ and $x_k = \Omega(\tau_k)$ hold, we write $x_k = \Theta(\tau_k)$; finally, $x_k = o(\tau_k)$ means that $\lim_{k\to\infty}x_k/\tau_k = 0$. For any real number $\rho$, $\lceil\rho\rceil$ denotes the smallest integer greater than or equal to $\rho$.
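To make the componentwise conventions concrete, here is a minimal Python sketch (illustration only, not from the paper; `mu` and `neg_part` are hypothetical helper names) of the mean value (1.2) and the negative part $[v]^-$:

```python
import numpy as np

def mu(x, s):
    """Normalized complementarity gap mu(z) = x^T s / n, cf. (1.2)."""
    return x.dot(s) / x.size

def neg_part(v):
    """Componentwise negative part [v]^- = max(-v, 0)."""
    return np.maximum(-v, 0.0)

x = np.array([1.0, 2.0, 0.5])
s = np.array([0.5, 1.0, 2.0])
print(mu(x, s))                               # (0.5 + 2.0 + 1.0)/3 = 7/6
print(neg_part(np.array([0.3, -0.2, 0.0])))   # [0.   0.2  0. ]
```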

2 The $P_*(\kappa)$ horizontal linear complementarity problem

Given two matrices $Q, R\in\mathbb{R}^{n\times n}$ and a vector $b\in\mathbb{R}^n$, the horizontal linear complementarity problem (HLCP) consists in finding a pair of vectors $z = (x,s)$ such that

$$xs = 0,\qquad Qx + Rs = b,\qquad x, s\ge0. \tag{2.1}$$

The standard (monotone) linear complementarity problem (SLCP, or simply LCP) is obtained by taking $R = -I$ and $Q$ positive semidefinite. Let $\kappa\ge0$ be a given constant. We say that (2.1) is a $P_*(\kappa)$ HLCP if

$$Qu + Rv = 0\ \Longrightarrow\ (1+4\kappa)\sum_{i\in I_+}u_iv_i + \sum_{i\in I_-}u_iv_i\ \ge\ 0,\quad\text{for all } u, v\in\mathbb{R}^n,$$

where $I_+ = \{i : u_iv_i > 0\}$ and $I_- = \{i : u_iv_i < 0\}$. If the above condition is satisfied, we say that $(Q,R)$ is a $P_*(\kappa)$ pair and we write $(Q,R)\in P_*(\kappa)$. If $(Q,R)$ belongs to the class $P_* = \bigcup_{\kappa\ge0}P_*(\kappa)$, then we say that (2.1) is a $P_*$ HLCP. In case $R = -I$, $(Q,-I)$ is a $P_*(\kappa)$ pair if and only if $Q$ is a $P_*(\kappa)$ matrix in the sense that

$$(1+4\kappa)\sum_{i\in\hat I_+}x_i[Qx]_i + \sum_{i\in\hat I_-}x_i[Qx]_i\ \ge\ 0,\qquad\forall x\in\mathbb{R}^n,$$

where $\hat I_+ = \{i : x_i[Qx]_i > 0\}$, $\hat I_- = \{i : x_i[Qx]_i < 0\}$. Problem (2.1) is then called a $P_*(\kappa)$ LCP; it is extensively discussed in [9].

A matrix $Q$ is called column sufficient if $x(Qx)\le0$ implies $x(Qx) = 0$, and row sufficient if $x(Q^Tx)\le0$ implies $x(Q^Tx) = 0$. A matrix that is both row sufficient and column sufficient is called a sufficient matrix. Väliaho's result [22] states that a matrix is sufficient if and only if it is a $P_*(\kappa)$ matrix for some $\kappa\ge0$. By extension, a $P_*$ HLCP will be called a sufficient HLCP and a $P_*$ pair will be called a sufficient pair. The handicap of a sufficient pair $(Q,R)$ is defined as

$$\chi(Q,R) := \min\{\kappa : \kappa\ge0,\ (Q,R)\in P_*(\kappa)\}. \tag{2.2}$$

A general expression for the handicap of a sufficient matrix, and a method for determining it, are described in [23].
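For illustration (not part of the paper), the left-hand side of the $P_*(\kappa)$ inequality can be evaluated numerically for a single pair $(u,v)$ with $Qu+Rv=0$: a negative value certifies that $(Q,R)\notin P_*(\kappa)$, while membership would require the inequality for all such pairs. A minimal numpy sketch, with `p_star_lhs` a hypothetical helper name:

```python
import numpy as np

def p_star_lhs(u, v, kappa):
    """(1 + 4*kappa) * sum_{I+} u_i v_i + sum_{I-} u_i v_i for one pair (u, v).
    The P_*(kappa) condition requires this to be >= 0 whenever Qu + Rv = 0."""
    uv = u * v
    pos = uv[uv > 0].sum()    # sum over I_+
    neg = uv[uv < 0].sum()    # sum over I_-
    return (1.0 + 4.0 * kappa) * pos + neg

# Example pair; a negative result for a kernel pair would disprove membership.
u = np.array([1.0, -2.0, 0.5])
v = np.array([0.5, 1.0, -1.0])
print(p_star_lhs(u, v, kappa=0.25))   # (1+1)*0.5 + (-2.0 - 0.5) = -1.5
```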

We denote the set of all feasible points of the HLCP by

$$\mathcal{F} = \{z = (x,s)\in\mathbb{R}^{2n}_+ : Qx + Rs = b\}$$

and its solution set by

$$\mathcal{F}^* = \{z = (x,s)\in\mathcal{F} : xs = 0\}.$$

The relative interior of $\mathcal{F}$, which is also known as the set of strictly feasible points or the set of interior points, is given by $\mathcal{F}^0 = \mathcal{F}\cap\mathbb{R}^{2n}_{++}$. It is known (see, for example, [9]) that if $\mathcal{F}^0$ is nonempty, then for any $\tau>0$ the nonlinear system

$$xs = \tau e,\qquad Qx + Rs = b$$

has a unique positive solution. The set of all such solutions defines the central path $\mathcal{C}$ of the HLCP, that is,

$$\mathcal{C} = \{z\in\mathbb{R}^{2n}_{++} : F_\tau(z) = 0,\ \tau>0\},\qquad\text{where}\qquad F_\tau(z) = \begin{bmatrix}xs - \tau e\\ Qx + Rs - b\end{bmatrix}.$$

If $F_\tau(z) = 0$, then it is easy to see that $\tau = \mu(z)$, where $\mu(z)$ is given by (1.2). The wide neighborhood $\mathcal{N}_\infty^-(\alpha)$, in which we work in the present paper, is given by

$$\mathcal{N}_\infty^-(\alpha) = \{z\in\mathcal{F}^0 : \delta_\infty^-(z)\le\alpha\},$$

where $0<\alpha<1$ is a given parameter and

$$\delta_\infty^-(z) := \left\|\left[\frac{xs}{\mu(z)} - e\right]^-\right\|_\infty$$

is a proximity measure of $z$ to the central path. Alternatively, if we denote

$$\mathcal{D}(\beta) = \{z\in\mathcal{F}^0 : xs\ge\beta\mu(z)e\},$$

then the neighborhood $\mathcal{N}_\infty^-(\alpha)$ can also be written as $\mathcal{N}_\infty^-(\alpha) = \mathcal{D}(1-\alpha)$.
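In floating point arithmetic, membership in $\mathcal{D}(\beta)$ (equivalently in $\mathcal{N}_\infty^-(1-\alpha)$ with $\beta = 1-\alpha$) reduces to the componentwise test $xs\ge\beta\mu(z)e$. A small sketch, under the assumption that feasibility $Qx+Rs=b$ is verified separately (`delta_inf_minus` and `in_D` are hypothetical names used again in later sketches):

```python
import numpy as np

def delta_inf_minus(x, s):
    """Proximity measure ||[xs/mu - e]^-||_inf of z = (x, s) to the central path."""
    mu = x.dot(s) / x.size
    return np.max(np.maximum(1.0 - x * s / mu, 0.0))

def in_D(x, s, beta):
    """z in D(beta)  <=>  x, s > 0 and xs >= beta * mu(z) * e componentwise."""
    mu = x.dot(s) / x.size
    return bool(np.all(x > 0) and np.all(s > 0) and np.all(x * s >= beta * mu))

x = np.array([1.0, 1.0, 4.0])
s = np.array([1.0, 1.0, 0.25])
print(delta_inf_minus(x, s))   # xs = [1, 1, 1], mu = 1 -> 0.0 (on the path)
print(in_D(x, s, beta=0.4))    # True
```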

3 The first order predictor-corrector method

In the predictor step we are given a point $z = (x,s)\in\mathcal{D}(\beta)$, where $\beta$ is a given parameter in the interval $(0,1)$, and we compute the affine scaling direction at $z$:

$$w = (u,v) = -F_0'(z)^{-1}F_0(z). \tag{3.1}$$

We want to move along that direction as far as possible while preserving the condition $z(\theta)\in\mathcal{D}((1-\gamma)\beta)$. The predictor step length is defined as

$$\bar\theta = \sup\left\{\hat\theta>0 : z(\theta)\in\mathcal{D}((1-\gamma)\beta),\ \forall\theta\in[0,\hat\theta]\right\}, \tag{3.2}$$

where $z(\theta) = z + \theta w$ and

$$\gamma := \frac{1-\beta}{(1+4\kappa)n+1}. \tag{3.3}$$

The output of the predictor step is the point

$$\bar z = (\bar x,\bar s) = z(\bar\theta)\in\mathcal{D}((1-\gamma)\beta). \tag{3.4}$$

In the corrector step we are given a point $\bar z\in\mathcal{D}((1-\gamma)\beta)$ and we compute the Newton direction of $F_{\mu(\bar z)}$ at $\bar z$:

$$\bar w = (\bar u,\bar v) = -F_{\mu(\bar z)}'(\bar z)^{-1}F_{\mu(\bar z)}(\bar z), \tag{3.5}$$

which is also known as the centering direction at $\bar z$. We denote

$$\bar x(\theta) = \bar x + \theta\bar u,\quad \bar s(\theta) = \bar s + \theta\bar v,\quad \bar z(\theta) = (\bar x(\theta),\bar s(\theta)),\quad \bar\mu = \mu(\bar z),\quad \bar\mu(\theta) = \mu(\bar z(\theta)), \tag{3.6}$$

and we determine the corrector step length as

$$\bar\theta_+ = \operatorname{argmin}\{\bar\mu(\theta) : \bar z(\theta)\in\mathcal{D}(\beta)\}. \tag{3.7}$$

The output of the corrector is the point

$$z_+ = (x_+,s_+) = \bar z(\bar\theta_+)\in\mathcal{D}(\beta). \tag{3.8}$$

Since $z_+\in\mathcal{D}(\beta)$ we can set $z\leftarrow z_+$ and start another predictor-corrector iteration. This leads to the following algorithm.

Algorithm 1
Given $\kappa\ge\chi(Q,R)$, $\beta\in(0,1)$ and $z^0\in\mathcal{D}(\beta)$:
  Compute $\gamma$ from (3.3); set $\mu_0\leftarrow\mu(z^0)$, $k\leftarrow0$;
  repeat
    (predictor step) Set $z\leftarrow z^k$;
      r1. Compute the affine scaling direction (3.1);
      r2. Compute the predictor step length (3.2);
      r3. Compute $\bar z$ from (3.4);
      If $\mu(\bar z) = 0$ then STOP: $\bar z$ is an optimal solution;
      If $\bar z\in\mathcal{D}(\beta)$, then set $z^{k+1}\leftarrow\bar z$, $\mu_{k+1}\leftarrow\mu(\bar z)$, $k\leftarrow k+1$, and RETURN;
    (corrector step)
      r4. Compute the centering direction (3.5);
      r5. Compute the centering step length (3.7);
      r6. Compute $z_+$ from (3.8);
      Set $z^{k+1}\leftarrow z_+$, $\mu_{k+1}\leftarrow\mu(z_+)$, $k\leftarrow k+1$, and RETURN;
  until some stopping criterion is satisfied.

A standard stopping criterion is

$$(x^k)^Ts^k\le\epsilon. \tag{3.9}$$
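The control flow of Algorithm 1 can be sketched in Python as follows. This is an illustrative skeleton only: `affine_direction`, `predictor_steplength`, `centering_direction` and `corrector_steplength` are hypothetical helper names standing for (3.10), (3.13)-(3.18), (3.20) and (3.23)-(3.25) respectively (sketches for them follow the corresponding formulas below), and `in_D` is from the previous sketch.

```python
def algorithm1(x, s, Q, R, kappa, beta=0.25, eps=1e-8, max_iter=500):
    """Skeleton of Algorithm 1 (sketch; helpers defined in later snippets)."""
    n = x.size
    gamma = (1.0 - beta) / ((1.0 + 4.0 * kappa) * n + 1.0)   # cf. (3.3)
    for k in range(max_iter):
        if x.dot(s) <= eps:                        # stopping rule (3.9)
            return x, s, k
        u, v = affine_direction(x, s, Q, R)        # predictor direction (3.10)
        theta = predictor_steplength(x, s, u, v, beta, gamma)
        x, s = x + theta * u, s + theta * v        # z_bar in D((1-gamma)*beta)
        if x.dot(s) == 0.0:
            return x, s, k                         # exact solution found
        if not in_D(x, s, beta):                   # corrector step needed
            ub, vb = centering_direction(x, s, Q, R)       # (3.20)
            t = corrector_steplength(x, s, ub, vb, beta)   # (3.23)-(3.25)
            x, s = x + t * ub, s + t * vb          # z_plus back in D(beta)
    return x, s, max_iter
```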

We will see that if the problem has a solution, then for any $\epsilon>0$ Algorithm 1 terminates in a finite number (say $K_\epsilon$) of iterations. If $\epsilon = 0$, the algorithm is likely to generate an infinite sequence. However, it may happen that at a certain iteration (say at iteration $K_0$) an exact solution is obtained, so that the algorithm terminates at iteration $K_0$. If this (unlikely) phenomenon does not happen, we set $K_0 = \infty$.

We now describe some possible implementations of the steps in the repeat segment of Algorithm 1. In step r1 of Algorithm 1, the affine scaling direction $w = (u,v)$ can be computed as the solution of the following linear system:

$$su + xv = -xs,\qquad Qu + Rv = 0. \tag{3.10}$$

In step r2, we find the largest $\theta$ that satisfies $x(\theta)s(\theta)\ge(1-\gamma)\beta\mu(\theta)$, where

$$x(\theta) = x + \theta u,\quad s(\theta) = s + \theta v,\quad \mu = \mu(z),\quad \mu(\theta) = \mu(z(\theta)) = x(\theta)^Ts(\theta)/n. \tag{3.11}$$

According to (3.10) we have

$$x(\theta)s(\theta) = (1-\theta)xs + \theta^2uv,\qquad \mu(\theta) = (1-\theta)\mu + \theta^2u^Tv/n. \tag{3.12}$$

Let us denote

$$p = \frac{xs}{\mu},\qquad q = \frac{uv}{\mu}.$$

From Lemmas 3.1 and 3.2 (to be proved later in this section) it follows that $-\kappa n\le e^Tq\le.25\,n$, which implies that the discriminant of the quadratic equation $\mu(\theta) = 0$ is always nonnegative. The smallest positive root of $\mu(\theta) = 0$ is

$$\theta_0 = \frac{2}{1+\sqrt{1-4e^Tq/n}}. \tag{3.13}$$

Therefore

$$\mu(\theta) > \mu(\theta_0) = 0,\qquad\text{for all } 0\le\theta<\theta_0. \tag{3.14}$$

The relation

$$x(\theta)s(\theta)\ge(1-\gamma)\beta\mu(\theta) \tag{3.15}$$

can be written as the following system of quadratic inequalities:

$$(1-\theta)\left(p_i-(1-\gamma)\beta\right) + \theta^2\left(q_i-(1-\gamma)\beta\,e^Tq/n\right)\ \ge\ 0,\qquad i = 1,\dots,n. \tag{3.16}$$

Since $z\in\mathcal{D}(\beta)$, the above inequalities are satisfied for $\theta = 0$.

The $i$-th inequality in (3.16) holds for all $\theta\in[0,\theta_i]$, where

$$\theta_i = \begin{cases}\infty, & \text{if }\Delta_i\le0,\\[2pt] 1, & \text{if } q_i-(1-\gamma)\beta\,e^Tq/n = 0,\\[2pt] \dfrac{2\left(p_i-(1-\gamma)\beta\right)}{p_i-(1-\gamma)\beta+\sqrt{\Delta_i}}, & \text{if }\Delta_i>0\text{ and } q_i-(1-\gamma)\beta\,e^Tq/n\ne0,\end{cases} \tag{3.17}$$

and

$$\Delta_i = \left(p_i-(1-\gamma)\beta\right)^2 - 4\left(p_i-(1-\gamma)\beta\right)\left(q_i-(1-\gamma)\beta\,e^Tq/n\right)$$

is the discriminant of the $i$-th quadratic function in (3.16). By taking

$$\theta^* = \min\{\theta_i : 0\le i\le n\} \tag{3.18}$$

we have

$$x(\theta)s(\theta)\ge(1-\gamma)\beta\mu(\theta)>0,\qquad\text{for all } 0\le\theta<\theta^*. \tag{3.19}$$

From (3.10) and (3.11) it follows that $Qx(\theta)+Rs(\theta) = b$, and by a standard continuity argument we can prove that $x(\theta)>0$, $s(\theta)>0$ for all $\theta\in[0,\theta^*)$, which implies that the point $\bar z = z(\bar\theta)$ produced by the predictor step satisfies $\bar z\in\mathcal{F}^0$. If $\bar\theta = \theta_0$, then $\mu(\bar z) = 0$, so that $\bar z$ is an optimal solution of our problem, i.e. $\bar z\in\mathcal{F}^*$. If $\mu(\bar z)>0$, then $\bar z\in\mathcal{D}((1-\gamma)\beta)$; if $\bar z\notin\mathcal{D}(\beta)$, a corrector step is performed.
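A direct transcription of (3.10) and (3.13)-(3.18) into numpy (a sketch under the paper's assumptions, e.g. nonsingularity of the Newton matrix; a dense solve is used for simplicity):

```python
import numpy as np

def affine_direction(x, s, Q, R):
    """Solve s*u + x*v = -x*s, Qu + Rv = 0, i.e. system (3.10)."""
    n = x.size
    A = np.block([[np.diag(s), np.diag(x)], [Q, R]])
    rhs = np.concatenate([-x * s, np.zeros(n)])
    w = np.linalg.solve(A, rhs)
    return w[:n], w[n:]

def predictor_steplength(x, s, u, v, beta, gamma):
    """Largest safe predictor step via (3.13)-(3.18)."""
    n = x.size
    mu = x.dot(s) / n
    p, q = x * s / mu, u * v / mu
    c = (1.0 - gamma) * beta * q.sum() / n
    # theta_0 of (3.13); (3.29) guarantees a nonnegative discriminant
    thetas = [2.0 / (1.0 + np.sqrt(max(1.0 - 4.0 * q.sum() / n, 0.0)))]
    for i in range(n):
        a_i, b_i = p[i] - (1.0 - gamma) * beta, q[i] - c
        disc = a_i * a_i - 4.0 * a_i * b_i            # Delta_i
        if disc <= 0.0:
            thetas.append(np.inf)
        elif b_i == 0.0:
            thetas.append(1.0)
        else:
            thetas.append(2.0 * a_i / (a_i + np.sqrt(disc)))   # (3.17)
    return min(thetas)                                          # (3.18)
```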

In step r4 of Algorithm 1, the centering direction can be computed as the solution of the following linear system:

$$\bar s\bar u + \bar x\bar v = \mu(\bar z)e - \bar x\bar s,\qquad Q\bar u + R\bar v = 0. \tag{3.20}$$

In step r5, we determine the corrector step length $\bar\theta_+$ as follows. From (3.6) and (3.20) it follows that

$$\bar x(\theta)\bar s(\theta) = (1-\theta)\bar x\bar s + \theta\bar\mu e + \theta^2\bar u\bar v,\qquad \bar\mu(\theta) = \bar\mu + \theta^2\bar u^T\bar v/n. \tag{3.21}$$

The relation

$$\bar x(\theta)\bar s(\theta)\ge\beta\bar\mu(\theta) \tag{3.22}$$

is equivalent to the following system of quadratic inequalities in $\theta$:

$$f_i(\theta) := \bar p_i-\beta + \theta(1-\bar p_i) + \theta^2\left(\bar q_i-\beta e^T\bar q/n\right)\ \ge\ 0,\qquad i = 1,\dots,n, \tag{3.23}$$

where

$$\bar p = \frac{\bar x\bar s}{\bar\mu},\qquad \bar q = \frac{\bar u\bar v}{\bar\mu}.$$

Let us denote the leading coefficient of $f_i(\theta)$ by $\bar\alpha_i$ and its discriminant by $\bar\Delta_i$:

$$\bar\alpha_i = \bar q_i-\beta e^T\bar q/n,\qquad \bar\Delta_i = (1-\bar p_i)^2 - 4\left(\bar q_i-\beta e^T\bar q/n\right)(\bar p_i-\beta).$$

If $\bar\Delta_i\ge0$ and $\bar\alpha_i\ne0$, we denote by $\check\theta_i$ and $\hat\theta_i$ the smallest and the largest root of $f_i(\theta)$, respectively:

$$\check\theta_i = \frac{\bar p_i-1-\operatorname{sign}(\bar\alpha_i)\sqrt{\bar\Delta_i}}{2\bar\alpha_i},\qquad \hat\theta_i = \frac{\bar p_i-1+\operatorname{sign}(\bar\alpha_i)\sqrt{\bar\Delta_i}}{2\bar\alpha_i}.$$

In the proof of Theorem 3.3 we will show that (3.23) has a solution, so that the situation $\bar\Delta_i<0$ and $\bar\alpha_i<0$ cannot occur for any $i = 1,\dots,n$. By analyzing all possible situations, we conclude that the $i$-th inequality in (3.23) is satisfied for all $\theta\in T_i$, where

$$T_i = \begin{cases}(-\infty,\infty), & \text{if }\bar\Delta_i<0\text{ and }\bar\alpha_i>0,\\ (-\infty,\check\theta_i]\cup[\hat\theta_i,\infty), & \text{if }\bar\Delta_i\ge0\text{ and }\bar\alpha_i>0,\\ [\check\theta_i,\hat\theta_i], & \text{if }\bar\Delta_i\ge0\text{ and }\bar\alpha_i<0,\\ \left(-\infty,(\bar p_i-\beta)/(\bar p_i-1)\right], & \text{if }\bar\alpha_i = 0\text{ and }\bar p_i>1,\\ \left[(\bar p_i-\beta)/(\bar p_i-1),\infty\right), & \text{if }\bar\alpha_i = 0\text{ and }\bar p_i<1,\\ (-\infty,\infty), & \text{if }\bar\alpha_i = 0\text{ and }\bar p_i = 1.\end{cases}$$

It follows that (3.22) holds for all $\theta\in T$, where

$$T = \bigcap_{i=1}^n T_i\ \cap\ \mathbb{R}_+. \tag{3.24}$$

We define the step length $\bar\theta_+$ of the corrector by

$$\bar\theta_+ = \begin{cases}\min_{\theta\in T}\theta, & \text{if }\bar u^T\bar v\ge0,\\ \max_{\theta\in T}\theta, & \text{if }\bar u^T\bar v<0.\end{cases} \tag{3.25}$$

It can be proved that $T$ is bounded below when $\bar u^T\bar v\ge0$ and bounded above when $\bar u^T\bar v<0$. In the proof of Theorem 3.3 we will show that $T$ is nonempty, so that (3.25) is well defined. We note that by (3.25) and (3.21),

$$\bar\mu(\bar\theta_+)\le\bar\mu(\theta),\qquad\forall\theta\in T. \tag{3.26}$$

With $\bar\theta_+$ determined as above, the corrector step produces a point $z_+\in\mathcal{D}(\beta)$ and another predictor-corrector iteration can be performed.
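The system (3.20) transcribes directly; for the step length, the exact interval analysis above can be coded case by case, but for a compact illustration the sketch below instead locates $T\cap[0,\theta_{\max}]$ numerically by sampling the quadratics (3.23) on a grid. This is not the paper's method, only a stand-in for (3.24)-(3.25) on small examples (`theta_max` is assumed to exceed the relevant portion of $T$):

```python
import numpy as np

def centering_direction(x, s, Q, R):
    """Solve s*u + x*v = mu*e - x*s, Qu + Rv = 0, i.e. system (3.20)."""
    n = x.size
    mu = x.dot(s) / n
    A = np.block([[np.diag(s), np.diag(x)], [Q, R]])
    rhs = np.concatenate([mu - x * s, np.zeros(n)])
    w = np.linalg.solve(A, rhs)
    return w[:n], w[n:]

def corrector_steplength(x, s, u, v, beta, theta_max=4.0, grid=100001):
    """Grid approximation of (3.24)-(3.25): sample f_i(theta) of (3.23),
    keep the theta values where all f_i >= 0 (i.e. theta in T), then take
    min or max according to the sign of u^T v (the paper proves T is
    nonempty, so `feasible` is nonempty under the paper's assumptions)."""
    n = x.size
    mu = x.dot(s) / n
    p, q = x * s / mu, u * v / mu
    c = beta * q.sum() / n
    t = np.linspace(0.0, theta_max, grid)
    # f_i(t) = (p_i - beta) + t*(1 - p_i) + t^2*(q_i - beta*e^T q / n)
    F = (p - beta)[:, None] + np.outer(1.0 - p, t) + np.outer(q - c, t**2)
    feasible = t[np.all(F >= 0.0, axis=0)]
    return feasible.min() if u.dot(v) >= 0 else feasible.max()
```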

Polynomial complexity. In what follows we prove that the step length $\bar\theta$ computed in the predictor step is bounded below by a quantity of the form $\sigma/((1+4\kappa)n+2)$, where $\sigma$ is a positive constant. This implies that Algorithm 1 has $O((1+\kappa)nL)$ iteration complexity. The following two technical lemmas will be used in the proof of our main result.

LEMMA 3.1. Assume that HLCP (2.1) is $P_*(\kappa)$, and let $w = (u,v)$ be the solution of the linear system

$$su + xv = a,\qquad Qu + Rv = 0,$$

where $z = (x,s)\in\mathbb{R}^{2n}_{++}$ and $a\in\mathbb{R}^n$ are given vectors, and consider the index sets $I_+ = \{i : u_iv_i>0\}$, $I_- = \{i : u_iv_i<0\}$. Then the following inequalities are satisfied:

$$\frac{1}{1+4\kappa}\left\|uv\right\|_\infty\ \le\ \sum_{i\in I_+}u_iv_i\ \le\ \frac14\left\|(xs)^{-1/2}a\right\|^2.$$

Proof. The second inequality is well known for the monotone HLCP and it extends trivially to the $P_*(\kappa)$ HLCP. To prove the first inequality, assume that for the index $t$ we have $|u_tv_t| = \max_i|u_iv_i| = \|uv\|_\infty$. If $u_tv_t\ge0$, then $|u_tv_t| = u_tv_t\le\sum_{i\in I_+}u_iv_i$; if $u_tv_t<0$, then $|u_tv_t| = -u_tv_t\le-\sum_{i\in I_-}u_iv_i\le(1+4\kappa)\sum_{i\in I_+}u_iv_i$. Thus the first inequality holds in either case. $\square$

LEMMA 3.2. Assume that HLCP (2.1) is $P_*(\kappa)$, and let $w = (u,v)$ be the solution of the linear system

$$su + xv = a,\qquad Qu + Rv = 0,$$

where $z = (x,s)\in\mathbb{R}^{2n}_{++}$ and $a\in\mathbb{R}^n$ are given vectors. Then the following inequality holds:

$$u^Tv\ \ge\ -\kappa\left\|(xs)^{-1/2}a\right\|^2. \tag{3.27}$$

Proof. Using Lemma 3.1 we can write

$$u^Tv = \sum_{i\in I_+}u_iv_i + \sum_{i\in I_-}u_iv_i = (1+4\kappa)\sum_{i\in I_+}u_iv_i + \sum_{i\in I_-}u_iv_i - 4\kappa\sum_{i\in I_+}u_iv_i\ \ge\ -4\kappa\sum_{i\in I_+}u_iv_i\ \ge\ -\kappa\left\|(xs)^{-1/2}a\right\|^2.\ \square$$

The following result implies that Algorithm 1 has $O((1+\kappa)nL)$ iteration complexity.

THEOREM 3.3. Algorithm 1 is well defined and

$$\mu_{k+1}\ \le\ \left(1-\frac{3\sqrt{(1-\beta)\beta}}{2\left((1+4\kappa)n+2\right)}\right)\mu_k,\qquad k = 0,1,\dots \tag{3.28}$$

Proof. According to Lemma 3.1 and Lemma 3.2 we have

$$\|q\|_\infty\le.25\,(1+4\kappa)n,\qquad -\kappa n\le e^Tq\le\sum_{i\in I_+(w)}q_i\le.25\,n. \tag{3.29}$$

Moreover, in the predictor step we have $z\in\mathcal{D}(\beta)$, so that $p_i-(1-\gamma)\beta\ge\gamma\beta>0$. Hence the quantity defined in (3.17) satisfies

$$\theta_i\ \ge\ \frac{2\left(p_i-(1-\gamma)\beta\right)}{p_i-(1-\gamma)\beta+\sqrt{\left(p_i-(1-\gamma)\beta\right)^2+4\left(p_i-(1-\gamma)\beta\right)\left(\|q\|_\infty+\frac14(1-\gamma)\beta\right)}}.$$

Since the function $t\mapsto 2t/(t+\sqrt{t^2+4at})$ is increasing on $(0,\infty)$ for any $a>0$, we deduce from the above that

$$\theta_i\ \ge\ \frac{2\beta\gamma}{\beta\gamma+\sqrt{\beta^2\gamma^2+4\beta\gamma\left(\|q\|_\infty+1/4\right)}} = \frac{2}{1+\sqrt{1+(\beta\gamma)^{-1}\left(4\|q\|_\infty+1\right)}}\ \ge\ \frac{2}{1+\sqrt{1+(\beta\gamma)^{-1}\left((1+4\kappa)n+1\right)}} = \frac{2\sqrt{\beta(1-\beta)}}{\sqrt{\beta(1-\beta)}+\sqrt{\beta(1-\beta)+\left((1+4\kappa)n+1\right)^2}}.$$

Since

$$\sqrt{\beta(1-\beta)}+\sqrt{\beta(1-\beta)+\left((1+4\kappa)n+1\right)^2}\ \le\ .5+\sqrt{.25+\left((1+4\kappa)n+1\right)^2}\ <\ (1+4\kappa)n+2,$$

we deduce that

$$\theta_i\ >\ \tilde\theta := \frac{2\sqrt{(1-\beta)\beta}}{(1+4\kappa)n+2}. \tag{3.30}$$

Notice that for any $\kappa\ge0$ and $n\ge1$,

$$\theta_0\ \ge\ \frac{2}{1+\sqrt{1+4\kappa}}\ >\ \tilde\theta.$$

It follows that the quantity defined in (3.18) satisfies $\theta^*>\tilde\theta$. Relations (3.19), (3.12) and (3.29) imply

$$\bar\mu = \mu(\bar\theta)\ \le\ \mu(\tilde\theta)\ \le\ \left((1-\tilde\theta)+.25\,\tilde\theta^2\right)\mu = \left(1-(1-.25\,\tilde\theta)\,\tilde\theta\right)\mu.$$

Since we assume that $n\ge2$ and $\kappa\ge0$, we have $(1+4\kappa)n+2\ge4$ and $\sqrt{(1-\beta)\beta}\le.5$, so that

$$.25\,\tilde\theta = \frac{\sqrt{(1-\beta)\beta}}{2\left((1+4\kappa)n+2\right)}\ \le\ \frac1{16},$$

and we obtain

$$\bar\mu\ \le\ \left(1-\frac{15}{16}\tilde\theta\right)\mu = \left(1-\frac{15\sqrt{(1-\beta)\beta}}{8\left((1+4\kappa)n+2\right)}\right)\mu. \tag{3.31}$$

Let us analyze now the corrector. Since $\bar z\in\mathcal{D}((1-\gamma)\beta)$, we have

$$\left\|(\bar x\bar s)^{-1/2}\left(\bar\mu e-\bar x\bar s\right)\right\|^2 = \sum_{i=1}^n\frac{(\bar\mu-\bar x_i\bar s_i)^2}{\bar x_i\bar s_i} = \bar\mu\sum_{i=1}^n\frac{\bar\mu}{\bar x_i\bar s_i}-2n\bar\mu+\sum_{i=1}^n\bar x_i\bar s_i\ \le\ \frac{n\bar\mu}{(1-\gamma)\beta}-n\bar\mu = \xi n\bar\mu,\quad\text{where}\quad\xi := \frac{1-(1-\gamma)\beta}{(1-\gamma)\beta}, \tag{3.32}$$

and by applying Lemma 3.1 we deduce that

$$\|\bar u\bar v\|_\infty\ \le\ \frac{(1+4\kappa)\xi n}{4}\bar\mu,\qquad \sum_{i\in I_+(\bar w)}\bar u_i\bar v_i\ \le\ \frac{\xi n}{4}\bar\mu. \tag{3.33}$$

By substituting the expression (3.3) of $\gamma$ into the definition of $\xi$, we get

$$\xi = \frac{(1-\beta)\left((1+4\kappa)n+1+\beta\right)}{\beta\left((1+4\kappa)n+\beta\right)}.$$

From (3.21) it follows that

$$\frac{\bar x(\theta)\bar s(\theta)}{\bar\mu}\ \ge\ (1-\theta)(1-\gamma)\beta+\theta-\frac{(1+4\kappa)n\,\xi\theta^2}{4} = (1-\gamma)\beta+\theta\left(1-(1-\gamma)\beta\right)-\frac{(1+4\kappa)n\,\xi\theta^2}{4}$$

and

$$\bar\mu(\theta)\ \le\ \left(1+.25\,\xi\theta^2\right)\bar\mu. \tag{3.34}$$

Therefore

$$\frac{\bar x(\theta)\bar s(\theta)-\beta\bar\mu(\theta)}{\bar\mu}\ \ge\ g(\theta) := -\gamma\beta+\theta\left(1-(1-\gamma)\beta\right)-.25\,\xi\left((1+4\kappa)n+\beta\right)\theta^2.$$

Using (3.33) together with the definition (3.3) of $\gamma$, we obtain

$$g(\theta) = \frac{1-\beta}{4\beta\left((1+4\kappa)n+1\right)}\left[-\left(\left((1+4\kappa)n+1\right)\theta-2\beta\right)\left(\left((1+4\kappa)n+1+\beta\right)\theta-2\beta\right)+2\beta^2\theta\right].$$

Since

$$g\!\left(\frac{2\beta}{(1+4\kappa)n+1+\beta}\right) = \frac{(1-\beta)\beta^2}{\left((1+4\kappa)n+1\right)\left((1+4\kappa)n+1+\beta\right)}\ \ge\ 0,$$

we deduce that $2\beta/\left((1+4\kappa)n+1+\beta\right)\in T$. By using (3.34) we then have

$$\mu_+ = \bar\mu(\bar\theta_+)\ \le\ \bar\mu\!\left(\frac{2\beta}{(1+4\kappa)n+1+\beta}\right)\ \le\ \left(1+\frac{(1-\beta)\beta}{\left((1+4\kappa)n+\beta\right)\left((1+4\kappa)n+1+\beta\right)}\right)\bar\mu.$$

Given that $n\ge2$ and $\kappa\ge0$ imply $\left((1+4\kappa)n+\beta\right)\left((1+4\kappa)n+1+\beta\right)\ge\frac23\left((1+4\kappa)n+1\right)^2$, the above inequality yields

$$\mu_+\ \le\ \left(1+\frac{3(1-\beta)\beta}{2\left((1+4\kappa)n+1\right)^2}\right)\bar\mu. \tag{3.35}$$

Finally, by using (3.31), we obtain

$$\mu_+\ \le\ \left(1-\frac{15\sqrt{(1-\beta)\beta}}{8\left((1+4\kappa)n+2\right)}\right)\left(1+\frac{3(1-\beta)\beta}{2\left((1+4\kappa)n+1\right)^2}\right)\mu\ \le\ \left(1-\frac{15\sqrt{(1-\beta)\beta}}{8\left((1+4\kappa)n+2\right)}\right)\left(1+\frac{3(1-\beta)\beta}{2(1+4\kappa)n\left((1+4\kappa)n+2\right)}\right)\mu$$
$$\le\ \left(1-\frac{15\sqrt{(1-\beta)\beta}}{8\left((1+4\kappa)n+2\right)}+\frac{3(1-\beta)\beta}{2(1+4\kappa)n\left((1+4\kappa)n+2\right)}\right)\mu\ =\ \left(1-\left(\frac{15}{8}-\frac{3\sqrt{(1-\beta)\beta}}{2(1+4\kappa)n}\right)\frac{\sqrt{(1-\beta)\beta}}{(1+4\kappa)n+2}\right)\mu\ \le\ \left(1-\frac{3\sqrt{(1-\beta)\beta}}{2\left((1+4\kappa)n+2\right)}\right)\mu.$$

The last inequality holds since $\sqrt{(1-\beta)\beta}\le.5$ and $(1+4\kappa)n\ge2$ imply

$$\frac{15}{8}-\frac{3\sqrt{(1-\beta)\beta}}{2(1+4\kappa)n}\ \ge\ \frac{15}{8}-\frac38 = \frac32. \tag{3.36}$$

The proof is complete. $\square$

The next corollary is an immediate consequence of the above theorem.

COROLLARY 3.4. Algorithm 1 with stopping criterion (3.9) produces a point $z^k\in\mathcal{D}(\beta)$ with $(x^k)^Ts^k\le\epsilon$ in at most $O\!\left((1+\kappa)n\log\left((x^0)^Ts^0/\epsilon\right)\right)$ iterations.

Quadratic convergence. We next prove that the sequence $\{\mu(z^k)\}$ is quadratically convergent to zero, in the sense that

$$\mu_{k+1} = O(\mu_k^2). \tag{3.37}$$

We do need the assumption that our $P_*(\kappa)$ HLCP is nondegenerate, i.e. that the set of all strictly complementary solutions

$$\mathcal{F}^\# := \{z = (x,s)\in\mathcal{F}^* : x+s>0\}$$

is nonempty. In fact, this assumption is not restrictive, and it is needed by a large class of interior-point methods using only first order derivatives in order to obtain superlinear convergence, including the MTY predictor-corrector method [4, 13]. The following result was proved for the standard monotone LCP in [26] and for the monotone HLCP in [4]; its extension to the $P_*(\kappa)$ HLCP is straightforward (see [2]).

LEMMA 3.5. If $\mathcal{F}^\#\ne\emptyset$, then the solution $w = (u,v)$ of (3.10) satisfies

$$|u_iv_i| = O(\mu^2),\qquad i\in\{1,2,\dots,n\},$$

where $\mu = \mu(z)$ is given by (1.2).

With the help of this lemma, the quadratic convergence result of [15] extends automatically to our case.

THEOREM 3.6. If the HLCP has a strictly complementary solution, then the sequence $\{\mu_k\}$ generated by Algorithm 1 with no stopping criterion converges quadratically to zero, in the sense that (3.37) is satisfied.

Algorithm 1 depends on a given parameter $\kappa\ge\chi(Q,R)$ because of the choice (3.3) of $\gamma$. However, in many applications it may be very expensive to find a good upper bound for the handicap $\chi(Q,R)$ [23]. Therefore we propose an algorithm that does not depend on $\kappa$. Initially we set $\kappa = 1$ and use Algorithm 1 for this value of $\kappa$. If at a certain iteration the corrector fails to produce a point in $\mathcal{D}(\beta)$, we conclude that the current value of $\kappa$ is too small; in this case we double the value of $\kappa$ and restart Algorithm 1 from the last point produced in $\mathcal{D}(\beta)$. Clearly we have to double the value of $\kappa$ at most $\lceil\log_2\chi(Q,R)\rceil$ times. This leads to the following algorithm.

Algorithm 1A
Given $\beta\in(0,1)$ and $z^0\in\mathcal{D}(\beta)$:
  Set $\kappa\leftarrow1$, $\mu_0\leftarrow\mu(z^0)$, $k\leftarrow0$;
  repeat
    Compute $\gamma$ from (3.3);
    (predictor step) Set $z\leftarrow z^k$;
      r1. Compute the affine scaling direction (3.1);
      r2. Compute the predictor step length (3.2);
      r3. Compute $\bar z$ from (3.4);
      If $\mu(\bar z) = 0$ then STOP: $\bar z$ is an optimal solution;
      If $\bar z\in\mathcal{D}(\beta)$, then set $z^{k+1}\leftarrow\bar z$, $\mu_{k+1}\leftarrow\mu(\bar z)$, $k\leftarrow k+1$, and RETURN;
    (corrector step)
      r4. Compute the centering direction (3.5);
      r5. Compute the centering step length (3.7);
      r6. Compute $z_+$ from (3.8);
      If $z_+\in\mathcal{D}(\beta)$, set $z^{k+1}\leftarrow z_+$, $\mu_{k+1}\leftarrow\mu(z_+)$, $k\leftarrow k+1$, and RETURN;
      else, set $\kappa\leftarrow2\kappa$, $z^{k+1}\leftarrow z^k$, $\mu_{k+1}\leftarrow\mu(z^k)$, $k\leftarrow k+1$, and RETURN;
  until some stopping criterion is satisfied.
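Schematically, one iteration of Algorithm 1A wraps the predictor-corrector step of Algorithm 1 in a $\kappa$-doubling safeguard; a sketch reusing the hypothetical helpers from the earlier snippets:

```python
def algorithm_1A_step(x, s, Q, R, kappa, beta):
    """One iteration of Algorithm 1A: on corrector failure the iterate is
    kept and kappa is doubled, which can happen at most about
    log2(chi(Q, R)) times over the whole run."""
    n = x.size
    gamma = (1.0 - beta) / ((1.0 + 4.0 * kappa) * n + 1.0)   # (3.3)
    u, v = affine_direction(x, s, Q, R)
    theta = predictor_steplength(x, s, u, v, beta, gamma)
    xb, sb = x + theta * u, s + theta * v
    if in_D(xb, sb, beta):
        return xb, sb, kappa                  # predictor landed in D(beta)
    ub, vb = centering_direction(xb, sb, Q, R)
    t = corrector_steplength(xb, sb, ub, vb, beta)
    xp, sp = xb + t * ub, sb + t * vb
    if in_D(xp, sp, beta):
        return xp, sp, kappa                  # corrector accepted
    return x, s, 2.0 * kappa                  # reject the step, double kappa
```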

Using Theorem 3.3, Corollary 3.4 and Theorem 3.6, we obtain the following result.

THEOREM 3.7. Algorithm 1A with stopping criterion (3.9) produces a point $z^k\in\mathcal{D}(\beta)$ with $(x^k)^Ts^k\le\epsilon$ in at most $O\!\left((1+\chi(Q,R))\,n\log\left((x^0)^Ts^0/\epsilon\right)\right)$ iterations. If the HLCP has a strictly complementary solution, then the sequence $\{\mu_k\}$ generated by Algorithm 1A with no stopping criterion converges quadratically to zero, in the sense that (3.37) is satisfied.

Proof. Let $\bar\kappa$ be the largest value of $\kappa$ used in Algorithm 1A; clearly $\bar\kappa<2\chi(Q,R)$. Consider first an iteration $k$ of Algorithm 1A at which $\kappa<\chi(Q,R)$. If the corrector step is accepted, i.e. if $z_+\in\mathcal{D}(\beta)$, then $z^{k+1} = z_+$ and by inspecting the proof of (3.28) it follows that

$$\mu_{k+1}\ \le\ \left(1-\frac{3\sqrt{(1-\beta)\beta}}{2\left((1+4\chi(Q,R))n+2\right)}\right)\mu_k\ \le\ \left(1-\frac{3\sqrt{(1-\beta)\beta}}{2\left((1+8\chi(Q,R))n+2\right)}\right)\mu_k.$$

This is easily seen because Lemma 3.1 and Lemma 3.2 hold for $\kappa = \chi(Q,R)$, while the bound on the predictor step length depends on $\gamma$, which is decreasing in $\kappa$. If $\kappa\ge\chi(Q,R)$, then the corrector step is never rejected, and from (3.28) and the fact that $\kappa\le\bar\kappa<2\chi(Q,R)$ we have

$$\mu_{k+1}\ \le\ \left(1-\frac{3\sqrt{(1-\beta)\beta}}{2\left((1+4\kappa)n+2\right)}\right)\mu_k\ \le\ \left(1-\frac{3\sqrt{(1-\beta)\beta}}{2\left((1+8\chi(Q,R))n+2\right)}\right)\mu_k.$$

Since there can be at most $\lceil\log_2\bar\kappa\rceil$ rejections, we obtain the desired complexity result. In case the problem is nondegenerate, we always have $\mu_+ = O(\mu^2)$, and since there are only a finite number of corrector rejections, it follows that $\mu_{k+1} = O(\mu_k^2)$ for sufficiently large $k$. $\square$

Let us end this section by remarking that even if $\chi(Q,R)$ is known, it is not clear whether Algorithm 1 with $\kappa = \chi(Q,R)$ is more efficient than Algorithm 1A on a particular problem. Indeed, it may happen that the corrector step in Algorithm 1A is accepted for smaller values of $\kappa$ at some iterations, and those iterations will yield a better reduction of the complementarity gap.

4 A higher order predictor-corrector method

The higher order predictor uses higher derivatives of the central path. Given a point $z = (x,s)\in\mathcal{D}(\beta)$, we consider the curve given by

$$z(\theta) = z+\sum_{i=1}^m w^i\theta^i, \tag{4.1}$$

where $w^1$ is the analogue of the affine scaling direction used in the first order predictor, and the $w^i$ are directions related to the higher derivatives of the central path (see [20]). The vectors $w^i = (u^i,v^i)$ can be obtained as the solutions of the following linear systems:

$$\begin{aligned}
su^1+xv^1 &= \gamma\mu e-(1+\epsilon)xs, & Qu^1+Rv^1 &= 0,\\
su^2+xv^2 &= \epsilon xs-u^1v^1, & Qu^2+Rv^2 &= 0,\\
su^i+xv^i &= -\sum_{j=1}^{i-1}u^jv^{i-j}, & Qu^i+Rv^i &= 0,\quad i = 3,\dots,m,
\end{aligned} \tag{4.2}$$

where

$$\epsilon = \begin{cases}0, & \text{if the HLCP is nondegenerate},\\ 1, & \text{if the HLCP is degenerate}.\end{cases} \tag{4.3}$$

The $m$ linear systems above have the same matrix, so that their numerical solution requires only one matrix factorization and $m$ backsolves. This involves $O(n^3)+m\,O(n^2)$ arithmetic operations. Since the case $m = 1$ has been analyzed in the previous section, for the remainder of this paper we assume that $m\ge2$.

Given the predictor (4.1), we want to choose the step size $\theta$ so that $\mu(\theta)$ is as small as possible while still keeping the point in the neighborhood $\mathcal{D}((1-\gamma)\beta)$. We define

$$\check\theta = \sup\left\{\hat\theta>0 : z(\theta)\in\mathcal{D}((1-\gamma)\beta),\ \forall\theta\in[0,\hat\theta]\right\}, \tag{4.4}$$

where $\gamma$ is given by (3.3), and $\beta$ is a given parameter chosen in the interval $(0,1)$. From (4.1)-(4.2) we deduce that

$$x(\theta)s(\theta) = (1-\theta)^{1+\epsilon}xs+\sum_{i=m+1}^{2m}\theta^ih_i,\qquad \mu(\theta) = (1-\theta)^{1+\epsilon}\mu+\sum_{i=m+1}^{2m}\theta^i\,\frac{e^Th_i}{n},\qquad\text{where}\quad h_i = \sum_{j=i-m}^m u^jv^{i-j}. \tag{4.5}$$

Therefore the computation of (4.4) involves the solution of a system of polynomial inequalities of order $2m$ in $\theta$. While it is possible to obtain an accurate lower bound for the exact solution by using a line search, in the present paper we only give a simple lower bound in explicit form, which is sufficiently good for proving our theoretical results. The predictor step length $\bar\theta$ is chosen to minimize $\mu(\theta)$ over the interval $[0,\check\theta]$, i.e.

$$\bar\theta = \operatorname{argmin}\left\{\mu(\theta) : \theta\in[0,\check\theta]\right\},\qquad \bar z = z(\bar\theta). \tag{4.6}$$

We have $\bar z\in\mathcal{D}((1-\gamma)\beta)$ by construction. Using the same corrector as in the previous section, we obtain $z_+\in\mathcal{D}(\beta)$. By replacing the predictor in Algorithm 1 with the predictor described above, we obtain:

Algorithm 2
Given $\kappa\ge\chi(Q,R)$, $\beta\in(0,1)$, an integer $m\ge2$, and $z^0\in\mathcal{D}(\beta)$:
  Compute $\gamma$ from (3.3); set $\mu_0\leftarrow\mu(z^0)$, $k\leftarrow0$;
  repeat
    (predictor step)
      Compute the directions $w^i = (u^i,v^i)$, $i = 1,\dots,m$, by solving (4.2);
      Compute $\check\theta$ from (4.4);
      Compute $\bar z$ from (4.6);
      If $\mu(\bar z) = 0$ then STOP: $\bar z$ is an optimal solution;
      If $\bar z\in\mathcal{D}(\beta)$, then set $z^{k+1}\leftarrow\bar z$, $\mu_{k+1}\leftarrow\mu(\bar z)$, $k\leftarrow k+1$, and RETURN;
    (corrector step)
      Compute the centering direction (3.5) by solving (3.20);
      Compute the centering step length (3.7);
      Compute $z_+$ from (3.8);
      Set $z^{k+1}\leftarrow z_+$, $\mu_{k+1}\leftarrow\mu(z_+)$, $k\leftarrow k+1$, and RETURN;
  until some stopping criterion is satisfied.
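Since the $m$ systems in (4.2) share their coefficient matrix, a single factorization plus $m$ backsolves suffices, as noted above. A sketch of this with SciPy's dense LU (assumptions: dense $Q$, $R$, with $\gamma$ and $\epsilon$ as in (3.3) and (4.3); `higher_order_directions` is a hypothetical name):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def higher_order_directions(x, s, Q, R, m, gamma, eps):
    """Directions w^1..w^m of (4.2): one factorization, m backsolves,
    i.e. O(n^3) + m*O(n^2) arithmetic operations in the dense case."""
    n = x.size
    mu = x.dot(s) / n
    A = np.block([[np.diag(s), np.diag(x)], [Q, R]])
    piv = lu_factor(A)                         # single O(n^3) factorization
    U, V = [], []
    for i in range(1, m + 1):
        if i == 1:
            rhs = gamma * mu * np.ones(n) - (1.0 + eps) * x * s
        elif i == 2:
            rhs = eps * x * s - U[0] * V[0]
        else:                                  # i >= 3, third line of (4.2)
            rhs = -sum(U[j] * V[i - 2 - j] for j in range(i - 1))
        w = lu_solve(piv, np.concatenate([rhs, np.zeros(n)]))  # O(n^2) backsolve
        U.append(w[:n])
        V.append(w[n:])
    return U, V
```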

Let us denote

$$\eta_i = \left\|Du^i+D^{-1}v^i\right\|,\qquad\text{where}\quad D = X^{-1/2}S^{1/2}.$$

The following lemma will be used in the proof of the main result of this section.

LEMMA 4.1. The solution of (4.2) satisfies

$$\frac{1}{\sqrt{1+2\kappa}}\sqrt{\|Du^i\|^2+\|D^{-1}v^i\|^2}\ \le\ \eta_i\ \le\ \alpha_i\left(\frac{1+2\kappa}{2}\right)^{i-1}(1+\epsilon)^i\sqrt{\beta\mu}\left(\sqrt{n/\beta}\right)^i,$$

where the sequence $\alpha_i = \frac1i\binom{2(i-1)}{i-1}\le4^{i-1}$ (the Catalan numbers) is the solution of the recurrence scheme

$$\alpha_1 = 1,\qquad \alpha_i = \sum_{j=1}^{i-1}\alpha_j\alpha_{i-j}.$$

Proof. The first inequality follows immediately, since by using (4.2) and Lemma 3.2 we have

$$\eta_i^2 = \|Du^i\|^2+2(u^i)^Tv^i+\|D^{-1}v^i\|^2\ \ge\ \|Du^i\|^2+\|D^{-1}v^i\|^2-2\kappa\eta_i^2.$$

By multiplying the first equations of (4.2) by $(xs)^{-1/2}$ we obtain

$$\begin{aligned}
Du^1+D^{-1}v^1 &= \gamma\mu(xs)^{-1/2}-(1+\epsilon)(xs)^{1/2},\\
Du^2+D^{-1}v^2 &= \epsilon(xs)^{1/2}-(xs)^{-1/2}u^1v^1,\\
Du^i+D^{-1}v^i &= -(xs)^{-1/2}\sum_{j=1}^{i-1}u^jv^{i-j},\qquad i = 3,\dots,m.
\end{aligned}$$

Using Lemma 3.2, Corollary 2.3 of [14], and the fact that $z\in\mathcal{D}(\beta)$, we deduce that

$$\eta_1\le(1+\epsilon)\sqrt{n\mu},\qquad \eta_2\le\sqrt{\epsilon^2n\mu-2\epsilon(u^1)^Tv^1+\frac{\|u^1v^1\|^2}{\beta\mu}}$$

$$\le\ \sqrt{\epsilon^2n\mu+2\epsilon\kappa\eta_1^2+\frac1{8\beta\mu}\left(1+4\kappa+8\kappa^2\right)\eta_1^4}\ =\ \sqrt{\epsilon^2n\mu+2\epsilon\kappa(1+\epsilon)^2n\mu+\frac1{8\beta\mu}\left(1+4\kappa+8\kappa^2\right)(1+\epsilon)^4n^2\mu^2}.$$

We want to prove that the claimed inequality holds for $i = 2$, i.e., that

$$\eta_2^2\ \le\ \frac{(1+2\kappa)^2n^2\mu(1+\epsilon)^4}{4\beta}.$$

This inequality holds provided

$$\epsilon^2n\mu+2\epsilon\kappa(1+\epsilon)^2n\mu\ \le\ \frac{n^2\mu}{8\beta}(1+4\kappa)(1+\epsilon)^4,$$

which is trivially satisfied for both $\epsilon = 0$ and $\epsilon = 1$. Finally, for $i\ge3$ we have

$$\eta_i\ \le\ \frac1{\sqrt{\beta\mu}}\sum_{j=1}^{i-1}\left\|Du^j\right\|\left\|D^{-1}v^{i-j}\right\|,\qquad i = 3,\dots,m.$$

Since

$$\|Du^j\|\|D^{-1}v^{i-j}\|+\|Du^{i-j}\|\|D^{-1}v^j\|\ \le\ \left(\|Du^j\|^2+\|D^{-1}v^j\|^2\right)^{1/2}\left(\|Du^{i-j}\|^2+\|D^{-1}v^{i-j}\|^2\right)^{1/2}\ \le\ (1+2\kappa)\,\eta_j\eta_{i-j},$$

we obtain

$$\eta_i\ \le\ \frac{1+2\kappa}{2\sqrt{\beta\mu}}\sum_{j=1}^{i-1}\eta_j\eta_{i-j},\qquad i = 2,\dots,m.$$

The required inequalities are then easily proved by mathematical induction. $\square$

THEOREM 4.2. Algorithm 2 is well defined and for each $n\ge14$ we have

$$\mu_{k+1}\ \le\ \left(1-\frac{.0011\sqrt{\beta^3(1-\beta)}}{(1+2\kappa)^{1+1/(m(m+1))}\sqrt{n}\left((1+4\kappa)n+2\right)^{1/(m+1)}}\right)\mu_k,\qquad k = 0,1,\dots$$

Proof. An upper bound on $\|h_i\|$, $i = m+1,\dots,2m$, can be obtained by writing

$$\|h_i\| = \left\|\sum_{j=i-m}^m u^jv^{i-j}\right\|\ \le\ \sum_{j=1}^{i-1}\left\|Du^j\right\|\left\|D^{-1}v^{i-j}\right\|\ \le\ \frac12\sum_{j=1}^{i-1}\left(\|Du^j\|\|D^{-1}v^{i-j}\|+\|Du^{i-j}\|\|D^{-1}v^j\|\right)$$

$$\le\ \frac{1+2\kappa}{2}\sum_{j=1}^{i-1}\eta_j\eta_{i-j}\ \le\ \alpha_i\left(\frac{1+2\kappa}{2}\right)^{i-1}(1+\epsilon)^i\beta\mu\left(\sqrt{n/\beta}\right)^i\ \le\ \frac{\beta\mu}{2(1+2\kappa)}\left(4(1+2\kappa)\sqrt{n/\beta}\right)^i,$$

where we used the recurrence $\alpha_i = \sum_{j=1}^{i-1}\alpha_j\alpha_{i-j}$ together with $\alpha_i\le4^{i-1}$ and $1+\epsilon\le2$. It follows that

$$\sum_{i=m+1}^{2m}\theta^i\|h_i\|\ \le\ \frac{\beta\mu}{2(1+2\kappa)}\left(4(1+2\kappa)\theta\sqrt{n/\beta}\right)^{m+1}\frac{1-\left(4(1+2\kappa)\theta\sqrt{n/\beta}\right)^m}{1-4(1+2\kappa)\theta\sqrt{n/\beta}}.$$

For the remainder of this proof we assume that in the predictor step we have

$$\theta\ \le\ \frac{\sqrt\beta}{12(1+2\kappa)\sqrt{n}},$$

which in particular ensures $4(1+2\kappa)\theta\sqrt{n/\beta}\le1/3$, a necessary condition for the last inequality above to be useful. In this case, since $m\ge2$ and $\kappa\ge0$, we deduce that

$$\sum_{i=m+1}^{2m}\theta^i\|h_i\|\ \le\ \frac{3\beta\mu}{4(1+2\kappa)}\left(4(1+2\kappa)\theta\sqrt{n/\beta}\right)^{m+1}. \tag{4.7}$$

Since $(e^Ta/n)e$ is the projection of the vector $a$ onto $e$, we have

$$\left\|a-\frac{e^Ta}{n}e\right\|\le\|a\|\qquad\text{and}\qquad\frac{|e^Ta|}{\sqrt{n}}\le\|a\|.$$

Therefore, $z\in\mathcal{D}(\beta)$ and (4.5) imply

$$x(\theta)s(\theta)-(1-\gamma)\beta\mu(\theta)e\ =\ (1-\theta)^{1+\epsilon}\left(xs-\beta\mu e\right)+\gamma\beta(1-\theta)^{1+\epsilon}\mu e+\sum_{i=m+1}^{2m}\theta^i\left(h_i-(1-\gamma)\beta\,\frac{e^Th_i}{n}e\right)$$

$$\ge\ \left[\gamma\beta(1-\theta)^{1+\epsilon}\mu-\left(1+(1-\gamma)\beta\right)\sum_{i=m+1}^{2m}\theta^i\|h_i\|\right]e.$$

Therefore the inequality $x(\theta)s(\theta)\ge(1-\gamma)\beta\mu(\theta)$ holds for any $\theta$ satisfying (4.7) and

$$\gamma(1-\theta)^{1+\epsilon}\ \ge\ \frac{3}{2(1+2\kappa)}\left(4(1+2\kappa)\theta\sqrt{n/\beta}\right)^{m+1}.$$

It is easy to check that both the above inequality and (4.7) are satisfied by

$$\tilde\theta := \frac{\sqrt\beta}{12(1+2\kappa)^{1+1/(m(m+1))}\sqrt{n}}\,\gamma^{1/(m+1)}. \tag{4.8}$$

Moreover, $\tilde\theta$ defined above also satisfies

$$\mu(\tilde\theta)\ \le\ \left[(1-\tilde\theta)+\frac{3\beta}{4(1+2\kappa)\sqrt{n}}\left(4(1+2\kappa)\tilde\theta\sqrt{n/\beta}\right)^{m+1}\right]\mu\ \le\ \left(1-\frac59\tilde\theta\right)\mu. \tag{4.9}$$

From (4.4) it follows that $\tilde\theta\le\check\theta$, and according to (4.6) and (4.9) we have

$$\bar\mu = \mu(\bar\theta)\ \le\ \mu(\tilde\theta)\ \le\ \left(1-\frac{5\sqrt\beta}{108(1+2\kappa)^{1+1/(m(m+1))}\sqrt{n}}\left(\frac{1-\beta}{(1+4\kappa)n+1}\right)^{1/(m+1)}\right)\mu. \tag{4.10}$$

Let us analyze now the corrector. From (3.35) and (4.10) we have

$$\mu_+\ \le\ \left(1+\frac{3\beta(1-\beta)}{2\left((1+4\kappa)n+1\right)^2}\right)\bar\mu\ \le\ \left(1+\frac{3\beta(1-\beta)}{2\left((1+4\kappa)n+1\right)^2}\right)\left(1-\frac{5\sqrt\beta}{108(1+2\kappa)^{1+1/(m(m+1))}\sqrt{n}}\left(\frac{1-\beta}{(1+4\kappa)n+1}\right)^{1/(m+1)}\right)\mu$$

$$\le\ \left(1-(1-\lambda)\,\frac{5\sqrt{\beta^3(1-\beta)}}{108(1+2\kappa)^{1+1/(m(m+1))}\sqrt{n}\left((1+4\kappa)n+2\right)^{1/(m+1)}}\right)\mu,$$

where

$$\lambda = \frac{162}{5}\sqrt{\frac{1-\beta}{\beta}}\;\frac{(1+2\kappa)^{1+1/(m(m+1))}\sqrt{n}\left((1+4\kappa)n+2\right)^{1/(m+1)}}{\left((1+4\kappa)n+1\right)^2}. \tag{4.11}$$

Using this bound on $\lambda$, together with $n\ge14$, $m\ge2$, $\kappa\ge0$, and the fact that $\beta^3(1-\beta)^4$ is maximized for $\beta = 3/7$, we deduce that

$$\mu_+\ \le\ \left(1-\frac{.0011\sqrt{\beta^3(1-\beta)}}{(1+2\kappa)^{1+1/(m(m+1))}\sqrt{n}\left((1+4\kappa)n+2\right)^{1/(m+1)}}\right)\mu,$$

which completes the proof. $\square$

The next complexity result follows immediately from the above theorem.

COROLLARY 4.3. Algorithm 2 with stopping criterion (3.9) produces a point $z^k\in\mathcal{D}(\beta)$ with $(x^k)^Ts^k\le\epsilon$ in at most $O\!\left((1+\kappa)^{1+1/m}n^{1/2+1/(m+1)}\log\left((x^0)^Ts^0/\epsilon\right)\right)$ iterations.

Proof. The result follows immediately from the fact that

$$(1+2\kappa)^{1+1/(m(m+1))}\left(1+4\kappa\right)^{1/(m+1)} = O\!\left((1+\kappa)^{1+1/m}\right).\ \square$$

COROLLARY 4.4. Algorithm 2 with stopping criterion (3.9) and with $m = \Omega(\log n)$ produces a point $z^k\in\mathcal{D}(\beta)$ with $(x^k)^Ts^k\le\epsilon$ in at most $O\!\left((1+\kappa)^{1+1/m}\sqrt{n}\log\left((x^0)^Ts^0/\epsilon\right)\right)$ iterations.

Proof. Since $m = \Omega(\log n)$, there is a constant $C_1$ such that $m\ge C_1\log n$, so that $n^{1/(m+1)}\le n^{1/(C_1\log n+1)}\le C_2$, where $C_2\le\exp(1/C_1)$ is a constant. The result thus follows immediately from the previous corollary. $\square$

An obvious choice for $m$ is $m = \lceil\log n\rceil-1$. However, since $\lim_{n\to\infty}n^{1/\lceil n^\omega\rceil} = 1$ for any $\omega\in(0,1)$, we can also choose $m = \lceil n^\omega\rceil-1$ for some value of $\omega\in(0,1)$. This choice was initially suggested by Roos (private communication) and subsequently used in [28] and [15]. In Table 4.1 we give the values of $m$ for this choice with $\omega = 1/10$. The numerical implementation of a predictor of order $m$ requires a matrix factorization and $m$ backsolves. If the matrices $Q$ and $R$ are full, the cost of a matrix factorization is $O(n^3)$ arithmetic operations, while the cost of a backsolve is $O(n^2)$ arithmetic operations. The above choices of $m$ ensure that the cost of implementing the higher order prediction is dominated by the cost of the factorization.

Next we show that the complementarity gap of the sequence produced by Algorithm 2 with no stopping criterion is superlinearly convergent even when the

problem is degenerate. More precisely, we have $\mu_{k+1} = O(\mu_k^{m+1})$ if HLCP (2.1) is nondegenerate, and $\mu_{k+1} = O(\mu_k^{(m+1)/2})$ otherwise.

TABLE 4.1:
  n:                      10^4   10^5   10^6   10^7   10^8   10^9   10^10
  m = ceil(n^{.1}) - 1:     2      3      3      5      6      7     10

The proof is based on the following lemma, which is a consequence of the results on the analyticity of the central path from [20].

LEMMA 4.5. If HLCP (2.1) is sufficient, then the solution of (4.2) satisfies

$$u^i = O(\mu^i),\quad v^i = O(\mu^i),\quad i = 1,\dots,m,\qquad\text{if HLCP (2.1) is nondegenerate},$$

and

$$u^i = O(\mu^{i/2}),\quad v^i = O(\mu^{i/2}),\quad i = 1,\dots,m,\qquad\text{if HLCP (2.1) is degenerate}.$$

By using the above lemma, we can extend the superlinear convergence result of [15] to sufficient complementarity problems.

THEOREM 4.6. The sequence $\{\mu_k\}$ produced by Algorithm 2 with no stopping criterion satisfies

$$\mu_{k+1} = O(\mu_k^{m+1}),\qquad\text{if HLCP (2.1) is nondegenerate}, \tag{4.12}$$

and

$$\mu_{k+1} = O(\mu_k^{(m+1)/2}),\qquad\text{if HLCP (2.1) is degenerate}. \tag{4.13}$$

In order to use Algorithm 2 we first have to find a constant $\kappa$ greater than or equal to the handicap $\chi(Q,R)$. The following algorithm does not require finding an upper bound for the handicap, and therefore it can be applied to any sufficient HLCP.

Algorithm 2A
Given $\beta\in(0,1)$, an integer $m\ge2$, and $z^0\in\mathcal{D}(\beta)$:
  Set $\mu_0\leftarrow\mu(z^0)$, $k\leftarrow0$, and $\kappa\leftarrow1$;
  repeat
    Compute $\gamma$ from (3.3);
    (predictor step)
      Compute the directions $w^i = (u^i,v^i)$, $i = 1,\dots,m$, by solving (4.2);
      Compute $\check\theta$ from (4.4);
      Compute $\bar z$ from (4.6);
      If $\mu(\bar z) = 0$ then STOP: $\bar z$ is an optimal solution;
      If $\bar z\in\mathcal{D}(\beta)$, then set $z^{k+1}\leftarrow\bar z$, $\mu_{k+1}\leftarrow\mu(\bar z)$, $k\leftarrow k+1$,

      and RETURN;
    (corrector step)
      Compute the centering direction (3.5) by solving (3.20);
      Compute the centering step length (3.7);
      Compute $z_+$ from (3.8);
      If $z_+\in\mathcal{D}(\beta)$, set $z^{k+1}\leftarrow z_+$, $\mu_{k+1}\leftarrow\mu(z_+)$, $k\leftarrow k+1$, and RETURN;
      else, set $\kappa\leftarrow2\kappa$, $z^{k+1}\leftarrow z^k$, $\mu_{k+1}\leftarrow\mu(z^k)$, $k\leftarrow k+1$, and RETURN;
  until some stopping criterion is satisfied.

By using an analysis similar to the one employed in the previous section, we obtain the following result.

THEOREM 4.7. Algorithm 2A is well defined for any sufficient HLCP and the following statements hold:
(i) it produces a point $z^k\in\mathcal{D}(\beta)$ with $(x^k)^Ts^k\le\epsilon$ in at most $O\!\left((1+\chi(Q,R))^{1+1/m}n^{1/2+1/(m+1)}\log\left((x^0)^Ts^0/\epsilon\right)\right)$ iterations;
(ii) if we choose $m = \Omega(\log n)$, a point $z^k\in\mathcal{D}(\beta)$ with $(x^k)^Ts^k\le\epsilon$ is produced in at most $O\!\left((1+\chi(Q,R))^{1+1/m}\sqrt{n}\log\left((x^0)^Ts^0/\epsilon\right)\right)$ iterations;
(iii) the normalized complementarity gap satisfies (4.12) and (4.13).

5 Summary

We have presented a first order and an $m$-th order predictor-corrector interior-point algorithm for sufficient HLCPs that depend explicitly on an upper bound $\kappa$ of the handicap $\chi(Q,R)$ of the HLCP. They produce a point $(x,s)$ in the $\mathcal{N}_\infty^-$ neighborhood of the central path with complementarity gap $x^Ts\le\epsilon$ in at most $O\!\left((1+\kappa)n\log\left((x^0)^Ts^0/\epsilon\right)\right)$ and $O\!\left((1+\kappa)^{1+1/m}\sqrt{n}\log\left((x^0)^Ts^0/\epsilon\right)\right)$ iterations, respectively. The first order method is Q-quadratically convergent for nondegenerate problems, while the $m$-th order method is Q-superlinearly convergent of order $m+1$ for nondegenerate problems and of order $(m+1)/2$ for degenerate problems. We have also presented a first order and a higher order predictor-corrector method for sufficient HLCPs that do not require an explicit upper bound $\kappa$ of the handicap $\chi(Q,R)$ and therefore can be applied to any sufficient HLCP. Their iteration complexity and superlinear convergence properties are similar to those of the previous methods with $\kappa = 2\chi(Q,R)$. The cost of implementing one iteration of our algorithms is $O(n^3)$ arithmetic operations; it is dominated by the cost of the two matrix factorizations required by both the first order method and the higher order method.

Acknowledgements

The authors would like to thank two anonymous referees for their comments, which led to a better presentation of our results.

References

[1] M. Anitescu, G. Lesaja, and F. A. Potra. Equivalence between different formulations of the linear complementarity problem. Optimization Methods & Software, 7(3):265-290, 1997.
[2] M. Anitescu, G. Lesaja, and F. A. Potra. An infeasible interior-point predictor-corrector algorithm for the P*-geometric LCP. Applied Mathematics & Optimization, 36(2):203-228, 1997.
[3] K. M. Anstreicher and R. A. Bosch. A new infinity-norm path following algorithm for linear programming. SIAM J. Optim., 5(2):236-246, 1995.
[4] J. F. Bonnans and C. C. Gonzaga. Convergence of interior point algorithms for the monotone linear complementarity problem. Mathematics of Operations Research, 21:1-25, 1996.
[5] C. C. Gonzaga. Complexity of predictor-corrector algorithms for LCP based on a large neighborhood of the central path. SIAM J. Optim., 10(1):183-194 (electronic), 1999.
[6] P.-F. Hung and Y. Ye. An asymptotical O(sqrt(n)L)-iteration path-following linear programming algorithm that uses wide neighborhoods. SIAM Journal on Optimization, 6(3):159-195, August 1996.
[7] J. Peng, C. Roos, and T. Terlaky. Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ, 2002.
[8] J. Ji, F. A. Potra, and S. Huang. A predictor-corrector method for linear complementarity problems with polynomial complexity and superlinear convergence. Journal of Optimization Theory and Applications, 84(1):187-199, 1995.
[9] M. Kojima, N. Megiddo, T. Noma, and A. Yoshise. A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, volume 538 of Lecture Notes in Comput. Sci. Springer-Verlag, New York, 1991.
[10] J. Miao. A quadratically convergent O((1+kappa)sqrt(n)L)-iteration algorithm for the P*(kappa)-matrix linear complementarity problem. Mathematical Programming, 69:355-368, 1995.
[11] S. Mizuno. A superlinearly convergent infeasible-interior-point algorithm for geometrical LCPs without a strictly complementary condition. Math. Oper. Res., 21(2):382-400, 1996.
[12] S. Mizuno, M. J. Todd, and Y. Ye. On adaptive-step primal-dual interior-point algorithms for linear programming. Mathematics of Operations Research, 18(4):964-981, 1993.
[13] R. D. C. Monteiro and S. J. Wright. Local convergence of interior-point algorithms for degenerate monotone LCP. Computational Optimization and Applications, 3:131-155, 1994.
[14] F. A. Potra. An O(nL) infeasible interior point algorithm for LCP with quadratic convergence. Annals of Operations Research, 62:81-102, 1996.
[15] F. A. Potra. A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with O(sqrt(n)L)-iteration complexity. Math. Programming, 100:317-337, 2004.
[16] F. A. Potra and R. Sheng. A large-step infeasible interior point method for the P*-matrix LCP. SIAM Journal on Optimization, 7(2):318-335, 1997.
[17] F. A. Potra and R. Sheng. A path following method for LCP with superlinearly convergent iteration sequence. Ann. Oper. Res., 81:97-114, 1998. Applied mathematical programming and modeling, III (APMOD95) (Uxbridge).
[18] F. A. Potra and R. Sheng. Superlinearly convergent infeasible interior point algorithm for degenerate LCP. Journal of Optimization Theory and Applications, 97(2):249-269, 1998.
[19] J. Stoer and M. Wechs. Infeasible-interior-point paths for sufficient linear complementarity problems and their analyticity. Math. Programming, 83(3, Ser. A):407-423, 1998.

[20] J. Stoer, M. Wechs, and S. Mizuno. High order infeasible-interior-point methods for solving sufficient linear complementarity problems. Math. Oper. Res., 23(4):832-862, 1998.
[21] J. Stoer. High order long-step methods for solving linear complementarity problems. Ann. Oper. Res., 103:149-159, 2001. Optimization and numerical algebra (Nanjing, 1999).
[22] H. Väliaho. P*-matrices are just sufficient. Linear Algebra and its Applications, 239:103-108, 1996.
[23] H. Väliaho. Determining the handicap of a sufficient matrix. Linear Algebra Appl., 253:279-298, 1997.
[24] S. J. Wright. Primal-Dual Interior-Point Methods. SIAM Publications, Philadelphia, 1997.
[25] S. J. Wright and Y. Zhang. A superquadratic infeasible-interior-point algorithm for linear complementarity problems. Mathematical Programming, 73:269-289, 1996.
[26] Y. Ye and K. Anstreicher. On quadratic and O(sqrt(n)L) convergence of a predictor-corrector algorithm for LCP. Mathematical Programming, 62(3):537-551, 1993.
[27] Y. Ye, O. Güler, R. A. Tapia, and Y. Zhang. A quadratically convergent O(sqrt(n)L)-iteration algorithm for linear programming. Mathematical Programming, 59(2):151-162, 1993.
[28] G. Zhao. Interior point algorithms for linear complementarity problems based on large neighborhoods of the central path. SIAM J. Optim., 8(2):397-413 (electronic), 1998.
[29] G. Zhao and J. Sun. On the rate of local convergence of high-order-infeasible-path-following algorithms for P*-linear complementarity problems. Comput. Optim. Appl., 14(3):293-307, 1999.