A GLOBALLY AND LOCALLY SUPERLINEARLY CONVERGENT NON-INTERIOR-POINT ALGORITHM FOR $P_0$ LCPS

YUN-BIN ZHAO AND DUAN LI

Abstract. Based on the concept of the regularized central path, a new non-interior-point path-following algorithm is proposed for solving the $P_0$ linear complementarity problem ($P_0$ LCP). The condition ensuring the global convergence of the algorithm for $P_0$ LCPs is weaker than most previously used ones in the literature. This condition can be satisfied even when the strict feasibility condition, which has often been assumed in most existing non-interior-point algorithms, fails to hold. When the algorithm is applied to $P_*$ and monotone LCPs, the global convergence of this method requires no assumption other than the solvability of the problem. The local superlinear convergence of the algorithm can be achieved under a nondegeneracy assumption. The effectiveness of the algorithm is demonstrated by our numerical experiments.

Key words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, $P_0$ matrix, regularized central path

AMS subject classifications. 90C30, 90C33, 90C51, 65K05

1. Introduction. We consider a new path-following algorithm for the linear complementarity problem (LCP):
$$x \ge 0, \quad Mx + d \ge 0, \quad x^T(Mx + d) = 0,$$
where $M$ is an $n \times n$ matrix and $d$ a vector in $R^n$. This problem is said to be a $P_0$ LCP if $M$ is a $P_0$ matrix, and a $P_*$ LCP if $M$ is a $P_*$ matrix. We recall that $M$ is said to be a $P_0$ matrix (see [13]) if
$$\max_{1 \le i \le n} x_i (Mx)_i \ge 0 \quad \text{for any } 0 \ne x \in R^n.$$
$M$ is said to be a $P_*$ matrix (see [26]) if there exists a nonnegative constant $\tau \ge 0$ such that
$$(1 + \tau) \sum_{i \in I_+} x_i (Mx)_i + \sum_{i \in I_-} x_i (Mx)_i \ge 0 \quad \text{for any } 0 \ne x \in R^n,$$
where $I_+ = \{i : x_i (Mx)_i > 0\}$ and $I_- = \{i : x_i (Mx)_i < 0\}$.

We first give a synopsis of non-interior-point methods and related results for complementarity problems. The first non-interior-point path-following method for LCPs was proposed by Chen and Harker [6]. This method was improved by Kanzow [24], who also studied other closely related methods, later called the Chen-Harker-Kanzow-Smale (CHKS) smoothing function method (see [20]). The CHKS function $\phi : R^3 \to R$ is defined by
$$\phi(t_1, t_2, \mu) = t_1 + t_2 - \sqrt{(t_1 - t_2)^2 + 4\mu}.$$

This work was partially supported by Grant CUHK4392/99E, Research Grants Council, Hong Kong.
Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, and Institute of Applied Mathematics, AMSS, Chinese Academy of Sciences, Beijing, China (Email: ybzhao@amss.ac.cn).
Corresponding author. Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, NT, Hong Kong (Fax: (852) 2603 5505; Tel: (852) 2609 8323; Email: dli@se.cuhk.edu.hk).
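The smoothing effect of the CHKS function is easy to verify numerically. The following minimal Python check is ours, not from the paper: it confirms that $\phi(t_1, t_2, 0) = 2\min(t_1, t_2)$, so at $\mu = 0$ the zeros of $\phi$ are exactly the complementary pairs, while for $\mu > 0$ the zeros satisfy $t_1 > 0$, $t_2 > 0$, $t_1 t_2 = \mu$.

```python
import math

def chks(t1, t2, mu):
    # Chen-Harker-Kanzow-Smale smoothing function phi(t1, t2, mu).
    return t1 + t2 - math.sqrt((t1 - t2) ** 2 + 4.0 * mu)

# At mu = 0: phi(t1, t2, 0) = t1 + t2 - |t1 - t2| = 2*min(t1, t2),
# so phi(t1, t2, 0) = 0 iff t1 >= 0, t2 >= 0 and t1*t2 = 0.
assert chks(0.0, 3.0, 0.0) == 0.0               # complementary pair
assert chks(2.0, 5.0, 0.0) == 2 * min(2.0, 5.0)  # both positive

# At mu > 0: phi(t1, t2, mu) = 0 iff t1 > 0, t2 > 0 and t1*t2 = mu,
# a smoothed (interior) version of the complementarity condition.
t1, mu = 0.5, 0.3
assert abs(chks(t1, mu / t1, mu)) < 1e-12
```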

Based on such a function, Hotta and Yoshise [20] studied the structural properties of a non-interior-point trajectory and proposed a globally convergent path-following algorithm for a class of $P_0$ LCPs. However, no rate of convergence was reported in these papers. The first global linear convergence result for the LCP with a $P_0$ and $R_0$ matrix was obtained by Burke and Xu [3], who also proposed in [4] a non-interior-point predictor-corrector algorithm for monotone LCPs that is both globally linearly and locally quadratically convergent under certain assumptions. Further developments of non-interior-point methods can be found in [35, 5, 40, 33, 8, 7, 21]. It is worth mentioning that Chen and Xiu [8] and Chen and Chen [7] proposed a class of non-interior-point methods using the Chen-Mangasarian smoothing function family [9], which includes the CHKS smoothing function as a special case.

Since most existing non-interior-point path-following algorithms are based on the CHKS function, these methods actually follow the central path to locate a solution of the LCP. The central path is the set of solutions of the following system as the parameter $\mu > 0$ varies:
$$x > 0, \quad Mx + d > 0, \quad X(Mx + d) = \mu e,$$
where $X = \mathrm{diag}(x)$ and $e = (1, \dots, 1)^T$. For $P_0$ LCPs, it is shown (see [42, 43]) that most assumptions used for non-interior-point algorithms, for instance, Condition 1.5 in [25], Condition 1.2 in Hotta and Yoshise [20], and the $P_0 + R_0$ assumption in Burke and Xu [3] and Chen and Chen [7], imply that the solution set of the problem is bounded. As shown by Ravindran and Gowda in [34], a $P_0$ complementarity problem with a bounded solution set must have a strictly feasible point, i.e., there exists an $x^0$ such that $Mx^0 + d > 0$. (This implies that a $P_0$ LCP with no strictly feasible point either has no solution or has an unbounded solution set.) We conclude that the above-mentioned conditions all imply that the problem has a strictly feasible point. Thus, for a solvable $P_0$ LCP without a strictly feasible point (in this case, the central path does not exist), it is unknown whether most existing non-interior-point algorithms are globally convergent. An interesting problem is how to improve these algorithms so that they can handle those $P_0$ problems with unbounded solution sets or without strictly feasible points.

Recently, Zhao and Li [42] proposed a new continuation trajectory for complementarity problems, defined as follows:
$$x > 0, \quad Mx + d + \theta^p x > 0, \quad x_i[(Mx + d)_i + \theta^p x_i] = \theta^q a_i, \quad i = 1, \dots, n,$$
where $\theta$ is a parameter in $(0, 1]$, $p \in (0, \infty)$ and $q \in [1, \infty)$ are two fixed scalars, and $a = (a_1, \dots, a_n)^T \in R^n_{++}$ is a fixed vector, for example, $a = e$. For a $P_0$ matrix $M$, it turns out (see [42]) that the above system has a unique solution for each given parameter $\theta$, and this solution, denoted by $x(\theta)$, is continuously differentiable on $(0, 1)$. Thus, the set $\{x(\theta) : \theta \in (0, 1]\}$ forms a smooth path approaching the solution set of the $P_0$ LCP as $\theta$ tends to zero. Notice that for a given $\theta$, the term $Mx + d + \theta^p x$ is the Tikhonov regularization of $Mx + d$, which has been used to study complementarity problems by several authors such as Isac [22], Venkateswaran [36], Facchinei and Kanzow [14], Ravindran and Gowda [34], and Zhao and Li [42]. We refer to the above smooth path as the regularized central path. A good feature of the regularized central path is that its existence and boundedness can be guaranteed under a very weak assumption. In particular, the boundedness of the solution set and the strict feasibility condition are not needed for the existence of this path. Combining the CHKS function and the Tikhonov regularization method, Zhao and Li [43] extended

the results in [42] to non-interior-point methods, and studied the existence as well as the limiting behavior of a new non-interior-point smooth path. The theoretical results established in [43] motivate us to construct a new non-interior-point path-following algorithm for $P_0$ LCPs. The purpose of this paper is to provide such a practical numerical algorithm. It is worth stressing the differences between the method proposed in this paper and previous algorithms in the literature. (i) The proposed algorithm follows the regularized central path instead of the central path. (ii) The condition ensuring the global convergence of the algorithm for $P_0$ LCPs is strictly weaker than the ones used in most existing non-interior-point methods. The local superlinear convergence of the algorithm can be achieved under a standard nondegeneracy assumption that was used in many works such as [38, 39, 33]. In particular, we also study the important special case of $P_*$ LCPs, and derive some stronger results than those for the $P_0$ case.

The paper is organized as follows. In section 2, we introduce some basic results and describe the algorithm. In section 3, we prove the global convergence of the algorithm for a class of $P_0$ LCPs. The local convergence analysis of the algorithm is given in section 4. The special case of $P_*$ LCPs is discussed in section 5, and some numerical results are reported in section 6.

Notation: $R^n$ denotes the $n$-dimensional Euclidean space. $R^n_+$ and $R^n_{++}$ denote the nonnegative orthant and positive orthant, respectively. A vector $x \ge 0$ ($x > 0$) means $x \in R^n_+$ ($x \in R^n_{++}$). All vectors, unless otherwise stated, are column vectors. $T$ denotes the transpose of a vector. For any vector $x$, the capital $X$ denotes the diagonal matrix $\mathrm{diag}(x)$, and for any index set $I \subseteq \{1, \dots, n\}$, $x_I$ denotes the subvector made up of the components $x_i$ for $i \in I$. The symbol $e$ denotes the vector in $R^n$ with all of its components equal to one. For given vectors $u, w, v$ in $R^n$, the triplet $(u, w, v)$ (the pair $(x, y)$) denotes the column vector $(u^T, w^T, v^T)^T$ (respectively, $(x^T, y^T)^T$). For any $u \in R^n$, the symbol $u^p$ denotes the $p$th power of the vector $u$, i.e., the vector $(u_1^p, \dots, u_n^p)^T$, where $p > 0$ is a positive scalar, and $U^p$ denotes the diagonal matrix $\mathrm{diag}(u^p)$. For any vectors $x \le y$, we denote by $[x, y]$ the rectangular box $[x_1, y_1] \times \cdots \times [x_n, y_n]$.

2. A non-interior-point path-following algorithm. Let $p$ and $q$ be two given positive scalars. Define the map $H : R^n_+ \times R^{2n} \to R^{3n}$ as follows:

(2.1)
$$H(u, x, y) = \begin{pmatrix} u \\ x + y - \sqrt{(x - y)^2 + 4u^q} \\ y - (Mx + d + U^p x) \end{pmatrix}, \quad (u, x, y) \in R^n_+ \times R^{2n},$$

where $U^p = \mathrm{diag}(u^p)$ and all the algebraic operations are performed componentwise. The above homotopy map first appeared in [43]. Clearly, if $H(u, x, y) = 0$, then $(x, y)$ is a solution to the LCP; conversely, if $(x, y)$ is a solution to the LCP, then $(0, x, y)$ is a solution to the equation $H(u, x, y) = 0$. Thus, an LCP can be solved by locating a solution of the nonlinear equation $H(u, x, y) = 0$. From the discussion in [43], we can conclude that it is a judicious choice to use the above version of the homotopy formulation in order to deal with the LCP with an unbounded solution set.

Before embarking on stating the algorithm, we first introduce some results established in [43]. Let $(a, b, c) \in R^n_{++} \times R^{2n}$ be given. Consider the following system with the parameter $\theta$:

(2.2)
$$H(u, x, y) = \theta(a, b, c),$$

where $\theta \in (0, 1]$. For $P_0$ LCPs, it is shown in [43] that for each given $\theta \in (0, 1]$ the above system has a unique solution, denoted by $(u(\theta), x(\theta), y(\theta))$, which is also

continuously differentiable with respect to $\theta$. Therefore, the following set

(2.3)
$$\{(u(\theta), x(\theta), y(\theta)) : H(u, x, y) = \theta(a, b, c), \ \theta \in (0, 1]\}$$

forms a smooth path. Also in this paper, we refer to this path as the regularized central path. The existence of such a smooth path for $P_0$ LCPs needs no assumption (see Theorem 2.1 below). An additional condition is assumed to guarantee the boundedness of this path. We now introduce such a condition proposed in [43]. For given $(a, b, c) \in R^n_{++} \times R^{2n}$ and $\theta \in (0, 1]$, we define a mapping $F_{(a,b,c,\theta)} : R^{2n} \to R^{2n}$ as follows:
$$F_{(a,b,c,\theta)}(x, y) = \begin{pmatrix} Xy \\ y - M(x + \frac{1}{2}\theta b) - d - \theta^p A^p x - \theta c \end{pmatrix},$$
where $X = \mathrm{diag}(x)$ and $A^p = \mathrm{diag}(a^p)$.

Condition 2.1. For any given $(a, b, c) \in R^n_{++} \times R^{2n}$ and scalar $\hat{t}$, there exists a scalar $\theta^* \in (0, 1]$ such that $\bigcup_{\theta \in (0, \theta^*]} F^{-1}_{(a,b,c,\theta)}(D_\theta)$ is bounded, where
$$F^{-1}_{(a,b,c,\theta)}(D_\theta) := \{(x, y) \in R^{2n}_{++} : F_{(a,b,c,\theta)}(x, y) \in D_\theta\}$$
and $D_\theta := [0, \theta a^q] \times [-\theta\hat{t}e, \theta\hat{t}e] \subseteq R^n_+ \times R^n$.

The following result states that Condition 2.1 is weaker than most known assumptions used for non-interior-point methods. An example was given in [43] to show that Condition 2.1 can be satisfied even when the strict feasibility condition fails to hold.

Proposition 2.1 ([43]). Let $f = Mx + d$, where $M$ is a $P_0$-matrix. If one of the following conditions holds, then Condition 2.1 is satisfied:
a) Condition 1.5 of Kojima et al. [25];
b) Condition 2.2 of Hotta and Yoshise [20];
c) Assumption 2.2 of Chen and Chen [7];
d) $f$ is a $P_0$ and $R_0$ function [3, 7];
e) $f$ is a $P_*$-function (i.e., $M$ is a $P_*$ matrix) and there is a strictly feasible point [26];
f) $f$ is a uniform P-function, i.e., $M$ is a P-matrix [27];
g) the solution set of the LCP is nonempty and bounded.
The converse, however, is not true, i.e., Condition 2.1 does not imply any one of the above conditions.

Restricted to LCPs, the main results established in [43] are summarized as follows.

Theorem 2.1 ([43]). Let $M$ be a $P_0$-matrix. (i) For each $\theta \in (0, 1]$, the system (2.2) has a unique solution, denoted by $(u(\theta), x(\theta), y(\theta))$, which is also continuously differentiable in $\theta$. (ii) If Condition 2.1 is satisfied, then the regularized central path (2.3) is bounded. Hence, there exists a subsequence $(u(\theta_k), x(\theta_k), y(\theta_k))$ that converges, as $\theta_k \to 0$, to $(0, x^*, y^*)$, where $x^*$ is a solution to the LCP.

For $P_*$ LCPs, the only condition for result (ii) above is the solvability of the problem.

Theorem 2.2 ([43]). Let $M$ be a $P_*$-matrix. Assume that the solution set of the LCP is nonempty.

(i) If $p \le 1$ and $q \in [1, \infty)$, then the regularized central path (2.3) is bounded. (ii) If $p > 1$, $q \in [1, \infty)$ and $c \in R^n_{++}$, then the regularized central path (2.3) is bounded.

The boundedness of the path (2.3) implies that the problem has a solution. Combining this fact and the above result, we may conclude that the solvability of a $P_*$ LCP, roughly speaking, is a necessary and sufficient condition for the boundedness of the regularized central path. For monotone LCPs, we have a much stronger result: the entire path (2.3) is convergent as $\theta \to 0$. The property of the limiting point of this path, as $\theta \to 0$, depends on the choice of the scalars $p$ and $q$.

Theorem 2.3 ([43]). Let $M$ be a positive semidefinite matrix. Assume that the solution set of the LCP is nonempty. (i) If $p \le 1$ and $q \in [1, \infty)$, then the regularized central path (2.3) converges, as $\theta \to 0$, to the unique least $N$-norm solution of the LCP, where $N = A^{p/2}$. (ii) If $p > 1$, $q \in [1, \infty)$ and $c \in R^n_{++}$, then the regularized central path (2.3) converges, as $\theta \to 0$, to a maximally complementary solution of the LCP.

We now introduce the algorithm. We choose the following neighborhood around the regularized central path $\{(u(\theta), x(\theta), y(\theta)) : \theta \in (0, 1]\}$:
$$\mathcal{N}(\beta) = \{(u, x, y) : u - \theta a = 0, \ \|H(u, x, y) - \theta(a, b, c)\| \le \beta\theta, \ \theta \in (0, 1]\}.$$
Denote

(2.4)
$$G_\theta(x, y) = \begin{pmatrix} x + y - \sqrt{(x - y)^2 + 4(\theta a)^q} \\ y - (Mx + d + \theta^p A^p x) \end{pmatrix}.$$

Then the above neighborhood reduces to
$$\mathcal{N}(\beta) = \{(x, y) : \|G_\theta(x, y) - \theta(b, c)\| \le \beta\theta, \ \theta \in (0, 1]\},$$
where $G_\theta$ is given by (2.4). For a given $\theta \in (0, 1]$, we denote
$$\mathcal{N}(\beta, \theta) = \{(x, y) : \|G_\theta(x, y) - \theta(b, c)\| \le \beta\theta\}.$$
Throughout the paper, $G'_\theta(x, y)$ denotes the Jacobian of $G_\theta(x, y)$ with respect to $(x, y)$. Let $\varepsilon > 0$ be a given tolerance. We now describe the algorithm as follows.

Algorithm 2.1. Let $p \in (0, \infty)$, $q \in [1, \infty)$, $\sigma \in (0, 1)$ and $\alpha \in (0, 1)$ be given.

Step 1. Select $(a, b, c) \in R^n_{++} \times R^{2n}$, $(x^0, y^0) \in R^{2n}$, $\theta^0 \in (0, 1)$, and $\beta > 0$ such that $(x^0, y^0) \in \mathcal{N}(\beta, \theta^0)$.

Step 2 (Approximate Newton step). If $\|G_0(x^k, y^k)\| \le \varepsilon$, stop; otherwise, let $(d\hat{x}^k, d\hat{y}^k)$ solve the equation

(2.5)
$$G_0(x^k, y^k) + G'_{\theta^k}(x^k, y^k)(dx, dy) = 0.$$

Let $(\hat{x}^{k+1}, \hat{y}^{k+1}) = (x^k, y^k) + (d\hat{x}^k, d\hat{y}^k)$. If $\|G_0(\hat{x}^{k+1}, \hat{y}^{k+1})\| \le \varepsilon$, stop; otherwise, if $(\hat{x}^{k+1}, \hat{y}^{k+1}) \in \mathcal{N}(\beta, (\theta^k)^2)$, then set $\theta^{k+1} = (\theta^k)^2$, $(x^{k+1}, y^{k+1}) = (\hat{x}^{k+1}, \hat{y}^{k+1})$,

set $k := k + 1$, and repeat Step 2. Otherwise, go to Step 3.

Step 3 (Centering step). If $G_{\theta^k}(x^k, y^k) = \theta^k(b, c)$, set $(x^{k+1}, y^{k+1}) = (x^k, y^k)$ and go to Step 4. Otherwise, let $(dx^k, dy^k)$ be the solution to the equation

(2.6)
$$G_{\theta^k}(x^k, y^k) - \theta^k(b, c) + G'_{\theta^k}(x^k, y^k)(dx, dy) = 0.$$

Let $\lambda_k$ be the maximum among the values $1, \alpha, \alpha^2, \dots$ such that
$$\|G_{\theta^k}(x^k + \lambda_k dx^k, y^k + \lambda_k dy^k) - \theta^k(b, c)\| \le (1 - \sigma\lambda_k)\|G_{\theta^k}(x^k, y^k) - \theta^k(b, c)\|.$$
Set $(x^{k+1}, y^{k+1}) = (x^k, y^k) + \lambda_k(dx^k, dy^k)$.

Step 4 (Reduce $\theta^k$). Let $\gamma_k$ be the maximum among the values $1, \alpha, \alpha^2, \dots$ such that $(x^{k+1}, y^{k+1}) \in \mathcal{N}(\beta, (1 - \gamma_k)\theta^k)$, i.e.,
$$\|G_{(1-\gamma_k)\theta^k}(x^{k+1}, y^{k+1}) - (1 - \gamma_k)\theta^k(b, c)\| \le \beta(1 - \gamma_k)\theta^k.$$
Set $\theta^{k+1} = (1 - \gamma_k)\theta^k$. Set $k := k + 1$ and go to Step 2.

Remark 2.1. (i) To start the algorithm, we need an initial point within the neighborhood of the regularized central path. Such an initial point can be found at no cost for the above algorithm. For example, let $(a, b, c)$ be an arbitrary triplet in $R^n_{++} \times R^{2n}$, $(x^0, y^0)$ be an arbitrary vector in $R^{2n}$, and $\theta^0$ be an arbitrary scalar in $(0, 1)$. Then choose $\beta$ such that
$$\beta \ge \frac{\|G_{\theta^0}(x^0, y^0) - \theta^0(b, c)\|}{\theta^0}.$$
Clearly, this initial point satisfies $(x^0, y^0) \in \mathcal{N}(\beta, \theta^0)$. (ii) Step 3 of the algorithm is a centering step in the sense that it forces the iterate close to the regularized central path, so that the iterate is always confined to the neighborhood of the path. In the next section, we show that Step 3 together with Step 4 guarantees the global convergence of the algorithm. Step 2 is an approximate Newton step, which was shown to have good local convergence properties (see, for example, [10, 11]). This step is used to accelerate the iteration so that local rapid convergence can be achieved. Similar strategies were used in several works such as [38, 39, 28, 7, 8]. We also note that the linear systems (2.5) and (2.6) have the same coefficient matrix, and thus only one matrix factorization is needed at each iteration.

We now show that the algorithm is well-defined.

Proposition 2.2. Algorithm 2.1 is well-defined and satisfies the following properties: (i) $\theta^k$ is monotonically decreasing, and (ii) $\|G_{\theta^k}(x^k, y^k) - \theta^k(b, c)\| \le \beta\theta^k$ for all $k \ge 0$, i.e., $(x^k, y^k) \in \mathcal{N}(\beta, \theta^k)$ for all $k \ge 0$.

Proof. We verify that each step of the algorithm is well-defined. As we pointed out in Remark 2.1, Step 1 of the algorithm is well-defined. Consider the following $2n \times 2n$ matrix:

(2.7)
$$G'_{\theta^k}(x^k, y^k) = \begin{pmatrix} I - (X^k - Y^k)D^k & I + (X^k - Y^k)D^k \\ -(M + (\theta^k)^p A^p) & I \end{pmatrix},$$

where $X^k = \mathrm{diag}(x^k)$, $Y^k = \mathrm{diag}(y^k)$, and $D^k = \mathrm{diag}(d^k)$ with $d^k = (d^k_1, \dots, d^k_n)^T$, where
$$d^k_i = \frac{1}{\sqrt{(x^k_i - y^k_i)^2 + 4(\theta^k)^q a^q_i}}, \quad i = 1, 2, \dots, n.$$
Since $a \in R^n_{++}$, for each given $\theta^k \in (0, 1)$ it is easy to see that $I - (X^k - Y^k)D^k$ and $I + (X^k - Y^k)D^k$ are positive diagonal matrices for every $(x^k, y^k) \in R^{2n}$. Thus, by Lemma 5.4 in Kojima et al. [25], the matrix $G'_{\theta^k}(x^k, y^k)$ is nonsingular when $M$ is a $P_0$-matrix. Thus, Step 2 is well-defined. Since $(dx^k, dy^k)$ is a descent direction at $(x^k, y^k)$ for the function
$$f(x, y) = \frac{1}{2}\|G_{\theta^k}(x, y) - \theta^k(b, c)\|^2_2,$$
the line search in Step 3 is well-defined, and thus the whole Step 3 is well-defined.

We finally prove that Step 4 is well-defined. For any scalars $\mu_1 > \mu_2 \ge 0$, we have

(2.8)
$$
G_{\mu_1}(x, y) - G_{\mu_2}(x, y)
= \begin{pmatrix} x + y - \sqrt{(x-y)^2 + 4\mu_1^q a^q} \\ y - (Mx + d + \mu_1^p A^p x) \end{pmatrix} - \begin{pmatrix} x + y - \sqrt{(x-y)^2 + 4\mu_2^q a^q} \\ y - (Mx + d + \mu_2^p A^p x) \end{pmatrix}
= \begin{pmatrix} \sqrt{(x-y)^2 + 4\mu_2^q a^q} - \sqrt{(x-y)^2 + 4\mu_1^q a^q} \\ -(\mu_1^p - \mu_2^p) A^p x \end{pmatrix},
$$
so that
$$
\|G_{\mu_1}(x, y) - G_{\mu_2}(x, y)\|
= \left\| \begin{pmatrix} \dfrac{4(\mu_1^q - \mu_2^q)a^q}{\sqrt{(x-y)^2 + 4\mu_1^q a^q} + \sqrt{(x-y)^2 + 4\mu_2^q a^q}} \\ (\mu_1^p - \mu_2^p) A^p x \end{pmatrix} \right\|
\le \left\| \begin{pmatrix} \dfrac{4(\mu_1^q - \mu_2^q)a^q}{\sqrt{4\mu_1^q a^q}} \\ (\mu_1^p - \mu_2^p) A^p x \end{pmatrix} \right\|
\le \max\{\mu_1^{q/2}(1 - (\mu_2/\mu_1)^q), \ \mu_1^p(1 - (\mu_2/\mu_1)^p)\} \, \|(2a^{q/2}, A^p x)\|.
$$
In particular, setting $(x, y) = (x^k, y^k)$, $\mu_1 = \theta^k > 0$, and $\mu_2 = (1 - \gamma)\theta^k$ with $\gamma \in (0, 1)$, we have

(2.9)
$$\|G_{\theta^k}(x^k, y^k) - G_{(1-\gamma)\theta^k}(x^k, y^k)\| \le \max\{(\theta^k)^{q/2}(1 - (1-\gamma)^q), \ (\theta^k)^p(1 - (1-\gamma)^p)\} \, \|(2a^{q/2}, A^p x^k)\|.$$

There are two cases to consider.

Case (i): $G_{\theta^k}(x^k, y^k) = \theta^k(b, c)$ in Step 3. Then $(x^{k+1}, y^{k+1}) = (x^k, y^k)$. By (2.9), for all sufficiently small $\gamma$ we have
$$
\begin{aligned}
\|G_{(1-\gamma)\theta^k}(x^{k+1}, y^{k+1}) - (1-\gamma)\theta^k(b, c)\|
&= \|G_{(1-\gamma)\theta^k}(x^k, y^k) - G_{\theta^k}(x^k, y^k) + \theta^k(b, c) - (1-\gamma)\theta^k(b, c)\| \\
&\le \|G_{(1-\gamma)\theta^k}(x^k, y^k) - G_{\theta^k}(x^k, y^k)\| + \gamma\theta^k\|(b, c)\| \\
&\le \max\{(\theta^k)^{q/2}(1 - (1-\gamma)^q), (\theta^k)^p(1 - (1-\gamma)^p)\}\|(2a^{q/2}, A^p x^k)\| + \gamma\theta^k\|(b, c)\| \\
&\le \beta(1-\gamma)\theta^k.
\end{aligned}
$$

Case (ii): $G_{\theta^k}(x^k, y^k) \ne \theta^k(b, c)$ in Step 3. In this case, according to Step 3 we have
$$\|G_{\theta^k}(x^{k+1}, y^{k+1}) - \theta^k(b, c)\| \le (1 - \sigma\lambda_k)\|G_{\theta^k}(x^k, y^k) - \theta^k(b, c)\| \le (1 - \sigma\lambda_k)\beta\theta^k.$$

The second inequality follows from the fact that $\|G_{\theta^k}(x^k, y^k) - \theta^k(b, c)\| \le \beta\theta^k$, which is evident from the construction of the algorithm. Notice that $1 - \sigma\lambda_k < 1$. By (2.9) and the above inequality, for all sufficiently small $\gamma$ we have
$$
\begin{aligned}
\|G_{(1-\gamma)\theta^k}(x^{k+1}, y^{k+1}) - (1-\gamma)\theta^k(b, c)\|
&\le \|G_{(1-\gamma)\theta^k}(x^{k+1}, y^{k+1}) - G_{\theta^k}(x^{k+1}, y^{k+1})\| + \|G_{\theta^k}(x^{k+1}, y^{k+1}) - \theta^k(b, c)\| + \gamma\theta^k\|(b, c)\| \\
&\le \max\{(\theta^k)^{q/2}(1 - (1-\gamma)^q), (\theta^k)^p(1 - (1-\gamma)^p)\}\|(2a^{q/2}, A^p x^{k+1})\| + (1 - \sigma\lambda_k)\beta\theta^k + \gamma\theta^k\|(b, c)\| \\
&\le (1-\gamma)\beta\theta^k.
\end{aligned}
$$
Thus, Step 4 is well-defined.

We now show that all the iterates remain in the neighborhood defined by the algorithm. By the construction of the algorithm, it is evident that either $\theta^{k+1} = (\theta^k)^2$ or $\theta^{k+1} = (1 - \gamma_k)\theta^k$, so $\theta^k$ is monotonically decreasing. When $k = 0$, it follows from Step 1 that $(x^0, y^0) \in \mathcal{N}(\beta, \theta^0)$. Assume that this property holds for $k$, i.e., $(x^k, y^k) \in \mathcal{N}(\beta, \theta^k)$. We show that it holds for $k + 1$. Indeed, if Step 2 is accepted, then the criterion $(x^{k+1}, y^{k+1}) \in \mathcal{N}(\beta, \theta^{k+1})$ is satisfied, where $\theta^{k+1} = (\theta^k)^2$. If Step 2 is rejected, then $(x^{k+1}, y^{k+1})$ is generated by Step 3 together with Step 4. It follows from Step 4 that $(x^{k+1}, y^{k+1}) \in \mathcal{N}(\beta, \theta^{k+1})$, where $\theta^{k+1} = (1 - \gamma_k)\theta^k$. Thus, for all $k \ge 0$, we have $(x^k, y^k) \in \mathcal{N}(\beta, \theta^k)$, i.e., $\|G_{\theta^k}(x^k, y^k) - \theta^k(b, c)\| \le \beta\theta^k$.

3. Global convergence for $P_0$ LCPs. We now show that the proposed algorithm is globally convergent for $P_0$ LCPs provided that Condition 2.1 is satisfied. By Proposition 2.2, for every $k \ge 1$, the iterate $(x^k, y^k)$ satisfies the following:

(3.1)
$$\|G_{\theta^k}(x^k, y^k) - \theta^k(b, c)\| \le \beta\theta^k, \quad \theta^k = (1 - \gamma_{k-1})\theta^{k-1} \ \text{or} \ \theta^k = (\theta^{k-1})^2.$$

Let $(b^k, c^k) \in R^{2n}$ be two auxiliary vectors determined by

(3.2)
$$(b^k, c^k) = \frac{G_{\theta^k}(x^k, y^k) - \theta^k(b, c)}{\theta^k}$$

for all $k$. Then $\{(b^k, c^k)\}$ is uniformly bounded. In fact, by (3.1), we have $\|(b^k, c^k)\| \le \beta$, and hence

(3.3)
$$-\beta e \le b^k \le \beta e, \quad -\beta e \le c^k \le \beta e.$$

By the definition (2.4), we can write (3.2) as
$$x^k + y^k - \sqrt{(x^k - y^k)^2 + 4(\theta^k)^q a^q} = \theta^k(b + b^k), \quad y^k - (Mx^k + d + (\theta^k)^p A^p x^k) = \theta^k(c + c^k).$$
By the property of the CHKS function (see Lemma 1 in [20]), the system above is equivalent to

(3.4)
$$x^k - \frac{1}{2}\theta^k(b + b^k) > 0, \quad y^k - \frac{1}{2}\theta^k(b + b^k) > 0,$$

(3.5)
$$\left[X^k - \frac{1}{2}\theta^k(B + B^k)\right]\left(y^k - \frac{1}{2}\theta^k(b + b^k)\right) = (\theta^k)^q a^q,$$

(3.6)
$$y^k = Mx^k + d + (\theta^k)^p A^p x^k + \theta^k(c + c^k),$$

where $X^k$, $B$ and $B^k$ are the diagonal matrices corresponding to $x^k$, $b$ and $b^k$, respectively.

Remark 3.1. The fact that all iterates generated by Algorithm 2.1 satisfy the system (3.4)-(3.6) plays a key role in the analysis throughout the paper. By continuity, from (3.6) it follows that $\{y^k\}$ is bounded if $\{x^k\}$ is. Thus, if the sequence $(x^k, y^k)$ is unbounded, then $\{x^k\}$ must be unbounded.

The following result is a minor revision of Lemma 1 in [34].

Lemma 3.1 ([42, 44]). Let $M$ be a $P_0$ matrix. Let $\{z^k\}$ be an arbitrary sequence with $\|z^k\| \to \infty$ and $z^k \ge z$ for all $k$, where $z \in R^n$ is a fixed vector. Then there exist a subsequence of $\{z^k\}$, denoted by $\{z^{k_j}\}$, and a fixed index $i_0$ such that $z^{k_j}_{i_0} \to \infty$ and $(Mz^{k_j} + d)_{i_0}$ is bounded from below.

The next result shows that the iterative sequence $\{(x^k, y^k)\}$ generated by Algorithm 2.1 is bounded under Condition 2.1.

Theorem 3.1. Let $M$ be a $P_0$ matrix. If Condition 2.1 is satisfied, then the iterative sequence $\{(x^k, y^k)\}$ generated by Algorithm 2.1 is bounded.

Proof. We prove this result by contradiction. Assume that $\{(x^k, y^k)\}$ is unbounded. Then $\{x^k\}$ is unbounded (see Remark 3.1). Without loss of generality, we may assume that $\|x^k\| \to \infty$. Notice that $\theta^k < 1$ and $\|b^k\| \le \beta$. It follows from (3.4) that
$$x^k \ge \frac{1}{2}\theta^k(b + b^k) \ge -\frac{1}{2}(\|b\|_\infty + \beta)e \quad \text{for all } k.$$
Thus, by Lemma 3.1, there exist a subsequence of $\{x^k\}$, denoted also by $\{x^k\}$, and an index $m$ such that $x^k_m \to \infty$ and $(Mx^k + d)_m$ is bounded from below. By (3.5), for each $i$ we have
$$\left(x^k_i - \frac{1}{2}\theta^k(b_i + b^k_i)\right)\left(y^k_i - \frac{1}{2}\theta^k(b_i + b^k_i)\right) = (\theta^k)^q a^q_i,$$
and thus
$$y^k_m - \frac{1}{2}\theta^k(b_m + b^k_m) = \frac{(\theta^k)^q a^q_m}{x^k_m - \theta^k(b_m + b^k_m)/2}.$$
Using (3.6), the above equation can be further written as
$$(Mx^k + d)_m + \theta^k(c_m + c^k_m) - \frac{1}{2}\theta^k(b_m + b^k_m) = -(\theta^k)^p a^p_m x^k_m + \frac{(\theta^k)^q a^q_m}{x^k_m - \theta^k(b_m + b^k_m)/2}.$$
Since $b^k$ and $c^k$ are bounded, $x^k_m \to \infty$, and $(Mx^k + d)_m$ is bounded from below, we conclude that the left-hand side of the above equation is bounded from below. This implies that $\theta^k \to 0$ (since otherwise the right-hand side tends to $-\infty$).

In what follows, we denote

(3.7)
$$\bar{x}^k = x^k - \frac{1}{2}\theta^k(b + b^k), \quad \bar{y}^k = y^k - \frac{1}{2}\theta^k(b + b^k).$$

From (3.4) and (3.5), we see that $(\bar{x}^k, \bar{y}^k) > 0$ for all $k$, and

(3.8)
$$\bar{X}^k \bar{y}^k = (\theta^k)^q a^q.$$

10 Y B ZHAO AND D LI Since b k β, it follows that (39) By using (36) and (37), we have M(x k x k θ k b/2) θ k 1 2 β M ȳ k M( x k + 1 2 θk b) d (θ k ) p A p x k θ k c = Mx k + d + (θ k ) p A p x k + θ k (c + c k ) 1 2 θk (b + b k ) M( x k + 1 2 θk b) d (θ k ) p A p x k θ k c ( M(x = θ k k x k θ k b/2) θ k 1 2 (b + bk ) + c k + 1 ) 2 (θk ) p A p (b + b k ) By (39) and the boundedness of θ k, b k and c k, there exists a scalar ˆt > 0 such that ˆte M(xk x k θ k b/2) θ k for all k Therefore, 1 2 (b + bk ) c k + 1 2 (θk ) p A p (b + b k ) ˆte ȳ k M( x k + 1 2 θk b) d (θ k ) p A p x k θ k c θ k [ ˆte, ˆte] for all k Notice that q [1, ) Combination of (38) and the above leads to ( ) X F (a,b,c,θ )( x k, ȳ k ) = k ȳ k k ȳ k M( x k + 1 2 θk b) d (θ k ) p A p x k θ k c for all k Thus, θ k [0, a q ] θ k [ ˆte, ˆte] =: D θ k ( x k, ȳ k ) F 1 (a,b,c,θ k ) (D θk) for all k By Condition 21, there exists a θ such that θ (0,θ ] F 1 (a,b,c,θ) (D θ) is bounded Since θ k 0, there exists some k 0 such that for all k k 0 we have θ k θ Thus, {( x k, ȳ k )} k k0 θ k θ F 1 (a,b,c,θ k ) (D θ k) θ (0,θ ] F 1 (a,b,c,θ) (D θ) The right-hand side of the above is bounded This contradicts the left-hand side which (by assumption) is an unbounded sequence We are ready to prove the global convergence of Algorithm 21 for P 0 LCPs
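The perturbed complementarity condition (3.8), $\bar x^k_i\bar y^k_i=(\theta^k)^qa^q_i$, is exactly the relation enforced by Chen-Harker-Kanzow-type smoothing functions. As a quick sanity check (a sketch, not code from the paper), the standard identity $\varphi_\varepsilon(x,y)=x+y-\sqrt{(x-y)^2+4\varepsilon}=0 \iff x>0,\ y>0,\ xy=\varepsilon$ can be verified numerically:

```python
import numpy as np

def chk(x, y, eps):
    """Chen-Harker-Kanzow smoothing function, applied componentwise."""
    return x + y - np.sqrt((x - y) ** 2 + 4.0 * eps)

# If x > 0 and y = eps / x (so that x*y = eps), the residual vanishes.
x = np.array([0.5, 2.0, 10.0])
eps = 1e-3
y = eps / x
print(np.max(np.abs(chk(x, y, eps))))  # ~0 up to rounding

# Conversely, chk(x, y, eps) = 0 forces x*y = eps: perturbing y moves the
# residual away from zero.
print(abs(chk(2.0, 2 * eps, eps)))     # clearly nonzero, since x*y != eps
```

The algebra behind the check: $\varphi_\varepsilon(x,y)=0$ gives $\sqrt{(x-y)^2+4\varepsilon}=x+y$, and squaring yields $4xy=4\varepsilon$ with $x+y>0$.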

Theorem 3.2. Let $M$ be a $P_0$ matrix and assume that Condition 2.1 is satisfied. If $\{(x^k,y^k,\theta^k)\}$ is generated by Algorithm 2.1, then $\{(x^k,y^k)\}$ has at least one accumulation point, and
$$(3.10)\qquad \lim_{k\to\infty}\theta^k = 0, \qquad \lim_{k\to\infty}\|G_{\theta^k}(x^k,y^k)\| = 0. $$
Thus, every accumulation point of $\{(x^k,y^k)\}$ is a solution to the LCP.

Proof. By Theorem 3.1, the iterative sequence $\{(x^k,y^k)\}$ generated by the algorithm is bounded, and hence it has at least one accumulation point. By Proposition 2.2, we have
$$ \|G_{\theta^k}(x^k,y^k)\| \le \|G_{\theta^k}(x^k,y^k)-\theta^k(b,c)\|+\theta^k\|(b,c)\| \le \theta^k\bigl[\beta+\|(b,c)\|\bigr]. $$
Thus, to show the second limit in (3.10) it is sufficient to show that $\theta^k\to0$. By the construction of the algorithm, we have either $\theta^{k+1}=(1-\gamma_k)\theta^k$ or $\theta^{k+1}=(\theta^k)^2$. Thus $\theta^k$ is monotonically decreasing, and hence there exists a scalar $\tilde\theta$ with $0\le\tilde\theta<1$ such that $\theta^k\to\tilde\theta$. If $\tilde\theta=0$, the desired result follows. Assume on the contrary that $\tilde\theta>0$; we now derive a contradiction. Since $\tilde\theta>0$, the algorithm eventually phases out the approximate Newton step and takes only Step 3 and Step 4. In fact, if Step 2 were accepted infinitely many times, there would exist a subsequence $\{k_j\}$ such that $\theta^{k_j+1}=(\theta^{k_j})^2$, which implies that $\tilde\theta=\tilde\theta^2$. This is impossible since $0<\tilde\theta<1$. Thus, there exists a $k_0$ such that for all $k\ge k_0$ the iterates $\{(x^k,y^k)\}_{k\ge k_0}$ are generated only by Step 3, and hence $\theta^{k+1}=(1-\gamma_k)\theta^k$ for all $k\ge k_0$. Since $\theta^k\to\tilde\theta>0$, it follows that $\gamma_k\to0$. Thus, for all sufficiently large $k$ we have $(x^{k+1},y^{k+1})\notin\mathcal N\bigl(\beta,(1-\tfrac1\alpha\gamma_k)\theta^k\bigr)$, that is,
$$ \Bigl\|G_{(1-\frac1\alpha\gamma_k)\theta^k}(x^{k+1},y^{k+1})-\Bigl(1-\tfrac1\alpha\gamma_k\Bigr)\theta^k(b,c)\Bigr\| > \beta\Bigl(1-\tfrac1\alpha\gamma_k\Bigr)\theta^k. $$
Since the sequence $\{(x^{k+1},y^{k+1})\}$ is bounded, taking a subsequence if necessary we may assume that it converges to some $(\hat x,\hat y)$. Notice that $\gamma_k\to0$. Taking the limit in the above inequality, we have
$$ \|G_{\tilde\theta}(\hat x,\hat y)-\tilde\theta(b,c)\| \ge \beta\tilde\theta > 0. $$
Since $\tilde\theta>0$, the matrix $G'_{\tilde\theta}(\hat x,\hat y)$ is nonsingular. Let $(d\hat x,d\hat y)$ be the solution to
$$ G_{\tilde\theta}(\hat x,\hat y)-\tilde\theta(b,c)+G'_{\tilde\theta}(\hat x,\hat y)(dx,dy) = 0. $$
Then $(d\hat x,d\hat y)$ is a strict descent direction for $\|G_{\tilde\theta}(x,y)-\tilde\theta(b,c)\|$ at $(\hat x,\hat y)$. As a result, the line search steplengths $\hat\lambda$ (in Step 3) and $\hat\gamma$ (in Step 4) are both positive constants. Since $G$ and $G'$ are continuous in a neighborhood of $(\hat x,\hat y)$, it follows that $(dx^k,dy^k,\lambda_k,\gamma_k)\to(d\hat x,d\hat y,\hat\lambda,\hat\gamma)$, and therefore $\lambda_k$ and $\gamma_k$ are uniformly bounded from below by a positive constant for all sufficiently large $k$. This contradicts the fact that $\gamma_k\to0$. Therefore, $\theta^k\to0$ must hold.

Let $(\hat x,\hat y)$ be an arbitrary accumulation point of $\{(x^k,y^k)\}$. Then by (3.10),
$$ 0 = \lim_{k\to\infty}G_{\theta^k}(x^k,y^k) = G_0(\hat x,\hat y), $$
which implies that $(\hat x,\hat y)$ is a solution to the LCP.

Remark 3.2. We have pointed out that the global convergence of most existing non-interior-point methods for $P_0$ LCPs actually requires the boundedness of the solution set, in which case the $P_0$ problem must have a strictly feasible point. In order to relax this requirement, Chen and Ye [12] designed a big-M smoothing method for $P_0$ LCPs. They proved that if the $P_0$ LCP has a solution and if a certain condition (for instance, a bound relating $\bar x_{n+2}\bar y_{n+2}$ to a tolerance $\varepsilon$) is satisfied at the accumulation point of their iterative sequence, then their algorithm is globally convergent. We note that Condition 2.1 in this paper is quite different from Chen and Ye's; however, the exact relation between the two conditions is not clear. While global convergence for $P_0$ LCPs is proved under Condition 2.1, it should be pointed out that this condition is not necessary for the global convergence of $P_*$ problems. We can prove that Algorithm 2.1 is globally convergent provided only that the $P_*$ LCP has a solution. Since this result does not follow from Theorem 3.2, and since its proof is not straightforward, we postpone the discussion of this special case until the local convergence analysis for $P_0$ LCPs is complete.

4. Local behavior of the algorithm. Under a nondegeneracy assumption, we show in this section the local superlinear convergence of the algorithm when $p=2\le q$. Let $(x^*,y^*)$ be an accumulation point of the iterative sequence $\{(x^k,y^k)\}$ generated by Algorithm 2.1. We make use of the following assumption, which can also be found in [38, 39, 33].

Condition 4.1. The point $(x^*,y^*)$ is strictly complementary, i.e., $x^*+y^*>0$, where $y^*=Mx^*+d$, and the matrix $M_{II}$ is nonsingular, where $I=\{i: x^*_i>0\}$.

While this condition for local convergence has been used by several authors, it is stronger than the conditions required by some existing non-interior-point algorithms. Let $M$ be a $P_0$ matrix. Under the above condition, it is easy to verify the nonsingularity of the matrix
$$(4.1)\qquad G'_0(x^*,y^*) = \begin{pmatrix} I-W & I+W \\ -M & I \end{pmatrix}, $$
where $W=\mathrm{diag}(w)$ is a diagonal matrix with $w_i=1$ if $x^*_i>0$ and $w_i=-1$ otherwise. If Condition 4.1 is satisfied, it follows easily from Proposition 2.5 of Qi [32] that $(x^*,y^*)$ is a locally isolated solution. On the other hand, it is well known that a $P_0$ complementarity problem has a unique solution when it has a locally isolated solution (Jones and Gowda [23] and Gowda and Sznajder [17]). Thus, Condition 4.1 implies the uniqueness of the solution of the $P_0$ LCP, and hence it implies Condition 2.1. By Theorem 3.2, we conclude that under Condition 4.1 the entire sequence $\{(x^k,y^k)\}$ generated by Algorithm 2.1 converges to the unique solution of the $P_0$ LCP, i.e., $(x^k,y^k)\to(x^*,y^*)$. By continuity of $G'_\theta$ and nonsingularity of $G'_0(x^*,y^*)$, there exist a neighborhood $N(x^*,y^*)$ of $(x^*,y^*)$, a constant $C$ and $\hat\theta\in(0,1)$ such that for all $(x,y)\in N(x^*,y^*)$ and all $\theta\in(0,\hat\theta]$ the matrix $G'_\theta(x,y)$ is nonsingular and
$$ \|G'_\theta(x,y)^{-1}\| \le C. $$
The following result is very useful for our local convergence analysis.

Lemma 4.1. Let $M$ be a $P_0$ matrix. Under Condition 4.1, there exists a neighborhood $N(x^*,y^*)$ of $(x^*,y^*)$ such that for all $(x^k,y^k)\in N(x^*,y^*)$ we have
(i) $\|G'_{\theta^k}(x^k,y^k)-G'_0(x^k,y^k)\| \le \kappa\max\{(\theta^k)^q,(\theta^k)^p\}$, where $\kappa$ is a constant;
(ii) $G_0(x^k,y^k)-G_0(x^*,y^*)-G'_0(x^k,y^k)\bigl[(x^k,y^k)-(x^*,y^*)\bigr] = 0$.
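A quick numerical illustration of (4.1): at a strictly complementary solution, $W$ has $+1$ entries on $I$ and $-1$ entries on $J$, and the limiting Jacobian $\begin{pmatrix}I-W & I+W\\ -M & I\end{pmatrix}$ is nonsingular when $M_{II}$ is. The toy data below (not from the paper) spot-check this for a $3\times3$ matrix, assuming the complementarity pattern $I=\{1,2\}$, $J=\{3\}$:

```python
import numpy as np

# Toy P0 matrix (here even positive semidefinite) and a strict-complementarity
# pattern: x*_1 > 0, x*_2 > 0 (I = {1,2}) and y*_3 > 0 (J = {3}).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])
w = np.array([1.0, 1.0, -1.0])   # w_i = 1 if x*_i > 0, else -1
W = np.diag(w)
I3 = np.eye(3)

# Limiting Jacobian from (4.1): G_0'(x*, y*) = [[I - W, I + W], [-M, I]].
J0 = np.block([[I3 - W, I3 + W], [-M, I3]])

M_II = M[:2, :2]                 # principal submatrix on I = {1, 2}
print(np.linalg.det(M_II), np.linalg.det(J0))
```

Both determinants are nonzero, consistent with the claim that nonsingularity of $M_{II}$ yields nonsingularity of $G'_0(x^*,y^*)$.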

Proof. Let $I=\{i: x^*_i>0\}$ and $J=\{j: y^*_j>0\}$. Then, by strict complementarity, $I\cap J$ is empty and $I\cup J=\{1,2,\dots,n\}$. Denote $\eta=\tfrac12\min\{\min_{i\in I}x^*_i,\ \min_{j\in J}y^*_j\}$. We first show the following inequality:
$$(4.2)\qquad \|W-(X^k-Y^k)D^k\| \le \frac{2}{\eta^2}(\theta^k)^q\|a^q\| \quad\text{for all } (x^k,y^k)\in N(x^*,y^*), $$
where $W$ is given as in (4.1) and $D^k$ is defined as in (2.7). As we have pointed out, under Condition 4.1 the sequence $\{(x^k,y^k)\}$ converges to $(x^*,y^*)$. For all $(x^k,y^k)\in N(x^*,y^*)$, without loss of generality we may assume that
$$ x^k_i-y^k_i \ge \eta > 0 \ \ (i\in I), \qquad -(x^k_i-y^k_i) \ge \eta > 0 \ \ (i\in J). $$
Hence, when $k$ is sufficiently large, for each $i\in I$ we have
$$(4.3)\qquad \bigl|W_i-(x^k_i-y^k_i)d^k_i\bigr| = 1-\frac{x^k_i-y^k_i}{\sqrt{(x^k_i-y^k_i)^2+4(\theta^k)^qa^q_i}} = \frac{4(\theta^k)^qa^q_i}{\sqrt{(x^k_i-y^k_i)^2+4(\theta^k)^qa^q_i}\,\Bigl(\sqrt{(x^k_i-y^k_i)^2+4(\theta^k)^qa^q_i}+x^k_i-y^k_i\Bigr)} \le \frac{4(\theta^k)^qa^q_i}{\sqrt{\eta^2+4(\theta^k)^qa^q_i}\,\Bigl(\sqrt{\eta^2+4(\theta^k)^qa^q_i}+\eta\Bigr)} \le \frac{2}{\eta^2}(\theta^k)^qa^q_i. $$
Similarly, for $j\in J$ we have
$$ \bigl|W_j-(x^k_j-y^k_j)d^k_j\bigr| \le \frac{2}{\eta^2}(\theta^k)^qa^q_j, $$
which together with (4.3) yields the desired inequality (4.2). On the other hand, by strict complementarity, for every sufficiently large $k$ it is evident that $(X^k-Y^k)\bar D^k = W$, where $\bar D^k=\mathrm{diag}(\bar d^k)$ with $\bar d^k_i = 1/\sqrt{(x^k_i-y^k_i)^2}$ $(i=1,\dots,n)$. Thus, for every sufficiently large $k$ we have
$$(4.4)\qquad G'_0(x^k,y^k)-G'_0(x^*,y^*) = \begin{pmatrix} I-(X^k-Y^k)\bar D^k & I+(X^k-Y^k)\bar D^k \\ -M & I \end{pmatrix} - \begin{pmatrix} I-W & I+W \\ -M & I \end{pmatrix} = \begin{pmatrix} W-(X^k-Y^k)\bar D^k & (X^k-Y^k)\bar D^k-W \\ 0 & 0 \end{pmatrix} = 0. $$

By using (2.7), (4.2) and (4.4), for every sufficiently large $k$ we have
$$ \|G'_{\theta^k}(x^k,y^k)-G'_0(x^k,y^k)\| = \|G'_{\theta^k}(x^k,y^k)-G'_0(x^*,y^*)\| = \Bigl\|\begin{pmatrix} W-(X^k-Y^k)D^k & (X^k-Y^k)D^k-W \\ -(\theta^k)^pA^p & 0 \end{pmatrix}\Bigr\| \le 2\|W-(X^k-Y^k)D^k\|+(\theta^k)^p\|A^p\| \le \frac{4}{\eta^2}(\theta^k)^q\|a^q\|+(\theta^k)^p\|a^p\| \le \kappa\max\{(\theta^k)^q,(\theta^k)^p\}, $$
where $\kappa = 4\|a^q\|/\eta^2+\|a^p\|$ is a constant independent of $k$. Result (i) is proved.

We now prove result (ii). By strict complementarity and the definition of $W$, it is easy to see that for every sufficiently large $k$ the following hold:
$$ x^k+y^k-\sqrt{(x^k-y^k)^2} = (I-W)(x^k-x^*)+(I+W)(y^k-y^*), $$
$$ y^k-Mx^k-d = -M(x^k-x^*)+y^k-y^*. $$
Therefore, by using (4.4) and the above two equations, we have
$$ G_0(x^k,y^k)-G_0(x^*,y^*)-G'_0(x^k,y^k)\bigl((x^k,y^k)-(x^*,y^*)\bigr) = \begin{pmatrix} x^k+y^k-\sqrt{(x^k-y^k)^2} \\ y^k-Mx^k-d \end{pmatrix} - \begin{pmatrix} I-W & I+W \\ -M & I \end{pmatrix}\begin{pmatrix} x^k-x^* \\ y^k-y^* \end{pmatrix} = 0, $$
as desired.

In the next result, we show that under Condition 4.1 the algorithm is locally at least superlinearly convergent. The key of the proof is to show that the algorithm eventually rejects the centering step and switches to Step 2 when the iterate approaches the solution set.

Theorem 4.1. Let $M$ be a $P_0$ matrix. Let $p=2\le q$ and $\beta>2\|a^{q/2}\|+\|(b,c)\|$. Assume that Condition 4.1 is satisfied. Then there exists a $k_0$ such that $\theta^{k+1}=(\theta^k)^2$ for all $k\ge k_0$, and
$$ \lim_{k\to\infty}\frac{\|x^{k+1}-x^*\|}{\|x^k-x^*\|} = 0, $$
which implies that the algorithm is locally superlinearly convergent.

Proof. Let $N(x^*,y^*)$ be a neighborhood of $(x^*,y^*)$ defined as in Lemma 4.1. We first show that for all $(x^k,y^k)\in N(x^*,y^*)$ there exists a constant $\delta>0$ such that
$$ \|(\hat x^{k+1},\hat y^{k+1})-(x^*,y^*)\| \le \delta\max\{(\theta^k)^q,(\theta^k)^p\}\,\|(x^k,y^k)-(x^*,y^*)\|. $$
As we have pointed out, Condition 4.1 implies that $(x^k,y^k)\to(x^*,y^*)$, and there exist constants $C$ and $\hat\theta$ such that $\|G'_{\theta^k}(x^k,y^k)^{-1}\|\le C$ for all $(x^k,y^k)\in N(x^*,y^*)$ and $\theta^k\in(0,\hat\theta]$. Therefore, for all sufficiently large $k$, by Lemma 4.1 we have
$$ \|(\hat x^{k+1},\hat y^{k+1})-(x^*,y^*)\| $$

$$ = \|(x^k,y^k)-(x^*,y^*)-G'_{\theta^k}(x^k,y^k)^{-1}G_0(x^k,y^k)\| = \bigl\|G'_{\theta^k}(x^k,y^k)^{-1}\bigl\{[G'_{\theta^k}(x^k,y^k)-G'_0(x^k,y^k)]\bigl((x^k,y^k)-(x^*,y^*)\bigr) - \bigl[G_0(x^k,y^k)-G_0(x^*,y^*)-G'_0(x^k,y^k)\bigl((x^k,y^k)-(x^*,y^*)\bigr)\bigr]\bigr\}\bigr\| $$
$$ \le \|G'_{\theta^k}(x^k,y^k)^{-1}\|\,\bigl\|[G'_{\theta^k}(x^k,y^k)-G'_0(x^k,y^k)]\bigl((x^k,y^k)-(x^*,y^*)\bigr)\bigr\| + \|G'_{\theta^k}(x^k,y^k)^{-1}\|\,\bigl\|G_0(x^k,y^k)-G_0(x^*,y^*)-G'_0(x^k,y^k)\bigl((x^k,y^k)-(x^*,y^*)\bigr)\bigr\| $$
$$ \le C\|G'_{\theta^k}(x^k,y^k)-G'_0(x^k,y^k)\|\,\|(x^k,y^k)-(x^*,y^*)\| \le C\kappa\max\{(\theta^k)^q,(\theta^k)^p\}\,\|(x^k,y^k)-(x^*,y^*)\|, $$
where the second term vanishes by Lemma 4.1(ii). Setting $\delta=C\kappa$, the desired inequality follows.

The above inequality implies that the sequence $\{(\hat x^{k+1},\hat y^{k+1})\}$ also converges to $(x^*,y^*)$. Notice that $\theta^k\to0$ (by Theorem 3.2). To establish the local superlinear convergence of Algorithm 2.1, the above inequality shows that it suffices to prove that the algorithm eventually takes the approximate Newton step alone. Since $(x^*,y^*)$ is a strictly complementary solution, $G_0(x,y)$ is continuously differentiable in a neighborhood of $(x^*,y^*)$, and thus it is Lipschitzian in that neighborhood. Hence, there exists a constant $L>0$ such that for all sufficiently large $k$
$$ \|G_0(\hat x^{k+1},\hat y^{k+1})\| = \|G_0(\hat x^{k+1},\hat y^{k+1})-G_0(x^*,y^*)\| \le L\|(\hat x^{k+1},\hat y^{k+1})-(x^*,y^*)\| \le L\delta\max\{(\theta^k)^q,(\theta^k)^p\}\,\|(x^k,y^k)-(x^*,y^*)\| = \tau_k\max\{(\theta^k)^q,(\theta^k)^p\}, $$
where $\tau_k = L\delta\|(x^k,y^k)-(x^*,y^*)\|\to0$ as $k\to\infty$. That is,
$$(4.5)\qquad \|G_0(\hat x^{k+1},\hat y^{k+1})\| \le \tau_k\max\{(\theta^k)^q,(\theta^k)^p\} \quad\text{for all sufficiently large } k. $$
Setting $(\mu_1,\mu_2)=(\mu,0)$ in (2.8), where $\mu\in(0,1)$, we see from the first inequality in (2.8) that
$$(4.6)\qquad \|G_\mu(x,y)-G_0(x,y)\| \le \mu\bigl\|\bigl(2\mu^{q/2-1}a^{q/2},\ \mu^{p-1}A^px\bigr)\bigr\| \quad\text{for all } (x,y)\in R^{2n}. $$
Thus, by using (4.5) and (4.6), for all sufficiently large $k$ we have
$$(4.7)\qquad \|G_{(\theta^k)^2}(\hat x^{k+1},\hat y^{k+1})-(\theta^k)^2(b,c)\| \le \|G_{(\theta^k)^2}(\hat x^{k+1},\hat y^{k+1})-G_0(\hat x^{k+1},\hat y^{k+1})\| + \|G_0(\hat x^{k+1},\hat y^{k+1})\| + (\theta^k)^2\|(b,c)\| $$
$$ \le (\theta^k)^2\bigl\|\bigl(2[(\theta^k)^2]^{q/2-1}a^{q/2},\ [(\theta^k)^2]^{p-1}A^p\hat x^{k+1}\bigr)\bigr\| + \tau_k\max\{(\theta^k)^q,(\theta^k)^p\} + (\theta^k)^2\|(b,c)\| $$
$$ \le (\theta^k)^2\bigl(2\|a^{q/2}\|+\|(b,c)\|+[(\theta^k)^2]^{p-1}\|A^p\hat x^{k+1}\|\bigr) + \tau_k\max\{(\theta^k)^q,(\theta^k)^p\} $$
$$ = \beta(\theta^k)^2\biggl[\frac{2\|a^{q/2}\|+\|(b,c)\|}{\beta} + \frac{\tau_k\max\{(\theta^k)^{q-2},(\theta^k)^{p-2}\}}{\beta} + \frac{[(\theta^k)^2]^{p-1}\|A^p\hat x^{k+1}\|}{\beta}\biggr] \le \beta(\theta^k)^2. $$
The third inequality follows from $q\ge2$ and $[(\theta^k)^2]^{q/2-1}\le1$. The last inequality follows from the facts that $p=2\le q$, $\beta>2\|a^{q/2}\|+\|(b,c)\|$, $\tau_k\to0$ and
$$ \lim_{k\to\infty}\frac{[(\theta^k)^2]^{p-1}\|A^p\hat x^{k+1}\|}{\beta} = 0. $$

Thus, from (4.7), the approximate Newton step is accepted at the $k$th step provided that $k$ is sufficiently large; therefore the next iterate is $(x^{k+1},y^{k+1})=(\hat x^{k+1},\hat y^{k+1})$. Repeating the above argument, we see that at the $(k+1)$th step $(x^{k+2},y^{k+2})=(\hat x^{k+2},\hat y^{k+2})$, i.e., the approximate Newton step is again accepted. By induction, we conclude that the algorithm eventually takes only the approximate Newton step. Hence, for some $k_0$, we have $\theta^{k+1}=(\theta^k)^2$ for all $k\ge k_0$, and $\lim_{k\to\infty}\|x^{k+1}-x^*\|/\|x^k-x^*\|=0$.

The proof above shows that if an iterate $(x^k,y^k)$ lies in a sufficiently small neighborhood of $(x^*,y^*)$, then the next iterate remains in this neighborhood and is much closer to the solution $(x^*,y^*)$ than $(x^k,y^k)$. Since the centering step is gradually phased out and only approximate Newton steps are executed in the end, the superlinear convergence of the algorithm is achieved.

5. Special cases. In this section, we establish global convergence results that are much deeper than Theorem 3.2 when the algorithm is applied to $P_*$ LCPs. For this special case, the only assumption needed to ensure global convergence is the nonemptiness of the solution set. In other words, the algorithm is able to solve any $P_*$ LCP provided that a solution exists. For a given LCP, we denote
$$(5.1)\qquad I = \{i: x^*_i>0 \text{ for some solution } x^*\}, $$
$$(5.2)\qquad J = \{j: (Mx^*+d)_j>0 \text{ for some solution } x^*\}, $$
$$(5.3)\qquad K = \{k: x^*_k=(Mx^*+d)_k=0 \text{ for all solutions } x^*\}. $$
The above partition of the set $\{1,2,\dots,n\}$ is unique for a given $P_*$ LCP. Consider the affine set
$$ S = \{(x,y)\in R^{2n}: x_{J\cup K}=0,\ y_{I\cup K}=0,\ y=Mx+d\}. $$
In fact, $S$ is the affine hull of the solution set of the LCP, i.e., the smallest affine set containing the solution set. For any $(\tilde x,\tilde y)\in S$, it is easy to see that $\tilde x_i\tilde y_i=0$ for all $i=1,\dots,n$. We now prove a very useful result.

Lemma 5.1. Let $(\tilde x,\tilde y)$ be an arbitrary vector in $S$, let $M$ be a $P_*$ matrix, let $\{(x^k,y^k,\theta^k)\}$ be generated by Algorithm 2.1, and let $(\bar x^k,\bar y^k)$ be defined by (3.7). Then
$$(5.4)\qquad \tilde x^T\bar y^k+\tilde y^T\bar x^k \le (\theta^k)^q(1+\tau n)e^Ta^q - \tau n\min_{1\le i\le n}\rho^k_i - (\bar x^k-\tilde x)^T\Bigl[(\theta^k)^pA^p\bar x^k+\theta^k(c+c^k)+\tfrac12\theta^k\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)\Bigr], $$
where
$$ \rho^k_i = \bar x^k_i\tilde y_i+\tilde x_i\bar y^k_i+(\bar x^k_i-\tilde x_i)\Bigl\{(\theta^k)^pa^p_i\bar x^k_i+\theta^k(c_i+c^k_i)+\tfrac12\theta^k\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i\Bigr\}. $$
Proof. Since $(\bar x^k,\bar y^k)>0$ and $\tilde x_i\tilde y_i=0$ for all $i=1,\dots,n$, by (3.8) we have
$$ (\bar x^k_i-\tilde x_i)(\bar y^k_i-\tilde y_i) = \bar x^k_i\bar y^k_i-\bar x^k_i\tilde y_i-\tilde x_i\bar y^k_i+\tilde x_i\tilde y_i = (\theta^k)^qa^q_i-\bar x^k_i\tilde y_i-\tilde x_i\bar y^k_i. $$
It is easy to verify that
$$ \bar y^k = M\bar x^k+d+(\theta^k)^pA^p\bar x^k+\theta^k(c+c^k)+\tfrac12\theta^k\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k). $$

Thus, we have
$$ (\bar x^k_i-\tilde x_i)\bigl[M(\bar x^k-\tilde x)\bigr]_i = (\bar x^k_i-\tilde x_i)\bigl[(M\bar x^k+d)_i-\tilde y_i\bigr] = (\bar x^k_i-\tilde x_i)\Bigl(\bar y^k_i-(\theta^k)^pa^p_i\bar x^k_i-\theta^k(c_i+c^k_i)-\tfrac12\theta^k\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i-\tilde y_i\Bigr) $$
$$ = (\bar x^k_i-\tilde x_i)(\bar y^k_i-\tilde y_i) - (\bar x^k_i-\tilde x_i)\Bigl\{(\theta^k)^pa^p_i\bar x^k_i+\theta^k(c_i+c^k_i)+\tfrac12\theta^k\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i\Bigr\} = (\theta^k)^qa^q_i-\rho^k_i $$
$$(5.5)\qquad \le (\theta^k)^qe^Ta^q-\min_{1\le i\le n}\rho^k_i, $$
where $\rho^k_i$ is as defined in the statement of the lemma. Therefore, by (3.8), (5.5) and the definition of a $P_*$ matrix, we have
$$ \tilde x^T\bar y^k+\tilde y^T\bar x^k = -(\bar x^k-\tilde x)^T(\bar y^k-\tilde y)+(\bar x^k)^T\bar y^k $$
$$ = -(\bar x^k-\tilde x)^T\Bigl[M\bar x^k+d+(\theta^k)^pA^p\bar x^k+\theta^k(c+c^k)+\tfrac12\theta^k\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)-\tilde y\Bigr]+(\theta^k)^qe^Ta^q $$
$$ = -(\bar x^k-\tilde x)^TM(\bar x^k-\tilde x)-(\bar x^k-\tilde x)^T\Bigl[(\theta^k)^pA^p\bar x^k+\theta^k(c+c^k)+\tfrac12\theta^k\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)\Bigr]+(\theta^k)^qe^Ta^q $$
$$ \le \tau\sum_{i\in I_+}(\bar x^k_i-\tilde x_i)\bigl[M(\bar x^k-\tilde x)\bigr]_i-(\bar x^k-\tilde x)^T[\cdots]+(\theta^k)^qe^Ta^q \le \tau n\Bigl((\theta^k)^qe^Ta^q-\min_{1\le i\le n}\rho^k_i\Bigr)-(\bar x^k-\tilde x)^T[\cdots]+(\theta^k)^qe^Ta^q $$
$$ = (\theta^k)^q(1+\tau n)e^Ta^q-\tau n\min_{1\le i\le n}\rho^k_i-(\bar x^k-\tilde x)^T\Bigl[(\theta^k)^pA^p\bar x^k+\theta^k(c+c^k)+\tfrac12\theta^k\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)\Bigr], $$
where $I_+=\{i:(\bar x^k_i-\tilde x_i)[M(\bar x^k-\tilde x)]_i>0\}$ and we used $|I_+|\le n$ together with (5.5). The proof is complete.

The following result shows that, under a suitable choice of the parameters, our algorithm can locate a solution of the $P_*$ LCP as long as a solution exists.
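Section 5's claim can be made concrete on a small example. The monotone (hence $P_*$ with $\tau=0$) LCP below has an unbounded solution set and no strictly feasible point, yet a Tikhonov-type regularization, replacing $M$ by $M+\varepsilon I$ in the spirit of the $(\theta^k)^pA^px$ term, still singles out the least 2-norm solution as $\varepsilon\to0$ (Theorem 5.3 establishes the analogous property for the algorithm itself). The data and the brute-force active-set enumeration are illustrative only, not the paper's method:

```python
import numpy as np
from itertools import combinations

def solve_lcp_enum(M, d):
    """Brute-force LCP solver for tiny n: try every candidate active set."""
    n = len(d)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            S = list(S)
            x = np.zeros(n)
            if S:
                try:
                    x[S] = np.linalg.solve(M[np.ix_(S, S)], -d[S])
                except np.linalg.LinAlgError:
                    continue
            y = M @ x + d
            if (x >= -1e-10).all() and (y >= -1e-10).all():
                return x
    return None

# Monotone (PSD, singular) LCP with solution set {(1 + t, t): t >= 0} -- unbounded,
# and with no strictly feasible point, since y_1 + y_2 = 0 for every x.
M = np.array([[1.0, -1.0], [-1.0, 1.0]])
d = np.array([-1.0, 1.0])

for eps in [1e-2, 1e-4, 1e-6]:
    x_eps = solve_lcp_enum(M + eps * np.eye(2), d)
    print(eps, x_eps)   # approaches the least 2-norm solution (1, 0)
```

For each $\varepsilon>0$ the regularized matrix is positive definite, so the regularized LCP has a unique solution, here $x(\varepsilon)=(1/(1+\varepsilon),\,0)$, which converges to the least 2-norm solution $(1,0)$.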

Theorem 5.1. Let $M$ be a $P_*$ matrix and assume that the solution set of the $P_*$ LCP is nonempty. If one of the following holds:
(i) $p\le1$;
(ii) $p>1$, $c>\tfrac12\|(M-I)b\|e$, and $0<\beta<\min_{1\le i\le n}\dfrac{c_i-\frac12\|(M-I)b\|}{1+\|M-I\|/2}$,
then the sequence $\{(x^k,y^k,\theta^k)\}$ generated by Algorithm 2.1 is bounded, and
$$ \lim_{k\to\infty}\theta^k=0, \qquad \lim_{k\to\infty}\|G_{\theta^k}(x^k,y^k)\|=0. $$
Therefore, any accumulation point of $\{(x^k,y^k)\}$ is a solution to the LCP.

Proof. We focus on the proof of the boundedness of $\{(x^k,y^k)\}$. Let $(x^*,y^*)$ be an arbitrary solution to the LCP and set $(\tilde x,\tilde y)=(x^*,y^*)$ in Lemma 5.1. Since in this case $\bar x^k_iy^*_i+x^*_i\bar y^k_i\ge0$, we have
$$ \rho^k_i \ge \eta^k_i := (\bar x^k_i-x^*_i)\Bigl\{(\theta^k)^pa^p_i\bar x^k_i+\theta^k(c_i+c^k_i)+\tfrac12\theta^k\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i\Bigr\}. $$
This, together with (5.4), implies that
$$(5.6)\qquad (x^*)^T\bar y^k+(y^*)^T\bar x^k \le (\theta^k)^q(1+\tau n)e^Ta^q-\tau n\min_{1\le i\le n}\eta^k_i-(\bar x^k-x^*)^T\Bigl[(\theta^k)^pA^p\bar x^k+\theta^k(c+c^k)+\tfrac12\theta^k\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)\Bigr]. $$
Dividing both sides of the above by $(\theta^k)^p$ and noting that the left-hand side is nonnegative, we obtain
$$(5.7)\qquad (\bar x^k-x^*)^TA^p\bar x^k+\tau n\min_{1\le i\le n}\frac{\eta^k_i}{(\theta^k)^p}+(\theta^k)^{1-p}(\bar x^k-x^*)^T\Bigl[c+c^k+\tfrac12\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)\Bigr] \le (\theta^k)^{q-p}(1+\tau n)e^Ta^q. $$
If $p\le1$, the right-hand side of the above inequality is bounded since $q\ge1$ and $\theta^k<1$. This implies that the sequence $\{\bar x^k\}$ is bounded (otherwise the left-hand side would be unbounded from above), and thus $\{x^k\}$ is bounded; so is $\{y^k\}$ by (3.6). The boundedness of $\{(x^k,y^k)\}$ under (i) is proved.

We now prove the boundedness of $\{(x^k,y^k)\}$ in case (ii). Consider two subcases.

Subcase 1: $\theta^k\not\to0$. In this case, there exists a constant $\hat\theta>0$ such that $1>\theta^k\ge\hat\theta$. It is easy to see from (5.7) that the sequence $\{\bar x^k\}$ is bounded, and thus $\{(x^k,y^k)\}$ is bounded.

Subcase 2: $\theta^k\to0$. In this case, by the choice of $p$, $\beta$ and $c$, it is easy to see that
$$(5.8)\qquad c_i+c^k_i+\tfrac12\bigl[(M-I)(b+b^k)\bigr]_i > c_i-\beta-\tfrac12\|(M-I)(b+b^k)\| \ge c_i-\beta-\tfrac12\|(M-I)b\|-\tfrac12\|(M-I)b^k\| \ge c_i-\tfrac12\|(M-I)b\|-\beta\Bigl(1+\tfrac12\|M-I\|\Bigr) > 0. $$

Since $\theta^k\to0$, for all sufficiently large $k$ it follows that
$$ c_i+c^k_i+\tfrac12\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i > 0. $$
Thus, for all sufficiently large $k$ we have
$$(5.9)\qquad \eta^k_i \ge -\theta^kx^*_i\Bigl\{(\theta^k)^{p-1}a^p_i\bar x^k_i+c_i+c^k_i+\tfrac12\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i\Bigr\} \ge -\theta^k\Bigl[(\theta^k)^{p-1}(x^*)^TA^p\bar x^k+\max_{1\le i\le n}x^*_i\Bigl\{c_i+c^k_i+\tfrac12\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i\Bigr\}\Bigr]. $$
Since the left-hand side of (5.6) is nonnegative, dividing both sides of (5.6) by $\theta^k$ and using (5.9), we have
$$ 0 \le (\theta^k)^{q-1}(1+\tau n)e^Ta^q-\tau n\min_{1\le i\le n}\frac{\eta^k_i}{\theta^k}-(\theta^k)^{p-1}(\bar x^k-x^*)^TA^p\bar x^k-(\bar x^k-x^*)^T(c+c^k)-\tfrac12(\bar x^k-x^*)^T\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k) $$
$$ \le (\theta^k)^{q-1}(1+\tau n)e^Ta^q+\tau n\max_{1\le i\le n}x^*_i\Bigl\{c_i+c^k_i+\tfrac12\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i\Bigr\}+(\theta^k)^{p-1}(1+\tau n)(x^*)^TA^p\bar x^k-(\bar x^k-x^*)^T(c+c^k)-\tfrac12(\bar x^k-x^*)^T\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k). $$
It follows that
$$(5.10)\qquad (\bar x^k)^T\Bigl[c+c^k-(\theta^k)^{p-1}(1+\tau n)A^px^*+\tfrac12\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)\Bigr] \le (\theta^k)^{q-1}(1+\tau n)e^Ta^q+\tau n\max_{1\le i\le n}x^*_i\Bigl\{c_i+c^k_i+\tfrac12\bigl[(M-I+(\theta^k)^pA^p)(b+b^k)\bigr]_i\Bigr\}+(x^*)^T\Bigl[c+c^k+\tfrac12\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)\Bigr]. $$
Since $p>1$ and $\theta^k\to0$, by a proof similar to (5.8), for all sufficiently large $k$ we have
$$ c+c^k-(\theta^k)^{p-1}(1+\tau n)A^px^*+\tfrac12\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k) \ge \tfrac12\Bigl\{c-\tfrac12\|(M-I)b\|e-\beta\Bigl(1+\tfrac12\|M-I\|\Bigr)e\Bigr\} > 0. $$
Since the right-hand side of (5.10) is bounded and $\bar x^k>0$, it follows from the above inequality and (5.10) that $\{\bar x^k\}$ is bounded, and hence $\{(x^k,y^k)\}$ is bounded.

Based on the boundedness of $\{(x^k,y^k)\}$, repeating the proof of Theorem 3.2 we can prove that $\theta^k\to0$ and $\|G_{\theta^k}(x^k,y^k)\|\to0$.

Remark 5.1. It is worth mentioning the difference between cases (i) and (ii) of the above theorem. In case (i), there is no restriction on the parameter $\beta>0$; thus $\beta$ can be assigned a large value so that the neighborhood is wide enough to ensure a large

steplength at each iteration. In case (ii), however, the parameter $\beta$ is required to be relatively small. To satisfy this requirement, an initial point for Algorithm 2.1 can also be obtained easily. For example, set $x^0=0$, $a\in R^n_{++}$, $\theta^0\in(0,1)$, and choose $y^0\in R^n_{++}$ large enough such that $c>\tfrac12\|(M-I)b\|e$, where
$$ b = \frac{x^0+y^0-\sqrt{(x^0-y^0)^2+4(\theta^0)^qa^q}}{\theta^0} = \frac{-4(\theta^0)^qa^q}{\theta^0\Bigl(y^0+\sqrt{(y^0)^2+4(\theta^0)^qa^q}\Bigr)}, \qquad c = \frac{y^0-\bigl(f(x^0)+(\theta^0)^pA^px^0\bigr)}{\theta^0} = \frac{y^0-f(0)}{\theta^0}. $$
The above choice implies that $G_{\theta^0}(x^0,y^0)=0$; thus $(x^0,y^0)\in\mathcal N(\beta,\theta^0)$ for any $\beta>0$. In particular, $\beta$ can be taken such that
$$ 0<\beta<\min_{1\le i\le n}\frac{c_i-\frac12\|(M-I)b\|}{1+\|M-I\|/2}. $$

In the rest of this section, we characterize the accumulation points of the sequence $\{(x^k,y^k)\}$. We first recall some concepts. Let $S^*$ denote the solution set of the LCP. An element $x^*$ of $S^*$ is said to be the $N$-norm least solution, where $N$ is a positive definite, symmetric matrix, if $\|N^{1/2}x^*\|\le\|N^{1/2}u\|$ for all $u\in S^*$. In particular, if $N=I$, the solution $x^*$ is called the least 2-norm solution of $S^*$. An element $x^*$ of $S^*$ is said to be the least element of $S^*$ if $x^*\le u$ for all $u\in S^*$ (see, for example, [30, 13]). The solution $x^*$ is called a maximally complementary solution if $x^*_i>0$ for all $i\in I$, $(Mx^*+d)_i>0$ for all $i\in J$, and $x^*_i=(Mx^*+d)_i=0$ for all $i\in K$. Clearly, a strictly complementary solution is a maximally complementary solution with $K=\emptyset$.

Theorem 5.2. Let $M$ be a $P_*$ matrix and assume that the solution set of the LCP is nonempty.
(i) If $p<1$, then every accumulation point $(\hat x,\hat y)$ of the sequence $\{(x^k,y^k)\}$ satisfies the following property: for any solution $x^*$, there exists a corresponding index $i_0$ such that
$$(5.11)\qquad (\hat x)^TA^p(\hat x-x^*)+\tau na^p_{i_0}\hat x_{i_0}(\hat x_{i_0}-x^*_{i_0}) \le 0. $$
Moreover, if the least element solution exists, then the entire sequence $\{(x^k,y^k)\}$ is convergent, and its limit coincides with the least element solution.
(ii) If $p>1$, $c>\tfrac12\|(M-I)b\|e$, $0<\beta<\min_{1\le i\le n}\dfrac{c_i-\frac12\|(M-I)b\|}{1+\|M-I\|/2}$, and $q=1$, then each accumulation point is a maximally complementary solution of the LCP.

Proof. For $p<1$, by result (i) of Theorem 5.1, $\{x^k\}$ is bounded and $\theta^k\to0$. Let $(\hat x,\hat y)$ be an arbitrary accumulation point of $\{(x^k,y^k)\}$. Taking the limit in (5.7), where $x^*$ is an arbitrary solution of the LCP, we see that there exists an index $i_0$ such that
$$ (\hat x)^TA^p(\hat x-x^*)+\tau na^p_{i_0}\hat x_{i_0}(\hat x_{i_0}-x^*_{i_0}) \le 0. $$
Moreover, if the least element solution exists, setting $x^*$ to be the least element, we conclude from the above inequality that $\hat x$ equals the least element. Since such an element is unique, the sequence $\{x^k\}$ is convergent. We now consider case (ii). By result (ii) of Theorem 5.1, the sequence $\{(x^k,y^k)\}$ is bounded, $\theta^k\to0$, and each accumulation point of $\{(x^k,y^k)\}$ is a solution to the LCP.

Let $(x^*,y^*)$ be a maximally complementary solution, and let $I$, $J$, $K$ be defined by (5.1)-(5.3). Then we have
$$ (x^*)^T\bar y^k+(y^*)^T\bar x^k = (x^*_I)^T\bar y^k_I+(y^*_J)^T\bar x^k_J = (x^*_I)^T(\bar X^k_I)^{-1}\bar X^k_I\bar y^k_I+(y^*_J)^T(\bar Y^k_J)^{-1}\bar Y^k_J\bar x^k_J = (\theta^k)^q\Bigl[(x^*_I)^T(\bar X^k_I)^{-1}a^q_I+(y^*_J)^T(\bar Y^k_J)^{-1}a^q_J\Bigr]. $$
By (5.6) and the above equality, we have
$$ (x^*_I)^T(\bar X^k_I)^{-1}a^q_I+(y^*_J)^T(\bar Y^k_J)^{-1}a^q_J \le (1+\tau n)e^Ta^q-\tau n\min_{1\le i\le n}\frac{\eta^k_i}{(\theta^k)^q}-(\bar x^k-x^*)^T\Bigl[(\theta^k)^{p-q}A^p\bar x^k+(\theta^k)^{1-q}(c+c^k)+\tfrac12(\theta^k)^{1-q}\bigl(M-I+(\theta^k)^pA^p\bigr)(b+b^k)\Bigr]. $$
Let $(\hat x,\hat y)$ be an arbitrary accumulation point of the iterates. Since $\theta^k\to0$ and $p>1=q$, we see that $\eta^k_i/(\theta^k)^q$ is bounded, and hence the right-hand side of the above inequality is bounded. Since $(x^*_I,y^*_J)>0$, we conclude that $\bar x^k_I\to\hat x_I>0$; otherwise, if $\hat x_i=0$ for some $i\in I$, then $x^*_i/\bar x^k_i\to\infty$, and hence the left-hand side tends to infinity, contradicting the boundedness of the right-hand side. In a similar way, we obtain $\hat y_J>0$. Thus, $(\hat x,\hat y)$ is a maximally complementary solution.

Since every positive semidefinite matrix is a $P_*$ matrix with $\tau=0$, result (i) above can be further sharpened for monotone LCPs. In fact, in view of Theorem 2.3, the following result is natural since the algorithm follows the regularized central path approximately.

Theorem 5.3. Let $M$ be a positive semidefinite matrix and assume that the solution set of the LCP is nonempty. For $p<1$, the entire sequence $\{(x^k,y^k)\}$ generated by Algorithm 2.1 converges to $(\hat x,\hat y)$, where $\hat x$ is the least $N$-norm solution with $N=A^p$. In particular, if $a=e$ is taken, the sequence converges to the (unique) least 2-norm solution.

Proof. For the case $p<1$, setting $\tau=0$ in (5.11) we have $(\hat x)^TA^p(\hat x-x^*)\le0$, which implies that $\|A^{p/2}\hat x\|\le\|A^{p/2}x^*\|$. Since $x^*$ is an arbitrary solution, it follows that the solution $\hat x$ is the least $N$-norm solution with $N=A^p$. It is also easy to see from the above inequality that the solution $\hat x$ is unique, and thus the entire sequence is convergent.

Remark 5.2. For $P_*$ LCPs, the boundedness assumption on the solution set (or the strict feasibility condition) is not required for the global convergence of our algorithm. Further, all results in this section can easily be extended to nonlinear $P_*$ complementarity problems. We notice that Ye's homogeneous model [41] for monotone LCPs, which was later generalized to nonlinear monotone complementarity problems by Andersen and Ye [2], also does not require the boundedness of the solution set (or strict feasibility) of the original problem. However, it is unknown whether Ye's algorithm can be generalized to nonlinear $P_*$ problems.

6. Numerical examples. Algorithm 2.1 was tested on some LCPs, nonlinear complementarity problems (NCPs), and nonlinear programming problems (NLPs) that can be written as complementarity problems via the KKT optimality conditions. For all test examples, common parameters and initial points were used in our algorithm.
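The initial-point recipe of Remark 5.1 can be sketched numerically. Assuming $G_\theta$ has the componentwise form suggested by (3.4)-(3.6) and Remark 5.1, namely $x+y-\sqrt{(x-y)^2+4\theta^qa^q}-\theta b$ in the first block and $y-Mx-d-\theta^pA^px-\theta c$ in the second, the vectors $b$ and $c$ defined there make the initial residual vanish at $(x^0,y^0)=(0,y^0)$ for any $y^0>0$, so the starting point lies in $\mathcal N(\beta,\theta^0)$ for every $\beta>0$. All data below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n))
d = rng.standard_normal(n)
a = np.ones(n)
p, q = 2.0, 1.0
theta0 = 0.5

x0 = np.zeros(n)
y0 = 10.0 * np.ones(n)   # "choose y0 large enough" per Remark 5.1

# b and c chosen exactly as in Remark 5.1, so that G_theta0(x0, y0) = 0:
b = (x0 + y0 - np.sqrt((x0 - y0) ** 2 + 4.0 * theta0 ** q * a ** q)) / theta0
c = (y0 - (M @ x0 + d) - theta0 ** p * a ** p * x0) / theta0

# Residuals of the (assumed) two blocks of G_theta at the initial point:
r1 = x0 + y0 - np.sqrt((x0 - y0) ** 2 + 4.0 * theta0 ** q * a ** q) - theta0 * b
r2 = y0 - (M @ x0 + d) - theta0 ** p * a ** p * x0 - theta0 * c
print(np.linalg.norm(r1), np.linalg.norm(r2))
```

Both residual norms vanish by construction, independently of the random data.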