Mathematics, Article ID 560578, 10 pages
http://dx.doi.org/10.1155/2014/560578

Research Article
A Penalized-Equation-Based Generalized Newton Method for Solving Absolute-Value Linear Complementarity Problems

Yuan Li, Hai-Shan Han, and Dan-Dan Yang
College of Mathematics, Inner Mongolia University for the Nationalities, The Inner Mongolia Autonomous Region, Tongliao 028000, China

Correspondence should be addressed to Yuan Li; imunliyuan@163.com

Received 7 June 2014; Accepted 26 August 2014; Published 9 September 2014

Academic Editor: Liwei Zhang

Copyright © 2014 Yuan Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider a class of absolute-value linear complementarity problems. We propose a new approximate reformulation of the absolute-value linear complementarity problem by means of a nonlinear penalized equation. Based on this reformulation, a penalized-equation-based generalized Newton method is proposed for solving the absolute-value linear complementarity problem. We show that the proposed method is globally and superlinearly convergent when the matrix of the complementarity problem is positive definite and its singular values exceed 1. Numerical results show that the proposed method is effective and efficient.

1. Introduction

Let F : R^n -> R^n be a given function. The complementarity problem, CP(F) for short, is to find a solution of the system

    x ≥ 0,  F(x) ≥ 0,  F(x)^T x = 0.  (1)

The CP(F) is called the linear complementarity problem (LCP for short) if F is an affine mapping of the form

    F(x) = Ax - b,  (2)

where A ∈ R^{n×n} and b ∈ R^n. Otherwise, the CP(F) is called the nonlinear complementarity problem (NCP(F)).

The systematic study of the finite-dimensional CP(F) began in the mid-1960s; in a span of five decades, the subject has developed into a very fruitful discipline in the field of mathematical programming. The developments include a rich mathematical theory, a host of effective solution algorithms, a multitude of interesting connections to numerous disciplines, and a wide range of important applications in engineering and economics (see, e.g., [1-4] and the references therein).

The generalized Newton method (semismooth Newton method) is one of the most effective algorithms for solving CP(F). Its main idea is to reformulate the complementarity problem equivalently as a nonsmooth equation and then to solve this nonsmooth equation by a Newton-type method (see, e.g., [5, 6]). Most reformulations of the CP(F) are based on the Fischer-Burmeister function [7] (see, e.g., [8-10] and the references therein). Chen et al. [11] introduced a penalized Fischer-Burmeister function and proposed a new semismooth Newton method based on this NCP function. Kanzow and Kleinmichel [12] proposed a new, one-parametric class of NCP functions based on the Fischer-Burmeister function and gave a semismooth Newton method via these NCP functions. Kanzow [13] studied an inexact semismooth Newton method based on the Fischer-Burmeister function and the penalized Fischer-Burmeister function. Ito and Kunisch [14] studied a semismooth Newton method based on the max-type NCP function. All the generalized Newton methods mentioned above require F in CP(F) to be continuously differentiable, and the existing generalized Newton methods are based on equivalent reformulations via NCP functions.
To the best of our knowledge, very few works so far have studied complementarity problems in which the involved function F is not differentiable.

However, in many practical problems F is not differentiable; for instance, F(x) = Ax + |x| - b; see Noor et al. [15]. This case is the focus of the present paper.

In this paper, we consider the following absolute-value linear complementarity problem (AVLCP(A, b) for short): find x ∈ R^n such that

    x ≥ 0,  Ax + |x| - b ≥ 0,  (Ax + |x| - b)^T x = 0,  (3)

where A ∈ R^{n×n} and b ∈ R^n.

The AVLCP(A, b) is the special case of CP(F) in which F is the piecewise-linear function F(x) = Ax + |x| - b, and it can be viewed as an extension of the LCP. This complementarity problem was first introduced and studied by Noor et al. [15] in 2012. Noor et al. proposed a generalized AOR method by establishing the equivalence between AVLCP(A, b) and a fixed-point problem via the projection operator, but the convergence rate of that method is only linear. Moreover, the study of AVLCP(A, b) is still in its infancy and, to the authors' knowledge, there has been no work beyond the above-mentioned results of Noor et al. [15]. These observations motivate us to further develop the theory and the numerical tools for solving AVLCP(A, b).

We use a penalty technique to show that the AVLCP(A, b) in R^n is approximately equivalent to a nonlinear penalized equation, which is introduced here for the first time for solving AVLCP(A, b). It is worth mentioning that penalty techniques have been widely used in nonlinear programming, but their use for complementarity problems is still rather limited (see [16-18]). We show that the solution of this penalized equation converges to that of the AVLCP(A, b) at an exponential rate as the penalty parameter tends to infinity. We then use the generalized Jacobian based on the subgradient to analyze a generalized Newton method for solving the nonlinear penalized equation under some mild assumptions. The algorithm is shown to be superlinearly convergent and can be started from an arbitrary point. Preliminary numerical experiments are also reported to show the effectiveness and efficiency of the proposed method.

The rest of this paper is organized as follows. In Section 2, we present some notation and well-known results. In Section 3, we provide a penalized equation approximating the AVLCP(A, b) and study its properties. In Section 4, a generalized Newton method is introduced for solving the penalized equation. In Section 5, we report the numerical results of our method.

2. Preliminaries

For convenience, we briefly explain the notation and terminology used in the sequel. R^n denotes the n-dimensional Euclidean space, and all vectors in R^n are column vectors. Let A = (a_{ij}) ∈ R^{n×n} be an n×n real matrix. The scalar product of two vectors x and y is denoted by x^T y. For p > 1, the p-norm of x = (x_1, x_2, ..., x_n)^T ∈ R^n is defined as ‖x‖_p = (∑_{i=1}^n |x_i|^p)^{1/p}. When p = 2, the p-norm becomes the 2-norm ‖x‖ = (x^T x)^{1/2}. For any x = (x_1, x_2, ..., x_n)^T ∈ R^n and α ∈ R, x^α = (x_1^α, x_2^α, ..., x_n^α)^T. |x| stands for the vector in R^n of absolute values of the components of x. sign(x) denotes the vector with components equal to 1, 0, or -1 depending on whether the corresponding component of x is positive, zero, or negative, and diag(sign(x)) denotes the diagonal matrix corresponding to sign(x). The plus function [x]_+, which replaces the negative components of x by zeros, is the projection operator that projects x onto the nonnegative orthant; namely, [x]_+ = max{x, 0}. Similarly, [x]_- = min{x, 0} replaces the positive components of x by zeros. For a solvable matrix equation By = d, we will use the MATLAB backslash notation B\d to denote a solution y.
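As a small aside (ours, not part of the paper's development), the notation above already suffices to test numerically whether a given point solves (3): x solves the AVLCP(A, b) exactly when the componentwise natural residual min{x, Ax + |x| - b} vanishes. A one-line MATLAB check, with an illustrative name of our choosing, is the following sketch:

```matlab
% Sketch (not from the paper): x solves (3) iff min(x, A*x + abs(x) - b) = 0 componentwise
avlcpResidual = @(A, b, x) norm(min(x, A*x + abs(x) - b), inf);
```

A value of avlcpResidual(A, b, x) close to zero indicates that x is, numerically, a solution of (3).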
The generalized Jacobians ∂|x| of |x| and ∂[x]_-^{1/l} of [x]_-^{1/l}, based on a subgradient [20] of their components, are given by the diagonal matrices D(x) and D_λ(x), respectively, where D(x) = diag(sign(x)) and D_λ(x) is the diagonal matrix whose diagonal entries are equal to (1/l)x_i^{(1/l)-1}, 0, or a real number σ ∈ [0, 1] depending on whether the corresponding component of x is negative, positive, or zero.

Definition 1 (see [21]). Let A ∈ R^{n×n} be a matrix; the matrix A is called
(1) positive definite if there exists a constant γ > 0 such that x^T A x ≥ γ‖x‖^2 for any x ∈ R^n;
(2) bounded if there exists a constant β > 0 such that ‖Ax‖ ≤ β‖x‖ for any x ∈ R^n.

Lemma 2 (Hölder's inequality). Let x, y ∈ R^n. Then

    |x^T y| ≤ ‖x‖_p ‖y‖_q = (∑_{i=1}^n |x_i|^p)^{1/p} (∑_{i=1}^n |y_i|^q)^{1/q},  (4)

where p > 1 and q > 1 are real numbers such that (1/p) + (1/q) = 1.

Lemma 3 (see [15]). Let K be a closed and convex set in R^n. A vector x solves the AVLCP(A, b) if and only if x solves the following absolute-value variational inequality:

    ⟨Ax + |x| - b, y - x⟩ ≥ 0  for all y ∈ K.  (5)

Lemma 4. Let A ∈ R^{m×n} and x ∈ R^n. Then

    ‖Ax‖ ≤ ‖A‖_F ‖x‖,  (6)

where ‖A‖_F denotes the Frobenius norm, ‖A‖_F^2 = tr(A^T A) = ∑_{i=1}^m ∑_{j=1}^n |a_{ij}|^2.

Lemma 5 (see [22]). The singular values of the matrix A ∈ R^{n×n} exceed 1 if and only if the minimum eigenvalue of A^T A exceeds 1.

Lemma 6 (Banach perturbation lemma [21]). Let A, C ∈ R^{n×n} and assume that A is invertible with ‖A^{-1}‖ ≤ α. If ‖A - C‖ ≤ β and αβ < 1, then C is also invertible and

    ‖C^{-1}‖ ≤ α / (1 - αβ).  (7)

Lemma 7 (see [23]). Let x, y ∈ R^n. Then

    ‖ |x| - |y| ‖ ≤ 2 ‖x - y‖.  (8)

Lemma 8. Let x, y ∈ R^n. Then

    ‖ [x]_- - [y]_- ‖ ≤ ‖x - y‖.  (9)
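To make the two generalized Jacobians above concrete, the following MATLAB sketch builds D(x) and D_λ(x); the function name, the use of the signed root for negative components, and the explicit argument σ are our illustrative choices, not code from the paper.

```matlab
% Sketch (ours): the diagonal generalized Jacobians D(x) of |x| and D_lam(x) of [x]_-^(1/l)
function [D, Dlam] = genJacobians(x, l, sigma)
    D   = diag(sign(x));                        % D(x) = diag(sign(x))
    d   = zeros(size(x));
    neg = x < 0;
    d(neg)    = (1/l) * abs(x(neg)).^(1/l - 1); % equals (1/l)*x_i^(1/l - 1) for odd l
    d(x == 0) = sigma;                          % any sigma in [0,1] is admissible at zero
    Dlam = diag(d);
end
```

For l = 1, the value used in the numerical experiments of Section 5, D_λ(x) simply has a unit diagonal entry for each negative component of x.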

3. A Penalized-Equation Approximation Reformulation of AVLCP(A, b)

In this section, we construct a nonlinear penalized equation corresponding to the absolute-value linear complementarity problem (3): find x ∈ R^n such that

    Ax + |x| + λ[x]_-^{1/l} = b,  (10)

where λ > 1 is the penalty parameter, l ≥ 1, and the power of [x]_- is taken componentwise. We will prove that the solution of the penalized equation (10) converges to that of the AVLCP(A, b). To this end, we make the following assumptions on the system matrix A:

(A1) A is positive definite with γ > 1 in Definition 1;
(A2) the entries of A satisfy a_{ii} > 0 and a_{ij} ≤ 0 for all i, j = 1, 2, ..., n with i ≠ j.

Under assumption (A1), the solution of AVLCP(A, b) is unique [15]. Our main results in this section are as follows. First, we start our discussion with the following lemma.

Lemma 9. Let x_λ be the solution to the nonlinear penalized equation (10). Then there exists a positive constant C_0, independent of n, x_λ, and λ, such that

    ‖[x_λ]_-‖ ≤ C_0 / λ^{l/2},  (11)

where λ and l are the parameters used in (10).

Proof. Left-multiplying both sides of (10) by [x_λ]_-^T gives

    [x_λ]_-^T A x_λ + [x_λ]_-^T |x_λ| + λ [x_λ]_-^T [x_λ]_-^{1/l} = [x_λ]_-^T b.  (12)

Without loss of generality, we assume that [x_λ]_- = (u_1^T, 0^T)^T, where u_1 ∈ R^m, 0 ≤ m ≤ n, u_1 < 0. Other cases can be transformed into this form by reordering the system. When m = 0, then [x_λ]_- = 0 and (11) is trivially satisfied, so we only consider the case m ≥ 1. In this case x_λ can be decomposed as x_λ = (u_1^T, u_2^T)^T, where u_2 ∈ R^{n-m}, u_2 ≥ 0. We now decompose A into

    A = ( A_11  A_12
          A_21  A_22 ),  (13)

where A_11 ∈ R^{m×m}, A_12 ∈ R^{m×(n-m)}, A_21 ∈ R^{(n-m)×m}, and A_22 ∈ R^{(n-m)×(n-m)}; then (12) becomes

    u_1^T A_11 u_1 + u_1^T A_12 u_2 + [x_λ]_-^T |x_λ| + λ ‖[x_λ]_-‖_p^p = [x_λ]_-^T b,  (14)

where p = 1 + (1/l). From assumption (A1) and u_1 < 0, we have u_1^T A_11 u_1 > 0, and

    u_1^T A_11 u_1 = [x_λ]_-^T A [x_λ]_- ≥ γ ‖[x_λ]_-‖^2 ≥ ‖[x_λ]_-‖^2  (15)

holds, where the last inequality follows from γ > 1. From assumption (A2) and u_2 ≥ 0, we also have u_1^T A_12 u_2 ≥ 0. Note that [x_λ]_-^T |x_λ| = -‖[x_λ]_-‖^2; one then has that

    u_1^T A_11 u_1 + [x_λ]_-^T |x_λ| ≥ 0.  (16)

It follows from (14), (16), and u_1^T A_12 u_2 ≥ 0 that we have the inequality

    λ ‖[x_λ]_-‖_p^p ≤ [x_λ]_-^T b.  (17)

By using Lemma 2 in (17), we get

    λ ‖[x_λ]_-‖_p^p ≤ [x_λ]_-^T b ≤ ‖[x_λ]_-‖_p ‖b‖_q,  (18)

where q = l + 1 satisfies (1/p) + (1/q) = 1. Since p - 1 = 1/l, we thus have the inequality

    ‖[x_λ]_-‖_p ≤ (1/λ^l) ‖b‖_q^l.  (19)

Since all norms on R^n are equivalent for a fixed positive integer n, there exists a positive constant a_0 such that

    ‖[x_λ]_-‖ ≤ a_0 ‖[x_λ]_-‖_p,  (20)

and thus the left-hand side of (14) can be bounded from below as

    u_1^T A_11 u_1 + u_1^T A_12 u_2 + [x_λ]_-^T |x_λ| + λ ‖[x_λ]_-‖_p^p
      ≥ u_1^T A_11 u_1 + u_1^T A_12 u_2 + [x_λ]_-^T |x_λ| + (λ/a_0^p) ‖[x_λ]_-‖^p.  (21)

Combining this with (14), we obtain

    u_1^T A_11 u_1 + u_1^T A_12 u_2 + [x_λ]_-^T |x_λ| + (λ/a_0^p) ‖[x_λ]_-‖^p ≤ [x_λ]_-^T b;  (22)

together with (16), dropping the first three terms in (22) and using Lemma 2, we have

    (λ/a_0^p) ‖[x_λ]_-‖^p ≤ ‖[x_λ]_-‖_p ‖b‖_q.  (23)

Since p = 1 + (1/l) ∈ (1, 2], we thus have from the above inequality and (19)

    ‖[x_λ]_-‖ ≤ (1/λ^{(l+1)/p}) (a_0^p ‖b‖_q^{l+1})^{1/p} ≤ (1/λ^{l/2}) (a_0^p ‖b‖_q^{l+1})^{1/p} = C_0 / λ^{l/2},  (24)

where C_0 = (a_0^p ‖b‖_q^{l+1})^{1/p}. Thus, the proof of this lemma is completed.

Using Lemma 9, we can now establish the relationship between the solution of the penalized equation (10) and the solution of the AVLCP(A, b).

Theorem 10. Let x* and x_λ be the solutions to the AVLCP(A, b) and to the nonlinear penalized equation (10), respectively. Then there exists a positive constant C, independent of n, x*, x_λ, and λ, such that

    ‖x* - x_λ‖ ≤ C / λ^{l/2},  (25)

where λ and l are the parameters used in (10).

Proof. Let [x_λ]_- = min{x_λ, 0}; then

    x_λ = [x_λ]_+ + [x_λ]_-,  (26)

and the vector x* - x_λ can be decomposed as

    x* - x_λ = x* - [x_λ]_+ - [x_λ]_- = r_λ - [x_λ]_-,  (27)

where r_λ = x* - [x_λ]_+. Let ω = x* - r_λ; one has that

    ω = x* - r_λ = [x_λ]_+ ≥ 0,  (28)

and therefore ω ∈ K = {y ∈ R^n : y ≥ 0}. Replacing y in (5) by ω gives

    r_λ^T (Ax* + |x*|) ≤ r_λ^T b;  (29)

left-multiplying (10) by r_λ^T, we have

    r_λ^T A x_λ + r_λ^T |x_λ| + λ r_λ^T [x_λ]_-^{1/l} = r_λ^T b;  (30)

combining (29) and (30), we get

    r_λ^T A (x_λ - x*) + r_λ^T (|x_λ| - |x*|) + λ r_λ^T [x_λ]_-^{1/l} ≥ 0;  (31)

further, we get from (31) that

    r_λ^T A (x_λ - x*) + r_λ^T (|x_λ| - |x*|) ≥ 0,  (32)

where

    λ r_λ^T [x_λ]_-^{1/l} = λ (x* - [x_λ]_+)^T [x_λ]_-^{1/l} = λ (x*)^T [x_λ]_-^{1/l} ≤ 0.  (33)

One further has that

    r_λ^T A (x_λ - x*) + r_λ^T (|x_λ| - |x*|)
      = r_λ^T A (x_λ - x*) + r_λ^T (x_λ - x*) + r_λ^T (|x_λ| - x_λ) - r_λ^T (|x*| - x*)
      = r_λ^T A (x_λ - x*) + r_λ^T (x_λ - x*) - 2 r_λ^T [x_λ]_-
      = r_λ^T A ([x_λ]_- - r_λ) + r_λ^T ([x_λ]_- - r_λ) - 2 r_λ^T [x_λ]_-
      = r_λ^T A [x_λ]_- - r_λ^T A r_λ - r_λ^T r_λ - r_λ^T [x_λ]_-;  (34)

thus

    r_λ^T A [x_λ]_- - r_λ^T [x_λ]_- ≥ r_λ^T A r_λ + r_λ^T r_λ ≥ (γ + 1) ‖r_λ‖^2;  (35)

the last inequality is true by the positive definiteness of A. Hence

    r_λ^T A [x_λ]_- - r_λ^T [x_λ]_- ≥ 0.  (36)

On the other hand, using the Cauchy-Schwarz inequality, Lemma 4, and (11) in (35), we get

    (γ + 1) ‖r_λ‖^2 ≤ r_λ^T A [x_λ]_- - r_λ^T [x_λ]_-
      ≤ ‖r_λ‖ ‖A [x_λ]_-‖ + ‖r_λ‖ ‖[x_λ]_-‖
      ≤ ‖r_λ‖ ‖A‖_F ‖[x_λ]_-‖ + ‖r_λ‖ ‖[x_λ]_-‖
      = ‖r_λ‖ ‖[x_λ]_-‖ (‖A‖_F + 1)
      ≤ ‖r_λ‖ (C_0 / λ^{l/2}) (‖A‖_F + 1);  (37)

this implies that

    ‖r_λ‖ ≤ C_0 (‖A‖_F + 1) / ((γ + 1) λ^{l/2}) = C / λ^{l/2},  (38)

where C = C_0 (‖A‖_F + 1) / (γ + 1). From (11), (27), (38), and the triangle inequality, we obtain

    ‖x* - x_λ‖ ≤ ‖r_λ‖ + ‖[x_λ]_-‖ ≤ (C + C_0) / λ^{l/2}.  (39)

This completes the proof of the theorem.

4. A Penalized-Equation-Based Generalized Newton Method and Its Convergence

In this section, we present a generalized Newton method for solving the nonlinear penalized equation (10). We begin by defining the vector function specified by the nonlinear penalized equation (10) as follows:

    g(x) = Ax + |x| + λ [x]_-^{1/l} - b,  λ > 1.  (40)

Let x = (u_1, u_2, ..., u_n)^T; a generalized Jacobian ∂g(x) of g(x) is given by

    ∂g(x) = A + D(x) + λ D_λ(x),  (41)

where D(x) = diag(sign(x)) and D_λ(x) is the diagonal matrix whose diagonal entries are equal to (1/l)u_i^{(1/l)-1}, 0, or a real number σ ∈ [0, 1] depending on whether the corresponding component of x is negative, positive, or zero. The generalized Newton method for finding a solution of the equation g(x) = 0 consists of the following iteration:

    g(x^i) + ∂g(x^i)(x^{i+1} - x^i) = 0.  (42)
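Before turning (42) into the explicit linear-system form used below, here is a MATLAB sketch of one step of (42) for the residual (40); it reuses the illustrative helper genJacobians from Section 2, and the function name and the signed-root evaluation of [x]_-^{1/l} are our own choices rather than the authors' code.

```matlab
% Sketch (ours): residual (40) and one generalized Newton step (42)
function xnew = newtonStep(A, b, x, lam, l, sigma)
    xm   = min(x, 0);                       % [x]_-
    pen  = -abs(xm).^(1/l);                 % [x]_-^(1/l), signed l-th root (l odd)
    g    = A*x + abs(x) + lam*pen - b;      % g(x) in (40)
    [D, Dlam] = genJacobians(x, l, sigma);  % dg(x) = A + D(x) + lam*D_lam(x) as in (41)
    xnew = x - (A + D + lam*Dlam) \ g;      % solve (42) for x^{i+1}
end
```

Rearranging the same linear system gives the closed form (44) derived next.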

Replacing g(x^i) by its definition (40) and ∂g(x^i) by (41) in (42) gives

    [A + D(x^i) + λ D_λ(x^i)] x^{i+1} = b + λ((1/l) - 1) [x^i]_-^{1/l}.  (43)

Thus, solving for x^{i+1} gives

    x^{i+1} = [A + D(x^i) + λ D_λ(x^i)] \ (b + λ((1/l) - 1) [x^i]_-^{1/l}),  (44)

which is our final generalized Newton iteration for solving the nonlinear penalized equation (10). With this iteration, we can now state the penalized-equation-based generalized Newton method for solving AVLCP(A, b).

Algorithm 11 (penalized-equation-based generalized Newton algorithm).

Step 1. Given constants ε > 0, λ_0 > 1, μ > 1, l ≥ 1, and σ ∈ [0, 1] and a starting point x_0^0 ∈ R^n, set k := 0.

Step 2. Calculate x^{i+1} from the generalized Newton equation

    [A + D(x^i) + λ_k D_{λ_k}(x^i)] x^{i+1} = b + λ_k((1/l) - 1) [x^i]_-^{1/l},  (45)

starting from x_k^0 associated with λ_k.

Step 3. If D(x^{i+1}) = D(x^i), D_{λ_k}(x^{i+1}) = D_{λ_k}(x^i), and ‖x^{i+1} - x^i‖ ≤ ε, stop; otherwise, set i := i + 1 and go to Step 2. Denote the accumulation point of {x^i} by x_{λ_k}.

Step 4. If ‖[x_{λ_k}]_-‖ ≤ ε, then stop; otherwise, let λ_{k+1} = μ λ_k, choose the new starting point x_{k+1}^0 = x_{λ_k}, set k := k + 1, and go to Step 2.
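The following MATLAB sketch puts Steps 1-4 together; it is our illustrative reading of Algorithm 11, not the authors' code. It calls the hypothetical helpers genJacobians and newtonStep from the earlier sketches, and the iteration caps (maxit) are a practical safeguard added by us.

```matlab
% Sketch (ours) of Algorithm 11: outer penalty loop (Step 4) around the Newton iteration (45)
function x = penalizedNewtonAVLCP(A, b, x0, epsTol, lam0, mu, l, sigma, maxit)
    lam = lam0;  x = x0;
    for k = 0:maxit
        for i = 1:maxit                                 % Steps 2-3 for the current lambda_k
            xnew = newtonStep(A, b, x, lam, l, sigma);
            % Step 3 test: sign pattern unchanged (so D and, for l = 1, D_lam coincide) and small step
            done = isequal(sign(xnew), sign(x)) && norm(xnew - x) <= epsTol;
            x = xnew;
            if done, break; end                         % x approximates the accumulation point x_{lambda_k}
        end
        if norm(min(x, 0)) <= epsTol                    % Step 4: ||[x_{lambda_k}]_-|| <= eps
            return
        end
        lam = mu * lam;                                 % lambda_{k+1} = mu * lambda_k, restart from x_{lambda_k}
    end
end
```

Section 5 uses ε = 10^{-6}, λ_0 = 10, μ = 2, and l = 1 for these parameters.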
4.1. Existence of Accumulation Point at Each Generalized Newton Iteration. We will show that the sequence {x^i} generated by the generalized Newton iteration (45) converges to an accumulation point x_{λ_k} associated with λ_k. We first give sufficient conditions under which the generalized Newton iteration (45) is well defined.

Lemma 12. If the singular values of A ∈ R^{n×n} exceed 1, then (A + D(x^i) + λ D_λ(x^i))^{-1} exists for any diagonal matrices D(x^i) and D_λ(x^i), where λ > 1, D(x^i) is the diagonal matrix with diagonal elements equal to ±1 or 0, and D_λ(x^i) is the diagonal matrix with diagonal elements equal to (1/l)u_i^{(1/l)-1}, 0, or a real number σ ∈ [0, 1].

Proof. If A + D(x^i) + λ D_λ(x^i) were singular, then there would exist x ≠ 0 such that

    (A + D(x^i) + λ D_λ(x^i)) x = 0.  (46)

Thus, we would have the contradiction

    x^T x < x^T A^T A x = (Ax)^T (Ax) = x^T (D(x^i) + λ D_λ(x^i))^T (D(x^i) + λ D_λ(x^i)) x ≤ x^T x  (47)

for l = 1 and λ ∈ (1, 2], where the first inequality follows from Lemma 5. Hence A + D(x^i) + λ D_λ(x^i) is nonsingular.

We now establish the boundedness of the generalized Newton iteration (45) and thus the existence of an accumulation point.

Theorem 13. Let the singular values of A ∈ R^{n×n} exceed 1. Then the iteration x^{i+1} = [A + D(x^i) + λ D_λ(x^i)] \ (b + λ((1/l) - 1)[x^i]_-^{1/l}) of Algorithm 11 is well defined and bounded. Consequently, there exists an accumulation point x_λ such that (A + D(x_λ) + λ D_λ(x_λ)) x_λ = b + λ((1/l) - 1)[x_λ]_-^{1/l}.

Proof. By Lemma 12, (A + D(x^i) + λ D_λ(x^i))^{-1} exists; hence the generalized Newton iteration x^{i+1} = [A + D(x^i) + λ D_λ(x^i)] \ (b + λ((1/l) - 1)[x^i]_-^{1/l}) is well defined. We now prove the boundedness of the iterative sequence {x^i}. Suppose that {x^i} is unbounded; then there exists a subsequence {x^{i_j+1}} with ‖x^{i_j+1}‖ → ∞ such that D(x^{i_j}) = D̃ and D_λ(x^{i_j}) = D̃_λ, where D̃ and D̃_λ are fixed diagonal matrices chosen from the finite number of possible configurations of D(x^{i_j}) in the sequence {D(x^{i_j})} and of D_λ(x^{i_j}) in the sequence {D_λ(x^{i_j})}, respectively, and such that the bounded subsequence {x^{i_j+1}/‖x^{i_j+1}‖} converges to x̄. Hence

    (A + D̃ + λ D̃_λ) (x^{i_j+1} / ‖x^{i_j+1}‖) = (b + λ((1/l) - 1)[x^{i_j}]_-^{1/l}) / ‖x^{i_j+1}‖.  (48)

By letting j → ∞, we obtain

    (A + D̃ + λ D̃_λ) x̄ = 0,  ‖x̄‖ = 1,  (49)

since ‖x^{i_j+1}‖ → ∞. This contradicts the nonsingularity of A + D̃ + λ D̃_λ, which follows from Lemma 12. Therefore, the iterative sequence {x^i} is bounded, and there exists an accumulation point x_λ such that (A + D(x_λ) + λ D_λ(x_λ)) x_λ = b + λ((1/l) - 1)[x_λ]_-^{1/l}.

We then establish the finite termination of the generalized Newton iteration (45).

Theorem 14. Let the singular values of A ∈ R^{n×n} exceed 1. If D(x^{i+1}) = D(x^i), D_λ(x^{i+1}) = D_λ(x^i), and ‖x^{i+1} - x^i‖ ≤ ε for some i in the well-defined iteration (45) of Algorithm 11, then x^{i+1} solves the nonlinear penalized equation (10).

Proof. The generalized Newton iteration

    x^{i+1} = [A + D(x^i) + λ D_λ(x^i)] \ (b + λ((1/l) - 1)[x^i]_-^{1/l})  (50)

is well defined by Lemma 12, and if D(x^{i+1}) = D(x^i), D_λ(x^{i+1}) = D_λ(x^i), and ‖x^{i+1} - x^i‖ ≤ ε, then

    0 = [A + D(x^i) + λ D_λ(x^i)] x^{i+1} - b - λ((1/l) - 1)[x^i]_-^{1/l}
      = A x^{i+1} + D(x^{i+1}) x^{i+1} + λ D_λ(x^{i+1}) x^{i+1} - b - λ((1/l) - 1)[x^i]_-^{1/l}
      = A x^{i+1} + |x^{i+1}| + λ [x^i]_-^{1/l} - b + λ (D_λ(x^{i+1}) x^{i+1} - D_λ(x^i) x^i)
      = A x^{i+1} + |x^{i+1}| + λ [x^i]_-^{1/l} - b + λ D_λ(x^{i+1}) (x^{i+1} - x^i)
      = A x^{i+1} + |x^{i+1}| + λ [x^{i+1}]_-^{1/l} - b + O(ε),  (51)

giving the result.

Furthermore, we have the following results.

Theorem 15. Suppose that the singular values of A ∈ R^{n×n} exceed 1 and x_λ is the unique solution of the nonlinear penalized equation (10). Then, for any iterate x^i whose components match the signs of x_λ in the sense that

    (x^i)_j > 0 implies (x_λ)_j > 0,  (x^i)_j < 0 implies (x_λ)_j < 0,  j ∈ {1, 2, ..., n},  (52)

the generalized Newton iteration (45) reaches x_λ in one iteration.

Proof. The theorem can be proved in a similar way to [24, Lemma 2.1]. We omit it here.

Remark 16. Note that the sign match property (52) holds if x^i is sufficiently near x_λ. Hence, by Theorem 15, global convergence is then obvious for the generalized Newton iteration (45).

Now, we discuss the global linear convergence of {x^i} generated by the generalized Newton iteration (45).

Theorem 17. If ‖(A + λ D_λ(x^i))^{-1}‖ < 1/(4λ(1 + ‖D_λ(x^i)‖)) holds for all sufficiently large λ and D(x^i) ≠ 0, then the generalized Newton iteration (45) converges linearly from any starting point x^0 to a solution x_λ of the nonlinear penalized equation (10).

Proof. Since ‖(A + λ D_λ(x^i))^{-1}‖ ‖D(x^i)‖ < 1 according to the assumption and the definition of D(x^i), the inverse (A + λ D_λ(x^i) + D(x^i))^{-1} exists for any x^i by Lemma 6. We also have by the same lemma that

    ‖[A + D(x^i) + λ D_λ(x^i)]^{-1}‖ ≤ ‖(A + λ D_λ(x^i))^{-1}‖ / (1 - ‖(A + λ D_λ(x^i))^{-1}‖ ‖D(x^i)‖)
      < (1/(4λ(1 + ‖D_λ(x^i)‖))) / (1 - 1/(4λ(1 + ‖D_λ(x^i)‖)))
      ≤ 1 / (3λ(1 + ‖D_λ(x^i)‖)).  (53)

Let x_λ be a solution of the nonlinear penalized equation (10). To simplify the notation, let D = D(x_λ), D_i = D(x^i), D_λ = D_λ(x_λ), and D_λ^i = D_λ(x^i). Noting that |x_λ| = D x_λ, |x^i| = D_i x^i, (1/l)[x_λ]_-^{1/l} = D_λ x_λ, and (1/l)[x^i]_-^{1/l} = D_λ^i x^i, we have

    A(x^{i+1} - x_λ) = -D_i x^{i+1} - λ D_λ^i x^{i+1} + b + λ((1/l) - 1)[x^i]_-^{1/l}
                        - ( -D x_λ - λ D_λ x_λ + b + λ((1/l) - 1)[x_λ]_-^{1/l} )
      = -D_i x^{i+1} - λ D_λ^i x^{i+1} + D x_λ + λ D_λ x_λ + λ((1/l) - 1)([x^i]_-^{1/l} - [x_λ]_-^{1/l})
      = -(D_i x^{i+1} - D x_λ) - λ(D_λ^i x^{i+1} - D_λ x_λ) + λ((1/l) - 1)([x^i]_-^{1/l} - [x_λ]_-^{1/l})
      = -(D_i x^{i+1} - |x_λ|) - λ(D_λ^i x^{i+1} - (1/l)[x_λ]_-^{1/l}) + λ((1/l) - 1)([x^i]_-^{1/l} - [x_λ]_-^{1/l})
      = -D_i(x^{i+1} - x^i) - (|x^i| - |x_λ|) - λ[D_λ^i(x^{i+1} - x^i) + (1/l)([x^i]_-^{1/l} - [x_λ]_-^{1/l})]
        + λ((1/l) - 1)([x^i]_-^{1/l} - [x_λ]_-^{1/l}),  (54)

where the first equality follows from (A + D + λ D_λ) x_λ = b + λ((1/l) - 1)[x_λ]_-^{1/l} and (A + D_i + λ D_λ^i) x^{i+1} = b + λ((1/l) - 1)[x^i]_-^{1/l}.

Hence, one has that

    (A + D_i + λ D_λ^i)(x^{i+1} - x_λ)
      = -(|x^i| - |x_λ|) + D_i(x^i - x_λ) - λ(1/l)([x^i]_-^{1/l} - [x_λ]_-^{1/l}) + λ D_λ^i(x^i - x_λ)
        + λ((1/l) - 1)([x^i]_-^{1/l} - [x_λ]_-^{1/l}).  (55)

Thus

    x^{i+1} - x_λ = (A + D_i + λ D_λ^i)^{-1} [ -(|x^i| - |x_λ|) + D_i(x^i - x_λ) - λ(1/l)([x^i]_-^{1/l} - [x_λ]_-^{1/l})
                      + λ D_λ^i(x^i - x_λ) + λ((1/l) - 1)([x^i]_-^{1/l} - [x_λ]_-^{1/l}) ].  (56)

Applying Lemmas 7 and 8 and ‖[x^i]_-^{1/l} - [x_λ]_-^{1/l}‖ ≤ ε when i is sufficiently large, we get

    ‖x^{i+1} - x_λ‖ ≤ ‖(A + D_i + λ D_λ^i)^{-1}‖ [(3 + λ ‖D_λ^i‖) ‖x^i - x_λ‖ + λ ‖[x^i]_-^{1/l} - [x_λ]_-^{1/l}‖]
      ≤ 3λ(1 + ‖D_λ^i‖) ‖(A + D_i + λ D_λ^i)^{-1}‖ ‖x^i - x_λ‖ + O(ε),  (57)

where we have also used ‖D_i‖ ≤ 1 and λ > 1. Letting i → ∞, ε → 0 and taking limits on both sides of the last inequality above, we have

    ‖x^{i+1} - x_λ‖ ≤ 3λ(1 + ‖D_λ(x^i)‖) ‖(A + D(x^i) + λ D_λ(x^i))^{-1}‖ ‖x^i - x_λ‖,
    with 3λ(1 + ‖D_λ(x^i)‖) ‖(A + D(x^i) + λ D_λ(x^i))^{-1}‖ < 1,  (58)

where the last inequality in (58) follows from (53), that is, from ‖(A + D(x^i) + λ D_λ(x^i))^{-1}‖ < 1/(3λ(1 + ‖D_λ(x^i)‖)). Consequently, the sequence {x^i} converges linearly to the solution x_λ.

In the above proof, the choice of x_λ is arbitrary; hence we have the following result.

Corollary 18. Assume that ‖(A + λ D_λ(x^i))^{-1}‖ < 1/((1 + i)λ(1 + ‖D_λ(x^i)‖)) and D(x^i) ≠ 0 for all sufficiently large λ. Then the nonlinear penalized equation (10) has a unique solution for any b ∈ R^n.

Finally, we give the global superlinear convergence of {x^i} generated by the generalized Newton iteration (45).

Theorem 19. Assume that ‖(A + λ D_λ(x^i))^{-1}‖ < 1/((1 + i)λ(1 + ‖D_λ(x^i)‖)) and D(x^i) ≠ 0 for all sufficiently large λ. Then the generalized Newton iteration (45) converges superlinearly from any starting point x^0 to a solution x_λ of the nonlinear penalized equation (10).

Proof. According to Lemma 6, one has that

    ‖[A + D(x^i) + λ D_λ(x^i)]^{-1}‖ ≤ ‖(A + λ D_λ(x^i))^{-1}‖ / (1 - ‖(A + λ D_λ(x^i))^{-1}‖ ‖D(x^i)‖)
      ≤ ‖(A + λ D_λ(x^i))^{-1}‖ / (1 - ‖(A + λ D_λ(x^i))^{-1}‖)
      < 1 / ((1 + i)λ(1 + ‖D_λ(x^i)‖)),  (59)

where the last inequality follows from the assumption ‖(A + λ D_λ(x^i))^{-1}‖ < 1/((1 + i)λ(1 + ‖D_λ(x^i)‖)). Combining this with (58) in the proof of Theorem 17, we have

    ‖x^{i+1} - x_λ‖ < 3λ(1 + ‖D_λ(x^i)‖) ‖(A + D(x^i) + λ D_λ(x^i))^{-1}‖ ‖x^i - x_λ‖  (60)
      < (3λ(1 + ‖D_λ(x^i)‖) / ((1 + i)λ(1 + ‖D_λ(x^i)‖))) ‖x^i - x_λ‖ = (3/(1 + i)) ‖x^i - x_λ‖.  (61)

Letting i → ∞ and taking limits in (61), we can see that the sequence {x^i} generated by the generalized Newton iteration (45) converges superlinearly to a solution x_λ.

4.2. Convergence of the Penalized-Equation-Based Generalized Newton Method. In this subsection, we focus on the convergence of Algorithm 11. We first present the global convergence.

Theorem 20. Let the singular values of A ∈ R^{n×n} exceed 1. Then the sequence {x_{λ_k}} generated by Algorithm 11 is bounded. Consequently, there exists an accumulation point x̄ satisfying the nonlinear penalized equation (10).

Proof. Since x_{λ_k} is an accumulation point of {x^i}, it follows from Theorem 13 that the sequence {x^i} is bounded; we can thus obtain the boundedness of {x_{λ_k}}. Hence there exists an accumulation point x̄ of {x_{λ_k}} such that the nonlinear penalized equation (10) holds, giving the results.

We then establish the linear convergence of Algorithm 11.

Theorem 21. Under the assumption that ‖(A + λ_k D_{λ_k}(x_{λ_k}))^{-1}‖ < 1/((1 + λ_k)(1 + ‖D_{λ_k}(x_{λ_k})‖)) and D(x_{λ_k}) ≠ 0, Algorithm 11 converges linearly from any starting point x_0^0 to a solution x̄ of the nonlinear penalized equation (10).

Proof. Taking into account the sign match property (52), the theorem can be proved in a similar way to that of Theorem 15.

Finally, we establish the superlinear convergence of Algorithm 11.

Theorem 22. Under the assumption that ‖(A + λ_k D_{λ_k}(x_{λ_k}))^{-1}‖ < 1/((1 + m^k)λ_k(1 + ‖D_{λ_k}(x_{λ_k})‖)) (m > 1) and D(x_{λ_k}) ≠ 0, Algorithm 11 converges superlinearly from any starting point x_0^0 to a solution x̄ of the nonlinear penalized equation (10).

Proof. Taking into account the sign match property (52), the theorem can be proved in a similar way to that of Theorems 15 and 17.

5. Numerical Results

In this section, we consider several examples to show the efficiency of the proposed method. All experiments were run in MATLAB 7.5 on a machine with an Intel(R) Core(TM)2 2.70 GHz processor and 2.0 GB of RAM. Throughout these computational experiments, the parameters used in the algorithm are set as ε = 10^{-6}, λ_0 = 10, μ = 2, and l = 1. The accumulation point returned by Algorithm 11 is written as x̄.

Example 1. Let the matrix A of AVLCP(A, b) be given by

    A = (  2  -1
          -1   3 ),  (62)

and b = (1, 0)^T. The solution of AVLCP(A, b) is x* = (4/11, 1/11)^T. The computational results are shown in Table 1.

Table 1: The numerical results of Example 1.
Initial point x_0^0   Number of Newton iterations   x̄
(0, 0)^T              2                             (4/11, 1/11)^T
(0, 1)^T              2                             (4/11, 1/11)^T
(-1, 1)^T             1                             (4/11, 1/11)^T
(2, 1)^T              2                             (4/11, 1/11)^T
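Using the sketch of Algorithm 11 from Section 4, Example 1 can be reproduced along the following lines (illustrative script; the function names are the hypothetical ones introduced in our earlier sketches, and the off-diagonal signs of A follow assumption (A2)):

```matlab
% Example 1 data and a run of the Algorithm 11 sketch (parameters as in Section 5)
A = [2 -1; -1 3];   b = [1; 0];
x = penalizedNewtonAVLCP(A, b, [0; 0], 1e-6, 10, 2, 1, 0, 100);
disp(x)                                      % expected to be close to x* = (4/11, 1/11)^T
disp(norm(min(x, A*x + abs(x) - b), inf))    % natural residual of (3), close to zero at a solution
```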

Example 2. Let the matrix A of AVLCP(A, b) be given by

    A = (  4  -1   0   0
          -1   4  -1   0
           0  -1   4  -1
           0   0  -1   4 ),  (63)

and b = (1, 1, 0, 0)^T. The solution of AVLCP(A, b) is x* = (139/551, 144/551, 30/551, 6/551)^T. The computational results are shown in Table 2.

Table 2: The numerical results of Example 2.
Initial point x_0^0       Number of Newton iterations   x̄
(-1, -1, -1, -1)^T        1                             (139/551, 144/551, 30/551, 6/551)^T
(0, 0, 0, 0)^T            2                             (139/551, 144/551, 30/551, 6/551)^T
(1, 1, 1, 1)^T            2                             (139/551, 144/551, 30/551, 6/551)^T
(1, 2, -1, 0)^T           2                             (139/551, 144/551, 30/551, 6/551)^T

Example 3. Let the matrix A of AVLCP(A, b) be given by

    A = (  4   0   0   0  -1  -1
           0   4   0   0  -1  -1
           0   0   4  -1  -1   0
           0   0  -1   4   0   0
          -1  -1  -1   0   4   0
          -1  -1   0   0   0   4 ),  (64)

and b = (4, -4, 4, 4, -4, 4)^T. The solution of AVLCP(A, b) is x* = (1, 0, 1, 1, 0, 1)^T. The computational results are shown in Table 3.

Table 3: The numerical results of Example 3.
Initial point x_0^0            Number of Newton iterations   x̄ = (u_1, u_2, u_3, u_4, u_5, u_6)^T
(0, 0, 0, 0, 0, 0)^T           21                            (1, 5.72204e-7, 1, 1, 3.8147e-7, 1)^T
(-1, -1, -1, -1, -1, -1)^T     21                            (1, 5.72204e-7, 1, 1, 3.8147e-7, 1)^T
(-2, 0, -2, -2, 0, -2)^T       20                            (1, 5.72204e-7, 1, 1, 3.8147e-7, 1)^T
(1, 1, 1, 1, 1, 1)^T           21                            (1, 5.72204e-7, 1, 1, 3.8147e-7, 1)^T

Example 4. Let the matrix A of AVLCP(A, b) be given by

    a_{ij} = 8   for j = i,
    a_{ij} = -1  for j = i + 1, i = 1, 2, ..., n - 1, and for j = i - 1, i = 2, 3, ..., n,
    a_{ij} = 0   otherwise,  (65)

and let b = (6, 5, 5, ..., 5, 6)^T. Choose the initial point x_0^0 = (0, 0, ..., 0)^T. We compared our algorithm with the existing methods in [15, 19]. The computational results are shown in Table 4.

Table 4: Computational results of Example 4 (numbers of iterations).
Order n   Iterative method [19]   GAOR method [15]   Algorithm 11
4         10                      10                 8
8         11                      11                 9
16        11                      11                 9
32        12                      11                 9
64        12                      11                 9
128       12                      11                 9
256       12                      11                 9
512       12                      11                 9
1024      13                      11                 9

From Tables 1-4, we can see that our method exhibits the nice convergence behavior predicted by our theoretical results.

6. Conclusion

In this paper, we propose a new approximation to the absolute-value linear complementarity problem (3) by using the nonlinear penalized equation (10), based on which a generalized Newton method is proposed for solving this penalized equation. Under suitable assumptions, the algorithm is shown to be both globally and superlinearly convergent. The numerical results presented show that the proposed generalized Newton method is efficient. The results and ideas of this paper may be used to solve absolute-value variational inequalities and related optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] R. W. Cottle, J.-S. Pang, and R. E. Stone, The Linear Complementarity Problem, Academic Press, San Diego, Calif, USA, 1992.
[2] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer, New York, NY, USA, 2003.
[3] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, New York, NY, USA, 1984.
[4] J. Nocedal and S. Wright, Numerical Optimization, Springer, New York, NY, USA, 1999.
[5] L. Qi and J. Sun, "A nonsmooth version of Newton's method," Mathematical Programming, vol. 58, no. 1-3, pp. 353-367, 1993.
[6] L. Q. Qi, "Convergence analysis of some algorithms for solving nonsmooth equations," Mathematics of Operations Research, vol. 18, no. 1, pp. 227-244, 1993.
[7] A. Fischer, "A special Newton-type optimization method," Optimization, vol. 24, no. 3-4, pp. 269-284, 1992.
[8] F. Facchinei and C. Kanzow, "A nonsmooth inexact Newton method for the solution of large-scale nonlinear complementarity problems," Mathematical Programming, vol. 76, no. 3, pp. 493-512, 1997.
[9] T. De Luca, F. Facchinei, and C. Kanzow, "A semismooth equation approach to the solution of nonlinear complementarity problems," Mathematical Programming, vol. 75, pp. 407-439, 1996.
[10] N. Yamashita and M. Fukushima, "Modified Newton methods for solving a semismooth reformulation of monotone complementarity problems," Mathematical Programming, vol. 76, pp. 469-491, 1997.
[11] B. Chen, X. Chen, and C. Kanzow, "A penalized Fischer-Burmeister NCP-function," Mathematical Programming, vol. 88, pp. 211-216, 2000.
[12] C. Kanzow and H. Kleinmichel, "A new class of semismooth Newton-type methods for nonlinear complementarity problems," Computational Optimization and Applications, vol. 11, no. 3, pp. 227-251, 1998.
[13] C. Kanzow, "Inexact semismooth Newton methods for large-scale complementarity problems," Optimization Methods & Software, vol. 19, no. 3-4, pp. 309-325, 2004.
[14] K. Ito and K. Kunisch, "On a semi-smooth Newton method and its globalization," Mathematical Programming, vol. 118, no. 2, pp. 347-370, 2009.
[15] M. A. Noor, J. Iqbal, K. I. Noor, and E. Al-Said, "Generalized AOR method for solving absolute complementarity problems," Applied Mathematics, vol. 2012, Article ID 743861, 14 pages, 2012.
[16] S. Wang, X. Q. Yang, and K. L. Teo, "Power penalty method for a linear complementarity problem arising from American option valuation," Journal of Optimization Theory and Applications, vol. 129, no. 2, pp. 227-254, 2006.
[17] S. Wang and X. Q. Yang, "A power penalty method for linear complementarity problems," Operations Research Letters, vol. 36, no. 2, pp. 211-214, 2008.
[18] S. Wang and C. S. Huang, "A power penalty method for solving a nonlinear parabolic complementarity problem," Nonlinear Analysis, vol. 69, no. 4, pp. 1125-1137, 2008.
[19] M. A. Noor, J. Iqbal, K. I. Noor, and E. Al-Said, "On an iterative method for solving absolute value equations," Optimization Letters, vol. 6, no. 5, pp. 1027-1033, 2012.
[20] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, USA, 1970.
[21] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
[22] O. L. Mangasarian, "Absolute value programming," Computational Optimization and Applications, vol. 36, no. 1, pp. 43-53, 2007.
[23] O. L. Mangasarian, "A generalized Newton method for absolute value equations," Optimization Letters, vol. 3, no. 1, pp. 101-108, 2009.
[24] C. Zhang and Q. J. Wei, "Global and finite convergence of a generalized Newton method for absolute value equations," Journal of Optimization Theory and Applications, vol. 143, no. 2, pp. 391-403, 2009.
