A SEMISMOOTH EQUATION APPROACH TO THE SOLUTION OF NONLINEAR COMPLEMENTARITY PROBLEMS

Tecla De Luca 1, Francisco Facchinei 1 and Christian Kanzow 2

1 Università di Roma "La Sapienza"
Dipartimento di Informatica e Sistemistica
Via Buonarroti 12, Roma, Italy
(De Luca): deluca@newton.dis.uniroma1.it
(Facchinei): soler@peano.dis.uniroma1.it

2 University of Hamburg
Institute of Applied Mathematics
Bundesstrasse 55, Hamburg, Germany
kanzow@math.uni-hamburg.de

Abstract: In this paper we present a new algorithm for the solution of nonlinear complementarity problems. The algorithm is based on a semismooth equation reformulation of the complementarity problem. We exploit the recent extension of Newton's method to semismooth systems of equations and the fact that the natural merit function associated to the equation reformulation is continuously differentiable to develop an algorithm whose global and quadratic convergence properties can be established under very mild assumptions. Other interesting features of the new algorithm are an extreme simplicity along with a low computational burden per iteration. We include numerical tests which show the viability of the approach.

Key Words: Nonlinear complementarity problem, semismoothness, smooth merit function, global convergence, quadratic convergence.

1 Introduction

We consider the nonlinear complementarity problem, NCP(F) for short, which is to find a vector in IR^n satisfying the conditions

x ≥ 0, F(x) ≥ 0, x^T F(x) = 0,

where F : IR^n → IR^n is a given function which we shall always assume to be continuously differentiable. The fastest algorithms for the solution of NCP(F) are Newton-type methods, which, however, are in general only locally convergent. In recent years much attention has been devoted to techniques for globalizing these local algorithms. To this end there have been several different proposals, but a common scheme can be seen to underlie most of these methods:
(a) reformulate NCP(F) as a system of (possibly nonsmooth) equations Φ(x) = 0;
(b) define a local Newton-type method for the solution of the system of equations;
(c) perform a linesearch to minimize a suitable merit function, usually ‖Φ(x)‖² or ‖Φ(x)‖, in order to globalize the local method.
There are several possibilities to reformulate NCP(F) as a system of equations. The first proposal is probably due to Mangasarian, see [29], where a class of reformulations, mainly smooth ones, is described; the smooth reformulation approach has been further explored in [9, 15, 25, 26, 27, 30, 32, 49]. In recent years, however, nonsmooth reformulations have attracted much more attention [5, 7, 8, 10, 14, 16, 18, 20, 32, 34, 35, 36, 41, 46, 54, 55], since they enjoy some definite advantages: they allow one to define superlinearly convergent algorithms also for degenerate problems, and the subproblems to be solved at each iteration tend to be more numerically stable. However, there is a price to pay: the globalization (point (c) above) becomes more complex, since the merit function ‖Φ(x)‖² is nonsmooth. Furthermore, the subproblems which have to be solved at each iteration, usually (mixed) linear complementarity problems, are more complex than in the smooth reformulation case.
We note that a common analytical property of the recent nonsmooth reformulations is that, though not F-differentiable, they are B-differentiable, so that the machinery developed in [34, 43, 44, 45] can be usefully employed. In this paper we describe a new algorithm for the solution of NCP(F) which is both globally and superlinearly convergent. We use a nonsmooth equation reformulation which is based on the simple function

φ(a, b) := √(a² + b²) − (a + b),

which was first introduced by Fischer [11] and further employed by several authors, see [7, 8, 12, 16, 25, 27, 39, 52]. The resulting system of equations is not F-differentiable, but nevertheless it is semismooth [31, 40]. Semismoothness is a stronger analytical property than B-differentiability, so that, using the recent powerful theory for the solution of semismooth equations [37, 38, 40], it is possible to develop a fast local algorithm for the solution of NCP(F) which only requires, at each iteration, the solution of one linear system. Furthermore, the natural merit function ‖Φ(x)‖² is, surprisingly, smooth, so that global convergence through a linesearch procedure can

easily be enforced. The assumptions under which global and superlinear convergence can be proved compare favourably with those of existing algorithms, and the overall algorithm seems to combine the advantages of both smooth and nonsmooth reformulations of NCP(F) in a very simple scheme. The use of the function φ to reformulate nonlinear complementarity problems as systems of nonsmooth equations seems a very promising approach. Algorithms for linear complementarity problems based on this function were considered in [12, 27], while the nonlinear case was studied in [7, 8, 16, 25]. The results of this paper, however, improve on previous results because the semismooth nature of the reformulation is here fully exploited for the first time, and the overall analysis is much more refined and detailed than in previous works. While we were completing this paper, we became aware of a recent report of Jiang and Qi [23] which has some similarities with this work in that the same local approach is adopted. However, the global algorithm is quite different, and the analysis is restricted to the case in which F is a uniform P-function.
This paper is organized as follows. In the next section we collect some background material; in Section 3 we state the algorithm and its main properties; Sections 4 and 5 are devoted to some technical but fundamental results, while in Section 6 we prove and further discuss the properties of the algorithm. In Section 7 we report some preliminary numerical results to show the viability of the approach adopted. Some concluding remarks are made in the last section.
Throughout this paper, the index set {1, ..., n} is abbreviated by the upper case letter I. For a continuously differentiable function F : IR^n → IR^n, we denote the Jacobian of F at x ∈ IR^n by F'(x), whereas the transposed Jacobian is denoted by ∇F(x). ‖·‖ denotes the Euclidean norm, and S(x̄, δ) is the closed Euclidean sphere of center x̄ and radius δ, i.e.
S(x̄, δ) = {x ∈ IR^n : ‖x − x̄‖ ≤ δ}. If Ω is a subset of IR^n, dist(x|Ω) := inf_{y ∈ Ω} ‖y − x‖ denotes the (Euclidean) distance of x to Ω. If M is an n × n matrix with elements M_jk, j, k = 1, ..., n, and J and K are index sets such that J, K ⊆ {1, ..., n}, we denote by M_{J,K} the |J| × |K| submatrix of M consisting of the elements M_jk, j ∈ J, k ∈ K, and, assuming that M_{J,J} is nonsingular, by M/M_{J,J} the Schur-complement of M_{J,J} in M, i.e.

M/M_{J,J} = M_{K,K} − M_{K,J} M_{J,J}^{-1} M_{J,K},

where K = I \ J. If w is an n-vector, we denote by w_J the subvector with components w_j, j ∈ J.

2 Background Material

In this section we collect several definitions and results which will be used throughout the paper.

2.1 Basic Definitions

A solution of the nonlinear complementarity problem NCP(F) is a vector x* ∈ IR^n such that

F(x*) ≥ 0, x* ≥ 0, F(x*)^T x* = 0.

Associated to the solution x* we define three index sets:

α := {i | x*_i > 0}, β := {i | x*_i = 0 = F_i(x*)}, γ := {i | F_i(x*) > 0}.

The solution x* is said to be nondegenerate if β = ∅.

Definition 2.1 We say that the solution x* is R-regular if ∇F(x*)_{α,α} is nonsingular and the Schur-complement of ∇F(x*)_{α,α} in

[ ∇F(x*)_{α,α}  ∇F(x*)_{α,β} ]
[ ∇F(x*)_{β,α}  ∇F(x*)_{β,β} ]

is a P-matrix (see below).

Note that R-regularity coincides with the notion of regularity introduced by Robinson in [47] (see also [42], where the same condition is called strong regularity) and is strictly related to similar conditions used, e.g., in [10, 32, 36]. We shall also employ several definitions concerning matrices and functions and need some related properties.

Definition 2.2 A matrix M ∈ IR^{n×n} is a
- P_0-matrix if every one of its principal minors is non-negative;
- P-matrix if every one of its principal minors is positive;
- S_0-matrix if {x ∈ IR^n | x ≥ 0, x ≠ 0, Mx ≥ 0} ≠ ∅.

It is obvious that every P-matrix is also a P_0-matrix, and it is known [4] that every P_0-matrix is an S_0-matrix. We shall also need the following characterization of P_0-matrices [4].

Proposition 2.1 A matrix M ∈ IR^{n×n} is a P_0-matrix if and only if for every nonzero vector x there exists an index i such that x_i ≠ 0 and x_i (Mx)_i ≥ 0.

Definition 2.3 A function F : IR^n → IR^n is a
- P_0-function if, for every x and y in IR^n with x ≠ y, there is an index i such that

x_i ≠ y_i, (x_i − y_i)[F_i(x) − F_i(y)] ≥ 0;

- P-function if, for every x and y in IR^n with x ≠ y, there is an index i such that

(x_i − y_i)[F_i(x) − F_i(y)] > 0;

- uniform P-function if there exists a positive constant μ such that, for every x and y in IR^n, there is an index i such that

(x_i − y_i)[F_i(x) − F_i(y)] ≥ μ ‖x − y‖².

It is obvious that every uniform P-function is a P-function and that every P-function is a P_0-function. Furthermore, it is known that the Jacobian of every continuously differentiable P_0-function is a P_0-matrix and that if the Jacobian of a continuously differentiable function is a P-matrix for every x, then the function is a P-function. If F is affine, that is if F(x) = Mx + q, then F is a P_0-function if and only if M is a P_0-matrix, while F is a (uniform) P-function if and only if M is a P-matrix (note that in the affine case the concepts of uniform P-function and P-function coincide). We finally note that every monotone function is a P_0-function, every strictly monotone function is a P-function, and every strongly monotone function is a uniform P-function.

2.2 Differentiability of Functions and Generalized Newton's Method

Let G : IR^n → IR^n be locally Lipschitzian; by Rademacher's theorem G is differentiable almost everywhere. If we indicate by D_G the set where G is differentiable, we can define the B-subdifferential of G at x [38] as

∂_B G(x) := { lim_{x^k → x, x^k ∈ D_G} G'(x^k) }

and the Clarke subdifferential of G at x as

∂G(x) := co ∂_B G(x),

where co denotes the convex hull of a set. Semismooth functions were introduced in [31] and immediately shown to be relevant to optimization algorithms. Recently the concept of semismoothness has been extended to vector valued functions [40].

Definition 2.4 Let G : IR^n → IR^n be locally Lipschitzian at x ∈ IR^n. We say that G is semismooth at x if

lim_{H ∈ ∂G(x + t v'), v' → v, t ↓ 0} H v'    (1)

exists for any v ∈ IR^n.
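As a concrete illustration of the B-subdifferential just defined (an example of ours, not from the paper), it can be computed directly for the absolute-value function at the origin:

```python
# B-subdifferential of G(x) = |x| at 0: collect limits of G'(x^k) taken over
# points x^k -> 0 where G is differentiable. Since G'(x) = sign(x) for x != 0,
# the limits accumulate at exactly two values, -1 and +1.
def dG(x):
    return 1.0 if x > 0 else -1.0   # derivative of |x| for x != 0

limits = sorted({dG(t) for t in (0.1, -0.1, 1e-8, -1e-8)})
print(limits)   # [-1.0, 1.0], the two elements of the B-subdifferential at 0
# The Clarke subdifferential co dB G(0) is the convex hull: the interval [-1, 1].
```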

Semismooth functions lie between Lipschitz functions and C¹ functions. Note that this class is strictly contained in the class of B-differentiable functions. Furthermore, it is known that if G is semismooth at x then it is also directionally differentiable there, and its directional derivative in the direction v is given by the limit (1). A slightly stronger notion than semismoothness is strong semismoothness, defined below (see [38] and [40] where, however, different names are used).

Definition 2.5 Suppose that G is semismooth at x. We say that G is strongly semismooth at x if for any H ∈ ∂G(x + d) and for any d → 0,

H d − G'(x; d) = O(‖d‖²).

In the study of algorithms for the local solution of semismooth systems of equations, the following regularity condition plays a role similar to that of the nonsingularity of the Jacobian in the study of algorithms for smooth systems of equations.

Definition 2.6 We say that a semismooth function G : IR^n → IR^n is BD-regular at x if all the elements of ∂_B G(x) are nonsingular.

A generalized Newton method for the solution of a semismooth system of n equations G(x) = 0 can be defined as

x^{k+1} = x^k − (H_k)^{-1} G(x^k),  H_k ∈ ∂_B G(x^k),    (2)

where H_k can be any element of ∂_B G(x^k). The following result holds [38].

Theorem 2.2 Suppose that x* is a solution of the system G(x) = 0 and that G is semismooth and BD-regular at x*. Then the iteration method (2) is well defined and convergent to x* superlinearly in a neighborhood of x*. If, in addition, G is directionally differentiable in a neighborhood of x* and strongly semismooth at x*, then the convergence rate of (2) is quadratic.

We finally give the definition of an SC¹ function.

Definition 2.7 A function f : IR^n → IR is an SC¹ function if f is continuously differentiable and its gradient is semismooth. SC¹ functions can be viewed as functions which lie between C¹ and C² functions.
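The generalized Newton iteration (2) can be sketched on a one-dimensional example; the piecewise-linear (hence strongly semismooth) function below is an illustrative choice of ours, not taken from the paper:

```python
# Generalized Newton iteration (2) for a scalar semismooth equation G(x) = 0.
# G(x) = max(x, 0) + 0.5*x - 1 is piecewise linear; its B-subdifferential at x
# is {1.5} for x > 0, {0.5} for x < 0, and {0.5, 1.5} at x = 0.
def G(x):
    return max(x, 0.0) + 0.5 * x - 1.0

def H(x):
    # one element of the B-subdifferential of G at x
    return 1.5 if x > 0 else 0.5

x = -5.0
for _ in range(50):
    if abs(G(x)) < 1e-12:
        break
    x = x - G(x) / H(x)   # x^{k+1} = x^k - (H_k)^{-1} G(x^k)
print(x)   # converges to the unique root 2/3
```

Starting from x = −5 the iteration jumps to the smooth piece and then finds the root 2/3 in one further step, illustrating the fast local behaviour asserted by Theorem 2.2.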
2.3 Reformulation of NCP(F)

Our reformulation of NCP(F) as a system of equations is based on the following convex function of two variables:

φ(a, b) := √(a² + b²) − (a + b).

The most interesting property of this function is that, as is easily verified,

φ(a, b) = 0  ⟺  a ≥ 0, b ≥ 0, ab = 0;    (3)
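Property (3) is easy to check numerically; the snippet below is only an illustration:

```python
import math

# Fischer's function: phi(a, b) = sqrt(a^2 + b^2) - (a + b).
def phi(a, b):
    return math.hypot(a, b) - (a + b)

# phi vanishes exactly on the complementarity set {a >= 0, b >= 0, ab = 0}:
print(phi(0.0, 3.0), phi(2.0, 0.0))   # both 0.0
print(phi(-1.0, 2.0), phi(1.0, 1.0))  # nonzero: complementarity fails
```

math.hypot is used for a numerically stable evaluation of √(a² + b²).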

note also that φ is continuously differentiable everywhere except at the origin. The function φ was introduced by Fischer [11] in 1992; since then it has attracted the attention of many researchers and has proved to be a valuable tool in nonlinear complementarity theory [7, 8, 12, 16, 22, 23, 25, 27, 39, 52]. An up-to-date review on the uses of the function φ can be found in [13]. Exploiting (3), it is readily seen that the nonlinear complementarity problem is equivalent to the following system of nonsmooth equations:

Φ(x) := ( φ(x_1, F_1(x)), ..., φ(x_i, F_i(x)), ..., φ(x_n, F_n(x)) )^T = 0.

It is then also obvious that the nonnegative function

Ψ(x) := (1/2) ‖Φ(x)‖² = (1/2) Σ_{i=1}^n φ(x_i, F_i(x))²

is zero at a point x if and only if x is a solution of NCP(F), so that solving NCP(F) is equivalent to finding the unconstrained global solutions of the problem min Ψ(x). The following results have been proven in [7] or easily follow from [39, Lemma 3.3]. Part (c) has also been shown in the recent paper [22].

Theorem 2.3 It holds that
(a) Φ is semismooth everywhere; furthermore, if every F_i is twice continuously differentiable with Lipschitz continuous Hessian, then Φ is strongly semismooth everywhere;
(b) Ψ is continuously differentiable; furthermore, if every F_i is an SC¹ function, then Ψ is also an SC¹ function;
(c) if F is a uniform P-function, then the level sets of Ψ are bounded.

3 The Algorithm

In this section we present the algorithm for the solution of NCP(F). Roughly speaking, this algorithm can be seen as an attempt to solve the semismooth system of equations Φ(x) = 0 by using the generalized Newton's method described in the previous section (see (2)). To ensure global convergence, a linesearch is performed to minimize the smooth merit function Ψ; if the search direction generated according to (2) is not a "good" descent direction, we resort to the negative gradient of Ψ.

Global Algorithm

Data: x⁰ ∈ IR^n, ρ > 0, p > 2, σ ∈ (0, 1/2), ε ≥ 0.

Step 0: Set k = 0.

Step 1: (stopping criterion) If ‖∇Ψ(x^k)‖ ≤ ε, stop.

Step 2: (search direction calculation) Select an element H_k ∈ ∂_B Φ(x^k). Find the solution d^k of the system

H_k d = −Φ(x^k).    (4)

If system (4) is not solvable, or if the condition

∇Ψ(x^k)^T d^k ≤ −ρ ‖d^k‖^p    (5)

is not satisfied, set d^k = −∇Ψ(x^k).

Step 3: (linesearch) Find the smallest i_k = 0, 1, 2, ... such that

Ψ(x^k + 2^{−i_k} d^k) ≤ Ψ(x^k) + σ 2^{−i_k} ∇Ψ(x^k)^T d^k.    (6)

Set x^{k+1} = x^k + 2^{−i_k} d^k, k ← k + 1, and go to Step 1.

We note that the above algorithm is virtually indistinguishable from a global algorithm for the solution of an F-differentiable system of equations. Formally, the only point which requires some care is the calculation of an element H_k belonging to ∂_B Φ(x^k) in Step 2. This, however, turns out to be an easy and cheap task, as will be discussed in Section 7. Another point worth attention is that, if it exists, the direction obtained by solving (4) is always a descent direction for the merit function Ψ, unless Φ(x^k) = 0. This is a standard property of the Newton direction for the solution of a smooth system of equations, but it is no longer true, in general, when the system of equations is nonsmooth. To see that this assertion is true, it is sufficient to note that, as stated in Theorem 2.3 (b), Ψ is differentiable at x^k; on the other hand, applying standard nonsmooth calculus rules (see [3]), we have

∂Ψ(x^k) = {∇Ψ(x^k)} = { V^T Φ(x^k) : V ∈ ∂Φ(x^k) },

so that the expression V^T Φ(x^k) is independent of the element V ∈ ∂Φ(x^k) chosen.
Hence we can take V = H_k and write, taking into account (4),

∇Ψ(x^k)^T d^k = Φ(x^k)^T H_k d^k = −‖Φ(x^k)‖².

This fact clearly shows, once again, the similarity of our algorithm to Newton's method for the solution of a smooth system of equations; furthermore, in our opinion, it also indicates that the direction given by (4) is a "good" search direction even far from a solution of the system Φ(x) = 0, which is not true for a general semismooth system.
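The whole scheme can be sketched in a few lines for a small affine NCP. This is only an illustrative implementation under stated assumptions, not the authors' code: the B-subdifferential element is formed as in Lemma 4.1 of Section 4 with the choice (ξ_i, ζ_i) = (0, 0) at degenerate pairs, and the parameter values rho, p, sigma are illustrative.

```python
import numpy as np

# Sketch of the Global Algorithm applied to an affine NCP with F(x) = M x + q.
def fischer(a, b):
    return np.hypot(a, b) - (a + b)

def solve_ncp(F, JF, x, rho=1e-10, p=2.1, sigma=1e-4, eps=1e-10, itmax=200):
    for _ in range(itmax):
        Fx = F(x)
        Phi = fischer(x, Fx)
        r = np.hypot(x, Fx)
        safe = np.where(r > 0, r, 1.0)
        a = np.where(r > 0, x / safe - 1.0, -1.0)   # diagonal of D_a (Lemma 4.1)
        b = np.where(r > 0, Fx / safe - 1.0, -1.0)  # diagonal of D_b (Lemma 4.1)
        H = np.diag(a) + np.diag(b) @ JF(x)         # an element H_k of dB Phi(x^k)
        grad = H.T @ Phi                            # grad Psi(x^k) = H_k^T Phi(x^k)
        if np.linalg.norm(grad) <= eps:             # Step 1: stopping criterion
            return x
        try:                                        # Step 2: solve H_k d = -Phi (4)
            d = np.linalg.solve(H, -Phi)
            if grad @ d > -rho * np.linalg.norm(d) ** p:
                d = -grad                           # descent test (5) failed
        except np.linalg.LinAlgError:
            d = -grad                               # system (4) not solvable
        psi = 0.5 * (Phi @ Phi)
        t = 1.0                                     # Step 3: linesearch (6)
        while 0.5 * np.sum(fischer(x + t * d, F(x + t * d)) ** 2) \
                > psi + sigma * t * (grad @ d):
            t *= 0.5
        x = x + t * d
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # a P-matrix: the NCP has a unique solution
q = np.array([-1.0, 1.0])
sol = solve_ncp(lambda x: M @ x + q, lambda x: M, np.array([1.0, 1.0]))
print(sol)   # close to (0.5, 0)
```

For this problem the unique solution is x* = (0.5, 0): indeed F(x*) = (0, 1.5) ≥ 0, x* ≥ 0 and x*^T F(x*) = 0, and the iteration reaches it from the starting point (1, 1).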

The stopping criterion at Step 1 can be substituted by any other criterion without changing the properties of the algorithm. In what follows, as usual in analyzing the behaviour of algorithms, we shall assume that ε = 0 and that the algorithm produces an infinite sequence of points. The following result, summarizing the main properties of the algorithm, will be proved in Section 6, where we shall also discuss the properties of the algorithm in greater detail.

Theorem 3.1 It holds that
(a) Each accumulation point of the sequence {x^k} generated by the algorithm is a stationary point of Ψ.
(b) If one of the limit points of the sequence {x^k}, say x̄, is an isolated solution of NCP(F), then {x^k} → x̄.
(c) If one of the limit points of the sequence {x^k}, say x̄, is a BD-regular solution of the system Φ(x) = 0, and if each F_i is an SC¹ function, then {x^k} → x̄ and
1. eventually d^k is always given by the solution of system (4) (i.e., the negative gradient is eventually never used);
2. eventually the stepsize 1 is always accepted, so that x^{k+1} = x^k + d^k;
3. the convergence rate is superlinear; furthermore, if each F_i is twice continuously differentiable with Lipschitz continuous Hessian, then the convergence rate is quadratic.

The results stated in this theorem are somewhat "crude" and need to be completed by answering the following two questions. Under which conditions is a stationary point x of the function Ψ a global solution and hence a solution of NCP(F)? In the next section sufficient and necessary-and-sufficient conditions will be established in order to ensure this key property. The second question is: under which conditions is a solution of NCP(F) a BD-regular solution of the system Φ(x) = 0? In fact, according to Theorem 3.1, a superlinear (at least) convergence rate of the algorithm can be ensured under the assumption of BD-regularity at solutions of the system Φ(x) = 0.
More generally, the possibility of using the (hopefully) good "second order" search direction (4) even far from a solution is closely related to this issue. Conditions guaranteeing the nonsingularity of all the elements of ∂Φ(x) will be analyzed in Section 5. We note that, as discussed in [38], other strategies are possible to globalize the local Newton method (2) for the system Φ(x) = 0. However, the one we chose here, i.e., using the gradient of Ψ in "troublesome" situations, appears to be by far the simplest choice. The possibility of using the gradient, in turn, depends heavily on the surprising fact that ‖Φ‖² is continuously differentiable, which is not true, in general, for the squared norm of a semismooth system of equations. This peculiarity of the system Φ(x) = 0 paves the way for a simple extension of practically any

standard globalization technique used in the solution of smooth systems of equations: Levenberg-Marquardt methods, trust region methods, etc. In this paper we have chosen the simplest of these techniques, since we were mainly interested in showing the potentialities of our approach and its numerical viability.

4 Regularity Conditions

In this section we give some (necessary-and-) sufficient conditions for stationary points of our merit function Ψ to be solutions of NCP(F). We call these conditions regularity conditions. We first recall a result of Facchinei and Soares [7, Proposition 3.1].

Lemma 4.1 For an arbitrary x ∈ IR^n, we have

∂Φ(x)^T ⊆ D_a(x) + ∇F(x) D_b(x),

where D_a(x) = diag(a_1(x), ..., a_n(x)) and D_b(x) = diag(b_1(x), ..., b_n(x)) ∈ IR^{n×n} are diagonal matrices whose i-th diagonal elements are given by

a_i(x) = x_i / ‖(x_i, F_i(x))‖ − 1,  b_i(x) = F_i(x) / ‖(x_i, F_i(x))‖ − 1

if (x_i, F_i(x)) ≠ (0, 0), and by

a_i(x) = ξ_i − 1,  b_i(x) = ζ_i − 1

for some (ξ_i, ζ_i) ∈ IR² such that ‖(ξ_i, ζ_i)‖ ≤ 1 if (x_i, F_i(x)) = (0, 0).

In the following, for simplicity, we sometimes suppress the dependence on x in our notation. For example, the gradient of the differentiable function Ψ can be written as follows:

∇Ψ(x) = D_a(x) Φ(x) + ∇F(x) D_b(x) Φ(x) = D_a Φ + ∇F D_b Φ.

In the analysis of this section, the signs of the vectors D_a Φ ∈ IR^n and D_b Φ ∈ IR^n play an important role. We therefore introduce the index sets

C := {i ∈ I | x_i ≥ 0, F_i(x) ≥ 0, x_i F_i(x) = 0}    (complementary indices),
R := I \ C    (residual indices),

and further partition the index set R as follows:

P := {i ∈ R | x_i > 0, F_i(x) > 0}    (positive indices),
N := R \ P    (negative indices).

Note that these index sets depend on x, but that the notation does not reflect this dependence. However, this should not cause any confusion, because the given vector x ∈ IR^n will always be clear from the context.
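The gradient formula of Lemma 4.1 can be verified against finite differences; the affine test function F below is an arbitrary choice of ours for this illustration, taken so that no pair (x_i, F_i(x)) vanishes at the chosen point:

```python
import numpy as np

# Check grad Psi(x) = D_a Phi(x) + F'(x)^T D_b Phi(x) by central differences.
def F(x):
    return np.array([2 * x[0] + x[1] - 1, x[0] + 2 * x[1] + 1])

def Phi(x):
    Fx = F(x)
    return np.hypot(x, Fx) - (x + Fx)   # Fischer's function componentwise

def Psi(x):
    v = Phi(x)
    return 0.5 * (v @ v)

JF = np.array([[2.0, 1.0], [1.0, 2.0]])  # constant Jacobian of the affine F
x = np.array([0.3, -0.7])
Fx = F(x)
r = np.hypot(x, Fx)                      # all components nonzero at this x
Da = np.diag(x / r - 1.0)
Db = np.diag(Fx / r - 1.0)
grad = Da @ Phi(x) + JF.T @ (Db @ Phi(x))

h = 1e-6                                 # central finite differences
num = np.array([(Psi(x + h * e) - Psi(x - h * e)) / (2 * h) for e in np.eye(2)])
print(np.max(np.abs(grad - num)))        # agreement up to O(h^2)
```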

The names P and N of the above index sets are motivated by the following simple relations, which can be easily verified:

(D_a Φ)_i > 0 ⟺ (D_b Φ)_i > 0 ⟺ i ∈ P,
(D_a Φ)_i = 0 ⟺ (D_b Φ)_i = 0 ⟺ i ∈ C,
(D_a Φ)_i < 0 ⟺ (D_b Φ)_i < 0 ⟺ i ∈ N;    (7)

in particular, the signs of the corresponding elements of D_a Φ and D_b Φ are the same. The following definition of a regular vector is motivated by similar definitions in the recent papers of Pang and Gabriel [36, Definition 1], Moré [32, Definition 3.1] and Ferris and Ralph [10, Definition 2.4].

Definition 4.1 A point x ∈ IR^n is called regular if for every vector z ≠ 0 such that

z_C = 0, z_P > 0, z_N < 0,    (8)

there exists a vector y ∈ IR^n such that

y_P ≥ 0, y_N ≤ 0, y_R ≠ 0 and y^T ∇F(x) z ≥ 0.    (9)

It is difficult to compare our definition of regular vector with the corresponding ones in [10, 32, 36], since we use different index sets (the definition of the index sets depends heavily on the merit function chosen). Nevertheless, we think that the following points should be noted:
(a) As pointed out by Moré [32], the definition of regularity given by Pang and Gabriel [36] depends on the scaling of the function F, i.e., a vector x ∈ IR^n could be regular for NCP(F) in the sense of Pang and Gabriel [36], but not for the equivalent problem NCP(F_s), where F_s(x) = sF(x) for some positive scaling parameter s. Obviously, our definition is independent of the scaling of F.
(b) In the related definitions of regularity given by Moré [32] and Ferris and Ralph [10], a condition similar to (9) is employed. However, they have to assume y^T ∇F(x) z > 0 instead of (9). In this respect, our definition of regularity seems to be weaker. For example, if ∇F(x) is a positive semidefinite matrix, then we can choose y = z and directly obtain from Definition 4.1 that x is a regular vector. More generally, whenever directly comparable, our regularity conditions appear to be weaker than those introduced in [10, 32, 36].
The following result shows why the notion of regular vector is so important.

Theorem 4.2 A vector x ∈ IR^n is a solution of NCP(F) if and only if x is a regular stationary point of Ψ.

Proof. First assume that x ∈ IR^n is a solution of NCP(F). Then x is a global minimum of Ψ and hence a stationary point of Ψ by the differentiability of this function. Moreover, P = N = ∅ in this case, and therefore the regularity of x holds vacuously, since z = z_C and there exists no nonzero vector z satisfying conditions (8).
Suppose now that x is regular and that ∇Ψ(x) = 0. As mentioned at the beginning of this section, the stationarity condition can be rewritten as

D_a Φ + ∇F(x) D_b Φ = 0.

Consequently, we have

y^T (D_a Φ) + y^T ∇F(x)(D_b Φ) = 0    (10)

for any y ∈ IR^n. Assume that x is not a solution of NCP(F). Then R ≠ ∅ and hence, by (7), z := D_b Φ is a nonzero vector with

z_C = 0, z_P > 0, z_N < 0.

Recalling that the components of D_a Φ and z = D_b Φ have the same signs, and taking y ∈ IR^n from the definition of a regular vector, we have (since y_R ≠ 0)

y^T (D_a Φ) = y_C^T (D_a Φ)_C + y_P^T (D_a Φ)_P + y_N^T (D_a Φ)_N > 0    (11)

and

y^T ∇F(x)(D_b Φ) = y^T ∇F(x) z ≥ 0.    (12)

The inequalities (11) and (12) together, however, contradict condition (10). Hence R = ∅, which means that x is a solution of NCP(F). □

From Theorem 4.2 and remark (b) after Definition 4.1, we directly obtain that a stationary point x of Ψ is a solution of NCP(F) if the Jacobian matrix F'(x) is positive semidefinite. In particular, we obtain that all stationary points of Ψ are solutions of NCP(F) for monotone functions F, i.e., we reobtain in this way a recent result of Geiger and Kanzow [16, Theorem 2.5]. However, in the following we present some weaker conditions which also ensure that stationary points are global minimizers of Ψ and hence solutions of NCP(F). Let T ∈ IR^{|R|×|R|} denote the diagonal matrix T = diag(t_1, ..., t_{|R|}) with entries

t_i := +1 if i ∈ P,  t_i := −1 if i ∈ N,

and note that T T = I. Using this notation, we can prove the following result.

Theorem 4.3 Let x ∈ IR^n be a stationary point of Ψ, and assume that the matrix T F'(x)_{R,R} T is an S_0-matrix. Then x is a solution of NCP(F).

Proof. We shall prove that x is regular; the result then follows from Theorem 4.2. By the definition of an S_0-matrix and the assumptions made, there exists a nonzero vector ỹ_R such that

ỹ_R ≥ 0 and T F'(x)_{R,R} T ỹ_R ≥ 0.    (13)

Let y be the unique vector such that

y_C = 0, y_R = T ỹ_R;    (14)

by the definition of T and (13) we have that y_R ≠ 0 and

y_P ≥ 0, y_N ≤ 0.

Then it is easy to see that, for every z ∈ IR^n such that z ≠ 0 and

z_C = 0, z_P > 0, z_N < 0,

we can write, recalling (14),

y^T ∇F(x) z = y_R^T ∇F(x)_{R,R} z_R
            = y_R^T (T T) ∇F(x)_{R,R} (T T) z_R
            = (T y_R)^T (T ∇F(x)_{R,R} T)(T z_R)
            = ỹ_R^T (T F'(x)_{R,R} T)^T (T z_R) ≥ 0,    (15)

where the last inequality follows from (13) and the fact that T z_R > 0. □

In the following corollary, we summarize some simple but noteworthy consequences of Theorem 4.3. First recall that a vector x ∈ IR^n is called feasible for NCP(F) if x ≥ 0 and F(x) ≥ 0.

Corollary 4.4 It holds that
(a) If x ∈ IR^n is a stationary point of Ψ such that the submatrix F'(x)_{R,R} is a P_0-matrix, then x is a solution of NCP(F).
(b) If F : IR^n → IR^n is a P_0-function, then all the stationary points of Ψ are solutions of NCP(F).
(c) If x ∈ IR^n is a feasible stationary point of Ψ such that the submatrix F'(x)_{R,R} is an S_0-matrix, then x is a solution of NCP(F).

Proof. (a) Since a square matrix M is a P_0-matrix if and only if the matrix DMD is a P_0-matrix for all nonsingular diagonal matrices D, the assertion is a direct consequence of Theorem 4.3 and the fact that every P_0-matrix is also an S_0-matrix.
(b) Since F is a continuously differentiable P_0-function, its Jacobian F'(x) is a P_0-matrix for all x ∈ IR^n, see Moré and Rheinboldt [33, Theorem 5.8]. The

second assertion is therefore a direct consequence of part (a), since, by definition, all principal submatrices of a P_0-matrix are also P_0-matrices.
(c) Since x is assumed to be a feasible vector, we have N = ∅. Hence the diagonal matrix T introduced before Theorem 4.3 reduces to the identity matrix, and part (c) follows directly from Theorem 4.3. □

Note that part (b) of Corollary 4.4 has recently been proved also by Facchinei and Soares [7, Theorem 4.1]. Before proving our next result, we introduce some notation. Without loss of generality, assume that the Jacobian F'(x) is partitioned in the following way:

F'(x) = [ F'(x)_{C,C}  F'(x)_{C,P}  F'(x)_{C,N} ]
        [ F'(x)_{P,C}  F'(x)_{P,P}  F'(x)_{P,N} ]
        [ F'(x)_{N,C}  F'(x)_{N,P}  F'(x)_{N,N} ],

and define

J(x) := T F'(x) T,

where the transformation matrix T has the diagonal structure

T := diag(+I_C, +I_P, −I_N)

(if C = ∅, this matrix coincides with the matrix T introduced before Theorem 4.3, so we use the same symbol here). Then we have

J(x) = [  F'(x)_{C,C}   F'(x)_{C,P}  −F'(x)_{C,N} ]
       [  F'(x)_{P,C}   F'(x)_{P,P}  −F'(x)_{P,N} ]
       [ −F'(x)_{N,C}  −F'(x)_{N,P}   F'(x)_{N,N} ].    (16)

Transformed Jacobian matrices of this kind were also considered by Pang and Gabriel [36], Moré [32] and Ferris and Ralph [10]. Using this matrix, we can prove the following result.

Theorem 4.5 Let x ∈ IR^n be a stationary point of Ψ, and assume that the submatrix F'(x)_{C,C} is nonsingular and that the Schur-complement of this matrix in J(x) is an S_0-matrix. Then x is a solution of NCP(F).

Proof. We first show that there exist ỹ_P, ỹ_N ≥ 0 with ỹ_R ≠ 0 and q_P, q_N ≥ 0 such that

J(x) ỹ = q,    (17)

where ỹ = (ỹ_C, ỹ_P, ỹ_N)^T and q = (0_C, q_P, q_N)^T ∈ IR^n. In view of (16), system (17) can be rewritten as

F'(x)_{C,C} ỹ_C + J(x)_{C,R} ỹ_R = 0_C,    (18)
J(x)_{R,C} ỹ_C + J(x)_{R,R} ỹ_R = q_R.    (19)

Solving the first equation for ỹ_C yields

ỹ_C = −F'(x)_{C,C}^{-1} J(x)_{C,R} ỹ_R.    (20)

Substituting this into (19) and rearranging leads to

[ J(x)_{R,R} − J(x)_{R,C} F'(x)_{C,C}^{-1} J(x)_{C,R} ] ỹ_R = q_R.    (21)

Since, by assumption, the matrix of the linear system (21) is an S_0-matrix, there exist ỹ_R ≥ 0, ỹ_R ≠ 0, and q_R ≥ 0 satisfying this system. Then define ỹ_C by (20) and set y := T ỹ. Then

y_C = ỹ_C, y_P = ỹ_P ≥ 0, y_N = −ỹ_N ≤ 0 and y_R ≠ 0,

i.e., the vector y ∈ IR^n satisfies all the conditions required in Definition 4.1 for a regular vector x. Now let z ≠ 0 be an arbitrary vector with z_C = 0, z_P > 0 and z_N < 0. Then z̃ := T z satisfies the conditions

z̃_C = 0_C, z̃_P > 0, z̃_N > 0,

and we therefore obtain from (17) and the fact that T T = I:

y^T ∇F(x) z = z^T F'(x) y = z^T T T F'(x) T T ỹ = z̃^T J(x) ỹ = z̃^T q = z̃_P^T q_P + z̃_N^T q_N ≥ 0,

i.e., x is regular. □

If C = ∅, then Theorem 4.5 reduces to Theorem 4.3. We note that similar results were shown in [10, 32, 36], but in all these papers a certain Schur-complement is assumed to be an S-matrix, which, again, is a stronger assumption than the one used in our Theorem 4.5. In order to illustrate our theory, consider NCP(F) with F : IR⁴ → IR⁴ defined by

F(x) := ( 3x_1² + 2x_1x_2 + 2x_2² + x_3 + 3x_4 − 6,
          2x_1² + x_1 + x_2² + 3x_3 + 2x_4 − 2,
          3x_1² + x_1x_2 + 2x_2² + 2x_3 + 3x_4 − 1,
          x_1² + 3x_2² + 2x_3 + 3x_4 − 3 )^T.

This small example is due to Kojima [28] and is a standard test problem for nonlinear complementarity codes. Harker and Xiao [20] report a failure of their method for this example. Their method converges to the point

x̂ = (1.0551, 1.3347, 1.2681, 0)^T, where F(x̂) = (4.9873, 6.8673, 9.8472, 5.9939)^T.
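The four-variable test function just displayed can be evaluated numerically at the reported point; the sketch below uses the polynomial coefficients as transcribed above (treat them as assumptions of this illustration) and confirms that x̂ is feasible but not complementary:

```python
import numpy as np

# Kojima's four-variable test problem, coefficients as transcribed above.
def F(x):
    x1, x2, x3, x4 = x
    return np.array([
        3 * x1**2 + 2 * x1 * x2 + 2 * x2**2 + x3 + 3 * x4 - 6,
        2 * x1**2 + x1 + x2**2 + 3 * x3 + 2 * x4 - 2,
        3 * x1**2 + x1 * x2 + 2 * x2**2 + 2 * x3 + 3 * x4 - 1,
        x1**2 + 3 * x2**2 + 2 * x3 + 3 * x4 - 3,
    ])

xhat = np.array([1.0551, 1.3347, 1.2681, 0.0])
print(F(xhat))   # approx (4.987, 6.867, 9.847, 5.994), matching the values above

# xhat >= 0 and F(xhat) >= 0, so xhat is feasible, but x1 * F1(xhat) > 0:
# complementarity fails and xhat is not a solution of NCP(F).
print(xhat @ F(xhat))
```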

This point is clearly not a solution of NCP(F). The corresponding Jacobian matrix is

F'(x̂) = [ 9.0001  7.4490  1  3 ]
         [ 5.2204  2.6694  3  2 ]
         [ 7.6653  6.3939  2  3 ]
         [ 2.1102  8.0082  2  3 ].

Obviously, F'(x̂) is an S_0-matrix. Since x̂ is also feasible, Corollary 4.4 (c) guarantees that our method will not terminate at x̂.

5 Nonsingularity Conditions

In this section we study conditions which guarantee that all elements of ∂Φ(x) are nonsingular. We begin with three lemmas that will be needed later in this section.

Lemma 5.1 If M ∈ IR^{n×n} is a P_0-matrix, then every matrix of the form D_a + D_b M is nonsingular for all positive definite (negative definite) diagonal matrices D_a, D_b ∈ IR^{n×n}.

Proof. We only consider the case in which the two matrices are positive definite; the other case is analogous. Assume that M is a P_0-matrix and that D_a := diag(a_1, ..., a_n), D_b := diag(b_1, ..., b_n) are positive definite diagonal matrices. Then (D_a + D_b M) q = 0 for some q ∈ IR^n implies D_a q = −D_b M q and therefore

q_i = −(b_i / a_i)(M q)_i  (i ∈ I).

Since M is a P_0-matrix, we get from Proposition 2.1 that q = 0, i.e., the matrix D_a + D_b M is nonsingular. □

We note that it is also possible to prove the converse direction in the above lemma, i.e., the matrix D_a + D_b M is nonsingular for all positive (negative) definite diagonal matrices D_a, D_b ∈ IR^{n×n} if and only if M is a P_0-matrix. In the following, however, we only need the direction stated in Lemma 5.1.

Lemma 5.2 Let M be any matrix and consider the partition

M = [ M_{L,L}  M_{L,J} ]
    [ M_{J,L}  M_{J,J} ].

Assume that
(a) M_{L,L} is nonsingular,
(b) M/M_{L,L} is a P-matrix (P_0-matrix);

then, for every index set K such that L ⊆ K ⊆ L ∪ J, M_{K,K}/M_{L,L} is a P-matrix (P₀-matrix).

Proof. The assertion easily follows from the fact that M_{K,K}/M_{L,L} is a principal submatrix of M/M_{L,L} which, in turn, is a P-matrix (P₀-matrix). □

We now introduce some more index sets:

α := {i ∈ I | x_i > 0, F_i(x) = 0},
β := {i ∈ I | x_i = 0, F_i(x) = 0},
γ := {i ∈ I | x_i = 0, F_i(x) > 0},
δ := I \ (α ∪ β ∪ γ)

(once again, we suppress in our notation the dependence of the above index sets on x). If the point x under consideration is a solution of NCP(F), then δ = ∅ and the sets α, β and γ coincide with those already introduced in Section 2. Using the notation of the previous section, the index sets α, β and γ form a partition of the set C, whereas δ corresponds to what has been called R in Section 4 (here we changed the name of this set just for uniformity of notation). Note that β denotes the set of degenerate indices at which Φ is not differentiable. Further note that at an arbitrary point x ∈ IR^n, the index sets α, γ and, in particular, β are usually very small (and quite often empty).

The last lemma is introduced in order to simplify the proof of the main theorem (Theorem 5.4). It gives conditions which guarantee that all elements of ∂Φ(x) are nonsingular, assuming β = ∅. This result will then be used to prove the general case β ≠ ∅.

Lemma 5.3 Let x ∈ IR^n be given and set M := F'(x). Assume that (a) β(x) = ∅; (b) M_{α,α} is nonsingular; (c) the Schur-complement M_{α∪δ,α∪δ}/M_{α,α} is a P₀-matrix. Then the function Φ is differentiable at x and the Jacobian matrix Φ'(x) is nonsingular.

Proof. Since β(x) = ∅, the function Φ is differentiable at x with Φ'(x) = D_a + D_b M, where D_a and D_b are the diagonal matrices introduced in Lemma 4.1. Let q ∈ IR^n be an arbitrary vector with (D_a + D_b M)q = 0. Writing

D_a = diag(D_{a,α}, D_{a,γ}, D_{a,δ}),

D_b = diag(D_{b,α}, D_{b,γ}, D_{b,δ}),

M = ( M_{α,α}  M_{α,γ}  M_{α,δ}
      M_{γ,α}  M_{γ,γ}  M_{γ,δ}
      M_{δ,α}  M_{δ,γ}  M_{δ,δ} ),

q = (q_α, q_γ, q_δ)^T,

where D_{a,α}, D_{a,γ}, ... are abbreviations for the diagonal matrices (D_a)_{α,α}, (D_a)_{γ,γ}, ..., the equation (D_a + D_b M)q = 0 can be rewritten as

M_{α,α} q_α + M_{α,γ} q_γ + M_{α,δ} q_δ = 0_α, (22)
D_{a,γ} q_γ = 0_γ, (23)
D_{a,δ} q_δ + D_{b,δ} M_{δ,α} q_α + D_{b,δ} M_{δ,γ} q_γ + D_{b,δ} M_{δ,δ} q_δ = 0_δ, (24)

where we have taken into account that, by Lemma 4.1, D_{a,α} = 0_{α,α}, D_{b,γ} = 0_{γ,γ} and that D_{b,α} is nonsingular. Since the diagonal matrix D_{a,γ} is also nonsingular, we directly obtain

q_γ = 0_γ (25)

from (23). Hence equations (22) and (24) reduce to

M_{α,α} q_α + M_{α,δ} q_δ = 0_α, (26)
D_{a,δ} q_δ + D_{b,δ} M_{δ,α} q_α + D_{b,δ} M_{δ,δ} q_δ = 0_δ. (27)

Due to the nonsingularity of the submatrix M_{α,α}, we directly obtain from (26)

q_α = −M_{α,α}^{-1} M_{α,δ} q_δ. (28)

Substituting this into (27) and rearranging terms yields

[D_{a,δ} + D_{b,δ}(M_{δ,δ} − M_{δ,α} M_{α,α}^{-1} M_{α,δ})] q_δ = 0_δ. (29)

According to the negative definiteness of the diagonal matrices D_{a,δ} and D_{b,δ}, we obtain from assumption (c), equation (29) and Lemma 5.1 that q_δ = 0_δ. This, in turn, implies that q_α = 0_α by (28). In view of (25), we therefore have q = 0, which proves the desired result. □

Theorem 5.4 Let x ∈ IR^n be given, and set M := F'(x). Assume that (a) the submatrices M_{α̃,α̃} are nonsingular for all α̃ with α ⊆ α̃ ⊆ α ∪ β; (b) the Schur-complement M_{α∪β∪δ,α∪β∪δ}/M_{α,α} is a P₀-matrix. Then all elements G ∈ ∂Φ(x) are nonsingular.

Proof. We first note that, by Lemma 5.2 and the assumptions made, the Schur-complement M_{α̂,α̂}/M_{α,α} is a P₀-matrix for every α̂ with α ⊆ α̂ ⊆ α ∪ β ∪ δ. Let us now consider α̃ with α ⊆ α̃ ⊆ α ∪ β. We prove that the Schur-complement M_{α̃∪δ̃,α̃∪δ̃}/M_{α̃,α̃} is a P₀-matrix for all δ̃ ⊆ δ ∪ [(α ∪ β) \ α̃]. By assumption (a) and the quotient formula for Schur-complements (see Cottle, Pang and Stone [4, Proposition 2.3.6]), we have that M_{α̃,α̃}/M_{α,α} is nonsingular and

M_{α̃∪δ̃,α̃∪δ̃}/M_{α̃,α̃} = (M_{α̃∪δ̃,α̃∪δ̃}/M_{α,α}) / (M_{α̃,α̃}/M_{α,α}). (30)

By the first part of the proof, the matrix M_{α̃∪δ̃,α̃∪δ̃}/M_{α,α} is a P₀-matrix. Hence, by using Lemma 2.3 of Chen and Harker [2], the Schur-complement on the right-hand side of (30) is a P₀-matrix. Therefore the matrix on the left-hand side of (30) is also a P₀-matrix.

Now, recalling that β denotes the set of degenerate indices at which (x_i, F_i(x)) = (0, 0), and considering pairs (ξ_i, ζ_i) ∈ IR² such that ‖(ξ_i, ζ_i)‖ ≤ 1 as in Lemma 4.1, let us partition the index set β into the following three subsets:

β₁ := {i ∈ β | ξ_i > 0, ζ_i = 0},
β₂ := {i ∈ β | ξ_i = 0, ζ_i > 0},
β₃ := β \ (β₁ ∪ β₂),

and define α̃ := α ∪ β₁, γ̃ := γ ∪ β₂ and δ̃ := δ ∪ β₃. It is now very easy to see that the nonsingularity of any element G ∈ ∂Φ(x) can be proved following exactly the lines of the proof of Lemma 5.3 by replacing the index sets α, γ and δ by α̃, γ̃ and δ̃, respectively, and taking into account assumptions (a) and (b). □

In view of Lemma 5.2, the assumptions (c) of Lemma 5.3 and (b) of Theorem 5.4 are satisfied, e.g., if the Schur-complement M/M_{α,α} is a P₀-matrix. Due to Chen and Harker [2, Lemma 2.3], Schur-complements of nonsingular principal submatrices of a P₀-matrix are again P₀-matrices; hence assumption (b) of Theorem 5.4 is satisfied in particular if F'(x) is a P₀-matrix. In turn it is known, see [33, Theorem 5.8], that F'(x) is a P₀-matrix if F itself is a P₀-function. In the next two corollaries we point out two simple situations in which the assumptions of the previous theorem are satisfied.
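Both matrix facts used repeatedly in this section, Lemma 5.1 and the Chen–Harker result that a Schur-complement of a P₀-matrix with respect to a nonsingular principal submatrix is again a P₀-matrix, are easy to check numerically on small examples. The sketch below is purely illustrative (the test matrix and the helper names are our own choices, not taken from the paper's code); it verifies the P₀ property by enumerating principal minors:

```python
import itertools

import numpy as np

def is_P0(M, tol=1e-10):
    """P0-matrix test: every principal minor is nonnegative."""
    n = M.shape[0]
    return all(np.linalg.det(M[np.ix_(s, s)]) >= -tol
               for r in range(1, n + 1)
               for s in itertools.combinations(range(n), r))

def schur(M, K, L):
    """Schur-complement M_{K,K} - M_{K,L} M_{L,L}^{-1} M_{L,K}."""
    return (M[np.ix_(K, K)]
            - M[np.ix_(K, L)] @ np.linalg.solve(M[np.ix_(L, L)], M[np.ix_(L, K)]))

# A singular P0-matrix (illustrative choice)
M = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
assert is_P0(M) and abs(np.linalg.det(M)) < 1e-12

# Lemma 5.1: D_a + D_b M is nonsingular for positive definite diagonal D_a, D_b,
# even though M itself is singular
rng = np.random.default_rng(0)
for _ in range(20):
    D_a = np.diag(rng.uniform(0.1, 2.0, 3))
    D_b = np.diag(rng.uniform(0.1, 2.0, 3))
    assert abs(np.linalg.det(D_a + D_b @ M)) > 1e-9

# Chen-Harker: the Schur-complement w.r.t. the nonsingular submatrix M_{L,L},
# here with L = {2} and K = {0, 1}, is again a P0-matrix
assert is_P0(schur(M, [0, 1], [2]))
```

With K = {0, 1} and L = {2} the Schur-complement equals [[1, 1], [1, 1]], which is P₀ but singular: only the nonsingularity of D_a + D_b M, not of M itself, is claimed by Lemma 5.1.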
The first of the two corollaries was already proved by Facchinei and Soares [8, Proposition 3.2].

Corollary 5.5 Let x* be an R-regular solution of NCP(F). Then all the elements of the generalized Jacobian ∂Φ(x*) are nonsingular.

Proof. Since x* is a solution of NCP(F), we have δ(x*) = ∅. We prove that assumptions (a) and (b) of Theorem 5.4 are satisfied. Assumption (a) follows by R-regularity and by [35, Lemma 1], while assumption (b) is obvious by the R-regularity. □

Due to the upper semi-continuity of the generalized Jacobian (see Clarke [3, Proposition 2.6.2 (c)]), Corollary 5.5 remains true for all x in a sufficiently small neighbourhood of an R-regular solution x* of NCP(F).

Corollary 5.6 Suppose that F'(x) is a P-matrix. Then all the elements of the generalized Jacobian ∂Φ(x) are nonsingular.

Proof. Since every principal submatrix of a P-matrix is nonsingular and the Schur-complement of every principal submatrix of a P-matrix is again a P-matrix, we obviously have that the assumptions of Theorem 5.4 are satisfied. □

Jiang and Qi [23, Proposition 1] recently proved that all elements of the generalized Jacobian ∂Φ(x) are nonsingular if F is a uniform P-function. Since it is not difficult to see that the Jacobian matrices of a uniform P-function are P-matrices, we reobtain their result as a direct consequence of Corollary 5.6.

6 Convergence of the Algorithm

The main aim of this section is to prove and discuss Theorem 3.1.

Proof of point (a) of Theorem 3.1. The proof is by contradiction. We first note that it can be assumed, without loss of generality, that the direction is always given by (4). In fact if, for an infinite set of indices K, we have d^k = −∇Ψ(x^k) for all k ∈ K, then any limit point is a stationary point of Ψ by well-known results (see, e.g., Proposition 1.16 in [1]). Suppose now, renumbering if necessary, that {x^k} → x̄ and that ∇Ψ(x̄) ≠ 0. Since the direction d^k always satisfies (4), we can write

‖Φ(x^k)‖ = ‖H_k d^k‖ ≤ ‖H_k‖ ‖d^k‖, (31)

from which we get

‖d^k‖ ≥ ‖Φ(x^k)‖ / ‖H_k‖ (32)

(recall that ‖H_k‖ cannot be 0, since otherwise (31) would imply Φ(x^k) = 0, so that x^k would be a stationary point and the algorithm would have stopped). We now note that

0 < m ≤ ‖d^k‖ ≤ M (33)

for some positive m and M. In fact if, for some subsequence K, {‖d^k‖}_K → 0, we have from (32) that {‖Φ(x^k)‖}_K → 0, because H_k is bounded on the bounded sequence {x^k} by known properties of the generalized Jacobian.
But then, by continuity, Φ(x̄) = 0, so that x̄ is a solution of the nonlinear complementarity problem, thus contradicting the assumption ∇Ψ(x̄) ≠ 0. On the other hand, ‖d^k‖ cannot be

unbounded because, taking into account that ∇Ψ(x^k) is bounded and p > 2, this would contradict (5). Then, since (6) holds at each iteration and Ψ is bounded from below on the bounded sequence {x^k}, we have that {Ψ(x^{k+1}) − Ψ(x^k)} → 0, which implies, by the linesearch test,

{2^{−i_k} ∇Ψ(x^k)^T d^k} → 0. (34)

We want to show that 2^{−i_k} is bounded away from 0. Suppose the contrary. Then, subsequencing if necessary, we have that {2^{−i_k}} → 0, so that at each iteration the stepsize is reduced at least once and (6) gives

[Ψ(x^k + 2^{−(i_k−1)} d^k) − Ψ(x^k)] / 2^{−(i_k−1)} > σ ∇Ψ(x^k)^T d^k. (35)

By (33) we can assume, subsequencing if necessary, that {d^k} → d̄ ≠ 0, so that, passing to the limit in (35), we get

∇Ψ(x̄)^T d̄ ≥ σ ∇Ψ(x̄)^T d̄. (36)

On the other hand we also have, by (5), that ∇Ψ(x̄)^T d̄ ≤ −ρ ‖d̄‖^p < 0, which contradicts (36); hence 2^{−i_k} is bounded away from 0. But then (34) and (5) imply that {d^k} → 0, thus contradicting (33). Therefore ∇Ψ(x̄) = 0. □

Proof of point (b) of Theorem 3.1. Let {x^k} be the sequence of points generated by the algorithm and let x* be the locally unique limit point in the statement of the theorem; then x* is an isolated global minimum point of Ψ. Denote by Λ the set of limit points of the sequence {x^k}; we have that x* belongs to Λ, which is therefore a nonempty set. Let δ̄ be the distance of x* to Λ \ {x*} if x* is not the only limit point of {x^k}, and 1 otherwise, i.e.,

δ̄ := inf_{y ∈ Λ\{x*}} ‖y − x*‖  if Λ \ {x*} ≠ ∅,    δ̄ := 1  otherwise;

since x* is an isolated solution, we have δ̄ > 0. Let us now indicate by Ω₁ and Ω₂ the following sets:

Ω₁ = {y ∈ IR^n : dist(y | Λ) ≤ δ̄/4},    Ω₂ = {y ∈ IR^n : ‖y‖ ≥ ‖x*‖ + δ̄}.

We have that for k sufficiently large, let us say for k ≥ k̄, x^k belongs to at least one of the two sets Ω₁, Ω₂. Let now K be the subsequence of all k for which ‖x^k − x*‖ ≤ δ̄/4 (this set is obviously nonempty because x* is a limit point of the sequence).
Since all points of the subsequence {x^k}_K are contained in the compact set S(x*, δ̄/4) and every limit point of this subsequence is also a limit point of {x^k}, we have that the whole subsequence {x^k}_K converges to x*, the unique limit point of {x^k} in S(x*, δ̄/4). Furthermore, since x* is a solution of NCP(F), we have that

{‖∇Ψ(x^k)‖}_K → 0,

which in turn, by (5), implies that {d^k}_K tends to 0. So we can find k̃ ≥ k̄ such that ‖d^k‖ ≤ δ̄/4 if k ∈ K and k ≥ k̃. Let now k̂ be any fixed k ≥ k̃ belonging to K; we can write

dist(x^{k̂+1} | Λ \ {x*}) ≥ inf_{y ∈ Λ\{x*}} ‖y − x*‖ − (‖x* − x^{k̂}‖ + ‖x^{k̂} − x^{k̂+1}‖) ≥ δ̄ − δ̄/4 − δ̄/4 = δ̄/2. (37)

This implies that x^{k̂+1} cannot belong to Ω₁ \ S(x*, δ̄/4); on the other hand, since x^{k̂+1} = x^{k̂} + t_{k̂} d^{k̂} for some t_{k̂} ∈ (0, 1], we have

‖x^{k̂+1}‖ ≤ ‖x^{k̂}‖ + ‖t_{k̂} d^{k̂}‖ ≤ ‖x* + (x^{k̂} − x*)‖ + ‖d^{k̂}‖ ≤ ‖x*‖ + ‖x^{k̂} − x*‖ + ‖d^{k̂}‖ ≤ ‖x*‖ + δ̄/4 + δ̄/4,

so that x^{k̂+1} does not belong to Ω₂. Hence we get that x^{k̂+1} belongs to S(x*, δ̄/4). But then, by definition, we have that k̂+1 ∈ K, so by induction (recall that k̂+1 > k̃ also, so that ‖d^{k̂+1}‖ ≤ δ̄/4) we have that every k > k̃ belongs to K and the whole sequence converges to x*. □

Remark 6.1 We explicitly point out that in the proof of point (b) we have shown that if the sequence of points generated by the algorithm converges to a locally unique solution of the nonlinear complementarity problem, then {‖d^k‖} → 0. This fact will also be used in the proof of point (c).

Proof of point (c) of Theorem 3.1. The fact that {x^k} → x* follows by part (b), noting that the BD-regularity assumption implies, by [37, Proposition 3], that x* is an isolated solution of the system Φ(x) = 0 and hence also of NCP(F). Then, we first prove that locally the direction is always the solution of system (4), and then that eventually the stepsize of one satisfies the linesearch test (6), so that the algorithm eventually reduces to the local algorithm (2) and the assertions on the convergence rate readily follow from Theorems 2.2 and 2.3. Since {x^k} converges to a BD-regular solution of the system Φ(x) = 0, we have, by Lemma 2.6 of [38], that the determinant of H_k is bounded away from 0 for k sufficiently large. Hence system (4) always admits a solution for k sufficiently large. We want to show that this solution satisfies, for some positive ρ₁, the condition (38) below. We first note that, by (4),
‖d^k‖ ≤ ‖(H_k)^{-1}‖ ‖Φ(x^k)‖, so that, recalling that ∇Ψ(x^k) can be written as H_k^T Φ(x^k),

∇Ψ(x^k)^T d^k ≤ −ρ₁ ‖d^k‖², (38)

since

∇Ψ(x^k)^T d^k = −‖Φ(x^k)‖² ≤ −‖d^k‖² / M̄², (39)

where M̄ is an upper bound on ‖(H_k)^{-1}‖ (note that this upper bound exists because the determinant of H_k is bounded away from 0). Then (38) follows from (39) by taking ρ₁ ≤ 1/M̄². But now it is easy to see, noting that {‖d^k‖} → 0, that (38) eventually implies (5) for any p > 2 and any positive ρ. Now, to complete the proof of the theorem it only remains to show that eventually the stepsize determined by the Armijo test (6) is 1, that is, that eventually i_k = 0. But this immediately follows by Theorem 3.2 in [6], taking into account (38) and the fact that Ψ is SC¹. □

We can now use Theorem 3.1 and the results of Sections 4 and 5 to easily obtain the following two theorems.

Theorem 6.1 Let x* be a limit point of the sequence generated by the algorithm. Then x* is a solution of NCP(F) if and only if it is a regular point according to Definition 4.1. In particular, x* is a solution of NCP(F) if it satisfies any of the conditions of Theorem 4.3, Corollary 4.4, Theorem 4.5, Theorem 5.4, Corollary 5.5, or Corollary 5.6.

Proof. The proof is obvious except for the fact that the conditions of Theorem 5.4, Corollary 5.5, or Corollary 5.6 are sufficient to guarantee that x* is a solution of NCP(F). To prove this it suffices to note that, by Theorem 3.1 (a), ∇Ψ(x*) = 0. Employing elementary rules on the calculation of subgradients, it can be shown that ∇Ψ(x*) ∈ ∂Φ(x*)^T Φ(x*) (see the discussion in Section 3). If x* is not a solution of NCP(F), then Φ(x*) ≠ 0, so that, since Theorem 5.4, Corollary 5.5, and Corollary 5.6 guarantee the nonsingularity of all the elements of ∂Φ(x*), it cannot be that ∇Ψ(x*) = 0 (note that to prove the result it would be sufficient to show that there exists at least one nonsingular element of ∂Φ(x*)). □

The next result is an immediate consequence of the analysis carried out in Section 5.

Theorem 6.2 Let x* be a limit point of the sequence generated by the algorithm, and suppose that x* satisfies any of the conditions of Theorem 5.4, Corollary 5.5, or Corollary 5.6.
Then x* is a BD-regular solution of the system Φ(x) = 0, so that all the assertions of point (c) of Theorem 3.1 hold true.

It is interesting to note that Theorem 5.4, Corollary 5.5, and Corollary 5.6 guarantee the nonsingularity of all the elements of ∂Φ(x*), while, according to Theorem 2.3, BD-regularity, i.e., nonsingularity of all the elements of ∂_B Φ(x*), is sufficient to have superlinear/quadratic convergence. Hence a fast convergence rate can take place also when the conditions of Theorem 5.4, Corollary 5.5, or Corollary 5.6 are not met. We illustrate this with a simple example. Consider the one-dimensional complementarity problem with F(x) = −x; obviously the unique solution of this problem is x* = 0. For this problem we have

Φ(x) = √(x² + x²) − x + x = √2 |x|

and ∂_B Φ(0) = {−√2, √2}, so that x* = 0 is a BD-regular solution of Φ(x) = 0.
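The claims in this one-dimensional example are easy to verify directly. The short script below (an illustration we add here, not part of the paper's experiments) evaluates the Fischer–Burmeister reformulation Φ(x) = √(x² + F(x)²) − x − F(x) for F(x) = −x and recovers the two elements ±√2 of ∂_B Φ(0) as one-sided difference quotients:

```python
import math

def fb(a, b):
    """Fischer-Burmeister function: phi(a, b) = sqrt(a^2 + b^2) - a - b."""
    return math.hypot(a, b) - a - b

def Phi(x):
    return fb(x, -x)  # F(x) = -x, so Phi(x) = sqrt(2 x^2) - x + x = sqrt(2)|x|

# Phi coincides with sqrt(2)|x| everywhere
for x in (-1.5, -0.3, 0.0, 0.7, 2.0):
    assert abs(Phi(x) - math.sqrt(2) * abs(x)) < 1e-12

# The two one-sided derivatives at the kink x* = 0 are exactly the elements
# of the B-subdifferential: +sqrt(2) from the right, -sqrt(2) from the left
h = 1e-8
assert abs((Phi(h) - Phi(0.0)) / h - math.sqrt(2)) < 1e-6
assert abs((Phi(0.0) - Phi(-h)) / h + math.sqrt(2)) < 1e-6
```

Any other element of the generalized Jacobian at 0 is a convex combination of these two one-sided limits.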

On the other hand, none of the conditions of Theorem 5.4, Corollary 5.5, or Corollary 5.6 can be satisfied, since ∂Φ(0) = [−√2, √2] contains the singular element 0.

To complete the discussion of the properties of the algorithm we note that if the function F is a uniform P-function, then the level sets of Ψ are bounded by Theorem 2.3 (c), so that at least one limit point of the sequence produced by the algorithm exists. By Theorem 3.1 (a), each limit point is a stationary point of Ψ, and by Corollary 4.4 (b) every stationary point is a solution of NCP(F). Since a complementarity problem with a uniform P-function has a unique solution, we obtain the result that any sequence {x^k} generated by our algorithm converges to this unique solution. We note that the case of a uniform P-function is, basically, the case considered in [23].

7 Numerical Results

In this section we perform some numerical tests in order to show the viability of the proposed approach. In Section 3 we left unanswered the problem of calculating an element of ∂_B Φ(x), which is needed in Step 2 of the algorithm. Our first task is therefore to show how this can be accomplished.

Procedure to evaluate an element H belonging to ∂_B Φ(x)

Step 1: Set β = {i : x_i = 0 = F_i(x)}.
Step 2: Choose z ∈ IR^n such that z_i ≠ 0 for all i belonging to β.
Step 3: For each i ∉ β, set the ith row of H equal to

( x_i / ‖(x_i, F_i(x))‖ − 1 ) e_i^T + ( F_i(x) / ‖(x_i, F_i(x))‖ − 1 ) ∇F_i(x)^T. (40)

Step 4: For each i ∈ β, set the ith row of H equal to

( z_i / ‖(z_i, ∇F_i(x)^T z)‖ − 1 ) e_i^T + ( ∇F_i(x)^T z / ‖(z_i, ∇F_i(x)^T z)‖ − 1 ) ∇F_i(x)^T. (41)

Theorem 7.1 The element H calculated by the above procedure is an element of ∂_B Φ(x).

Proof. We shall build a sequence of points {y^k} at which Φ is differentiable and such that ∇Φ(y^k)^T tends to H; the theorem will then follow by the very definition of the B-subdifferential. Let y^k = x + ε_k z, where z is the vector of Step 2 and {ε_k} is a sequence of positive numbers converging to 0.
Since, if i ∉ β, either x_i ≠ 0 or F_i(x) ≠ 0, and

z_i ≠ 0 for all i ∈ β, we can assume, by continuity, that ε_k is small enough so that, for each i, either y^k_i ≠ 0 or F_i(y^k) ≠ 0, and Φ is therefore differentiable at y^k. Now, if i does not belong to β, it is obvious, by continuity, that the ith row of ∇Φ(y^k)^T tends to the ith row of H; so the only case of concern is when i belongs to β. We recall that, according to Lemma 4.1, the ith row of ∇Φ(y^k)^T is given by

(a_i(y^k) − 1) e_i^T + (b_i(y^k) − 1) ∇F_i(y^k)^T, (42)

where

a_i(y^k) = ε_k z_i / √(ε_k² z_i² + F_i(y^k)²),    b_i(y^k) = F_i(y^k) / √(ε_k² z_i² + F_i(y^k)²).

We note that by a Taylor expansion we can write, for each i ∈ β,

F_i(y^k) = F_i(x) + ε_k ∇F_i(ξ^k)^T z = ε_k ∇F_i(ξ^k)^T z,  with ξ^k → x. (43)

Substituting (43) in (42) and passing to the limit, we obtain, taking into account the continuity of ∇F_i, that also the rows of ∇Φ(y^k)^T relative to indices in β tend to the corresponding rows of H defined in Step 4. □

Changing z we obtain a different element of ∂_B Φ(x). In our code we chose to set z_i = 0 if i ∉ β and z_i = 1 if i ∈ β. We note that the overhead to evaluate an element of ∂_B Φ(x) when Φ is not differentiable is negligible with respect to the case in which Φ is differentiable. This, again, is a favourable characteristic of our reformulation, which is not true, in general, for semismooth systems of equations.

The implemented version of the algorithm differs from the one described in Section 3 in the use of a nonmonotone linesearch, which can be viewed as an extension of (6). To motivate this variant we first recall that it has often been observed in the field of nonlinear complementarity algorithms that the linesearch test used to enforce global convergence can lead to very small stepsizes; in turn this can lead to very slow convergence and even to a numerical failure of the algorithm. To circumvent this problem many heuristics have been used, see, e.g., [20, 25, 36].
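The evaluation procedure above translates almost line by line into code. The sketch below is our own illustrative implementation (the function name, the finite tolerance used to detect β, and the test harness are assumptions, not the authors' code); it is checked on the one-dimensional problem F(x) = −x of Section 6, for which β = {1} at x = 0 and the choice z = 1 produces H = (√2), one of the two elements ±√2 of ∂_B Φ(0):

```python
import numpy as np

def element_of_dB_Phi(x, F, JF, z=None, tol=1e-12):
    """Build one element H of the B-subdifferential of Phi, where
    Phi_i(x) = sqrt(x_i^2 + F_i(x)^2) - x_i - F_i(x), following Steps 1-4."""
    n = len(x)
    Fx, J = F(x), JF(x)
    beta = [i for i in range(n) if abs(x[i]) < tol and abs(Fx[i]) < tol]  # Step 1
    if z is None:                      # Step 2: the choice used in the paper's code
        z = np.zeros(n)
        z[beta] = 1.0
    H = np.zeros((n, n))
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0
        if i not in beta:              # Step 3: Phi is differentiable in component i
            nrm = np.hypot(x[i], Fx[i])
            H[i] = (x[i] / nrm - 1.0) * e_i + (Fx[i] / nrm - 1.0) * J[i]
        else:                          # Step 4: degenerate index, perturb along z
            dFz = J[i] @ z
            nrm = np.hypot(z[i], dFz)
            H[i] = (z[i] / nrm - 1.0) * e_i + (dFz / nrm - 1.0) * J[i]
    return H

# One-dimensional check: F(x) = -x at the degenerate point x = 0
H = element_of_dB_Phi(np.zeros(1), lambda x: -x, lambda x: -np.eye(1))
assert abs(H[0, 0] - np.sqrt(2.0)) < 1e-12
```

At a non-degenerate point, say x = 1, the same routine falls through to Step 3 and returns the ordinary Jacobian row of Φ, which for this example is again √2 since Φ(x) = √2 |x|.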
Here, instead, we propose to substitute the linesearch test (6) with the following nonmonotone linesearch:

Ψ(x^k + 2^{−i} d^k) ≤ W + σ 2^{−i} ∇Ψ(x^k)^T d^k, (44)

where W is any value satisfying

Ψ(x^k) ≤ W ≤ max_{j=0,1,...,m_k} Ψ(x^{k−j}), (45)

and the m_k are nonnegative integers bounded above for every k (we suppose for simplicity that k is large enough so that no negative superscripts can occur in (45)). This kind of linesearch was first proposed in [17] and has since proved very useful in the unconstrained minimization of smooth functions. Adopting the same proof techniques used in [17], it is easy to see that all the results described in the previous


More information

Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method

Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method Zhaosong Lu November 21, 2015 Abstract We consider the problem of minimizing a Lipschitz dierentiable function over a

More information

y Ray of Half-line or ray through in the direction of y

y Ray of Half-line or ray through in the direction of y Chapter LINEAR COMPLEMENTARITY PROBLEM, ITS GEOMETRY, AND APPLICATIONS. THE LINEAR COMPLEMENTARITY PROBLEM AND ITS GEOMETRY The Linear Complementarity Problem (abbreviated as LCP) is a general problem

More information

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics

More information

Alternative theorems for nonlinear projection equations and applications to generalized complementarity problems

Alternative theorems for nonlinear projection equations and applications to generalized complementarity problems Nonlinear Analysis 46 (001) 853 868 www.elsevier.com/locate/na Alternative theorems for nonlinear projection equations and applications to generalized complementarity problems Yunbin Zhao a;, Defeng Sun

More information

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics

More information

A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems

A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems Xiaoni Chi Guilin University of Electronic Technology School of Math & Comput Science Guilin Guangxi 541004 CHINA chixiaoni@126.com

More information

Rearrangements and polar factorisation of countably degenerate functions G.R. Burton, School of Mathematical Sciences, University of Bath, Claverton D

Rearrangements and polar factorisation of countably degenerate functions G.R. Burton, School of Mathematical Sciences, University of Bath, Claverton D Rearrangements and polar factorisation of countably degenerate functions G.R. Burton, School of Mathematical Sciences, University of Bath, Claverton Down, Bath BA2 7AY, U.K. R.J. Douglas, Isaac Newton

More information

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities

Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities Thiago A. de André Paulo J. S. Silva March 24, 2007 Abstract In this paper, we present

More information

Technische Universität Dresden Institut für Numerische Mathematik. An LP-Newton Method: Nonsmooth Equations, KKT Systems, and Nonisolated Solutions

Technische Universität Dresden Institut für Numerische Mathematik. An LP-Newton Method: Nonsmooth Equations, KKT Systems, and Nonisolated Solutions Als Manuskript gedruckt Technische Universität Dresden Institut für Numerische Mathematik An LP-Newton Method: Nonsmooth Equations, KKT Systems, and Nonisolated Solutions F. Facchinei, A. Fischer, and

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Optimization methods

Optimization methods Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

58 Appendix 1 fundamental inconsistent equation (1) can be obtained as a linear combination of the two equations in (2). This clearly implies that the

58 Appendix 1 fundamental inconsistent equation (1) can be obtained as a linear combination of the two equations in (2). This clearly implies that the Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution

More information

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the

R. Schaback. numerical method is proposed which rst minimizes each f j separately. and then applies a penalty strategy to gradually force the A Multi{Parameter Method for Nonlinear Least{Squares Approximation R Schaback Abstract P For discrete nonlinear least-squares approximation problems f 2 (x)! min for m smooth functions f : IR n! IR a m

More information

1 Introduction The study of the existence of solutions of Variational Inequalities on unbounded domains usually involves the same sufficient assumptio

1 Introduction The study of the existence of solutions of Variational Inequalities on unbounded domains usually involves the same sufficient assumptio Coercivity Conditions and Variational Inequalities Aris Daniilidis Λ and Nicolas Hadjisavvas y Abstract Various coercivity conditions appear in the literature in order to guarantee solutions for the Variational

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained

More information

1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:

1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by: Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion

More information

School of Mathematics, the University of New South Wales, Sydney 2052, Australia

School of Mathematics, the University of New South Wales, Sydney 2052, Australia Computational Optimization and Applications, 13, 201 220 (1999) c 1999 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. On NCP-Functions * DEFENG SUN LIQUN QI School of Mathematics,

More information

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin

1 Outline Part I: Linear Programming (LP) Interior-Point Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin Sensitivity Analysis in LP and SDP Using Interior-Point Methods E. Alper Yldrm School of Operations Research and Industrial Engineering Cornell University Ithaca, NY joint with Michael J. Todd INFORMS

More information

A Regularization Newton Method for Solving Nonlinear Complementarity Problems

A Regularization Newton Method for Solving Nonlinear Complementarity Problems Appl Math Optim 40:315 339 (1999) 1999 Springer-Verlag New York Inc. A Regularization Newton Method for Solving Nonlinear Complementarity Problems D. Sun School of Mathematics, University of New South

More information

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Existence and Uniqueness

Existence and Uniqueness Chapter 3 Existence and Uniqueness An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect

More information

A NOTE ON A GLOBALLY CONVERGENT NEWTON METHOD FOR SOLVING. Patrice MARCOTTE. Jean-Pierre DUSSAULT

A NOTE ON A GLOBALLY CONVERGENT NEWTON METHOD FOR SOLVING. Patrice MARCOTTE. Jean-Pierre DUSSAULT A NOTE ON A GLOBALLY CONVERGENT NEWTON METHOD FOR SOLVING MONOTONE VARIATIONAL INEQUALITIES Patrice MARCOTTE Jean-Pierre DUSSAULT Resume. Il est bien connu que la methode de Newton, lorsqu'appliquee a

More information

On the Convergence of Newton Iterations to Non-Stationary Points Richard H. Byrd Marcelo Marazzi y Jorge Nocedal z April 23, 2001 Report OTC 2001/01 Optimization Technology Center Northwestern University,

More information

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well

More information

A Distributed Newton Method for Network Utility Maximization, II: Convergence

A Distributed Newton Method for Network Utility Maximization, II: Convergence A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

Chapter 1 The Real Numbers

Chapter 1 The Real Numbers Chapter 1 The Real Numbers In a beginning course in calculus, the emphasis is on introducing the techniques of the subject;i.e., differentiation and integration and their applications. An advanced calculus

More information

Lecture 19 Algorithms for VIs KKT Conditions-based Ideas. November 16, 2008

Lecture 19 Algorithms for VIs KKT Conditions-based Ideas. November 16, 2008 Lecture 19 Algorithms for VIs KKT Conditions-based Ideas November 16, 2008 Outline for solution of VIs Algorithms for general VIs Two basic approaches: First approach reformulates (and solves) the KKT

More information

Solutions Chapter 5. The problem of finding the minimum distance from the origin to a line is written as. min 1 2 kxk2. subject to Ax = b.

Solutions Chapter 5. The problem of finding the minimum distance from the origin to a line is written as. min 1 2 kxk2. subject to Ax = b. Solutions Chapter 5 SECTION 5.1 5.1.4 www Throughout this exercise we will use the fact that strong duality holds for convex quadratic problems with linear constraints (cf. Section 3.4). The problem of

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

A Finite Newton Method for Classification Problems

A Finite Newton Method for Classification Problems A Finite Newton Method for Classification Problems O. L. Mangasarian Computer Sciences Department University of Wisconsin 1210 West Dayton Street Madison, WI 53706 olvi@cs.wisc.edu Abstract A fundamental

More information

Polynomial complementarity problems

Polynomial complementarity problems Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016

More information

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow

FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

Example Bases and Basic Feasible Solutions 63 Let q = >: ; > and M = >: ;2 > and consider the LCP (q M). The class of ; ;2 complementary cones

Example Bases and Basic Feasible Solutions 63 Let q = >: ; > and M = >: ;2 > and consider the LCP (q M). The class of ; ;2 complementary cones Chapter 2 THE COMPLEMENTARY PIVOT ALGORITHM AND ITS EXTENSION TO FIXED POINT COMPUTING LCPs of order 2 can be solved by drawing all the complementary cones in the q q 2 - plane as discussed in Chapter.

More information

MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY

MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY PREPRINT ANL/MCS-P6-1196, NOVEMBER 1996 (REVISED AUGUST 1998) MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY SUPERLINEAR CONVERGENCE OF AN INTERIOR-POINT METHOD DESPITE DEPENDENT

More information

12 CHAPTER 1. PRELIMINARIES Lemma 1.3 (Cauchy-Schwarz inequality) Let (; ) be an inner product in < n. Then for all x; y 2 < n we have j(x; y)j (x; x)

12 CHAPTER 1. PRELIMINARIES Lemma 1.3 (Cauchy-Schwarz inequality) Let (; ) be an inner product in < n. Then for all x; y 2 < n we have j(x; y)j (x; x) 1.4. INNER PRODUCTS,VECTOR NORMS, AND MATRIX NORMS 11 The estimate ^ is unbiased, but E(^ 2 ) = n?1 n 2 and is thus biased. An unbiased estimate is ^ 2 = 1 (x i? ^) 2 : n? 1 In x?? we show that the linear

More information

Max-Min Problems in R n Matrix

Max-Min Problems in R n Matrix Max-Min Problems in R n Matrix 1 and the essian Prerequisite: Section 6., Orthogonal Diagonalization n this section, we study the problem of nding local maxima and minima for realvalued functions on R

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

A Smoothing Newton Method for Solving Absolute Value Equations

A Smoothing Newton Method for Solving Absolute Value Equations A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,

More information

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS

CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS Igor V. Konnov Department of Applied Mathematics, Kazan University Kazan 420008, Russia Preprint, March 2002 ISBN 951-42-6687-0 AMS classification:

More information

The nonsmooth Newton method on Riemannian manifolds

The nonsmooth Newton method on Riemannian manifolds The nonsmooth Newton method on Riemannian manifolds C. Lageman, U. Helmke, J.H. Manton 1 Introduction Solving nonlinear equations in Euclidean space is a frequently occurring problem in optimization and

More information

Examination paper for TMA4180 Optimization I

Examination paper for TMA4180 Optimization I Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted

More information

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0 Numerical Analysis 1 1. Nonlinear Equations This lecture note excerpted parts from Michael Heath and Max Gunzburger. Given function f, we seek value x for which where f : D R n R n is nonlinear. f(x) =

More information

Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications

Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications Uniqueness of Generalized Equilibrium for Box Constrained Problems and Applications Alp Simsek Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Asuman E.

More information

Semismooth Support Vector Machines

Semismooth Support Vector Machines Semismooth Support Vector Machines Michael C. Ferris Todd S. Munson November 29, 2000 Abstract The linear support vector machine can be posed as a quadratic program in a variety of ways. In this paper,

More information

Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones

Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones to appear in Linear and Nonlinear Analysis, 2014 Semismooth Newton methods for the cone spectrum of linear transformations relative to Lorentz cones Jein-Shan Chen 1 Department of Mathematics National

More information

A continuation method for nonlinear complementarity problems over symmetric cone

A continuation method for nonlinear complementarity problems over symmetric cone A continuation method for nonlinear complementarity problems over symmetric cone CHEK BENG CHUA AND PENG YI Abstract. In this paper, we introduce a new P -type condition for nonlinear functions defined

More information

Zangwill s Global Convergence Theorem

Zangwill s Global Convergence Theorem Zangwill s Global Convergence Theorem A theory of global convergence has been given by Zangwill 1. This theory involves the notion of a set-valued mapping, or point-to-set mapping. Definition 1.1 Given

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem

Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem Lecture 4 - The Gradient Method Objective: find an optimal solution of the problem min{f (x) : x R n }. The iterative algorithms that we will consider are of the form x k+1 = x k + t k d k, k = 0, 1,...

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

Optimization Methods. Lecture 18: Optimality Conditions and. Gradient Methods. for Unconstrained Optimization

Optimization Methods. Lecture 18: Optimality Conditions and. Gradient Methods. for Unconstrained Optimization 5.93 Optimization Methods Lecture 8: Optimality Conditions and Gradient Methods for Unconstrained Optimization Outline. Necessary and sucient optimality conditions Slide. Gradient m e t h o d s 3. The

More information

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f 1 Bernstein's analytic continuation of complex powers c1995, Paul Garrett, garrettmath.umn.edu version January 27, 1998 Analytic continuation of distributions Statement of the theorems on analytic continuation

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 18, No. 1, pp. 106 13 c 007 Society for Industrial and Applied Mathematics APPROXIMATE GAUSS NEWTON METHODS FOR NONLINEAR LEAST SQUARES PROBLEMS S. GRATTON, A. S. LAWLESS, AND N. K.

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

5 Overview of algorithms for unconstrained optimization

5 Overview of algorithms for unconstrained optimization IOE 59: NLP, Winter 22 c Marina A. Epelman 9 5 Overview of algorithms for unconstrained optimization 5. General optimization algorithm Recall: we are attempting to solve the problem (P) min f(x) s.t. x

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization

Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä New Proximal Bundle Method for Nonsmooth DC Optimization TUCS Technical Report No 1130, February 2015 New Proximal Bundle Method for Nonsmooth

More information