
Smoothed Fischer-Burmeister Equation Methods for the Complementarity Problem

Houyuan Jiang
CSIRO Mathematical and Information Sciences
GPO Box 664, Canberra, ACT 2601, Australia
Houyuan.Jiang@cmis.csiro.au

(This work was carried out initially at The University of Melbourne and was supported by the Australian Research Council.)

Abstract: By introducing another variable and an additional equation, we describe a technique to reformulate the nonlinear complementarity problem as a square system of equations. Some useful properties of this new reformulation are explored. These properties show that the new reformulation compares favourably with pure nonsmooth equation reformulations and with smoothing reformulations, because it combines advantages of both nonsmooth equation based methods and smoothing methods. A damped generalized Newton method is proposed for solving the reformulated equation. Global and local superlinear convergence can be established under mild assumptions. Numerical results are reported for a set of standard test problems from the library MCPLIB.

AMS (MOS) Subject Classifications. 90C33, 65K10, 49M15.

Key Words. Nonlinear complementarity problem, Fischer-Burmeister functional, semismooth equation, Newton method, global convergence, superlinear convergence.

1 Introduction

We are concerned with the solution of the nonlinear complementarity problem (NCP) [35]. Let $F : \mathbb{R}^n \to \mathbb{R}^n$ be continuously differentiable. Then the NCP is to find a vector $x \in \mathbb{R}^n$ such that
$$x \ge 0, \qquad F(x) \ge 0, \qquad F(x)^T x = 0. \qquad (1)$$

Reformulating the NCP as a constrained or unconstrained smooth optimization problem, or as a constrained or unconstrained system of smooth or nonsmooth equations, has been a popular strategy in the last decade. Based on these reformulations, many algorithms such as merit function methods, smooth or nonsmooth equation methods, smoothing methods, and interior point methods have been proposed. In almost all of these methods, one tries to apply techniques from traditional nonlinear programming or from systems of smooth equations to the reformulated problem.

Different descent methods have been developed for the NCP by solving the system of nonsmooth equations obtained by means of the Fischer-Burmeister functional [18]; see for example [10, 16, 17, 19, 25, 26, 27, 37, 42, 45]. In particular, global convergence of the damped generalized Newton method and the damped modified Gauss-Newton method for the Fischer-Burmeister reformulation of the NCP has been established in [25].

A number of researchers have proposed and studied different smoothing methods. We refer the reader to [1, 2, 3, 4, 5, 6, 7, 14, 15, 20, 21, 23, 29, 30, 31, 32, 33, 41, 43, 44] and references therein. The main feature of smoothing methods is to reformulate the NCP as a system of nonsmooth equations, and then to approximate this system by a sequence of systems of smooth equations obtained by introducing one or more parameters. Newton-type methods are applied to these smooth equations. Under certain assumptions, the solutions of the smooth systems converge to a solution of the NCP when the parameters are controlled appropriately. It seems that a great deal of effort is usually needed to establish global convergence of smoothing methods. The introduction of parameters results in underdetermined systems of equations, which, from our viewpoint, may be the reason that the global convergence analysis becomes complicated.

The use of smoothing methods based on the Fischer-Burmeister functional starts from Kanzow [29] for the linear complementarity problem. It has now become one of the main smoothing tools for solving the NCP and related problems. In particular, Kanzow [30] and Xu [44] have proved global as well as local superlinear convergence of their respective smoothing methods for the NCP with uniform $P$-functions. Burke and Xu [1] proved global linear convergence of their smoothing method for the linear complementarity problem with both the $P_0$-matrix and $S_0$-matrix properties. Global convergence and local fast convergence analysis is usually complicated because some techniques are required in order to drive the smoothing parameter to zero. This feature seems to be shared by the other smoothing methods mentioned in the last paragraph.

Motivated by the above points, we shall introduce a technique to approximate the system of nonsmooth equations by a square system of smooth equations. This is accomplished by introducing a new parameter and a new equation. The solvability of the generalized Newton equation of this system can be guaranteed under very mild conditions. Since the reformulated system still gives rise to a smooth merit function, it turns out that global convergence of the generalized Newton method can be established by following the standard analysis with some minor modifications. Moreover, the damped modified Gauss-Newton method for smooth equations can be extended to our system of nonsmooth equations without difficulty. We use the Fischer-Burmeister functional [18] to demonstrate the new technique, though it may be adapted to other smoothing methods.

In Section 2, the NCP is reformulated as a square system of equations by introducing a parameter and an additional equation and by using the Fischer-Burmeister functional. We then study various properties of this reformulation, including semismoothness of the new system, equivalence between the new system and the NCP, and differentiability of the least squares merit function of the new system. Section 3 is devoted to sufficient conditions that ensure, respectively, nonsingularity of the generalized Newton equations, that a stationary point of the least squares merit function is a solution of the NCP, and boundedness of the level sets associated with the least squares merit function. In Section 4, we propose a damped generalized Newton method for solving the new system. Its global and local superlinear convergence can be established under mild conditions. Numerical results are reported for a set of test problems from the library MCPLIB.
We conclude the paper by offering some remarks in the last section.

The following notation is used throughout the paper. For vectors $x, y \in \mathbb{R}^n$, $x^T$ is the transpose of $x$, so that $x^T y$ is the inner product of $x$ and $y$, and $\|x\|$ denotes the Euclidean norm of $x$.

For a given matrix $M = (m_{ij}) \in \mathbb{R}^{n \times n}$ and index sets $I, J \subseteq \{1, \ldots, n\}$, $M_{IJ}$ denotes the submatrix of $M$ with rows indexed by $I$ and columns indexed by $J$. For a continuously differentiable functional $f : \mathbb{R}^n \to \mathbb{R}$, its gradient at $x$ is denoted by $\nabla f(x)$. If the function $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable at $x$, then $F'(x)$ denotes its Jacobian at $x$. If $F : \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous at $x$, $\partial F(x)$ indicates its Clarke generalized Jacobian at $x$ [8]. The notation (A) $\Longleftrightarrow$ (B) means that the statements (A) and (B) are equivalent.

2 Reformulations and Equivalence

In order to reformulate the NCP (1), let us recall two basic functions. The first one is now known as the Fischer-Burmeister functional [18], defined by $\varphi : \mathbb{R}^2 \to \mathbb{R}$,
$$\varphi(b, c) := \sqrt{b^2 + c^2} - (b + c).$$
The second one, denoted by $\psi : \mathbb{R}^3 \to \mathbb{R}$, is a modification of $\varphi$, or a variation of its counterpart in $\mathbb{R}^3$. More precisely, $\psi : \mathbb{R}^3 \to \mathbb{R}$ is defined by
$$\psi(a, b, c) := \sqrt{a^2 + b^2 + c^2} - (b + c).$$
The function $\psi$ was introduced by Kanzow [29] to study linear complementarity problems, where $a$ is treated as a parameter rather than an independent variable.

Using these two functionals, we define two functions associated with the NCP as follows. For any given $x \in \mathbb{R}^n$ and $\mu \in \mathbb{R}$, define $H : \mathbb{R}^n \to \mathbb{R}^n$ by
$$H(x) := \begin{pmatrix} \varphi(x_1, F_1(x)) \\ \vdots \\ \varphi(x_n, F_n(x)) \end{pmatrix}$$
and $G : \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$, $\tilde G : \mathbb{R}^{n+1} \to \mathbb{R}^n$ by
$$G(\mu, x) := \begin{pmatrix} e^\mu - 1 \\ \psi(\mu, x_1, F_1(x)) \\ \vdots \\ \psi(\mu, x_n, F_n(x)) \end{pmatrix} =: \begin{pmatrix} e^\mu - 1 \\ \tilde G(\mu, x) \end{pmatrix},$$
where $e$ is the Euler constant (the base of the natural logarithm). Consequently, we may define two systems of equations:
$$H(x) = 0 \qquad (2)$$
and
$$G(\mu, x) = 0. \qquad (3)$$
Note that the first system has been extensively studied for the NCP (see for example [10, 16, 17, 19, 25, 26, 27, 37, 42, 45] and the references therein). If the first equation is removed from the second system, it reduces to the system introduced by Kanzow [29] for proposing smoothing or continuation methods for the LCP. Thereafter, this smoothing technique has been used for solving other related problems (see for example [1, 15, 20, 23, 29, 30, 31, 44]). The novelty of this paper is the introduction of the first equation, which makes (3) a square system. As will be seen later, this new feature overcomes some difficulties encountered by the generalized Newton-type methods based on the system (2), and facilitates the analysis of global convergence, which is, from our point of view, usually complicated in the smoothing methods.
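To make the reformulation concrete, the following minimal NumPy sketch evaluates $\varphi$, $\psi$, the square system $G$ and the least squares merit function $\Psi(\mu, x) = \frac{1}{2}\|G(\mu, x)\|^2$ introduced below; the linear map $F$ used at the end is a hypothetical example, not one of the paper's test problems.

```python
# Minimal sketch of the Fischer-Burmeister functional, its smoothed variant,
# and the square system G(mu, x) of equation (3).  The linear F below is a
# hypothetical example for illustration only.
import numpy as np

def phi(b, c):
    """Fischer-Burmeister functional: phi(b, c) = sqrt(b^2 + c^2) - (b + c)."""
    return np.sqrt(b**2 + c**2) - (b + c)

def psi(a, b, c):
    """Smoothed variant: psi(a, b, c) = sqrt(a^2 + b^2 + c^2) - (b + c)."""
    return np.sqrt(a**2 + b**2 + c**2) - (b + c)

def G(mu, x, F):
    """G(mu, x) = (e^mu - 1, psi(mu, x_i, F_i(x)) for i = 1..n); this vector
    is zero exactly when mu = 0 and x solves the NCP."""
    Fx = F(x)
    return np.concatenate(([np.exp(mu) - 1.0], psi(mu, x, Fx)))

def Psi(mu, x, F):
    """Least squares merit function Psi(mu, x) = 0.5 * ||G(mu, x)||^2."""
    g = G(mu, x, F)
    return 0.5 * g @ g

# Hypothetical NCP map F(x) = Mx + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
print(Psi(1.0, np.zeros(2), F))   # merit value at (mu, x) = (1, 0)
```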

Some nice properties of the methods based on the system (2) can also be established for the analogous methods based on (3). Moreover, our analysis is much closer to the spirit of the classical Newton method than that of smoothing methods. The global convergence analysis of the generalized Newton and the modified Gauss-Newton methods for the system (2) has been carried out in [25]. In the sequel, the second system will be the main one under consideration, although some connections and differences between (2) and (3) are explored. One may define other functions which can play the same role as $e^\mu - 1$. For simplicity of analysis, we use this special function in the sequel; see the discussion in Section 6 for more details on how to define such functions.

The least squares functions of $H$ and $G$ are denoted by $\theta$ and $\Psi$, namely,
$$\theta(x) := \tfrac{1}{2}\|H(x)\|^2, \qquad \Psi(\mu, x) := \tfrac{1}{2}\|G(\mu, x)\|^2.$$
$\theta$ and $\Psi$ are usually called merit functions. The definitions of the functions $H$ and $G$ depend heavily on the functionals $\varphi$ and $\psi$, respectively. Certainly, the study of some fundamental properties of $\varphi$ and $\psi$ will help to gain more insight into the functions $H$ and $G$.

Let $E : \mathbb{R}^n \to \mathbb{R}^n$ be locally Lipschitz continuous at $x \in \mathbb{R}^n$. Then the Clarke generalized Jacobian $\partial E(x)$ of $E$ at $x$ is well defined and can be characterized as the convex hull of the set
$$\{\lim_{x^k \to x} E'(x^k) \mid E \text{ is differentiable at } x^k \in \mathbb{R}^n\}.$$
$\partial E(x)$ is a nonempty, convex and compact set for any fixed $x$ [8]. $E$ is said to be semismooth at $x \in \mathbb{R}^n$ if it is directionally differentiable at $x$, i.e., $E'(x; d)$ exists for any $d \in \mathbb{R}^n$, and if
$$V d - E'(x; d) = o(\|d\|) \quad \text{for any } d \to 0 \text{ and } V \in \partial E(x + d).$$
$E$ is said to be strongly semismooth at $x$ if it is semismooth at $x$ and
$$V d - E'(x; d) = O(\|d\|^2).$$
See [39, 36, 19] for other characterizations and for the differential calculus of semismoothness and strong semismoothness.

We now present some properties of $\psi$, $G$ and $\Psi$. Note that similar properties of $\varphi$, $H$ and $\theta$ have been studied in [10, 17, 18, 22, 27, 28].

Lemma 2.1 (i) When $a = 0$, $\psi(a, b, c) = 0$ if and only if $b \ge 0$, $c \ge 0$ and $bc = 0$.
(ii) $\psi$ is locally Lipschitz, directionally differentiable and strongly semismooth on $\mathbb{R}^3$. Furthermore, if $a^2 + b^2 + c^2 > 0$, then $\psi$ is continuously differentiable at $(a, b, c) \in \mathbb{R}^3$; namely, $\psi$ is continuously differentiable except at $(0, 0, 0)$. The generalized Jacobian of $\psi$ at $(0, 0, 0)$ is
$$\partial\psi(0, 0, 0) = O := \left\{ (\alpha, \beta, \gamma)^T \;\middle|\; \alpha^2 + (\beta + 1)^2 + (\gamma + 1)^2 \le 1 \right\}.$$

(iii) $\psi^2$ is smooth on $\mathbb{R}^3$. The gradient of $\psi^2$ at $(a, b, c) \in \mathbb{R}^3$ is
$$\nabla\psi^2(a, b, c) = 2\,\psi(a, b, c)\,\partial\psi(a, b, c).$$
(iv) $\partial_b\psi(a, b, c)\,\partial_c\psi(a, b, c) \ge 0$ for any $(a, b, c) \in \mathbb{R}^3$, in the sense that $\beta\gamma \ge 0$ for any $(\alpha, \beta, \gamma)^T \in \partial\psi(a, b, c)$. If $\psi(0, b, c) \ne 0$, then $\partial_b\psi(0, b, c)\,\partial_c\psi(0, b, c) > 0$.
(v) The following statements are equivalent: $\psi^2(0, b, c) = 0$; $\partial_b\psi^2(0, b, c) = 0$; $\partial_c\psi^2(0, b, c) = 0$; $\partial_b\psi^2(0, b, c) = \partial_c\psi^2(0, b, c) = 0$.

Proof. (i) Note that $\psi(0, b, c) = \varphi(b, c)$. The result can be verified easily.

(ii) Note that $\sqrt{a^2 + b^2 + c^2}$ is the Euclidean norm of the vector $(a, b, c)^T$. Hence $\sqrt{a^2 + b^2 + c^2}$ is locally Lipschitz, directionally differentiable and strongly semismooth on $\mathbb{R}^3$. The function $-(b + c)$ is continuously differentiable on $\mathbb{R}^3$, hence locally Lipschitz, directionally differentiable and strongly semismooth on $\mathbb{R}^3$. Fischer [19] has proved that the composition of strongly semismooth functions is still strongly semismooth. Therefore, $\psi$ is locally Lipschitz, directionally differentiable and strongly semismooth on $\mathbb{R}^3$. If $a^2 + b^2 + c^2 > 0$, then $\sqrt{a^2 + b^2 + c^2}$ is continuously differentiable at $(a, b, c)$, and so is $\psi$.

Let $d \in \mathbb{R}^3$ with $d \ne 0$. Then $\psi$ is continuously differentiable at $td$ for any $t > 0$, and
$$\nabla\psi(td) = \left( \frac{d_1}{\sqrt{d_1^2 + d_2^2 + d_3^2}},\; \frac{d_2}{\sqrt{d_1^2 + d_2^2 + d_3^2}} - 1,\; \frac{d_3}{\sqrt{d_1^2 + d_2^2 + d_3^2}} - 1 \right)^T.$$
For simplicity, denote $\nabla\psi(td)$ by $(\alpha, \beta, \gamma)^T$. Clearly,
$$\alpha^2 + (\beta + 1)^2 + (\gamma + 1)^2 = 1.$$
Let $t$ tend to zero. By the semicontinuity property of the Clarke generalized Jacobian, we obtain that $(\alpha, \beta, \gamma)^T \in \partial\psi(0, 0, 0)$. It follows from the convexity of the generalized Jacobian that $O \subseteq \partial\psi(0, 0, 0)$. On the other hand, for any $(a, b, c) \ne 0$,
$$(\nabla_a\psi(a, b, c))^2 + (\nabla_b\psi(a, b, c) + 1)^2 + (\nabla_c\psi(a, b, c) + 1)^2 = 1.$$
By the definition of the Clarke generalized Jacobian, one may conclude that $\partial\psi(0, 0, 0) \subseteq O$. This shows $\partial\psi(0, 0, 0) = O$.

(iii) Since $\psi$ is smooth everywhere on $\mathbb{R}^3$ except at $(0, 0, 0)$, the origin is the only point at which $\psi^2$ could fail to be smooth. But it is easy to prove that $\psi^2$ is also smooth at $(0, 0, 0)$. Therefore, $\psi^2$ is smooth on $\mathbb{R}^3$, and
$$\nabla\psi^2(a, b, c) = 2\,\psi(a, b, c)\,\partial\psi(a, b, c).$$
Note that $2\,\psi(0, 0, 0)\,\partial\psi(0, 0, 0) = \{0\}$ is a singleton although $\partial\psi(0, 0, 0) = O$ is a set.

(iv) By (ii), for any $(a, b, c) \in \mathbb{R}^3$ and any $(\alpha, \beta, \gamma)^T \in \partial\psi(a, b, c)$, we have
$$\alpha^2 + (\beta + 1)^2 + (\gamma + 1)^2 \le 1.$$
This shows that $\beta\gamma \ge 0$.

Suppose $\psi(0, b, c) \ne 0$. Then either $\min\{b, c\} < 0$ or $bc \ne 0$. In both cases, (ii) implies that $\partial_b\psi(0, b, c) \ne 0$ and $\partial_c\psi(0, b, c) \ne 0$. Consequently, $\partial_b\psi(0, b, c)\,\partial_c\psi(0, b, c) > 0$.

(v) Clearly, if $\psi^2(0, b, c) = 0$, then (iii) implies all the other statements. If $\partial_b\psi^2(0, b, c) = 0$ or $\partial_c\psi^2(0, b, c) = 0$, then we must have $\psi^2(0, b, c) = 0$; if this were not so, (iv) would imply $\partial_b\psi(0, b, c)\,\partial_c\psi(0, b, c) > 0$, which is a contradiction. The proof is complete. $\Box$

Proposition 2.1 (i) If $(\mu, x)$ is a solution of (3), then $\mu = 0$. Moreover, $x$ is a solution of the NCP if and only if $(0, x)$ is a solution of (3), i.e. $G(0, x) = 0$.
(ii) $G$ is continuously differentiable at $(\mu, x)$ when $\mu \ne 0$ and $F$ is continuously differentiable at $x$. $G$ is semismooth on $\mathbb{R}^{n+1}$ if $F$ is continuously differentiable on $\mathbb{R}^n$, and $G$ is strongly semismooth on $\mathbb{R}^{n+1}$ if $F'(x)$ is Lipschitz continuous on $\mathbb{R}^n$. If $V \in \partial G(\mu, x)$, then $V$ has the following form:
$$V = \begin{pmatrix} e^\mu & 0 \\ C & DF'(x) + E \end{pmatrix},$$
where $C \in \mathbb{R}^n$, and $D$ and $E$ are diagonal matrices in $\mathbb{R}^{n \times n}$ satisfying
$$C_i = \frac{\mu}{\sqrt{\mu^2 + x_i^2 + F_i(x)^2}}, \qquad D_{ii} = \frac{F_i(x)}{\sqrt{\mu^2 + x_i^2 + F_i(x)^2}} - 1, \qquad E_{ii} = \frac{x_i}{\sqrt{\mu^2 + x_i^2 + F_i(x)^2}} - 1,$$
if $\mu^2 + x_i^2 + F_i(x)^2 > 0$, and
$$C_i = \alpha_i, \quad D_{ii} = \beta_i, \quad E_{ii} = \gamma_i, \quad \text{with } \alpha_i^2 + (\beta_i + 1)^2 + (\gamma_i + 1)^2 \le 1,$$
if $\mu^2 + x_i^2 + F_i(x)^2 = 0$.
(iii) $\Psi(\mu, x) \ge 0$ for any $(\mu, x) \in \mathbb{R}^{n+1}$. Moreover, when the NCP has a solution, $x$ is a solution of the NCP if and only if $(0, x)$ is a global minimizer of $\Psi$ over $\mathbb{R}^{n+1}$.
(iv) $\Psi$ is continuously differentiable on $\mathbb{R}^{n+1}$. The gradient of $\Psi$ at $(\mu, x)$ is
$$\nabla\Psi(\mu, x) = V^T G(\mu, x) = \begin{pmatrix} e^\mu(e^\mu - 1) + C^T \tilde G(\mu, x) \\ F'(x)^T D\,\tilde G(\mu, x) + E\,\tilde G(\mu, x) \end{pmatrix}$$
for any $V \in \partial G(\mu, x)$.
(v) In the notation of (iv), for any $\mu$ and $x$,
$$(D\,\tilde G(\mu, x))_i\,(E\,\tilde G(\mu, x))_i \ge 0, \qquad 1 \le i \le n.$$
If $\tilde G_i(0, x) \ne 0$, then $(D\,\tilde G(0, x))_i\,(E\,\tilde G(0, x))_i > 0$.

(vi) The following four statements are equivalent:
$$\tilde G_i(0, x) = 0; \qquad (D\,\tilde G(0, x))_i = 0; \qquad (E\,\tilde G(0, x))_i = 0; \qquad (D\,\tilde G(0, x))_i = (E\,\tilde G(0, x))_i = 0.$$

Proof. (i) If $G(\mu, x) = 0$, then $e^\mu - 1 = 0$, i.e., $\mu = 0$. The rest follows from (i) of Lemma 2.1.

(ii) When $\mu \ne 0$ and $F$ is continuously differentiable at $x$, $\psi(\mu, x_i, F_i(x))$ is continuously differentiable at $(\mu, x)$ for $1 \le i \le n$. Hence $G$ is continuously differentiable at $(\mu, x)$. Note that the composition of two semismooth (respectively, strongly semismooth) functions is semismooth (respectively, strongly semismooth); see [19]. Since $\psi$ is strongly semismooth on $\mathbb{R}^3$ by (ii) of Lemma 2.1, semismoothness or strong semismoothness of $G$ follows if $F$ is smooth at $x$ or if $F'$ is Lipschitz continuous at $x$, respectively. The form of an element $V \in \partial G(\mu, x)$ follows from the Chain Rule Theorem of [8] and the form of the generalized Jacobian of $\psi$ given in (ii) of Lemma 2.1. It should be pointed out that we only manage to give an outer estimate of $\partial G(\mu, x)$. Nevertheless, this outer estimate will be enough for the following analysis.

(iii) Trivially, $\Psi(\mu, x) \ge 0$ for any $(\mu, x)$. If $x$ is a solution of the NCP, (i) shows that $G(0, x) = 0$, i.e., $(0, x)$ is a global minimizer of $\Psi$. Conversely, if the NCP has a solution, then the global minimum of $\Psi$ is zero. If, in addition, $(0, x)$ is a global minimizer of $\Psi$, then $\Psi(0, x) = 0$ and $G(0, x) = 0$. The desired result follows from (i) again.

(iv) $\Psi$ can be rewritten as
$$\Psi(\mu, x) = \tfrac{1}{2}(e^\mu - 1)^2 + \tfrac{1}{2}\sum_{i=1}^n \psi(\mu, x_i, F_i(x))^2.$$
The smoothness of $\Psi$ over $\mathbb{R}^{n+1}$ follows from the smoothness of $F$ and $\psi^2$. The form of $\nabla\Psi$ follows from the Chain Rule Theorem and the smoothness of $\Psi$.

(v) and (vi) The proof is analogous to that of (iv) and (v) of Lemma 2.1 and is omitted. $\Box$

Remark. Let $W$ denote the set of all matrices $DF'(x) + E$ such that there exists a vector $C$ which makes the matrix
$$\begin{pmatrix} e^\mu & 0 \\ C & DF'(x) + E \end{pmatrix}$$
an element of $\partial G(\mu, x)$. On the one hand, any element of $\partial H(x)$ is very much like an element of $W$. Because of this similarity, some standard analysis can be extended to $\partial G(\mu, x)$, as we shall see in the next section. On the other hand, we must be aware that $\partial H(x)$ and $W$ are not the same in general; see [8] for more details. Therefore, some extra care needs to be taken when claiming that certain techniques can be extended from $\partial H(x)$ to $W$.

The results below reveal that $\psi$, $\tilde G$ and $\Psi$ reduce to $\varphi$, $H$ and $\theta$ when $\mu = 0$. Further relationships between them can be explored, but we do not proceed here.

Lemma 2.2 (i) $\psi(0, b, c) = \varphi(b, c)$ for any $b, c \in \mathbb{R}$.
(ii) $\tilde G(0, x) = H(x)$ for any $x \in \mathbb{R}^n$.
(iii) $\Psi(0, x) = \theta(x)$ for any $x \in \mathbb{R}^n$.

3 Basic Properties

In this section, some basic properties of the functions $G$ and $\Psi$ are investigated. These properties include nonsingularity of the generalized Jacobian of $G$, sufficient conditions for a stationary point of $\Psi$ to be a solution of the NCP, and boundedness of the level sets of the merit function.

In the context of nonlinear complementarity, the notions of monotone matrices, monotone functions and other related concepts play important roles. We review some of them in the following. A matrix $M \in \mathbb{R}^{n \times n}$ is called a $P$-matrix ($P_0$-matrix) if each of its principal minors is positive (nonnegative). A function $F : \mathbb{R}^n \to \mathbb{R}^n$ is said to be a $P_0$-function over the open set $S \subseteq \mathbb{R}^n$ if for any $x, y \in S$ with $x \ne y$, there exists $i$ such that $x_i \ne y_i$ and
$$(x_i - y_i)(F_i(x) - F_i(y)) \ge 0.$$
$F$ is a uniform $P$-function over $S$ if there exists a positive constant $\kappa$ such that for any $x, y \in S$,
$$\max_{1 \le i \le n} (x_i - y_i)(F_i(x) - F_i(y)) \ge \kappa \|x - y\|^2.$$
Obviously, a $P$-matrix must be a $P_0$-matrix, and a uniform $P$-function must be a $P_0$-function. It is well known that the Jacobian of a $P_0$-function is always a $P_0$-matrix and the Jacobian of a uniform $P$-function is a $P$-matrix (see [9, 34]). The following characterization of a $P_0$-matrix can be found in [9].

Lemma 3.1 A matrix $M \in \mathbb{R}^{n \times n}$ is a $P_0$-matrix if and only if for every nonzero $x \in \mathbb{R}^n$ there exists an index $i$ ($1 \le i \le n$) such that $x_i \ne 0$ and $x_i (Mx)_i \ge 0$.
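As an illustration of these definitions (not part of the paper's method), the following brute-force sketch checks the $P$- and $P_0$-matrix properties of a small matrix by enumerating its principal minors; it is practical only for small $n$.

```python
# Brute-force check of the P-matrix / P0-matrix definitions: all principal
# minors positive / nonnegative.  Intended only as a small illustration.
import numpy as np
from itertools import combinations

def principal_minors(M):
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            yield np.linalg.det(M[np.ix_(idx, idx)])

def is_P_matrix(M, tol=1e-12):
    return all(m > tol for m in principal_minors(M))

def is_P0_matrix(M, tol=1e-12):
    return all(m > -tol for m in principal_minors(M))

print(is_P_matrix(np.array([[1.0, 2.0], [0.0, 1.0]])))    # True
print(is_P0_matrix(np.array([[0.0, 1.0], [-1.0, 0.0]])))  # True (skew-symmetric)
print(is_P_matrix(np.array([[-1.0, 0.0], [0.0, 1.0]])))   # False
```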

To guarantee nonsingularity of the generalized Jacobian of $G$ at a solution of (3), the R-regularity introduced by Robinson [40] will be shown to be a sufficient condition. Suppose $x^*$ is a solution of the NCP (1). Define three index sets
$$I := \{1 \le i \le n \mid x_i^* > 0 = F_i(x^*)\}, \quad J := \{1 \le i \le n \mid x_i^* = 0 = F_i(x^*)\}, \quad K := \{1 \le i \le n \mid x_i^* = 0 < F_i(x^*)\}.$$
The NCP is said to be R-regular at $x^*$ if the submatrix $F'(x^*)_{II}$ of $F'(x^*)$ is nonsingular and the Schur complement
$$F'(x^*)_{JJ} - F'(x^*)_{JI} F'(x^*)_{II}^{-1} F'(x^*)_{IJ}$$
is a $P$-matrix.

Proposition 3.1 (i) If $\mu \ne 0$ and $F'(x)$ is a $P_0$-matrix, then $V$ is nonsingular for any $V \in \partial G(\mu, x)$.
(ii) If $F'(x)$ is a $P$-matrix, then $V$ is nonsingular for any $V \in \partial G(\mu, x)$.
(iii) If $\mu = 0$ and the NCP is R-regular at $x^*$, then $V$ is nonsingular for any $V \in \partial G(0, x^*)$.

Proof. From the definition of the generalized Jacobian of $G(\mu, x)$, it follows that for any $V \in \partial G(\mu, x)$, $V$ is nonsingular if and only if the following submatrix of $V$ is nonsingular:
$$DF'(x) + E.$$
(i) If $\mu \ne 0$, then both $-D$ and $-E$ are positive definite diagonal matrices. The nonsingularity of $DF'(x) + E$ is equivalent to the nonsingularity of the matrix $F'(x) + D^{-1}E$, where $D^{-1}E$ is a positive definite diagonal matrix. It follows that $F'(x) + D^{-1}E$ is a $P$-matrix, hence nonsingular, if $F'(x)$ is a $P_0$-matrix.
(ii) If $F'(x)$ is a $P$-matrix, then, as remarked after Proposition 2.1, the technique for proving nonsingularity of the matrix $DF'(x) + E$ is quite standard. We omit the details here and refer the reader to [27] for a proof.
(iii) If $\mu = 0$ and the NCP is R-regular at $x^*$, the techniques for proving nonsingularity of $DF'(x^*) + E$ are also standard; see for example [17]. Therefore, nonsingularity of the elements of $\partial G(0, x^*)$ follows from nonsingularity of $DF'(x^*) + E$. $\Box$

The next result provides a sufficient condition under which a stationary point of the least squares merit function yields a solution of the NCP.

Proposition 3.2 If $(\mu, x)$ is a stationary point of $\Psi$ and $F'(x)$ is a $P_0$-matrix, then $\mu = 0$ and $x$ is a solution of the NCP.

Proof. Suppose $(\mu, x)$ is a stationary point of $\Psi$, i.e., $\nabla\Psi(\mu, x) = 0$. By (iv) of Proposition 2.1, $\nabla\Psi(\mu, x) = V^T G(\mu, x) = 0$ for any $V \in \partial G(\mu, x)$. We first prove that $\mu = 0$. Assume, on the contrary, that $\mu \ne 0$. Then $V$ is nonsingular by Proposition 3.1. This shows that $G(\mu, x) = 0$, which implies $\mu = 0$, a contradiction. Therefore, $\mu = 0$. In this case, $V^T G(0, x) = 0$ implies that
$$F'(x)^T D\,\tilde G(0, x) + E\,\tilde G(0, x) = 0,$$
and hence, for each index $i$,
$$(D\,\tilde G(0, x))_i\,(F'(x)^T D\,\tilde G(0, x))_i + (D\,\tilde G(0, x))_i\,(E\,\tilde G(0, x))_i = 0.$$
Suppose $\tilde G_i(0, x) \ne 0$ for some index $i$. By (v) and (vi) of Proposition 2.1,
$$(D\,\tilde G(0, x))_i\,(F'(x)^T D\,\tilde G(0, x))_i < 0$$
for any index $i$ such that $\tilde G_i(0, x) \ne 0$. By Lemma 3.1, $F'(x)^T$ and $F'(x)$ are then not $P_0$-matrices. This is a contradiction. Therefore,
$$\tilde G(0, x) = 0,$$
which, together with $\mu = 0$, shows that $G(\mu, x) = 0$. The desired result follows from (i) of Proposition 2.1. $\Box$

Lemma 3.2 If $F$ is a uniform $P$-function on $\mathbb{R}^n$ and $\{x^k\}$ is an unbounded sequence, then there exists $i$ ($1 \le i \le n$) such that both sequences $\{x_i^k\}$ and $\{F_i(x^k)\}$ are unbounded.

Proof. See the proof of Proposition 4.2 of Jiang and Qi [27]. $\Box$

Lemma 3.3 Suppose that $\{(a^k, b^k, c^k)\}$ is a sequence such that $\{a^k\}$ is bounded while $\{b^k\}$ and $\{c^k\}$ are unbounded. Then $\{\psi(a^k, b^k, c^k)\}$ is unbounded.

Proof. Without loss of generality, we may assume that $|b^k| \to \infty$ and $|c^k| \to \infty$ as $k$ tends to infinity. By the definition of $\psi$, it is clear that $\psi(a^k, b^k, c^k) \to +\infty$ if either $b^k$ or $c^k$ tends to $-\infty$. Now assume that $b^k \to +\infty$ and $c^k \to +\infty$. Then, for sufficiently large $k$,
$$|\psi(a^k, b^k, c^k)| = \frac{-(a^k)^2 + 2 b^k c^k}{\sqrt{(a^k)^2 + (b^k)^2 + (c^k)^2} + b^k + c^k} = \frac{-(a^k)^2 + 2 \max\{b^k, c^k\}\min\{b^k, c^k\}}{\sqrt{(a^k)^2 + (b^k)^2 + (c^k)^2} + b^k + c^k}$$
$$\ge \frac{-(a^k)^2 + 2 \max\{b^k, c^k\}\min\{b^k, c^k\}}{\sqrt{(a^k)^2 + 2(\max\{b^k, c^k\})^2} + 2\max\{b^k, c^k\}}.$$
Hence, it follows from the boundedness of $\{a^k\}$ that $\{\psi(a^k, b^k, c^k)\}$ is unbounded. This completes the proof. $\Box$

Proposition 3.3 If $F$ is a uniform $P$-function on $\mathbb{R}^n$ and the sequence $\{\mu^k\}$ is bounded, then the level set
$$L(c) := \{(\mu^k, x) : \Psi(\mu^k, x) \le c,\ k = 0, 1, 2, \ldots\}$$
is bounded for any $c \ge 0$.

Proof. Assume that $L(c)$ is unbounded. Then there exists an unbounded sequence $\{(\mu^k, x^k)\}$ such that $\Psi(\mu^k, x^k) \le c$. This implies that $\{x^k\}$ is unbounded, by the boundedness of $\{\mu^k\}$. By Lemma 3.2, there exists an index $i$ such that both $\{x_i^k\}$ and $\{F_i(x^k)\}$ are unbounded. Lemma 3.3 then shows that $\{\psi(\mu^k, x_i^k, F_i(x^k))\}$ is unbounded. Clearly, this implies that $\{\Psi(\mu^k, x^k)\}$ is unbounded, a contradiction. Therefore, $L(c)$ is bounded for any $c \ge 0$. $\Box$

4 A Damped Generalized Newton Method and Convergence

In this section, we develop a generalized Newton method for the system (3). The method contains two main steps. The first one is to define a search direction, which we call the Newton step, by solving the following so-called generalized Newton equation
$$V d = -G(\mu, x), \qquad (4)$$
where $V \in \partial G(\mu, x)$. The generalized Newton equation can be rewritten as
$$e^\mu\, d\mu = -(e^\mu - 1), \qquad C\, d\mu + (DF'(x) + E)\, dx = -\tilde G(\mu, x),$$
where $\tilde G(\mu, x)$ is defined as in Section 2. The second main step is a line search along the generalized Newton step to decrease the merit function. The full description of our method is stated as follows. For simplicity, let $z = (\mu, x)$, $z^+ = (\mu^+, x^+)$ and $z^k = (\mu^k, x^k)$. Similarly, $d^k = (d\mu^k, dx^k)$, etc.

Algorithm 1 (Damped generalized Newton method)

Step 1 (Initialization) Choose an initial point $z^0 = (\mu^0, x^0) \in \mathbb{R}^{n+1}$ with $\mu^0 > 0$, two scalars $\sigma, \beta \in (0, 1)$, and let $k := 0$.

Step 2 (Search direction) Choose $V^k \in \partial G(z^k)$ and solve the generalized Newton equation (4) with $\mu = \mu^k$, $z = z^k$ and $V = V^k$. Let $d^k$ be a solution of this equation. If $d = 0$ solves the generalized Newton equation, the algorithm terminates. Otherwise, go to Step 3.

Step 3 (Line search) Let $\alpha_k = \beta^{i_k}$, where $i_k$ is the smallest nonnegative integer $i$ such that
$$\Psi(z^k + \beta^i d^k) - \Psi(z^k) \le \sigma \beta^i\, \nabla\Psi(z^k)^T d^k.$$

Step 4 (Update) Let $z^{k+1} := z^k + \alpha_k d^k$ and $k := k + 1$. Go to Step 2.
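The following Python sketch is one possible implementation of the monotone version of Algorithm 1, assuming $\mu^k > 0$ throughout so that $V^k$ is simply the Jacobian of $G$ with the blocks $C$, $D$, $E$ of Proposition 2.1(ii). It is an illustrative sketch, not the author's MATLAB code, and the stopping rule and the final test problem are simplified, hypothetical choices.

```python
# Sketch of Algorithm 1 (monotone Armijo line search), assuming mu stays
# positive so that the Jacobian V of G is available in closed form.
import numpy as np

def G_of(mu, x, F):
    Fx = F(x)
    return np.concatenate(([np.exp(mu) - 1.0],
                           np.sqrt(mu**2 + x**2 + Fx**2) - (x + Fx))), Fx

def newton_fb(F, JF, x0, mu0=10.0, sigma=1e-4, beta=0.5, tol=1e-6, maxit=500):
    mu, x = mu0, np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(maxit):
        G, Fx = G_of(mu, x, F)
        if np.linalg.norm(np.minimum(x, Fx), np.inf) <= tol:
            break                                   # natural residual is small
        r = np.sqrt(mu**2 + x**2 + Fx**2)
        # V = [[e^mu, 0], [C, D F'(x) + E]] as in Proposition 2.1(ii)
        C, D, E = mu / r, Fx / r - 1.0, x / r - 1.0
        V = np.zeros((n + 1, n + 1))
        V[0, 0] = np.exp(mu)
        V[1:, 0] = C
        V[1:, 1:] = D[:, None] * JF(x) + np.diag(E)
        d = np.linalg.solve(V, -G)                  # generalized Newton equation (4)
        Psi, slope = 0.5 * G @ G, (V.T @ G) @ d     # slope = grad Psi(z)^T d < 0
        t = 1.0                                     # Step 3: Armijo backtracking
        while True:
            mu_t, x_t = mu + t * d[0], x + t * d[1:]
            G_t, _ = G_of(mu_t, x_t, F)
            if 0.5 * G_t @ G_t - Psi <= sigma * t * slope or t < 1e-10:
                break
            t *= beta
        mu, x = mu_t, x_t                           # Step 4: update
    return mu, x

# Hypothetical test problem: F(x) = Mx + q with M positive definite.
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-2.0, 1.0])
mu_s, x_s = newton_fb(lambda x: M @ x + q, lambda x: M, np.zeros(2))
print(x_s, np.minimum(x_s, M @ x_s + q))   # expect x close to (2/3, 0)
```

Because the first equation of (4) forces $d\mu = -(e^\mu - 1)/e^\mu$, the update keeps $\mu$ positive and strictly decreasing (Lemma 4.2 below), so the closed-form Jacobian used in the sketch remains valid at every iteration.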

The above generalized Newton method reduces to the classical damped Newton method if $G$ is smooth; see Dennis and Schnabel [11]. A similar algorithm for solving the system (2) is proposed in [25]. It has been recognized for a long time that nonmonotone line search strategies are superior, from a numerical point of view, to the monotone line search strategy. As will be seen later, we implement a nonmonotone line search in our numerical experiments. In a nonmonotone version of the damped generalized Newton method, $\Psi(z^k)$ on the left-hand side of the inequality in Step 3 is replaced by
$$\max\{\Psi(z^k), \Psi(z^{k-1}), \ldots, \Psi(z^{k-l})\},$$
where $l$ is a positive integer. When $l = 0$, the nonmonotone line search coincides with the monotone line search.

Lemma 4.1 If $G(z) \ne 0$ and the generalized Newton equation (4) is solvable at $z$, then its solution $d$ is a descent direction of the merit function $\Psi$ at $z$, that is, $\nabla\Psi(z)^T d < 0$. Furthermore, the line search step is well defined at $z$.

Proof. This follows immediately from the differentiability of $\Psi$ and the generalized Newton equation. $\Box$

Since $\Psi$ is continuously differentiable on $\mathbb{R}^{n+1}$, it is easy to see that Algorithm 1 is well defined provided that the generalized Newton direction is well defined at each step. In Step 2, the existence of the search direction depends on the solvability of the generalized Newton equation. From Proposition 3.1, the generalized Newton equation is solvable if $F'(x)$ is a $P_0$-matrix and $\mu \ne 0$. We repeat that the main difference between (2) and (3) is that (3) has one more variable and one more equation than (2). This additional variable must be driven to zero in order to obtain a solution of (3), i.e. a solution of the NCP, from Algorithm 1. So we next present a result on $\mu$ and $d\mu$.

Lemma 4.2 When $\mu > 0$, then $d\mu \in (-\mu, 0)$. Moreover, $\mu + t\, d\mu \in (0, \mu)$ for any $t \in (0, 1]$ if $\mu > 0$.

Proof. By the first equation of the generalized Newton equation (4) and the Taylor series of $e^\mu$, we have
$$d\mu = -\frac{e^\mu - 1}{e^\mu} = -\frac{\sum_{i=1}^{\infty} \frac{1}{i!}\mu^i}{\sum_{i=0}^{\infty} \frac{1}{i!}\mu^i} = -\mu\,\frac{\sum_{i=0}^{\infty} \frac{1}{(i+1)!}\mu^i}{\sum_{i=0}^{\infty} \frac{1}{i!}\mu^i},$$
which implies that $d\mu \in (-\mu, 0)$ when $\mu > 0$. It is then easy to see that $\mu + t\, d\mu \in (0, \mu)$ for any $t \in (0, 1]$. $\Box$

Simply speaking, the above result says that after each step the variable $\mu$ is closer to zero than its previous value; that is, $\mu$ is driven to zero automatically. However, $\mu$ always remains positive. This has two important consequences. Firstly, $G$ is continuously differentiable at $z^k = (\mu^k, x^k)$, which is convenient. Secondly, the solvability of the generalized Newton equation is easier to achieve when $\mu \ne 0$ than when $\mu = 0$; see Proposition 3.1.

Theorem 4.1 Suppose the generalized Newton equation in Step 2 is solvable for each $k$. Assume that $z^* = (\mu^*, x^*)$ is an accumulation point of the sequence $\{z^k\}$ generated by the damped generalized Newton method. Then the following statements hold:
(i) $x^*$ is a solution of the NCP if $\{d^k\}$ is bounded.
(ii) $x^*$ is a solution of the NCP and $\{z^k\}$ converges to $z^*$ superlinearly if every $V \in \partial G(z^*)$ is nonsingular and $\sigma \in (0, \frac{1}{2})$. The convergence rate is quadratic if $F'$ is Lipschitz continuous on $\mathbb{R}^n$.

Proof. The proof is similar to that of Theorem 4.1 in [25], where the damped generalized Newton method is applied to the system (2). We omit the details. $\Box$

Corollary 4.1 Suppose $F$ is a $P_0$-function on $\mathbb{R}^n$ and $\sigma \in (0, \frac{1}{2})$. Then Algorithm 1 is well defined. Assume $z^* = (\mu^*, x^*)$ is an accumulation point of $\{z^k\}$ and that every $V \in \partial G(z^*)$ is nonsingular or $F'(x^*)$ is a $P$-matrix. Then $\mu^* = 0$, $x^*$ is a solution of the NCP, and $\{z^k\}$ converges to $(0, x^*)$ superlinearly. If $F'$ is Lipschitz continuous on $\mathbb{R}^n$, then the convergence rate is quadratic.

Proof. By Lemma 4.2, $\mu^k > 0$ for all $k$. Since $F$ is a $P_0$-function, it follows from Proposition 3.1 that every $V \in \partial G(\mu^k, x^k)$ is nonsingular, which implies that the generalized Newton equation is solvable for every $k$. The result follows from Theorem 4.1. $\Box$

Corollary 4.2 Suppose $F$ is a uniform $P$-function on $\mathbb{R}^n$ and $\sigma \in (0, \frac{1}{2})$. Then Algorithm 1 is well defined, $\{z^k\}$ is bounded, and $\{z^k\}$ converges superlinearly to $z^* = (0, x^*)$, where $x^*$ is the unique solution of the NCP; the convergence rate is quadratic if $F'$ is Lipschitz continuous on $\mathbb{R}^n$.

Proof. The results follow from Proposition 3.3 and Corollary 4.1. $\Box$

Remark. One point worth mentioning concerns the calculation of the generalized Jacobian of $G(\mu, x)$, since we only managed to give an outer estimate of $\partial G(\mu, x)$ in Proposition 2.1. However, we never have to worry about this in Algorithm 1. The reason is that the parameter $\mu^k$ is never equal to zero for any $k$. This implies that $G$ is actually smooth at $(\mu^k, x^k)$ for every $k$. Therefore, the generalized Jacobian of $G$ reduces to the Jacobian of $G$, which is a singleton and easy to calculate.

5 Numerical Results

In this section, we present numerical experiments with Algorithm 1 of Section 4, using a nonmonotone line search strategy. We chose $l = 3$ for $k \ge 4$ and $l = k - 1$ for $k = 2, 3$, where $k$ is the iteration index. We also made the following change in our implementation: $\mu^k$ is replaced by $10^{-6}$ when $\mu^k < 10^{-6}$, because our experience showed that numerical difficulties sometimes occur if $\mu^k$ is too close to zero. Algorithm 1 was implemented in MATLAB and run on a Sun SPARC workstation. The following parameters were used for all test problems: $\mu^0 = 10.0$, $\sigma = 10^{-4}$, $\beta = 0.5$. The default initial starting point was used for each test problem in the library MCPLIB [12, 13]. The algorithm is terminated when one of the following criteria is satisfied:
(i) the iteration number reaches 500;
(ii) the line search step is less than $10^{-10}$;
(iii) the minimum of $\|\min(F(x^k), x^k)\|_\infty$ and $\|\nabla\Psi(z^k)\|_2$ is less than or equal to $10^{-6}$ (see the sketch below).
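A small sketch of criterion (iii), with hypothetical helper names and assuming the quantities $V$ and $G$ are available as in the algorithm sketch of Section 4:

```python
# Stopping test (iii): min( ||min(F(x), x)||_inf , ||grad Psi(z)||_2 ) <= 1e-6,
# where grad Psi(z) = V^T G(z).  V and G are assumed to be computed elsewhere.
import numpy as np

def natural_residual(x, Fx):
    return np.linalg.norm(np.minimum(x, Fx), np.inf)

def stop_test(x, Fx, V, G, tol=1e-6):
    return min(natural_residual(x, Fx), np.linalg.norm(V.T @ G)) <= tol
```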

We tested the nonlinear and linear complementarity problems from the library MCPLIB [12, 13]. The numerical results are summarized in Table 1, where Dim denotes the number of variables in the problem, Iter the number of iterations, which also equals the number of Jacobian evaluations of the function $F$, NF the number of function evaluations of $F$, and $\varepsilon$ the final value of $\|\min(F(x^*), x^*)\|_\infty$ at the computed solution $x^*$.

The algorithm initially failed to solve bishop, colvdual, powell and shubik. Therefore, we perturbed the Jacobian matrices for these problems by adding $\delta I$ to $F'(x^k)$, where $\delta > 0$ is a small constant and $I$ is the identity matrix. We used $\delta = 10^{-5}$ for bishop, powell and shubik, and $\delta = 10^{-2}$ for colvdual. Our code failed to solve tinloi within 500 iterations whether or not the Jacobian perturbation was used. However, our experiments showed that it did not make any meaningful progress from the 33rd iteration to the 500th iteration; in fact, $\varepsilon = 2.07 \times 10^{-6}$ at both iterations, which is very close to the value $10^{-6}$ used for termination. All other problems were solved successfully. One may see that most problems were solved in a small number of iterations. One important observation is that the number of function evaluations is very close to the number of iterations for most of the test problems. This implies that full Newton steps are taken most of the time, and superlinear convergence follows.

Table 1: Numerical results for the problems from MCPLIB (columns: Problem, Dim, Iter, NF, $\varepsilon$), covering the test problems bertsekas, billups, bishop, colvdual, colvnlp, cycle, degen, explcp, hanskoop, jel, josephy, kojshin, mathinum, mathisum, nash, pgvon, powell, scarfanum, scarfasum, scarfbsum, shubik, simple-red, sppe, tinloi and tobin.

6 Concluding Remarks

By introducing another variable and an additional equation, we have reformulated the NCP as a square system of nonsmooth equations. It has been proved that this reformulation shares some desirable properties of both nonsmooth equation reformulations and smoothing techniques. The semismoothness of the equation and the smoothness of its least squares merit function enable us to propose the damped generalized Newton method and to prove global as well as local superlinear convergence under mild conditions. Encouraging numerical results have been reported.

The main feature of the proposed method is the introduction of the additional equation
$$e^\mu - 1 = 0.$$
As we have seen, $\{\mu^k\}$ is a monotonically decreasing positive sequence if $\mu^0 > 0$. This property ensures the following important consequences: (i) the reformulated system is smooth at each iteration, which might not be so important for our methods since the system is semismooth everywhere; (ii) the linearized system has a unique solution at any iteration $k$ under mild conditions such as the $P_0$-property; (iii) the requirement that $\mu^k$ be driven to zero, which is needed to ensure the right kind of convergence (i.e., that an accumulation point is a solution of the equation or a stationary point of the least squares merit function), is satisfied automatically.

One may find other functions which can play a similar role. For example, $e^\mu + \mu - 1 = 0$ might be an alternative. In general, the equation $e^\mu - 1 = 0$ can be replaced by an equation $\rho(\mu) = 0$, where $\rho$ satisfies the following conditions:
(i) $\rho : \mathbb{R} \to \mathbb{R}$ is continuously differentiable with $\rho'(\mu) > 0$ for any $\mu$;
(ii) $\rho(\mu) = 0$ implies that $\mu = 0$;
(iii) $d\mu = -\rho(\mu)/\rho'(\mu) \in (-\mu, 0)$ for any $\mu > 0$.
Some comments on the requirements imposed on the function $\rho$ are in order. Condition (i) ensures that $G$ is smooth and that $d\mu$ is well defined. Condition (ii) guarantees that $G(\mu, x) = 0$ implies that $\mu = 0$ and $x$ is a solution of the NCP, and that a stationary point of the merit function is a solution of the NCP under some mild conditions; see Propositions 2.1 and 3.2. Condition (iii) implies that $0 < \mu + t\, d\mu < \mu$ for any $t \in (0, 1]$, which is required in the Armijo line search of Algorithm 1, and which also ensures that $\mu$ always remains positive and in a bounded set.
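As a quick numerical spot check (not a proof), the following sketch verifies condition (iii) on a grid for the function $\rho(\mu) = e^\mu - 1$ used in this paper and for the alternative $\rho(\mu) = e^\mu + \mu - 1$ mentioned above.

```python
# Spot check of condition (iii): d_mu = -rho(mu)/rho'(mu) must lie in (-mu, 0)
# for every mu > 0.  Checked on a grid for the two choices of rho above.
import numpy as np

choices = {
    "exp(mu) - 1":      (lambda m: np.exp(m) - 1.0, lambda m: np.exp(m)),
    "exp(mu) + mu - 1": (lambda m: np.exp(m) + m - 1.0, lambda m: np.exp(m) + 1.0),
}
mus = np.linspace(0.01, 20.0, 2000)
for name, (rho, drho) in choices.items():
    d = -rho(mus) / drho(mus)
    print(name, bool(np.all((-mus < d) & (d < 0))))   # expect True for both
```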

In [38], Qi, Sun and Zhou also treated smoothing parameters as independent variables in their smoothing methods. In their algorithm, these smoothing parameters are updated according to both the line search rule and the quality of the approximate solution of the problem considered; see that paper for more details. As has been seen in Algorithm 1, our smoothing parameter is updated by the line search rule alone.

The techniques introduced in this paper seem to be applicable to variational inequalities, mathematical programs with equilibrium constraints, semidefinite mathematical programs and related problems. The technique of introducing an additional equation may be useful in other methods for solving the NCP and related problems whenever parameters need to be introduced.

In an early version [24] of this paper, a damped modified Gauss-Newton method and another damped generalized Newton method based on a modified functional of $\varphi$ were proposed, and global as well as local fast convergence results were established. The interested reader is referred to the report [24] for more details.

Acknowledgements. The author is grateful to Dr. Danny Ralph for numerous motivating discussions and many constructive suggestions and comments, and to Dr. Steven Dirkse for providing the test problems and a MATLAB interface to access them. I am also thankful to the anonymous referees and Professor Liqun Qi for their valuable comments.

References

[1] J. Burke and S. Xu, The global linear convergence of a non-interior path-following algorithm for linear complementarity problems, Mathematics of Operations Research 23 (1998).

[2] B. Chen and X. Chen, A global and local superlinear continuation-smoothing method for P_0 + R_0 and monotone NCP, SIAM Journal on Optimization 9 (1999).

[3] B. Chen and P.T. Harker, A continuation method for monotone variational inequalities, Mathematical Programming (Series A) 69 (1995).

[4] B. Chen and P.T. Harker, Smooth approximations to nonlinear complementarity problems, SIAM Journal on Optimization 7 (1997).

[5] C. Chen and O.L. Mangasarian, Smoothing methods for convex inequalities and linear complementarity problems, Mathematical Programming 71 (1995).

[6] C. Chen and O.L. Mangasarian, A class of smoothing functions for nonlinear and mixed complementarity problems, Computational Optimization and Applications 5 (1996).

[7] X. Chen, L. Qi and D. Sun, Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities, Mathematics of Computation 67 (1998).

[8] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, 1983.

[9] R.W. Cottle, J.-S. Pang and R.E. Stone, The Linear Complementarity Problem, Academic Press, New York, 1992.

[10] T. De Luca, F. Facchinei and C. Kanzow, A semismooth equation approach to the solution of nonlinear complementarity problems, Mathematical Programming 75 (1996).

[11] J.E. Dennis and R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall, Englewood Cliffs, New Jersey, 1983.

[12] S.P. Dirkse, MATLAB interface to the MCPLIB and MPECLIB models.

[13] S.P. Dirkse and M.C. Ferris, MCPLIB: A collection of nonlinear mixed complementarity problems, Optimization Methods and Software 5 (1995).

[14] J. Eckstein and M. Ferris, Smooth methods of multipliers for complementarity problems, Mathematical Programming 86 (1999).

[15] F. Facchinei, H. Jiang and L. Qi, A smoothing method for mathematical programs with equilibrium constraints, Mathematical Programming 85 (1999).

[16] F. Facchinei and C. Kanzow, A nonsmooth inexact Newton method for the solution of large-scale nonlinear complementarity problems, Mathematical Programming (Series B) 76 (1997).

[17] F. Facchinei and J. Soares, A new merit function for nonlinear complementarity problems and a related algorithm, SIAM Journal on Optimization 7 (1997).

[18] A. Fischer, A special Newton-type optimization method, Optimization 24 (1992).

[19] A. Fischer, Solution of monotone complementarity problems with locally Lipschitzian functions, Mathematical Programming 76 (1997).

[20] M. Fukushima, Z.-Q. Luo and J.-S. Pang, A globally convergent sequential quadratic programming algorithm for mathematical programs with linear complementarity constraints, Computational Optimization and Applications 10 (1998).

[21] S.A. Gabriel and J.J. Moré, Smoothing of mixed complementarity problems, in: M.C. Ferris and J.-S. Pang, eds., Complementarity and Variational Problems, SIAM Publications, Philadelphia, 1997.

[22] C. Geiger and C. Kanzow, On the resolution of monotone complementarity problems, Computational Optimization and Applications 5 (1996).

[23] K. Hotta and A. Yoshise, Global convergence of a class of non-interior-point algorithms using Chen-Harker-Kanzow functions for nonlinear complementarity problems, Mathematical Programming 86 (1999).

[24] H. Jiang, Smoothed Fischer-Burmeister equation methods for the complementarity problem, Manuscript, Department of Mathematics, The University of Melbourne, June.

[25] H. Jiang, Global convergence analysis of the generalized Newton and Gauss-Newton methods for the Fischer-Burmeister equation for the complementarity problem, Mathematics of Operations Research 24 (1999).

[26] H. Jiang, M. Fukushima, L. Qi and D. Sun, A trust region method for solving generalized complementarity problems, SIAM Journal on Optimization 8 (1998).

[27] H. Jiang and L. Qi, A new nonsmooth equations approach to nonlinear complementarity problems, SIAM Journal on Control and Optimization 35 (1997).

[28] C. Kanzow, An unconstrained optimization technique for large-scale linearly constrained convex minimization problems, Computing 53 (1994).

[29] C. Kanzow, Some noninterior continuation methods for linear complementarity problems, SIAM Journal on Matrix Analysis and Applications 17 (1996).

[30] C. Kanzow, A new approach to continuation methods for complementarity problems with uniform P-functions, Operations Research Letters 20 (1997).

[31] C. Kanzow and H. Jiang, A continuation method for (strongly) monotone variational inequalities, Mathematical Programming 81 (1998).

[32] M. Kojima, N. Megiddo and S. Mizuno, A general framework of continuation methods for complementarity problems, Mathematics of Operations Research 18 (1993).

[33] M. Kojima, S. Mizuno and T. Noma, Limiting behaviour of trajectories generated by a continuation method for monotone complementarity problems, Mathematics of Operations Research 15 (1990).

[34] J.J. Moré and W.C. Rheinboldt, On P- and S-functions and related classes of n-dimensional nonlinear mappings, Linear Algebra and its Applications 6 (1973).

[35] J.-S. Pang, Complementarity problems, in: R. Horst and P. Pardalos, eds., Handbook of Global Optimization, Kluwer Academic Publishers, Boston, 1994.

[36] L. Qi, Convergence analysis of some algorithms for solving nonsmooth equations, Mathematics of Operations Research 18 (1993).

[37] L. Qi, Regular pseudo-smooth NCP and BVIP functions and globally and quadratically convergent generalized Newton methods for complementarity and variational inequality problems, Mathematics of Operations Research 24 (1999).

[38] L. Qi, D. Sun and G. Zhou, A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities, Mathematical Programming 87 (2000).

[39] L. Qi and J. Sun, A nonsmooth version of Newton's method, Mathematical Programming 58 (1993).

[40] S.M. Robinson, Strongly regular generalized equations, Mathematics of Operations Research 5 (1980).

[41] H. Sellami and S. Robinson, Implementations of a continuation method for normal maps, Mathematical Programming (Series B) 76 (1997).

[42] P. Tseng, Growth behavior of a class of merit functions for the nonlinear complementarity problem, Journal of Optimization Theory and Applications 89 (1996).

[43] P. Tseng, An infeasible path-following method for monotone complementarity problems, SIAM Journal on Optimization 7 (1997).

[44] S. Xu, The global linear convergence of an infeasible non-interior path-following algorithm for complementarity problems with uniform P-functions, Mathematical Programming 87 (2000).

[45] N. Yamashita and M. Fukushima, Modified Newton methods for solving a semismooth reformulation of monotone complementarity problems, Mathematical Programming 76 (1997).

Attachment: Proof of Theorem 4.1.

(i) The generalized Newton direction in Step 2 is well defined by the solvability assumption on the generalized Newton equation. By the generalized Newton equation and the smoothness of $\Psi$, we have
$$\nabla\Psi(z^k)^T d^k = G(z^k)^T V^k d^k = -\|G(z^k)\|^2 = -2\Psi(z^k) < 0.$$
In view of the facts that $d^k \ne 0$ and that $d = 0$ is not a solution of the generalized Newton equation, it follows that $d^k$ is a descent direction of the merit function $\Psi$ at $z^k$. Therefore, the well-definedness of the line search step (Step 3) and of the algorithm follows from the differentiability of the merit function.

Without loss of generality, we may assume that $z^*$ is the limit of the subsequence $\{z^k\}_{k \in K}$, where $K$ is a subsequence of $\{1, 2, \ldots\}$. If $\{\alpha_k\}_{k \in K}$ is bounded away from zero, then, using a standard argument based on the decrease of the merit function at each iteration and its nonnegativity over $\mathbb{R}^{n+1}$, we obtain $\sum_{k \in K} -\alpha_k \nabla\Psi(z^k)^T d^k < +\infty$, which implies that $\sum_{k \in K} \Psi(z^k) < +\infty$. Hence $\lim_{k \to +\infty, k \in K} \Psi(z^k) = \Psi(z^*) = 0$ and $z^*$ is a solution of (3).

On the other hand, if $\{\alpha_k\}_{k \in K}$ has a subsequence converging to zero, we may pass to this subsequence and assume that $\lim_{k \to \infty, k \in K} \alpha_k = 0$. From the line search step, for all sufficiently large $k \in K$,
$$\Psi(z^k + \alpha_k d^k) - \Psi(z^k) \le \sigma \alpha_k \nabla\Psi(z^k)^T d^k, \qquad \Psi(z^k + \beta^{-1}\alpha_k d^k) - \Psi(z^k) > \sigma \beta^{-1}\alpha_k \nabla\Psi(z^k)^T d^k.$$
Since $\{d^k\}$ is bounded, by passing to a further subsequence we may assume that $\lim_{k \to +\infty, k \in K} d^k = d^*$. By some algebraic manipulations and passing to the limit, we obtain
$$\nabla\Psi(z^*)^T d^* \ge \sigma \nabla\Psi(z^*)^T d^*,$$
which means that $\nabla\Psi(z^*)^T d^* = 0$. By the generalized Newton equation, it follows that
$$G(z^k)^T G(z^k) + G(z^k)^T V^k d^k = G(z^k)^T G(z^k) + \nabla\Psi(z^k)^T d^k = 0.$$
This shows that $\lim_{k \to \infty, k \in K} G(z^k)^T G(z^k) = G(z^*)^T G(z^*) = 0$, namely, $z^*$ is a solution of (3).

(ii) Since every $V \in \partial G(z^*)$ is nonsingular, it follows that
$$\|(V^k)^{-1}\| \le c$$
for some positive constant $c$ and all sufficiently large $k \in K$. The generalized Newton equation then implies that $\{d^k\}_{k \in K}$ is bounded. Therefore, (i) implies that $G(z^*) = 0$.

We next turn to the convergence rate. By the semismoothness of $G$ at $z^*$, for all sufficiently large $k \in K$,
$$G(z^k + d^k) = G(z^* + (z^k + d^k - z^*)) - G(z^*) = U(z^k + d^k - z^*) + o(\|z^k + d^k - z^*\|),$$
where $U \in \partial G(z^k + d^k)$, and
$$G(z^k) = G(z^* + (z^k - z^*)) - G(z^*) = V(z^k - z^*) + o(\|z^k - z^*\|),$$
where $V \in \partial G(z^k)$.

Let $V = V^k$ in the last equality. Then the generalized Newton equation and the uniform nonsingularity of the $V^k$ ($k \in K$) imply that
$$\|z^k + d^k - z^*\| = o(\|z^k - z^*\|) \qquad (5)$$
and $\|d^k\| = \|z^k - z^*\| + o(\|z^k - z^*\|)$, which implies that $\lim_{k \to \infty, k \in K} d^k = 0$. Consequently, it follows from the nonsingularity of $\partial G(z^*)$ that
$$\lim_{k \to \infty, k \in K} \frac{\|G(z^k)\|}{\|z^k - z^*\|} > 0, \qquad \lim_{k \to \infty, k \in K} \frac{\|G(z^k + d^k)\|}{\|z^k + d^k - z^*\|} > 0.$$
Hence, (5) shows that
$$\|G(z^k + d^k)\| = o(\|G(z^k)\|).$$
By the generalized Newton equation and $\sigma \in (0, \frac{1}{2})$, we obtain that $\alpha_k = 1$ for all sufficiently large $k \in K$, i.e., the full generalized Newton step is taken. In other words, when $k$ is sufficiently large, both $z^k$ and $z^k + d^k$ lie in a small neighborhood of $z^*$ by (5), and the damped Newton method becomes the generalized Newton method. The convergence and the convergence rate then follow from Theorem 3.2 of [39]. $\Box$


More information

Expected Residual Minimization Method for Stochastic Linear Complementarity Problems 1

Expected Residual Minimization Method for Stochastic Linear Complementarity Problems 1 Expected Residual Minimization Method for Stochastic Linear Complementarity Problems 1 Xiaojun Chen and Masao Fukushima 3 January 13, 004; Revised November 5, 004 Abstract. This paper presents a new formulation

More information

Technische Universität Dresden Herausgeber: Der Rektor

Technische Universität Dresden Herausgeber: Der Rektor Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor The Gradient of the Squared Residual as Error Bound an Application to Karush-Kuhn-Tucker Systems Andreas Fischer MATH-NM-13-2002

More information

A Smoothing Newton Method for Solving Absolute Value Equations

A Smoothing Newton Method for Solving Absolute Value Equations A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints

A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints Journal of Computational and Applied Mathematics 161 (003) 1 5 www.elsevier.com/locate/cam A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality

More information

Semismooth Support Vector Machines

Semismooth Support Vector Machines Semismooth Support Vector Machines Michael C. Ferris Todd S. Munson November 29, 2000 Abstract The linear support vector machine can be posed as a quadratic program in a variety of ways. In this paper,

More information

On the Coerciveness of Merit Functions for the Second-Order Cone Complementarity Problem

On the Coerciveness of Merit Functions for the Second-Order Cone Complementarity Problem On the Coerciveness of Merit Functions for the Second-Order Cone Complementarity Problem Guidance Professor Assistant Professor Masao Fukushima Nobuo Yamashita Shunsuke Hayashi 000 Graduate Course in Department

More information

Properties of Solution Set of Tensor Complementarity Problem

Properties of Solution Set of Tensor Complementarity Problem Properties of Solution Set of Tensor Complementarity Problem arxiv:1508.00069v3 [math.oc] 14 Jan 2017 Yisheng Song Gaohang Yu Abstract The tensor complementarity problem is a specially structured nonlinear

More information

A class of Smoothing Method for Linear Second-Order Cone Programming

A class of Smoothing Method for Linear Second-Order Cone Programming Columbia International Publishing Journal of Advanced Computing (13) 1: 9-4 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear Second-Order Cone Programming Zhuqing Gui *, Zhibin

More information

A PROBABILITY-ONE HOMOTOPY ALGORITHM FOR NONSMOOTH EQUATIONS AND MIXED COMPLEMENTARITY PROBLEMS

A PROBABILITY-ONE HOMOTOPY ALGORITHM FOR NONSMOOTH EQUATIONS AND MIXED COMPLEMENTARITY PROBLEMS A PROBABILITY-ONE HOMOTOPY ALGORITHM FOR NONSMOOTH EQUATIONS AND MIXED COMPLEMENTARITY PROBLEMS STEPHEN C. BILLUPS AND LAYNE T. WATSON Abstract. A probability-one homotopy algorithm for solving nonsmooth

More information

A continuation method for nonlinear complementarity problems over symmetric cone

A continuation method for nonlinear complementarity problems over symmetric cone A continuation method for nonlinear complementarity problems over symmetric cone CHEK BENG CHUA AND PENG YI Abstract. In this paper, we introduce a new P -type condition for nonlinear functions defined

More information

PROBLEMS. STEVEN P. DIRKSE y AND MICHAEL C. FERRIS z. solution once they are close to the solution point and the correct active set has been found.

PROBLEMS. STEVEN P. DIRKSE y AND MICHAEL C. FERRIS z. solution once they are close to the solution point and the correct active set has been found. CRASH TECHNIQUES FOR LARGE-SCALE COMPLEMENTARITY PROBLEMS STEVEN P. DIRKSE y AND MICHAEL C. FERRIS z Abstract. Most Newton-based solvers for complementarity problems converge rapidly to a solution once

More information

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE

A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

GENERALIZED second-order cone complementarity

GENERALIZED second-order cone complementarity Stochastic Generalized Complementarity Problems in Second-Order Cone: Box-Constrained Minimization Reformulation and Solving Methods Mei-Ju Luo and Yan Zhang Abstract In this paper, we reformulate the

More information

Applying a type of SOC-functions to solve a system of equalities and inequalities under the order induced by second-order cone

Applying a type of SOC-functions to solve a system of equalities and inequalities under the order induced by second-order cone Applying a type of SOC-functions to solve a system of equalities and inequalities under the order induced by second-order cone Xin-He Miao 1, Nuo Qi 2, B. Saheya 3 and Jein-Shan Chen 4 Abstract: In this

More information

QUADRATICALLY AND SUPERLINEARLY CONVERGENT ALGORITHMS FOR THE SOLUTION OF INEQUALITY CONSTRAINED MINIMIZATION PROBLEMS 1

QUADRATICALLY AND SUPERLINEARLY CONVERGENT ALGORITHMS FOR THE SOLUTION OF INEQUALITY CONSTRAINED MINIMIZATION PROBLEMS 1 QUADRATICALLY AND SUPERLINEARLY CONVERGENT ALGORITHMS FOR THE SOLUTION OF INEQUALITY CONSTRAINED MINIMIZATION PROBLEMS 1 F. FACCHINEI 2 AND S. LUCIDI 3 Communicated by L.C.W. Dixon 1 This research was

More information

Step lengths in BFGS method for monotone gradients

Step lengths in BFGS method for monotone gradients Noname manuscript No. (will be inserted by the editor) Step lengths in BFGS method for monotone gradients Yunda Dong Received: date / Accepted: date Abstract In this paper, we consider how to directly

More information

On the Convergence of Newton Iterations to Non-Stationary Points Richard H. Byrd Marcelo Marazzi y Jorge Nocedal z April 23, 2001 Report OTC 2001/01 Optimization Technology Center Northwestern University,

More information

1. Introduction. We consider the classical variational inequality problem [1, 3, 7] VI(F, C), which is to find a point x such that

1. Introduction. We consider the classical variational inequality problem [1, 3, 7] VI(F, C), which is to find a point x such that SIAM J. CONTROL OPTIM. Vol. 37, No. 3, pp. 765 776 c 1999 Society for Industrial and Applied Mathematics A NEW PROJECTION METHOD FOR VARIATIONAL INEQUALITY PROBLEMS M. V. SOLODOV AND B. F. SVAITER Abstract.

More information

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r DAMTP 2002/NA08 Least Frobenius norm updating of quadratic models that satisfy interpolation conditions 1 M.J.D. Powell Abstract: Quadratic models of objective functions are highly useful in many optimization

More information

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics

More information

Lecture 19 Algorithms for VIs KKT Conditions-based Ideas. November 16, 2008

Lecture 19 Algorithms for VIs KKT Conditions-based Ideas. November 16, 2008 Lecture 19 Algorithms for VIs KKT Conditions-based Ideas November 16, 2008 Outline for solution of VIs Algorithms for general VIs Two basic approaches: First approach reformulates (and solves) the KKT

More information

Tensor Complementarity Problem and Semi-positive Tensors

Tensor Complementarity Problem and Semi-positive Tensors DOI 10.1007/s10957-015-0800-2 Tensor Complementarity Problem and Semi-positive Tensors Yisheng Song 1 Liqun Qi 2 Received: 14 February 2015 / Accepted: 17 August 2015 Springer Science+Business Media New

More information

Solution of a General Linear Complementarity Problem using smooth optimization and its application to bilinear programming and LCP

Solution of a General Linear Complementarity Problem using smooth optimization and its application to bilinear programming and LCP Solution of a General Linear Complementarity Problem using smooth optimization and its application to bilinear programming and LCP L. Fernandes A. Friedlander M. Guedes J. Júdice Abstract This paper addresses

More information

Research Article Finding Global Minima with a Filled Function Approach for Non-Smooth Global Optimization

Research Article Finding Global Minima with a Filled Function Approach for Non-Smooth Global Optimization Hindawi Publishing Corporation Discrete Dynamics in Nature and Society Volume 00, Article ID 843609, 0 pages doi:0.55/00/843609 Research Article Finding Global Minima with a Filled Function Approach for

More information

2 Chapter 1 rely on approximating (x) by using progressively ner discretizations of [0; 1] (see, e.g. [5, 7, 8, 16, 18, 19, 20, 23]). Specically, such

2 Chapter 1 rely on approximating (x) by using progressively ner discretizations of [0; 1] (see, e.g. [5, 7, 8, 16, 18, 19, 20, 23]). Specically, such 1 FEASIBLE SEQUENTIAL QUADRATIC PROGRAMMING FOR FINELY DISCRETIZED PROBLEMS FROM SIP Craig T. Lawrence and Andre L. Tits ABSTRACT Department of Electrical Engineering and Institute for Systems Research

More information

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth

More information

RECURSIVE APPROXIMATION OF THE HIGH DIMENSIONAL max FUNCTION

RECURSIVE APPROXIMATION OF THE HIGH DIMENSIONAL max FUNCTION RECURSIVE APPROXIMATION OF THE HIGH DIMENSIONAL max FUNCTION Ş. İ. Birbil, S.-C. Fang, J. B. G. Frenk and S. Zhang Faculty of Engineering and Natural Sciences Sabancı University, Orhanli-Tuzla, 34956,

More information

Abstract. This paper investigates inexact Newton methods for solving systems of nonsmooth equations. We dene two inexact Newton methods for locally Li

Abstract. This paper investigates inexact Newton methods for solving systems of nonsmooth equations. We dene two inexact Newton methods for locally Li Inexact Newton Methods for Solving Nonsmooth Equations Jose Mario Martnez Department of Applied Mathematics IMECC-UNICAMP State University of Campinas CP 6065, 13081 Campinas SP, Brazil Email : martinez@ccvax.unicamp.ansp.br

More information

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition) Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational

More information

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming

Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study

More information

Optimization: Interior-Point Methods and. January,1995 USA. and Cooperative Research Centre for Robust and Adaptive Systems.

Optimization: Interior-Point Methods and. January,1995 USA. and Cooperative Research Centre for Robust and Adaptive Systems. Innite Dimensional Quadratic Optimization: Interior-Point Methods and Control Applications January,995 Leonid Faybusovich John B. Moore y Department of Mathematics University of Notre Dame Mail Distribution

More information

A semismooth Newton method for tensor eigenvalue complementarity problem

A semismooth Newton method for tensor eigenvalue complementarity problem Comput Optim Appl DOI 10.1007/s10589-016-9838-9 A semismooth Newton method for tensor eigenvalue complementarity problem Zhongming Chen 1 Liqun Qi 2 Received: 5 December 2015 Springer Science+Business

More information

Symmetric Tridiagonal Inverse Quadratic Eigenvalue Problems with Partial Eigendata

Symmetric Tridiagonal Inverse Quadratic Eigenvalue Problems with Partial Eigendata Symmetric Tridiagonal Inverse Quadratic Eigenvalue Problems with Partial Eigendata Zheng-Jian Bai Revised: October 18, 2007 Abstract In this paper we concern the inverse problem of constructing the n-by-n

More information

BOOK REVIEWS 169. minimize (c, x) subject to Ax > b, x > 0.

BOOK REVIEWS 169. minimize (c, x) subject to Ax > b, x > 0. BOOK REVIEWS 169 BULLETIN (New Series) OF THE AMERICAN MATHEMATICAL SOCIETY Volume 28, Number 1, January 1993 1993 American Mathematical Society 0273-0979/93 $1.00+ $.25 per page The linear complementarity

More information

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Oleg Burdakov a,, Ahmad Kamandi b a Department of Mathematics, Linköping University,

More information

1 Introduction We consider the problem nd x 2 H such that 0 2 T (x); (1.1) where H is a real Hilbert space, and T () is a maximal monotone operator (o

1 Introduction We consider the problem nd x 2 H such that 0 2 T (x); (1.1) where H is a real Hilbert space, and T () is a maximal monotone operator (o Journal of Convex Analysis Volume 6 (1999), No. 1, pp. xx-xx. cheldermann Verlag A HYBRID PROJECTION{PROXIMAL POINT ALGORITHM M. V. Solodov y and B. F. Svaiter y January 27, 1997 (Revised August 24, 1998)

More information

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction

SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM. 1. Introduction ACTA MATHEMATICA VIETNAMICA 271 Volume 29, Number 3, 2004, pp. 271-280 SOME STABILITY RESULTS FOR THE SEMI-AFFINE VARIATIONAL INEQUALITY PROBLEM NGUYEN NANG TAM Abstract. This paper establishes two theorems

More information

Local Indices for Degenerate Variational Inequalities

Local Indices for Degenerate Variational Inequalities Local Indices for Degenerate Variational Inequalities Alp Simsek Department of Economics, Massachusetts Institute of Technology, Office: 32D-740, 77 Massachusetts Avenue, Cambridge, Massachusetts, 02139

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Merit functions and error bounds for generalized variational inequalities

Merit functions and error bounds for generalized variational inequalities J. Math. Anal. Appl. 287 2003) 405 414 www.elsevier.com/locate/jmaa Merit functions and error bounds for generalized variational inequalities M.V. Solodov 1 Instituto de Matemática Pura e Aplicada, Estrada

More information

A NOTE ON Q-ORDER OF CONVERGENCE

A NOTE ON Q-ORDER OF CONVERGENCE BIT 0006-3835/01/4102-0422 $16.00 2001, Vol. 41, No. 2, pp. 422 429 c Swets & Zeitlinger A NOTE ON Q-ORDER OF CONVERGENCE L. O. JAY Department of Mathematics, The University of Iowa, 14 MacLean Hall Iowa

More information

The nonsmooth Newton method on Riemannian manifolds

The nonsmooth Newton method on Riemannian manifolds The nonsmooth Newton method on Riemannian manifolds C. Lageman, U. Helmke, J.H. Manton 1 Introduction Solving nonlinear equations in Euclidean space is a frequently occurring problem in optimization and

More information

MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY

MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY PREPRINT ANL/MCS-P6-1196, NOVEMBER 1996 (REVISED AUGUST 1998) MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY SUPERLINEAR CONVERGENCE OF AN INTERIOR-POINT METHOD DESPITE DEPENDENT

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Department of Social Systems and Management. Discussion Paper Series

Department of Social Systems and Management. Discussion Paper Series Department of Social Systems and Management Discussion Paper Series No. 1262 Complementarity Problems over Symmetric Cones: A Survey of Recent Developments in Several Aspects by Akiko YOSHISE July 2010

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity problems

An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity problems Applied Mathematics and Computation 109 (2000) 167±182 www.elsevier.nl/locate/amc An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

Fischer-Burmeister Complementarity Function on Euclidean Jordan Algebras

Fischer-Burmeister Complementarity Function on Euclidean Jordan Algebras Fischer-Burmeister Complementarity Function on Euclidean Jordan Algebras Lingchen Kong, Levent Tunçel, and Naihua Xiu 3 (November 6, 7; Revised December, 7) Abstract Recently, Gowda et al. [] established

More information

Downloaded 12/13/16 to Redistribution subject to SIAM license or copyright; see

Downloaded 12/13/16 to Redistribution subject to SIAM license or copyright; see SIAM J. OPTIM. Vol. 11, No. 4, pp. 962 973 c 2001 Society for Industrial and Applied Mathematics MONOTONICITY OF FIXED POINT AND NORMAL MAPPINGS ASSOCIATED WITH VARIATIONAL INEQUALITY AND ITS APPLICATION

More information

An accelerated Newton method of high-order convergence for solving a class of weakly nonlinear complementarity problems

An accelerated Newton method of high-order convergence for solving a class of weakly nonlinear complementarity problems Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 0 (207), 4822 4833 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa An accelerated Newton method

More information

Identifying Active Constraints via Partial Smoothness and Prox-Regularity

Identifying Active Constraints via Partial Smoothness and Prox-Regularity Journal of Convex Analysis Volume 11 (2004), No. 2, 251 266 Identifying Active Constraints via Partial Smoothness and Prox-Regularity W. L. Hare Department of Mathematics, Simon Fraser University, Burnaby,

More information

Convergence Analysis of Perturbed Feasible Descent Methods 1

Convergence Analysis of Perturbed Feasible Descent Methods 1 JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS Vol. 93. No 2. pp. 337-353. MAY 1997 Convergence Analysis of Perturbed Feasible Descent Methods 1 M. V. SOLODOV 2 Communicated by Z. Q. Luo Abstract. We

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Variational Inequalities. Anna Nagurney Isenberg School of Management University of Massachusetts Amherst, MA 01003

Variational Inequalities. Anna Nagurney Isenberg School of Management University of Massachusetts Amherst, MA 01003 Variational Inequalities Anna Nagurney Isenberg School of Management University of Massachusetts Amherst, MA 01003 c 2002 Background Equilibrium is a central concept in numerous disciplines including economics,

More information

Abstract. A new class of continuation methods is presented which, in particular,

Abstract. A new class of continuation methods is presented which, in particular, A General Framework of Continuation Methods for Complementarity Problems Masakazu Kojima y Nimrod Megiddo z Shinji Mizuno x September 1990 Abstract. A new class of continuation methods is presented which,

More information

Manual of ReSNA. matlab software for mixed nonlinear second-order cone complementarity problems based on Regularized Smoothing Newton Algorithm

Manual of ReSNA. matlab software for mixed nonlinear second-order cone complementarity problems based on Regularized Smoothing Newton Algorithm Manual of ReSNA matlab software for mixed nonlinear second-order cone complementarity problems based on Regularized Smoothing Newton Algorithm Shunsuke Hayashi September 4, 2013 1 Introduction ReSNA (Regularized

More information

National Institute of Standards and Technology USA. Jon W. Tolle. Departments of Mathematics and Operations Research. University of North Carolina USA

National Institute of Standards and Technology USA. Jon W. Tolle. Departments of Mathematics and Operations Research. University of North Carolina USA Acta Numerica (1996), pp. 1{000 Sequential Quadratic Programming Paul T. Boggs Applied and Computational Mathematics Division National Institute of Standards and Technology Gaithersburg, Maryland 20899

More information

Relationships between upper exhausters and the basic subdifferential in variational analysis

Relationships between upper exhausters and the basic subdifferential in variational analysis J. Math. Anal. Appl. 334 (2007) 261 272 www.elsevier.com/locate/jmaa Relationships between upper exhausters and the basic subdifferential in variational analysis Vera Roshchina City University of Hong

More information

Affine scaling interior Levenberg-Marquardt method for KKT systems. C S:Levenberg-Marquardt{)KKTXÚ

Affine scaling interior Levenberg-Marquardt method for KKT systems. C S:Levenberg-Marquardt{)KKTXÚ 2013c6 $ Ê Æ Æ 117ò 12Ï June, 2013 Operations Research Transactions Vol.17 No.2 Affine scaling interior Levenberg-Marquardt method for KKT systems WANG Yunjuan 1, ZHU Detong 2 Abstract We develop and analyze

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

Research Article A Penalized-Equation-Based Generalized Newton Method for Solving Absolute-Value Linear Complementarity Problems

Research Article A Penalized-Equation-Based Generalized Newton Method for Solving Absolute-Value Linear Complementarity Problems Mathematics, Article ID 560578, 10 pages http://dx.doi.org/10.1155/2014/560578 Research Article A Penalized-Equation-Based Generalized Newton Method for Solving Absolute-Value Linear Complementarity Problems

More information