INEXACT NEWTON METHODS FOR SEMISMOOTH EQUATIONS WITH APPLICATIONS TO VARIATIONAL INEQUALITY PROBLEMS

Francisco Facchinei (1), Andreas Fischer (2) and Christian Kanzow (3)

(1) Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Via Buonarroti 12, I Roma, Italy
(2) Institute of Numerical Mathematics, Technical University of Dresden, D Dresden, Germany
(3) Institute of Applied Mathematics, University of Hamburg, Bundesstrasse 55, D Hamburg, Germany

Abstract: We consider the local behaviour of inexact Newton methods for the solution of a semismooth system of equations. In particular, we give a complete characterization of the Q-superlinear and Q-quadratic convergence of inexact Newton methods. We then apply these results to a particular semismooth system of equations arising from variational inequality problems, and present a globally and locally fast convergent algorithm for its solution.

Key words: Semismoothness, inexact Newton methods, variational inequality problems, global convergence, superlinear convergence, quadratic convergence.

1 INTRODUCTION

Consider the nonlinear system of equations

G(x) = 0,

with G : ℝ^n → ℝ^n. Solving this system of equations is a well-understood problem if the operator G is continuously differentiable; see, e.g., [6]. However, many mathematical problems and several applications lead to a system of equations with a nonsmooth operator G; see, e.g., [23]. Of course, these problems are much more difficult to solve, but there is a growing interest in finding efficient methods which are able to handle them. In this paper we focus on the case in which the system G(x) = 0 is just semismooth (see next section). We consider extensions of the classical inexact Newton method which, in the smooth case, is known to be among the most efficient algorithms for the solution of large-scale systems. The convergence theory presented here completes the recent results by Martínez and Qi [20].
In particular, we give a complete characterization of the Q-superlinear and Q-quadratic convergence of any sequence generated by the nonsmooth inexact Newton method. These characterizations generalize the classical results by Dembo, Eisenstat and Steihaug [4] from the smooth to the semismooth case.

One of the source problems which leads to nonsmooth equations is the variational inequality problem, VIP(X, F) for short. This is to find a vector x* in a feasible set X ⊆ ℝ^n such that

F(x*)^T (x − x*) ≥ 0 for all x ∈ X,

where F : ℝ^n → ℝ^n. Using a function introduced by Fischer [10], we will reformulate the optimality conditions of VIP(X, F) as a semismooth system of equations and apply an inexact Newton method to this particular system. This algorithm will be shown to enjoy good global and local convergence properties. In particular, we are able to establish global convergence for a wide class of problems as well as to prove Q-superlinear and Q-quadratic convergence results without assuming strict complementarity at the solution.

The organization of the paper is as follows. In the next section we briefly review some basic properties of semismooth functions. In Section 3 we present the local convergence results for an inexact Newton method applied to a general semismooth equation G(x) = 0. This theory will be used in Section 4 in order to show some strong convergence properties of a new algorithm for solving variational inequality problems.

Notation. We say that a function G : ℝ^n → ℝ^n is a C^k-function if G is k times continuously differentiable. A C^k-function G is called an LC^k-function if its kth derivative is locally Lipschitz-continuous. The Jacobian of a C^1-function G : ℝ^n → ℝ^n at a point x ∈ ℝ^n is denoted by G'(x). Finally, ‖·‖ indicates the Euclidean norm or the subordinate matrix norm.

2 PROPERTIES OF SEMISMOOTH FUNCTIONS

Let G : ℝ^n → ℝ^n be a locally Lipschitzian function. Then G is almost everywhere differentiable by Rademacher's theorem. Let us denote by D_G the set of points at which G is differentiable and by G'(x; d) the directional derivative of G at x in the direction d. Then the B-subdifferential of G at a point x ∈ ℝ^n is defined as

∂_B G(x) := { H ∈ ℝ^{n×n} : there exists {x^k} → x with x^k ∈ D_G and G'(x^k) → H }.

The generalized Jacobian by Clarke [2] is the convex hull of the B-subdifferential: ∂G(x) := conv ∂_B G(x). The following definition of a semismooth operator is due to Qi and Sun [25] and generalizes a concept by Mifflin [21] from functionals to vector-valued functions. The definition is also closely related to a similar concept suggested by Kummer [19].

Definition 2.1 Let G : ℝ^n → ℝ^n be locally Lipschitzian at x ∈ ℝ^n. G is said to be semismooth at x if the limit

lim_{H ∈ ∂G(x + t v'), v' → v, t ↓ 0} H v'

exists for every v ∈ ℝ^n.

We note that semismooth functions are known to be directionally differentiable, and that the directional derivative of G at x in the direction v is equal to the limit in Definition 2.1. Furthermore it can be shown that G is semismooth at x if and only if, for any d → 0 and any H ∈ ∂G(x + d),

Hd − G'(x; d) = o(‖d‖).

This last property motivates the following definition; see [25].

Definition 2.2 Suppose that G : ℝ^n → ℝ^n is semismooth at x ∈ ℝ^n. Then G is said to be strongly semismooth at x if, for any d → 0 and for any H ∈ ∂G(x + d), we have

Hd − G'(x; d) = O(‖d‖²).

Note that every C^1-function is semismooth and that every LC^1-function is strongly semismooth. For more details on semismooth functions, we refer the reader to [21, 24, 25, 12]. The following definition of a BD-regular vector plays a crucial role in establishing fast local convergence results of several iterative methods.

Definition 2.3 We say that a Lipschitzian function G : ℝ^n → ℝ^n is BD-regular at a point x ∈ ℝ^n if all elements in the B-subdifferential ∂_B G(x) are nonsingular.

The following result was proved by Qi [24].

Proposition 2.1 Assume that G : ℝ^n → ℝ^n is semismooth and that x* ∈ ℝ^n is a BD-regular solution of the system G(x) = 0. Then there is a neighbourhood Ω of x* and a constant c > 0 such that, for all x ∈ Ω and all H ∈ ∂_B G(x), H is nonsingular and ‖H^{-1}‖ ≤ c.

The next result is due to Pang and Qi [23] and plays an important role in establishing the superlinear rate of convergence of certain Newton-type methods.

Proposition 2.2 Assume that G : ℝ^n → ℝ^n is semismooth at x ∈ ℝ^n. Then

lim_{h → 0, H ∈ ∂G(x + h)} ‖G(x + h) − G(x) − Hh‖ / ‖h‖ = 0.

A corresponding result, for strongly semismooth functions, was established by Facchinei and Kanzow [8]; see also Fischer [12].

Proposition 2.3 Assume that G : ℝ^n → ℝ^n is strongly semismooth at x ∈ ℝ^n and directionally differentiable in a neighbourhood of x. Then

lim sup_{h → 0, H ∈ ∂G(x + h)} ‖G(x + h) − G(x) − Hh‖ / ‖h‖² < ∞.

The following is a characterization theorem for superlinear convergence due to Pang and Qi [23]. It generalizes the famous characterization theorem by Dennis and Moré [5] from the smooth to the nonsmooth case.

Theorem 2.4 Let G : ℝ^n → ℝ^n be a locally Lipschitz-continuous function in the open convex set D ⊆ ℝ^n, and let {x^k} ⊆ D be any sequence converging to x* ∈ D with x^k ≠ x* for all k. If G is semismooth and BD-regular at x*, then {x^k} converges Q-superlinearly to x* and G(x*) = 0 if and only if

lim_{k → ∞} ‖G(x^k) + H_k d^k‖ / ‖d^k‖ = 0,

where H_k ∈ ∂_B G(x^k) and d^k = x^{k+1} − x^k.

There is a similar result, due to Facchinei and Kanzow [8], which characterizes the quadratic rate of convergence.

Theorem 2.5 Let G : ℝ^n → ℝ^n be a locally Lipschitz-continuous function in the open convex set D ⊆ ℝ^n, and let {x^k} ⊆ D be any sequence converging to x* ∈ D with x^k ≠ x* for all k. If G is strongly semismooth and BD-regular at x*, and directionally differentiable in a neighbourhood of x*, then {x^k} converges Q-quadratically to x* and G(x*) = 0 if and only if

lim sup_{k → ∞} ‖G(x^k) + H_k d^k‖ / ‖d^k‖² < ∞,

where H_k ∈ ∂_B G(x^k) and d^k = x^{k+1} − x^k.

Finally, we state a result which will be useful in Section 4.

Proposition 2.6 Let G : ℝ^n → ℝ^n be semismooth and x* ∈ ℝ^n be a BD-regular solution of the system G(x) = 0. Suppose that there are two sequences {x^k} and {d^k} such that x^k → x* and

lim_{k → ∞} ‖x^k + d^k − x*‖ / ‖x^k − x*‖ = 0.

Then

lim_{k → ∞} ‖G(x^k + d^k)‖ / ‖G(x^k)‖ = 0.

Proof. The result is actually due to Facchinei and Soares [9, Lemma 5.5], where, however, it has been stated only under the stronger assumption that all elements in the generalized Jacobian ∂G(x*) are nonsingular. It is not difficult to see that their proof goes through also under the weaker BD-regularity assumption, so we omit the details here. □
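The approximation property in Propositions 2.2 and 2.3 is easy to observe numerically. The following small sketch (an illustration added here, not part of the original paper) uses the strongly semismooth map G(x₁, x₂) = (min(x₁, x₂), x₁ + x₂²), which vanishes at the origin. For steps h with h₁ ≠ h₂ the map is differentiable at h, so the ordinary Jacobian G'(h) is an element of the generalized Jacobian appearing in Proposition 2.2, and the quotient tends to 0; in fact it decreases like O(‖h‖), consistent with the strong semismoothness bound of Proposition 2.3:

```python
import numpy as np

def G(x):
    # A strongly semismooth map: min(., .) is piecewise linear, the rest is smooth.
    return np.array([min(x[0], x[1]), x[0] + x[1] ** 2])

def jacobian(x):
    # Ordinary Jacobian at points with x[0] != x[1]; there it is an element
    # of the generalized Jacobian used in Propositions 2.2 and 2.3.
    row1 = [1.0, 0.0] if x[0] < x[1] else [0.0, 1.0]
    return np.array([row1, [1.0, 2.0 * x[1]]])

x = np.zeros(2)
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * np.array([1.0, -2.0])      # shrinking steps along a fixed direction
    H = jacobian(x + h)
    ratio = np.linalg.norm(G(x + h) - G(x) - H @ h) / np.linalg.norm(h)
    print(f"{t:.0e}  {ratio:.2e}")     # ratio shrinks proportionally to t
```

For this particular map the quotient equals 4t/√5 along the chosen direction, so each row of output is ten times smaller than the previous one.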

3 NONSMOOTH INEXACT NEWTON METHODS

First assume that G : ℝ^n → ℝ^n is a smooth function and consider the following inexact Newton method for solving the system of nonlinear equations G(x) = 0.

Algorithm 3.1 (Smooth inexact Newton method)

(S.0) Let x^0 ∈ ℝ^n, η_0 ≥ 0, and set k = 0.
(S.1) If G(x^k) = 0, stop.
(S.2) Find a step d^k ∈ ℝ^n such that G'(x^k) d^k = −G(x^k) + r^k, where the residual vector r^k ∈ ℝ^n satisfies the condition ‖r^k‖ ≤ η_k ‖G(x^k)‖.
(S.3) Choose η_{k+1} ≥ 0, set x^{k+1} := x^k + d^k, k := k + 1, and go to (S.1).

We summarize the main convergence results in the following theorem. For its proof, the reader is referred to the classical paper by Dembo, Eisenstat and Steihaug [4].

Theorem 3.1 Let x* ∈ ℝ^n be a solution of the system G(x) = 0. Assume that G is continuously differentiable in a neighbourhood of x* and that the Jacobian G'(x*) is nonsingular. Then the following statements hold:

(a) Let η̄ ∈ (0, 1) be arbitrary. Then there is an ε > 0 such that, if ‖x^0 − x*‖ ≤ ε and η_k ≤ η̄ for all k, the sequence {x^k} generated by Algorithm 3.1 is well-defined and converges Q-linearly to the solution x*.
(b) If the sequence {x^k} generated by Algorithm 3.1 converges to the solution x*, then the rate of convergence is Q-superlinear if and only if ‖r^k‖ = o(‖G(x^k)‖).
(c) If the sequence {x^k} generated by Algorithm 3.1 converges to the solution x* and if G' is Lipschitz-continuous in a neighbourhood of x*, then the rate of convergence is Q-quadratic if and only if ‖r^k‖ = O(‖G(x^k)‖²).

We note that the assumption ‖r^k‖ = o(‖G(x^k)‖) in Theorem 3.1 is satisfied if the forcing sequence {η_k} goes to 0. The assumption ‖r^k‖ = O(‖G(x^k)‖²) is satisfied if η_k = O(‖G(x^k)‖).

Next we assume that the operator G : ℝ^n → ℝ^n is locally Lipschitz-continuous. The following algorithm is a generalization of the smooth inexact Newton method 3.1 to the nonsmooth case, based on the generalized Jacobian by Clarke [2].
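As an illustration (added here; the test system is hypothetical and not from the paper), the following sketch implements Algorithm 3.1 on a small smooth system with solution x* = (1, 1), using the forcing term η_k = O(‖G(x^k)‖) so that, by Theorem 3.1 (c), Q-quadratic convergence can be expected. The inexact linear solve of step (S.2) is mimicked by taking the exact Newton step with an admissible residual r^k added:

```python
import numpy as np

def G(x):
    # Hypothetical smooth test system; its solution is x* = (1, 1).
    return np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])

def Gprime(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

def inexact_newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        g = G(x)
        norm_g = np.linalg.norm(g)
        if norm_g <= tol:                      # (S.1), with a practical tolerance
            break
        eta = min(0.5, norm_g)                 # forcing term eta_k = O(||G(x^k)||)
        # (S.2): solve G'(x) d = -G(x) + r with ||r|| <= eta * ||G(x)||;
        # here we simply add an admissible residual to the exact right-hand side.
        r = 0.5 * eta * norm_g * np.array([1.0, 0.0])
        d = np.linalg.solve(Gprime(x), -g + r)
        x = x + d                              # (S.3)
    return x

x = inexact_newton(np.array([2.0, 0.5]))
print(x)   # close to (1, 1)
```

In practice the residual r^k is not chosen explicitly; it is whatever remains after truncating an iterative linear solver once the test ‖r^k‖ ≤ η_k ‖G(x^k)‖ is met.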
Algorithm 3.2 (Nonsmooth inexact Newton method)

(S.0) Let x^0 ∈ ℝ^n, η_0 ≥ 0, and set k = 0.
(S.1) If G(x^k) = 0, stop.
(S.2) Select an element H_k ∈ ∂_B G(x^k). Find a step d^k ∈ ℝ^n such that H_k d^k = −G(x^k) + r^k, where the residual vector r^k ∈ ℝ^n satisfies the condition ‖r^k‖ ≤ η_k ‖G(x^k)‖.
(S.3) Choose η_{k+1} ≥ 0, set x^{k+1} := x^k + d^k, k := k + 1, and go to (S.1).

The following theorem contains the corresponding convergence properties.

Theorem 3.2 Assume that G is semismooth in a neighbourhood of x* and that x* is a BD-regular solution of the system G(x) = 0. Then the following statements hold:

(a) There are numbers η̄ > 0 and ε > 0 such that, if ‖x^0 − x*‖ ≤ ε and η_k ≤ η̄ for all k, the sequence {x^k} generated by Algorithm 3.2 is well-defined and converges Q-linearly to the solution x*.
(b) If the sequence {x^k} generated by Algorithm 3.2 converges to the solution x*, then the rate of convergence is Q-superlinear if and only if ‖r^k‖ = o(‖G(x^k)‖).
(c) If the sequence {x^k} generated by Algorithm 3.2 converges to the solution x* and if G is strongly semismooth in a neighbourhood of x*, then the rate of convergence is Q-quadratic if and only if ‖r^k‖ = O(‖G(x^k)‖²).

Proof. Part (a) has been shown by Martínez and Qi [20]. So we come to part (b). First assume that ‖r^k‖ = o(‖G(x^k)‖). Actually, under this assumption, it has also been shown in [20] that {x^k} converges Q-superlinearly to x*. Here, however, we give a different proof of this sufficient part by exploiting the characterization in Theorem 2.4. We first note that, in view of the boundedness of the sequence {H_k}, there is a constant c_1 > 0 such that

‖G(x^k) − r^k‖ = ‖H_k d^k‖ ≤ c_1 ‖d^k‖,

where the equality comes from the inexact Newton equation. Since ‖r^k‖ = o(‖G(x^k)‖) by assumption, we therefore have ‖G(x^k)‖ ≤ c_2 ‖d^k‖ for some constant c_2 > 0 and all k sufficiently large. Hence we get

‖r^k‖ / ‖d^k‖ ≤ c_2 ‖r^k‖ / ‖G(x^k)‖ → 0.

Since r^k = G(x^k) + H_k d^k, the Q-superlinear convergence of the sequence {x^k} to x* now follows from Theorem 2.4.

We now prove the converse direction. So assume that {x^k} converges to x* Q-superlinearly. Let us denote by e^k := x^k − x* the error vector at the kth iterate. From the inexact Newton equation G(x^k) + H_k d^k = r^k, we obtain the identity

r^k = [G(x^k) − G(x*) − H_k e^k] + [H_k e^{k+1}].

Dividing both sides by ‖e^k‖, we obtain from Proposition 2.2, the boundedness of the sequence {H_k} and the assumed Q-superlinear convergence of {x^k} to x* that

‖r^k‖ / ‖e^k‖ → 0.

In view of Proposition 3 in [23], however, there is a constant c_3 > 0 such that ‖G(x^k)‖ ≥ c_3 ‖e^k‖ for all k sufficiently large. Hence we have ‖r^k‖ = o(‖G(x^k)‖).

The proof of part (c) is similar to the one of part (b). Instead of Proposition 2.2 and Theorem 2.4 one has to use Proposition 2.3 and Theorem 2.5. We omit the details here. □

We stress that, concerning the linear convergence, there is one major difference between Theorems 3.1 and 3.2: In the smooth case we can take η̄ ∈ (0, 1) arbitrarily in order to prove local Q-linear convergence, whereas in the nonsmooth case Theorem 3.2 just states that there exists an η̄ > 0 such that the sequence {x^k} converges Q-linearly if η_k ≤ η̄ holds for all k. Martínez and Qi [20] showed by a counterexample that it is in general not sufficient to take η̄ ∈ (0, 1) arbitrarily in the nonsmooth case in order to prove Q-linear convergence of the sequence {x^k}. Note, however, that the superlinear and quadratic convergence results are the same for the smooth and nonsmooth inexact Newton methods.

4 APPLICATION TO VARIATIONAL INEQUALITIES

In the previous section we studied the local behaviour of truncated Newton schemes for the solution of semismooth systems of equations. We saw that the main results on the convergence rate of this class of methods carry over from the smooth to the semismooth case. However, combining global and superlinear convergence in the semismooth case turns out to be a much more difficult task than in the smooth case; see [20] and the remarks at the end of the previous section. In this section we consider a particular semismooth system of equations derived from the optimality conditions of variational inequalities. Using the structure of this system and the local theory developed in Section 3, we show how it is possible to overcome the difficulties just mentioned and to design, under mild assumptions, a globally convergent inexact Newton algorithm which compares favourably with existing algorithms for the same class of problems.

We consider the variational inequality problem VIP(X, F) as introduced in Section 1. Here we assume that the set X is given by

X := { x ∈ ℝ^n | g(x) ≥ 0, h(x) = 0 },

where g : ℝ^n → ℝ^m and h : ℝ^n → ℝ^p. Instead of reviewing the large number of existing solution methods for VIP(X, F), we refer the interested reader to the survey papers [14, 13] and the references therein. Consider the following Karush-Kuhn-Tucker (KKT) optimality conditions of VIP(X, F):

F(x) − g'(x)^T y + h'(x)^T z = 0,
h(x) = 0,
g(x) ≥ 0, y ≥ 0, y^T g(x) = 0. (1)

If x* solves VIP(X, F) and if a certain constraint qualification holds (e.g., the linear independence of the gradients of the active constraints), then multiplier vectors y* ∈ ℝ^m and z* ∈ ℝ^p exist such that the vector w* := (x*, y*, z*) ∈ ℝ^n × ℝ^m × ℝ^p is a KKT-point of VIP(X, F), i.e., satisfies the KKT-conditions (1).
Conversely, if all component functions g_i are concave and all component functions h_j are affine, so that X is a convex set, then the x-part of every KKT-point w* := (x*, y*, z*) is a solution of VIP(X, F); see [14]. Moreover, if X is a polyhedral set, then the KKT-conditions are both necessary and sufficient for a point to be a solution of VIP(X, F), without assuming any constraint qualification.

We now want to rewrite the KKT-conditions as a nonlinear system of equations. To this end, we make use of the function φ : ℝ² → ℝ defined by

φ(a, b) := sqrt(a² + b²) − a − b.

This function was introduced by Fischer [10] in 1992 and, since then, has become quite popular in the fields of linear and nonlinear complementarity, constrained optimization and variational inequality problems; see, e.g., [3, 8, 9, 10, 11, 12, 15, 16, 17, 18, 26, 27]. The main property of this function is the following characterization of its zeros:

φ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0.

For this reason, the KKT-conditions (1) can equivalently be written as the nonlinear system of equations

Φ(w) := Φ(x, y, z) = 0, (2)

where Φ : ℝ^n × ℝ^m × ℝ^p → ℝ^n × ℝ^m × ℝ^p is defined by

Φ(w) := Φ(x, y, z) := ( L(x, y, z), h(x), φ(g(x), y) ),

with

L(x, y, z) := F(x) − g'(x)^T y + h'(x)^T z,
φ(g(x), y) := ( φ(g_1(x), y_1), ..., φ(g_m(x), y_m) )^T ∈ ℝ^m.

Note that the function φ is not differentiable at the origin, so that the system (2) is a nonsmooth reformulation of the KKT-conditions (1). However, it can easily be seen that Φ is a locally Lipschitz-continuous operator under suitable assumptions on F, g and h. More precisely, the following stronger properties can be shown; see [7].
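The characterization of the zeros of Fischer's function can be checked directly. The following small sketch (our illustration, not part of the original text) evaluates φ at points satisfying and violating the complementarity conditions:

```python
import math

def phi(a, b):
    # Fischer's function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
    return math.sqrt(a * a + b * b) - a - b

# phi(a, b) = 0 exactly for complementary pairs: a >= 0, b >= 0, a * b = 0.
for a, b in [(0.0, 0.0), (3.0, 0.0), (0.0, 2.5)]:
    assert abs(phi(a, b)) < 1e-12

# Any violation (a negative entry, or a * b > 0) gives phi(a, b) != 0.
for a, b in [(-1.0, 0.0), (0.0, -2.0), (1.0, 1.0)]:
    assert abs(phi(a, b)) > 1e-8

print("zero characterization verified")
```

Note also that φ(a, b) > 0 when a < 0 or b < 0, while φ(a, b) < 0 when both a > 0 and b > 0, which is why squaring φ yields a smooth merit function as discussed below.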

Proposition 4.1 The following statements hold:

(a) Assume that F is a C^1-function and g, h are C^2-functions. Then Φ is semismooth.
(b) Assume that F is an LC^1-function and g, h are LC^2-functions. Then Φ is strongly semismooth.

Exploiting results concerning the composition of semismooth functions [21] and strongly semismooth functions [12], respectively, Proposition 4.1 can be shown to hold under weaker differentiability assumptions on F, g and h. However, we omit these details for the sake of simplicity. Now define the merit function Ψ : ℝ^n × ℝ^m × ℝ^p → ℝ by

Ψ(w) := (1/2) Φ(w)^T Φ(w) = (1/2) ‖Φ(w)‖².

Its smoothness properties are summarized in the following result; see [7].

Proposition 4.2 Assume that F is a C^1-function and g, h are C^2-functions. Then Ψ is continuously differentiable, and ∇Ψ(w) = H^T Φ(w) with H ∈ ∂Φ(w) arbitrary.

We note that the differentiability of the merit function Ψ is quite surprising and plays a crucial role in the globalization strategy of the following inexact Newton method applied to the system of equations Φ(w) = 0.

Algorithm 4.1 (Nonsmooth inexact Newton method for VIP(X, F))

(S.0) (Data) Choose w^0 = (x^0, y^0, z^0) ∈ ℝ^n × ℝ^m × ℝ^p, ρ > 0, p > 2, σ ∈ (0, 1/2), η_0 ≥ 0, ε ≥ 0, and set k = 0.
(S.1) (Termination criterion) If ‖∇Ψ(w^k)‖ ≤ ε, stop.
(S.2) (Search direction calculation) Select an element H_k ∈ ∂_B Φ(w^k). Find a solution d^k of the system

H_k d = −Φ(w^k) + r^k (3)

such that ‖r^k‖ ≤ η_k ‖Φ(w^k)‖. If this is not possible, or if the condition

∇Ψ(w^k)^T d^k ≤ −ρ ‖d^k‖^p (4)

is not satisfied, set d^k = −∇Ψ(w^k).
(S.3) (Line search) Find the smallest i_k ∈ {0, 1, 2, ...} such that

Ψ(w^k + 2^{−i_k} d^k) ≤ Ψ(w^k) + σ 2^{−i_k} ∇Ψ(w^k)^T d^k. (5)

(S.4) (Update) Set w^{k+1} = w^k + 2^{−i_k} d^k, k := k + 1, choose η_k ≥ 0, and go to (S.1).

The stopping criterion at Step (S.1) can be substituted by any other suitable criterion without changing the properties of the algorithm.
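To make the steps concrete, here is a runnable sketch of Algorithm 4.1 (our illustration; the problem data are hypothetical and not from the paper) on the one-dimensional variational inequality with X = { x ∈ ℝ : x ≥ 0 } and F(x) = x − 1, so that g(x) = x, there are no equality constraints, w = (x, y), Φ(w) = (F(x) − y, φ(x, y)), and the KKT-point is w* = (1, 0). We take η_k = 0, i.e., exact linear solves in step (S.2):

```python
import numpy as np

def phi(a, b):
    return np.hypot(a, b) - a - b

def Phi(w):
    x, y = w
    return np.array([x - 1.0 - y, phi(x, y)])

def H_element(w):
    # One element of the B-subdifferential of Phi at w; at the kink (0, 0)
    # we pick the element generated by approach along the direction (1, 1).
    x, y = w
    r = np.hypot(x, y)
    if r > 0.0:
        da, db = x / r - 1.0, y / r - 1.0
    else:
        da = db = 1.0 / np.sqrt(2.0) - 1.0
    return np.array([[1.0, -1.0], [da, db]])

def algorithm41(w, rho=1e-8, p=2.1, sigma=1e-4, eps=1e-10, max_iter=100):
    for _ in range(max_iter):
        Phw, H = Phi(w), H_element(w)
        grad = H.T @ Phw                          # grad Psi(w) = H^T Phi(w)
        if np.linalg.norm(grad) <= eps:           # (S.1)
            break
        d = np.linalg.solve(H, -Phw)              # (S.2), eta_k = 0
        if grad @ d > -rho * np.linalg.norm(d) ** p:
            d = -grad                             # fallback if test (4) fails
        psi, t = 0.5 * Phw @ Phw, 1.0
        while 0.5 * Phi(w + t * d) @ Phi(w + t * d) > psi + sigma * t * (grad @ d):
            t *= 0.5                              # (S.3): Armijo test (5)
        w = w + t * d                             # (S.4)
    return w

w = algorithm41(np.array([0.0, 0.0]))
print(w)   # close to the KKT-point (1, 0)
```

For this tiny instance the fallback direction is never needed and, in line with part (c) of Theorem 4.3, the undamped semismooth Newton step is accepted near the solution.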
Note that the above algorithm makes use of the existence of ∇Ψ in several places: in the termination criterion, in test (4) in order to find out whether or not the inexact Newton direction d^k is a "good" descent direction, in taking d^k = −∇Ψ(w^k) if this test is not satisfied, and, finally, in the Armijo line search test. In what follows, as usual in analyzing the behaviour of algorithms, we shall assume that ε = 0 and that the algorithm produces an infinite sequence of points. The following result summarizes the main convergence properties of the algorithm.

Theorem 4.3 Let F be a C^1-function and g and h be C^2-functions. Assume that ‖r^k‖ ≤ η_k ‖Φ(w^k)‖, where {η_k} is a sequence of positive numbers such that η_k ≤ η̄ for every k with an arbitrary η̄ ∈ (0, 1). Then the following assertions are valid:

(a) Each accumulation point of the sequence {w^k} generated by the algorithm is a stationary point of Ψ.
(b) If the sequence {w^k} has an accumulation point, say w*, which is an isolated KKT-point of VIP(X, F), then {w^k} → w*.
(c) If the sequence {w^k} has an accumulation point, say w*, which is a BD-regular solution of the system Φ(w) = 0, then {w^k} → w*. Moreover, if {η_k} → 0, then {w^k} converges to w* Q-superlinearly. Furthermore, if η_k = O(‖Φ(w^k)‖), if F is an LC^1-function and if g and h are LC^2-functions, then the convergence rate is Q-quadratic.

Proof. First, part (a) will be proved by contradiction. Without loss of generality, it can be assumed that the direction is always given by (3). In fact, if, for an infinite set of indices K, we have d^k = −∇Ψ(w^k) for all k ∈ K, then any limit point is a stationary point of Ψ by well-known results (see, e.g., Proposition 1.16 in [1]). Suppose now, renumbering if necessary, that {w^k} → w* and that ∇Ψ(w*) ≠ 0. Since the direction d^k always satisfies (3), we get

‖Φ(w^k) − r^k‖ = ‖H_k d^k‖ ≤ ‖H_k‖ ‖d^k‖, (6)

which yields

‖d^k‖ ≥ ‖Φ(w^k) − r^k‖ / ‖H_k‖. (7)

Note that ‖H_k‖ cannot be 0. Otherwise, (6) would imply Φ(w^k) − r^k = 0, which, in turn, since ‖r^k‖ ≤ η_k ‖Φ(w^k)‖ with η_k < 1, is only possible if ‖Φ(w^k)‖ = 0, so that w^k would be a stationary point and the algorithm would have stopped. We now show that

0 < β_1 ≤ ‖d^k‖ ≤ β_2 (8)

for some positive β_1 and β_2. In fact, if, for some subsequence K, {‖d^k‖}_K → 0, we have from (7) that {‖Φ(w^k) − r^k‖}_K → 0, because H_k is bounded on the bounded sequence {w^k} by known properties of the generalized Jacobian. But then, by continuity and the fact that

‖Φ(w^k) − r^k‖ ≥ ‖Φ(w^k)‖ − ‖r^k‖ ≥ ‖Φ(w^k)‖ − η_k ‖Φ(w^k)‖ ≥ (1 − η̄) ‖Φ(w^k)‖,

we get Φ(w*) = 0. Thus, w* is a solution of the variational inequality problem. This contradicts the assumption ∇Ψ(w*) ≠ 0. On the other hand, ‖d^k‖ cannot be unbounded because, taking into account that ∇Ψ(w^k) is bounded and p > 2, this would contradict (4).
Then, since (5) holds at each iteration and Ψ is bounded from below, we have that {Ψ(w^{k+1}) − Ψ(w^k)} → 0, which implies, by the line search test,

{2^{−i_k} ∇Ψ(w^k)^T d^k} → 0. (9)

We want to show that 2^{−i_k} is bounded away from 0. Suppose the contrary. Then, subsequencing if necessary, we have that {2^{−i_k}} → 0, so that at each iteration the stepsize is reduced at least once and (5) gives

[Ψ(w^k + 2^{−(i_k − 1)} d^k) − Ψ(w^k)] / 2^{−(i_k − 1)} > σ ∇Ψ(w^k)^T d^k. (10)

With regard to (8), we can assume, subsequencing if necessary, that {d^k} → d̄ ≠ 0, so that, passing to the limit in (10), we get

∇Ψ(w*)^T d̄ ≥ σ ∇Ψ(w*)^T d̄. (11)

On the other hand, we also have, by (4), that ∇Ψ(w*)^T d̄ ≤ −ρ ‖d̄‖^p < 0, which contradicts (11). Hence 2^{−i_k} is bounded away from 0. But then (9) and (4) imply {d^k} → 0, which contradicts (8). Therefore, ∇Ψ(w*) = 0 must be valid.

The proof of point (b) is identical to the proof of Theorem 3.1 (b) in [3] and can therefore be omitted. We note that part (b) can also be proved by exploiting Lemma 4.10 in Moré and Sorensen [22].

We now pass to the proof of point (c). The fact that the whole sequence {w^k} converges to w* follows by part (b), noting that the BD-regularity assumption implies, by [23, Proposition 3], that w* is an isolated solution of the system Φ(w) = 0 and hence also an isolated KKT-point of VIP(X, F). Then, we first prove that locally the direction is always the solution of system (3), and then that eventually the stepsize of one satisfies the line search test (5), so that the algorithm eventually reduces to the undamped inexact Newton method and the assertions on the convergence rate readily follow from Theorem 3.2 and Proposition 4.1.

Since {w^k} converges to a BD-regular solution of the system Φ(w) = 0, we have, by Proposition 2.1, that there exists a positive number C such that ‖H_k^{-1}‖ ≤ C for every k. Then, taking into account that we can write, for every vector v, ‖v‖ ≤ ‖H_k^{-1}‖ ‖H_k v‖, we get

m ‖v‖ ≤ ‖H_k v‖ for all k and all v, (12)

where m = 1/C. Furthermore, we also have, from the boundedness of the generalized Jacobian on bounded sets, that there is a positive constant M such that

‖H_k v‖ ≤ M ‖v‖ for all k and all v. (13)

We now note that, since H_k is nonsingular for k sufficiently large, system (3) always admits a solution in the sense that ‖r^k‖ ≤ η_k ‖Φ(w^k)‖. We want to show that this solution satisfies, for some positive ρ_1, the condition

∇Ψ(w^k)^T d^k ≤ −ρ_1 ‖d^k‖². (14)

Using (3), (12), and (13), we can write

m ‖d^k‖ ≤ ‖H_k d^k‖ = ‖Φ(w^k) − r^k‖ ≤ M ‖d^k‖ + ‖r^k‖,

which, taking into account that η_k tends to 0 so that ‖r^k‖ = o(‖Φ(w^k)‖), gives

(m/2) ‖d^k‖ ≤ ‖Φ(w^k)‖ ≤ 2M ‖d^k‖ (15)

for all k sufficiently large. But then, since ∇Ψ(w^k) can be written as H_k^T Φ(w^k) by Proposition 4.2, we get from (3) and (15)

∇Ψ(w^k)^T d^k = −‖Φ(w^k)‖² + Φ(w^k)^T r^k
              ≤ −(m²/4) ‖d^k‖² + o(‖Φ(w^k)‖²) (16)
              = −(m²/4) ‖d^k‖² + o(‖d^k‖²)
              ≤ −(m²/8) ‖d^k‖²

for all k sufficiently large. Then (14) follows from (16) by taking ρ_1 = m²/8. But now, noting that {‖d^k‖} → 0, it is easy to see that (14) eventually implies (4) for any p > 2 and any positive ρ. To complete the proof of the theorem it only remains to show that eventually the stepsize determined by the Armijo test (5) is 1, that is, that eventually i_k = 0. The statements on the local rate of convergence then follow immediately from Theorem 3.2.
The main step in order to show that i_k = 0 is eventually accepted by the Armijo test (5) is to show that there is a constant θ > 0 such that eventually

Ψ(w^k) + σ ∇Ψ(w^k)^T d^k ≥ θ Ψ(w^k). (17)

Taking into account the definition of Ψ, the fact that we can write ∇Ψ(w^k) = H_k^T Φ(w^k) with H_k being the matrix from (3), and the fact that d^k satisfies (3) with ‖r^k‖ ≤ η_k ‖Φ(w^k)‖, we obtain from the Cauchy-Schwarz inequality:

Ψ(w^k) + σ ∇Ψ(w^k)^T d^k = Ψ(w^k) − σ ‖Φ(w^k)‖² + σ Φ(w^k)^T r^k
                           ≥ Ψ(w^k) − σ ‖Φ(w^k)‖² − σ ‖Φ(w^k)‖ ‖r^k‖
                           ≥ Ψ(w^k) − σ (1 + η_k) ‖Φ(w^k)‖²
                           = (1 − 2σ − 2σ η_k) Ψ(w^k).

Hence there is a constant θ > 0 such that (17) holds for all k sufficiently large, because σ ∈ (0, 1/2) and η_k → 0 by assumption. Using part (b) of Theorem 3.2 with G = Φ, we get from Proposition 2.6 that the condition Ψ(w^k + d^k) ≤ θ Ψ(w^k) is satisfied for all k sufficiently large. In particular, it then follows from (17) that the Armijo test (5) is eventually satisfied with i_k = 0. This completes the proof. □

We stress that, for the global convergence part of the above theorem, it was sufficient to choose η̄ ∈ (0, 1) arbitrarily. The above convergence theorem raises two questions:

- Under what assumptions is a KKT-point w* of VIP(X, F) a BD-regular solution (and hence an isolated solution) of the system Φ(w) = 0?
- Under what assumptions is a stationary point of Ψ a KKT-point of VIP(X, F) (and hence a solution of VIP(X, F) itself under certain assumptions)?

The remaining part of this section is devoted to these questions. Let w = (x, y, z) ∈ ℝ^n × ℝ^m × ℝ^p be an arbitrary vector, and let us introduce the following two index sets (where I := {1, ..., m} indexes the inequality constraints and J := {1, ..., p} the equality constraints):

I_0(w) := { i ∈ I | g_i(x) = 0 and y_i ≥ 0 },
I_+(w) := { i ∈ I | g_i(x) = 0 and y_i > 0 }.

Note that, if w = w* is a KKT-point of VIP(X, F), then I_0(w*) is the index set of all active constraints, whereas I_+(w*) contains the indices of the strongly active constraints. Obviously, we have I_+(w) ⊆ I_0(w) at an arbitrary point w, and I_+(w*) = I_0(w*) if and only if strict complementarity holds at a KKT-point w*. The following result gives a sufficient condition for a KKT-point w* of VIP(X, F) to be a BD-regular solution of the equation Φ(w) = 0, and therefore guarantees that parts (b) and (c) of Theorem 4.3 are applicable under the assumptions of this result.

Theorem 4.4 Let F be a C^1-function and g and h be C^2-functions. Let w* := (x*, y*, z*) ∈ ℝ^n × ℝ^m × ℝ^p be a KKT-point of VIP(X, F). Suppose that

(a) v^T ∇_x L(w*) v > 0 for all v ∈ ℝ^n, v ≠ 0, such that ∇h_j(x*)^T v = 0 (j ∈ J) and ∇g_i(x*)^T v = 0 (i ∈ I_+(w*));
(b) the gradients ∇h_j(x*) (j ∈ J) and ∇g_i(x*) (i ∈ I_0(w*)) are linearly independent.

Then all elements H ∈ ∂Φ(w*) are nonsingular; in particular, w* is a BD-regular solution of the system Φ(w) = 0.

Proof. See Jiang [16] or [7]. □

We note that the above theorem holds without assuming strict complementarity. Hence, Theorem 4.3 guarantees local Q-superlinear and Q-quadratic convergence of the sequence {w^k} even if this assumption is not satisfied. We now turn to the question under what assumptions a stationary point of Ψ is a KKT-point of VIP(X, F). We first consider the general problem.
Theorem 4.5 Let F be a C^1-function and g and h be C^2-functions. Let w* := (x*, y*, z*) ∈ ℝ^n × ℝ^m × ℝ^p be a stationary point of Ψ. Then w* is a KKT-point of VIP(X, F) if one of the following two assumptions holds:

(1) (a) v^T ∇_x L(w*) v > 0 for all v ∈ ℝ^n, v ≠ 0, such that ∇h_j(x*)^T v = 0 (j ∈ J) and ∇g_i(x*)^T v = 0 (i ∈ I_0(w*)); and
    (b) the gradients ∇h_j(x*) (j ∈ J) and ∇g_i(x*) (i ∈ I_0(w*)) are linearly independent; or
(2) (a) v^T ∇_x L(w*) v > 0 for all v ∈ ℝ^n, v ≠ 0, such that ∇h_j(x*)^T v = 0 (j ∈ J) and ∇g_i(x*)^T v = 0 (i ∈ I_+(w*)); and
    (b) the gradients ∇h_j(x*) (j ∈ J) and ∇g_i(x*) (i ∈ I_+(w*)) are linearly independent.

Proof. See [7]. □

Note that the difference between assumptions (1) and (2) of Theorem 4.5 is that assumption (1) (a) is weaker than assumption (2) (a), but assumption (1) (b) is stronger than assumption (2) (b). If F is monotone, i.e., if

(x − y)^T [F(x) − F(y)] ≥ 0 for all x, y ∈ ℝ^n,

and if the feasible set X is polyhedral, one can prove the following result.

Theorem 4.6 Assume that F is a monotone C^1-function and that X is polyhedral. Let w* be a stationary point of Ψ. Then w* is a KKT-point of VIP(X, F) if one of the following conditions is satisfied:

(1) The set X is bounded.
(2) The set X is contained in the nonnegative orthant ℝ^n_+.

Proof. See [7]. □

In particular, if the polyhedral set X is given in the standard form X = { x ∈ ℝ^n | Ax = b, x ≥ 0 } for some matrix A ∈ ℝ^{m×n} and a vector b ∈ ℝ^m, then assumption (2) of Theorem 4.6 is satisfied. Theorem 4.6 therefore says that, in most cases, our algorithm is able to solve monotone variational inequalities over polyhedral sets (recall that in this case the KKT-conditions and problem VIP(X, F) itself are completely equivalent).

5 CONCLUDING REMARKS

In this paper, we first investigated the local rate of convergence of inexact Newton methods for a semismooth system of equations, and gave a complete characterization of its Q-superlinear and Q-quadratic convergence. We then applied these results to variational inequalities and presented a globally and locally fast convergent algorithm for the solution of VIP(X, F). We think that this method has several interesting properties: it requires only the inexact solution of one linear system at each iteration, it converges (at least) Q-superlinearly even if strict complementarity is not satisfied at a solution, and it has stronger global convergence properties than many other methods; see [14, 13]. We therefore believe that this algorithm could be a very powerful and efficient tool for solving variational inequalities, especially large-scale ones. Preliminary numerical results obtained for a set of small test problems are very promising.

References

[1] D.P. Bertsekas: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York.
[2] F.H. Clarke: Optimization and Nonsmooth Analysis. Wiley, New York, 1983 (reprinted by SIAM, Philadelphia, 1990).
[3] T. De Luca, F. Facchinei and C. Kanzow: A semismooth equation approach to the solution of nonlinear complementarity problems. DIS Technical Report 01.95, Università di Roma "La Sapienza", Roma, Italy, January 1995 (revised July 1995).
[4] R.S. Dembo, S.C. Eisenstat and T. Steihaug: Inexact Newton methods. SIAM Journal on Numerical Analysis 19, 1982, pp. 400-408.
[5] J.E. Dennis, Jr., and J.J. Moré: A characterization of superlinear convergence and its application to quasi-Newton methods. Mathematics of Computation 28, 1974, pp. 549-560.
[6] J.E. Dennis, Jr., and R.B. Schnabel: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Englewood Cliffs, NJ.
[7] F. Facchinei, A. Fischer and C. Kanzow: A semismooth Newton method for variational inequalities: Theoretical results and preliminary numerical experience. Forthcoming paper.
[8] F. Facchinei and C. Kanzow: A nonsmooth inexact Newton method for the solution of large-scale nonlinear complementarity problems. Preprint 95, Institute of Applied Mathematics, University of Hamburg, Hamburg, Germany, May

[9] F. Facchinei and J. Soares: A new merit function for nonlinear complementarity problems and a related algorithm. DIS Technical Report 15.94, Università di Roma "La Sapienza", Roma, Italy.

[10] A. Fischer: A special Newton-type optimization method. Optimization 24, 1992, pp. 269–284.

[11] A. Fischer: A special Newton-type method for positive semidefinite linear complementarity problems. Preprint MATH-NM, Technical University of Dresden, Dresden, Germany, 1992 (revised 1993). Journal of Optimization Theory and Applications, to appear.

[12] A. Fischer: Solution of monotone complementarity problems with locally Lipschitzian functions. Preprint MATH-NM, Institute of Numerical Mathematics, Technical University of Dresden, Dresden, Germany, May

[13] M. Fukushima: Merit functions for variational inequality and complementarity problems. Technical Report, Nara Institute of Science and Technology, Nara, Japan, June

[14] P.T. Harker and J.-S. Pang: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Mathematical Programming 48, 1990, pp. 161–220.

[15] C. Geiger and C. Kanzow: On the resolution of monotone complementarity problems. Preprint 82, Institute of Applied Mathematics, University of Hamburg, Hamburg, Germany, April 1994 (revised February 1995). Computational Optimization and Applications, to appear.

[16] H. Jiang: Local properties of solutions of nonsmooth variational inequalities. Optimization 33, 1995, pp. 119–132.

[17] H. Jiang and L. Qi: A nonsmooth equations approach to nonlinear complementarities. Applied Mathematics Report AMR 94/31, School of Mathematics, University of New South Wales, Sydney, Australia, October

[18] C. Kanzow: Global convergence properties of some iterative methods for linear complementarity problems. Preprint 72, Institute of Applied Mathematics, University of Hamburg, Hamburg, Germany, June 1993 (revised 1994). SIAM Journal on Optimization, to appear.
[19] B. Kummer: Newton's method for non-differentiable functions. In J. Guddat et al. (eds.): Mathematical Research, Advances in Mathematical Optimization, Akademie Verlag, Berlin, Germany, 1988, pp. 114–125.

[20] J.M. Martínez and L. Qi: Inexact Newton methods for solving nonsmooth equations. Applied Mathematics Report 93/9, School of Mathematics, University of New South Wales, Sydney, Australia, 1993 (revised April 1994).

[21] R. Mifflin: Semismooth and semiconvex functions in constrained optimization. SIAM Journal on Control and Optimization 15, 1977, pp. 957–972.

[22] J.J. Moré and D.C. Sorensen: Computing a trust region step. SIAM Journal on Scientific and Statistical Computing 4, 1983, pp. 553–572.

[23] J.-S. Pang and L. Qi: Nonsmooth equations: motivation and algorithms. SIAM Journal on Optimization 3, 1993, pp. 443–465.

[24] L. Qi: A convergence analysis of some algorithms for solving nonsmooth equations. Mathematics of Operations Research 18, 1993, pp. 227–244.

[25] L. Qi and J. Sun: A nonsmooth version of Newton's method. Mathematical Programming 58, 1993, pp. 353–368.

[26] P. Tseng: Growth behaviour of a class of merit functions for the nonlinear complementarity problem. Technical Report, Department of Mathematics, University of Washington, Seattle, May 1994 (revised March 1995). Journal of Optimization Theory and Applications, to appear.

[27] N. Yamashita and M. Fukushima: Modified Newton methods for solving semismooth reformulations of monotone complementarity problems. Technical Report TR-IS-95021, Nara Institute of Science and Technology, Nara, Japan, May
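The concluding remarks above describe a semismooth Newton iteration for a nonsmooth system G(x) = 0, with an inexact linear solve at each step. As a minimal illustrative sketch of that idea for the complementarity special case, the following reformulates an NCP via the Fischer-Burmeister function of [10] and takes full Newton steps with an exact solve (the limiting case of a zero forcing sequence; an inexact variant would stop an iterative linear solver at a relative residual η_k). The function names and the particular generalized-Jacobian element chosen at kink points are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - a - b vanishes exactly when
    # a >= 0, b >= 0 and a * b = 0, so the NCP becomes G(x) = 0.
    return np.sqrt(a**2 + b**2) - a - b

def semismooth_newton_ncp(F, JF, x0, tol=1e-10, max_iter=50):
    # Local semismooth Newton iteration for the reformulated NCP
    #   G_i(x) = phi(x_i, F_i(x)) = 0,  i = 1, ..., n.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        G = fischer_burmeister(x, Fx)
        if np.linalg.norm(G) <= tol:
            break
        # Pick one element H of the generalized Jacobian of G.  Away
        # from the kink (x_i, F_i(x)) = (0, 0) the partial derivatives
        # are classical; at the kink we take the subgradient obtained
        # from the direction (1, 1)/sqrt(2).
        r = np.sqrt(x**2 + Fx**2)
        kink = r < 1e-12
        safe_r = np.where(kink, 1.0, r)
        da = np.where(kink, 1.0 / np.sqrt(2.0) - 1.0, x / safe_r - 1.0)
        db = np.where(kink, 1.0 / np.sqrt(2.0) - 1.0, Fx / safe_r - 1.0)
        H = np.diag(da) + db[:, None] * JF(x)
        # Exact Newton step; an inexact method would instead stop an
        # iterative solver once ||G + H d|| <= eta_k * ||G||.
        x = x + np.linalg.solve(H, -G)
    return x
```

Applied, for instance, to the linear complementarity problem F(x) = Mx + q with M = [[2, 1], [1, 2]] and q = (-1, 1), the iteration converges from (1, 1) to the solution x = (1/2, 0) in a few steps.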


A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE Journal of Applied Analysis Vol. 6, No. 1 (2000), pp. 139 148 A CHARACTERIZATION OF STRICT LOCAL MINIMIZERS OF ORDER ONE FOR STATIC MINMAX PROBLEMS IN THE PARAMETRIC CONSTRAINT CASE A. W. A. TAHA Received

More information

CONVERGENCE BEHAVIOUR OF INEXACT NEWTON METHODS

CONVERGENCE BEHAVIOUR OF INEXACT NEWTON METHODS MATHEMATICS OF COMPUTATION Volume 68, Number 228, Pages 165 1613 S 25-5718(99)1135-7 Article electronically published on March 1, 1999 CONVERGENCE BEHAVIOUR OF INEXACT NEWTON METHODS BENEDETTA MORINI Abstract.

More information

A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems

A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems Math. Program., Ser. A (2013) 142:591 604 DOI 10.1007/s10107-012-0586-z SHORT COMMUNICATION A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems

More information

Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method

Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method Zhaosong Lu November 21, 2015 Abstract We consider the problem of minimizing a Lipschitz dierentiable function over a

More information

Lecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008

Lecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008 Lecture 13 Newton-type Methods A Newton Method for VIs October 20, 2008 Outline Quick recap of Newton methods for composite functions Josephy-Newton methods for VIs A special case: mixed complementarity

More information

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1

The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren SF2822 Applied Nonlinear Optimization Lecture 9: Sequential quadratic programming Anders Forsgren SF2822 Applied Nonlinear Optimization, KTH / 24 Lecture 9, 207/208 Preparatory question. Try to solve theory

More information

2 Chapter 1 rely on approximating (x) by using progressively ner discretizations of [0; 1] (see, e.g. [5, 7, 8, 16, 18, 19, 20, 23]). Specically, such

2 Chapter 1 rely on approximating (x) by using progressively ner discretizations of [0; 1] (see, e.g. [5, 7, 8, 16, 18, 19, 20, 23]). Specically, such 1 FEASIBLE SEQUENTIAL QUADRATIC PROGRAMMING FOR FINELY DISCRETIZED PROBLEMS FROM SIP Craig T. Lawrence and Andre L. Tits ABSTRACT Department of Electrical Engineering and Institute for Systems Research

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

More information

arxiv: v1 [math.oc] 1 Jul 2016

arxiv: v1 [math.oc] 1 Jul 2016 Convergence Rate of Frank-Wolfe for Non-Convex Objectives Simon Lacoste-Julien INRIA - SIERRA team ENS, Paris June 8, 016 Abstract arxiv:1607.00345v1 [math.oc] 1 Jul 016 We give a simple proof that the

More information

Semismooth Support Vector Machines

Semismooth Support Vector Machines Semismooth Support Vector Machines Michael C. Ferris Todd S. Munson November 29, 2000 Abstract The linear support vector machine can be posed as a quadratic program in a variety of ways. In this paper,

More information

Trust-region methods for rectangular systems of nonlinear equations

Trust-region methods for rectangular systems of nonlinear equations Trust-region methods for rectangular systems of nonlinear equations Margherita Porcelli Dipartimento di Matematica U.Dini Università degli Studi di Firenze Joint work with Maria Macconi and Benedetta Morini

More information

2 jian l. zhou and andre l. tits The diculties in solving (SI), and in particular (CMM), stem mostly from the facts that (i) the accurate evaluation o

2 jian l. zhou and andre l. tits The diculties in solving (SI), and in particular (CMM), stem mostly from the facts that (i) the accurate evaluation o SIAM J. Optimization Vol. x, No. x, pp. x{xx, xxx 19xx 000 AN SQP ALGORITHM FOR FINELY DISCRETIZED CONTINUOUS MINIMAX PROBLEMS AND OTHER MINIMAX PROBLEMS WITH MANY OBJECTIVE FUNCTIONS* JIAN L. ZHOUy AND

More information

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods

AM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 15: Nonlinear optimization Prof. John Gunnar Carlsson November 1, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I November 1, 2010 1 / 24

More information

Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization

Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä New Proximal Bundle Method for Nonsmooth DC Optimization TUCS Technical Report No 1130, February 2015 New Proximal Bundle Method for Nonsmooth

More information

ARE202A, Fall Contents

ARE202A, Fall Contents ARE202A, Fall 2005 LECTURE #2: WED, NOV 6, 2005 PRINT DATE: NOVEMBER 2, 2005 (NPP2) Contents 5. Nonlinear Programming Problems and the Kuhn Tucker conditions (cont) 5.2. Necessary and sucient conditions

More information

A polynomial time interior point path following algorithm for LCP based on Chen Harker Kanzow smoothing techniques

A polynomial time interior point path following algorithm for LCP based on Chen Harker Kanzow smoothing techniques Math. Program. 86: 9 03 (999) Springer-Verlag 999 Digital Object Identifier (DOI) 0.007/s007990056a Song Xu James V. Burke A polynomial time interior point path following algorithm for LCP based on Chen

More information

154 ADVANCES IN NONLINEAR PROGRAMMING Abstract: We propose an algorithm for nonlinear optimization that employs both trust region techniques and line

154 ADVANCES IN NONLINEAR PROGRAMMING Abstract: We propose an algorithm for nonlinear optimization that employs both trust region techniques and line 7 COMBINING TRUST REGION AND LINE SEARCH TECHNIQUES Jorge Nocedal Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL 60208-3118, USA. Ya-xiang Yuan State Key Laboratory

More information

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method

On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical

More information

On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs

On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs On Second-order Properties of the Moreau-Yosida Regularization for Constrained Nonsmooth Convex Programs Fanwen Meng,2 ; Gongyun Zhao Department of Mathematics National University of Singapore 2 Science

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information