An interior point type QP-free algorithm with superlinear convergence for inequality constrained optimization

Applied Mathematical Modelling 31 (2007) 1201–1212

Zhibin Zhu

Department of Computational Science and Mathematics, Guilin University of Electronic Technology, Guilin, PR China

E-mail address: zhuzb@guet.edu.cn

Received 1 May 2005; received in revised form 1 August 2005; accepted 12 April 2006. Available online 17 August 2006.

Abstract

A feasible interior point type algorithm is proposed for inequality constrained optimization. Iterates are prevented from leaving the interior of the feasible set. At each iteration, the algorithm only needs to solve three systems of linear equations with the same coefficient matrix. Under suitable conditions, a superlinear convergence rate is obtained. Some numerical results are also reported. © 2006 Elsevier Inc. All rights reserved.

MSC: 90C30; 65K10

Keywords: Inequality constrained optimization; System of linear equations; Interior point; Global convergence; Superlinear convergence

1. Introduction

In this paper, we consider the following nonlinear mathematical programming problem:

$$\min\ f(x) \quad \text{s.t.}\quad g(x) \le 0, \tag{1.1}$$

where $f:\mathbb{R}^n \to \mathbb{R}$ and $g:\mathbb{R}^n \to \mathbb{R}^m$ are continuously differentiable functions. Two types of methods with superlinear convergence are available for solving problem (1.1): successive quadratic programming (SQP) algorithms [1-6] and QP-free algorithms [7]. Since SQP algorithms must solve one or more quadratic programming subproblems at each iteration, their computational effort is, in general, very large. In [8,9], QP-free algorithms were proposed for problem (1.1), in which an iteration built on a linear system of the following form was considered:

$$H d^0 + \nabla_x L(x,\lambda) = 0, \qquad \mu_j \nabla g_j(x)^T d^0 + \lambda_j^0\, g_j(x) = 0, \quad j = 1,\dots,m, \tag{1.2}$$

where $L(x,\lambda) = f(x) + \sum_{j=1}^{m} \lambda_j g_j(x)$ is the Lagrangian, $H$ is an estimate of the Hessian of $L$, $x$ is the current estimate of a solution $x^*$, $d^0$ is the search direction, and $\lambda^0$ is the next estimate of the Kuhn–Tucker multiplier vector associated with $x^*$. In [8], $\mu \in \mathbb{R}^m$ and $\mu > 0$, but $\mu$ was not interpreted as the current multiplier estimate. At each iteration, the search direction $d$ was computed in two stages: first, a descent direction $d^0$ was defined by solving (1.2); then, by modifying $d^0$, a feasible descent direction $d$ was obtained. The initial point was an interior point, and the iterates were prevented from leaving the interior of the feasible set by a line search with truncation of the step. Under some assumptions, global convergence to a KKT point was proven, but superlinear convergence was lost owing to the truncation of the step.

In [9], $\mu > 0$ was viewed as the current estimate of the KKT multiplier vector associated with $x^*$. At each iteration, based on the direction $d^0$ solving (1.2), a revised direction $d^1$ is obtained by solving the linear system

$$H d + \nabla_x L(x,\lambda) = 0, \qquad \mu_j \nabla g_j(x)^T d + \lambda_j\, g_j(x) = -\mu_j \|d^0\|^{\nu}, \quad j = 1,\dots,m, \tag{1.3}$$

where $\nu > 2$, and the search direction $d$ is a convex combination of $d^0$ and $d^1$. In order to avoid the Maratos effect, a high-order correction direction $\tilde d$ is obtained by solving the linear least squares problem

$$\min\ \tfrac{1}{2}\|d\|^2 \quad \text{s.t.}\quad g_j(x+d) + \nabla g_j(x)^T d = \psi, \quad j \in I(x), \tag{1.4}$$

where $I(x)$ is a suitable approximate active set at $x$ and $\psi$ is a scalar variable. Obviously, (1.4) is equivalent to a linear system, but one with a different coefficient matrix from that of the linear systems (1.2) and (1.3). At the same time, an arc search was introduced to prevent the step from being truncated and the iterates from leaving the interior of the feasible set. Under some assumptions, global convergence was proven and, unlike in [8], locally superlinear convergence was obtained.

Recently, in [10], this type of QP-free method was further modified for solving problem (1.1); there the method is based on a nonsmooth equation reformulation of the KKT optimality conditions by means of the Fischer–Burmeister NCP function. Under assumptions weaker than those of [9], the superlinear convergence rate is obtained. However, at each iteration, three systems of linear equations and a linear least squares problem must be solved to obtain the search direction.

In this paper, a new QP-free algorithm is proposed to improve on the facts just pointed out. Unlike [9], the high-order correction direction $\tilde d$ is obtained by solving the linear system

$$H d + \nabla_x L(x,\lambda) = 0, \qquad \mu_j \nabla g_j(x)^T d + \lambda_j\, g_j(x) = \tilde g_j, \quad j = 1,\dots,m, \tag{1.5}$$

where $\tilde g = (\tilde g_j,\ j \in I)$ is a suitable vector (see the algorithm below). Thereby, the search direction is obtained by solving three systems of linear equations with the same coefficient matrix as (1.2), and the computational effort of the proposed algorithm is reduced further. Under suitable assumptions, global convergence as well as the superlinear convergence rate is obtained. Finally, some limited numerical experiments are reported which show that the algorithm is effective.

2. Description of algorithm

For the sake of simplicity, denote

$$I = \{1,\dots,m\}, \quad X = \{x \in \mathbb{R}^n : g(x) \le 0\}, \quad X^0 = \{x \in \mathbb{R}^n : g_j(x) < 0,\ j \in I\}, \quad I(x) = \{j \in I : g_j(x) = 0\}.$$
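All three linear systems solved at each iteration of the algorithm stated below share the coefficient-matrix structure of (1.2), so a single factorization can serve all of them. The following minimal NumPy sketch (illustrative only; the problem data and function names are hypothetical and not from the paper) shows how one such system is assembled and solved:

```python
import numpy as np

def solve_qp_free_system(H, N, g, mu, rhs_top, rhs_bot):
    """Solve one linear system with the shared coefficient matrix
        [ H       N ] [d  ]   [rhs_top]
        [ M N^T   G ] [lam] = [rhs_bot],
    where M = diag(mu) and G = diag(g).  Returns (d, lam)."""
    n = H.shape[0]
    K = np.block([[H, N],
                  [np.diag(mu) @ N.T, np.diag(g)]])
    sol = np.linalg.solve(K, np.concatenate([rhs_top, rhs_bot]))
    return sol[:n], sol[n:]

# Toy data: min x1^2 + x2^2  s.t.  g(x) = x1 + x2 - 1 <= 0,
# at the strictly feasible point x = (0.2, 0.2).
x = np.array([0.2, 0.2])
H = np.eye(2)                          # Hessian estimate of the Lagrangian
N = np.array([[1.0], [1.0]])           # columns are grad g_j(x)
g = np.array([x[0] + x[1] - 1.0])      # g(x) < 0: interior point
mu = np.array([1.0])                   # positive multiplier estimates

# Newton-like direction of (1.2): right-hand side (-grad f(x), 0)
d0, lam0 = solve_qp_free_system(H, N, g, mu, -2.0 * x, np.zeros(1))
```

In a real implementation one would factor the matrix once (e.g., by an LU factorization) and reuse the factors for the three right-hand sides of (2.2), (2.3) and (2.7) below.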

The following algorithm is proposed for solving problem (1.1).

Algorithm A

Step 0. Initialization and data: Given a starting point $x^0 \in X^0$ and an initial symmetric positive definite matrix $H_0 \in \mathbb{R}^{n\times n}$; choose parameters $\theta, \rho \in (0,1)$, $\alpha \in (0,\tfrac12)$, $2 < \sigma < \delta < 3$, $\bar\mu > 0$ and $0 < \mu_j^0 \le \bar\mu$, $j = 1,\dots,m$. Set $k = 0$.

Step 1. Computation of the Newton direction $d_0^k$: Let

$$N_k = (\nabla g_j(x^k),\ j\in I), \qquad G_k = \mathrm{diag}(g_j(x^k),\ j\in I), \qquad M_k = \mathrm{diag}(\mu_j^k,\ j\in I). \tag{2.1}$$

Solve the following system of linear equations:

$$\begin{pmatrix} H_k & N_k \\ M_k N_k^T & G_k \end{pmatrix} \begin{pmatrix} d \\ \lambda \end{pmatrix} = \begin{pmatrix} -\nabla f(x^k) \\ 0 \end{pmatrix}. \tag{2.2}$$

Let $(d_0^k, p^k)$ be the solution. If $d_0^k = 0$, STOP.

Step 2. Computation of the search direction:

2.1. Computation of the descent direction $d_1^k$: Solve the following system of linear equations:

$$\begin{pmatrix} H_k & N_k \\ M_k N_k^T & G_k \end{pmatrix} \begin{pmatrix} d \\ \lambda \end{pmatrix} = \begin{pmatrix} -\nabla f(x^k) \\ -\|d_0^k\|^{\delta}\, e \end{pmatrix}, \tag{2.3}$$

where $e = (1,\dots,1)^T \in \mathbb{R}^m$. Let $(d_1^k, \tilde p^k)$ be the solution.

2.2. Computation of the main search direction $d^k$: Establish a convex combination of $d_0^k$ and $d_1^k$:

$$d^k = (1-\beta_k)\, d_0^k + \beta_k\, d_1^k, \qquad \lambda^k = (1-\beta_k)\, p^k + \beta_k\, \tilde p^k, \tag{2.4}$$

where

$$\beta_k = \max\left\{\beta \in (0,1] : (1-\beta)\,\nabla f(x^k)^T d_0^k + \beta\,\nabla f(x^k)^T d_1^k \le \theta\,\nabla f(x^k)^T d_0^k\right\}. \tag{2.5}$$

2.3. Computation of the high-order correction direction $\tilde d^k$: Let

$$L_k = \{\, j \in I : g_j(x^k) \ge -\lambda_j^k \,\}. \tag{2.6}$$

Solve the following system of linear equations:

$$\begin{pmatrix} H_k & N_k \\ M_k N_k^T & G_k \end{pmatrix} \begin{pmatrix} d \\ \lambda \end{pmatrix} = \begin{pmatrix} -\nabla f(x^k) \\ -\|d_0^k\|^{\sigma}\, e + M_k\, \tilde g^k \end{pmatrix}, \tag{2.7}$$

where

$$\tilde g^k = (\tilde g_j^k,\ j\in I), \qquad \tilde g_j^k = \begin{cases} -g_j(x^k + d^k), & j \in L_k, \\ 0, & j \in I\setminus L_k. \end{cases} \tag{2.8}$$

Let $(\tilde d^k + d^k,\ \tilde\lambda^k + \lambda^k)$ be the solution. If $\|\tilde d^k\| > \|d^k\|$, set $\tilde d^k = 0$.

Step 3. The line search: Let

$$J_k = \{\, j \in I : g_j(x^k) \ge \lambda_j^k \,\}. \tag{2.9}$$

Compute $t_k$, the first number $t$ of the sequence $\{1, \tfrac12, \tfrac14, \tfrac18, \dots\}$ satisfying

$$f(x^k + td^k + t^2\tilde d^k) \le f(x^k) + \alpha t\,\nabla f(x^k)^T d^k, \tag{2.10}$$
$$g_j(x^k + td^k + t^2\tilde d^k) \le g_j(x^k), \quad j \in J_k, \tag{2.11}$$
$$g_j(x^k + td^k + t^2\tilde d^k) < 0, \quad j \in I\setminus J_k. \tag{2.12}$$

Step 4. Update: Obtain $H_{k+1}$ by updating the positive definite matrix $H_k$ using some quasi-Newton formula. Set

$$\mu_j^{k+1} = \min\{\max\{p_j^k,\ \|d_0^k\|\},\ \bar\mu\}, \quad j \in I, \tag{2.13}$$

and $x^{k+1} = x^k + t_k d^k + t_k^2 \tilde d^k$. Let $k = k+1$, and go back to step 1.
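Two pieces of the algorithm are easy to make concrete. Under the statement of (2.5) above, the maximization has a closed form: writing $a_0 = \nabla f(x^k)^T d_0^k < 0$ and $a_1 = \nabla f(x^k)^T d_1^k$, one may take $\beta_k = 1$ if $a_1 \le \theta a_0$, and $\beta_k = (1-\theta)(-a_0)/(a_1 - a_0)$ otherwise. The following hedged sketch implements this formula and the arc search of step 3, assuming callables `f` and `g` that return the objective value and the constraint vector (the names and interfaces are illustrative, not from the paper):

```python
import numpy as np

def beta_k(a0, a1, theta):
    """Closed form of (2.5): the largest beta in (0, 1] such that
    (1 - beta)*a0 + beta*a1 <= theta*a0, given a0 = grad_f^T d0 < 0."""
    if a1 <= theta * a0:
        return 1.0
    return (1.0 - theta) * (-a0) / (a1 - a0)

def arc_search(f, g, x, d, d_tilde, slope, J, alpha=0.25):
    """Step 3: pick the first t in {1, 1/2, 1/4, ...} for which the arc
    x + t*d + t^2*d_tilde satisfies (2.10)-(2.12).  J is the index set
    J_k; slope = grad_f(x)^T d; f and g are callables."""
    fx, gx = f(x), g(x)
    others = np.setdiff1d(np.arange(gx.size), J)   # the set I \ J_k
    t = 1.0
    for _ in range(60):             # Lemma 3.3 guarantees termination
        xt = x + t * d + t * t * d_tilde
        gt = g(xt)
        if (f(xt) <= fx + alpha * t * slope        # (2.10)
                and np.all(gt[J] <= gx[J])         # (2.11)
                and np.all(gt[others] < 0.0)):     # (2.12)
            return t, xt
        t *= 0.5
    raise RuntimeError("arc search did not terminate")
```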

3. Global convergence of the algorithm

In this section, it is first shown that Algorithm A is well defined. The following assumptions are supposed to hold throughout the paper.

H3.1 The interior $X^0$ of the feasible set $X$ is nonempty.
H3.2 The functions $f$, $g_j$ are continuously differentiable.
H3.3 The set $X \cap \{x \in \mathbb{R}^n : f(x) \le f(x^0)\}$ is compact.
H3.4 For all $x \in X$, the vectors $\{\nabla g_j(x),\ j \in I(x)\}$ are linearly independent.
H3.5 There exist two constants $0 < a \le b$ such that $a\|d\|^2 \le d^T H_k d \le b\|d\|^2$ for all $k$ and all $d \in \mathbb{R}^n$.

Lemma 3.1. Given any vector $x \in X$, any positive definite matrix $H \in \mathbb{R}^{n\times n}$, and any nonnegative vector $\mu = (\mu_j,\ j\in I) \in \mathbb{R}^m$ such that $\mu_j > 0$ for $j \in I(x)$, the matrix

$$F(x,H,\mu) = \begin{pmatrix} H & N(x) \\ M N(x)^T & G(x) \end{pmatrix}$$

is nonsingular, where $N(x) = (\nabla g_j(x),\ j\in I)$, $M = \mathrm{diag}(\mu_j,\ j\in I)$ and $G(x) = \mathrm{diag}(g_j(x),\ j\in I)$.

Proof. The proof of this lemma is similar to that of Lemma 3.1 in [9]. □

In view of Lemma 3.1, the linear systems (2.2), (2.3) and (2.7) have unique solutions.

Lemma 3.2.
(1) If $d_0^k = 0$, then $x^k$ is a KKT point of (1.1).
(2) If $d_0^k \ne 0$, then $d^k$ computed according to (2.4) is well defined, and

$$\nabla f(x^k)^T d_0^k \le -(d_0^k)^T H_k d_0^k < 0, \qquad \nabla f(x^k)^T d^k \le \theta\,\nabla f(x^k)^T d_0^k < 0,$$
$$\nabla g_j(x^k)^T d^k = -\frac{\lambda_j^k\, g_j(x^k) + \beta_k \|d_0^k\|^{\delta}}{\mu_j^k}, \quad j \in I. \tag{3.1}$$

Proof.
(1) This is obvious from the definition of a KKT point of (1.1).
(2) If $d_0^k \ne 0$, from (2.2) we have

$$\nabla f(x^k)^T d_0^k = -(d_0^k)^T H_k d_0^k - (N_k p^k)^T d_0^k = -(d_0^k)^T H_k d_0^k + \sum_{j\in I} \frac{(p_j^k)^2}{\mu_j^k}\, g_j(x^k) \le -(d_0^k)^T H_k d_0^k < 0,$$
$$\nabla g_j(x^k)^T d_0^k = -\frac{p_j^k}{\mu_j^k}\, g_j(x^k), \quad j \in I.$$

Thereby, from (2.5), there exists some $\bar\beta \in (0,1]$ such that $\beta_k = \bar\beta \in (0,1]$, i.e., $d^k$ is well defined. In addition, from (2.3), it follows that

$$\nabla g_j(x^k)^T d_1^k = -\frac{\tilde p_j^k\, g_j(x^k) + \|d_0^k\|^{\delta}}{\mu_j^k}, \quad j \in I.$$

Thus, from (2.4), it is clear that

$$\nabla g_j(x^k)^T d^k = (1-\beta_k)\,\nabla g_j(x^k)^T d_0^k + \beta_k\,\nabla g_j(x^k)^T d_1^k = -\frac{\lambda_j^k\, g_j(x^k) + \beta_k\|d_0^k\|^{\delta}}{\mu_j^k}, \quad j \in I,$$
$$\nabla f(x^k)^T d^k = (1-\beta_k)\,\nabla f(x^k)^T d_0^k + \beta_k\,\nabla f(x^k)^T d_1^k \le \theta\,\nabla f(x^k)^T d_0^k < 0.$$

The claim holds. □

Lemma 3.3. The line search in step 3 yields a step $t_k = (1/2)^{i}$ for some finite $i = i(k)$.

Proof. Firstly, for (2.10), we have

$$a_k := f(x^k + td^k + t^2\tilde d^k) - f(x^k) - \alpha t\,\nabla f(x^k)^T d^k = \nabla f(x^k)^T(td^k + t^2\tilde d^k) - \alpha t\,\nabla f(x^k)^T d^k + o(t) = (1-\alpha)\,t\,\nabla f(x^k)^T d^k + o(t).$$

Since $f$ is continuously differentiable, $\nabla f(x^k)^T d^k < 0$ and $\alpha \in (0,\tfrac12)$, there exists some $\bar t > 0$ such that $a_k \le 0$ for all $t \in [0,\bar t\,]$.

Secondly, for (2.11), from (2.9) and (3.1), we obtain, for $j \in J_k$ (so that $\lambda_j^k \le g_j(x^k) < 0$ and hence $-\lambda_j^k g_j(x^k) \le -g_j^2(x^k)$), that

$$b_{jk} := g_j(x^k + td^k + t^2\tilde d^k) - g_j(x^k) = t\,\nabla g_j(x^k)^T d^k + o(t) = -\frac{t\lambda_j^k\, g_j(x^k) + t\beta_k\|d_0^k\|^{\delta}}{\mu_j^k} + o(t) \le -\frac{t\, g_j^2(x^k) + t\beta_k\|d_0^k\|^{\delta}}{\mu_j^k} + o(t).$$

So there exists some $\bar t_j > 0$, $j \in J_k$, such that $b_{jk} \le 0$ for all $t \in [0,\bar t_j]$.

Thirdly, for (2.12), since $g_j$ is continuous and $g_j(x^k) < 0$, there exists some $\bar t_j > 0$, $j \notin J_k$, such that

$$g_j(x^k + td^k + t^2\tilde d^k) < 0, \quad t \in [0,\bar t_j].$$

Let $\hat t = \min\{\bar t,\ \bar t_j,\ j \in I\}$; then the claim holds. □

In the sequel, we prove that the algorithm is globally convergent. From H3.2, H3.3 and H3.5, we may assume that there exists a subsequence $K$ such that

$$x^k \to x^*, \quad H_k \to H_*, \quad d_0^k \to d_0^*, \quad p^k \to p^*, \quad \mu^k \to \mu^*, \quad k \in K. \tag{3.2}$$

Lemma 3.4. If $x^k \to x^*$, $k \in K$, then $d_0^k \to 0$, $k \in K$.

Proof. Since $\{f(x^k)\}$ is monotonically decreasing, the facts $\{x^k\}_{k\in K} \to x^*$ and the continuity of $f$ imply that

$$f(x^k) \to f(x^*), \quad k \to \infty. \tag{3.3}$$

Suppose by contradiction that $d_0^* \ne 0$. Then, from (2.13), we have

$$\mu_j^k \to \mu_j^* > 0, \quad j \in I, \ k \in K.$$

So it is easy to prove that $(d_0^*, p^*)$ is the unique solution of the following linear system:

$$H_* d + \nabla f(x^*) + N_* p = 0, \qquad \mu_j^*\, \nabla g_j(x^*)^T d + p_j\, g_j(x^*) = 0, \quad j \in I, \tag{3.4}$$

where $N_* = (\nabla g_j(x^*),\ j\in I)$. Thereby,

$$\nabla f(x^*)^T d_0^* < 0. \tag{3.5}$$

Similarly to (2.4), we define $d^*$ and, by imitating the proof of Lemma 3.2, it follows that

$$\nabla f(x^*)^T d^* < 0, \qquad \nabla g_j(x^*)^T d^* \le -\frac{\lambda_j^*\, g_j(x^*) + \beta_*\|d_0^*\|^{\delta}}{\mu_j^*}, \quad j \in I. \tag{3.6}$$

From (3.5), (3.6) and the proof of Lemma 3.3, we can conclude that the step size $t_k$ obtained by the line search in step 3 is bounded away from zero on $K$, i.e.,

$$t_k \ge t_* = \inf\{t_k,\ k \in K\} > 0, \quad k \in K.$$

So, from (2.10), (3.3) and (3.6), we get

$$0 = \lim_{k\in K}\,(f(x^{k+1}) - f(x^k)) \le \lim_{k\in K}\, \alpha t_k\, \nabla f(x^k)^T d^k \le \frac12\,\alpha t_*\, \nabla f(x^*)^T d^* < 0.$$

This is a contradiction, which shows that $d_0^k \to 0$, $k \in K$. □

In order to obtain the global convergence of the algorithm, we assume the following condition.

H3.6 The number of stationary points of (1.1) is finite.

Theorem 3.5. Algorithm A either stops at a KKT point $x^k$ of (1.1) after finitely many iterations, or generates an infinite sequence $\{x^k\}$ all of whose accumulation points are KKT points of (1.1).

Proof. The first statement is obvious, the only stopping point being in step 1. Thus, suppose that $\{x^k\}_{k\in K} \to x^*$ and $d_0^k \to 0$, $k \in K$. From (3.4), we have

$$\nabla f(x^*) + N_* p^* = 0, \qquad p_j^*\, g_j(x^*) = 0, \quad j \in I. \tag{3.7}$$

If $g_j(x^*) < 0$ for all $j \in I$, then $p^* = 0$ and $\nabla f(x^*) = 0$, so it is obvious that $x^*$ is a KKT point of (1.1). Without loss of generality, suppose that there exists some $j_0 \in I$ such that $g_{j_0}(x^*) = 0$. If $p_{j_0}^* \ge 0$, then it is easy to see that $x^*$ is a KKT point of (1.1). Suppose that $p_{j_0}^* < 0$. Since there are only finitely many choices for the sets $J_k \subseteq I$, we may assume, for $k \in K$ and $k$ large enough, that $J_k \equiv J$, where $J$ is a constant set. Obviously, the fact $d_0^k \to 0$, $k \in K$, and the definition of $\lambda^k$ imply that $j_0 \in J \ne \emptyset$. From Lemma 3.4 and condition H3.6, according to Theorem 3.11 in [9], it holds that $x^k \to x^*$, $k \to \infty$. Thereby, Lemma 3.4 shows that $d_0^k \to 0$ and $\lambda^k \to p^*$, $k \to \infty$. So it holds that

$$\lambda_{j_0}^k \le g_{j_0}(x^k), \qquad \lambda_{j_0}^k \to \lambda_{j_0}^* < 0, \qquad g_{j_0}(x^k) \to g_{j_0}(x^*) = 0. \tag{3.8}$$

Meanwhile, from (2.11), there exists some $k_0$ such that, for $k \ge k_0$,

$$g_{j_0}(x^k) \le g_{j_0}(x^{k-1}) \le \cdots \le g_{j_0}(x^{k_0+1}) \le g_{j_0}(x^{k_0}) < 0.$$

This contradicts (3.8), which shows that $x^*$ is a KKT point of (1.1). □

4. Rate of convergence

Now we strengthen the regularity assumptions on the functions involved. Assumption H3.2 is replaced by:

H4.1 The functions $f$, $g_j$ are twice continuously differentiable.

In order to obtain superlinear convergence, we also make the following additional assumptions, where H4.3 may supersede H3.6.

H4.2 The sequence generated by the algorithm possesses an accumulation point $x^*$ (in view of Theorem 3.5, a KKT point).
H4.3 The second-order sufficiency conditions with strict complementary slackness are satisfied at the KKT point $x^*$ with the corresponding multiplier vector $u^*$. Moreover, $u^*$ satisfies $u_j^* \le \bar\mu$, $j = 1,\dots,m$.

Lemma 4.1. The entire sequence $\{x^k\}$ converges to $x^*$, i.e., $x^k \to x^*$, $k \to \infty$.

Proof. From (2.10) and (3.1), it holds that

$$f(x^{k+1}) \le f(x^k) + \alpha t_k\, \nabla f(x^k)^T d^k \le f(x^k) - \alpha\theta\, t_k\, (d_0^k)^T H_k d_0^k.$$

From (3.3) and H3.5, we have $t_k\|d_0^k\|^2 \to 0$, and hence $t_k\|d_0^k\| \to 0$. Thereby, from (2.3) and (2.4), it holds that $t_k\|d^k\| \to 0$. So $t_k\|d^k\| + t_k^2\|\tilde d^k\| \le 2 t_k\|d^k\| \to 0$, that is to say, $\|x^{k+1} - x^k\| \to 0$. According to H4.3 and Proposition 4.1 in [5], we have $\lim_{k\to\infty} x^k = x^*$. □

Lemma 4.2. For $k$ large enough, it holds that $L_k \subseteq I(x^*)$, and

$$d_0^k \to 0, \qquad p^k \to u^*, \qquad \tilde p^k \to u^*, \qquad \lambda^k \to u^*, \qquad \mu^k \to u^*.$$

Proof. According to Lemma 3.4 and $x^k \to x^*$, it holds that $d_0^k \to 0$, $k \to \infty$. Since $x^*$ is a KKT point of (1.1), it follows that

$$\nabla f(x^*) + N_* u^* = 0, \qquad g_j(x^*)\, u_j^* = 0, \quad j \in I.$$

Denote $D_k = \mathrm{diag}(g_j^2(x^k),\ j\in I)$ and $D_* = \mathrm{diag}(g_j^2(x^*),\ j\in I)$. From H3.4, it is clear that $(N_*^T N_* + D_*)$ is nonsingular. Thereby, we have

$$u^* = -(N_*^T N_* + D_*)^{-1} N_*^T \nabla f(x^*). \tag{4.1}$$

In addition, from (2.2), we get

$$H_k d_0^k + \nabla f(x^k) + N_k p^k = 0, \qquad D_k p^k = \Delta^k, \quad \Delta^k = (\Delta_j^k,\ j\in I), \quad \Delta_j^k = -\mu_j^k\, g_j(x^k)\, \nabla g_j(x^k)^T d_0^k, \quad j \in I.$$

So,

$$(N_k^T N_k + D_k)\, p^k = -N_k^T(\nabla f(x^k) + H_k d_0^k) + \Delta^k.$$

Since $(N_k^T N_k + D_k) \to (N_*^T N_* + D_*)$, it is obvious, for $k$ large enough, that $(N_k^T N_k + D_k)$ is nonsingular and $(N_k^T N_k + D_k)^{-1} \to (N_*^T N_* + D_*)^{-1}$. So, from $d_0^k \to 0$ and $\Delta^k \to 0$, we have

$$p^k \to -(N_*^T N_* + D_*)^{-1} N_*^T \nabla f(x^*) = u^*.$$

For the same reason, we can prove that $\tilde p^k \to u^*$. Thereby, from the definition of $\lambda^k$, it is easy to see that $\lambda^k \to u^*$. Then, from H4.3 and the definition of $L_k$, it holds that $L_k \subseteq I(x^*)$ for $k$ large enough. In the end, from the definition of $\mu^k$ and H4.3, we know that $\mu^k \to u^*$. □
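Formula (4.1) is a regularized least-squares-type estimate of the multipliers and is straightforward to evaluate. A minimal NumPy sketch (the inputs are hypothetical, not from the paper):

```python
import numpy as np

def multiplier_estimate(grad_f, N, g):
    """Least-squares-type multiplier estimate of (4.1):
    u = -(N^T N + D)^{-1} N^T grad_f,  with D = diag(g_j(x)^2)."""
    D = np.diag(g ** 2)
    return np.linalg.solve(N.T @ N + D, -(N.T @ grad_f))
```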

Lemma 4.3. For $k$ large enough, $\|d^k\|$, $\|d_1^k\|$ and $\|d_0^k\|$ are of the same order, i.e., $\|d^k\| \sim \|d_1^k\| \sim \|d_0^k\|$, $k \to \infty$, and

$$\nabla f(x^k) + H_k d^k + A_k p^k_{I(x^*)} = o(\|d^k\|), \quad \nabla f(x^k) + H_k d^k + A_k \tilde p^k_{I(x^*)} = o(\|d^k\|), \quad \nabla f(x^k) + H_k d^k + A_k \lambda^k_{I(x^*)} = o(\|d^k\|),$$
$$g_j(x^k) + \nabla g_j(x^k)^T d^k = o(\|d^k\|^2), \quad j \in I(x^*), \tag{4.2}$$
$$p_j^k = o(\|d^k\|), \quad \tilde p_j^k = o(\|d^k\|), \quad \lambda_j^k = o(\|d^k\|), \quad j \notin I(x^*),$$

where

$$A_k = (\nabla g_j(x^k),\ j\in I(x^*)), \quad p^k_{I(x^*)} = (p_j^k,\ j\in I(x^*)), \quad \tilde p^k_{I(x^*)} = (\tilde p_j^k,\ j\in I(x^*)), \quad \lambda^k_{I(x^*)} = (\lambda_j^k,\ j\in I(x^*)).$$

Proof. From (2.2), we have

$$\nabla f(x^k) + H_k d_0^k + N_k p^k = 0, \qquad p_j^k\, g_j(x^k) + \mu_j^k\, \nabla g_j(x^k)^T d_0^k = 0, \quad j \in I. \tag{4.3}$$

Obviously, the facts that $d_0^k \to 0$, $p^k \to u^*$, $\mu^k \to u^*$ as $k \to \infty$ and the definition of $\mu^k$ imply, for $k$ large enough, that $\mu_j^k = p_j^k$, $j \in I(x^*)$, and

$$\nabla f(x^k) + H_k d_0^k + A_k p^k_{I(x^*)} = o(\|d_0^k\|), \quad g_j(x^k) + \nabla g_j(x^k)^T d_0^k = 0, \quad g_j(x^k) = O(\|d_0^k\|), \quad j \in I(x^*), \tag{4.4}$$
$$p_j^k = o(\|d_0^k\|), \quad j \notin I(x^*).$$

From (2.3), we have

$$\nabla f(x^k) + H_k d_1^k + N_k \tilde p^k = 0, \qquad \tilde p_j^k\, g_j(x^k) + \mu_j^k\, \nabla g_j(x^k)^T d_1^k = -\|d_0^k\|^{\delta} = o(\|d_0^k\|^2), \quad j \in I. \tag{4.5}$$

For the same reason, we have

$$\nabla f(x^k) + H_k d_1^k + A_k \tilde p^k_{I(x^*)} = o(\|d_0^k\|), \quad \tilde p_j^k\, g_j(x^k) + \mu_j^k\, \nabla g_j(x^k)^T d_1^k = o(\|d_0^k\|^2), \quad j \in I(x^*), \tag{4.6}$$
$$\tilde p_j^k = o(\|d_0^k\|), \quad j \notin I(x^*).$$

Denote

$$\Delta d^k = d_1^k - d_0^k, \qquad \Delta p^k_{I(x^*)} = p^k_{I(x^*)} - \tilde p^k_{I(x^*)} = \mu^k_{I(x^*)} - \tilde p^k_{I(x^*)};$$

then, according to (4.4) and (4.6), it holds that

$$H_k\, \Delta d^k + A_k\, \Delta p^k_{I(x^*)} = o(\|d_0^k\|). \tag{4.7}$$

According to H3.4, we know, for $k$ large enough, that $A_k^T A_k$ is nonsingular. Furthermore, denote

$$P_k = I_n - A_k (A_k^T A_k)^{-1} A_k^T, \qquad \Delta d^k = \Delta d_1^k + \Delta d_2^k, \qquad \Delta d_1^k = P_k\, \Delta d^k;$$

then, from (4.7), the fact that $P_k A_k = 0$ implies that

$$(\Delta d_1^k)^T H_k\, \Delta d^k = o(\|d_0^k\|\,\|\Delta d_1^k\|),$$

i.e., $O(\|\Delta d_1^k\|\,\|\Delta d^k\|) = o(\|d_0^k\|\,\|\Delta d_1^k\|)$, and so $\|\Delta d^k\| = o(\|d_0^k\|)$. Thereby, it holds that $\|d^k\| \sim \|d_0^k\| \sim \|d_1^k\|$.

We now prove that (4.2) is true. First of all, from (4.7), the fact that $\|\Delta d^k\| = o(\|d_0^k\|) = o(\|d^k\|)$ implies that

$$\Delta p^k_{I(x^*)} = o(\|d^k\|). \tag{4.8}$$

So, according to (4.4), (4.6), (4.8) and the definition of $d^k$, $\lambda^k$, we have

$$\nabla f(x^k) + H_k d^k + A_k p^k_{I(x^*)} = o(\|d^k\|), \quad \nabla f(x^k) + H_k d^k + A_k \tilde p^k_{I(x^*)} = o(\|d^k\|), \quad \nabla f(x^k) + H_k d^k + A_k \lambda^k_{I(x^*)} = o(\|d^k\|),$$
$$g_j(x^k) + \nabla g_j(x^k)^T d_0^k = 0, \quad j \in I(x^*),$$

and, for $j \in I(x^*)$, it holds that

$$g_j(x^k) + \nabla g_j(x^k)^T d_1^k = \left(1 - \frac{\tilde p_j^k}{\mu_j^k}\right) g_j(x^k) + o(\|d^k\|^2) = \frac{\Delta p_j^k}{\mu_j^k}\, g_j(x^k) + o(\|d^k\|^2) = o(\|d^k\|^2).$$

So it holds that

$$g_j(x^k) + \nabla g_j(x^k)^T d^k = o(\|d^k\|^2), \quad j \in I(x^*).$$

Meanwhile, from the definition of $\lambda^k$, it is easy to see that

$$\lambda_j^k = o(\|d^k\|), \quad j \notin I(x^*).$$

The claim holds. □

Lemma 4.4. For $k$ large enough, the direction $\tilde d^k$ obtained in step 2.3 satisfies

$$\|\tilde d^k\| = O(\|d^k\|^2).$$

Proof. According to (2.8) and Lemma 4.3, it is easy to see that $\tilde g_j^k = O(\|d^k\|^2)$, $j \in I$. So we have

$$\nabla f(x^k) + H_k(\tilde d^k + d^k) + N_k(\tilde\lambda^k + \lambda^k) = 0,$$
$$(\tilde\lambda_j^k + \lambda_j^k)\, g_j(x^k) + \mu_j^k\, \nabla g_j(x^k)^T(\tilde d^k + d^k) = -\|d_0^k\|^{\sigma} + \mu_j^k\, \tilde g_j^k = O(\|d^k\|^2), \quad j \in I. \tag{4.9}$$

So, from (4.3), (4.5), (2.4) and (4.9), it holds that

$$H_k \tilde d^k + N_k \tilde\lambda^k = 0, \qquad \mu_j^k\, \nabla g_j(x^k)^T \tilde d^k + \tilde\lambda_j^k\, g_j(x^k) = -\|d_0^k\|^{\sigma} + \beta_k\|d_0^k\|^{\delta} + \mu_j^k\, \tilde g_j^k = O(\|d^k\|^2), \quad j \in I,$$

i.e.,

$$\begin{pmatrix} H_k & N_k \\ M_k N_k^T & G_k \end{pmatrix} \begin{pmatrix} \tilde d^k \\ \tilde\lambda^k \end{pmatrix} = \begin{pmatrix} 0 \\ O(\|d^k\|^2) \end{pmatrix}, \tag{4.10}$$

which shows that

$$\|\tilde d^k\| = O(\|d^k\|^2), \qquad \|\tilde\lambda^k\| = O(\|d^k\|^2).$$

The claim holds. □

In order to obtain superlinear convergence, a crucial requirement is that a unit step size be used in a neighborhood of the solution. This can be achieved thanks to the following assumption:

H4.4 The sequence of matrices $\{H_k\}$ satisfies

$$\|P_k(H_k - \nabla^2_{xx} L(x^k,\lambda^k))\, d^k\| = o(\|d^k\|),$$

where

$$P_k = I_n - A_k(A_k^TA_k)^{-1}A_k^T, \qquad \nabla^2_{xx}L(x^k,\lambda^k) = \nabla^2 f(x^k) + \sum_{j\in I(x^*)} \lambda_j^k\, \nabla^2 g_j(x^k).$$

Lemma 4.5. For $k$ large enough, the inequalities in step 3 are all satisfied with $t = 1$, so that $t_k \equiv 1$ and $x^{k+1} = x^k + d^k + \tilde d^k$.

Proof. According to H4.3 and Lemma 4.2, it is easy to see that $J_k = \emptyset$ for $k$ large enough. So it is only necessary to prove that

$$f(x^k + d^k + \tilde d^k) \le f(x^k) + \alpha\,\nabla f(x^k)^T d^k, \tag{4.11}$$
$$g_j(x^k + d^k + \tilde d^k) < 0, \quad j \in I. \tag{4.12}$$

In the same way as in Lemma 4.4, we can prove that (4.10) holds in this case. So the facts that $\delta > \sigma \in (2,3)$, $\|\tilde\lambda^k\| = O(\|d^k\|^2)$ and $g_j(x^k) = O(\|d^k\|)$, $j \in I(x^*)$, imply that

$$\nabla g_j(x^k)^T \tilde d^k = \frac{1}{\mu_j^k}\left(-\|d_0^k\|^{\sigma} + \beta_k\|d_0^k\|^{\delta} - \tilde\lambda_j^k\, g_j(x^k)\right) - g_j(x^k + d^k) = -\frac{1}{\mu_j^k}\,\|d_0^k\|^{\sigma} - g_j(x^k + d^k) + o(\|d_0^k\|^{\sigma}). \tag{4.13}$$

So it holds, for $j \in I(x^*)$, that

$$g_j(x^k + d^k + \tilde d^k) = g_j(x^k + d^k) + \nabla g_j(x^k + d^k)^T \tilde d^k + O(\|\tilde d^k\|^2) = g_j(x^k + d^k) + \nabla g_j(x^k)^T \tilde d^k + O(\|d^k\|^3) = -\frac{1}{\mu_j^k}\,\|d_0^k\|^{\sigma} + o(\|d_0^k\|^{\sigma}). \tag{4.14}$$

Thereby, for $k$ large enough, (4.12) holds for $j \in I(x^*)$. For $j \in I \setminus I(x^*)$, (4.12) follows from the facts $x^k \to x^*$, $d^k \to 0$, $g_j(x^*) < 0$ and the continuity of $g_j$. According to (2.2), (4.2), (4.14) and the proof of Lemma 4.4 in [11], the fact that $\|\tilde d^k\| = O(\|d^k\|^2)$ implies that (4.11) holds. □

In view of Lemma 4.5, (4.2), H4.4 and the argument of Theorem 5.2 in [2] or Theorem 4.1 in [12], the following convergence theorem is readily obtained:

Theorem 4.6. Under all stated assumptions, the algorithm is superlinearly convergent, i.e., the sequence $\{x^k\}$ generated by the algorithm satisfies $\|x^{k+1} - x^*\| = o(\|x^k - x^*\|)$.

5. Numerical experiments

In this section, we report some limited numerical experiments with the algorithm. The code of the proposed algorithm is written in C++ and run under Windows. In the implementation, $H_k$ is updated by the BFGS formula of [13]. The stopping criterion of step 1 is changed to: if $\|d_0^k\| \le \varepsilon$ for a prescribed tolerance $\varepsilon$, STOP.

The algorithm has been tested on some problems from [14] in which no equality constraints are present; a feasible initial point is provided for each problem. The results are summarized in Table 1. For each test problem, No. is the number of the test problem in [14], NIT the number of iterations, NF the number of evaluations of the objective function, NG the number of evaluations of scalar constraint functions, FV the final value of the objective function, and CPU the total time taken by the process (unit: millisecond). A CPU time of 0 simply means an execution time below 10 ms (0.01 s).

Table 1. Detailed results of the numerical experiments. [The numerical entries of the table are garbled in this transcription; its columns are No., NIT/NF/NG, FV, $\|d_0^k\|$ and CPU, and the tested problems from [14] include Nos. 12, 24, 31, 33, 43, 66, 76, 93 and 110.]
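The paper states only that $H_k$ is kept positive definite by the BFGS formula of [13]. A standard realization of that reference is Powell's damped BFGS update; the sketch below is one common reading, with $s = x^{k+1} - x^k$ and $y$ the difference of Lagrangian gradients (these choices are assumptions, not spelled out in the text):

```python
import numpy as np

def damped_bfgs_update(B, s, y):
    """Powell's damped BFGS update [13]: replaces y by a damped secant
    vector so that the updated matrix stays symmetric positive definite
    even when the raw curvature s^T y is not positive."""
    Bs = B @ s
    sBs = s @ Bs                       # > 0 since B is positive definite
    sy = s @ y
    eta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)
    ybar = eta * y + (1.0 - eta) * Bs  # damped secant vector
    return B - np.outer(Bs, Bs) / sBs + np.outer(ybar, ybar) / (s @ ybar)
```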

During the numerical experiments, in order to obtain better numerical results, we made some small modifications to the algorithm. When the strict complementarity condition is not satisfied near a solution of problem (1.1), some parameters $\mu_j$ corresponding to nearly active constraints $g_j$ become very small, so the linear systems (2.2), (2.3) and (2.7) may become ill-conditioned. In this case, we replace the coefficient matrix of those linear systems by the matrix

$$\begin{pmatrix} H_k & A_k \\ A_k^T & 0 \end{pmatrix}, \qquad A_k = (\nabla g_j(x^k),\ j \in L_{k-1}),$$

and replace the right-hand sides by the corresponding vectors

$$\begin{pmatrix} -\nabla f(x^k) \\ -g_{L_{k-1}}(x^k) \end{pmatrix}, \qquad \begin{pmatrix} -\nabla f(x^k) \\ -g_{L_{k-1}}(x^k) - \|d_0^k\|^{\delta}\,\tilde e \end{pmatrix}, \qquad \begin{pmatrix} -\nabla f(x^k) \\ -g_{L_{k-1}}(x^k) - \|d_0^k\|^{\sigma}\,\tilde e + \tilde g^k \end{pmatrix},$$

respectively, where

$$g_{L_{k-1}}(x^k) = (g_j(x^k),\ j\in L_{k-1}), \qquad \tilde e = (1,\dots,1)^T \in \mathbb{R}^{|L_{k-1}|}, \qquad \tilde g^k = (-g_j(x^k + d^k),\ j\in L_{k-1}).$$
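All of the modified systems above share the reduced coefficient matrix with a zero block in place of $G_k$. A minimal sketch of one such solve (the names are illustrative, not from the paper):

```python
import numpy as np

def solve_reduced_kkt(H, A, rhs_top, rhs_bot):
    """Solve the modified system with coefficient matrix [[H, A], [A^T, 0]],
    where the columns of A are the gradients of the constraints in L_{k-1}.
    E.g., the analogue of (2.2) uses rhs_top = -grad_f(x) and
    rhs_bot = -g_{L_{k-1}}(x)."""
    n, m = A.shape
    K = np.block([[H, A],
                  [A.T, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([rhs_top, rhs_bot]))
    return sol[:n], sol[n:]
```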

Acknowledgements

The author would like to thank the two anonymous referees, whose constructive comments led to a considerable revision of the original paper. This work was supported in part by the NNSF of China.

References

[1] P.T. Boggs, J.W. Tolle, P. Wang, On the local convergence of quasi-Newton methods for constrained optimization, SIAM J. Control Optim. 20 (1982).
[2] F. Facchinei, S. Lucidi, Quadratically and superlinearly convergent algorithms for the solution of inequality constrained minimization problems, J. Optim. Theory Appl. 85 (2) (1995).
[3] M. Fukushima, A successive quadratic programming algorithm with global and superlinear convergence properties, Math. Program. 35 (1986).
[4] S.P. Han, Superlinearly convergent variable metric algorithms for general nonlinear programming problems, Math. Program. 11 (1976).
[5] E.R. Panier, A.L. Tits, A superlinearly convergent feasible method for the solution of inequality constrained optimization problems, SIAM J. Control Optim. 25 (1987).
[6] E.R. Panier, A.L. Tits, On combining feasibility, descent and superlinear convergence in inequality constrained optimization, Math. Program. 59 (1993).
[7] Z.Y. Gao, G.P. He, F. Wu, Sequential systems of linear equations algorithm for nonlinear optimization problems with general constraints, J. Optim. Theory Appl. 95 (1997).
[8] J.N. Herskovits, A two-stage feasible directions algorithm for nonlinear constrained optimization, Math. Program. 36 (1986).
[9] E.R. Panier, A.L. Tits, J.N. Herskovits, A QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization, SIAM J. Control Optim. 26 (1988).
[10] H.-D. Qi, L. Qi, A new QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization, SIAM J. Optim. 11 (2000).
[11] Z. Zhu, A globally and superlinearly convergent feasible QP-free method for nonlinear programming, Appl. Math. Comput. 168 (2005).
[12] Z. Zhu, An efficient sequential quadratic programming algorithm for nonlinear programming, J. Comput. Appl. Math. 175 (2005).
[13] M.J.D. Powell, A fast algorithm for nonlinearly constrained optimization calculations, in: G.A. Watson (Ed.), Numerical Analysis, Springer-Verlag, Berlin, 1978.
[14] W. Hock, K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, vol. 187, Springer, Berlin, 1981.


More information

On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search

On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search ANZIAM J. 55 (E) pp.e79 E89, 2014 E79 On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search Lijun Li 1 Weijun Zhou 2 (Received 21 May 2013; revised

More information

1 Introduction Let F : < n! < n be a continuously dierentiable mapping and S be a nonempty closed convex set in < n. The variational inequality proble

1 Introduction Let F : < n! < n be a continuously dierentiable mapping and S be a nonempty closed convex set in < n. The variational inequality proble A New Unconstrained Dierentiable Merit Function for Box Constrained Variational Inequality Problems and a Damped Gauss-Newton Method Defeng Sun y and Robert S. Womersley z School of Mathematics University

More information

An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity problems

An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity problems Applied Mathematics and Computation 109 (2000) 167±182 www.elsevier.nl/locate/amc An alternative theorem for generalized variational inequalities and solvability of nonlinear quasi-p M -complementarity

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

What s New in Active-Set Methods for Nonlinear Optimization?

What s New in Active-Set Methods for Nonlinear Optimization? What s New in Active-Set Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for

More information

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)

A Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994) A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

CONVEXIFICATION SCHEMES FOR SQP METHODS

CONVEXIFICATION SCHEMES FOR SQP METHODS CONVEXIFICAION SCHEMES FOR SQP MEHODS Philip E. Gill Elizabeth Wong UCSD Center for Computational Mathematics echnical Report CCoM-14-06 July 18, 2014 Abstract Sequential quadratic programming (SQP) methods

More information