A PROJECTED HESSIAN GAUSS-NEWTON ALGORITHM FOR SOLVING SYSTEMS OF NONLINEAR EQUATIONS AND INEQUALITIES
IJMMS 25:6 (2001). Hindawi Publishing Corp.

MAHMOUD M. EL-ALEM, MOHAMMEDI R. ABDEL-AZIZ, and AMR S. EL-BAKRY

(Received 23 December 1998)

Abstract. Solving systems of nonlinear equations and inequalities is of critical importance in many engineering problems. In general, the presence of inequalities in the problem adds to its difficulty. We propose a new projected Hessian Gauss-Newton algorithm for solving general nonlinear systems of equalities and inequalities. The algorithm uses the projected Gauss-Newton Hessian in conjunction with an active set strategy that identifies the active inequalities and a trust-region globalization strategy that ensures convergence from any starting point. We also present a global convergence theory for the proposed algorithm.

Mathematics Subject Classification. Primary 65F15, 65G.

1. Introduction. The solution of systems of nonlinear equations and inequalities is the goal of many scientific and engineering applications. In constrained optimization problems, this task is often required to ensure that the feasible set is not empty. This is especially critical in algorithms that are based on the assumption that the feasible region is not empty, for example, interior-point methods (cf. El-Bakry, Tapia, Tsuchiya, and Zhang [6]). In this paper, we provide a new algorithm for effectively solving such systems.

We consider the problem of finding a solution $x_*$ that satisfies the following set of nonlinear equalities and inequalities:
$$ F(x) = 0, \qquad G(x) \le 0, \tag{1.1} $$
where $x \in \mathbb{R}^n$, $F = (f_1, f_2, \dots, f_m)^T$, and $G = (g_1, g_2, \dots, g_p)^T$. We assume that $m \le n$, and no restriction is imposed on $p$. The functions $f_i$, $i = 1,\dots,m$, and $g_j$, $j = 1,\dots,p$, are assumed to be twice continuously differentiable.

We define the diagonal indicator matrix $W(x) \in \mathbb{R}^{p \times p}$, whose diagonal entries are
$$ w_i(x) = \begin{cases} 1, & \text{if } g_i(x) \ge 0, \\ 0, & \text{if } g_i(x) < 0. \end{cases} \tag{1.2} $$
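As a concrete illustration of the definition above (a sketch of ours, not code from the paper), the diagonal of the indicator matrix $W(x)$ in (1.2) can be computed directly from the values of $G$:

```python
def indicator_diag(g_vals):
    """Diagonal of the indicator matrix W(x) of (1.2):
    w_i = 1 if g_i(x) >= 0 (the inequality is active or violated), else 0."""
    return [1.0 if gi >= 0.0 else 0.0 for gi in g_vals]

# Example: for G(x) = (-1, 2, 0), only the last two inequalities are
# active/violated, so W(x) = diag(0, 1, 1).
```

Note that components with $g_i(x) = 0$ count as active, matching the weak inequality in (1.2).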
Using this matrix, problem (1.1) can be transformed into the following equivalent equality-constrained optimization problem:
$$ \text{minimize } \|W(x)G(x)\|_2^2 \quad \text{subject to } F(x) = 0. \tag{1.3} $$
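To make the transformation concrete, here is a small sketch (our illustration with hypothetical helper names, not the authors' implementation) of the objective $\|W(x)G(x)\|_2^2$ of (1.3), together with the penalty combination $G(x)^T W(x) G(x) + r\,F(x)^T F(x)$ that the algorithm later uses as its merit function (1.4):

```python
def wg_norm_sq(g_vals):
    """||W(x)G(x)||_2^2: only components with g_i(x) >= 0 contribute,
    since W(x) zeroes out the strictly satisfied inequalities."""
    return sum(gi * gi for gi in g_vals if gi >= 0.0)

def merit_phi(f_vals, g_vals, r):
    """l2 exact penalty function Phi(x; r) = G^T W G + r F^T F of (1.4)."""
    return wg_norm_sq(g_vals) + r * sum(fi * fi for fi in f_vals)

# For G = (-2, 3) and F = (1,): ||WG||^2 = 9, and Phi = 9 + r * 1.
```

At a solution of (1.1), both terms vanish, so $\Phi(x;r) = 0$ for every $r > 0$.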
The matrix $W(x)$ is discontinuous; however, the function $W(x)G(x)$ is Lipschitz continuous and the function $G(x)^T W(x) G(x)$ is continuously differentiable (Dennis, El-Alem, and Williamson [3]).

Our proposed algorithm is iterative. We start with a point $x_0 \in \mathbb{R}^n$, and the algorithm generates a sequence of iterates $\{x_k\}$. One iteration of the algorithm produces, at the point $x_k$, a point $x_{k+1} = x_k + s_k$ that is a better approximation to a solution $x_*$ than $x_k$. We measure progress towards a solution using the $\ell_2$ exact penalty function associated with problem (1.3):
$$ \Phi(x; r) = G(x)^T W(x) G(x) + r\, F(x)^T F(x), \tag{1.4} $$
where $r > 0$ is a parameter.

Our algorithm uses a trust-region globalization strategy to ensure that, from any starting point $x_0$, the sequence of iterates $\{x_k\}$ generated by the algorithm converges to a solution of problem (1.3). The basic idea of trust-region algorithms is as follows. Approximate problem (1.3) by a model trust-region subproblem. The trial step is obtained by solving this subproblem. The trial step is tested and the trust-region radius is updated accordingly. If the step is accepted, then we set $x_{k+1} = x_k + s_k$; otherwise, the trust-region radius is decreased and another trial step is computed using the new value of the radius. A detailed description of this algorithm is given in Section 2.

The following notation is used throughout the rest of the paper. Subscripted functions denote function values at particular points; for example, $F_k = F(x_k)$, $G_k = G(x_k)$, and so on. However, the arguments of the functions are not abbreviated when emphasizing the dependence of the functions on their arguments. The $i$th component of a vector $V(x_k)$ is denoted by either $(V_k)_i$ or $V_i(x_k)$. We use the same symbol $0$ to denote the real number zero, the zero vector, and the zero matrix. Finally, all norms used in this paper are $\ell_2$-norms.

2. Algorithmic framework.
This section presents a detailed description of our trust-region algorithm for solving problem (1.1). The reduced Hessian approach is used to compute a trial step $s_k$. In particular, the trial step $s_k$ is decomposed into two orthogonal components: the normal component $u_k$ and the tangential component $v_k$. The trial step has the form $s_k = u_k + v_k = u_k + Z_k \bar v_k$, where $Z_k$ is a matrix whose columns form an orthonormal basis for the null space of $\nabla F_k^T$. The matrix $Z(x)$ can be obtained from the QR factorization of $\nabla F(x)$ as follows:
$$ \nabla F(x) = \begin{bmatrix} Y(x) & Z(x) \end{bmatrix} \begin{bmatrix} R(x) \\ 0 \end{bmatrix}, \tag{2.1} $$
where $Y(x)$ is the matrix whose columns form an orthonormal basis for the column space of $\nabla F(x)$ and $R(x)$ is an $m \times m$ upper triangular matrix. When $\nabla F(x)$ has full column rank, $Y(x) \in \mathbb{R}^{n \times m}$, $Z(x) \in \mathbb{R}^{n \times (n-m)}$, and $R(x)$ is nonsingular. Using this factorization, the first-order necessary conditions for a point $x_*$ to be a stationary point of problem (1.3) can be written in the following form:
$$ \begin{bmatrix} Z(x_*)^T \nabla G(x_*) W(x_*) G(x_*) \\ F(x_*) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \tag{2.2} $$
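For intuition, the $Y$-$Z$ splitting in (2.1) can be worked out by hand in the smallest nontrivial case $n = 2$, $m = 1$, where the null-space basis is just the gradient rotated by a quarter turn. The sketch below is our illustration (hypothetical function name), not part of the paper:

```python
from math import hypot

def yz_basis_2d(grad_f):
    """Special case of the QR factorization (2.1) for n = 2, m = 1:
    Y spans the column space of the gradient of F, Z spans the null space
    of its transpose, and both are orthonormal."""
    a, b = grad_f
    nrm = hypot(a, b)
    y = (a / nrm, b / nrm)    # orthonormal basis for the column space
    z = (-b / nrm, a / nrm)   # orthonormal basis for the null space
    return y, z

# For grad_f = (3, 4): Y = (0.6, 0.8) and Z = (-0.8, 0.6), with Z
# orthogonal to grad_f, as required.
```

In higher dimensions one would compute a full (complete) QR factorization and take the trailing $n-m$ columns of the orthogonal factor as $Z(x)$.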
We obtain the normal component $u_k$ by solving the following trust-region subproblem:
$$ \text{minimize } \tfrac{1}{2} \big\|\nabla F_k^T u + F_k\big\|^2 \quad \text{subject to } \|u\| \le \sigma \Delta_k, \tag{2.3} $$
for some $\sigma \in (0,1)$, where $\Delta_k$ is the trust-region radius. Given the normal component $u_k$, we compute the tangential component by solving the following trust-region subproblem:
$$ \text{minimize } \big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)^T Z_k \bar v + \tfrac{1}{2}\, \bar v^T Z_k^T \nabla G_k W_k \nabla G_k^T Z_k \bar v \quad \text{subject to } \|Z_k \bar v\|^2 \le \Delta_k^2 - \|u_k\|^2; \tag{2.4} $$
then we set $v_k = Z_k \bar v_k$.

Once the trial step is computed, we test it to determine whether it is accepted. To do that, we compare the actual reduction in the merit function in moving from $x_k$ to $x_k + s_k$ against the predicted reduction. We define the actual reduction as
$$ \mathrm{Ared}_k = G_k^T W_k G_k - G(x_k+s_k)^T W(x_k+s_k)\, G(x_k+s_k) + r_k \big[\|F_k\|^2 - \|F(x_k+s_k)\|^2\big]. \tag{2.5} $$
The predicted reduction in the merit function is defined to be
$$ \mathrm{Pred}_k = \|W_k G_k\|^2 - \big\|W_k\big(G_k + \nabla G_k^T s_k\big)\big\|^2 + r_k \big[\|F_k\|^2 - \|F_k + \nabla F_k^T s_k\|^2\big]. \tag{2.6} $$
After computing $s_k$, the parameter $r_k$ is updated to ensure that $\mathrm{Pred}_k \ge 0$. Our way of updating $r_k$ is presented in Step 3 of Algorithm 2.1.

After updating $r_k$, the step is tested for acceptance. If $\mathrm{Ared}_k/\mathrm{Pred}_k < \eta_1$, where $\eta_1 \in (0,1)$ is a small fixed constant, then the step is rejected. In this case, the trust-region radius $\Delta_k$ is decreased by setting $\Delta_k = \alpha_1 \|s_k\|$, where $\alpha_1 \in (0,1)$, and another trial step is computed using the new trust-region radius. If $\mathrm{Ared}_k/\mathrm{Pred}_k \ge \eta_1$, then the step is accepted and the trust-region radius is updated. Our way of updating $\Delta_k$ is presented in Step 4 of Algorithm 2.1.

Finally, we use the first-order necessary conditions (2.2) to terminate the algorithm. The algorithm is terminated when, for some $\varepsilon_{\mathrm{tol}} > 0$,
$$ \big\|Z_k^T \nabla G_k W_k G_k\big\| + \|F_k\| \le \varepsilon_{\mathrm{tol}}. \tag{2.7} $$

Now we present the algorithmic framework of our proposed method.

Algorithm 2.1
Step 0 (initialization). Given $x_0 \in \mathbb{R}^n$, compute $W_0$. Set $r_0 = 1$ and $\zeta = 0.1$.
Choose $\varepsilon_{\mathrm{tol}}$, $\alpha_1$, $\alpha_2$, $\eta_1$, $\eta_2$, $\Delta_{\min}$, $\Delta_{\max}$, and $\Delta_0$ such that $\varepsilon_{\mathrm{tol}} > 0$, $0 < \alpha_1 < 1 < \alpha_2$, $0 < \eta_1 < \eta_2 < 1$, and $\Delta_{\min} \le \Delta_0 \le \Delta_{\max}$. Set $k = 0$.
Step 1 (test for convergence). If $\|Z_k^T \nabla G_k W_k G_k\| + \|F_k\| \le \varepsilon_{\mathrm{tol}}$, then terminate the algorithm.
Step 2 (compute a trial step).
If $\|F_k\| = 0$, then
(a) Set $u_k = 0$.
(b) Compute $\bar v_k$ by solving problem (2.4) with $u_k = 0$.
(c) Set $s_k = Z_k \bar v_k$.
Else
(a) Compute $u_k$ by solving problem (2.3).
(b) If $\|Z_k^T(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k)\| = 0$, then set $\bar v_k = 0$; else, compute $\bar v_k$ by solving (2.4).
(c) Set $s_k = u_k + Z_k \bar v_k$.
End if.
Step 3 (update the parameter $r_k$).
(a) Set $r_k = r_{k-1}$.
(b) If $\mathrm{Pred}_k < \dfrac{r_k}{2}\big[\|F_k\|^2 - \|F_k + \nabla F_k^T s_k\|^2\big]$, then set
$$ r_k = \frac{2\big[G(x_k+s_k)^T W(x_k+s_k)\, G(x_k+s_k) - G_k^T W_k G_k\big]}{\|F_k\|^2 - \|F_k + \nabla F_k^T s_k\|^2} + \zeta. \tag{2.8} $$
End if.
Step 4 (test the step and update $\Delta_k$).
If $\mathrm{Ared}_k/\mathrm{Pred}_k < \eta_1$, then
set $\Delta_k = \alpha_1 \|s_k\|$ and go to Step 2.
Else if $\eta_1 \le \mathrm{Ared}_k/\mathrm{Pred}_k < \eta_2$, then
set $x_{k+1} = x_k + s_k$ and $\Delta_{k+1} = \max(\Delta_k, \Delta_{\min})$.
Else
set $x_{k+1} = x_k + s_k$ and $\Delta_{k+1} = \min\{\Delta_{\max}, \max\{\Delta_{\min}, \alpha_2 \Delta_k\}\}$.
End if.
Step 5. Set $k = k+1$ and go to Step 1.

It is worth mentioning that either direct or iterative methods can be used to solve the trust-region subproblems arising in the above algorithm. Recently, Abdel-Aziz [2] proposed a new method for computing the projection using the implicitly restarted Lanczos method, which proved to be very successful in solving nonlinear structural eigensystems (see Abdel-Aziz [1]).

3. Convergence analysis. In this section, we present our global convergence theory. For the global convergence results to follow, we require some assumptions to be satisfied by the problem. Let $\{x_k\}$ be the sequence of points generated by the algorithm and let $\Omega$ be a convex subset of $\mathbb{R}^n$ that contains all iterates $x_k$ and $x_k + s_k$, for all trial steps $s_k$ examined in the course of the algorithm. On the set $\Omega$, the following assumptions are imposed.
(A1) The functions $F$ and $G$ are twice continuously differentiable for all $x \in \Omega$.
(A2) The matrix $\nabla F(x)$ has full column rank.
(A3) All of $F(x)$, $\nabla F(x)$, $G(x)$, $\nabla G(x)$, $\nabla^2 f_i(x)$ for $i = 1,\dots,m$, $\nabla^2 g_j(x)$ for $j = 1,\dots,p$, and $(\nabla F(x)^T \nabla F(x))^{-1}$ are uniformly bounded on $\Omega$.

3.1. Important lemmas. We present some important lemmas that are needed in the subsequent proofs.

Lemma 3.1. Let assumptions (A1), (A2), and (A3) hold. Then, at any iteration $k$,
$$ \|u_k\| \le K_1 \|F_k\|, \tag{3.1} $$
where $K_1$ is a positive constant independent of $k$.

Proof. The proof is similar to the proof of [4, Lemma 7.1].

In the following lemma, we prove that, at any iteration $k$, the predicted reduction in the 2-norm of the linearized constraints is at least equal to the decrease obtained by the Cauchy step.

Lemma 3.2. Let assumptions (A1), (A2), and (A3) hold. Then the normal component $u_k$ of the trial step satisfies
$$ \|F_k\|^2 - \big\|F_k + \nabla F_k^T u_k\big\|^2 \ge K_2 \|F_k\| \min\{\Delta_k, \|F_k\|\}, \tag{3.2} $$
where $K_2$ is a positive constant independent of $k$.

Proof. See Powell [8].

Lemma 3.3. Let assumptions (A1), (A2), and (A3) hold. Then the predicted decrease obtained by the trial step satisfies
$$ \mathrm{Pred}_k \ge \frac{r_k K_2}{2} \|F_k\| \min\{\Delta_k, \|F_k\|\}, \tag{3.3} $$
where $K_2$ is as in Lemma 3.2.

Proof. The proof follows directly from our way of updating the parameter $r_k$ and Lemma 3.2.

The following lemma provides a lower bound on the decrease obtained by the step $s_k$ on the linearization of $\|W(x_k)G(x_k+s_k)\|^2$.

Lemma 3.4. Let assumptions (A1), (A2), and (A3) hold. Then
$$ \|W_k G_k\|^2 - \big\|W_k\big(G_k + \nabla G_k^T s_k\big)\big\|^2 \ge K_3 \big\|Z_k^T\big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)\big\| \min\Big\{\big\|Z_k^T\big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)\big\|, \sqrt{\Delta_k^2 - \|u_k\|^2}\Big\} - 2\|u_k\|\,\big\|\nabla G_k W_k G_k\big\| - \big\|W_k \nabla G_k^T u_k\big\|^2, \tag{3.4} $$
where $K_3$ is a positive constant independent of $k$.
Proof. From the way of computing the step $\bar v_k$, it satisfies the fraction of Cauchy decrease condition. Hence, there exists a constant $K_3 > 0$ independent of $k$ such that
$$ -2\big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)^T Z_k \bar v_k - \bar v_k^T Z_k^T \nabla G_k W_k \nabla G_k^T Z_k \bar v_k \ge K_3 \big\|Z_k^T\big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)\big\| \min\Big\{\big\|Z_k^T\big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)\big\|, \sqrt{\Delta_k^2 - \|u_k\|^2}\Big\}. \tag{3.5} $$
See Moré [7] for a proof. We also have
$$ \|W_k G_k\|^2 - \big\|W_k\big(G_k + \nabla G_k^T s_k\big)\big\|^2 \ge -2\big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)^T Z_k \bar v_k - \bar v_k^T Z_k^T \nabla G_k W_k \nabla G_k^T Z_k \bar v_k - 2 G_k^T W_k \nabla G_k^T u_k - u_k^T \nabla G_k W_k \nabla G_k^T u_k. \tag{3.6} $$
The proof follows from inequalities (3.5) and (3.6).

Lemma 3.5. At any iteration $k$, $W_{k+1} = W_k + D_k$, where $D(x_k) \in \mathbb{R}^{p \times p}$ is a diagonal matrix whose diagonal entries are
$$ (d_k)_i = \begin{cases} 1, & \text{if } (g_k)_i < 0 \text{ and } (g_{k+1})_i \ge 0, \\ -1, & \text{if } (g_k)_i \ge 0 \text{ and } (g_{k+1})_i < 0, \\ 0, & \text{otherwise}. \end{cases} \tag{3.7} $$

Proof. The proof is straightforward.

Lemma 3.6. Let assumptions (A1) and (A3) hold. At any iteration $k$, there exists a positive constant $K_4$ independent of $k$ such that
$$ \|D_k G_k\| \le K_4 \|s_k\|, \tag{3.8} $$
where $D_k \in \mathbb{R}^{p \times p}$ is the diagonal matrix whose diagonal entries are given in (3.7).

Proof. See El-Alem and El-Sobky [5, Lemma 5.6].

The following two lemmas give upper bounds on the difference between the actual reduction and the predicted reduction.

Lemma 3.7. Let assumptions (A1), (A2), and (A3) hold. Then
$$ \big|\mathrm{Ared}_k - \mathrm{Pred}_k\big| \le K_5 \|s_k\|^2 \big(1 + r_k \|F_k\| + r_k \|s_k\|\big), \tag{3.9} $$
where $K_5$ is a positive constant independent of $k$.

Proof. Using (2.5), (2.6), and Lemma 3.5, we have
$$ \mathrm{Ared}_k - \mathrm{Pred}_k = \big\|W_k\big(G_k + \nabla G_k^T s_k\big)\big\|^2 - G_{k+1}^T (W_k + D_k) G_{k+1} + r_k \big[\|F_k + \nabla F_k^T s_k\|^2 - \|F_{k+1}\|^2\big]. \tag{3.10} $$
Therefore,
$$ \big|\mathrm{Ared}_k - \mathrm{Pred}_k\big| \le \Big|s_k^T \big[\nabla G_k W_k \nabla G_k^T - \nabla G(x_k + \xi_1 s_k) W_k \nabla G(x_k + \xi_1 s_k)^T\big] s_k\Big| + \Big|s_k^T \nabla^2 G(x_k + \xi_1 s_k) W_k G(x_k + \xi_1 s_k)\, s_k\Big| + \Big\|D_k \big[G_k + \nabla G(x_k + \xi_2 s_k)^T s_k\big]\Big\|^2 + r_k \Big|s_k^T \big[\nabla F_k \nabla F_k^T - \nabla F(x_k + \xi_3 s_k) \nabla F(x_k + \xi_3 s_k)^T\big] s_k\Big| + r_k \Big|s_k^T \nabla^2 F(x_k + \xi_3 s_k) F(x_k + \xi_3 s_k)\, s_k\Big|, \tag{3.11} $$
for some $\xi_1, \xi_2, \xi_3 \in (0,1)$. Hence, by using the assumptions and inequality (3.8), the proof follows.

Lemma 3.8. Let assumptions (A1), (A2), and (A3) hold. Then
$$ \big|\mathrm{Ared}_k - \mathrm{Pred}_k\big| \le K_6\, r_k \|s_k\|^2, \tag{3.12} $$
where $K_6$ is a positive constant independent of $k$.

Proof. The proof follows directly from Lemma 3.7, the fact that $r_k \ge 1$, and the problem assumptions.

The following lemma gives a lower bound on the predicted decrease obtained by the trial step.

Lemma 3.9. Let assumptions (A1), (A2), and (A3) hold. Then there exists a constant $K_7$ that does not depend on $k$ such that, for all $k$,
$$ \mathrm{Pred}_k \ge \frac{K_3}{2} \big\|Z_k^T\big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)\big\| \min\Big\{\big\|Z_k^T\big(\nabla G_k W_k G_k + \nabla G_k W_k \nabla G_k^T u_k\big)\big\|, \sqrt{\Delta_k^2 - \|u_k\|^2}\Big\} - K_7 \|F_k\| \big(\|F_k\| + \|\nabla G_k W_k G_k\|\big) + r_k \big[\|F_k\|^2 - \|F_k + \nabla F_k^T s_k\|^2\big], \tag{3.13} $$
where $K_3$ is as in Lemma 3.4.

Proof. The proof follows directly from the definition of $\mathrm{Pred}_k$ and Lemmas 3.1 and 3.4.

Now we prove several results that are crucial to our global convergence theory. The following lemma demonstrates that, as long as $\|F_k\| > 0$, an acceptable step must be found; in other words, the algorithm cannot loop infinitely without finding an acceptable step. To state this result, we need one more piece of notation, used throughout the rest of the paper: the $j$th trial iterate of iteration $k$ is denoted by $k^j$.

Lemma 3.10. Let assumptions (A1), (A2), and (A3) hold. If $\|F_k\| \ge \varepsilon$, where $\varepsilon$ is any positive constant, then an acceptable step is found after finitely many trials.
Proof. The proof is by contradiction. Assume that $\|F_k\| \ge \varepsilon > 0$ and that there is no finite $j$ that satisfies the condition $\mathrm{Ared}_{k^j}/\mathrm{Pred}_{k^j} \ge \eta_1$. Hence, for all $j$, we have
$$ (1 - \eta_1) \le \Big| \frac{\mathrm{Ared}_{k^j}}{\mathrm{Pred}_{k^j}} - 1 \Big| = \frac{\big|\mathrm{Ared}_{k^j} - \mathrm{Pred}_{k^j}\big|}{\mathrm{Pred}_{k^j}} \le \frac{2 K_6\, \Delta_{k^j}^2}{K_2\, \varepsilon \min\{\varepsilon, \Delta_{k^j}\}}. \tag{3.14} $$
A contradiction arises if we let $\Delta_{k^j}$ go to zero. This completes the proof.

Lemma 3.11. Let assumptions (A1), (A2), and (A3) hold. For all trial steps $j$ of any iteration $k$ generated by the algorithm, $\Delta_{k^j}$ satisfies
$$ \Delta_{k^j} \ge K_8 \|F_k\|, \tag{3.15} $$
where $K_8$ is a positive constant that does not depend on $k$ or $j$.

Proof. Consider any iterate $k^j$. If the previous step was accepted, that is, if $j = 1$, then $\Delta_{k^1} \ge \Delta_{\min}$. If we take $\nu = \sup_{x \in \Omega} \|F(x)\|$, then we have
$$ \Delta_{k^1} \ge \Delta_{\min} \ge \frac{\Delta_{\min}}{\nu} \|F_k\|. \tag{3.16} $$
Now assume that $j > 1$. From the way of updating the trust-region radius, $\Delta_{k^j} = \alpha_1 \|s_{k^{j-1}}\|$. But the trial step $s_{k^{j-1}}$ was rejected. Hence,
$$ (1 - \eta_1) < \frac{\big|\mathrm{Ared}_{k^{j-1}} - \mathrm{Pred}_{k^{j-1}}\big|}{\mathrm{Pred}_{k^{j-1}}} \le \frac{2 K_6 \|s_{k^{j-1}}\|^2}{K_2 \|F_k\| \min\{\|F_k\|, \|s_{k^{j-1}}\|\}}. \tag{3.17} $$
If $\|s_{k^{j-1}}\| < \min\{(1-\eta_1)K_2/(4K_6),\, 1\} \|F_k\|$, then we obtain $(1-\eta_1) < (1-\eta_1)/2$. This contradiction implies that it must be the case that $\|s_{k^{j-1}}\| \ge \min\{(1-\eta_1)K_2/(4K_6),\, 1\} \|F_k\|$. Therefore,
$$ \Delta_{k^j} = \alpha_1 \|s_{k^{j-1}}\| \ge \alpha_1 \min\big\{(1-\eta_1)K_2/(4K_6),\, 1\big\} \|F_k\|. \tag{3.18} $$
Inequalities (3.16) and (3.18) prove the lemma.

Lemma 3.12. Let assumptions (A1), (A2), and (A3) hold. Then, for any iterate $k^j$ at which the parameter $r_{k^j}$ is increased,
$$ r_{k^j} \|F_k\| \le K_9 \big(\|F_k\| + \|\nabla G_k W_k G_k\|\big), \tag{3.19} $$
where $K_9$ is a positive constant that does not depend on $k$ or $j$.

Proof. The proof follows from Lemma 3.2 and inequalities (3.13) and (3.15).

Lemma 3.13. Let assumptions (A1), (A2), and (A3) hold. If $\|Z_k^T \nabla G_k W_k G_k\| \ge \varepsilon_0$, where $\varepsilon_0$ is a positive constant, then there exist two constants $K_{10}$ and $K_{11}$ that depend on $\varepsilon_0$ but do not depend on $k$ or $j$, such that, if $\|F_k\| \le K_{10} \Delta_k$, then
$$ \mathrm{Pred}_k \ge K_{11} \Delta_k + r_k \big[\|F_k\|^2 - \|F_k + \nabla F_k^T s_k\|^2\big]. \tag{3.20} $$
Proof. The proof is similar to the proofs of Lemmas 7.7 and 7.8 of Dennis, El-Alem, and Maciel [4].
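The Cauchy-step decrease invoked in Lemma 3.2 can be made concrete. The sketch below is our illustration (hypothetical names, not the authors' solver): it computes the Cauchy point of the normal subproblem (2.3), i.e., it minimizes $\tfrac12\|\nabla F_k^T u + F_k\|^2$ along the steepest-descent direction $-\nabla F_k F_k$, truncated to the trust-region ball:

```python
def cauchy_normal_step(J, F, radius):
    """Cauchy point for: minimize 0.5 * ||J^T u + F||^2  s.t.  ||u|| <= radius.
    Here J plays the role of the n x m matrix grad F(x_k), stored as a list
    of n rows of length m, and radius plays the role of sigma * Delta_k."""
    n, m = len(J), len(F)
    # Steepest-descent direction at u = 0 is -g, with g = J F.
    g = [sum(J[i][j] * F[j] for j in range(m)) for i in range(n)]
    gg = sum(gi * gi for gi in g)
    if gg == 0.0:
        return [0.0] * n                      # already stationary
    Jt_g = [sum(J[i][j] * g[i] for i in range(n)) for j in range(m)]
    t = gg / sum(v * v for v in Jt_g)         # unconstrained minimizer along -g
    t = min(t, radius / gg ** 0.5)            # stay inside the trust region
    return [-t * gi for gi in g]

# With J = I and F = (2, 0), the unconstrained Cauchy point u = (-2, 0)
# zeroes the linearized residual; with radius = 1 it is clipped to (-1, 0).
```

Any step that achieves at least a fixed fraction of this Cauchy decrease inherits a bound of the form (3.2).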
3.2. Global convergence theorem. In this section, we prove our global convergence result for Algorithm 2.1. The following lemma proves that, for the iteration sequence at which $r_k$ is increased, the corresponding sequence $\{\|F_k\|\}$ converges to zero.

Lemma 3.14. Let assumptions (A1), (A2), and (A3) hold. If $r_k \to \infty$, then
$$ \lim_{k_i \to \infty} \|F_{k_i}\| = 0, \tag{3.21} $$
where $\{k_i\}$ indexes the iterates at which the parameter $r_k$ is increased.

Proof. The proof follows directly from Lemma 3.12 and assumption (A3).

In the following theorem, we prove that the sequence $\{\|F_k\|\}$ converges to zero.

Theorem 3.15. Let assumptions (A1), (A2), and (A3) hold. Then the iteration sequence satisfies
$$ \lim_{k \to \infty} \|F_k\| = 0. \tag{3.22} $$

Proof. Suppose that there exists an infinite subsequence of indices $\{k_j\}$ indexing iterates that satisfy $\|F_{k_j}\| \ge \varepsilon$. From Lemma 3.10, there exists an infinite sequence of acceptable steps. Without loss of generality, we assume that all members of the sequence $\{k_j\}$ are acceptable iterates. We consider two cases.

If $\{r_k\}$ is bounded, then there exists an integer $\bar k$ such that $r_k = \bar r$ for all $k \ge \bar k$. Therefore, from Lemmas 3.3 and 3.11, we have, for any $\hat k \in \{k_j\}$ with $\hat k \ge \bar k$,
$$ \mathrm{Pred}_{\hat k} \ge \frac{\bar r K_2}{2} \|F_{\hat k}\| \min\{\Delta_{\hat k}, \|F_{\hat k}\|\} \ge \frac{\bar r K_2}{2} \|F_{\hat k}\|^2 \min\{K_8, 1\}. \tag{3.23} $$
Since all the iterates of $\{k_j\}$ are acceptable, then for any $\hat k \in \{k_j\}$ we have
$$ \Phi_{\hat k} - \Phi_{\hat k + 1} = \mathrm{Ared}_{\hat k} \ge \eta_1 \mathrm{Pred}_{\hat k}. \tag{3.24} $$
Hence, from the above two inequalities, we have
$$ \Phi_{\hat k} - \Phi_{\hat k + 1} \ge \frac{\eta_1 \varepsilon^2\, \bar r K_2}{2} \min\{K_8, 1\}. \tag{3.25} $$
This contradicts the fact that $\{\Phi_k\}$ is bounded below when $\{r_k\}$ is bounded.

Now consider the case when $\{r_k\}$ is unbounded. Hence, there exists an infinite number of iterates $\{k_i\}$ at which the parameter $r_k$ is increased. From Lemma 3.14, for $k$ sufficiently large, the two sequences $\{k_i\}$ and $\{k_j\}$ have no common elements. Let $\bar k$ and $\hat k$ be two consecutive iterates at which the parameter $r_k$ is increased, with $\bar k < k_j < \hat k$, where $k_j \in \{k_j\}$. The parameter $r_k$ is the same for all iterates $k$ that satisfy $\bar k \le k < \hat k$.
Since all the iterates of $\{k_j\}$ are acceptable, then for all $k_j \in \{k_j\}$,
$$ \Phi_{k_j} - \Phi_{k_j+1} = \mathrm{Ared}_{k_j} \ge \eta_1 \mathrm{Pred}_{k_j}. \tag{3.26} $$
From Lemmas 3.3 and 3.11 and inequality (3.26), we have
$$ \Phi_{k_j} - \Phi_{k_j+1} \ge r_{k_j} \frac{\eta_1 K_2}{2} \|F_{k_j}\|^2 \min\{K_8, 1\}. \tag{3.27} $$
If we sum over all iterates that lie between $\bar k$ and $\hat k$, we have
$$ \sum_{k_j = \bar k}^{\hat k - 1} \big(\Phi_{k_j} - \Phi_{k_j+1}\big) \ge r_{\bar k}\, \frac{\eta_1 K_2 \varepsilon^2}{2} \min\{K_8, 1\}. \tag{3.28} $$
Therefore,
$$ G_{\bar k}^T W_{\bar k} G_{\bar k} - G_{\hat k}^T W_{\hat k} G_{\hat k} + r_{\bar k}\big[\|F_{\bar k}\|^2 - \|F_{\hat k}\|^2\big] \ge r_{\bar k}\, \frac{\eta_1 K_2 \varepsilon^2}{2} \min\{K_8, 1\}. \tag{3.29} $$
Since $r_k \to \infty$, for $k$ sufficiently large we have
$$ \|F_{\bar k}\|^2 - \|F_{\hat k}\|^2 \ge \frac{\eta_1 K_2 \varepsilon^2}{4} \min\{K_8, 1\}. \tag{3.30} $$
But this contradicts Lemma 3.14. Hence, in both cases, we have a contradiction. Thus the supposition is wrong and the theorem is proved.

Theorem 3.16. Let assumptions (A1), (A2), and (A3) hold. Then the iteration sequence satisfies
$$ \liminf_{k \to \infty} \big\|Z_k^T \nabla G_k W_k G_k\big\| = 0. \tag{3.31} $$

Proof. The proof is by contradiction. Suppose that $\|Z_k^T \nabla G_k W_k G_k\| > \varepsilon$ holds for all $k$ generated by the algorithm. Assume that there exists an infinite subsequence $\{k_i\}$ such that $\|F_{k_i}\| > \alpha \Delta_{k_i}$, where $\alpha$ is a positive constant. For later use, we take $\alpha = K_{10}$. Since $\|F_k\| \to 0$, we have
$$ \lim_{k_i \to \infty} \Delta_{k_i} = 0. \tag{3.32} $$
Consider any iterate $k^j \in \{k_i\}$. There are two cases to consider.

First, consider the case where the sequence $\{r_k\}$ is unbounded. For the rejected trial step $j-1$ of iteration $k$, we have $\|F_k\| > K_{10} \Delta_{k^j} = \alpha_1 K_{10} \|s_{k^{j-1}}\|$. Using inequalities (3.3) and (3.9) and the fact that the trial step $s_{k^{j-1}}$ was rejected, we have
$$ (1 - \eta_1) < \frac{\big|\mathrm{Ared}_{k^{j-1}} - \mathrm{Pred}_{k^{j-1}}\big|}{\mathrm{Pred}_{k^{j-1}}} \le \frac{2 K_5 \|s_{k^{j-1}}\|^2 \big(1 + r_{k^{j-1}} \|F_k\| + r_{k^{j-1}} \|s_{k^{j-1}}\|\big)}{r_{k^{j-1}} K_2 \|F_k\| \min\{\alpha_1 K_{10}, 1\}\, \|s_{k^{j-1}}\|} \le \frac{2 K_5}{r_{k^{j-1}} K_2\, \alpha_1 K_{10} \min\{\alpha_1 K_{10}, 1\}} + \frac{2 K_5 (1 + \alpha_1 K_{10})}{K_2\, \alpha_1 K_{10} \min\{\alpha_1 K_{10}, 1\}} \|s_{k^{j-1}}\|. \tag{3.33} $$
Because $\{r_k\}$ is unbounded, there exists an iterate $\hat k$ sufficiently large such that, for all $k \ge \hat k$,
$$ r_{k^{j-1}} > \frac{4 K_5}{K_2\, \alpha_1 K_{10} \min\{\alpha_1 K_{10}, 1\}\, (1 - \eta_1)}. \tag{3.34} $$
This implies that, for all $k \ge \hat k$,
$$ \|s_{k^{j-1}}\| \ge \frac{K_2\, \alpha_1 K_{10} \min\{\alpha_1 K_{10}, 1\}\, (1 - \eta_1)}{4 K_5 (1 + \alpha_1 K_{10})}. \tag{3.35} $$
From the way of updating the trust-region radius, we have
$$ \Delta_{k^j} = \alpha_1 \|s_{k^{j-1}}\| \ge \frac{\alpha_1^2 K_2 K_{10} \min\{\alpha_1 K_{10}, 1\}\, (1 - \eta_1)}{4 K_5 (1 + \alpha_1 K_{10})}. \tag{3.36} $$
This gives a contradiction, so $\Delta_{k^j}$ cannot go to zero in this case.

Second, consider the case when the sequence $\{r_k\}$ is bounded. There exist an integer $\bar k$ and a constant $\bar r$ such that $r_k = \bar r$ for all $k \ge \bar k$. Let $k \ge \bar k$ and consider a trial step $j$ of iteration $k$ such that $\|F_k\| > K_{10} \Delta_{k^j}$, where $K_{10}$ is as in Lemma 3.13. If $j = 1$, then from our way of updating the trust-region radius we have $\Delta_{k^1} \ge \Delta_{\min}$; hence $\Delta_{k^j}$ is bounded away from zero in this case. If $j > 1$ and
$$ \|F_{k^l}\| > K_{10} \Delta_{k^l} \tag{3.37} $$
for $l = 1, \dots, j$, then for the rejected trial step $j-1$ of iteration $k$ we have
$$ (1 - \eta_1) < \frac{\big|\mathrm{Ared}_{k^{j-1}} - \mathrm{Pred}_{k^{j-1}}\big|}{\mathrm{Pred}_{k^{j-1}}} \le \frac{2 K_6 \|s_{k^{j-1}}\|}{K_2 \min\{K_{10}, 1\} \|F_k\|}. \tag{3.38} $$
Hence,
$$ \Delta_{k^j} = \alpha_1 \|s_{k^{j-1}}\| \ge \frac{\alpha_1 K_2 \min\{K_{10}, 1\}\, (1 - \eta_1)}{2 K_6} \|F_k\| \ge \frac{\alpha_1 K_2 \min\{K_{10}, 1\}\, (1 - \eta_1)\, K_{10}}{2 K_6} \Delta_{k^1} \ge \frac{\alpha_1 K_2 \min\{K_{10}, 1\}\, (1 - \eta_1)\, K_{10}}{2 K_6} \Delta_{\min}. \tag{3.39} $$
Hence $\Delta_{k^j}$ is bounded away from zero in this case too. If $j > 1$ and (3.37) does not hold for all $l$, then there exists an integer $m$ such that (3.37) holds for $l = m+1, \dots, j$ and
$$ \|F_{k^l}\| \le K_{10} \Delta_{k^l} \tag{3.40} $$
for $l = 1, \dots, m$. As in the above case, we can write
$$ \Delta_{k^j} \ge \frac{\alpha_1 K_2 \min\{K_{10}, 1\}\, (1 - \eta_1)}{2 K_6} \|F_k\| \ge \frac{\alpha_1 K_2 \min\{K_{10}, 1\}\, (1 - \eta_1)\, K_{10}}{2 K_6} \Delta_{k^{m+1}}. \tag{3.41} $$
But from our way of updating the trust-region radius, we have
$$ \Delta_{k^{m+1}} \ge \alpha_1 \|s_{k^m}\|. \tag{3.42} $$
Now, using (3.40), Lemma 3.13, and the fact that the trial step $s_{k^m}$ was rejected, we can write
$$ (1 - \eta_1) < \frac{\big|\mathrm{Ared}_{k^m} - \mathrm{Pred}_{k^m}\big|}{\mathrm{Pred}_{k^m}} \le \frac{2 K_6\, \bar r\, \|s_{k^m}\|}{K_{11}}. \tag{3.43} $$
This implies $\|s_{k^m}\| \ge K_{11}(1 - \eta_1)/(2 K_6 \bar r)$. Hence
$$ \Delta_{k^j} \ge \frac{\alpha_1^2 K_2 \min\{K_{10}, 1\}\, (1 - \eta_1)^2\, K_{10} K_{11}}{4 K_6^2\, \bar r}. \tag{3.44} $$
This implies that $\Delta_{k^j}$ is bounded away from zero in this case too. Hence $\Delta_{k^j}$ is bounded away from zero in all cases.
This contradiction implies that, for $k^j$ sufficiently large, all the iterates satisfy $\|F_k\| \le K_{10} \Delta_{k^j}$. This implies, using Lemma 3.13, that $\{r_k\}$ is bounded. Letting $k^j \ge \bar k$ and using Lemma 3.13, we have
$$ \Phi_{k^j} - \Phi_{k^j+1} = \mathrm{Ared}_{k^j} \ge \eta_1 \mathrm{Pred}_{k^j} \ge \eta_1 K_{11} \Delta_{k^j}. \tag{3.45} $$
As $k$ goes to infinity, the above inequality implies that
$$ \lim_{k^j \to \infty} \Delta_{k^j} = 0. \tag{3.46} $$
This implies that the radius of the trust region is not bounded below. If we consider an iteration $k^j > \bar k$ for which the previous step was accepted, that is, $j = 1$, then $\Delta_{k^1} \ge \Delta_{\min}$; hence $\Delta_{k^j}$ is bounded below in this case. Assume that $j > 1$, that is, there exists at least one rejected trial step. For the rejected trial step $s_{k^{j-1}}$, using Lemmas 3.8 and 3.13, we must have
$$ (1 - \eta_1) < \frac{2 K_6\, \bar r\, \|s_{k^{j-1}}\|^2}{K_{11} \Delta_{k^{j-1}}}. \tag{3.47} $$
From the way of updating the trust-region radius, we have
$$ \Delta_{k^j} = \alpha_1 \|s_{k^{j-1}}\| > \frac{\alpha_1 K_{11} (1 - \eta_1)}{2 \bar r K_6}. \tag{3.48} $$
Hence $\Delta_{k^j}$ is bounded below. But this contradicts (3.46). The supposition is wrong. This completes the proof of the theorem.

From Theorems 3.15 and 3.16, we conclude that, given any $\varepsilon_{\mathrm{tol}} > 0$, the algorithm terminates because $\|Z_k^T \nabla G_k W_k G_k\| + \|F_k\| < \varepsilon_{\mathrm{tol}}$ for some finite $k$.

4. Concluding remarks. We have introduced a new trust-region algorithm for solving nonlinear systems of equalities and inequalities. Our active set strategy is simple and natural; it is similar to the approach of Dennis, El-Alem, and Williamson [3] for treating the active constraints. At every iteration, the step is computed by solving two simple trust-region subproblems similar to those arising in unconstrained optimization. Our theory proves that the algorithm is globally convergent in the sense that, in the limit, a subsequence $\{k_j\}$ of the iteration sequence generated by the algorithm satisfies the first-order conditions (2.2). We point out that our convergence result gives only first-order stationary-point convergence. This means that there may be a case where a subsequence of the iteration sequence satisfies (2.2) but does not necessarily satisfy (1.1).
This can only happen when the corresponding sequence of matrices $\{Z_{k_j}^T \nabla G_{k_j} W_{k_j}\}$ does not have full column rank in the limit.

Acknowledgement. A part of this research was done in the Department of Mathematics, Faculty of Science, Alexandria University, Egypt.
References
[1] M. R. Abdel-Aziz, Safeguarded use of the implicit restarted Lanczos technique for solving non-linear structural eigensystems, Internat. J. Numer. Methods Engrg., no. 18.
[2] M. R. Abdel-Aziz, Implicitly restarted projection algorithm for solving optimization problems, Numer. Funct. Anal. Optim., no. 3-4.
[3] J. Dennis, M. El-Alem, and K. Williamson, A trust-region approach to nonlinear systems of equalities and inequalities, Tech. Report TR94-38, Department of Computational and Applied Mathematics, Rice University, Texas.
[4] J. E. Dennis, Jr., M. El-Alem, and M. C. Maciel, A global convergence theory for general trust-region-based algorithms for equality constrained optimization, SIAM J. Optim., no. 1.
[5] M. El-Alem and B. El-Sobky, A new trust-region algorithm for general nonlinear programming, Tech. Report TR97-25, Department of Computational and Applied Mathematics, Rice University, Texas.
[6] A. S. El-Bakry, R. A. Tapia, T. Tsuchiya, and Y. Zhang, On the formulation and theory of the Newton interior-point method for nonlinear programming, J. Optim. Theory Appl., no. 3.
[7] J. J. Moré, Recent developments in algorithms and software for trust region methods, Mathematical Programming: The State of the Art (Bonn, 1982), Springer, Berlin, 1983.
[8] M. J. D. Powell, Convergence properties of a class of minimization algorithms, Nonlinear Programming, 2 (Proc. Sympos. Special Interest Group on Math. Programming, Univ. Wisconsin, Madison, Wis., 1974), Academic Press, New York, 1974.

El-Alem: Department of Mathematics, Faculty of Science, Alexandria University, Alexandria, Egypt
Abdel-Aziz: Department of Mathematics and Computer Science, Kuwait University, Kuwait
El-Bakry: Department of Mathematics, Faculty of Science, Alexandria University, Alexandria, Egypt
More informationMethods for Unconstrained Optimization Numerical Optimization Lectures 1-2
Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Coralia Cartis, University of Oxford INFOMM CDT: Modelling, Analysis and Computation of Continuous Real-World Problems Methods
More informationA Regularized Interior-Point Method for Constrained Nonlinear Least Squares
A Regularized Interior-Point Method for Constrained Nonlinear Least Squares XII Brazilian Workshop on Continuous Optimization Abel Soares Siqueira Federal University of Paraná - Curitiba/PR - Brazil Dominique
More informationSecond Order Optimization Algorithms I
Second Order Optimization Algorithms I Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 7, 8, 9 and 10 1 The
More informationMultipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations
Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations Oleg Burdakov a,, Ahmad Kamandi b a Department of Mathematics, Linköping University,
More informationOn Lagrange multipliers of trust-region subproblems
On Lagrange multipliers of trust-region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Programy a algoritmy numerické matematiky 14 1.- 6. června 2008
More informationFinite Convergence for Feasible Solution Sequence of Variational Inequality Problems
Mathematical and Computational Applications Article Finite Convergence for Feasible Solution Sequence of Variational Inequality Problems Wenling Zhao *, Ruyu Wang and Hongxiang Zhang School of Science,
More informationOn fast trust region methods for quadratic models with linear constraints. M.J.D. Powell
DAMTP 2014/NA02 On fast trust region methods for quadratic models with linear constraints M.J.D. Powell Abstract: Quadratic models Q k (x), x R n, of the objective function F (x), x R n, are used by many
More informationGLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH
GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH Jie Sun 1 Department of Decision Sciences National University of Singapore, Republic of Singapore Jiapu Zhang 2 Department of Mathematics
More informationA GLOBALLY CONVERGENT STABILIZED SQP METHOD
A GLOBALLY CONVERGENT STABILIZED SQP METHOD Philip E. Gill Daniel P. Robinson July 6, 2013 Abstract Sequential quadratic programming SQP methods are a popular class of methods for nonlinearly constrained
More informationA new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints
Journal of Computational and Applied Mathematics 161 (003) 1 5 www.elsevier.com/locate/cam A new ane scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality
More informationInexact Newton Methods and Nonlinear Constrained Optimization
Inexact Newton Methods and Nonlinear Constrained Optimization Frank E. Curtis EPSRC Symposium Capstone Conference Warwick Mathematics Institute July 2, 2009 Outline PDE-Constrained Optimization Newton
More informationA SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION
A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018
More informationHigher-Order Methods
Higher-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. PCMI, July 2016 Stephen Wright (UW-Madison) Higher-Order Methods PCMI, July 2016 1 / 25 Smooth
More informationPart 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)
Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x
More information1. Introduction. We consider the general smooth constrained optimization problem:
OPTIMIZATION TECHNICAL REPORT 02-05, AUGUST 2002, COMPUTER SCIENCES DEPT, UNIV. OF WISCONSIN TEXAS-WISCONSIN MODELING AND CONTROL CONSORTIUM REPORT TWMCC-2002-01 REVISED SEPTEMBER 2003. A FEASIBLE TRUST-REGION
More informationSelf-Concordant Barrier Functions for Convex Optimization
Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization
More informationA trust region method based on interior point techniques for nonlinear programming
Math. Program., Ser. A 89: 149 185 2000 Digital Object Identifier DOI 10.1007/s101070000189 Richard H. Byrd Jean Charles Gilbert Jorge Nocedal A trust region method based on interior point techniques for
More informationREGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS Philip E. Gill Daniel P. Robinson UCSD Department of Mathematics Technical Report NA-11-02 October 2011 Abstract We present the formulation and analysis
More informationOn the iterate convergence of descent methods for convex optimization
On the iterate convergence of descent methods for convex optimization Clovis C. Gonzaga March 1, 2014 Abstract We study the iterate convergence of strong descent algorithms applied to convex functions.
More informationA Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity
A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity Mohammadreza Samadi, Lehigh University joint work with Frank E. Curtis (stand-in presenter), Lehigh University
More informationA null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties
A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties Xinwei Liu and Yaxiang Yuan Abstract. We present a null-space primal-dual interior-point algorithm
More informationWorst Case Complexity of Direct Search
Worst Case Complexity of Direct Search L. N. Vicente May 3, 200 Abstract In this paper we prove that direct search of directional type shares the worst case complexity bound of steepest descent when sufficient
More informationGlobal convergence of trust-region algorithms for constrained minimization without derivatives
Global convergence of trust-region algorithms for constrained minimization without derivatives P.D. Conejo E.W. Karas A.A. Ribeiro L.G. Pedroso M. Sachine September 27, 2012 Abstract In this work we propose
More informationPrimal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization
Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department
More informationInexact Newton Methods Applied to Under Determined Systems. Joseph P. Simonis. A Dissertation. Submitted to the Faculty
Inexact Newton Methods Applied to Under Determined Systems by Joseph P. Simonis A Dissertation Submitted to the Faculty of WORCESTER POLYTECHNIC INSTITUTE in Partial Fulfillment of the Requirements for
More informationNonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization
Journal of Computational and Applied Mathematics 155 (2003) 285 305 www.elsevier.com/locate/cam Nonmonotonic bac-tracing trust region interior point algorithm for linear constrained optimization Detong
More informationc 2007 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 18, No. 1, pp. 106 13 c 007 Society for Industrial and Applied Mathematics APPROXIMATE GAUSS NEWTON METHODS FOR NONLINEAR LEAST SQUARES PROBLEMS S. GRATTON, A. S. LAWLESS, AND N. K.
More informationInexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming 1 2
Inexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming 1 2 J. M. Martínez 3 December 7, 1999. Revised June 13, 2007. 1 This research was supported
More informationBanach Journal of Mathematical Analysis ISSN: (electronic)
Banach J. Math. Anal. 6 (2012), no. 1, 139 146 Banach Journal of Mathematical Analysis ISSN: 1735-8787 (electronic) www.emis.de/journals/bjma/ AN EXTENSION OF KY FAN S DOMINANCE THEOREM RAHIM ALIZADEH
More informationConvex Quadratic Approximation
Convex Quadratic Approximation J. Ben Rosen 1 and Roummel F. Marcia 2 Abstract. For some applications it is desired to approximate a set of m data points in IR n with a convex quadratic function. Furthermore,
More informationNumerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen
Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen
More informationMATH 4211/6211 Optimization Basics of Optimization Problems
MATH 4211/6211 Optimization Basics of Optimization Problems Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 A standard minimization
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationStep lengths in BFGS method for monotone gradients
Noname manuscript No. (will be inserted by the editor) Step lengths in BFGS method for monotone gradients Yunda Dong Received: date / Accepted: date Abstract In this paper, we consider how to directly
More informationOn the Solution of Constrained and Weighted Linear Least Squares Problems
International Mathematical Forum, 1, 2006, no. 22, 1067-1076 On the Solution of Constrained and Weighted Linear Least Squares Problems Mohammedi R. Abdel-Aziz 1 Department of Mathematics and Computer Science
More informationReduced-Hessian Methods for Constrained Optimization
Reduced-Hessian Methods for Constrained Optimization Philip E. Gill University of California, San Diego Joint work with: Michael Ferry & Elizabeth Wong 11th US & Mexico Workshop on Optimization and its
More informationImproved Newton s method with exact line searches to solve quadratic matrix equation
Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan
More informationExistence and uniqueness: Picard s theorem
Existence and uniqueness: Picard s theorem First-order equations Consider the equation y = f(x, y) (not necessarily linear). The equation dictates a value of y at each point (x, y), so one would expect
More informationCOURSE Iterative methods for solving linear systems
COURSE 0 4.3. Iterative methods for solving linear systems Because of round-off errors, direct methods become less efficient than iterative methods for large systems (>00 000 variables). An iterative scheme
More informationGauss Hermite interval quadrature rule
Computers and Mathematics with Applications 54 (2007) 544 555 www.elsevier.com/locate/camwa Gauss Hermite interval quadrature rule Gradimir V. Milovanović, Alesandar S. Cvetović Department of Mathematics,
More informationFIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow
FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint
More informationYongdeok Kim and Seki Kim
J. Korean Math. Soc. 39 (00), No. 3, pp. 363 376 STABLE LOW ORDER NONCONFORMING QUADRILATERAL FINITE ELEMENTS FOR THE STOKES PROBLEM Yongdeok Kim and Seki Kim Abstract. Stability result is obtained for
More informationProgramming, numerics and optimization
Programming, numerics and optimization Lecture C-3: Unconstrained optimization II Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428
More informationObstacle problems and isotonicity
Obstacle problems and isotonicity Thomas I. Seidman Revised version for NA-TMA: NA-D-06-00007R1+ [June 6, 2006] Abstract For variational inequalities of an abstract obstacle type, a comparison principle
More informationSuppose that the approximate solutions of Eq. (1) satisfy the condition (3). Then (1) if η = 0 in the algorithm Trust Region, then lim inf.
Maria Cameron 1. Trust Region Methods At every iteration the trust region methods generate a model m k (p), choose a trust region, and solve the constraint optimization problem of finding the minimum of
More informationNonlinear Programming
Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week
More informationAn Inexact Newton Method for Optimization
New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)
More informationHandout on Newton s Method for Systems
Handout on Newton s Method for Systems The following summarizes the main points of our class discussion of Newton s method for approximately solving a system of nonlinear equations F (x) = 0, F : IR n
More informationJournal of Inequalities in Pure and Applied Mathematics
Journal of Inequalities in Pure and Applied Mathematics ON A HYBRID FAMILY OF SUMMATION INTEGRAL TYPE OPERATORS VIJAY GUPTA AND ESRA ERKUŞ School of Applied Sciences Netaji Subhas Institute of Technology
More informationMEAN CURVATURE FLOW OF ENTIRE GRAPHS EVOLVING AWAY FROM THE HEAT FLOW
MEAN CURVATURE FLOW OF ENTIRE GRAPHS EVOLVING AWAY FROM THE HEAT FLOW GREGORY DRUGAN AND XUAN HIEN NGUYEN Abstract. We present two initial graphs over the entire R n, n 2 for which the mean curvature flow
More informationOn affine scaling inexact dogleg methods for bound-constrained nonlinear systems
On affine scaling inexact dogleg methods for bound-constrained nonlinear systems Stefania Bellavia, Sandra Pieraccini Abstract Within the framewor of affine scaling trust-region methods for bound constrained
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationWorst-Case Complexity Guarantees and Nonconvex Smooth Optimization
Worst-Case Complexity Guarantees and Nonconvex Smooth Optimization Frank E. Curtis, Lehigh University Beyond Convexity Workshop, Oaxaca, Mexico 26 October 2017 Worst-Case Complexity Guarantees and Nonconvex
More informationPDE-Constrained and Nonsmooth Optimization
Frank E. Curtis October 1, 2009 Outline PDE-Constrained Optimization Introduction Newton s method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP)
More informationAn interior-point trust-region polynomial algorithm for convex programming
An interior-point trust-region polynomial algorithm for convex programming Ye LU and Ya-xiang YUAN Abstract. An interior-point trust-region algorithm is proposed for minimization of a convex quadratic
More informationNumerical Comparisons of. Path-Following Strategies for a. Basic Interior-Point Method for. Revised August Rice University
Numerical Comparisons of Path-Following Strategies for a Basic Interior-Point Method for Nonlinear Programming M. A rg a e z, R.A. T a p ia, a n d L. V e l a z q u e z CRPC-TR97777-S Revised August 1998
More informationLecture Note 7: Iterative methods for solving linear systems. Xiaoqun Zhang Shanghai Jiao Tong University
Lecture Note 7: Iterative methods for solving linear systems Xiaoqun Zhang Shanghai Jiao Tong University Last updated: December 24, 2014 1.1 Review on linear algebra Norms of vectors and matrices vector
More informationIterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming
Iterative Reweighted Minimization Methods for l p Regularized Unconstrained Nonlinear Programming Zhaosong Lu October 5, 2012 (Revised: June 3, 2013; September 17, 2013) Abstract In this paper we study
More informationOutline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems
Outline Scientific Computing: An Introductory Survey Chapter 6 Optimization 1 Prof. Michael. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationExam 2 extra practice problems
Exam 2 extra practice problems (1) If (X, d) is connected and f : X R is a continuous function such that f(x) = 1 for all x X, show that f must be constant. Solution: Since f(x) = 1 for every x X, either
More informationA globally and quadratically convergent primal dual augmented Lagrangian algorithm for equality constrained optimization
Optimization Methods and Software ISSN: 1055-6788 (Print) 1029-4937 (Online) Journal homepage: http://www.tandfonline.com/loi/goms20 A globally and quadratically convergent primal dual augmented Lagrangian
More informationGlobal Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method. 1 Introduction
ISSN 1749-3889 (print), 1749-3897 (online) International Journal of Nonlinear Science Vol.11(2011) No.2,pp.153-158 Global Convergence of Perry-Shanno Memoryless Quasi-Newton-type Method Yigui Ou, Jun Zhang
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationA convergence result for an Outer Approximation Scheme
A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento
More informationA New Trust Region Algorithm Using Radial Basis Function Models
A New Trust Region Algorithm Using Radial Basis Function Models Seppo Pulkkinen University of Turku Department of Mathematics July 14, 2010 Outline 1 Introduction 2 Background Taylor series approximations
More informationDirect Adaptive NN Control of MIMO Nonlinear Discrete-Time Systems using Discrete Nussbaum Gain
Direct Adaptive NN Control of MIMO Nonlinear Discrete-Time Systems using Discrete Nussbaum Gain L. F. Zhai 1, C. G. Yang 2, S. S. Ge 2, T. Y. Tai 1, and T. H. Lee 2 1 Key Laboratory of Process Industry
More informationOn Lagrange multipliers of trust region subproblems
On Lagrange multipliers of trust region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Applied Linear Algebra April 28-30, 2008 Novi Sad, Serbia Outline
More informationIntroduction. New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems
New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems Z. Akbari 1, R. Yousefpour 2, M. R. Peyghami 3 1 Department of Mathematics, K.N. Toosi University of Technology,
More informationQuasi-Newton Methods
Quasi-Newton Methods Werner C. Rheinboldt These are excerpts of material relating to the boos [OR00 and [Rhe98 and of write-ups prepared for courses held at the University of Pittsburgh. Some further references
More informationAnalysis Finite and Infinite Sets The Real Numbers The Cantor Set
Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered
More informationGAUSSIAN MEASURE OF SECTIONS OF DILATES AND TRANSLATIONS OF CONVEX BODIES. 2π) n
GAUSSIAN MEASURE OF SECTIONS OF DILATES AND TRANSLATIONS OF CONVEX BODIES. A. ZVAVITCH Abstract. In this paper we give a solution for the Gaussian version of the Busemann-Petty problem with additional
More information