A new affine scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints


Journal of Computational and Applied Mathematics 161 (2003) 1-25

A new affine scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints

Detong Zhu
Department of Mathematics, Shanghai Normal University, Shanghai 200234, China

Received 1 September 2002; received in revised form 20 February 2003

Abstract

In this paper we propose a new interior affine scaling trust region algorithm with a nonmonotonic interior point backtracking technique for nonlinear optimization subject to linear equality and inequality constraints. The trust region subproblem in the proposed algorithm is defined by minimizing a quadratic function subject only to an affine scaling ellipsoidal constraint in a null subspace of the extended equality constraints. Using both the trust region strategy and a line search technique, the affine scaling trust region subproblem at each iteration generates a backtracking interior step to obtain a new accepted step. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. The nonmonotonic criterion can speed up convergence in some ill-conditioned cases. © 2003 Published by Elsevier B.V.

MSC: 90C30; 65K05; 49M40

Keywords: Trust region method; Backtracking step; Affine scaling; Nonmonotonic technique

The author gratefully acknowledges the partial support of the National Science Foundation Grant of China, Science Foundation Grant (0ZA14070) of the Shanghai Technical Sciences Committee and Science Foundation Grant (0DK06) of the Shanghai Education Committee.
E-mail address: dtzhu@shtu.edu.cn (D. Zhu).

1. Introduction

In this paper we analyze the solution of the nonlinear optimization problem subject to both linear equality and linear inequality constraints:
$$\min f(x) \quad \text{s.t.} \quad A_1x = b_1,\ A_2x \ge b_2, \tag{1.1}$$

where $f:\mathbb{R}^n \to \mathbb{R}$ is a smooth nonlinear function, not necessarily convex,
$$A = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix} \in \mathbb{R}^{m\times n},\quad A_1 = [a_1,\dots,a_l]^{\mathrm T} \in \mathbb{R}^{l\times n}\ \text{and}\ A_2 = [a_{l+1},\dots,a_m]^{\mathrm T} \in \mathbb{R}^{(m-l)\times n}\ (m \ge l)$$
are matrices, and
$$b = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = [b_1,\dots,b_l,b_{l+1},\dots,b_m]^{\mathrm T} \in \mathbb{R}^m$$
is a vector. The feasible set is denoted $\Omega = \{x \mid A_1x = b_1,\ A_2x \ge b_2\}$, and the strict interior feasible (or strictly feasible) set with respect to the inequality constraints is $\mathrm{int}(\Omega) = \{x \mid A_1x = b_1,\ A_2x > b_2\}$. There are quite a few articles [1,2,5,6] proposing sequential convex quadratic programming methods based on the trust region idea. Most existing methods generate a sequence of points in the interior of the feasible set, i.e., points satisfying the strict feasibility of the inequality constraints. Recently, Coleman and Li [3] presented a trust region affine scaling interior point algorithm for the minimization problem subject only to linear inequality constraints, that is,
$$\min f(x) \quad \text{s.t.} \quad A_2x \ge b_2. \tag{1.2}$$
The basic idea can be summarized as follows: when $x_k$ is the current strictly feasible interior iterate and $\mu_k$ is an approximation to the Lagrangian multipliers of problem (1.2), the scaling matrix $D$ and the diagonal matrix $C$ are defined as follows:
$$D(x) = \mathrm{diag}\{A_2x - b_2\}\ \text{and}\ D_k = D(x_k);\qquad C(x) = \mathrm{diag}\{\mu(x)\}\ \text{and}\ C_k = C(x_k). \tag{1.3}$$
A trial step $d_k$ toward $x_k + d$ is based on the trust region subproblem
$$\min\ q_k(d) = \nabla f_k^{\mathrm T}d + \tfrac12 d^{\mathrm T}B_kd + \tfrac12 d^{\mathrm T}A_2^{\mathrm T}D_k^{-1}C_kA_2d \quad \text{s.t.}\quad \|(d;\,D_k^{-1/2}A_2d)\| \le \Delta_k, \tag{1.4}$$
where $\nabla f_k = \nabla f(x_k)$, $d = x - x_k$, $B_k$ is either $\nabla^2 f(x_k)$ or its approximation, $\nabla f_k^{\mathrm T}d + \tfrac12 d^{\mathrm T}B_kd$ is the local quadratic approximation of $f$ at $x_k$, and $\Delta_k$ is the trust region radius. In the algorithm proposed by Coleman and Li [3], $\psi_k[d_k]$ denotes the minimum value of $q_k$ along the direction $d_k$ within the feasible trust region, i.e.,
$$\psi_k[d_k] = q_k(\tau_kd_k) = \min\bigl\{q_k(\tau d_k)\ :\ \|(\tau d_k;\,D_k^{-1/2}A_2\tau d_k)\| \le \Delta_k,\ x_k + \tau d_k \in \Omega\bigr\}. \tag{1.5}$$
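Although the paper gives no code, the quantities in (1.3)-(1.4) are simple to form. The following Python sketch, with illustrative names (`affine_scaling_matrices`, `scaled_model_hessian`) and NumPy-array inputs assumed, shows one way to build $D_k$, $C_k$ and the scaled model Hessian $B_k + A_2^{\mathrm T}D_k^{-1}C_kA_2$; it is a minimal sketch under these assumptions, not the author's implementation.

```python
import numpy as np

def affine_scaling_matrices(A2, b2, x, mu):
    """D(x) = diag(A2 x - b2), C(x) = diag(mu); x must be strictly feasible."""
    slack = A2 @ x - b2                 # strictly positive on int(Omega)
    assert np.all(slack > 0), "x must satisfy A2 x > b2"
    D = np.diag(slack)
    C = np.diag(mu)                     # multiplier estimates (illustrative)
    return D, C

def scaled_model_hessian(A2, b2, x, mu, B):
    """Hessian of the model q_k in (1.4): B + A2^T D^{-1} C A2."""
    slack = A2 @ x - b2
    return B + A2.T @ np.diag(mu / slack) @ A2
```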

An approximate trust region solution will be damped in order to maintain strict feasibility. The damping parameter $\tau_k \in (\tau_0, 1]$ for some constant $\tau_0 \in (0,1)$, and $1-\tau_k = O(\|d_k\|)$. The damped step $s_k$ along $d_k$ is defined as
$$s_k = \tau_kd_k. \tag{1.6}$$
Coleman and Li [3] proposed the trust region affine scaling interior point method (TRAM), in which the trust region radius is adjusted for nonlinearity and feasibility; that is, an iteration satisfies a forcing condition $q_k(s_k) \le \beta\,\psi_k(-g_k)$, where $\beta \in (0,1)$ and $g(x) = \nabla f(x) - A_2^{\mathrm T}\mu$, and hence ensures a sufficient reduction of the objective function. Coleman and Li [3] proposed:

Trust region affine scaling interior method
1. Choose parameters $0 < \eta_1 < \eta_2 < 1$, $0 < \gamma_1 < 1 < \gamma_2 \le \gamma_3$, $\beta \in (0,\tfrac12)$, $\varepsilon > 0$. Select an initial trust region radius $\Delta_0 > 0$ and a maximal trust region radius $\Delta_{\max} > 0$, and give a starting point $x_0 \in \mathrm{int}(\Omega)$. Set $k = 0$, go to the main step.
2. Choose a symmetric matrix $B_k \approx \nabla^2 f(x_k)$. Evaluate $\nabla f_k = \nabla f(x_k)$, compute a least squares Lagrangian multiplier approximation $\mu_k$, and set $D_k = \mathrm{diag}\{A_2x_k - b_2\}$, $C_k = \mathrm{diag}\{\mu_k\}$.
3. If $\|\nabla f_k^{\mathrm T}g_k\|^{1/2} \le \varepsilon$, stop with the approximate optimal solution $x_k$, where $g_k = \nabla f_k - A_2^{\mathrm T}\mu_k$.
4. Solve a step $d_k$, with $x_k + d_k \in \mathrm{int}(\Omega)$, based on the subproblem
$$\min_{d\in\mathbb{R}^n}\ \psi_k(d) = \nabla f_k^{\mathrm T}d + \tfrac12 d^{\mathrm T}B_kd + \tfrac12 d^{\mathrm T}A_2^{\mathrm T}S_k^{-1}C_kA_2d \quad \text{s.t.}\quad \|(d;\,S_k^{-1/2}A_2d)\| \le \Delta_k, \tag{$S_k$}$$
where $S_k = D_k$ or another choice of $S_k$ given in [3].
5. Calculate
$$\mathrm{Pred}(d_k) = -\psi_k(d_k), \tag{1.7}$$
$$\mathrm{Ared}(d_k) = f_k - f(x_k + d_k), \tag{1.8}$$
$$\rho_k = \frac{\mathrm{Ared}(d_k)}{\mathrm{Pred}(d_k)}. \tag{1.9}$$
6. If $\rho_k > \eta_1$, then take $x_{k+1} = x_k + d_k$. Otherwise, $x_{k+1} = x_k$ and $\Delta_{k+1} \in (0, \gamma_1\Delta_k]$.
7. Update the trust region size $\Delta_{k+1}$ from $\Delta_k$:
$$\Delta_{k+1} \in \begin{cases} [\gamma_1\Delta_k, \Delta_k], & \text{if } \rho_k \le \eta_1,\\ (\Delta_k, \gamma_2\Delta_k], & \text{if } \eta_1 < \rho_k < \eta_2,\\ (\gamma_2\Delta_k, \gamma_3\Delta_k], & \text{if } \rho_k \ge \eta_2. \end{cases} \tag{1.10}$$
8. Update $B_k$ to obtain $B_{k+1}$. Then set $k \leftarrow k+1$ and go to step 2.

In order to maintain strict interior feasibility, a stepsize $\theta_k[d_k]$ is taken to be the optimal step within $\mathrm{int}(\Omega)$ if $x_k + d_k$ is strictly feasible; otherwise, $\theta_k[d_k]$ is chosen just short of the step to the boundary.
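Steps 5-7 above are a standard ratio test and radius update. A minimal Python sketch of that logic follows; the threshold values `eta1`, `eta2` and factors `gamma1 < 1 < gamma2 <= gamma3` are placeholders standing in for the constants of (1.10), not values taken from the paper.

```python
def ratio_and_radius(f_k, f_trial, pred, Delta, Delta_max,
                     eta1=0.25, eta2=0.75, gamma1=0.5, gamma2=2.0, gamma3=4.0):
    """Acceptance ratio (1.9) and a three-interval radius update as in (1.10)."""
    ared = f_k - f_trial                 # actual reduction (1.8)
    rho = ared / pred                    # pred = -psi_k(d_k) > 0
    if rho <= eta1:                      # poor agreement: shrink the radius
        Delta_new = gamma1 * Delta
    elif rho < eta2:                     # acceptable: enlarge mildly
        Delta_new = min(gamma2 * Delta, Delta_max)
    else:                                # very good: enlarge up to Delta_max
        Delta_new = min(gamma3 * Delta, Delta_max)
    accepted = rho > eta1                # step acceptance test of step 6
    return rho, Delta_new, accepted
```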

Coleman and Li [2] suggested
$$\theta_k[d_k] = \begin{cases} d_k, & \text{if } x_k + d_k \in \mathrm{int}(\Omega),\\ \tau_k\Gamma_kd_k, & \text{otherwise}, \end{cases} \tag{1.11}$$
where $\tau_k \in (\tau_l, 1]$ for some $0 < \tau_l < 1$, $1-\tau_k = O(\|d_k\|)$, and $\Gamma_k$ is the stepsize along $d_k$ to the boundary of the inequality constraints (see Remark 2 below). One of the advantages of the trust region model is that it does not require the objective function to be convex. However, the trust region subproblem with the strict feasibility constraint may need to be resolved many times before an acceptable step is obtained, and hence the total computation for completing one iteration might be expensive and difficult. The combination of the trust region strategy with a line search technique for unconstrained optimization suggested in [10] motivates switching to backtracking steps when a trial step is not accepted by the trust region strategy, since the trial step still provides a direction of sufficient descent. The nonmonotone technique has been developed for line search methods and trust region algorithms for unconstrained optimization, respectively (see [8,4], for instance). The nonmonotonic idea motivates a further study of the backtracking affine scaling interior point algorithm, because monotonicity may cause a series of very small steps if the contours of the objective function $f$ form a family of curves with large curvature. In order to avoid the difficulties caused by the strict feasibility constraints in the trust region subproblem, the trust region subproblem in the proposed algorithm is defined by minimizing a quadratic function subject only to an affine scaling ellipsoidal constraint in the null subspace of the equality constraints.

The paper is organized as follows. In Section 2, we describe the algorithm, which combines the techniques of trust region strategy, interior point, affine scaling and nonmonotonic backtracking search. In Section 3, weak global convergence of the proposed algorithm is established. Some further convergence properties, such as strong global convergence and the local convergence rate, are discussed in Section 4.

2. Algorithm

In this section, we propose an affine scaling trust region method with a nonmonotonic interior point backtracking technique for problem (1.1). The trust region subproblem involves choosing a scaling matrix $D_k$ and a quadratic model $q_k(d)$. We motivate our choice of affine scaling matrix by examining the optimality conditions for problem (1.1), which are well established. A feasible point $x^\ast$ is said to be a stationary point for problem (1.1), which is called the first order necessary condition, if there exist two vectors $\lambda^\ast \in \mathbb{R}^l$ and $0 \le \mu^\ast \in \mathbb{R}^{m-l}$ such that
$$\mathrm{diag}\{A_2x^\ast - b_2\}\,\mu^\ast = 0 \quad\text{and}\quad \nabla f(x^\ast) - A_1^{\mathrm T}\lambda^\ast - A_2^{\mathrm T}\mu^\ast = 0. \tag{2.1}$$
Strict complementarity is said to hold at $x^\ast$ if, for each $i = 1,\dots,m-l$, at least one of the two inequalities $a_{l+i}^{\mathrm T}x^\ast - b_{l+i} > 0$ and $\mu_i^\ast > 0$ holds, that is, $a_{l+i}^{\mathrm T}x^\ast - b_{l+i} + \mu_i^\ast > 0$ for $i = 1,\dots,m-l$, where $\mu_i^\ast$ and $b_{l+i}$ denote the $i$th components of $\mu^\ast$ and $b$, respectively.

The trust region subproblem arises naturally from the Newton step for the first order necessary conditions of problem (1.1).
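As an aside, the stationarity conditions (2.1) are easy to check numerically. The sketch below assumes NumPy arrays `grad_f`, `A1`, `A2`, `b2`, `x` and multiplier estimates `lam`, `mu` (all illustrative names); the tolerance and function names are not from the paper.

```python
import numpy as np

def kkt_residuals(grad_f, A1, A2, b2, x, lam, mu):
    """Residuals of the first order conditions (2.1)."""
    stationarity = grad_f - A1.T @ lam - A2.T @ mu    # should vanish at x*
    complementarity = (A2 @ x - b2) * mu              # diag{A2 x - b2} mu = 0
    dual_feasibility = np.minimum(mu, 0.0)            # mu >= 0 componentwise
    return stationarity, complementarity, dual_feasibility

def is_stationary(grad_f, A1, A2, b2, x, lam, mu, tol=1e-8):
    r1, r2, r3 = kkt_residuals(grad_f, A1, A2, b2, x, lam, mu)
    return max(np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)) <= tol
```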

Ignoring primal and dual feasibility of the inequality constraints, the first order necessary conditions of (1.1) can be expressed as an $(m+n)$ by $(m+n)$ system of nonlinear equations:
$$\nabla f(x) - A_1^{\mathrm T}\lambda - A_2^{\mathrm T}\mu = 0,\qquad A_1x = b_1,\qquad \mathrm{diag}\{A_2x - b_2\}\,\mu = 0. \tag{2.2}$$
For any $x \in \mathbb{R}^n$, $\lambda \in \mathbb{R}^l$, $\mu \in \mathbb{R}^{m-l}$, let $(x,\lambda,\mu)$ denote the vector in $\mathbb{R}^{m+n}$ whose first $n$ components equal $x$, next $l$ components equal $\lambda$ and last $m-l$ components equal $\mu$. The Newton step $(\Delta x,\Delta\lambda,\Delta\mu)$ for the above equations satisfies
$$\begin{bmatrix} \nabla^2 f(x) & -A_1^{\mathrm T} & -A_2^{\mathrm T}\\ A_1 & 0 & 0\\ \mathrm{diag}\{\mu\}A_2 & 0 & D \end{bmatrix}\begin{bmatrix}\Delta x\\ \Delta\lambda\\ \Delta\mu\end{bmatrix} = -\begin{bmatrix}\nabla f(x) - A_1^{\mathrm T}\lambda - A_2^{\mathrm T}\mu\\ A_1x - b_1\\ D\mu\end{bmatrix}, \tag{2.3}$$
where
$$D = \mathrm{diag}\{A_2x - b_2\}. \tag{2.4}$$
In order to globalize the iteration, we replace $\mathrm{diag}\{\mu\}$ by $C = \mathrm{diag}\{\mu_k\}$, the diagonal matrix of the current multiplier estimates, as suggested by Coleman and Li [3]; that is,
$$\begin{bmatrix} \nabla^2 f(x) & -A_1^{\mathrm T} & -A_2^{\mathrm T}\\ A_1 & 0 & 0\\ CA_2 & 0 & D \end{bmatrix}\begin{bmatrix}\Delta x\\ \Delta\lambda\\ \Delta\mu\end{bmatrix} = -\begin{bmatrix}\nabla f(x) - A_1^{\mathrm T}\lambda - A_2^{\mathrm T}\mu\\ A_1x - b_1\\ D\mu\end{bmatrix}. \tag{2.5}$$
The modified Newton step can be shown to approximate the exact Newton step sufficiently well, asymptotically, to achieve fast convergence. Using the augmented quadratic as the objective function of the model, a trust region subproblem consistent with the modified Newton step $\Delta x^N$ in the null subspace of

$A_1$ is
$$\begin{aligned}\min\ & \nabla f_k^{\mathrm T}d + \tfrac12 d^{\mathrm T}B_kd + \tfrac12 d^{\mathrm T}(A_2^{\mathrm T}D_k^{-1}C_kA_2)d\\ \text{s.t.}\ & A_1d = 0,\qquad \|(d;\,D_k^{-1/2}A_2d)\| \le \Delta_k,\end{aligned} \tag{2.6}$$
where $d = x - x_k$, $B_k$ is either $\nabla^2 f(x_k)$ or its approximation, and $\Delta_k$ is the trust region radius. Setting the transformation $\hat d = D_k^{-1/2}A_2d$, the trust region subproblem (2.6) is equivalent to the following problem in the original variable space:
$$\begin{aligned}\min\ & q_k(d) = \nabla f_k^{\mathrm T}d + \tfrac12 d^{\mathrm T}B_kd + \tfrac12\hat d^{\mathrm T}C_k\hat d\\ \text{s.t.}\ & A_1d = 0,\qquad D_k^{1/2}\hat d = A_2d,\qquad \|(d;\hat d)\| \le \Delta_k.\end{aligned} \tag{2.7}$$
Now we introduce our trust region subproblem:
$$\begin{aligned}\min_{d\in\mathbb{R}^n}\ & \psi_k(d) = \nabla f_k^{\mathrm T}d + \tfrac12 d^{\mathrm T}B_kd + \tfrac12 d^{\mathrm T}(A_2^{\mathrm T}S_k^{-1}C_kA_2)d\\ \text{s.t.}\ & A_1d = 0,\qquad \|(d;\,S_k^{-1/2}A_2d)\| \le \Delta_k,\end{aligned} \tag{$S_k$}$$
with $S_k = D_k = \mathrm{diag}\{A_2x_k - b_2\}$ (or another choice of $S_k$ suggested in [3]). The least squares Lagrangian multipliers $\lambda_k$ and $\mu_k$ are defined by
$$\begin{bmatrix} A_1^{\mathrm T} & A_2^{\mathrm T}\\ 0 & D_k^{1/2}\end{bmatrix}\begin{bmatrix}\lambda\\ \mu\end{bmatrix}\ \overset{\mathrm{L.S.}}{=}\ \begin{bmatrix}\nabla f(x_k)\\ 0\end{bmatrix},\qquad g_k = \nabla f(x_k) - A_1^{\mathrm T}\lambda_k - A_2^{\mathrm T}\mu_k. \tag{2.8}$$
Let $P_k$ denote the orthogonal projection onto the null space of
$$\begin{bmatrix} A_1 & 0\\ A_2 & -D_k^{1/2}\end{bmatrix};$$
then
$$\begin{bmatrix} g_k\\ \hat g_k\end{bmatrix} = P_k\begin{bmatrix}\nabla f(x_k)\\ 0\end{bmatrix},\qquad \nabla f(x_k)^{\mathrm T}g_k = \|\nabla f(x_k) - A_1^{\mathrm T}\lambda_k - A_2^{\mathrm T}\mu_k\|^2 + \|D_k^{1/2}\mu_k\|^2. \tag{2.9}$$
It is clear to see from the subproblem ($S_k$) that a sufficient decrease of $\psi_k(d)$, measured against the decrease from the damped minimizer along $-g_k$, leads to satisfaction of complementarity:
$$\lim_{k\to\infty}\bigl[\nabla f(x_k) - A_1^{\mathrm T}\lambda_k - A_2^{\mathrm T}\mu_k\bigr] = 0 \quad\text{and}\quad \lim_{k\to\infty}D_k^{1/2}\mu_k = 0. \tag{2.10}$$
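Under Assumption A3 introduced later, the least squares multipliers in (2.8) can be obtained from the normal equations (3.4), and the identity in (2.9) then gives the stopping measure $\nabla f_k^{\mathrm T}g_k$ used in step 2 of the algorithm. The following Python sketch (illustrative names, NumPy inputs assumed) is one way this could be computed; it is not taken from the paper.

```python
import numpy as np

def least_squares_multipliers(grad_f, A1, A2, b2, x):
    """Solve the normal equations (3.4) for (lam, mu) and return the measure
    grad_f^T g = ||g||^2 + ||D^{1/2} mu||^2 of (2.9)."""
    l = A1.shape[0]
    slack = A2 @ x - b2                          # diagonal of D(x), > 0 on int(Omega)
    K = np.block([[A1 @ A1.T, A1 @ A2.T],
                  [A2 @ A1.T, A2 @ A2.T + np.diag(slack)]])
    rhs = np.concatenate([A1 @ grad_f, A2 @ grad_f])
    sol = np.linalg.solve(K, rhs)                # assumes full row rank (Assumption A3)
    lam, mu = sol[:l], sol[l:]
    g = grad_f - A1.T @ lam - A2.T @ mu          # projected-gradient part
    measure = g @ g + slack @ (mu * mu)          # equals grad_f^T g by (2.9)
    return lam, mu, g, measure
```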

We now describe the trust region affine scaling interior point algorithm with a nonmonotonic backtracking interior point technique for solving problem (1.1).

Initialization step: Choose parameters $\beta \in (0,\tfrac12)$, $\omega \in (0,1)$, $0 < \eta_1 < \eta_2 < 1$, $0 < \gamma_1 < 1 < \gamma_2 \le \gamma_3$, $\varepsilon > 0$ and a positive integer $M$. Let $m(0) = 0$. Choose a symmetric matrix $B_0$. Select an initial trust region radius $\Delta_0 > 0$ and a maximal trust region radius $\Delta_{\max} \ge \Delta_0$, and give a starting strictly feasible interior point $x_0 \in \mathrm{int}(\Omega)$. Set $k = 0$, go to the main step.

Main step:
1. Evaluate $f_k = f(x_k)$, $\nabla f(x_k)$ and $D_k = \mathrm{diag}\{A_2x_k - b_2\}$. Choose a symmetric matrix $B_k \approx \nabla^2 f(x_k)$. Compute the least squares Lagrangian multiplier approximations $\lambda_k$ and $\mu_k$. Set $C_k = \mathrm{diag}\{\mu_k\}$.
2. If $\|\nabla f_k^{\mathrm T}g_k\|^{1/2} \le \varepsilon$, stop with the approximate solution $x_k$.
3. Solve a step $d_k$ based on the subproblem ($S_k$).
4. Choose $\alpha_k = 1, \omega, \omega^2, \dots$ until the following inequality is satisfied:
$$f(x_k + \alpha_kd_k) \le f(x_{l(k)}) + \alpha_k\beta\nabla f(x_k)^{\mathrm T}d_k, \tag{2.11}$$
with $x_k + \alpha_kd_k \in \Omega$, where
$$f(x_{l(k)}) = \max_{0\le j\le m(k)}\{f(x_{k-j})\}. \tag{2.12}$$
5. Set
$$h_k = \begin{cases} \alpha_kd_k, & \text{if } x_k + \alpha_kd_k \in \mathrm{int}(\Omega),\\ \tau_k\Gamma_k\alpha_kd_k, & \text{otherwise}, \end{cases} \tag{2.13}$$
where $\tau_k \in (\tau_0, 1]$ for some $\tau_0 \in (0,1)$ and $1-\tau_k = O(\|d_k\|)$, and set
$$x_{k+1} = x_k + h_k. \tag{2.14}$$
6. Calculate
$$\mathrm{Pred}(h_k) = -\psi_k(h_k), \tag{2.15}$$
$$\mathrm{Ared}(h_k) = f(x_{l(k)}) - f(x_k + h_k), \tag{2.16}$$
$$\hat\rho_k = \frac{\mathrm{Ared}(h_k)}{\mathrm{Pred}(h_k)}. \tag{2.17}$$
7. Update the trust region size $\Delta_{k+1}$ from $\Delta_k$:
$$\Delta_{k+1} \in \begin{cases} [\gamma_1\Delta_k, \Delta_k], & \text{if } \hat\rho_k \le \eta_1,\\ (\Delta_k, \gamma_2\Delta_k], & \text{if } \eta_1 < \hat\rho_k < \eta_2,\\ (\gamma_2\Delta_k, \min\{\gamma_3\Delta_k, \Delta_{\max}\}], & \text{if } \hat\rho_k \ge \eta_2. \end{cases} \tag{2.18}$$
8. Take $m(k+1) = \min\{m(k)+1, M\}$, and update $B_k$ to obtain $B_{k+1}$. Then set $k \leftarrow k+1$ and go to step 1.

Remark 1. In the subproblem ($S_k$), $\nabla f_k^{\mathrm T}d + \tfrac12 d^{\mathrm T}B_kd$ is a local quadratic model of the objective function $f$ around $x_k$, while a candidate iterative direction $d_k$ is generated by minimizing $\psi_k(d)$ only within the affine scaling ellipsoidal ball centered at $x_k$ with radius $\Delta_k$ in the null subspace $N\bigl(\bigl[\begin{smallmatrix} A_1 & 0\\ A_2 & -D_k^{1/2}\end{smallmatrix}\bigr]\bigr)$.
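The nonmonotone backtracking of step 4, (2.11)-(2.12), is the computational core of the method. A minimal Python sketch follows; `beta`, `omega` and the safeguard `max_halvings` are illustrative values, and `f` is the objective as a callable.

```python
def nonmonotone_backtracking(f, x, d, grad_dot_d, recent_f_values,
                             beta=1e-4, omega=0.5, alpha=1.0, max_halvings=50):
    """Reduce alpha by the factor omega until (2.11) holds against the
    nonmonotone reference value f(x_{l(k)}) of (2.12)."""
    f_ref = max(recent_f_values)     # max of the last m(k)+1 objective values
    for _ in range(max_halvings):
        if f(x + alpha * d) <= f_ref + alpha * beta * grad_dot_d:
            return alpha             # acceptance criterion (2.11) satisfied
        alpha *= omega               # backtrack
    return alpha
```

Setting `recent_f_values = [f(x_k)]` (i.e., $M = 0$) recovers the usual monotone Armijo-type test, consistent with Remark 3 below.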

Remark 2. The scalar $\Gamma_k$ given in step 5 denotes the stepsize along $d_k$ to the boundary of the linear inequality constraints:
$$\Gamma_k = \min\Bigl\{\frac{a_{l+i}^{\mathrm T}x_k - b_{l+i}}{-a_{l+i}^{\mathrm T}d_k}\ :\ \frac{a_{l+i}^{\mathrm T}x_k - b_{l+i}}{-a_{l+i}^{\mathrm T}d_k} > 0,\ i = 1,\dots,m-l\Bigr\}, \tag{2.19}$$
with $\Gamma_k = +\infty$ if $(a_{l+i}^{\mathrm T}x_k - b_{l+i})/(-a_{l+i}^{\mathrm T}d_k) \le 0$ for all $i$. A key property of the scalar $\Gamma_k$ is that a step $\alpha d_k$ with $\alpha \in (0,\Gamma_k)$ to the point $x_k + \alpha d_k$ does not violate any linear inequality constraint. To see this, first observe that if $(a_{l+i}^{\mathrm T}x_k - b_{l+i})/(-a_{l+i}^{\mathrm T}d_k) \le 0$ for some $i = 1,\dots,m-l$, then $a_{l+i}^{\mathrm T}x_k - b_{l+i} > 0$ implies $a_{l+i}^{\mathrm T}d_k \ge 0$. Therefore, for all $\alpha \in (0,+\infty)$,
$$a_{l+i}^{\mathrm T}(x_k + \alpha d_k) - b_{l+i} = a_{l+i}^{\mathrm T}x_k - b_{l+i} + \alpha a_{l+i}^{\mathrm T}d_k > 0, \tag{2.20}$$
which means the $i$th strict linear inequality constraint holds. If $(a_{l+i}^{\mathrm T}x_k - b_{l+i})/(-a_{l+i}^{\mathrm T}d_k) > 0$ for some $i$, then $a_{l+i}^{\mathrm T}x_k - b_{l+i} > 0$ implies $a_{l+i}^{\mathrm T}d_k < 0$, and from $\alpha \le (a_{l+i}^{\mathrm T}x_k - b_{l+i})/(-a_{l+i}^{\mathrm T}d_k)$ we have
$$-\alpha a_{l+i}^{\mathrm T}d_k \le a_{l+i}^{\mathrm T}x_k - b_{l+i}. \tag{2.21}$$
Hence, (2.20)-(2.21) mean that, no matter which case occurs, the inequality $a_{l+i}^{\mathrm T}(x_k + \alpha d_k) \ge b_{l+i}$ holds for every $i = 1,\dots,m-l$. A small sketch of this computation is given after the remarks below.

Remark 3. Note that in each iteration the algorithm solves only one general trust region subproblem on the null subspace. If the solution $d_k$ fails to meet the acceptance criterion (2.11) with $\alpha_k = 1$, then we turn to the line search, i.e., retreat from $x_k + d_k$ until the criterion is satisfied. The usual monotone algorithm can be viewed as a special case of the proposed algorithm when $M = 0$.

Remark 4. We improve the trust region interior algorithms in [2,3] by using a backtracking interior line search technique, while the trust region radius is adjusted according to the traditional trust region criterion. In the line search we use $\nabla f(x_k)^{\mathrm T}d_k$ in (2.11) instead of the model reduction used in the traditional trust region criterion. The line search criterion (2.11) is easier to satisfy than the traditional trust region acceptance test, because if $B_k + A_2^{\mathrm T}S_k^{-1}C_kA_2$ is positive semidefinite, then $\nabla f(x_k)^{\mathrm T}d_k \le \psi_k(d_k)$.
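Putting Remarks 2 and the damping of step 5 into practice, the following Python sketch (illustrative names, NumPy inputs assumed) computes the boundary stepsize $\Gamma_k$ of (2.19) and the damped step $h_k$ of (2.13); it is a sketch of the stated rules, not code from the paper.

```python
import numpy as np

def step_to_boundary(A2, b2, x, d):
    """Gamma_k = min{(a_i^T x - b_i)/(-a_i^T d) : a_i^T d < 0}, +inf otherwise."""
    slack = A2 @ x - b2                    # > 0 for a strictly feasible x
    Ad = A2 @ d
    hit = Ad < 0                           # only these constraints can be reached
    return np.min(slack[hit] / (-Ad[hit])) if np.any(hit) else np.inf

def damped_step(A2, b2, x, d, alpha, tau):
    """h_k of (2.13): keep alpha*d if strictly interior, otherwise shorten it."""
    step = alpha * d
    if np.all(A2 @ (x + step) - b2 > 0):   # already in int(Omega)
        return step
    gamma = step_to_boundary(A2, b2, x, step)
    return tau * gamma * step              # tau in (tau_0, 1], 1 - tau = O(||d||)
```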

3. Global convergence

Throughout this section we assume that $f:\mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable and bounded from below. Given $x_0$, the algorithm generates a sequence $\{x_k\} \subset \mathbb{R}^n$. In our analysis, we denote the level set of $f$ by
$$L(x_0) = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0),\ A_1x = b_1,\ A_2x \ge b_2\}.$$
The following assumptions are commonly used in the convergence analysis of most methods for linearly constrained optimization.

Assumption A1. The sequence $\{x_k\}$ generated by the algorithm is contained in a compact set $L(x_0)$ of $\mathbb{R}^n$.

Assumption A2. There exist positive scalars $\chi_f$ and $\chi_g$ such that $\|\nabla^2 f(x)\| \le \chi_f$ and $\|g(x)\| \le \chi_g$ for all $x \in L(x_0)$. There exists a positive scalar $\chi_B$ such that $\|B_k\| \le \chi_B$ for all $k$.

Assumption A3. The matrix
$$\begin{bmatrix} A_1 & 0\\ A_2 & -D(x)^{1/2}\end{bmatrix}$$
is assumed to have full row rank for all $x \in L(x_0)$.

Define
$$M_k = \begin{bmatrix} B_k & 0\\ 0 & C_k\end{bmatrix}. \tag{3.1}$$
Let $(d_k,\hat d_k)$ denote a solution to ($S_k$). The first order necessary conditions of (2.7) (see [7,9]) imply that there exists $\nu_k \ge 0$ such that
$$(M_k + \nu_kI)\begin{bmatrix} d_k\\ \hat d_k\end{bmatrix} = -\begin{bmatrix}\nabla f_k\\ 0\end{bmatrix} + \begin{bmatrix} A_1^{\mathrm T}\\ 0\end{bmatrix}\lambda_{k+1} + \begin{bmatrix} A_2^{\mathrm T}\\ -D_k^{1/2}\end{bmatrix}\mu_{k+1} \tag{3.2}$$
with
$$\nu_k\bigl(\Delta_k - \|(d_k;\hat d_k)\|\bigr) = 0,\qquad A_1d_k = 0. \tag{3.3}$$
Clearly, $\lambda_{k+1} = \lambda_{k+1}^N = \lambda_k + \Delta\lambda_k$ and $\mu_{k+1} = \mu_{k+1}^N = \mu_k + \Delta\mu_k$ when $\nu_k = 0$, where $(d_k^N,\Delta\lambda_k,\Delta\mu_k)$ is the modified Newton step, i.e.,
$$\begin{bmatrix} B_k & -A_1^{\mathrm T} & -A_2^{\mathrm T}\\ A_1 & 0 & 0\\ C_kA_2 & 0 & D_k\end{bmatrix}\begin{bmatrix} d^N\\ \Delta\lambda\\ \Delta\mu\end{bmatrix} = -\begin{bmatrix}\nabla f_k - A_1^{\mathrm T}\lambda_k - A_2^{\mathrm T}\mu_k\\ 0\\ D_k\mu_k\end{bmatrix}.$$
Let the columns of $Z_k$ denote an orthonormal basis for the null space of
$$\begin{bmatrix} A_1 & 0\\ A_2 & -D_k^{1/2}\end{bmatrix}.$$
By the second order necessary conditions of (2.7), the projected Hessian $Z_k^{\mathrm T}(M_k + \nu_kI)Z_k$ is positive semidefinite (see [12]). Under Assumption A3, the Lagrangian multipliers $\lambda_k$, $\mu_k$ can be computed via the normal equations of (2.8), i.e.,
$$\begin{bmatrix} A_1A_1^{\mathrm T} & A_1A_2^{\mathrm T}\\ A_2A_1^{\mathrm T} & A_2A_2^{\mathrm T} + D_k\end{bmatrix}\begin{bmatrix}\lambda_k\\ \mu_k\end{bmatrix} = \begin{bmatrix} A_1\nabla f(x_k)\\ A_2\nabla f(x_k)\end{bmatrix}. \tag{3.4}$$

It is well known that, in order to assure the global convergence of a trust region algorithm, it suffices to show that at the $k$th iteration the predicted reduction $\mathrm{Pred}(d_k) = -\psi_k(d_k)$, obtained by the step $d_k$ from the trust region subproblem, satisfies a sufficient descent condition (see [11]).

Lemma 3.1. Let the step $d_k$ be the solution of the trust region subproblem ($S_k$), and assume that Assumptions A1-A3 hold. Then there exists $\sigma > 0$ such that the step $d_k$ satisfies the following sufficient descent condition:
$$\mathrm{Pred}(d_k) \ge \sigma\,\|\nabla f_k^{\mathrm T}g_k\|^{1/2}\min\Bigl\{\Delta_k,\ \frac{\|\nabla f_k^{\mathrm T}g_k\|^{1/2}}{\|M_k\|}\Bigr\} \tag{3.5}$$
for all $\nabla f_k$, $g_k$, $M_k$ and $\Delta_k$. In fact, here $\sigma = \tfrac12$ and $\nabla f_k^{\mathrm T}g_k = \|g_k\|^2 + \|D_k^{-1/2}A_2g_k\|^2$.

Proof. By the definition of the least squares Lagrangian multipliers $\lambda_k$ and $\mu_k$ in (2.8), that is,
$$\min_{\lambda,\mu}\ \|A_1^{\mathrm T}\lambda + A_2^{\mathrm T}\mu - \nabla f_k\|^2 + \|D_k^{1/2}\mu\|^2,$$
and by (3.4), we have that
$$(A_1A_1^{\mathrm T})\lambda + A_1A_2^{\mathrm T}\mu = A_1\nabla f_k,\qquad (A_2A_1^{\mathrm T})\lambda + (A_2A_2^{\mathrm T} + D_k)\mu = A_2\nabla f_k.$$
So we can obtain that $\lambda_k$ and $\mu_k$ satisfy
$$\lambda_k = (A_1A_1^{\mathrm T})^{-1}A_1(\nabla f_k - A_2^{\mathrm T}\mu_k) \tag{3.6}$$
with
$$D_k\mu_k = A_2\nabla f_k - A_2A_1^{\mathrm T}\lambda_k - A_2A_2^{\mathrm T}\mu_k = A_2g_k, \tag{3.7}$$
where
$$g_k = \nabla f(x_k) - A_1^{\mathrm T}\lambda_k - A_2^{\mathrm T}\mu_k = [I - A_1^{\mathrm T}(A_1A_1^{\mathrm T})^{-1}A_1](\nabla f_k - A_2^{\mathrm T}\mu_k),$$
which implies $A_1g_k = 0$. Define $p_k = -g_k$, and hence $\hat p_k = D_k^{-1/2}A_2p_k$. Then
$$\phi_k(t) = \psi_k(tp_k) = t(\nabla f_k^{\mathrm T}p_k) + \tfrac12 t^2\bigl(p_k^{\mathrm T}B_kp_k + p_k^{\mathrm T}A_2^{\mathrm T}D_k^{-1}C_kA_2p_k\bigr) = t(\nabla f_k^{\mathrm T}p_k) + \tfrac12 t^2(p_k;\hat p_k)^{\mathrm T}\begin{bmatrix} B_k & 0\\ 0 & C_k\end{bmatrix}(p_k;\hat p_k). \tag{3.8}$$

From the definitions of $p_k$ and $\hat p_k$ and from (3.7), we have that
$$-\nabla f_k^{\mathrm T}p_k = \|g_k\|^2 + \|D_k^{-1/2}A_2g_k\|^2 = \nabla f_k^{\mathrm T}g_k,$$
with $g_k$ given in (2.9). Now consider the following subproblem:
$$\begin{aligned}\min\ & t(\nabla f_k^{\mathrm T}p_k) + \tfrac12 t^2(p_k;\hat p_k)^{\mathrm T}M_k(p_k;\hat p_k)\\ \text{s.t.}\ & \hat p_k = D_k^{-1/2}A_2p_k,\qquad 0 \le t \le \frac{\Delta_k}{\|(p_k;\hat p_k)\|},\end{aligned} \tag{$\star$}$$
since $p_k$ satisfies $A_1p_k = 0$. Let $t_k$ be the optimal solution of the above subproblem ($\star$) and let $\psi_k^\ast$ be the optimal value of the subproblem ($S_k$). Let
$$\vartheta_k = (p_k;\hat p_k)^{\mathrm T}M_k(p_k;\hat p_k) \quad\text{and}\quad \bar t_k = \frac{-\nabla f_k^{\mathrm T}p_k}{\vartheta_k}.$$
Consider two cases.
(1) $\vartheta_k > 0$. If $\bar t_k\|(p_k;\hat p_k)\| \le \Delta_k$, then $t_k = \bar t_k$ is the solution of subproblem ($\star$), and we have
$$\psi_k^\ast \le \phi_k(\bar t_k) = \bar t_k\nabla f_k^{\mathrm T}p_k + \tfrac12\bar t_k^2\vartheta_k = -\tfrac12\frac{(\nabla f_k^{\mathrm T}p_k)^2}{\vartheta_k} \le -\tfrac12\frac{(\nabla f_k^{\mathrm T}p_k)^2}{\|M_k\|\,\|(p_k;\hat p_k)\|^2} = -\tfrac12\frac{\nabla f_k^{\mathrm T}g_k}{\|M_k\|}.$$
On the other hand, if $\bar t_k\|(p_k;\hat p_k)\| > \Delta_k$, i.e., $-\nabla f_k^{\mathrm T}p_k > \vartheta_k\Delta_k/\|(p_k;\hat p_k)\|$, then set $t_k = \Delta_k/\|(p_k;\hat p_k)\|$; we have
$$\psi_k^\ast \le \phi_k(t_k) = \frac{\Delta_k}{\|(p_k;\hat p_k)\|}\nabla f_k^{\mathrm T}p_k + \tfrac12\Bigl(\frac{\Delta_k}{\|(p_k;\hat p_k)\|}\Bigr)^2\vartheta_k \le \tfrac12\frac{\Delta_k}{\|(p_k;\hat p_k)\|}\nabla f_k^{\mathrm T}p_k = -\tfrac12\Delta_k(\nabla f_k^{\mathrm T}g_k)^{1/2}.$$

(2) If $\vartheta_k \le 0$, then set $t_k = \Delta_k/\|(p_k;\hat p_k)\|$; we also have
$$\psi_k^\ast \le \phi_k(t_k) = \frac{\Delta_k}{\|(p_k;\hat p_k)\|}\nabla f_k^{\mathrm T}p_k + \tfrac12\Bigl(\frac{\Delta_k}{\|(p_k;\hat p_k)\|}\Bigr)^2\vartheta_k \le \frac{\Delta_k}{\|(p_k;\hat p_k)\|}\nabla f_k^{\mathrm T}p_k = -\Delta_k(\nabla f_k^{\mathrm T}g_k)^{1/2}.$$
Combining the above two cases, the conclusion of the lemma holds.

The following lemma shows the relation between the gradient $\nabla f_k$ of the objective function and the step $d_k$ generated by the proposed algorithm. We can see from the lemma that the trial step is a sufficiently descent direction.

Lemma 3.2. At the $k$th iteration, let $d_k$ be generated by the trust region subproblem ($S_k$); then
$$\nabla f_k^{\mathrm T}d_k \le -\sigma_1\|\nabla f_k^{\mathrm T}g_k\|^{1/2}\min\Bigl\{\Delta_k,\ \frac{\|\nabla f_k^{\mathrm T}g_k\|^{1/2}}{\|M_k\|}\Bigr\}, \tag{3.9}$$
where $\sigma_1 > 0$ is a constant.

Proof. Let $(d_k,\hat d_k)$ denote a solution to subproblem ($S_k$). The first order necessary conditions of ($S_k$) imply that (3.2)-(3.3) hold. Taking norms in (3.2), we can obtain
$$\nu_k\|(d_k;\hat d_k)\| \le \bigl(\|\nabla f(x_k) - A_1^{\mathrm T}\lambda_{k+1} - A_2^{\mathrm T}\mu_{k+1}\|^2 + \|D_k^{1/2}\mu_{k+1}\|^2\bigr)^{1/2} + \|M_k\|\,\|(d_k;\hat d_k)\| = \|\nabla f_k^{\mathrm T}g_k\|^{1/2} + \|M_k\|\,\|(d_k;\hat d_k)\|. \tag{3.10}$$
Noting that $\|(d_k;\hat d_k)\| \le \Delta_k$,
$$\nu_k\Delta_k \le \|\nabla f_k^{\mathrm T}g_k\|^{1/2} + \|M_k\|\Delta_k. \tag{3.11}$$
The first order necessary conditions of ($S_k$) also imply that there exist $\nu_k \ge 0$ and $\mu_{k+1}$ such that
$$\begin{bmatrix} d_k\\ \hat d_k\end{bmatrix} = -(M_k + \nu_kI)^{+}\Bigl\{\begin{bmatrix}\nabla f_k\\ 0\end{bmatrix} - \begin{bmatrix} A_1^{\mathrm T}\\ 0\end{bmatrix}\lambda_{k+1} - \begin{bmatrix} A_2^{\mathrm T}\\ -D_k^{1/2}\end{bmatrix}\mu_{k+1}\Bigr\} \tag{3.12}$$
with $\mu_{k+1}^{\mathrm T}(A_2d_k - D_k^{1/2}\hat d_k) = 0$, where $A^{+}$ denotes the Moore-Penrose generalized inverse of the matrix $A$. Hence, $\mu_{k+1}^{\mathrm T}(A_2d_k - D_k^{1/2}\hat d_k) = 0$ means that
$$\nabla f_k^{\mathrm T}d_k = \Bigl\{\begin{bmatrix}\nabla f_k\\ 0\end{bmatrix} - \begin{bmatrix} A_1^{\mathrm T}\\ 0\end{bmatrix}\lambda_{k+1} - \begin{bmatrix} A_2^{\mathrm T}\\ -D_k^{1/2}\end{bmatrix}\mu_{k+1}\Bigr\}^{\mathrm T}\begin{bmatrix} d_k\\ \hat d_k\end{bmatrix}$$

$$= -\Bigl\{\begin{bmatrix}\nabla f_k\\ 0\end{bmatrix} - \begin{bmatrix} A_1^{\mathrm T}\\ 0\end{bmatrix}\lambda_{k+1} - \begin{bmatrix} A_2^{\mathrm T}\\ -D_k^{1/2}\end{bmatrix}\mu_{k+1}\Bigr\}^{\mathrm T}(M_k + \nu_kI)^{+}\Bigl\{\begin{bmatrix}\nabla f_k\\ 0\end{bmatrix} - \begin{bmatrix} A_1^{\mathrm T}\\ 0\end{bmatrix}\lambda_{k+1} - \begin{bmatrix} A_2^{\mathrm T}\\ -D_k^{1/2}\end{bmatrix}\mu_{k+1}\Bigr\}. \tag{3.13}$$
Therefore, taking norms in (3.13) and using (3.11) together with the minimality of the least squares residual in (2.8), we obtain
$$\nabla f_k^{\mathrm T}d_k \le -\frac{\|\nabla f_k^{\mathrm T}g_k\|}{\nu_k + \|M_k\|} \le -\frac{\|\nabla f_k^{\mathrm T}g_k\|}{4\max\{\,\|M_k\|,\ \|\nabla f_k^{\mathrm T}g_k\|^{1/2}/\Delta_k\,\}} \le -\tfrac14\|\nabla f_k^{\mathrm T}g_k\|^{1/2}\min\Bigl\{\Delta_k,\ \frac{\|\nabla f_k^{\mathrm T}g_k\|^{1/2}}{\|M_k\|}\Bigr\}. \tag{3.14}$$
From (3.14), taking $\sigma_1 = \tfrac14$, the conclusion of the lemma holds.

Assumptions A1-A2 imply that there exist $\chi_D, \chi_M > 0$ such that $\|D_k\| \le \chi_D$ and $\|M_k\| \le \chi_M$ for all $k$. Further, assume that $\|\nabla^2 f(x)\| \le \chi_M$ for all $x \in L(x_0)$.

Theorem 3.3. Let $\{x_k\} \subset \mathbb{R}^n$ be a sequence generated by the algorithm. Assume that Assumptions A1-A3 hold and that strict complementarity of problem (1.1) holds. Then
$$\liminf_{k\to\infty}\|\nabla f_k^{\mathrm T}g_k\| = 0. \tag{3.15}$$

Proof. According to the acceptance rule in step 4, we have
$$f(x_{l(k)}) - f(x_k + \alpha_kd_k) \ge -\alpha_k\beta\nabla f_k^{\mathrm T}d_k. \tag{3.16}$$
Taking into account that $m(k+1) \le m(k)+1$ and $f(x_{k+1}) \le f(x_{l(k)})$, we have $f(x_{l(k+1)}) \le \max_{0\le j\le m(k)+1}\{f(x_{k+1-j})\} = f(x_{l(k)})$. This means that the sequence $\{f(x_{l(k)})\}$ is nonincreasing for all $k$, and therefore $\{f(x_{l(k)})\}$ is convergent. By (2.11) and (3.9), for all $k > M$,
$$f(x_{l(k)}) = f\bigl(x_{l(k)-1} + \alpha_{l(k)-1}d_{l(k)-1}\bigr) \le \max_{0\le j\le m(l(k)-1)}\{f(x_{l(k)-1-j})\} + \alpha_{l(k)-1}\beta\nabla f_{l(k)-1}^{\mathrm T}d_{l(k)-1}$$

$$\le \max_{0\le j\le m(l(k)-1)}\{f(x_{l(k)-1-j})\} - \alpha_{l(k)-1}\beta\sigma_1\|\nabla f_{l(k)-1}^{\mathrm T}g_{l(k)-1}\|^{1/2}\min\Bigl\{\Delta_{l(k)-1},\ \frac{\|\nabla f_{l(k)-1}^{\mathrm T}g_{l(k)-1}\|^{1/2}}{\|M_{l(k)-1}\|}\Bigr\}. \tag{3.17}$$
If the conclusion of the theorem is not true, then there exists some $\varepsilon > 0$ such that
$$\|\nabla f_k^{\mathrm T}g_k\| \ge \varepsilon,\qquad k = 1,2,\dots. \tag{3.18}$$
Therefore, we have that
$$f(x_{l(k)}) \le f(x_{l(l(k)-1)}) - \alpha_{l(k)-1}\beta\sigma_1\varepsilon^{1/2}\min\Bigl\{\Delta_{l(k)-1},\ \frac{\varepsilon^{1/2}}{\chi_M}\Bigr\}. \tag{3.19}$$
As $\{f(x_{l(k)})\}$ is convergent, we obtain from (3.19) that $\lim_{k\to\infty}\alpha_{l(k)-1}\Delta_{l(k)-1} = 0$. This, by $\|(d_k;\hat d_k)\| \le \Delta_k$, implies that
$$\lim_{k\to\infty}\alpha_{l(k)-1}\|d_{l(k)-1}\| = 0. \tag{3.20}$$
This means that either
$$\liminf_{k\to\infty}\Delta_{l(k)-1} = 0 \tag{3.21}$$
or
$$\lim_{k\to\infty}\alpha_{l(k)-1} = 0. \tag{3.22}$$
By the updating formula (2.18) of the trust region radius, $\gamma_1^j\Delta_k \le \Delta_{k+j} \le \gamma_3^j\Delta_k$ for all $j$, so that $\gamma_1^{M+1}\Delta_{l(k)-1} \le \Delta_k \le \gamma_3^{M+1}\Delta_{l(k)-1}$. If (3.21) holds, then
$$\lim_{k\to\infty}\Delta_k = 0. \tag{3.23}$$
Assume that $\Gamma_k$, given in step 5, is the stepsize to the boundary of the inequality constraints along $d_k$. From (2.19),
$$\Gamma_k = \min\Bigl\{\frac{a_{l+i}^{\mathrm T}x_k - b_{l+i}}{-a_{l+i}^{\mathrm T}d_k}\ :\ a_{l+i}^{\mathrm T}d_k < 0,\ i = 1,\dots,m-l\Bigr\},$$
with $\Gamma_k = +\infty$ if $a_{l+i}^{\mathrm T}d_k \ge 0$ for all $i$. From $\hat d_k = D_k^{-1/2}A_2d_k$ and (3.2), there exists $\mu_{k+1}$ such that
$$a_{l+i}^{\mathrm T}d_k = (a_{l+i}^{\mathrm T}x_k - b_{l+i})^{1/2}\hat d_k^{\,i} = -\frac{(a_{l+i}^{\mathrm T}x_k - b_{l+i})\,\mu_{k+1}^i}{\nu_k + \mu_k^i},$$
where $\hat d_k^{\,i}$ and $\mu_{k+1}^i$ are the $i$th components of the vectors $\hat d_k$ and $\mu_{k+1}$, respectively.

Hence, there exists $j \in \{1,\dots,m-l\}$ such that
$$\Gamma_k = \frac{a_{l+j}^{\mathrm T}x_k - b_{l+j}}{-a_{l+j}^{\mathrm T}d_k} = \frac{\nu_k + \mu_k^j}{\mu_{k+1}^j}. \tag{3.24}$$
From (3.2), we have that
$$\begin{bmatrix} A_1^{\mathrm T}\\ 0\end{bmatrix}\lambda_{k+1} + \begin{bmatrix} A_2^{\mathrm T}\\ -D_k^{1/2}\end{bmatrix}\mu_{k+1} = \begin{bmatrix}\nabla f_k\\ 0\end{bmatrix} + (M_k + \nu_kI)\begin{bmatrix} d_k\\ \hat d_k\end{bmatrix}.$$
Since $\bigl[\begin{smallmatrix} A_1 & 0\\ A_2 & -D_k^{1/2}\end{smallmatrix}\bigr]$ has full row rank on the compact set $L(x_0)$, $\{x_k\}$ is bounded and $f(x)$ is twice continuously differentiable, there exists $\chi_1 > 0$ such that
$$\|(\lambda_{k+1};\mu_{k+1})\| \le \chi_1\bigl(\chi_g + (\chi_M + \nu_k)\Delta_k\bigr).$$
Similar to (3.11), we can obtain that
$$\nu_k\Delta_k \le \|\nabla f_k^{\mathrm T}g_k\|^{1/2} + \chi_M\Delta_k. \tag{3.25}$$
Since strict complementarity of problem (1.1) holds, and by (3.18) and $\|M_k\| \le \chi_M$, the quantity $\nu_k + \mu_k^j$ in (3.24) remains bounded away from zero while $\mu_{k+1}^j$ remains bounded. Hence, from $\lim_{k\to\infty}\Delta_k = 0$ (and thus $\lim_{k\to\infty}\|d_k\| = 0$), (3.24) means that we conclude
$$\lim_{k\to\infty}\Gamma_k = +\infty. \tag{3.26}$$
By the condition on the strictly feasible stepsize, $\tau_k \in (\tau_0,1]$ for some $\tau_0$ and $1-\tau_k = O(\|d_k\|)$, so $\lim_{k\to\infty}\tau_k = 1$ follows from $\lim_{k\to\infty}\|d_k\| = 0$. From the above we have obtained that the step is eventually not restricted by the boundary rule (2.13), so the step size is determined only by the line search rule (2.11). We now prove that if
$$\Delta_k \le \frac{(1-\beta)\sigma_1\varepsilon^{1/2}}{\chi_M}, \tag{3.27}$$
then $\alpha_k = 1$ must satisfy the condition (2.11) in step 4, i.e.,
$$f(x_k + d_k) \le f(x_{l(k)}) + \beta\nabla f_k^{\mathrm T}d_k. \tag{3.28}$$
If the above formula is not true, we have
$$f(x_k + d_k) > f(x_{l(k)}) + \beta\nabla f_k^{\mathrm T}d_k \ge f(x_k) + \beta\nabla f_k^{\mathrm T}d_k. \tag{3.29}$$

Because $f(x)$ is twice continuously differentiable, we have
$$f(x_k + d_k) - f(x_k) = \nabla f_k^{\mathrm T}d_k + \tfrac12 d_k^{\mathrm T}\nabla^2 f(x_k + \xi_kd_k)d_k,$$
where $\xi_k \in [0,1]$. Hence, (3.29) implies that
$$(1-\beta)\nabla f_k^{\mathrm T}d_k + \tfrac12 d_k^{\mathrm T}\nabla^2 f(x_k + \xi_kd_k)d_k > 0,$$
from which we obtain $(1-\beta)\nabla f_k^{\mathrm T}d_k + \tfrac12\chi_M\|d_k\|^2 > 0$. By (3.9),
$$-(1-\beta)\sigma_1\varepsilon^{1/2}\min\Bigl\{\Delta_k,\ \frac{\varepsilon^{1/2}}{\chi_M}\Bigr\} + \tfrac12\chi_M\Delta_k^2 > 0. \tag{3.30}$$
Since $\Delta_k \le (1-\beta)\sigma_1\varepsilon^{1/2}/\chi_M \le \varepsilon^{1/2}/\chi_M$, we have $\Delta_k\bigl[-(1-\beta)\sigma_1\varepsilon^{1/2} + \tfrac12\chi_M\Delta_k\bigr] > 0$. This means, by $\Delta_k > 0$, that $(1-\beta)\sigma_1\varepsilon^{1/2} < \tfrac12\chi_M\Delta_k$, which contradicts (3.27). From the above we see that if (3.27) holds and the step is not restricted by the boundary rule (2.13), then the step size is determined only by (2.11) with $\alpha_k = 1$, i.e., $h_k = d_k$ and hence $x_{k+1} = x_k + d_k$. We know that
$$\bigl|f(x_k + d_k) - f(x_k) - \psi_k(d_k)\bigr| \le \tfrac12\|d_k\|^2\,\|\nabla^2 f(x_k + \xi_kd_k) - B_k\| + \tfrac12\hat d_k^{\mathrm T}C_k\hat d_k \le \chi_M\Delta_k^2, \tag{3.31}$$
where $\xi_k \in [0,1]$. Since Lemma 3.1 implies that $\mathrm{Pred}(d_k) \ge \sigma\varepsilon^{1/2}\min\{\Delta_k,\ \varepsilon^{1/2}/\chi_M\}$, we readily obtain that, setting
$$\hat\rho_k = \frac{f(x_k) - f(x_k + h_k)}{\mathrm{Pred}(h_k)}, \tag{3.32}$$
$\{\hat\rho_k - 1\}$ converges to zero as $\Delta_k \to 0$. This implies that $\Delta_k$ is not decreased for sufficiently large $k$ and hence $\{\Delta_k\}$ is bounded away from zero. Thus $\{\Delta_k\}$ cannot converge to zero, contradicting (3.23). If (3.22) holds, then by (3.20), following the way used in [8], we can prove by induction that
$$\lim_{k\to\infty}\|h_{l(k)-j}\| = 0\quad\text{for each }j, \tag{3.33}$$
and hence it can be derived that
$$\lim_{k\to\infty}f(x_{l(k)}) = \lim_{k\to\infty}f(x_k). \tag{3.34}$$
By the rule for accepting the step $h_k$,
$$f(x_{k+1}) - f(x_{l(k)}) \le \alpha_k\beta\nabla f_k^{\mathrm T}d_k \le -\alpha_k\beta\sigma_1\|\nabla f_k^{\mathrm T}g_k\|^{1/2}\min\Bigl\{\Delta_k,\ \frac{\|\nabla f_k^{\mathrm T}g_k\|^{1/2}}{\chi_M}\Bigr\} \le -\alpha_k\beta\sigma_1\varepsilon^{1/2}\min\Bigl\{\Delta_k,\ \frac{\varepsilon^{1/2}}{\chi_M}\Bigr\}. \tag{3.35}$$

By (3.34) and (3.35), and since $\{\Delta_k\}$ is bounded away from zero, we obtain $\lim_{k\to\infty}\alpha_k = 0$. Since strict complementarity of problem (1.1) holds, we can obtain that if $\alpha_k$ were determined by the boundary rule (2.13), then from (3.24)-(3.25),
$$\lim_{k\to\infty}\Gamma_k > 0. \tag{3.36}$$
So $\lim_{k\to\infty}\alpha_k = 0$ can hold only through the backtracking rule (2.11). The acceptance rule (2.11) means that, for $k$ large enough,
$$f\Bigl(x_k + \frac{\alpha_k}{\omega}d_k\Bigr) > f(x_{l(k)}) + \frac{\alpha_k}{\omega}\beta\nabla f_k^{\mathrm T}d_k \ge f(x_k) + \frac{\alpha_k}{\omega}\beta\nabla f_k^{\mathrm T}d_k. \tag{3.37}$$
Since
$$f\Bigl(x_k + \frac{\alpha_k}{\omega}d_k\Bigr) - f(x_k) = \frac{\alpha_k}{\omega}\nabla f_k^{\mathrm T}d_k + o\Bigl(\frac{\alpha_k}{\omega}\|d_k\|\Bigr),$$
we have
$$(1-\beta)\frac{\alpha_k}{\omega}\nabla f_k^{\mathrm T}d_k + o\Bigl(\frac{\alpha_k}{\omega}\|d_k\|\Bigr) > 0. \tag{3.38}$$
Dividing (3.38) by $(\alpha_k/\omega)\|d_k\|$ and noting that $1-\beta > 0$ and $\nabla f_k^{\mathrm T}d_k \le 0$, we obtain
$$\lim_{k\to\infty}\frac{\nabla f_k^{\mathrm T}d_k}{\|d_k\|} = 0. \tag{3.39}$$
From
$$\nabla f_k^{\mathrm T}d_k \le -\sigma_1\|\nabla f_k^{\mathrm T}g_k\|^{1/2}\min\Bigl\{\Delta_k,\ \frac{\|\nabla f_k^{\mathrm T}g_k\|^{1/2}}{\chi_M}\Bigr\} \le -\sigma_1\varepsilon^{1/2}\min\Bigl\{\Delta_k,\ \frac{\varepsilon^{1/2}}{\chi_M}\Bigr\} \tag{3.40}$$
and $\|d_k\| \le \Delta_k \le \Delta_{\max}$, we have that (3.39) implies
$$\lim_{k\to\infty}\Delta_k = 0, \tag{3.41}$$
which contradicts $\{\Delta_k\}$ being bounded away from zero. So the conclusion of the theorem is true.

4. Properties of the local convergence

Theorem 3.3 indicates that at least one limit point of $\{x_k\}$ is a stationary point. In this section we first extend this theorem to a stronger result and then study the local convergence rate; this requires more assumptions.

Assumption A4. The solution $x^\ast$ of problem (1.1) satisfies the strong second order sufficient condition; that is, let the columns of $Z_\ast$ denote an orthonormal basis for the null space of
$$\begin{bmatrix} A_1 & 0\\ A_2 & -D_\ast^{1/2}\end{bmatrix};$$

then there exists $\varsigma > 0$ such that
$$p^{\mathrm T}(Z_\ast^{\mathrm T}H_\ast Z_\ast)p \ge \varsigma\|p\|^2\qquad \forall p, \tag{4.1}$$
where
$$H_\ast = \begin{bmatrix}\nabla^2 f(x^\ast) & 0\\ 0 & C_\ast\end{bmatrix}.$$

Assumption A5. Let
$$H_k = \begin{bmatrix}\nabla^2 f(x_k) & 0\\ 0 & C_k\end{bmatrix},$$
and let the columns of $Z_k$ denote an orthonormal basis for the null space of $\bigl[\begin{smallmatrix} A_1 & 0\\ A_2 & -D_k^{1/2}\end{smallmatrix}\bigr]$; then
$$\lim_{k\to\infty}\frac{\|(M_k - H_k)Z_kd_k\|}{\|d_k\|} = 0.$$
This means that for large $k$,
$$d_k^{\mathrm T}(Z_k^{\mathrm T}M_kZ_k)d_k = d_k^{\mathrm T}(Z_k^{\mathrm T}H_kZ_k)d_k + o(\|d_k\|^2). \tag{4.2}$$

Theorem 4.1. Assume that Assumptions A4-A5 hold. Let $\{x_k\}$ be a sequence generated by the algorithm. If strict complementarity of problem (1.1) holds at every limit point of $\{x_k\}$, then $\|d_k\| \to 0$. Furthermore, if $x_k$ is close enough to $x^\ast$ and $x^\ast$ is a strict local minimum of problem (1.1), then $x_k \to x^\ast$.

Proof. By (3.2) and (3.3), we get
$$\nabla f_k^{\mathrm T}d_k = -\begin{bmatrix} d_k\\ \hat d_k\end{bmatrix}^{\mathrm T}(M_k + \nu_kI)\begin{bmatrix} d_k\\ \hat d_k\end{bmatrix} \le -\begin{bmatrix} d_k\\ \hat d_k\end{bmatrix}^{\mathrm T}M_k\begin{bmatrix} d_k\\ \hat d_k\end{bmatrix} = -\bigl\{d_k^{\mathrm T}(\nabla^2 f_k)d_k + d_k^{\mathrm T}(A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2)d_k\bigr\} + o(\|d_k\|^2). \tag{4.3}$$
Let $p$ satisfy $a_{l+i}^{\mathrm T}p = 0$ for all $i = 1,\dots,m-l$ with $a_{l+i}^{\mathrm T}x_k - b_{l+i} = 0$. Define $\hat p_i = (a_{l+i}^{\mathrm T}x_k - b_{l+i})^{-1/2}a_{l+i}^{\mathrm T}p$ if $a_{l+i}^{\mathrm T}x_k - b_{l+i} > 0$ and $\hat p_i = 0$ otherwise. Then $A_2p = D_k^{1/2}\hat p$. Let the columns of

$Z_k$ denote an orthonormal basis for the null space of
$$\begin{bmatrix} A_1 & 0\\ A_2 & -D_k^{1/2}\end{bmatrix}.$$
Hence $(p;\hat p) = Z_kw$ for some $w$, and the above inequality (4.3) implies
$$\nabla f_k^{\mathrm T}d_k \le -\bigl\{d_k^{\mathrm T}(\nabla^2 f_k)d_k + d_k^{\mathrm T}(A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2)d_k\bigr\} = -d_k^{\mathrm T}Z_k^{\mathrm T}H_kZ_kd_k + o(\|d_k\|^2).$$
Therefore, from (4.1)-(4.2), we get that for all large $k$,
$$\nabla f_k^{\mathrm T}d_k \le -\varsigma\|d_k\|^2 + o(\|d_k\|^2). \tag{4.4}$$
According to the acceptance rule in step 4, we have
$$f(x_{l(k)}) - f(x_k + \alpha_kd_k) \ge -\alpha_k\beta\nabla f_k^{\mathrm T}d_k \ge \alpha_k\beta\bigl(\varsigma\|d_k\|^2 + o(\|d_k\|^2)\bigr). \tag{4.5}$$
Similar to the proof of Theorem 3.3 (see [8]), the sequence $\{f(x_{l(k)})\}$ is nonincreasing for all $k$ and therefore convergent. (4.4) and (4.5) mean that
$$f(x_{l(k)}) \le f(x_{l(l(k)-1)}) - \alpha_{l(k)-1}\beta\bigl(\varsigma\|d_{l(k)-1}\|^2 + o(\|d_{l(k)-1}\|^2)\bigr). \tag{4.6}$$
That $\{f(x_{l(k)})\}$ is convergent means
$$\lim_{k\to\infty}\alpha_{l(k)-1}\bigl(\varsigma\|d_{l(k)-1}\|^2 + o(\|d_{l(k)-1}\|^2)\bigr) = 0. \tag{4.7}$$
Similar to the proof of Theorem 3.3, we can also obtain that
$$\lim_{k\to\infty}f(x_{l(k)}) = \lim_{k\to\infty}f(x_k). \tag{4.8}$$
(4.5) and (4.8) imply that
$$\lim_{k\to\infty}\alpha_k\|d_k\|^2 = 0. \tag{4.9}$$
Assume that there exists a subsequence $K \subset \{k\}$ such that
$$\lim_{k\to\infty,\ k\in K}\|d_k\| \ne 0. \tag{4.10}$$
This implies that $\lim_{k\to\infty,\ k\in K}\alpha_k = 0$. Assume that $\Gamma_k$, given in step 5, is the stepsize to the boundary of the inequality constraints along $d_k$. Similar to the proof of (3.24), we can obtain that for some $j = 1,\dots,m-l$,
$$\Gamma_k = \frac{a_{l+j}^{\mathrm T}x_k - b_{l+j}}{-a_{l+j}^{\mathrm T}d_k} = \frac{\nu_k + \mu_k^j}{\mu_{k+1}^j}.$$
$\lim_{k\in K}\alpha_k = 0$ and $\{\mu_{k+1}\}$ bounded imply $\lim\nu_k = 0$ and $\lim\mu_{k+1}^j = 0$. Hence $\mu_{k+1} = \mu_{k+1}^N$ and $\mu_{k+1}^i \to \mu_i^\ast$ when $\nu_k = 0$, so $(\mu_{k+1}^N)^j = \mu_{k+1}^j \to 0$. Since strict complementarity of problem (1.1) holds at every limit point of $\{x_k\}$, i.e., $\mu_{k+1}^j + a_{l+j}^{\mathrm T}x_k - b_{l+j} > 0$ for all large $k$

and $j = 1,\dots,m-l$, the stepsize given by the boundary rule (2.13) satisfies $\lim_{k\to\infty}\Gamma_k > 0$. Similar to the proof of (3.39), we can also obtain from $\Gamma_k > 0$ that
$$0 \le \lim_{k\to\infty,\ k\in K}\frac{-\nabla f_k^{\mathrm T}d_k}{\|d_k\|} \le 0. \tag{4.11}$$
From (4.4), we then have $\lim_{k\to\infty,\ k\in K}\|d_k\| = 0$, which contradicts (4.10). Therefore, we have that
$$\lim_{k\to\infty}\|d_k\| = 0. \tag{4.12}$$
Assume that there exists a limit point $x^\ast$ which is a local minimum of $f$, and let $\{x_k\}_K$ be a subsequence of $\{x_k\}$ converging to $x^\ast$. As $l(k) \le k + M$, for any $k$ there exists a point $x_{l(k)}$ such that, from (4.12),
$$\lim_{k\to\infty}\|x_{l(k)} - x_k\| = 0, \tag{4.13}$$
so that we can obtain
$$\lim_{k\to\infty,\ k\in K}\|x_{l(k)} - x^\ast\| \le \lim_{k\to\infty,\ k\in K}\|x_k - x^\ast\| + \lim_{k\to\infty,\ k\in K}\|x_{l(k)} - x_k\| = 0. \tag{4.14}$$
This means that the subsequence $\{x_{l(k)}\}_K$ also converges to $x^\ast$. As Assumption A4 necessarily holds in a neighborhood of $x^\ast$, $x^\ast$ is the only limit point of $\{x_k\}$ in some neighborhood $N(x^\ast,\delta)$ of $x^\ast$, where $\delta > 0$ is an arbitrary constant. Similar to the proof of Theorem 4.2 in [14], we can also prove that $x_k \to x^\ast$, which means that the conclusion of the theorem is true.

Theorem 4.2. Assume that Assumptions A4-A5 hold. Let $\{x_k\}$ be a sequence generated by the algorithm. If strict complementarity of problem (1.1) holds at every limit point of $\{x_k\}$, then
$$\lim_{k\to\infty}\|\nabla f_k^{\mathrm T}g_k\| = 0. \tag{4.15}$$

Proof. Assume that there are an $\varepsilon_1 \in (0,1)$ and a subsequence $\{\|\nabla f_{m_i}^{\mathrm T}g_{m_i}\|\}$ of $\{\|\nabla f_k^{\mathrm T}g_k\|\}$ such that for all $m_i$, $i = 1,2,\dots$,
$$\|\nabla f_{m_i}^{\mathrm T}g_{m_i}\| \ge \varepsilon_1. \tag{4.16}$$
Theorem 3.3 guarantees the existence of another subsequence $\{\|\nabla f_{l_i}^{\mathrm T}g_{l_i}\|\}$ such that
$$\|\nabla f_k^{\mathrm T}g_k\| \ge \varepsilon\quad\text{for } m_i \le k < l_i \tag{4.17}$$

and
$$\|\nabla f_{l_i}^{\mathrm T}g_{l_i}\| \le \varepsilon_2 \tag{4.18}$$
for an $\varepsilon_2 \in (0,\varepsilon_1)$. From Theorem 4.1, we know that
$$\lim_{k\to\infty}\|d_k\| = 0. \tag{4.19}$$
Let the stepsize scalar $\Gamma_k$ be given by (2.19) along $d_k$ to the boundary of the inequality constraints. According to the definition (2.19),
$$\Gamma_k = \min\Bigl\{\frac{a_{l+i}^{\mathrm T}x_k - b_{l+i}}{-a_{l+i}^{\mathrm T}d_k}\ :\ a_{l+i}^{\mathrm T}d_k < 0,\ i = 1,\dots,m-l\Bigr\},$$
with $\Gamma_k = +\infty$ if $a_{l+i}^{\mathrm T}d_k \ge 0$ for all $i = 1,\dots,m-l$. From $\hat d_k = D_k^{-1/2}A_2d_k$ and (3.2), there exists $\mu_{k+1}$ such that
$$a_{l+i}^{\mathrm T}d_k = (a_{l+i}^{\mathrm T}x_k - b_{l+i})^{1/2}\hat d_k^{\,i} = -\frac{(a_{l+i}^{\mathrm T}x_k - b_{l+i})\,\mu_{k+1}^i}{\nu_k + \mu_k^i}, \tag{4.20}$$
where $\hat d_k^{\,i}$ and $\mu_{k+1}^i$ are the $i$th components of the vectors $\hat d_k$ and $\mu_{k+1}$, respectively. If $a_{l+i}^{\mathrm T}d_k \ge 0$ for all $i$, then $\Gamma_k = +\infty$. Since strict complementarity of problem (1.1) holds at every limit point of $\{x_k\}$, i.e., $\mu_{k+1}^j + a_{l+j}^{\mathrm T}x_k - b_{l+j} > 0$ for all large $k$, and $\mu_{k+1} = \mu_{k+1}^N$ when $\nu_k = 0$, it follows from (4.20) that if $a_{l+j}^{\mathrm T}d_k < 0$ for some $j$, then
$$\Gamma_k = \frac{a_{l+j}^{\mathrm T}x_k - b_{l+j}}{-a_{l+j}^{\mathrm T}d_k} = \frac{\nu_k + \mu_k^j}{\mu_{k+1}^j}.$$
From the above, we have obtained that if (4.17) holds and $\varepsilon > 0$, then $\lim_{k\to\infty}\Gamma_k = +\infty$ and $\lim_{k\to\infty}\tau_k = 1$. Further, by the condition on the strictly feasible stepsize $1-\tau_k = O(\|d_k\|)$ and $\lim_{k\to\infty}\|d_k\| = 0$, we have $\lim_{k\to\infty}\tau_k = 1$. Because $f(x)$ is twice continuously differentiable, we have from the above that
$$\begin{aligned} f(x_k + d_k) &= f(x_k) + \nabla f_k^{\mathrm T}d_k + \tfrac12 d_k^{\mathrm T}\nabla^2 f(x_k)d_k + o(\|d_k\|^2)\\ &\le f(x_{l(k)}) + \beta\nabla f_k^{\mathrm T}d_k + \Bigl(\tfrac12 - \beta\Bigr)\nabla f_k^{\mathrm T}d_k + \tfrac12\bigl[\nabla f_k^{\mathrm T}d_k + d_k^{\mathrm T}(B_k + A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2)d_k\bigr]\\ &\qquad + \tfrac12 d_k^{\mathrm T}\bigl(\nabla^2 f(x_k) - B_k - A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2\bigr)d_k + o(\|d_k\|^2). \end{aligned} \tag{4.21}$$
From (3.2)-(3.3), we can obtain
$$\nabla f_k^{\mathrm T}d_k + d_k^{\mathrm T}(B_k + A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2)d_k = -\nu_k\|(d_k;\hat d_k)\|^2 \le 0,$$
$$d_k^{\mathrm T}\bigl(\nabla^2 f(x_k) - B_k - A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2\bigr)d_k = d_k^{\mathrm T}\bigl(\nabla^2 f(x_k) - B_k\bigr)d_k - \hat d_k^{\mathrm T}C_k\hat d_k = o(\|d_k\|^2).$$

The last equality holds by (4.2). From (4.21), we have that for large enough $i$ and $m_i \le k < l_i$,
$$f(x_k + d_k) \le f(x_{l(k)}) + \beta\nabla f_k^{\mathrm T}d_k, \tag{4.22}$$
which means that the step size $\alpha_k = 1$, i.e., $h_k = d_k$, for large enough $i$ and $m_i \le k < l_i$. By (4.22), we know that
$$f(x_k + d_k) - f(x_k) - \psi_k(d_k) = \bigl[\nabla f_k^{\mathrm T}d_k + \tfrac12 d_k^{\mathrm T}\nabla^2 f(x_k)d_k + o(\|d_k\|^2)\bigr] - \bigl[\nabla f_k^{\mathrm T}d_k + \tfrac12 d_k^{\mathrm T}(B_k + A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2)d_k\bigr] = o(\|d_k\|^2).$$
From (3.5) and (4.17), for large enough $i$ and $m_i \le k < l_i$,
$$\mathrm{Pred}(d_k) \ge \tfrac12\varepsilon^{1/2}\min\Bigl\{\Delta_k,\ \frac{\varepsilon^{1/2}}{\chi_M}\Bigr\}. \tag{4.23}$$
As $d_k = h_k$ for large $i$ and $m_i \le k < l_i$, we obtain that
$$\hat\rho_k = \frac{f_k - f(x_k + h_k)}{\mathrm{Pred}(h_k)} = 1 + \frac{f_k - f(x_k + d_k) + \psi_k(d_k)}{\mathrm{Pred}(h_k)} \ge 1 - \frac{o(\|d_k\|^2)}{\tfrac12\varepsilon^{1/2}\min\{\Delta_k,\ \varepsilon^{1/2}/\chi_M\}}. \tag{4.24}$$
This means that for large $i$ and $m_i \le k < l_i$,
$$f_k - f(x_k + d_k) \ge \eta_1\mathrm{Pred}(h_k) \ge \tfrac12\eta_1\varepsilon^{1/2}\min\Bigl\{\Delta_k,\ \frac{\varepsilon^{1/2}}{\chi_M}\Bigr\}. \tag{4.25}$$
From $\|x_{k+1} - x_k\| \le \Delta_k$, it follows that for sufficiently large $i$,
$$f_k - f(x_k + d_k) \ge \chi_3\|x_{k+1} - x_k\|,$$
where $\chi_3 > 0$ is a constant depending only on $\eta_1$, $\varepsilon$ and $\chi_M$. We then deduce from this bound that for $i$ sufficiently large,
$$\|x_{m_i} - x_{l_i}\| \le \sum_{k=m_i}^{l_i-1}\|x_{k+1} - x_k\| \le \frac{1}{\chi_3}\sum_{k=m_i}^{l_i-1}\bigl[f_k - f(x_k + d_k)\bigr] = \frac{1}{\chi_3}\{f_{m_i} - f_{l_i}\}.$$
Therefore, (4.8) implies that $f_{m_i} - f_{l_i}$ tends to zero as $i$ tends to infinity, and hence $\|x_{m_i} - x_{l_i}\|$ tends to zero as $i$ tends to infinity. By continuity of $\nabla f(x)$ and $g(x)$, we thus deduce that $\bigl|\,\|\nabla f(x_{l_i})^{\mathrm T}g_{l_i}\| - \|\nabla f(x_{m_i})^{\mathrm T}g_{m_i}\|\,\bigr|$ also tends to zero. However, this is impossible because of the

definitions of $\{l_i\}$ and $\{m_i\}$, which imply that
$$\|\nabla f(x_{m_i})^{\mathrm T}g_{m_i}\| - \|\nabla f(x_{l_i})^{\mathrm T}g_{l_i}\| \ge \varepsilon_1 - \varepsilon_2 > 0.$$
Hence no subsequence satisfying (4.16) can exist, and the theorem is proved. See also [13].

We now discuss the convergence rate of the proposed algorithm. For this purpose, it is shown that for large enough $k$ the step size $\alpha_k \equiv 1$, $\lim_{k\to\infty}\tau_k = 1$, and there exists $\hat\Delta > 0$ such that $\Delta_k \ge \hat\Delta$.

Theorem 4.5. Assume that Assumptions A1-A5 hold. If strict complementarity of problem (1.1) holds at every limit point of $\{x_k\}$, then for sufficiently large $k$ the step size $\alpha_k \equiv 1$ and $\lim_{k\to\infty}\tau_k = 1$, and the trust region constraint is inactive; that is, there exists $\hat\Delta > 0$ such that
$$\Delta_k \ge \hat\Delta,\qquad k \ge K,$$
where $K$ is a large enough index. Further, for sufficiently large $k$, $h_k$ is the quasi-Newton step.

Proof. Let the stepsize scalar $\Gamma_k$ be given by (2.19) along $d_k$ to the boundary of the inequality constraints. According to the definition (2.19),
$$\Gamma_k = \min\Bigl\{\frac{a_{l+i}^{\mathrm T}x_k - b_{l+i}}{-a_{l+i}^{\mathrm T}d_k}\ :\ a_{l+i}^{\mathrm T}d_k < 0,\ i = 1,\dots,m-l\Bigr\},$$
with $\Gamma_k = +\infty$ if $a_{l+i}^{\mathrm T}d_k \ge 0$ for all $i = 1,\dots,m-l$. Since strict complementarity of problem (1.1) holds at every limit point of $\{x_k\}$, similar to Theorem 4.4 above we can also obtain that $\lim_{k\to\infty}\tau_k\Gamma_k = 1$ when the step is restricted by the boundary rule (2.13) along $d_k$. This means that the step size $\alpha_k \equiv 1$, i.e., $h_k = d_k$, for large enough $k$ if the step is determined by the boundary rule (2.13). Similar to the proof of Theorem 4.1, we can also obtain that $\|d_k\| \to 0$. Hence, by the condition on the strictly feasible stepsize $1-\tau_k = O(\|d_k\|)$, $\lim_{k\to\infty}\tau_k = 1$. Similar to the proof of (4.22), we can also obtain that at the $k$th iteration
$$f(x_k + d_k) \le f(x_{l(k)}) + \beta\nabla f_k^{\mathrm T}d_k. \tag{4.26}$$
By the above inequality, we know that $x_{k+1} = x_k + d_k$. By Assumptions A4-A5, we can obtain that
$$|\hat\rho_k - 1| = \frac{|\mathrm{Ared}(h_k) - \mathrm{Pred}(h_k)|}{\mathrm{Pred}(h_k)} = \frac{\bigl|\bigl[\nabla f_k^{\mathrm T}h_k + \tfrac12 h_k^{\mathrm T}(B_k + A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2)h_k\bigr] - \bigl[\nabla f_k^{\mathrm T}h_k + \tfrac12 h_k^{\mathrm T}\nabla^2 f(x_k)h_k + o(\|h_k\|^2)\bigr]\bigr|}{\mathrm{Pred}(h_k)} = \frac{o(\|h_k\|^2)}{\mathrm{Pred}(h_k)}. \tag{4.27}$$

Similar to the proof of (4.23), for large enough $k$,
$$\mathrm{Pred}(d_k) = -\bigl[\nabla f_k^{\mathrm T}h_k + \tfrac12 h_k^{\mathrm T}(B_k + A_2^{\mathrm T}D_k^{-1/2}C_kD_k^{-1/2}A_2)h_k\bigr] = -\bigl[\nabla f_k^{\mathrm T}h_k + \tfrac12 h_k^{\mathrm T}Z_k^{\mathrm T}(\nabla^2 f(x_k))Z_kh_k\bigr] + o(\|h_k\|^2) \ge \frac{\varsigma}{4}\|d_k\|^2 + o(\|d_k\|^2). \tag{4.28}$$
Similar to the proof of Theorem 4.1, we can also obtain that $\|d_k\| \to 0$. Hence, (4.27) and (4.28) mean that $\hat\rho_k \to 1$. Hence there exists $\hat\Delta > 0$ such that when $\|d_k\| \le \hat\Delta$, $\hat\rho_k \ge \eta_2$, and therefore $\Delta_{k+1} \ge \Delta_k$. As $h_k \to 0$, there exists an index $K$ such that $\|d_k\| \le \hat\Delta$ whenever $k \ge K$. Thus
$$\Delta_k \ge \hat\Delta,\qquad k \ge K.$$
Similar to the proof of Theorem 5 in [3], the conclusion of the theorem holds when the quasi-Newton step is used instead of the Newton step.

Theorem 4.5 means that the local convergence rate of the proposed algorithm depends on the Hessian of the objective function at $x^\ast$ and on the local convergence rate of the step $d_k$. If $d_k$ becomes the quasi-Newton step in the null subspace $N\bigl(\bigl[\begin{smallmatrix} A_1 & 0\\ A_2 & -D_k^{1/2}\end{smallmatrix}\bigr]\bigr)$, then the sequence $\{x_k\}$ generated by the algorithm converges to $x^\ast$ superlinearly. Furthermore, the local convergence rate results obtained in [3] can be proved under the same conditions.

References

[1] M.A. Branch, T.F. Coleman, Y. Li, A subspace, interior and conjugate gradient method for large-scale bound-constrained minimization problems, SIAM J. Sci. Comput. 21 (1) (1999) 1-23.
[2] T.F. Coleman, Y. Li, An interior trust region approach for nonlinear minimization subject to bounds, SIAM J. Optim. 6 (2) (1996) 418-445.
[3] T.F. Coleman, Y. Li, A trust region and affine scaling interior point method for nonconvex minimization with linear inequality constraints, Math. Program. Ser. A 88 (2000) 1-31.
[4] N.Y. Deng, Y. Xiao, F.J. Zhou, A nonmonotonic trust region algorithm, J. Optim. Theory Appl. 76 (1993) 259-285.
[5] J.E. Dennis Jr., R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, NJ, 1983.
[6] I.I. Dikin, Iterative solution of problems of linear and quadratic programming, Soviet Math. Dokl. 8 (1967) 674-675.
[7] R. Fletcher, Practical Methods of Optimization. Vol. I: Unconstrained Optimization; Vol. II: Constrained Optimization, Wiley, New York, 1980, 1981.
[8] L. Grippo, F. Lampariello, S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal. 23 (1986) 707-716.
[9] J.J. Moré, D.C. Sorensen, Computing a trust region step, SIAM J. Sci. Statist. Comput. 4 (1983) 553-572.
[10] J. Nocedal, Y. Yuan, Combining trust region and line search techniques, in: Y. Yuan (Ed.), Advances in Nonlinear Programming, Kluwer, Dordrecht, 1998, pp. 153-175.
[11] M.J.D. Powell, On the global convergence of trust region algorithms for unconstrained minimization, Math. Programming 29 (1984) 297-303.

[12] D.C. Sorensen, Newton's method with a model trust region modification, SIAM J. Numer. Anal. 19 (1982) 409-426.
[13] D. Zhu, Curvilinear paths and trust region methods with nonmonotonic back-tracking technique for unconstrained optimization, J. Comput. Math. 19 (2001).
[14] D. Zhu, Nonmonotonic backtracking trust region interior point algorithm for linear constrained optimization, J. Comput. Appl. Math. 155 (2003) 285-305.


Optimization Methods. Lecture 18: Optimality Conditions and. Gradient Methods. for Unconstrained Optimization 5.93 Optimization Methods Lecture 8: Optimality Conditions and Gradient Methods for Unconstrained Optimization Outline. Necessary and sucient optimality conditions Slide. Gradient m e t h o d s 3. The

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

Scientific Computing: Optimization

Scientific Computing: Optimization Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture

More information

A Trust Region Algorithm Model With Radius Bounded Below for Minimization of Locally Lipschitzian Functions

A Trust Region Algorithm Model With Radius Bounded Below for Minimization of Locally Lipschitzian Functions The First International Symposium on Optimization and Systems Biology (OSB 07) Beijing, China, August 8 10, 2007 Copyright 2007 ORSC & APORC pp. 405 411 A Trust Region Algorithm Model With Radius Bounded

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

Numerical Comparisons of. Path-Following Strategies for a. Basic Interior-Point Method for. Revised August Rice University

Numerical Comparisons of. Path-Following Strategies for a. Basic Interior-Point Method for. Revised August Rice University Numerical Comparisons of Path-Following Strategies for a Basic Interior-Point Method for Nonlinear Programming M. A rg a e z, R.A. T a p ia, a n d L. V e l a z q u e z CRPC-TR97777-S Revised August 1998

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

On fast trust region methods for quadratic models with linear constraints. M.J.D. Powell

On fast trust region methods for quadratic models with linear constraints. M.J.D. Powell DAMTP 2014/NA02 On fast trust region methods for quadratic models with linear constraints M.J.D. Powell Abstract: Quadratic models Q k (x), x R n, of the objective function F (x), x R n, are used by many

More information

ON TRIVIAL GRADIENT YOUNG MEASURES BAISHENG YAN Abstract. We give a condition on a closed set K of real nm matrices which ensures that any W 1 p -grad

ON TRIVIAL GRADIENT YOUNG MEASURES BAISHENG YAN Abstract. We give a condition on a closed set K of real nm matrices which ensures that any W 1 p -grad ON TRIVIAL GRAIENT YOUNG MEASURES BAISHENG YAN Abstract. We give a condition on a closed set K of real nm matrices which ensures that any W 1 p -gradient Young measure supported on K must be trivial the

More information

1. Introduction. We develop an active set method for the box constrained optimization

1. Introduction. We develop an active set method for the box constrained optimization SIAM J. OPTIM. Vol. 17, No. 2, pp. 526 557 c 2006 Society for Industrial and Applied Mathematics A NEW ACTIVE SET ALGORITHM FOR BOX CONSTRAINED OPTIMIZATION WILLIAM W. HAGER AND HONGCHAO ZHANG Abstract.

More information

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints

An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints An Active Set Strategy for Solving Optimization Problems with up to 200,000,000 Nonlinear Constraints Klaus Schittkowski Department of Computer Science, University of Bayreuth 95440 Bayreuth, Germany e-mail:

More information

On Lagrange multipliers of trust-region subproblems

On Lagrange multipliers of trust-region subproblems On Lagrange multipliers of trust-region subproblems Ladislav Lukšan, Ctirad Matonoha, Jan Vlček Institute of Computer Science AS CR, Prague Programy a algoritmy numerické matematiky 14 1.- 6. června 2008

More information

A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity

A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity Mohammadreza Samadi, Lehigh University joint work with Frank E. Curtis (stand-in presenter), Lehigh University

More information

Outline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems

Outline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems Outline Scientific Computing: An Introductory Survey Chapter 6 Optimization 1 Prof. Michael. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

Journal of Computational and Applied Mathematics

Journal of Computational and Applied Mathematics Journal of Computational and Applied Mathematics 234 (2) 538 544 Contents lists available at ScienceDirect Journal of Computational and Applied Mathematics journal homepage: www.elsevier.com/locate/cam

More information

230 L. HEI if ρ k is satisfactory enough, and to reduce it by a constant fraction (say, ahalf): k+1 = fi 2 k (0 <fi 2 < 1); (1.7) in the case ρ k is n

230 L. HEI if ρ k is satisfactory enough, and to reduce it by a constant fraction (say, ahalf): k+1 = fi 2 k (0 <fi 2 < 1); (1.7) in the case ρ k is n Journal of Computational Mathematics, Vol.21, No.2, 2003, 229 236. A SELF-ADAPTIVE TRUST REGION ALGORITHM Λ1) Long Hei y (Institute of Computational Mathematics and Scientific/Engineering Computing, Academy

More information

1 Introduction Sequential Quadratic Programming (SQP) methods have proved to be very ecient for solving medium-size nonlinear programming problems [12

1 Introduction Sequential Quadratic Programming (SQP) methods have proved to be very ecient for solving medium-size nonlinear programming problems [12 A Trust Region Method Based on Interior Point Techniques for Nonlinear Programming Richard H. Byrd Jean Charles Gilbert y Jorge Nocedal z August 10, 1998 Abstract An algorithm for minimizing a nonlinear

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

A Distributed Newton Method for Network Utility Maximization, I: Algorithm

A Distributed Newton Method for Network Utility Maximization, I: Algorithm A Distributed Newton Method for Networ Utility Maximization, I: Algorithm Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract Most existing wors use dual decomposition and first-order

More information

Improved Newton s method with exact line searches to solve quadratic matrix equation

Improved Newton s method with exact line searches to solve quadratic matrix equation Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan

More information

Optimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30

Optimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Optimization Escuela de Ingeniería Informática de Oviedo (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Unconstrained optimization Outline 1 Unconstrained optimization 2 Constrained

More information

Global convergence of a regularized factorized quasi-newton method for nonlinear least squares problems

Global convergence of a regularized factorized quasi-newton method for nonlinear least squares problems Volume 29, N. 2, pp. 195 214, 2010 Copyright 2010 SBMAC ISSN 0101-8205 www.scielo.br/cam Global convergence of a regularized factorized quasi-newton method for nonlinear least squares problems WEIJUN ZHOU

More information

NONSMOOTH VARIANTS OF POWELL S BFGS CONVERGENCE THEOREM

NONSMOOTH VARIANTS OF POWELL S BFGS CONVERGENCE THEOREM NONSMOOTH VARIANTS OF POWELL S BFGS CONVERGENCE THEOREM JIAYI GUO AND A.S. LEWIS Abstract. The popular BFGS quasi-newton minimization algorithm under reasonable conditions converges globally on smooth

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

2. Quasi-Newton methods

2. Quasi-Newton methods L. Vandenberghe EE236C (Spring 2016) 2. Quasi-Newton methods variable metric methods quasi-newton methods BFGS update limited-memory quasi-newton methods 2-1 Newton method for unconstrained minimization

More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

Some new facts about sequential quadratic programming methods employing second derivatives

Some new facts about sequential quadratic programming methods employing second derivatives To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov

More information

A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration

A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration A Robust Implementation of a Sequential Quadratic Programming Algorithm with Successive Error Restoration Address: Prof. K. Schittkowski Department of Computer Science University of Bayreuth D - 95440

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

MATH 4211/6211 Optimization Basics of Optimization Problems

MATH 4211/6211 Optimization Basics of Optimization Problems MATH 4211/6211 Optimization Basics of Optimization Problems Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 A standard minimization

More information

and P RP k = gt k (g k? g k? ) kg k? k ; (.5) where kk is the Euclidean norm. This paper deals with another conjugate gradient method, the method of s

and P RP k = gt k (g k? g k? ) kg k? k ; (.5) where kk is the Euclidean norm. This paper deals with another conjugate gradient method, the method of s Global Convergence of the Method of Shortest Residuals Yu-hong Dai and Ya-xiang Yuan State Key Laboratory of Scientic and Engineering Computing, Institute of Computational Mathematics and Scientic/Engineering

More information

Complexity of gradient descent for multiobjective optimization

Complexity of gradient descent for multiobjective optimization Complexity of gradient descent for multiobjective optimization J. Fliege A. I. F. Vaz L. N. Vicente July 18, 2018 Abstract A number of first-order methods have been proposed for smooth multiobjective optimization

More information

Penalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way.

Penalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way. AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier

More information

Lectures 9 and 10: Constrained optimization problems and their optimality conditions

Lectures 9 and 10: Constrained optimization problems and their optimality conditions Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained

More information

A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION

A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION A SHIFTED PRIMAL-DUAL PENALTY-BARRIER METHOD FOR NONLINEAR OPTIMIZATION Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-19-3 March

More information