Nonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization


Journal of Computational and Applied Mathematics 155 (2003)

Nonmonotonic back-tracking trust region interior point algorithm for linear constrained optimization

Detong Zhu
Department of Mathematics, Shanghai Normal University, Shanghai, China

Received 1 February 2002; received in revised form 12 October 2002

Abstract

In this paper, we modify the trust region interior point algorithm proposed by Bonnans and Pola (SIAM J. Optim. 7(3) (1997) 717) for linear constrained optimization. A mixed strategy using both trust region and line-search techniques is adopted, which switches to back-tracking steps when a trial step produced by the trust region subproblem is unacceptable. The global convergence and local convergence rate of the improved algorithm are established under some reasonable conditions. A nonmonotonic criterion is used to speed up convergence in some ill-conditioned cases. The results of numerical experiments are reported to show the effectiveness of the proposed algorithm.
© 2003 Elsevier Science B.V. All rights reserved.

MSC: 90C30; 65K05; 49M40

Keywords: Trust region methods; Back-tracking; Nonmonotonic technique; Interior points

1. Introduction

In this paper, we analyze a trust region interior point algorithm for solving the linear equality constrained optimization problem

    min  f(x)
    s.t. Ax = b,  x ≥ 0,                                               (1.1)

where f: R^n → R is a smooth nonlinear function, not necessarily convex, A ∈ R^{m×n} is a matrix and b ∈ R^m is a vector. There are quite a few articles proposing sequential convex quadratic programming methods with the trust region idea. The resulting methods generate sequences of points in the interior

of the feasible set. Recently, Bonnans and Pola in [2] presented a trust region interior point algorithm for linear constrained optimization. The algorithm uses two matrices at each iteration. The first is the scaling matrix X_k = diag{x_k^1, ..., x_k^n}, where x_k^i is the ith component of x_k > 0, the current interior feasible point. The second matrix is B_k, a symmetric approximation of the Hessian ∇²f(x_k) of the objective function; B_k is assumed to be positive semidefinite in [2]. A search direction at x_k is obtained by solving the trust region convex quadratic programming problem

    min  ψ_k(d) := f_k + g_k^T d + ½ d^T B_k d
    s.t. Ad = 0,                                                       (1.2)
         d^T X_k^{-2} d ≤ Δ_k²,  x_k + d > 0,

where g_k = ∇f(x_k), d = x − x_k, ψ_k(d) is the local quadratic approximation of f and Δ_k is the trust region radius. Let d_k be the solution of the subproblem. Bonnans and Pola in [2] then used a line search to compute α_k = ω^{l_k}, with l_k the smallest nonnegative integer such that

    f(x_k + ω^{l_k} d_k) ≤ f(x_k) + σ ω^{l_k} (ψ_k(d_k) − f_k),        (1.3)

where σ ∈ (0, ½) and ω ∈ (0, 1); the next iterate is

    x_{k+1} = x_k + α_k d_k.                                           (1.4)

However, Bonnans and Pola in [2] assumed that ψ_k(d) is a convex function in order to obtain global convergence of the proposed algorithm.

The trust region method is a well-accepted technique in nonlinear optimization to assure global convergence. One of the advantages of the model is that it does not require the objective function to be convex. A trial step is obtained by minimizing the model function within the trust region. If the actual reduction achieved in the objective function f at this point is satisfactory compared with the reduction predicted by the quadratic model, then the point is accepted as a new iterate, the trust region radius is adjusted, and the procedure is repeated. Otherwise, the trust region radius must be reduced and a new trial point determined.
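The back-tracking rule (1.3) can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the model reduction ψ_k(d_k) − f_k is assumed to be supplied by whatever solves the subproblem, and all names (`backtrack`, `pred_decrease`, the toy objective) are hypothetical stand-ins.

```python
# Sketch of the back-tracking rule (1.3): shrink the step size by omega
# until the Armijo-type decrease relative to the model reduction holds.
# pred_decrease = psi_k(d_k) - f(x_k) < 0 is assumed given (hypothetical API).

def backtrack(f, x, d, pred_decrease, sigma=0.2, omega=0.5, max_iter=50):
    """Return alpha = omega**l, the first step size with
    f(x + alpha*d) <= f(x) + sigma*alpha*pred_decrease."""
    fx = f(x)
    alpha = 1.0
    for _ in range(max_iter):
        trial = [xi + alpha * di for xi, di in zip(x, d)]
        if f(trial) <= fx + sigma * alpha * pred_decrease:
            return alpha
        alpha *= omega  # back-track: try a shorter step
    return alpha

# Toy example: f(x) = x1^2 + x2^2 at x = (1, 1) with step d = -x;
# here psi(d) - f(x) = g^T d + (1/2) d^T (2I) d = -4 + 2 = -2.
f = lambda x: x[0] ** 2 + x[1] ** 2
alpha = backtrack(f, [1.0, 1.0], [-1.0, -1.0], pred_decrease=-2.0)
```

In this toy case the full step already satisfies the test, so no back-tracking occurs and alpha stays at 1, which is exactly the situation Remark 3 below calls an accepted trial step.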
It is possible that the trust region subproblem needs to be resolved many times before an acceptable step is obtained, and hence the total computation for completing one iteration might be expensive. Nocedal and Yuan [9] suggested a combination of the trust region and line-search methods for unconstrained optimization. This plausible remedy motivates switching to the line-search technique, employing back-tracking steps when a trial step is unsuccessful. Of course, the prerequisite for being able to make this switch is that, although the trial step is unacceptable as the next iterate, it should provide a direction of sufficient descent. More recently, a nonmonotonic line-search technique for solving unconstrained optimization was proposed by Grippo et al. in [6]. Furthermore, the nonmonotone technique has been extended to trust region algorithms for unconstrained optimization (see [3], for instance). The nonmonotonic idea motivates further study of the back-tracking trust region interior point algorithm, because monotonicity may cause a series of very small steps if the contours of the objective function f form a family of curves with large curvature.

The paper is organized as follows. In Section 2, we describe the algorithm, which combines the techniques of trust region interior point, back-tracking steps, scaling matrices and nonmonotonic search. In Section 3, the global convergence of the proposed algorithm is established. Some further convergence properties, such as strong global convergence and the local convergence rate, are discussed

in Section 4. Finally, the results of numerical experiments with the proposed algorithm are reported in Section 5.

2. Algorithm

In this section, we describe a method which combines the nonmonotonic line-search technique with a trust region interior point algorithm.

2.1. Initialization step

Choose parameters σ ∈ (0, ½), ω ∈ (0, 1), 0 < η_1 < η_2 < 1, 0 < γ_1 < γ_2 < 1 < γ_3, ε > 0 and a positive integer M. Let m(0) = 0. Choose a symmetric matrix B_0. Select an initial trust region radius Δ_0 > 0 and a maximal trust region radius Δ_max ≥ Δ_0, and give a starting strictly feasible point x_0 ∈ int(Ω) = {x | Ax = b, x > 0}. Set k = 0 and go to the main step.

2.2. Main step

1. Evaluate f_k = f(x_k), g_k = ∇f(x_k).
2. If ‖P̂_k ĝ_k‖ ≤ ε, stop with the approximate solution x_k; here P̂_k = I − Â_k^T (Â_k Â_k^T)^{-1} Â_k is the projection map onto the null subspace N(Â_k), with Â_k := A X_k and ĝ_k := X_k g_k.
3. Solve the subproblem

    (S_k)  min  ψ_k(d) := f_k + g_k^T d + ½ d^T B_k d
           s.t. Ad = 0,  ‖X_k^{-1} d‖ ≤ Δ_k,  x_k + d > 0.

Denote by d_k the solution of the subproblem (S_k).
4. Choose α_k = 1, ω, ω², ..., until the following inequality is satisfied:

    f(x_k + α_k d_k) ≤ f(x_{l(k)}) + σ α_k g_k^T d_k,                  (2.1)

where f(x_{l(k)}) = max_{0≤j≤m(k)} {f(x_{k−j})}.
5. Set

    h_k = α_k d_k,                                                     (2.2)
    x_{k+1} = x_k + h_k.                                               (2.3)

Calculate

    Pred(h_k) = f_k − ψ_k(h_k),                                        (2.4)
    Ared(h_k) = f(x_{l(k)}) − f(x_k + h_k),                            (2.5)
    ρ̂_k = Ared(h_k) / Pred(h_k)                                        (2.6)

and take

    Δ_{k+1} ∈ [γ_1 Δ_k, γ_2 Δ_k]              if ρ̂_k ≤ η_1,
    Δ_{k+1} ∈ (γ_2 Δ_k, Δ_k]                  if η_1 < ρ̂_k < η_2,
    Δ_{k+1} ∈ (Δ_k, min{γ_3 Δ_k, Δ_max}]      if ρ̂_k ≥ η_2.

6. Calculate f(x_{k+1}) and g_{k+1}. Take m(k+1) = min{m(k) + 1, M}, and update B_k to obtain B_{k+1}. Then set k := k + 1 and go to step 2.

Remark 1. In the subproblem (S_k), ψ_k(d) is a local quadratic model of the objective function f around x_k. A candidate iterative direction d_k is generated by minimizing ψ_k(d) over the interior points of the feasible set within the ellipsoidal ball centered at x_k with radius Δ_k.

Remark 2. A key property of this transformation in the trust region subproblem (S_k) is that X_k^{-1} d is at least unit distance from all bounds in the scaled coordinates; i.e., a step d to the point x_k + d does not violate any bound if d^T X_k^{-2} d ≤ 1. To see this, first observe that

    d^T X_k^{-2} d = Σ_{i=1}^n (d^i / x_k^i)² ≤ 1

implies |d^i| ≤ x_k^i for i = 1, ..., n, where x_k^i and d^i are the ith components of x_k and d, respectively. Then, no matter what the sign of d^i is, the inequality x_k^i + d^i ≥ 0 holds. One typical method for solving the subproblem (S_k) was presented in detail in [2] (see also [5,8,11]).

Remark 3. Note that in each iteration the algorithm solves only one trust region subproblem. If the solution d_k fails to meet the acceptance criterion (2.1) with α_k = 1, then we turn to the line search, i.e., retreat from x_k + d_k until the criterion is satisfied.

Remark 4. We improve the trust region interior algorithm in [2] by using the back-tracking trust region technique, and the adjustment of the trust region radius depends on the traditional trust region criterion. In the line search, we use σ α_k g_k^T d_k in (2.1) instead of σ ω^{l_k} (ψ_k(d_k) − f_k) in (1.3). The line-search criterion (2.1) is more easily satisfied than criterion (1.3), because if B_k is positive semidefinite, then g_k^T d_k ≤ ψ_k(d_k) − f_k.

Remark 5. Comparing the usual monotone technique with the nonmonotonic technique, when M ≥ 1 the accepted step h_k only guarantees that f(x_k + h_k) is smaller than f(x_{l(k)}).
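The nonmonotone reference value f(x_{l(k)}) of step 4 and the radius update of step 5 can be sketched as follows. This is an illustrative sketch, not the paper's code; the concrete parameter values are the ones listed later in Section 5, and `update_radius` returns one representative value from each interval, which the algorithm leaves as a free choice.

```python
# Sketch of the nonmonotone reference value f(x_{l(k)}) and the trust
# region radius update. Function names are hypothetical stand-ins.

def nonmonotone_reference(f_history, m_k):
    """f(x_{l(k)}) = max of the last m(k)+1 objective values,
    so a trial point only has to beat the worst recent value."""
    return max(f_history[-(m_k + 1):])

def update_radius(delta, rho_hat, delta_max=10.0,
                  eta1=0.001, eta2=0.75, gamma1=0.2, gamma2=0.5, gamma3=2.0):
    """One representative choice inside the intervals of step 5."""
    if rho_hat <= eta1:                    # poor model agreement: shrink
        return gamma2 * delta              # any value in [gamma1*d, gamma2*d]
    if rho_hat < eta2:                     # acceptable agreement: keep
        return delta                       # any value in (gamma2*d, d]
    return min(gamma3 * delta, delta_max)  # very good agreement: enlarge

fs = [10.0, 8.0, 9.0, 7.5]                    # f(x_0), ..., f(x_3)
ref = nonmonotone_reference(fs, m_k=2)        # max of the last 3 values
new_delta = update_radius(5.0, rho_hat=0.9)   # radius is enlarged
```

With M = 0 the history window has length one and the reference value is just f(x_k), recovering the usual monotone Armijo test, which is exactly the special case noted at the end of Remark 5.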
It is easy to see that the usual monotone algorithm can be viewed as a special case of the proposed algorithm with M = 0.

3. Global convergence

Throughout this section we assume that f: R^n → R is twice continuously differentiable and bounded from below. Given x_0 ∈ R^n, the algorithm generates a sequence {x_k} ⊂ R^n. In our analysis,

we denote the level set of f by

    L(x_0) = {x ∈ R^n | f(x) ≤ f(x_0), Ax = b, x ≥ 0}.

The following assumption is commonly used in the convergence analysis of most methods for linear equality constrained optimization.

Assumption 1. The sequence {x_k} generated by the algorithm is contained in a compact set L(x_0) in R^n. The matrix A has full row rank m.

Based on solving the above trust region subproblem, we give the following lemma.

Lemma 3.1. d_k is a solution of subproblem (S_k) if and only if there exist ν_k ≥ 0 and λ_k ∈ R^m such that

    (B_k + ν_k X_k^{-2}) d_k = −g_k + A^T λ_k,  A d_k = 0,
    x_k + d_k > 0,  ν_k (Δ_k² − d_k^T X_k^{-2} d_k) = 0                (3.1)

holds and B_k + ν_k X_k^{-2} is positive semidefinite on N(A).

Proof. Suppose d_k is a solution of (S_k). It is a straightforward conclusion of the first-order necessary conditions that there exist ν_k ≥ 0 and λ_k ∈ R^m such that (3.1) holds. We only need to show that B_k + ν_k X_k^{-2} is positive semidefinite on N(A). Suppose that d_k ≠ 0. Since d_k solves (S_k), it also solves

    min { g_k^T d + ½ d^T B_k d | Ad = 0, x_k + d > 0, ‖X_k^{-1} d‖ = ‖X_k^{-1} d_k‖ }.

It follows that ψ_k(d) ≥ ψ_k(d_k) for all d such that ‖X_k^{-1} d‖ = ‖X_k^{-1} d_k‖ and Ad = 0, that is,

    g_k^T d + ½ d^T B_k d ≥ g_k^T d_k + ½ d_k^T B_k d_k.               (3.2)

Since (B_k + ν_k X_k^{-2}) d_k = −g_k + A^T λ_k and A d_k = Ad = 0, we have

    g_k^T d_k = −d_k^T (B_k + ν_k X_k^{-2}) d_k   and   g_k^T d = −d_k^T (B_k + ν_k X_k^{-2}) d.   (3.3)

Substituting the above two equalities into (3.2) and rearranging terms gives

    ½ (d − d_k)^T (B_k + ν_k X_k^{-2}) (d − d_k) ≥ ½ ν_k (d_k^T X_k^{-2} d_k − d^T X_k^{-2} d) = 0;

because d_k ≠ 0, we have the conclusion. If d_k = 0, by (3.1) we have −g_k + A^T λ_k = 0, so d = 0 solves

    min { ½ d^T B_k d | Ad = 0, d^T X_k^{-2} d ≤ Δ_k² }

and we must conclude that B_k is positive semidefinite on N(A). Since ν_k ≥ 0, B_k + ν_k X_k^{-2} is positive semidefinite on N(A).

Conversely, let ν_k ≥ 0, λ_k ∈ R^m and d_k ∈ R^n satisfy (3.1), and let B_k + ν_k X_k^{-2} be positive semidefinite on N(A). Then for all d ∈ N(A) with x_k + d > 0 and ‖X_k^{-1} d‖ ≤ Δ_k, we have

    g_k^T d + ½ d^T (B_k + ν_k X_k^{-2}) d ≥ g_k^T d_k + ½ d_k^T (B_k + ν_k X_k^{-2}) d_k.

It means

    ψ_k(d) = g_k^T d + ½ d^T B_k d
           = g_k^T d + ½ d^T (B_k + ν_k X_k^{-2}) d − ½ ν_k d^T X_k^{-2} d
           ≥ g_k^T d_k + ½ d_k^T (B_k + ν_k X_k^{-2}) d_k − ½ ν_k d^T X_k^{-2} d
           = ψ_k(d_k) + ½ ν_k (d_k^T X_k^{-2} d_k − d^T X_k^{-2} d)
           ≥ ψ_k(d_k),

which proves that d_k solves (S_k).

Lemma 3.1 establishes the necessary and sufficient conditions on ν_k, λ_k and d_k for d_k to solve (S_k). It is well known from the theory of trust region algorithms that, in order to assure the global convergence of the proposed algorithm, it is sufficient to bound from below the predicted reduction Pred(d_k) = f_k − ψ_k(d_k) obtained by the step d_k from the trust region subproblem.

Lemma 3.2. Let the step d_k be the solution of the trust region subproblem. Then there exists τ > 0 such that d_k satisfies the following sufficient descent condition:

    Pred(d_k) ≥ τ ‖P̂_k ĝ_k‖ min{ 1, Δ_k, ‖P̂_k ĝ_k‖ / ‖X_k B_k X_k‖ }   (3.4)

for all g_k, B_k and Δ_k, where P̂_k = I − Â_k^T (Â_k Â_k^T)^{-1} Â_k with Â_k = A X_k and ĝ_k = X_k g_k.

Proof. From Remark 2, we know that the inequality x_k + d > 0 must hold if Δ_k < 1. Assume first that Δ_k < 1. The affine scaling transformation is the one-to-one mapping

    d̂ = X_k^{-1} d,                                                    (3.5)

which is extended to the equality constrained trust region subproblem (S_k) by defining the quantities

    ĝ_k = X_k g_k,                                                     (3.6)
    Â_k = A X_k,                                                       (3.7)
    B̂_k = X_k B_k X_k.                                                 (3.8)

Thus, the basic affinely scaled subproblem at the kth iteration is

    (Ŝ_k)  min  ψ̂_k(d̂) := ĝ_k^T d̂ + ½ d̂^T B̂_k d̂
           s.t. Â_k d̂ = 0,  ‖d̂‖ ≤ Δ_k.                                 (3.9)

Denote by d̂_k the solution of the subproblem (Ŝ_k). Let w_k = −P̂_k ĝ_k; it is clear that Â_k w_k = 0. If P̂_k ĝ_k ≠ 0, consider the following problem:

    min  φ(t) := −t ‖P̂_k ĝ_k‖² + ½ t² (P̂_k ĝ_k)^T B̂_k (P̂_k ĝ_k)
    s.t. 0 ≤ t ≤ Δ_k / ‖P̂_k ĝ_k‖.

Let t_k be the optimal solution of the above problem and ψ̂_k* be the optimal value of the subproblem (Ŝ_k). Consider two cases:

(1) w_k^T B̂_k w_k > 0. Set t̄_k = ‖w_k‖² / (w_k^T B̂_k w_k). If t̄_k ‖w_k‖ ≤ Δ_k, then t_k = t̄_k is the solution of the above problem, and we have

    ψ̂_k* ≤ φ(t_k) = −½ ‖w_k‖⁴ / (w_k^T B̂_k w_k) ≤ −½ ‖w_k‖² / ‖B̂_k‖.   (3.10)

On the other hand, if t̄_k ‖w_k‖ > Δ_k, i.e., Δ_k (w_k^T B̂_k w_k) < ‖w_k‖³, then set t_k = Δ_k / ‖w_k‖; we have

    ψ̂_k* ≤ φ(t_k) = −Δ_k ‖w_k‖ + ½ (Δ_k / ‖w_k‖)² (w_k^T B̂_k w_k) ≥ −Δ_k ‖w_k‖ + ½ Δ_k ‖w_k‖ ... , i.e.,
    ψ̂_k* ≤ −½ Δ_k ‖w_k‖.                                               (3.11)

(2) w_k^T B̂_k w_k ≤ 0. Set t_k = Δ_k / ‖w_k‖; we have

    ψ̂_k* ≤ φ(t_k) = −Δ_k ‖w_k‖ + ½ (Δ_k / ‖w_k‖)² (w_k^T B̂_k w_k) ≤ −Δ_k ‖w_k‖ ≤ −½ Δ_k ‖w_k‖.   (3.12)

Together, (3.10)–(3.12) mean that −ψ̂_k* ≥ ½ ‖w_k‖ min{ Δ_k, ‖w_k‖ / ‖B̂_k‖ }. Hence

    Pred(d_k) ≥ τ ‖P̂_k ĝ_k‖ min{ Δ_k, ‖P̂_k ĝ_k‖ / ‖X_k B_k X_k‖ },     (3.13)

where we set τ = ½.

If Δ_k ≥ 1, let d̄_k be the solution of the following subproblem:

    (S̄_k)  min  ψ̂_k(d̂) := ĝ_k^T d̂ + ½ d̂^T B̂_k d̂
           s.t. Â_k d̂ = 0,  ‖d̂‖ ≤ 1;

then d̄_k is a feasible solution of the subproblem (Ŝ_k) and x_k + X_k d̄_k > 0, so X_k d̄_k is also a feasible solution of subproblem (S_k). Hence, similarly to the proof of (3.13), we have

    Pred(d_k) ≥ −ψ̂_k(d̂_k) ≥ −ψ̂_k(d̄_k) ≥ τ ‖P̂_k ĝ_k‖ min{ 1, ‖P̂_k ĝ_k‖ / ‖X_k B_k X_k‖ }.   (3.14)

From (3.13)–(3.14), the conclusion of the lemma holds.

The following lemma shows the relation between the gradient g_k of the objective function and the step d_k generated by the proposed algorithm. We can see from the lemma that the trial step is a sufficiently descent direction.

Lemma 3.3. At the kth iteration, let d_k be generated by the trust region subproblem (S_k), with the Lagrange multiplier estimate λ_k = (Â_k Â_k^T)^{-1} Â_k ĝ_k. Then

    g_k^T d_k ≤ −τ_1 ‖P̂_k ĝ_k‖ min{ 1, Δ_k, ‖P̂_k ĝ_k‖ / ‖X_k B_k X_k‖ },   (3.15)

where τ_1 > 0 is a constant.

Remark. For the Lagrange multiplier estimate λ_k = (Â_k Â_k^T)^{-1} Â_k ĝ_k, we can use the standard first-order least-squares estimate obtained by solving (Â_k Â_k^T) λ = Â_k ĝ_k.

Proof. The inequality x_k + d > 0 must hold if Δ_k < 1. Without loss of generality, assume that Δ_k < 1. Since d_k is generated by the trust region subproblem (S_k), Lemma 3.1 ensures that

    (B_k + ν_k X_k^{-2}) d_k = −g_k + A^T λ_k.

Noting that X_k is a diagonal matrix, and using λ_k = (Â_k Â_k^T)^{-1} Â_k ĝ_k, we obtain

    ν_k X_k^{-1} d_k = −P̂_k ĝ_k − P̂_k (X_k B_k X_k) X_k^{-1} d_k;

taking norms in the above equation gives

    ν_k ‖X_k^{-1} d_k‖ ≤ ‖P̂_k ĝ_k‖ + ‖X_k B_k X_k‖ ‖X_k^{-1} d_k‖.     (3.16)

Hence, noting ‖X_k^{-1} d_k‖ ≤ Δ_k,

    ν_k ≤ ‖P̂_k ĝ_k‖ / ‖X_k^{-1} d_k‖ + ‖X_k B_k X_k‖ ≤ ‖P̂_k ĝ_k‖ / Δ_k + ‖X_k B_k X_k‖.   (3.17)

From (3.1), we also have

    X_k^{-1} d_k = −(X_k B_k X_k + ν_k I)† P̂_k ĝ_k,                    (3.18)

where (·)† denotes the generalized pseudo-inverse.

Then, since Â_k X_k^{-1} d_k = A d_k = 0, we have

    g_k^T d_k = (X_k g_k)^T X_k^{-1} d_k = (P̂_k ĝ_k)^T X_k^{-1} d_k
              = −(P̂_k ĝ_k)^T (X_k B_k X_k + ν_k I)† P̂_k ĝ_k
              ≤ −‖P̂_k ĝ_k‖² / (‖X_k B_k X_k‖ + ν_k)
              ≤ −‖P̂_k ĝ_k‖² / (2 ‖X_k B_k X_k‖ + ‖P̂_k ĝ_k‖ / Δ_k)
              ≤ −½ ‖P̂_k ĝ_k‖² / max{ 2 ‖X_k B_k X_k‖, ‖P̂_k ĝ_k‖ / Δ_k }
              ≤ −¼ ‖P̂_k ĝ_k‖ min{ ‖P̂_k ĝ_k‖ / ‖X_k B_k X_k‖, Δ_k }.   (3.19)

From (3.19), taking τ_1 = ¼, we have that (3.15) holds.

Assumption 2. ‖X_k B_k X_k‖ and ‖X ∇²f(x) X‖ are bounded; i.e., there exist b, b̂ > 0 such that ‖X_k B_k X_k‖ ≤ b for all k and ‖X ∇²f(x) X‖ ≤ b̂ for all x ∈ L(x_0), where the scaling matrix is X = diag{x^1, ..., x^n} and x^i is the ith component of x ≥ 0.

Similarly to the proofs of Lemma 4.1 and Theorem 4.2 in [13], we can also obtain the following main results.

Lemma 3.4. Assume that Assumptions 1–2 hold. If there exists ε > 0 such that ‖P̂_k ĝ_k‖ ≥ ε for all k, then there is a δ > 0 such that

    Δ_k ≥ δ,  ∀k.                                                      (3.20)

Theorem 3.5. Assume that Assumptions 1–2 hold. Let {x_k} ⊂ R^n be a sequence generated by the algorithm. Then

    lim inf_{k→∞} ‖P̂_k ĝ_k‖ = 0.                                       (3.22)

Theorem 3.6. Assume that Assumptions 1–2 hold. Let {x_k} ⊂ R^n be a sequence generated by the proposed algorithm with B_k = ∇²f(x_k), and assume d_k^T (B_k − ν_k X_k^{-2}) d_k ≥ 0, where ν_k is the Lagrange multiplier given in (3.1) for the trust region constraint. Then X_* ∇²f(x_*) X_* is positive semidefinite on N(Â_*), where x_* is a limit point of {x_k}.

Proof. By (3.1), we have

    g_k^T d_k = −d_k^T (B_k + ν_k X_k^{-2}) d_k + λ_k^T A d_k
              = −d_k^T (B_k − ν_k X_k^{-2}) d_k − 2 ν_k d_k^T X_k^{-2} d_k
              ≤ −2 ν_k (d_k^T X_k^{-2} d_k) = −2 ν_k Δ_k².              (3.23)

First we consider the case lim inf_{k→∞} ν_k = 0. Let λ_min(X_k B_k X_k) denote the minimum eigenvalue of X_k B_k X_k on N(Â_k). By Lemma 3.1, B_k + ν_k X_k^{-2} is positive semidefinite on N(A), which is equivalent to X_k B_k X_k + ν_k I being positive semidefinite on N(Â_k). So ν_k ≥ max{−λ_min(X_k B_k X_k), 0}. It is clear that when lim inf_{k→∞} ν_k = 0 there must exist a limit point x_* at which X_* ∇²f(x_*) X_* is positive semidefinite on N(Â_*).

Now we prove by contradiction that lim inf_{k→∞} ν_k = 0. Assume that ν_k ≥ 2 ε̃ > 0 for all sufficiently large k. First we show that {Δ_k} converges to zero. According to the acceptance rule in step 4 and (3.23), we have

    f(x_{l(k)}) − f(x_k + α_k d_k) ≥ −σ α_k g_k^T d_k ≥ 4 σ ε̃ α_k Δ_k².   (3.24)

Similarly to the corresponding proof in [4], the sequence {f(x_{l(k)})} is nonincreasing for all k, and hence {f(x_{l(k)})} is convergent. (3.24) means that

    f(x_{l(k)}) ≤ f(x_{l(l(k)−1)}) − 4 σ ε̃ α_{l(k)−1} Δ_{l(k)−1}².      (3.25)

That {f(x_{l(k)})} is convergent means

    lim_{k→∞} α_{l(k)−1} Δ_{l(k)−1}² = 0,                               (3.26)

which implies that either

    lim_{k→∞} α_{l(k)−1} = 0                                            (3.27)

or

    lim inf_{k→∞} Δ_{l(k)−1} = 0.                                       (3.28)

If (3.28) holds, by the updating formula of Δ_k we have, for all j, γ_1^j Δ_k ≤ Δ_{k+j} ≤ γ_3^j Δ_k, so that γ_1^{M+1} Δ_{l(k)−1} ≤ Δ_k ≤ γ_3^{M+1} Δ_{l(k)−1}. It means that

    lim inf_{k→∞} Δ_k = 0.                                              (3.29)

If (3.27) holds, similarly to the corresponding proof in [6], we can also obtain that

    lim_{k→∞} f(x_{l(k)}) = lim_{k→∞} f(x_k).                           (3.30)

(3.24) and (3.30) mean that, if (3.28) is not true, then

    lim_{k→∞} α_k Δ_k² = 0.                                             (3.31)

The acceptance rule (2.1) means that, for k large enough with α_k < 1,

    f(x_k + (α_k/ω) d_k) − f(x_k) > σ (α_k/ω) g_k^T d_k.

Since

    f(x_k + (α_k/ω) d_k) − f(x_k) = (α_k/ω) g_k^T d_k + o((α_k/ω) ‖d_k‖),

we have that

    (1 − σ) (α_k/ω) g_k^T d_k + o((α_k/ω) ‖d_k‖) ≥ 0.                   (3.32)

Dividing (3.32) by (α_k/ω) ‖d_k‖ and noting that 1 − σ > 0 and ‖d_k‖ ≤ ‖X_k‖ Δ_k, we have from (3.23) and ν_k ≥ 2 ε̃ that

    0 ≤ lim_{k→∞} g_k^T d_k / ‖d_k‖ ≤ lim_{k→∞} −4 ε̃ Δ_k² / ‖d_k‖ ≤ −(4 ε̃ / ‖X_k‖) lim_{k→∞} Δ_k ≤ 0.   (3.33)

(3.29) and (3.33) also imply that

    lim_{k→∞} Δ_k = 0.                                                  (3.34)

Since f(x) is twice continuously differentiable, we have

    f(x_k + d_k) = f_k + g_k^T d_k + ½ d_k^T ∇²f(x_k) d_k + o(‖d_k‖²)
                 ≤ f(x_{l(k)}) + σ g_k^T d_k + (½ − σ) g_k^T d_k + ½ (g_k^T d_k + d_k^T ∇²f(x_k) d_k) + o(‖d_k‖²).

From (3.1) with B_k = ∇²f(x_k), we can obtain

    g_k^T d_k + d_k^T ∇²f(x_k) d_k = −ν_k d_k^T X_k^{-2} d_k + λ_k^T A d_k = −ν_k d_k^T X_k^{-2} d_k ≤ −2 ε̃ Δ_k².

Therefore, by (3.23), we have that, for k large enough,

    f(x_k + d_k) ≤ f(x_{l(k)}) + σ g_k^T d_k,

which means that the step size α_k = 1, i.e., h_k = d_k, for k large enough. If x_k is close enough to x_* and Δ_k is close to 0, we have

    f(x_k + d_k) − f_k + Pred(d_k) = o(‖d_k‖²).                         (3.35)

Again by (3.1), with B_k + ν_k X_k^{-2} positive semidefinite on N(A), we have

    Pred(d_k) = −(g_k^T d_k + ½ d_k^T B_k d_k)
              = d_k^T (B_k + ν_k X_k^{-2}) d_k − ½ d_k^T B_k d_k
              = ½ d_k^T (B_k + ν_k X_k^{-2}) d_k + (ν_k/2) d_k^T X_k^{-2} d_k
              ≥ ε̃ Δ_k².                                                (3.36)

By (3.35)–(3.36), setting ρ_k = Ared(d_k) / Pred(d_k) with Ared(d_k) = f_k − f(x_k + d_k), we have

    |ρ_k − 1| = |Ared(d_k) − Pred(d_k)| / Pred(d_k) = o(‖d_k‖²) / (ε̃ Δ_k²) → 0  as Δ_k → 0.

We conclude that the entire sequence {ρ_k} converges to unity. Since ρ̂_k ≥ ρ_k, ρ̂_k → 1 implies that Δ_k is not decreased for sufficiently large k and hence is bounded away from zero. Thus, {Δ_k} cannot converge to zero, contradicting (3.34).

4. Local convergence

Theorem 3.5 indicates that at least one limit point of {x_k} is a stationary point. In this section, we shall first extend this theorem to a stronger result and study the local convergence rate, but this requires more assumptions. We denote the set of active constraints by

    I(x) := {i | x^i = 0, i = 1, ..., n}.                               (4.1)

To any I ⊂ {1, ..., n} we associate the optimization subproblem

    (P)_I  min f(x),  s.t. Ax = b,  x_I = 0.                            (4.2)

Assumption 3. For all I ⊂ {1, ..., n}, the first-order optimality system associated with (P)_I has no nonisolated solutions.

Assumption 4. The constraints of (1.1) are qualified in the sense that (A^T λ)^i = 0 for all i ∉ I(x_*) implies λ = 0.

Assume that (λ_*, s_*) is the multiplier pair associated with a solution x_* which satisfies Assumption 3. Define the set of strictly active constraints as

    J(x_*) := {i | s_*^i > 0, i = 1, ..., n}                            (4.3)

and the extended critical cone as

    T := {d ∈ R^n | Ad = 0, d^i = 0, i ∈ J(x_*)}.                       (4.4)

Assumption 5. The solution x_* of problem (1.1) satisfies the strong second-order condition; that is, there exists ς > 0 such that

    p^T ∇²f(x_*) p ≥ ς ‖p‖²,  ∀p ∈ T.                                   (4.5)

This is a sufficient condition for strong regularity (see [2] or [12]). Given d ∈ N(A), the null space of A, we define d^Y and d^N as the orthogonal projections of d onto T and N, respectively, where N is the orthogonal complement of T in N(A), i.e., N = {z ∈ N(A) | z^T d = 0, ∀d ∈ T}, which means that d = d^Y + d^N and ‖d‖² = ‖d^Y‖² + ‖d^N‖².

The following lemma is actually Lemma 3.5 in [2].

Lemma 4.1. Assume that Assumptions 3–5 hold. If x_k is sufficiently close to x_*, then

    d^T (B_k + 2 ν_k X_k^{-2}) d ≥ (ς/2) ‖d‖² + ν_k ‖d^N‖²,             (4.6)

where ς is given in (4.5) and the multiplier ν_k is given in (3.1). Similarly to the proof of the above lemma, we can also obtain that, with ς_1 = ς/2 > 0,

    d^T (B_k + ν_k X_k^{-2}) d ≥ ς_1 ‖d‖² + (ν_k/2) ‖d^N‖².             (4.7)

Theorem 4.2. Assume that Assumptions 3–5 hold. Let {x_k} be a sequence generated by the algorithm. Then d_k → 0. Furthermore, if x_k is close enough to x_* and x_* is a strict local minimum of problem (1.1), then x_k → x_*.

Proof. By (3.1) and (4.7), we get

    g_k^T d_k = −d_k^T (B_k + ν_k X_k^{-2}) d_k + λ_k^T A d_k = −d_k^T (B_k + ν_k X_k^{-2}) d_k
              ≤ −(ς/2) ‖d_k‖² − (ν_k/2) ‖d_k^N‖².                       (4.8)

According to the acceptance rule in step 4, we have

    f(x_{l(k)}) − f(x_k + α_k d_k) ≥ −σ α_k g_k^T d_k ≥ (σ ς/2) α_k ‖d_k‖².   (4.9)

Similarly to the corresponding proof in [6], the sequence {f(x_{l(k)})} is nonincreasing for all k, and therefore {f(x_{l(k)})} is convergent. (4.8) and (4.9) mean that

    f(x_{l(k)}) ≤ f(x_{l(l(k)−1)}) − (σ ς/2) α_{l(k)−1} ‖d_{l(k)−1}‖².   (4.10)

That {f(x_{l(k)})} is convergent means

    lim_{k→∞} α_{l(k)−1} ‖d_{l(k)−1}‖² = 0.                             (4.11)

Similarly to the corresponding proof in [6], we can also obtain that

    lim_{k→∞} f(x_{l(k)}) = lim_{k→∞} f(x_k).                           (4.12)

(4.9) and (4.12) imply that

    lim_{k→∞} α_k ‖d_k‖² = 0.                                           (4.13)

Assume that there exists a subsequence K ⊂ {k} such that

    lim_{k→∞, k∈K} ‖d_k‖ ≠ 0.                                           (4.14)

Then (4.13) and (4.14) mean that

    lim_{k→∞, k∈K} α_k = 0.                                             (4.15)

Similarly to the derivation of (3.32), the acceptance rule (2.1) means that, for k large enough,

    (1 − σ) (α_k/ω) g_k^T d_k + o((α_k/ω) ‖d_k‖) ≥ 0.                   (4.16)

Dividing (4.16) by (α_k/ω) ‖d_k‖ and noting that 1 − σ > 0 and g_k^T d_k ≤ 0, we obtain

    0 ≤ lim_{k→∞, k∈K} g_k^T d_k / ‖d_k‖ ≤ lim_{k→∞, k∈K} −(ς/2) ‖d_k‖² / ‖d_k‖ = −(ς/2) lim_{k→∞, k∈K} ‖d_k‖ ≤ 0.   (4.17)

From (4.17), we have lim_{k→∞, k∈K} ‖d_k‖ = 0, which contradicts (4.14). Therefore, we have

    lim_{k→∞} ‖d_k‖ = 0.                                                (4.18)

Assume that there exists a limit point x_* which is a local minimum of f, and let {x_k}_K be a subsequence of {x_k} converging to x_*. As k ≤ l(k + M + 1) ≤ k + M + 1, for any k we can write

    x_k = x_{l(k+M+1)} − α_{l(k+M+1)−1} d_{l(k+M+1)−1} − ... − α_k d_k;

hence there exists a point x_{l(k)} such that, from (4.18),

    lim_{k→∞} ‖x_{l(k)} − x_k‖ = 0,                                     (4.19)

so that we can obtain

    lim_{k→∞, k∈K} ‖x_{l(k)} − x_*‖ ≤ lim_{k→∞, k∈K} ‖x_k − x_*‖ + lim_{k→∞, k∈K} ‖x_{l(k)} − x_k‖ = 0.   (4.20)

This means that the subsequence {x_{l(k)}}_K also converges to x_*. As Assumption 3 necessarily holds in a neighborhood of x_*, x_* is the only limit point of {x_k} in some neighborhood N(x_*, ε̄) of x_*, where ε̄ > 0 is an arbitrary constant. On the other hand, we know that the sequence {f(x_{l(k)})} is nonincreasing for all large k, that is, f(x_{l(k)}) ≥ f(x_{l(k+1)}). Define

    f̂ := inf { f(x) | x ∈ N(x_*, ε̄) \ N(x_*, ε̄/2M) }.

Because x_* is a strict local minimum of problem (1.1), we may assume f̂ > f(x_*). Now, assuming that f(x_*) ≤ f(x_{l(k)}) ≤ f̂ and x_{l(k)} ∈ N(x_*, ε̄/2M), it follows that f(x_{l(k+1)}) ≤ f̂ and x_{l(k+1)} ∈ N(x_*, ε̄); using the definition of f̂, we find that x_{l(k+1)} ∈ N(x_*, ε̄/2M), and hence x_{l(k+2)}, ..., x_{l(k+M+1)} ∈ N(x_*, ε̄/2M) again. This implies that the sequence {x_{l(k)}} remains in N(x_*, ε̄/2M). By

    ‖x_{k+1} − x_*‖ ≤ ‖x_* − x_{l(k+1)}‖ + ‖x_{l(k+1)} − x_{k+1}‖ ≤ ε̄/2,

we get x_k → x_*, which means that the conclusion of the theorem is true.

Theorem 4.3. Assume that Assumptions 3–5 hold. Let {x_k} be a sequence generated by the algorithm. Then

    lim_{k→∞} ‖P̂_k ĝ_k‖ = 0.                                           (4.21)

Proof. Assume that there are an ε_1 ∈ (0, 1) and a subsequence {‖P̂_{m_i} ĝ_{m_i}‖} of {‖P̂_k ĝ_k‖} such that, for all m_i, i = 1, 2, ...,

    ‖P̂_{m_i} ĝ_{m_i}‖ ≥ ε_1.                                           (4.22)

Theorem 3.5 guarantees the existence of another subsequence {‖P̂_{l_i} ĝ_{l_i}‖} such that

    ‖P̂_k ĝ_k‖ ≥ ε_2  for m_i ≤ k < l_i                                 (4.23)

and

    ‖P̂_{l_i} ĝ_{l_i}‖ ≤ ε_2                                            (4.24)

for an ε_2 ∈ (0, ε_1). From Theorem 4.2, we know that

    lim_{k→∞} ‖d_k‖ = 0.                                                (4.25)

Because f(x) is twice continuously differentiable, we have from the above

    f(x_k + d_k) = f(x_k) + g_k^T d_k + ½ d_k^T ∇²f(x_k) d_k + o(‖d_k‖²)
                 ≤ f(x_{l(k)}) + σ g_k^T d_k + (½ − σ) g_k^T d_k
                   + ½ [g_k^T d_k + d_k^T B_k d_k] + ½ d_k^T (∇²f(x_k) − B_k) d_k + o(‖d_k‖²).   (4.26)

From (3.1), we can obtain

    g_k^T d_k + d_k^T B_k d_k = −ν_k d_k^T X_k^{-2} d_k + λ_k^T A d_k = −ν_k d_k^T X_k^{-2} d_k ≤ 0.

Similarly to the proof of Theorem 3.1 in [2], we can obtain

    g_k^T d_k ≤ −(1/(2 − σ)) (d_k^Y)^T ∇²f(x_*) d_k^Y + (ε̄_k/2) ‖d_k‖²,   (4.27)

    ½ d_k^T (B_k − ∇²f(x_k)) d_k ≤ (1/(2(2 − σ))) (d_k^Y)^T ∇²f(x_*) d_k^Y + ε̄_k ‖d_k^N‖ ‖d_k^Y‖ + θ_k ‖d_k‖²,   (4.28)

where ε̄_k := ‖∇²f(x_*) − B_k‖ + ‖∇²f(x_k) − B_k‖ and θ_k := sup{ |d^T (∇²f(x_k) − ∇²f(x_k + ξ d_k)) d| / ‖d‖² : ξ ∈ (0, 1) }. Since σ ∈ (0, ½), combining (4.27)–(4.28) with ε̄_k → 0 and θ_k → 0 gives, for k large enough,

    (½ − σ) g_k^T d_k + ½ d_k^T (∇²f(x_k) − B_k) d_k + o(‖d_k‖²) ≤ 0.

So, we have that, for large enough i and m_i ≤ k < l_i,

    f(x_k + d_k) ≤ f(x_{l(k)}) + σ g_k^T d_k,                           (4.29)

which means that the step size α_k = 1, i.e., h_k = d_k, for large enough i and m_i ≤ k < l_i. If x_k is close enough to x_*, then d_k is close to 0, and hence B_k is close to ∇²f(x_*). We deduce that

    f(x_k + d_k) − f(x_k) + Pred(d_k)
        = [g_k^T d_k + ½ d_k^T ∇²f(x_k) d_k + o(‖d_k‖²)] − [g_k^T d_k + ½ d_k^T B_k d_k]
        = o(‖d_k‖²).                                                    (4.30)

From (2.4) and (4.6), for large enough i and m_i ≤ k < l_i,

    Pred(d_k) = d_k^T (½ B_k + ν_k X_k^{-2}) d_k − λ_k^T A d_k = d_k^T (½ B_k + ν_k X_k^{-2}) d_k
              ≥ (ς/4) ‖d_k‖² + (ν_k/2) ‖d_k^N‖²,                        (4.31)

where ς is given in (4.6). As d_k = h_k for large i and m_i ≤ k < l_i, we obtain

    ρ̂_k ≥ (f_k − f(x_k + h_k)) / Pred(h_k)
         = 1 − o(‖d_k‖²) / Pred(h_k)
         ≥ 1 − o(‖d_k‖²) / ((ς/4) ‖d_k‖² + (ν_k/2) ‖d_k^N‖²) ≥ η_2.     (4.32)

This means that, for large i and m_i ≤ k < l_i,

    f_k − f(x_k + h_k) ≥ η_2 Pred(h_k) ≥ (η_2 ς/4) ‖h_k‖².

Therefore, we can deduce that, for large i,

    ‖x_{m_i} − x_{l_i}‖² ≤ ( Σ_{k=m_i}^{l_i−1} ‖x_{k+1} − x_k‖ )² = ( Σ_{k=m_i}^{l_i−1} ‖h_k‖ )²

and

    Σ_{k=m_i}^{l_i−1} ‖h_k‖² ≤ (4/(η_2 ς)) Σ_{k=m_i}^{l_i−1} [f(x_k) − f(x_k + h_k)] ≤ (4/(η_2 ς)) (f_{m_i} − f_{l_i}).   (4.33)

(4.33) and (4.12) mean that, for large i,

    ‖x_{m_i} − x_{l_i}‖ ≤ ε_2 / (L̄(L + 1)),

where L is the Lipschitz constant of g(x) in L(x_0) and L̄ bounds X and g(x) in L(x_0), that is, ‖g(x)‖ ≤ L̄ and ‖X‖ ≤ L̄. We then use the triangle inequality to show

    ‖P̂_{m_i} ĝ_{m_i}‖ ≤ ‖P̂_{m_i} ĝ_{m_i} − P̂_{m_i} ĝ_{l_i}‖ + ‖P̂_{m_i} ĝ_{l_i} − P̂_{l_i} ĝ_{l_i}‖ + ‖P̂_{l_i} ĝ_{l_i}‖
                      ≤ L̄(L + 1) ‖x_{m_i} − x_{l_i}‖ + ε_2 ≤ 2 ε_2.     (4.34)

We choose ε_2 = ε_1/4; then (4.34) contradicts (4.22). This implies that (4.22) is not true, and hence the conclusion of the theorem holds.

Theorem 4.4. Let {x_k} be generated by the algorithm. Assume that {B_k} is bounded. Then:

(a) any limit point x_* of {x_k} is a solution of the first-order optimality system associated with problem (P)_{I(x_*)}, that is,

    ∇f(x_*) − A^T λ_* − s_* = 0,  Ax_* = b,  (x_*)_{I(x_*)} = 0,  s_*^i = 0, i ∉ I(x_*),

where s_*^i is the ith component of s_*;

(b) if Assumptions 3–4 and (4.7) hold, then x_* satisfies the first-order optimality system of (1.1); i.e., there exist λ_* ∈ R^m and s_* ∈ R^n such that

    ∇f(x_*) − A^T λ_* − s_* = 0,  Ax_* = b,  x_* ≥ 0,  s_* ≥ 0,  x_*^T s_* = 0.

Proof. Following Lemma 2.3 in [1], we only need to prove that the sequence {x_k} satisfies the following conditions: (i) x_{k+1} − x_k → 0, (ii) d_k → 0, (iii) X_k[∇f(x_k) − A^T λ_k] → 0.

In fact, (i) can be proved similarly to the proof of Theorem 3.6, and from Theorem 4.2 we obtain that (ii) holds. Theorem 4.3 means that P̂_k ĝ_k → 0, which implies (iii), because

    P̂_k ĝ_k = [I − Â_k^T (Â_k Â_k^T)^{-1} Â_k] ĝ_k = X_k[∇f(x_k) − A^T λ_k],

where Â_k = A X_k, ĝ_k = X_k g_k and λ_k := (Â_k Â_k^T)^{-1} Â_k ĝ_k. By conditions (i)–(iii), similarly to the proof of Theorem 2.2 in [2], we can also obtain that the conclusions of the theorem are true.

The conclusions of Theorem 4.4 are the same as those of Theorem 2.2 in [2], under the same assumptions as in [2]. We now discuss the convergence rate of the proposed algorithm. For this purpose, we show that for k large enough the step size α_k ≡ 1 and there exists Δ̂ > 0 such that Δ_k ≥ Δ̂.

Theorem 4.5. Assume that Assumptions 3–5 hold. Then, for sufficiently large k, the step size α_k ≡ 1 and the trust region constraint is inactive; that is, there exists Δ̂ > 0 such that

    Δ_k ≥ Δ̂,  ∀k ≥ K̄,

where K̄ is a large enough index.

Proof. Similarly to the proof of (4.29), we can obtain that, at sufficiently large iterations k,

    f(x_k + d_k) ≤ f(x_{l(k)}) + σ g_k^T d_k,                           (4.35)

which means that the step size α_k ≡ 1, i.e., h_k = d_k, for k large enough. By the above inequality, we know that x_{k+1} = x_k + d_k. By Assumptions 3–5, we can obtain from (4.30) that

    |ρ_k − 1| = |Ared(h_k) − Pred(h_k)| / Pred(h_k)
              = |(f_k − f(x_k + h_k)) − (−g_k^T h_k − ½ h_k^T B_k h_k)| / Pred(h_k)
              = o(‖h_k‖²) / Pred(h_k).                                  (4.36)

Similarly to the proof of (4.31), for k large enough,

    Pred(d_k) ≥ (ς/4) ‖d_k‖² + (ν_k/2) ‖d_k^N‖²,                        (4.37)

where ς is given in (4.6).

Similarly to the proof of Theorem 4.2, we can also obtain that d_k → 0. Hence (4.36) and (4.37) mean that ρ_k → 1. Therefore, there exists Δ̂ > 0 such that, when ‖d_k‖ ≤ Δ̂, we have ρ̂_k ≥ η_2 and hence Δ_{k+1} ≥ Δ_k. As h_k → 0, there exists an index K̄ such that ‖d_k‖ ≤ Δ̂ whenever k ≥ K̄. Thus

    Δ_k ≥ Δ_{K̄},  ∀k ≥ K̄.

The conclusion of the theorem holds.

Theorem 4.5 means that the local convergence rate of the proposed algorithm depends on the Hessian of the objective function at x_* and on the local convergence rate of the step d_k, since the step size α_k ≡ 1 and the trust region constraint is inactive for sufficiently large k.

5. Numerical experiments

Numerical experiments on the nonmonotonic back-tracking trust region interior point algorithm given in this paper have been performed on an IBM 586 personal computer. In this section, we present the numerical results. We compare different nonmonotonic parameters, M = 0, 4 and 8, respectively, for the proposed algorithm; the monotonic algorithm is realized by taking M = 0. In order to check the effectiveness of the back-tracking technique, we select the same parameters as used in [4]. The selected parameter values are: ε̂ = 0.01, η_1 = 0.001, η_2 = 0.75, γ_1 = 0.2, γ_2 = 0.5, γ_3 = 2, ω = 0.5, Δ_max = 10, σ = 0.2, and initially Δ_0 = 5. The computation terminates when one of the following stopping criteria is satisfied: ‖P̂_k ĝ_k‖ ≤ ε̂, or |f_{k+1} − f_k| ≤ ε̂ max{1, |f_k|}. The experiments are carried out on 15 standard test problems quoted from [10,7]. Besides the recommended starting points in [10,7] (HS: the problems from Hock and Schittkowski [7]; SC: from Schittkowski [10]), denoted by x_{0a}, we also test these methods with another set of starting points x_{0b}. The computational results for B_k = ∇²f(x_k), the real Hessian, are presented in the following table, where ALG denotes the variant proposed in this paper with nonmonotonic decreasing and back-tracking techniques. NF and NG stand for the numbers of function evaluations and gradient evaluations, respectively.
NO stands for the number of iterations in which the nonmonotonic decreasing situation occurs, that is, the number of times f(x_{k+1}) > f(x_k). The number of iterations is not presented in the table because it always equals NG. The results under ALG (M = 0) represent the mixture of trust region and line-search techniques via interior points of the feasible set considered in this paper. With our type of approximate trust region method it is very easy to resolve the subproblem (S_k) with a reduced radius via interior points, and the back-tracking steps can outperform the traditional method when the trust region subproblem is solved accurately over the whole hyperball. The last three parts of the table, under the headings M = 0, 4 and 8, respectively, show that for most test problems the nonmonotonic technique does bring in some noticeable improvement (Table 1).

Table 1
Experimental results

Problem name            Initial point    M = 0: NF, NG    M = 4: NF, NG, NO    M = 8: NF, NG, NO

Rosenbrock (C = 100)    x_0a, x_0b
Rosenbrock (C = )       x_0a, x_0b
Rosenbrock (C = )       x_0a, x_0b
Freudenstein            x_0a, x_0b
Huang (HS048)           x_0a, x_0b
Huang (HS049)           x_0a, x_0b
Huang (HS050)           x_0a, x_0b
Hsia (HS055)            x_0a, x_0b
Betts (HS062)           x_0
Sheela (HS063)          x_0
While                   x_0
SC265                   x_0
SC347                   x_0
Banana (n = 6)          x_0a, x_0b
Banana (n = 16)         x_0a, x_0b
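The stopping test ‖P̂_k ĝ_k‖ ≤ ε̂ used in these experiments (step 2 of the algorithm) can be sketched as follows. This is an illustrative sketch written for a single equality constraint (m = 1), where the projection has the closed form P̂ = I − â â^T/(â^T â) with â = A X_k; the function name and the numerical values are purely hypothetical.

```python
# Sketch of the stopping test of step 2 for one equality constraint:
# compute ||(I - a_hat a_hat^T / (a_hat^T a_hat)) g_hat|| with
# a_hat_i = a_i * x_i and g_hat_i = x_i * g_i.

import math

def scaled_projected_gradient_norm(a_row, g, x):
    a_hat = [ai * xi for ai, xi in zip(a_row, x)]   # A X_k (one row)
    g_hat = [xi * gi for xi, gi in zip(x, g)]       # X_k g_k
    coef = sum(ai * gi for ai, gi in zip(a_hat, g_hat)) / sum(ai * ai for ai in a_hat)
    proj = [gi - coef * ai for gi, ai in zip(g_hat, a_hat)]
    return math.sqrt(sum(p * p for p in proj))

# At a point where X_k g_k is a multiple of (A X_k)^T, the projected
# gradient vanishes and the test fires for any tolerance eps > 0.
norm = scaled_projected_gradient_norm([1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 1.0])
```

For m > 1 the same test uses the least-squares multiplier λ_k solving (Â_k Â_k^T) λ = Â_k ĝ_k, as in the Remark after Lemma 3.3, instead of the scalar coefficient above.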

Acknowledgements

The author would like to thank an anonymous referee for his helpful comments and suggestions. The author gratefully acknowledges the partial support of the National Science Foundation Grant of China, the Science Foundation Grant 02ZA14070 of the Shanghai Technical Sciences Committee and the Science Foundation Grant (02DK06) of the Shanghai Education Committee.

References

[1] F. Bonnans, T. Coleman, Combining trust region and affine scaling for linearly constrained nonconvex minimization, in: Y. Yuan (Ed.), Advances in Nonlinear Programming, Kluwer Academic Publishers, Dordrecht, 1998.
[2] J.F. Bonnans, C. Pola, A trust region interior point algorithm for linearly constrained optimization, SIAM J. Optim. 7 (3) (1997).
[3] N.Y. Deng, Y. Xiao, F.J. Zhou, A nonmonotonic trust region algorithm, J. Optim. Theory Appl. 76 (1993).
[4] J.E. Dennis Jr., R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, NJ, 1983.
[5] J.E. Dennis, L.N. Vicente, On the convergence theory of trust-region-based algorithms for equality-constrained optimization, SIAM J. Optim. 7 (1997).
[6] L. Grippo, F. Lampariello, S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal. 23 (1986).
[7] W. Hock, K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Vol. 187, Springer, Berlin, 1981.
[8] J.J. Moré, D.C. Sorensen, Computing a trust region step, SIAM J. Sci. Statist. Comput. 4 (1983).
[9] J. Nocedal, Y. Yuan, Combining trust region and line-search techniques, in: Y. Yuan (Ed.), Advances in Nonlinear Programming, Kluwer, Dordrecht, 1998.
[10] K. Schittkowski, More Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Vol. 282, Springer, Berlin, 1987.
[11] D.C. Sorensen, Newton's method with a model trust region modification, SIAM J. Numer. Anal. 19 (1982).
[12] Y. Ye, E. Tse, An extension of Karmarkar's projective algorithm for convex quadratic programming, Math. Programming 44 (1989).
[13] D. Zhu, Curvilinear paths and trust region methods with nonmonotonic back-tracking technique for unconstrained optimization, J. Comput. Math. 19 (2001).

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

The Steepest Descent Algorithm for Unconstrained Optimization

The Steepest Descent Algorithm for Unconstrained Optimization The Steepest Descent Algorithm for Unconstrained Optimization Robert M. Freund February, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 1 Steepest Descent Algorithm The problem

More information

Examination paper for TMA4180 Optimization I

Examination paper for TMA4180 Optimization I Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted

More information

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns

More information

A proximal-like algorithm for a class of nonconvex programming

A proximal-like algorithm for a class of nonconvex programming Pacific Journal of Optimization, vol. 4, pp. 319-333, 2008 A proximal-like algorithm for a class of nonconvex programming Jein-Shan Chen 1 Department of Mathematics National Taiwan Normal University Taipei,

More information

A NOVEL FILLED FUNCTION METHOD FOR GLOBAL OPTIMIZATION. 1. Introduction Consider the following unconstrained programming problem:

A NOVEL FILLED FUNCTION METHOD FOR GLOBAL OPTIMIZATION. 1. Introduction Consider the following unconstrained programming problem: J. Korean Math. Soc. 47, No. 6, pp. 53 67 DOI.434/JKMS..47.6.53 A NOVEL FILLED FUNCTION METHOD FOR GLOBAL OPTIMIZATION Youjiang Lin, Yongjian Yang, and Liansheng Zhang Abstract. This paper considers the

More information

MATH 4211/6211 Optimization Basics of Optimization Problems

MATH 4211/6211 Optimization Basics of Optimization Problems MATH 4211/6211 Optimization Basics of Optimization Problems Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 A standard minimization

More information

ON TRIVIAL GRADIENT YOUNG MEASURES BAISHENG YAN Abstract. We give a condition on a closed set K of real nm matrices which ensures that any W 1 p -grad

ON TRIVIAL GRADIENT YOUNG MEASURES BAISHENG YAN Abstract. We give a condition on a closed set K of real nm matrices which ensures that any W 1 p -grad ON TRIVIAL GRAIENT YOUNG MEASURES BAISHENG YAN Abstract. We give a condition on a closed set K of real nm matrices which ensures that any W 1 p -gradient Young measure supported on K must be trivial the

More information

Global convergence of trust-region algorithms for constrained minimization without derivatives

Global convergence of trust-region algorithms for constrained minimization without derivatives Global convergence of trust-region algorithms for constrained minimization without derivatives P.D. Conejo E.W. Karas A.A. Ribeiro L.G. Pedroso M. Sachine September 27, 2012 Abstract In this work we propose

More information

Applied Mathematics &Optimization

Applied Mathematics &Optimization Appl Math Optim 29: 211-222 (1994) Applied Mathematics &Optimization c 1994 Springer-Verlag New Yor Inc. An Algorithm for Finding the Chebyshev Center of a Convex Polyhedron 1 N.D.Botin and V.L.Turova-Botina

More information