A Derivative-Free Algorithm for Constrained Global Optimization based on Exact Penalty Functions


Noname manuscript No. (will be inserted by the editor)

A Derivative-Free Algorithm for Constrained Global Optimization based on Exact Penalty Functions

Gianni Di Pillo · Stefano Lucidi · Francesco Rinaldi

the date of receipt and acceptance should be inserted later

Abstract Constrained global optimization problems can be tackled by using exact penalty approaches. In a preceding paper, we proposed an exact penalty algorithm for constrained problems which combines an unconstrained global minimization technique for minimizing a non-differentiable exact penalty function for given values of the penalty parameter, and an automatic updating of the penalty parameter that occurs only a finite number of times. However, in the updating of the penalty parameter, the method requires the evaluation of the derivatives of the problem functions. In this work, we show that an efficient updating can be implemented also without using the problem derivatives, in this way making the approach suitable for globally solving constrained problems where the derivatives are not available. In the algorithm, any efficient derivative-free unconstrained global minimization technique can be used. In particular, we adopt an improved version of the DIRECT algorithm. In addition, to improve the performances, the approach is enriched by resorting to derivative-free local searches, in a multistart framework. In this context, we prove that, under suitable assumptions, for every global minimum point there exists a neighborhood of attraction for the local search. An extensive numerical experience is reported.

Keywords Nonlinear Optimization · Global Optimization · Exact Penalty Functions · Derivative-Free Minimization · DIRECT Algorithm

This work has been partially funded by the UE (ENIAC Joint Undertaking) in the MODERN project (ENIAC ).

G. Di Pillo, S. Lucidi (corresponding author): Dipartimento di Ingegneria Informatica, Automatica e Gestionale A. Ruberti, Sapienza Università di Roma, Via Ariosto 25, Rome (Italy). E-mail: dipillo@dis.uniroma1.it, lucidi@dis.uniroma1.it

F. Rinaldi: Dipartimento di Matematica, Università di Padova, Via Trieste 63, Padova (Italy). E-mail: rinaldi@math.unipd.it

Mathematics Subject Classification (2000) 90C26 · 90C30 · 90C56

1. Introduction

The search for global rather than local solutions of optimization problems has been receiving growing attention in many fields like Engineering, Economics and Applied Sciences. Many real-world problems in these fields are of the constrained type, while most of the algorithms in global optimization are for unconstrained or simply bound-constrained problems. For the unconstrained case, many algorithmic approaches, either deterministic or probabilistic, have been developed (see [1-7] and references therein). The more difficult case of nonlinearly-constrained global optimization problems has been investigated more recently, and various approaches have been described in, e.g., [1,7,8]. In this context, particular emphasis is given to the use of some Augmented Lagrangian function to deal with the general constraints (see [9-11]). However, the Augmented Lagrangian approach is sequential in nature and requires, in principle, an infinite number of global minimizations. Therefore, we adopt an exact penalty approach, so that the number of unconstrained global minimizations needed to globally solve the original constrained problem is finite. Moreover, by adopting an exact penalty approach we may take advantage of efficient derivative-free unconstrained algorithms. In this way, we can deal with constrained global optimization problems where the derivatives of the functions are not available, as happens, for instance, in black-box simulation-based optimization, a topic that is receiving growing interest in the literature (see [12] and references therein). In particular, based on the approach described in [13], we develop a new strategy having the following two distinguishing features:

- differently from [13], derivatives of the objective and constraint functions are not required for the penalty parameter updating;
- a local search phase is used for speeding up the convergence to a global solution of the constrained problem: indeed, we prove that, under suitable assumptions, for every global minimum point there exists a neighborhood of attraction for the local search.

The paper is organized as follows. In Section 2, we give some preliminary results that will be used throughout the paper. In Section 3, we describe the global optimization framework and we give some convergence results. In Section 4, we discuss the local search properties. In Section 5, we report an extended numerical experience using the proposed algorithm either alone, or combined with a derivative-free local minimization technique, and we discuss the efficiency of our approach. Finally, we draw some conclusions in Section 6.

2. Preliminaries

In this paper, we are interested in the global solution of the general nonlinear programming problem:

$$\min f(x) \quad \text{s.t.} \quad g(x) \le 0, \; l \le x \le u \tag{1}$$

where $f: \mathbb{R}^n \to \mathbb{R}$, $g: \mathbb{R}^n \to \mathbb{R}^p$, $l, u \in \mathbb{R}^n$ both finite, and we assume that $f$ and $g$ are continuously differentiable functions. We assume that no global information (convexity, Lipschitz constants, ...) on the problem is available. To simplify notation, we denote the bound constraints as

$$s(x) := \begin{bmatrix} l - x \\ x - u \end{bmatrix} \le 0.$$

We denote by $F$ the feasible set of problem (1):

$$F := \{x \in \mathbb{R}^n : g(x) \le 0, \; s(x) \le 0\}.$$

In order to solve problem (1), we adopt an exact penalty function approach like that described in [13]. In particular, we use a non-differentiable penalty function having the following form:

$$P_q(x;\varepsilon) = f(x) + \frac{1}{\varepsilon}\,\big\| \big[\max\{0, g(x)\},\, \max\{0, \bar s(x)\}\big] \big\|_q \tag{2}$$

where $1 \le q \le \infty$, and

$$\bar s_j(x) = \frac{s_j(x)}{\alpha_j - s_j(x)},$$

with $\alpha_j > 0$, $j = 1,\dots,2n$.

As pointed out in [13], even if, in general, the penalty function $P_q(x;\varepsilon)$ is non-differentiable, it turns out to be continuously differentiable at infeasible points. We denote by $D$ the set

$$D := \{x \in \mathbb{R}^n : s_j(x) \le \alpha_j, \; j = 1,\dots,2n\}.$$

Then, we consider the problem

$$\min_{x \in D} P_q(x;\varepsilon). \tag{3}$$

We make use of the following assumptions:

Assumption 2.1 $D$ is a compact set.

Assumption 2.2 The Mangasarian-Fromovitz Constraint Qualification (MFCQ) holds at every global solution $\bar x$ of Problem (1).
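To fix ideas about the penalty function (2) just defined, the following minimal sketch shows how it might be evaluated in practice; this is only an illustration under our own naming conventions (the function `penalty_Pq` and its signature are ours, not the authors' code, and `g` is assumed to be supplied as a callable):

```python
import numpy as np

def penalty_Pq(f, g, x, l, u, alpha, eps, q=2):
    """Illustrative evaluation of the exact penalty function (2).

    s(x) = (l - x, x - u) stacks the bound constraints, and
    s_bar_j(x) = s_j(x) / (alpha_j - s_j(x)) is the rescaling whose
    denominator vanishes on the boundary of D, driving P_q to +infinity
    there; the function is therefore evaluated only for x in int(D).
    """
    s = np.concatenate((l - x, x - u))               # s(x) <= 0 encodes l <= x <= u
    s_bar = s / (alpha - s)                          # alpha_j - s_j(x) > 0 inside D
    viol = np.concatenate((np.maximum(0.0, g(x)),    # g^+(x)
                           np.maximum(0.0, s_bar)))  # s_bar^+(x)
    return f(x) + np.linalg.norm(viol, ord=q) / eps
```

Note that for a feasible point the violation vector is zero, so that $P_q(x;\varepsilon) = f(x)$ on $F$: the penalty term is active only outside the feasible set.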

We note that Assumption 2.1 is common in global optimization and is always satisfied in real-world problems, and Assumption 2.2 is only needed at global minimizers; therefore they are very weak assumptions. In the following theorem (see [13]), we recall the main exactness property of the penalty function $P_q$, which is at the basis of the proposed approach:

Theorem 2.1 Under Assumption 2.2, there exists a threshold value $\bar\varepsilon$ such that, for any $\varepsilon \in (0, \bar\varepsilon]$, every global solution of Problem (1) is a global solution of Problem (3), and conversely.

Remark 2.1 We note that, by construction,

$$\lim_{x \to \partial D} P_q(x;\varepsilon) = +\infty,$$

where $\partial D$ denotes the boundary of $D$. Therefore Problem (3) has global solutions contained only in the interior of $D$, so that determining a global solution of Problem (3) is essentially an unconstrained problem.

Remark 2.2 Theorem 2.1 implies that, for a sufficiently small value $\varepsilon$, Problem (3) has only unconstrained global minimizers.

Remark 2.3 The global solution of Problem (3) can be obtained by using any method for unconstrained global optimization.

On these bases, in [13] we were able to describe an algorithm for the global solution of the constrained Problem (1), by combining any algorithm that globally solves the unconstrained global optimization Problem (3) with an automatic updating rule for the penalty parameter which makes use of first order derivatives of the problem functions.

In the paper, given a vector $v \in \mathbb{R}^n$, we denote by $v^\top$ its transpose, by $\|v\|_q$ its $q$-norm, and by $v^+ = \max\{0, v\}$ the $n$-vector $(\max\{0, v_1\},\dots,\max\{0, v_n\})$.

3. Derivative-Free Exact Penalty Global Optimization Algorithm

In this section, we introduce the DF-EPGO (Derivative-Free Exact Penalty Global Optimization) algorithm model for finding a global solution of Problem (1) using the exact penalty function (2), and we analyze its convergence properties. We assume that $F^k \in \mathbb{R}^n$ approximates the gradient of the objective function $f$ calculated at $x^k$, in the sense that there exists a value $\tau_k > 0$ such that:

$$\|F^k - \nabla f(x^k)\|_q \le M_1 \tau_k, \tag{4}$$

with $M_1 > 0$. In the same way, we assume that $V^k$ approximates the gradient of the function

$$\big\| \big[g^+(x),\, \bar s^+(x)\big] \big\|_q \tag{5}$$

calculated at $x^k \notin F$, in the sense that there exists a value $\tau_k > 0$ such that:

$$\big\| V^k - \nabla \big\| \big[g^+(x^k),\, \bar s^+(x^k)\big] \big\|_q \big\|_q \le M_2 \tau_k, \tag{6}$$

with $M_2 > 0$.

In Figure 1 we report the scheme of the DF-EPGO Algorithm, where we use the following condition to check if an updating of the penalty parameter is timely:

$$\varepsilon_k \Big( \|F^k\|_q + \big\| \big[g^+(x^k),\, \bar s^+(x^k)\big] \big\|_q \Big) > \rho\, \|V^k\|_q. \tag{7}$$

At iteration $k$ of the algorithm, we first calculate $x^k$ as a $\delta_k$-global minimizer of Problem (3), defined as the point $x^k$ such that

$$P_q(x^k;\varepsilon_k) \le P_q(x;\varepsilon_k) + \delta_k, \qquad \forall x \in D.$$

In principle, any global unconstrained optimization method can be used to compute a $\delta_k$-global minimizer $x^k$. Then, we check feasibility of $x^k$ for Problem (1) and, if $x^k$ is not feasible, we check if an updating of the penalty parameter is timely by means of condition (7). Finally, we reduce the value of $\delta_k$ in order to find a better approximation of the global solution of Problem (3), and the value of $\tau_k$ to obtain a better approximation of the derivatives of the problem functions. We point out that DF-EPGO is a conceptual algorithm model; in practice, as usual in global optimization, it will be stopped when a $\delta_k$-global minimizer $x^k$ which is reputed a good approximation of a global minimizer $x^*$ is found.

In order to state the convergence properties of the algorithm, we preliminarily prove the following lemma.

Lemma 3.1 Every accumulation point $\bar x$ of a sequence $\{x^k\}$ produced by the DF-EPGO Algorithm belongs to the set $F$.

Proof We consider two different cases:

Case 1) an index $\bar k$ exists such that, for any $k \ge \bar k$, $\varepsilon_k = \bar\varepsilon$: We assume, by contradiction, that there exists an accumulation point $\bar x \notin F$. We have, for $k$ sufficiently large, that the following inequality holds:

$$\big\| \big[g^+(x^k),\, \bar s^+(x^k)\big] \big\|_q \le \frac{\rho}{\bar\varepsilon}\, \big\| \nabla \big\| \big[g^+(x^k),\, \bar s^+(x^k)\big] \big\|_q \big\|_q - \|\nabla f(x^k)\|_q + (M_1 + M_2)\tau_k.$$

Let $\bar x$ be the limit of the subsequence $\{x^k\}_K$. Taking the limit for $k \to \infty$ on both sides, as $\tau_k \to 0$, we get

$$\big\| \big[g^+(\bar x),\, \bar s^+(\bar x)\big] \big\|_q \le \frac{\rho}{\bar\varepsilon}\, \big\| \nabla \big\| \big[g^+(\bar x),\, \bar s^+(\bar x)\big] \big\|_q \big\|_q - \|\nabla f(\bar x)\|_q. \tag{8}$$

From this point, the proof follows the same steps of the proof of Lemma 1 in [13].

Data: $\rho \in (0,1)$; $\varepsilon_k, \delta_k, \tau_k > 0$; $\sigma, \theta, \xi \in (0,1)$.
Step 1. Compute a $\delta_k$-global minimizer $x^k$ of Problem (3).
Step 2. If $x^k \in F$, set $\varepsilon_{k+1} = \varepsilon_k$; otherwise, if condition (7) holds, set $\varepsilon_{k+1} = \sigma\varepsilon_k$, else set $\varepsilon_{k+1} = \varepsilon_k$.
Step 3. Set $\delta_{k+1} = \theta\delta_k$, $\tau_{k+1} = \xi\tau_k$.

Fig. 1 Scheme of the DF-EPGO Algorithm (iteration $k$).

Case 2) $\{\varepsilon_k\} \to 0$: The proof is a verbatim repetition of the proof of Case 2 of Lemma 1 in [13].

Then we can state the first convergence result.

Theorem 3.1 Every accumulation point $\bar x$ of a sequence $\{x^k\}$ produced by the DF-EPGO Algorithm is a global minimizer of Problem (1).

Proof See proof of Theorem 2 in [13].
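To make the scheme of Fig. 1 concrete, here is one possible rendering of the iteration as code. It is only a sketch: the solver and the approximation oracles (`glob_min`, `feasible`, `grad_f_approx`, `grad_v_approx`, `viol_norm`) are hypothetical user-supplied stand-ins, not routines from the paper.

```python
import numpy as np

def df_epgo(glob_min, feasible, grad_f_approx, grad_v_approx, viol_norm,
            eps=0.1, rho=0.5, sigma=0.9,
            delta=1e-3, theta=0.1, tau=1e-6, xi=0.9, iters=50, q=2):
    """Sketch of the DF-EPGO iteration (Fig. 1); all callables are assumptions:
    glob_min(eps, delta)  -> a delta-global minimizer of Problem (3),
    feasible(x)           -> True if x is in F (up to a tolerance),
    grad_f_approx(x, tau) -> F^k, the approximation (4) of grad f(x^k),
    grad_v_approx(x, tau) -> V^k, the approximation (6),
    viol_norm(x)          -> || [g^+(x), s_bar^+(x)] ||_q.
    """
    x = None
    for _ in range(iters):
        x = glob_min(eps, delta)          # delta_k-global minimizer of Problem (3)
        if not feasible(x):
            lhs = eps * (np.linalg.norm(grad_f_approx(x, tau), q) + viol_norm(x))
            rhs = rho * np.linalg.norm(grad_v_approx(x, tau), q)
            if lhs > rhs:                 # updating test (7)
                eps *= sigma              # shrink the penalty parameter
        delta *= theta                    # better approximation of the global solution
        tau *= xi                         # better approximation of the derivatives
    return x
```

As in the conceptual model, no stopping rule is built in: in practice the loop would be stopped when the current $x^k$ is reputed a good approximation of a global minimizer.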

In the next theorem, we prove that, if Assumption 2.2 holds, the penalty parameter $\varepsilon$ is updated a finite number of times.

Theorem 3.2 Let us assume that Assumption 2.2 holds. Let $\{x^k\}$ and $\{\varepsilon_k\}$ be the sequences produced by the DF-EPGO Algorithm. Then an index $\bar k$ and a value $\bar\varepsilon > 0$ exist such that, for any $k \ge \bar k$, $\varepsilon_k = \bar\varepsilon$.

Proof By contradiction, let us assume $\{\varepsilon_k\} \to 0$. Then, from the scheme of the DF-EPGO Algorithm, there exists a subsequence $\{x^k\}_K$ such that, for all $k \in K$, $x^k \notin F$ and the test

$$\varepsilon_k \Big( \|F^k\|_q + \big\| \big[g^+(x^k),\, \bar s^+(x^k)\big] \big\|_q \Big) > \rho\, \|V^k\|_q \tag{9}$$

is satisfied. By rewriting (9) as

$$\varepsilon_k \Big( \|\nabla f(x^k)\|_q + M_1\tau_k + \big\| \big[g^+(x^k),\, \bar s^+(x^k)\big] \big\|_q \Big) > \rho \Big( \big\| \nabla \big\| \big[g^+(x^k),\, \bar s^+(x^k)\big] \big\|_q \big\|_q - M_2\tau_k \Big),$$

and considering the fact that $\tau_k \to 0$, we obtain:

$$\lim_{k \in K,\, k \to \infty} \big\| \big[g^+(x^k),\, \bar s^+(x^k)\big] \big\|_q = 0. \tag{10}$$

From this point, the proof follows the same steps of the proof of Theorem 3 in [13].

4. Enriching the Global Search by a Derivative-Free Local Approach

In practice, we can enrich the global search by means of a derivative-free local minimization algorithm using as starting point the $\delta_k$-global minimizer $x^k$ obtained at iteration $k$ by the DF-EPGO algorithm. In this section, we motivate analytically this enrichment by proving that, under reasonable assumptions, the sequence generated by the derivative-free local algorithm is attracted by a global solution of Problem (3). To simplify notation, we will develop the analysis making reference to a locally Lipschitz function $\phi : \mathbb{R}^n \to \mathbb{R}$ and to the problem

$$\min_{x \in \mathbb{R}^n} \phi(x). \tag{11}$$

Let us denote by $d^\circ\phi(x;d)$ the Clarke directional derivative of $\phi$ at $x \in \mathbb{R}^n$ in the direction $d \in \mathbb{R}^n$:

$$d^\circ\phi(x;d) := \limsup_{y \to x,\; t \downarrow 0} \frac{\phi(y + td) - \phi(y)}{t};$$

then we can recall the definition of stationary point for the nonsmooth Problem (11):

Definition 4.1 $x$ is a Clarke stationary point for $\phi$ iff the following condition is satisfied:

$$d^\circ\phi(x;d) \ge 0, \qquad \forall d \in \mathbb{R}^n. \tag{12}$$

An alternative definition of Clarke stationary point for Problem (11) can be given by using the Clarke subdifferential $\partial\phi(x)$ of $\phi$ at $x$, given as:

$$\partial\phi(x) = \operatorname{conv}\{v \in \mathbb{R}^n : \exists\, \{x^k\} \subset T,\; x^k \to x,\; \nabla\phi(x^k) \to v\},$$

$T$ being the subset of $\mathbb{R}^n$ where $\phi$ is differentiable. Even if we will not use the alternative definition, we will make use of the Clarke subdifferential in order to exploit some second order properties of the function $\phi$. To this aim we need some additional notations and definitions.

Definition 4.2 The sequence $\{x^k\}$ converges to $x$ in the direction $d$, denoted by $x^k \to_d x$, iff $x^k \to x$ and the sequence $\{(x^k - x)/\|x^k - x\|\}$ converges to $d$.

Then, we can denote by $\partial_d\phi(x)$ the subset of $\partial\phi(x)$ defined as follows:

$$\partial_d\phi(x) := \{g : \exists\, x^k \text{ and } g^k \in \partial\phi(x^k) \text{ such that } x^k \to_d x \text{ and } g^k \to g\}.$$

Definition 4.3 Let $g \in \partial_d\phi(x)$. Then $\phi''(x,g,d)$ is defined as the infimum of all numbers

$$\liminf_{k \to \infty} \frac{1}{(t_k)^2}\big\{\phi(x^k) - \phi(x) - g^\top(x^k - x)\big\} \tag{13}$$

taken over all triples of sequences $\{x^k\}$, $\{g^k\}$ and $\{t_k\}$ for which $t_k > 0$ for each $k$ and $\{t_k\}$ converges to 0, $\{x^k\}$ converges to $x$ and $(x^k - x)/t_k$ converges to $d$, and $\{g^k\}$ converges to $g$ with $g^k \in \partial\phi(x^k)$ for each $k$.

Moreover, let us denote by $d^+\phi(x;d)$ the lower Dini directional derivative at $x \in \mathbb{R}^n$ in the direction $d \in \mathbb{R}^n$:

$$d^+\phi(x;d) := \liminf_{d' \to d,\; t \downarrow 0} \frac{\phi(x + td') - \phi(x)}{t}.$$

We can now introduce the definition of second order Dini stationary point for Problem (11).

Definition 4.4 $x$ is a second order Dini stationary point for $\phi$ iff the following conditions are satisfied:

(i) $d^+\phi(x;d) \ge 0$ for all unit vectors $d \in \mathbb{R}^n$;
(ii) $\phi''(x,0,d) \ge \beta > 0$ for all unit vectors $d \in \mathbb{R}^n$ for which $d^+\phi(x;d) = 0$.

We state a preliminary result [14] that will be used to prove the main result:

Lemma 4.1 Suppose that $\phi(x) \ge \phi(\bar x)$ for all $x \in B(\bar x, \delta)$. Let $0 \ne d \in \mathbb{R}^n$, $t > 0$, $\alpha > 1$ and $0 < \eta < (\alpha\|d\|)^2$ be such that $\phi(\bar x + td) \le \phi(\bar x) + t\eta$ and $t(\|d\| + \eta^{1/2}) < \delta$. Then there exist $z \ne \bar x$ in $\mathbb{R}^n$ and $g \in \partial\phi(z)$ such that

(i) $\|z - \bar x - td\| \le t\eta^{1/2}\alpha^{-1}\ (< t\eta^{1/2})$,
(ii) $\phi(z) \le \phi(\bar x + td)$ and
(iii) $\|g\| \le \alpha\eta^{1/2}$.

Now we can prove the following result:

Proposition 4.1 Let $\bar x \in \mathbb{R}^n$ be a local minimum point of $\phi$ which satisfies the conditions for being a second order Dini stationary point. Then there exist a neighborhood $B(\bar x, \eta)$ and a value $\gamma > 0$ such that

$$\|x - \bar x\|^2 \le \frac{1}{\gamma}\big(\phi(x) - \phi(\bar x)\big), \qquad \forall x \in B(\bar x, \eta).$$

Proof Suppose, by contradiction, that for all $\eta_k > 0$ and $\gamma_k > 0$ there exists a point $x^k \in B(\bar x, \eta_k)$ such that

$$\|x^k - \bar x\|^2 > \frac{1}{\gamma_k}\big(\phi(x^k) - \phi(\bar x)\big). \tag{14}$$

Let

$$d^k = \frac{x^k - \bar x}{\|x^k - \bar x\|};$$

we consider sequences $\{\gamma_k\}, \{\eta_k\}, \{d^k\}$ such that $\gamma_k \to 0$, $\eta_k \to 0$, $d^k \to d$. Now, by (14) and (i) of Definition 4.4, when $k$ is large enough we can write

$$\gamma_k \|x^k - \bar x\| > \frac{\phi(x^k) - \phi(\bar x)}{\|x^k - \bar x\|}, \qquad d^+\phi(\bar x; d) \ge 0. \tag{15}$$

Taking the limit for $k \to \infty$ (so that the left-hand side of the first inequality vanishes), we have

$$d^+\phi(\bar x; d) = 0. \tag{16}$$

Now, we can denote

$$d^k = \frac{x^k - \bar x}{\|x^k - \bar x\|} \quad \text{and} \quad t_k = \|x^k - \bar x\|, \tag{17}$$

and by (14) we can write

$$\phi(\bar x + t_k d^k) < \phi(\bar x) + \gamma_k (t_k)^2. \tag{18}$$

Using Lemma 4.1 with $\alpha = 2$ and $\eta = \gamma_k t_k$, there exists a sequence $\{z^k\}$ such that

(i) $\|z^k - \bar x - t_k d^k\| \le \dfrac{t_k(\gamma_k t_k)^{1/2}}{2}$,
(ii) $\phi(z^k) \le \phi(\bar x + t_k d^k)$ and
(iii) $\|g^k\| \le 2(\gamma_k t_k)^{1/2}$.

By using (i) and (iii), we have that

$$\lim_{k \to \infty} z^k = \bar x, \tag{19}$$

$$\lim_{k \to \infty} \frac{z^k - \bar x}{t_k} = \lim_{k \to \infty} d^k = d, \tag{20}$$

$$\lim_{k \to \infty} \frac{\|z^k - \bar x\|}{t_k} = 1, \tag{21}$$

$$\lim_{k \to \infty} g^k = g = 0 \in \partial_d\phi(\bar x). \tag{22}$$

By using (ii) and (14), we have

$$\phi(z^k) \le \phi(x^k) < \phi(\bar x) + \gamma_k (t_k)^2. \tag{23}$$

Then, we can write

$$\frac{\phi(z^k) - \phi(\bar x)}{\|z^k - \bar x\|^2} \le \frac{(t_k)^2}{\|z^k - \bar x\|^2}\,\gamma_k.$$

By using (21) and the fact that $\gamma_k \to 0$, we have

$$\lim_{k \to \infty} \frac{\phi(z^k) - \phi(\bar x)}{\|z^k - \bar x\|^2} \le 0. \tag{24}$$

Finally, taking into account (ii) of Definition 4.4, (13) and (16), we get the following contradiction:

$$0 \ge \lim_{k \to \infty} \frac{\phi(z^k) - \phi(\bar x) - g^\top(z^k - \bar x)}{\|z^k - \bar x\|^2} \ge \phi''(\bar x, 0, d) \ge \beta > 0.$$

Now, we extend to the case of locally Lipschitz functions a result established in [15].

Theorem 4.1 Let $\phi$ be a locally Lipschitz function and $\{x^k\}$ be a sequence of points generated by an iterative method satisfying the conditions

$$x^{k+1} = x^k + \beta_k d^k, \qquad \phi(x^{k+1}) \le \phi(x^k) - \nu(\beta_k\|d^k\|)^2, \quad \text{for all } k,$$

where $\nu > 0$, and such that every accumulation point is a Clarke stationary point. Then, for every global minimum point $x^*$ of $\phi(x)$ which is an isolated Clarke stationary point and a second order Dini stationary point, there exists an open set $L$ containing $x^*$ such that, if $x^{\bar k} \in L$ for some $\bar k \ge 0$, then we have:

- $x^k \in L$ for all $k \ge \bar k$;
- $\{x^k\} \to x^*$.

Proof Since $x^*$ is an isolated Clarke stationary point, let $\delta_1 > 0$ be such that $x^*$ is the only Clarke stationary point in the set $C = \{x \in \mathbb{R}^n : \|x - x^*\| \le \delta_1\}$. Let $B(x^*, \delta_2)$ be the neighborhood introduced in Proposition 4.1, where we have

$$\phi(x) - \phi(x^*) \ge \gamma \|x - x^*\|^2, \qquad \forall x \in B(x^*, \delta_2). \tag{25}$$

We consider now the following open set:

$$L = \Big\{x \in B(x^*, \delta) : \phi(x) < \phi(x^*) + \frac{\delta^2\gamma\nu}{2(2\nu + \gamma)}\Big\}, \tag{26}$$

where $\delta = \min\{\delta_1, \delta_2\}$. Now we prove that, if $x^{\bar k} \in L$ for some $\bar k \ge 0$, then $x^k \in L$ for all $k \ge \bar k$ and $\{x^k\} \to x^*$. Recalling the hypothesis $\phi(x^{k+1}) \le \phi(x^k) - \nu(\beta_k)^2\|d^k\|^2$, we have

$$(\beta_k)^2\|d^k\|^2 \le \frac{1}{\nu}\big(\phi(x^k) - \phi(x^{k+1})\big) \le \frac{1}{\nu}\big(\phi(x^k) - \phi(x^*)\big), \tag{27}$$

where the second inequality follows from the hypothesis that $x^*$ is a global minimum. By using (25) and (27), we have

$$\|x^{k+1} - x^*\|^2 = \|x^k - x^* + \beta_k d^k\|^2 \le 2\|x^k - x^*\|^2 + 2(\beta_k)^2\|d^k\|^2 \le \frac{4}{\gamma}\big(\phi(x^k) - \phi(x^*)\big) + \frac{2}{\nu}\big(\phi(x^k) - \phi(x^*)\big) = \frac{2(2\nu + \gamma)}{\gamma\nu}\big(\phi(x^k) - \phi(x^*)\big). \tag{28}$$

Recalling that $x^k \in L$, we have

$$\phi(x^k) - \phi(x^*) < \frac{\delta^2\gamma\nu}{2(2\nu + \gamma)}.$$

Combining the above inequality with (28), we obtain

$$\|x^{k+1} - x^*\| < \delta. \tag{29}$$

Furthermore, by using again the hypothesis $\phi(x^{k+1}) \le \phi(x^k) - \nu(\beta_k)^2\|d^k\|^2$, we have that $\phi(x^{k+1}) \le \phi(x^k)$ for all $k$, and hence

$$\phi(x^{k+1}) \le \phi(x^k) < \phi(x^*) + \frac{\delta^2\gamma\nu}{2(2\nu + \gamma)}. \tag{30}$$

Using the above two inequalities, it follows that $x^{k+1} \in B(x^*, \delta)$, and hence $x^{k+1} \in L$; by induction, $x^k \in L$ for all $k \ge \bar k$. In particular, using the fact that $L \subseteq B(x^*, \delta)$, we have $x^k \in B(x^*, \delta)$ for all $k \ge \bar k$. Since, by definition,

$$x^k \in L \subseteq C = \{x \in \mathbb{R}^n : \|x - x^*\| \le \delta_1\},$$

and the set $C$ is compact, the sequence $\{x^k\}$ has at least one limit point, which must be a stationary point of Problem (11). Since within $B(x^*, \delta)$ the only Clarke stationary point is $x^*$, then $x^k \to x^*$.

As in the derivative-free local search used to enrich the global algorithm we minimize the exact penalty function $P_q$, which is a locally Lipschitz function, we can apply to it all the results given for $\phi$ in this section. Then, recalling Remark 2.1, we have that, under the assumptions of Theorem 4.1, there exists a neighborhood $L$ of a global solution of Problem (3) where the sequence produced by the derivative-free local algorithm applied to $P_q$ remains and is attracted by the global solution.

5. Numerical Results

In the numerical experimentation of the derivative-free exact penalty approach we used:

- the set A of 20 small dimensional test problems reported in [9], where a global optimization algorithm for general constrained problems, which uses first and second order derivatives of the problem functions, is described;
- the set B of 97 problems from the GLOBALLib collection of COCONUT [16] with dimension $n \le 10$.

These choices are motivated by the fact that derivative-free approaches cannot tackle large dimensional problems. All test problems are of the kind

$$\min f(x) \quad \text{s.t.} \quad g(x) \le 0, \; h(x) = 0, \; l \le x \le u \tag{31}$$

with $h : \mathbb{R}^n \to \mathbb{R}^t$, $t \le n$. Of course, the DF-EPGO Algorithm can be easily adapted in order to take into account also the equality constraints $h(x) = 0$. In the practical application of the proposed approach, we must choose:

- A value $q$ for the vector norm. We chose $q = 2$.
- A method for approximating the derivatives. We chose the simple forward difference

$$\frac{\partial P_q(x^k;\varepsilon)}{\partial x_i} \approx \frac{P_q(x^k + \gamma_k e_i;\varepsilon) - P_q(x^k;\varepsilon)}{\gamma_k}, \tag{32}$$

with $\gamma_{k+1} = \xi\gamma_k$, $\gamma_0 = 10^{-6}$ and $\xi = 0.9$. Of course, by this choice the conditions (4) and (6) are satisfied.
- A method for globally solving the problem with bounded feasible set. In principle, any method could be used. In our numerical experience we used: 1) the DIRECT method as improved in [17]; 2) the DIRECT method as improved in [17], enriched by a derivative-free local search phase (DFNS method) based on the same exact penalty function (2) used in the global search [18].

- Values for the data of the exact penalty approach. We set $\varepsilon_0 = 0.1$, $\sigma = 0.9$ and $\rho = 0.5$.

Our choice in case 1) is motivated by the fact that DIRECT is reputed to be one of the most efficient methods for problems with simple bounds. DIRECT belongs to the class of derivative-free deterministic methods, which perform the sampling of the objective function in points generated by the previous iterates of the method. The derivative-free algorithm used in case 2) seems to be very effective when first order information is not available.

The practical computation of a $\delta_k$-global minimizer $x^k$ of Problem (3) has been performed, in case 1), using the standard stopping rule of the DIRECT Algorithm [19]. Namely, the DIRECT algorithm stops when the diameter of the hyperrectangle containing the best found value of the objective function is less than a given threshold $\zeta_k$. We set the initial threshold $\zeta_0$ and $\zeta_{k+1} = \theta\zeta_k$ with $\theta = 0.1$. In case 2), the local search phase starts from the point obtained by the DIRECT algorithm as described before, and stops as soon as the stepsize $\alpha$ in [18] falls below a prescribed threshold.

As concerns the check of feasibility of the new point $x^k$, we consider $x^k$ feasible if the constraint violation

$$cv = \max\{\|g^+(x)\|_\infty,\, \|h(x)\|_\infty\}$$

is lower than or equal to $10^{-3}$, and infeasible otherwise.

The numerical experimentation was carried out on an Intel Core 2 Duo 3.16 GHz processor with 3.25 GB RAM. The DF-EPGO and DFNS Algorithms were implemented in Fortran 90 (double precision).

The results for the problem set A are collected in Tables 1 and 2, which refer respectively to cases 1) and 2). In both tables we report the name of the problem, the number $n$ of variables, the numbers $me$ and $mi$ of equality and inequality constraints (excluding simple bounds), the CPU time required to attain the stopping condition, the optimal function value $f(x^*)$, the constraint violation $cv$, and the reference value $f^*$ reported in [9]. In Table 1, we note that the values $f(x^*)$ and $f^*$ are practically the same for all problems 3-12, 15 and 16; $f(x^*)$ is lower than $f^*$ for problem 2b, while $f^*$ is lower than $f(x^*)$ for problems 1, 2a, 2c, 2d, 13 and 14. These results can be considered satisfactory, considering that the results reported in [9] are obtained using first and second order derivatives. Furthermore, we point out that the results reported in Table 1 are comparable with the ones reported in [13], where first order derivatives of the problem functions are used. In particular, the values of $f(x^*)$ and $cv$ are practically the same, while there is a slight increase of the CPU time, as expected. In Table 2, we note that the values $f(x^*)$ are practically the same as the ones reported in Table 1, but we have better results in terms of constraint violation.

The results for the problem set B are collected in Tables 3-6.
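The two practical ingredients just described, the forward-difference rule (32) and the feasibility check via $cv$, might look as follows; this is a sketch with our own function names, and the componentwise max in `constraint_violation` is our reading of the definition of $cv$ above:

```python
import numpy as np

def forward_diff_grad(P, x, gamma):
    """Forward-difference approximation (32) of the gradient of P at x."""
    Px = P(x)
    grad = np.empty(x.size)
    for i in range(x.size):
        step = np.zeros(x.size)
        step[i] = gamma
        grad[i] = (P(x + step) - Px) / gamma
    return grad

def constraint_violation(g_val, h_val, tol=1e-3):
    """cv = max{max_i g_i^+(x), max_j |h_j(x)|}; the point is declared
    feasible when cv <= tol (tol = 1e-3 in the experiments)."""
    cv = max(np.max(np.maximum(0.0, g_val), initial=0.0),
             np.max(np.abs(h_val), initial=0.0))
    return cv, cv <= tol
```

In the actual runs the difference step shrinks geometrically ($\gamma_{k+1} = \xi\gamma_k$), which is what makes the error bounds (4) and (6) hold with $\tau_k \to 0$.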

Fig. 2 Performance Comparison between the DF-EPGO Algorithm (dotted line) and the DF-EPGO+DFNS Algorithm (dash-dot line) on the test set B: number of feasible solutions (y-axis: number of feasible solutions; x-axis: constraint violation ×10⁻³).

In Tables 3-6 we report the name of the problem, the number $n$ of variables, the numbers $me$ and $mi$ of equality and inequality constraints (excluding simple bounds), the CPU time required to attain the stopping condition, the optimal function value $f(x^*)$, and the constraint violation $cv$. In order to synthesize the results, we make reference to Figures 2 and 3. The plot in Figure 2 gives on the y-axis the number of obtained solutions whose constraint violation is smaller than or equal to the value given on the x-axis. The plot in Figure 3 gives on the y-axis the number of obtained solutions whose objective function value has a relative error, given by

$$\frac{f(x^*) - f^*}{\max\{1, |f^*|\}},$$

where $f^*$ is the best known value reported in [16], smaller than or equal to 0.01, and whose constraint violation is smaller than or equal to the value given on the x-axis. From both figures, we note that the DF-EPGO algorithm guarantees quite good performances in terms of feasibility and optimality, and that these performances are much improved by combining the DF-EPGO algorithm with a local search phase based on the DFNS algorithm.
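As an aside, the curves in Figures 2 and 3 amount to simple counts over the collected results; a sketch (where `results` is a hypothetical list of `(f_found, cv, f_best)` triples, one per problem) could be:

```python
def count_feasible(results, cv_tol):
    """Number of solutions with constraint violation <= cv_tol (Figure 2)."""
    return sum(1 for _, cv, _ in results if cv <= cv_tol)

def count_best_feasible(results, cv_tol, rel_tol=0.01):
    """Solutions that are feasible up to cv_tol and whose relative error
    (f_found - f_best) / max(1, |f_best|) is <= rel_tol (Figure 3)."""
    return sum(1 for f, cv, fb in results
               if cv <= cv_tol and (f - fb) / max(1.0, abs(fb)) <= rel_tol)
```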

Fig. 3 Performance Comparison between the DF-EPGO Algorithm (dotted line) and the DF-EPGO+DFNS Algorithm (dash-dot line) on the test set B: number of best feasible solutions (y-axis: number of best feasible solutions; x-axis: constraint violation ×10⁻³).

6. Conclusions

In this paper, we have shown that the approach for constrained global optimization developed in [13], based on the use of a non-differentiable exact penalty function, can be fruitfully modified in such a way that the derivatives of the problem functions are not required, thus making the approach derivative-free. Moreover, the approach has been enriched by a local search phase that uses the same penalty function used in the global search phase. This is motivated by the fact that the local searches are performed in such a way that the sequence produced by the algorithm, under suitable assumptions, is attracted by any global minimum point. Avoiding the use of derivatives makes the approach suitable to deal with real-world problems where the analytical expressions of the problem functions are not available, as happens for instance in black-box simulation-based optimization. The extensive numerical experimentation, which has been carried out on a well-known set of problems, shows that the new method is efficient and that the results are comparable with those obtained by other methods which instead rely on the use of derivatives.

References

1. Floudas, C.A.: Deterministic Global Optimization: Theory, Methods and Applications. Kluwer Academic Publishers, Dordrecht, The Netherlands (1999)
2. Horst, R., Tuy, H.: Introduction to Global Optimization. Kluwer Academic Publishers, Dordrecht, The Netherlands (2000)

3. Horst, R., Pardalos, P.M., Thoai, N.V.: Global Optimization: Deterministic Approaches. Springer-Verlag, Berlin, Germany (1990)
4. Jones, D.R., Perttunen, C.D., Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications 79(1) (1993)
5. Jones, D.R.: The DIRECT global optimization algorithm. In: C. Floudas, P. Pardalos (eds.) Encyclopedia of Optimization (2001)
6. Pinter, J.D.: Global Optimization in Action. Kluwer Academic Publishers, Dordrecht, Boston, London (1996)
7. Tawarmalani, M., Sahinidis, N.V.: Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming. Kluwer Academic Publishers, Dordrecht, The Netherlands (2002)
8. Neumaier, A., Shcherbina, O., Huyer, W., Vinkó, T.: A comparison of complete global optimization solvers. Mathematical Programming 103 (2005)
9. Birgin, E.G., Floudas, C.A., Martinez, J.M.: Global minimization using an augmented Lagrangian method with variable lower-level constraints. Mathematical Programming, Series A 125(1) (2010)
10. Luo, H.Z., Sun, X.L., Li, D.: On the convergence of augmented Lagrangian methods for constrained global optimization. SIAM Journal on Optimization 18 (2007)
11. Wang, C.Y., Li, D.: Unified theory of augmented Lagrangian methods for constrained global optimization. Journal of Global Optimization 44 (2009)
12. Cassioli, A., Schoen, F.: Global optimization of expensive black box problems with a known lower bound. Journal of Global Optimization (2011)
13. Di Pillo, G., Lucidi, S., Rinaldi, F.: An approach to constrained global optimization based on exact penalty functions. Journal of Global Optimization 54(2) (2012)
14. Huang, L.R., Ng, K.F.: Second-order necessary and sufficient conditions in nonsmooth optimization. Mathematical Programming 66(3) (1994)
15. Liuzzi, G., Lucidi, S., Piccialli, V., Sotgiu, A.: A magnetic resonance device designed via global optimization techniques. Mathematical Programming 101 (2004)
16. Neumaier, A., Sam-Haroud, D., Vu, X.H., Nguyen, T.V.: Benchmarking global optimization and constraint satisfaction codes. In: Ch. Bliek, Ch. Jermann, A. Neumaier (eds.) Global Optimization and Constraint Satisfaction (2003)
17. Liuzzi, G., Lucidi, S., Piccialli, V.: An efficient data structure for the DIRECT algorithm. Tech. rep., IASI (2012)
18. Fasano, G., Liuzzi, G., Lucidi, S., Rinaldi, F.: An exact penalty method for constrained Lipschitz optimization. Tech. rep., DIIAG (2013)
19. Gablonsky, J.M.: DIRECT Version 2.0, User Guide. Tech. rep., North Carolina State University (2001)

Tables

Table 1 Performance of the DF-EPGO Algorithm on the test set A. Columns: PROBLEM, n, me, mi, CPU time, f(x*), cv, f*; one row for each of the 20 problems (problem01, problem02a-problem02d, problem03a, problem03b, problem04-problem16). (Numerical entries omitted.)

Table 2 Performance of the DF-EPGO+DFNS Algorithm on the test set A. Columns: PROBLEM, n, me, mi, CPU time, f(x*), cv, f*; same 20 problems as in Table 1. (Numerical entries omitted.)

Table 3 Performance of the DF-EPGO Algorithm on the test set B (Part I). Columns: PROBLEM, n, me, mi, CPU time, f(x*), cv, f*. (Numerical entries omitted.)

Table 4 Performance of the DF-EPGO Algorithm on the test set B (Part II). Columns: PROBLEM, n, me, mi, CPU time, f(x*), cv, f*. (Numerical entries omitted.)

Table 5 Performance of the DF-EPGO+DFNS Algorithm on the test set B (Part I). Columns: PROBLEM, n, me, mi, CPU time, f(x*), cv, f*. (Numerical entries omitted.)

Table 6 Performance of the DF-EPGO+DFNS Algorithm on the test set B (Part II). Columns: PROBLEM, n, me, mi, CPU time, f(x*), cv, f*. (Numerical entries omitted.)


More information

GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH

GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH GLOBAL CONVERGENCE OF CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH Jie Sun 1 Department of Decision Sciences National University of Singapore, Republic of Singapore Jiapu Zhang 2 Department of Mathematics

More information

Some new facts about sequential quadratic programming methods employing second derivatives

Some new facts about sequential quadratic programming methods employing second derivatives To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov

More information

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL)

Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization. Nick Gould (RAL) Part 5: Penalty and augmented Lagrangian methods for equality constrained optimization Nick Gould (RAL) x IR n f(x) subject to c(x) = Part C course on continuoue optimization CONSTRAINED MINIMIZATION x

More information

Relationships between upper exhausters and the basic subdifferential in variational analysis

Relationships between upper exhausters and the basic subdifferential in variational analysis J. Math. Anal. Appl. 334 (2007) 261 272 www.elsevier.com/locate/jmaa Relationships between upper exhausters and the basic subdifferential in variational analysis Vera Roshchina City University of Hong

More information

CONVERGENCE ANALYSIS OF SAMPLING-BASED DECOMPOSITION METHODS FOR RISK-AVERSE MULTISTAGE STOCHASTIC CONVEX PROGRAMS

CONVERGENCE ANALYSIS OF SAMPLING-BASED DECOMPOSITION METHODS FOR RISK-AVERSE MULTISTAGE STOCHASTIC CONVEX PROGRAMS CONVERGENCE ANALYSIS OF SAMPLING-BASED DECOMPOSITION METHODS FOR RISK-AVERSE MULTISTAGE STOCHASTIC CONVEX PROGRAMS VINCENT GUIGUES Abstract. We consider a class of sampling-based decomposition methods

More information

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions

Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum Functions 260 Journal of Advances in Applied Mathematics, Vol. 1, No. 4, October 2016 https://dx.doi.org/10.22606/jaam.2016.14006 Newton-type Methods for Solving the Nonsmooth Equations with Finitely Many Maximum

More information

Research Article Modified T-F Function Method for Finding Global Minimizer on Unconstrained Optimization

Research Article Modified T-F Function Method for Finding Global Minimizer on Unconstrained Optimization Mathematical Problems in Engineering Volume 2010, Article ID 602831, 11 pages doi:10.1155/2010/602831 Research Article Modified T-F Function Method for Finding Global Minimizer on Unconstrained Optimization

More information

A smoothing augmented Lagrangian method for solving simple bilevel programs

A smoothing augmented Lagrangian method for solving simple bilevel programs A smoothing augmented Lagrangian method for solving simple bilevel programs Mengwei Xu and Jane J. Ye Dedicated to Masao Fukushima in honor of his 65th birthday Abstract. In this paper, we design a numerical

More information

Spectral gradient projection method for solving nonlinear monotone equations

Spectral gradient projection method for solving nonlinear monotone equations Journal of Computational and Applied Mathematics 196 (2006) 478 484 www.elsevier.com/locate/cam Spectral gradient projection method for solving nonlinear monotone equations Li Zhang, Weijun Zhou Department

More information

On the complexity of an Inexact Restoration method for constrained optimization

On the complexity of an Inexact Restoration method for constrained optimization On the complexity of an Inexact Restoration method for constrained optimization L. F. Bueno J. M. Martínez September 18, 2018 Abstract Recent papers indicate that some algorithms for constrained optimization

More information

Semismooth Support Vector Machines

Semismooth Support Vector Machines Semismooth Support Vector Machines Michael C. Ferris Todd S. Munson November 29, 2000 Abstract The linear support vector machine can be posed as a quadratic program in a variety of ways. In this paper,

More information

New hybrid conjugate gradient methods with the generalized Wolfe line search

New hybrid conjugate gradient methods with the generalized Wolfe line search Xu and Kong SpringerPlus (016)5:881 DOI 10.1186/s40064-016-5-9 METHODOLOGY New hybrid conjugate gradient methods with the generalized Wolfe line search Open Access Xiao Xu * and Fan yu Kong *Correspondence:

More information

SOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION

SOLVING A MINIMIZATION PROBLEM FOR A CLASS OF CONSTRAINED MAXIMUM EIGENVALUE FUNCTION International Journal of Pure and Applied Mathematics Volume 91 No. 3 2014, 291-303 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijpam.v91i3.2

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization

Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization Stochastic Programming with Multivariate Second Order Stochastic Dominance Constraints with Applications in Portfolio Optimization Rudabeh Meskarian 1 Department of Engineering Systems and Design, Singapore

More information

Constrained Optimization Theory

Constrained Optimization Theory Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August

More information

Chapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems

Chapter 1. Optimality Conditions: Unconstrained Optimization. 1.1 Differentiable Problems Chapter 1 Optimality Conditions: Unconstrained Optimization 1.1 Differentiable Problems Consider the problem of minimizing the function f : R n R where f is twice continuously differentiable on R n : P

More information

8. Constrained Optimization

8. Constrained Optimization 8. Constrained Optimization Daisuke Oyama Mathematics II May 11, 2018 Unconstrained Maximization Problem Let X R N be a nonempty set. Definition 8.1 For a function f : X R, x X is a (strict) local maximizer

More information

DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY

DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY DUALIZATION OF SUBGRADIENT CONDITIONS FOR OPTIMALITY R. T. Rockafellar* Abstract. A basic relationship is derived between generalized subgradients of a given function, possibly nonsmooth and nonconvex,

More information

On the relation between concavity cuts and the surrogate dual for convex maximization problems

On the relation between concavity cuts and the surrogate dual for convex maximization problems On the relation between concavity cuts and the surrogate dual for convex imization problems Marco Locatelli Dipartimento di Ingegneria Informatica, Università di Parma Via G.P. Usberti, 181/A, 43124 Parma,

More information

Inexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming 1 2

Inexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming 1 2 Inexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming 1 2 J. M. Martínez 3 December 7, 1999. Revised June 13, 2007. 1 This research was supported

More information

LIMITED MEMORY BUNDLE METHOD FOR LARGE BOUND CONSTRAINED NONSMOOTH OPTIMIZATION: CONVERGENCE ANALYSIS

LIMITED MEMORY BUNDLE METHOD FOR LARGE BOUND CONSTRAINED NONSMOOTH OPTIMIZATION: CONVERGENCE ANALYSIS LIMITED MEMORY BUNDLE METHOD FOR LARGE BOUND CONSTRAINED NONSMOOTH OPTIMIZATION: CONVERGENCE ANALYSIS Napsu Karmitsa 1 Marko M. Mäkelä 2 Department of Mathematics, University of Turku, FI-20014 Turku,

More information

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT

Pacific Journal of Optimization (Vol. 2, No. 3, September 2006) ABSTRACT Pacific Journal of Optimization Vol., No. 3, September 006) PRIMAL ERROR BOUNDS BASED ON THE AUGMENTED LAGRANGIAN AND LAGRANGIAN RELAXATION ALGORITHMS A. F. Izmailov and M. V. Solodov ABSTRACT For a given

More information

arxiv: v2 [cs.gt] 1 Dec 2018

arxiv: v2 [cs.gt] 1 Dec 2018 Exact Penalization of Generalized Nash Equilibrium Problems Qin Ba Jong-Shi Pang Original November 18, 2018 arxiv:1811.10674v2 [cs.gt] 1 Dec 2018 Abstract This paper presents an exact penalization theory

More information

OPTIMALITY CONDITIONS FOR GLOBAL MINIMA OF NONCONVEX FUNCTIONS ON RIEMANNIAN MANIFOLDS

OPTIMALITY CONDITIONS FOR GLOBAL MINIMA OF NONCONVEX FUNCTIONS ON RIEMANNIAN MANIFOLDS OPTIMALITY CONDITIONS FOR GLOBAL MINIMA OF NONCONVEX FUNCTIONS ON RIEMANNIAN MANIFOLDS S. HOSSEINI Abstract. A version of Lagrange multipliers rule for locally Lipschitz functions is presented. Using Lagrange

More information