Inexact Solution of NLP Subproblems in MINLP


M. Li · L. N. Vicente · April 4, 2011

Abstract. In the context of convex mixed-integer nonlinear programming (MINLP), we investigate how the outer approximation method and the generalized Benders decomposition method are affected when the respective NLP subproblems are solved inexactly. We show that the cuts in the corresponding master problems can be changed to incorporate the inexact residuals, still rendering equivalence and finiteness in the limit case. Some numerical results will be presented to illustrate the behavior of the methods under NLP subproblem inexactness.

Keywords: Mixed integer nonlinear programming, outer approximation, generalized Benders decomposition, inexactness, convexity.

1 Introduction

Recently, mixed integer nonlinear programming (MINLP) has become again a very active research area [1, 2, 4, 5, 6, 7, 14, 16]. Benders [3] developed in the 1960s a technique for solving linear mixed-integer problems, later called Benders decomposition. Geoffrion [11] extended it to MINLP in 1972, in what became known as generalized Benders decomposition (GBD). Much later, in 1986, Duran and Grossmann [8] derived a new outer approximation (OA) method to solve a particular class of MINLP problems, which became widely used in practice. Although the authors showed finiteness of the OA algorithm, their theory was restricted to problems where the discrete variables appear linearly and the functions involving the continuous variables are convex. Both OA and GBD are iterative schemes requiring at each iteration the solution of a (feasible or infeasible) NLP subproblem and of one mixed-integer linear programming (MILP) master problem. For these particular MINLP problems, Quesada and Grossmann [15] then proved that the cuts in the master problem of OA imply the cuts in the master problem of GBD, showing that the GBD algorithm provides weaker lower bounds and generally requires more major iterations to converge. Fletcher and Leyffer [9] generalized the OA method of Duran and Grossmann [8] to a wider class of problems, where nonlinearities in the discrete variables are allowed as long as the corresponding functions are convex in these variables. They also introduced a new and

Department of Mathematics, University of Coimbra, Coimbra, Portugal (limin@mat.uc.pt). Support for this author was provided by FCT under the scholarship SFRH/BD/33369/2008.
CMUC, Department of Mathematics, University of Coimbra, Coimbra, Portugal (lnv@mat.uc.pt). Support for this author was provided by FCT under the grant PTDC/MAT/098214/

simpler proof of finiteness of the OA algorithm. The relationship between OA and GBD was then addressed, again, by Grossmann [12] in this wider context of MINLP problems, showing once more that the lower bound predicted by the relaxed master problem of OA is greater than or equal to the one predicted by the relaxed master problem of GBD (see also Flippo and Rinnooy Kan [10] for the relationship between the two techniques). Recently, Bonami et al. [4] suggested a different OA algorithm using linearizations of both the objective function and the constraints, independently of being taken at feasible or infeasible NLP subproblems, to build the MILP master problem. This technique is, in fact, different from the traditional OA (see [9]), where the cuts in the master MILP problems do not involve linearizations of the objective function in the infeasible case. Westerlund and Pettersson [18] generalized the cutting plane method [13] from convex NLP to convex MINLP, in what is known as the extended cutting plane (ECP) method (see also [19, 20]). While OA and GBD alternate between the solution of MILP and NLP subproblems, the ECP method relies only on the solution of MILP problems.

In the above mentioned OA and GBD approaches, the NLP subproblems are solved exactly, at least for the derivation of the theoretical properties, such as the equivalence between the original and master problems and the finite termination of the corresponding algorithms. In this paper we investigate the effect of NLP subproblem inexactness on these two techniques. We show how the cuts in the master problems can be changed to incorporate the inexact residuals of the first order necessary conditions of the NLP subproblems, in a way that still renders the equivalence and finiteness properties, as long as the size of these residuals allows inferring the cuts from convexity properties.

In this paper, we will adopt the MINLP formulation

$$ (P) \qquad \min_{x,y} \; f(x,y) \quad \text{s.t.} \quad g(x,y) \le 0, \;\; x \in X \cap \mathbb{Z}^{n_d}, \;\; y \in Y, $$

where $X$ is a bounded polyhedral subset of $\mathbb{R}^{n_d}$ and $Y$ a polyhedral subset of $\mathbb{R}^{n_c}$. The functions $f : X \times Y \to \mathbb{R}$ and $g : X \times Y \to \mathbb{R}^m$ are assumed continuously differentiable. We will also assume that $P$ is convex, i.e., that $f$ and $g$ are convex functions.

Let $\bar x$ be any element of $X \cap \mathbb{Z}^{n_d}$. Consider, then, the (convex) subproblem

$$ (\mathrm{NLP}(\bar x)) \qquad \min_{y} \; f(\bar x, y) \quad \text{s.t.} \quad g(\bar x, y) \le 0, \;\; y \in Y, $$

and suppose it is feasible. In this case, $\bar y$ will represent an approximated optimal solution of $\mathrm{NLP}(\bar x)$. For an $x^k$ in $X \cap \mathbb{Z}^{n_d}$ for which $\mathrm{NLP}(x^k)$ is infeasible, $y^k$ is instead defined as an approximated optimal solution of the following feasibility (convex) subproblem

$$ (\mathrm{NLPF}(x^k)) \qquad \min_{y,u} \; u \quad \text{s.t.} \quad g_i(x^k, y) \le u, \; i = 1,\dots,m, \;\; y \in Y, \;\; u \in \mathbb{R}, $$

where one minimizes the $\ell_\infty$-norm of the measure of infeasibility of subproblem $\mathrm{NLP}(x^k)$.
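Throughout, "solved inexactly" means that the NLP solver is stopped at a loose tolerance. As a concrete illustration (not the authors' setup, which used MATLAB's fmincon on the test problems of Section 4), the following minimal sketch solves an NLP$(\bar x)$ for a hypothetical convex instance in Python, using a loose stopping tolerance and recovering approximate multipliers:

```python
# Minimal sketch of inexactly solving NLP(xbar): the data f, g below are
# hypothetical, and scipy stands in for the fmincon-based setup of the paper.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def f(x, y):  # convex objective
    return y[0] ** 2 + y[1] ** 2 + x @ x

def g(x, y):  # convex constraints, g(x, y) <= 0
    return np.array([1.0 - y[0] - y[1] + x.sum(), y[0] - 2.0])

xbar = np.array([1.0, 0.0])  # fixed discrete assignment
cons = NonlinearConstraint(lambda y: g(xbar, y), -np.inf, 0.0)

# A loose gtol stops the solver early; this early stopping is what produces
# the nonzero KKT residuals studied in the paper.
res = minimize(lambda y: f(xbar, y), np.zeros(2), method="trust-constr",
               constraints=[cons], options={"gtol": 1e-2})
ybar = res.x
lam = np.maximum(np.asarray(res.v[0]), 0.0)  # approximate multipliers,
                                             # clipped to be nonnegative
                                             # (sign conventions vary by solver)
```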

For a matter of simplification, and without loss of generality, we suppose that the constraints $y \in Y$ are part of the constraints $g(\bar x, y) \le 0$ and $g_i(x^k, y) \le u$, $i = 1,\dots,m$, in the subproblems $\mathrm{NLP}(\bar x)$ and $\mathrm{NLPF}(x^k)$, respectively.

In addition, let us assume that the approximated optimal solutions of the NLP subproblems satisfy an inexact form of the corresponding first order necessary Karush-Kuhn-Tucker (KKT) conditions. More particularly, in the case of $\mathrm{NLP}(\bar x)$, we assume the existence of $\bar\lambda \in \mathbb{R}^m_+$, $\bar r \in \mathbb{R}^{n_c}$, and $\bar s \in \mathbb{R}^m$ such that

$$ \nabla_y f(\bar x, \bar y) + \sum_{i=1}^m \bar\lambda_i \nabla_y g_i(\bar x, \bar y) = \bar r, \qquad (1) $$

$$ \bar\lambda_i \, g_i(\bar x, \bar y) = \bar s_i, \quad i = 1,\dots,m. \qquad (2) $$

When $\mathrm{NLP}(x^k)$ is infeasible, we assume, for $\mathrm{NLPF}(x^k)$, the existence of $\mu^k \in \mathbb{R}^m_+$, $z^k \in \mathbb{R}^m$, $w^k \in \mathbb{R}$, and $v^k \in \mathbb{R}^{n_c}$ such that

$$ \sum_{i=1}^m \mu^k_i \nabla_y g_i(x^k, y^k) = v^k, \qquad (3) $$

$$ 1 - \sum_{i=1}^m \mu^k_i = w^k, \qquad (4) $$

$$ \mu^k_i \left( g_i(x^k, y^k) - u^k \right) = z^k_i, \quad i = 1,\dots,m. \qquad (5) $$

Points satisfying the inexact KKT conditions can be seen as solutions of appropriately perturbed subproblems (see the Appendix). The following two sets will then be used to index these two sets of approximated optimal solutions:

$$ T = \{ j : x^j \in X \cap \mathbb{Z}^{n_d}, \; \mathrm{NLP}(x^j) \text{ is feasible and } y^j \text{ appr. solves } \mathrm{NLP}(x^j) \} $$

and

$$ S = \{ k : x^k \in X \cap \mathbb{Z}^{n_d}, \; \mathrm{NLP}(x^k) \text{ is infeasible and } y^k \text{ appr. solves } \mathrm{NLPF}(x^k) \}. $$

The inexact versions of OA and GBD studied in this paper will attempt to find the best pair among all of the form $(x^j, y^j)$ corresponding to $j \in T$. Implicitly, we are thus redefining a perturbed version of problem $P$, which we will denote by $\bar P$:

$$ (\bar P) \qquad \min_{j \in T} \; f(x^j, y^j). \qquad (6) $$

This problem is well defined if $T \neq \emptyset$, which in turn can be assumed when the original MINLP problem $P$ has a finite optimal value.

We use the superscripts $l$, $p$, and $q$ to denote the iteration count, the superscript $j$ to index the feasible NLP subproblems defined above, and $k$ to indicate infeasible subproblems. The following notation is adopted to distinguish between function values and functions: $f^l = f(x^l, y^l)$ denotes the value of $f$ evaluated at the point $(x^l, y^l)$; similarly, $\nabla f^l = \nabla f(x^l, y^l)$ is the value of the gradient of $f$ at the point $(x^l, y^l)$, $\nabla_x f^l = \nabla_x f(x^l, y^l)$ is the value of the gradient of $f$ with respect to $x$ at the point $(x^l, y^l)$, and $\nabla_y f^l = \nabla_y f(x^l, y^l)$ is the value of the gradient of $f$ with respect to $y$ at the point $(x^l, y^l)$. Moreover, the same conventions apply to all other functions.
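In computational terms, the residuals in (1)-(2) can be evaluated directly from the approximate primal-dual pair returned by the NLP solver. A small sketch (the gradient routines grad_y_f and grad_y_g are hypothetical callbacks, not names from the paper):

```python
import numpy as np

def kkt_residuals(grad_y_f, grad_y_g, g, x, y, lam):
    """Residuals of the inexact KKT conditions (1)-(2) of NLP(x).

    grad_y_f(x, y): gradient of f with respect to y (length n_c);
    grad_y_g(x, y): m x n_c Jacobian of g with respect to y;
    g(x, y): constraint values; lam: approximate multipliers in R^m_+.
    """
    r = grad_y_f(x, y) + grad_y_g(x, y).T @ lam  # stationarity residual (1)
    s = lam * g(x, y)                            # complementarity residual (2)
    return r, s
```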

We organize the paper in the following way. In Section 2, we extend OA to the inexact solution of the NLP subproblems, rederiving the corresponding background theory and main algorithm. In Section 3 we proceed similarly for GBD, also discussing the relationship between the inexact forms of OA and GBD. Section 4 describes a set of preliminary numerical experiments, reported to better understand some of the theoretical features encountered in our study of inexactness in MINLP.

2 Inexact outer approximation

2.1 Equivalence between perturbed and master problems for OA

OA relies on the fact that the original problem $P$ is equivalent to a MILP (master) problem formed by minimizing the least of the linearized forms of $f$ for indices in $T$, subject to the linearized forms of $g$ for indices in $S$ and $T$. When the NLP subproblems are solved inexactly, one has to consider perturbed forms of such cuts or linearized forms in order to keep an equivalence, this time to the perturbed problem $\bar P$. In turn, these inexact cuts lead to a different, perturbed MILP (master) problem, given by

$$ (\bar P_{OA}) \qquad \min_{x,y,\alpha} \; \alpha $$

$$ \text{s.t.} \quad \begin{pmatrix} \nabla_x f(x^j,y^j) \\ \nabla_y f(x^j,y^j) - r^j \end{pmatrix}^T \begin{pmatrix} x - x^j \\ y - y^j \end{pmatrix} + f(x^j,y^j) \le \alpha, \quad j \in T, $$

$$ \nabla g(x^j,y^j)^T \begin{pmatrix} x - x^j \\ y - y^j \end{pmatrix} + g(x^j,y^j) \le t^j, \quad j \in T, $$

$$ \begin{pmatrix} \nabla_x g_i(x^k,y^k) \\ \nabla_y g_i(x^k,y^k) - \frac{1}{1-w^k} v^k \end{pmatrix}^T \begin{pmatrix} x - x^k \\ y - y^k \end{pmatrix} + g_i(x^k,y^k) \le a^k_i, \quad i = 1,\dots,m, \; k \in S, $$

$$ x \in X \cap \mathbb{Z}^{n_d}, \;\; y \in Y, \;\; \alpha \in \mathbb{R}, $$

where, for $i = 1,\dots,m$,

$$ t^j_i = \begin{cases} s^j_i / \lambda^j_i, & \text{if } \lambda^j_i > 0, \\ 0, & \text{if } \lambda^j_i = 0, \end{cases} \qquad (7) $$

and

$$ a^k_i = \begin{cases} \dfrac{m z^k_i - w^k u^k}{m \mu^k_i}, & \text{if } \mu^k_i > 0, \\ 0, & \text{if } \mu^k_i = 0. \end{cases} \qquad (8) $$

Note that when $r$, $s$, $v$, $w$, and $z$ are zero, we obtain the well-known master problem in OA. Also, optionally, one could have added the cuts

$$ \nabla f(x^k,y^k)^T \begin{pmatrix} x - x^k \\ y - y^k \end{pmatrix} + f(x^k,y^k) \le \alpha, \quad k \in S, \qquad (9) $$

corresponding to linearizations of the objective function in the infeasible cases, as suggested in [4].
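When building the inexact cuts in code, only the right-hand-side shifts (7) and (8) differ from the exact case. A sketch of their assembly, vectorized over $i$ (all names are illustrative, not from the paper):

```python
import numpy as np

def cut_shifts(lam, s, mu, z, w, u, m):
    """Right-hand-side shifts t^j of (7) and a^k of (8) for the inexact cuts.

    lam, s: multipliers and complementarity residuals of a feasible NLP(x^j);
    mu, z, w, u: inexact data of an infeasible NLPF(x^k).
    Components with a zero multiplier receive a zero shift, as in the paper.
    """
    safe_lam = np.where(lam > 0, lam, 1.0)              # avoid division by zero
    t = np.where(lam > 0, s / safe_lam, 0.0)
    safe_mu = np.where(mu > 0, mu, 1.0)
    a = np.where(mu > 0, (m * z - w * u) / (m * safe_mu), 0.0)
    return t, a
```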

From the convexity and continuous differentiability of $f$ and $g$, we know that, for any $(x^l, y^l) \in \mathbb{R}^{n_d} \times \mathbb{R}^{n_c}$,

$$ f(x,y) \ge f(x^l,y^l) + \nabla f(x^l,y^l)^T \begin{pmatrix} x - x^l \\ y - y^l \end{pmatrix}, \qquad (10) $$

$$ g(x,y) \ge g(x^l,y^l) + \nabla g(x^l,y^l)^T \begin{pmatrix} x - x^l \\ y - y^l \end{pmatrix}. \qquad (11) $$

In addition, when $\bar y$ is a feasible point of $\mathrm{NLP}(\bar x)$, we obtain from (11) and $g(\bar x, \bar y) \le 0$ that

$$ 0 \ge g(x^l,y^l) + \nabla g(x^l,y^l)^T \begin{pmatrix} \bar x - x^l \\ \bar y - y^l \end{pmatrix}. \qquad (12) $$

The inexact OA method reported in this section, as well as the GBD method of the next section, requires the residuals of the inexact KKT conditions to satisfy the bounds given in the next two assumptions, in order to validate the equivalence between perturbed and master problems and to ensure finiteness of the respective algorithms. Essentially, these bounds will ensure that the above convexity properties still imply the inexact cuts at the remaining points. We first give the bounds on the residuals $r$ and $s$ for the feasible case.

Assumption 2.1 Given any $l, j \in T$, with $l \neq j$, assume that

$$ (r^l)^T (y^j - y^l) \;\ge\; \tau \left[ (\nabla f^l)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + f^l - f^j \right], \quad \text{for some } \tau \in [0,1], $$

and

$$ s^l_i \;\ge\; \sigma_i \lambda^l_i \left[ (\nabla g^l_i)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + g^l_i \right], \quad \text{for some } \sigma_i \in [0,1], \; i = 1,\dots,m. $$

Now, we state the bounds on the residuals $v$, $w$, and $z$ in the infeasible case.

Assumption 2.2 Given any $j \in T$ and any $k \in S$, and for all $i \in \{1,\dots,m\}$: if $\mu^k_i \neq 0$, assume that

$$ \frac{1}{1-w^k} (v^k)^T (y^j - y^k) + \frac{1}{\mu^k_i} \left( z^k_i - \frac{w^k u^k}{m} \right) \;\ge\; \beta_i \left[ (\nabla g^k_i)^T \begin{pmatrix} x^j - x^k \\ y^j - y^k \end{pmatrix} + g^k_i \right], $$

for some $\beta_i \in [0,1]$; otherwise, assume that

$$ \frac{1}{1-w^k} (v^k)^T (y^j - y^k) \;\ge\; \eta_i \left[ (\nabla g^k_i)^T \begin{pmatrix} x^j - x^k \\ y^j - y^k \end{pmatrix} + g^k_i \right], $$

for some $\eta_i \in [0,1]$.
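Since the bracketed gaps above are nonpositive by convexity, each inequality is easiest to satisfy with $\tau = \sigma_i = 1$ (respectively $\beta_i = \eta_i = 1$), so checking those extreme values decides whether admissible parameters exist at all. The a posteriori count of violated inequalities (the quantity C reported in Section 4) can thus be sketched as follows for Assumption 2.1, with grad_fl and grad_gl hypothetical full-gradient callbacks; Assumption 2.2 is handled analogously:

```python
import numpy as np

def count_violations_21(fl, fj, grad_fl, gl, grad_gl,
                        xl, yl, xj, yj, r_l, s_l, lam_l, tol=1e-8):
    """Number of inequalities of Assumption 2.1 violated for a pair (l, j).

    grad_fl: full gradient of f at (x^l, y^l), length n_d + n_c;
    grad_gl: m x (n_d + n_c) Jacobian of g at (x^l, y^l).
    The check uses tau = sigma_i = 1, the weakest admissible requirement.
    """
    d = np.concatenate([xj - xl, yj - yl])
    count = int(r_l @ (yj - yl) < (grad_fl @ d + fl - fj) - tol)
    count += int(np.sum(s_l < lam_l * (grad_gl @ d + gl) - tol))
    return count
```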

We are now in a position to state the equivalence between the original, perturbed MINLP problem and the MILP master problem $\bar P_{OA}$.

Theorem 2.1 Let $P$ be a convex MINLP problem and $\bar P$ be its perturbed problem as defined in the Introduction. Assume that $\bar P$ is feasible with a finite optimal value and that the residuals of the KKT conditions of the NLP subproblems satisfy Assumptions 2.1 and 2.2. Then $\bar P_{OA}$ and $\bar P$ have the same optimal value.

Proof. The proof follows closely the lines of the proof of [4, Theorem 1]. Since problem $\bar P$ has a finite optimal value, it follows that, for every $x \in X \cap \mathbb{Z}^{n_d}$, either problem $\mathrm{NLP}(x)$ is feasible with a finite optimal value or it is infeasible, that the sets $T$ and $S$ are well defined, and that the set $T$ is nonempty. Now, given any $x^l \in X \cap \mathbb{Z}^{n_d}$ with $l \in T \cup S$, let $\bar P_{OA}^l$ denote the problem in $\alpha$ and $y$ obtained from $\bar P_{OA}$ when $x$ is fixed to $x^l$. First we will prove that problem $\bar P_{OA}^k$ is infeasible for every $k \in S$.

Part I. Establishing the infeasibility of $\bar P_{OA}^k$ for $k \in S$.

In this case, problem $\mathrm{NLP}(x^k)$ is infeasible and $y^k$ is an approximated optimal solution of $\mathrm{NLPF}(x^k)$ with corresponding inexact nonnegative Lagrange multipliers $\mu^k$. When we set $x = x^k$, the corresponding constraints in $\bar P_{OA}$ result in

$$ \left( \nabla_y g_i(x^k,y^k) - \frac{1}{1-w^k} v^k \right)^T (y - y^k) + g_i(x^k,y^k) \le a^k_i, \qquad (13) $$

for $i = 1,\dots,m$. Multiplying the inequalities in (13) by the nonnegative multipliers $\mu^k_1,\dots,\mu^k_m$ and summing them up, one obtains

$$ \left( \sum_{i=1}^m \mu^k_i \nabla_y g_i(x^k,y^k) - v^k \right)^T (y - y^k) \;\le\; \sum_{i=1}^m \left( z^k_i - \mu^k_i g_i(x^k,y^k) \right) - w^k u^k. \qquad (14) $$

By using (3), one can see that the left hand side of the inequality in (14) is equal to 0. On the other hand, by using equation (5), the right hand side of the inequality in (14) results in $\sum_{i=1}^m (z^k_i - \mu^k_i g_i(x^k,y^k)) - w^k u^k = -(\sum_{i=1}^m \mu^k_i + w^k) u^k$, which is equal to $-u^k$ by (4). Since $\mathrm{NLP}(x^k)$ is infeasible, $u^k$ must be strictly positive. We have thus proved that the inequality (14) has no solution $y$.

This derivation implies that the minimum value of $\bar P_{OA}$ should be found as the minimum value of $\bar P_{OA}^j$ over all $x^j \in X \cap \mathbb{Z}^{n_d}$ with $j \in T$. We prove in the next two separate subparts that, for every $j \in T$, the optimal value $\bar\alpha^j$ of $\bar P_{OA}^j$ coincides with the approximated optimal value of $\mathrm{NLP}(x^j)$.

Part II. Establishing that $\bar P_{OA}^j$ has the same objective value as the perturbed $\mathrm{NLP}(x^j)$ for $j \in T$.

We will show next that $(y^j, f(x^j,y^j))$ is a feasible solution of $\bar P_{OA}^j$, and therefore that $f(x^j,y^j)$ is an upper bound on the optimal value $\bar\alpha^j$ of $\bar P_{OA}^j$.

Part II A. Establishing that $f(x^j,y^j)$ is an upper bound for the optimal value of $\bar P_{OA}^j$ for $j \in T$.

In this case, it is easy to see that $\bar P_{OA}^j$ contains all the constraints indexed by $l \in T$,

$$ \begin{pmatrix} \nabla_x f(x^l,y^l) \\ \nabla_y f(x^l,y^l) - r^l \end{pmatrix}^T \begin{pmatrix} x^j - x^l \\ y - y^l \end{pmatrix} + f(x^l,y^l) \le \alpha, \qquad (15) $$

$$ \nabla g(x^l,y^l)^T \begin{pmatrix} x^j - x^l \\ y - y^l \end{pmatrix} + g(x^l,y^l) \le t^l, \qquad (16) $$

where, for $i = 1,\dots,m$,

$$ t^l_i = \begin{cases} s^l_i / \lambda^l_i, & \text{if } \lambda^l_i > 0, \\ 0, & \text{if } \lambda^l_i = 0, \end{cases} $$

as well as all the constraints indexed by $k \in S$ and $i \in \{1,\dots,m\}$,

$$ \begin{pmatrix} \nabla_x g_i(x^k,y^k) \\ \nabla_y g_i(x^k,y^k) - \frac{1}{1-w^k} v^k \end{pmatrix}^T \begin{pmatrix} x^j - x^k \\ y - y^k \end{pmatrix} + g_i(x^k,y^k) \le a^k_i, \qquad (17) $$

where $a^k_i$ is given as in (8).

First take any $l \in T$ and assume that $y^l$ is an approximated optimal solution of $\mathrm{NLP}(x^l)$ with corresponding inexact nonnegative Lagrange multipliers $\lambda^l$. If $l = j$, it is easy to verify that $(y^j, f(x^j,y^j))$ satisfies (15) and (16). Assume then that $l \neq j$. From Assumption 2.1, we know that, for some $\tau \in [0,1]$,

$$ -(r^l)^T (y^j - y^l) \;\le\; -\tau \left[ (\nabla f^l)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + f^l - f^j \right]. $$

Thus,

$$ \left[ (\nabla f^l)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + f^l - f^j \right] - (r^l)^T (y^j - y^l) \;\le\; (1-\tau) \left[ (\nabla f^l)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + f^l - f^j \right] \;\le\; 0, $$

where the last inequality comes from $1 - \tau \ge 0$ and (10) with $(x,y) = (x^j,y^j)$. We then see that (15) is satisfied with $\alpha = f(x^j,y^j)$ and $y = y^j$.

Now, from Assumption 2.1, one has, for some $\sigma_i \in [0,1]$, $i = 1,\dots,m$,

$$ \lambda^l_i \left[ (\nabla g^l_i)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + g^l_i \right] - s^l_i \;\le\; (1-\sigma_i) \lambda^l_i \left[ (\nabla g^l_i)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + g^l_i \right] \;\le\; 0, $$

where the last inequality is justified by (12) and $\sigma_i \in [0,1]$. Thus,

$$ \lambda^l_i \left[ (\nabla g^l_i)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + g^l_i \right] \;\le\; s^l_i, \quad i = 1,\dots,m. \qquad (18) $$

If $\lambda^l_i$ is equal to 0, so is $t^l_i$ by its definition, and we see that $(y^j, f(x^j,y^j))$ satisfies the constraints (16) with $y = y^j$. If $\lambda^l_i \neq 0$, then (18) can be written as

$$ \nabla g_i(x^l,y^l)^T \begin{pmatrix} x^j - x^l \\ y^j - y^l \end{pmatrix} + g_i(x^l,y^l) \;\le\; \frac{s^l_i}{\lambda^l_i} = t^l_i, $$

which also shows that the constraints (16) hold with $y = y^j$.

Finally, we take any $k \in S$ and assume that $y^k$ is an approximated optimal solution of $\mathrm{NLPF}(x^k)$ with corresponding inexact Lagrange multipliers $\mu^k$. For every $i \in \{1,\dots,m\}$, if $\mu^k_i \neq 0$, from Assumption 2.2 we have, for some $\beta_i \in [0,1]$, that

$$ \frac{1}{1-w^k} (v^k)^T (y^j - y^k) + \frac{1}{\mu^k_i} z^k_i - \frac{w^k u^k}{m \mu^k_i} \;\ge\; \beta_i \left[ (\nabla g^k_i)^T \begin{pmatrix} x^j - x^k \\ y^j - y^k \end{pmatrix} + g^k_i \right], $$

i.e.,

$$ \frac{1}{1-w^k} (v^k)^T (y^j - y^k) + a^k_i \;\ge\; \beta_i \left[ (\nabla g^k_i)^T \begin{pmatrix} x^j - x^k \\ y^j - y^k \end{pmatrix} + g^k_i \right], $$

by the definition of $a^k_i$. Thus, using (12) as before, the constraints (17) are satisfied with $y = y^j$. When $\mu^k_i = 0$, it results that $a^k_i = 0$ by its definition and, also by Assumption 2.2, we have that, for some $\eta_i \in [0,1]$,

$$ \left[ (\nabla g^k_i)^T \begin{pmatrix} x^j - x^k \\ y^j - y^k \end{pmatrix} + g^k_i \right] - \frac{1}{1-w^k} (v^k)^T (y^j - y^k) \;\le\; (1-\eta_i) \left[ (\nabla g^k_i)^T \begin{pmatrix} x^j - x^k \\ y^j - y^k \end{pmatrix} + g^k_i \right] \;\le\; 0. $$

This also shows that the constraints (17) hold with $y = y^j$. We can therefore say that $(y^j, f(x^j,y^j))$ is a feasible point of $\bar P_{OA}^j$, and thus $\bar\alpha^j \le f(x^j,y^j)$. Next, we will prove that $f(x^j,y^j)$ is also a lower bound, i.e., that $\bar\alpha^j \ge f(x^j,y^j)$.

Part II B. Establishing that $f(x^j,y^j)$ is a lower bound for the optimal value of $\bar P_{OA}^j$ for $j \in T$.

Recall that $y^j$ is an approximated optimal solution of $\mathrm{NLP}(x^j)$ satisfying the inexact KKT conditions (1) and (2). By construction, any solution of $\bar P_{OA}^j$ has to satisfy the inexact outer approximation constraints

$$ \left( \nabla_y f(x^j,y^j) - r^j \right)^T (y - y^j) + f(x^j,y^j) \le \alpha, \qquad (19) $$

$$ \nabla_y g(x^j,y^j)^T (y - y^j) + g(x^j,y^j) \le t^j. \qquad (20) $$

Multiplying the inequalities (20) by the nonnegative multipliers $\lambda^j_1,\dots,\lambda^j_m$ and summing them together with (19), one obtains

$$ \left( \nabla_y f(x^j,y^j) - r^j \right)^T (y - y^j) + f(x^j,y^j) + \sum_{i=1}^m \lambda^j_i \left[ \nabla_y g_i(x^j,y^j)^T (y - y^j) + g_i(x^j,y^j) - t^j_i \right] \;\le\; \alpha. \qquad (21) $$

The left hand side of the inequality (21) can be rewritten as

$$ \left[ \nabla_y f(x^j,y^j) + \sum_{i=1}^m \lambda^j_i \nabla_y g_i(x^j,y^j) - r^j \right]^T (y - y^j) + \sum_{i=1}^m \left( \lambda^j_i g_i(x^j,y^j) - s^j_i \right) + f(x^j,y^j). $$

By using (1) and (2), this quantity is equal to $f(x^j,y^j)$, and it follows that inequality (21) is equivalent to $f(x^j,y^j) \le \alpha$.

In conclusion, for any $x^j \in X \cap \mathbb{Z}^{n_d}$ with $j \in T$, problems $\bar P_{OA}^j$ and the perturbed $\mathrm{NLP}(x^j)$ have the same optimal value. In other words, the MILP problem $\bar P_{OA}$ has the same optimal value as the perturbed problem $\bar P$ given by (6).

Since in the exact case all the KKT residuals are zero, it results from Theorem 2.1 what is well known for OA:

Corollary 2.1 Let $P$ be a convex MINLP problem. Assume that $P$ is feasible with a finite optimal value and that the residuals of the KKT conditions of the NLP subproblems are zero. Then $P_{OA}$ and $P$ have the same optimal value.

In the paper [4], the feasibility NLP subproblem is stated as

$$ (PF^k) \qquad \min_{y,u} \; \sum_{i=1}^m u_i \quad \text{s.t.} \quad g(x^k, y) \le u, \;\; u \ge 0, \;\; y \in Y, \;\; u \in \mathbb{R}^m. $$

Note that one could easily rederive a result similar to [4, Theorem 1] replacing their $PF^k$ by our $\mathrm{NLPF}(x^k)$. In fact, the argument needed here is essentially Part I of the proof of Theorem 2.1 with $v$, $w$, and $z$ set to zero. In their approach, the cuts (9) are included in the master problem, but one can also see that the proof of Theorem 2.1 remains true in this case (it would suffice to observe that (9) is satisfied trivially in the convex case when $y = y^j$ and $\alpha = f(x^j,y^j)$).

2.2 Inexact OA algorithm

One knows that the outer approximation algorithm terminates finitely in the convex case and when the optimal solutions of the NLP subproblems satisfy the first order KKT conditions (see [9]). In this section, we extend the outer approximation algorithm to the inexact solution of the NLP subproblems by incorporating the corresponding residuals in the cuts of the master problems.

As in the exact case, at each step of the inexact OA algorithm one tries to solve a subproblem $\mathrm{NLP}(x^p)$, where $x^p$ is chosen as a new discrete assignment. Two results can then occur: either $\mathrm{NLP}(x^p)$ is feasible and an approximated optimal solution $y^p$ can be given, or this subproblem is found infeasible and another NLP subproblem, $\mathrm{NLPF}(x^p)$, is solved, yielding an approximated optimal solution $y^p$. In the algorithm, the sets $T$ and $S$ defined in the Introduction will be replaced by

$$ T^p = \{ j : j \le p, \; x^j \in X \cap \mathbb{Z}^{n_d}, \; \mathrm{NLP}(x^j) \text{ is feasible and } y^j \text{ appr. solves } \mathrm{NLP}(x^j) \} $$

and

$$ S^p = \{ k : k \le p, \; x^k \in X \cap \mathbb{Z}^{n_d}, \; \mathrm{NLP}(x^k) \text{ is infeasible and } y^k \text{ appr. solves } \mathrm{NLPF}(x^k) \}. $$

In order to prevent any $x^j$, $j \in T^p$, from becoming the solution of the relaxed master problem to be solved at the $p$-th iteration, one needs to add the constraint

$$ \alpha < \mathrm{UBD}^p, $$

where $\mathrm{UBD}^p = \min_{j \in T^p} f(x^j, y^j)$. Then we define the following inexact relaxed MILP master problem:

$$ (\bar P_{OA}^p) \qquad \min \; \alpha $$

$$ \text{s.t.} \quad \alpha < \mathrm{UBD}^p, $$

$$ \begin{pmatrix} \nabla_x f(x^j,y^j) \\ \nabla_y f(x^j,y^j) - r^j \end{pmatrix}^T \begin{pmatrix} x - x^j \\ y - y^j \end{pmatrix} + f(x^j,y^j) \le \alpha, \quad j \in T^p, $$

$$ \nabla g(x^j,y^j)^T \begin{pmatrix} x - x^j \\ y - y^j \end{pmatrix} + g(x^j,y^j) \le t^j, \quad j \in T^p, $$

$$ \begin{pmatrix} \nabla_x g_i(x^k,y^k) \\ \nabla_y g_i(x^k,y^k) - \frac{1}{1-w^k} v^k \end{pmatrix}^T \begin{pmatrix} x - x^k \\ y - y^k \end{pmatrix} + g_i(x^k,y^k) \le a^k_i, \quad i = 1,\dots,m, \; k \in S^p, $$

$$ x \in X \cap \mathbb{Z}^{n_d}, \;\; y \in Y, \;\; \alpha \in \mathbb{R}, $$

where $t^j$ and $a^k_i$ were defined in (7) and (8), respectively. The presentation of the inexact OA algorithm (given next) and the proof of its finiteness in Theorem 2.2 follow the lines in [9].

Algorithm 2.1 (Inexact Outer Approximation)

Initialization: Let $x^0$ be given. Set $p = 0$, $T^{-1} = \emptyset$, $S^{-1} = \emptyset$, and $\mathrm{UBD} = +\infty$.

REPEAT

1. Inexactly solve the subproblem $\mathrm{NLP}(x^p)$, or the feasibility subproblem $\mathrm{NLPF}(x^p)$ provided $\mathrm{NLP}(x^p)$ is infeasible, and let $y^p$ be an approximated optimal solution. At the same time, obtain the corresponding inexact Lagrange multipliers $\lambda^p$ of $\mathrm{NLP}(x^p)$ (resp. $\mu^p$ of $\mathrm{NLPF}(x^p)$). Evaluate the residuals $r^p$ and $s^p$ of $\mathrm{NLP}(x^p)$ (resp. $v^p$, $w^p$, and $z^p$ of $\mathrm{NLPF}(x^p)$).

2. Linearize the objective function and constraints at $(x^p, y^p)$. Renew $T^p = T^{p-1} \cup \{p\}$ or $S^p = S^{p-1} \cup \{p\}$.

3. If $\mathrm{NLP}(x^p)$ is feasible and $f^p < \mathrm{UBD}$, then update the current best point by setting $\bar x = x^p$, $\bar y = y^p$, and $\mathrm{UBD} = f^p$.

4. Solve the relaxed master problem $(\bar P_{OA}^p)$, obtaining a new discrete assignment $x^{p+1}$ to be tested in the algorithm. Increment $p$ by one unit.

UNTIL $(\bar P_{OA}^p)$ is infeasible.

If termination occurs with $\mathrm{UBD} = +\infty$, then the algorithm visited every discrete assignment $x \in X \cap \mathbb{Z}^{n_d}$ but did not obtain a feasible point for the original MINLP problem $P$, or for its perturbed version $\bar P$. In this case, the MINLP is declared infeasible.
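The loop structure of Algorithm 2.1 is straightforward to transcribe. Below is a hedged Python skeleton in which solve_nlp, solve_nlpf, and solve_master are placeholders for an NLP solver run at a loose tolerance and a MILP solver for $(\bar P_{OA}^p)$; none of these names come from the paper, whose experiments used fmincon and the MILP code of [17]:

```python
import math

def inexact_oa(x0, solve_nlp, solve_nlpf, solve_master, max_iter=50):
    """Skeleton of Algorithm 2.1 (inexact outer approximation).

    solve_nlp(x)  -> (feasible, y, lam, r, s, fval)
    solve_nlpf(x) -> (y, mu, v, w, z, u)
    solve_master(T_cuts, S_cuts, ubd) -> next x, or None if the MILP
    master problem is infeasible.
    """
    ubd, best = math.inf, None
    T_cuts, S_cuts = [], []
    x = x0
    for p in range(max_iter):
        feasible, y, lam, r, s, fval = solve_nlp(x)   # step 1
        if feasible:
            T_cuts.append((x, y, lam, r, s))          # data for cuts, step 2
            if fval < ubd:                            # step 3: incumbent
                ubd, best = fval, (x, y)
        else:
            y, mu, v, w, z, u = solve_nlpf(x)         # feasibility subproblem
            S_cuts.append((x, y, mu, v, w, z, u))
        x = solve_master(T_cuts, S_cuts, ubd)         # step 4: relaxed master
        if x is None:                                 # master infeasible
            break
    return best, ubd                                  # ubd == inf: infeasible
```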

Next, we will show that the inexact OA algorithm also terminates in a finite number of steps.

Theorem 2.2 Let $P$ be a convex MINLP problem and $\bar P$ be its perturbed problem as defined in the Introduction. Assume that either $\bar P$ has a finite optimal value or is infeasible, and that the residuals of the KKT conditions of the NLP subproblems satisfy Assumptions 2.1 and 2.2. Then Algorithm 2.1 terminates in a finite number of steps at an optimal solution of $\bar P$ or with an indication that $\bar P$ is infeasible.

Proof. Since the set $X$ is bounded by assumption, finite termination of Algorithm 2.1 will be established by proving that no discrete assignment is generated twice by the algorithm. Let $q \le p$. If $q \in S^p$, it has been shown in Part I of the proof of Theorem 2.1 that the corresponding constraints in $(\bar P_{OA}^p)$, derived from the feasibility problem $\mathrm{NLPF}(x^q)$, cannot be satisfied, showing that $x^q$ cannot be feasible for $(\bar P_{OA}^p)$.

We will now show that $x^q$ cannot be feasible for $(\bar P_{OA}^p)$ when $q \in T^p$. For this purpose, let us assume that $x^q$ is feasible in $(\bar P_{OA}^p)$ and try to reach a contradiction. Let $y^q$ be an approximated optimal solution of $\mathrm{NLP}(x^q)$ satisfying the inexact KKT conditions, that is, there exist $\lambda^q \in \mathbb{R}^m_+$, $r^q \in \mathbb{R}^{n_c}$, and $s^q \in \mathbb{R}^m$ such that

$$ \nabla_y f^q + \sum_{i=1}^m \lambda^q_i \nabla_y g_i(x^q, y^q) = r^q, \qquad (22) $$

$$ \lambda^q_i \, g_i(x^q, y^q) = s^q_i, \quad i = 1,\dots,m. \qquad (23) $$

If $x^q$ were feasible for $(\bar P_{OA}^p)$, it would satisfy the following set of inequalities for some $y$ (recall that $x$ is fixed to $x^q$, so the $x$-components of the cuts vanish):

$$ \alpha^p < \mathrm{UBD}^p \le f^q, \qquad (24) $$

$$ \left( \nabla_y f^q - r^q \right)^T (y - y^q) + f^q \le \alpha^p, \qquad (25) $$

$$ (\nabla_y g^q)^T (y - y^q) + g^q \le t^q, \qquad (26) $$

where, for $i = 1,\dots,m$,

$$ t^q_i = \begin{cases} s^q_i / \lambda^q_i, & \text{if } \lambda^q_i > 0, \\ 0, & \text{if } \lambda^q_i = 0. \end{cases} $$

Multiplying the rows in (26) by the Lagrange multipliers $\lambda^q_i \ge 0$, $i = 1,\dots,m$, and adding (25), we obtain

$$ \left( \nabla_y f^q - r^q \right)^T (y - y^q) + f^q + \sum_{i=1}^m \lambda^q_i \nabla_y g_i(x^q,y^q)^T (y - y^q) + \sum_{i=1}^m \lambda^q_i g^q_i \;\le\; \alpha^p + \sum_{i=1}^m \lambda^q_i t^q_i, $$

which, by the definition of $t^q$, is equivalent to

$$ \left( \nabla_y f^q - r^q \right)^T (y - y^q) + f^q + \sum_{i=1}^m \lambda^q_i \nabla_y g_i(x^q,y^q)^T (y - y^q) + \sum_{i=1}^m \left( \lambda^q_i g^q_i - s^q_i \right) \;\le\; \alpha^p. $$

The left hand side of this inequality can be written as

$$ \left[ \nabla_y f^q - r^q + \sum_{i=1}^m \lambda^q_i \nabla_y g_i(x^q,y^q) \right]^T (y - y^q) + \sum_{i=1}^m \left( \lambda^q_i g^q_i - s^q_i \right) + f^q. $$

Using (22) and (23), this is equal to $f^q$, and therefore we obtain the inequality $f^q \le \alpha^p$, which contradicts (24).

The rest of the proof is exactly as in [9, Theorem 2], but we repeat it here for completeness and for possible changes in notation. Finally, we will show that Algorithm 2.1 always terminates at a solution of $\bar P$ or with an indication that $\bar P$ is infeasible (which occurs when $\mathrm{UBD} = +\infty$ at exit). If $\bar P$ is feasible, then let $(x^*, y^*)$ be an optimal solution of $\bar P$ with optimal value $f^*$. Without loss of generality, we will not distinguish between $(x^*, y^*)$ and any other optimal solution with the same objective value $f^*$. Note that, from Theorem 2.1, $(x^*, y^*, f^*)$ is also an optimal solution of $\bar P_{OA}$. Now assume that the algorithm terminates indicating a nonoptimal point $(\hat x, \hat y)$ with $\hat f > f^*$. In such a situation, the previous relaxation of the master problem $\bar P_{OA}$ after adding the constraints at the point $(\hat x, \hat y, \hat f)$, called $(\bar P_{OA}^p)$, is infeasible, causing the above mentioned termination. We will get a contradiction by showing that $(x^*, y^*, f^*)$ is feasible for $(\bar P_{OA}^p)$. First, by the assumption that $\mathrm{UBD} = \hat f > f^*$, the first constraint $\alpha = f^* < \mathrm{UBD}$ of $(\bar P_{OA}^p)$ holds. Secondly, since $(x^*, y^*, f^*)$ is an optimal solution of $\bar P_{OA}$, it must be feasible for all the other constraints of $(\bar P_{OA}^p)$. Therefore, the algorithm could not have terminated at $(\hat x, \hat y)$ with $\mathrm{UBD} = \hat f$.

3 Inexact generalized Benders decomposition

3.1 Equivalence between perturbed and master problems for GBD

In the generalized Benders decomposition (GBD), the MILP master problem involves only the discrete variables. When considering the inexact case, the master problem of GBD is the following:

$$ (\bar P_{GBD}) \qquad \min_{x,\alpha} \; \alpha $$

$$ \text{s.t.} \quad f(x^j,y^j) + \nabla_x f(x^j,y^j)^T (x - x^j) + \sum_{i=1}^m \lambda^j_i \nabla_x g_i(x^j,y^j)^T (x - x^j) \le \alpha, \quad j \in T, $$

$$ \sum_{i=1}^m \mu^k_i \left[ g_i(x^k,y^k) + \nabla_x g_i(x^k,y^k)^T (x - x^k) \right] + w^k u^k - \sum_{i=1}^m z^k_i \le 0, \quad k \in S, $$

$$ x \in X \cap \mathbb{Z}^{n_d}, \;\; \alpha \in \mathbb{R}. $$

One can easily recognize the classical form of the (exact) GBD master problem when $w^k = 0$ and $z^k = 0$. Moreover, as we show in the Appendix, this MILP can also be derived in the inexact case from a perturbed duality representation of the original, perturbed problem. A proof similar to the one of exact GBD and of exact and inexact OA (Theorem 2.1) allows us to establish the desired equivalence between the original, perturbed MINLP problem and the MILP master problem $\bar P_{GBD}$.
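In implementation terms, each feasible subproblem contributes to $\bar P_{GBD}$ a single linear cut in $(x, \alpha)$. A sketch of assembling its coefficients from the inexact NLP output (the gradient arguments are hypothetical callbacks, not names from the paper):

```python
import numpy as np

def gbd_optimality_cut(fj, grad_x_fj, grad_x_gj, xj, lam):
    """Coefficients (c, b) of the inexact GBD optimality cut c @ x + b <= alpha.

    grad_x_fj: gradient of f in x at (x^j, y^j), length n_d;
    grad_x_gj: m x n_d Jacobian of g in x at (x^j, y^j);
    lam: inexact multipliers of NLP(x^j).
    """
    c = grad_x_fj + grad_x_gj.T @ lam  # slope in the discrete variables
    b = fj - c @ xj                    # intercept
    return c, b
```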

Theorem 3.1 Let $P$ be a convex MINLP problem and $\bar P$ be its perturbed problem as defined in the Introduction. Assume that $\bar P$ is feasible with a finite optimal value and that the residuals of the KKT conditions of the NLP subproblems satisfy Assumptions 2.1 and 2.2. Then $\bar P_{GBD}$ and $\bar P$ have the same optimal value.

Proof. Given any $x^l \in X \cap \mathbb{Z}^{n_d}$ with $l \in T \cup S$, let $\bar P_{GBD}^l$ denote the problem in $\alpha$ obtained from $\bar P_{GBD}$ when $x$ is fixed to $x^l$. First we will prove that problem $\bar P_{GBD}^k$ is infeasible for every $k \in S$. When we set $x = x^k$ in the corresponding constraint of $\bar P_{GBD}$, one obtains

$$ \sum_{i=1}^m \mu^k_i g_i(x^k,y^k) + w^k u^k - \sum_{i=1}^m z^k_i \le 0. $$

From (4) and (5), it results that $u^k \le 0$, but one knows that $u^k$ is strictly positive when $\mathrm{NLP}(x^k)$ is infeasible.

Next, we will prove that, for each $x^j \in X \cap \mathbb{Z}^{n_d}$ with $j \in T$, $\bar P_{GBD}^j$ has the same optimal value as the perturbed $\mathrm{NLP}(x^j)$. First, we will prove that the following constraints of $\bar P_{GBD}^j$,

$$ f(x^l,y^l) + \nabla_x f(x^l,y^l)^T (x^j - x^l) + \sum_{i=1}^m \lambda^l_i \nabla_x g_i(x^l,y^l)^T (x^j - x^l) \le \alpha, \quad l \in T, \qquad (27) $$

$$ \sum_{i=1}^m \mu^k_i \left[ g_i(x^k,y^k) + \nabla_x g_i(x^k,y^k)^T (x^j - x^k) \right] + w^k u^k - \sum_{i=1}^m z^k_i \le 0, \quad k \in S, \qquad (28) $$

are satisfied with $\alpha = f(x^j,y^j)$. Under Assumptions 2.1 and 2.2, we know from the proof of Theorem 2.1 (Part II A) that the following hold: (15) with $y = y^j$ and $\alpha = f(x^j,y^j)$, (16) with $y = y^j$, and (17) with $y = y^j$.

When $l \in T$, multiplying the inequalities (16) with $y = y^j$ by the nonnegative multipliers $\lambda^l_1,\dots,\lambda^l_m$ and summing them together with (15) (taken with $y = y^j$ and $\alpha = f(x^j,y^j)$), one obtains

$$ f(x^l,y^l) + \nabla_x f(x^l,y^l)^T (x^j - x^l) + \sum_{i=1}^m \lambda^l_i \nabla_x g_i(x^l,y^l)^T (x^j - x^l) \;\le\; f(x^j,y^j) - \left[ \nabla_y f(x^l,y^l) + \sum_{i=1}^m \lambda^l_i \nabla_y g_i(x^l,y^l) - r^l \right]^T (y^j - y^l) - \sum_{i=1}^m \lambda^l_i g_i(x^l,y^l) + \sum_{i=1}^m \lambda^l_i t^l_i. $$

The right hand side is equal to $f(x^j,y^j)$ by the definitions of $r^l$, $s^l$, and $t^l$, showing that (27) holds with $\alpha = f(x^j,y^j)$.

When $k \in S$, multiplying the inequalities in (17) with $y = y^j$ by the nonnegative multipliers $\mu^k_1,\dots,\mu^k_m$ and summing them up, one obtains, using (3) and (4),

$$ \sum_{i=1}^m \mu^k_i \nabla_x g_i(x^k,y^k)^T (x^j - x^k) + \sum_{i=1}^m \mu^k_i g_i(x^k,y^k) \;\le\; \sum_{i=1}^m \mu^k_i a^k_i, $$

which, by the definition of $a^k$, is the same as (28). Thus, $\alpha = f(x^j,y^j)$ is a feasible point of $\bar P_{GBD}^j$, and therefore $f(x^j,y^j)$ is an upper bound on the optimal value $\bar\alpha^j$ of $\bar P_{GBD}^j$. To show that it is also a lower bound, i.e., that $\bar\alpha^j \ge f(x^j,y^j)$, note that, from (27) with $l = j$, $\bar P_{GBD}^j$ contains the constraint $f(x^j,y^j) \le \alpha$.

We have thus proved that, for any $x^j \in X \cap \mathbb{Z}^{n_d}$ with $j \in T$, problems $\bar P_{GBD}^j$ and the perturbed $\mathrm{NLP}(x^j)$ have the same optimal value, which concludes the proof.

When all the KKT residuals are zero, we obtain as a corollary the known equivalence result in GBD:

Corollary 3.1 Let $P$ be a convex MINLP problem. Assume that $P$ is feasible with a finite optimal value and that the residuals of the KKT conditions of the NLP subproblems are zero. Then $P_{GBD}$ and $P$ have the same optimal value.

Remark 3.1 It is well known that the constraints of the GBD master problem can be derived from the corresponding ones of the OA master problem in the convex, exact case (see [15]). The same happens naturally in the inexact case. In fact, from the proof of Theorem 3.1 above, we can see that the constraints in $\bar P_{OA}$ indexed by $j \in T$ imply the corresponding ones in $\bar P_{GBD}$; one can easily see the same for the remaining constraints in $\bar P_{OA}$. Thus, one can also say in the inexact case that the lower bounds produced iteratively by the OA algorithm are stronger than the ones provided by the corresponding GBD algorithm (given next).

3.2 Inexact GBD algorithm

As we know for exact GBD, it is possible to derive an algorithm for the inexact case, terminating finitely, by solving at each iteration a relaxed MILP formed by the cuts collected so far. The definitions of $\mathrm{UBD}^p$, $T^p$, and $S^p$ are the same as those in Section 2.2. The relaxed MILP to be solved at each iteration is thus given by

$$ (\bar P_{GBD}^p) \qquad \min \; \alpha $$

$$ \text{s.t.} \quad \alpha < \mathrm{UBD}^p, $$

$$ f(x^j,y^j) + \nabla_x f(x^j,y^j)^T (x - x^j) + \sum_{i=1}^m \lambda^j_i \nabla_x g_i(x^j,y^j)^T (x - x^j) \le \alpha, \quad j \in T^p, $$

$$ \sum_{i=1}^m \mu^k_i \left[ g_i(x^k,y^k) + \nabla_x g_i(x^k,y^k)^T (x - x^k) \right] + w^k u^k - \sum_{i=1}^m z^k_i \le 0, \quad k \in S^p, $$

$$ x \in X \cap \mathbb{Z}^{n_d}, \;\; \alpha \in \mathbb{R}. $$

The inexact GBD algorithm is given next (and follows the presentation in [9] for OA).

Algorithm 3.1 (Inexact GBD)

Initialization: Let $x^0$ be given. Set $p = 0$, $T^{-1} = \emptyset$, $S^{-1} = \emptyset$, and $\mathrm{UBD} = +\infty$.

REPEAT

1. Inexactly solve the subproblem $\mathrm{NLP}(x^p)$, or the feasibility subproblem $\mathrm{NLPF}(x^p)$ provided $\mathrm{NLP}(x^p)$ is infeasible, and let $y^p$ be an approximated optimal solution. At the same time, obtain the corresponding inexact Lagrange multipliers $\lambda^p$ of $\mathrm{NLP}(x^p)$ (resp. $\mu^p$ of $\mathrm{NLPF}(x^p)$). Evaluate the residuals $r^p$ and $s^p$ of $\mathrm{NLP}(x^p)$ (resp. $v^p$, $w^p$, and $z^p$ of $\mathrm{NLPF}(x^p)$).

2. Linearize the objective function and constraints at $x^p$. Renew $T^p = T^{p-1} \cup \{p\}$ or $S^p = S^{p-1} \cup \{p\}$.

3. If $\mathrm{NLP}(x^p)$ is feasible and $f^p < \mathrm{UBD}$, then update the current best point by setting $\bar x = x^p$, $\bar y = y^p$, and $\mathrm{UBD} = f^p$.

4. Solve the relaxed master problem $(\bar P_{GBD}^p)$, obtaining a new discrete assignment $x^{p+1}$ to be tested in the algorithm. Increment $p$ by one unit.

UNTIL $(\bar P_{GBD}^p)$ is infeasible.

Similarly as in Theorem 2.2 for OA, one can establish that the above inexact GBD algorithm terminates in a finite number of steps.

Theorem 3.2 Let $P$ be a convex MINLP problem and $\bar P$ be its perturbed problem as defined in the Introduction. Assume that either $\bar P$ has a finite optimal value or is infeasible, and that the residuals of the KKT conditions of the NLP subproblems satisfy Assumptions 2.1 and 2.2. Then Algorithm 3.1 terminates in a finite number of steps at an optimal solution of $\bar P$ or with an indication that $\bar P$ is infeasible.

4 Numerical experiments

We will illustrate some of the practical features of the inexact OA and GBD algorithms by reporting numerical results on three test problems: Example 1, taken from [8, test problem no. 1], has 3 discrete variables and 3 continuous variables; Example 2, taken from [8, test problem no. 2], has 5 discrete variables and 6 continuous variables; Example 3, taken from [8, test problem no. 3], has 8 discrete variables and 9 continuous variables. All three examples are convex and linear in the discrete variables, and consist of simplified versions of process synthesis problems. In the third example we found a point better than the one reported in [8], with discrete part (0, 1, 0, 1, 0, 1, 0, 1).

The implementation and testing of Algorithms 2.1 and 3.1 was done in MATLAB (R2010b). We used fmincon (from MATLAB) to solve the NLP subproblems and ip1 [17] to solve the MILP problems arising in both algorithms. The linear equality constraints possibly present in the original problems were kept in the MILP master problems.

For both methods, we report results for two variants, depending on the form of the cuts. In the first variant, the subproblems are solved inexactly (with tolerances varying from $10^{-6}$ to $10^{-1}$) but the cuts are the exact ones. When the tolerance is set to $10^{-6}$ we are essentially running exact OA and GBD. The second variant also incorporates the inexact solution of the NLP subproblems (again with tolerances varying from $10^{-6}$ to $10^{-1}$), but the cuts are now the inexact ones.

In the tables of results we report the number N of iterations taken by Algorithms 2.1 and 3.1. We also report, in the tables corresponding to the second variant, the number C of constraint inequalities of Assumptions 2.1 and 2.2 that were violated by more than a small threshold. The stopping criteria of both algorithms consisted of the corresponding master program being infeasible, the number of iterations exceeding 50, or the solution of the MILP master program coinciding with a previous one (these third cases were marked with NPC, standing for no proper convergence). We note that in all the NPC cases found, the repeated integer solution of the MILP master problem was indeed the solution of the original MINLP.
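The NPC test amounts to remembering the integer assignments already produced by the master problem. A hypothetical fragment of the stopping logic used in such an implementation:

```python
def termination_reason(x_new, visited, p, max_iter=50):
    """Stopping tests of Section 4: master infeasibility, iteration limit,
    or a repeated MILP solution (flagged NPC)."""
    if x_new is None:
        return "master infeasible"
    if p > max_iter:
        return "iteration limit"
    key = tuple(x_new)
    if key in visited:
        return "NPC"          # repeated integer solution of the master
    visited.add(key)
    return None               # continue iterating
```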

Table 1: Application of OA (inexact solution of NLP subproblems and exact cuts) to Example 1. The table reports the number N of iterations taken, for six initial points and tolerances $10^{-6}$, $10^{-5}$, $10^{-4}$, $10^{-3}$, $10^{-2}$, $10^{-1}$; every initial point led to an NPC termination at the loosest tolerance. NPC stands for no proper convergence (repeated solution of the MILP master problem).

Table 2: Application of OA (inexact solution of NLP subproblems and inexact cuts) to Example 1. The table reports the number N of iterations taken as well as the number C of inequalities violated in Assumptions 2.1 and 2.2, for the same six initial points and tolerances; the NPC terminations at the loosest tolerance occurred with C = 0. The maximum number for C is $13t(t-1)/2 + 12st$, where $t = |T|$, $s = |S|$, and $s + t = N$. NPC stands for no proper convergence (repeated solution of the MILP master problem).

4.1 Results for the inexact OA method

Tables 1-6 summarize the application of inexact OA (Algorithm 2.1), in the variant with inexact solution of the NLP subproblems and exact cuts and in the variant with inexact solution of the NLP subproblems and inexact cuts, to Examples 1-3. Comparing Tables 1 and 2, one can see little difference between using exact or inexact cuts for the first, smaller example. However, looking at Tables 3 and 4 for the second example and at Tables 5 and 6 for the third example, one can see, for larger values of the tolerances, that the inexact case with exact cuts has a greater tendency for improper convergence (i.e., the MILP is incapable of either providing a new integer solution or being rendered infeasible), while the variant incorporating the inexactness in the cuts does not. We also observe that inexact OA converged even neglecting the imposition of the inequalities of Assumptions 2.1 and 2.2.

Table 3: Application of OA (inexact solution of NLP subproblems and exact cuts) to Example 2. The table reports the number N of iterations taken, for twelve initial points; every initial point led to NPC terminations at the three loosest tolerances. NPC stands for no proper convergence (repeated solution of the MILP master problem).

Table 4: Application of OA (inexact solution of NLP subproblems and inexact cuts) to Example 2. The table reports the number N of iterations taken as well as the number C of inequalities violated in Assumptions 2.1 and 2.2, for the same twelve initial points; no NPC terminations occurred. The maximum number for C is $27t(t-1)/2 + 26st$, where $t = |T|$, $s = |S|$, and $s + t = N$.

Table 5: Application of OA (inexact solution of NLP subproblems and exact cuts) to Example 3. The table reports the number N of iterations taken, for twenty-two initial points; every initial point led to NPC terminations at the three loosest tolerances. NPC stands for no proper convergence (repeated solution of the MILP master problem).

Table 6: Application of OA (inexact solution of NLP subproblems and inexact cuts) to Example 3. The table reports the number N of iterations taken as well as the number C of inequalities violated in Assumptions 2.1 and 2.2, for the same twenty-two initial points; no NPC terminations occurred. The maximum number for C is $21t(t-1)/2 + 20st$, where $t = |T|$, $s = |S|$, and $s + t = N$.

Table 7: Application of GBD (inexact solution of NLP subproblems and exact cuts) to Example 1. The table reports the number N of iterations taken, for the same six initial points and tolerances as in Table 1; no NPC terminations occurred.

Table 8: Application of GBD (inexact solution of NLP subproblems and inexact cuts) to Example 1. The table reports the number N of iterations taken as well as the number C of inequalities violated in Assumptions 2.1 and 2.2. The maximum number for C is as in Table 2.

4.2 Results for the inexact GBD method

Tables 7-12 summarize the application of inexact GBD (Algorithm 3.1), in the variant with inexact solution of the NLP subproblems and exact cuts and in the variant with inexact solution of the NLP subproblems and inexact cuts, to Examples 1-3. One can observe that there is little difference between the two variants, since the exact cuts in the first variant already incorporate inexact information coming from the inexact Lagrange multipliers. One also observes that inexact GBD takes more iterations than inexact OA in these examples, which, according to Remark 3.1, is expected, since inexact GBD yields weaker lower bounds and hence generally requires more major iterations to converge than inexact OA. The number of inequalities of Assumptions 2.1 and 2.2 violated in inexact GBD is also higher than the one in inexact OA.

Table 9: Application of GBD (inexact solution of NLP subproblems and exact cuts) to Example 2. The table reports the number N of iterations taken, for the same twelve initial points as in Table 3; no NPC terminations occurred.

Table 10: Application of GBD (inexact solution of NLP subproblems and inexact cuts) to Example 2. The table reports the number N of iterations taken as well as the number C of inequalities violated in Assumptions 2.1 and 2.2. The maximum number for C is as in Table 4.

Table 11: Application of GBD (inexact solution of NLP subproblems and exact cuts) to Example 3. The table reports the number N of iterations taken, for the same twenty-two initial points as in Table 5; no NPC terminations occurred.

Table 12: Application of GBD (inexact solution of NLP subproblems and inexact cuts) to Example 3. The table reports the number N of iterations taken as well as the number C of inequalities violated in Assumptions 2.1 and 2.2. The maximum number for C is as in Table 6.

5 Conclusions and final remarks

In this paper we have attempted to gain a better understanding of the effect of inexactness when solving NLP subproblems in two well known decomposition techniques for Mixed Integer Nonlinear Programming (MINLP): the outer approximation (OA) and the generalized Benders decomposition (GBD).

As pointed out to us by I. E. Grossmann, solving the NLP subproblems inexactly in OA positions this approach somewhere in between exact OA and the extended cutting plane method [18].

Although all the conditions required on the residuals of the inexact KKT conditions can be imposed, one can see from Assumptions 2.1 and 2.2 that the complete satisfaction of all those inequalities would require repeated NLP subproblem solution for all the previously addressed discrete assignments. Such a requirement would then undermine the practical purpose of saving computational effort aimed at by the NLP subproblem inexactness. In our preliminary numerical tests we disregarded the conditions of Assumptions 2.1 and 2.2 and verified, after terminating each run of inexact OA or GBD, how many of them were violated. The results indicated that proper convergence can be achieved without imposing Assumptions 2.1 and 2.2, and that the number of violated inequalities was relatively low. The results also seem to indicate that the cuts in OA and GBD must be changed accordingly when the corresponding NLP subproblems are solved inexactly. Testing these inexact approaches on a wider test set of larger problems is out of the scope of this paper, although it seems a necessary step to further validate these indications.

Our study was performed under the assumption of convexity of the functions involved. Moreover, we also assumed that the approximated optimal solutions of the NLP subproblems were feasible in these subproblems and that the corresponding inexact Lagrange multipliers were nonnegative. Relaxing these assumptions introduces another layer of difficulty but certainly deserves proper attention in the future.

A Appendix

A.1 Inexact KKT conditions and perturbed problems

As we said in the Introduction of the paper, the point $\bar y$ satisfying the inexact KKT conditions (1)-(2) of the subproblem $\mathrm{NLP}(\bar x)$ can be interpreted as a solution of a perturbed NLP subproblem, which has the form

$$ (\text{perturbed } \mathrm{NLP}(\bar x)) \qquad \min_y \; f(\bar x, y) - \bar r^T (y - \bar y) \quad \text{s.t.} \quad g(\bar x, y) - \bar t \le 0, \;\; y \in Y, $$

where $\bar t$ is given by (7). The data of this perturbed subproblem depends, however, on the approximated optimal solution $\bar y$ and on the inexact Lagrange multipliers $\bar\lambda$.

Similarly, the point $y^k$ satisfying the inexact KKT conditions (3)-(5) of the subproblem $\mathrm{NLPF}(x^k)$ can be interpreted as a solution of the following perturbed NLP subproblem

$$ (\text{perturbed } \mathrm{NLPF}(x^k)) \qquad \min_{y,u} \; u - w^k (u - u^k) - (v^k)^T (y - y^k) \quad \text{s.t.} \quad g_i(x^k, y) - u - c^k_i \le 0, \; i = 1,\dots,m, \;\; y \in Y, \;\; u \in \mathbb{R}, $$

where, for $i = 1,\dots,m$,

$$ c^k_i = \begin{cases} z^k_i / \mu^k_i, & \text{if } \mu^k_i > 0, \\ 0, & \text{if } \mu^k_i = 0. \end{cases} $$
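To verify the first interpretation, one can write the exact KKT conditions of the perturbed $\mathrm{NLP}(\bar x)$ at $(\bar y, \bar\lambda)$ and watch them collapse to (1)-(2); a short check of our own (not reproduced from the paper), using the convention $\bar\lambda_i \bar t_i = \bar s_i$, which holds both when $\bar\lambda_i > 0$ and, by (2), when $\bar\lambda_i = 0$:

```latex
% Stationarity of the perturbed NLP(\bar{x}) at (\bar{y},\bar{\lambda}):
\nabla_y\!\left[f(\bar{x},y)-\bar{r}^{T}(y-\bar{y})\right]_{y=\bar{y}}
 +\sum_{i=1}^{m}\bar{\lambda}_i\nabla_y g_i(\bar{x},\bar{y})
 =\nabla_y f(\bar{x},\bar{y})
 +\sum_{i=1}^{m}\bar{\lambda}_i\nabla_y g_i(\bar{x},\bar{y})-\bar{r}=0
 \;\Longleftrightarrow\;\text{(1)}.
% Complementarity for the shifted constraints g(\bar{x},y)-\bar{t}\le 0:
\bar{\lambda}_i\left(g_i(\bar{x},\bar{y})-\bar{t}_i\right)
 =\bar{\lambda}_i g_i(\bar{x},\bar{y})-\bar{s}_i=0
 \;\Longleftrightarrow\;\text{(2)}.
```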

A.2 Derivation of the master problem for inexact GBD

As in the exact case, the MILP master problem $\bar P_{GBD}$ can be derived from a more general master problem, closer to the original duality motivation of GBD:

$$ (\bar P_{GBD1}) \qquad \min_{x,\alpha} \; \alpha $$

$$ \text{s.t.} \quad \inf_{y \in Y} \left\{ f(x,y) + (\lambda^j)^T g(x,y) - (r^j)^T (y - y^j) \right\} - \sum_{i=1}^m s^j_i \le \alpha, \quad j \in T, $$

$$ \inf_{y \in Y} \left\{ (\mu^k)^T g(x,y) - (v^k)^T (y - y^k) \right\} + w^k u^k - \sum_{i=1}^m z^k_i \le 0, \quad k \in S, $$

$$ x \in X \cap \mathbb{Z}^{n_d}, \;\; \alpha \in \mathbb{R}. $$

In fact, we will show next that the constraints in problem $\bar P_{GBD1}$ imply those of $\bar P_{GBD}$.

When $l \in T$, one knows that $\mathrm{NLP}(x^l)$ has an approximated optimal solution $y^l$ satisfying the corresponding inexact KKT conditions with inexact Lagrange multipliers $\lambda^l$. By the convexity of $f$ and $g$ (see (10) and (11)),

$$ f(x,y) + (\lambda^l)^T g(x,y) - (r^l)^T (y - y^l) \;\ge\; f(x^l,y^l) + \nabla_x f(x^l,y^l)^T (x - x^l) + \nabla_y f(x^l,y^l)^T (y - y^l) + \sum_{i=1}^m \lambda^l_i \left[ g_i(x^l,y^l) + \nabla_x g_i(x^l,y^l)^T (x - x^l) + \nabla_y g_i(x^l,y^l)^T (y - y^l) \right] - (r^l)^T (y - y^l). $$

Thus, using the inexact KKT condition (1),

$$ \inf_{y \in Y} \left\{ f(x,y) + (\lambda^l)^T g(x,y) - (r^l)^T (y - y^l) \right\} \;\ge\; f(x^l,y^l) + \nabla_x f(x^l,y^l)^T (x - x^l) + \sum_{i=1}^m \lambda^l_i \nabla_x g_i(x^l,y^l)^T (x - x^l) + \sum_{i=1}^m \lambda^l_i g_i(x^l,y^l) $$

$$ = f(x^l,y^l) + \nabla_x f(x^l,y^l)^T (x - x^l) + \sum_{i=1}^m \lambda^l_i \nabla_x g_i(x^l,y^l)^T (x - x^l) + \sum_{i=1}^m s^l_i. $$

The last equality holds due to (2), and it shows that the first set of constraints in $\bar P_{GBD1}$ implies the corresponding constraints in $\bar P_{GBD}$.

When $l \in S$, we know that $\mathrm{NLPF}(x^l)$ has an approximated optimal solution $y^l$ satisfying the corresponding inexact KKT conditions with inexact Lagrange multipliers $\mu^l$. Also by the convexity of $g$ (see (11)), we have that

$$ (\mu^l)^T g(x,y) - (v^l)^T (y - y^l) \;\ge\; (\mu^l)^T \left[ g(x^l,y^l) + \nabla g(x^l,y^l)^T \begin{pmatrix} x - x^l \\ y - y^l \end{pmatrix} \right] - (v^l)^T (y - y^l) = \sum_{i=1}^m \mu^l_i \left[ g_i(x^l,y^l) + \nabla_x g_i(x^l,y^l)^T (x - x^l) \right] + \left( \sum_{i=1}^m \mu^l_i \nabla_y g_i(x^l,y^l) - v^l \right)^T (y - y^l). $$

Then, using the inexact KKT condition (3),

$$ 0 \;\ge\; \inf_{y \in Y} \left\{ (\mu^l)^T g(x,y) - (v^l)^T (y - y^l) \right\} + w^l u^l - \sum_{i=1}^m z^l_i \;\ge\; \sum_{i=1}^m \mu^l_i \left[ g_i(x^l,y^l) + \nabla_x g_i(x^l,y^l)^T (x - x^l) \right] + w^l u^l - \sum_{i=1}^m z^l_i. $$

In summary, we have the following property.

Property A.1 Given some sets $T$ and $S$, the lower bound predicted by the master problem $\bar P_{GBD1}$ is greater than or equal to the one predicted by the master problem $\bar P_{GBD}$.

References

[1] A. Ahlatçıoğlu and M. Guignard. Convex hull relaxation (CHR) for convex and nonconvex MINLP problems with linear constraints.
[2] P. Belotti, J. Lee, L. Liberti, F. Margot, and A. Wächter. Branching and bounds tightening techniques for non-convex MINLP. Optim. Methods Softw., 24:597-634, 2009.
[3] J. F. Benders. Partitioning procedures for solving mixed-variables programming problems. Numerische Mathematik, 4:238-252, 1962.
[4] P. Bonami, L. T. Biegler, A. R. Conn, G. Cornuéjols, I. E. Grossmann, C. D. Laird, J. Lee, A. Lodi, F. Margot, N. Sawaya, and A. Wächter. An algorithmic framework for convex mixed integer nonlinear programs. Discrete Optim., 5:186-204, 2008.
[5] P. Bonami, G. Cornuéjols, A. Lodi, and F. Margot. A feasibility pump for mixed integer nonlinear programs. Math. Program., 119:331-352, 2009.
[6] P. Bonami and J. P. M. Gonçalves. Heuristics for convex mixed integer nonlinear programs. Comput. Optim. Appl., pages 1-19.
[7] I. Castillo, J. Westerlund, S. Emet, and T. Westerlund. Optimization of block layout design problems with unequal areas: A comparison of MILP and MINLP optimization methods. Computers and Chemical Engineering, 30:54-69, 2005.
[8] M. A. Duran and I. E. Grossmann. An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Math. Program., 36:307-339, 1986.
[9] R. Fletcher and S. Leyffer. Solving mixed integer nonlinear programs by outer approximation. Math. Program., 66:327-349, 1994.
[10] O. E. Flippo and A. H. G. Rinnooy Kan. Decomposition in general mathematical programming. Math. Program., 60, 1993.
[11] A. M. Geoffrion. Generalized Benders decomposition. J. Optim. Theory Appl., 10:237-260, 1972.
[12] I. E. Grossmann. Review of nonlinear mixed-integer and disjunctive programming techniques. Optim. Eng., 3:227-252, 2002.
[13] J. E. Kelley. The cutting-plane method for solving convex programs. J. Soc. Indust. Appl. Math., 8:703-712, 1960.
[14] G. Nannicini, P. Belotti, J. Lee, J. Linderoth, F. Margot, and A. Wächter. A probing algorithm for MINLP with failure prediction by SVM. Technical Report RC25103, IBM Research, Computer Science, 2011.
[15] I. Quesada and I. E. Grossmann. An LP/NLP based branch and bound algorithm for convex MINLP optimization problems. Computers and Chemical Engineering, 16:937-947, 1992.
[16] M. Tawarmalani and N. V. Sahinidis. Global optimization of mixed-integer nonlinear programs: A theoretical and computational study. Math. Program., 99:563-591, 2004.
[17] S. A. Tawfik.
[18] T. Westerlund and F. Pettersson. An extended cutting plane method for solving convex MINLP problems. Computers and Chemical Engineering, 19:S131-S136, 1995.


Nonlinear Equations. Chapter The Bisection Method

Nonlinear Equations. Chapter The Bisection Method Chapter 6 Nonlinear Equations Given a nonlinear function f(), a value r such that f(r) = 0, is called a root or a zero of f() For eample, for f() = e 016064, Fig?? gives the set of points satisfying y

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

5.6 Penalty method and augmented Lagrangian method

5.6 Penalty method and augmented Lagrangian method 5.6 Penalty method and augmented Lagrangian method Consider a generic NLP problem min f (x) s.t. c i (x) 0 i I c i (x) = 0 i E (1) x R n where f and the c i s are of class C 1 or C 2, and I and E are the

More information

Online generation via offline selection - Low dimensional linear cuts from QP SDP relaxation -

Online generation via offline selection - Low dimensional linear cuts from QP SDP relaxation - Online generation via offline selection - Low dimensional linear cuts from QP SDP relaxation - Radu Baltean-Lugojan Ruth Misener Computational Optimisation Group Department of Computing Pierre Bonami Andrea

More information

Optimization in Process Systems Engineering

Optimization in Process Systems Engineering Optimization in Process Systems Engineering M.Sc. Jan Kronqvist Process Design & Systems Engineering Laboratory Faculty of Science and Engineering Åbo Akademi University Most optimization problems in production

More information

Lecture 23: Conditional Gradient Method

Lecture 23: Conditional Gradient Method 10-725/36-725: Conve Optimization Spring 2015 Lecture 23: Conditional Gradient Method Lecturer: Ryan Tibshirani Scribes: Shichao Yang,Diyi Yang,Zhanpeng Fang Note: LaTeX template courtesy of UC Berkeley

More information

Decomposition Techniques in Mathematical Programming

Decomposition Techniques in Mathematical Programming Antonio J. Conejo Enrique Castillo Roberto Minguez Raquel Garcia-Bertrand Decomposition Techniques in Mathematical Programming Engineering and Science Applications Springer Contents Part I Motivation and

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course otes for EE7C (Spring 018): Conve Optimization and Approimation Instructor: Moritz Hardt Email: hardt+ee7c@berkeley.edu Graduate Instructor: Ma Simchowitz Email: msimchow+ee7c@berkeley.edu October

More information

A New Penalty-SQP Method

A New Penalty-SQP Method Background and Motivation Illustration of Numerical Results Final Remarks Frank E. Curtis Informs Annual Meeting, October 2008 Background and Motivation Illustration of Numerical Results Final Remarks

More information

Benders' Method Paul A Jensen

Benders' Method Paul A Jensen Benders' Method Paul A Jensen The mixed integer programming model has some variables, x, identified as real variables and some variables, y, identified as integer variables. Except for the integrality

More information

Solving Bilevel Mixed Integer Program by Reformulations and Decomposition

Solving Bilevel Mixed Integer Program by Reformulations and Decomposition Solving Bilevel Mixed Integer Program by Reformulations and Decomposition June, 2014 Abstract In this paper, we study bilevel mixed integer programming (MIP) problem and present a novel computing scheme

More information

An Exact Algorithm for a Resource Allocation Problem in Mobile Wireless Communications

An Exact Algorithm for a Resource Allocation Problem in Mobile Wireless Communications An Exact Algorithm for a Resource Allocation Problem in Mobile Wireless Communications Adam N. Letchford Qiang Ni Zhaoyu Zhong To appear in Computational Optimization & Applications Abstract We consider

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

Complexity of gradient descent for multiobjective optimization

Complexity of gradient descent for multiobjective optimization Complexity of gradient descent for multiobjective optimization J. Fliege A. I. F. Vaz L. N. Vicente July 18, 2018 Abstract A number of first-order methods have been proposed for smooth multiobjective optimization

More information

A Smoothing SQP Framework for a Class of Composite L q Minimization over Polyhedron

A Smoothing SQP Framework for a Class of Composite L q Minimization over Polyhedron Noname manuscript No. (will be inserted by the editor) A Smoothing SQP Framework for a Class of Composite L q Minimization over Polyhedron Ya-Feng Liu Shiqian Ma Yu-Hong Dai Shuzhong Zhang Received: date

More information

Integer Programming ISE 418. Lecture 8. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 8. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 8 Dr. Ted Ralphs ISE 418 Lecture 8 1 Reading for This Lecture Wolsey Chapter 2 Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Duality for Mixed-Integer

More information

Constraint qualifications for nonlinear programming

Constraint qualifications for nonlinear programming Constraint qualifications for nonlinear programming Consider the standard nonlinear program min f (x) s.t. g i (x) 0 i = 1,..., m, h j (x) = 0 1 = 1,..., p, (NLP) with continuously differentiable functions

More information

A Review and Comparison of Solvers for Convex MINLP

A Review and Comparison of Solvers for Convex MINLP A Review and Comparison of Solvers for Convex MINLP Jan Kronqvist a, David E. Bernal b, Andreas Lundell c, and Ignacio E. Grossmann b a Process Design and Systems Engineering, Åbo Akademi University, Åbo,

More information

Worst Case Complexity of Direct Search

Worst Case Complexity of Direct Search Worst Case Complexity of Direct Search L. N. Vicente May 3, 200 Abstract In this paper we prove that direct search of directional type shares the worst case complexity bound of steepest descent when sufficient

More information

On the Properties of Positive Spanning Sets and Positive Bases

On the Properties of Positive Spanning Sets and Positive Bases Noname manuscript No. (will be inserted by the editor) On the Properties of Positive Spanning Sets and Positive Bases Rommel G. Regis Received: May 30, 2015 / Accepted: date Abstract The concepts of positive

More information

Introduction to Machine Learning Spring 2018 Note Duality. 1.1 Primal and Dual Problem

Introduction to Machine Learning Spring 2018 Note Duality. 1.1 Primal and Dual Problem CS 189 Introduction to Machine Learning Spring 2018 Note 22 1 Duality As we have seen in our discussion of kernels, ridge regression can be viewed in two ways: (1) an optimization problem over the weights

More information

The Kuhn-Tucker and Envelope Theorems

The Kuhn-Tucker and Envelope Theorems The Kuhn-Tucker and Envelope Theorems Peter Ireland EC720.01 - Math for Economists Boston College, Department of Economics Fall 2010 The Kuhn-Tucker and envelope theorems can be used to characterize the

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Numerical optimization

Numerical optimization Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal

More information

On fast trust region methods for quadratic models with linear constraints. M.J.D. Powell

On fast trust region methods for quadratic models with linear constraints. M.J.D. Powell DAMTP 2014/NA02 On fast trust region methods for quadratic models with linear constraints M.J.D. Powell Abstract: Quadratic models Q k (x), x R n, of the objective function F (x), x R n, are used by many

More information

Projection, Inference, and Consistency

Projection, Inference, and Consistency Projection, Inference, and Consistency John Hooker Carnegie Mellon University IJCAI 2016, New York City A high-level presentation. Don t worry about the details. 2 Projection as a Unifying Concept Projection

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

A Scheme for Integrated Optimization

A Scheme for Integrated Optimization A Scheme for Integrated Optimization John Hooker ZIB November 2009 Slide 1 Outline Overview of integrated methods A taste of the underlying theory Eamples, with results from SIMPL Proposal for SCIP/SIMPL

More information

Intro to Nonlinear Optimization

Intro to Nonlinear Optimization Intro to Nonlinear Optimization We now rela the proportionality and additivity assumptions of LP What are the challenges of nonlinear programs NLP s? Objectives and constraints can use any function: ma

More information

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems 1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of

More information

Hot-Starting NLP Solvers

Hot-Starting NLP Solvers Hot-Starting NLP Solvers Andreas Wächter Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu 204 Mixed Integer Programming Workshop Ohio

More information

10-725/36-725: Convex Optimization Spring Lecture 21: April 6

10-725/36-725: Convex Optimization Spring Lecture 21: April 6 10-725/36-725: Conve Optimization Spring 2015 Lecturer: Ryan Tibshirani Lecture 21: April 6 Scribes: Chiqun Zhang, Hanqi Cheng, Waleed Ammar Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

Computational Optimization. Constrained Optimization Part 2

Computational Optimization. Constrained Optimization Part 2 Computational Optimization Constrained Optimization Part Optimality Conditions Unconstrained Case X* is global min Conve f X* is local min SOSC f ( *) = SONC Easiest Problem Linear equality constraints

More information

Introduction to Mathematical Programming IE406. Lecture 21. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 21. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 21 Dr. Ted Ralphs IE406 Lecture 21 1 Reading for This Lecture Bertsimas Sections 10.2, 10.3, 11.1, 11.2 IE406 Lecture 21 2 Branch and Bound Branch

More information

Worst Case Complexity of Direct Search

Worst Case Complexity of Direct Search Worst Case Complexity of Direct Search L. N. Vicente October 25, 2012 Abstract In this paper we prove that the broad class of direct-search methods of directional type based on imposing sufficient decrease

More information

Using quadratic convex reformulation to tighten the convex relaxation of a quadratic program with complementarity constraints

Using quadratic convex reformulation to tighten the convex relaxation of a quadratic program with complementarity constraints Noname manuscript No. (will be inserted by the editor) Using quadratic conve reformulation to tighten the conve relaation of a quadratic program with complementarity constraints Lijie Bai John E. Mitchell

More information

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09 Numerical Optimization 1 Working Horse in Computer Vision Variational Methods Shape Analysis Machine Learning Markov Random Fields Geometry Common denominator: optimization problems 2 Overview of Methods

More information

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q)

On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) On Semicontinuity of Convex-valued Multifunctions and Cesari s Property (Q) Andreas Löhne May 2, 2005 (last update: November 22, 2005) Abstract We investigate two types of semicontinuity for set-valued

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

NOTICE WARNING CONCERNING COPYRIGHT RESTRICTIONS: The copyright law of the United States (title 17, U.S. Code) governs the making of photocopies or

NOTICE WARNING CONCERNING COPYRIGHT RESTRICTIONS: The copyright law of the United States (title 17, U.S. Code) governs the making of photocopies or NOTICE WARNING CONCERNING COPYRIGHT RESTRICTIONS: The copyright law of the United States (title 17, U.S. Code) governs the making of photocopies or other reproductions of copyrighted material. Any copying

More information

Implementation of an Interior Point Multidimensional Filter Line Search Method for Constrained Optimization

Implementation of an Interior Point Multidimensional Filter Line Search Method for Constrained Optimization Proceedings of the 5th WSEAS Int. Conf. on System Science and Simulation in Engineering, Tenerife, Canary Islands, Spain, December 16-18, 2006 391 Implementation of an Interior Point Multidimensional Filter

More information

Robust-to-Dynamics Linear Programming

Robust-to-Dynamics Linear Programming Robust-to-Dynamics Linear Programg Amir Ali Ahmad and Oktay Günlük Abstract We consider a class of robust optimization problems that we call robust-to-dynamics optimization (RDO) The input to an RDO problem

More information

Disjunctive Cuts for Mixed Integer Nonlinear Programming Problems

Disjunctive Cuts for Mixed Integer Nonlinear Programming Problems Disjunctive Cuts for Mixed Integer Nonlinear Programming Problems Pierre Bonami, Jeff Linderoth, Andrea Lodi December 29, 2012 Abstract We survey recent progress in applying disjunctive programming theory

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Bregman Divergence and Mirror Descent

Bregman Divergence and Mirror Descent Bregman Divergence and Mirror Descent Bregman Divergence Motivation Generalize squared Euclidean distance to a class of distances that all share similar properties Lots of applications in machine learning,

More information

Critical Reading of Optimization Methods for Logical Inference [1]

Critical Reading of Optimization Methods for Logical Inference [1] Critical Reading of Optimization Methods for Logical Inference [1] Undergraduate Research Internship Department of Management Sciences Fall 2007 Supervisor: Dr. Miguel Anjos UNIVERSITY OF WATERLOO Rajesh

More information

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence

Chapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence Chapter 6 Nonlinear Equations 6. The Problem of Nonlinear Root-finding In this module we consider the problem of using numerical techniques to find the roots of nonlinear equations, f () =. Initially we

More information

A Basic Course in Real Analysis Prof. P. D. Srivastava Department of Mathematics Indian Institute of Technology, Kharagpur

A Basic Course in Real Analysis Prof. P. D. Srivastava Department of Mathematics Indian Institute of Technology, Kharagpur A Basic Course in Real Analysis Prof. P. D. Srivastava Department of Mathematics Indian Institute of Technology, Kharagpur Lecture - 36 Application of MVT, Darbou Theorem, L Hospital Rule (Refer Slide

More information

Computer Sciences Department

Computer Sciences Department Computer Sciences Department Strong Branching Inequalities for Convex Mixed Integer Nonlinear Programs Mustafa Kilinc Jeff Linderoth James Luedtke Andrew Miller Technical Report #696 September 20 Strong

More information

A Review and Comparison of Solvers for Convex MINLP

A Review and Comparison of Solvers for Convex MINLP A Review and Comparison of Solvers for Convex MINLP Jan Kronqvist a, David E. Bernal b, Andreas Lundell c, and Ignacio E. Grossmann b a Process Design and Systems Engineering, Åbo Akademi University, Turku,

More information

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS

A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS A FRITZ JOHN APPROACH TO FIRST ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH EQUILIBRIUM CONSTRAINTS Michael L. Flegel and Christian Kanzow University of Würzburg Institute of Applied Mathematics

More information

Effective Separation of Disjunctive Cuts for Convex Mixed Integer Nonlinear Programs

Effective Separation of Disjunctive Cuts for Convex Mixed Integer Nonlinear Programs Effective Separation of Disjunctive Cuts for Convex Mixed Integer Nonlinear Programs Mustafa Kılınç Jeff Linderoth James Luedtke Department of Industrial and Systems Engineering University of Wisconsin-Madison

More information

An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse

An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse An Adaptive Partition-based Approach for Solving Two-stage Stochastic Programs with Fixed Recourse Yongjia Song, James Luedtke Virginia Commonwealth University, Richmond, VA, ysong3@vcu.edu University

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Department of Social Systems and Management. Discussion Paper Series

Department of Social Systems and Management. Discussion Paper Series Department of Social Systems and Management Discussion Paper Series No. 1115 Outer Approimation Method for the Minimum Maimal Flow Problem Yoshitsugu Yamamoto and Daisuke Zenke April 2005 UNIVERSITY OF

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

Strong-Branching Inequalities for Convex Mixed Integer Nonlinear Programs

Strong-Branching Inequalities for Convex Mixed Integer Nonlinear Programs Computational Optimization and Applications manuscript No. (will be inserted by the editor) Strong-Branching Inequalities for Convex Mixed Integer Nonlinear Programs Mustafa Kılınç Jeff Linderoth James

More information

Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms: A Comparative Study

Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms: A Comparative Study International Journal of Mathematics And Its Applications Vol.2 No.4 (2014), pp.47-56. ISSN: 2347-1557(online) Determination of Feasible Directions by Successive Quadratic Programming and Zoutendijk Algorithms:

More information

Projection, Consistency, and George Boole

Projection, Consistency, and George Boole Projection, Consistency, and George Boole John Hooker Carnegie Mellon University CP 2015, Cork, Ireland Projection as a Unifying Concept Projection underlies both optimization and logical inference. Optimization

More information

A convergence result for an Outer Approximation Scheme

A convergence result for an Outer Approximation Scheme A convergence result for an Outer Approximation Scheme R. S. Burachik Engenharia de Sistemas e Computação, COPPE-UFRJ, CP 68511, Rio de Janeiro, RJ, CEP 21941-972, Brazil regi@cos.ufrj.br J. O. Lopes Departamento

More information

In applications, we encounter many constrained optimization problems. Examples Basis pursuit: exact sparse recovery problem

In applications, we encounter many constrained optimization problems. Examples Basis pursuit: exact sparse recovery problem 1 Conve Analsis Main references: Vandenberghe UCLA): EECS236C - Optimiation methods for large scale sstems, http://www.seas.ucla.edu/ vandenbe/ee236c.html Parikh and Bod, Proimal algorithms, slides and

More information

The Kuhn-Tucker and Envelope Theorems

The Kuhn-Tucker and Envelope Theorems The Kuhn-Tucker and Envelope Theorems Peter Ireland ECON 77200 - Math for Economists Boston College, Department of Economics Fall 207 The Kuhn-Tucker and envelope theorems can be used to characterize the

More information