Issues in the Development of Global Optimization Algorithms for Bilevel Programs with a Nonconvex Inner Program


Alexander Mitsos and Paul I. Barton*
Department of Chemical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA
mitsos@mit.edu and pib@mit.edu, tel: , fax:

February 13, 2006. Updated: October 16, 2006

Abstract

The co-operative formulation of a nonlinear bilevel program involving nonconvex functions is considered and two equivalent reformulations to simpler programs are presented. It is shown that previous literature proposals for the global solution of such programs are not generally valid for nonconvex inner programs and several consequences of nonconvexity in the inner program are identified. In particular, issues with the computation of lower and upper bounds as well as with the choice of branching variables in a branch-and-bound framework are analyzed. This analysis lays the foundation for the development of rigorous algorithms and some algorithmic expectations are established.

Key-words: Bilevel program; nonconvex; global optimization; GSIP; branch-and-bound

AMS Classification: 65K05 (mathematical programming algorithms), 90C26 (nonconvex programming), 90C33 (complementarity programming), 90C31 (sensitivity, stability, parametric optimization)

1 Introduction

Bilevel optimization has applications in a variety of economic and engineering problems and several optimization formulations can be transformed to bilevel programs. There are many contributions in the literature and the reader is directed to other publications for a review of applications and algorithms [10, 24, 4, 14, 26, 17, 18]. Here, inequality constrained nonlinear bilevel programs of the form

min_{x,y} f(x,y)
s.t. g(x,y) ≤ 0
     y ∈ arg min_z h(x,z) s.t. p(x,z) ≤ 0, z ∈ Y     (1)
     x ∈ X, y ∈ Y

are considered. The co-operative (optimistic) formulation [14] is assumed: if for a given x the inner program has multiple optimal solutions, the outer optimizer can choose among them. As has been proposed in the past, e.g., [11, 2], dummy variables (z instead of y) are used in the inner program since this clarifies some issues and facilitates discussion. A minimal assumption for an algorithmic development based on branch-and-bound (B&B) is compact host sets X ⊂ R^{n_x}, Y ⊂ R^{n_y} and continuous functions f : X × Y → R, g : X × Y → R^{n_g}, h : X × Y → R and p : X × Y → R^{n_p}. For some of the methods discussed here further assumptions, such as twice continuously differentiable functions or constraint qualifications, are needed. Note that equality constraints have only been omitted for the sake of simplicity and would not alter anything in the discussion of this paper.
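The co-operative coupling in (1) can be made concrete with a small brute-force sketch on finite grids. Everything below (the grids and the particular choices of f, g, h, p) is an illustrative assumption, not a method from the paper: for each x the inner optimal value is computed first, then the outer objective is minimized over the inner-optimal points.

```python
# Illustrative brute-force sketch of the co-operative (optimistic)
# formulation (1) on finite grids.  The functions and grids are
# assumptions for illustration only.
def solve_bilevel_grid(f, g, h, p, X, Y, tol=1e-9):
    """For each x: compute the inner optimal value w(x), collect the
    inner optimal set H(x), and let the outer problem choose among H(x)."""
    best = None
    for x in X:
        feas = [z for z in Y if p(x, z) <= 0]
        if not feas:
            continue
        w = min(h(x, z) for z in feas)                 # inner optimal value w(x)
        H = [z for z in feas if h(x, z) <= w + tol]    # inner optimal set H(x)
        for y in H:                                    # optimistic choice of y
            if g(x, y) <= 0 and (best is None or f(x, y) < best[0]):
                best = (f(x, y), x, y)
    return best

X = [i / 10 for i in range(-10, 11)]                   # grid on [-1, 1]
Y = list(X)
sol = solve_bilevel_grid(
    f=lambda x, y: (x - y) ** 2,   # outer objective
    g=lambda x, y: -1.0,           # outer constraint (inactive)
    h=lambda x, z: -z * z,         # nonconvex (concave) inner objective
    p=lambda x, z: -1.0,           # inner constraint (inactive)
    X=X, Y=Y)
```

In this toy instance the inner optimal set is {−1, 1} for every x, so the outer optimizer can match y to x at x = ±1 and obtain the value 0; a pessimistic formulation would behave differently.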

For a fixed x we denote by

min_z h(x,z)
s.t. p(x,z) ≤ 0     (2)
     z ∈ Y

the inner program. The parametric optimal solution value of this program is denoted w(x) and the set of optimal points H(x). We focus on nonconvex inner programs, i.e., the case when some of the functions p(x,·), h(x,·) are not partially convex with respect to z for all possible values of x. Most of the literature is devoted to special cases of (1), in particular linear functions, e.g., [9]. Typically, the inner program is assumed convex satisfying a constraint qualification and is replaced by the equivalent KKT first-order optimality conditions; the resulting single-level mathematical program with equilibrium constraints (MPEC) violates the Mangasarian-Fromovitz constraint qualification due to the complementarity slackness constraints [29]. Fortuny-Amat and McCarl [15] reformulate the complementarity slackness conditions using integer variables. Stein and Still [29] consider generalized semi-infinite programs (GSIP) and use a regularization of the complementarity slackness constraints. We will discuss how this strategy could be adapted to general bilevel programs (1). Very few proposals have been made for bilevel programs with nonconvex functions. Clark and Westerberg [12] introduced the notion of a local solution where the inner and outer programs are solved to local optimality. Gümüş and Floudas [17] proposed a B&B procedure to obtain the global solution of bilevel programs; we will show here that this procedure is not valid in general when the inner program (2) is nonconvex. For the case of GSIP an algorithm based on B&B and interval extensions has been recently proposed by Lemonidis and Barton [20]. We first discuss what is a reasonable expectation for the solution of nonconvex bilevel programs based on state-of-the-art notions in global optimization. To analyze some of the consequences of nonconvexity in the inner program we first introduce two equivalent reformulations of (1), as a simpler bilevel program and as a GSIP. We then discuss how to treat the inner variables in a B&B framework and identify issues with literature proposals regarding lower and upper bounds. Finally, for KKT based approaches we discuss the need for bounds on the KKT multipliers.

2 Optimality Requirement

An interesting question is how exactly to interpret the requirement y ∈ arg min_{z ∈ Y, p(x,z) ≤ 0} h(x,z), which by definition means that any feasible y is a global minimum of the inner program for a fixed value of x. Clark and Westerberg [12] proposed the notion of local solutions to the bilevel program (1), requiring that a furnished solution pair (x̄, ȳ) satisfies local optimality in the inner program. This is in general a very strong relaxation. Note though that in single-level optimization finitely terminating algorithms guaranteeing global optimality exist only for special cases, e.g., linear programs. For nonconvex nonlinear programs (NLP)

f* = min_x f(x) s.t. g(x) ≤ 0, x ∈ X     (3)

state-of-the-art algorithms in general only terminate finitely with an ε_f-optimal solution. That is, given an ε_f > 0, a feasible point x̄ is furnished together with a guarantee that its objective value is not more than ε_f larger than the optimal objective value, f(x̄) ≤ f* + ε_f. Moreover, to our best knowledge, no algorithm can provide guarantees for the distance of the points furnished from optimal solutions (‖x̄ − x*‖, where x* is some optimal solution), which would allow a direct estimate of the arg min requirement. Finally, global and local solvers for (3) typically allow an ε_g violation of the constraints [25].
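The gap between value-based and point-based guarantees can be checked numerically. The two-well function below is an illustrative assumption (not from the paper): an ε_f-optimality certificate admits points in both wells, even though the true minimizer lies only in one of them, so nothing is learned about ‖x̄ − x*‖.

```python
# Illustrative sketch: eps_f-optimality certifies a value, not a point.
def eps_optimal_points(f, X, eps_f):
    """Return the optimal value on the grid X and all eps_f-optimal points."""
    f_star = min(f(x) for x in X)
    return f_star, [x for x in X if f(x) <= f_star + eps_f]

# Two wells of almost equal depth, far apart; the global minimum is near +2.
f = lambda x: min((x - 2.0) ** 2 - 1.001, (x + 2.0) ** 2 - 1.0)
X = [i / 100 for i in range(-400, 401)]          # grid on [-4, 4]
f_star, cands = eps_optimal_points(f, X, eps_f=0.01)
# cands contains points near both x = -2 and x = +2: the certificate
# f(xbar) <= f* + eps_f gives no bound on the distance to the minimizer.
```

This is exactly why the optimality requirement for bilevel programs is phrased below in terms of objective values (ε_h, ε_f) rather than in terms of distances to optimal solutions.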

We therefore propose that for bilevel programs (1) with nonconvex inner programs, it is only plausible to expect a finitely terminating algorithm to provide guarantees that the furnished pair (x̄, ȳ) satisfies the constraints of the inner and outer program, ε_h-optimality in the inner program, and ε_f-optimality in the outer program:

g(x̄, ȳ) ≤ 0
p(x̄, ȳ) ≤ 0     (4)
h(x̄, ȳ) ≤ w(x̄) + ε_h
f(x̄, ȳ) ≤ f* + ε_f,

where w(x̄) is the optimal solution value of the inner program for the given x̄ and f* is the optimal solution value of the bilevel program (1). Note that as a consequence of ε_h-optimality in the inner program, f* ≤ f(x̄, ȳ) need not hold (as Example 2.1 shows). Example 2.2 illustrates that the same behavior can be observed in nonconvex (single-level) NLPs when constraint violation is allowed.

Example 2.1 (Consequences of ε-optimality in the Inner Program). Consider the bilevel program

min_y y
s.t. y ∈ arg min_z −z²     (5)
     z ∈ [−1 + δ, 1],

for some small δ > 0. The inner program gives z = 1 and therefore the unique optimal solution is ȳ = 1 with an optimal objective value of 1. Assume that the optimality tolerance of the inner program ε_h is such that ε_h ≥ 2δ − δ². Based on definition (4), all y ∈ [−1 + δ, −√(1 − ε_h)] and y ∈ [√(1 − ε_h), 1] are admissible and as a consequence an objective value of −1 + δ can be obtained for the bilevel program (5).

Example 2.2 (Consequences of ε-feasibility in Nonconvex Nonlinear Programs). Consider the nonconvex nonlinear program

min_x x
s.t. 1 − x² ≤ 0     (6)
     x ∈ [−1 + δ, 2],

for some small δ > 0. There are infinitely many feasible points x ∈ [1, 2] and the problem satisfies the linear/concave constraint qualification [6, p. 322]. The unique optimal solution is x* = 1 with an optimal objective value of 1. Assume that the feasibility tolerance of the nonlinear constraint ε_g is such that ε_g ≥ 2δ − δ². All x ∈ [−1 + δ, −√(1 − ε_g)] and x ∈ [√(1 − ε_g), 2] are admissible and as a consequence an objective value of −1 + δ can be obtained for (6).

3 Reformulations

One of the major difficulties associated with the solution of (1) is that the two optimization programs communicate with each other through a solution set H(x), possibly infinite. To facilitate the following discussion, two equivalent optimization formulations are introduced that communicate with each other through a solution value. These formulations have been proposed in the past by other authors [13, 14, 3] and are also similar to the difference function discussed by Amouzegar [2], where convexity is assumed and the y variables do not participate in the outer constraints. By the introduction of an additional variable w and the use of the dummy variables z, the bilevel program (1) is equivalent [13, 14] to

min_{x,y,w} f(x,y)
s.t. g(x,y) ≤ 0
     p(x,y) ≤ 0
     h(x,y) − w ≤ 0     (7)
     x ∈ X, y ∈ Y, w ∈ R
     w = min_z h(x,z) s.t. p(x,z) ≤ 0, z ∈ Y.

What is achieved with the introduction of the extra variable w is that the outer and inner program are coupled by a much simpler requirement, namely the optimal solution value of an optimization program (min) as opposed to the set of optimal solutions (arg min). In principle (7) could be solved using multi-parametric optimization as a subproblem. As a first step, a global multi-parametric optimization algorithm would be used for the solution of the inner program for all x ∈ X, dividing X into regions where the minimum w(x) is a known (smooth) function. The second step would be a global solution of the outer program for each of the optimality regions. This procedure is not suggested as a solution strategy since it would be overly computationally intensive. Note that application of this solution strategy to the original bilevel program (1) would require a parametric optimization algorithm that furnishes all optimal solutions as a function of the parameter x, and to our best knowledge this is not possible at present. The simplified bilevel program (7) allows the following observation: a relaxation of the inner program results in a restriction of the overall program and vice versa. This observation also holds for other programs with a bilevel interpretation such as GSIPs, see [21], but does not hold for the original bilevel program (1), because an alteration of the inner program can result in an alteration of its set of optimal solutions and as such in the formulation of an unrelated optimization problem. We will show though that since relaxing the inner program in (7) results in an underestimation of w, the constraint h(x,y) − w ≤ 0 becomes infeasible. Indeed, consider an x̄ and the corresponding optimal objective value of the inner program (2), w̄. By the definition of optimality we have h(x̄, y) ≥ w̄ for all y feasible in the inner program, i.e., y ∈ Y s.t. p(x̄, y) ≤ 0. As a consequence no y exists that can satisfy all the constraints of (7) exactly if w is underestimated. If ε_h-optimality of the inner program is acceptable and the magnitude of the underestimation is less than ε_h, it is possible to obtain (approximately) feasible points of (7). The bilevel programs (1) and (7) are also equivalent to the following GSIP [3]:

min_{x,y} f(x,y)
s.t. g(x,y) ≤ 0
     p(x,y) ≤ 0     (8)
     x ∈ X, y ∈ Y
     h(x,y) ≤ h(x,z), ∀ z ∈ Ȳ(x)
     Ȳ(x) = {z ∈ Y : p(x,z) ≤ 0}.

Note that this reformulation does not contradict the observation by Stein and Still [28] that under certain assumptions GSIP problems can be viewed as a special case of bilevel programs. Stein and Still compare a GSIP with variables x to a bilevel program with variables x and y, whereas the bilevel program considered here (1) and the GSIP (8) have the same set of variables x, y. Moreover, the GSIP (8) does not necessarily satisfy the assumptions made by Stein and Still [28]. Note also that the GSIP (8) does not contain any Slater points as defined in Lemonidis and Barton [21] unless ε_h-optimality of the inner program is allowed. In either of the two reformulated programs our proposal of ε_h-optimality can be interpreted as ε_h-feasibility of the coupling constraint. As mentioned above, NLP solvers typically allow such a violation of ordinary nonlinear constraints, and this motivates again the proposal of ε-optimality (4).
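The infeasibility caused by underestimating w can be checked on a grid. The inner objective below is an illustrative assumption (not from the paper); the computation shows that no z satisfies the coupling constraint exactly once w is underestimated, while an ε_h tolerance exceeding the underestimation gap restores feasibility.

```python
# Sketch of the coupling constraint h(x, z) <= w in reformulation (7),
# evaluated on a grid.  The inner objective h is an illustrative choice.
import math

Yg = [i / 100 for i in range(-100, 101)]        # grid on [-1, 1]
h = lambda x, z: math.cos(3.0 * z) + x * z      # nonconvex inner objective

def w(x):
    """Inner optimal value w(x) on the grid."""
    return min(h(x, z) for z in Yg)

xbar = 0.3
w_true = w(xbar)
w_under = w_true - 0.05                          # underestimate, e.g. from a relaxation
# By optimality, h(xbar, z) >= w_true > w_under for every z: the coupling
# constraint h <= w_under cannot be satisfied exactly.
infeasible_exact = all(h(xbar, z) > w_under for z in Yg)
# With eps_h larger than the underestimation gap, feasibility is recovered.
eps_h = 0.06
approx_feasible = [z for z in Yg if h(xbar, z) <= w_under + eps_h]
```

This mirrors the argument in the text: exact coupling with an underestimated w is infeasible, while ε_h-feasibility of the coupling constraint is attainable whenever the underestimation is smaller than ε_h.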

4 Branching on the y Variables

As mentioned in the introduction, the main focus of this publication is on the consequences of nonconvex inner programs for B&B procedures. In this section we briefly review some features of this algorithm class and discuss complications for the inner variables. For more details on B&B procedures the reader is referred to the literature, e.g., [19, 30]. For discussion purposes consider again the inequality constrained nonconvex nonlinear program (NLP) (3). The goal of B&B is to bracket the optimal solution value f* between an upper bound UBD, typically the objective value at a feasible point, and a lower bound LBD, typically the optimal objective value of a relaxed program. To that end the host set X is partitioned into a finite collection of subsets X_i ⊂ X, such that X = ∪_i X_i. For each subset i a lower bound LBD_i and an upper bound UBD_i are established. The partitions are successively refined and typically this is done in an exhaustive manner [19], i.e., the diameter of each subset tends to zero. At each iteration the bounds are updated, LBD = min_i LBD_i and UBD = min_i UBD_i. Therefore UBD is non-increasing and LBD non-decreasing. The algorithm terminates when the difference UBD − LBD is lower than a pre-specified tolerance. For bilevel programs the inner program can have multiple optimal solutions for some values of x and therefore a complication is whether or not to branch on the y variables. Gümüş and Floudas [17] do not specify in their proposal if they do, and all their examples are solved at the root node. Amouzegar [2] for a special convex case proposes to branch only on the variables x. The dilemma is that on one hand y are optimization variables in the outer program, which suggests that one should branch on them, but on the other hand the requirement of optimality in the inner program requires consideration of the whole set Y without branching. One extreme possibility is to branch on the variables x ∈ X and y ∈ Y without distinction. Example 4.1 shows that some points (x̄, ȳ) ∈ X_i × Y_i could be feasible for node i but not for the original program (1) and therefore UBD = min_i UBD_i cannot be used. The other extreme possibility is to consider for a given node X_i a partition of Y into subsets Y_j and keep the worst subset. This procedure is not correct either, as Example 4.2 shows. The introduction of dummy variables z for the inner program allows the distinction of y between the inner and the outer programs. In a forthcoming publication [23] we propose an algorithm where branching is performed on the y variables, but not on the z variables. An alternative would be to employ subdivision similar to ideas presented for semi-infinite programs [7, 8].

Example 4.1 (The host set of the inner program cannot be partitioned). Consider the simple linear bilevel program

min_y y
s.t. −y ≤ 0     (9)
     y ∈ arg min_z z, z ∈ [−1, 1].

The inner program gives z = −1, which is infeasible in the outer program by the constraint −y ≤ 0. If we partition the host set Y = [−1, 1] by bisection (Y_1 = [−1, 0] and Y_2 = [0, 1]) we obtain two bilevel programs. The first one,

min_y y
s.t. −y ≤ 0
     y ∈ arg min_z z, z ∈ [−1, 0],

is also infeasible by the same argument. The second,

min_y y
s.t. −y ≤ 0
     y ∈ arg min_z z, z ∈ [0, 1],

gives y = 0. This point is not feasible in the original bilevel program (9). Taking the optimal solution of this node as an upper bound would give UBD = 0, which is clearly false.

Example 4.2 (The host set Y in the outer program cannot be partitioned keeping the worst subset). Consider the simple bilevel program

min_y y
s.t. y ∈ arg min_z −z², z ∈ [−1, 1].     (10)

The inner program gives z = ±1 and therefore the unique optimal solution is ȳ = −1 with an optimal objective value of −1. If we partition the host set Y = [−1, 1] by bisection (Y_1 = [−1, 0] and Y_2 = [0, 1]) we obtain two bilevel programs. The first one,

min_y y
s.t. y ∈ arg min_z −z², z ∈ [−1, 0],

gives the unique feasible point y = −1 with an objective value of −1. The second,

min_y y
s.t. y ∈ arg min_z −z², z ∈ [0, 1],

gives the unique feasible point y = 1 with an objective value of 1. Keeping the worse of the two bounds would result in an increase of the upper bound.
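Both failure modes can be reproduced numerically on grid versions of Examples 4.1 and 4.2. The helper below is an illustrative sketch (grids and tolerance are assumptions): it returns the optimistic bilevel value on a given inner host set, or None if the program is infeasible.

```python
# Grid check of Examples 4.1 and 4.2: partitioning the inner host set Y
# produces invalid bounds.
def solve(outer_f, outer_g, inner_h, Y, tol=1e-9):
    """Optimistic bilevel value on host set Y, or None if infeasible."""
    w = min(inner_h(z) for z in Y)                       # inner optimal value
    H = [z for z in Y if inner_h(z) <= w + tol]          # inner optimal set
    vals = [outer_f(y) for y in H if outer_g(y) <= 0]    # outer-feasible choices
    return min(vals) if vals else None

Y = [i / 10 for i in range(-10, 11)]                     # grid on [-1, 1]
Y1 = [z for z in Y if z <= 0]                            # node [-1, 0]
Y2 = [z for z in Y if z >= 0]                            # node [0, 1]

# Example 4.1: min y s.t. -y <= 0, y in argmin z.  Infeasible on Y and on Y1,
# but node Y2 yields the spurious "upper bound" 0.
ex41 = dict(outer_f=lambda y: y, outer_g=lambda y: -y, inner_h=lambda z: z)
v_root, v1, v2 = solve(Y=Y, **ex41), solve(Y=Y1, **ex41), solve(Y=Y2, **ex41)

# Example 4.2: min y s.t. y in argmin -z^2.  The true value is -1, but keeping
# the worse of the two node bounds would report +1.
ex42 = dict(outer_f=lambda y: y, outer_g=lambda y: -1.0, inner_h=lambda z: -z * z)
w_root = solve(Y=Y, **ex42)
w_worst = max(solve(Y=Y1, **ex42), solve(Y=Y2, **ex42))
```

The root-node results match the text: (9) is infeasible while its node delivers 0, and for (10) the worst-subset rule inflates the bound from −1 to +1.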

5 Upper Bounding Procedure

Obtaining upper bounds for bilevel programs with a nonconvex inner program is very difficult due to the constraint "y is a global minimum of the inner program", for which no obvious restriction is available. Applying this definition to check the feasibility of a pair of candidate solutions (x̄, ȳ) results in a global optimization problem for which in general state-of-the-art algorithms only furnish ε_f-optimal solutions at finite termination. Only recently have cheap upper bounds been developed for some related problems such as SIPs [7, 8] and GSIPs [21] under nonconvexity. In these proposals the upper bounds are associated only with a feasible point x̄ and no identification of the inner variables is needed. Cheap upper bounds are conceivable for a general bilevel program but to our best knowledge no valid proposal exists for the case of nonconvex inner programs. On termination, algorithms for bilevel programs (1) have to furnish a feasible pair (x̄, ȳ), i.e., values must be given for the inner variables. If the upper bounds obtained in a B&B tree are associated with such feasible points, a global solution to a nonconvex optimization program has to be identified for each upper bound. This leads to an exponential procedure nested inside an exponential procedure and is being employed in a forthcoming publication [23]. To avoid the nested exponential procedure a formulation of upper bounds without the identification of feasible points is required.

5.1 Solving an MPEC Locally Does Not Produce Valid Upper Bounds

Gümüş and Floudas [17] propose to obtain an upper bound by replacing the inner program by the corresponding KKT conditions, under some constraint qualification, and then solving the resulting program to local optimality. In this section it is shown that this procedure does not in general generate valid upper bounds.
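The root of the failure is that KKT stationarity admits points that are far from globally optimal in the inner program. A one-line numerical check with an illustrative function (z⁴ − 2z², not from the paper) makes this concrete: the stationary points of h'(z) = 4z³ − 4z = 0 include a local maximum that any stationarity-based reformulation would accept.

```python
# KKT stationarity is necessary but not sufficient for a nonconvex inner
# objective.  h(z) = z^4 - 2 z^2 is an illustrative choice: its stationary
# points are z in {-1, 0, 1}, but only z = +-1 are global minima.
h = lambda z: z ** 4 - 2.0 * z ** 2
stationary = [-1.0, 0.0, 1.0]             # roots of h'(z) = 4 z (z^2 - 1)
w = min(h(z) for z in stationary)         # true inner optimum: h(+-1) = -1
spurious = [z for z in stationary if h(z) > w + 1e-12]
# spurious contains z = 0, a local maximum of h that nevertheless satisfies
# the first-order conditions an MPEC reformulation would impose.
```

Replacing "z ∈ arg min h" by these stationarity conditions therefore admits z = 0, so a local solve of the resulting MPEC may return a point that is infeasible in the bilevel program, as the worked example below demonstrates in full.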
In the case of a nonconvex inner program, the KKT conditions are not sufficient for a global minimum and replacing the inner (nonconvex) program by the (necessary only) KKT conditions amounts to a relaxation of the bilevel program. Solving a (nonconvex) relaxed program to local optimality provides no guarantees that the point found is feasible in the original bilevel program; moreover the objective value does not necessarily provide an upper bound on the original bilevel program. This behavior is shown in Example 5.1. Although this example has a special structure, it shows a general property. Moreover, in the proposal by Gümüş and Floudas [17] the complementarity constraint is replaced by the big-M reformulation and new variables are introduced; some issues associated with this are discussed in Section 7.

Example 5.1 (Counterexample for upper bound based on local solution of MPEC). For simplicity a bilevel program with a single variable and box constraints is considered:

min_y y
s.t. y ∈ arg min_z −z², z ∈ [−0.5, 1].     (11)

By inspection the solution to the bilevel program (11) is ȳ = 1 with an optimal objective value of 1. The inner program has a concave objective function and linear inequality constraints and therefore by the Abadie constraint qualification the KKT conditions are necessary [5, p. 187]. The KKT conditions for the inner program are given by

−2z − λ_1 + λ_2 = 0
λ_1 (−z − 0.5) = 0     (12)
λ_2 (z − 1) = 0
λ_1 ≥ 0
λ_2 ≥ 0.

System (12) has three solutions. The first solution is z = −0.5, λ_1 = 1, λ_2 = 0, a suboptimal local minimum of the inner program (h = −0.25). The second solution is z = 0, λ_1 = 0, λ_2 = 0, the global maximum of the inner program (h = 0). Finally, the third solution is z = 1, λ_1 = 0, λ_2 = 2, the global minimum of the inner program (h = −1). The one-level optimization program proposed by Gümüş and Floudas [17] is therefore equivalent to

min_y y s.t. y ∈ {−0.5, 0, 1}     (13)

and has three feasible points. Since the feasible set is discrete, all points are local minima and a local solver can converge to any of the points corresponding to the solutions of the system (12). The first solution, y = −0.5 with an objective value f = −0.5, and the second solution, y = 0 with an objective value f = 0, are both infeasible for (11). Moreover, they clearly do not provide valid upper bounds on the original program (11). The third solution, y = 1 with an objective value of f = 1, which has the worst objective value for (13), is the true solution to the original program (11). Note that solving (13) to global optimality would provide a valid lower bound as described in the next section.

6 Lower Bounding Procedure

Lower bounds are typically obtained with an underestimation of the objective function and/or a relaxation of the feasible set [19] and a global solution of the resulting program. While global solution is a necessary requirement to obtain a valid lower bound, typically the constructed programs are convex and therefore global optimality can be guaranteed with local solvers.

6.1 Relaxation of the Inner Program Does Not Provide a Valid Lower Bound

Gümüş and Floudas [17] propose to obtain a lower bound to the bilevel program (1) by constructing a convex relaxation of the inner program (2) and replacing the convex inner program by its KKT conditions. Consider for a fixed x̄ a convex relaxation of (2) and the corresponding set of optimal solutions H_c(x̄):

H_c(x̄) = arg min_z h_c(x̄, z) s.t. p_c(x̄, z) ≤ 0, z ∈ Y,     (14)

where h_c(x̄, ·) and p_c(x̄, ·) are convex underestimating functions of h(x̄, ·) and p(x̄, ·) on Y for fixed x̄. This procedure results in an underestimation of the optimal solution value of the inner program but typically not in a relaxation of the feasible set of the bilevel program (1). The desired set inclusion property for the two sets of optimal solutions,

H(x̄) ⊂ H_c(x̄),     (15)

does not hold in general. It does hold when the inner constraints p(x̄, ·) are convex and h_c(x̄, ·) is the convex envelope of h(x̄, ·). Constructing the convex envelope is in general not practical and Gümüş and Floudas propose to use α-BB relaxations [1]. Example 6.1 shows that convex underestimating functions based on this method do not guarantee the desired set inclusion property even in the case of linear constraints. In the case of nonconvex constraints one might postulate that the set inclusion property (15) holds if the feasible set is replaced by its convex hull and the objective function by its convex envelope. This is not true, as shown in Example 6.2. Both examples show that the proposal by Gümüş and Floudas does not provide valid lower bounds for bilevel programs involving a nonconvex inner program. Although the examples have a special structure, they show a general property. Moreover, in the proposal by Gümüş and Floudas [17] the complementarity constraint is replaced by the big-M reformulation and new variables are introduced; issues associated with this are discussed in Section 7.

Example 6.1 (Counterexample for lower bound based on relaxation of inner program: box constraints). For simplicity a bilevel program with a single variable and box constraints is considered:

min_y y
s.t. y ∈ arg min_z z³, z ∈ [−1, 1].     (16)

The inner objective function is monotone and the unique minimum of the inner program is z = −1. The bilevel program is therefore equivalent to

min_y y s.t. y = −1, y ∈ [−1, 1],

with the unique feasible point y = −1 and an optimal objective value of −1. Taking the α-BB relaxation with α = 3 (the minimal possible) of the inner objective function, the relaxed inner program is

min_z z³ + 3 (1 − z)(−1 − z) s.t. z ∈ [−1, 1],

which has the unique solution z = 0, see also Figure 1. Clearly the desired set inclusion property (15) is violated. Using the solution to the convex relaxation of the inner program, the bilevel program now is equivalent to

min_y y s.t. y = 0, y ∈ [−1, 1],

and a false lower bound of 0 is obtained. Note that since the objective function is a monomial, its convex envelope is known [22]:

h_c(z) = (3/4) z − 1/4 if z ≤ 1/2,
h_c(z) = z³ otherwise.

By replacing the inner objective with the convex envelope, the minimum of the relaxed inner program would be attained at z = −1 and the set inclusion property would hold.

Figure 1: Inner level objective function, its convex envelope and its α-BB underestimator for Example 6.1.

Example 6.2 (Counterexample for lower bound based on relaxation of inner program: nonconvex constraints). For simplicity a bilevel program with a single variable, convex objective functions and a single nonconvex constraint in the inner program is considered:

min_y y
s.t. y ∈ arg min_z z²     (17)
     s.t. 1 − z² ≤ 0, z ∈ [−10, 10].

The two optimal solutions of the inner program are z = ±1, so that the bilevel program becomes

min_y y s.t. y = ±1, y ∈ [−10, 10],

with the unique optimal point y = −1 and an optimal objective value of −1. Taking the convex hull of the feasible set in the inner program, the relaxed inner program

min_z z² s.t. z ∈ [−10, 10]

has the unique solution z = 0. Clearly the desired set inclusion property (15) is violated. Using the solution to the convex relaxation of the inner program, the bilevel program becomes

min_y y s.t. y = 0, y ∈ [−10, 10],

and a false lower bound of 0 is obtained. Note that using any valid underestimator for the function 1 − z² would result in the same behavior as taking the convex hull.

6.2 Lower Bounds Based on Relaxation of the Optimality Constraint

A conceptually simple way of obtaining lower bounds to the bilevel program is to relax the constraint "y is a global minimum of the inner program" with the constraint "y is feasible in the inner program" and to solve the resulting program globally. Assuming some constraint qualification, a tighter bound can be obtained with the constraint "y is a KKT point of the inner program". In a forthcoming publication [23] we analyze how either relaxation can lead to convergent lower bounds. As mentioned in the introduction, Stein and Still [29] propose to solve GSIPs with a convex inner program by solving regularized MPECs. They note that for nonconvex inner programs this strategy would result in a relaxation of the GSIP. The complementarity slackness condition µ_i g_i(x,y) = 0 is essentially replaced by µ_i g_i(x,y) = −τ², where τ is a regularization parameter. A sequence τ → 0 of regularized programs is solved; each of the programs is a relaxation due to the special structure of GSIPs. For a general bilevel program the regularization has to be slightly altered, as can be seen in Example 6.3. The complication is that the KKT points are not feasible points in the regularized program and only for τ → 0 would a solution to the MPEC provide a lower bound to the original bilevel program. A possibility to use the regularization approach for lower bounds would be to use an inequality constraint −τ² ≤ µ_i g_i(x,y) as in [27]. To obtain the global solution with a branch-and-bound algorithm an additional complication is upper bounds for the KKT multipliers, see Section 7.

Example 6.3 (Example for complication in lower bound based on regularized MPEC). Consider a linear bilevel program with a single variable and for simplicity take R as the host set:

min_y y
s.t. y ∈ arg min_z z s.t. −z ≤ 0, z ∈ R.     (18)

The inner program has the unique solution z = 0, and thus there is only one feasible point in the bilevel program, with an objective value of 0. Note that all the functions are affine, and the inner program satisfies a constraint qualification. The equivalent MPEC is

min_y y
s.t. 1 − µ = 0
     µ ≥ 0
     −y ≤ 0
     µ (−y) = 0
     y ∈ R.

From the stationarity condition 1 − µ = 0 it follows that µ = 1 and therefore from the complementarity slackness y = 0. As explained by Stein and Still [29], the regularized MPEC is equivalent to the solution of

min_y y
s.t. 1 − µ = 0
     µ ≥ 0     (19)
     −y ≤ 0
     µ (−y) = −τ²
     y ∈ R,

which gives µ = 1, y = τ² and an objective value of τ², which clearly is not a lower bound to 0. Replacing the requirement µ (−y) = 0 by −τ² ≤ µ (−y), or equivalently y ≤ τ²/µ, in (19) would give a correct lower bound of 0.
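Because Example 6.3 is available in closed form, the two regularizations can be compared directly. The sketch below solves the equality-regularized system analytically and the inequality-relaxed version on a grid (the grid is an assumption for illustration):

```python
# Closed-form check of Example 6.3.
def regularized_mpec_value(tau):
    """Equality regularization mu*(-y) = -tau^2 of (19)."""
    mu = 1.0                  # from stationarity 1 - mu = 0
    y = tau ** 2 / mu         # from mu * (-y) = -tau^2
    return y                  # objective value: tau^2 > 0, NOT a lower bound to 0

def relaxed_mpec_value(tau, grid):
    """Inequality form -tau^2 <= mu*(-y), i.e. y <= tau^2/mu, with y >= 0."""
    mu = 1.0
    feas = [y for y in grid if 0.0 <= y <= tau ** 2 / mu]
    return min(feas)          # y = 0 is feasible: correct lower bound 0

grid = [i / 1000 for i in range(-100, 101)]
vals_eq = [regularized_mpec_value(t) for t in (0.5, 0.1, 0.01)]
val_ineq = relaxed_mpec_value(0.1, grid)
```

The equality version stays strictly above the true value 0 for every τ > 0, while the inequality version recovers the correct lower bound, matching the conclusion of the example.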

7 Complication in KKT Approaches: Bounds

A complication with bounds based on KKT conditions is that extra variables µ (KKT multipliers) are introduced, one for each constraint, which are bounded only below (by zero). The big-M reformulation of the complementarity slackness condition needs explicit bounds for both the constraints g_i and the KKT multipliers µ_i. Fortuny-Amat and McCarl [15] first proposed the big-M reformulation but do not specify how big M should be. Gümüş and Floudas [17] use the upper bound on the slack variables as an upper bound for the KKT multipliers, which is not correct in general, as Example 7.1 shows. Note also that Gümüş and Floudas [17] do not specify how to obtain bounds on the slack variables of the constraints, but this can be easily done, e.g., by computing an interval extension of the inequality constraints. Moreover, for a valid lower bound a further relaxation or a global solution of the constructed relaxations is needed and typically all variables need to be bounded [30, 25]. The regularization approach as in Stein and Still [29] does not need bounds for the regularization, but if the resulting program is to be solved to global optimality, bounds on the KKT multipliers are again needed. For programs with a special structure upper bounds may be known a priori or may be easy to estimate. Examples are the feasibility test and flexibility index problems [16] where all KKT multipliers are bounded above by 1. Alternatively, specialized algorithms need to be employed that do not require bounds on the KKT multipliers.

Example 7.1 (Example with arbitrarily large KKT multipliers). Let us now consider a simple example with a badly scaled constraint that leads to arbitrarily large multipliers for the optimal KKT point:

min_y y
s.t. δ (y² − 1) ≤ 0     (20)
     y ∈ [−2, 2],

where δ > 0. The only KKT point and optimal solution point is y = −1 and the KKT multiplier associated with the constraint δ (y² − 1) ≤ 0 is µ = 1/(2δ). The slack variable associated with the constraint can at most take the value δ.
As δ → 0, the gradient of the constraint at the optimal solution,

d(δ (y² − 1))/dy |_{y=−1} = 2δy |_{y=−1} = −2δ → 0,

and µ becomes arbitrarily large. For δ = 0 the Slater constraint qualification is violated.

8 Conclusions

The bilevel program involving nonconvex functions has been discussed, with emphasis on the solution of programs involving nonconvexity in the inner program within a branch-and-bound framework. To that extent two equivalent reformulations of the bilevel program, as a simpler bilevel program and as a GSIP, have been presented. As a compromise between local and global solution of the inner program, solution to ε-optimality in the inner and outer programs has been proposed. Several consequences of nonconvexity in the inner program have been identified, and the validity of literature proposals has been explored theoretically and through simple illustrative examples. In particular, it was shown that replacing the inner program by its necessary but not sufficient KKT constraints and solving the resulting MPEC locally does not produce valid upper bounds. Similarly, a relaxation of the inner program does not lead to a valid lower bound, whereas a relaxation of the optimality constraint does. In the co-operative formulation the inner variables have to be treated differently than the outer variables. For KKT based approaches, bounds on the KKT multipliers are needed for a global solution of the formulated subproblems. The issues identified with current proposals available in the literature show the need for the development of a rigorous algorithm for the global solution of bilevel programs involving nonconvex functions in the inner program.

Acknowledgments

This work was supported by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the Army Research Office under Grant DAAD. We would like to thank Binita Bhattacharjee, Panayiotis Lemonidis, Benoît Chachuat, Ajay Selot and Cha Kun Lee for fruitful discussions.

References

[1] Claire S. Adjiman and Christodoulos A. Floudas. Rigorous convex underestimators for general twice-differentiable problems. Journal of Global Optimization, 9(1):23–40.

[2] Mahyar A. Amouzegar. A global optimization method for nonlinear bilevel programs. IEEE Transactions on Systems, Man and Cybernetics Part B: Cybernetics, 29(6).

[3] Jonathan F. Bard. An algorithm for solving the general bilevel programming problem. Mathematics of Operations Research, 8(2).

[4] Jonathan F. Bard. Practical Bilevel Optimization: Algorithms and Applications. Nonconvex Optimization and Its Applications. Kluwer Academic Publishers, Dordrecht.

[5] Mokhtar S. Bazaraa, Hanif D. Sherali, and C. M. Shetty. Nonlinear Programming. John Wiley & Sons, New York, second edition.

[6] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, Massachusetts, second edition.

[7] Binita Bhattacharjee, William H. Green Jr., and Paul I. Barton. Interval methods for semi-infinite programs. Computational Optimization and Applications, 30(1):63–93.

[8] Binita Bhattacharjee, Panayiotis Lemonidis, William H. Green Jr., and Paul I. Barton. Global solution of semi-infinite programs. Mathematical Programming, Series B, 103(2).

[9] Wayne F. Bialas and Mark H. Karwan. Two-level linear programming. Management Science, 30(8).

[10] Jerome Bracken and James T. McGill. Mathematical programs with optimization problems in the constraints. Operations Research, 21(1):37–44.

[11] Peter A. Clark and Arthur W. Westerberg. Optimization for design problems having more than one objective. Computers & Chemical Engineering, 7(4).

[12] Peter A. Clark and Arthur W. Westerberg. Bilevel programming for steady-state chemical process design. 1. Fundamentals and algorithms. Computers & Chemical Engineering, 14(1):87–97.

[13] Stephan Dempe. First-order necessary optimality conditions for general bilevel programming problems. Journal of Optimization Theory and Applications, 95(3).

[14] Stephan Dempe. Foundations of Bilevel Programming. Nonconvex Optimization and its Applications. Kluwer Academic Publishers, Dordrecht.

[15] José Fortuny-Amat and Bruce McCarl. A representation and economic interpretation of a two-level programming problem. Journal of the Operational Research Society, 32(9).

[16] Ignacio E. Grossmann and Christodoulos A. Floudas. Active constraint strategy for flexibility analysis in chemical processes. Computers and Chemical Engineering, 11(6).

[17] Zeynep H. Gümüş and Christodoulos A. Floudas. Global optimization of nonlinear bilevel programming problems. Journal of Global Optimization, 20(1):1–31.

[18] Zeynep H. Gümüş and Christodoulos A. Floudas. Global optimization of mixed-integer bilevel programming problems. Computational Management Science, 2(3).

[19] Reiner Horst and Hoang Tuy. Global Optimization: Deterministic Approaches. Springer, Berlin, third edition.

[20] Panayiotis Lemonidis and Paul I. Barton. Interval methods for generalized semi-infinite programs. International Conference on Parametric Optimization and Related Topics (PARAOPT VIII), Cairo, Egypt, Nov 27–Dec.

[21] Panayiotis Lemonidis and Paul I. Barton. Global solution of generalized semi-infinite programs. In preparation.

[22] Leo Liberti and Constantinos C. Pantelides. Convex envelopes of monomials of odd degree. Journal of Global Optimization, 25(2).

[23] Alexander Mitsos, Panayiotis Lemonidis, and Paul I. Barton. Global solution of bilevel programs with a nonconvex inner program. Submitted to: Mathematical Programming, Series A, June 2006.

[24] James T. Moore and Jonathan F. Bard. The mixed integer linear bilevel programming problem. Operations Research, 38(5).

[25] Nick Sahinidis and Mohit Tawarmalani. BARON.

[26] Kiyotaka Shimizu, Yo Ishizuka, and Jonathan F. Bard. Nondifferentiable and Two-Level Mathematical Programming. Kluwer Academic Publishers, Boston.

[27] Oliver Stein, Jan Oldenburg, and Wolfgang Marquardt. Continuous reformulations of discrete-continuous optimization problems. Computers & Chemical Engineering, 28(10).

[28] Oliver Stein and Georg Still. On generalized semi-infinite optimization and bilevel optimization. European Journal of Operational Research, 142(3).

[29] Oliver Stein and Georg Still. Solving semi-infinite optimization problems with interior point techniques. SIAM Journal on Control and Optimization, 42(3).

[30] Mohit Tawarmalani and Nikolaos V. Sahinidis. Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming. Nonconvex Optimization and its Applications. Kluwer Academic Publishers, Boston.


More information

Relationships between upper exhausters and the basic subdifferential in variational analysis

Relationships between upper exhausters and the basic subdifferential in variational analysis J. Math. Anal. Appl. 334 (2007) 261 272 www.elsevier.com/locate/jmaa Relationships between upper exhausters and the basic subdifferential in variational analysis Vera Roshchina City University of Hong

More information

Affine Hybrid Systems

Affine Hybrid Systems Affine Hbrid Sstems Aaron D Ames and Shankar Sastr Universit of California, Berkele CA 9472, USA {adames,sastr}@eecsberkeleedu Abstract Affine hbrid sstems are hbrid sstems in which the discrete domains

More information

Feasibility Pump for Mixed Integer Nonlinear Programs 1

Feasibility Pump for Mixed Integer Nonlinear Programs 1 Feasibility Pump for Mixed Integer Nonlinear Programs 1 Presenter: 1 by Pierre Bonami, Gerard Cornuejols, Andrea Lodi and Francois Margot Mixed Integer Linear or Nonlinear Programs (MILP/MINLP) Optimize

More information

A Proximal Method for Identifying Active Manifolds

A Proximal Method for Identifying Active Manifolds A Proximal Method for Identifying Active Manifolds W.L. Hare April 18, 2006 Abstract The minimization of an objective function over a constraint set can often be simplified if the active manifold of the

More information

Lecture 7. The Cluster Expansion Lemma

Lecture 7. The Cluster Expansion Lemma Stanford Universit Spring 206 Math 233: Non-constructive methods in combinatorics Instructor: Jan Vondrák Lecture date: Apr 3, 206 Scribe: Erik Bates Lecture 7. The Cluster Expansion Lemma We have seen

More information

Technical Note Preservation of Supermodularity in Parametric Optimization Problems with Nonlattice Structures

Technical Note Preservation of Supermodularity in Parametric Optimization Problems with Nonlattice Structures Published online ahead of print October 1, 2013 OPERATIONS RESEARCH Articles in Advance, pp. 1 8 ISSN 0030-364X (print) ISSN 1526-5463 (online) http://dx.doi.org/10.1287/opre.2013.1203 2013 INFORMS Technical

More information

Lecture 2: Separable Ordinary Differential Equations

Lecture 2: Separable Ordinary Differential Equations Lecture : Separable Ordinar Differential Equations Dr. Michael Doughert Januar 8, 00 Some Terminolog: ODE s, PDE s, IVP s The differential equations we have looked at so far are called ordinar differential

More information

4 Inverse function theorem

4 Inverse function theorem Tel Aviv Universit, 2013/14 Analsis-III,IV 53 4 Inverse function theorem 4a What is the problem................ 53 4b Simple observations before the theorem..... 54 4c The theorem.....................

More information

Network Flows. 6. Lagrangian Relaxation. Programming. Fall 2010 Instructor: Dr. Masoud Yaghini

Network Flows. 6. Lagrangian Relaxation. Programming. Fall 2010 Instructor: Dr. Masoud Yaghini In the name of God Network Flows 6. Lagrangian Relaxation 6.3 Lagrangian Relaxation and Integer Programming Fall 2010 Instructor: Dr. Masoud Yaghini Integer Programming Outline Branch-and-Bound Technique

More information

Can Li a, Ignacio E. Grossmann a,

Can Li a, Ignacio E. Grossmann a, A generalized Benders decomposition-based branch and cut algorithm for two-stage stochastic programs with nonconvex constraints and mixed-binary first and second stage variables Can Li a, Ignacio E. Grossmann

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms

Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Some Inexact Hybrid Proximal Augmented Lagrangian Algorithms Carlos Humes Jr. a, Benar F. Svaiter b, Paulo J. S. Silva a, a Dept. of Computer Science, University of São Paulo, Brazil Email: {humes,rsilva}@ime.usp.br

More information

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124

More information

1 + x 1/2. b) For what values of k is g a quasi-concave function? For what values of k is g a concave function? Explain your answers.

1 + x 1/2. b) For what values of k is g a quasi-concave function? For what values of k is g a concave function? Explain your answers. Questions and Answers from Econ 0A Final: Fall 008 I have gone to some trouble to explain the answers to all of these questions, because I think that there is much to be learned b working through them

More information

MPC Infeasibility Handling

MPC Infeasibility Handling MPC Handling Thomas Wiese, TU Munich, KU Leuven supervised by H.J. Ferreau, Prof. M. Diehl (both KUL) and Dr. H. Gräb (TUM) October 9, 2008 1 / 42 MPC General MPC Strategies 2 / 42 Linear Discrete-Time

More information

Computer Sciences Department

Computer Sciences Department Computer Sciences Department Valid Inequalities for the Pooling Problem with Binary Variables Claudia D Ambrosio Jeff Linderoth James Luedtke Technical Report #682 November 200 Valid Inequalities for the

More information

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization

An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns

More information

ON A CLASS OF NONCONVEX PROBLEMS WHERE ALL LOCAL MINIMA ARE GLOBAL. Leo Liberti

ON A CLASS OF NONCONVEX PROBLEMS WHERE ALL LOCAL MINIMA ARE GLOBAL. Leo Liberti PUBLICATIONS DE L INSTITUT MATHÉMATIQUE Nouvelle série, tome 76(90) (2004), 0 09 ON A CLASS OF NONCONVEX PROBLEMS WHERE ALL LOCAL MINIMA ARE GLOBAL Leo Liberti Communicated by Gradimir Milovanović Abstract.

More information