A novel branch-and-bound algorithm for quadratic mixed-integer problems with quadratic constraints


Simone Göttlich, Kathinka Hameister, Michael Herty

September 27, 2017

Abstract. The efficient numerical treatment of convex quadratic mixed-integer optimization poses a challenging problem for present branch-and-bound algorithms. We introduce a method based on the duality principle for nonlinear convex problems to derive suitable bounds that can be directly exploited to improve heuristic branching rules. Numerical results indicate that the bounds allow the branch-and-bound tree to be searched and evaluated more efficiently compared to benchmark solvers. An extended computational study using different performance measures is presented for small, medium and large test instances.

AMS Classification: 90C11, 90C20
Keywords: nonlinear mixed-integer programming, duality, branch-and-bound

1 Introduction

Convex optimization problems with quadratic objective function and linear or quadratic constraints often appear in the mathematical modeling of real-world phenomena. Typical applications range from operations management and portfolio optimization in finance to engineering science, compare [1, 5, 6, 10, 11, 25] and the references therein. More recently, it has been observed that discretized differential equations, where integer decisions enter the problem as for example traffic lights [16, 17] or gas compressor switches [24], fit into the class of mixed-integer quadratically constrained programs (MIQCQP). The MIQCQP is a special form of mixed-integer nonlinear program (MINLP), where the nonlinearities are represented by quadratic functions. To solve general convex MINLPs, Dakin [12] proposed the branch-and-bound (B&B) method in 1965. This method extends the well-known method for solving linear mixed-integer problems [20]. The relation between primal and dual problems in the convex case has been used to obtain lower bounds and early branching rules already in [9].
Therein, the authors suggested using a quasi-Newton method to solve the Lagrangian saddle point problem appearing in the branching nodes. This nonlinear problem has only bound constraints. Fletcher and Leyffer [14] report on results of this approach for mixed-integer quadratic problems (MIQP) with linear constraints. In this work, we extend those results by investigating the dual problem in more detail for an improved tree search within a B&B algorithm. The proposed bound will then be used for the node selection strategy after the branching step in the B&B algorithm. The numerical results show an improved convergence behavior, in particular for large-scale problems, compared with benchmark solvers. Other approaches to obtain suitable lower bounds for branching have been investigated over the last years. Also in [14], the authors described how the calculation of a lower bound can be done efficiently under the assumption that the parent quadratic problem (QP) is solved with an active set method. Hoogeveen and van de Velde [19] showed that lower bounds are improved by introducing

University of Mannheim, Department of Mathematics, Mannheim, Germany ({goettlich,hameister}@math.uni-mannheim.de)
RWTH Aachen University, Department of Mathematics, Aachen, Germany (herty@igpm.rwth-aachen.de)

slack variables. Their main focus has been on applications to machine scheduling problems. Hahn and Grant worked on a dual lower bound for the quadratic assignment problem [18], and Van Thoai [27] extended the dual bound method to general quadratic programming problems with quadratic constraints. An overview of solution methods for QPs is given by Van Thoai [28]. Further improvements on the B&B scheme for MIQCQP or MINLP are the integrated sequential quadratic programming method [23] and the outer approximation method [13], as well as improvements in the B&B framework [3, 29, 21, 26]. Bonami et al. have worked on a hybrid algorithm [6] that is able to use several methods to speed up the solution time. They also developed heuristics to accelerate the computation time of the exact algorithm [7]. The impact of different methods and solver components has also been studied and documented in [5, 6, 8, 22, 30]. However, the main difference to our proposed algorithmic framework is the straightforward exploitation of the duality principle for nonlinear convex optimization. Although the bounds we compute from theory are not always sharp, they can be used within the B&B algorithm as a node selection strategy to cut off inefficient branches in an appropriate manner. To the best of our knowledge, this idea has not been used before.

The outline of the paper is as follows: We introduce the relevant notation and the problem context in Section 2. On a theoretical level, we discuss the Lagrangian duality that can be used to determine a lower bound for the objective value of the primal problem. The computation of the dual bound is presented and included in the B&B procedure as a new branching decision rule in Section 3. To conclude, we compare the performance of our implementation with IBM's optimization software CPLEX on academic and benchmark test instances. The results indicate the good performance of the novel branching heuristic, see Section 4.
2 Nonlinear mixed-integer problems

MINLPs have been studied extensively and we refer to [6, 9, 12, 13, 27] and the references therein for an overview. We focus on linear-quadratic mixed-integer problems that are also convex. This class allows for a dual problem to obtain suitable lower bounds. Within a tree search, those bounds can then be used to truncate subtrees within a B&B algorithm. This in turn should (and the numerical results also verify this) improve the solution process. Obviously, the idea to consider dual problems to improve bounds is standard for linear programming problems. For linear-quadratic relaxed problems as defined below it is also known that the dual bounds are sharp. Therefore, it is reasonable to exploit those bounds for branching. However, solving the dual problem is usually as hard as solving the primal problem. Therefore, we provide approximations to the dual problem to obtain bounds for the truncation of subtrees. This leads to a heuristic that can be evaluated efficiently. The numerical results in Section 4 show that for a wide class of problems this heuristic improves on benchmark solvers by orders of magnitude.

In the following we introduce the MIQCQP as well as our approximation to the dual problem. A MIQCQP for nonlinear, differentiable, strictly convex quadratic functions $f: \mathbb{R}^n \to \mathbb{R}$ and $g: \mathbb{R}^n \to \mathbb{R}$ and affine linear functions $h_{1,2}: \mathbb{R}^n \to \mathbb{R}^m$ is given by

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g(x) \le 0,\; h_1(x) \le 0,\; h_2(x) = 0,\; x_i \in \mathbb{Z},\ i \in I,\; x_j \in \mathbb{R},\ j \in \{1,\dots,n\} \setminus I. \tag{2.1}$$

We assume $I = \{1,\dots,l\}$ and $m < n$. The set $I$ contains the indices of the integer components of $x \in \mathbb{R}^n$. In order to simplify the notation we introduce the subset $X \subset \mathbb{R}^n$ as

$$X := \{x \in \mathbb{R}^n \mid x_i \in \mathbb{Z},\ i \in I\}.$$

Due to the given assumptions we may write

$$f(x) = \tfrac{1}{2} x^T Q_0 x + c_0^T x, \quad g(x) = \tfrac{1}{2} x^T Q_1 x + c_1^T x, \quad h_1(x) = A_1 x - b_1, \quad h_2(x) = A_2 x - b_2. \tag{2.2}$$
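To make the notation concrete, the data of (2.1)-(2.2) can be written down directly; the following toy values with $n = 2$ are ours and purely illustrative (they are not taken from the paper's test instances):

```python
import numpy as np

# Hypothetical toy data for (2.1)-(2.2) with n = 2: Q0 and Q1 are symmetric
# positive definite, so f and g are strictly convex.
Q0, c0 = np.array([[2.0, 0.0], [0.0, 2.0]]), np.array([1.0, -1.0])
Q1, c1 = np.eye(2), np.array([-1.0, -1.0])
A1, b1 = np.array([[1.0, 0.0]]), np.array([2.0])
A2, b2 = np.array([[1.0, 1.0]]), np.array([3.0])

f = lambda x: 0.5 * x @ Q0 @ x + c0 @ x    # objective
g = lambda x: 0.5 * x @ Q1 @ x + c1 @ x    # quadratic constraint, g(x) <= 0
h1 = lambda x: A1 @ x - b1                 # linear inequality, h1(x) <= 0
h2 = lambda x: A2 @ x - b2                 # linear equality,  h2(x) = 0

x = np.array([1.0, 2.0])                   # an integer point; here g(x) <= 0,
                                           # h1(x) <= 0 and h2(x) = 0 all hold
```

At this point $x$ is feasible for (2.1); relaxing the integrality of $x$ yields problem (2.3) below.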

The matrices $Q_i$ are positive definite and symmetric. We also assume that the matrices $A_1 \in \mathbb{R}^{m_1 \times n}$ and $A_2 \in \mathbb{R}^{m_2 \times n}$ have maximal rank to avoid technical difficulties. In the algorithmic framework of the B&B method we need to solve relaxation problems, where $X$ is replaced by $\mathbb{R}^n$ and $h_{1,2}$ are extended according to the already fixed integer variables in the current branching node, see Section 3 for more details. In order to select suitable branching nodes we require lower bounds on the relaxation problem. Those will be calculated using a dual formulation. For $f, g$ as before we consider the relaxed problem

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g(x) \le 0,\; h_1(x) \le 0,\; h_2(x) = 0, \tag{2.3}$$

where integer or relaxation restrictions and box constraints are included within the linear constraints $h_1$ and $h_2$, possibly changing the dimensions $m_1$ and $m_2$, respectively. The setting of problem (2.3) allows us to derive a dual formulation due to the convexity assumptions [15]. The corresponding Lagrange function is given by

$$L(x, \alpha, \lambda, \mu) = f(x) + \alpha g(x) + \lambda^T h_1(x) + \mu^T h_2(x),$$

where $\alpha \in \mathbb{R}_+$, $\lambda \in \mathbb{R}^{m_1}_+$ and $\mu \in \mathbb{R}^{m_2}$.

Definition 2.1 (Dual problem). [15, p. 315] The function $q: \mathbb{R}_+ \times \mathbb{R}^{m_1}_+ \times \mathbb{R}^{m_2} \to \mathbb{R}$,

$$q(\alpha, \lambda, \mu) := \inf_{x \in \mathbb{R}^n} L(x, \alpha, \lambda, \mu),$$

is called the dual function. The optimization problem

$$\max_{\alpha, \lambda, \mu} q(\alpha, \lambda, \mu) \quad \text{s.t.} \quad \alpha \ge 0,\ \lambda \ge 0 \tag{2.4}$$

is the dual problem.

Provided we have weak duality, we can use the Lagrange function to calculate a bound for the objective function. The corresponding theoretical result is given e.g. in [15].

Theorem 2.2 (Weak duality). [15, p. 320] Let $x \in \mathbb{R}^n$ be a feasible solution of the primal problem (2.3) and let $(\alpha, \lambda, \mu)$ be a feasible solution of the dual problem (2.4). Then

$$q(\alpha, \lambda, \mu) \le f(x)$$

is fulfilled, and the optimal values of both problems satisfy the inequality

$$\sup\{q(\alpha, \lambda, \mu) \mid \alpha \ge 0,\ \lambda \ge 0\} \le \inf\{f(x) \mid x \in \mathbb{R}^n,\ g(x) \le 0,\ h_1(x) \le 0,\ h_2(x) = 0\}.$$
The inequality shows that the dual function serves as a lower bound for the objective function of the primal problem provided we have a feasible solution. Therefore we introduce a B&B method using the dual function to estimate lower bounds for the branching. Let us denote by $x^* \in \mathbb{R}^n$ the optimal solution of the relaxation problem (2.3) and by $\alpha^*, \lambda^*, \mu^*$ its corresponding Lagrange multipliers. Due to the convexity assumptions there exists a unique solution $x^* \in \mathbb{R}^n$. For the minimization problem on the feasible set of the relaxed problem we assume that $\inf_{x \in \mathbb{R}^n} L(x, \alpha, \lambda, \mu)$ is attained at $\hat{x} = \hat{x}(\alpha, \lambda, \mu)$ for given and fixed values of the multipliers $\alpha, \lambda, \mu$. Then, the previous Theorem 2.2 states for any $(\alpha, \lambda, \mu)$ with $\alpha, \lambda \ge 0$ and any feasible $x \in \mathbb{R}^n$

$$q(\alpha, \lambda, \mu) = L(\hat{x}(\alpha, \lambda, \mu), \alpha, \lambda, \mu) \le q(\alpha^*, \lambda^*, \mu^*) = L(\hat{x}(\alpha^*, \lambda^*, \mu^*), \alpha^*, \lambda^*, \mu^*) = f(x^*) \le f(x), \tag{2.5}$$

where $x^* = \hat{x}(\alpha^*, \lambda^*, \mu^*)$ is the optimal solution to problem (2.3). Since $\hat{x}$ is the minimizer defining the dual function, we obtain the stationarity relation

$$\nabla f(\hat{x}) + \alpha \nabla g(\hat{x}) + \sum_{j=1}^{m_1} \lambda_j \nabla h_{1,j}(\hat{x}) + \sum_{j=1}^{m_2} \mu_j \nabla h_{2,j}(\hat{x}) = 0. \tag{2.6}$$

The objective is to present a simple computation of dual bounds based on equation (2.5) using suitable choices for the multipliers, similar to [9, 28]. Contrary to [9, 28], we do not apply a quasi-Newton method to solve (2.6), but directly compute its unique minimizer $\hat{x}$ in dependence of the multipliers as

$$\hat{x}(\alpha^*, \lambda^*, \mu^*) = -(Q_0 + \alpha^* Q_1)^{-1} (c_0 + \alpha^* c_1 + A_1^T \lambda^* + A_2^T \mu^*). \tag{2.7}$$

This allows us to obtain a closed form of the dual function in terms of the (unknown, optimal) multipliers $\alpha^*, \lambda^*$ and $\mu^*$. However, we are interested in an efficient computation of a lower bound for $f(x)$ to obtain a node selection rule. Since a maximization over all dual variables $(\alpha, \lambda, \mu)$ appearing in equation (2.7) is usually too expensive, we propose to fix the multipliers $\alpha$ and $\lambda$ and to maximize over the remaining one. Due to equation (2.5), this choice then provides a lower bound for $f(x^*)$.

Lemma 2.1. Under assumption (2.2) and for any $\alpha \ge 0$, the value

$$\bar{L} := L(\hat{x}, \alpha, 0, \hat{\mu})$$

provides a lower bound for $f(x^*)$, with

$$\hat{x} = -M(\bar{c} + A_2^T \hat{\mu}), \qquad \hat{\mu} = -(Z A_2^T - 2 B A_2^T)^{-1} (Z \bar{c} - 2 B \bar{c} - b_2),$$
$$M^{-1} := Q_0 + \alpha Q_1, \quad Z := A_2 M^T (Q_0 + \alpha Q_1) M, \quad B := A_2 M^T, \quad \bar{c} := c_0 + \alpha c_1.$$

Some remarks are in order. The formulas are obtained by maximizing with respect to $\mu$ after setting $\lambda = 0$. For fixed values of $\alpha$, all matrices can be computed offline and prior to the B&B algorithm. The choice $\alpha = 0$ simplifies the computation of the bound; in practice, we obtain better results for a small positive value of $\alpha$. The computation of the lower bound requires the inversion of the positive definite matrix $M^{-1} = Q_0 + \alpha Q_1$, which is efficiently performed using a Cholesky decomposition.

Proof. Note that for any choice of $(\alpha, \lambda, \mu)$ with $\alpha, \lambda \ge 0$, the function $q$ with $\hat{x}$ given by equation (2.7) provides a lower bound due to weak duality. We set $\lambda = 0$. Further note that due to the symmetry and positive definiteness of the matrices $Q_0$ and $Q_1$, the matrix $M$ is well-defined, symmetric and positive definite.
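Under the stated assumptions, the bound of Lemma 2.1 amounts to a handful of dense linear-algebra operations. The following sketch is ours, not the authors' MATLAB code: it uses the fact that, for symmetric $M$, both $Z$ and $B$ numerically reduce to $A_2 M$, so that $\hat{\mu}$ solves $A_2 M A_2^T \mu = -(A_2 M \bar{c} + b_2)$; the sign conventions follow the stationarity condition $\nabla_x L = 0$ and should be checked against the paper.

```python
import numpy as np

def dual_bound(Q0, c0, Q1, c1, A2, b2, alpha):
    """Sketch of the Lemma 2.1 bound: lambda = 0, maximize the dual function
    over the equality multiplier mu only. For symmetric M the matrices Z and
    B of the lemma both reduce to A2 @ M, which is used here."""
    M = np.linalg.inv(Q0 + alpha * Q1)      # M = (Q0 + alpha*Q1)^{-1}, SPD
    c_bar = c0 + alpha * c1
    # Stationarity of q in mu: A2 M A2^T mu = -(A2 M c_bar + b2)
    mu = -np.linalg.solve(A2 @ M @ A2.T, A2 @ M @ c_bar + b2)
    x_hat = -M @ (c_bar + A2.T @ mu)        # minimizer of the Lagrangian
    f = 0.5 * x_hat @ Q0 @ x_hat + c0 @ x_hat
    g = 0.5 * x_hat @ Q1 @ x_hat + c1 @ x_hat
    return f + alpha * g + mu @ (A2 @ x_hat - b2)
```

By weak duality the returned value never exceeds $f(x)$ for any point feasible for (2.3); for $\alpha = 0$ it equals the optimal value of the equality-constrained problem $\min f(x)$ s.t. $A_2 x = b_2$, where strong duality holds.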
This implies that the matrix

$$A_2 M^T (Q_0 + \alpha Q_1) M A_2^T - 2 A_2 M^T A_2^T = Z A_2^T - 2 B A_2^T = -A_2 M A_2^T$$

is symmetric and negative definite provided $A_2$ has full row rank. Therefore, $\hat{\mu}$ is well-defined. By definition of $\hat{x}$ in (2.7), we obtain

$$\partial_\mu \hat{x}(\alpha, 0, \mu) = -(Q_0 + \alpha Q_1)^{-1} A_2^T.$$

Finally, we observe that $q(\alpha, 0, \mu) = L(\hat{x}, \alpha, 0, \mu)$ is a quadratic function in $\mu$. Its gradient with respect to $\mu$ is given by

$$\begin{aligned}
\nabla_\mu q(\alpha, 0, \mu) &= \partial_\mu \hat{x}^T (Q_0 + \alpha Q_1)\hat{x} + \partial_\mu \hat{x}^T (c_0 + \alpha c_1) + A_2 \hat{x} - b_2 + \partial_\mu \hat{x}^T A_2^T \mu \\
&= A_2 M^T (Q_0 + \alpha Q_1) M (\bar{c} + A_2^T \mu) - A_2 M^T \bar{c} - A_2 M (\bar{c} + A_2^T \mu) - A_2 M^T A_2^T \mu - b_2 \\
&= (Z A_2^T - 2 B A_2^T)\,\mu + Z \bar{c} - 2 B \bar{c} - b_2.
\end{aligned}$$

Then, $\hat{\mu}$ is obtained as the zero of this gradient. This finishes the proof.

In many applications, the resulting optimization problem consists of more than one quadratic constraint. Therefore, we also propose an extension of Lemma 2.1 and consider a mixed-integer quadratic problem with $p$ quadratic constraints. The objective function and the linear constraints are the same as before. The MIQCQP is given by (2.8) with $x \in X$.

$$\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) = \tfrac{1}{2} x^T Q_0 x + c_0^T x \\
\text{s.t.} \quad & h(x) = A x - b \le 0, \\
& g_i(x) = \tfrac{1}{2} x^T Q_i x + c_i^T x - r_i \le 0 \quad \text{for } i = 1, \dots, p.
\end{aligned} \tag{2.8}$$

Here, $Q_0$ and $Q_i$, $i = 1, \dots, p$, are positive definite matrices. As before, the integer restrictions and box constraints of the relaxed problem are included in the linear constraints $h$ in each branching node of the B&B tree. Then we have $x \in \mathbb{R}^n$, and the Lagrangian of problem (2.8) is given by

$$L(x, \alpha_1, \dots, \alpha_p, \lambda, \mu) = f(x) + \sum_{i=1}^{p} \alpha_i g_i(x) + (\lambda, \mu)^T h(x).$$

The following lemma is an extension of the previous result to $p$ quadratic constraints.

Lemma 2.2. Let $\alpha \in \mathbb{R}^p_+$. Then, the value

$$\bar{L} := L(\hat{x}, \alpha_1, \dots, \alpha_p, 0, \hat{\mu})$$

provides a lower bound for the relaxed minimization problem of (2.8), and hence for $f(x)$, $x \in X$, where

$$\hat{x}(\alpha_1, \dots, \alpha_p, 0, \mu) = -M(\bar{c} + A_2^T \hat{\mu}), \qquad \hat{\mu} = -(Z A_2^T - 2 B A_2^T)^{-1} (Z \bar{c} - 2 B \bar{c} - b_2),$$
$$M^{-1} := Q_0 + \sum_{i=1}^{p} \alpha_i Q_i, \quad Z := A_2 M^T \Big(Q_0 + \sum_{i=1}^{p} \alpha_i Q_i\Big) M, \quad B := A_2 M^T, \quad \bar{c} := c_0 + \sum_{i=1}^{p} \alpha_i c_i.$$

Proof. Before we verify the Lagrangian multipliers, it is necessary to show that $\hat{x}$ is the minimizer of the relaxed problem of (2.8). Therefore we solve the following equation for $x$:

$$\nabla_x L(x, \alpha_1, \dots, \alpha_p, \lambda, \mu) = \nabla f(x) + \sum_{i=1}^{p} \alpha_i \nabla g_i(x) + (\lambda^T, \mu^T)\, \nabla h(x) = 0.$$

This gives the unique minimizer $\hat{x}$ in dependence of the multipliers $\alpha_1, \dots, \alpha_p, \lambda, \mu$,

$$\hat{x}(\alpha_1, \dots, \alpha_p, 0, \mu) = -\Big(Q_0 + \sum_{i=1}^{p} \alpha_i Q_i\Big)^{-1} \Big(c_0 + \sum_{i=1}^{p} \alpha_i c_i + A_2^T \mu\Big),$$

where the multiplier $\lambda$ for the linear inequalities is set to zero. This allows us to obtain the closed form of the dual function in terms of the multipliers. Calculating the gradient with respect to $\mu$, we obtain $\hat{\mu}$ as the zero of the partial derivative of the dual function with respect to $\mu$. We end up with

$$\hat{\mu} = -(A_2 M^T \bar{Q} M A_2^T - 2 A_2 M^T A_2^T)^{-1} (A_2 M^T \bar{Q} M \bar{c} - 2 A_2 M^T \bar{c} - b_2)$$

as a Lagrangian multiplier, with $\bar{Q} := Q_0 + \sum_{i=1}^{p} \alpha_i Q_i$, $M := \bar{Q}^{-1}$ and $\bar{c} := c_0 + \sum_{i=1}^{p} \alpha_i c_i$. This finishes the proof.

In the next section we explain how Lemmas 2.1 and 2.2 are used to design an efficient B&B algorithm.
We will specify a node selection strategy that helps to detect subtrees that do not improve the solution. As we will see in Section 4, this strategy leads to a good performance even for large problem sizes and, furthermore, to very good first integer feasible solutions.
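The multi-constraint bound of Lemma 2.2 can be sketched analogously; again this is our transcription (function name and interface are ours), with $\lambda = 0$, the $Z = B = A_2 M$ simplification for symmetric $M$, and signs taken from the stationarity condition rather than the garbling-prone printed formulas:

```python
import numpy as np

def dual_bound_multi(Q0, c0, quads, A2, b2, alpha):
    """Sketch of the Lemma 2.2 bound with p quadratic constraints
    g_i(x) = 1/2 x^T Q_i x + c_i^T x - r_i <= 0.
    `quads` is a list of (Q_i, c_i, r_i); `alpha` is a vector in R^p_+."""
    Qbar = Q0 + sum(a * Qi for a, (Qi, ci, ri) in zip(alpha, quads))
    c_bar = c0 + sum(a * ci for a, (Qi, ci, ri) in zip(alpha, quads))
    M = np.linalg.inv(Qbar)                      # M = Qbar^{-1}, SPD
    mu = -np.linalg.solve(A2 @ M @ A2.T, A2 @ M @ c_bar + b2)
    x_hat = -M @ (c_bar + A2.T @ mu)             # minimizer of the Lagrangian
    val = 0.5 * x_hat @ Q0 @ x_hat + c0 @ x_hat  # f(x_hat)
    for a, (Qi, ci, ri) in zip(alpha, quads):
        val += a * (0.5 * x_hat @ Qi @ x_hat + ci @ x_hat - ri)
    return val + mu @ (A2 @ x_hat - b2)
```

For $p = 1$ and $r_1 = 0$ this reduces to the single-constraint bound of Lemma 2.1; setting all $\alpha_i = 0$ recovers the purely equality-constrained dual value.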

3 Solution procedure

Starting from a general description of the B&B algorithm, we present a new node selection strategy applied after the branching step. This rule is integrated in a depth-first search strategy. In total, we obtain a new tree search strategy for the B&B algorithm. There exist a few examples where a dual lower bound obtained from the Lagrangian dual problem has been used to cut branches, cf. [9, 14, 28]. However, so far such bounds have not been exploited for the node decision inside the B&B scheme. We combine the node selection with the bound computation presented above as a novel heuristic strategy for a B&B algorithm.

We focus on solving the MIQCQP using a classical B&B algorithm [2, 12, 20]. At first, as an initialization, we solve the relaxed problem (2.3) in the root node, which is defined by relaxing the integrality constraints on the integer variables $x_i$, $i \in I$. Following the pruning rules, we choose an index $i$ with a fractional value $x_i$ that should be integer, and fix it to its lower (upper) integer value $\lfloor x_i \rfloor$ ($\lceil x_i \rceil$). The resulting subproblem has the additional constraint $\hat{x}_i \le \lfloor x_i \rfloor$ ($\lceil x_i \rceil \le \hat{x}_i$) on the solution $\hat{x}$. This procedure is continued until an integer solution to the main problem is found or infeasibility occurs. The B&B algorithm searches a tree structure for feasible integer solutions, where the nodes correspond to the relaxed quadratically constrained quadratic problems (QCQP) and the edges to branching decisions.

3.1 A tailored branch-and-bound algorithm

Assuming full information, we may find a way through the B&B tree directly to the optimal solution of the main optimization problem, as shown in Figure 1.

Figure 1: Optimal tree search path: B&B with full information about subproblems, i.e., we know the solutions of all subproblems in advance. Then, we may always choose the subproblem lying on the path (fat line) to the optimal solution (node $P_k$).
Other subtrees (small circles) do not need to be considered.

Definition 3.1 (Optimal search path). Given a linear or nonlinear mixed-integer problem (MIP), we define the optimal search path through the B&B tree as the direct path from the fully relaxed problem to the optimal solution of the main problem. The optimal path is characterized by the list of all visited subproblems. It is also the smallest possible search tree.

Since we do not know the solution associated with a node in advance, we have to look for alternative strategies to find a suitable path through the B&B tree.

Tree search strategy. All nodes of the tree are visited according to a depth-first search. The algorithm therefore goes down the tree by choosing subproblems defined by a node selection strategy. It goes up again when a feasible solution is found or an infeasibility arises on the branch. While climbing up the tree, the algorithm cuts off all nodes whose bounds are higher than the objective value of the best current solution, which is an upper bound on the optimal value of (2.1). Since the dual bound is a lower bound on the objective function value of the subproblem, a dual bound higher than the global upper bound implies that no better solution can be found by branching in this direction. Consequently, those nodes will not be visited by the proposed algorithm.

Node selection strategy. Solving the subproblems before the branch selection is potentially costly. Therefore, we calculate a bound on the objective value $f(x)$ of the arising subproblems. We intend to save computational costs by solving only one of the two subproblems. Thanks to Lemma 2.1, we are able to decide which branch should be solved first by comparing the values of the dual bounds of the corresponding child nodes, as indicated in Figure 2. We then continue on the branch with the lower dual bound. This strategy is also known as the best-of-two node selection rule or best lower bound strategy [8].

Figure 2: Node selection strategy: the best dual bound first. Here, $\bar{L}_i$ denotes the dual bound defined in Lemma 2.1 for the subproblem at node $P_i$.

Selection of the branching variable. After applying the pruning rules, the first decision is to choose the fractional variable for the branching. Possible rules are, for instance, the most fractional component or highest pseudo costs, see [8]. We take the first fractional component. Gathering all decision rules, we end up with the so-called B&B dual algorithm.
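To make the interplay of depth-first search, pruning and the best-of-two child ordering concrete, here is a self-contained toy sketch on a much simpler problem class than the paper's MIQCQP: a box-constrained integer QP whose relaxations are solved by coordinate descent, and whose relaxation values stand in for the dual bounds $\bar{L}_i$ when ordering the two children. All names (`solve_box_qp`, `bb_dual_style`) are ours.

```python
import math
import numpy as np

def solve_box_qp(Q, c, lb, ub, sweeps=300):
    """Cyclic coordinate descent for min 1/2 x^T Q x + c^T x on lb <= x <= ub.
    For symmetric positive definite Q this converges to the global minimum."""
    x = np.clip(np.zeros(len(c)), lb, ub)
    for _ in range(sweeps):
        for i in range(len(c)):
            # exact minimization over x_i with all other coordinates fixed
            xi = -(c[i] + Q[i] @ x - Q[i, i] * x[i]) / Q[i, i]
            x[i] = min(max(xi, lb[i]), ub[i])
    return x

def bb_dual_style(Q, c, lb, ub):
    """Depth-first B&B for min 1/2 x^T Q x + c^T x with x integer in [lb, ub].
    Children are ordered best-of-two: the child with the lower bound is
    explored first; the sibling only if its bound still beats the incumbent."""
    obj = lambda x: 0.5 * x @ Q @ x + c @ x
    best = {"x": None, "f": math.inf}

    def recurse(lb, ub):
        x = solve_box_qp(Q, c, lb, ub)
        if obj(x) >= best["f"] - 1e-9:
            return                          # prune: bound cannot beat incumbent
        frac = [i for i in range(len(c)) if abs(x[i] - round(x[i])) > 1e-6]
        if not frac:
            best["x"], best["f"] = np.round(x), obj(np.round(x))
            return                          # integer feasible: new incumbent
        i = frac[0]                         # first fractional component
        left_ub, right_lb = ub.copy(), lb.copy()
        left_ub[i] = math.floor(x[i])       # left child:  x_i <= floor(x_i)
        right_lb[i] = math.floor(x[i]) + 1  # right child: x_i >= ceil(x_i)
        kids = [(obj(solve_box_qp(Q, c, l, u)), l, u)
                for l, u in ((lb, left_ub), (right_lb, ub)) if np.all(l <= u)]
        for bound, l, u in sorted(kids, key=lambda t: t[0]):
            if bound < best["f"] - 1e-9:
                recurse(l, u)

    recurse(np.asarray(lb, dtype=float), np.asarray(ub, dtype=float))
    return best["x"], best["f"]
```

In the paper's setting the child bounds would come from Lemma 2.1 instead of a full relaxation solve; the tree mechanics (depth-first descent, lower-bound-first ordering, pruning against the incumbent) are the same.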
3.2 Implementation

We describe the implementation of the algorithmic framework presented before. The pseudo code of the B&B dual scheme is given in Algorithm 1. The routine requires the matrices and vectors as defined in (2.1) and (2.2) as well as box constraints for the integer variables and (optionally) a start solution. For simplicity, we set $x_0 = 0$. The output of the algorithm is the integer optimal solution $\bar{x} \in X$ and the objective function value $\bar{f} = f(\bar{x})$ of the computed solution. Additional information about the solution process, e.g. the number of solutions that have been found or the number of evaluated nodes in the B&B tree, is also provided.

Within the first call of the code, the upper bound UB for the optimal objective function value is initialized to plus infinity. Lines 2 to 33 of Algorithm 1 describe the recursive part of the code. After the exit condition of the recursion and the update of the upper bound UB, the branching is performed on one fractional component of the variable. The dual bound for the objective value of the resulting relaxed subproblems is calculated. In the node decision, the branch with the lower dual bound is considered first, but only if its dual bound is lower than the upper bound for the optimal objective function value. Once the recursions of all child nodes have been closed, the code returns the best found solution to the next upper recursion level, until the first level of the recursive hierarchy has been solved. The B&B scheme pursues a depth-first search strategy caused by the recursive structure.

The performance of the B&B dual algorithm correlates with the choice of the Lagrangian multiplier $\alpha$, which influences the accuracy of the dual bound. To achieve an accurate bound at each node,

the value of the dual function is evaluated for different feasible values of $\alpha$. The dual bound used for the algorithmic framework is chosen as the maximum of the calculated dual function values. Therefore, the trade-off between the computational costs for the dual function values and the improvement of the solution process, caused by a more accurate dual bound, has to be considered.

Algorithm 1: The recursive B&B dual algorithm
Require: MIQCQP, initial solution x_0
Ensure: integer optimal solution x̄ and objective value f̄ of the solution
1:  initialize upper bound UB = ∞
2:  % start of the recursive part of the code
3:  solve relaxed QCQP, let x* be the optimal solution
4:  if QCQP is infeasible or objective value f(x*) > UB then
5:    cut off this branch
6:  end if
7:  if QCQP is integer feasible then
8:    set UB = f(x*), x̄ = x*
9:  end if
10: % branching on one variable x_i with fractional value
11: if ⌊x_i⌋ exists then
12:   for α_1, ..., α_n do
13:     GenerateNewBounds(⌊x_i⌋), let MIQCQP_left be the new subproblem
14:     CalculateMultipliers(MIQCQP_left)  % see Section 2
15:     CalculateDualBound(MIQCQP_left) =: L̄_1(α_i)
16:   end for
17:   choose L̄_1 = max{L̄_1(α_i) | i = 1, ..., n}
18: else if ⌈x_i⌉ exists then
19:   % analogously to the first part of the branch construction
20:   [...]
21: end if
22: if L̄_1 < L̄_2 and L̄_1 < UB then
23:   recursive call beginning in line 2 on problem MIQCQP_left
24:   if new integer solution found then
25:     update UB, x̄ = x*
26:   end if
27:   if L̄_2 < UB then
28:     recursive call beginning in line 2 on problem MIQCQP_right
29:   end if
30: else if L̄_2 ≤ L̄_1 and L̄_2 < UB then
31:   % analogously to the first part of the node selection
32:   [...]
33: end if
34: return best found integer solution x̄ and f(x̄)

4 Computational results

To test the performance of the B&B algorithm described in Section 3, we implemented the algorithm recursively, without any parallelism, in MATLAB Release 2013b based on a software for solving MINLPs.¹
Furthermore, we use IBM's ILOG CPLEX Interactive Optimizer to solve the relaxed subproblems. This means that all QCQPs arising during the B&B dual algorithm are solved by CPLEX. We compare the B&B dual implementation to CPLEX as a

¹ Documentation and download:

benchmark solver. To make sure that both algorithms run as a sequential B&B algorithm with two different search strategies, the MIP parameters of CPLEX² have been fixed as follows:

cplex.param.threads.cur = 1,
cplex.param.mip.strategy.search.cur = 1,
cplex.param.mip.strategy.nodeselect.cur = 1.

Therefore, the implementation of our algorithm and CPLEX mainly differ in the decisions made during the optimization process of the B&B tree. All tests have been performed on a Unix PC equipped with 512 GB RAM and an Intel Xeon CPU.

4.1 Performance measures

We start with the introduction of performance measures inspired by [4] for the comparison of CPLEX and our implementation of Algorithm 1. The most intuitive way to measure the performance of two different methods is to document the progress of the solution at different stages. Therefore, we compare the time needed to find the first integer solution $\bar{x}_1$ and the optimal solution $\bar{x}_{opt}$ as well as the time needed to prove optimality. To show the quality of the first found integer solution, we also record the objective function values of the first and the optimal solution, i.e., $f(\bar{x}_1)$ and $f(\bar{x}_{opt})$.

Let $\bar{x}$ again be an integer feasible solution and $\bar{x}_{opt}$ the optimum. We define $t_{max} \in \mathbb{R}_+$ as the time limit of the solution process. Then, the primal gap $\gamma \in [0, 1]$ of $\bar{x}$ is given by

$$\gamma(\bar{x}) := \begin{cases} 0, & \text{if } f(\bar{x}_{opt}) = f(\bar{x}) = 0, \\ 1, & \text{if } f(\bar{x}_{opt}) \cdot f(\bar{x}) < 0, \\ \dfrac{|f(\bar{x}_{opt}) - f(\bar{x})|}{\max\{|f(\bar{x}_{opt})|, |f(\bar{x})|\}}, & \text{else.} \end{cases} \tag{4.1}$$

The monotonically decreasing step function $p: [0, t_{max}] \to [0, 1]$ defined as

$$p(t) := \begin{cases} 1, & \text{if no incumbent is found until time } t, \\ \gamma(\bar{x}(t)), & \text{with } \bar{x}(t) \text{ the incumbent at time } t, \end{cases}$$

is called the primal gap function. The latter changes its value whenever a new incumbent is found. Furthermore, it holds that $p(0) = 1$ and $p(t) = 0$ for all $t \ge t_{opt}$. Hence, the primal gap function visualizes the normalized difference between the current integer and the optimal solution.
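Both the primal gap and the primal integral $P(T)$ defined next can be computed directly from the history of incumbents; a minimal sketch (function names are ours), following (4.1) and anticipating (4.2) below:

```python
def primal_gap(f_opt, f_inc):
    """Primal gap gamma in [0, 1] of an incumbent objective value, per (4.1)."""
    if f_opt == f_inc == 0.0:
        return 0.0
    if f_opt * f_inc < 0.0:                 # opposite signs
        return 1.0
    return abs(f_opt - f_inc) / max(abs(f_opt), abs(f_inc))

def primal_integral(events, f_opt, T):
    """Primal integral P(T) per (4.2): `events` is the chronological list of
    (time, incumbent objective value) pairs; the gap is 1 before the first
    incumbent is found."""
    total, t_prev, gap = 0.0, 0.0, 1.0
    for t, f_inc in events:
        total += gap * (t - t_prev)         # step function: previous gap level
        t_prev, gap = t, primal_gap(f_opt, f_inc)
    return total + gap * (T - t_prev)
```

For example, if the first incumbent (gap 0.5) is found after 2 seconds and the optimum after 5 seconds, the integral up to $T = 8$ is $2 \cdot 1 + 3 \cdot 0.5 = 3.5$.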
Next, we define the primal integral $P(T)$ for $T \in [0, t_{max}]$ as

$$P(T) := \int_0^T p(t)\,dt = \sum_{k=1}^{K} p(t_{k-1}) \cdot (t_k - t_{k-1}), \tag{4.2}$$

where $T \in \mathbb{R}_+$ is the total computation time of the procedure and $t_k \in [0, T]$, $k = 1, \dots, K-1$, are the times at which a new incumbent is found, with $t_0 = 0$ and $t_K = T$. Note that the primal integral $P(T)$ is beneficial to detect good solutions early and to identify each update of the incumbent. As a further performance indicator, we document the total number of integer solutions found. This number equals the number of incumbent updates of the primal gap function.

4.2 One quadratic constraint

We generate test instances of the mixed-integer quadratically constrained quadratic problem (2.1) by using the integer random number generator of MATLAB. The optimal solutions of these problems are computed by the algorithm described in Section 3 and the results are compared to those of

² For further information see: cplex.help/cplex/homepages/cplex.html

CPLEX. Note that all relevant entries of vectors and matrices have been set to integer values to ensure that an integer solution really exists. For our optimization purposes, we consider (2.1) constrained by (2.2) and additional box constraints $lb \le x \le ub$, where $x, lb, ub \in \mathbb{R}^n$. The matrices $Q_0$ and $Q_1$ are of the form $C D C^T$ with $D$ being a diagonal matrix with entries $d_{j,j} \in [1, 4]$. The components $c_{j,k}$ of the matrix $C$ satisfy $c_{j,k} \in [-1, 1]$ for scenario type one and $c_{j,k} \in [0, 2]$ for scenario type two, respectively. All other entries of matrices and vectors have been chosen randomly according to

$c_0$: $(c_0)_j \in [0, 6]$, $\quad c_1$: $(c_1)_j \in [-n, N]$, $\quad A_1$: $a_{i,j} \in [-1, 2]$, $\quad A_2$: $a_{i,j} \in [-1, 3]$, $\quad b_1$: $(b_1)_j \in [-2, 4]$, $\quad b_2$: $(b_2)_j \in [0, 6]$, $\quad x_i$, $i = 1, \dots, l$: $x_i \in [-2, 5]$.

The continuous components of $x$ are not restricted by any box constraints. To generate sufficiently large problem instances, we compute three different kinds of problems regarding the number of variables. For each of these classes we generate 10 different examples by changing the random seed of the MATLAB random number generator randi, seed ∈ {2, 4, 6, 8, 10, 12, 14, 16}. The random seed is the number that initializes the random number generator in such a way that the constructed matrices are repeatable. For the matrix $Q_1$ and the vector $c_1$ the seed is fixed to three to guarantee that they are not the same as $Q_0$ and $c_0$ in the objective function.

To increase the size of the problem under consideration, we increase the number of integer variables and constraints step by step. For example, the smallest configuration consists of 150 variables in total, 30 integer variables, 30 linear equalities and 50 inequalities. If the total number of variables is doubled, the problem size increases in the same way. While the number of linear equalities and inequalities increases, there is only one quadratic constraint. Our dual lower bound strategy can be easily extended to more quadratic constraints.
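A sketch of this instance generation in Python (the paper uses MATLAB's randi; the exact draws are not reproduced here, the ranges are taken from the text, and both the symmetric $[-n, n]$ range used for $c_1$ and the added identity matrix that keeps $C D C^T$ safely positive definite are our assumptions):

```python
import numpy as np

def random_instance(n, l, m1, m2, scenario=2, seed=6):
    """Sketch of the random MIQCQP instance generation described above.
    Returns (Q0, c0, Q1, c1, A1, b1, A2, b2, lb, ub) for problem (2.1)-(2.2);
    lb/ub are the box constraints of the l integer variables."""
    def spd(rng):
        # Q = C D C^T with d_jj in [1,4]; scenario 1: C in [-1,1], 2: [0,2].
        lo, hi = (-1, 2) if scenario == 1 else (0, 3)
        C = rng.integers(lo, hi, size=(n, n)).astype(float)
        D = np.diag(rng.integers(1, 5, size=n).astype(float))
        return C @ D @ C.T + np.eye(n)   # + I: our safeguard against singular C
    # Separate seed (fixed to 3, as in the text) for Q1 and c1.
    rng0, rng1 = np.random.default_rng(seed), np.random.default_rng(3)
    Q0, Q1 = spd(rng0), spd(rng1)
    c0 = rng0.integers(0, 7, size=n).astype(float)
    c1 = rng1.integers(-n, n + 1, size=n).astype(float)   # range assumed
    A1 = rng0.integers(-1, 3, size=(m1, n)).astype(float)
    b1 = rng0.integers(-2, 5, size=m1).astype(float)
    A2 = rng0.integers(-1, 4, size=(m2, n)).astype(float)
    b2 = rng0.integers(0, 7, size=m2).astype(float)
    lb, ub = np.full(l, -2.0), np.full(l, 5.0)  # box for the integer part
    return Q0, c0, Q1, c1, A1, b1, A2, b2, lb, ub
```

The fixed second generator mirrors the text's device of seeding $Q_1, c_1$ with 3 so that they differ from $Q_0, c_0$.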
This will be shown in Section 4.3 for some generated MIQCQPs with multiple quadratic constraints. A final comparison with relevant benchmark problems of different sizes will be presented in Section 4.4.

For the following computations we set the Lagrangian multipliers to $\alpha \in \{0.2, 0.4, 0.6, 0.95\}$, $\lambda = 0$ and $\hat{\mu} = -(Z A_2^T - 2 B A_2^T)^{-1} (Z \bar{c} - 2 B \bar{c} - b_2)$. The choice of $\alpha \in \mathbb{R}_+$ for fixed $\lambda, \hat{\mu}$ results from an extended numerical study, where we have observed that for $\alpha \ge 1$ the performance gets worse. This means that we do not expect the lower dual bound of each subproblem to be sharp in the sense of minimizing the size of the B&B tree. We remark that, even though this bound might be weak, the resulting tree search strategy described in Section 3 gives very good results, in particular for large-scale problems.

Summary of main results. To give a first impression of the numerical results, we present in Table 1 a small extract of some samples taken from Tables 8 to 13 in Appendix A. The first column characterizes the problem size that has been computed by the solvers Bonmin-BB³, B&B dual and CPLEX. The first number indicates the number of integer variables, the second one the total number of variables of the problem. All instances in the Appendix are sorted by their size, whereby the first column gives the seed used to generate the instance. The data which has been measured during the optimization process is presented in columns 3 to 9. Here, OFV means objective function value, $P(T)$ is the primal integral defined in (4.2) and $T$ denotes the total computational time. MIP Sols gives the number of solutions that have been recorded by the software. The optimization process using Bonmin-BB has been executed on a Windows 7 machine equipped with 16 GB RAM and an Intel Core CPU. Obviously, the performance of the Bonmin-BB solver is not comparable to the other approaches since medium

³ For more information see

[Table 1 spans this page; its numeric entries were lost in extraction. Columns: size, solver, time in sec. ($t_1$, $t_{opt}$, $T$), OFV ($f(\bar{x}_1)$, $f(\bar{x}_{opt})$), $P(T)$, MIP Sols. Rows: sizes 30/150 and 60/300 for Bonmin-BB, B&B dual and CPLEX; size 90/450 for B&B dual and CPLEX.]

Table 1: Extract of results (see Appendix A): different problem sizes of scenario type two with seed = 6. The symbol states that the solution process has been stopped at this point.

size problems (60/300) cannot be solved reliably anymore. We therefore decided to focus on the performance of B&B dual and CPLEX only. From Table 1 we observe that for MIQCQPs with a large number of variables our approach is quite promising. Let us analyze this observation in more detail in the following.

For each sample (consisting of 20 instances), we calculate the mean time needed by the two approaches to find the optimal solution and the mean time to finish the solution process, see Figure 3. The problem sizes 15/75, 45/225 and 75/375 have been additionally computed to have six data points for the interpolation.

Figure 3: Mean computational time to find the optimal solution (left) and mean total computing time (right).

The comparison of the time evolutions gives a similar picture: for the smaller problem instances, CPLEX is more efficient in finding the optimal solution and finishing the solution process. However, this behavior changes significantly when the problems become larger. Figure 4 shows the mean of the primal gap function for 20 instances ranging from small (30/150) to large (90/450) test instances. Here, we sum up the primal gap functions and scale the result by the total number of instances. For all 20 instances of size 30/150, CPLEX is able to find an integer solution faster than the B&B dual algorithm. This results in an earlier decrease of the primal gap function. However, in 11 out of 20 cases, the B&B dual implementation finds the optimal solution before CPLEX.
For most cases (15 out of 20), the integral value of the primal gap function for the B&B dual algorithm is below the CPLEX performance, compare the intersection point after 36 sec. This is due to the fact that the B&B dual algorithm detects fewer feasible solutions (see also Table 2 and Figure 5), but with a lower primal gap and a better quality. For the large problem instance 90/450 the mean of the primal gap function for the B&B algorithm is fully below the one of CPLEX. While our B&B implementation solves the first instance after 1438 sec. and subsequently computes the optimal solutions of all instances, CPLEX needs the same total time to solve only 2 out of 20 instances. There is only one instance where CPLEX is able to finish slightly faster than the B&B dual approach (see Table 13, seed = 4).

Figure 4: Mean of the primal gap function for 20 instances: small instances (30/150) on the left and large instances (90/450) on the right.

Table 2 shows a quantitative evaluation of the number of integer solutions collected by both methods during the solution process. Except for one run of the sample 30/150, CPLEX always needs at least two integer solutions to find the optimum. This becomes even more significant for increasing problem sizes (60/300 and 90/450). For instance, CPLEX finds more than 8 solutions in 9 out of 20 instances.

Table 2: Summary of randomly generated test instances: number of integer solutions. For each size (30/150, 60/300, 90/450) the table counts the instances whose number of solutions falls into the classes = 1, [2, 4] and ≥ 5 for B&B dual, and = 1, [2, 4], [5, 7] and ≥ 8 for CPLEX.

In contrast, our approach is able to detect the optimal integer solution in less than 3 recorded solutions, independent of the problem size. In particular, in 10 out of 20 instances of small size (30/150), and in 12 out of 20 instances of large size (90/450), the first solution found is already optimal. Figure 5 illustrates the behavior of the algorithms from Table 2. Obviously, the average number of integer feasible solutions increases with the problem size when applying CPLEX, while this number remains constant for the B&B dual algorithm. From Table 2 we also recognize that the B&B dual algorithm finds the optimal search path in the B&B tree in 33 out of 60 instances. In other words, the first solution found is optimal in 33 instances.
As indicated in Table 2 and Tables 8 to 13, the second feasible solution found is optimal in 27 instances. In summary, the B&B dual algorithm outperforms CPLEX regarding the search path in almost all test instances. There is solely one instance where both CPLEX and the B&B dual approach reach the optimal solution directly.

Figure 5: Mean number of integer solutions documented for different problem sizes.

We have already mentioned that the B&B dual approach needs at most two integer solutions to find the optimum. Let us now comment on the quality of the first found integer solution. The time needed to compute the first integer solution and the quality of its primal gap are presented in Table 3. As known from Figure 4, the B&B dual approach outperforms CPLEX especially for large problem instances.

Table 3: Mean time needed to compute the first integer feasible solution and its mean primal gap (mean t₁ and mean p(t₁) for B&B dual and CPLEX at the sizes 30/150, 60/300 and 90/450).

The worst- and best-case values of t₁ and p(t₁) are added to the mean values from Table 3 in Figure 6. The time plot (Figure 6 on the left) shows that the worst and best case for the B&B dual approach are quite close. This implies a stable performance of our approach independent of the problem size. This behavior changes for CPLEX, where we observe a wider spread of worst- and best-case values for t₁. A similar observation can be made for the quality of the primal gap of the first solution, see Figure 6 on the right. For the medium and large scale instances, the worst-case value of the first found primal gap p(t₁) is significantly smaller for the B&B dual approach than for CPLEX. Note that the performance of the B&B dual algorithm typically correlates with the choice of parameters. This means our approach provides a reliable way to compute good solutions, even though the search tree cannot be minimized for all test cases with a priori fixed Lagrangian multipliers. However, there is still freedom to improve the dual bounds using more involved multipliers.
Figure 6: Time for the first found integer feasible solution (left) and its primal gap (right).

4.3 Multiple quadratic constraints

In addition to the numerical examples presented in Subsection 4.2, we further test the performance of the B&B dual approach for MIQCQPs consisting of more than one quadratic constraint. We mainly use the results from Lemma 2.2 to extend the B&B dual approach to problems with multiple quadratic constraints. The instances under consideration are again of different size, generated in a similar way as before, and now consist of 4 quadratic constraints. The samples include 11 optimization problems with 30 integer variables, 11 with 60 and 11 with 90. For the computation we have chosen α ∈ {0.05, 0.1, 0.15, 0.2, 0.25}. The computational results are listed in Appendix B, where QC is used for quadratic and LC for linear constraints.

While our algorithmic framework solved 32 instances to optimality, CPLEX had only solved 54.54% of the instances in the same time. Note that one of the generated instances did not have a feasible integer solution. The benefits of the B&B dual algorithm observed for problems with only one quadratic constraint carry over to problems with multiple quadratic constraints, compare Figures 3 and 7. Our experiments show that our approach becomes particularly powerful for large scale problems. For instance, the B&B dual approach has proven the optimal solution for a scenario of size 90/450 after 75 minutes, while CPLEX needs 750 minutes.

Figure 7: Mean computational time to find the optimal solution.

Figure 8 shows the mean primal gap functions of two samples consisting of 11 generated problems each, all with four quadratic constraints. For both the 30/150 and the 90/450 sample, the mean primal gap value of the B&B dual algorithm is below that of CPLEX. Since our implementation always finds an integer solution faster than CPLEX, the mean of the corresponding primal gap functions drops accordingly. After the computation of all optimal solutions (after 148 sec. for the 30/150 instances), CPLEX still needs to find the optimal solution of 6 instances, see Figure 8 on the left. This difference is stressed when the problem size becomes larger. While our algorithm terminates each of the 32 solution processes with 90/450 variables after one hour and 15 minutes, CPLEX is able to find an integer feasible solution for only one instance during that time. In Table 4 we see again that the number of recorded solutions does not exceed 2 for the B&B dual approach, while CPLEX has at most 2 solutions in only 25 of the 43 computed instances.
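To illustrate the idea behind the multi-constraint extension, a weak-duality bound with fixed multipliers can be sketched as follows. This is our own toy illustration in numpy, not a restatement of Lemma 2.2 or of the paper's exact bound: several convex quadratic constraints gᵢ(x) ≤ 0 are aggregated into the Lagrangian with one nonnegative multiplier αᵢ each, and the resulting unconstrained convex quadratic is minimized in closed form.

```python
import numpy as np

def lagrangian_bound(Q, c, quad_cons, alpha):
    """Lower bound on min { 1/2 x'Qx + c'x : g_i(x) <= 0 } via weak duality.

    quad_cons: list of (A_i, b_i, d_i) encoding g_i(x) = 1/2 x'A_i x + b_i'x + d_i.
    alpha:     fixed nonnegative multipliers, one per constraint.

    Valid whenever alpha_i >= 0 and the aggregated Hessian H is positive
    definite; dropping the constraints can only decrease the minimum."""
    H, g, const = Q.copy(), c.copy(), 0.0
    for a, (A, b, d) in zip(alpha, quad_cons):
        H += a * A
        g += a * b
        const += a * d
    x = np.linalg.solve(H, -g)   # stationary point of the convex Lagrangian
    return 0.5 * x @ H @ x + g @ x + const, x
```

For the sharpest such bound one would maximize over α; fixing α a priori, as done in the experiments above, trades bound sharpness for per-node speed.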
Figure 8: Mean of the primal gap function for 20 instances with four quadratic constraints: small instances (30/150) on the left and large instances (90/450) on the right.

As pointed out before in the case of one quadratic constraint, one key characteristic of our approach is the quality and computational time of the first found integer solution.

Table 4: Summary of randomly generated test instances: number of solutions. For each size (30/150, 60/300, 90/450) the table counts the instances whose number of solutions falls into the classes = 1, [2, 4] and ≥ 5 for B&B dual, and = 1, [2, 4], [5, 7] and ≥ 8 for CPLEX.

This is still true for problems with multiple quadratic constraints, see Table 5 and Figure 9 on the left, with the only difference that the variation of the primal gap and its mean value are higher than in the previous study. However, even in the worst case the quality of the first solution of the B&B dual algorithm is slightly better than the mean value for CPLEX, see Figure 9 on the right. As before, the best- and worst-case computational times are very close (about 200 sec. apart) for our implementation, whereas this difference is about 1200 sec. for CPLEX for the problem instance 90/450, cf. Figure 9 on the left. Summarizing, there is no instance out of the 33 where CPLEX reaches the optimal solution faster than our B&B scheme.

Table 5: Mean time needed to calculate the first integer feasible solution and the mean primal gap at the first solution (mean t₁ and mean p(t₁) for B&B dual and CPLEX at the sizes 30/150, 60/300 and 90/450).

Figure 9: Time measured at the first documented integer feasible solution (left) and the corresponding primal gap measure (right).

4.4 Tests for data sets from academic literature

For further validation of our approach, we take six instances from the MINLPLib2 library (compare the problem descriptions in Table 6). All instances are convex and consist of at least one quadratic constraint. The first instances under consideration are the so-called CLay problems, which are constrained layout problems. From the literature we know that these problems are ill-posed in the sense that there is no feasible solution near the optimal solution of the continuous relaxation, see [6]. As a second application we consider portfolio optimization problems (called portfol_classical). These problems arise by adding a cardinality constraint to the mean-variance portfolio optimization problem, see [29].

We aim to compare the performance of the B&B dual algorithm with CPLEX and Bonmin-BB while focusing on the quality of the first solutions found. We consider the quality measures computing times t₁ and t_opt, primal integral P(T), primal gap γ(x̄₁) and total number of integer solutions MIP Sols. Apparently, CPLEX and Bonmin-BB benefit from additional internal heuristics to prove optimality, cf. the computing times in Table 6. In particular, CPLEX is able to find good integer solutions reasonably fast, independent of the problem size and the number of quadratic constraints. However, as mentioned before, the performance of the B&B dual algorithm heavily depends on the choice of the Lagrangian multiplier α. From computational tests we know that for problems with many quadratic constraints (e.g. the CLay problems) it is reasonable to choose the multipliers close to zero. Hence, for the CLay problems the Lagrangian multiplier α has been chosen as α ∈ (0, 1/p), where p is the number of quadratic constraints.
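The two selection rules described in the surrounding text (α strictly inside (0, 1/p) for CLay-type problems, and a slightly perturbed value from {0.5, …, 0.9} for the portfolio problems) can be sketched as a small helper. The function name and the exact sampling scheme are our assumptions; only the intervals come from the text.

```python
import random

def pick_alpha(num_qc, portfolio=False, rng=None):
    """Heuristic multiplier choice sketched from the text:
    - CLay-type problems: alpha strictly inside (0, 1/p), p = #quadratic constraints;
    - portfolio problems: alpha from {0.5, ..., 0.9} plus a tiny random shift
      to break ties between symmetric solutions."""
    rng = rng or random.Random(0)
    if portfolio:
        return rng.choice([0.5, 0.6, 0.7, 0.8, 0.9]) + rng.uniform(0.0, 1e-3)
    # sampling away from the endpoints keeps alpha strictly inside (0, 1/p)
    return rng.uniform(0.1, 0.9) / num_qc
```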
In contrast, in the case of the portfolio optimization problems, α has been selected from the set {0.5, 0.6, 0.7, 0.8, 0.9}, and a small random number has been added to prevent symmetric solutions.

The first four columns in Table 6 describe the problems: QC and LC count the number of quadratic and linear constraints of the initial problem. We document the solution times, the primal integral P(T) and the integer MIP solutions. It turns out that CPLEX performs best for all instances. However, the primal gap of the first integer solution γ(x̄₁) is fairly good for the B&B dual implementation, in particular in comparison with CPLEX, compare Table 7.

Table 6: Examples taken from MINLPLib2. For each instance (CLay0204m, CLay0205m, CLay0303m, CLay0305m, portfol_classical050_1, portfol_classical200_2) the table lists the problem size (# var l/n), the numbers of quadratic (QC) and linear (LC) constraints, and for each solver (B&B dual, Bonmin-BB, CPLEX) the times t₁, t_opt and T, the primal integral P(T) and MIP Sols. A first marker indicates that the solution process has been stopped after 15 hours; a second marker on t_opt indicates that the solution found by the solver differs from the one given by MINLPLib2.

Table 7: Primal gap γ(x̄₁) of the first integer feasible solution for B&B dual, Bonmin-BB and CPLEX on the CLay and portfol_classical instances.

Figure 10 shows the evolution of the primal and dual bounds for the recorded MIP solutions of the CLay0205m problem (left) and the portfol_classical050_1 problem (right) over time. Comparing the two types of problems, we point out the main difference related to their structure: for the portfolio optimization problems the MIP solutions are near the optimal solution of the continuous relaxation, whereas the integer feasible solutions of the CLay problems are scattered over a wider range of the feasible region. This is a possible explanation for the fact that the dual bounds provide more information independent of the choice of α.

Figure 10: B&B dual algorithm: primal and dual bounds for the current best integer solution in case of the CLay0205m problem (left) and the portfol_classical050_1 problem (right).

5 Conclusion

We have presented a promising algorithm based on duality concepts for convex problems to tackle MINLPs equipped with quadratic constraints. The new approach is efficient and outperforms the CPLEX solver for problems with a reasonably large number of (integer) variables. Concerning the number of visited nodes in the B&B tree, we remark that the dual bound used for pruning is not sharp enough to cut off all subproblems. However, this leads to an efficient strategy to obtain at least good (or optimal) solutions very quickly. Future work includes the investigation of techniques to strengthen the dual bounds and a sensitivity analysis for the Lagrangian multipliers α. This will be necessary since, from an application point of view, we are interested in efficiently solving optimization problems restricted by ordinary or partial differential equations combined with discrete decisions.

Acknowledgment

This work has been supported by the KI-Net NSF RNMS grant, the DFG Cluster of Excellence "Production technologies for high wage countries", DFG grants HE5386/13,14,15-1 and GO 1920/4-1, and a DAAD-MIUR project.

References

[1] E. Ammar and H. A. Khalifa, Fuzzy portfolio optimization: a quadratic programming approach, Chaos, Solitons & Fractals, 18 (2003).

[2] P. Belotti, C. Kirches, S. Leyffer, J. Linderoth, J. Luedtke, and A. Mahajan, Mixed-integer nonlinear optimization, Acta Numerica, 22 (2013).

[3] P. Belotti, J. Lee, L. Liberti, F. Margot, and A. Wächter, Branching and bounds tightening techniques for non-convex MINLP, Optimization Methods and Software, 24 (2009).

[4] T. Berthold, Measuring the impact of primal heuristics, Operations Research Letters, 41 (2013).

[5] A. Bley, A. M. Gleixner, T. Koch, and S. Vigerske, Comparing MIQCP solvers to a specialised algorithm for mine production scheduling, Springer, Berlin, Heidelberg, 2012.

[6] P. Bonami, L. T. Biegler, A. R. Conn, G. Cornuéjols, I. E. Grossmann, C. D. Laird, J. Lee, A. Lodi, F. Margot, N. Sawaya, and A. Wächter, An algorithmic framework for convex mixed integer nonlinear programs, Discrete Optimization, 5 (2008).

[7] P. Bonami and J. P. M. Gonçalves, Heuristics for convex mixed integer nonlinear programs, Computational Optimization and Applications, 51 (2012).

[8] P. Bonami, M. Kilinç, and J. Linderoth, Algorithms and software for convex mixed integer nonlinear programs, Springer, New York, 2012.

[9] B. Borchers and J. E. Mitchell, An improved branch and bound algorithm for mixed integer nonlinear programs, Computers & Operations Research, 21 (1994).

[10] R. E. Burkard, E. Çela, P. M. Pardalos, and L. S. Pitsoulis, The quadratic assignment problem, Springer US, Boston, MA, 1999.

[11] J. F. Campbell, Integer programming formulations of discrete hub location problems, European Journal of Operational Research, 72 (1994).

[12] R. J. Dakin, A tree-search algorithm for mixed integer programming problems, The Computer Journal, 8 (1965).

[13] R. Fletcher and S. Leyffer, Solving mixed integer nonlinear programs by outer approximation, Mathematical Programming, 66 (1994).

[14] R. Fletcher and S. Leyffer, Numerical experience with lower bounds for MIQP branch-and-bound, SIAM Journal on Optimization, 8 (1998).

[15] C. Geiger and C. Kanzow, Theorie und Numerik restringierter Optimierungsaufgaben, Springer, Berlin, 2002.

[16] S. Göttlich, M. Herty, and U. Ziegler, Modeling and optimizing traffic light settings in road networks, Computers & Operations Research, 55 (2015).

[17] S. Göttlich, A. Potschka, and U. Ziegler, Partial outer convexification for traffic light optimization in road networks, SIAM Journal on Scientific Computing, 39 (2017), pp. B53–B75.

[18] P. Hahn and T. Grant, Lower bounds for the quadratic assignment problem based upon a dual formulation, Operations Research (1998).

[19] J. A. Hoogeveen and S. L. van de Velde, Stronger Lagrangian bounds by use of slack variables: applications to machine scheduling problems, Mathematical Programming, 70 (1995).

[20] A. H. Land and A. G.

19 [7] P. Bonami and J. P. M. Gonçalves, Heuristics for convex mixed integer nonlinear programs, Computational Optimization and Applications, 51 (2012), pp [8] P. Bonami, M. Kilinç, and J. Linderoth, Algorithms and Software for Convex Mixed Integer Nonlinear Programs, Springer, New York, 2012, pp [9] B. Borchers and J. E. Mitchell, An improved branch and bound algorithm for mixed integer nonlinear programs, Computers & Operations Research, 21 (1994), pp [10] R. E. Burkard, E. Çela, P. M. Pardalos, and L. S. Pitsoulis, The Quadratic Assignment Problem, Springer US, Boston, MA, 1999, pp [11] J. F. Campbell, Integer programming formulations of discrete hub location problems, European Journal of Operational Research, 72 (1994), pp [12] R. J. Dakin, A tree-search algorithm for mixed integer programming problems, The Computer Journal, 8 (1965), pp [13] R. Fletcher and S. Leyffer, Solving mixed integer nonlinear programs by outer approximation, Mathematical Programming, 66 (1994), pp [14] R. Fletcher and S. Leyffer, Numerical experience with lower bounds for MIQP branchand-bound, SIAM Journal on Optimization, 8 (1998), pp [15] C. Geiger and C. Kanzow, Theorie und Numerik restringiererter Optimierungsaufgaben, Springer, Berlin, [16] S. Göttlich, M. Herty, and U. Ziegler, Modeling and optimizing traffic light settings in road networks, Computers & Operations Research, 55 (2015), pp [17] S. Göttlich, A. Potschka, and U. Ziegler, Partial outer convexification for traffic light optimization in road networks, SIAM Journal on Scientific Computing, 39 (2017), pp. B53 B75. [18] P. Hahn and T. Grant, Lower bounds for the quadratic assignment problem based upon a dual formulation, Operations Research, (1998), pp [19] J. A. Hoogeveen and S. L. van de Velde, Stronger lagrangian bounds by use of slack variables: Applications to machine scheduling problems, Mathematical Programming, 70 (1995), pp [20] A. H. Land and A. G. 
Doig, An automatic method of solving discrete programming problems, Econometrica, 28 (1960), p [21] R. Lazimy, Mixed-integer quadratic programming, Mathematical Programming, 22 (1982), pp [22] T. Lehmann, On Efficient Solution Methods for Mixed-Integer Nonlinear and Mixed-Integer Quadratic Optimization Problems, PhD thesis, Univertity of Bayreuth, [23] S. Leyffer, Integrating SQP and branch-and-bound for mixed integer nonlinear programming, Computational Optimization and Applications, 18 (2001), pp [24] A. Martin, M. Möller, and S. Moritz, Mixed integer models for the stationary case of gas network optimization, Mathematical Programming, 105, pp [25] R. Misener and C. A. Floudas, GloMIQO: Global Mixed-Integer Quadratic Optimizer, Journal of Global Optimization, 57 (2013), pp [26] I. Nowak, A new semidefinite programming bound for indefinite quadratic forms over a simplex, Journal of Global Optimization, 14 (1999), pp


A Lifted Linear Programming Branch-and-Bound Algorithm for Mixed Integer Conic Quadratic Programs A Lifted Linear Programming Branch-and-Bound Algorithm for Mied Integer Conic Quadratic Programs Juan Pablo Vielma Shabbir Ahmed George L. Nemhauser H. Milton Stewart School of Industrial and Systems Engineering

More information

Overview of course. Introduction to Optimization, DIKU Monday 12 November David Pisinger

Overview of course. Introduction to Optimization, DIKU Monday 12 November David Pisinger Introduction to Optimization, DIKU 007-08 Monday November David Pisinger Lecture What is OR, linear models, standard form, slack form, simplex repetition, graphical interpretation, extreme points, basic

More information

A Branch-and-Refine Method for Nonconvex Mixed-Integer Optimization

A Branch-and-Refine Method for Nonconvex Mixed-Integer Optimization A Branch-and-Refine Method for Nonconvex Mixed-Integer Optimization Sven Leyffer 2 Annick Sartenaer 1 Emilie Wanufelle 1 1 University of Namur, Belgium 2 Argonne National Laboratory, USA IMA Workshop,

More information

A Note on Symmetry Reduction for Circular Traveling Tournament Problems

A Note on Symmetry Reduction for Circular Traveling Tournament Problems Mainz School of Management and Economics Discussion Paper Series A Note on Symmetry Reduction for Circular Traveling Tournament Problems Timo Gschwind and Stefan Irnich April 2010 Discussion paper number

More information

Network Flows. 6. Lagrangian Relaxation. Programming. Fall 2010 Instructor: Dr. Masoud Yaghini

Network Flows. 6. Lagrangian Relaxation. Programming. Fall 2010 Instructor: Dr. Masoud Yaghini In the name of God Network Flows 6. Lagrangian Relaxation 6.3 Lagrangian Relaxation and Integer Programming Fall 2010 Instructor: Dr. Masoud Yaghini Integer Programming Outline Branch-and-Bound Technique

More information

Improved quadratic cuts for convex mixed-integer nonlinear programs

Improved quadratic cuts for convex mixed-integer nonlinear programs Improved quadratic cuts for convex mixed-integer nonlinear programs Lijie Su a,b, Lixin Tang a*, David E. Bernal c, Ignacio E. Grossmann c a Institute of Industrial and Systems Engineering, Northeastern

More information

Lift-and-Project Cuts for Mixed Integer Convex Programs

Lift-and-Project Cuts for Mixed Integer Convex Programs Lift-and-Project Cuts for Mixed Integer Convex Programs Pierre Bonami LIF, CNRS Aix-Marseille Université, 163 avenue de Luminy - Case 901 F-13288 Marseille Cedex 9 France pierre.bonami@lif.univ-mrs.fr

More information

Analyzing the computational impact of individual MINLP solver components

Analyzing the computational impact of individual MINLP solver components Analyzing the computational impact of individual MINLP solver components Ambros M. Gleixner joint work with Stefan Vigerske Zuse Institute Berlin MATHEON Berlin Mathematical School MINLP 2014, June 4,

More information

Strong-Branching Inequalities for Convex Mixed Integer Nonlinear Programs

Strong-Branching Inequalities for Convex Mixed Integer Nonlinear Programs Computational Optimization and Applications manuscript No. (will be inserted by the editor) Strong-Branching Inequalities for Convex Mixed Integer Nonlinear Programs Mustafa Kılınç Jeff Linderoth James

More information

Optimization in Process Systems Engineering

Optimization in Process Systems Engineering Optimization in Process Systems Engineering M.Sc. Jan Kronqvist Process Design & Systems Engineering Laboratory Faculty of Science and Engineering Åbo Akademi University Most optimization problems in production

More information

Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS. Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano

Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS. Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano ... Our contribution PIPS-PSBB*: Multi-level parallelism for Stochastic

More information

An Inexact Newton Method for Optimization

An Inexact Newton Method for Optimization New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)

More information

Linear programming. Saad Mneimneh. maximize x 1 + x 2 subject to 4x 1 x 2 8 2x 1 + x x 1 2x 2 2

Linear programming. Saad Mneimneh. maximize x 1 + x 2 subject to 4x 1 x 2 8 2x 1 + x x 1 2x 2 2 Linear programming Saad Mneimneh 1 Introduction Consider the following problem: x 1 + x x 1 x 8 x 1 + x 10 5x 1 x x 1, x 0 The feasible solution is a point (x 1, x ) that lies within the region defined

More information

Outline. Relaxation. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Lagrangian Relaxation. Lecture 12 Single Machine Models, Column Generation

Outline. Relaxation. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Lagrangian Relaxation. Lecture 12 Single Machine Models, Column Generation Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING 1. Lagrangian Relaxation Lecture 12 Single Machine Models, Column Generation 2. Dantzig-Wolfe Decomposition Dantzig-Wolfe Decomposition Delayed Column

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

What s New in Active-Set Methods for Nonlinear Optimization?

What s New in Active-Set Methods for Nonlinear Optimization? What s New in Active-Set Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for

More information

A BRANCH&BOUND ALGORITHM FOR SOLVING ONE-DIMENSIONAL CUTTING STOCK PROBLEMS EXACTLY

A BRANCH&BOUND ALGORITHM FOR SOLVING ONE-DIMENSIONAL CUTTING STOCK PROBLEMS EXACTLY APPLICATIONES MATHEMATICAE 23,2 (1995), pp. 151 167 G. SCHEITHAUER and J. TERNO (Dresden) A BRANCH&BOUND ALGORITHM FOR SOLVING ONE-DIMENSIONAL CUTTING STOCK PROBLEMS EXACTLY Abstract. Many numerical computations

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality

More information

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers Optimization for Communications and Networks Poompat Saengudomlert Session 4 Duality and Lagrange Multipliers P Saengudomlert (2015) Optimization Session 4 1 / 14 24 Dual Problems Consider a primal convex

More information

An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84

An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 Introduction Almost all numerical methods for solving PDEs will at some point be reduced to solving A

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-284 Research Reports on Mathematical and Computing Sciences Exploiting Sparsity in Linear and Nonlinear Matrix Inequalities via Positive Semidefinite Matrix Completion Sunyoung Kim, Masakazu

More information

where X is the feasible region, i.e., the set of the feasible solutions.

where X is the feasible region, i.e., the set of the feasible solutions. 3.5 Branch and Bound Consider a generic Discrete Optimization problem (P) z = max{c(x) : x X }, where X is the feasible region, i.e., the set of the feasible solutions. Branch and Bound is a general semi-enumerative

More information

Integer Programming. Wolfram Wiesemann. December 6, 2007

Integer Programming. Wolfram Wiesemann. December 6, 2007 Integer Programming Wolfram Wiesemann December 6, 2007 Contents of this Lecture Revision: Mixed Integer Programming Problems Branch & Bound Algorithms: The Big Picture Solving MIP s: Complete Enumeration

More information

min3x 1 + 4x 2 + 5x 3 2x 1 + 2x 2 + x 3 6 x 1 + 2x 2 + 3x 3 5 x 1, x 2, x 3 0.

min3x 1 + 4x 2 + 5x 3 2x 1 + 2x 2 + x 3 6 x 1 + 2x 2 + 3x 3 5 x 1, x 2, x 3 0. ex-.-. Foundations of Operations Research Prof. E. Amaldi. Dual simplex algorithm Given the linear program minx + x + x x + x + x 6 x + x + x x, x, x. solve it via the dual simplex algorithm. Describe

More information

Solving nonconvex MINLP by quadratic approximation

Solving nonconvex MINLP by quadratic approximation Solving nonconvex MINLP by quadratic approximation Stefan Vigerske DFG Research Center MATHEON Mathematics for key technologies 21/11/2008 IMA Hot Topics Workshop: Mixed-Integer Nonlinear Optimization

More information

Solving Box-Constrained Nonconvex Quadratic Programs

Solving Box-Constrained Nonconvex Quadratic Programs Noname manuscript No. (will be inserted by the editor) Solving Box-Constrained Nonconvex Quadratic Programs Pierre Bonami Oktay Günlük Jeff Linderoth June 13, 2016 Abstract We present effective computational

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

An Improved Approach For Solving Mixed-Integer Nonlinear Programming Problems

An Improved Approach For Solving Mixed-Integer Nonlinear Programming Problems International Refereed Journal of Engineering and Science (IRJES) ISSN (Online) 2319-183X, (Print) 2319-1821 Volume 3, Issue 9 (September 2014), PP.11-20 An Improved Approach For Solving Mixed-Integer

More information

Computational Finance

Computational Finance Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples

More information

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization

Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel

More information

GLOBAL OPTIMIZATION WITH GAMS/BARON

GLOBAL OPTIMIZATION WITH GAMS/BARON GLOBAL OPTIMIZATION WITH GAMS/BARON Nick Sahinidis Chemical and Biomolecular Engineering University of Illinois at Urbana Mohit Tawarmalani Krannert School of Management Purdue University MIXED-INTEGER

More information

Relaxations and Randomized Methods for Nonconvex QCQPs

Relaxations and Randomized Methods for Nonconvex QCQPs Relaxations and Randomized Methods for Nonconvex QCQPs Alexandre d Aspremont, Stephen Boyd EE392o, Stanford University Autumn, 2003 Introduction While some special classes of nonconvex problems can be

More information

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Nilay Noyan Andrzej Ruszczyński March 21, 2006 Abstract Stochastic dominance relations

More information

Structured Problems and Algorithms

Structured Problems and Algorithms Integer and quadratic optimization problems Dept. of Engg. and Comp. Sci., Univ. of Cal., Davis Aug. 13, 2010 Table of contents Outline 1 2 3 Benefits of Structured Problems Optimization problems may become

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial

More information

Inexact Solution of NLP Subproblems in MINLP

Inexact Solution of NLP Subproblems in MINLP Ineact Solution of NLP Subproblems in MINLP M. Li L. N. Vicente April 4, 2011 Abstract In the contet of conve mied-integer nonlinear programming (MINLP, we investigate how the outer approimation method

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 8 Discrete optimization: theory and algorithms

MVE165/MMG631 Linear and integer optimization with applications Lecture 8 Discrete optimization: theory and algorithms MVE165/MMG631 Linear and integer optimization with applications Lecture 8 Discrete optimization: theory and algorithms Ann-Brith Strömberg 2017 04 07 Lecture 8 Linear and integer optimization with applications

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

1 Review Session. 1.1 Lecture 2

1 Review Session. 1.1 Lecture 2 1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions

More information

Disconnecting Networks via Node Deletions

Disconnecting Networks via Node Deletions 1 / 27 Disconnecting Networks via Node Deletions Exact Interdiction Models and Algorithms Siqian Shen 1 J. Cole Smith 2 R. Goli 2 1 IOE, University of Michigan 2 ISE, University of Florida 2012 INFORMS

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Optimization. Yuh-Jye Lee. March 28, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 40

Optimization. Yuh-Jye Lee. March 28, Data Science and Machine Intelligence Lab National Chiao Tung University 1 / 40 Optimization Yuh-Jye Lee Data Science and Machine Intelligence Lab National Chiao Tung University March 28, 2017 1 / 40 The Key Idea of Newton s Method Let f : R n R be a twice differentiable function

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

A Continuation Approach Using NCP Function for Solving Max-Cut Problem

A Continuation Approach Using NCP Function for Solving Max-Cut Problem A Continuation Approach Using NCP Function for Solving Max-Cut Problem Xu Fengmin Xu Chengxian Ren Jiuquan Abstract A continuous approach using NCP function for approximating the solution of the max-cut

More information

Constrained optimization: direct methods (cont.)

Constrained optimization: direct methods (cont.) Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a

More information

4y Springer NONLINEAR INTEGER PROGRAMMING

4y Springer NONLINEAR INTEGER PROGRAMMING NONLINEAR INTEGER PROGRAMMING DUAN LI Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong Shatin, N. T. Hong Kong XIAOLING SUN Department of Mathematics Shanghai

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

SOLVING INTEGER LINEAR PROGRAMS. 1. Solving the LP relaxation. 2. How to deal with fractional solutions?

SOLVING INTEGER LINEAR PROGRAMS. 1. Solving the LP relaxation. 2. How to deal with fractional solutions? SOLVING INTEGER LINEAR PROGRAMS 1. Solving the LP relaxation. 2. How to deal with fractional solutions? Integer Linear Program: Example max x 1 2x 2 0.5x 3 0.2x 4 x 5 +0.6x 6 s.t. x 1 +2x 2 1 x 1 + x 2

More information

Research Article A Novel Differential Evolution Invasive Weed Optimization Algorithm for Solving Nonlinear Equations Systems

Research Article A Novel Differential Evolution Invasive Weed Optimization Algorithm for Solving Nonlinear Equations Systems Journal of Applied Mathematics Volume 2013, Article ID 757391, 18 pages http://dx.doi.org/10.1155/2013/757391 Research Article A Novel Differential Evolution Invasive Weed Optimization for Solving Nonlinear

More information

Integer programming (part 2) Lecturer: Javier Peña Convex Optimization /36-725

Integer programming (part 2) Lecturer: Javier Peña Convex Optimization /36-725 Integer programming (part 2) Lecturer: Javier Peña Convex Optimization 10-725/36-725 Last time: integer programming Consider the problem min x subject to f(x) x C x j Z, j J where f : R n R, C R n are

More information

The CPLEX Library: Mixed Integer Programming

The CPLEX Library: Mixed Integer Programming The CPLEX Library: Mixed Programming Ed Rothberg, ILOG, Inc. 1 The Diet Problem Revisited Nutritional values Bob considered the following foods: Food Serving Size Energy (kcal) Protein (g) Calcium (mg)

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information