Conic optimization under combinatorial sparsity constraints

Christoph Buchheim and Emiliano Traversi

Abstract. We present a heuristic approach for conic optimization problems containing sparsity constraints. The latter can be cardinality constraints, but our approach also covers more complex constraints on the support of the solution. For the special case that the support is required to belong to a matroid, we propose an exchange heuristic that adapts the support in every iteration. The entering non-zero is determined by considering the dual of the given conic problem in which the variables not belonging to the current support are fixed to zero. While this algorithm is purely heuristic, we show experimentally that it often finds solutions very close to the optimal ones in the case of the cardinality-constrained knapsack problem and for mean-variance optimization problems.

1 Introduction

Consider a conic optimization problem in standard form:

$$\min\; c^\top x \quad \text{s.t.}\; Ax = b,\; x \in K. \tag{1}$$

Here, $A \in \mathbb{R}^{m \times n}$ is a matrix, $c \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$ are vectors, and $K \subseteq \mathbb{R}^n$ is a closed convex cone. We aim at finding sparse solutions of Problem (1), where the notion of sparsity is given by a combinatorial set $T \subseteq \{0,1\}^n$ containing the feasible non-zero sets. In the following, we assume w.l.o.g. that $T$ is hereditary, i.e., if $t \in T$ and $t' \le t$, then also $t' \in T$. Throughout this paper, we only assume that $T$ is accessible via a linear optimization oracle. We are particularly interested in the case where $T$ is a matroid, which directly generalizes the case of an ordinary cardinality constraint of the type $\|x\|_0 \le k$.

This work was partially supported by the German Research Foundation (DFG) under grant no. BU 2313/4. Authors' addresses: Fakultät für Mathematik, TU Dortmund, Vogelpothsweg 87, Dortmund, Germany, christoph.buchheim@tu-dortmund.de; LIPN, Université Paris 13, 99 Avenue Jean-Baptiste Clément, Villetaneuse, France, emiliano.traversi@lipn.univ-paris13.fr

The problem addressed can thus be written as

$$\min\; c^\top x \quad \text{s.t.}\; Ax = b,\; x \in K,\; x_i = 0 \text{ if } t_i = 0,\; t \in T. \tag{2}$$

Note that we optimize over both $x$ and $t$ here. Not surprisingly, Problem (2) is NP-hard; this has been shown by de Farias and Nemhauser [4] for the special case that (1) is a continuous knapsack problem and $T$ is a uniform matroid, i.e., when we search for cardinality-constrained solutions. The second-last constraint in (2) can be replaced by the complementarity constraints $x_i(1 - t_i) = 0$ for $i = 1, \dots, n$. Problem (2) is equivalent to finding a $t \in T$ that minimizes

$$c_t := \min\; c^\top x \quad \text{s.t.}\; Ax = b,\; x \in K,\; x_i = 0 \text{ if } t_i = 0. \tag{3}$$

We can thus rewrite (2) as a combinatorial optimization problem over $T$ with a non-linear objective function:

$$\min_{t \in T}\; c_t. \tag{4}$$

Our proposed heuristic for solving Problem (2) tries to solve (4) by iteratively adapting the sparsity pattern $t$. In the general case, we propose to compute a new pattern $t \in T$ by optimizing a certain function over $T$, where the coefficient for each $t_i$ set to zero in the last solution is calculated from the dual solution of (3). In the case that $T$ is a matroid, we can also consider iterations where only one pair of entries in $t$ is swapped; more precisely, we first decide which fixing to release and then fix another variable.

Apart from the case where $T$ models a cardinality constraint, our approach is motivated by another application: we are interested in finding a symmetric matrix $X$ such that $Q \succeq X$ for a given symmetric matrix $Q$ and such that the non-zero entries of $X$ form a forest in the complete graph indexed by the rows (or columns) of $X$. Given such a matrix $X$, one can compute an underestimator of the quadratic binary optimization problem

$$\min\; x^\top Q x \quad \text{s.t.}\; x \in \{0,1\}^n \tag{5}$$

by replacing $Q$ by $X$ and then solving the resulting sparse problem, which can be done in polynomial time. The optimal value of the latter problem is then a lower bound for the original problem, which can be used to strengthen the bounds obtained from separable underestimators within the branch-and-bound algorithm presented in [2].

This paper is organized as follows. In Section 2, we give some theoretical background motivating our approach. The primal heuristic is devised

in Section 3, both for the matroid case and for the general case. Finally, the results of an experimental evaluation are presented in Section 4. They show that the heuristic is capable of finding near-optimal solutions in very short running time for various types of problems, where we consider three different underlying cones: the non-negative orthant, the cone of semidefinite matrices, and the second-order cone. Section 5 concludes.

2 Motivation and preliminaries

The crucial step in our algorithm is to decide which fixings $x_i = 0$ should be relaxed in the next iteration. For this, we use dual information: the idea is that the dual variable for the constraint $x_i = 0$ tells us how promising it is to relax this constraint. In the following, let $c_t$ denote the restriction of $c$ to the components $i$ with $t_i = 1$ and $A_t$ the restriction of $A$ to the columns $i$ with $t_i = 1$. Moreover, let $K_t$ denote the intersection of $K$ with the subspace $\mathbb{R}^t$, where the latter is spanned by the dimensions $i$ with $t_i = 1$, and let $K^t$ denote the projection of $K$ to $\mathbb{R}^t$. It is easy to verify that both $K_t$ and $K^t$ are again convex cones. Finally, let $\bar t$ denote the binary complement vector of $t$. After eliminating the fixed variables in Problem (3), we thus obtain

$$c_t = \min\; c_t^\top \hat x \quad \text{s.t.}\; A_t \hat x = b,\; \hat x \in K_t. \tag{6}$$

For the following, we have to make an assumption on $K$ and $T$:

Assumption 1. For each $t \in T$, we have $K_t = K^t$.

It is easy to verify that Assumption 1 is satisfied in the following cases: (i) $K = \mathbb{R}^n_+$, the non-negative orthant, and $T$ is arbitrary; (ii) $K = S^n_+$, the semidefinite cone, and $t_{ii} = 1$ for all $t \in T$; (iii) $K = \mathcal{K}^n$, the second-order cone, and $t_0 = 1$ for all $t \in T$. The restriction in (ii) means that only off-diagonal elements may be fixed to zero, while in (iii) we are allowed to fix only the right-hand-side variables in the second-order constraint $x_0 \ge \|(x_1, \dots, x_n)\|_2$.

Lemma 2. Let $K$ and $T$ satisfy Assumption 1, and let $y^* \in \mathbb{R}^m$ be a dual optimal solution of Problem (6) for some $t \in T$. Then $(y^*, c_{\bar t} - A_{\bar t}^\top y^*)$ is a dual optimal solution of Problem (3).

Proof: Define $d := c_{\bar t} - A_{\bar t}^\top y^*$ and $z := c - A^\top y^* - I_{\bar t}\, d$. As $y^*$ is dual feasible for (6), we have $z_t = c_t - A_t^\top y^* \in (K_t)^*$. Moreover, $z_{\bar t} = c_{\bar t} - A_{\bar t}^\top y^* - d = 0$. By Assumption 1, $z_t \in (K^t)^*$, and since $z_{\bar t} = 0$, every $x \in K$ satisfies $z^\top x = z_t^\top x_t \ge 0$ because $x_t \in K^t$; thus $z \in K^*$. We obtain that $(y^*, d)$ is dual feasible for (3), with objective value $b^\top y^* = c_t$.

This result shows that optimal values for the dual variables corresponding to the fixings can easily be computed from a dual solution of the lower-dimensional problem (6). However, a direct application of this approach does not work, as the optimal dual solution of Problem (3) is in general not unique, even if Problem (6) has a unique dual optimal solution. This is always the case for linear programs:

Lemma 3. Let $t \in T$ be given and let $K = \mathbb{R}^n_+$. Let $y^* \in \mathbb{R}^m$ be a dual optimal solution of Problem (6). Then $(y^*, d)$ is a dual optimal solution of Problem (3) if and only if $c_{\bar t} - A_{\bar t}^\top y^* - d \in (K^*)_{\bar t}$, i.e., if and only if $d \le c_{\bar t} - A_{\bar t}^\top y^*$.

Proof: First, let $(y^*, d)$ be dual optimal for Problem (3). Then dual feasibility means $c - A^\top y^* - I_{\bar t}\, d \in K^*$ and hence $c_{\bar t} - A_{\bar t}^\top y^* - d = (c - A^\top y^* - I_{\bar t}\, d)_{\bar t} \in (K^*)_{\bar t}$, which implies $d \le c_{\bar t} - A_{\bar t}^\top y^*$. Conversely, let $c_{\bar t} - A_{\bar t}^\top y^* - d \in (K^*)_{\bar t}$ and define $z := c - A^\top y^* - I_{\bar t}\, d$. As $y^*$ is dual feasible for (6), we have $z_t = c_t - A_t^\top y^* \in (K_t)^* = \mathbb{R}^t_+$. Moreover, $z_{\bar t} = c_{\bar t} - A_{\bar t}^\top y^* - d \in (K^*)_{\bar t} = \mathbb{R}^{\bar t}_+$. Thus $z \ge 0$ and hence $z \in K^*$. The rest of the proof is analogous to the proof of Lemma 2.

However, it is still a natural idea to choose $d := c_{\bar t} - A_{\bar t}^\top y^*$, since this is what we obtain when considering the limits of the dual variables of the problem

$$\min\; c^\top x \quad \text{s.t.}\; Ax = b,\; x \in K,\; x_i \le \varepsilon \text{ if } t_i = 0$$

for $\varepsilon \to 0$. Apart from the advantage of uniqueness (provided that (6) has a unique optimal dual solution), it is reasonable to consider the dual variables in the limit $\varepsilon \to 0$ instead of the dual variables for $\varepsilon = 0$, as we are interested in how much improvement we can obtain when moving $x_i$ to a non-zero value.

3 Primal heuristic

In order to describe our heuristic algorithm for solving Problem (2), we need to specify the choice of an initial solution and the update rule for the support. These topics are discussed in the following subsections.

3.1 Initial solution

A simple method to find an initial solution $t^{(0)}$ for our heuristic would be to start with $t_i = 1$ for all $i$ and then greedily remove the index $l^* := \operatorname{argmin}_{l:\, t_l = 1} c_l x_l^*$ until $t \in T$, where $x^*$ is the solution of (3) for the current $t$. In our implementation, we use the following alternative, which is again based on the idea of using dual variables: we first solve Problem (3) with $t = 0$, i.e., with all variables fixed to zero, thus obtaining dual multipliers $d_j$ for all variables.
Then we choose $t^{(0)}$ so as to maximize $d^\top t$ over $T$, which can be done by calling the linear optimization oracle for $T$ that we assume to have at hand. Compared to the first method, this approach has the advantage that the solver for (3) has to be called only once. Moreover, it is consistent with the subsequent iterations, in which the choice of non-zeroes is also based on the value of $d$.
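The greedy algorithm solves linear optimization over any matroid exactly, so the oracle call above is cheap. A minimal sketch for the uniform matroid (a plain cardinality constraint); the function name and the numerical data are ours, not from the paper:

```python
# Choosing the initial support t(0) by maximizing d^T t over a matroid.
# Since T is hereditary, entries with non-positive multipliers are skipped.
import numpy as np

def greedy_oracle_uniform(d, k):
    """Maximize d^T t over the uniform matroid {t : sum(t) <= k}.
    The greedy algorithm is exact for linear optimization over matroids."""
    t = np.zeros(len(d), dtype=bool)
    for j in np.argsort(-d)[:k]:   # indices by decreasing multiplier
        if d[j] > 0:               # adding a non-positive coefficient cannot help
            t[j] = True
    return t

d = np.array([0.5, -1.0, 2.0, 1.5, 0.1])   # dual multipliers from (3) with t = 0
t0 = greedy_oracle_uniform(d, k=2)
print(t0)   # support {2, 3}: the two largest positive multipliers
```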

3.2 Update rules

For the update of $t^{(i)}$ to $t^{(i+1)}$, we distinguish between the matroid case and the general case. For general $T$, we proceed as follows:

1. Set $i := 0$ and choose $t^{(0)} \in T$.
2. Compute $c_{t^{(i)}}$ as in (3) and obtain an optimal solution $x^*$ and dual multipliers $d_j$ for the constraints $x_j = 0$ (for $t^{(i)}_j = 0$).
3. Solve the combinatorial optimization problem $\max_{t \in T} \big( \sum_{j:\, t^{(i)}_j = 0} d_j t_j + \sum_{j:\, t^{(i)}_j = 1} c_j x_j^*\, t_j \big)$ using the oracle, and let $t^{(i+1)}$ be the resulting optimizer.
4. If $t^{(i+1)} \neq t^{(k)}$ for all $k \le i$, set $i := i + 1$ and go to 2.

Note that, depending on the underlying cone $K$ and the optimization approach used to solve (3), one could compute $c_{t^{(i)}}$ in Step 2 by reoptimization, since usually only few fixings are updated. In the case that $T$ is a matroid, we propose the following alternative method, which turns out to outperform the previous approach significantly; see Section 4:

1. Set $i := 0$ and choose $t^{(0)} \in T$.
2. Compute $c_{t^{(i)}}$ as in (3) and obtain an optimal solution $x^*$ and dual multipliers $d_j$ for the constraints $x_j = 0$ (for $t^{(i)}_j = 0$).
3. Find $k^* := \operatorname{argmax}_{k:\, t^{(i)}_k = 0}\, d_k$.
4. Find $l^* := \operatorname{argmin}_{l:\, t^{(i)} + e_{k^*} - e_l \in T}\, c_l x_l^*$.
5. Set $t^{(i+1)} := t^{(i)} + e_{k^*} - e_{l^*}$.
6. If $t^{(i+1)} \neq t^{(k)}$ for all $k \le i$, set $i := i + 1$ and go to 2.

The assumption that $T$ is a matroid guarantees that there exists at least one candidate for $l^*$ different from $k^*$ in Step 4. For comparison, still in the matroid case, we also consider the straightforward 2-opt update in our experiments: here, we enumerate all pairs $k, l$ such that $t^{(i)} + e_k - e_l \in T$ and choose the one leading to the smallest value $c_{t^{(i+1)}}$ in the next iteration.

4 Experimental results

In this section, we provide an experimental evaluation of our approach based on a wide class of test instances from different problem types.
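Before turning to the experiments, the matroid exchange update described above can be sketched for the simplest setting, $K = \mathbb{R}^n_+$ with a cardinality constraint, using scipy's LP solver for the restricted problem (3). The function names and the tiny instance are ours; note that for the minimization form of (3) and the multipliers $d = c - A^\top y^*$, the most promising fixing to release is the one with the most negative multiplier.

```python
# One matroid exchange step for K = R^n_+ and a cardinality constraint
# (uniform matroid). This is an illustrative sketch, not the authors' code.
import numpy as np
from scipy.optimize import linprog

def solve_restricted(c, A, b, t):
    """Solve (3) with support t: min c^T x, Ax = b, x >= 0, x_i = 0 off t.
    Returns x*, the value c_t, and the multipliers d = c - A^T y*."""
    bounds = [(0.0, np.inf if ti else 0.0) for ti in t]  # fix via zero bounds
    res = linprog(c, A_eq=A, b_eq=b, bounds=bounds, method="highs")
    y = res.eqlin.marginals            # duals of the equality constraints
    return res.x, res.fun, c - A.T @ y

def exchange_step(c, A, b, t):
    """Release the most promising fixing (most negative multiplier, since
    (3) is a minimization) and drop the support element with the smallest
    contribution c_l * x_l, as in Steps 3-5 of the matroid update."""
    x, val, d = solve_restricted(c, A, b, t)
    fixed = np.where(~t)[0]
    k_star = fixed[np.argmin(d[fixed])]           # entering index
    supp = np.where(t)[0]
    l_star = supp[np.argmin(c[supp] * x[supp])]   # leaving index
    t_new = t.copy()
    t_new[k_star], t_new[l_star] = True, False
    return t_new

c = np.array([3.0, 1.0, 4.0, 2.0])
A = np.array([[1.0, 1.0, 1.0, 1.0]])
b = np.array([1.0])
t = np.array([True, False, False, False])          # initial support {0}, k = 1
t = exchange_step(c, A, b, t)
print(solve_restricted(c, A, b, t)[1])             # improved value 1.0
```

On this instance, a single exchange moves the support from {0} (value 3) to the optimal support {1} (value 1).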
More precisely, we test our approach on the following sparse conic optimization problems:

- the Cardinality Constrained Continuous Knapsack Problem (CCKP),
- the Sparse Mean-Variance Optimization Problem (MVO),
- the Tree Underestimator Problem for Binary Quadratic Problems (TUP).

For each problem, we provide a brief description, a mathematical model in the form of (2), and the test-bed used in the experiments. The aim of this section is two-fold: (i) first, we would like to motivate the proposed concept of sparsity. The TUP uses the concept of sparsity presented in this paper to generalize the methodology proposed in [2] for efficiently solving combinatorial problems with a quadratic objective function, and in Section 4.1 we show how the TUP allows us to improve the dual bounds obtained in the approach presented in [2]. (ii) Second, in the tests concerning the CCKP and the MVO, we show that our heuristic procedure is able to provide almost-optimal solutions within a short amount of time.

For assessing the performance of our algorithm, we use CPLEX 12.6 [3] with an optimality tolerance of …. The second-last constraint in Problem (2) is modeled as a special ordered set (SOS) constraint. For CCKP and MVO, we also use CPLEX to solve Problem (3) as needed in Step 2 of the algorithm: in the former case, we call the LP solver, while in the latter we need the SOCP solver of CPLEX. In both cases, dual solutions are also provided by CPLEX. For TUP, we use the SDP solver CSDP [1]. All experiments were carried out on Intel Xeon processors running at 2.50 GHz.

4.1 Tree underestimator problem (TUP)

Problem description. The TUP is a generalization of the bounding procedure proposed in [2], where the authors devise a branch-and-bound framework to solve combinatorial problems with a quadratic objective function. For completeness, we recall the basic idea in the following, referring the reader to [2] for a detailed description of the approach. In order to be consistent with the rest of this paper, we use a different notation here.
The goal is to provide dual bounds for problems of the following shape:

$$\min_{z \in Z}\; f(z) = z^\top Q z + L^\top z,$$

where $Q \in \mathbb{R}^{n \times n}$ is a symmetric (not necessarily positive semidefinite) matrix, $L \in \mathbb{R}^n$ is a vector, and $Z \subseteq \{0,1\}^n$. More precisely, the idea is to focus on combinatorial optimization problems where the linear counterpart $\min_{z \in Z} c^\top z$ can be solved efficiently for any vector $c \in \mathbb{R}^n$. For a given vector $x \in \mathbb{R}^n$, let $\mathrm{Diag}(x)$ be the square matrix with all entries equal to zero, except for the diagonal, which contains $x$. For a given objective function $f(z) = z^\top Q z + L^\top z$, the separable function

$$g_x(z) = z^\top \mathrm{Diag}(x)\, z + L^\top z = \sum_{i=1}^n x_i z_i^2 + \sum_{i=1}^n L_i z_i$$

is a valid underestimator of $f$ provided that $Q \succeq \mathrm{Diag}(x)$, i.e., that the matrix $Q - \mathrm{Diag}(x)$ is positive semidefinite.
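The underestimation property is easy to check numerically: whenever $Q - \mathrm{Diag}(x)$ is positive semidefinite, $g_x(z) \le f(z)$ holds for every $z$. The small matrix below is an arbitrary example of ours, not taken from the test-bed:

```python
# Numerical check of the separable underestimator property.
import itertools
import numpy as np

Q = np.array([[4.0, 1.0, -2.0],
              [1.0, 3.0, 0.5],
              [-2.0, 0.5, 5.0]])
L = np.array([1.0, -1.0, 0.0])
x = np.array([1.0, 1.0, 2.0])            # candidate diagonal shift

# Q - Diag(x) must be positive semidefinite for g_x to underestimate f.
assert np.linalg.eigvalsh(Q - np.diag(x)).min() >= -1e-9

f = lambda z: z @ Q @ z + L @ z
g = lambda z: z @ np.diag(x) @ z + L @ z
# On binary z we have z_i^2 = z_i, so g_x(z) = x^T z + L^T z is linear.
for z in itertools.product([0, 1], repeat=3):
    z = np.array(z, dtype=float)
    assert g(z) <= f(z) + 1e-9
print("g_x underestimates f on {0,1}^3")
```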

It is possible to identify a good (non-convex) separable underestimator for $f(z)$ by solving the following semidefinite optimization problem:

$$\max\{\mathbf{1}^\top x : Q \succeq \mathrm{Diag}(x)\}. \tag{7}$$

The idea behind the proposed underestimator method is that minimizing the separable function $g_x(z)$ is equivalent to minimizing $x^\top z + L^\top z$, since the variables $z$ are binary. As already mentioned, the remaining combinatorial optimization problem can be solved efficiently, and hence its computation can be incorporated into a branch-and-bound framework. The computation of the underestimator according to (7) is done in a preprocessing phase by solving a series of semidefinite programs.

A generalization of the method proposed in [2] can be obtained by allowing non-zero entries in some of the off-diagonal positions of the quadratic underestimator. It is easy to see that the resulting problem $\min_{z \in Z} z^\top X z + L^\top z$ can still be solved efficiently if the non-zero entries of $X$ correspond to a forest in the complete graph $K_n$ on $n$ vertices, the latter corresponding to the rows and columns of $Q$. Choosing $T$ accordingly, the problem of finding such a matrix $X$ leading to a good underestimator is then of type (2).

Conic formulation. The TUP can be formulated as follows:

$$\max\; \langle J_n, X \rangle \quad \text{s.t.}\; Q \succeq X,\; X_{ij} = 0 \text{ if } t_{ij} = 0 \text{ for } i \neq j,\; t \in T, \tag{8}$$

where $Q \in \mathbb{R}^{n \times n}$ is a symmetric matrix, $J_n \in \mathbb{R}^{n \times n}$ is the all-ones matrix, and $T$ is the set of incidence vectors of spanning forests of $K_n$. Note that Assumption 1 holds in (8), since we never restrict any diagonal entry of $X$ to be zero. As $T$ is a matroid here, we can use the update rule based on pairwise exchanges; see Section 3.2.

Test-bed used. Our test-bed consists of the matrices $Q$ used in the binary quadratic instances of the Biq Mac library; see [5]. The linear part is always zero. There are 165 instances in total, subdivided into three problem classes (beasley, be, and gka), with the number of variables ranging from 20 to 500.
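Since $T$ is the graphic matroid of $K_n$ here, the linear optimization oracle required by the heuristic is simply Kruskal's greedy algorithm for maximum-weight forests. A minimal union-find sketch (all names are ours):

```python
# Greedy oracle for the graphic matroid: maximum-weight spanning forest.
def max_weight_forest(n, weights):
    """weights: dict {(i, j): w} on edges of K_n; returns the edge set of a
    maximum-weight forest (only positively weighted edges can help)."""
    parent = list(range(n))
    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    forest = []
    for (i, j), w in sorted(weights.items(), key=lambda e: -e[1]):
        if w <= 0:                    # remaining edges cannot improve the forest
            break
        ri, rj = find(i), find(j)
        if ri != rj:                  # edge joins two components: no cycle
            parent[ri] = rj
            forest.append((i, j))
    return forest

w = {(0, 1): 3.0, (1, 2): 2.0, (0, 2): 1.5, (2, 3): -1.0}
print(max_weight_forest(4, w))        # [(0, 1), (1, 2)]
```

The edge (0, 2) is skipped because it would close a cycle, and the negatively weighted edge (2, 3) is never taken.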
Analysis of the results. Our aim is to compare the dual bounds obtained from the new tree underestimators with those obtained from separable underestimators as in [2]. The results are given in Table 1. The instances are grouped horizontally according to their family (type) and number of variables (size). For each group of instances, we state the number of corresponding instances (# inst) as well as the following average information: the total cpu time in seconds needed by our algorithm to terminate (time), which is mostly spent on iteratively solving

the semidefinite problem (8) for fixed $t$; the number of iterations (# iter); and the dual bounds provided by the separable underestimator and by the tree underestimator (sep bnd and tree bnd).

[Table 1: Results for TUP — columns: type, size, # inst, time, # iter, sep bnd, tree bnd; instance families beasley, be, and gka; the numeric entries were lost in extraction.]

The results clearly show that the use of the tree underestimator significantly improves the quality of the dual bounds. The running times are small enough to be clearly dominated by the time needed for the main phase of the approach presented in [2].

4.2 Cardinality constrained continuous knapsack problem (CCKP)

Problem description. The CCKP is a continuous ($m$-dimensional) knapsack problem in which at most $k$ variables are allowed to be strictly greater than zero, for some given integer $k$. The CCKP has been introduced and investigated in [4].

Conic formulation. The CCKP can be formulated as follows:

$$\max\; c^\top x \quad \text{s.t.}\; a_j^\top x \le b_j \;(j = 1, \dots, m),\; x \ge 0,\; x_i = 0 \text{ if } t_i = 0,\; t \in T, \tag{9}$$

where $c, a_j \in \mathbb{R}^n_+$ and $T$ is the set of binary vectors containing at most $k$ ones. In other words, $T$ is the uniform matroid, so that we can again use all update rules presented in Section 3.2.

Test-bed used. We use two sets of instances:

De Farias et al. instances. These are the instances used in [4]. They contain up to 8000 variables ($n$) and 70 knapsack constraints ($m$); the values of $k$ and the densities of the non-zero entries of the knapsack constraints ($d$) were chosen so as to make the instances computationally hard.

Random instances. We produced two additional sets of random instances, one set of small instances ($n$ up to 300 and $m$ up to 50) and another set of large instances ($n$ up to 8000 and $m$ up to 5000). In both sets, the knapsack constraints have a density $d$ of 100 %, while various cardinalities $k$ are considered, all in the (small) range where the resulting problems turn out to be non-trivial. Whenever showing results for random instances, we state averages over 10 instances in each line of the tables.

Analysis of the results. We first use Table 2 to compare the performance of three different update rules of our algorithm on the small instances: (1) the matroid exchange, (2) the general exchange, and (3) the optimal pairwise exchange (2-opt). For each option 1–3, we show the best primal bound obtained (best), the computation time in seconds (time), and the number of iterations (iter). The values of the primal bounds are normalized by dividing them by the best one. As expected, the optimal pairwise exchange provides the best primal solution for each instance. However, it is several orders of magnitude slower in terms of computing time; for this reason, it is not suitable for tackling bigger instances. Moreover, the difference in solution quality is very small: it is below 1 % for the matroid exchange and below 4 % for the general exchange.
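The 2-opt baseline used in this comparison enumerates every feasible swap and re-solves the restricted problem for each candidate, which explains its much higher running time. A sketch for the LP case with a cardinality constraint (function names and data are ours):

```python
# 2-opt update for K = R^n_+ with a cardinality constraint: try all
# entering/leaving pairs (k, l) and keep the best neighboring support.
import numpy as np
from scipy.optimize import linprog

def restricted_value(c, A, b, t):
    """Optimal value of (3): min c^T x, Ax = b, x >= 0, x_i = 0 off support."""
    bounds = [(0.0, np.inf if ti else 0.0) for ti in t]
    return linprog(c, A_eq=A, b_eq=b, bounds=bounds, method="highs").fun

def two_opt_step(c, A, b, t):
    best_t, best_val = t, restricted_value(c, A, b, t)
    for k in np.where(~t)[0]:
        for l in np.where(t)[0]:
            t_new = t.copy()
            t_new[k], t_new[l] = True, False
            val = restricted_value(c, A, b, t_new)
            if val is not None and val < best_val:   # None: swap infeasible
                best_t, best_val = t_new, val
    return best_t, best_val

c = np.array([3.0, 1.0, 4.0, 2.0])
A = np.array([[1.0, 1.0, 1.0, 1.0]])
b = np.array([1.0])
t = np.array([True, False, False, False])
t, val = two_opt_step(c, A, b, t)
print(val)   # 1.0: the swap to support {1} is the best neighbor for k = 1
```

Each 2-opt step costs one conic solve per feasible pair, against a single solve per step for the matroid exchange rule.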
When comparing the general exchange rule with the matroid exchange, it turns out that the matroid rule yields better (or equally good) results on all instances, while using roughly the same number of iterations and the same running time on average. This trend is confirmed by the results for large random instances presented in Table 3; there, the matroid exchange rule even outperforms the general exchange rule in terms of running time. For this reason, we use the matroid exchange rule in the following experiments. Note that the running time of all three heuristics is dominated by the time needed to solve (3) for fixed $t$ using CPLEX.

In Table 4, we present the results for the de Farias et al. instances, comparing the performance of our heuristic with that of the commercial solver CPLEX. The table first presents the primal bound, the time, and the number of iterations of the heuristic (pheur, theur, and itheur), and then the best primal and dual bounds and the

[Table 2: Results for small random instances of CCKP, comparing matroid exchange, general exchange, and 2-opt heuristics — columns: n, m, k, and best/time/iter for each of the three update rules; the numeric entries were lost in extraction.]

[Table 3: Results for large random instances of CCKP, comparing matroid exchange and general exchange heuristics — columns: n, m, k, and best/time/iter for both update rules; the numeric entries were lost in extraction.]

computing time needed by CPLEX (pcplex, dcplex, and tcplex). We set a time limit of one cpu hour.

[Table 4: De Farias et al. instances, matroid heuristic vs. CPLEX — columns: m, n, k, d, pheur, theur, itheur, pcplex, dcplex, tcplex; the numeric entries were lost in extraction, with TL marking runs that reached the time limit.]

The results show that, within a few iterations and seconds, the heuristic provides an optimal solution in 5 out of 16 instances; for the remaining instances, the solution provided is less than 0.3 % worse than the optimal one. To assess the solution quality of our heuristic on a larger set of instances, we again consider random instances and solve them to optimality by CPLEX, now without a time limit; see Table 5. Here, the value %gap states how far our heuristic solution is from optimality. While some of the instances are very hard to solve by CPLEX, our heuristic always finds a solution within 0.88 % of optimality in a running time that never exceeds 0.04 seconds for n = 200 and 0.12 seconds for n = ….

4.3 Sparse mean-variance optimization (MVO)

Problem description. The CCKP discussed above plays an important role in portfolio optimization, where the aim is to choose a set of investments that does not exceed the budget of the investor and that is sparse, i.e., only a certain number of different assets may be chosen. In such problems, however, one is usually interested not only in the expected return of the investments but also in the risk involved, which can be measured by the variance.

Conic formulation. One way to model such problems is as follows:

$$\max\; c^\top x - \varepsilon\, x^\top Q x \quad \text{s.t.}\; a_j^\top x \le b_j \;(j = 1, \dots, m),\; x \ge 0,\; x_i = 0 \text{ if } t_i = 0,\; t \in T,$$

[Table 5: Random instances of CCKP, comparison with CPLEX — columns: n, m, k, %gap, theur, itheur, tcplex, itcplex; the numeric entries were lost in extraction.]

where $c, a_j \in \mathbb{R}^n_+$ and $T$ are as in CCKP, but now the objective contains the risk term $x^\top Q x$ defined by the covariance matrix $Q$, weighted by some $\varepsilon > 0$. As in CCKP, the set $T$ is the uniform matroid, but now the risk term can be modeled using the second-order cone $\mathcal{K}^n$ in a standard way. In the resulting conic problem, Assumption 1 is satisfied again.

Test-bed used. As test-bed, we use exactly the same instances as in the CCKP case. For the risk term, we set $\varepsilon = 1$ and compute $Q$ randomly as follows: we first compute a matrix $A$ with all entries chosen uniformly at random in $[0, 1]$ and then set $Q = (AA^\top)^{-1}$.

Analysis of the results. In Tables 6–8, we repeat the experiments of Tables 2, 3, and 5, now including the risk part in the objective function. However, since the problem is now harder to solve, we use smaller instances. Altogether, it turns out that the results for our heuristic are similar to the CCKP case.

5 Conclusion

We presented a heuristic for a very general class of sparse conic optimization problems, based on the idea of iteratively updating the support of the solution. In the case that the sparsity is defined by a matroid, we devise an exchange heuristic for updating the support that is based on dual information. Our experiments show that this approach often yields very good solutions in short running time for different types of problems, including the cardinality-constrained continuous knapsack problem.

References

[1] Brian Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11/12(1–4):613–623, 1999.

[2] Christoph Buchheim and Emiliano Traversi. Quadratic combinatorial optimization using separable underestimators. INFORMS Journal on Computing, to appear.

[3] IBM ILOG CPLEX Optimizer.

[4] Ismael R. de Farias Jr. and George L. Nemhauser. A polyhedral study of the cardinality constrained knapsack problem. Mathematical Programming, 96:439–467, 2003.

[5] Franz Rendl, Giovanni Rinaldi, and Angelika Wiegele.
Solving Max-Cut to optimality by intersecting semidefinite and polyhedral relaxations. Mathematical Programming, 121(2):307–335, 2010.

[Table 6: Results for small random instances of MVO, comparing matroid exchange, general exchange, and 2-opt heuristics — columns: n, m, k, and best/time/iter for each of the three update rules; the numeric entries were lost in extraction.]

[Table 7: Results for large random instances of MVO, comparing matroid exchange and general exchange heuristics — columns: n, m, k, and best/time/iter for both update rules; the numeric entries were lost in extraction.]

[Table 8: Random instances of MVO, comparison with CPLEX — columns: n, m, k, %gap, theur, itheur, tcplex, itcplex; the numeric entries were lost in extraction.]


More information

Monoidal Cut Strengthening and Generalized Mixed-Integer Rounding for Disjunctions and Complementarity Constraints

Monoidal Cut Strengthening and Generalized Mixed-Integer Rounding for Disjunctions and Complementarity Constraints Monoidal Cut Strengthening and Generalized Mixed-Integer Rounding for Disjunctions and Complementarity Constraints Tobias Fischer and Marc E. Pfetsch Department of Mathematics, TU Darmstadt, Germany {tfischer,pfetsch}@opt.tu-darmstadt.de

More information

Solving large Semidefinite Programs - Part 1 and 2

Solving large Semidefinite Programs - Part 1 and 2 Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior

More information

Min-max-min robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1

Min-max-min robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1 Min-max- robustness: a new approach to combinatorial optimization under uncertainty based on multiple solutions 1 Christoph Buchheim, Jannis Kurtz 2 Faultät Mathemati, Technische Universität Dortmund Vogelpothsweg

More information

Solution of Large-scale LP Problems Using MIP Solvers: Repeated Assignment Problem

Solution of Large-scale LP Problems Using MIP Solvers: Repeated Assignment Problem The Eighth International Symposium on Operations Research and Its Applications (ISORA 09) Zhangjiajie, China, September 20 22, 2009 Copyright 2009 ORSC & APORC, pp. 190 197 Solution of Large-scale LP Problems

More information

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010 Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts

More information

Lagrangean relaxation

Lagrangean relaxation Lagrangean relaxation Giovanni Righini Corso di Complementi di Ricerca Operativa Joseph Louis de la Grange (Torino 1736 - Paris 1813) Relaxations Given a problem P, such as: minimize z P (x) s.t. x X P

More information

Lifting for conic mixed-integer programming

Lifting for conic mixed-integer programming Math. Program., Ser. A DOI 1.17/s117-9-282-9 FULL LENGTH PAPER Lifting for conic mixed-integer programming Alper Atamtürk Vishnu Narayanan Received: 13 March 28 / Accepted: 28 January 29 The Author(s)

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mit.edu 6.85J / 8.5J Advanced Algorithms Fall 008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 8.5/6.85 Advanced Algorithms

More information

maxz = 3x 1 +4x 2 2x 1 +x 2 6 2x 1 +3x 2 9 x 1,x 2

maxz = 3x 1 +4x 2 2x 1 +x 2 6 2x 1 +3x 2 9 x 1,x 2 ex-5.-5. Foundations of Operations Research Prof. E. Amaldi 5. Branch-and-Bound Given the integer linear program maxz = x +x x +x 6 x +x 9 x,x integer solve it via the Branch-and-Bound method (solving

More information

Sparsity Matters. Robert J. Vanderbei September 20. IDA: Center for Communications Research Princeton NJ.

Sparsity Matters. Robert J. Vanderbei September 20. IDA: Center for Communications Research Princeton NJ. Sparsity Matters Robert J. Vanderbei 2017 September 20 http://www.princeton.edu/ rvdb IDA: Center for Communications Research Princeton NJ The simplex method is 200 times faster... The simplex method is

More information

3.4 Relaxations and bounds

3.4 Relaxations and bounds 3.4 Relaxations and bounds Consider a generic Discrete Optimization problem z = min{c(x) : x X} with an optimal solution x X. In general, the algorithms generate not only a decreasing sequence of upper

More information

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010

I.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010 I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0

More information

Linear Programming. Scheduling problems

Linear Programming. Scheduling problems Linear Programming Scheduling problems Linear programming (LP) ( )., 1, for 0 min 1 1 1 1 1 11 1 1 n i x b x a x a b x a x a x c x c x z i m n mn m n n n n! = + + + + + + = Extreme points x ={x 1,,x n

More information

On the Existence of Ideal Solutions in Multi-objective 0-1 Integer Programs

On the Existence of Ideal Solutions in Multi-objective 0-1 Integer Programs On the Existence of Ideal Solutions in Multi-objective -1 Integer Programs Natashia Boland a, Hadi Charkhgard b, and Martin Savelsbergh a a School of Industrial and Systems Engineering, Georgia Institute

More information

On Two Class-Constrained Versions of the Multiple Knapsack Problem

On Two Class-Constrained Versions of the Multiple Knapsack Problem On Two Class-Constrained Versions of the Multiple Knapsack Problem Hadas Shachnai Tami Tamir Department of Computer Science The Technion, Haifa 32000, Israel Abstract We study two variants of the classic

More information

1 Column Generation and the Cutting Stock Problem

1 Column Generation and the Cutting Stock Problem 1 Column Generation and the Cutting Stock Problem In the linear programming approach to the traveling salesman problem we used the cutting plane approach. The cutting plane approach is appropriate when

More information

Integer Programming ISE 418. Lecture 8. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 8. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 8 Dr. Ted Ralphs ISE 418 Lecture 8 1 Reading for This Lecture Wolsey Chapter 2 Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Duality for Mixed-Integer

More information

Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS. Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano

Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS. Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano Parallel PIPS-SBB Multi-level parallelism for 2-stage SMIPS Lluís-Miquel Munguia, Geoffrey M. Oxberry, Deepak Rajan, Yuji Shinano ... Our contribution PIPS-PSBB*: Multi-level parallelism for Stochastic

More information

A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs

A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs A Note on Representations of Linear Inequalities in Non-Convex Mixed-Integer Quadratic Programs Adam N. Letchford Daniel J. Grainger To appear in Operations Research Letters Abstract In the literature

More information

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Antonis Papachristodoulou Department of Engineering Science University of Oxford www.eng.ox.ac.uk/control/sysos antonis@eng.ox.ac.uk with

More information

Lecture: Cone programming. Approximating the Lorentz cone.

Lecture: Cone programming. Approximating the Lorentz cone. Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 4

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 4 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 4 Instructor: Farid Alizadeh Scribe: Haengju Lee 10/1/2001 1 Overview We examine the dual of the Fermat-Weber Problem. Next we will

More information

4. Algebra and Duality

4. Algebra and Duality 4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone

More information

On Valid Inequalities for Quadratic Programming with Continuous Variables and Binary Indicators

On Valid Inequalities for Quadratic Programming with Continuous Variables and Binary Indicators On Valid Inequalities for Quadratic Programming with Continuous Variables and Binary Indicators Hongbo Dong and Jeff Linderoth Wisconsin Institutes for Discovery University of Wisconsin-Madison, USA, hdong6,linderoth}@wisc.edu

More information

From structures to heuristics to global solvers

From structures to heuristics to global solvers From structures to heuristics to global solvers Timo Berthold Zuse Institute Berlin DFG Research Center MATHEON Mathematics for key technologies OR2013, 04/Sep/13, Rotterdam Outline From structures to

More information

On the Generation of Circuits and Minimal Forbidden Sets

On the Generation of Circuits and Minimal Forbidden Sets Mathematical Programming manuscript No. (will be inserted by the editor) Frederik Stork Marc Uetz On the Generation of Circuits and Minimal Forbidden Sets January 31, 2004 Abstract We present several complexity

More information

Lift-and-Project Inequalities

Lift-and-Project Inequalities Lift-and-Project Inequalities Q. Louveaux Abstract The lift-and-project technique is a systematic way to generate valid inequalities for a mixed binary program. The technique is interesting both on the

More information

ORF 523 Lecture 9 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, March 10, 2016

ORF 523 Lecture 9 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, March 10, 2016 ORF 523 Lecture 9 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, March 10, 2016 When in doubt on the accuracy of these notes, please cross check with the instructor

More information

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source

More information

Polyhedral Results for A Class of Cardinality Constrained Submodular Minimization Problems

Polyhedral Results for A Class of Cardinality Constrained Submodular Minimization Problems Polyhedral Results for A Class of Cardinality Constrained Submodular Minimization Problems Shabbir Ahmed and Jiajin Yu Georgia Institute of Technology A Motivating Problem [n]: Set of candidate investment

More information

Integer and Combinatorial Optimization: Introduction

Integer and Combinatorial Optimization: Introduction Integer and Combinatorial Optimization: Introduction John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA November 2018 Mitchell Introduction 1 / 18 Integer and Combinatorial Optimization

More information

Robust combinatorial optimization with variable budgeted uncertainty

Robust combinatorial optimization with variable budgeted uncertainty Noname manuscript No. (will be inserted by the editor) Robust combinatorial optimization with variable budgeted uncertainty Michael Poss Received: date / Accepted: date Abstract We introduce a new model

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

On improving matchings in trees, via bounded-length augmentations 1

On improving matchings in trees, via bounded-length augmentations 1 On improving matchings in trees, via bounded-length augmentations 1 Julien Bensmail a, Valentin Garnero a, Nicolas Nisse a a Université Côte d Azur, CNRS, Inria, I3S, France Abstract Due to a classical

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Strong Formulations of Robust Mixed 0 1 Programming

Strong Formulations of Robust Mixed 0 1 Programming Math. Program., Ser. B 108, 235 250 (2006) Digital Object Identifier (DOI) 10.1007/s10107-006-0709-5 Alper Atamtürk Strong Formulations of Robust Mixed 0 1 Programming Received: January 27, 2004 / Accepted:

More information

Technische Universität Dresden Institute of Numerical Mathematics

Technische Universität Dresden Institute of Numerical Mathematics Technische Universität Dresden Institute of Numerical Mathematics An Improved Flow-based Formulation and Reduction Principles for the Minimum Connectivity Inference Problem Muhammad Abid Dar Andreas Fischer

More information

On Counting Lattice Points and Chvátal-Gomory Cutting Planes

On Counting Lattice Points and Chvátal-Gomory Cutting Planes On Counting Lattice Points and Chvátal-Gomory Cutting Planes Andrea Lodi 1, Gilles Pesant 2, and Louis-Martin Rousseau 2 1 DEIS, Università di Bologna - andrea.lodi@unibo.it 2 CIRRELT, École Polytechnique

More information

A notion of Total Dual Integrality for Convex, Semidefinite and Extended Formulations

A notion of Total Dual Integrality for Convex, Semidefinite and Extended Formulations A notion of for Convex, Semidefinite and Extended Formulations Marcel de Carli Silva Levent Tunçel April 26, 2018 A vector in R n is integral if each of its components is an integer, A vector in R n is

More information

The Graph Realization Problem

The Graph Realization Problem The Graph Realization Problem via Semi-Definite Programming A. Y. Alfakih alfakih@uwindsor.ca Mathematics and Statistics University of Windsor The Graph Realization Problem p.1/21 The Graph Realization

More information

Computational Finance

Computational Finance Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples

More information

A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS

A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS Journal of the Operations Research Society of Japan 2003, Vol. 46, No. 2, 164-177 2003 The Operations Research Society of Japan A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS Masakazu

More information

SEMIDEFINITE PROGRAM BASICS. Contents

SEMIDEFINITE PROGRAM BASICS. Contents SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n

More information

Optimization (168) Lecture 7-8-9

Optimization (168) Lecture 7-8-9 Optimization (168) Lecture 7-8-9 Jesús De Loera UC Davis, Mathematics Wednesday, April 2, 2012 1 DEGENERACY IN THE SIMPLEX METHOD 2 DEGENERACY z =2x 1 x 2 + 8x 3 x 4 =1 2x 3 x 5 =3 2x 1 + 4x 2 6x 3 x 6

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Lecture Note 5: Semidefinite Programming for Stability Analysis

Lecture Note 5: Semidefinite Programming for Stability Analysis ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State

More information

CSE 206A: Lattice Algorithms and Applications Spring Basic Algorithms. Instructor: Daniele Micciancio

CSE 206A: Lattice Algorithms and Applications Spring Basic Algorithms. Instructor: Daniele Micciancio CSE 206A: Lattice Algorithms and Applications Spring 2014 Basic Algorithms Instructor: Daniele Micciancio UCSD CSE We have already seen an algorithm to compute the Gram-Schmidt orthogonalization of a lattice

More information

Technische Universität Ilmenau Institut für Mathematik

Technische Universität Ilmenau Institut für Mathematik Technische Universität Ilmenau Institut für Mathematik Preprint No. M 14/05 Copositivity tests based on the linear complementarity problem Carmo Bras, Gabriele Eichfelder and Joaquim Judice 28. August

More information

Approximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko

Approximation Algorithms for Maximum. Coverage and Max Cut with Given Sizes of. Parts? A. A. Ageev and M. I. Sviridenko Approximation Algorithms for Maximum Coverage and Max Cut with Given Sizes of Parts? A. A. Ageev and M. I. Sviridenko Sobolev Institute of Mathematics pr. Koptyuga 4, 630090, Novosibirsk, Russia fageev,svirg@math.nsc.ru

More information

Using the Johnson-Lindenstrauss lemma in linear and integer programming

Using the Johnson-Lindenstrauss lemma in linear and integer programming Using the Johnson-Lindenstrauss lemma in linear and integer programming Vu Khac Ky 1, Pierre-Louis Poirion, Leo Liberti LIX, École Polytechnique, F-91128 Palaiseau, France Email:{vu,poirion,liberti}@lix.polytechnique.fr

More information

Matroid Optimisation Problems with Nested Non-linear Monomials in the Objective Function

Matroid Optimisation Problems with Nested Non-linear Monomials in the Objective Function atroid Optimisation Problems with Nested Non-linear onomials in the Objective Function Anja Fischer Frank Fischer S. Thomas ccormick 14th arch 2016 Abstract Recently, Buchheim and Klein [4] suggested to

More information

Structured Problems and Algorithms

Structured Problems and Algorithms Integer and quadratic optimization problems Dept. of Engg. and Comp. Sci., Univ. of Cal., Davis Aug. 13, 2010 Table of contents Outline 1 2 3 Benefits of Structured Problems Optimization problems may become

More information

min3x 1 + 4x 2 + 5x 3 2x 1 + 2x 2 + x 3 6 x 1 + 2x 2 + 3x 3 5 x 1, x 2, x 3 0.

min3x 1 + 4x 2 + 5x 3 2x 1 + 2x 2 + x 3 6 x 1 + 2x 2 + 3x 3 5 x 1, x 2, x 3 0. ex-.-. Foundations of Operations Research Prof. E. Amaldi. Dual simplex algorithm Given the linear program minx + x + x x + x + x 6 x + x + x x, x, x. solve it via the dual simplex algorithm. Describe

More information

III. Applications in convex optimization

III. Applications in convex optimization III. Applications in convex optimization nonsymmetric interior-point methods partial separability and decomposition partial separability first order methods interior-point methods Conic linear optimization

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Overview of course. Introduction to Optimization, DIKU Monday 12 November David Pisinger

Overview of course. Introduction to Optimization, DIKU Monday 12 November David Pisinger Introduction to Optimization, DIKU 007-08 Monday November David Pisinger Lecture What is OR, linear models, standard form, slack form, simplex repetition, graphical interpretation, extreme points, basic

More information

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Nilay Noyan Andrzej Ruszczyński March 21, 2006 Abstract Stochastic dominance relations

More information

An Exact Algorithm for the Steiner Tree Problem with Delays

An Exact Algorithm for the Steiner Tree Problem with Delays Electronic Notes in Discrete Mathematics 36 (2010) 223 230 www.elsevier.com/locate/endm An Exact Algorithm for the Steiner Tree Problem with Delays Valeria Leggieri 1 Dipartimento di Matematica, Università

More information

A Lifted Linear Programming Branch-and-Bound Algorithm for Mixed Integer Conic Quadratic Programs

A Lifted Linear Programming Branch-and-Bound Algorithm for Mixed Integer Conic Quadratic Programs A Lifted Linear Programming Branch-and-Bound Algorithm for Mied Integer Conic Quadratic Programs Juan Pablo Vielma Shabbir Ahmed George L. Nemhauser H. Milton Stewart School of Industrial and Systems Engineering

More information

The Strength of Multi-Row Relaxations

The Strength of Multi-Row Relaxations The Strength of Multi-Row Relaxations Quentin Louveaux 1 Laurent Poirrier 1 Domenico Salvagnin 2 1 Université de Liège 2 Università degli studi di Padova August 2012 Motivations Cuts viewed as facets of

More information

Constraint Qualification Failure in Action

Constraint Qualification Failure in Action Constraint Qualification Failure in Action Hassan Hijazi a,, Leo Liberti b a The Australian National University, Data61-CSIRO, Canberra ACT 2601, Australia b CNRS, LIX, Ecole Polytechnique, 91128, Palaiseau,

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Basics and SOS Fernando Mário de Oliveira Filho Campos do Jordão, 2 November 23 Available at: www.ime.usp.br/~fmario under talks Conic programming V is a real vector space h, i

More information

A PARALLEL INTERIOR POINT DECOMPOSITION ALGORITHM FOR BLOCK-ANGULAR SEMIDEFINITE PROGRAMS IN POLYNOMIAL OPTIMIZATION

A PARALLEL INTERIOR POINT DECOMPOSITION ALGORITHM FOR BLOCK-ANGULAR SEMIDEFINITE PROGRAMS IN POLYNOMIAL OPTIMIZATION A PARALLEL INTERIOR POINT DECOMPOSITION ALGORITHM FOR BLOCK-ANGULAR SEMIDEFINITE PROGRAMS IN POLYNOMIAL OPTIMIZATION Kartik K. Sivaramakrishnan Department of Mathematics North Carolina State University

More information

A Continuation Approach Using NCP Function for Solving Max-Cut Problem

A Continuation Approach Using NCP Function for Solving Max-Cut Problem A Continuation Approach Using NCP Function for Solving Max-Cut Problem Xu Fengmin Xu Chengxian Ren Jiuquan Abstract A continuous approach using NCP function for approximating the solution of the max-cut

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has

More information

Summer School: Semidefinite Optimization

Summer School: Semidefinite Optimization Summer School: Semidefinite Optimization Christine Bachoc Université Bordeaux I, IMB Research Training Group Experimental and Constructive Algebra Haus Karrenberg, Sept. 3 - Sept. 7, 2012 Duality Theory

More information

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20.

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20. Extra Problems for Chapter 3. Linear Programming Methods 20. (Big-M Method) An alternative to the two-phase method of finding an initial basic feasible solution by minimizing the sum of the artificial

More information

Lecture #21. c T x Ax b. maximize subject to

Lecture #21. c T x Ax b. maximize subject to COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Worst case analysis for a general class of on-line lot-sizing heuristics

Worst case analysis for a general class of on-line lot-sizing heuristics Worst case analysis for a general class of on-line lot-sizing heuristics Wilco van den Heuvel a, Albert P.M. Wagelmans a a Econometric Institute and Erasmus Research Institute of Management, Erasmus University

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Semidefinite Programming, Combinatorial Optimization and Real Algebraic Geometry

Semidefinite Programming, Combinatorial Optimization and Real Algebraic Geometry Semidefinite Programming, Combinatorial Optimization and Real Algebraic Geometry assoc. prof., Ph.D. 1 1 UNM - Faculty of information studies Edinburgh, 16. September 2014 Outline Introduction Definition

More information

Lecture: Introduction to LP, SDP and SOCP

Lecture: Introduction to LP, SDP and SOCP Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:

More information

Representations of All Solutions of Boolean Programming Problems

Representations of All Solutions of Boolean Programming Problems Representations of All Solutions of Boolean Programming Problems Utz-Uwe Haus and Carla Michini Institute for Operations Research Department of Mathematics ETH Zurich Rämistr. 101, 8092 Zürich, Switzerland

More information