Conic optimization under combinatorial sparsity constraints


Christoph Buchheim and Emiliano Traversi

Fakultät für Mathematik, TU Dortmund, Vogelpothsweg 87, 44227 Dortmund, Germany, christoph.buchheim@tu-dortmund.de
LIPN, Université Paris 13, 99 Avenue Jean-Baptiste Clément, 93430 Villetaneuse, France, emiliano.traversi@lipn.univ-paris13.fr
This work was partially supported by the German Research Foundation (DFG) under grant no. BU 2313/4.

Abstract. We present a heuristic approach for conic optimization problems containing sparsity constraints. The latter can be cardinality constraints, but our approach also covers more complex constraints on the support of the solution. For the special case that the support is required to belong to a matroid, we propose an exchange heuristic adapting the support in every iteration. The entering non-zero is determined by considering the dual of the given conic problem in which the variables not belonging to the current support are fixed to zero. While this algorithm is purely heuristic, we show experimentally that it often finds solutions very close to the optimal ones in the case of the cardinality-constrained knapsack problem and for mean-variance optimization problems.

1 Introduction

Consider a conic optimization problem in standard form:

  min c^T x
  s.t. Ax = b
       x ∈ K.   (1)

Here, A ∈ R^{m×n} is a matrix, c ∈ R^n and b ∈ R^m are vectors, and K ⊆ R^n is a closed convex cone. We aim at finding sparse solutions of Problem (1), where the notion of sparsity is given by a combinatorial set T ⊆ {0,1}^n containing the feasible non-zero sets. In the following, we assume w.l.o.g. that T is hereditary, i.e., if t ∈ T and t' ≤ t, then also t' ∈ T. Throughout this paper, we only assume that T is accessible via a linear optimization oracle. We are particularly interested in the case where T is a matroid, which directly generalizes the case of an ordinary cardinality constraint of the type ‖x‖₀ ≤ k.
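For the cardinality constraint, T is the uniform matroid, and the linear optimization oracle assumed above reduces to a greedy pick of the largest weights. A minimal sketch (the function name is ours, not the paper's); since T is hereditary, only strictly positive weights are worth including:

```python
def uniform_matroid_oracle(d, k):
    """Maximize d^T t over the uniform matroid {t in {0,1}^n : sum(t) <= k}.

    Greedily takes the (at most) k indices with the largest positive d_j;
    dropping non-positive entries stays feasible because T is hereditary.
    """
    order = sorted(range(len(d)), key=lambda j: d[j], reverse=True)
    t = [0] * len(d)
    for j in order[:k]:
        if d[j] > 0:
            t[j] = 1
    return t
```

For a general matroid the same greedy scheme works with an independence test in place of the size check, which is exactly what makes the matroid case attractive for the oracle model.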

The problem addressed can thus be written as

  min c^T x
  s.t. Ax = b
       x ∈ K
       x_i = 0 if t_i = 0
       t ∈ T.   (2)

Note that we optimize over both x and t here. Not surprisingly, Problem (2) is NP-hard; this has been shown by de Farias and Nemhauser [4] for the special case that (1) is a continuous knapsack problem and T is a uniform matroid, i.e., that we search for cardinality-constrained solutions. The second-last constraint in (2) can be replaced by the complementarity constraints x_i(1 − t_i) = 0 for i = 1,…,n.

Problem (2) is equivalent to finding a t ∈ T that minimizes

  c_t := min c^T x
         s.t. Ax = b
              x ∈ K
              x_i = 0 if t_i = 0.   (3)

We can thus rewrite (2) as a combinatorial optimization problem over T with a non-linear objective function:

  min_{t ∈ T} c_t.   (4)

Our proposed heuristic for solving Problem (2) tries to solve (4) by iteratively adapting the sparsity pattern t. In the general case, we propose to compute a new pattern t ∈ T by optimizing a certain linear function over T, where the coefficient for each t_i set to zero in the last solution is calculated based on the dual solution of (3). In the case that T is a matroid, we can also consider iterations where only one pair of entries in t is swapped; more precisely, we first decide which fixing to release and then fix another variable to zero.

Apart from the case where T models a cardinality constraint, our approach is motivated by another application: we are interested in finding a symmetric matrix X such that X ⪯ Q for a given symmetric matrix Q and such that the non-zero entries of X form a forest in the complete graph indexed by the rows (or columns) of X. Given such a matrix X, one can compute an underestimator of the binary quadratic optimization problem

  min x^T Q x
  s.t. x ∈ {0,1}^n   (5)

by replacing Q by X and then solving the resulting sparse problem, which can be done in polynomial time.
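To illustrate why the forest-supported problem is solvable in polynomial time, the following sketch minimizes z^T X z + L^T z over binary z by dynamic programming on each tree of the forest. The function name and the dense-matrix input format are our own illustrative choices, not part of the paper:

```python
def min_forest_quadratic(X, L):
    """Minimize z^T X z + L^T z over z in {0,1}^n, assuming the non-zero
    off-diagonal entries of the symmetric matrix X form a forest."""
    n = len(L)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if X[i][j] != 0:
                adj[i].append(j)
                adj[j].append(i)

    def dp(v, parent):
        # dp(v)[a] = best value of the subtree rooted at v given z_v = a;
        # z_v^2 = z_v on binary values, and 2*X[v][w] covers the pair (v,w),(w,v)
        best = [0.0, X[v][v] + L[v]]
        for w in adj[v]:
            if w != parent:
                child = dp(w, v)
                for a in (0, 1):
                    best[a] += min(child[b] + 2 * X[v][w] * a * b for b in (0, 1))
        return best

    visited = [False] * n
    total = 0.0
    for r in range(n):
        if not visited[r]:
            # mark the whole component, then solve it by one rooted DP
            stack = [r]
            while stack:
                v = stack.pop()
                if not visited[v]:
                    visited[v] = True
                    stack.extend(adj[v])
            total += min(dp(r, -1))
    return total
```

Each node contributes a table of two values (z_v = 0 or 1), so the whole minimization is linear in n for a fixed forest support.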
The optimal value of the latter problem is then a lower bound for the original problem, which can be used to strengthen the bounds obtained from separable underestimators within the branch-and-bound algorithm presented in [2]. This paper is organized as follows. In Section 2, we give some theoretical background motivating our approach. The primal heuristic is devised

in Section 3, both for the matroid case and for the general case. Finally, the results of an experimental evaluation are presented in Section 4; they show that the heuristic is capable of finding near-optimal solutions in very short running time for various types of problems, where we consider three different underlying cones: the non-negative orthant, the cone of semidefinite matrices, and the second-order cone. Section 5 concludes.

2 Motivation and preliminaries

The crucial step in our algorithm is to decide which fixings x_i = 0 should be relaxed in the next iteration. For this, we use dual information: the idea is that the dual variable for the constraint x_i = 0 tells us how promising it is to relax that constraint. In the following, let c_t denote the restriction of c to the components i with t_i = 1 and A_t the restriction of A to the columns i with t_i = 1. Moreover, let K_t denote the intersection of K with the subspace R^t, where the latter is spanned by the dimensions i with t_i = 1, and let K^t denote the projection of K onto R^t. It is easy to verify that both K_t and K^t are convex cones again. Finally, let t̄ denote the binary complement vector of t. After eliminating the fixed variables in Problem (3), we thus obtain

  c_t = min c_t^T x̂
        s.t. A_t x̂ = b
             x̂ ∈ K_t.   (6)

For the following, we have to make an assumption on K and T:

Assumption 1. For each t ∈ T, we have K^t ⊆ K_t.

It is easy to verify that Assumption 1 is satisfied in the following cases:
(i) K = R^n_+, the non-negative cone, and T is arbitrary;
(ii) K = S^n_+, the semidefinite cone, and t_ii = 1 for all i and all t ∈ T;
(iii) K = K^n, the second-order cone, and t_0 = 1 for all t ∈ T.
The restriction in (ii) means that only off-diagonal elements may be fixed to zero, while in (iii) we are allowed to fix to zero only the variables x_1,…,x_n appearing on the right-hand side of the second-order constraint x_0 ≥ ‖(x_1,…,x_n)‖_2.

Lemma 2. Let K and T satisfy Assumption 1, and let y ∈ R^m be a dual optimal solution of Problem (6) for some t ∈ T.
Then (y, c_t̄ − A_t̄^T y) is a dual optimal solution of Problem (3).

Proof. Define d := c_t̄ − A_t̄^T y and z := c − A^T y − I_t̄ d. As y is dual feasible for (6), we have z_t = c_t − A_t^T y ∈ (K_t)*. Moreover, z_t̄ = c_t̄ − A_t̄^T y − d = 0. For every w ∈ K we have w_t ∈ K^t ⊆ K_t by Assumption 1 and hence z^T w = z_t^T w_t ≥ 0, so that z ∈ K*. We thus obtain that (y, d) is dual feasible for (3), with objective value b^T y = c_t.

This result shows that optimal values for the dual variables corresponding to the fixings can easily be computed from a dual solution of the lower-dimensional problem (6). However, a direct application of this approach does not work, as the optimal dual solution of Problem (3) is not unique in general, even if Problem (6) has a unique dual optimal solution. This is always the case for linear programs:

Lemma 3. Let t ∈ T be given and let K = R^n_+. Let y ∈ R^m be a dual optimal solution of Problem (6). Then (y, d) is a dual optimal solution of Problem (3) if and only if c_t̄ − A_t̄^T y − d ∈ (K*)_t̄, i.e., if and only if d ≤ c_t̄ − A_t̄^T y.

Proof. First, let (y, d) be dual optimal for Problem (3). Then dual feasibility means c − A^T y − I_t̄ d ∈ K* and hence c_t̄ − A_t̄^T y − d = (c − A^T y − I_t̄ d)_t̄ ∈ (K*)_t̄, which implies d ≤ c_t̄ − A_t̄^T y. Conversely, let c_t̄ − A_t̄^T y − d ∈ (K*)_t̄ and define z := c − A^T y − I_t̄ d. As y is dual feasible for (6), we have z_t = c_t − A_t^T y ∈ (K_t)* = R^t_+. Moreover, z_t̄ = c_t̄ − A_t̄^T y − d ∈ (K*)_t̄ = R^t̄_+. Thus z ≥ 0 and hence z ∈ K*. The rest of the proof is analogous to the proof of Lemma 2.

However, it is still a natural idea to choose d := c_t̄ − A_t̄^T y, since this is what we obtain if we consider the limits of the dual variables of the problem

  min c^T x
  s.t. Ax = b
       x ∈ K
       x_i ≤ ε if t_i = 0

for ε → 0. Apart from the advantage of uniqueness (provided that (6) has a unique optimal dual solution), it is reasonable to consider the dual variables in the limit ε → 0 instead of the dual variables for ε = 0, as we are interested in how much improvement we can obtain when moving x_i to a non-zero value.

3 Primal heuristic

In order to describe our heuristic algorithm for solving Problem (2), we need to specify the choice of an initial solution and the update rule for the support. These topics are discussed in the following subsections.

3.1 Initial solution

A simple method to find an initial solution t^(0) for our heuristic would be to start with t_i = 1 for all i and then greedily remove the index l := argmin_{l: t_l = 1} c_l x_l until t ∈ T, where x is the solution of (3) for the current t. In our implementation, we use the following alternative, which is again based on the idea of using dual variables: we first solve Problem (3) with t = 0, i.e., with all variables fixed to zero, thus obtaining dual multipliers d_j for all variables.
Then we choose t^(0) so as to maximize d^T t over T, which can be done by calling the linear optimization oracle for T that we assume to have at hand. Compared to the first method, this approach has the advantage that the solver for (3) has to be called only once. Moreover, it is consistent with the following iterations, in which the choice of non-zeroes is based on the value of d.
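For K = R^n_+ and a cardinality constraint, this initialization might be sketched as follows. The function name is ours, and y is assumed to be a dual solution of the fully fixed problem, so that the multipliers d = c − A^T y come from Lemma 2:

```python
def initial_support(c, A, y, k):
    """Hypothetical sketch of the paper's initialization for K = R^n_+.

    Given a dual solution y of (3) with all variables fixed to zero, form
    the multipliers d = c - A^T y and choose t^(0) maximizing d^T t over
    the uniform matroid, i.e. the k entries with the largest multipliers.
    """
    n = len(c)
    d = [c[j] - sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]
    top = sorted(range(n), key=lambda j: d[j], reverse=True)[:k]
    t0 = [0] * n
    for j in top:
        t0[j] = 1
    return t0, d
```

For a general T, the final top-k step would be replaced by a call to the linear optimization oracle with weights d.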

3.2 Update rules

For the update of t^(i) to t^(i+1), we distinguish between the matroid and the general case. For general T, we proceed as follows:

1. Set i := 0 and choose t^(0) ∈ T.
2. Compute c_{t^(i)} as in (3) and obtain an optimal solution x and dual multipliers d_j for the constraints x_j = 0 (for t^(i)_j = 0).
3. Solve the combinatorial optimization problem

     max_{t ∈ T} ( Σ_{j: t^(i)_j = 0} d_j t_j + Σ_{j: t^(i)_j = 1} c_j x_j t_j )

   using the oracle, and let t^(i+1) be the resulting optimizer.
4. If t^(i+1) ≠ t^(k) for all k ≤ i, set i := i + 1 and go to 2.

Note that, depending on the underlying cone K and the optimization approach used to solve (3), one could compute c_{t^(i)} in Step 2 by reoptimization, since usually only few fixings are updated.

In the case that T is a matroid, we propose the following alternative method, which turns out to outperform the previous approach significantly; see Section 4 (here e_j denotes the j-th unit vector):

1. Set i := 0 and choose t^(0) ∈ T.
2. Compute c_{t^(i)} as in (3) and obtain an optimal solution x and dual multipliers d_j for the constraints x_j = 0 (for t^(i)_j = 0).
3. Find k := argmax_{k: t^(i)_k = 0} d_k.
4. Find l := argmin_{l: t^(i) + e_k − e_l ∈ T} c_l x_l.
5. Set t^(i+1) := t^(i) + e_k − e_l.
6. If t^(i+1) ≠ t^(k) for all k ≤ i, set i := i + 1 and go to 2.

The assumption that T is a matroid guarantees that there exists at least one candidate for l different from k in Step 4. For comparison, still in the matroid case, we also consider the straightforward 2-opt update in our experiments: here, we enumerate all pairs k, l such that t^(i) + e_k − e_l ∈ T and choose the pair leading to the smallest value c_{t^(i+1)} in the next iteration.

4 Experimental results

In this section, we provide an experimental evaluation of our approach based on a wide class of test instances from different problem types.
More precisely, we test our approach on the following sparse conic optimization problems:

- the Cardinality Constrained Continuous Knapsack Problem (CCKP),
- the Sparse Mean-Variance Optimization Problem (MVO), and
- the Tree Underestimator Problem for Binary Quadratic Problems (TUP).
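All three experiments rely on the matroid exchange of Section 3.2. For concreteness, the iteration might be sketched in stripped-down form as follows; the function names, the pluggable restricted-problem solver, and the toy maximization instance at the bottom are our own illustrative assumptions, not the paper's code:

```python
def matroid_exchange(c, t0, solve_restricted, feasible_swap, max_iter=50):
    """Sketch of the matroid exchange of Section 3.2.

    solve_restricted(t) is assumed to return (value, x, d): the value of the
    restricted problem for support t, a primal solution x, and multipliers
    d_j for the fixings x_j = 0. feasible_swap(t, k, l) tells whether
    t + e_k - e_l stays in T. Returns the trajectory of visited supports.
    """
    seen = {tuple(t0)}
    t = list(t0)
    val, x, d = solve_restricted(t)
    visited = [(list(t), val)]
    for _ in range(max_iter):
        free = [j for j in range(len(t)) if t[j] == 0]
        if not free:
            break
        k = max(free, key=lambda j: d[j])          # step 3: entering index
        cand = [l for l in range(len(t))
                if t[l] == 1 and feasible_swap(t, k, l)]
        if not cand:
            break
        l = min(cand, key=lambda j: c[j] * x[j])   # step 4: leaving index
        t[k], t[l] = 1, 0                          # step 5: swap
        if tuple(t) in seen:                       # step 6: stop on repetition
            break
        seen.add(tuple(t))
        val, x, d = solve_restricted(t)
        visited.append((list(t), val))
    return visited

# --- toy demonstration (hypothetical maximization instance, K = R^n_+) ---
c = [1.0, 5.0, 3.0]

def solve_restricted(t):
    # toy restricted problem: max c^T x s.t. sum(x) <= 1, x >= 0, x_j = 0
    # for t_j = 0; the optimum puts all weight on the best supported item
    sup = [j for j in range(3) if t[j] == 1]
    best = max(sup, key=lambda j: c[j])
    x = [0.0] * 3
    x[best] = 1.0
    y = c[best]                       # dual value of the budget constraint
    d = [c[j] - y for j in range(3)]  # reduced costs as fixing multipliers
    return y, x, d

trajectory = matroid_exchange(c, [1, 0, 0], solve_restricted,
                              lambda t, k, l: True)  # uniform matroid, k = 1
```

In the toy run, the first swap already moves the support to the best item; the loop then wanders once more and stops as soon as a support repeats, so the best value over the trajectory is reported.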

For each problem, we provide a brief description, a mathematical model in the form of (2), and the test-bed used in the experiments. The aim of this section is two-fold:

(i) First of all, we would like to motivate the introduction of the proposed concept of sparsity. The TUP uses the concept of sparsity presented in this paper to generalize the methodology proposed in [2] for efficiently solving combinatorial problems with a quadratic objective function. In Section 4.1 we show how the application of the TUP allows us to improve the dual bounds obtained in the approach presented in [2].

(ii) Secondly, in the tests concerning the CCKP and the MVO, we show that our heuristic procedure is able to provide almost-optimal solutions within a short amount of time.

For assessing the performance of our algorithm, we use CPLEX 12.6 [3] with an optimality tolerance of 10^-6. The second-last constraint in Problem (2) is modeled as a special ordered set (SOS) constraint. For CCKP and MVO, we also use CPLEX to solve Problem (3) as needed in Step 2 of the algorithm: in the former case, we call the LP solver, while in the latter case we need the SOCP solver of CPLEX. In both cases, dual solutions are also provided by CPLEX. For TUP, we use the SDP solver CSDP [1]. All experiments were carried out on Intel Xeon processors running at 2.50 GHz.

4.1 Tree underestimator problem (TUP)

Problem Description. The TUP is a generalization of the bounding procedure proposed in [2], where the authors devise a branch-and-bound framework to solve combinatorial problems with a quadratic objective function. For completeness, we recall the basic idea in the following, referring the reader to [2] for a detailed description of the approach. In order to be consistent with the rest of this paper, we use a different notation here.
The goal is to provide dual bounds for problems of the following shape:

  min_{z ∈ Z} f(z) = z^T Q z + L^T z,

where Q ∈ R^{n×n} is a symmetric (not necessarily positive semidefinite) matrix, L ∈ R^n is a vector, and Z ⊆ {0,1}^n. More precisely, the idea is to focus on combinatorial optimization problems where the linear counterpart min_{z ∈ Z} c^T z can be solved efficiently for any vector c ∈ R^n. For a given vector x ∈ R^n, let Diag(x) be the square matrix with all entries equal to zero, except for the diagonal, which contains x. For a given objective function f(z) = z^T Q z + L^T z, the separable function

  g_x(z) = z^T Diag(x) z + L^T z = Σ_{i=1}^n x_i z_i² + Σ_{i=1}^n L_i z_i

is a valid underestimator of f provided that Diag(x) ⪯ Q, i.e., that the matrix Q − Diag(x) is positive semidefinite.
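The underestimation property is easy to check on a toy example. Below, Q − Diag(x) = [[1,1],[1,1]] is positive semidefinite, so g_x may not exceed f anywhere; the data and function names are illustrative choices of ours:

```python
from itertools import product

def f(Q, L, z):
    """Quadratic objective z^T Q z + L^T z."""
    n = len(z)
    return (sum(Q[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
            + sum(L[i] * z[i] for i in range(n)))

def g(x, L, z):
    """Separable underestimator; on binary z it equals (x + L)^T z."""
    return sum(x[i] * z[i] * z[i] + L[i] * z[i] for i in range(len(z)))

Q = [[2.0, 1.0], [1.0, 2.0]]
L = [-1.0, 0.5]
x = [1.0, 1.0]   # Q - Diag(x) = [[1, 1], [1, 1]] is positive semidefinite

# f - g equals z^T (Q - Diag(x)) z = (z_1 + z_2)^2 >= 0 here
assert all(g(x, L, z) <= f(Q, L, z) + 1e-9 for z in product([0, 1], repeat=2))
```

Since f(z) − g_x(z) = z^T (Q − Diag(x)) z, the bound in fact holds for all real z, not only for binary points.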

It is possible to identify a good (non-convex) separable underestimator for f(z) by solving the following semidefinite optimization problem:

  max { 1^T x : Diag(x) ⪯ Q }.   (7)

The idea behind the proposed underestimator method is that minimizing the separable function g_x(z) is equivalent to minimizing x^T z + L^T z, since the variables z are binary. As already mentioned, the remaining combinatorial optimization problem can be solved efficiently, and hence its computation can be incorporated into a branch-and-bound framework. The computation of the underestimator according to (7) is done in a preprocessing phase via solving a series of semidefinite programs.

A generalization of the method proposed in [2] can be obtained by allowing non-zero entries in some of the off-diagonal terms of the quadratic underestimator. It is easy to see that the resulting problem min_{z ∈ Z} f(z) = z^T X z + L^T z can still be solved efficiently if the non-zero entries of X correspond to a forest in the complete graph K_n on n vertices, the latter corresponding to the rows and columns of Q. Choosing T correspondingly, the problem of finding such a matrix X leading to a good underestimator is then of type (2).

Conic formulation. The TUP can be formulated as follows:

  max ⟨J_n, X⟩
  s.t. X ⪯ Q
       X_ij = 0 if t_ij = 0, i ≠ j
       t ∈ T,   (8)

where Q ∈ R^{n×n} is a symmetric matrix, J_n ∈ R^{n×n} is the all-ones matrix, and T is the set of incidence vectors of spanning forests in K_n. Note that Assumption 1 holds in (8), since we never restrict any diagonal entry of X to be zero. As T is a matroid here, we can use the update rule based on pairwise exchanges; see Section 3.2.

Test-bed used. Our test-bed consists of the matrices Q used in the binary quadratic instances of the Biq-Mac library; see [5]. The linear part is always zero. There are 165 instances in total, subdivided into the three problem classes beasley, be, and gka, with a number of variables ranging from 20 to 500.
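Since T is the graphic matroid of K_n here, the linear optimization oracle assumed in Section 1 can be implemented by a greedy, Kruskal-style procedure with a union-find structure. A sketch under the assumption that edge weights are given as a dictionary (the function name is ours):

```python
def max_weight_forest(n, weights):
    """Greedy linear optimization oracle over the graphic matroid:
    pick a maximum-weight forest in K_n.

    `weights` maps edges (i, j) with i < j to weights; non-positive edges
    are skipped, since the empty forest is always feasible.
    """
    parent = list(range(n))

    def find(v):
        # union-find root lookup with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    forest = []
    for (i, j), w in sorted(weights.items(), key=lambda e: e[1], reverse=True):
        if w <= 0:
            break
        ri, rj = find(i), find(j)
        if ri != rj:              # adding the edge keeps the support acyclic
            parent[ri] = rj
            forest.append((i, j))
    return forest
```

The greedy choice is exactly what matroid optimality guarantees, which is why the oracle model fits the forest-sparsity pattern of (8) so naturally.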
Analysis of the results. Our aim is to compare the dual bounds obtained from the new tree underestimators with those obtained from separable underestimators as in [2]. The results are given in Table 1. The instances are grouped according to their family (type) and number of variables (size). For each group of instances, we state the number of corresponding instances (# inst) as well as the following average information: the total cpu time in seconds needed by our algorithm to terminate (time), which is mostly spent on iteratively solving

the semidefinite problem (8) for fixed t, the number of iterations (# iter), and the dual bounds provided by the separable underestimator and by the tree underestimator (sep bnd and tree bnd). The results clearly show that the use of the tree underestimator improves the quality of the dual bounds significantly. The running times are small enough to be clearly dominated by the time needed for the main phase of the approach presented in [2].

type size # inst time # iter sep bnd tree bnd
beasley 50 10 0.60 19.60 9057.68 5871.09
beasley 100 10 8.18 44.90 28584.30 19401.50
beasley 250 10 131.23 93.10 126256.12 93784.32
beasley 500 10 1446.01 165.00 371126.76 289109.91
be 100 10 4.73 31.50 49881.47 34742.32
be 120 20 9.19 43.35 47460.84 33484.71
be 150 20 21.35 51.55 67363.73 49104.10
be 200 20 48.60 68.30 105519.61 77984.56
be 250 10 98.95 77.80 64013.84 47937.08
gka 20 1 0.06 16.00 5583.52 3742.11
gka 30 3 0.23 19.00 10168.52 6523.19
gka 40 2 0.21 11.50 12804.97 8474.36
gka 50 4 0.73 22.25 13681.36 9541.97
gka 60 3 0.76 16.67 17710.67 12311.41
gka 70 3 2.01 26.67 22796.77 14993.31
gka 80 3 2.81 32.33 25439.37 16987.01
gka 90 2 4.49 34.50 30978.50 22630.76
gka 100 13 5.91 36.54 34381.92 24873.15
gka 125 1 16.54 57.00 79859.91 57941.10
gka 200 5 60.70 81.00 77180.38 56676.81
gka 500 5 1460.60 162.20 407649.86 319289.17

Table 1: Results for TUP

4.2 Cardinality constrained continuous knapsack problem (CCKP)

Problem Description. The CCKP is a continuous (m-dimensional) knapsack problem where at most k variables are allowed to be strictly greater than zero, for some given integer k. The CCKP has been introduced and investigated in [4].

Conic formulation. The CCKP can be formulated as follows:

  max c^T x
  s.t. a_j^T x ≤ b_j, j = 1,…,m
       x_i = 0 if t_i = 0
       t ∈ T,   (9)

where c, a_j ∈ R^n_+ and T is the set of binary vectors containing at most k ones. In other words, T is the uniform matroid, so that we can again use all update rules presented in Section 3.2.

Test-bed used. We use two sets of instances:

De Farias et al. instances. These are the instances used in [4]. They contain up to 8000 variables (n) and 70 knapsack constraints (m); the values of k and the densities d of the non-zero entries of the knapsack constraints were chosen with the objective of making the instances hard from a computational point of view.

Random instances. We produced two additional sets of random instances: one set of small instances (n up to 300 and m up to 50) and another set of large instances (n up to 8000 and m up to 5000). In both sets, the knapsack constraints have a density d of 100 %, while various cardinalities k are considered, all in the (small) range where the resulting problems turn out to be non-trivial. Whenever showing results for random instances, we state averages over 10 instances in each line of the tables.

Analysis of the results. We first use Table 2 to compare the performance of three different update rules of our algorithm, using the small instances: (1) the matroid exchange, (2) the general exchange, and (3) the optimal pairwise exchange (2-opt). For each option 1-3, we show the best primal bound obtained (best), the computation time in seconds (time), and the number of iterations (iter). The values of the primal bounds are normalized by dividing them by the best one. As expected, the optimal pairwise exchange provides the best primal solution for each instance. However, it is several orders of magnitude slower in terms of computing time; for this reason, the optimal pairwise exchange is not suitable if we want to tackle bigger instances. Moreover, the difference in solution quality is very small: it is below 1 % for the matroid exchange and below 4 % for the general exchange.
When comparing the general exchange rule with the matroid exchange, it turns out that the matroid rule yields better (or equally good) results on all instances, while using roughly the same number of iterations and the same running time on average. This trend is confirmed by the results for the large random instances presented in Table 3, where the matroid exchange rule even outperforms the general exchange rule in terms of running time. For this reason, we use the matroid exchange rule in the following experiments. Note that the running time of all three heuristics is dominated by the time needed to solve (9) for fixed t using CPLEX.

In Table 4 we present the results concerning the de Farias et al. instances, comparing the performance of our heuristic with the performance of the commercial solver CPLEX. The table first presents the primal bound, the computing time, and the number of iterations of the heuristic (pheur, theur, and itheur), and then the best primal and dual bounds and the

n m k best1 time1 iter1 best2 time2 iter2 best3 time3 iter3
100 1 30 0.999 0.0 5.2 0.987 0.0 8.7 1.000 0.3 5.4
100 1 40 0.999 0.0 15.6 0.988 0.0 16.8 1.000 1.5 16.4
100 1 50 1.000 0.0 18.2 1.000 0.0 7.2 1.000 1.3 18.4
100 2 30 0.998 0.0 5.7 0.975 0.0 9.1 1.000 0.6 6.1
100 2 40 1.000 0.0 16.7 0.971 0.0 12.1 1.000 2.8 16.7
100 2 50 1.000 0.0 12.0 1.000 0.0 8.2 1.000 1.3 12.3
100 5 30 0.996 0.0 9.2 0.968 0.0 9.0 1.000 1.3 7.3
100 5 40 1.000 0.0 13.6 0.990 0.0 20.1 1.000 3.3 14.6
100 5 50 1.000 0.0 8.4 1.000 0.0 9.1 1.000 1.8 11.9
100 10 30 0.991 0.0 9.3 0.963 0.0 12.7 1.000 2.8 8.8
100 10 40 1.000 0.0 11.3 0.998 0.0 25.5 1.000 4.2 13.0
100 10 50 1.000 0.0 6.8 1.000 0.0 8.7 1.000 4.3 17.2
100 20 30 0.992 0.0 10.9 0.963 0.0 16.9 1.000 5.0 9.4
100 20 40 1.000 0.0 12.1 0.997 0.0 20.6 1.000 5.9 13.5
100 20 50 1.000 0.0 7.1 1.000 0.0 9.0 1.000 4.3 12.7
100 50 30 0.996 0.0 8.5 0.968 0.0 19.7 1.000 9.8 9.6
100 50 40 1.000 0.0 11.0 0.999 0.0 21.6 1.000 9.7 12.5
100 50 50 1.000 0.0 7.6 1.000 0.0 8.5 1.000 18.4 26.1
200 1 60 0.999 0.0 5.8 0.993 0.0 12.3 1.000 4.8 6.8
200 1 80 0.998 0.0 27.9 0.992 0.0 30.8 1.000 17.8 28.5
200 1 100 1.000 0.0 34.7 1.000 0.0 7.9 1.000 16.5 36.0
200 2 60 0.999 0.0 6.8 0.985 0.0 10.6 1.000 9.0 7.4
200 2 80 1.000 0.0 29.2 0.975 0.0 13.4 1.000 30.8 29.5
200 2 100 1.000 0.0 21.6 1.000 0.0 7.4 1.000 14.7 23.1
200 5 60 0.996 0.0 9.1 0.977 0.0 9.9 1.000 25.6 10.8
200 5 80 1.000 0.0 19.7 0.995 0.0 29.2 1.000 28.0 22.0
200 5 100 1.000 0.0 12.5 1.000 0.0 9.0 1.000 24.7 24.0
200 10 60 0.994 0.0 13.9 0.973 0.0 12.4 1.000 45.9 11.7
200 10 80 1.000 0.0 15.6 1.000 0.1 28.3 1.000 39.2 19.0
200 10 100 1.000 0.0 9.0 1.000 0.0 9.2 1.000 80.3 46.8
200 20 60 0.994 0.0 12.3 0.972 0.0 14.2 1.000 70.8 11.6
200 20 80 1.000 0.0 13.8 1.000 0.0 12.4 1.000 48.0 15.4
200 20 100 1.000 0.0 7.9 1.000 0.0 9.1 1.000 108.7 40.0
200 50 60 0.996 0.0 12.4 0.973 0.1 14.3 1.000 159.6 13.2
200 50 80 1.000 0.0 13.8 1.000 0.0 12.6 1.000 115.0 17.8
200 50 100 1.000 0.0 6.5 1.000 0.0 9.4 1.000 325.2 52.6
300 1 90 1.000 0.0 6.4 0.993 0.0 8.2 1.000 15.9 6.8
300 1 120 0.999 0.0 36.8 0.993 0.0 31.4 1.000 73.6 36.9
300 1 150 1.000 0.0 45.1 1.000 0.0 7.4 1.000 70.1 46.2
300 2 90 1.000 0.0 7.3 0.993 0.0 9.6 1.000 27.2 7.1
300 2 120 1.000 0.0 39.5 0.994 0.0 13.2 1.000 126.1 39.4
300 2 150 1.000 0.0 27.3 1.000 0.0 7.5 1.000 59.1 28.4
300 5 90 0.998 0.0 9.7 0.983 0.0 13.0 1.000 65.5 8.6
300 5 120 1.000 0.0 24.6 0.998 0.1 21.3 1.000 108.5 28.9
300 5 150 1.000 0.0 15.0 1.000 0.0 8.6 1.000 92.0 31.3
300 10 90 0.996 0.0 16.1 0.967 0.0 14.9 1.000 210.5 14.8
300 10 120 1.000 0.0 17.2 1.000 0.0 13.4 1.000 122.2 20.5
300 10 150 1.000 0.0 9.9 1.000 0.0 9.2 1.000 275.1 53.4
300 20 90 0.995 0.1 18.3 0.969 0.1 13.2 1.000 388.8 15.4
300 20 120 1.000 0.0 14.7 1.000 0.0 13.2 1.000 205.2 19.9
300 20 150 1.000 0.0 7.9 1.000 0.0 9.7 1.000 876.4 85.8
300 50 90 0.997 0.1 17.3 0.973 0.1 17.8 1.000 947.7 17.8
300 50 120 1.000 0.0 14.7 1.000 0.1 11.4 1.000 438.4 20.4
300 50 150 1.000 0.0 7.4 1.000 0.1 10.1 1.000 1468.5 76.2

Table 2: Results for small random instances of CCKP, comparing matroid exchange, general exchange, and 2-opt heuristics

n m k best1 time1 iter1 best2 time2 iter2
6000 10 1800 1.000 16.8 42.7 0.995 41.6 20.6
6000 10 2400 1.000 2.8 172.0 1.000 6.2 11.7
6000 10 3000 1.000 2.2 53.0 1.000 5.0 10.0
6000 20 1800 1.000 55.2 80.0 0.993 49.7 20.6
6000 20 2400 1.000 3.7 90.2 1.000 11.2 15.1
6000 20 3000 1.000 3.1 14.0 1.000 8.5 11.6
6000 50 1800 1.000 42.2 66.3 0.992 40.8 14.1
6000 50 2400 1.000 6.2 37.3 1.000 23.9 16.4
6000 50 3000 1.000 5.7 5.0 1.000 22.1 9.8
6000 100 1800 1.000 46.4 78.3 0.992 75.0 16.9
6000 100 2400 1.000 10.5 21.9 1.000 43.9 16.6
6000 100 3000 1.000 9.9 4.7 1.000 36.3 9.7
6000 200 1800 1.000 70.9 93.8 0.993 163.2 20.8
6000 200 2400 1.000 19.4 17.1 1.000 58.5 17.2
6000 200 3000 1.000 19.5 4.8 1.000 56.7 11.2
6000 500 1800 1.000 139.6 103.5 0.994 313.5 18.5
6000 500 2400 1.000 47.4 12.4 1.000 137.5 17.1
6000 500 3000 1.000 47.0 4.7 1.000 119.1 10.2
6000 1000 1800 1.000 250.0 104.0 0.995 512.1 18.5
6000 1000 2400 1.000 87.9 10.9 1.000 241.7 17.7
6000 1000 3000 1.000 86.5 4.7 1.000 215.2 10.3
6000 2000 1800 1.000 478.9 107.4 0.995 1639.6 21.6
6000 2000 2400 1.000 207.2 10.2 1.000 535.4 16.6
6000 2000 3000 1.000 227.7 4.7 1.000 479.6 10.2
6000 5000 1800 1.000 1025.2 114.2 0.996 4700.6 21.2
6000 5000 2400 1.000 546.6 8.6 1.000 1364.3 16.0
6000 5000 3000 1.000 611.3 4.7 1.000 1242.5 9.2
8000 10 2400 1.000 24.3 45.1 0.994 72.7 19.9
8000 10 3200 1.000 5.0 232.4 1.000 10.8 10.2
8000 10 4000 1.000 3.9 70.4 1.000 9.0 9.6
8000 20 2400 1.000 65.9 70.1 0.993 48.3 16.7
8000 20 3200 1.000 6.7 110.9 1.000 20.2 15.4
8000 20 4000 1.000 5.7 17.5 1.000 15.1 10.5
8000 50 2400 1.000 84.8 75.4 0.993 77.6 17.1
8000 50 3200 1.000 11.6 40.5 1.000 46.6 16.9
8000 50 4000 1.000 11.1 5.4 1.000 40.1 12.2
8000 100 2400 1.000 82.8 88.2 0.993 132.0 17.4
8000 100 3200 1.000 20.7 23.7 1.000 90.2 18.9
8000 100 4000 1.000 22.7 4.9 1.000 85.0 11.1
8000 200 2400 1.000 145.3 115.8 0.993 283.3 20.9
8000 200 3200 1.000 40.2 17.7 1.000 151.3 17.7
8000 200 4000 1.000 40.9 4.7 1.000 147.6 10.3
8000 500 2400 1.000 327.7 139.3 0.995 646.2 25.4
8000 500 3200 1.000 84.2 12.0 1.000 252.3 16.8
8000 500 4000 1.000 85.6 4.7 1.000 241.1 9.9
8000 1000 2400 1.000 525.4 132.7 0.996 861.7 18.6
8000 1000 3200 1.000 150.0 9.6 1.000 425.1 16.7
8000 1000 4000 1.000 151.1 4.6 1.000 406.8 10.3
8000 2000 2400 1.000 992.9 145.3 0.996 3421.7 22.7
8000 2000 3200 1.000 403.4 9.3 1.000 1016.8 17.7
8000 2000 4000 1.000 430.8 4.6 1.000 936.9 10.4
8000 5000 2400 1.000 2024.1 140.9 0.996 7565.4 21.4
8000 5000 3200 1.000 1024.2 9.0 1.000 2663.6 17.3
8000 5000 4000 1.000 1192.0 4.6 1.000 2689.5 10.9

Table 3: Results for large random instances of CCKP, comparing matroid exchange and general exchange heuristics

m n k d pheur theur itheur pcplex dcplex tcplex
20 500 150 50 3379.7 0.09 19 3401.0 3401.0 215.7
20 1000 300 50 6845.3 0.94 56 6861.0 6861.0 279.7
20 1500 450 50 10283.2 0.36 23 10331.0 10331.7 TL
20 2000 600 30 13803.7 1.07 36 13833.0 13833.4 TL
20 2500 750 42 17296.3 0.42 13 17326.0 17326.0 0.9
30 3000 1000 33 22335.8 7.06 139 22425.6 22438.0 TL
30 3500 1000 28 23214.4 0.72 14 23220.0 23220.0 1.6
50 4000 1000 25 23491.0 0.08 5 23491.0 23491.0 3.0
50 4500 2000 30 34021.5 7.56 353 34021.5 34021.5 0.7
50 5000 2000 20 39577.3 24.31 668 39577.3 39577.3 0.9
50 5500 2000 18 43368.6 48.27 400 43514.5 43547.1 TL
50 6000 2000 16 45024.5 47.99 202 45217.3 45238.1 TL
50 6500 1000 15 24221.0 0.19 5 24221.0 24221.0 6.3
50 7000 2000 14 46353.2 2.72 32 46388.0 46388.0 7.5
70 7500 2000 13 46663.0 0.45 8 46669.0 46669.0 11.9
70 8000 3000 25 60202.5 92.96 804 60202.5 60202.5 1.9

Table 4: De Farias et al. instances, matroid heuristic vs. CPLEX

computing time needed by CPLEX (pcplex, dcplex, and tcplex). We set a time limit of one CPU hour. The results show that the heuristic, within a few iterations and seconds, provides an optimal solution in 5 out of 16 instances; in the remaining instances, the solution provided is less than 0.3% worse than the optimal one. To assess the solution quality of our heuristic on a larger set of instances, we again consider random instances and solve them to optimality by CPLEX, now without a time limit; see Table 5. Here, the value %gap states how far our heuristic solution is from optimality. While some of the instances are very hard to solve by CPLEX, our heuristic always finds a solution within 0.88% of optimality, in a running time that never exceeds 0.04 seconds for n = 200 and 0.12 seconds for n = 400.
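The heuristic iteratively updates the support of the solution, so each iteration has to solve the continuous problem restricted to the current support. For CCKP, assuming a single knapsack constraint a^T x <= b and bounds 0 <= x <= 1 with positive weights (the multi-constraint case would require an LP solver instead), that restricted subproblem is a plain continuous knapsack and can be solved greedily by profit density. A minimal sketch; the function name and interface are ours, not the paper's:

```python
def continuous_knapsack(c, a, b, support):
    """Maximize c^T x s.t. a^T x <= b, 0 <= x <= 1, x_i = 0 outside `support`.

    Classic greedy: fill items in order of decreasing profit density c_i/a_i,
    which is optimal for the continuous (LP relaxation) knapsack.
    Assumes a_i > 0 for all i in the support.
    """
    x = [0.0] * len(c)
    remaining = b
    for i in sorted(support, key=lambda j: c[j] / a[j], reverse=True):
        take = min(1.0, remaining / a[i])  # take as much of item i as fits
        x[i] = take
        remaining -= take * a[i]
        if remaining <= 1e-12:  # budget exhausted
            break
    return x
```

On a toy instance with c = (3, 2, 1), a = (1, 1, 1), and b = 2, the greedy takes the two densest items fully and leaves the third at zero.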
4.3 Sparse mean-variance optimization

Problem Description

The CCKP discussed above plays an important role in portfolio optimization, where the aim is to choose a set of investments that does not exceed the budget of the investor and that is sparse, i.e., only a certain number of different assets may be chosen. However, in such problems, one is usually not only interested in the expected return of the investments, but also in the risk involved, which can be measured by the variance.

Conic formulation

One way to model such problems is as follows:

    max  c^T x - ε x^T Q^{-1} x
    s.t. a_j^T x ≤ b_j,  j = 1, ..., m
         x_i = 0 if t_i = 0, for some t ∈ T,
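In the test-bed used for these experiments, Q^{-1} = AA^T with A drawn uniformly from [0,1], so the risk term can be evaluated without any matrix inversion: x^T Q^{-1} x = x^T (AA^T) x = ||A^T x||^2. A stdlib-only sketch of the instance construction and objective evaluation; the function names and interfaces are ours, not the paper's:

```python
import random

def make_A(n, seed=0):
    # A with entries uniform in [0, 1]; the test-bed sets Q^{-1} = A A^T,
    # so storing A itself is enough to evaluate the risk term.
    rng = random.Random(seed)
    return [[rng.random() for _ in range(n)] for _ in range(n)]

def mvo_objective(c, A, x, eps=1.0):
    # c^T x - eps * x^T Q^{-1} x, using x^T (A A^T) x = ||A^T x||^2,
    # i.e. no inversion of Q is ever needed.
    n = len(x)
    Atx = [sum(A[i][j] * x[i] for i in range(n)) for j in range(n)]
    return sum(ci * xi for ci, xi in zip(c, x)) - eps * sum(v * v for v in Atx)
```

With A the identity, the risk term reduces to ||x||^2, which gives a quick sanity check of the implementation.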

n m k %gap theur itheur tcplex itcplex
200 1 50 0.00 0.000 4.8 0.0 0.0
200 2 50 0.00 0.000 4.8 0.0 0.0
200 5 50 0.00 0.000 4.8 0.0 0.0
200 10 50 0.00 0.000 4.8 0.1 0.0
200 20 50 0.00 0.000 4.8 0.1 0.0
200 50 50 0.00 0.000 4.8 0.3 0.0
200 1 60 0.12 0.000 5.8 0.0 91.0
200 2 60 0.15 0.000 6.8 0.0 41.7
200 5 60 0.51 0.004 9.1 0.4 1182.2
200 10 60 0.66 0.016 13.9 12.1 43317.5
200 20 60 0.88 0.021 12.3 9.4 23404.3
200 50 60 0.66 0.038 12.4 140.5 96191.6
200 1 70 0.43 0.000 15.5 0.1 216.8
200 2 70 0.46 0.004 20.3 0.2 673.7
200 5 70 0.00 0.000 24.2 0.0 0.0
200 10 70 0.00 0.002 21.7 0.0 0.0
200 20 70 0.00 0.012 20.0 0.0 3.0
200 50 70 0.00 0.014 18.1 0.1 58.1
200 1 80 0.25 0.000 27.9 0.1 184.0
200 2 80 0.00 0.004 29.2 0.0 0.0
200 5 80 0.00 0.001 19.7 0.0 0.0
200 10 80 0.00 0.001 15.6 0.0 0.0
200 20 80 0.00 0.005 13.8 0.0 0.3
200 50 80 0.00 0.012 13.8 0.0 0.3
400 1 100 0.00 0.000 5.0 0.0 0.0
400 2 100 0.00 0.000 5.0 0.0 0.0
400 5 100 0.00 0.000 5.0 0.0 0.0
400 10 100 0.00 0.000 5.0 0.1 0.0
400 20 100 0.00 0.000 5.0 0.1 0.0
400 50 100 0.00 0.000 5.0 0.2 0.0
400 1 120 0.07 0.002 5.5 0.1 68.0
400 2 120 0.13 0.001 6.9 0.1 64.8
400 5 120 0.21 0.020 12.9 0.8 2120.2
400 10 120 0.46 0.075 18.7 428.3 26607.8
400 20 120 0.57 0.113 20.8 1492.4 64157.5
400 50 120 0.44 0.112 17.8 3630.3 2.7
400 1 140 0.45 0.006 22.9 0.1 258.6
400 2 140 0.31 0.032 33.9 3.1 8681.8
400 5 140 0.00 0.024 41.7 0.0 0.0
400 10 140 0.00 0.026 34.0 0.0 0.0
400 20 140 0.00 0.031 29.0 0.0 0.0
400 50 140 0.00 0.061 25.4 0.0 0.0
400 1 160 0.22 0.008 48.5 0.1 406.7
400 2 160 0.00 0.018 50.5 0.0 0.0
400 5 160 0.00 0.015 32.1 0.0 0.0
400 10 160 0.00 0.019 24.5 0.0 0.0
400 20 160 0.00 0.022 17.9 0.0 0.0
400 50 160 0.00 0.037 15.4 0.0 0.0

Table 5: Random instances of CCKP, comparison with CPLEX

where c, a_j ∈ R^n_+ and T are as in CCKP, but now the objective contains the risk term x^T Q^{-1} x, defined by the covariance matrix Q^{-1} and weighted by some ε > 0. As in CCKP, the set T is the uniform matroid, but now the risk term can be modeled using the second-order cone K^n in a standard way. In the resulting conic problem, Assumption 1 is satisfied again.

Test-bed used

As test-bed, we use exactly the same instances as in the CCKP case. For the risk term, we set ε = 1 and compute Q^{-1} randomly as follows: we first compute a matrix A with all entries chosen uniformly at random in [0,1] and then set Q = (AA^T)^{-1}.

Analysis of the results

In Tables 6-8, we repeat the experiments of Tables 2, 3, and 5, now including the risk part in the objective function. However, since the problem is now harder to solve, we use smaller instances. Altogether, it turns out that the results for our heuristic are similar to the CCKP case.

5 Conclusion

We presented a heuristic for a very general class of sparse conic optimization problems, based on the idea of iteratively updating the support of the solution. In the case where the sparsity is defined by a matroid, we devise an exchange heuristic for updating the support that is based on dual information. Our experiments show that this approach often yields very good solutions in a short running time for different types of problems, including the cardinality-constrained continuous knapsack problem.

References

[1] Brian Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11/12(1-4):613-623, 1999.

[2] Christoph Buchheim and Emiliano Traversi. Quadratic combinatorial optimization using separable underestimators. INFORMS Journal on Computing, 2018. To appear.

[3] IBM ILOG CPLEX Optimizer 12.6. www.ibm.com/software/integration/optimization/cplex-optimizer, 2015.

[4] Ismael R. de Farias Jr. and George L. Nemhauser. A polyhedral study of the cardinality constrained knapsack problem. Mathematical Programming, 96:439-467, 2003.

[5] Franz Rendl, Giovanni Rinaldi, and Angelika Wiegele. Solving Max-Cut to optimality by intersecting semidefinite and polyhedral relaxations. Mathematical Programming, 121(2):307-335, 2010.

n m k best1 time1 iter1 best2 time2 iter2 best3 time3 iter3
60 1 18 0.996 0.0 4.9 0.981 0.0 5.1 1.000 6.4 5.6
60 1 24 1.000 0.0 11.8 0.996 0.0 15.5 1.000 25.3 11.2
60 1 30 1.000 0.1 12.8 1.000 0.0 6.9 1.000 42.0 13.4
60 2 18 0.994 0.0 5.3 0.967 0.0 7.5 1.000 6.4 5.8
60 2 24 1.000 0.0 11.2 0.965 0.0 11.9 1.000 24.4 11.1
60 2 30 1.000 0.0 9.0 1.000 0.0 7.7 1.000 26.0 9.4
60 5 18 0.987 0.0 7.3 0.954 0.0 8.9 1.000 9.1 6.9
60 5 24 1.000 0.0 9.9 0.994 0.0 11.8 1.000 27.2 11.0
60 5 30 1.000 0.0 7.7 1.000 0.0 6.9 1.000 23.9 8.4
60 10 18 0.989 0.0 7.8 0.956 0.0 11.2 1.000 9.5 7.1
60 10 24 1.000 0.0 9.0 0.996 0.0 16.0 1.000 21.1 8.9
60 10 30 1.000 0.0 6.5 1.000 0.0 6.8 1.000 21.9 7.8
60 20 18 0.989 0.0 6.7 0.954 0.0 15.6 1.000 12.6 7.8
60 20 24 1.000 0.0 8.9 0.997 0.0 11.4 1.000 23.8 8.4
60 20 30 1.000 0.0 6.7 1.000 0.0 7.2 1.000 36.1 9.3
60 50 18 0.995 0.0 8.0 0.963 0.1 15.1 1.000 15.2 7.4
60 50 24 1.000 0.0 8.1 0.999 0.0 9.7 1.000 34.1 7.7
60 50 30 1.000 0.0 6.4 1.000 0.1 7.0 1.000 44.0 7.1
80 1 24 0.999 0.0 5.2 0.985 0.0 6.8 1.000 14.6 5.3
80 1 32 0.999 0.1 14.9 0.992 0.0 11.0 1.000 88.2 14.7
80 1 40 1.000 0.1 14.6 1.000 0.0 7.4 1.000 142.2 16.7
80 2 24 0.997 0.0 6.4 0.981 0.0 8.6 1.000 16.8 5.7
80 2 32 1.000 0.1 13.0 0.988 0.0 10.6 1.000 80.0 13.4
80 2 40 1.000 0.0 10.0 1.000 0.0 6.8 1.000 108.4 13.7
80 5 24 0.992 0.0 7.2 0.961 0.0 10.4 1.000 24.6 7.1
80 5 32 1.000 0.0 10.0 0.990 0.1 13.8 1.000 76.7 12.4
80 5 40 1.000 0.0 8.0 1.000 0.0 7.1 1.000 82.3 10.2
80 10 24 0.993 0.0 7.5 0.962 0.0 12.6 1.000 31.6 8.3
80 10 32 1.000 0.0 9.4 0.995 0.1 14.5 1.000 80.4 11.9
80 10 40 1.000 0.1 7.9 1.000 0.0 7.7 1.000 129.8 12.9
80 20 24 0.992 0.0 8.9 0.962 0.1 12.9 1.000 40.9 8.8
80 20 32 1.000 0.1 8.8 1.000 0.1 15.7 1.000 89.7 10.0
80 20 40 1.000 0.1 6.7 1.000 0.1 7.2 1.000 118.4 9.9
80 50 24 0.995 0.1 8.6 0.962 0.1 14.7 1.000 55.4 8.9
80 50 32 1.000 0.1 8.6 0.998 0.2 14.3 1.000 119.7 8.5
80 50 40 1.000 0.1 7.0 1.000 0.1 8.0 1.000 138.3 8.6

Table 6: Results for small random instances of MVO, comparing matroid exchange, general exchange, and 2-opt heuristics

n m k best1 time1 iter1 best2 time2 iter2
500 10 150 1.000 4.1 14.5 0.984 3.6 11.5
500 10 200 1.000 6.0 15.1 1.000 2.2 7.4
500 10 250 1.000 2.8 6.9 1.000 2.3 6.7
500 20 150 1.000 5.6 17.5 0.982 5.1 14.0
500 20 200 1.000 4.9 12.0 1.000 2.7 8.3
500 20 250 1.000 2.4 5.7 1.000 2.4 6.4
500 50 150 1.000 6.8 18.9 0.983 5.9 15.1
500 50 200 1.000 4.6 9.1 1.000 3.4 7.7
500 50 250 1.000 2.7 5.3 1.000 3.1 6.8
500 100 150 1.000 9.7 21.1 0.985 6.7 13.9
500 100 200 1.000 4.7 8.2 1.000 4.2 7.9
500 100 250 1.000 3.8 5.4 1.000 4.1 6.5
1000 10 300 1.000 35.4 19.5 0.987 15.3 9.4
1000 10 400 1.000 25.6 12.2 1.000 11.9 7.3
1000 10 500 1.000 11.9 5.8 1.000 9.6 5.8
1000 20 300 1.000 42.9 22.3 0.987 24.5 13.5
1000 20 400 1.000 19.0 9.3 1.000 11.0 6.8
1000 20 500 1.000 10.1 5.2 1.000 9.8 5.8
1000 50 300 1.000 53.9 27.0 0.988 20.7 11.2
1000 50 400 1.000 23.6 9.7 1.000 15.5 7.7
1000 50 500 1.000 11.6 5.1 1.000 10.4 5.4
1000 100 300 1.000 58.8 27.3 0.990 31.2 14.8
1000 100 400 1.000 24.2 8.1 1.000 19.2 7.4
1000 100 500 1.000 17.5 5.2 1.000 15.0 5.5

Table 7: Results for large random instances of MVO, comparing matroid exchange and general exchange heuristics

n m k %gap theur itheur tcplex itcplex
60 1 12 0.00 0.002 4.5 1.2 244.8
60 2 12 0.00 0.005 4.4 0.1 7.9
60 5 12 0.00 0.005 4.4 1.0 243.5
60 10 12 0.00 0.005 4.4 0.9 246.7
60 1 18 0.38 0.008 4.9 2.7 487.9
60 2 18 0.68 0.014 5.3 1.6 247.2
60 5 18 1.46 0.021 7.3 5.0 1071.7
60 10 18 1.41 0.024 7.8 8.1 1778.7
60 1 24 0.04 0.037 11.8 4.7 744.6
60 2 24 0.00 0.029 11.2 0.0 0.0
60 5 24 0.00 0.031 9.9 0.0 0.6
60 10 24 0.00 0.033 9.0 3.9 483.9
60 1 30 0.00 0.046 12.8 0.0 0.0
60 2 30 0.00 0.031 9.0 0.0 0.0
60 5 30 0.00 0.021 7.7 0.0 0.6
60 10 30 0.00 0.022 6.5 0.0 0.7
80 1 16 0.00 0.011 5.0 17.7 3996.7
80 2 16 0.00 0.016 5.1 31.4 28212.7
80 5 16 0.00 0.017 5.0 637.4 41180.6
80 10 16 0.00 0.018 5.2 1477.9 9280.0
80 1 24 0.15 0.023 5.2 439.5 66975.3
80 2 24 0.33 0.033 6.4 528.9 84235.2
80 5 24 0.95 0.036 7.2 809.8 11558.3
80 10 24 0.81 0.039 7.5 37.3 28638.7
80 1 32 0.09 0.077 14.9 41.0 3766.0
80 2 32 0.00 0.071 13.0 93.0 5145.5
80 5 32 0.00 0.053 10.0 82.3 3869.8
80 10 32 0.00 0.052 9.4 49.3 3543.2
80 1 40 0.00 0.098 14.6 49.4 3258.7
80 2 40 0.00 0.055 10.0 96.3 5145.5
80 5 40 0.00 0.041 8.0 81.3 3869.8
80 10 40 0.00 0.057 7.9 53.0 3543.2

Table 8: Random instances of MVO, comparison with CPLEX