Eigenvalue techniques for convex objective, nonconvex optimization problems

Daniel Bienstock, Columbia University, New York

November 2009

Abstract

Consider a minimization problem given by a nonlinear, convex objective function over a nonconvex feasible region. Traditional optimization approaches will frequently encounter a fundamental difficulty when dealing with such problems: even if we can efficiently optimize over the convex hull of the feasible region, the optimum will likely lie in the interior of a high-dimensional face, far away from any feasible point. As a result (and in particular, because of the convex objective), the lower bound provided by a convex relaxation will typically be extremely poor. Furthermore, we will tend to see very large branch-and-bound (or -cut) trees with little or no improvement in the lower bound. In this work we present theory and implementation for an approach that relies on three ingredients: (a) the S-lemma, a major tool in convex analysis; (b) efficient projection of quadratics onto lower-dimensional hyperplanes; and (c) efficient computation of combinatorial bounds on the minimum distance from a given point to the feasible set, for several significant optimization problems. Altogether, our approach strongly improves lower bounds at a small computational cost, even in very large examples.

1 Introduction

We consider problems with the general form

(F):   F* := min F(x),          (1)
       s.t.  x ∈ P,             (2)
             x ∈ K.             (3)

Here,
• F(x) is a convex function; in this abstract, a convex quadratic, i.e. F(x) = x^T M x + v^T x (with M ⪰ 0 and v ∈ R^n);
• P ⊆ R^n is a convex set over which we can efficiently optimize F;
• K ⊆ R^n is a nonconvex set with special structure.

In the applications we consider, n can be quite large. We assume that a given convex relaxation of the set described by (2), (3) is under consideration. For example, when dealing with a particularly complex set K, this may be the only sensible relaxation we are able to produce. Or, it may be the only computationally practicable relaxation we have, especially in the case of large n. Or, as is often the case in practice, the relaxation is one from which it is easy to quickly produce (perhaps good) heuristic solutions to problem F.

Regardless of the case, one common fundamental difficulty is likely to be encountered: because of the convexity of F, the optimal solution to a convex relaxation will frequently be attained in the interior of a high-dimensional face of the relaxation, far from the set K. Thus, the lower bound proved by the relaxation will be weak (often, very weak) compared to F*. What is more, if one were to rely on branch-and-cut (the major tool of modern mixed-integer programming), the proved lower bound may improve little if at all when n is large, even after massive amounts of branching and extremely long computation times. This stalling of the lower bounding procedure is commonly encountered in practice and constitutes a significant challenge; it is the primary subject of our study.

We present a body of techniques designed to alleviate this difficulty. After obtaining the solution x* to the given relaxation of problem F, our methods use techniques of convex analysis, eigenvalue optimization, and combinatorial estimation in order to quickly obtain a valid lower bound on F* which is strictly larger (often, significantly so) than F(x*). We will describe an important class of problems where our method, applied to a cheap but weak formulation, produces bounds comparable to or better than those produced by much more sophisticated formulations, at a small fraction of the computational cost.

To motivate our discussion, we introduce two significant examples.

Cardinality constrained optimization problems. Here, for some integer 0 < K ≤ n,

K = { x ∈ R^n : ‖x‖₀ ≤ K },

where the zero-norm ‖v‖₀ of a vector v denotes the number of nonzero entries of v. A classical example of such a constraint arises in portfolio optimization (see e.g. [2]), but modern applications involving this constraint arise in statistics, machine learning [12], and, especially, in engineering and biology [18]. Problems related to compressive sensing have an explicit cardinality constraint. Also see [7].

The simplest canonical example of problem F is as follows:

F* = min F(x),                          (4)
s.t.  Σ_j x_j = 1,  x ≥ 0,              (5)
      ‖x‖₀ ≤ K.                         (6)

This example is significant because (a) one can show that this problem is strongly NP-hard, and (b) it does arise in practice, exactly as stated. Note that, modulo a simple change of variables, the linear constraint in (5) is simply a stand-in for a general non-homogeneous linear equation with nonzero coefficients. In spite of its simplicity, this example already incorporates the fundamental difficulty alluded to above: clearly,

conv{ x ∈ R^n_+ : Σ_j x_j = 1, ‖x‖₀ ≤ K } = { x ∈ R^n_+ : Σ_j x_j = 1 }.

In other words, from a convexity standpoint the cardinality constraint disappears. Moreover, if the quadratic in F is positive definite and dominates the linear term, then the minimizer of F over the unit simplex will be an interior point, i.e., a point with all coordinates positive (and in real-life examples, of similar orders of magnitude); whereas K ≪ n in practice.
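To illustrate this last point numerically, the following sketch (hypothetical data and parameters, not taken from the paper) minimizes a positive definite quadratic over the unit simplex with an off-the-shelf solver and checks that every coordinate of the optimum is positive, so the convex relaxation ignores the cardinality constraint entirely.

# A quick numerical illustration (hypothetical data) of the point above.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 30
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)          # positive definite, dominating the linear term
v = 0.1 * rng.standard_normal(n)

F = lambda x: x @ M @ x + v @ x
res = minimize(F, np.ones(n) / n, jac=lambda x: 2 * M @ x + v,
               method="SLSQP",
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])

print("smallest coordinate of the relaxation optimum:", res.x.min())
print("coordinates above 1e-8:", int(np.sum(res.x > 1e-8)))   # typically all n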

Multi-term disjunctive inequalities. Consider vectors c_i ∈ R^n and reals β_i (1 ≤ i ≤ m). We are interested in a nonconvex set of the form

K = ∩_{i=1}^m K_i,  where  K_i = { x ∈ R^n : c_i^T x ≤ β_i  or  c_i^T x ≥ β_i + 1 },  for 1 ≤ i ≤ m.   (7)

More generally, we are interested in (and our methodology applies to) cases where each K_i is a disjunction among multiple polyhedral sets. The classical split cuts [6] provide an example of disjunctions based on inequalities as in (7), with integral c_i and β_i. However, disjunctive inequalities naturally arise in numerous settings as strong valid statements on a combinatorial set (see e.g. [3], [4]). Note that in general, given a polyhedral set P, the convex hull of the intersection of P and a set (7) will have exponentially many facets (even the case m = 1 is not trivial). Thus, most likely, one would work with a relaxation of P ∩ (∩_{i=1}^m K_i). Moreover, in the typical application of disjunctions (or split cuts), the disjunctions are found sequentially as the result of a cutting procedure. For simplicity, consider the case m = 1: the very first disjunction is added because it is violated by the solution to an initial relaxation. But thinking about the resulting geometry in the case of a (strictly) convex F(x), it is clear that, quite possibly, argmin{ F(x) : x ∈ conv(P ∩ K_1) } will still violate the first disjunction. For m > 1 the likelihood of such a stall in the cutting procedure increases. To the extent that disjunctive sets are a general-purpose technique for formulating combinatorial constraints, the methods in this paper apply to a wide variety of optimization problems, and should prove effective when the objective is strictly convex.

1.1 Techniques

Our methods embody two primary techniques:

(a) The S-lemma (see [19], also [1], [5], [14]). Let f, g : R^n → R be quadratic functions and suppose there exists x̄ ∈ R^n such that g(x̄) > 0. Then f(x) ≥ 0 whenever g(x) ≥ 0 if and only if there exists µ ≥ 0 such that (f − µg)(x) ≥ 0 for all x. Remark: here, a quadratic may contain a linear as well as a constant term.

The S-lemma can be used as an algorithmic framework for minimizing a quadratic subject to a quadratic constraint. Let p, q be quadratic functions and let α, β be reals. Then

min{ p(x) : q(x) ≥ β } ≥ α   iff   ∃ µ ≥ 0 s.t. p(x) − α − µ q(x) + µ β ≥ 0 for all x.   (8)

In other words, the minimization problem in (8) can be approached as a simultaneous search for two reals α and µ ≥ 0, with α largest possible such that the last inequality in (8) holds. The S-lemma is significant in that it provides a good characterization (i.e. polynomial-time) for a usually nonconvex optimization problem. See [13], [15], [16], [17], [20] and the references therein, in particular regarding the connection to the trust-region subproblem.

(b) Consider a given nonconvex set K. We will assume, as a primitive, that (possibly after an appropriate change of coordinates), given a point x̂ ∈ R^n, we can efficiently compute a strong (combinatorial) lower bound on the Euclidean distance between x̂ and the nearest point in P ∩ K. We will make this assumption more precise in Section 1.2, and we will show that it indeed holds in the cardinality constrained case (see Section 1.4; in the full paper we will show how to do so for the multi-term disjunctive case). Roughly speaking, it is the structure of a set K of interest that makes the assumption possible. In the rest of this section we will denote by D(x̂) our lower bound on the minimum distance from x̂ to P ∩ K.

We can put together (a) and (b) into a simple template for proving lower bounds for F*:

S.1 Compute an optimal solution x* to the given relaxation of problem F.
S.2 Next we obtain the quantity D(x*).
S.3 Finally, we apply the S-lemma as in (8), using F(x) for p(x), and (the exterior of) the ball centered at x* with radius D(x*) for q(x) ≥ β.

[Figure 1: A simple case.]

For a simple application of this template, consider Figure 1. This shows an instance of problem (4)-(6), with n = 3 and K = 2, where all coordinates of x* are positive. The figure also assumes that D(x*) is exact: it equals the minimum distance from x* to the feasible region. If we minimize F(x) subject to being on the exterior of this ball (in other words, if we apply the S-lemma using F(x) as the objective and the ball as the constraint), the optimum will be attained at y. Thus, F(y) is a valid lower bound on F*; we have F(y) = F(x*) + λ₁ R², where R is the radius of the ball and λ₁ is the minimum eigenvalue of the restriction of the quadratic form x^T M x to the unit simplex. Note: λ₁ is lower bounded by the smallest eigenvalue of M.
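As a rough numerical sketch of steps S.1-S.3 in the simplex setting (our own illustration, not the paper's implementation; the distance bound R is assumed to be supplied by the caller), the bound F(x*) + λ₁ R² can be computed as follows.

import numpy as np
from scipy.linalg import null_space

def simplex_ball_bound(M, v, x_star, R):
    """Sketch of the template S.1-S.3 for problem (4)-(6).

    x_star : optimum of the convex relaxation over the unit simplex.
    R      : a lower bound D(x_star) on the distance from x_star to the
             cardinality-constrained feasible set (assumed given here).
    Returns the valid lower bound F(x_star) + lambda_1 * R**2, where lambda_1
    is the smallest eigenvalue of M restricted to {x : sum(x) = 0}, i.e. the
    affine hull of the simplex shifted to the origin."""
    B = null_space(np.ones((1, len(v))))          # orthonormal basis of {x : sum(x) = 0}
    lam1 = np.linalg.eigvalsh(B.T @ M @ B).min()  # smallest restricted eigenvalue
    F_star = x_star @ M @ x_star + v @ x_star
    return F_star + lam1 * R ** 2

In Section 1.4 the bound R = D(x*) itself is obtained combinatorially; here it is simply an input.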

Now consider the example in Figure 2, corresponding to the case of a single disjunctive inequality. Here, x_F is the minimizer of F(x) over the affine hull of the set P. A straightforward application of the S-lemma will yield as a lower bound (on F*) the value F(y), which is weak; weaker, in fact, than F(x*). The problem is caused by the fact that x_F is not in the relative interior of the convex hull of the feasible region. In summary, a direct use of our template will not work.

[Figure 2: The simple template fails.]

1.2 Adapting the template

In order to correct the general form of the difficulty depicted by Figure 2, we would need to solve a problem of the form

V := min{ F(x) : x − x* ∈ C, (x − x*)^T (x − x*) ≥ δ² }   (9)

where δ > 0, and C is the cone of feasible directions (for P) at x*. We can view this as a cone-constrained version of the problem addressed by the S-lemma. Clearly, F(x*) ≤ V ≤ F*, with the first inequality in general strict. If we are dealing with polyhedral sets, (9) becomes (after some renaming)

min{ F(ω) : Cω ≥ 0, ω^T ω ≥ δ² }   (10)

where C is an appropriate matrix. However, we have (proof in the full paper):

Theorem 1.1 Problem (10) is strongly NP-hard.

We stress that the NP-hardness result is not simply a consequence of the nonconvex constraint in (10): without the linear constraints, the problem becomes polynomially solvable (i.e., it is handled by the S-lemma; see the references).

To bypass this negative result, we will adopt a different approach. We assume that there is a positive-definite quadratic function q(x) such that for any y ∈ R^n, in polynomial time we can produce a (strong, combinatorial) lower bound D²_min(y, q) on the quantity min{ q(y − x) : x ∈ P ∩ K }. In Section 1.4 we will address how to produce the quadratic q(x) and the value D²_min(y, q) when K is defined by a cardinality constraint (a similar construct exists in the disjunctive inequalities case).

Let c = ∇F(x*) (other choices for c are discussed in the full paper). Note that for any x ∈ P ∩ K, c^T (x − x*) ≥ 0. For α ≥ 0, let p_α = x* + α c, and let H_α be the hyperplane through p_α orthogonal to c. Finally, define

V(α) := min{ F(x) : q(x − p_α) ≥ D²_min(p_α, q), x ∈ H_α },   (11)

and let y_α attain the minimum. Note: computing V(α) entails an application of the S-lemma, restricted to H_α. See Figure 3. Clearly, V(α) ≤ F*. Then:

• Suppose α = 0, i.e. p_α = x*. Then x* is a minimizer of F(x) subject to x ∈ H_0. Thus V(0) > F(x*) when F is positive-definite.

• Suppose α > 0. Since c^T (y_α − x*) > 0, by convexity V(α) = F(y_α) > F(x*).

Thus, F(x*) ≤ inf_{α≥0} V(α) ≤ F*, the first inequality being strict in the positive-definite case. [It can be shown that the inf is a min.] Each value V(α) incorporates combinatorial information (through the quantity D²_min(p_α, q)), and thus the computation of min_{α≥0} V(α) cannot be carried out through direct convex optimization techniques. As a counterpoint to Theorem 1.1, we have (proof in the full paper):

Theorem 1.2 In (10), if C has one row and q(x) = Σ_j x_j², then V ≤ inf_{α≥0} V(α).

In order to develop a computationally practicable approach that uses these observations, let 0 = α^(0) < α^(1) < ... < α^(J) be such that for any x ∈ P ∩ K, c^T (x − x*) ≤ α^(J) ‖c‖²₂. Then:

Updated Template
1. For 0 ≤ i < J, compute a value Ṽ(i) ≤ min{ V(α) : α^(i) ≤ α ≤ α^(i+1) }.
2. Output min_{0≤i<J} Ṽ(i).

The idea here is that if (for all i) α^(i+1) − α^(i) is small, then V(α^(i)) ≈ V(α^(i+1)). Thus the quantity output in step 2 will closely approximate min_{α≥0} V(α). In our implementation, we compute Ṽ(i) by appropriately interpolating between V(α^(i)) and V(α^(i+1)) (details in the full paper).

Thus our approach reduces to computing quantities of the form V(α). We need a fast procedure for this task (since J may be large). Considering eq. (11), we see that this involves an application of the S-lemma, restricted to the hyperplane H_α. An efficient realization of this idea, which allows for additional leveraging of combinatorial information, is obtained by computing the projection of the quadratic F(x) onto H_α. This is the subject of the next section.

[Figure 3: A better paradigm. Shown: x*, the direction c, the point p_α, the hyperplane H_α, the bounding ellipsoid in H_α, and y_α, the minimizer of F(x) in H_α.]

1.3 Projecting a quadratic

Let M = QΛQ^T be an n × n symmetric matrix. Here the columns of Q are the eigenvectors of M, and Λ = diag{λ_1, ..., λ_n}, where the λ_i are the eigenvalues of M. We assume λ_1 ≤ ... ≤ λ_n. Let c ≠ 0, denote H = { x ∈ R^n : c^T x = 0 }, and let P be the projection matrix onto H. In this section we describe an efficient algorithm for computing an eigenvalue-eigenvector decomposition of the projected quadratic P M P. Note that if x ∈ H, then x^T P M P x = x^T M x. The vector c could be dense (it is dense in important cases) and Q could also be dense. Our approach reverse engineers, and extends, results from [8] (also see Section 12.6 of [9] and the references therein).

Clearly, c is an eigenvector of P M P (corresponding to eigenvalue 0). The remaining eigenvalues λ̄_1, ..., λ̄_{n−1} are known to satisfy the interlacing property λ_1 ≤ λ̄_1 ≤ λ_2 ≤ λ̄_2 ≤ ... ≤ λ̄_{n−1} ≤ λ_n.

Definition 1.3 An eigenvector q of M is called acute if q^T c ≠ 0. An eigenvalue λ of M is called acute if at least one eigenvector corresponding to λ is acute.

In (e.2) below we will use the convention 0/0 = 0.

Lemma 1.4 Let α_1 < α_2 < ... < α_q be the acute eigenvalues of M. Write d = Q^T c. Then, for 1 ≤ i ≤ q − 1:
(e.1) The equation Σ_{j=1}^n d_j² / (λ_j − λ) = 0 has a unique solution λ̂_i in (α_i, α_{i+1}).
(e.2) Let w_i = Q (Λ − λ̂_i I)^{-1} d. Then c^T w_i = 0 and P M P w_i = λ̂_i w_i.

Altogether, Lemma 1.4 produces q − 1 eigenvalue/eigenvector pairs of P M P. The vector in (e.2) should not be explicitly computed; the factorized form in (e.2) suffices. The root of the equation in (e.1) can be quickly obtained using numerical methods (such as golden section search), since the expression in (e.1) is monotonically increasing in (α_i, α_{i+1}). A different construction, which handles the non-acute eigenvectors and the eigenvalues of M with multiplicity greater than one, produces n − q additional distinct eigenvalue/eigenvector pairs for P M P orthogonal to c, which are also distinct from those obtained through Lemma 1.4 (details in the full paper).

To conclude this section, we note that it is straightforward to iterate the procedure in this section, so as to project a quadratic onto hyperplanes of dimension less than n − 1. More details will be provided in the talk and in the full paper.
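To make Lemma 1.4 concrete, here is a small hedged sketch (our own, assuming for simplicity that all eigenvalues of M are distinct and acute): it locates each λ̂_i by bisection on the monotone expression in (e.1) and forms w_i in the factorized form of (e.2).

import numpy as np

def project_quadratic(Q, lam, c, tol=1e-12):
    """Sketch of Lemma 1.4 under simplifying assumptions (all eigenvalues of
    M = Q diag(lam) Q' distinct and acute, i.e. d_j = q_j'c != 0 for all j).
    Returns n-1 eigenvalues of PMP orthogonal to c, and their eigenvectors,
    where P projects onto {x : c'x = 0}."""
    lam = np.asarray(lam, float)
    d = Q.T @ np.asarray(c, float)
    order = np.argsort(lam)
    lam, d, Q = lam[order], d[order], Q[:, order]

    def secular(t):                      # sum_j d_j^2 / (lam_j - t), increasing between poles
        return np.sum(d ** 2 / (lam - t))

    eigvals, eigvecs = [], []
    for i in range(len(lam) - 1):
        lo, hi = lam[i], lam[i + 1]      # unique root in this open interval, by (e.1)
        while hi - lo > tol * max(1.0, abs(hi)):
            mid = 0.5 * (lo + hi)
            if secular(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        t = 0.5 * (lo + hi)
        w = Q @ (d / (lam - t))          # w_i = Q (Lambda - t I)^{-1} d, as in (e.2)
        eigvals.append(t)
        eigvecs.append(w / np.linalg.norm(w))
    return np.array(eigvals), np.column_stack(eigvecs)

For small n, the output can be checked against a dense eigendecomposition of P M P.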

1.4 Combinatorial bounds on distance functions

Here we take up the problem of computing strong lower bounds on the Euclidean distance from a point to the set P ∩ K. In this abstract we will focus on the cardinality constrained problem, but results of a similar flavor hold for the disjunctive inequalities case. Let a ∈ R^n, b ∈ R, let K < n be a positive integer, and let ω ∈ R^n. Consider the problem

D²_min(ω, a) := min{ Σ_{j=1}^n (x_j − ω_j)² : a^T x = b and ‖x‖₀ ≤ K }.   (12)

Clearly, the sum of the n − K smallest values ω_j² constitutes a ("naive") lower bound for problem (12). But it is straightforward to show that an exact solution to (12) is obtained by choosing S ⊆ {1, ..., n} with |S| ≤ K so as to minimize

(b − Σ_{j∈S} a_j ω_j)² / Σ_{j∈S} a_j²  +  Σ_{j∉S} ω_j².   (13)

[We use the convention that 0/0 = 0.] Empirically, the naive bound mentioned above is very weak, since the first term in (13) is typically at least an order of magnitude larger than the second; and it is the bound, rather than the set S itself, that matters.

Suppose a_j = 1 for all j. It can be shown, using (13), that the optimal set S has the following structure: S = P ∪ N, where |P| + |N| ≤ K, P consists of the indices of the |P| smallest nonnegative ω_j, and N consists of the indices of the |N| smallest ω_j with ω_j < 0. The optimal S can be computed in O(K) time, after sorting the ω_j. When ω ≥ 0 or ω ≤ 0 we recover the naive procedure mentioned above (though again we stress that the first term in (13) dominates).
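For small n, the minimization in (13) can also be checked directly; the following sketch (our own illustrative code, exponential in n and assuming all a_j ≠ 0 and b ≠ 0) evaluates (13) over every support S and can serve as a reference against which the O(K) procedure above is tested.

import itertools
import numpy as np

def dmin_sq_bruteforce(omega, a, b, K):
    """Brute-force evaluation of (12) via formula (13): for each support S with
    |S| <= K, the best x supported on S contributes
        (b - sum_{j in S} a_j w_j)^2 / sum_{j in S} a_j^2 + sum_{j not in S} w_j^2.
    Exponential in n; intended only as a small-scale reference implementation."""
    omega = np.asarray(omega, float)
    a = np.asarray(a, float)
    total = np.sum(omega ** 2)
    best = np.inf
    for k in range(1, K + 1):
        for S in itertools.combinations(range(len(omega)), k):
            S = list(S)
            num = (b - a[S] @ omega[S]) ** 2
            den = np.sum(a[S] ** 2)
            val = num / den + (total - np.sum(omega[S] ** 2))
            best = min(best, val)
    return best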

For general a, however, we have (proof in the full paper):

Theorem 1.5 (a) It is NP-hard to compute D²_min(ω, a). (b) Let 0 < ε < 1. We can compute a vector x̂ with Σ_j a_j x̂_j = b and ‖x̂‖₀ ≤ K, and such that

Σ_{j=1}^n (x̂_j − ω_j)² ≤ (1 + ε) D²_min(ω, a),

in time polynomial in n, ε^{-1}, and the number of bits needed to represent ω and a.

In our current implementation we have not used the algorithm in part (b) of the theorem, though we certainly plan to evaluate this option. Instead, we proceed as follows. Assume a_j ≠ 0 for all j. Rather than solving problem (12), we consider

min{ Σ_{j=1}^n a_j² (x_j − ω_j)² : a^T x = b and ‖x‖₀ ≤ K }.

Writing ω̄_j = a_j ω_j (for all j), this becomes

min{ Σ_{j=1}^n (x_j − ω̄_j)² : Σ_j x_j = b and ‖x‖₀ ≤ K },

which, as noted above, can be efficiently solved.

1.5 Application of the S-lemma

Let M = QΛQ^T ⪰ 0 be a matrix given by its eigenvector factorization. Let H be a hyperplane through the origin, x̂ ∈ H, v ∈ R^n, δ_j > 0 for 1 ≤ j ≤ n, and β > 0. Here we solve the problem

min x^T M x + v^T x,  subject to  Σ_{i=1}^n δ_i (x_i − x̂_i)² ≥ β, and x ∈ H.   (14)

By rescaling, translating, and appropriately changing notation, the problem becomes

min x^T M x + v^T x,  subject to  Σ_{i=1}^n x_i² ≥ β, and x ∈ H.   (15)

Let P be the n × n matrix corresponding to projection onto H. Using Section 1.3 we can produce a representation of P M P as Q̄ Λ̄ Q̄^T, where the n-th eigenvector q̄_n is orthogonal to H, and λ̄_1 = min_{i<n} { λ̄_i }. Thus, problem (15) becomes, for appropriately defined ṽ,

Γ := min Σ_{j=1}^{n−1} λ̄_j y_j² + 2 ṽ^T y,  subject to  Σ_{j=1}^{n−1} y_j² ≥ β.   (16)

Using the S-lemma, we have that Γ ≥ γ iff there exists µ ≥ 0 such that

Σ_{j=1}^{n−1} λ̄_j y_j² + 2 ṽ^T y − γ − µ ( Σ_{j=1}^{n−1} y_j² − β ) ≥ 0   for all y ∈ R^{n−1}.   (17)

Using some linear algebra, this is equivalent to

Γ = max{ µβ − Σ_{i=1}^{n−1} ṽ_i² / (λ̄_i − µ) : 0 ≤ µ < λ̄_1 }.   (18)

This is a simple numerical task, since on [0, λ̄_1) the objective in (18) is concave in µ.

Remarks: (1) Our updated template in Section 1.2 requires the solution of multiple problems of the form (18) (for different β and ṽ), but just one computation of Q̄ and Λ̄. (2) Consider any integer 1 ≤ p < n − 1. When µ < λ̄_1, the expression maximized in (18) is lower bounded by

µβ − Σ_{i=1}^{p} ṽ_i² / (λ̄_i − µ) − Σ_{i=p+1}^{n−1} ṽ_i² / (λ̄_{p+1} − µ).

This, and related facts, yield an approximate version of our approach which only asks for the first p elements of the eigenspace of P M P (and of M).
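Since the objective in (18) is concave on [0, λ̄_1), the one-dimensional maximization can be carried out with any scalar method; a hedged sketch using SciPy's bounded scalar minimizer (our illustration, not the paper's code) follows.

import numpy as np
from scipy.optimize import minimize_scalar

def gamma_bound(lam_bar, v_tilde, beta, eps=1e-9):
    """Evaluate (18): max over 0 <= mu < lam_bar_1 of
           mu*beta - sum_i v_tilde_i**2 / (lam_bar_i - mu).
    lam_bar : the n-1 projected eigenvalues (assumed all > 0 here),
    v_tilde : the transformed linear term, beta : squared-distance bound."""
    lam_bar = np.asarray(lam_bar, float)
    v_tilde = np.asarray(v_tilde, float)
    lam1 = lam_bar.min()

    def neg_obj(mu):                      # negated, since we minimize
        return -(mu * beta - np.sum(v_tilde ** 2 / (lam_bar - mu)))

    res = minimize_scalar(neg_obj, bounds=(0.0, lam1 - eps), method="bounded")
    return -res.fun                       # the lower bound Gamma on problem (16)

Here lam_bar and v_tilde would come from the projection of Section 1.3, after the rescaling and translation leading to (16).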

1.5.1 Capturing the second eigenvalue

We see that Γ < λ̄_1 β (and frequently this bound is close). In experiments, the solution y* to (16) often "cheats" in that y*_1 is close to zero. We can then improve on the bound if the second projected eigenvalue, λ̄_2, is significantly larger than λ̄_1. Assuming that is the case, pick a value θ with y*_1²/β < θ < 1.

(a) If we assert that y_1² ≥ θβ, then we can strengthen the constraint in (14) to Σ_{i=1}^n δ_i (x_i − x̂_i)² ≥ γ, where γ = γ(θ) > β. This is certainly the case for the cardinality constraint and for the disjunctive inequalities case (details in the full paper). So the assertion amounts to applying the S-lemma, but using γ in place of β.

(b) Otherwise, we have that Σ_{i=2}^{n−1} y_i² ≥ (1 − θ)β. In this case, instead of the right-hand side of (18), we will have

max{ µ(1 − θ)β − Σ_{i=2}^{n−1} ṽ_i² / (λ̄_i − µ) : 0 ≤ µ < λ̄_2 }.   (19)

The minimum of the quantities obtained in (a) and (b) yields a valid lower bound on Γ; we can evaluate several candidates for θ and choose the strongest bound. When λ̄_2 is significantly larger than λ̄_1 we often obtain an improvement over the basic approach of Section 1.5. Note: the approach in this section constitutes a form of branching, and in our testing it has proved very useful when λ̄_2 > λ̄_1. It is, intrinsically, a combinatorial approach, and quite distinct from the S-lemma and related problems (e.g. the trust-region subproblem). It is thus not easily reproducible using convexity arguments alone. Also see remark (2) of the previous section.

2 Computational experiments

For the sake of brevity, we describe a partial set of experiments (more in the talk and in the full paper). The purpose of our experiments is (a) to investigate the speed of our numerical linear algebra routines, in particular the projection of quadratics, for large n and a large number of nonzeroes in the quadratic, and (b) to study the strength of the bound our method produces. We study problems of the form

min{ x^T M x + v^T x : Σ_j x_j = 1, x ≥ 0, ‖x‖₀ ≤ K }.

The matrix M is given in its eigenvector/eigenvalue factorization QΛQ^T. To stress-test our linear algebra routines, we construct Q as the product of random rotations: as the number of rotations increases, so does the number of nonzeroes in Q, and the overall complexity of M.
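A sketch of that stress-test construction (our reading of the description above; the exact distribution of rotations used in the paper is not specified) applies random Givens rotations to an initially diagonal Q:

import numpy as np

def random_rotation_product(n, num_rotations, seed=0):
    """Build an orthogonal Q as a product of random Givens rotations, as in the
    stress test described above (hypothetical parameters; the paper does not
    specify the exact distribution). More rotations give a denser Q."""
    rng = np.random.default_rng(seed)
    Q = np.eye(n)
    for _ in range(num_rotations):
        i, j = rng.choice(n, size=2, replace=False)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        Qi, Qj = Q[:, i].copy(), Q[:, j].copy()
        Q[:, i] = c * Qi + s * Qj          # rotate columns i and j
        Q[:, j] = -s * Qi + c * Qj
    return Q

# Example (sizes taken from the text below): Q = random_rotation_product(2443, 5000)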

In our experiments, we ran our procedure after computing the solution to the (diagonalized) weak formulation

min{ y^T Λ y + v^T x : Q^T x = y, Σ_j x_j = 1, x ≥ 0 }.

We compare our bounds to those obtained by running the (again, diagonalized) perspective formulation [10], [11], a strong conic formulation (here, λ_min is the minimum λ_i):

min   λ_min Σ_j w_j + Σ_j (λ_j − λ_min) y_j² + v^T x
s.t.  Q^T x = y,
      Σ_j x_j = 1,
      x_j² ≤ w_j z_j,  0 ≤ z_j ≤ 1   for all j,        (20)
      Σ_j z_j ≤ K,  x_j ≤ z_j   for all j,
      x, w ∈ R^n_+.

For our experiments, we used Cplex 12.1 on a single-core 2.66 GHz Xeon machine with 16 GB of physical memory, which was never exceeded, even in the largest examples. For the set of tests in Table 1, we used n = 2443 and a vector of real-world eigenvalues from a finance application. Q is the product of 5000 random rotations; the resulting number of nonzeroes in Q is not particularly large.

[Table 1: Examples with few nonzeroes. Columns: K; lower bound (LB) and CPU time in seconds (sec) for each of rqmip, PRSP, and SLE.]

Here, rqmip refers to the weak formulation, PRSP to the perspective formulation, and SLE to the approach in this paper. LB is the lower bound produced by a given approach, and sec is the CPU time in seconds. We see that rqmip is quite weak, especially for smaller K. Both PRSP and SLE substantially improve on rqmip, though PRSP is significantly more expensive.

In Table 2 we consider examples with larger n and random Λ. In the table, Nonz indicates the number of nonzeroes in Q; as this number increases, the quadratic becomes less diagonally dominant.

[Table 2: Larger examples. Columns: nonzeroes in Q; lower bound (LB) and CPU time in seconds (sec) for each of rqmip, PRSP, and SLE.]

As in Table 1, SLE and PRSP provide similar improvements over rqmip (which is clearly extremely weak). Moreover, SLE proves uniformly fast: essentially free compared to rqmip. In the examples in Table 2, the smallest ten (or so) eigenvalues are approximately equal, with larger values after that. The techniques in Section 1.5.1 should extend to this situation in order to obtain an even stronger bound; we hope to present results on this in the talk. Also note that the perspective formulation quickly proves impractical.

A cutting-plane procedure that replaces the conic constraints in (20) with (outer approximating) linear inequalities is outlined in [10], [11] and tested on random problems with n ≤ 400 (which we will also test in the full paper). Such a procedure begins by solving rqmip and then iteratively adds the inequalities; or it could simply solve a formulation consisting of rqmip augmented with a set of pre-computed inequalities. In either case the running time will be slower than that for rqmip. In our initial experiments with this linearized approximation, we found that (a) it can provide a very good lower bound on the conic perspective formulation, (b) it can run significantly faster than the full conic formulation, but (c) it proves significantly slower than rqmip, and, in particular, still significantly slower than the combination of rqmip and SLE.

The discussion regarding the perspective formulation is of interest because it is precisely an example of the paradigm that we consider in this paper: a convex formulation for a nonconvex problem with a convex objective. In our tests involving large cases, and using branch-and-cut, the lower bound proved by the perspective formulation exhibited the expected stalling behavior. Table 3 concerns the K = 70 case of Table 1, where we ran the mixed-integer programming version of several formulations using Cplex 12.1 on a faster machine (on which rqmip requires 4.35 seconds, and our method 3.54 seconds, to prove the corresponding lower bound). The runs in the table used four parallel threads of execution.

[Table 3: Detailed analysis of the K = 70 case of Table 1. Columns: formulation, nodes, time, LB, UB, for QPMIP (2.75 sec/node), PRSP-MIP (56.04 sec/node), LPRSP-MIP (11.21 sec/node), and the root relaxation.]

In Table 3, QPMIP is the weak formulation and PRSP-MIP is the perspective formulation [Comment: on conic MIPs, Cplex appears to keep the lower bound fixed at 0 for a very long time]. LPRSP-MIP is the linearized perspective version. The figures in parentheses indicate CPU seconds per branch-and-cut node. "time" indicates the observed elapsed time (i.e., total CPU time will be, roughly, four times larger). Note the stalling of the lower bound in LPRSP-MIP at a value strictly smaller than the bound SLE proved with no branching. QPMIP with all Cplex cuts suppressed (not shown in the table) produced a good upper bound within 200 seconds; this underlines the point concerning fast heuristics that we made in the introduction.

Our techniques apply to the solution computed by the perspective formulation (after projecting out the auxiliary variables). This approach can only be desirable if there is a fast and tight approximation to the perspective formulation. However, Table 2 suggests that for large problems rqmip is already too expensive. It might simply be better to approximate its solution quickly, perhaps using a first-order method, and apply our techniques to the resulting solution vector. We plan to investigate these issues in the full paper.

A more significant focus of our upcoming work concerns problems in complex applications where auxiliary binary variables cannot be naturally employed. A major issue that we plan to investigate is how to incorporate our techniques within branch-and-cut, so that the bound proved at each node strictly improves on the bound at its parent node: no stalling. In this context, the single most important idea that we will investigate and report on concerns branching based on eigenvector structure, along the lines of Section 1.5.1.

References

[1] A. Ben-Tal and A. Nemirovsky, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, MPS-SIAM Series on Optimization, SIAM, Philadelphia, PA (2001).
[2] D. Bienstock, Computational study of a family of mixed-integer quadratic programming problems, Math. Programming 74 (1996).
[3] D. Bienstock and M. Zuckerberg, Subset algebra lift algorithms for 0-1 integer programming, SIAM J. Optimization 105 (2006).
[4] D. Bienstock and B. McClosky, Tightening simple mixed-integer sets with guaranteed bounds (2008). Submitted.
[5] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA (1994).
[6] W. Cook, R. Kannan and A. Schrijver, Chvátal closures for mixed integer programs, Math. Programming 47 (1990).
[7] I. De Farias, E. Johnson and G. Nemhauser, A polyhedral study of the cardinality constrained knapsack problem, Math. Programming 95 (2003).
[8] G.H. Golub, Some modified matrix eigenvalue problems, SIAM Review 15 (1973).
[9] G.H. Golub and C. van Loan, Matrix Computations, Johns Hopkins University Press (1996).
[10] A. Frangioni and C. Gentile, Perspective cuts for a class of convex 0-1 mixed integer programs, Mathematical Programming 106 (2006).
[11] O. Günlük and J. Linderoth, Perspective reformulations of mixed integer nonlinear programs with indicator variables, Optimization Tech. Report, U. of Wisconsin-Madison (2008).
[12] B. Moghaddam, Y. Weiss and S. Avidan, Generalized spectral bounds for sparse LDA, Proc. 23rd Int. Conf. on Machine Learning (2006).
[13] J.J. Moré and D.C. Sorensen, Computing a trust region step, SIAM J. Sci. Statist. Comput. 4 (1983).
[14] I. Pólik and T. Terlaky, A survey of the S-lemma, SIAM Review 49 (2007).
[15] F. Rendl and H. Wolkowicz, A semidefinite framework for trust region subproblems with applications to large scale minimization, Math. Programming 77 (1997).
[16] R.J. Stern and H. Wolkowicz, Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations, SIAM J. Optim. 5 (1995).
[17] J. Sturm and S. Zhang, On cones of nonnegative quadratic functions, Mathematics of Operations Research 28 (2003).
[18] W. Miller, S. Wright, Y. Zhang, S. Schuster and V. Hayes, Optimization methods for selecting founder individuals for captive breeding or reintroduction of endangered species, manuscript (2009).
[19] V.A. Yakubovich, S-procedure in nonlinear control theory, Vestnik Leningrad University 1 (1971).
[20] Y. Ye and S. Zhang, New results on quadratic minimization, SIAM J. Optim. 14 (2003).


Convex envelopes, cardinality constrained optimization and LASSO. An application in supervised learning: support vector machines (SVMs) ORF 523 Lecture 8 Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Any typos should be emailed to a a a@princeton.edu. 1 Outline Convexity-preserving operations Convex envelopes, cardinality

More information

Low-Complexity Relaxations and Convex Hulls of Disjunctions on the Positive Semidefinite Cone and General Regular Cones

Low-Complexity Relaxations and Convex Hulls of Disjunctions on the Positive Semidefinite Cone and General Regular Cones Low-Complexity Relaxations and Convex Hulls of Disjunctions on the Positive Semidefinite Cone and General Regular Cones Sercan Yıldız and Fatma Kılınç-Karzan Tepper School of Business, Carnegie Mellon

More information

Fundamental Domains for Integer Programs with Symmetries

Fundamental Domains for Integer Programs with Symmetries Fundamental Domains for Integer Programs with Symmetries Eric J. Friedman Cornell University, Ithaca, NY 14850, ejf27@cornell.edu, WWW home page: http://www.people.cornell.edu/pages/ejf27/ Abstract. We

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Section Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints.

Section Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints. Section Notes 8 Integer Programming II Applied Math 121 Week of April 5, 2010 Goals for the week understand IP relaxations be able to determine the relative strength of formulations understand the branch

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms

MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms Jeff Linderoth Industrial and Systems Engineering University of Wisconsin-Madison Jonas Schweiger Friedrich-Alexander-Universität

More information

Key words. Integer nonlinear programming, Cutting planes, Maximal lattice-free convex sets

Key words. Integer nonlinear programming, Cutting planes, Maximal lattice-free convex sets ON MAXIMAL S-FREE CONVEX SETS DIEGO A. MORÁN R. AND SANTANU S. DEY Abstract. Let S Z n satisfy the property that conv(s) Z n = S. Then a convex set K is called an S-free convex set if int(k) S =. A maximal

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Algebraic Statistics progress report

Algebraic Statistics progress report Algebraic Statistics progress report Joe Neeman December 11, 2008 1 A model for biochemical reaction networks We consider a model introduced by Craciun, Pantea and Rempala [2] for identifying biochemical

More information

Solving LP and MIP Models with Piecewise Linear Objective Functions

Solving LP and MIP Models with Piecewise Linear Objective Functions Solving LP and MIP Models with Piecewise Linear Obective Functions Zonghao Gu Gurobi Optimization Inc. Columbus, July 23, 2014 Overview } Introduction } Piecewise linear (PWL) function Convex and convex

More information

March 2002, December Introduction. We investigate the facial structure of the convex hull of the mixed integer knapsack set

March 2002, December Introduction. We investigate the facial structure of the convex hull of the mixed integer knapsack set ON THE FACETS OF THE MIXED INTEGER KNAPSACK POLYHEDRON ALPER ATAMTÜRK Abstract. We study the mixed integer knapsack polyhedron, that is, the convex hull of the mixed integer set defined by an arbitrary

More information

Semidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming

Semidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming Semidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming Christoph Buchheim 1 and Angelika Wiegele 2 1 Fakultät für Mathematik, Technische Universität Dortmund christoph.buchheim@tu-dortmund.de

More information

Computational testing of exact separation for mixed-integer knapsack problems

Computational testing of exact separation for mixed-integer knapsack problems Computational testing of exact separation for mixed-integer knapsack problems Pasquale Avella (joint work with Maurizio Boccia and Igor Vasiliev ) DING - Università del Sannio Russian Academy of Sciences

More information

A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING

A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING Kartik Krishnan Advanced Optimization Laboratory McMaster University Joint work with Gema Plaza Martinez and Tamás

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

Rank-one Generated Spectral Cones Defined by Two Homogeneous Linear Matrix Inequalities

Rank-one Generated Spectral Cones Defined by Two Homogeneous Linear Matrix Inequalities Rank-one Generated Spectral Cones Defined by Two Homogeneous Linear Matrix Inequalities C.J. Argue Joint work with Fatma Kılınç-Karzan October 22, 2017 INFORMS Annual Meeting Houston, Texas C. Argue, F.

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information