Potential reduction algorithms for structured combinatorial optimization problems

Report

J.P. Warners
T. Terlaky
C. Roos
B. Jansen

Technische Universiteit Delft / Delft University of Technology
Faculteit der Technische Wiskunde en Informatica / Faculty of Technical Mathematics and Informatics
J.P. Warners [1], T. Terlaky, C. Roos, B. Jansen [2]

Department of Statistics, Probability and Operations Research
Faculty of Technical Mathematics and Informatics
Delft University of Technology
P.O. Box 5031, 2600 GA Delft, The Netherlands.
Tel.: (015) , FAX: (015) , e-mail: j.p.warners@twi.tudelft.nl

This research was partially supported by the EUCLID program, SEPA 6 (Artificial Intelligence), RTP 6.4 (Combinatorial Algorithms for Military Applications).

[1] This author was partially supported by the Dutch Organization for Scientific Research (NWO) under grant 612-33- .
[2] This author was partially supported by the Dutch Organization for Scientific Research (NWO) under grant 611-304-028.

ISSN

Copyright © 1995 by the Faculty of Technical Mathematics and Informatics, Delft, The Netherlands. No part of this Journal may be reproduced in any form, by print, photoprint, microfilm, or any other means without permission from the Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands.

Copies of these reports may be obtained from the bureau of the Faculty of Technical Mathematics and Informatics, Julianalaan 132, 2628 BL Delft, phone . A selection of these reports is available in PostScript form at the Faculty's anonymous ftp-site. They are located in the directory /pub/publications/tech-reports at ftp.twi.tudelft.nl
Abstract

Recently Karmarkar proposed a potential reduction algorithm for binary feasibility problems. In this paper we point out a practical drawback of his potential function, and we propose a modified potential function that is computationally more attractive. As the main result of the paper, we consider a special class of binary feasibility problems, and show how problems of this class can be reformulated as nonconvex quadratic optimization problems. The reformulation is very compact, and a further interesting property is that multiple solutions (instead of just one) may be found by optimizing it. We introduce a potential function to optimize the new model. Finally, we report on computational results on several instances of the graph coloring problem, comparing the three potential functions.

Key words: interior point methods, potential reduction methods, binary programming, combinatorial optimization, graph coloring.
1 Introduction

In 1984 Karmarkar showed that linear programming problems can be solved by an interior point approach in polynomial time [6]. An interior point method traverses the interior of the feasible region in search of an optimum of the linear program, rather than moving along the boundary of the feasible region as the simplex method does. Karmarkar uses a logarithmic potential function to measure the progress of the algorithm; solving a linear program is equivalent to sequentially minimizing this convex potential function under affine constraints. Karmarkar's work initiated extensive research on the development of interior point methods for linear and, more generally, convex programming (see, e.g., the bibliography [9]). In practice, it has become clear that interior point methods compete favorably with the simplex method, especially for large scale problems.

More recently, research has been done by Karmarkar et al. [7, 5, 8] to extend the potential reduction idea to solve difficult combinatorial optimization problems. Karmarkar [7] describes an approximate interior point algorithm to solve {0,1} feasibility problems. He claims that efficient algorithms for many difficult combinatorial problems can be based on this approach. Results are reported of the application of the algorithm to two combinatorial problems: the satisfiability problem [5] (see also Shi et al. [10]) and the set covering problem [8]. The obtained results are encouraging. However, the potential function that Karmarkar et al. propose [5, 8] is not suitable for solving large scale combinatorial problems, since it involves solving linear systems with completely dense matrices. Therefore, when using this potential function there is no way to exploit sparsity properties of the specific combinatorial optimization problem under consideration.

In this paper we propose two improvements on Karmarkar's original algorithm. First, we propose an alternative potential function that yields sparse linear systems.
This potential function is valid for any optimization problem to which Karmarkar's original algorithm is applicable. Second, as the main result of the paper, we consider a special class of binary feasibility problems and reformulate problems of this class as nonconvex quadratic optimization problems. The quadratic reformulation is much more compact than the linear formulation, since all inequality constraints are incorporated in the objective function. A further property of the quadratic model is that, instead of just one, multiple feasible solutions may be found by solving the model. We introduce a potential function to optimize the quadratic model. We solve several instances of the graph coloring problem, using all three potential functions. It will become clear that the modified Karmarkar potential function is much more efficient than the original. Furthermore, the new quadratic formulation and its potential function prove to be a major improvement on both other potential functions.

This paper is organized as follows. Karmarkar's original method is briefly summarized in Section 2. In Section 3 we give the improved Karmarkar type potential function. We consider the special class of combinatorial problems in Section 4, and introduce the potential function to optimize the reformulation in Section 5. In Section 6 we report on computational results on the graph coloring problem. Finally, concluding remarks are made in Section 7.

2 Karmarkar's algorithm for binary programming

Karmarkar et al. [7, 5, 8] consider the following binary feasibility problem:

(IP) find x ∈ {0,1}^m such that Ax ≤ b,

where A ∈ ℝ^{t×m} and b ∈ ℝ^t.
2.1 The basic algorithm

The interior point algorithm proposed by Karmarkar et al. generates a sequence of iterates in the interior of the feasible region. It consists of the following steps. Transform the {0,1} feasibility problem to a {−1,1} feasibility problem, using the substitution x_i := 2x_i − 1, resulting in the problem

(IP) find x ∈ {−1,1}^m such that Ãx ≤ b̃.

Relax the integrality constraints x ∈ {−1,1}^m to linear constraints −1 ≤ x_i ≤ 1, i = 1,…,m. Introduce a nonconvex potential function whose minimizers are feasible solutions of (IP). Starting from a feasible interior point, minimize the potential function in the following way:

1. Minimize a quadratic approximation of the potential function over an inscribed ellipsoid in the feasible region around the current feasible interior point, to obtain a descent direction for the potential function.
2. Use a line search to find the minimum of the potential function along the descent direction (this is an extension of the algorithm proposed by Shi et al. [10]). Thus the new iterate is obtained.
3. Round the new iterate to a {−1,1} integer solution. If this solution is feasible, the problem is solved. If it is infeasible, the algorithm proceeds.
4. If the potential value of the new iterate is lower than the previous potential value, go to step 1; otherwise, a (non-integral) local minimum has been found. Modify the potential function in some way to avoid running into this local minimum again, and restart the process.

In the next subsections we explain the main elements of the algorithm, with the purpose of pointing out a practical drawback of the algorithm proposed in [5, 8].

2.2 Karmarkar's potential function

Let us consider (IP). We relax the integrality constraints and incorporate them in the set of constraints, denoting this by Ax ≤ b, where A is an n × m matrix with full rank m (note that n = t + 2m). Karmarkar et al. introduce the following quadratic optimization problem:

(P) max Σ_{i=1}^m x_i² = x^T x  s.t. Ax ≤ b.

As problem (P) asks for the maximization of a convex function, it is NP-hard. Further we have that x^T x ≤ m, with equality if and only if x is integral; hence the optima of this quadratic problem automatically provide integer solutions of (IP). To solve (P) Karmarkar et al. [5, 8] propose the potential function

φ(x) = log(m − x^T x) − (1/n) Σ_{i=1}^n log s_i,   (1)

where s = b − Ax is the slack vector.
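To make the definition concrete, the potential (1) can be evaluated directly from A, b and the current point. A minimal pure-Python sketch; the helper name and the toy box constraints −1 ≤ x_i ≤ 1 are ours, not from the report:

```python
import math

def karmarkar_potential(A, b, x):
    """Evaluate phi(x) = log(m - x^T x) - (1/n) * sum_i log s_i  (eq. (1)),
    where s = b - A x is the slack vector. A is a dense n x m list of lists;
    x must be strictly interior (s > 0) and satisfy x^T x < m."""
    n, m = len(A), len(x)
    s = [b[i] - sum(A[i][j] * x[j] for j in range(m)) for i in range(n)]
    if any(si <= 0 for si in s) or sum(xj * xj for xj in x) >= m:
        raise ValueError("x is not strictly interior")
    return math.log(m - sum(xj * xj for xj in x)) - sum(math.log(si) for si in s) / n

# Relaxed cube -1 <= x_i <= 1 for m = 2: rows encode x_i <= 1 and -x_i <= 1.
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1, 1, 1, 1]
print(karmarkar_potential(A, b, [0.0, 0.0]))  # log(2): all slacks are 1 here
```

As an iterate approaches a {−1,1} point, the first term tends to −∞, which is exactly what drives the minimization toward integer solutions.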
2.3 Minimizing the potential function

Instead of (P) consider the nonconvex optimization problem:

(P_φ) min φ(x)  s.t. Ax ≤ b.

To solve (P_φ) the algorithm starts from a given initial interior point x⁰, i.e. Ax⁰ < b, and generates a sequence of points {x^k} in the interior of the feasible region. Denote the k-th iterate by x^k. Let S = diag(s₁^k, …, s_n^k) and f₀ = m − x^{kT} x^k. Then the gradient h_φ and the Hessian H_φ of φ at x^k are:

h_φ = ∇φ(x^k) = −(2/f₀) x^k + (1/n) A^T S^{-1} e,   (2)
H_φ = ∇²φ(x^k) = −(2/f₀) I − (4/f₀²) x^k x^{kT} + (1/n) A^T S^{-2} A,   (3)

where e is an all-one vector of appropriate length. The quadratic approximation of φ around x^k is given by

Q(x) = ½ (x − x^k)^T H_φ (x − x^k) + h_φ^T (x − x^k) + φ(x^k).   (4)

In general, minimizing (4) subject to Ax ≤ b is NP-hard. However, when the polytope Ax ≤ b is substituted by an inscribed ellipsoid, the so-called Dikin ellipsoid [2], we obtain a problem which can be solved in polynomial time (see Ye [13]). The Dikin ellipsoid around x^k is given by

E(r) = { x ∈ ℝ^m | (x − x^k)^T A^T S^{-2} A (x − x^k) ≤ r² },

where 0 < r < 1. Substituting the polytope by the appropriate Dikin ellipsoid and letting Δx = x − x^k, we find the following optimization problem:

(P_E) min ½ (Δx)^T H_φ (Δx) + h_φ^T (Δx)  s.t. (Δx)^T A^T S^{-2} A (Δx) ≤ r².

This problem has been studied by, among others, Sorensen [11] and Flippo and Jansen [4]. The optimal solution Δx* of (P_E) yields a descent direction of Q(x) from x^k. We formulate the optimality conditions of (P_E) (see [11, 8, 4]). The vector Δx* is an optimal solution of (P_E) if and only if there exists a λ ≥ 0 such that:

(H_φ + λ A^T S^{-2} A) Δx* = −h_φ,   (5)
λ ( (Δx*)^T A^T S^{-2} A (Δx*) − r² ) = 0,   (6)
H_φ + λ A^T S^{-2} A is positive semidefinite.   (7)

Without going into further detail, we observe that to find a solution Δx* that both satisfies the optimality conditions and lies on an appropriate ellipsoid, the linear system (5) needs to be solved at least once.
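For a fixed multiplier λ, (5) is an ordinary symmetric linear system. The sketch below solves it densely in pure Python on invented 2×2 data; in practice a sparse factorization is used and λ is adjusted until Δx lands on the ellipsoid (all names and data here are ours):

```python
def solve(M, rhs):
    """Dense Gaussian elimination with partial pivoting; returns y with M y = rhs."""
    n = len(M)
    a = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        y[i] = (a[i][n] - sum(a[i][j] * y[j] for j in range(i + 1, n))) / a[i][i]
    return y

def newton_direction(H, M_E, h, lam):
    """Solve (H + lam * M_E) dx = -h, i.e. system (5) for a fixed multiplier lam.
    H is the Hessian, M_E = A^T S^-2 A the Dikin ellipsoid matrix, h the gradient."""
    n = len(h)
    K = [[H[i][j] + lam * M_E[i][j] for j in range(n)] for i in range(n)]
    return solve(K, [-hi for hi in h])

# Toy 2x2 data (illustrative only): with lam = 2 the shifted matrix is 2I,
# so the direction is simply -h / 2.
dx = newton_direction([[-2.0, 0.0], [0.0, -2.0]],
                      [[2.0, 0.0], [0.0, 2.0]],
                      [1.0, 0.5], 2.0)
print(dx)  # [-0.5, -0.25]
```

Condition (7) explains why a sufficiently large λ is needed: the indefinite Hessian must be shifted into the positive semidefinite cone before the system has the right solution structure.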
If the matrix on the left-hand side is sparse, solving this system by sparse matrix techniques is considerably more efficient than when the matrix is dense. The density of system (5) is determined by the Hessian of the potential function used and/or by the matrices A and A^T A. We note that the Hessian H_φ of Karmarkar's potential function is completely dense due to its second term. Therefore, solving large optimization problems requires unacceptable computational effort. This motivates the need for a potential function that yields a sparse Hessian.
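The contrast is easy to check numerically: the rank-one term x^k x^{kT} in (3) is fully dense at a generic iterate, while A^T A only couples variables that share a constraint. A small pure-Python illustration on an invented tridiagonal constraint pattern:

```python
def nnz_pattern(M):
    """Count nonzero entries of a dense matrix given as a list of lists."""
    return sum(1 for row in M for v in row if v != 0)

m = 6
x = [1.0] * m                                   # a generic (dense) iterate
# Sparse binary constraint pattern: row l couples variables l and l+1 only.
A = [[1 if j in (l, l + 1) else 0 for j in range(m)] for l in range(m - 1)]

# Dense term of (3): the outer product x x^T has no zero entries.
outer = [[xi * xj for xj in x] for xi in x]
# Sparse term: A^T A couples only variables sharing a constraint (tridiagonal here).
AtA = [[sum(A[l][i] * A[l][j] for l in range(m - 1)) for j in range(m)]
       for i in range(m)]

print(nnz_pattern(outer), nnz_pattern(AtA))  # 36 vs 16
```

Even in this tiny example the barrier term keeps the tridiagonal pattern of A^T A, while the outer product fills all m² entries; for large m the gap is what makes sparse Cholesky factorization pay off.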
3 An improved potential function

Instead of (1) we propose to use the following potential function:

ψ(x) = m − x^T x − μ Σ_{i=1}^n log s_i,   (8)

where μ > 0 is some parameter. The gradient and Hessian of (8) are:

h_ψ = −2x + μ A^T S^{-1} e,   (9)
H_ψ = −2I + μ A^T S^{-2} A.   (10)

The density of H_ψ is determined by the density of A^T A. So if A and A^T A are sparse matrices, the left-hand side of (5) will be sparse, and the linear system can be solved efficiently [3]. Note that ψ is built from two parts. The first term, m − x^T x, represents the objective function of (P). The second term is the logarithmic barrier function. Karmarkar et al. [5, 8] use a fixed weight μ = 1/n for the barrier term. An important difference between (1) and (8) is that the first term of (1) approaches −∞ when x^k approaches a feasible {−1,1} solution, whereas the first term of (8) is equal to zero for any {−1,1} solution. Therefore, to ensure that the value of (8) approaches zero when the iterates x^k approach an integer solution, the weight μ must successively be decreased during the minimization process. To make this process more flexible, we introduce the weighted logarithmic barrier function:

χ(x; w) = m − x^T x − Σ_{i=1}^n w_i log s_i,   (11)

where w ∈ ℝ^n is a positive weight vector. For the sake of completeness we give expressions for the gradient and Hessian of χ, using the notation W = diag(w₁, …, w_n):

h_χ = −2x + A^T S^{-1} w,   (12)
H_χ = −2I + A^T S^{-1} W S^{-1} A.   (13)

The potential function χ allows us to change the weights of different constraints independently of each other. This may be helpful, for example, after finding a local minimum, to avoid running into the same local minimum again.

4 A special class of binary feasibility problems

The idea behind the previous potential functions is to introduce an objective function that forces the variables to integer values.
In this section we consider a special class of binary feasibility problems, to which a large number of combinatorial problems belong, such as the graph coloring and the maximum independent set problem. It will be shown how problems of this class can be reformulated as nonconvex quadratic programming problems with known optimal value, by making use of the special property that for any feasible solution the slacks are also binary. Instead of explicitly forcing the variables to binary values, we do this implicitly by using an objective function that forces the slacks to binary values. This reformulation yields a reduction of the number of constraints which can be quite significant, as all inequality constraints are incorporated in the objective function. We consider the binary feasibility problem:

(BP) find x ∈ {0,1}^m such that Ax ≤ e, Bx = d.

Here A ∈ ℝ^{n×m}, B ∈ ℝ^{p×m}, e = (1,…,1)^T ∈ ℝ^n, d ∈ ℝ^p. We make the following assumptions:
Assumption 1 All elements of A are binary.

Assumption 2 All elements of B are binary.

Assumption 3 Each column of B contains at most one nonzero element.

Note that Assumption 2 implies that d is integral (provided (BP) is feasible). Assumption 3 implies that each variable occurs in at most one equality constraint. So the equality constraints are of the following form:

Σ_{i∈E_k} x_i = d_k,  k = 1,…,p,

where without loss of generality we may assume that ∪_{k=1}^p E_k = {1,…,m}, and the sets E_k, k = 1,…,p, are disjoint: E_j ∩ E_k = ∅, ∀ j, k = 1,…,p, j ≠ k. If we are given a nonnegative integer matrix A which contains elements larger than one, the columns in which these elements occur can be removed from the problem, as the corresponding variable must be zero. Also, if a variable does not occur in an equality constraint, we may remove it from the problem. We introduce the (symmetric) matrix Q:

Q = sgn( A^T A − diag(A^T A) ),   (14)

where diag(A^T A) denotes the diagonal matrix containing the diagonal entries of the matrix A^T A, and the function sgn, applied entrywise, is 1, 0 or −1 if the corresponding entry is positive, zero or negative, respectively. Due to Assumption 1, the matrix Q is binary and it has the same nonzero structure as A^T A, except for the diagonal: all diagonal entries are set to zero. This implies that Q is indefinite. Now let us consider the following optimization problem:

(QP) min x^T Q x  s.t. Bx = d, x_i ∈ {0,1}, i = 1,…,m.

By replacing the constraints x_i ∈ {0,1} by 0 ≤ x_i ≤ 1 we obtain the relaxation (QR) of (QP). Since Q is indefinite, the programs (QP) and (QR) are nonconvex. Furthermore, the construction of Q implies the following result.

Proposition 1 x^T Q x ≥ 0 for any feasible solution x of (QP) and (QR).

Now we are ready to prove the following important relation between (BP) and (QP).

Lemma 1 Under Assumption 1, the vector x is a feasible solution of (BP) if and only if x is an optimal solution of (QP) with x^T Q x = 0.
Proof: We observe that a (binary) solution x of (BP) is feasible if and only if the slack vector s = e − Ax is (also) binary and the equality constraints are satisfied. So we have that x is a feasible solution if and only if Bx = d and

0 = −s^T (e − s) = (Ax − e)^T Ax = x^T A^T A x − e^T A x.   (15)
Since A is binary, for every column a_j, j = 1,…,m, of A we have e^T a_j = a_j^T a_j. Note that the right-hand side of this expression is the j-th diagonal element of the matrix A^T A. Using that x is binary, we find that

e^T A x = e^T diag(A^T A) x = x^T diag(A^T A) x.

Substituting this in (15), we have that x is a feasible solution of (BP) if and only if Bx = d and

0 = x^T A^T A x − e^T A x = x^T ( A^T A − diag(A^T A) ) x = x^T sgn( A^T A − diag(A^T A) ) x = x^T Q x,

where we use that A^T A and Q are nonnegative matrices. □

The next lemma gives a relation between (QP) and (QR).

Lemma 2 Consider (QP) and its relaxation (QR). The following statements hold:
1. Assume the optimal value of (QP) is zero. If x is an optimal solution of (QP), then x is also an optimal solution of (QR).
2. Assume that Assumptions 2 and 3 hold. If x is an optimal solution of (QR) with x^T Q x = 0, then x is either a (binary) solution of (QP), or we can trivially construct multiple optimal solutions of (QP) from x.

Proof: The first statement is trivial, since the objectives of both problems are nonnegative (Proposition 1); hence a feasible solution with zero value is optimal. Now we consider the second statement. Let x be an optimal solution of (QR) such that x^T Q x = 0. If x is binary, x is a solution of (QP) and the lemma is proven. Suppose x is not binary. Since we have

0 = x^T Q x = 2 Σ_{i=1}^m Σ_{j=i+1}^m Σ_{l=1}^n a_{li} a_{lj} x_i x_j,

it follows that

a_{li} a_{lj} x_i x_j = 0,  ∀ l = 1,…,n, ∀ i, j = 1,…,m, i ≠ j.

We conclude that if a_{li}, a_{lj} > 0, then x_i x_j = 0. So if we let x̄ = sgn(x), i.e. x̄ is a binary vector, then we still have x̄^T Q x̄ = 0. For x̄ we may have that Bx̄ ≥ d. However, as B contains only binary values (Assumption 2) and each variable occurs in at most one equality constraint (Assumption 3), we can set variables x̄_i from one to zero until the left-hand sides of the equality constraints are decreased to the desired value d, thus constructing a feasible solution x̃ of (QP).
Thus it is possible to construct multiple solutions, since any binary vector x̃ that satisfies x̃ ≤ x̄ and Bx̃ = d is a feasible solution of (QP). □

Combining Lemmas 1 and 2 we can state the following theorem.

Theorem 1 If Assumptions 1, 2 and 3 hold, then:
1. If x is a feasible solution of (BP), then x is an optimal solution of (QR).
2. If x is an optimal solution of (QR) with x^T Q x = 0, then x is either a (binary) solution of (BP), or we can trivially construct multiple solutions of (BP) from x.
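The constructions above are mechanical enough to sketch in a few lines of pure Python: build Q via (14), check that x^T Q x = 0, binarize the support of x, and enumerate the binary vectors x̃ below it that satisfy Bx̃ = d. The toy instance and helper names are ours:

```python
from itertools import product
from math import comb

def build_Q(A):
    """Q = sgn(A^T A - diag(A^T A)), eq. (14), for a binary matrix A."""
    m = len(A[0])
    AtA = [[sum(row[i] * row[j] for row in A) for j in range(m)] for i in range(m)]
    return [[1 if i != j and AtA[i][j] > 0 else 0 for j in range(m)] for i in range(m)]

def solutions_from(x, A, B, d):
    """Given x feasible for (QR) with x^T Q x = 0, enumerate the binary vectors
    xt <= sgn(x) with B xt = d; by Theorem 1 each one solves (BP)."""
    Q = build_Q(A)
    assert sum(Q[i][j] * x[i] * x[j] for i in range(len(x)) for j in range(len(x))) == 0
    support = [1 if xi > 0 else 0 for xi in x]
    sols = []
    for xt in product((0, 1), repeat=len(x)):
        if all(t <= s for t, s in zip(xt, support)) and \
           all(sum(b * t for b, t in zip(row, xt)) == dk for row, dk in zip(B, d)):
            sols.append(list(xt))
    return sols

# Toy instance (ours): inequality x1 + x2 <= 1, equality x1 + x2 + x3 = 1.
A, B, d = [[1, 1, 0]], [[1, 1, 1]], [1]
sols = solutions_from([0.0, 0.5, 0.5], A, B, d)
print(sols)        # [[0, 0, 1], [0, 1, 0]]
print(comb(2, 1))  # two positive variables, d_1 = 1: the count matches (16)
```

Brute-force enumeration is only for illustration; the point is that the fractional point [0, 0.5, 0.5] certifies two distinct binary solutions at once.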
Observe that, given a (partly) fractional solution x of (QR) with x^T Q x = 0, the number of solutions of (BP) that we can construct from x can be computed explicitly. Let f_k be the number of positive variables in the set E_k, k = 1,…,p. Then the number of solutions is given by the product of binomial coefficients

∏_{k=1}^p C(f_k, d_k).   (16)

In the case that (BP) is infeasible, the global minima of (QP) and (QR) will not be equal to zero. Also, we are not certain that a global minimum of (QR) corresponds to one or more binary vectors. However, if the following assumption is satisfied, all minima (local and global) of (QR) yield one or more binary solutions.

Assumption 4 For each equality constraint k ∈ {1,…,p} at least one of the following statements holds:
1. d_k = 1.
2. Two variables that occur simultaneously in equality constraint k do not occur simultaneously in any inequality constraint.

We can state the second part of Assumption 4 more formally. Given k ∈ {1,…,p} with d_k > 1, then for all i, j ∈ E_k, i ≠ j, we have that a_{li} a_{lj} = 0, for all l = 1,…,n.

Theorem 2 Let Assumptions 1, 2, 3 and 4 hold. Given a feasible non-integral solution x of (QR), we can construct a feasible integral solution x̃ of (QP) such that x̃^T Q x̃ ≤ x^T Q x.

Proof: Without loss of generality we assume that x is a binary solution, except for at least one of the variables x_i, i ∈ E_k, for a given k ∈ {1,…,p}. (Note that this implies that at least two variables are fractional, since the vector d is integral.)
First we rewrite the product x^T Q x in a form that is more convenient for our purpose, using the symmetry of Q:

x^T Q x = Σ_{i∈E_k} Σ_{j∈E_k} q_{ij} x_i x_j + 2 Σ_{i∈E_k} Σ_{j∉E_k} q_{ij} x_i x_j + Σ_{i∉E_k} Σ_{j∉E_k} q_{ij} x_i x_j.   (17)

Denote the first term of (17) by K₁ ≥ 0 and the third by K₂ ≥ 0, and let the cost coefficients be given by c_i = 2 Σ_{j∉E_k} q_{ij} x_j ≥ 0; then we can rewrite (17) as:

x^T Q x = K₁ + Σ_{i∈E_k} c_i x_i + K₂.   (18)

Note that the value of K₁ depends on the values of x_i, i ∈ E_k, but that K₂ and the c_i are independent of them. Now let us assume that the first part of Assumption 4 holds. Let

i* = arg min_{i∈E_k} c_i,

and set

x̃_{i*} := 1;  x̃_i := 0, ∀ i ∈ E_k \ {i*};  x̃_i := x_i, ∀ i ∉ E_k.

Using (18), K₁ ≥ 0, x_i ≥ 0 ∀ i, and Σ_{i∈E_k} x_i = d_k = 1, it holds that

x^T Q x − K₂ ≥ Σ_{i∈E_k} c_i x_i ≥ min_{i∈E_k} c_i Σ_{i∈E_k} x_i = c_{i*} d_k = c_{i*}.   (19)
We observe that Σ_{i∈E_k} c_i x̃_i = c_{i*} x̃_{i*} = c_{i*} and Σ_{j∈E_k} q_{i*j} x̃_{i*} x̃_j = q_{i*i*} = 0. Substituting this in (19) and using (18), we find that

x^T Q x − K₂ ≥ c_{i*} = Σ_{i∈E_k} c_i x̃_i = x̃^T Q x̃ − K₂.

Thus the variables x̃_i, i ∈ E_k, are set to binary values without increasing the objective value.

Under the second part of Assumption 4 the analysis becomes a little more complex. In this case we have that K₁ = 0, since

q_{ij} = sgn( Σ_{l=1}^n a_{li} a_{lj} ) = 0,  ∀ i, j ∈ E_k, i ≠ j.

Now we can follow a procedure similar to the one described above. We pick the d_k lowest cost coefficients c_i and set the corresponding variables x̃_i to one, while setting the rest to zero. Let I_k = {i₁, …, i_{d_k}} be the set of indices corresponding to the d_k lowest cost coefficients, so that

ϑ = max_{i∈I_k} c_i ≤ c_j,  ∀ j ∈ E_k \ I_k.

Now let

x̃_i := 1, ∀ i ∈ I_k;  x̃_i := 0, ∀ i ∈ E_k \ I_k;  x̃_i := x_i, ∀ i ∉ E_k.

Using (18) and K₁ = 0, it follows that

x^T Q x − K₂ = Σ_{i∈E_k} c_i x_i = Σ_{i∈I_k} c_i x_i + Σ_{i∈E_k\I_k} c_i x_i = Σ_{i∈I_k} c_i x̃_i + Σ_{i∈I_k} c_i (x_i − x̃_i) + Σ_{i∈E_k\I_k} c_i x_i.   (20)

From (i) x_i − x̃_i ≤ 0, ∀ i ∈ I_k; (ii) x_i ≥ 0, ∀ i ∈ E_k; and (iii) the definition of ϑ, we find that

Σ_{i∈I_k} c_i (x_i − x̃_i) ≥ ϑ Σ_{i∈I_k} (x_i − x̃_i)  and  Σ_{i∈E_k\I_k} c_i x_i ≥ ϑ Σ_{i∈E_k\I_k} x_i.   (21)

Furthermore, by the construction of x̃ it is obvious that

Σ_{i∈I_k} c_i x̃_i = Σ_{i∈E_k} c_i x̃_i  and  Σ_{i∈I_k} x̃_i = Σ_{i∈E_k} x_i = d_k.   (22)

Substituting (21) and (22) in (20), we obtain

x^T Q x − K₂ ≥ Σ_{i∈E_k} c_i x̃_i + ϑ [ Σ_{i∈I_k} (x_i − x̃_i) + Σ_{i∈E_k\I_k} x_i ] = Σ_{i∈E_k} c_i x̃_i + ϑ (d_k − d_k) = Σ_{i∈E_k} c_i x̃_i = x̃^T Q x̃ − K₂.

Again, we have set the variables x̃_i, i ∈ E_k, to binary values without increasing the objective value. So given an arbitrary fractional solution x, we can repeat this procedure for each k = 1,…,p, until there are no fractional variables left, thereby constructing a binary solution x̃ with x̃^T Q x̃ ≤ x^T Q x. □

Due to Theorem 2, the optimal values of (QP) and (QR) are equal, also when (BP) is infeasible.
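For a single block E_k, the proof of Theorem 2 is an explicit rounding rule: keep the d_k variables of E_k with the smallest cost coefficients c_i. A pure-Python sketch of one such block update (the data and names are invented for illustration):

```python
def round_block(x, E_k, d_k, c):
    """Set the variables of block E_k to binary values as in the proof of
    Theorem 2: the d_k indices of E_k with the smallest costs c_i get value 1,
    the rest of the block gets 0; variables outside E_k are untouched."""
    xt = list(x)
    keep = sorted(E_k, key=lambda i: c[i])[:d_k]
    for i in E_k:
        xt[i] = 1.0 if i in keep else 0.0
    return xt

# Block E_1 = {0, 1, 2} with d_1 = 2; the block sums to d_1 = 2 as required.
x = [0.5, 0.9, 0.6, 1.0]   # fractional inside the block, binary outside
c = [4.0, 1.0, 2.0, 0.0]   # invented costs c_i = 2 * sum_{j not in E_1} q_ij x_j
print(round_block(x, [0, 1, 2], 2, c))  # [0.0, 1.0, 1.0, 1.0]
```

Repeating this per block k = 1,…,p yields the binary x̃ of the theorem; by (20)-(22) the objective never increases along the way.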
Furthermore, all minima (local and global) of (QR) have integral values and yield one or more binary solutions. Using formula (16), we can again explicitly compute the number of binary solutions with the same objective value that can be constructed from a (partly) fractional solution which yields a minimum of (QR).

From Theorem 2 we can give the following interpretation to the (integral) objective value of a local minimizer x of (QR). Let x^T Q x = 2δ. Any solution x̃ of (QP) that we can construct from x has the same objective value 2δ. For such a solution x̃ there are δ pairs i, j for which there exists an l ∈ {1,…,n} such that a_{li} = a_{lj} = x̃_i = x̃_j = 1. This implies that the number of constraint violations of the solution x̃ in (BP) is δ.

Finally, we observe that Theorem 2 holds for any nonnegative matrix Q̃ with the same nonzero structure as Q. This allows us to weigh the constraints: if the constraint concerning the variables i₁ and j₁ is considered to be more important than the constraint concerning a pair of variables i₂ and j₂, we can set q̃_{i₁j₁} to a large value L ≫ 1, while setting q̃_{i₂j₂} to one.
5 A potential function to solve the new model

To solve problem (QR) we can use the weighted logarithmic barrier function

ω(x; w) = ½ x^T Q x − Σ_{i=1}^{2m} w_i log s_i,   (23)

where w ∈ ℝ^{2m} is a positive weight vector, and the values s_i are the slacks of the constraints 0 ≤ x ≤ e. The gradient and Hessian of ω are:

h_ω = Qx − (I −I) S^{-1} w,   (24)
H_ω = Q + (I −I) S^{-1} W S^{-1} (I −I)^T.   (25)

The nonzero structures (and so the densities) of the Hessian of ω and the Hessian of (11) are identical. This can be seen immediately, as the nonzero structure of the first is determined by Q, and that of the second by A^T A. Note that the equality constraints Bx = d do not occur in (23). As in other potential reduction methods, there are several ways to deal with the equality constraints:
- The equality constraints may be replaced by inequality constraints Bx ≥ d, and subsequently be added to the polytope 0 ≤ x ≤ e.
- A projection onto the null space of B may be used.

6 Computational results

The three different potential functions and corresponding models have been used to solve a number of instances of the Graph Coloring Problem (GCP). The feasibility version of the GCP can be formulated as follows: given an undirected graph G = (V, E), with V the set of vertices and E the set of edges, and a set of colors C, find a coloring of the vertices of the graph such that any two connected vertices have different colors. We can model the GCP as follows. Define the binary decision variables

x_vc = 1 if color c is assigned to vertex v, and 0 otherwise,  ∀ v ∈ V, ∀ c ∈ C.

We construct a set of linear constraints modeling the GCP and show that it satisfies the assumptions made in Section 4. First, we have to assign exactly one color to each vertex:

Σ_{c∈C} x_vc = 1,  ∀ v ∈ V.   (26)

Second, two connected vertices may not get the same color:

x_uc + x_vc ≤ 1,  ∀ (u,v) ∈ E, ∀ c ∈ C.   (27)

Now we can write the GCP in the form (BP), where A and B are binary matrices given by (27) and (26), respectively.
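The model (26)-(27) translates directly into the matrices A and B of (BP). The sketch below builds them for an arbitrary graph and checks that Q, computed via (14), has exactly 2|E||C| nonzero entries (pure Python; the helper names and the triangle instance are ours):

```python
def gcp_matrices(n_vertices, edges, n_colors):
    """Build the (BP) matrices for graph coloring; variable index is v*n_colors + c.
    A has one row per (edge, color) pair encoding x_uc + x_vc <= 1   (27);
    B has one row per vertex encoding sum_c x_vc = 1                 (26)."""
    m = n_vertices * n_colors
    A = []
    for (u, v) in edges:
        for c in range(n_colors):
            row = [0] * m
            row[u * n_colors + c] = row[v * n_colors + c] = 1
            A.append(row)
    B = []
    for v in range(n_vertices):
        row = [0] * m
        for c in range(n_colors):
            row[v * n_colors + c] = 1
        B.append(row)
    return A, B

# A triangle with 3 colors: |V| = 3, |E| = 3, |C| = 3, so m = 9.
A, B = gcp_matrices(3, [(0, 1), (0, 2), (1, 2)], 3)
m = 9
AtA = [[sum(row[i] * row[j] for row in A) for j in range(m)] for i in range(m)]
Q = [[1 if i != j and AtA[i][j] > 0 else 0 for j in range(m)] for i in range(m)]
nnz = sum(v for row in Q for v in row)
print(len(A), len(B), nnz)  # |E||C| = 9 rows of A, |V| = 3 rows of B, 2|E||C| = 18
```

The nonzero count 2|E||C| is what makes Q sparse: variables of the same vertex never share an inequality row, so only edge-and-same-color pairs couple.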
Each variable occurs in exactly one equality constraint, and two variables occurring in the same equality constraint do not occur simultaneously in an inequality constraint. So Theorems 1 and 2 apply. The density of the matrix Q can be computed with the following formula:

density(Q) = 2|E||C| / (|V||C|)².

This shows that Q (hence A^T A) is sparse, even if G is a dense graph. Therefore, computation times are reduced considerably when using a potential function that benefits from this sparsity.

Example. We illustrate the construction of the matrices A, B and Q with a small example. Let G be the graph shown in Figure 1, a graph on five vertices with edge set E = {(1,2), (1,5), (2,3), (2,4), (2,5), (3,4), (3,5)}. [Figure 1: Example graph; the drawing is not reproduced in this copy.] The number of variables required to model this GCP is m = |V||C|. The adjacency matrix M and the matrices A and B are given by:

M =
( 0 1 0 0 1 )
( 1 0 1 1 1 )
( 0 1 0 1 1 )
( 0 1 1 0 0 )
( 1 1 1 0 0 )

A =
( I I 0 0 0 )
( I 0 0 0 I )
( 0 I I 0 0 )
( 0 I 0 I 0 )
( 0 I 0 0 I )
( 0 0 I I 0 )
( 0 0 I 0 I )

B =
( e^T 0 0 0 0 )
( 0 e^T 0 0 0 )
( 0 0 e^T 0 0 )
( 0 0 0 e^T 0 )
( 0 0 0 0 e^T )

where I denotes the |C| × |C| identity matrix and e denotes the all-one |C|-vector. The number of rows of the matrix A is n = |E||C| and the number of nonzeros is 2n. We can now readily compute the matrix Q:

Q =
( 0 I 0 0 I )
( I 0 I I I )
( 0 I 0 I I )
( 0 I I 0 0 )
( I I I 0 0 )

Note the similarity of the adjacency matrix M and Q: the element (uc₁, vc₂) of the matrix Q is equal to one if (and only if) (u,v) ∈ E (so m_{uv} = 1) and c₁ = c₂. □

In the following we discuss a number of implementational issues.

Starting points. We deal with the equality constraints by relaxing them to inequality constraints. Therefore our initial point needs to satisfy

Σ_{c∈C} x⁰_vc > 1,  ∀ v ∈ V.
We could simply take x⁰_vc = 1/(|C| − 1), ∀ v ∈ V, ∀ c ∈ C, as our starting point. When using potential function (1), experimentation shows that this starting point is close to a maximum of the potential function, as the gradient is close to zero. Therefore the initial steps that are made are very short. However, if we slightly perturb the starting point, no such problem is encountered. We perturb the starting point (in the {0,1} formulation) by multiplying each component by a factor chosen from a normal distribution with mean μ = 1 and variance σ² = 1/(|C| − 1). In the results reported in this section, the same perturbed starting point was used for all three potential functions.

Weights. The weights of the barrier functions are chosen as follows. When using potential function φ, the (constant) weight of the barrier is set to 1/(10n). Karmarkar et al. [5, 8] propose to use 1/n, but this resulted in slow progress for this particular problem class. The weights w_i of the potential functions (11) and (23) are initially set to 1/n and 100/n for all i, respectively. The weights are decreased by a factor of two in each iteration.

Rounding scheme. We use the following rounding scheme. First, the largest fractional variable is found and is set to one. All variables corresponding to the same vertex, or to the same color at a connected vertex, are set to zero; subsequently the largest fractional variable is again determined. This rounding scheme terminates either when a feasible coloring has been found, or when a partial coloring has been found that cannot be extended without violating constraints. Given a fractional solution x:

while any x_vc is not rounded:
    (v*, c*) := argmax { x_vc | v ∈ V, c ∈ C, x_vc fractional };
    x_{v*c*} := 1;
    x_{v*c} := 0, ∀ c ∈ C \ {c*};
    x_{vc*} := 0, ∀ v ∈ V : (v, v*) ∈ E;
endwhile

Stopping criteria. As stopping criteria of the algorithm we take

m − x^T x < ε  or  x^T Q x < ε,

where ε = 10⁻³; alternatively, the algorithm stops when the potential value does not improve during two subsequent iterations.
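The rounding scheme admits a compact pure-Python rendering (a close variant of the pseudocode above: the largest not-yet-fixed positive variable is rounded up; identifiers and the toy data are ours):

```python
def round_coloring(x, vertices, colors, edges):
    """Greedy rounding: repeatedly fix the largest not-yet-rounded variable
    x[v, c] to 1, zero out the other colors of v and color c on neighbors of v.
    Returns a (possibly partial) coloring as a dict vertex -> color."""
    x = dict(x)                      # work on a copy; keys are (vertex, color)
    adj = {v: set() for v in vertices}
    for (u, v) in edges:
        adj[u].add(v)
        adj[v].add(u)
    coloring = {}
    while len(coloring) < len(vertices):
        candidates = [(val, v, c) for (v, c), val in x.items()
                      if v not in coloring and val > 0.0]
        if not candidates:
            break                    # partial coloring: cannot be extended
        _, v, c = max(candidates)
        coloring[v] = c
        for cc in colors:            # one color per vertex
            x[v, cc] = 0.0
        for u in adj[v]:             # neighbors may not reuse color c
            x[u, c] = 0.0
    return coloring

# Path 0 - 1 - 2 with two colors; this toy fractional point favors color 0
# for vertex 1, forcing its neighbors to color 1.
x = {(0, 0): 0.4, (0, 1): 0.6, (1, 0): 0.9, (1, 1): 0.1,
     (2, 0): 0.3, (2, 1): 0.7}
print(round_coloring(x, [0, 1, 2], [0, 1], [(0, 1), (1, 2)]))
```

On this instance the scheme first fixes vertex 1 to color 0, then colors vertices 2 and 0 with color 1, yielding a proper coloring of the path.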
Local minima. If the algorithm runs into a local minimum, the weights of the near-active constraints in the final interior solution x^K are increased. A constraint i is called near-active when s_i^K is close to zero. Subsequently, the process is restarted from a new starting point

x⁰_new = x^K + β (x⁰ − x^K),

where β > 1 is chosen such that A x⁰_new < b.

Implementation. The algorithm was implemented in MATLAB™ and uses some FORTRAN routines provided by the linear programming interior point solver LIPSOL (Zhang [14]). These FORTRAN routines use sparse matrix techniques to perform the minimum degree ordering, symbolic factorization, Cholesky factorization and back substitution needed to solve the linear system (5), provided this system is sparse. The tests were run on an HP9000/720 workstation, 144 Mb, 50 MHz.
Test problem generation. The GCP test problems were generated using the test problem generator GRAPH (Van Benthem [1]). This generator was originally intended to generate frequency assignment problems, but it was adapted to generate GCPs. The GCPs it generates have known optimal value. In the computational tests, we set the number of available colors |C| equal to the optimal value of the instance of the GCP under consideration.

Results. Table 1 shows, for each of the potential functions, the time and number of iterations (i.e. runs through steps 1-4 of Section 2.1) required to generate the first admissible coloring. Using potential function φ, problems up to a size of 1350 variables were solved; for larger problems the memory requirements and solution times appeared to be too high using our implementation.

[Table 1: Solution times and number of iterations for the three potential functions (1), (11) and (23), per instance G|V|.|C| with |E| edges; the numerical entries are not recoverable from this copy. '*' means that the solution was found after initially running into a local minimum; a dash means no solution was found; a blank means that the problem was not tried.]

To see which potential function is easiest to minimize, we let the algorithm run until a minimum was found. Table 2 shows for all three potential functions the time and number of iterations required to converge to a minimum. If the minimum found was global, the numbers are printed in boldface. The last column of the table shows the number of feasible solutions that could be constructed from the solution found by minimizing potential function (23). This number is always larger than one, and in some cases quite substantial. It gives an indication of the minimal number of feasible solutions of the given GCP. An examination of the tables leads us to the conclusion that potential function (11) leads to a significant improvement when compared to φ.
Furthermore, the new quadratic model gives the best results, both with respect to solution time and to the required number of iterations. The average numbers of linear systems that had to be solved per iteration for the three potential functions were 1.48, 1.56 and 1.34, respectively. Also in this respect the quadratic model seems to be the most stable.
[Table 2: Times and number of iterations required to converge to the first minimum (local or global) for the three potential functions φ (1), (11) and (23), together with the number of solutions if larger than one. '*' marks a second minimum; problems that were not tried are marked as well.]

7 Concluding remarks

In this paper a number of potential reduction methods for binary feasibility problems have been investigated. It has been shown that the potential function that Karmarkar et al. propose [7, 8, 5] has a major practical drawback when solving large scale combinatorial optimization problems, due to its dense Hessian. An improved potential function has been proposed. As this potential function exploits the sparsity of the problem, problems of larger sizes can be solved. A nonconvex quadratic formulation for a special class of binary feasibility problems has been developed, which results in a much smaller and computationally more attractive problem, as all inequality constraints are incorporated in the objective function. Furthermore, optimizing this model may result in finding multiple feasible solutions. All three potential functions (Karmarkar's potential function, the modified Karmarkar-type potential function and the potential function to optimize the new model) have been applied to several randomly generated instances of the graph coloring problem.

- It appears that the modified Karmarkar potential function yields much shorter computation times than the original potential function, especially for larger problems. The numbers of iterations required for these potential functions are comparable. Also the number of times the algorithm converges to a global minimum is almost the same.
- Using the new quadratic model leads to the best results. In all cases it finds a solution in the least time, usually after only one or a few iterations.
Furthermore, it finds a global minimum for all instances, and requires, at least for the larger problems, fewer iterations to converge to a global minimum.
- The number of solutions that can be constructed from a minimum of the new quadratic model can be quite substantial. In some cases hundreds or thousands of solutions are found simultaneously, whereas using the other potential functions results in finding just one solution.
Other combinatorial problems can also be tackled using one of the above mentioned potential functions. Any binary feasibility problem can be solved by using Karmarkar's or the modified Karmarkar potential function, where the latter is preferable due to its sparse Hessian. Some combinatorial problems can be solved by using the new quadratic model. Examples of such problems are, apart from the GCP:

- The maximum independent set problem.
- Frequency assignment problems (see Warners [12]).

The results described in this paper were obtained using a MATLAB/FORTRAN implementation. When using a more efficient low-level implementation, computation times can be improved considerably.

References

[1] H.P. van Benthem (1995), "GRAPH: Generating Radiolink frequency Assignment Problems Heuristically", Master's Thesis, Faculty of Technical Mathematics and Informatics, Delft University of Technology, Delft, The Netherlands.
[2] I.I. Dikin (1967), "Iterative solution of problems of linear and quadratic programming", Doklady Akademii Nauk SSSR 174, 747-748. Translated into English in Soviet Mathematics Doklady 8, 674-675.
[3] I.S. Duff, A.M. Erisman and J.K. Reid (1989), Direct Methods for Sparse Matrices, Oxford University Press, New York.
[4] O.E. Flippo and B. Jansen (1992), "Duality and sensitivity in nonconvex quadratic optimization over an ellipsoid", Technical Report 92-65, Faculty of Technical Mathematics and Informatics, Delft University of Technology, Delft, The Netherlands. To appear in European Journal of Operational Research.
[5] A.P. Kamath, N.K. Karmarkar, K.G. Ramakrishnan and M.G.C. Resende (1990), "Computational experience with an interior point algorithm on the Satisfiability problem", Annals of Operations Research 25, 43-58.
[6] N. Karmarkar (1984), "A new polynomial-time algorithm for linear programming", Combinatorica 4, 373-395.
[7] N. Karmarkar (1990), "An interior-point approach to NP-complete problems - part I", Contemporary Mathematics 114, 297-308.
[8] N. Karmarkar, M.G.C. Resende and K.G. Ramakrishnan (1991), "An interior point algorithm to solve computationally difficult set covering problems", Mathematical Programming 52, 597-618.
[9] E. Kranich (1991), "Interior point methods for mathematical programming: a bibliography", Discussion Paper 171, Institute of Economy and Operations Research, FernUniversität Hagen, P.O. Box 940, D-5800 Hagen 1, Germany.
[10] C.-J. Shi, A. Vannelli and J. Vlach (1992), "An improvement on Karmarkar's algorithm for integer programming", COAL Bulletin, Mathematical Programming Society, vol. 21, 23-28.
[11] D.C. Sorensen (1982), "Newton's method with a model trust region modification", SIAM Journal on Numerical Analysis 19, 409-426.
[12] J.P. Warners (1995), "A potential reduction approach to the Radio Link Frequency Assignment Problem", Master's Thesis, Faculty of Technical Mathematics and Informatics, Delft University of Technology, Delft, The Netherlands.
[13] Y. Ye (1992), "On affine scaling algorithms for nonconvex quadratic programming", Mathematical Programming 56, 285-300.
[14] Y. Zhang (1994), "LIPSOL - a MATLAB toolkit for linear programming interior-point solvers", Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland. FORTRAN routines written by E.G. Ng and B.W. Peyton (ORNL), J.W.H. Liu (Waterloo), Y. and D. Zhang (UMBC).
5.93 Optimization Methods Lecture 8: Optimality Conditions and Gradient Methods for Unconstrained Optimization Outline. Necessary and sucient optimality conditions Slide. Gradient m e t h o d s 3. The
More informationTHE REAL POSITIVE DEFINITE COMPLETION PROBLEM. WAYNE BARRETT**, CHARLES R. JOHNSONy and PABLO TARAZAGAz
THE REAL POSITIVE DEFINITE COMPLETION PROBLEM FOR A SIMPLE CYCLE* WAYNE BARRETT**, CHARLES R JOHNSONy and PABLO TARAZAGAz Abstract We consider the question of whether a real partial positive denite matrix
More informationInteger Linear Programs
Lecture 2: Review, Linear Programming Relaxations Today we will talk about expressing combinatorial problems as mathematical programs, specifically Integer Linear Programs (ILPs). We then see what happens
More informationResearch Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization
Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We
More information3 The Simplex Method. 3.1 Basic Solutions
3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,
More informationLectures 6, 7 and part of 8
Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,
More information