Variable Objective Search


Sergiy Butenko, Oleksandra Yezerska, and Balabhaskar Balasundaram

Industrial and Systems Engineering, Texas A&M University, College Station, TX, {butenko,yaleksa}@tamu.edu
Industrial Engineering & Management, Oklahoma State University, Stillwater, OK 74078, baski@okstate.edu

Abstract. This paper introduces the variable objective search framework for combinatorial optimization. The method utilizes the different objective functions used in alternative mathematical programming formulations of the same combinatorial optimization problem in an attempt to improve the solutions obtained using each of these formulations individually. The proposed technique is illustrated using alternative quadratic unconstrained binary formulations of the classical maximum independent set problem in graphs.

Keywords. Variable objective search, Binary quadratic programming, Maximum independent set

1 Introduction

Due to the inherent computational complexity of most important combinatorial optimization problems, one cannot expect to solve very large-scale instances to optimality, and (meta)heuristics are usually applied in practice. The simplicity and effectiveness of metaheuristic approaches earned them considerable popularity in the optimization community [5]. In their now classical paper, Mladenović and Hansen [8] (see also [6]) proposed to take advantage of different neighborhood structures for the same combinatorial optimization problem by systematically changing neighborhoods within the search.

The resulting variable neighborhood search (VNS) is based on the observation that a local optimum with respect to one neighborhood may not be a local optimum with respect to another neighborhood, whereas a global optimum is a local optimum with respect to any neighborhood structure. Therefore, if we find a solution that is locally optimal with respect to several neighborhood structures rather than just one, intuitively such a solution stands a higher chance of being a global optimum. VNS has been successfully applied to many different combinatorial optimization problems.

Inspired by this success, and motivated by the observation that properties similar to those stated for neighborhoods also hold for alternative objective functions for the same problem, in this paper we propose a new metaheuristic framework, variable objective search (VOS). It is in the spirit of VNS; however, instead of varying the neighborhood structures, we propose to consider different equivalent formulations of the combinatorial optimization problem being solved. Given two or more equivalent formulations of the same combinatorial optimization problem, and assuming for simplicity that all formulations share the same feasible region but have different objective functions, we can impose a neighborhood structure for our problem that will be the same for all the considered formulations. Then the following properties hold: (i) a local optimum for one formulation may not correspond to a local optimum with respect to another formulation; (ii) a global optimal solution of the considered combinatorial optimization problem corresponds to a global optimum for any formulation of this problem. Therefore, it seems natural to use the local maximum for one of the formulations as a starting point of the search for another formulation. The proposed variable objective search framework is based on these properties and can be applied to almost any optimization problem allowing multiple equivalent formulations (discrete or continuous).

In particular, many classical combinatorial optimization problems, such as the quadratic assignment problem, the max-cut problem, and the maximum clique problem, can be expressed using unconstrained binary quadratic formulations, and therefore allow for multiple equivalent binary quadratic reformulations [7]. One way to obtain such reformulations is as follows. Assume that one has the problem in the form
$$\max_{x \in \{0,1\}^n} f(x) = x^T Q x + c^T x,$$
where $Q$ is a real symmetric $n \times n$ matrix.

Given any real vector $q \in \mathbb{R}^n$, let $\tilde Q = Q + \mathrm{diag}(q)$, where $\mathrm{diag}(q)$ is the $n \times n$ diagonal matrix having $q$ as its diagonal, and let $\tilde c = c - q$. Then, since $\tilde f(x) = f(x)$ for any $x \in \{0,1\}^n$, the above 0-1 quadratic problem is equivalent to the problem
$$\max_{x \in \{0,1\}^n} \tilde f(x) = x^T \tilde Q x + \tilde c^T x.$$
The above modification does not have any impact on a local search based on any binary neighborhood; however, it does change the curvature of the quadratic function and can be useful for continuous local search. If the vector $q$ above is selected so that the resulting matrix $\tilde Q$ either has a zero diagonal or is positive definite, then we are guaranteed that the continuous problem $\max_{x \in [0,1]^n} \tilde f(x)$ has a binary global maximum; thus we obtain two equivalent continuous reformulations of the original binary quadratic problem. Applying classical nonlinear programming algorithms to one of these reformulations will yield a stationary point $x^{(0)}$ that is not necessarily a stationary point for the other reformulation and does not have to be a local maximum with respect to a binary neighborhood structure. Thus, local search procedures can be applied in the latter two cases using $x^{(0)}$ as the initial guess. In addition, selecting a vector $q$ that does not result in an equivalent reformulation but provides a continuous relaxation of the binary problem (e.g., by choosing $q \leq 0$) may also be of value, as it may provide good search directions. To illustrate this point, consider the following simple example (the matrix and vector entries below were lost in transcription and are reconstructed from the objective values and gradient stated in the rest of this example):
$$Q = \begin{bmatrix} 1 & -3 \\ -3 & 2 \end{bmatrix}; \qquad c = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
There are four feasible points for the corresponding binary quadratic maximization problem, $x^{(1)} = [0, 0]^T$, $x^{(2)} = [0, 1]^T$, $x^{(3)} = [1, 0]^T$, $x^{(4)} = [1, 1]^T$, with $f(x^{(1)}) = f(x^{(4)}) = 0$, $f(x^{(2)}) = 4$, and $f(x^{(3)}) = 2$. Hence, $x^{(3)}$ is a local maximum of $f(x)$ with respect to the binary one-flip neighborhood, as well as a local maximum for the corresponding continuous relaxation (see Fig. 1). Note that all feasible directions at $x^{(3)}$ are descent directions for $f(x)$. However, applying the above transformation with $q = [-4, -5]^T$, we obtain $\tilde f(x) \geq f(x)$ for all $x \in [0, 1]^2$, with the gradient $\nabla \tilde f(x^{(3)}) = [-1, 1]^T$, meaning that $[-1, 1]^T$ is a feasible ascent direction for $\tilde f(x)$ at $x^{(3)}$. Using this direction for line search with either a zero-diagonal or a positive definite continuous reformulation of the considered binary problem, we immediately obtain the global maximum given by $x^{(2)}$.
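The claims in this small example are easy to check numerically. The following Python sketch (using numpy, with the reconstructed data above) verifies that $f$ and $\tilde f$ coincide at the four binary points and compares their gradients at $x^{(3)}$:

import numpy as np

# Data of the example: f(x) = x^T Q x + c^T x.
Q = np.array([[1., -3.], [-3., 2.]])
c = np.array([1., 2.])

# Diagonal shift with q <= 0: Q~ = Q + diag(q), c~ = c - q.
q = np.array([-4., -5.])
Qt, ct = Q + np.diag(q), c - q

f = lambda x: x @ Q @ x + c @ x
ft = lambda x: x @ Qt @ x + ct @ x

# f and f~ coincide on binary points (values 0, 4, 2, 0 as stated above) ...
for v in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(v, dtype=float)
    assert abs(f(x) - ft(x)) < 1e-12
    print(v, f(x))

# ... but their gradients at x(3) = [1, 0] differ:
x3 = np.array([1., 0.])
print(2 * Q @ x3 + c)    # [ 3. -4.]: every feasible direction at x(3) is descent for f
print(2 * Qt @ x3 + ct)  # [-1.  1.]: [-1, 1] is a feasible ascent direction for f~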

Figure 1: Plots of $x_3 = f(x)$ and $x_3 = \tilde f(x)$ considered in the example.

To illustrate the proposed VOS approach on a specific problem, we will use alternative quadratic formulations of the maximum independent set problem in graphs; therefore, some definitions from graph theory are in order. Given a simple undirected graph $G = (V, E)$, let $A_G$ denote the adjacency matrix of $G$. For a vertex $v \in V$, let $N(v) = \{u : (u, v) \in E\}$ denote the neighborhood of $v$. The complement graph $\bar G$ of $G$ is given by $\bar G = (V, \bar E)$, where $\bar E = \{(u, v) \in V \times V \setminus E : u \neq v\}$. A subset of vertices $I \subseteq V$ is called an independent (stable) set if $(u, v) \notin E$ for any pair of vertices $u, v \in I$. The maximum independent set problem is to find an independent set of the largest size in $G$. The cardinality of the largest independent set is denoted by $\alpha(G)$ and is called the independence (stability) number. This classical combinatorial optimization problem is of great theoretical importance and has numerous practical applications of diverse origins; therefore, it has been well studied in the literature. See [3] for an in-depth survey of the maximum clique problem, which is equivalent to the maximum independent set problem in the complement graph.

The remainder of this paper is organized as follows. Section 2 describes the proposed methodology in detail. The approach is illustrated numerically on the maximum independent set problem in Section 3. Finally, Section 4 concludes the paper.
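A few lines of Python make these definitions concrete (the 5-cycle instance and the helper names are ours, for illustration only):

import numpy as np
from itertools import combinations

# A small example graph on V = {0,...,4}: the 5-cycle C5.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
A = np.zeros((n, n), dtype=int)          # adjacency matrix A_G
for u, v in edges:
    A[u, v] = A[v, u] = 1
A_bar = 1 - A - np.eye(n, dtype=int)     # adjacency matrix of the complement graph

def is_independent(I):
    # I is independent iff no edge of G joins two vertices of I.
    return all(A[u, v] == 0 for u, v in combinations(I, 2))

# Brute-force independence number on this tiny instance: alpha(C5) = 2.
alpha = max(len(S) for r in range(n + 1)
            for S in combinations(range(n), r) if is_independent(S))
print(alpha)  # 2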

2 The method

A combinatorial optimization problem $\mathcal{P}$ can be generally defined as a set of problem instances and a minimization or maximization objective, where an instance of $\mathcal{P}$ is a pair $(S, f)$, with $S$ being the set of feasible solutions and $f : S \to \mathbb{R}$ being the objective function [1]. Without loss of generality, we will assume that $\mathcal{P}$ has a maximization objective. The problem consists in finding a globally optimal solution, which is a feasible solution $s^* \in S$ such that $f(s^*) \geq f(s)$ for all $s \in S$. Let us denote by $f^* = f(s^*) = \max_{s \in S} f(s)$ the optimal objective function value, and by $S^* = \{s \in S : f(s) = f^*\} = \arg\max_{s \in S} f(s)$ the set of optimal solutions of the considered instance of the problem $\mathcal{P}$. For simplicity, in this paper we will assume that we are dealing with a specific instance of the problem $\mathcal{P}$; therefore, problem $\mathcal{P}$ will refer to the given instance of $\mathcal{P}$.

Many combinatorial optimization problems can be equivalently modeled as mathematical programs of different types, e.g., as (mixed) integer programming problems or continuous nonconvex optimization problems. The equivalence is in the sense that every global maximum of a mathematical programming formulation corresponds, in a straightforward fashion, to a global maximum of the combinatorial optimization problem that this formulation models. Assume that our problem $\mathcal{P}$ allows the following $p$ alternative equivalent formulations:
$$f^* = \max_{x \in X_i} f_i(x), \quad i = 1, \ldots, p,$$
where $X_i$, $i = 1, \ldots, p$, is a feasible set that may be either discrete or continuous. If we denote by $X_0 = S$ and by $f_0 = f$, then the original combinatorial formulation of the problem $\mathcal{P}$ can be written as $f^* = \max_{x \in X_0} f_0(x)$. Consider the set of all optimal solutions for the $i$th formulation: $X_i^* = \arg\max_{x \in X_i} f_i(x)$.
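In code, this setup amounts to carrying a family of objectives over (possibly different) feasible sets, each paired with a map from a common working representation; these maps are the $h_{1i}$ of Assumption 1 below. A minimal Python sketch with illustrative names (not from the paper):

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Formulation:
    # One equivalent formulation: objective f_i over X_i, together with the
    # map h_1i : X_1 -> X_i used to score points of the working set X_1.
    f: Callable[[Any], float]   # objective f_i
    h: Callable[[Any], Any]     # map h_1i

def score(form, x):
    # Value of formulation i at a point x in X_1.
    return form.f(form.h(x))

# Two equivalent objectives over {0,1}^n with identity maps: their values differ
# by a constant, so they share the same set of maximizers.
forms = [Formulation(f=lambda x: sum(x), h=lambda x: x),
         Formulation(f=lambda x: sum(x) - len(x), h=lambda x: x)]
print([score(F, (1, 0, 1)) for F in forms])  # [2, -1]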

Before we proceed with the formal description of the variable objective framework, we need to make the following important assumptions.

Assumption 1. There are mappings $h_{ij} : X_i \to X_j$, $i, j = 0, \ldots, p$ (where $h_{ii}$, $i = 0, \ldots, p$, are the identity maps) between the feasible sets of the $i$th and $j$th formulations such that any global optimal solution of the $i$th formulation is mapped to a global optimum of the $j$th formulation, i.e., $h_{ij}(X_i^*) \subseteq X_j^*$.

In some cases the mappings may be straightforward, while in others they may need to be calculated using efficient algorithms. Given $x^{(i)} \in X_i$, one would want to have $f_0(h_{i0}(x^{(i)})) \geq f_i(x^{(i)})$; however, this may not be the case in general.

Assumption 2. We assume that the same neighborhood structure $N$ can be imposed on each feasible region $X_i$, $i = 1, \ldots, p$. This neighborhood structure will be used for local search applied to each formulation within the proposed variable objective framework.

Note that if we deal with alternative continuous formulations of $\mathcal{P}$, the continuous $\epsilon$-neighborhood definition based on the Euclidean norm is the natural choice of neighborhood structure.

Assumption 3. We assume that the mappings $h_{ij}$ preserve the neighborhood structure $N$, i.e., for any $x \in X_i$ we have $h_{ij}(N(x)) = N(h_{ij}(x))$.

This assumption will allow us to perform a local search for all problems by essentially concentrating on one of the $X_i$'s. The last two assumptions are not critical and are used to simplify the presentation. They can be relaxed, resulting in a generalized variable objective search or variable neighborhood/objective search, which naturally generalizes both VOS and VNS.

2.1 Basic variable objective search

In the basic variable objective search (BVOS), we start with some feasible solution for the first formulation and perform a local search in the neighborhood $N$, yielding a locally optimal solution $\tilde x^{(1)}$ for $\max_{x \in X_1} f_1(x)$. This solution is then transformed into a feasible solution $h_{12}(\tilde x^{(1)})$ of the second formulation, which is then used as a starting point of a local search for the second formulation. Because of Assumption 3 above, the local search for the second formulation is performed by searching the neighborhood of $\tilde x^{(1)}$ in $X_1$ and mapping the neighbors to the corresponding points of $X_2$; the resulting local maximum for the second formulation is given by $h_{12}(\tilde x^{(2)})$, where $\tilde x^{(2)} \in X_1$. Next, $h_{13}(\tilde x^{(2)})$ is used as the starting point of local search with the third formulation, etc. To avoid cycling, local search moves for the $i$th formulation are performed only if there is an improvement compared to the best previously computed solution $\tilde x^{(i)}$. The corresponding pseudocodes are given in Algorithms 1 and 2, which describe BVOS with best improvement and first improvement local search strategies, respectively.

Algorithm 1 BVOS with best improvement move
Require: $f_i$, $X_i$, $i = 0, \ldots, p$; $N$;
Ensure: a feasible solution $s^*$ of $\mathcal{P}$;
1: Find an initial solution $x \in X_1$;
2: $\tilde x^{(i)} = x$, $i = 1, \ldots, p$;
3: $\tilde f_i = f_i(h_{1i}(\tilde x^{(i)}))$, $i = 1, \ldots, p$;
4: repeat
5:     $\hat x = x$;
6:     for $i = 1, \ldots, p$ do
7:         for each $y \in N(x)$ do
8:             if $f_i(h_{1i}(y)) > \tilde f_i$ then
9:                 $x = y$;
10:                $\tilde x^{(i)} = x$;
11:                $\tilde f_i = f_i(h_{1i}(\tilde x^{(i)}))$;
12:            end if
13:        end for
14:    end for
15: until $\hat x = x$;
16: $\tilde X_0 = \{h_{10}(\tilde x^{(i)}) : i = 1, \ldots, p\}$;
17: $\tilde X_0^* = \arg\max\{f_0(s) : s \in \tilde X_0\}$;
18: $\tilde f_0 = f_0(s^*)$, where $s^* \in \tilde X_0^*$;
19: return $s^* \in \tilde X_0^*$ and $\tilde f_0$;

2.2 Uniform variable objective search

The uniform variable objective search (UVOS) is similar to BVOS; however, the local search moves are performed based on analyzing the objective function values of all $p$ formulations simultaneously rather than one at a time. As in BVOS, we start with a feasible solution $x$ of the first formulation. Throughout the execution of the algorithm, we store the best computed solutions with respect to each of the $p$ formulations, where, as before, $\tilde x^{(i)} \in X_1$ is the best current solution with respect to the $i$th formulation, such that $h_{1i}(\tilde x^{(i)}) \in X_i$ is the corresponding local maximum.

Algorithm 2 BVOS with first improvement move
Require: $f_i$, $X_i$, $i = 0, \ldots, p$; $N$;
Ensure: a feasible solution $s^*$ of $\mathcal{P}$;
1: Find an initial solution $x \in X_1$;
2: $\tilde x^{(i)} = x$, $i = 1, \ldots, p$;
3: $\tilde f_i = f_i(h_{1i}(\tilde x^{(i)}))$, $i = 1, \ldots, p$;
4: repeat
5:     $\hat x = x$;
6:     for $i = 1, \ldots, p$ do
7:         for each $y \in N(x)$ do
8:             if $f_i(h_{1i}(y)) > \tilde f_i$ then
9:                 $x = y$;
10:                $\tilde x^{(i)} = x$;
11:                $\tilde f_i = f_i(h_{1i}(\tilde x^{(i)}))$;
12:                go to line 7;
13:            end if
14:        end for
15:    end for
16: until $\hat x = x$;
17: $\tilde X_0 = \{h_{10}(\tilde x^{(i)}) : i = 1, \ldots, p\}$;
18: $\tilde X_0^* = \arg\max\{f_0(s) : s \in \tilde X_0\}$;
19: $\tilde f_0 = f_0(s^*)$, where $s^* \in \tilde X_0^*$;
20: return $s^* \in \tilde X_0^*$ and $\tilde f_0$;

The local search proceeds as follows: for each $y$ in the neighborhood of the current solution $x$ and for each $i = 1, \ldots, p$, we check whether $y$ improves the previously best objective value $\tilde f_i$; if it does, we update $\tilde x^{(i)}$ and $\tilde f_i$ and either continue searching the same neighborhood until all neighbors of $x$ have been explored (the best improvement strategy), or, after updating all $\tilde f_i$, $i = 1, \ldots, p$, that improved, move to $y$ and start exploring the neighborhood of $y$ (the first improvement strategy). The pseudocodes given in Algorithms 3 and 4 describe UVOS with best improvement and first improvement local search strategies, respectively.
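Read as executable code, Algorithm 1 admits the following compact Python sketch. It fixes the working representation to binary vectors $X_1 = \{0,1\}^n$ with the one-flip neighborhood that Section 3 will use, and takes the formulations as plain $(f_i, h_{1i})$ pairs; this is one possible reading of the pseudocode under those assumptions, not the authors' implementation.

def one_flip_neighbors(x):
    # One-flip neighborhood: all binary vectors at Hamming distance 1 from x.
    for j in range(len(x)):
        y = list(x)
        y[j] = 1 - y[j]
        yield tuple(y)

def bvos_best_improvement(x, forms):
    # forms[i] = (f_i, h_1i); all candidates live in X_1 = {0,1}^n.
    # Implements lines 1-15 of Algorithm 1 and returns the best point found for
    # each formulation; mapping back to X_0 and picking the winner (lines 16-19)
    # is left to the caller.
    best_x = [x] * len(forms)                      # x~(i)
    best_f = [f(h(x)) for f, h in forms]           # f~_i
    while True:
        x_hat = x
        for i, (f, h) in enumerate(forms):
            for y in list(one_flip_neighbors(x)):  # scan N(x) for formulation i
                if f(h(y)) > best_f[i]:            # move only on improvement over f~_i
                    x, best_x[i], best_f[i] = y, y, f(h(y))
        if x_hat == x:                             # no objective improved: stop
            return best_x, best_f

# Tiny smoke test: two equivalent objectives counting the ones in x.
forms = [(lambda z: sum(z), lambda z: z), (lambda z: sum(z) - len(z), lambda z: z)]
print(bvos_best_improvement((0, 0, 0), forms))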

Algorithm 3 UVOS with best improvement move
Require: $f_i$, $X_i$, $i = 0, \ldots, p$; $N$;
Ensure: a feasible solution $s^*$ of $\mathcal{P}$;
1: Find an initial solution $x \in X_1$;
2: $\tilde x^{(i)} = x$, $i = 1, \ldots, p$;
3: $\tilde f_i = f_i(h_{1i}(\tilde x^{(i)}))$, $i = 1, \ldots, p$;
4: repeat
5:     $\hat x = x$;
6:     for each $y \in N(x)$ do
7:         for $i = 1, \ldots, p$ do
8:             if $f_i(h_{1i}(y)) > \tilde f_i$ then
9:                 $x = y$;
10:                $\tilde x^{(i)} = x$;
11:                $\tilde f_i = f_i(h_{1i}(\tilde x^{(i)}))$;
12:            end if
13:        end for
14:    end for
15: until $\hat x = x$;
16: $\tilde X_0 = \{h_{10}(\tilde x^{(i)}) : i = 1, \ldots, p\}$;
17: $\tilde X_0^* = \arg\max\{f_0(s) : s \in \tilde X_0\}$;
18: $\tilde f_0 = f_0(s^*)$, where $s^* \in \tilde X_0^*$;
19: return $s^* \in \tilde X_0^*$ and $\tilde f_0$;

3 Experiments with the maximum independent set problem

In this section, we evaluate and compare the performance of the proposed algorithms using the maximum independent set problem as the testbed. The main objective of this exercise is to explore how much the proposed approaches improve on the solutions found using similar local search strategies with each of the formulations individually. The maximum independent set problem can be equivalently formulated using the following continuous nonconvex programs (see, e.g., [2]):
$$\alpha(G) = \max_{x \in [0,1]^n} e^T x - \frac{1}{2}\, x^T A_G\, x, \qquad (1)$$
$$1 - \frac{1}{\alpha(G)} = \max_{x \in S} x^T A_{\bar G}\, x, \qquad (2)$$
where $S = \{x \in \mathbb{R}^{|V|} : e^T x = 1,\ x \geq 0\}$ and $e$ is the vector with all components equal to 1.
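A quick numerical check of these two formulations on a small graph may be helpful. The sketch below (an illustration under the stated formulas, reusing the 5-cycle from earlier) evaluates the objective of (1) at the characteristic vector of an independent set $I$ and the objective of (2) at that vector scaled by $1/|I|$, recovering $|I|$ and $1 - 1/|I|$, respectively:

import numpy as np

# 5-cycle again; I = {0, 2} is a maximum independent set (alpha = 2).
n = 5
A = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]:
    A[u, v] = A[v, u] = 1
A_bar = 1 - A - np.eye(n)
e = np.ones(n)

x = np.zeros(n); x[[0, 2]] = 1      # characteristic vector of I

# Formulation (1) at x gives |I| = alpha(G):
print(e @ x - 0.5 * x @ A @ x)      # 2.0

# Formulation (2) at x / |I| gives 1 - 1/alpha(G):
y = x / x.sum()
print(y @ A_bar @ y)                # 0.5 = 1 - 1/2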

Algorithm 4 UVOS with first improvement move
Require: $f_i$, $X_i$, $i = 0, \ldots, p$; $N$;
Ensure: a feasible solution $s^*$ of $\mathcal{P}$;
1: Find an initial solution $x \in X_1$;
2: $\tilde x^{(i)} = x$, $i = 1, \ldots, p$;
3: $\tilde f_i = f_i(h_{1i}(\tilde x^{(i)}))$, $i = 1, \ldots, p$;
4: repeat
5:     $\hat x = x$;
6:     for each $y \in N(x)$ do
7:         improvement = 0;
8:         for $i = 1, \ldots, p$ do
9:             if $f_i(h_{1i}(y)) > \tilde f_i$ then
10:                $x = y$;
11:                $\tilde x^{(i)} = x$;
12:                $\tilde f_i = f_i(h_{1i}(\tilde x^{(i)}))$;
13:                improvement = 1;
14:            end if
15:        end for
16:        if improvement == 1 then
17:            go to line 5;
18:        end if
19:    end for
20: until $\hat x = x$;
21: $\tilde X_0 = \{h_{10}(\tilde x^{(i)}) : i = 1, \ldots, p\}$;
22: $\tilde X_0^* = \arg\max\{f_0(s) : s \in \tilde X_0\}$;
23: $\tilde f_0 = f_0(s^*)$, where $s^* \in \tilde X_0^*$;
24: return $s^* \in \tilde X_0^*$ and $\tilde f_0$;

In formulation (1), the continuous feasible region $[0,1]^n$ can be replaced with the discrete one, $\{0,1\}^n$, without altering the optimal objective function value.

Also, the feasible region of the Motzkin-Straus [9] formulation (2) can be discretized by considering $\tilde S = \{y = x/(e^T x) : x \in \{0,1\}^n,\ x \neq 0\}$ in place of $S$, since there is always a global optimal solution $x^*$ of (2) in the form of the characteristic vector of a maximum independent set $I$ divided by the independence number:
$$x_i^* = \begin{cases} 1/|I|, & i \in I; \\ 0, & i \notin I. \end{cases}$$
Thus, we obtain the following discrete formulations:
$$\alpha(G) = \max_{x \in \{0,1\}^n} e^T x - \frac{1}{2}\, x^T A_G\, x, \qquad (3)$$
$$1 - \frac{1}{\alpha(G)} = \max_{x \in \tilde S} x^T A_{\bar G}\, x, \qquad (4)$$
where $\tilde S = \{y = x/(e^T x) : x \in \{0,1\}^n,\ x \neq 0\}$. We will use these two formulations in our experiments, with
$$f_1(x) = e^T x - \frac{1}{2}\, x^T A_G\, x; \qquad f_2(x) = x^T A_{\bar G}\, x.$$
Then the obvious choice for the transformation $h_{12}$ is
$$h_{12}(x) = \begin{cases} 0, & x = 0; \\ x/(e^T x), & x \neq 0. \end{cases}$$
As for the choice of the neighborhood, it should be noted that we are not interested in the most efficient neighborhood that would yield an optimal solution for most test instances. In fact, in order to be able to make a fair comparison, we are more interested in a neighborhood structure that would leave some room for improvement after finding an initial local optimum with respect to one of the formulations used. Hence, we select the standard one-flip neighborhood, which is commonly used in heuristics for unconstrained optimization of functions of binary variables. Namely, given a vector $x \in \{0,1\}^n$, its neighborhood consists of all $y \in \{0,1\}^n$ that are at Hamming distance 1 from $x$ (i.e., $y$ differs from $x$ in exactly one component).
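Putting the pieces together, a minimal sketch of this experimental setup in Python (an illustration under the formulas above, not the authors' code) builds $f_1$, $f_2$, and $h_{12}$ and runs one BVOS-style pass of one-flip local search over the two formulations:

import numpy as np

def setup(A):
    # Objectives f1, f2 of formulations (3)-(4) and the map h12.
    n = len(A)
    A_bar = 1 - A - np.eye(n)
    e = np.ones(n)
    f1 = lambda x: e @ x - 0.5 * x @ A @ x
    f2 = lambda y: y @ A_bar @ y
    h12 = lambda x: x / x.sum() if x.sum() > 0 else x   # h_12 as in the text
    return f1, f2, h12

def one_flip_local_search(x, score):
    # First-improvement one-flip local search maximizing score(x).
    improved = True
    while improved:
        improved = False
        for j in range(len(x)):
            y = x.copy(); y[j] = 1 - y[j]
            if score(y) > score(x):
                x, improved = y, True
                break
    return x

# Example: 5-cycle, alternating searches on f1 and then f2 o h12.
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]:
    A[u, v] = A[v, u] = 1
f1, f2, h12 = setup(A)
x = np.zeros(5)
x = one_flip_local_search(x, f1)                      # local optimum of (3)
x = one_flip_local_search(x, lambda z: f2(h12(z)))    # continue under (4)
print(x, f1(x))                                       # an independent set and its size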

3.1 Results of experiments

To test the proposed algorithms, complements of selected DIMACS clique benchmark graphs [4] were used. The number of vertices of the tested graphs ranged from 64 to 496. A set of 100 runs of each algorithm was performed for every instance. The initial solution for each run was generated randomly and was the same for all algorithms.

Table 1 presents the results of executing the BVOS method. The columns in each row of this table contain the graph instance name ("Instance") followed by its number of vertices ("|V|"). The next four columns contain the following information: the mean (over the total number of runs) of the values of the first found local optimal solutions for formulation (3) ($\bar f_1^{\,0}$), the mean of the maximum values of the objective function (3) found ($\bar f_1$), the maximum (over the 100 trials) percentage improvement from the first found local optimal solution to the best found solution ("Best %"), and the best overall value found for the objective function (3) ($f_1^*$), all obtained under the BVOS method with the best improvement move. The last four columns contain the same information for the first improvement move strategy.

Table 1: Results for BVOS. Columns: Instance, |V|; then $\bar f_1^{\,0}$, $\bar f_1$, Best %, $f_1^*$ for BVOS with best improvement, and the same four columns for BVOS with first improvement. Rows cover complements of DIMACS instances from the brock, c-fat, hamming, johnson, keller, MANN, p hat, san, and sanr families. [Numeric entries are not recoverable from this transcription.]

The results of applying the UVOS algorithm to the same graph instances are shown in Table 2. Similar to Table 1, the first two columns represent the graph instance name and its number of vertices, while the other two groups of four columns contain the mean ($\bar f_1$) and the maximum ($f_1^*$) of the objective values for formulation (3), and the mean ($\bar f_2$) and the maximum ($f_2^*$) of the objective values for formulation (4), obtained using the best improvement and first improvement approaches, respectively.

Table 2: Results for UVOS. Columns: Instance, |V|; then $\bar f_1$, $f_1^*$, $\bar f_2$, $f_2^*$ for UVOS with best improvement, and the same four columns for UVOS with first improvement, over the same instances as in Table 1. [Numeric entries are not recoverable from this transcription.]

The results of the experiments indicate that the overall performance of VOS on the given set of instances was rather encouraging. As can be seen from Table 1, in many cases it yielded a considerable (more than 50%) improvement in the value of the objective function under BVOS with best improvement, with first improvement, or both. We obtained only one result with no improvement under either BVOS strategy. On the other hand, there were three instances with a very large percentage improvement, indicating that the first obtained objective value was far from optimal and improved considerably under BVOS with the first improvement move; in general, though, the better maximum objective values were generated by BVOS with the best improvement move. We also observed a tendency (in about 70% of all cases) to obtain the same maximum objective function values under both the BVOS and UVOS methods; the majority of the remaining 30% of the instances were solved better by the UVOS algorithm. We observed no significant difference in the performance of UVOS with the best improvement versus the first improvement approach; in most cases both strategies showed similar results.

4 Conclusion

We introduced the general framework of variable objective search for combinatorial optimization, which is based on the simple observation that systematically combining different mathematical programming formulations of the same combinatorial optimization problem may lead to better heuristic solutions than those obtained by solving any of the considered formulations individually. We proposed two different variations of the method: the basic

variable objective search (BVOS), in which the local search is performed sequentially with respect to the considered formulations, and the uniform variable objective search (UVOS), which runs one local search that looks for better solutions with respect to each formulation simultaneously. In the latter part of the paper, we demonstrated the merit of BVOS and UVOS by applying them to two different formulations of the maximum independent set problem in graphs. While in this paper we restricted the discussion to two versions of VOS, many other variations of the method can be developed, similarly to how numerous versions of VNS have been proposed in the literature [6]. Since the main objective of this paper is to introduce the general idea of this new framework, we leave the exploration of its various potentially useful variations and the evaluation of their practical performance on large-scale instances of different combinatorial optimization problems for future research. In particular, the case of alternative continuous nonconvex formulations of the same problem, where the neighborhood of a feasible point is naturally defined as the intersection of the feasible region with an $\epsilon$-ball centered at this point, is an interesting direction to investigate.

References

[1] E. Aarts and J. K. Lenstra, editors. Local Search in Combinatorial Optimization. John Wiley & Sons, Chichester, 1997.

[2] J. Abello, S. Butenko, P. Pardalos, and M. Resende. Finding independent sets in a graph using continuous multivariable polynomial formulations. Journal of Global Optimization, 21:111-137, 2001.

[3] I. M. Bomze, M. Budinich, P. M. Pardalos, and M. Pelillo. The maximum clique problem. In D.-Z. Du and P. M. Pardalos, editors, Handbook of Combinatorial Optimization, pages 1-74. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.

[4] DIMACS. Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge. Accessed April.

[5] F. Glover and G. Kochenberger, editors. Handbook of Metaheuristics. Springer, London.

[6] P. Hansen and N. Mladenović. Variable neighborhood search: Principles and applications. European Journal of Operational Research, 130:449-467, 2001.

[7] R. Horst, P. M. Pardalos, and N. V. Thoai. Introduction to Global Optimization. Kluwer Academic Publishers, Dordrecht, The Netherlands, 2nd edition, 2000.

[8] N. Mladenović and P. Hansen. Variable neighborhood search. Computers and Operations Research, 24:1097-1100, 1997.

[9] T. S. Motzkin and E. G. Straus. Maxima for graphs and a new proof of a theorem of Turán. Canad. J. Math., 17:533-540, 1965.
