A Bound for Non-Subgraph Isomorphism
Christian Schellewald
School of Computing, Dublin City University, Dublin 9, Ireland

Abstract. In this paper we propose a new lower bound for the subgraph isomorphism problem. This bound can provide a proof that no subgraph isomorphism between two graphs exists. The computation is based on the SDP relaxation of a combinatorial optimisation formulation for subgraph isomorphism that is, to the best of our knowledge, new. We consider problem instances where only the structures of the two graphs are given, and therefore we deal with simple graphs in the first place. The idea is based on the fact that a subgraph isomorphism for such problem instances always leads to 0 as the lowest possible optimal objective value of our combinatorial optimisation problem. Therefore, a lower bound larger than 0 represents a proof that no subgraph isomorphism exists in the problem instance. Note that, conversely, a non-positive lower bound does not imply that a subgraph isomorphism must be present; it only indicates that a subgraph isomorphism is still possible.

1 Introduction

The subgraph isomorphism problem is a well known problem in computer science and usually also involves finding the appropriate matching. It is therefore also of interest in computer vision: if an object is represented by a graph, the object could be identified as a subgraph within a possibly larger scene graph. Error-correcting graph matching [1], also known as error-tolerant graph matching, is a quite general and appropriate approach to compute an assignment between the nodes of two graphs. It is based on the minimisation of so-called edit costs, which arise when one graph is turned into the other by some predefined edit operations. Commonly introduced edit operations are deletion, insertion, and substitution of nodes and edges. Each edit operation is assigned a cost, which is application dependent. The minimal edit cost defines the so-called edit distance between two graphs.
The idea of defining the edit distance for graph matching goes back to Sanfeliu and Fu [2]. Before that, the edit distance was mainly used for string matching. Several algorithms for error-correcting graph matching have been proposed, based on different methods such as tree search [3], genetic algorithms [4], and others (see, e.g., [1]).

This research was supported by a Marie Curie Intra-European Fellowship within the 6th European Community Framework Programme.
In this paper we first propose a combinatorial optimisation formulation for the subgraph isomorphism problem that can be seen as an error-correcting graph matching approach. The integer optimisation problem we end up with is in general an indefinite quadratic integer optimisation problem, which is NP-hard [5]. For example, Pardalos and Vavasis showed that indefinite quadratic programs are NP-hard even if the quadratic program is very simple (see [6]). We then compute a (convex) SDP relaxation of the combinatorial problem to obtain a lower bound for the subgraph isomorphism problem. The bound can be computed with standard methods for semidefinite programs. Finally, we show that the bound can indeed be used to prove that no subgraph isomorphism between two graphs can be found. Several approaches have been proposed to tackle the subgraph isomorphism problem [7,8,3,9]. Our approach differs from a more recently proposed approach that is based on a reformulation to a largest clique problem [10,11]: our approach aims to find the full first graph as a subgraph of the second graph, whereas the largest clique represents the largest common subgraph.

2 Preliminaries

In this work we consider simple graphs $G = (V, E)$ with nodes $V = \{1,\ldots,n\}$ and edges $E \subseteq V \times V$. We denote the first, possibly smaller, graph by $G_K$ and the second by $G_L$. The corresponding node sets $V_K$ and $V_L$ contain $K = |V_K|$ and $L = |V_L|$ nodes, respectively. We assume $L \ge K$. We make extensive use of the direct product $C = A \otimes B$, which is also known as the Kronecker product [12]: every matrix element $A_{ij}$ of $A \in \mathbb{R}^{n \times m}$ is multiplied with the whole matrix $B \in \mathbb{R}^{p \times q}$, resulting in the larger matrix $C \in \mathbb{R}^{np \times mq}$. A subgraph isomorphism is a mapping $m : V_K \to V' \subseteq V_L$ of all nodes of the graph $G_K$ to a subset $V'$ of $V_L$ with $K$ nodes of the graph $G_L$ such that the structure is preserved. That means that any two nodes $i$ and $j$ of $G_K$ that are adjacent must be mapped to nodes $m(i)$ and $m(j)$ in $G_L$ that are adjacent too.
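As a quick illustration of the dimensions involved, the Kronecker product is available directly in NumPy as `np.kron`; the matrices below are arbitrary toy examples, not data from the paper:

```python
import numpy as np

A = np.arange(1, 7).reshape(2, 3)   # A in R^{2x3}
B = np.array([[0, 1],
              [1, 0]])              # B in R^{2x2}

C = np.kron(A, B)                   # C in R^{(2*2)x(3*2)} = R^{4x6}
print(C.shape)                      # (4, 6)

# Block (i, j) of C is the scalar A[i, j] times the whole matrix B.
print(np.array_equal(C[0:2, 0:2], A[0, 0] * B))   # True
```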
The same has to be true for the inverse mapping $m^{-1} : V' \to V_K$, which maps the nodes $V'$ of the subgraph back to the nodes $V_K$ of $G_K$.

3 Combinatorial Objective Function

In this section we propose, and prove the correctness of, a formulation of the combinatorial problem of finding a subgraph isomorphism. The general idea is to find a bipartite matching between the node set of the smaller graph and the node set of the larger graph. The bipartite matching is evaluated by an objective function that can be interpreted as a comparison between the structure of all possible node pairs in the first graph and the structure of the node pairs to which those nodes are matched in the second graph. A matching that leads to no structural differences has zero cost and represents a subgraph isomorphism. Mathematically, the evaluation can be performed by a simple quadratic objective function
$x^\top Q x$. The full task of finding a subgraph isomorphism results in the following combinatorial quadratic optimisation problem, whose details are explained below:

$\min_x \; x^\top Q x \quad \text{s.t.} \quad A_K x = e_K, \;\; A_L x \le e_L, \;\; x \in \{0,1\}^{KL}$   (1)

The constraints, which make use of the matrices $A_K = I_K \otimes e_L^\top$ and $A_L = e_K^\top \otimes I_L$, ensure that the vector $x$ is a 0/1-indicator vector representing a bipartite matching between the two node sets of the graphs. Here $e_n \in \mathbb{R}^n$ denotes the vector with all elements equal to 1 and $I_n \in \mathbb{R}^{n \times n}$ denotes the unit matrix. A vector element $x_{ji} = 1$ indicates that node $i$ of the first set $V_K$ is matched to node $j$ of the second set $V_L$; otherwise $x_{ji} = 0$. The elements of the indicator vector $x \in \{0,1\}^{KL}$ are ordered as follows:

$x = (x_{11}, \ldots, x_{L1}, x_{12}, \ldots, x_{L2}, \ldots, x_{1K}, \ldots, x_{LK})^\top.$   (2)

We illustrate such an indicator vector in figure 1, where a bipartite matching between two sets of nodes and the corresponding indicator vector are shown.

Fig. 1. The illustration of the 0/1-indicator vector on the right side is a representation of the matching shown on the left side of this figure.

The matrix $Q$ within the objective function $x^\top Q x$ of the optimisation problem (1) can be written in a short form using the Kronecker product:

Definition 1 (Relational Structure Matrix).

$Q = N_K \otimes \bar{N}_L + \bar{N}_K \otimes N_L$   (3)

Here $N_K$ and $N_L$ are the 0/1-adjacency matrices of the two graphs. The matrices $\bar{N}_K$ and $\bar{N}_L$ represent the complementary adjacency matrices, which are computed as follows:

Definition 2 (Complementary Adjacency Matrices).

$\bar{N}_L = E_{LL} - N_L - I_L, \qquad \bar{N}_K = E_{KK} - N_K - I_K$
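To make the formulation concrete, the matrix $Q$ of Definition 1 can be assembled with NumPy's Kronecker product and the objective evaluated for two hand-picked matchings. The graphs below (a 3-node path embedded in a 4-cycle) are an illustrative assumption, not an example from the paper; following the ordering in (2), the entry $x_{ji}$ sits at 0-based index $i \cdot L + j$:

```python
import numpy as np

def comp(N):
    """Complementary adjacency matrix: 1 where nodes are distinct and non-adjacent."""
    n = N.shape[0]
    return np.ones((n, n), dtype=int) - N - np.eye(n, dtype=int)

def indicator(mapping, K, L):
    """0/1 vector with x_{ji} = 1 iff node i of G_K maps to node j of G_L."""
    x = np.zeros(K * L, dtype=int)
    for i, j in mapping.items():          # i in V_K, j in V_L (0-based)
        x[i * L + j] = 1
    return x

# G_K: a path on 3 nodes; G_L: a cycle on 4 nodes.
N_K = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
N_L = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
K, L = 3, 4

# Relational structure matrix, equation (3).
Q = np.kron(N_K, comp(N_L)) + np.kron(comp(N_K), N_L)

good = indicator({0: 0, 1: 1, 2: 2}, K, L)   # structure-preserving embedding
bad  = indicator({0: 0, 1: 2, 2: 1}, K, L)   # breaks an edge and a non-edge
print(good @ Q @ good, bad @ Q @ bad)        # 0 4
```

The zero objective certifies that the first matching is a subgraph isomorphism, while each of the two structural differences of the second matching is counted twice, giving 4.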
These complementary adjacency matrices can be interpreted as 0/1-indicator matrices for non-adjacent nodes: they have the element $(\bar{N})_{ij} = 1$ if the corresponding nodes $i$ and $j$ are not directly connected in the graph. The adjacency matrix $N_K$ of a small graph along with its complementary adjacency matrix are shown in figure 2.

Fig. 2. An example graph and its adjacency matrix $N_K$ along with its complementary adjacency matrix $\bar{N}_K$.

In the following we show that a 0/1-solution vector $x$ of the optimisation problem (1) with an optimal objective value of zero represents a subgraph isomorphism. We first show that zero is the smallest possible value, and then we show that every deviation from a subgraph isomorphism results in an objective value $> 0$.

Proposition 1. The minimal value of the combinatorial optimisation problem (1) is zero.

Proof. The elements of $Q$ and $x$ are all non-negative; in fact, the elements are either zero or one. Therefore the lowest possible value of the quadratic cost term, which can be rewritten as the following sum,

$x^\top Q x = x^\top (N_K \otimes \bar{N}_L + \bar{N}_K \otimes N_L) x = \sum_{a,r}^{K,L} \sum_{b,s}^{K,L} \left[ (N_K)_{ab} (\bar{N}_L)_{rs} + (\bar{N}_K)_{ab} (N_L)_{rs} \right] x_{ra} x_{sb},$   (4)

is zero.

Proposition 2. A solution of the quadratic optimisation problem (1) with the minimal value of zero represents a subgraph isomorphism.

To prove this we consider the term within the sum and show that it leads to a cost $> 0$ only if the considered matching violates the condition for a subgraph isomorphism.

Proof. Only if the product $x_{ra} x_{sb}$ is one can the term within the sum (4) be different from zero, and then the part $[(N_K)_{ab} (\bar{N}_L)_{rs} + (\bar{N}_K)_{ab} (N_L)_{rs}]$ must be considered. In the following we also refer to this part of the term as the structure comparison term. There are two cases that lead to $x_{ra} x_{sb} = 1$:
Case A: Node $a$ and node $b$ in $G_K$ represent the same node ($a = b$). But as the diagonals of $N_K$ and $\bar{N}_K$ are zero, one obtains $(N_K)_{aa} = 0$ and $(\bar{N}_K)_{aa} = 0$. In this case the term $[(N_K)_{aa} (\bar{N}_L)_{rr} + (\bar{N}_K)_{aa} (N_L)_{rr}] x_{ra} x_{ra}$ is always zero and does not contribute to the sum.

Case B: The nodes $a$ and $b$ represent different nodes ($a \neq b$) in $G_K$, and due to the bipartite matching constraints a value $x_{ra} x_{sb} = 1$ represents the situation $x_{ra} = 1$ and $x_{sb} = 1$, which means that the nodes $a$ and $b$ are mapped to two different nodes $r$ and $s$ in the second graph $G_L$, respectively. Considering now the term $[(N_K)_{ab} (\bar{N}_L)_{rs} + (\bar{N}_K)_{ab} (N_L)_{rs}]$, we observe that all four possible structural configurations between the two pairs of nodes in the two graphs are valued with a cost of zero or one. All sub-cases of case B that could lead to a non-zero value of the structure comparison term, and therefore of the sum, are listed in table 1. In the following we summarise the meaning of the cases, and we will see that a cost is added exactly for every difference between the structure of $G_K$ and the considered subgraph of the second graph $G_L$.

case  configuration                        $(N_K)_{ab}(\bar{N}_L)_{rs}$  $(\bar{N}_K)_{ab}(N_L)_{rs}$  cost
I     a,b adjacent;     r,s adjacent                  0                          0                  0
II    a,b adjacent;     r,s not adjacent              1                          0                  1
III   a,b not adjacent; r,s adjacent                  0                          1                  1
IV    a,b not adjacent; r,s not adjacent              0                          0                  0

Table 1. List of all outcomes of the structure comparison term for two different nodes $a$ and $b$ of $G_K$ that are mapped to two different nodes $r$ and $s$ of the second graph $G_L$. Only in cases I and IV is the structure preserved, which can lead to an isomorphism; no cost is added in these cases. The other cases (II and III) do not preserve the structure and lead to a total cost $> 0$. For details see the text.

I: If the two nodes $a$ and $b$ in the first graph are neighbours, $(N_K)_{ab} = 1$, then no cost is added in (4) if the nodes $r$ and $s$ in the scene graph are neighbours too: $(\bar{N}_L)_{rs} = 0$.
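The four rows of table 1 can be checked mechanically: for distinct node pairs the complementary matrices reduce to one minus the adjacency entry, so the structure comparison term depends only on the two adjacency bits. A small sketch, not code from the paper:

```python
# Structure comparison term for a != b mapped to r != s:
#   (N_K)_ab * (1 - (N_L)_rs) + (1 - (N_K)_ab) * (N_L)_rs
rows = []
for nk in (1, 0):          # are a, b adjacent in G_K?
    for nl in (1, 0):      # are r, s adjacent in G_L?
        cost = nk * (1 - nl) + (1 - nk) * nl
        rows.append((nk, nl, cost))

print(rows)   # cases I, II, III, IV yield costs 0, 1, 1, 0
```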
II: Otherwise, if $a$ and $b$ are neighbours in $G_K$ but the corresponding nodes $r$ and $s$ are not neighbours in the second graph, $(\bar{N}_L)_{rs} = 1$, then a cost of 1 is added. The configurations I and II are visualised in figure 3.

III: Analogously, the structure comparison term penalises assignments where pairs of nodes $a$ and $b$ in the graph $G_K$ that were not adjacent before become neighbours in the second graph $G_L$.

IV: Finally, if $a$ and $b$ are not adjacent in the first graph $G_K$ and the nodes $r$ and $s$ in $G_L$ are also not adjacent, no cost is added.
Fig. 3. Left: Adjacent nodes $a$ and $b$ in the graph $G_K$ are assigned to adjacent nodes $r$ and $s$ in the graph $G_L$. Right: Adjacent nodes $a$ and $b$ are no longer adjacent in the graph $G_L$ after the assignment. The left assignment leads to no additional costs, while the right, undesired assignment adds 1 to the total cost.

Fig. 4. Left: Nodes $a$ and $b$, which are not adjacent in the object graph $G_K$, are assigned to nodes that are also not adjacent in the scene graph $G_L$. Right: A pair of non-adjacent nodes $a$ and $b$ become neighbours $r$ and $s$ after the assignment. The left assignment is associated with no additional costs in (4); the undesired assignment on the right side adds 1 to these costs.

Figure 4 illustrates situations III and IV in detail. This shows that only mappings that lead to a change in the structure are penalised with a cost; structure-preserving mappings, which are compatible with a subgraph isomorphism, are cost-free. Note that due to the symmetry of the adjacency matrices the quadratic cost term $x^\top Q x$ is symmetric too, and every difference in the compared structures of the two graphs is counted twice, resulting in a cost of 2 for every difference in the structure. Finally, the sum (4), and therefore the objective function $x^\top Q x$, considers all possible combinations of node pairs $a$ and $b$ that are mapped to $r$ and $s$, respectively. Only for matchings that lead to no difference between the structure of the first graph and the mapped substructure of the second graph (and vice versa) are all terms within the sum (4) zero. In this case the bipartite matching represents a subgraph isomorphism. We wish to emphasise that the minimisation of (1) represents the search for a bipartite matching with the smallest possible structural deviation between $G_K$ and the considered subgraph of $G_L$.
Therefore (1) can be seen as an edit distance with a cost of 2 for each addition or removal of an edge that is needed to turn the first graph into the considered subgraph of the other.
4 Convex Problem Relaxation

The combinatorial subgraph isomorphism approach (1) can be relaxed to a (convex) semidefinite program (SDP), which has the following standard form:

$\min \; \mathrm{Tr}[QX] \quad \text{s.t.} \quad \mathrm{Tr}[A_i X] = c_i \;\; \text{for } i = 1,\ldots,m, \qquad X \succeq 0$   (5)

The constraint $X \succeq 0$ means that $X$ has to be positive semidefinite. This convex optimisation problem can be solved with standard methods like interior point algorithms (see, e.g., [13]). Note that the solution of the relaxation (5) provides a lower bound for (1). Below, we describe how we derive such a semidefinite program from (1). For more information on semidefinite programming we refer to [14].

5 Convex Relaxation

The convex relaxation in this section follows the relaxation explained in detail in [15]. In order to obtain an appropriate SDP relaxation of the combinatorial subgraph matching problem, we start with a reformulation of the objective function of (1),

$f(x) = x^\top Q x = \mathrm{Tr}[x^\top Q x] = \mathrm{Tr}[Q x x^\top] = \mathrm{Tr}[QX],$   (6)

where $X = x x^\top$. We take into account the following summarised constraints of the form $\mathrm{Tr}[A_i X] = c_i$, which are intended to encode the original bipartite matching constraints in a suitable way. In particular, we describe the constraint matrices $A_i$. The equality constraints $\sum_{j=1}^{L} x_{ji} = 1$, $i = 1,\ldots,K$, which are part of the bipartite matching constraints, express that each node of the smaller graph is mapped to exactly one node of the scene graph. We define $K$ constraint matrices $A^{\mathrm{sum}}_j \in \mathbb{R}^{(KL+1) \times (KL+1)}$, $j = 1,\ldots,K$, which ensure (taking the order of the diagonal elements into account) that the sum of the appropriate portion of the diagonal elements of $X$ is 1. As we deal with the diagonal elements of $X$, we also exploit the fact that $x_i = x_i^2$ holds for 0/1-variables. The matrix elements of the $j$-th constraint matrix $A^{\mathrm{sum}}_j$ can be expressed as follows:

$(A^{\mathrm{sum}}_j)_{kl} = \sum_{i=(j-1)L+2}^{jL+1} \delta_{ik}\,\delta_{il} \quad \text{for } k, l = 1,\ldots,KL+1$

For these constraints the constants are $c_j = 1$, $j = 1,\ldots,K$.
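The chain of identities in (6) is easy to verify numerically: for any symmetric $Q$ and 0/1-vector $x$, the quadratic form equals the trace of $Q$ times the rank-one matrix $X = xx^\top$. The relaxation then consists in letting $X$ range over all positive semidefinite matrices satisfying the linear constraints, rather than only rank-one ones. A minimal numerical check with random data (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.integers(0, 2, size=(6, 6))
Q = Q + Q.T                          # symmetric, like the relational structure matrix
x = rng.integers(0, 2, size=6)       # a 0/1-vector

X = np.outer(x, x)                   # the lifted variable X = x x^T
lhs = x @ Q @ x
rhs = np.trace(Q @ X)
print(lhs == rhs)                    # True
```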
As all integer solutions $X = x x^\top \in \mathbb{R}^{KL \times KL}$, where $x$ represents a bipartite matching, have zero values at those matrix positions where $I_K \otimes (E_{LL} - I_L)$ and $(E_{KK} - I_K) \otimes I_L$ have non-zero elements, we want to force the corresponding elements of $X \in \mathbb{R}^{KL \times KL}$
to be zero. The matrices $E_{LL} \in \mathbb{R}^{L \times L}$ and $E_{KK} \in \mathbb{R}^{K \times K}$ are matrices where all elements are 1, and $I_n \in \mathbb{R}^{n \times n}$ again denotes the unit matrix. This can be achieved with the constraint matrices $A^{ars}, A^{\hat{s}\hat{a}\hat{b}} \in \mathbb{R}^{KL \times KL}$, which are determined by the indices $a, r, s$ and $\hat{s}, \hat{a}, \hat{b}$. Following the ordering (2), they have the matrix elements

$A^{ars}_{kl} = \delta_{k,((a-1)L+r)}\,\delta_{l,((a-1)L+s)} + \delta_{k,((a-1)L+s)}\,\delta_{l,((a-1)L+r)},$   (7)

$A^{\hat{s}\hat{a}\hat{b}}_{kl} = \delta_{k,((\hat{b}-1)L+\hat{s})}\,\delta_{l,((\hat{a}-1)L+\hat{s})} + \delta_{k,((\hat{a}-1)L+\hat{s})}\,\delta_{l,((\hat{b}-1)L+\hat{s})},$   (8)

where $k, l = 1,\ldots,KL$. The indices $a, r, s$ and $\hat{s}, \hat{a}, \hat{b}$ attain all valid combinations of the following triples, where $s > r$ and $\hat{b} > \hat{a}$:

$(a, r, s): \; a = 1,\ldots,K; \;\; r = 1,\ldots,L; \;\; s = (r+1),\ldots,L$
$(\hat{s}, \hat{a}, \hat{b}): \; \hat{s} = 1,\ldots,L; \;\; \hat{a} = 1,\ldots,K; \;\; \hat{b} = (\hat{a}+1),\ldots,K$

For these constraints the constant $c$ has to be zero. With this we define $(L^2 - L)K/2 + (K^2 - K)L/2$ additional constraints that ensure zero values at the corresponding matrix positions of $X$.

6 Early Results for the Non-Isomorphism Bound

For the early results presented in this section we used our implementation described in [15], where we set the similarity vector to a zero vector. Furthermore, we introduced a parameter $\alpha > 0$, which is just a scaling parameter for the objective function and should have no influence on the solution other than a scaling. An illustrative example of a subgraph isomorphism problem is depicted in figure 5.

Fig. 5. Example of a randomly created subgraph problem. Is there a subgraph isomorphism? For the shown problem instance we can compute a lower bound $> 0$ for (1), which proves that no subgraph isomorphism is present.

For this example we compute a lower bound $> 0$ using the SDP relaxation (5), which proves that a subgraph isomorphism does not exist in this problem instance. Note that we did not eliminate mappings that could
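The zero pattern enforced by (7) and (8) can be visualised by building the two Kronecker masks from the start of this section and checking that the lifted matrix $X = xx^\top$ of a valid bipartite matching vanishes on the masked positions, while an invalid assignment does not. The toy sizes below are assumptions for illustration:

```python
import numpy as np

K, L = 2, 3
# Positions that must be zero in X: one G_K node matched to two G_L nodes,
# or two G_K nodes matched to the same G_L node.
mask = np.kron(np.eye(K), np.ones((L, L)) - np.eye(L)) \
     + np.kron(np.ones((K, K)) - np.eye(K), np.eye(L))

def lifted(assignments):
    """X = x x^T for x_{ji} = 1 iff i -> j; x_{ji} sits at 0-based index i*L + j."""
    x = np.zeros(K * L)
    for i, j in assignments:
        x[i * L + j] = 1
    return np.outer(x, x)

valid   = lifted([(0, 1), (1, 2)])   # injective matching
invalid = lifted([(0, 1), (1, 1)])   # both G_K nodes hit G_L node 1

print((valid * mask).sum(), (invalid * mask).sum())   # 0.0 2.0
```

In the SDP this forbidden support is expressed entry-wise through the equality constraints $\mathrm{Tr}[A^{ars} X] = 0$ and $\mathrm{Tr}[A^{\hat{s}\hat{a}\hat{b}} X] = 0$.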
not lead to a subgraph isomorphism. The possible objective values of (1) are restricted to discrete values, as the quadratic term $\alpha x^\top Q x$ can only attain values that are multiples of $2\alpha$. The discrete distribution of the objective values for the subgraph isomorphism problem shown in figure 5 is depicted in figure 6, where we have set $\alpha = 0.3$.

Fig. 6. The distribution of the objective values for the subgraph isomorphism problem shown in figure 5 ($K = 7$, $L = 15$). The objective values are restricted to discrete values, as the quadratic term $\alpha x^\top Q x$ can only attain values that are multiples of $2\alpha$. Here we have set $\alpha$ arbitrarily to 0.3. The optimal objective value is $f^*_{\mathrm{opt}} = 0.6$, the maximal value is $f^*_{\mathrm{max}} = 12.0$, and the obtained lower bound is $f^*_{\mathrm{bound}} > 0.0$, which is a non-isomorphism proof for this problem instance.

For a first preliminary investigation of this bound we created 1000 small subgraph matching problem instances, for which we chose the sizes of the two graphs $G_K$ and $G_L$ to be $K = 7$ and $L = 15$, respectively. The edge probability of the graph $G_K$ was set to 0.5 and the probability for an edge in the second graph was set to 0.2. The results of this experiment series reveal that for many problem instances it is indeed possible to conclude that no subgraph isomorphism exists. We obtained 388 problem instances with a lower bound $> 0.0$, which proves that no subgraph isomorphism can occur in these problem instances. The other 612 problem instances have a lower bound $\le 0.0$. For 436 ($\approx 71\%$) of these problem instances the combinatorial optimum is $> 0.0$, indicating that the relaxation is not tight enough to detect that no subgraph isomorphism can occur.

7 Discussion

We proposed a bound for the subgraph isomorphism problem and showed that the bound is not only of theoretical interest but also applies to several instances of subgraph matching problems.
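The experiment series of section 6 can be re-run in spirit at a much smaller scale: draw random graphs with the stated edge probabilities and, since the instances are tiny, obtain the exact combinatorial optimum of (1) by brute force over all injective mappings instead of via the SDP bound. The sizes, seed, and brute-force search below are assumptions for illustration (the paper used $K = 7$, $L = 15$ and the SDP relaxation):

```python
import numpy as np
from itertools import permutations

def comp(N):
    """Complementary adjacency matrix."""
    n = N.shape[0]
    return np.ones((n, n), dtype=int) - N - np.eye(n, dtype=int)

def random_graph(n, p, rng):
    """Simple undirected random graph: each edge present with probability p."""
    upper = np.triu(rng.random((n, n)) < p, 1).astype(int)
    return upper + upper.T

rng = np.random.default_rng(1)
K, L = 4, 6
N_K = random_graph(K, 0.5, rng)   # object graph, edge probability 0.5
N_L = random_graph(L, 0.2, rng)   # scene graph, edge probability 0.2
Q = np.kron(N_K, comp(N_L)) + np.kron(comp(N_K), N_L)

best = None
for perm in permutations(range(L), K):    # all injective maps V_K -> V_L
    x = np.zeros(K * L, dtype=int)
    for i, j in enumerate(perm):
        x[i * L + j] = 1
    val = x @ Q @ x
    best = val if best is None else min(best, val)

# best == 0 iff an (induced) subgraph isomorphism exists; every structural
# difference is counted twice, so the optimum is always an even integer.
print("minimal objective:", best)
```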
It would be interesting to investigate which criteria a subgraph matching problem has to fulfil to result in a tight relaxation. Such
insights could be useful in the process of creating or obtaining object graphs from images for object recognition tasks. The tightness, and therefore the lower bound, can be improved by reducing the dimension of the problem. For example, one can eliminate a mapping $i \to j$ if the degree (the number of incident edges) of a node $i$ in the first graph is larger than the degree of node $j$ in the second graph; such a mapping cannot lead to a subgraph isomorphism. A further improvement can be expected when inequalities are also included in the SDP relaxation. None of these improvements were used for the presented results. However, for growing problem sizes the relaxation will probably become less tight, and a lower bound $\le 0.0$ becomes more likely. But note that even less tight solutions still lead to good integer solutions (see, e.g., [15]).

References

1. H. Bunke. Error correcting graph matching: On the influence of the underlying cost function. IEEE Trans. Pattern Analysis and Machine Intelligence, 21(9).
2. A. Sanfeliu and K. S. Fu. A distance measure between attributed relational graphs for pattern recognition. IEEE Transactions on Systems, Man and Cybernetics, 13(3).
3. B. T. Messmer and H. Bunke. A new algorithm for error-tolerant subgraph isomorphism detection. IEEE Trans. Patt. Anal. Mach. Intell., 20(5).
4. Yuan-Kai Wang, Kuo-Chin Fan, and Jorng-Tzong Horng. Genetic-based search for error-correcting graph isomorphism. IEEE Transactions on Systems, Man, and Cybernetics, 27.
5. M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company.
6. P. M. Pardalos and S. A. Vavasis. Quadratic programming with one negative eigenvalue is NP-hard. J. Global Optim., 1:15-22.
7. J. R. Ullmann. An algorithm for subgraph isomorphism. Journal of the ACM, 23(1):31-42.
8. H. G. Barrow and R. M. Burstall. Subgraph isomorphism, matching relational structures and maximal cliques. Information Processing Letters, 4(4):83-84.
9. David Eppstein. Subgraph isomorphism in planar graphs and related problems.
Journal of Graph Algorithms and Applications, 3(3):1-27.
10. I. Bomze, M. Budinich, P. Pardalos, and M. Pelillo. The maximum clique problem. In D.-Z. Du and P. M. Pardalos, editors, Handbook of Combinatorial Optimization, volume 4. Kluwer Academic Publishers, Boston, MA.
11. M. Pelillo. Replicator equations, maximal cliques, and graph isomorphism. Neural Computation, 11(8).
12. A. Graham. Kronecker Products and Matrix Calculus with Applications. Ellis Horwood Limited and John Wiley and Sons.
13. B. Borchers. CSDP: A C library for semidefinite programming. Optimization Methods and Software, 11(1).
14. H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors. Handbook of Semidefinite Programming. Kluwer Acad. Publ., Boston.
15. Christian Schellewald and Christoph Schnörr. Probabilistic subgraph matching based on convex relaxation. In Proc. of EMMCVPR, volume 3757 of LNCS. Springer, 2005.
More informationXLVI Pesquisa Operacional na Gestão da Segurança Pública
A linear formulation with O(n 2 ) variables for the quadratic assignment problem Serigne Gueye and Philippe Michelon Université d Avignon et des Pays du Vaucluse, Laboratoire d Informatique d Avignon (LIA),
More informationMaximizing the numerical radii of matrices by permuting their entries
Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and
More informationWeek 4. (1) 0 f ij u ij.
Week 4 1 Network Flow Chapter 7 of the book is about optimisation problems on networks. Section 7.1 gives a quick introduction to the definitions of graph theory. In fact I hope these are already known
More informationInterior points of the completely positive cone
Electronic Journal of Linear Algebra Volume 17 Volume 17 (2008) Article 5 2008 Interior points of the completely positive cone Mirjam Duer duer@mathematik.tu-darmstadt.de Georg Still Follow this and additional
More informationClassical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems
Classical Complexity and Fixed-Parameter Tractability of Simultaneous Consecutive Ones Submatrix & Editing Problems Rani M. R, Mohith Jagalmohanan, R. Subashini Binary matrices having simultaneous consecutive
More informationAdding Relations in the Same Level of a Linking Pin Type Organization Structure
Adding Relations in the Same Level of a Linking Pin Type Organization Structure Kiyoshi Sawada Abstract This paper proposes two models of adding relations to a linking pin type organization structure where
More information- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs
LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs
More informationReal Symmetric Matrices and Semidefinite Programming
Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many
More informationPreliminaries and Complexity Theory
Preliminaries and Complexity Theory Oleksandr Romanko CAS 746 - Advanced Topics in Combinatorial Optimization McMaster University, January 16, 2006 Introduction Book structure: 2 Part I Linear Algebra
More informationFacets for the Maximum Common Induced Subgraph Problem Polytope
Facets for the Maximum Common Induced Subgraph Problem Polytope Breno Piva, Cid de Souza {bpiva, cid}@ic.unicamp.br 2 de setembro de 2011 Abstract: This paper presents some strong valid inequalities for
More informationApproximability and Parameterized Complexity of Consecutive Ones Submatrix Problems
Proc. 4th TAMC, 27 Approximability and Parameterized Complexity of Consecutive Ones Submatrix Problems Michael Dom, Jiong Guo, and Rolf Niedermeier Institut für Informatik, Friedrich-Schiller-Universität
More informationThe Ongoing Development of CSDP
The Ongoing Development of CSDP Brian Borchers Department of Mathematics New Mexico Tech Socorro, NM 87801 borchers@nmt.edu Joseph Young Department of Mathematics New Mexico Tech (Now at Rice University)
More informationMustapha Ç. Pinar 1. Communicated by Jean Abadie
RAIRO Operations Research RAIRO Oper. Res. 37 (2003) 17-27 DOI: 10.1051/ro:2003012 A DERIVATION OF LOVÁSZ THETA VIA AUGMENTED LAGRANGE DUALITY Mustapha Ç. Pinar 1 Communicated by Jean Abadie Abstract.
More informationSeparation Techniques for Constrained Nonlinear 0 1 Programming
Separation Techniques for Constrained Nonlinear 0 1 Programming Christoph Buchheim Computer Science Department, University of Cologne and DEIS, University of Bologna MIP 2008, Columbia University, New
More informationLinear algebra and applications to graphs Part 1
Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces
More informationTree Decomposition of Graphs
Tree Decomposition of Graphs Raphael Yuster Department of Mathematics University of Haifa-ORANIM Tivon 36006, Israel. e-mail: raphy@math.tau.ac.il Abstract Let H be a tree on h 2 vertices. It is shown
More informationApproximation complexity of min-max (regret) versions of shortest path, spanning tree, and knapsack
Approximation complexity of min-max (regret) versions of shortest path, spanning tree, and knapsack Hassene Aissi, Cristina Bazgan, and Daniel Vanderpooten LAMSADE, Université Paris-Dauphine, France {aissi,bazgan,vdp}@lamsade.dauphine.fr
More informationStrong duality in Lasserre s hierarchy for polynomial optimization
Strong duality in Lasserre s hierarchy for polynomial optimization arxiv:1405.7334v1 [math.oc] 28 May 2014 Cédric Josz 1,2, Didier Henrion 3,4,5 Draft of January 24, 2018 Abstract A polynomial optimization
More information1 The independent set problem
ORF 523 Lecture 11 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Tuesday, March 29, 2016 When in doubt on the accuracy of these notes, please cross chec with the instructor
More informationCSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization
CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization April 6, 2018 1 / 34 This material is covered in the textbook, Chapters 9 and 10. Some of the materials are taken from it. Some of
More informationCopositive Programming and Combinatorial Optimization
Copositive Programming and Combinatorial Optimization Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria joint work with I.M. Bomze (Wien) and F. Jarre (Düsseldorf) IMA
More informationA new family of facet defining inequalities for the maximum edge-weighted clique problem
A new family of facet defining inequalities for the maximum edge-weighted clique problem Franklin Djeumou Fomeni June 2016 Abstract This paper considers a family of cutting planes, recently developed for
More informationINTERIOR-POINT METHODS ROBERT J. VANDERBEI JOINT WORK WITH H. YURTTAN BENSON REAL-WORLD EXAMPLES BY J.O. COLEMAN, NAVAL RESEARCH LAB
1 INTERIOR-POINT METHODS FOR SECOND-ORDER-CONE AND SEMIDEFINITE PROGRAMMING ROBERT J. VANDERBEI JOINT WORK WITH H. YURTTAN BENSON REAL-WORLD EXAMPLES BY J.O. COLEMAN, NAVAL RESEARCH LAB Outline 2 Introduction
More informationLecture 3: Semidefinite Programming
Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality
More informationRelaxations and Randomized Methods for Nonconvex QCQPs
Relaxations and Randomized Methods for Nonconvex QCQPs Alexandre d Aspremont, Stephen Boyd EE392o, Stanford University Autumn, 2003 Introduction While some special classes of nonconvex problems can be
More informationarxiv: v1 [math.oc] 14 Oct 2014
arxiv:110.3571v1 [math.oc] 1 Oct 01 An Improved Analysis of Semidefinite Approximation Bound for Nonconvex Nonhomogeneous Quadratic Optimization with Ellipsoid Constraints Yong Hsia a, Shu Wang a, Zi Xu
More informationConvex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013
Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research
More informationCleaning Interval Graphs
Cleaning Interval Graphs Dániel Marx and Ildikó Schlotter Department of Computer Science and Information Theory, Budapest University of Technology and Economics, H-1521 Budapest, Hungary. {dmarx,ildi}@cs.bme.hu
More informationMulti-objective Quadratic Assignment Problem instances generator with a known optimum solution
Multi-objective Quadratic Assignment Problem instances generator with a known optimum solution Mădălina M. Drugan Artificial Intelligence lab, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels,
More information1 Positive definiteness and semidefiniteness
Positive definiteness and semidefiniteness Zdeněk Dvořák May 9, 205 For integers a, b, and c, let D(a, b, c) be the diagonal matrix with + for i =,..., a, D i,i = for i = a +,..., a + b,. 0 for i = a +
More informationQuadratic reformulation techniques for 0-1 quadratic programs
OSE SEMINAR 2014 Quadratic reformulation techniques for 0-1 quadratic programs Ray Pörn CENTER OF EXCELLENCE IN OPTIMIZATION AND SYSTEMS ENGINEERING ÅBO AKADEMI UNIVERSITY ÅBO NOVEMBER 14th 2014 2 Structure
More informationU.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016
U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest
More informationNonconvex Quadratic Programming: Return of the Boolean Quadric Polytope
Nonconvex Quadratic Programming: Return of the Boolean Quadric Polytope Kurt M. Anstreicher Dept. of Management Sciences University of Iowa Seminar, Chinese University of Hong Kong, October 2009 We consider
More informationThe Trust Region Subproblem with Non-Intersecting Linear Constraints
The Trust Region Subproblem with Non-Intersecting Linear Constraints Samuel Burer Boshi Yang February 21, 2013 Abstract This paper studies an extended trust region subproblem (etrs in which the trust region
More informationCOM Optimization for Communications 8. Semidefinite Programming
COM524500 Optimization for Communications 8. Semidefinite Programming Institute Comm. Eng. & Dept. Elect. Eng., National Tsing Hua University 1 Semidefinite Programming () Inequality form: min c T x s.t.
More informationApproximating min-max (regret) versions of some polynomial problems
Approximating min-max (regret) versions of some polynomial problems Hassene Aissi, Cristina Bazgan, and Daniel Vanderpooten LAMSADE, Université Paris-Dauphine, France {aissi,bazgan,vdp}@lamsade.dauphine.fr
More informationOn Non-Convex Quadratic Programming with Box Constraints
On Non-Convex Quadratic Programming with Box Constraints Samuel Burer Adam N. Letchford July 2008 Abstract Non-Convex Quadratic Programming with Box Constraints is a fundamental N P-hard global optimisation
More informationDoes Better Inference mean Better Learning?
Does Better Inference mean Better Learning? Andrew E. Gelfand, Rina Dechter & Alexander Ihler Department of Computer Science University of California, Irvine {agelfand,dechter,ihler}@ics.uci.edu Abstract
More informationGlobal Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic Constraints
Journal of Global Optimization 21: 445 455, 2001. 2001 Kluwer Academic Publishers. Printed in the Netherlands. 445 Global Optimality Conditions in Maximizing a Convex Quadratic Function under Convex Quadratic
More informationConvex Optimization of Graph Laplacian Eigenvalues
Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the
More informationA solution approach for linear optimization with completely positive matrices
A solution approach for linear optimization with completely positive matrices Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria joint work with M. Bomze (Wien) and F.
More informationPairwise Neural Network Classifiers with Probabilistic Outputs
NEURAL INFORMATION PROCESSING SYSTEMS vol. 7, 1994 Pairwise Neural Network Classifiers with Probabilistic Outputs David Price A2iA and ESPCI 3 Rue de l'arrivée, BP 59 75749 Paris Cedex 15, France a2ia@dialup.francenet.fr
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 8 Luca Trevisan September 19, 2017
U.C. Berkeley CS294: Beyond Worst-Case Analysis Handout 8 Luca Trevisan September 19, 2017 Scribed by Luowen Qian Lecture 8 In which we use spectral techniques to find certificates of unsatisfiability
More informationProperties and Classification of the Wheels of the OLS Polytope.
Properties and Classification of the Wheels of the OLS Polytope. G. Appa 1, D. Magos 2, I. Mourtos 1 1 Operational Research Department, London School of Economics. email: {g.appa, j.mourtos}@lse.ac.uk
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More informationConsecutive ones Block for Symmetric Matrices
Consecutive ones Block for Symmetric Matrices Rui Wang, FrancisCM Lau Department of Computer Science and Information Systems The University of Hong Kong, Hong Kong, PR China Abstract We show that a cubic
More informationSemidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming
Semidefinite Relaxations for Non-Convex Quadratic Mixed-Integer Programming Christoph Buchheim 1 and Angelika Wiegele 2 1 Fakultät für Mathematik, Technische Universität Dortmund christoph.buchheim@tu-dortmund.de
More informationRecognizing single-peaked preferences on aggregated choice data
Recognizing single-peaked preferences on aggregated choice data Smeulders B. KBI_1427 Recognizing Single-Peaked Preferences on Aggregated Choice Data Smeulders, B. Abstract Single-Peaked preferences play
More informationTHE NUMBER OF LOCALLY RESTRICTED DIRECTED GRAPHS1
THE NUMBER OF LOCALLY RESTRICTED DIRECTED GRAPHS1 LEO KATZ AND JAMES H. POWELL 1. Preliminaries. We shall be concerned with finite graphs of / directed lines on n points, or nodes. The lines are joins
More informationFast ADMM for Sum of Squares Programs Using Partial Orthogonality
Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Antonis Papachristodoulou Department of Engineering Science University of Oxford www.eng.ox.ac.uk/control/sysos antonis@eng.ox.ac.uk with
More informationf-flip strategies for unconstrained binary quadratic programming
Ann Oper Res (2016 238:651 657 DOI 10.1007/s10479-015-2076-1 NOTE f-flip strategies for unconstrained binary quadratic programming Fred Glover 1 Jin-Kao Hao 2,3 Published online: 11 December 2015 Springer
More informationN. L. P. NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP. Optimization. Models of following form:
0.1 N. L. P. Katta G. Murty, IOE 611 Lecture slides Introductory Lecture NONLINEAR PROGRAMMING (NLP) deals with optimization models with at least one nonlinear function. NLP does not include everything
More informationA Necessary Condition for Learning from Positive Examples
Machine Learning, 5, 101-113 (1990) 1990 Kluwer Academic Publishers. Manufactured in The Netherlands. A Necessary Condition for Learning from Positive Examples HAIM SHVAYTSER* (HAIM%SARNOFF@PRINCETON.EDU)
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More information5.5 Quadratic programming
5.5 Quadratic programming Minimize a quadratic function subject to linear constraints: 1 min x t Qx + c t x 2 s.t. a t i x b i i I (P a t i x = b i i E x R n, where Q is an n n matrix, I and E are the
More informationAn Algorithm for Solving the Convex Feasibility Problem With Linear Matrix Inequality Constraints and an Implementation for Second-Order Cones
An Algorithm for Solving the Convex Feasibility Problem With Linear Matrix Inequality Constraints and an Implementation for Second-Order Cones Bryan Karlovitz July 19, 2012 West Chester University of Pennsylvania
More informationComplexity of determining the most vital elements for the 1-median and 1-center location problems
Complexity of determining the most vital elements for the -median and -center location problems Cristina Bazgan, Sonia Toubaline, and Daniel Vanderpooten Université Paris-Dauphine, LAMSADE, Place du Maréchal
More information