Combinatorial Optimization
- Piers Farmer
1 Combinatorial Optimization

Maximum matching on bipartite graphs. Given a graph G = (V, E), find a matching of maximum cardinality.

1.1 Direct algorithms

Theorem 1.1 (Petersen, 1891). A matching M is of maximum cardinality iff there is no augmenting path.

Proof: By double implication. If there is an augmenting path, then M is not maximum: OK. Conversely, suppose there is no augmenting path for M but that some other matching M_opt is maximum. In M ⊕ M_opt (where ⊕ is the symmetric difference, a xor), every node has degree 0, 1 or 2, so M ⊕ M_opt consists of disjoint paths and cycles. The cycles must be even (otherwise M or M_opt is not a matching). Suppose there is an odd path: it alternates between edges of M and edges of M_opt, so it contains one more edge from one of the two; if from M_opt, it is an augmenting path for M, and if from M, it is an augmenting path for M_opt, contradicting its maximality. Thus all cycles and all paths are even, so #M = #M_opt.

Definition (Hungarian forest). Given a bipartite graph (A ∪ B, E), the Hungarian forest is built as follows:
1. Take all unmatched nodes of A and link them to all their neighbours in B.
2. Link each of those nodes of B with the node of A to which it is matched, unless such an edge has already been added in the preceding step.
3. Go back to step 1 with the new set of nodes of A.
If, arriving at step 2, one of the nodes of B is not in the matching, then we have found an augmenting path. This construction is O(m), so the overall algorithm (augment until no augmenting path can be found) is O(nm), where n = #A + #B and m = #E.

Theorem 1.2 (Hopcroft–Karp algorithm). The following algorithm runs in O(m√n):
1. Build the Hungarian forest according to the current matching M.
2. Find a maximal disjoint set of shortest augmenting paths (O(m)).
3. Augment M and go back to step 1.

Lemma 1.3. Suppose the shortest augmenting path (with respect to M) has length 2k + 1. Then, after augmenting along a maximal set of disjoint shortest augmenting paths, the new matching M' has a shortest augmenting path of length at least 2k + 3, or no augmenting path at all.

Corollary 1.4. After √n rounds, every augmenting path P satisfies #P ≥ 2√n + 1.
Thus, letting P be the set of augmenting paths at this moment, #P = O(√n), because the paths are disjoint and there are n nodes. So, at this point, #M_opt ≤ #M + O(√n), hence at most O(√n) further rounds are needed to finish.

More recently, a Õ(m^(10/7)) algorithm was found, where Õ(x) = O(x · poly(log n)).
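The scheme of Theorem 1.2 can be sketched compactly in code (a standard implementation pattern; function and variable names are ours, not from the notes): a BFS layers the graph by shortest alternating-path distance, then DFS extracts a maximal set of disjoint shortest augmenting paths, one phase per round.

```python
from collections import deque

def hopcroft_karp(n_a, n_b, adj):
    """Maximum bipartite matching in O(m * sqrt(n)).
    adj[a] lists the B-neighbours of A-node a."""
    INF = float('inf')
    match_a = [-1] * n_a  # partner of each A-node, -1 if exposed
    match_b = [-1] * n_b
    dist = [0] * n_a

    def bfs():
        # Layer A-nodes by shortest alternating-path distance from exposed nodes.
        q = deque()
        for a in range(n_a):
            if match_a[a] == -1:
                dist[a] = 0
                q.append(a)
            else:
                dist[a] = INF
        found = False
        while q:
            a = q.popleft()
            for b in adj[a]:
                a2 = match_b[b]
                if a2 == -1:
                    found = True          # an exposed B-node is reachable
                elif dist[a2] == INF:
                    dist[a2] = dist[a] + 1
                    q.append(a2)
        return found

    def dfs(a):
        # Extract one augmenting path respecting the BFS layering.
        for b in adj[a]:
            a2 = match_b[b]
            if a2 == -1 or (dist[a2] == dist[a] + 1 and dfs(a2)):
                match_a[a], match_b[b] = b, a
                return True
        dist[a] = INF  # dead end: never revisit in this phase
        return False

    size = 0
    while bfs():
        for a in range(n_a):
            if match_a[a] == -1 and dfs(a):
                size += 1
    return size
```

By Lemma 1.3 each phase strictly increases the shortest augmenting-path length, which is what bounds the number of phases by O(√n).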
1.2 Vertex Cover

Definition. Given a graph G = (V, E), a subset C ⊆ V is a vertex cover iff for every (u, v) ∈ E, u ∈ C or v ∈ C.

Observation: given any matching M and any vertex cover C, we have #M ≤ #C. Indeed, for each (u, v) ∈ M we have (u, v) ∈ E, so either u ∈ C or v ∈ C; since the edges of M are pairwise disjoint, this gives #M ≤ #C.

Theorem 1.5 (Kőnig). Given a bipartite graph G = (A ∪ B, E),
max {#M : M is a matching} = min {#C : C is a vertex cover}.

Proof: Let M be a maximum matching. We decompose the graph into three parts:
- G1, whose vertices are those reachable from an M-exposed (unmatched) node of A via an M-alternating path;
- G2, whose vertices are those reachable from an M-exposed node of B via an M-alternating path;
- G3, the rest.
There must be a perfect matching in G3. By construction, there can be edges between G1 ∩ B and (G2 ∪ G3) ∩ A, and between G3 ∩ B and G2 ∩ A, but none between G1 ∩ A and (G2 ∪ G3) ∩ B, nor between G3 ∩ A and G2 ∩ B. We can thus choose C to be (G1 ∩ B) ∪ (G2 ∩ A) ∪ (G3 ∩ A), for instance: it covers every edge, and each of its vertices is matched by a distinct edge of M, so #C = #M.

Theorem 1.6 (Egerváry). Let G = (A ∪ B, E) with E = A × B be a complete bipartite graph with a weight function c : E → N. The maximum weight of a perfect matching is equal to the minimum weight of a weighted vertex cover π : A ∪ B → N. Here, π is a weighted vertex cover if for all (a, b) ∈ A × B, π(a) + π(b) ≥ c(a, b).

Proof: Assume π is a minimum weighted vertex cover. Consider the subgraph G' = (A ∪ B, E') where E' = {(a, b) ∈ A × B : π(a) + π(b) = c(a, b)} (the tight edges). If G' admits a perfect matching, we are done: its weight equals the weight of π, which is optimal by the observation above. Otherwise, let M be a maximum matching of G', which is not perfect, and define G1, G2, G3 as before. Decrease π by 1 on all nodes in G1 ∩ A and increase it by 1 on all nodes in G1 ∩ B. If some node a ∈ A has π(a) = −1 after this operation, increase all the nodes of A by 1 and decrease all the nodes of B by 1. The new π is still a weighted vertex cover and has strictly lower weight than before: absurd, since π was minimum.

Derived algorithm: apply the decrease/increase transformation as much as you can, then rebuild G' and repeat, until G' admits a perfect matching.
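The constructive step in Kőnig's proof, extracting a minimum vertex cover from a maximum matching, can be sketched as follows (a hypothetical helper with names of our choosing; it takes the matching as match_a/match_b arrays, with -1 for exposed nodes):

```python
def koenig_vertex_cover(n_a, n_b, adj, match_a, match_b):
    """Kőnig: from a MAXIMUM matching, build a minimum vertex cover.
    Explore M-alternating paths from the exposed nodes of A; the cover is
    (unreached A-nodes) union (reached B-nodes)."""
    visited_a = [False] * n_a
    visited_b = [False] * n_b
    stack = [a for a in range(n_a) if match_a[a] == -1]  # exposed A-nodes
    for a in stack:
        visited_a[a] = True
    while stack:
        a = stack.pop()
        for b in adj[a]:
            # cross a non-matching edge A -> B ...
            if b != match_a[a] and not visited_b[b]:
                visited_b[b] = True
                a2 = match_b[b]
                # ... then the matching edge B -> A, if any
                if a2 != -1 and not visited_a[a2]:
                    visited_a[a2] = True
                    stack.append(a2)
    cover_a = [a for a in range(n_a) if not visited_a[a]]
    cover_b = [b for b in range(n_b) if visited_b[b]]
    return cover_a, cover_b
```

Every vertex of the returned cover is matched, and no matching edge contributes two cover vertices, so its size equals #M.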
To implement this efficiently, keep a matching that is updated at each iteration:

Initialization: choose some large π so that for all (a, b) ∈ A × B, π(a) + π(b) ≥ c(a, b). Set M = ∅.
While M is not perfect:
- Build G' (the graph of tight edges).
- Let A' and B' be the sets of nodes reachable from an exposed node of A via an M-alternating path.
- If there is an augmenting path P, set M ← M ⊕ P.
- Otherwise, let t = min {π(a) + π(b) − c(a, b) : a ∈ A', b ∈ B \ B'}. For all a ∈ A', π(a) ← π(a) − t; for all b ∈ B', π(b) ← π(b) + t.
- If some node a' ∈ A now has π(a') < 0, taking a' with π(a') smallest, set π(a) ← π(a) − π(a') for all a ∈ A and π(b) ← π(b) + π(a') for all b ∈ B.

2 Linear programming

2.1 Simplex algorithm

Given A ∈ R^(m×n), b ∈ R^m and c ∈ R^n, we want max {cᵀx : Ax ≤ b}. (An equivalent formulation is min {cᵀx : Ax ≥ b, x ≥ 0}, but we will keep to the first formulation for historical reasons.) We suppose the polyhedron has a vertex x. Historically, the simplex algorithm starts from one of the vertices of the polyhedron defined by Ax ≤ b and follows the edges so as to increase cᵀx along each edge. The idea is that the maximum is attained at a vertex, so the algorithm will ultimately find it.
For J ⊆ {1, …, m}, we denote by A_J the rows of A indexed by the elements of J, and similarly b_J. Let a_j = A_{j} and β_j = b_{j}.

Simplex algorithm:
1. Find J ⊆ {1, …, m} such that A_J x = b_J and A_J is invertible.
2. Compute y as follows: compute cᵀ(A_J)⁻¹ and fill in zeros (y_t = 0 for t ∉ J) so that yᵀA = cᵀ.
3. If y ≥ 0, then stop: in this case x and y are optimal in the primal and dual programs. Otherwise, let i be the smallest index in J such that y_i < 0. Compute w as the column of (A_J)⁻¹ such that A_{J\{i}} w = 0 and a_i w = −1. If Aw ≤ 0 then stop: in this case the program is unbounded.
4. Compute λ = min {(β_t − a_t x)/(a_t w) : t ∈ {1, …, m}, a_t w > 0}. Choose j to be the index reaching this minimum, taking the smallest such index.
5. Set J ← (J \ {i}) ∪ {j} and x ← x + λw.
6. Go back to 1.

Theorem 2.1 (Weak duality). Let P = {x : Ax ≤ b} and D = {y : yᵀA = cᵀ, y ≥ 0}. The primal program is max {cᵀx : x ∈ P}; the dual program is min {yᵀb : y ∈ D}. If both P and D are non-empty, then for all x ∈ P and y ∈ D, cᵀx ≤ yᵀb.

Proof: Straightforward: cᵀx = (yᵀA)x = yᵀ(Ax) ≤ yᵀb, since y ≥ 0 and Ax ≤ b.

Lemma 2.2. During the simplex algorithm, at each step, the following hold: (a) A_J x = b_J; (b) x ∈ P; (c) A_J is non-singular; (d) cᵀw > 0; (e) λ ≥ 0.

Proof: At the first step, (a), (b) and (c) are given by the assumption that x is a vertex. Let us prove (d) and (e) using (a), (b) and (c). We have cᵀw = yᵀAw; observe that y_t = 0 for t ∉ J and a_t w = 0 for t ∈ J \ {i}, so cᵀw = y_i (a_i w) = −y_i > 0. For λ: first note that λ is well defined, because if a_t w ≤ 0 for all t ∈ {1, …, m}, then Aw ≤ 0 and the algorithm would have stopped just before. Moreover β_t − a_t x ≥ 0 for all t because x ∈ P. Thus λ ≥ 0.

Let us now prove (a), (b) and (c) in the general case, after a pivot. First, for (c), suppose the new A_J is singular. Since A_J was non-singular just before the pivot, we can write a_j = Σ_{t ∈ J\{i}} λ_t a_t. Then a_j w = Σ_{t ∈ J\{i}} λ_t (a_t w) = 0, whereas a_j w > 0 by the choice of j: absurd.

Proof of (b): suppose that before the step, a_t x ≤ β_t for all t. Two cases: if a_t w ≤ 0, then a_t(x + λw) ≤ a_t x ≤ β_t, which is immediate since λ ≥ 0. If a_t w > 0, then λ ≤ (β_t − a_t x)/(a_t w), so a_t(x + λw) ≤ a_t x + (β_t − a_t x) = β_t.
Proof of (a): in the second case just before, if t = j then the inequality is an equality, so a_j(x + λw) = β_j; and for t ∈ J \ {i}, a_t w = 0, so a_t(x + λw) = a_t x = β_t. Hence (a).
Let us finally prove that if the algorithm stops at the beginning of step 3 (in the case y ≥ 0), then x and y are optimal in the primal and dual programs. Up to reordering, suppose that J consists of the first elements of {1, …, m}. Then (yᵀA)x = y_Jᵀ(A_J x) = y_Jᵀ b_J = yᵀb, since A_J x = b_J and y_{{1,…,m}\J} = 0. So cᵀx = (yᵀA)x = yᵀb, and by weak duality we have both primal and dual optimal solutions.

Lemma 2.3. If the algorithm has not stopped after (m choose n) rounds, there are k < l such that J^(k) = J^(l). Then at each round from k to l we have λ = 0, thus x^(k) = x^(k+1) = … = x^(l).

With this lemma, we may now prove that the algorithm stops.

Proof: Suppose the algorithm does not stop and apply the lemma. Let h be the highest index that is kicked out at step 5 during rounds k to l − 1. Say h is kicked out in round p and taken back in round q. Since J^(k) = J^(l), both p and q are well defined, but we do not necessarily have p < q (it depends on whether h ∈ J^(k) or not). Define y to be the y of round p and w to be the w of round q. Since cᵀw > 0 and cᵀ = yᵀA, we have yᵀAw > 0, so there must be an index r such that y_r (a_r w) > 0. In particular y_r ≠ 0, so r ∈ J^(p) by definition of y. Finally, we have three cases:
1. If r > h: since a_r w ≠ 0, we have r ∉ J^(q) \ {i}, using the property that A_{J\{i}} w = 0. Thus r has been kicked out between rounds k and l, but h was chosen as the highest index kicked out: absurd.
2. If r < h: we know that y_r > 0, because otherwise r, being a smaller index with y_r < 0, would have been kicked out instead of h at round p. Thus a_r w > 0. Because of (a), A_{J^(p)} x^(p) = b_{J^(p)}, so a_r x^(p) = β_r; and since x does not change between rounds k and l, the ratio for r at step 4 of round q is 0. Then r, not h, should have been added in round q (smallest index reaching the minimum): absurd.
3. If r = h: then y_r < 0 (h was kicked out at round p), but we also have a_r w > 0 (otherwise r = h could not be chosen to be added at round q), so y_r (a_r w) < 0: absurd.

Spielman and Teng proved that if A, b and c are randomly perturbed by a small amount, then, in expectation, the algorithm runs in polynomial time (smoothed analysis).
This explains why, in practice, the algorithm is very fast most of the time, although it is exponential in the worst case.

2.2 Polyhedral Combinatorics

Given G = (A ∪ B, E) where E ⊆ A × B: how do we write a linear program whose feasible set is exactly the convex hull of all perfect matchings? Here we represent a perfect matching by its incidence vector, with a coordinate 0 or 1 for each edge, and take the convex hull of these vectors.

Theorem 2.4. The convex hull of perfect matchings is the set of vectors x ∈ R^E such that Σ_{e ∈ δ(v)} x_e = 1 for all v ∈ A ∪ B, and x_e ≥ 0 for all e ∈ E.

Proof: Choose x to be an extreme point of this set and let E' = {e : x_e > 0}. Claim: E' cannot contain a cycle. Indeed, suppose C ⊆ E' is a cycle; since the graph is bipartite, C is even, so C decomposes into two matchings M1 and M2. Perturb x by decreasing it by a tiny ε on M1 and increasing it by ε on M2: we get x1, which is still feasible. Perturbing the other way (increasing on M1, decreasing on M2) gives x2. Then x = (x1 + x2)/2, which contradicts the fact that x is an extreme point. Thus E' is a forest, which decomposes into paths. For each path, take an endpoint: it is a leaf, so the single edge e it bears must have x_e = 1. Then the next edge along the path must be 0, etc.: x is integral.

For most problems, however, you cannot hope to describe the convex hull of the solutions in this way, because otherwise the problem would be polynomial.

Theorem 2.5 (Birkhoff). Given a doubly stochastic matrix A ∈ R^(n×n), we can write A = Σ_i λ_i P_i, where λ_i ≥ 0 for all i, Σ_i λ_i = 1, and each P_i is a permutation matrix.

Proof: It is an immediate consequence of the preceding theorem, taking the rows and columns of A as the two parts of a complete bipartite graph: A is a point of the convex hull of perfect matchings, so it can be written as a convex combination of perfect matchings, i.e. of permutation matrices.
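Birkhoff's proof can be made constructive: repeatedly find a perfect matching on the support of the matrix, subtract the largest possible multiple of it, and recurse. A sketch with names of our choosing, using exact rational arithmetic:

```python
from fractions import Fraction

def birkhoff(mat):
    """Greedily decompose a doubly stochastic matrix (entries as Fractions)
    into a convex combination of permutation matrices. Each round finds a
    perfect matching on the support (it exists, by Birkhoff's theorem) and
    subtracts the largest multiple of it, zeroing at least one entry."""
    n = len(mat)
    A = [row[:] for row in mat]
    result = []
    while any(A[i][j] > 0 for i in range(n) for j in range(n)):
        match_row = [-1] * n  # column matched to each row
        match_col = [-1] * n  # row matched to each column

        def augment(i, seen):
            # Augmenting-path search on the support of A
            for j in range(n):
                if A[i][j] > 0 and not seen[j]:
                    seen[j] = True
                    if match_col[j] == -1 or augment(match_col[j], seen):
                        match_col[j], match_row[i] = i, j
                        return True
            return False

        for i in range(n):
            if match_row[i] == -1:
                augment(i, [False] * n)
        perm = match_row  # perm[i] = column matched to row i
        lam = min(A[i][perm[i]] for i in range(n))
        result.append((lam, tuple(perm)))
        for i in range(n):
            A[i][perm[i]] -= lam
    return result
```

Each round zeroes at least one entry, so at most n² − n + 1 permutation matrices are produced.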
2.3 Duality

Consider the primal max {cᵀx : Ax ≤ b} and the dual min {yᵀb : yᵀA = cᵀ, y ≥ 0}.

Theorem 2.6. If both primal and dual have feasible solutions x and y, then the three following statements are equivalent:
1. x and y are respectively optimal in the primal and the dual.
2. yᵀb = cᵀx.
3. yᵀ(Ax − b) = 0.

Proof: 1 ⇔ 2: by weak duality, 2 immediately implies 1. For the converse, apply the simplex algorithm to the dual. Since both programs are feasible, the only possible outcome is a pair of optimal solutions (there cannot be unboundedness), and these verify yᵀb = cᵀx.
2 ⇔ 3: if 2 holds, then yᵀ(Ax − b) = yᵀAx − yᵀb = yᵀAx − cᵀx = (yᵀA − cᵀ)x = 0. Conversely, if 3 holds, then since yᵀA = cᵀ we have 0 = yᵀ(Ax − b) = cᵀx − yᵀb.

Corollary 2.7. If x is optimal, then cᵀ is a non-negative combination of the rows a_i of A for which a_i x − b_i = 0.

Proof: Using the third point of the last theorem, cᵀ = yᵀA. For all j such that a_j x − b_j ≠ 0, since yᵀ(Ax − b) = 0, y ≥ 0 and Ax − b ≤ 0, we must have y_j = 0. Thus cᵀ is a combination of the rows a_i with a_i x − b_i = 0, and the combination is non-negative because y ≥ 0.

Reformulation: writing the primal as min {cᵀx : Ax ≥ b, x ≥ 0} and the dual as max {yᵀb : yᵀA ≤ cᵀ, y ≥ 0}, the theorem can be restated as:

Theorem 2.8. Let x and y be feasible solutions of the primal and dual respectively. Then the following are equivalent:
1. x and y are both optimal.
2. cᵀx = yᵀb.
3. (cᵀ − yᵀA)x = 0 and yᵀ(Ax − b) = 0 (complementary slackness).

2.4 Approximation of Vertex Cover

Given G = (V, E) and w : V → N, we want C ⊆ V such that for every edge e = (u, v), u or v is in C, and such that Σ_{v ∈ C} w(v) is minimized. This problem is NP-hard because the original vertex cover problem is. However, it can easily be 2-approximated by solving the linear program:
min Σ_{v ∈ V} w(v) x_v subject to x_u + x_v ≥ 1 for all (u, v) ∈ E, and x_u ≥ 0 for all u ∈ V.
We obtain an optimal solution x which may not be integral. Then let C = {v : x_v ≥ 1/2}. This is indeed a vertex cover because of the condition x_u + x_v ≥ 1 for all (u, v) ∈ E. Let us prove it is a 2-approximation: w(OPT) ≥ Σ_{v ∈ V} w(v) x_v, while w(C) ≤ 2 Σ_{v ∈ V} w(v) x_v, so w(C) ≤ 2 w(OPT).
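On a tiny graph, the relaxation can even be solved by brute force, using the known fact (due to Nemhauser and Trotter, not stated in the notes) that this LP always has a half-integral optimum; the final line is exactly the rounding step described above. A sketch with illustrative names:

```python
from fractions import Fraction
from itertools import product

def lp_rounding_vertex_cover(n, edges, w):
    """2-approximation sketch for weighted vertex cover on a tiny graph.
    The vertex-cover LP has a half-integral optimal solution, so brute-force
    over x_v in {0, 1/2, 1}, then round up every x_v >= 1/2."""
    half = Fraction(1, 2)
    best, best_x = None, None
    for x in product((Fraction(0), half, Fraction(1)), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):       # LP feasibility
            val = sum(w[v] * x[v] for v in range(n))
            if best is None or val < best:
                best, best_x = val, x
    cover = [v for v in range(n) if best_x[v] >= half]    # the rounding step
    return cover, best
```

On a triangle with unit weights, the LP optimum is 3/2 (all variables at 1/2) while the rounded cover has weight 3, which shows the factor 2 is tight for this rounding.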
Today, the best known polynomial-time algorithm is a (2 − (log log n)/(log n))-approximation.

3 Blossom algorithm

3.1 Algorithm

Goal: find a maximum matching in a general graph (not necessarily bipartite). Recall Petersen's theorem for bipartite graphs: M is of maximum cardinality iff there is no augmenting path (also called Berge's theorem). The big trouble here is that, contrary to bipartite graphs, there can be odd cycles in general graphs. The main idea of the blossom algorithm consists in shrinking odd cycles into a new virtual node, called a blossom. In the resulting graph there are no more odd cycles, so finding augmenting paths is easy. Another observation is the following:
Proposition 3.1. If C is an odd-length cycle, then given a matching M' in G/C (where G/C is G with the cycle C shrunk into a single node), there exists a matching M in G such that the numbers of unmatched vertices in G/C and in G are the same.

Proof: On a picture! If the shrunk node is matched in M', then one of the nodes of the cycle is matched with a node outside the cycle, and the remaining nodes of the cycle, even in number, can be matched pairwise along the cycle. If the shrunk node is unmatched, then all but one node of the cycle can be matched pairwise, which leaves exactly one unmatched node in the cycle.

However, the story is not finished: after having shrunk, found a maximum cardinality matching and de-shrunk, it is not clear whether the resulting matching is maximum. Indeed, the difficulty is to find the good cycles to shrink.

Theorem 3.2 (Build Hungarian Tree). In the process of building a Hungarian tree rooted at an unmatched node r, we build at the same time the two following sets: A(T_r) is the set of nodes that can be linked to r through an alternating path of odd length, and B(T_r) is the same for even-length paths.

Build Hungarian Tree (rooted at r): while there is an edge e = (u, v) with u ∈ B(T_r), consider three cases:
1. v ∉ A(T_r) ∪ B(T_r): if v is unmatched, augment. If v is matched to w, add (u, v) and (v, w) to the tree; v is now part of A(T_r) and w is now part of B(T_r).
2. v ∈ B(T_r): find the first common ancestor of u and v in the tree and shrink the resulting odd-length cycle. The new node (the blossom) is put in B(T_r).
3. v ∈ A(T_r): continue (nothing to do).

Theorem 3.3 (Blossom algorithm, Edmonds, 1965). Build Hungarian trees rooted at all unmatched nodes, one by one. If an augmentation happens, open all blossoms and go back to the beginning. Otherwise, stop.
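The lifting step of Proposition 3.1, re-matching the cycle vertices when a blossom is opened, is small enough to sketch directly (function and parameter names are ours):

```python
def open_blossom(cycle, stem):
    """Prop. 3.1: re-match the vertices of an odd cycle after opening it.
    `cycle` lists the vertices in cyclic order; `stem` is the cycle vertex
    matched to the outside, or None if the shrunk node was unmatched
    (in that case one cycle vertex is left exposed)."""
    k = len(cycle)
    assert k % 2 == 1
    start = 0 if stem is None else cycle.index(stem)
    rest = cycle[start + 1:] + cycle[:start]  # the other k-1 vertices, in cycle order
    # pair consecutive vertices along the cycle, so these are real edges of G
    return [(rest[i], rest[i + 1]) for i in range(0, k - 1, 2)]
```

Either way, the number of vertices of the cycle left unmatched is the same as for the shrunk node, as the proposition requires.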
Remark: the paper by Edmonds in 1965 was quite prophetic: he intuitively described the notions of P and NP problems, although they were only formally introduced in 1971.

3.2 Direct proof

Proof: A few observations. Let A(T) = ∪_r A(T_r), B(T) = ∪_r B(T_r) and U(T) = V' \ (A(T) ∪ B(T)), where V' is the set of nodes derived from V after shrinking. Then (A(T), B(T), U(T)) is a partition of V'. Moreover:
- Blossoms can only be in B(T).
- The matching restricted to U(T) is perfect.
- There is no edge between B(T) and U(T) ∪ B(T), except those that are within a blossom of B(T). Note that there can be any edge between A(T) and the rest of V'.
Let us now suppose, by contradiction, that there is an augmenting path at the end of the algorithm. First consider the shrunk graph. The first node of the path must be in B(T), since all unmatched nodes are in B(T). The next one can only be in A(T); the following one is in B(T); etc. But the last node of the path is also unmatched, hence in B(T), so the two last nodes would both be in B(T), which is absurd since there is no edge inside B(T). If we consider a real node inside a blossom, we saw that the only edges leaving it that are not within the blossom link it with a node in A(T), so the same argument holds. Thus, the blossom algorithm is correct.
3.3 Complexity

Whenever an augmentation happens, the number of unmatched nodes decreases by two. Thus the algorithm is O(n·x), where x is the cost of building the Hungarian forest. If there is no shrinking, each edge is only looked at once, so this costs O(m); each blossom shrinking also costs O(m), and there can be O(n) of them, so building the forest costs O(nm). Finally, the overall complexity is O(n²m). Actually, using a union-find structure, a complexity of O(nm log n) can be reached, and there is yet another implementation trick to reduce the complexity to O(n³). Using the idea of finding, at each step, a maximal set of disjoint augmenting paths, a complexity of O(√n m) can be reached, which was proven in 1994 using another fancy data structure. The current fastest running time is O(√n m · log(n²/m) / log n).

3.4 MinMax-like proof

Proposition 3.4. Given any matching M, then for any A ⊆ V,
#M ≤ (1/2)(#V − (oc(V \ A) − #A)),
where oc(V \ A) is the number of odd components of G − A.

Proof: Indeed, oc(V \ A) − #A is a lower bound on the number of nodes that cannot be matched: each odd component must either leave one of its nodes unmatched or match one of its nodes to a node of A.

Theorem 3.5 (Tutte–Berge). max_M #M = min_{A ⊆ V} (1/2)(#V − (oc(V \ A) − #A)).

Proof: Take A = A(T) in the final outcome of the blossom algorithm. Then oc(V \ A) − #A is exactly the number of unmatched nodes: each matched node of B(T) cancels between the two terms, so only the unmatched nodes of B(T) count, for one each.

3.5 Dual problem

Observations: a matching x satisfies Σ_{e ∈ δ(v)} x_e ≤ 1 for all v ∈ V and x_e ≥ 0 for all e ∈ E; moreover, for Ω = {B ⊆ V : #B ≥ 3, #B odd}, it satisfies Σ_{e ∈ E(B)} x_e ≤ (#B − 1)/2 for all B ∈ Ω. The dual of the corresponding program is:
min Σ_{v ∈ V} Y_v + Σ_{B ∈ Ω} Z_B · (#B − 1)/2
subject to Y_u + Y_v + Σ_{B ∈ Ω : u,v ∈ B} Z_B ≥ 1 for all (u, v) ∈ E, with Y_u ≥ 0 for all u ∈ V and Z_B ≥ 0 for all B ∈ Ω.
The dual solutions are:
Y_u = 1 if u ∈ A(T), 1/2 if u ∈ U(T), 0 otherwise;
Z_B = 1 if B is an outermost blossom in B(T), 0 otherwise.
We can verify that this solution is indeed feasible by looking at all (u, v) ∈ E, and we can show it is optimal using the complementary slackness conditions. Sidenote: we could also prove it by showing that the second condition, cᵀx = yᵀb, holds.
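On tiny graphs, the Tutte–Berge equality of Theorem 3.5 can be verified exhaustively: brute-force the maximum matching on one side and the minimum of the formula over all subsets A on the other. A verification sketch (names are ours):

```python
from itertools import combinations

def max_matching_size(n, edges):
    """Brute force: largest set of pairwise vertex-disjoint edges."""
    for k in range(n // 2, 0, -1):
        for sub in combinations(edges, k):
            if len({v for e in sub for v in e}) == 2 * k:
                return k
    return 0

def tutte_berge_min(n, edges):
    """Brute force over all A of (1/2)(#V - (oc(V \\ A) - #A))."""
    best = n
    for k in range(n + 1):
        for A in combinations(range(n), k):
            A = set(A)
            rest = [v for v in range(n) if v not in A]
            adj = {v: [] for v in rest}
            for u, v in edges:
                if u not in A and v not in A:
                    adj[u].append(v)
                    adj[v].append(u)
            seen, oc = set(), 0
            for v in rest:                 # count odd components of G - A
                if v not in seen:
                    stack, size = [v], 0
                    seen.add(v)
                    while stack:
                        u = stack.pop()
                        size += 1
                        stack += [t for t in adj[u] if t not in seen]
                        seen.update(adj[u])
                    oc += size % 2
            best = min(best, (n - (oc - len(A))) // 2)
    return best
```

On a triangle, both sides equal 1 (take A = ∅: one odd component of size 3).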
Actually, if we replace the objective of the primal by max Σ_{e ∈ E} c_e x_e, that is, if we look for a maximum-weight matching this time, the optimal solution is still integral.

3.6 Randomized algorithm

There is an O(n^ω) algorithm, where n^ω is the complexity of multiplying two matrices of size n (today, ω ≈ 2.37).

Let G be a simple graph, and let G⃗ be derived from G by orienting the edges arbitrarily. Given variables (X_e)_{e ∈ E}, define the Tutte matrix as follows:
(T_G(X))_{u,v} = X_{u,v} if (u, v) ∈ G⃗, −X_{u,v} if (v, u) ∈ G⃗, and 0 otherwise.
T_G(X) is antisymmetric (in particular its diagonal is null): T_G(X) = −(T_G(X))ᵀ.

Theorem 3.6. det(T_G(X)) is identically 0 iff there is no perfect matching.

Proof: If M is perfect, consider the permutation π defined as follows: if (u, v) ∈ M, then π(u) = v and π(v) = u. Recall that det(A) = Σ_{π ∈ S_n} sgn(π) Π_{i=1}^n A_{i,π(i)}. Then det(T_G(X)) contains the term ±Π_{e ∈ M} (X_e)², which cannot be canceled by other terms.
Conversely, suppose by contradiction that there is no perfect matching but that det(T_G(X)) is not identically 0. Consider all π ∈ S_n for which Π_{i=1}^n T_G(X)_{i,π(i)} ≠ 0. Each such π corresponds to a directed graph H_π in which every vertex has in-degree and out-degree exactly 1, i.e. a union of cycles. H_π cannot consist only of even cycles, otherwise we could easily extract a perfect matching (take every other edge of each cycle). Consider then the odd cycle of H_π that contains the smallest-index node, and reverse all the edge directions along this cycle, leaving all other edges as they were. This defines a one-to-one mapping, which we call f. We have sgn(π) = sgn(f(π)), but Π_{i=1}^n T_G(X)_{i,f(π)(i)} = −Π_{i=1}^n T_G(X)_{i,π(i)}, because the cycle is odd and T_G(X) is antisymmetric. The terms thus cancel pairwise, so det(T_G(X)) = 0: absurd.
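Theorem 3.6 turns into a randomized test (Lovász's algorithm) by substituting random elements of a finite field for the variables: a nonzero determinant certifies a perfect matching, and a zero answer is correct with high probability by the Schwartz–Zippel lemma. A sketch with names of our choosing, using plain Gaussian elimination mod p rather than fast matrix multiplication:

```python
import random

def has_perfect_matching(n, edges, p=(1 << 31) - 1):
    """Randomized perfect-matching test: substitute random field elements
    into the Tutte matrix and check whether its determinant is nonzero
    mod p. One-sided error: False is wrong with probability at most n/p."""
    T = [[0] * n for _ in range(n)]
    for u, v in edges:           # orient each edge u -> v arbitrarily
        x = random.randrange(1, p)
        T[u][v] = x
        T[v][u] = (-x) % p       # antisymmetric entry
    # Gaussian elimination mod p: det is nonzero iff every column has a pivot.
    for col in range(n):
        piv = next((r for r in range(col, n) if T[r][col] != 0), None)
        if piv is None:
            return False         # determinant vanishes for this sample
        T[col], T[piv] = T[piv], T[col]
        inv = pow(T[col][col], p - 2, p)   # modular inverse via Fermat
        for r in range(col + 1, n):
            f = T[r][col] * inv % p
            for c in range(col, n):
                T[r][c] = (T[r][c] - f * T[col][c]) % p
    return True
```

For a graph with an odd number of vertices the Tutte matrix is an odd-dimensional antisymmetric matrix, so the determinant is identically zero and the test always answers False, as it should.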
More informationTree-width and planar minors
Tree-width and planar minors Alexander Leaf and Paul Seymour 1 Princeton University, Princeton, NJ 08544 May 22, 2012; revised March 18, 2014 1 Supported by ONR grant N00014-10-1-0680 and NSF grant DMS-0901075.
More informationk-blocks: a connectivity invariant for graphs
1 k-blocks: a connectivity invariant for graphs J. Carmesin R. Diestel M. Hamann F. Hundertmark June 17, 2014 Abstract A k-block in a graph G is a maximal set of at least k vertices no two of which can
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More informationObservation 4.1 G has a proper separation of order 0 if and only if G is disconnected.
4 Connectivity 2-connectivity Separation: A separation of G of order k is a pair of subgraphs (H, K) with H K = G and E(H K) = and V (H) V (K) = k. Such a separation is proper if V (H) \ V (K) and V (K)
More informationFlows and Cuts. 1 Concepts. CS 787: Advanced Algorithms. Instructor: Dieter van Melkebeek
CS 787: Advanced Algorithms Flows and Cuts Instructor: Dieter van Melkebeek This lecture covers the construction of optimal flows and cuts in networks, their relationship, and some applications. It paves
More informationLinear Programming: Simplex
Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016
More informationLimitations of Algorithm Power
Limitations of Algorithm Power Objectives We now move into the third and final major theme for this course. 1. Tools for analyzing algorithms. 2. Design strategies for designing algorithms. 3. Identifying
More informationTree sets. Reinhard Diestel
1 Tree sets Reinhard Diestel Abstract We study an abstract notion of tree structure which generalizes treedecompositions of graphs and matroids. Unlike tree-decompositions, which are too closely linked
More informationCSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017
CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 Linear Function f: R n R is linear if it can be written as f x = a T x for some a R n Example: f x 1, x 2 =
More informationObservation 4.1 G has a proper separation of order 0 if and only if G is disconnected.
4 Connectivity 2-connectivity Separation: A separation of G of order k is a pair of subgraphs (H 1, H 2 ) so that H 1 H 2 = G E(H 1 ) E(H 2 ) = V (H 1 ) V (H 2 ) = k Such a separation is proper if V (H
More informationCSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming
CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150
More informationAlgebraic Methods in Combinatorics
Algebraic Methods in Combinatorics Po-Shen Loh 27 June 2008 1 Warm-up 1. (A result of Bourbaki on finite geometries, from Răzvan) Let X be a finite set, and let F be a family of distinct proper subsets
More informationGenerating p-extremal graphs
Generating p-extremal graphs Derrick Stolee Department of Mathematics Department of Computer Science University of Nebraska Lincoln s-dstolee1@math.unl.edu August 2, 2011 Abstract Let f(n, p be the maximum
More information3.7 Cutting plane methods
3.7 Cutting plane methods Generic ILP problem min{ c t x : x X = {x Z n + : Ax b} } with m n matrix A and n 1 vector b of rationals. According to Meyer s theorem: There exists an ideal formulation: conv(x
More informationDiscrete Optimization
Prof. Friedrich Eisenbrand Martin Niemeier Due Date: April 15, 2010 Discussions: March 25, April 01 Discrete Optimization Spring 2010 s 3 You can hand in written solutions for up to two of the exercises
More informationInteger Programming, Part 1
Integer Programming, Part 1 Rudi Pendavingh Technische Universiteit Eindhoven May 18, 2016 Rudi Pendavingh (TU/e) Integer Programming, Part 1 May 18, 2016 1 / 37 Linear Inequalities and Polyhedra Farkas
More informationThe dual simplex method with bounds
The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the
More informationAn improved approximation algorithm for the stable marriage problem with one-sided ties
Noname manuscript No. (will be inserted by the editor) An improved approximation algorithm for the stable marriage problem with one-sided ties Chien-Chung Huang Telikepalli Kavitha Received: date / Accepted:
More informationLinear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004
Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define
More informationChapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.
Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should
More information5 Set Operations, Functions, and Counting
5 Set Operations, Functions, and Counting Let N denote the positive integers, N 0 := N {0} be the non-negative integers and Z = N 0 ( N) the positive and negative integers including 0, Q the rational numbers,
More informationAn Introduction to Transversal Matroids
An Introduction to Transversal Matroids Joseph E Bonin The George Washington University These slides and an accompanying expository paper (in essence, notes for this talk, and more) are available at http://homegwuedu/
More informationMulticommodity Flows and Column Generation
Lecture Notes Multicommodity Flows and Column Generation Marc Pfetsch Zuse Institute Berlin pfetsch@zib.de last change: 2/8/2006 Technische Universität Berlin Fakultät II, Institut für Mathematik WS 2006/07
More informationApproximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs
Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs Haim Kaplan Tel-Aviv University, Israel haimk@post.tau.ac.il Nira Shafrir Tel-Aviv University, Israel shafrirn@post.tau.ac.il
More informationExcluded t-factors in Bipartite Graphs:
Excluded t-factors in Bipartite Graphs: A nified Framework for Nonbipartite Matchings and Restricted -matchings Blossom and Subtour Elimination Constraints Kenjiro Takazawa Hosei niversity, Japan IPCO017
More information1 Review Session. 1.1 Lecture 2
1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions
More informationDiscrete Optimization 23
Discrete Optimization 23 2 Total Unimodularity (TU) and Its Applications In this section we will discuss the total unimodularity theory and its applications to flows in networks. 2.1 Total Unimodularity:
More informationTopics in Theoretical Computer Science April 08, Lecture 8
Topics in Theoretical Computer Science April 08, 204 Lecture 8 Lecturer: Ola Svensson Scribes: David Leydier and Samuel Grütter Introduction In this lecture we will introduce Linear Programming. It was
More informationSeparating Simple Domino Parity Inequalities
Separating Simple Domino Parity Inequalities Lisa Fleischer Adam Letchford Andrea Lodi DRAFT: IPCO submission Abstract In IPCO 2002, Letchford and Lodi describe an algorithm for separating simple comb
More informationAdvanced Linear Programming: The Exercises
Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z
More informationGeneralized Pigeonhole Properties of Graphs and Oriented Graphs
Europ. J. Combinatorics (2002) 23, 257 274 doi:10.1006/eujc.2002.0574 Available online at http://www.idealibrary.com on Generalized Pigeonhole Properties of Graphs and Oriented Graphs ANTHONY BONATO, PETER
More information7.5 Bipartite Matching
7. Bipartite Matching Matching Matching. Input: undirected graph G = (V, E). M E is a matching if each node appears in at most edge in M. Max matching: find a max cardinality matching. Bipartite Matching
More informationMotivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory
Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization
More informationRunning Time. Assumption. All capacities are integers between 1 and C.
Running Time Assumption. All capacities are integers between and. Invariant. Every flow value f(e) and every residual capacities c f (e) remains an integer throughout the algorithm. Theorem. The algorithm
More informationLinear Programming. 1 An Introduction to Linear Programming
18.415/6.854 Advanced Algorithms October 1994 Lecturer: Michel X. Goemans Linear Programming 1 An Introduction to Linear Programming Linear programming is a very important class of problems, both algorithmically
More informationSolving the MWT. Recall the ILP for the MWT. We can obtain a solution to the MWT problem by solving the following ILP:
Solving the MWT Recall the ILP for the MWT. We can obtain a solution to the MWT problem by solving the following ILP: max subject to e i E ω i x i e i C E x i {0, 1} x i C E 1 for all critical mixed cycles
More informationLinear Programming. Chapter Introduction
Chapter 3 Linear Programming Linear programs (LP) play an important role in the theory and practice of optimization problems. Many COPs can directly be formulated as LPs. Furthermore, LPs are invaluable
More informationWeek 4. (1) 0 f ij u ij.
Week 4 1 Network Flow Chapter 7 of the book is about optimisation problems on networks. Section 7.1 gives a quick introduction to the definitions of graph theory. In fact I hope these are already known
More informationCSCE 750 Final Exam Answer Key Wednesday December 7, 2005
CSCE 750 Final Exam Answer Key Wednesday December 7, 2005 Do all problems. Put your answers on blank paper or in a test booklet. There are 00 points total in the exam. You have 80 minutes. Please note
More informationBipartite Matchings and Stable Marriage
Bipartite Matchings and Stable Marriage Meghana Nasre Department of Computer Science and Engineering Indian Institute of Technology, Madras Faculty Development Program SSN College of Engineering, Chennai
More informationLP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra
LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality
More informationOn shredders and vertex connectivity augmentation
On shredders and vertex connectivity augmentation Gilad Liberman The Open University of Israel giladliberman@gmail.com Zeev Nutov The Open University of Israel nutov@openu.ac.il Abstract We consider the
More informationWeek 3 Linear programming duality
Week 3 Linear programming duality This week we cover the fascinating topic of linear programming duality. We will learn that every minimization program has associated a maximization program that has the
More information4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n
2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6
More informationIntroduction to Mathematical Programming
Introduction to Mathematical Programming Ming Zhong Lecture 22 October 22, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 16 Table of Contents 1 The Simplex Method, Part II Ming Zhong (JHU) AMS Fall 2018 2 /
More informationThe Minimum Rank, Inverse Inertia, and Inverse Eigenvalue Problems for Graphs. Mark C. Kempton
The Minimum Rank, Inverse Inertia, and Inverse Eigenvalue Problems for Graphs Mark C. Kempton A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for
More informationScheduling on Unrelated Parallel Machines. Approximation Algorithms, V. V. Vazirani Book Chapter 17
Scheduling on Unrelated Parallel Machines Approximation Algorithms, V. V. Vazirani Book Chapter 17 Nicolas Karakatsanis, 2008 Description of the problem Problem 17.1 (Scheduling on unrelated parallel machines)
More informationCSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming
More informationPerfect matchings in highly cyclically connected regular graphs
Perfect matchings in highly cyclically connected regular graphs arxiv:1709.08891v1 [math.co] 6 Sep 017 Robert Lukot ka Comenius University, Bratislava lukotka@dcs.fmph.uniba.sk Edita Rollová University
More informationAppendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS
Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution
More information