Week 4

1 Network Flow

Chapter 7 of the book is about optimisation problems on networks. Section 7.1 gives a quick introduction to the definitions of graph theory. In fact I hope these are already known to everyone, but I will briefly recall them whenever necessary. Please alert me immediately if I use a term that you don't know. All terms in graph theory are usually simple and indeed very intuitive. They can be explained in a few seconds. So don't hesitate to ask!

We are given a network $G = (N, A)$. This is a directed graph. An (undirected) graph is an object defined by a set of vertices or nodes and a set of edges, which are pairs of nodes.¹ We will write an edge as a set $\{i, j\}$. In a directed graph the edges have a direction and are called arcs. The order of the two nodes in the notation is relevant: $(i, j)$ means an arc with $i$ its tail and $j$ its head.

We are ready for the most important definition of this part of the course, which I will give in a mathematical formula first. We define a flow in a network $G = (N, A)$ as any feasible solution of the following set of linear constraints:

$\sum_{(i,j) \in A} f_{ij} - \sum_{(j,i) \in A} f_{ji} = b_i, \quad i \in N,$    (1)
$0 \le f_{ij} \le u_{ij}.$

The interpretation of $f_{ij}$ is indeed the size of a stream of some commodity that flows through the arc $(i, j)$. We always assume that it is non-negative, and usually there is a capacity on the flow depending on the arc, here $u_{ij}$. $b_i > 0$ signifies that node $i$ supplies $b_i$ of the commodity; we then call node $i$ a source. $b_i < 0$ signifies that node $i$ demands $-b_i$ of the commodity; we then call node $i$ a sink. $b_i = 0$ signifies that the node is used only as a transfer point. A little thought should make it clear that feasible flows exist only if $\sum_{i \in N} b_i = 0$. An example is given in the accompanying PowerPoint file.

In matrix notation, a network flow is any vector $f \in \mathbb{R}^A$ satisfying $Af = b$, $0 \le f \le u$. The matrix $A$ is called the node-arc incidence matrix; a node is incident to an arc (and vice versa) if it is either head or tail of the arc. The rows of $A$ correspond to nodes and the columns to arcs. The column corresponding to arc $(i, j)$ has a $1$ in row $i$, the tail of the arc, and a $-1$ in row $j$, the head of the arc, and all its other entries are $0$.

¹ I will draw figures at the blackboard. Unfortunately, I am so bad at making figures that it costs me too much time to insert them in these written notes.
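To make the matrix notation concrete, here is a minimal Python sketch that builds the node-arc incidence matrix and checks feasibility of a candidate flow. The network, supplies and capacities are made-up illustrative data, not the example from the PowerPoint file.

    import numpy as np

    # Hypothetical example network: nodes 0..3, arcs given as (tail, head).
    nodes = [0, 1, 2, 3]
    arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
    u = np.array([4.0, 2.0, 2.0, 3.0, 5.0])   # capacities u_ij, one per arc
    b = np.array([3.0, 0.0, 0.0, -3.0])       # supplies/demands; sums to 0

    # Node-arc incidence matrix: +1 in the tail row, -1 in the head row.
    A = np.zeros((len(nodes), len(arcs)))
    for k, (i, j) in enumerate(arcs):
        A[i, k] = 1.0
        A[j, k] = -1.0

    def is_feasible_flow(f, tol=1e-9):
        """Check Af = b and 0 <= f <= u componentwise."""
        return (np.allclose(A @ f, b, atol=tol)
                and np.all(f >= -tol) and np.all(f <= u + tol))

    f = np.array([2.0, 1.0, 0.0, 2.0, 1.0])   # a candidate flow
    print(is_feasible_flow(f))                # True for this data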

Let us consider the optimisation problem. Let $c_{ij}$ be the cost of sending one unit of flow through arc $(i, j)$:

$\min \; c^T f \quad \text{s.t.} \quad Af = b, \;\; 0 \le f \le u.$    (2)

Again, for an example I refer to the ppt-file. This is the min-cost flow problem or capacitated transhipment problem. A wide variety of optimisation problems can be modelled as a network flow problem: the shortest path problem, the maximum flow problem, the transportation problem, the assignment problem, etc.

Any $f$ in the null-space of $A$, i.e., any $f \in \{f \in \mathbb{R}^A \mid Af = 0\}$, is called a circulation. Notice that a circulation is not just a feasible solution of (2) with $b = 0$, since it may have negative coefficients and coefficients greater than the capacities.

1.1 Primal algorithms

We will present two primal algorithms, the simplex method and the negative cost cycle algorithm. In each iteration of any (primal) algorithm for problem (2) we have a feasible flow, and we investigate whether we can improve it.

Suppose we have a feasible flow $f$. Let us argue that we can change this flow over any undirected cycle of the network $G$, i.e., any set of edges that forms a cycle if we disregard directions. Let $C$ be such a cycle in the underlying undirected graph and choose any direction for this cycle, clockwise or anti-clockwise (drawn at the blackboard). Then some arcs of the directed graph appear on the cycle in forward direction, call them the set $F_C$, and some in backward direction, call them the set $B_C$. Set

$h^C_{ij} = 1$ if $(i, j) \in F_C$,  $h^C_{ij} = -1$ if $(i, j) \in B_C$,  $h^C_{ij} = 0$ if $(i, j) \notin C$.    (3)

All is illustrated in the example in the ppt-file. It is easy to see that $h^C$ satisfies $Ah^C = 0$ (shown in a figure on the blackboard). It is called a simple circulation.
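As a small check of (3), the following sketch (continuing with the hypothetical arcs and incidence matrix A from the snippet above) builds the simple circulation $h^C$ for an undirected cycle given as an oriented node sequence and verifies that $Ah^C = 0$.

    # Each consecutive pair of cycle nodes is either a forward arc (i, j) of the
    # network or the reverse of an existing arc (j, i), i.e. a backward arc.
    def simple_circulation(cycle_nodes):
        h = np.zeros(len(arcs))
        length = len(cycle_nodes)
        for t in range(length):
            i, j = cycle_nodes[t], cycle_nodes[(t + 1) % length]
            if (i, j) in arcs:                 # forward arc: h_ij = +1
                h[arcs.index((i, j))] = 1.0
            else:                              # backward arc: h_ji = -1
                h[arcs.index((j, i))] = -1.0
        return h

    hC = simple_circulation([0, 1, 2])   # (0,1), (1,2) forward; (0,2) backward
    print(np.allclose(A @ hC, 0))        # True: h^C is a circulation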

Thus, given a solution $f$, $f + \theta h^C$ is also a solution for any $\theta \in \mathbb{R}$, provided that $0 \le f + \theta h^C \le u$; i.e., provided that $0 \le f_{ij} + \theta \le u_{ij}$ if $(i, j) \in F_C$ and $0 \le f_{ij} - \theta \le u_{ij}$ if $(i, j) \in B_C$; i.e., for a non-negative step $\theta$, provided that $\theta \le \bar\theta$, where

$\bar\theta = \min\left\{ \min_{(i,j) \in F_C} \{u_{ij} - f_{ij}\}, \; \min_{(i,j) \in B_C} f_{ij} \right\}.$    (4)

Given some feasible flow, our algorithms will only be interested in flow exchanges that reduce total costs. Changing $f$ by $\theta h^C$ will change the cost by

$\theta \, c^T h^C = \theta \left( \sum_{(i,j) \in F_C} c_{ij} - \sum_{(i,j) \in B_C} c_{ij} \right).$    (5)

We call a cycle a negative cost cycle if

$\sum_{(i,j) \in F_C} c_{ij} - \sum_{(i,j) \in B_C} c_{ij} < 0.$    (6)

Thus we get a better solution if we have a negative cost cycle. Clearly we only strictly decrease total costs if in (4) $\bar\theta > 0$, i.e., none of the forward arcs on $C$ has $f_{ij} = u_{ij}$ and none of the backward arcs on $C$ has $f_{ij} = 0$. Given a feasible flow $f$, such a cycle is called an unsaturated cycle.

Both the simplex algorithm and the negative cost cycle algorithm of Section 7.4 in [B&T] are variations on the theme of finding in each iteration a negative cost cycle $C$ and changing the present feasible flow $f$ to $f + \bar\theta h^C$. The simplex method cannot always avoid having $\bar\theta = 0$, because of degeneracy, but will always terminate in an optimal solution. The other algorithm only makes changes on unsaturated cycles but may not terminate. Both algorithms, if they terminate, terminate with a feasible flow for which there is no negative cost cycle, which is indeed optimal, as we will show.
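As an illustration, here is a minimal sketch of the quantities in (4)-(6), again on the hypothetical data and the simple_circulation helper from the snippets above: given an oriented undirected cycle it computes the step size $\bar\theta$ and the cycle cost, so one can test whether the cycle is negative cost and unsaturated before updating the flow.

    c = np.array([1.0, 4.0, 1.0, 5.0, 1.0])   # made-up arc costs c_ij

    def cycle_step_and_cost(f, cycle_nodes):
        """Return (theta_bar, cycle cost) for an oriented undirected cycle, cf. (4) and (6)."""
        theta_bar = float("inf")
        cost = 0.0
        length = len(cycle_nodes)
        for t in range(length):
            i, j = cycle_nodes[t], cycle_nodes[(t + 1) % length]
            if (i, j) in arcs:                         # forward arc
                k = arcs.index((i, j))
                theta_bar = min(theta_bar, u[k] - f[k])
                cost += c[k]
            else:                                      # backward arc
                k = arcs.index((j, i))
                theta_bar = min(theta_bar, f[k])
                cost -= c[k]
        return theta_bar, cost

    theta_bar, cost = cycle_step_and_cost(f, [0, 1, 2])
    if cost < 0 and theta_bar > 0:                     # negative cost and unsaturated
        f = f + theta_bar * simple_circulation([0, 1, 2])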

1.1.1 The Simplex Method for Network Flow

Let us denote $|N| = n$ and $|A| = m$. First notice that, since $\sum_{i \in N} b_i = 0$, the rows of $A$ must be linearly dependent. Satisfying the equalities for $n-1$ of the nodes will automatically satisfy the remaining equality. Hence $\operatorname{rank}(A) \le n-1$. Thus the problem remains the same if we delete the last restriction from it. Let us denote by $\tilde{A}$ the matrix $A$ with the last row deleted. Assume that $G$ is connected, without loss of generality; otherwise we could split the problem into two or more separate problems. To facilitate the exposition of the simplex algorithm let us also assume that in (2) $u$ is infinite, making the flow problem uncapacitated:

$\min \; c^T f \quad \text{s.t.} \quad Af = b, \;\; f \ge 0.$    (7)

For the characterisation of basic (feasible) solutions we need the definition of a tree. It is an undirected concept. In a graph, a tree is a connected subgraph without (undirected) cycles. (A subgraph of a directed graph without directed cycles is called a directed acyclic subgraph, a concept we do not use in this part of the course.) A tree is a spanning tree of a graph if it contains all nodes. A basic and extremely crucial result in graph theory says that any tree on $n$ vertices has exactly $n-1$ edges.

The following theorem characterises a basic solution, not necessarily feasible.

Theorem 1.1 (7.4 in [B&T]) A flow is a basic solution of (7) if and only if there exists a tree $T$ of the underlying undirected graph of $G$ such that $f_{ij} = 0$ for all $\{i, j\} \notin T$.

Proof. ($\Leftarrow$) Let us take any spanning tree $T$ of $G$. Let $n$ be the number of the node corresponding to the last row of $A$. Choosing the columns of $\tilde{A}$ corresponding to the edges of $T$ gives a square matrix $B$. We will show that $B$ is non-singular, which proves the theorem since it implies that $Bf = b$ has a unique solution. Choose $n$, the node corresponding to the last row of $A$, as the root of $T$ and give an artificial direction to the edges in $T$, from each node towards the root. Thus every node other than the root has exactly one outgoing arc in this artificial direction. Renumber the nodes such that on the unique path in $T$ from each node to the root node $n$ the numbers are increasing. Renumber the arcs by their tail node in the artificial direction. This renumbering of the nodes affects the row and column order of $B$ but not its singularity or non-singularity, and it makes $B$ a lower triangular matrix with elements $1$ or $-1$ on the diagonal. Hence $\det(B) \in \{1, -1\}$.

($\Rightarrow$) If there is a basic solution with a cycle $C$ in its support, then there is a simple circulation $h^C$ such that for $\epsilon$ small enough $f + \epsilon h^C$ has the same active constraints as $f$. Thus $f$ is not a unique solution of its active constraints, hence not a basic solution. □

Thus basic feasible solutions of the network flow polyhedron correspond to trees in the underlying undirected graph. This may be a crucial insight to prove the Hirsch bound for flow polytopes (assuming capacities on all arcs makes the polyhedron bounded but does not change the property).

The proof also gives rise to two interesting observations.

Remark 1. Apart from proving non-singularity, $\det(B) \in \{1, -1\}$ also shows, by application of Cramer's rule, that $Bf = b$ has an integer solution for all integral vectors $b$. The theorem then implies that the integer flow problem can simply be solved by its LP-relaxation. There is another proof of this fact: the matrix $A$ is totally unimodular, meaning that every square submatrix of $A$ has determinant $+1$, $0$, or $-1$. There is a very nice induction proof of total unimodularity of $A$.
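The triangular structure in the proof, and the integrality observation of Remark 1, can be made concrete: repeatedly pick a leaf of the spanning tree, its unique incident tree arc must carry exactly the flow that settles the leaf's balance, then remove the leaf. A rough sketch on a hypothetical spanning tree of the example network from the earlier snippets (an illustration of the argument, not code from [B&T]):

    def basic_flow_from_tree(tree_arcs, b):
        """Flow on the tree arcs meeting all node balances b (leaf elimination)."""
        flow = {a: 0.0 for a in tree_arcs}
        balance = {v: float(b[v]) for v in range(len(b))}
        remaining = set(tree_arcs)
        degree = {v: 0 for v in balance}
        for (i, j) in remaining:
            degree[i] += 1
            degree[j] += 1
        leaves = [v for v, d in degree.items() if d == 1]
        while remaining:
            v = leaves.pop()
            (i, j) = next(a for a in remaining if v in a)    # unique remaining tree arc at v
            # Arc (i, j) enters node v's balance with +flow if v = i and -flow if v = j.
            flow[(i, j)] = balance[v] if v == i else -balance[v]
            other = j if v == i else i
            balance[other] += balance[v]                     # pass the settled balance on
            balance[v] = 0.0
            remaining.remove((i, j))
            degree[other] -= 1
            if degree[other] == 1 and remaining:
                leaves.append(other)
        return flow   # integral whenever b is; may be negative (basic but infeasible)

    print(basic_flow_from_tree([(0, 1), (1, 2), (1, 3)], b))
    # {(0, 1): 3.0, (1, 2): 0.0, (1, 3): 3.0} for the hypothetical b above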

Remark 2. If $G$ is connected then $\operatorname{rank}(A) = n-1$, since $B$ is a rank-$(n-1)$ submatrix of $A$.

The guide for improvement in the simplex method was the reduced cost corresponding to a basic feasible solution. The reduced costs are particularly nicely characterised here. Suppose that we have a bfs with $T$ the corresponding tree. Consider any non-basic variable $f_{kl}$ and its arc $(k, l)$. This arc together with the arcs in $T$ creates a unique cycle $C$. Direct $C$ such that $(k, l)$ is a forward arc on $C$, i.e., $(k, l) \in F_C$. As argued before, increasing the flow on $(k, l)$ by one unit changes the cost, as given in (5), by $c^T h^C = \sum_{(i,j) \in F_C} c_{ij} - \sum_{(i,j) \in B_C} c_{ij}$. Hence the reduced cost is (cf. (6))

$\bar{c}_{kl} = \sum_{(i,j) \in F_C} c_{ij} - \sum_{(i,j) \in B_C} c_{ij}.$

Thus the simplex method stops as soon as there are no negative cost cycles left.

The reduced costs can be found in a more efficient way than going through all $O(m)$ cycles corresponding to the non-basic variables, namely by exploiting the simple structure of the dual of the uncapacitated problem (7):

$\max \; p^T b \quad \text{s.t.} \quad p^T A \le c^T.$    (8)

The column of $A$ corresponding to arc $(i, j)$ has $1$ in coordinate $i$ and $-1$ in coordinate $j$ and $0$ everywhere else. Thus the reduced cost of each arc $(i, j)$ is

$\bar{c}_{ij} = c_{ij} - (p_i - p_j), \quad (i, j) \in A.$

From the above it is easy to see that if $p$ is a feasible dual solution, i.e., $\bar{c}_{ij} \ge 0$ for all $(i, j) \in A$, then $p + \delta \mathbf{1}$ is also feasible. Moreover, $p^T b + \delta \mathbf{1}^T b = p^T b$ since $\sum_i b_i = 0$. Thus we may assume that in any dual feasible solution $p_n = 0$. Corresponding to a primal bfs, the values of the $p_i$'s are easily computed from the reduced costs of the basic variables being $0$:

$\bar{c}_{ij} = 0 = c_{ij} - (p_i - p_j), \quad (i, j) \in T.$    (9)

Starting from $p_n = 0$ and going through the $n-1$ equalities of (9) in a smart way sets the values of $p$ in $O(n)$ operations. Checking the reduced costs for negativity takes $O(m)$, and updating $f$ in case a negative cost cycle exists takes $O(n)$. The total time per iteration is therefore $O(m)$, which compares favourably to the $O(mn)$ for general LPs.
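A minimal sketch of (9) and the reduced costs, on the hypothetical tree and arc costs used above (with node 3 as root, playing the role of node $n$): starting from $p_n = 0$, walk down the tree and set each potential from the one already known at the other end of the tree arc.

    def node_potentials(tree_arcs, tree_costs, root, n):
        """Solve (9): p[root] = 0 and c_ij - (p_i - p_j) = 0 on every tree arc."""
        adj = {v: [] for v in range(n)}
        for (i, j) in tree_arcs:
            adj[i].append((j, tree_costs[(i, j)], True))    # from tail i: p_j = p_i - c_ij
            adj[j].append((i, tree_costs[(i, j)], False))   # from head j: p_i = p_j + c_ij
        p = {root: 0.0}
        stack = [root]
        while stack:                                        # DFS over the tree
            v = stack.pop()
            for w, cij, v_is_tail in adj[v]:
                if w not in p:
                    p[w] = p[v] - cij if v_is_tail else p[v] + cij
                    stack.append(w)
        return p

    tree_costs = {(0, 1): 1.0, (1, 2): 1.0, (1, 3): 5.0}    # costs on the tree arcs
    p = node_potentials(list(tree_costs), tree_costs, root=3, n=4)
    reduced = {(i, j): cij - (p[i] - p[j]) for (i, j), cij in zip(arcs, c)}
    # An arc with reduced cost < 0 closes a negative cost cycle with the tree.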

1.1.2 The Negative Cost Cycle Algorithm

Let us now consider the algorithm that makes changes only on unsaturated cycles. We turn back to the capacitated problem (2). Given a flow $f$, such a cycle is found by building a residual graph $R_f$ with node set $N$ and arc set $A_f$. (Those of you who have studied the Max-Flow problem in some previous course will recognize this graph.) Given an arc $(i, j)$ with flow $f_{ij}$: if $f_{ij} < u_{ij}$, then $(i, j) \in A_f$ with capacity $u_{ij} - f_{ij}$ and cost $c_{ij}$; if $f_{ij} > 0$, then $(j, i) \in A_f$ with capacity $f_{ij}$ and cost $-c_{ij}$.

It is easy to see that 1) any directed cycle in $R_f$ is unsaturated; 2) any negative cost directed cycle in $R_f$ corresponds to an unsaturated negative cost undirected cycle of $G$ and vice versa, where arcs on the cycle in $R_f$ with negative cost correspond to backward arcs on the cycle in $G$ and arcs on the cycle in $R_f$ with positive cost correspond to forward arcs on the cycle in $G$; and 3) the total cost of a directed cycle in $R_f$ is the total cost (sum of costs on forward arcs minus sum of costs on backward arcs) of the corresponding undirected cycle of $G$. Finding a negative cost directed cycle in a directed graph can be done in polynomial time. The algorithm terminates if there is no negative cost cycle in the residual graph.

Theorem 1.2 (7.6 in [B&T]) A feasible flow $f$ is optimal if and only if there is no unsaturated negative cost cycle.

Proof. ($\Rightarrow$) is trivial. For ($\Leftarrow$), suppose that $f$ is not an optimal flow. Then there is an optimal flow $f^*$, and $\bar{f} = f^* - f$ has $c^T \bar{f} < 0$. Since $Af^* = Af = b$, $\bar{f}$ must be a circulation, i.e., $A\bar{f} = 0$. This corresponds to a non-negative circulation $\hat{f} \ge 0$ in $R_f$: a decrease of flow on an arc of $G$ becomes flow on the reverse arc in $R_f$. Notice that the support of this circulation need not be a single cycle. Thus we need to argue that a simple negative cost cycle exists in the residual network. I adopt the bad habit from the book of inserting a lemma in the middle of the proof of another theorem.

Lemma 1.1 Let $f \ge 0$ be a (non-negative) circulation. Then there exist circulations $f^1, \ldots, f^k$, involving only forward arcs, and positive scalars $a_1, \ldots, a_k$, such that $f = \sum_{i=1}^{k} a_i f^i$. Furthermore, if $f$ is integral then the $a_i$ can be chosen integral, for all $i$.

Proof. Starting in some arc with $f_{ij} > 0$, because of flow conservation there must exist an arc $(j, k)$ with $f_{jk} > 0$, etc., until we reach a node that we have met before. Take this cycle $C_1$ (having only forward arcs) with its simple circulation $h^{C_1} = f^1$ and subtract from the flow on each arc of $C_1$ an amount of flow equal to $a_1 = \min_{(i,j) \in C_1} f_{ij}$. At least one arc will get flow $0$ and the remaining flow is still a non-negative circulation. Repeat this until all flow has become $0$. □

Therefore $\hat{f} = \sum_i a_i f^i$, with each $f^i$ a simple circulation using only forward arcs of the residual graph. Since $\hat{f}$ has negative cost, at least one of the $f^i$ must have negative cost. Thus the original graph has an unsaturated negative cost cycle. □
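Before moving on, here is a rough sketch of one iteration of the negative cost cycle algorithm just described, on the hypothetical data of the earlier snippets: build the residual graph $R_f$ and look for a negative cost directed cycle with a Bellman-Ford pass (any polynomial-time negative cycle routine would do).

    def residual_graph(f):
        """Arcs of R_f as tuples (tail, head, residual capacity, cost)."""
        res = []
        for k, (i, j) in enumerate(arcs):
            if f[k] < u[k]:
                res.append((i, j, u[k] - f[k], c[k]))    # forward residual arc
            if f[k] > 0:
                res.append((j, i, f[k], -c[k]))          # backward residual arc
        return res

    def negative_cycle_node(res, n):
        """Bellman-Ford: a node lying on a negative cost directed cycle, or None."""
        dist = [0.0] * n                  # starting from all nodes at once
        pred = [None] * n
        relaxed = None
        for _ in range(n):
            relaxed = None
            for (i, j, _cap, cost) in res:
                if dist[i] + cost < dist[j] - 1e-12:
                    dist[j] = dist[i] + cost
                    pred[j] = i
                    relaxed = j
        if relaxed is None:               # no relaxation in round n: no negative cycle
            return None
        for _ in range(n):                # walk predecessors to land on the cycle
            relaxed = pred[relaxed]
        return relaxed

    v = negative_cycle_node(residual_graph(f), len(nodes))
    if v is None:
        print("f is optimal")
    else:
        print("improving cycle through node", v)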

Notice that if $b$ and $u$ are integral vectors and we start with an integral feasible flow, then in each iteration $\bar\theta$ will be integral and hence the flow will remain integral. This shows that in this case $\bar\theta \ge 1$ in every iteration, and hence the algorithm terminates (Theorem 7.7 in [B&T]). It also proves that, given integral $b$ and $u$, there exists an integral optimal solution to the problem.

We continue by showing how to obtain a first basic feasible solution. This goes through solving the so-called Max Flow problem, a very famous problem in combinatorial optimisation.

Material of Week 4 from [B&T]
I have skipped Chapter 5. Chapter 7, Sections 7.1-7.4.

Exercises of Week 4
7.2, 7.14, 7.17

Next time
The rest of Chapter 7