Discrete Optimization 23

2 Total Unimodularity (TU) and Its Applications

In this section we discuss the theory of total unimodularity and its applications to flows in networks.

2.1 Total Unimodularity: Definition and Properties

Consider the following integer linear programming problem

$$(P)\qquad \max\ c^T x \quad \text{s.t.}\quad Ax = b, \quad x \ge 0, \tag{2.1}$$

where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$ and $c \in \mathbb{Z}^n$ are all integer.

Definition 2.1 A square integer matrix $B$ is called unimodular if $\det(B) = \pm 1$. An integer matrix $A$ is called totally unimodular (TU) if every square, nonsingular submatrix of $A$ is unimodular.

The above definition means that a TU matrix is a $\{-1, 0, 1\}$-matrix. But a $\{-1, 0, 1\}$-matrix is not necessarily a TU matrix, e.g.,

$$A = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad \det(A) = 2.$$

Lemma 2.1 Suppose that $A \in \mathbb{Z}^{n \times n}$ is a unimodular matrix and that $b \in \mathbb{Z}^n$ is an integer vector. If $A$ is nonsingular, then $Ax = b$ has the unique integer solution $x = A^{-1} b$.

Proof. Let $a_{ij}$ be the $(i,j)$-th entry of $A$, $i, j = 1, \dots, n$. For any $a_{ij}$, define the cofactor of $a_{ij}$ as

$$\mathrm{Cof}(a_{ij}) = (-1)^{i+j} \det\big(A_{\{1,\dots,n\}\setminus\{i\}}^{\{1,\dots,n\}\setminus\{j\}}\big),$$

where $A_{\{1,\dots,n\}\setminus\{i\}}^{\{1,\dots,n\}\setminus\{j\}}$ is the matrix obtained by removing the $i$-th row and the $j$-th column of $A$. Then

$$\det(A) = \sum_{i=1}^{n} a_{i1}\, \mathrm{Cof}(a_{i1}).$$
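Definition 2.1 can be checked directly on small matrices by enumerating every square submatrix. A minimal sketch (exact integer cofactor expansion, so only practical for tiny matrices; the example matrices are the $2 \times 2$ counterexample above and a small node-arc-style matrix):

```python
from itertools import combinations

def det(M):
    # exact integer determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    # brute force: every square submatrix must have determinant -1, 0 or 1
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular([[1, 1], [-1, 1]]))       # False: det = 2
print(is_totally_unimodular([[1, -1, 0], [0, 1, -1]]))  # True
```

The exponential enumeration is exactly the definition; the polynomial-time recognition of TU matrices (Seymour's decomposition) is far beyond this sketch.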

The adjoint of $A$ is

$$\mathrm{Adj}(A) = \mathrm{Adj}(\{a_{ij}\}) = \{\mathrm{Cof}(a_{ij})\}^T,$$

and the inverse of $A$ is

$$A^{-1} = \frac{1}{\det(A)}\, \mathrm{Adj}(A).$$

Since $A \in \mathbb{Z}^{n \times n}$ is a unimodular nonsingular integer matrix, every $\mathrm{Cof}(a_{ij})$ is an integer and $\det(A) = \pm 1$. Hence $A^{-1}$ is an integer matrix, and $x = A^{-1} b$ is integer whenever $b$ is. Q.E.D.

Theorem 2.1 If $A$ is TU, then every basic solution to (P) is integer.

Proof. Suppose that $x$ is a basic solution to (P). Let $N$ be the set of indices $j$ such that $x_j = 0$. Since $x$ is a basic solution to (P), there exist two nonnegative integers $p$ and $q$ with $p + q = n$ and indices $B(1), \dots, B(p) \in \{1, \dots, m\}$ and $N(1), \dots, N(q) \in N$ such that the vectors

$$\{A_{B(i)}^T\}_{i=1}^{p} \cup \{e_{N(j)}^T\}_{j=1}^{q}$$

are linearly independent, where $e_{N(j)}$ is the $N(j)$-th unit vector in $\mathbb{R}^n$. From $Ax = b$ and $x_j = 0$ for $j \in N$, we know that there exists a matrix $B \in \mathbb{R}^{p \times p}$ such that

$$B x_B = b_B,$$

where $x_B = (x_{B(1)}, \dots, x_{B(p)})^T$ and $b_B = (b_{B(1)}, \dots, b_{B(p)})^T$. The matrix $B$ is nonsingular by the linear independence of $\{A_{B(i)}^T\}_{i=1}^{p} \cup \{e_{N(j)}^T\}_{j=1}^{q}$. Then, by Lemma 2.1, we know that $x_B$ is integer. Noting that $x_N = 0$ is integer, we complete the proof. Q.E.D.

Proposition 2.1 $A \in \mathbb{Z}^{m \times n}$ is TU $\iff$ $-A$ and $A^T$ are totally unimodular matrices.

Proposition 2.2 $A \in \mathbb{Z}^{m \times n}$ is TU $\iff$ $(A\ \ e_i)$ is TU, where $e_i$ is the $i$-th unit vector of $\mathbb{R}^m$, $i = 1, \dots, m$.
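Lemma 2.1 can be illustrated numerically: for a unimodular integer matrix, Cramer's rule returns an integer solution for every integer right-hand side. A small sketch (the matrix $B$ below is an arbitrary unimodular example chosen for illustration, not from the notes):

```python
def det(M):
    # exact integer determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def solve_cramer(B, b):
    # Cramer's rule: x_j = det(B with column j replaced by b) / det(B)
    d = det(B)
    return [det([row[:j] + [b[i]] + row[j + 1:]
                 for i, row in enumerate(B)]) / d
            for j in range(len(B))]

B = [[1, 2], [1, 3]]            # det(B) = 1, so B is unimodular
print(solve_cramer(B, [5, 7]))  # [1.0, 2.0]: integer, as Lemma 2.1 predicts
```

Since each numerator is a determinant of an integer matrix and the denominator is $\pm 1$, integrality of the solution is visible term by term.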

Proposition 2.3 $A \in \mathbb{Z}^{m \times n}$ is TU $\iff$ $(A\ \ I)$ is TU, where $I \in \mathbb{R}^{m \times m}$ is the identity matrix.

Proposition 2.4 $A \in \mathbb{Z}^{m \times n}$ is TU $\iff$ $\begin{pmatrix} A \\ I \end{pmatrix}$ is TU, where $I \in \mathbb{R}^{n \times n}$ is the identity matrix.

Theorem 2.2 (Hoffman and Kruskal, 1956) For any integer matrix $A \in \mathbb{Z}^{m \times n}$, the following statements are equivalent:

1. $A$ is TU;
2. the extreme points (if any) of $S(b) = \{x \mid Ax \le b,\ x \ge 0\}$ are integer for any integer $b$;
3. every square nonsingular submatrix of $A$ has an integer inverse.

Proof. ($1 \Rightarrow 2$) After adding nonnegative slack variables, we obtain the system

$$Ax + Is = b, \quad x \ge 0, \quad s \ge 0.$$

The extreme points of $S(b)$ correspond to basic feasible solutions of this system (exercise). Let $y = (x, s)$ be a basic feasible solution of the above system. If a given basis $B$ contains only columns from $A$, then $y_B$ is integer because $A$ is TU (Lemma 2.1). The same is true if $B$ contains only columns from $I$. So it remains to consider the case $B = (\bar{A}\ \ \bar{I})$, where $\bar{A}$ is a submatrix of $A$ and $\bar{I}$ is a submatrix of $I$. After a permutation of the rows of $B$, we have

$$B' = \begin{pmatrix} A_1 & 0 \\ A_2 & I' \end{pmatrix}.$$

Obviously, $|\det(B)| = |\det(B')|$, and $\det(B') = \det(A_1)\det(I') = \det(A_1)$. Now $A$ totally unimodular implies $\det(A_1) = 0$ or $\pm 1$, and since $B$ is assumed to be nonsingular, $\det(B') = \pm 1$. Again, by Lemma 2.1, $y_B$ is integer. Hence $y$ is integer because $y_j = 0$ for $j \notin B$, and this implies that $x$ is integer. [One may also use Theorem 2.1 and Proposition 2.3 to obtain the proof immediately.]

($2 \Rightarrow 3$) Let $B \in \mathbb{Z}^{p \times p}$ be any square nonsingular submatrix of $A$. It suffices to prove that $\bar{b}^j$ is an integer vector, where $\bar{b}^j$ is the $j$-th column of $B^{-1}$, $j = 1, \dots, p$. Let $t$ be an integer vector such that $t + \bar{b}^j > 0$, and set $b_B(t) = Bt + e_j$, where $e_j$ is the $j$-th unit vector. Then

$$x_B = B^{-1} b_B(t) = B^{-1}(Bt + e_j) = t + B^{-1} e_j = t + \bar{b}^j > 0.$$

By choosing the remaining components $b_N$ ($N = \{1, \dots, n\} \setminus B$) sufficiently large so that $(Ax)_j < b_j$ for $j \in N$ (where $x_j = 0$ for $j \in N$), $x$ becomes an extreme point of $S(b(t))$. As $x_B$ and $t$ are integer vectors, $\bar{b}^j = x_B - t$ is an integer vector for each $j = 1, \dots, p$, and hence $B^{-1}$ is an integer matrix.

($3 \Rightarrow 1$) Let $B$ be an arbitrary square, nonsingular submatrix of $A$. Then

$$1 = \det(I) = \det(BB^{-1}) = \det(B)\det(B^{-1}).$$

By the assumption, $B$ and $B^{-1}$ are integer matrices, so $\det(B)$ and $\det(B^{-1})$ are integers whose product is $1$. Thus $\det(B) = \det(B^{-1}) = \pm 1$, and $A$ is TU. Q.E.D.
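Statement 3 of Theorem 2.2 can be observed directly: for a nonsingular submatrix $B$ of a TU matrix, the adjugate formula $B^{-1} = \mathrm{Adj}(B)/\det(B)$ yields an integer matrix. A sketch using exact rational arithmetic (the matrix $B$ below, a nonsingular submatrix of the incidence matrix of a directed path, is an invented example):

```python
from fractions import Fraction

def det(M):
    # exact integer determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def inverse(B):
    # B^{-1} via the adjugate: (B^{-1})_{ji} = (-1)^{i+j} det(minor_ij) / det(B)
    n = len(B)
    d = Fraction(det(B))
    inv = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            minor = [row[:j] + row[j + 1:]
                     for k, row in enumerate(B) if k != i]
            inv[j][i] = Fraction((-1) ** (i + j) * det(minor)) / d
    return inv

B = [[1, 0], [-1, 1]]   # det(B) = 1
print(inverse(B))       # [[Fraction(1, 1), Fraction(0, 1)], [Fraction(1, 1), Fraction(1, 1)]]
```

Every denominator divides $\det(B) = \pm 1$, which is exactly the mechanism behind the $1 \Rightarrow 3$ direction.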

Theorem 2.3 (A sufficient condition for TU) An integer matrix $A$ with all $a_{ij} \in \{-1, 0, 1\}$ is TU if

1. no more than two nonzero elements appear in each column, and
2. the rows of $A$ can be partitioned into two subsets $M_1$ and $M_2$ such that

   (a) if a column contains two nonzero elements with the same sign, one element is in each of the subsets;

   (b) if a column contains two nonzero elements of opposite signs, both elements are in the same subset.

Proof. The proof is by induction on the order of the submatrix. Every one-element submatrix of $A$ has determinant $0$, $1$ or $-1$. Assume that the theorem is true for all submatrices of $A$ of order $k - 1$ or less, and let $B$ be a submatrix of order $k$. If $B$ contains a column with at most one nonzero element, we expand $\det(B)$ by that column and apply the induction hypothesis. Finally, consider the case in which every column of $B$ contains two nonzero elements. Then conditions 2(a) and 2(b) give, for every column $j$,

$$\sum_{i \in M_1} b_{ij} = \sum_{i \in M_2} b_{ij}, \qquad j = 1, \dots, k.$$

Let $b^i$ denote the $i$-th row of $B$. The above equality gives

$$\sum_{i \in M_1} b^i - \sum_{i \in M_2} b^i = 0,$$

which implies that the rows $\{b^i\}_{i \in M_1 \cup M_2}$ are linearly dependent, and thus $B$ is singular, i.e., $\det(B) = 0$. Q.E.D.

Corollary 2.1 The vertex-edge incidence matrix of a bipartite graph is TU.

Corollary 2.2 The node-arc incidence matrix of a digraph is TU.
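Corollary 2.2 can be verified mechanically: in a node-arc incidence matrix every column has exactly one $+1$ and one $-1$, so Theorem 2.3 applies with $M_1$ = all rows and $M_2 = \emptyset$ (case 2(b)). A sketch on an invented digraph:

```python
def incidence_matrix(num_nodes, arcs):
    # node-arc incidence matrix: +1 at the tail, -1 at the head of each arc
    A = [[0] * len(arcs) for _ in range(num_nodes)]
    for j, (tail, head) in enumerate(arcs):
        A[tail][j] = 1
        A[head][j] = -1
    return A

def satisfies_theorem_2_3(A):
    # with M1 = all rows, M2 = empty: each column may have at most two
    # nonzeros, and two nonzeros must have opposite signs (case 2b)
    for j in range(len(A[0])):
        col = [A[i][j] for i in range(len(A)) if A[i][j] != 0]
        if len(col) > 2 or (len(col) == 2 and col[0] + col[1] != 0):
            return False
    return True

A = incidence_matrix(4, [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])
print(satisfies_theorem_2_3(A))  # True, so A is TU by Theorem 2.3
```

The check only certifies the particular partition $M_1 = $ all rows, $M_2 = \emptyset$; it is a sufficient test, matching the one-sided nature of Theorem 2.3.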

2.2 Applications

In this section we show that the assumptions of the theorems in Section 2.1 are fulfilled for the integer programming problems connected with the optimization of flows in networks. This means that these problems can be solved by the simplex method. However, it is not necessary to use the simplex method, because more efficient methods have been developed that take the specific structure of these problems into account.

Many commodities, such as gas and oil, are transported through networks in which we distinguish sources, intermediate transportation or distribution points, and destination points. We represent a network as a directed graph $G = (V, E)$ and associate with each arc $(i, j) \in E$ the flow $x_{ij}$ of the commodity and the capacity $d_{ij}$ (possibly infinite) that bounds the flow through the arc. The set $V$ is partitioned into three sets: $V_1$, the set of sources or origins; $V_2$, the set of intermediate points; and $V_3$, the set of destinations or sinks.

Figure 2.1: A network (sources $V_1$, intermediate points $V_2$, sinks $V_3$)

For each $i \in V_1$, let $a_i$ be the supply of the commodity, and for each $i \in V_3$, let $b_i$ be the demand for the commodity. We assume that there is no loss of flow at intermediate points. Additionally, define $V^+(i)$ and $V^-(i)$ as

$$V^+(i) = \{j \mid (i, j) \in E\} \quad \text{and} \quad V^-(i) = \{j \mid (j, i) \in E\},$$

respectively. Then the minimum cost capacitated flow problem may be formulated as

$$(P)\qquad v(P) = \min \sum_{(i,j) \in E} c_{ij} x_{ij}$$

subject to

$$\sum_{j \in V^+(i)} x_{ij} - \sum_{j \in V^-(i)} x_{ji} \;\begin{cases} \le a_i, & i \in V_1, \\ = 0, & i \in V_2, \\ \le -b_i, & i \in V_3, \end{cases} \tag{2.2}$$

$$0 \le x_{ij} \le d_{ij}, \quad (i, j) \in E. \tag{2.3}$$

Constraint (2.2) requires the conservation of flow at intermediate points, a net flow into sinks at least as great as demanded, and a net flow out of sources no greater than the supply. In some applications, demand must be satisfied exactly and all of the supply must be used. If all of the constraints in (2.2) are equalities, the problem has no feasible solution unless

$$\sum_{i \in V_1} a_i = \sum_{i \in V_3} b_i.$$

To avoid pathological cases, we assume for each cycle in the network $G = (V, E)$ either that the sum of the costs of the arcs in the cycle is positive or that the minimal capacity of an arc in the cycle is bounded.

Theorem 2.4 The constraint matrix corresponding to (2.2) and (2.3) is totally unimodular.
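On a toy instance of (P) one can enumerate all integer flows within the capacity bounds and confirm that the optimum is attained at an integral point, as Theorems 2.1 and 2.4 together guarantee. A sketch (tiny network with one source, one intermediate node, one sink; all data invented for illustration):

```python
from itertools import product

# arcs as (tail, head, capacity, cost); node 0 = source, 1 = intermediate, 2 = sink
arcs = [(0, 1, 2, 1), (0, 2, 1, 3), (1, 2, 2, 1)]
supply, demand = 2, 2

best = None
for x in product(*(range(cap + 1) for _, _, cap, _ in arcs)):
    out0 = sum(f for f, (t, h, _, _) in zip(x, arcs) if t == 0)
    net1 = (sum(f for f, (t, h, _, _) in zip(x, arcs) if h == 1)
            - sum(f for f, (t, h, _, _) in zip(x, arcs) if t == 1))
    in2 = sum(f for f, (t, h, _, _) in zip(x, arcs) if h == 2)
    # constraints (2.2): supply bound, conservation, demand bound
    if out0 <= supply and net1 == 0 and in2 >= demand:
        cost = sum(f * c for f, (_, _, _, c) in zip(x, arcs))
        if best is None or cost < best[0]:
            best = (cost, x)

print(best)  # (4, (2, 0, 2)): route both units along 0 -> 1 -> 2
```

Exhaustive enumeration is only a didactic stand-in for the LP: total unimodularity is what lets the simplex method find the same integral optimum without enumeration.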

Proof. The constraint matrix has the form

$$A = \begin{pmatrix} A_1 \\ I \end{pmatrix},$$

where $A_1$ is the matrix of (2.2) and $I$ is the identity matrix of (2.3). In the last section we showed (Proposition 2.4) that $A_1$ totally unimodular implies $A$ totally unimodular. Each variable $x_{ij}$ appears in exactly two constraints of (2.2), once with coefficient $+1$ and once with coefficient $-1$. Thus $A_1$ is the node-arc incidence matrix of a digraph and therefore (Corollary 2.2) it is totally unimodular. Q.E.D.

The most popular special case of (P) is the so-called (capacitated) transportation problem. We obtain it by setting $V_2 = \emptyset$, $V^-(i) = \emptyset$ for all $i \in V_1$, and $V^+(i) = \emptyset$ for all $i \in V_3$:

$$(TP)\qquad v(TP) = \min \sum_{(i,j) \in E} c_{ij} x_{ij}$$

$$\text{s.t.} \quad \sum_{j \in V^+(i)} x_{ij} \le a_i, \quad i \in V_1,$$

$$\sum_{j \in V^-(i)} x_{ji} \ge b_i, \quad i \in V_3,$$

$$0 \le x_{ij} \le d_{ij}, \quad (i, j) \in E.$$

If $d_{ij} = \infty$ for all $(i, j) \in E$, the uncapacitated version of (P) is sometimes called the transshipment problem.
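When the constraints of (TP) hold with equality and capacities are dropped, each variable $x_{ij}$ appears with coefficient $+1$ in one source row and $+1$ in one sink row, so the constraint matrix is the vertex-edge incidence matrix of a bipartite graph (Corollary 2.1); Theorem 2.3 applies with $M_1$ = source rows and $M_2$ = sink rows (case 2(a)). A sketch that builds such a matrix (sizes invented for illustration):

```python
def tp_constraint_matrix(num_sources, num_sinks):
    # column for each variable x_ij (ordered lexicographically in (i, j)):
    # +1 in the row of source i (subset M1), +1 in the row of sink j (subset M2)
    rows = num_sources + num_sinks
    cols = num_sources * num_sinks
    A = [[0] * cols for _ in range(rows)]
    for i in range(num_sources):
        for j in range(num_sinks):
            col = i * num_sinks + j
            A[i][col] = 1                # source row, in M1
            A[num_sources + j][col] = 1  # sink row, in M2
    return A

A = tp_constraint_matrix(2, 3)
# every column has two +1's of the same sign, one in M1 (rows 0..1) and one
# in M2 (rows 2..4): exactly case 2(a) of Theorem 2.3, so A is TU
for col in zip(*A):
    assert sum(col) == 2 and sum(col[:2]) == 1
print(len(A), len(A[0]))  # 5 6
```

The same construction with $+1/-1$ per column would give the node-arc incidence matrix of the general problem (P).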

If all $a_i = 1$ and all $b_i = 1$, and additionally $|V_1| = |V_3|$, the transshipment problem reduces to the so-called assignment problem of the form

$$(AP)\qquad v(AP) = \min \sum_{i \in V_1} \sum_{j \in V^+(i)} c_{ij} x_{ij}$$

$$\text{s.t.} \quad \sum_{j \in V^+(i)} x_{ij} = 1, \quad i \in V_1,$$

$$\sum_{j \in V^-(i)} x_{ji} = 1, \quad i \in V_3,$$

$$x_{ij} \ge 0.$$

Note that $|V_1| = |V_3|$ implies that all constraints in (AP) must be satisfied as equalities.

Let $V = \{1, \dots, m\}$. Still another important practical problem obtained from (P) is the maximum flow problem. In this problem, $V_1 = \{1\}$, $V_3 = \{m\}$, $V^-(1) = \emptyset$, $V^+(m) = \emptyset$, $a_1 = \infty$, $b_m = \infty$. The problem is to maximize the total flow into the vertex $m$ under the capacity constraints:

$$(MF)\qquad v(MF) = \max \sum_{i \in V^-(m)} x_{im}$$

$$\text{s.t.} \quad \sum_{j \in V^+(i)} x_{ij} - \sum_{j \in V^-(i)} x_{ji} = 0, \quad i \in V_2 = \{2, \dots, m-1\},$$

$$0 \le x_{ij} \le d_{ij}, \quad (i, j) \in E.$$

Finally, consider the shortest path problem. Let $c_{ij}$ be interpreted as the length of arc $(i, j)$, and define the length of a path in $G$ to be the sum of the lengths of the arcs in the path. The objective is to find a path of minimum length
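In practice (MF) is solved combinatorially rather than through the LP formulation. A minimal sketch of the Ford-Fulkerson scheme with shortest (BFS) augmenting paths, i.e. Edmonds-Karp, on an invented 4-node network:

```python
from collections import deque

def max_flow(n, capacity, s, t):
    # capacity: dict (u, v) -> arc capacity; nodes are 0 .. n-1
    flow = 0
    residual = dict(capacity)
    while True:
        # BFS for an augmenting s-t path in the residual network
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in range(n):
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # recover the path, find its bottleneck, and augment along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(residual[(u, v)] for u, v in path)
        for u, v in path:
            residual[(u, v)] -= delta
            residual[(v, u)] = residual.get((v, u), 0) + delta
        flow += delta

caps = {(0, 1): 3, (0, 2): 2, (1, 2): 1, (1, 3): 2, (2, 3): 3}
print(max_flow(4, caps, 0, 3))  # 5
```

The value 5 equals the capacity of the cut $(\{0\}, \{1, 2, 3\})$, illustrating the max-flow min-cut theorem that LP duality yields for (MF).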

from vertex $1$ to vertex $m$. It is assumed that all cycles have nonnegative length. This problem is a special case of the transshipment problem in which $V_1 = \{1\}$, $V_3 = \{m\}$, $a_1 = 1$ and $b_m = 1$.

Let $A$ be the incidence matrix of the digraph $G = (V, E)$, where $V = \{1, \dots, m\}$ and $E = \{e_1, \dots, e_n\}$. With each arc $e_j$ we associate its length $c_j \ge 0$ and its flow $x_j \ge 0$. The shortest path problem may be formulated as

$$(SP)\qquad v(SP) = \min \sum_{j=1}^{n} c_j x_j$$

$$\text{s.t.} \quad Ax = \begin{pmatrix} -1 \\ 0 \\ \vdots \\ 0 \\ +1 \end{pmatrix}, \qquad x \ge 0.$$

The first constraint corresponds to the source vertex and the $m$-th constraint corresponds to the demand vertex, while the remaining constraints correspond to the intermediate vertices, i.e., the points of distribution of the unit flow. The dual problem to (SP) is

$$(DSP)\qquad v(DSP) = \max\ (-u_1 + u_m) \quad \text{s.t.}\quad A^T u \le c. \tag{2.4}$$
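Like (MF), the LP (SP) is solved combinatorially in practice. A Bellman-Ford sketch (graph data invented for illustration); note that the computed distances $u_i$ satisfy $u_j - u_i \le c_{ij}$ on every arc, so they are feasible for the dual (DSP) under the sign convention above:

```python
import math

def bellman_ford(num_nodes, arcs, source):
    # arcs: list of (i, j, length); assumes no negative-length cycles
    dist = [math.inf] * num_nodes
    dist[source] = 0
    for _ in range(num_nodes - 1):
        for i, j, c in arcs:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
    return dist

arcs = [(0, 1, 1), (0, 2, 4), (1, 2, 2), (1, 3, 6), (2, 3, 1)]
u = bellman_ford(4, arcs, 0)
print(u[3])  # 4: shortest path 0 -> 1 -> 2 -> 3
# dual feasibility for (DSP): u[j] - u[i] <= c_ij for every arc (i, j)
assert all(u[j] - u[i] <= c for i, j, c in arcs)
```

With $u_1 = 0$, the dual objective $-u_1 + u_m$ equals the shortest distance, so the distances certify optimality of the path, an instance of LP duality for (SP)/(DSP).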