Relation of Pure Minimum Cost Flow Model to Linear Programming


The Network Model

The network pure minimum cost flow model has m nodes. The external flows are given by the vector b with m - 1 elements; the remaining node is the slack node. The network has n arcs with parameter vectors u and c, and the flow variable x. The model is conveniently described by either the graphical form illustrated on the left of Fig. 1 or the vector description illustrated on the right. Implicit in both representations are the criterion for optimization (minimize total cost), the requirement for conservation of flow at the nodes, and the restriction of flows to lie between zero and the upper bounds.

[Figure 1. Example of network model: five nodes with fixed external flows shown in brackets ([+3] at node 1, [-5] at node 4, [0] at nodes 2 and 3; node 5 is the slack node) and eight arcs labeled with (upper bound, cost).]

    I = {1, 2, 3, 4, 5}
    b = [3, 0, 0, -5]
    K = {1, 2, 3, 4, 5, 6, 7, 8}
    o = [1, 1, 2, 2, 3, 3, 5, 5]
    t = [2, 3, 3, 4, 4, 5, 2, 1]
    u = [3, 4, 1, 2, 5, 1, 2, 1]
    c = [5, 1, 2, -1, 3, 1, -1, 1]

Linear Programming Model

The minimum cost network flow problem is a special case of the linear programming problem. The solution algorithms described in this book are based on the primal simplex algorithm for linear programming. To determine optimality conditions it is necessary to provide both the primal and dual linear programming models for the network flow problem.
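The vector description of Fig. 1 translates directly into arrays. The following Python sketch (illustrative only, not part of the original appendix) records the example data; the variable names mirror the vectors defined above.

    import numpy as np

    # Data of the example network in Figure 1 (node 5 is the slack node).
    m = 5                                       # number of nodes
    b = np.array([3, 0, 0, -5])                 # external flows at nodes 1..m-1
    o = np.array([1, 1, 2, 2, 3, 3, 5, 5])      # origin node of each arc
    t = np.array([2, 3, 3, 4, 4, 5, 2, 1])      # terminal node of each arc
    u = np.array([3, 4, 1, 2, 5, 1, 2, 1])      # arc upper bounds
    c = np.array([5, 1, 2, -1, 3, 1, -1, 1])    # arc costs
    n = len(c)                                  # number of arcs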

The Primal Model

The algebraic model expresses the objective function and constraints explicitly using linear functions of the flow variables. The result is a mathematical programming model.

Objective function

    Minimize z = Σ_{k=1}^{n} c_k x_k                                    (1)

subject to

Conservation of flow

    Σ_{k ∈ O_i} x_k - Σ_{k ∈ T_i} x_k = b_i,  i = 1, 2, ..., m - 1      (2)

where O_i is the set of arcs originating at node i and T_i is the set of arcs terminating at node i.

Upper bounds on flow and nonnegativity

    0 ≤ x_k ≤ u_k,  k = 1, ..., n                                       (3)

The model for the network of Fig. 1 provides a specific example.

    Minimize z = 5x_1 + x_2 + 2x_3 - x_4 + 3x_5 + x_6 - x_7 + x_8

    subject to
        x_1 + x_2 - x_8 = 3
        -x_1 + x_3 + x_4 - x_7 = 0
        -x_2 - x_3 + x_5 + x_6 = 0
        -x_4 - x_5 = -5
        x_1 ≤ 3, x_2 ≤ 4, x_3 ≤ 1, x_4 ≤ 2, x_5 ≤ 5, x_6 ≤ 1, x_7 ≤ 2, x_8 ≤ 1
        x_k ≥ 0 for k = 1, ..., n

From the example and from the general expression, we see that, except for the arcs originating or terminating at the slack node, each arc has two nonzero coefficients in the conservation of flow constraints: a +1 in the row of its origin node and a -1 in the row of its terminal node. Arcs that originate at the slack node have a single -1 in their columns, while arcs that terminate at the slack node have a single +1.
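The sign pattern just described determines the node-arc incidence matrix. A minimal Python sketch (illustrative only, using the origin and terminal lists of Fig. 1) applies the rule directly and reproduces the matrix A displayed next; the row for the slack node is omitted.

    import numpy as np

    o = [1, 1, 2, 2, 3, 3, 5, 5]      # origin node of each arc (Figure 1)
    t = [2, 3, 3, 4, 4, 5, 2, 1]      # terminal node of each arc
    m, n = 5, 8                       # node 5 is the slack node

    # +1 in the row of the origin node, -1 in the row of the terminal node;
    # the slack node has no conservation constraint, so its row is dropped.
    A = np.zeros((m - 1, n), dtype=int)
    for k in range(n):
        if o[k] != m:
            A[o[k] - 1, k] = 1
        if t[k] != m:
            A[t[k] - 1, k] = -1
    print(A)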

Let the matrix A represent the coefficients of the conservation of flow constraints. For the example

    A = [  1    1    0    0    0    0    0   -1 ]
        [ -1    0    1    1    0    0   -1    0 ]
        [  0   -1   -1    0    1    1    0    0 ]
        [  0    0    0   -1   -1    0    0    0 ]

A is called the incidence matrix of the network. The linear programming model for the minimum cost flow problem can now be written in a matrix format using the vectors previously defined.

    Minimize z = cx
    subject to Ax = b
               0 ≤ x ≤ u

Since the network flow problem has a linear programming model, it may be solved by general linear programming algorithms. The fact that the A matrix always has the form shown above allows significant simplifications of the linear programming algorithms and thus much greater efficiencies of solution. For the pure network flow model, the matrix A consists only of +1, -1, and 0 entries. Because of the arrangement of the nonzero coefficients in the matrix, it can be shown that the pure network flow problem with integer parameters will always have an integer optimum solution when solved as a linear program. This follows from the total unimodularity of the matrix A; a numerical illustration appears after the dual model below.

The Dual Model

The dual solution of the network flow problem plays an important role in the solution algorithm. In order to form the dual, assign the dual variable π_i to the conservation of flow constraint for node i and the dual variable μ_k to the upper bound constraint for arc k. The dual model for the network flow programming problem becomes

Dual Problem:

    Minimize z_D = Σ_{i=1}^{m-1} π_i b_i + Σ_{k=1}^{n} u_k μ_k          (4)

    subject to
        π_i - π_j + μ_k ≥ -c_k   for all arcs k(i, j)                   (5)
        π_m = 0, π_i unrestricted for all i                             (6)
        μ_k ≥ 0   for all arcs k(i, j)                                  (7)
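To illustrate both the matrix form above and the integrality property, the example can be handed to a general linear programming solver. The following Python sketch is illustrative only and assumes SciPy is available; for this data the solver should report an all-integer optimal flow (a hand calculation gives a minimum cost of 8).

    import numpy as np
    from scipy.optimize import linprog

    # Matrix form of the example of Figure 1: minimize cx s.t. Ax = b, 0 <= x <= u.
    A = np.array([[ 1,  1,  0,  0,  0,  0,  0, -1],
                  [-1,  0,  1,  1,  0,  0, -1,  0],
                  [ 0, -1, -1,  0,  1,  1,  0,  0],
                  [ 0,  0,  0, -1, -1,  0,  0,  0]])
    b = np.array([3, 0, 0, -5])
    c = np.array([5, 1, 2, -1, 3, 1, -1, 1])
    u = np.array([3, 4, 1, 2, 5, 1, 2, 1])

    res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, ub) for ub in u], method="highs")
    print(res.x, res.fun)
    # Although no integrality restriction is imposed, the optimal flows come out
    # integer because A is totally unimodular and b and u are integer vectors.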

Conditions for Optimality

From the duality theory of linear programming, we have conditions for optimality for the primal and dual solutions. Given solutions to the primal and dual problems, we are assured that they are optimal for their respective problems if they are feasible and satisfy the complementary slackness conditions. We will not derive the optimality conditions; the form that we use in this chapter is shown below. Assume we have solution x for the primal problem and solution π for the dual problem. For simplicity define for the arcs

    d_k = π_i - π_j + c_k   for all k(i, j).

The solutions x and π are optimal if the following conditions are satisfied.

1. Primal feasibility
   a. x provides conservation of flow at all nodes except the slack node       (8)
   b. 0 ≤ x_k ≤ u_k for all arcs                                                (9)

2. Complementary slackness
   For each arc k:
   a. if 0 < x_k < u_k, then d_k = 0                                            (10)
   b. if x_k = 0, then d_k ≥ 0                                                  (11)
   c. if x_k = u_k, then d_k ≤ 0                                                (12)

These conditions are fundamental for the solution algorithms. The conditions do not use the dual variables μ_k. These values are determined by

    μ_k = max{0, -d_k} for each arc k                                           (13)

With this definition, we assure that for any value of π, the dual solution is feasible.
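Conditions (8) through (12) can be checked mechanically for any candidate pair of flows and node duals. The following self-contained Python sketch is illustrative only; the function name and the test values are not from the text, and the flow and duals in the last line were worked out by hand for the example network.

    # Data of the example network (Figure 1); node 5 is the slack node.
    o = [1, 1, 2, 2, 3, 3, 5, 5]          # arc origin nodes
    t = [2, 3, 3, 4, 4, 5, 2, 1]          # arc terminal nodes
    c = [5, 1, 2, -1, 3, 1, -1, 1]        # arc costs
    u = [3, 4, 1, 2, 5, 1, 2, 1]          # arc upper bounds
    b = [3, 0, 0, -5]                     # external flows at nodes 1-4

    def is_optimal(x, pi, tol=1e-9):
        """Return True if flow x and node duals pi (with pi[4] = 0 for the slack node)
        satisfy primal feasibility (8)-(9) and complementary slackness (10)-(12)."""
        n = len(x)
        d = [c[k] + pi[o[k] - 1] - pi[t[k] - 1] for k in range(n)]   # reduced costs
        for i in range(len(b)):                                      # (8) conservation of flow
            net = sum(x[k] for k in range(n) if o[k] == i + 1) \
                - sum(x[k] for k in range(n) if t[k] == i + 1)
            if abs(net - b[i]) > tol:
                return False
        for k in range(n):
            if x[k] < -tol or x[k] > u[k] + tol:                     # (9) bounds
                return False
            if tol < x[k] < u[k] - tol and abs(d[k]) > tol:          # (10)
                return False
            if x[k] <= tol and d[k] < -tol:                          # (11)
                return False
            if x[k] >= u[k] - tol and d[k] > tol:                    # (12)
                return False
        return True

    # One optimal flow and a compatible set of node duals for this example,
    # obtained by hand (these particular numbers are not taken from the text).
    print(is_optimal([0, 3, 0, 2, 3, 0, 2, 0], [0, -1, 1, 4, 0]))    # True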

Basic Solutions and the Primal Simplex

Using the notation of general linear programming, the matrices defining the example problem are

    x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8]^T
    u = [3, 4, 1, 2, 5, 1, 2, 1]^T
    c = [5, 1, 2, -1, 3, 1, -1, 1]
    b = [3, 0, 0, -5]^T

         a_1   a_2   a_3   a_4   a_5   a_6   a_7   a_8
    A = [  1     1     0     0     0     0     0    -1 ]
        [ -1     0     1     1     0     0    -1     0 ]
        [  0    -1    -1     0     1     1     0     0 ]
        [  0     0     0    -1    -1     0     0     0 ]

The Basis and Basis Inverse

For linear programming the basis defines the basis matrix B, formed by selecting the columns of A associated with the elements of the basis vector. For the basis n_B = [8, 7, 2, 5]

         a_8   a_7   a_2   a_5
    B = [ -1     0     1     0 ]
        [  0    -1     0     0 ]
        [  0     0    -1     1 ]
        [  0     0     0    -1 ]

We have ordered the columns of the basis so that the matrix has an upper triangular form. This is always possible to do when the basis forms a spanning tree. Although generally it is fairly difficult to find the inverse of an arbitrary matrix, it is easy to find the inverse associated with the basis of a pure network flow problem. The inverse can be found by observation from the directed spanning tree defining the basis using the following rules.

a. Identify the rows of B^{-1} with the arcs in n_B and the columns with the nodes 1 to m - 1 of the network.

b. For each node i, construct column i of B^{-1} by finding the path from the slack node to node i in the basis tree. Entry (k, i) of the basis inverse is -1 if arc k is on the path, +1 if the mirror arc -k is on the path, and zero otherwise.

The information required from the basis inverse is immediately available from the paths in the tree defining the basis. The computational procedures of network flow programming will use the tree and not the inverse of the basis.
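These rules translate directly into a few lines of code. The following Python sketch (illustrative only; the tree paths are written out by hand for the basis n_B = [8, 7, 2, 5]) builds B from the columns of A, builds B^{-1} from the paths, and checks that the product B B^{-1} is the identity.

    import numpy as np

    # Incidence matrix of the example (slack node row omitted) and basis n_B = [8, 7, 2, 5].
    A = np.array([[ 1,  1,  0,  0,  0,  0,  0, -1],
                  [-1,  0,  1,  1,  0,  0, -1,  0],
                  [ 0, -1, -1,  0,  1,  1,  0,  0],
                  [ 0,  0,  0, -1, -1,  0,  0,  0]])
    nB = [8, 7, 2, 5]
    B  = A[:, [k - 1 for k in nB]]     # columns a_8, a_7, a_2, a_5

    # Paths from the slack node (5) to each node in the basis tree
    # {arc 8: 5->1, arc 7: 5->2, arc 2: 1->3, arc 5: 3->4}.
    # Each step is (arc, entry): -1 for a forward traversal, +1 for a mirror traversal.
    paths = {1: [(8, -1)],
             2: [(7, -1)],
             3: [(8, -1), (2, -1)],
             4: [(8, -1), (2, -1), (5, -1)]}

    Binv = np.zeros((4, 4), dtype=int)
    for node, path in paths.items():
        for arc, entry in path:
            Binv[nB.index(arc), node - 1] = entry   # row = position of arc in n_B, column = node

    print(Binv)
    print(B @ Binv)                    # the 4 x 4 identity matrix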

Observing the paths from the basis tree, we construct the basis inverse.

              Node:   1     2     3     4
               [ -1     0    -1    -1 ]   arc 8
    B^{-1} =   [  0    -1     0     0 ]   arc 7
               [  0     0    -1    -1 ]   arc 2
               [  0     0     0    -1 ]   arc 5

Matrix multiplication verifies for the example that B B^{-1} = I.

Basic Solution

The collection of columns of the A matrix associated with the set n_0 is the matrix N_0, and the collection of columns of the A matrix associated with the set n_1 is the matrix N_1 (n_0 indexes the nonbasic arcs at their lower bound of zero and n_1 the nonbasic arcs at their upper bounds). The objective coefficients c are partitioned into the vectors c_B, c_0, and c_1 according to the sets n_B, n_0, and n_1. Similarly the upper bounds u are partitioned into the arrays u_B, u_0, and u_1. From the conservation of flow constraints we have Ax = b. Partitioning the matrices into basic variables and the two kinds of nonbasic variables:

    B x_B + N_0 x_0 + N_1 x_1 = b.

Solving for the basic variables while substituting x_0 = 0 and x_1 = u_1 we have

    x_B = B^{-1}[b - N_1 u_1].

For the network model, the term in brackets is the vector of adjusted external flows on the right-hand side of the conservation of flow constraints (2). An individual component of x_B multiplies this vector by a row of B^{-1}. From the previous discussion this row represents an arc, with entry -1 in column i if the arc is on the path to node i (+1 for a mirror traversal) and 0 if the arc is not on the path. Thus the matrix multiplication sums, with the appropriate signs, the adjusted external flows for every node containing the arc in its path to the slack node. For the example, n_B = [8, 7, 2, 5] and the only nonbasic arc at its upper bound is arc 4 (n_1 = [4]), so b - N_1 u_1 = [3, -2, 0, -3]^T and

    [x_8]                            [ -1     0    -1    -1 ] [  3 ]   [ 0 ]
    [x_7]  =  B^{-1}[b - N_1 u_1]  = [  0    -1     0     0 ] [ -2 ] = [ 2 ]
    [x_2]                            [  0     0    -1    -1 ] [  0 ]   [ 3 ]
    [x_5]                            [  0     0     0    -1 ] [ -3 ]   [ 3 ]
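The same computation can be reproduced numerically. A short Python sketch follows (illustrative only; it assumes, as above, that arc 4 is the only nonbasic arc at its upper bound).

    import numpy as np

    # Basis inverse for n_B = [8, 7, 2, 5] (rows in that arc order, columns = nodes 1-4).
    Binv = np.array([[-1,  0, -1, -1],
                     [ 0, -1,  0,  0],
                     [ 0,  0, -1, -1],
                     [ 0,  0,  0, -1]])
    b = np.array([3, 0, 0, -5])

    # Nonbasic arc 4 (from node 2 to node 4, upper bound 2) is at its upper bound,
    # so N_1 u_1 is just 2 times the column of A for arc 4.
    a4  = np.array([0, 1, 0, -1])
    rhs = b - 2 * a4                   # b - N_1 u_1 = [3, -2, 0, -3]
    xB  = Binv @ rhs
    print(xB)                          # [x8, x7, x2, x5] = [0, 2, 3, 3]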

For the general linear program, we compute the dual variables from the expression

    π = -c_B B^{-1}.

A negative sign appears in this expression because the network problem is stated as a minimization. For the network problem, the ith column of B^{-1} is a vector representing the arcs on the path from the slack node to node i (-1 indicates that the arc is on the path, +1 indicates that the mirror arc is on the path, and 0 indicates that the arc is not part of the path). Thus the equation for π_i simply sums the arc costs on the path to node i. Using the matrices for the example we have

    π = (π_1, π_2, π_3, π_4) = -c_B B^{-1} = -(1, -1, 1, 3) [ -1     0    -1    -1 ]
                                                            [  0    -1     0     0 ]
                                                            [  0     0    -1    -1 ]
                                                            [  0     0     0    -1 ]

      = [1, -1, 2, 5].

The computation of the reduced cost, d_k, comes directly from the operation for computing the marginal costs for the general linear program:

    d_k = c_k + π a_k

where a_k is the kth column of A, shown again for the example below.

         a_1   a_2   a_3   a_4   a_5   a_6   a_7   a_8
    A = [  1     1     0     0     0     0     0    -1 ]
        [ -1     0     1     1     0     0    -1     0 ]
        [  0    -1    -1     0     1     1     0     0 ]
        [  0     0     0    -1    -1     0     0     0 ]
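Carrying out these operations for the example basis gives the dual variables and reduced costs at once. The Python sketch below is illustrative only; note that the basic arcs 8, 7, 2 and 5 come out with zero reduced cost, while arc 3 has a negative reduced cost, which is the kind of information the primal simplex uses when selecting an entering arc.

    import numpy as np

    # Basis n_B = [8, 7, 2, 5]: costs of the basic arcs and the basis inverse.
    cB   = np.array([1, -1, 1, 3])
    Binv = np.array([[-1,  0, -1, -1],
                     [ 0, -1,  0,  0],
                     [ 0,  0, -1, -1],
                     [ 0,  0,  0, -1]])
    pi = -cB @ Binv                    # node duals for nodes 1-4
    pi = np.append(pi, 0)              # pi_5 = 0 for the slack node

    # Reduced costs d_k = c_k + pi_i - pi_j for every arc k(i, j) of the example.
    o = [1, 1, 2, 2, 3, 3, 5, 5]
    t = [2, 3, 3, 4, 4, 5, 2, 1]
    c = [5, 1, 2, -1, 3, 1, -1, 1]
    d = [c[k] + pi[o[k] - 1] - pi[t[k] - 1] for k in range(8)]
    print(pi)                          # [ 1 -1  2  5  0]
    print(d)                           # [7, 0, -1, -7, 0, 3, 0, 0]; basic arcs have d = 0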

The Primal Simplex

The first step of the primal simplex is to compute the reduced costs d_k = c_k + π a_k of the nonbasic variables, where a_k is the kth column of the matrix A shown above. For the special case of the network, the column a_k, representing the arc k(i, j), has at most two nonzero entries: a +1 in row i and a -1 in row j. The matrix multiplication π a_k therefore involves only two terms of the vector of dual variables, π_i - π_j. The general case then specializes for the network problem into the simpler form

    d_k = c_k + π_i - π_j.

For the general linear program, the variable to leave the basis is found by computing the vector y_k = B^{-1} a_k, where a_k is the column of A for the entering variable. For the network model a_k, representing arc k(i, j), has at most two nonzero entries: +1 in row i and -1 in row j. Recall that the ith column of B^{-1} describes the path from the slack node to node i in the basis tree, with an element equal to +1 if the arc is traversed in the mirror direction and -1 if it is traversed in the forward direction. Let p_i be the column of B^{-1} describing this path for node i. Similarly p_j describes the path to node j. Then the computation becomes

    y_k = B^{-1} a_k = p_i - p_j.

When paths p_i and p_j have elements in common, they cancel out in the expression above, and y_k simply indicates the arcs on the cycle formed by the entering arc. A 0 element indicates that the basic arc is not on the cycle, a -1 indicates that it is traversed in the forward direction, and a +1 indicates that it is traversed in the mirror direction.

The vector y_k is used for the familiar ratio test that selects the variable to leave the basis in linear programming. Because the entries of y_k are 0, +1, or -1, the ratio test prescribes an integer value of the flow change f whenever the arc flows and upper bounds are integer. When the initial flows are integer, the flows in subsequent iterations therefore remain integer. This is further justification that all basic solutions will be integer, an important result for many applications.

For general linear programming a basis change is accomplished by changing the contents of the set n_B and recomputing the inverse of the basis. It is unnecessary to compute the basis inverse for the network problem, because the information associated with the basis inverse is obtained directly from the basis tree.
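As a final illustration, the cycle and the ratio test can be read off from the path vectors. The Python sketch below (illustrative only) takes arc 3, which has a negative reduced cost at the basis n_B = [8, 7, 2, 5], as the entering arc; for this example both arc 7 (at its upper bound) and arc 8 (at flow zero) block the flow change, so the step is degenerate with f = 0.

    import numpy as np

    # Basis inverse for n_B = [8, 7, 2, 5]; column i of Binv is the path vector p_i for node i.
    nB   = [8, 7, 2, 5]
    Binv = np.array([[-1,  0, -1, -1],
                     [ 0, -1,  0,  0],
                     [ 0,  0, -1, -1],
                     [ 0,  0,  0, -1]])

    # Entering arc 3 runs from node 2 to node 3, so y_3 = p_2 - p_3.
    i, j = 2, 3
    y = Binv[:, i - 1] - Binv[:, j - 1]          # [ 1, -1,  1,  0]
    for arc, entry in zip(nB, y):
        print(arc, {0: "not on cycle", -1: "forward", 1: "mirror"}[int(entry)])

    # Ratio test: increasing the entering flow by f changes the basic flows by -f*y,
    # so forward arcs (y = -1) gain flow and mirror arcs (y = +1) lose flow.
    xB  = np.array([0, 2, 3, 3])                 # current basic flows [x8, x7, x2, x5]
    uB  = np.array([1, 2, 4, 5])                 # their upper bounds
    room_up   = np.where(y < 0, uB - xB, np.inf) # capacity left on forward arcs
    room_down = np.where(y > 0, xB, np.inf)      # flow available on mirror arcs
    f = min(room_up.min(), room_down.min())
    print(f)                                     # 0.0: arcs 7 and 8 block, a degenerate step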