Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1

In this section we learn about duality, which is another way to approach linear programming. In particular, we will see:

- How to define the dual of a (primal) linear program. If the primal is a minimization problem, the dual is a maximization problem.
- That the optimal solution to the dual has cost equal to that of the optimal solution to the primal.
- Complementary slackness: a combinatorial statement of the relationship between the primal and the dual.
- A primal-dual interpretation of the shortest path problem.  2

Convert an LP in general form (below, left) into an LP in standard form (below, right) using the method seen earlier. That is,

  General form                          Standard form
  min c'x                               min ĉ'x̂
  s.t. a_i'x = b_i,  i ∈ M              s.t. Âx̂ = b
       a_i'x ≥ b_i,  i ∈ M̄                   x̂ ≥ 0
       x_j ≥ 0,      j ∈ N
       x_j free,     j ∈ N̄

where

  Â = [ A_j, j ∈ N | (A_j, −A_j), j ∈ N̄ | 0 in rows i ∈ M, −I in rows i ∈ M̄ ]
  x̂ = col( x_j, j ∈ N ; (x_j⁺, x_j⁻), j ∈ N̄ ; x_i^s, i ∈ M̄ )
  ĉ = col( c_j, j ∈ N ; (c_j, −c_j), j ∈ N̄ ; 0 )

Here each free variable x_j, j ∈ N̄, is split as x_j = x_j⁺ − x_j⁻, and a surplus variable x_i^s is subtracted from each inequality row i ∈ M̄ (hence the 0 / −I block of columns).  3

  min ĉ'x̂   s.t.  Âx̂ = b,  x̂ ≥ 0      (with Â, x̂, ĉ as defined on the previous slide)

Recall that a BFS x̂⁰ is optimal iff the corresponding reduced costs satisfy c̄ ≥ 0. If this occurs, there is a corresponding basis B̂ such that

  ĉ' − (ĉ_B' B̂⁻¹) Â ≥ 0,

so π' = ĉ_B' B̂⁻¹ is a feasible solution to π'Â ≤ ĉ', where π ∈ R^m and m = |M| + |M̄| is the number of rows in the original LP. Note that there are three different sets of inequalities, one each for j ∈ N, j ∈ N̄, and i ∈ M̄.  4

The system π'Â ≤ ĉ' from the previous slide expands into three sets of inequalities:

1. If j ∈ N then π'A_j ≤ c_j.
2. If j ∈ N̄ then π'A_j ≤ c_j and π'(−A_j) ≤ −c_j, so π'A_j = c_j.
3. If i ∈ M̄ then (from the surplus column −e_i, which has cost 0) −π_i ≤ 0, so π_i ≥ 0.

Using these relations, given a primal LP in general form we can define a new LP, its dual:

  Primal                        Dual
  min c'x                       max π'b
  a_i'x = b_i,  i ∈ M           π_i free
  a_i'x ≥ b_i,  i ∈ M̄           π_i ≥ 0
  x_j ≥ 0,      j ∈ N           π'A_j ≤ c_j
  x_j free,     j ∈ N̄           π'A_j = c_j   5
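
The table is mechanical enough to write as code. Below is a minimal sketch in Python (not part of these notes); the function name dual_of and the tags '=', '>=', '>=0', 'free' are my own conventions for marking which rows lie in M vs M̄ and which columns lie in N vs N̄.

    import numpy as np

    def dual_of(c, A, b, row_kind, col_kind):
        """Primal (min c'x): row i is a_i'x = b_i if row_kind[i] == '=' (i in M),
        or a_i'x >= b_i if row_kind[i] == '>=' (i in Mbar); column j has x_j >= 0
        if col_kind[j] == '>=0' (j in N), and x_j free otherwise (j in Nbar).
        Returns the dual max pi'b as (b, A^T, c, dual_row_kind, dual_col_kind)."""
        dual_col_kind = ['free' if rk == '=' else '>=0' for rk in row_kind]  # sign of pi_i
        dual_row_kind = ['=' if ck == 'free' else '<=' for ck in col_kind]   # pi'A_j vs c_j
        return np.asarray(b), np.asarray(A).T, np.asarray(c), dual_row_kind, dual_col_kind

    # Primal: min x1 + 2*x2  s.t.  x1 + x2 = 1,  x1 - x2 >= 0,  x1 >= 0,  x2 free.
    obj, AT, rhs, rows, cols = dual_of([1, 2], [[1, 1], [1, -1]], [1, 0],
                                       ['=', '>='], ['>=0', 'free'])
    print(rows, cols)   # ['<=', '='] ['free', '>=0']
    # i.e. dual:  max pi1  s.t.  pi1 + pi2 <= 1,  pi1 - pi2 = 2,  pi1 free,  pi2 >= 0.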

(Primal-dual pair as in the table on the previous slide.)

Theorem: If an LP has an optimal solution, so does its dual and, at optimality, their costs are equal.

Proof: Let x, π be feasible solutions to the primal and dual respectively. Then

  c'x ≥ π'Ax ≥ π'b.   (1)

Since the primal has an optimal solution, (1) shows that the dual cannot be unbounded. We saw before that, if x̂⁰ is optimal in the primal, then π' = ĉ_B'B̂⁻¹ is, by construction, feasible in the dual. This means the dual is bounded and has a feasible solution, so, by the simplex algorithm, the dual has an optimal (bounded) solution, a BFS. The cost of this particular π is

  π'b = ĉ_B'B̂⁻¹b = ĉ'x̂⁰,

the optimal primal cost. Therefore, by (1), π is optimal in the dual.  6
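
As a numerical sanity check of the theorem (a sketch using scipy.optimize.linprog, not part of these notes), we can solve a small primal and its dual separately and confirm that the optimal costs agree. The instance is in the form min c'x, Ax ≥ b, x ≥ 0, whose dual by the table is max π'b, A'π ≤ c, π ≥ 0; the data are made up for illustration.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.0])
    A = np.array([[1.0, 1.0], [1.0, 2.0]])
    b = np.array([3.0, 4.0])

    # Primal: min c'x  s.t.  Ax >= b, x >= 0   (linprog wants <=, so negate the rows)
    primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

    # Dual: max pi'b  s.t.  A'pi <= c, pi >= 0  (linprog minimizes, so minimize -b'pi)
    dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

    print(primal.fun, -dual.fun)            # both 7.0
    assert np.isclose(primal.fun, -dual.fun)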

Theorem: The dual of the dual is the primal.

Proof: Write the dual as a minimization problem

  min (−b)'π
  s.t. (−A_j)'π ≥ −c_j,  j ∈ N
       (−A_j)'π = −c_j,  j ∈ N̄
       π_i free,          i ∈ M
       π_i ≥ 0,           i ∈ M̄

and consider it as a primal. Applying the table to it gives the dual

  max (−c)'x
  s.t. x_j ≥ 0,          j ∈ N
       x_j free,         j ∈ N̄
       (−a_i)'x = −b_i,  i ∈ M
       (−a_i)'x ≤ −b_i,  i ∈ M̄

which, after multiplying through by −1, is exactly the original primal.  7

Theorem: Given a primal-dual pair, exactly one of the situations in the table below occurs. The X's denote combinations which cannot occur.

                          Dual
  Primal            Finite optimum   Unbounded   Infeasible
  Finite optimum          1              X            X
  Unbounded               X              X            3
  Infeasible              X              3            2

Proof: We already saw that (1) is possible. We also saw that if the primal has a finite optimum, then the dual must have a finite optimum, so the X's in the first row are correct. Since the dual of the dual is the primal, we find, by symmetry, that the X's in the first column are correct. We also saw that any feasible solution to the primal upper-bounds the cost of every feasible solution to the dual, so it is impossible for both problems to be simultaneously unbounded. We will see examples of (2) and (3) on the next slide.  8

PRIMAL
  min x₁
  s.t. −x₁ + x₂ ≥ 1
        x₁ − x₂ ≥ 1
       x₁, x₂ free

DUAL
  max π₁ + π₂
  s.t. −π₁ + π₂ = 1
        π₁ − π₂ = 0
       π₁, π₂ ≥ 0

Note that both the primal and the dual are infeasible, giving case 2.

Now modify the primal so that x₁, x₂ ≥ 0. Then

PRIMAL
  min x₁
  s.t. −x₁ + x₂ ≥ 1
        x₁ − x₂ ≥ 1
       x₁, x₂ ≥ 0

DUAL
  max π₁ + π₂
  s.t. −π₁ + π₂ ≤ 1
        π₁ − π₂ ≤ 0
       π₁, π₂ ≥ 0

and the primal is infeasible while the dual is unbounded (case 3). Primal unbounded with dual infeasible follows by exchanging the roles of the primal and the dual.  9
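
Case 3 can also be observed numerically (a sketch using scipy.optimize.linprog, not part of these notes): hand linprog the modified primal and its dual and look at result.status and result.message, which report infeasibility and unboundedness respectively (scipy uses status 2 for infeasible and 3 for unbounded).

    import numpy as np
    from scipy.optimize import linprog

    # Modified primal: min x1  s.t.  -x1 + x2 >= 1,  x1 - x2 >= 1,  x1, x2 >= 0
    A = np.array([[-1.0, 1.0], [1.0, -1.0]])
    primal = linprog([1.0, 0.0], A_ub=-A, b_ub=[-1.0, -1.0], bounds=[(0, None)] * 2)

    # Its dual: max pi1 + pi2  s.t.  -pi1 + pi2 <= 1,  pi1 - pi2 <= 0,  pi >= 0
    dual = linprog([-1.0, -1.0], A_ub=A.T, b_ub=[1.0, 0.0], bounds=[(0, None)] * 2)

    print(primal.status, primal.message)   # expect status 2: primal infeasible
    print(dual.status, dual.message)       # expect status 3: dual unbounded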

Recall the Diet Problem:

  min c'x
  s.t. Ax ≥ r
       x ≥ 0

where

  a_ij = amount of the i-th nutrient in one unit of the j-th food   (i = 1,...,m, j = 1,...,n)
  r_i  = yearly requirement of the i-th nutrient                    (i = 1,...,m)
  x_j  = yearly consumption of the j-th food, in units              (j = 1,...,n)
  c_j  = cost per unit of the j-th food                             (j = 1,...,n)

The dual is

  max π'r
  s.t. π'A ≤ c'
       π ≥ 0

This can be interpreted as follows: a pill manufacturer wants to sell the m nutrients, pill i containing one unit of nutrient i and priced at π_i. The manufacturer wants to maximize revenue. The constraint is that, for each food j, it must be no more expensive for the customer to satisfy the nutritional content of that food with pills than by buying the food itself.  10
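
To make the pill interpretation concrete, here is a small made-up instance (a sketch, not data from the notes): two nutrients and three foods. The optimal π gives the highest pill prices at which every food costs at least as much as its nutrient content bought in pill form, and the maximum pill revenue equals the minimum diet cost.

    import numpy as np
    from scipy.optimize import linprog

    # a_ij = amount of nutrient i in one unit of food j (made-up numbers)
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0]])
    r = np.array([4.0, 5.0])          # nutrient requirements
    c = np.array([3.0, 3.0, 1.0])     # food costs

    diet  = linprog(c, A_ub=-A, b_ub=-r, bounds=[(0, None)] * 3)    # min c'x, Ax >= r, x >= 0
    pills = linprog(-r, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)   # max pi'r, A'pi <= c, pi >= 0

    print(diet.fun, -pills.fun)       # equal optimal costs (9.0 here)
    print(pills.x)                    # the pill prices pi, one per nutrient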

Complementary Slackness Conditions

Theorem: A pair x, π, respectively feasible in a primal-dual pair, is optimal iff

  u_i = π_i (a_i'x − b_i) = 0  for all i,   (2)
  v_j = (c_j − π'A_j) x_j = 0  for all j.   (3)

Proof: By the definitions of the primal and dual, u_i ≥ 0 for all i and v_j ≥ 0 for all j. Now set

  u = Σ_i u_i ≥ 0   and   v = Σ_j v_j ≥ 0.

Then u = 0 iff (2) holds, and v = 0 iff (3) holds. Now note that

  u + v = c'x − π'b.

So (2) and (3) are true iff c'x = π'b, which is true iff both x and π are optimal.  11
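
These conditions are easy to check mechanically. Below is a small helper (a sketch, not part of these notes) for a primal min c'x whose rows are a_i'x ≥ b_i or a_i'x = b_i; it returns the products u_i and v_j, all of which must vanish at an optimal pair. It is exercised on the optimal pair computed on the next two slides.

    import numpy as np

    def slackness(c, A, b, x, pi):
        """u_i = pi_i (a_i'x - b_i)  and  v_j = (c_j - pi'A_j) x_j  -- conditions (2), (3)."""
        c, A, b, x, pi = (np.asarray(t, dtype=float) for t in (c, A, b, x, pi))
        u = pi * (A @ x - b)
        v = (c - pi @ A) * x
        return u, v

    A = [[3, 2, 1, 0, 0], [5, 1, 1, 1, 0], [2, 5, 1, 0, 1]]
    u, v = slackness([1] * 5, A, [1, 3, 4], [0, 0.5, 0, 2.5, 1.5], [-2.5, 1, 1])
    print(u, v)   # both all-zero, so the pair is optimal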

Example

Primal:
  min x₁ + x₂ + x₃ + x₄ + x₅
  s.t. 3x₁ + 2x₂ + x₃           = 1
       5x₁ +  x₂ + x₃ + x₄      = 3
       2x₁ + 5x₂ + x₃      + x₅ = 4
       x_i ≥ 0 for all i

Dual:
  max π₁ + 3π₂ + 4π₃
  s.t. 3π₁ + 5π₂ + 2π₃ ≤ 1
       2π₁ +  π₂ + 5π₃ ≤ 1
        π₁ +  π₂ +  π₃ ≤ 1
              π₂       ≤ 1
                    π₃ ≤ 1
       π_i free for all i

Because the primal is in standard form (all constraints are equalities), we automatically have u_i = π_i(a_i'x − b_i) = 0 for all i. We now need to satisfy v_j = (c_j − π'A_j)x_j = 0 for all j.  12

(Primal and dual as on the previous slide.)

We need to satisfy v_j = (c_j − π'A_j)x_j = 0 for all j. The optimal primal solution is (0, 1/2, 0, 5/2, 3/2), with cost 9/2, so we need equality in the 2nd, 4th and 5th dual constraints, i.e.,

  c₂ − π'A₂ = 0,  i.e.  2π₁ + π₂ + 5π₃ = 1
  c₄ − π'A₄ = 0,  i.e.         π₂      = 1
  c₅ − π'A₅ = 0,  i.e.              π₃ = 1

which has solution (π₁, π₂, π₃) = (−5/2, 1, 1). Note that this π has the same cost 9/2 as the primal and is therefore optimal.  13
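
The computation is easy to reproduce (a sketch using numpy, not part of these notes): solve the 3×3 system singled out by complementary slackness, then confirm that the resulting π is dual feasible and has cost 9/2.

    import numpy as np

    # Equality in the 2nd, 4th and 5th dual constraints:
    #   2*pi1 + pi2 + 5*pi3 = 1,   pi2 = 1,   pi3 = 1
    M = np.array([[2.0, 1.0, 5.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    pi = np.linalg.solve(M, np.array([1.0, 1.0, 1.0]))
    print(pi)                               # [-2.5  1.   1. ]

    # Dual feasibility: pi'A_j <= 1 for every column A_j of the primal
    A = np.array([[3, 2, 1, 0, 0], [5, 1, 1, 1, 0], [2, 5, 1, 0, 1]], dtype=float)
    print(pi @ A)                           # every entry is <= 1
    print(pi @ np.array([1.0, 3.0, 4.0]))   # dual cost 4.5 = 9/2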

The Shortest Path problem and its dual

Let G = (V, E) be a directed graph in which every edge e_j ∈ E has cost c_j ≥ 0. The shortest path problem is to find a directed path of minimum cost from a source s to a sink t. We will now see how to write this as an LP and then derive its dual.

The feasible set of this problem is

  F = { sequences P = (e_{j1}, ..., e_{jk}) : P is a directed path from s to t in G }

with cost c(P) = Σ_{i=1}^{k} c_{ji}.

Now define the node-incidence matrix A = [a_ij], where

  a_ij = +1 if arc e_j leaves node i
         −1 if arc e_j enters node i
          0 otherwise

(i = 1, ..., |V| and j = 1, ..., |E|).  14

  a_ij = +1 if arc e_j leaves node i
         −1 if arc e_j enters node i
          0 otherwise

(i = 1, ..., |V| and j = 1, ..., |E|)

[Figure: a directed graph with nodes s, a, b, t and arcs e₁ = (s,a), e₂ = (s,b), e₃ = (a,b), e₄ = (a,t), e₅ = (b,t), labeled with their costs.]

           e₁   e₂   e₃   e₄   e₅
     s  [  +1   +1    0    0    0 ]
A =  t  [   0    0    0   −1   −1 ]
     a  [  −1    0   +1   +1    0 ]
     b  [   0   −1   −1    0   +1 ]

  15

To create an LP, introduce f_j to denote the flow through arc e_j. The intuition is that we would like to send one unit of flow from s to t. The cost of one unit of flow through e_j is c_j, so a shortest s-t path is one that minimizes the total cost. We need flow conservation at every node v_i other than s and t; this corresponds to a_i'f = 0. On the other hand, one net unit of flow leaves s and one net unit enters t, so the problem we want to solve is

  min c'f
  s.t. Af = (+1, −1, 0, ..., 0)'
       f ≥ 0

where the +1 corresponds to row s and the −1 corresponds to row t. Note that it is possible for the f_j to take on non-integer values, but it is easy to see that there is an optimal solution in which each f_j = 1 (arc in the shortest path) or f_j = 0 (arc not in the shortest path). We will prove this more formally later when discussing unimodular matrices.  16

  min c'f
  s.t. Af = (+1, −1, 0, ..., 0)'
       f ≥ 0

where the +1 corresponds to row s and the −1 corresponds to row t.

Note that the |V| equations are redundant (every column of A contains one +1 and one −1, so the rows sum to 0) and we can therefore leave out any one equation. It is most convenient to leave out the row-t equation, since this leaves a nonnegative right-hand-side column in our simplex tableau.  17
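
Slides 14-17 can be put together in a few lines (a sketch using scipy.optimize.linprog, not part of these notes): build the node-incidence matrix of the example graph of slide 15, using the arc costs that appear in the tableau on the next slide, drop the redundant row t, and solve the LP. The optimal flow is integral and picks out the shortest path.

    import numpy as np
    from scipy.optimize import linprog

    nodes = ['s', 't', 'a', 'b']
    arcs = [('s', 'a'), ('s', 'b'), ('a', 'b'), ('a', 't'), ('b', 't')]   # e1..e5
    cost = [1.0, 2.0, 2.0, 3.0, 1.0]

    # Node-incidence matrix: +1 where the arc leaves the node, -1 where it enters.
    A = np.zeros((len(nodes), len(arcs)))
    for j, (u, v) in enumerate(arcs):
        A[nodes.index(u), j] = +1
        A[nodes.index(v), j] = -1

    b = np.array([1.0, -1.0, 0.0, 0.0])   # +1 in row s, -1 in row t

    keep = [0, 2, 3]                      # rows s, a, b: drop the redundant row t
    res = linprog(cost, A_eq=A[keep], b_eq=b[keep], bounds=[(0, None)] * len(arcs))
    print(res.x, res.fun)                 # flow (0, 1, 0, 0, 1) of cost 3: the path s -> b -> t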

Example

[Figure: the example graph of slide 15 with its arc costs.]

Initial tableau (rows s, a, b; row t dropped):

            f₁   f₂   f₃   f₄   f₅
  z =  0 |   1    2    2    3    1
       1 |   1    1    0    0    0
       0 |  −1    0    1    1    0
       0 |   0   −1   −1    0    1

Now create a basis with {f₁, f₄, f₅}:

            f₁   f₂   f₃   f₄   f₅
  z =  4 |   0   −1    0    0    0
  f₁ = 1 |   1    1    0    0    0
  f₄ = 1 |   0    1    1    1    0
  f₅ = 0 |   0   −1   −1    0    1

A basis corresponds to a set of arcs containing an s-t path. Degenerate elements in the basis (here f₅ = 0) are the non-path arcs.  18

[Figure: the example graph with the entering arc e₂ and the leaving arc e₄ marked.]

            f₁   f₂   f₃   f₄   f₅
  z =  4 |   0   −1    0    0    0
  f₁ = 1 |   1    1    0    0    0
  f₄ = 1 |   0    1    1    1    0
  f₅ = 0 |   0   −1   −1    0    1

Pivoting lets us remove f₄ from the basis and add f₂. Since c̄ ≥ 0 in the resulting tableau, the solution (0, 1, 0, 0, 1) is feasible and optimal, so the path {f₂, f₅} is optimal.

            f₁   f₂   f₃   f₄   f₅
  z =  3 |   0    0    1    1    0
  f₁ = 0 |   1    0   −1   −1    0
  f₂ = 1 |   0    1    1    1    0
  f₅ = 1 |   0    0    0    1    1

  19
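
The pivot can be reproduced with a little linear algebra (a sketch, not part of these notes): for a basis B take π' = c_B'B⁻¹ and reduced costs c̄' = c' − π'A. With basis {f₁, f₄, f₅} the column of f₂ has reduced cost −1 and enters; with basis {f₁, f₂, f₅} all reduced costs are nonnegative.

    import numpy as np

    # Rows s, a, b of the node-incidence matrix (row t dropped) and the arc costs.
    A = np.array([[ 1,  1,  0,  0,  0],
                  [-1,  0,  1,  1,  0],
                  [ 0, -1, -1,  0,  1]], dtype=float)
    b = np.array([1.0, 0.0, 0.0])
    c = np.array([1.0, 2.0, 2.0, 3.0, 1.0])

    def report(basis):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)          # values of the basic flows
        pi = c[basis] @ np.linalg.inv(B)     # pi' = c_B' B^{-1}
        return x_B, c - pi @ A               # basic solution and reduced costs

    print(report([0, 3, 4]))   # basis {f1,f4,f5}: flows (1, 1, 0), reduced costs (0, -1, 0, 0, 0)
    print(report([0, 1, 4]))   # basis {f1,f2,f5}: flows (0, 1, 1), reduced costs (0, 0, 1, 1, 0)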

Primal:
  min c'f
  s.t. Af = (+1, −1, 0, ..., 0)'
       f ≥ 0

Dual:
  max π_s − π_t
  s.t. π'A ≤ c'
       π unrestricted

Since the column of A corresponding to arc e = (i, j) has a +1 in row i, a −1 in row j and 0 elsewhere, the dual inequalities can be written as

  π_i − π_j ≤ c_ij   for each (i, j) ∈ E.   (4)

The complementary slackness conditions then say: a path f and an assignment π are jointly optimal iff (i) each arc e = (i, j) on the shortest path, i.e., with f_e > 0, satisfies π_i − π_j = c_ij, and (ii) each arc with π_i − π_j < c_ij has f_(i,j) = 0, i.e., e = (i, j) is not on the shortest path.  20

A path f and an assignment π are jointly optimal iff (i) each arc e = (i, j) on the shortest path, i.e., with f_e > 0, satisfies π_i − π_j = c_ij, and (ii) each arc with π_i − π_j < c_ij has f_(i,j) = 0, i.e., is not on the shortest path.

Let the shortest s-t path found by simplex be e_k, e_{k−1}, ..., e_1, where e_i = (u_i, v_i), u_k = s, v_1 = t and v_i = u_{i−1}. If v is a node on this path then, summing the equalities π_{u_i} − π_{v_i} = c_{e_i} along the path from v to t, we see that π_v − π_t is the shortest distance from v to t. In particular, π_s − π_t, the objective maximized by the dual, is exactly the shortest distance from s to t.  21
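
This is easy to verify on the example (a sketch, not part of these notes): take π_v to be the shortest distance from v to t, computed below by Bellman-Ford relaxation, and check that π is dual feasible, that the slack c_uv − (π_u − π_v) is zero on every arc of the shortest path (s,b), (b,t), and that π_s − π_t = 3, the shortest s-t distance. (Some arcs off the path also happen to be tight, which complementary slackness allows.)

    arcs = {('s', 'a'): 1, ('s', 'b'): 2, ('a', 'b'): 2, ('a', 't'): 3, ('b', 't'): 1}
    nodes = ['s', 'a', 'b', 't']

    # Bellman-Ford distances to t; these serve as the dual potentials pi_v.
    pi = {v: float('inf') for v in nodes}
    pi['t'] = 0
    for _ in range(len(nodes) - 1):
        for (u, v), cuv in arcs.items():
            pi[u] = min(pi[u], cuv + pi[v])

    print(pi)                                # {'s': 3, 'a': 3, 'b': 1, 't': 0}
    print(pi['s'] - pi['t'])                 # 3 = shortest s-t distance = optimal dual objective
    for (u, v), cuv in arcs.items():
        print(u, v, cuv - (pi[u] - pi[v]))   # slacks are >= 0; 0 on (s,b) and (b,t)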

[Figure: the example graph with its arc costs and an optimal dual solution π_s = 3, π_a = 3, π_b = 1, π_t = 0 written at the nodes.]

The figure shows an optimal π corresponding to the shortest path (s, b), (b, t). Note that when we calculate π we don't initially have a value for π_t, but it can be calculated from the fact that f_{(b,t)} = 1 > 0 and the complementary slackness conditions, which force π_t = π_b − c_{b,t} = 1 − 1 = 0.  22

Schematically:

  Initial tableau          Final tableau
  cost row:   c_j    →     c_j − π_j
  identity:    I     →     B⁻¹

Assume, WLOG, that simplex starts with an identity matrix on the left side, e.g., the columns of slack or artificial variables. At the end of the algorithm we have, in effect, multiplied the original matrix by B⁻¹, where B is the set of columns of the original matrix corresponding to the final optimal BFS. At optimality, the cost row is

  0 ≤ c̄' = c' − (c_B'B⁻¹)A = c' − π'A,

where we have already seen that π' = c_B'B⁻¹ is an optimal solution to the dual. So, for the first m (identity) columns,

  c̄_j = c_j − π_j   and   π_j = c_j − c̄_j,   j = 1, ..., m,

and we can read off an optimal dual solution from the final tableau.  23

Example: In the section on simplex we saw that the tableau on page 13 is equivalent to the two-phase tableau

              x_a1  x_a2  x_a3    x₁    x₂    x₃    x₄    x₅
  z  =   0 |    0     0     0     1     1     1     1     1
  ξ  =   8 |    0     0     0   −10    −8    −3    −1    −1
  x_a1 = 1 |    1     0     0     3     2     1     0     0
  x_a2 = 3 |    0     1     0     5     1     1     1     0
  x_a3 = 4 |    0     0     1     2     5     1     0     1

Here the artificial variables have cost c_j = 0 in the real cost row z. We also saw that at optimality this transforms into

              x_a1  x_a2  x_a3    x₁    x₂    x₃    x₄    x₅
  z  = 9/2 |  5/2    −1    −1    3/2    0    3/2    0     0
  ξ  =   0 |    1     1     1     0     0     0     0     0
  x₂ = 1/2 |  1/2     0     0    3/2    1    1/2    0     0
  x₄ = 5/2 | −1/2     1     0    7/2    0    1/2    1     0
  x₅ = 3/2 | −5/2     0     1  −11/2    0   −3/2    0     1

Reading the artificial (identity) columns of the z row, π_j = c_j − c̄_j = 0 − c̄_j, so an optimal dual solution is π = (−5/2, 1, 1), which is exactly what we derived on page 13.  24
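
The same numbers fall out of π' = c_B'B⁻¹ directly (a sketch using numpy, not part of these notes): the optimal basis of this example is {x₂, x₄, x₅}, so c_B = (1, 1, 1) and B consists of those three columns of A.

    import numpy as np

    A = np.array([[3, 2, 1, 0, 0],
                  [5, 1, 1, 1, 0],
                  [2, 5, 1, 0, 1]], dtype=float)
    c = np.array([1.0, 1.0, 1.0, 1.0, 1.0])

    basis = [1, 3, 4]                    # columns of x2, x4, x5 (0-indexed)
    B = A[:, basis]
    pi = c[basis] @ np.linalg.inv(B)     # pi' = c_B' B^{-1}
    print(pi)                            # [-2.5  1.   1. ]

    # Equivalently, read it off the final tableau: on the artificial (identity) columns
    # pi_j = c_j - cbar_j = 0 - cbar_j, and cbar = (5/2, -1, -1) there, giving the same pi.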