"SYMMETRIC" PRIMAL-DUAL PAIR


"SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax = b st A T y c T x y unrestricted IN GENERAL, PRIMAL Objective: Min cx DUAL Objective: Max y T b Variable: x j Constraint: (A j ) T y c j Variable: x j unrestricted (A j ) T y = c j Constraint: (A i )x b i Variable: y i (A i )x = b i Variable: y i unrestricted Coefficient matrix: A RHS Vector: b Cost Vector: c Coefficient matrix: A T Cost Vector: b RHS Vector: c 1

WEAK DUALITY THEOREM: Consider the primal-dual pair:

(P) Minimize cx, st Ax ≥ b, x ≥ 0
(D) Maximize y^T b, st A^T y ≤ c^T, y ≥ 0

Suppose x is primal feasible and y is dual feasible. Then cx ≥ y^T b.

Proof: x is feasible in (P), so Ax ≥ b, x ≥ 0. Similarly, y is feasible in (D), so A^T y ≤ c^T and y ≥ 0.

Thus Ax ≥ b and y ≥ 0, therefore y^T Ax ≥ y^T b   (1)

Also, A^T y ≤ c^T and x ≥ 0 ⟹ x^T(A^T y) ≤ x^T c^T ⟹ y^T Ax ≤ cx   (2)

From (1) and (2): cx ≥ y^T Ax ≥ y^T b.

Some corollaries (x* and y* are optimum vectors):
1) The primal objective for any primal-feasible x is ≥ (y*)^T b.
2) The dual objective for any dual-feasible y is ≤ cx*.
3) If x and y are feasible in (P) and (D) respectively, and cx = y^T b, then x and y are optimal in (P) and (D).
4) If (P) is feasible and unbounded, then (D) is infeasible.
5) If (D) is feasible and unbounded, then (P) is infeasible.
6) If one of the problems is infeasible, then the other problem is either (1) infeasible, or (2) feasible but with an unbounded objective function.
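The inequality chain (1)-(2) can be verified numerically on a small illustrative instance of my own (not from the notes):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0])

x = np.array([4.0, 0.0])      # primal feasible: Ax >= b, x >= 0
y = np.array([1.0, 0.5])      # dual feasible:  A'y <= c, y >= 0

assert np.all(A @ x >= b) and np.all(x >= 0)
assert np.all(A.T @ y <= c) and np.all(y >= 0)

# cx >= y'Ax >= y'b, exactly the chain (1)-(2) in the proof
print(c @ x, y @ (A @ x), y @ b)   # 8.0 6.0 5.0
```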

STRONG DUALITY THEOREM: Consider the primal-dual pair

(P) Minimize cx, st Ax ≥ b, x ≥ 0
(D) Maximize y^T b, st A^T y ≤ c^T, y ≥ 0

If either one of (P) or (D) has an optimal solution, then so does the other, and their optimal values are equal, i.e., cx* = (y*)^T b.

PROOF: Assume without loss of generality that the primal has an optimal solution x* and that it is in standard form, so that the dual variables are unrestricted in sign. We know that the optimal solution x* is one of the basic feasible solutions of (P), so that x_B = (A_B)^{-1} b, and that optimality means z_j − c_j = c_B(A_B)^{-1}A_j − c_j ≤ 0 for all j.

Let (y*)^T = c_B(A_B)^{-1} (i.e., y* is the optimal simplex multiplier vector). Then

(y*)^T A = c_B(A_B)^{-1} [A_B | A_N] = [c_B | c_B(A_B)^{-1}A_N] ≤ [c_B | c_N] = c.

Thus (y*)^T A ≤ c, i.e., A^T y* ≤ c^T. So y* is feasible in (D). Moreover, the value of the primal objective at x* is cx* = c_B x_B = c_B(A_B)^{-1} b, while the dual objective at y* is (y*)^T b = c_B(A_B)^{-1} b. So the objectives of (P) and (D) have the same value at x* and y* respectively. Then Corollary 3 implies that they must also be optimal for (P) and (D).
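The construction in the proof — reading the dual optimum off the optimal basis as (y*)^T = c_B(A_B)^{-1} — can be sketched in a few lines on a small standard-form instance of my own (not from the notes):

```python
import numpy as np

# Standard form: min cx st Ax = b, x >= 0 (surplus columns appended)
A = np.array([[1.0, 1.0, -1.0, 0.0],
              [1.0, 2.0, 0.0, -1.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0, 0.0, 0.0])

B = [0, 1]                          # optimal basis (x1, x2)
A_B = A[:, B]
x_B = np.linalg.solve(A_B, b)       # (A_B)^{-1} b = [2, 1]
y = c[B] @ np.linalg.inv(A_B)       # simplex multipliers = [1, 1]

assert np.all(A.T @ y <= c + 1e-9)  # y* is dual feasible
# ... and the two objectives coincide (strong duality):
print(round(c[B] @ x_B, 6), round(y @ b, 6))   # 7.0 7.0
```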

LEMMA OF FARKAS

FARKAS' LEMMA: Let A ∈ R^{m×n}, b ∈ R^m, x ∈ R^n, y ∈ R^m. Then the following statements are equivalent to each other:

A. Every y with y^T A ≤ 0 satisfies y^T b ≤ 0.
B. The system Ax = b, x ≥ 0 is feasible.

PROOF: Consider the primal-dual pair (note that this merely uses 0^T for c):

(P) Minimize 0^T x, st Ax = b, x ≥ 0
(D) Maximize y^T b, st A^T y ≤ 0, y unrestricted

If statement A is true, then for any y that is feasible in (D), y^T b ≤ 0. Thus the optimal value of (D) can never exceed 0. Since the vector y = 0 is feasible and yields a value of 0 for the objective, it MUST therefore be optimal for (D). This implies that (D) is feasible and bounded; therefore it has an optimal solution, and by the strong duality theorem (P) is also feasible, i.e., Ax = b, x ≥ 0 is feasible. This proves that A implies B.

Now consider the converse, i.e., assume that Ax = b is satisfied for some x ≥ 0. If A^T y ≤ 0 for some arbitrary y, then x^T A^T y ≤ 0 (since x ≥ 0), i.e., (Ax)^T y = b^T y ≤ 0. This proves that B implies A.
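Direction B ⟹ A is easy to probe numerically: build b inside the cone {Ax : x ≥ 0} and sample y's. This is an illustrative sketch (my own made-up instance), not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
x0 = np.array([1.0, 2.0, 0.5])       # x0 >= 0
b = A @ x0                           # so Ax = b, x >= 0 is feasible

for _ in range(10000):
    y = rng.normal(size=2)
    if np.all(y @ A <= 0):           # y^T A <= 0 ...
        assert y @ b <= 1e-12        # ... forces y^T b <= 0
print("Farkas condition held on all sampled y")
```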

Graphical Interpretation of Farkas' Lemma

Suppose H_j is the hyperplane through the origin that is orthogonal to the column A_j, and let H_j⁻ be the closed halfspace on the side of H_j that does not contain A_j. Then

y^T A ≤ 0  ⟺  y ∈ ∩_j H_j⁻.

[Figure: hyperplanes H_1, H_2, H_3 orthogonal to the columns A_1, A_2, A_3, together with two candidate right-hand sides b and b′.]

The statement "y^T A ≤ 0 ⟹ y^T b ≤ 0" simply says that ∩_j H_j⁻ ⊆ H_b⁻. Note that b makes an angle ≥ 90° with all y ∈ ∩_j H_j⁻ and can be expressed as a nonnegative linear combination of the columns of A. On the other hand, b′ does not make an angle ≥ 90° with all y ∈ ∩_j H_j⁻ and cannot be expressed as a nonnegative linear combination of the columns of A.

GORDAN'S THEOREM OF THE ALTERNATIVE

This is an alternative version/variation of Farkas' Lemma. Exactly one of the following systems is feasible:

I.  Ax = b, x ≥ 0
II. A^T y ≤ 0, b^T y > 0

PROOF: Suppose that x̄ satisfies I and ȳ satisfies II. Then

Ax̄ = b ⟹ ȳ^T A x̄ = ȳ^T b > 0   (1)

Similarly, A^T ȳ ≤ 0 and x̄ ≥ 0 ⟹ x̄^T A^T ȳ = (Ax̄)^T ȳ = b^T ȳ ≤ 0   (2)

Obviously, (1) and (2) contradict each other; hence Systems I and II cannot both be feasible.

Now suppose that both I and II are infeasible. Then the infeasibility of II implies that y^T b ≤ 0 for all y such that A^T y ≤ 0. Then by Farkas' Lemma, I has to be feasible, which contradicts our assumption. Hence both I and II cannot be infeasible.

COMPLEMENTARY SLACKNESS THEOREM: Vectors x̄ and ȳ which are feasible in (P) and (D) respectively are optimal in (P) and (D) if, and only if,

- whenever a constraint in one problem is inactive, the corresponding variable in the other problem is zero, and
- whenever a variable in one problem is nonzero, the corresponding constraint in the other problem is active.

PROOF: Introducing surpluses u and slacks v, we have

(P) Minimize cx, st Ax − u = b, x ≥ 0, u ≥ 0
(D) Maximize y^T b, st A^T y + v = c^T, y ≥ 0, v ≥ 0

Consider the vectors [x̄, ū] and [ȳ, v̄] feasible in (P) and (D) respectively. Then

cx̄ − ȳ^T b = [A^T ȳ + v̄]^T x̄ − ȳ^T [Ax̄ − ū] = ȳ^T A x̄ + v̄^T x̄ − ȳ^T A x̄ + ȳ^T ū = v̄^T x̄ + ȳ^T ū   (1)

First, suppose that x̄ and ȳ are also optimal. Then cx̄ = ȳ^T b, so from (1)

v̄^T x̄ + ȳ^T ū = 0, i.e., ∑_j v̄_j x̄_j + ∑_i ȳ_i ū_i = 0.

Since every variable in every term in the above summation is restricted to be nonnegative, each and every term HAS to be equal to zero. Thus

either v̄_j = 0 or x̄_j = 0 (or both = 0) for all j;
either ȳ_i = 0 or ū_i = 0 (or both = 0) for all i.

This proves the first part of the theorem. Conversely, suppose the above conditions hold, so that v̄^T x̄ + ȳ^T ū = 0. From (1), we then have cx̄ = ȳ^T b, and since x̄ and ȳ are feasible in their respective problems, they must also be optimal. This proves the second part of the theorem.

KARUSH-KUHN-TUCKER LP OPTIMALITY CONDITIONS

Given a linear program in standard form, x is an optimal solution to the problem if, and only if, there exist vectors y and v such that

1. Ax = b, x ≥ 0 (primal feasibility)
2. A^T y + v = c^T, v ≥ 0 (dual feasibility)
3. v^T x = 0 (complementary slackness)

Note that in this case y is an optimal solution to the dual (this follows from the complementary slackness theorem). The (primal) simplex method we have seen thus far maintains (1) and (3) and seeks to attain (2). We will later look at the dual simplex method, which maintains (2) and (3) and seeks to attain (1).
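The three KKT conditions can be checked mechanically. A minimal sketch on a small standard-form instance of my own (not from the notes):

```python
import numpy as np

A = np.array([[1.0, 1.0, -1.0, 0.0],
              [1.0, 2.0, 0.0, -1.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0, 0.0, 0.0])

x = np.array([2.0, 1.0, 0.0, 0.0])   # candidate primal optimum
y = np.array([1.0, 1.0])             # candidate dual solution
v = c - A.T @ y                      # dual slacks

assert np.allclose(A @ x, b) and np.all(x >= 0)   # 1. primal feasibility
assert np.all(v >= -1e-12)                        # 2. dual feasibility
assert abs(v @ x) < 1e-12                         # 3. complementary slackness
print("x is optimal; v =", v)                     # v = [0. 0. 1. 1.]
```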

A. OBJECTIVE FUNCTION SENSITIVITY ANALYSIS

Consider the LP: Minimize cx, st Ax = b, x ≥ 0. Suppose the coefficient of x_k in the objective function changes from c_k to c_k′ = c_k + δ_k. Under what restrictions on δ_k will the optimal basis remain unchanged?

CASE 1: x_k is NOT in the optimal basis, i.e., k ∈ N. Then x_k will remain nonbasic as long as its reduced cost stays nonpositive:

π A_k − (c_k + δ_k) ≤ 0,

where π = c_B(A_B)^{-1} is the optimal simplex multiplier vector (which is unaffected by the change in c_k, because k ∉ B and hence c_k does not form part of c_B). Let π A_k = c_B(A_B)^{-1}A_k = z_k. Then the basis is unchanged as long as z_k ≤ c_k + δ_k = c_k′. In other words, the range of values of c_k′ for which the basis is unchanged is z_k ≤ c_k′ < ∞.

CASE 2: x_k IS in the optimal basis, i.e., k ∈ B. This case is a little more complicated. Suppose that x_k is the p-th basic variable. For B to remain unchanged we need

z̄_j − c_j ≤ 0 for all nonbasic j,

where z̄_j = π̄ A_j and π̄ is the (modified) simplex multiplier vector given by π̄ = c̄_B(A_B)^{-1}, with

c̄_B = c_B + [0 ... 0 δ_k 0 ... 0]   (δ_k in the p-th entry).

Then

z̄_j − c_j = (c_B + δ_k e_p)(A_B)^{-1} A_j − c_j = (z_j − c_j) + δ_k y_{pj},

where y_{pj} is the p-th entry of the updated version of column A_j (= y_j = (A_B)^{-1} A_j). Thus the basis remains optimal as long as, for all nonbasic j ≠ k:

δ_k ≤ (c_j − z_j)/y_{pj}  if y_{pj} > 0,
δ_k ≥ (c_j − z_j)/y_{pj}  if y_{pj} < 0.

This leads to

maximum_{j: y_{pj} < 0} {(c_j − z_j)/y_{pj}} ≤ δ_k ≤ minimum_{j: y_{pj} > 0} {(c_j − z_j)/y_{pj}}.
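The Case 2 computation is a short loop over the nonbasic columns. A sketch for the minimization convention above, on a small illustrative instance of my own (not from the notes):

```python
import numpy as np

A = np.array([[1.0, 1.0, -1.0, 0.0],
              [1.0, 2.0, 0.0, -1.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0, 0.0, 0.0])
B, N = [0, 1], [2, 3]                # optimal basis and nonbasic set

AB_inv = np.linalg.inv(A[:, B])
pi = c[B] @ AB_inv                   # simplex multipliers
p = 0                                # ranging c of the 1st basic variable

lo, hi = -np.inf, np.inf
for j in N:
    y_pj = (AB_inv @ A[:, j])[p]
    zc = pi @ A[:, j] - c[j]         # z_j - c_j  (<= 0 at the optimum)
    if y_pj > 1e-12:
        hi = min(hi, -zc / y_pj)
    elif y_pj < -1e-12:
        lo = max(lo, -zc / y_pj)
print(round(lo, 6), round(hi, 6))    # -0.5 1.0, i.e. 1.5 <= c_1' <= 3.0
```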

B. RIGHT HAND SIDE

Once again, consider the LP: Minimize cx, st Ax = b, x ≥ 0. Suppose the RHS of the i-th constraint changes from b_i to b_i′ = b_i + δ_i. Under what restrictions on δ_i will the optimal basis remain unchanged?

Note that the optimality conditions are NOT affected, since z_j − c_j = c_B(A_B)^{-1}A_j − c_j does not involve the vector b in any way and is therefore unaffected for all j. Feasibility of the current basis may be affected though, since x_B = (A_B)^{-1}b changes. In what range of values of b_i′ does the current basis remain feasible? We have

x_B′ = (A_B)^{-1} b′ = (A_B)^{-1}(b + δ_i e_i) = (A_B)^{-1} b + γ^i δ_i = x_B + γ^i δ_i

(where γ^i is the i-th column of (A_B)^{-1}). For feasibility of the current basis, we require

x_B′ = x_B + γ^i δ_i ≥ 0, i.e., (x_B)_j + γ^i_j δ_i ≥ 0 for j = 1, 2, ..., m,

i.e.,

δ_i ≥ −(x_B)_j / γ^i_j  if γ^i_j > 0, and δ_i ≤ −(x_B)_j / γ^i_j  if γ^i_j < 0,

which leads to

maximum_{j: γ^i_j > 0} {−(x_B)_j/γ^i_j} ≤ δ_i ≤ minimum_{j: γ^i_j < 0} {−(x_B)_j/γ^i_j}.
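The RHS range is the same kind of ratio test, now on x_B against the i-th column of (A_B)^{-1}. A sketch with made-up data (mine, not the notes'):

```python
import numpy as np

A_B = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 4.0])
AB_inv = np.linalg.inv(A_B)
x_B = AB_inv @ b                      # [2, 1]

i = 0                                 # perturb b_1
gamma = AB_inv[:, i]                  # i-th column of (A_B)^{-1} = [2, -1]

lo, hi = -np.inf, np.inf
for xj, gj in zip(x_B, gamma):
    if gj > 1e-12:
        lo = max(lo, -xj / gj)
    elif gj < -1e-12:
        hi = min(hi, -xj / gj)
print(round(lo, 6), round(hi, 6))     # -1.0 1.0, i.e. 2 <= b_1' <= 4
```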

C. ADDING A NEW VARIABLE

Suppose we add a new variable x_{n+1} with cost c_{n+1} and column A_{n+1}. Without re-solving the problem we can determine whether x_{n+1} will be attractive to enter the basis. First we find z_{n+1} − c_{n+1} = π A_{n+1} − c_{n+1}. If this quantity is nonpositive (for a minimization), then x_{n+1} = 0 at the optimum and the current optimum solution is unchanged. On the other hand, if z_{n+1} − c_{n+1} > 0, then x_{n+1} is introduced into the basis and we continue until we find the new optimal solution.

D. ADDING A NEW CONSTRAINT

If the current optimal solution is feasible in the new constraint, then the optimum solution is unchanged. On the other hand, if the current optimum is infeasible in the new constraint, then the new constraint cuts the current optimum solution (and other parts of the original feasible region) out of the feasible region. A new solution may be found from the current basis by using the dual simplex method — we will study this later on.
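Pricing out a new column takes one inner product with π. A minimal sketch (hypothetical column and cost, my own small basis):

```python
import numpy as np

A_B = np.array([[1.0, 1.0], [1.0, 2.0]])
c_B = np.array([2.0, 3.0])
pi = c_B @ np.linalg.inv(A_B)         # simplex multipliers = [1, 1]

A_new, c_new = np.array([1.0, 1.0]), 1.5   # hypothetical new variable
z_minus_c = pi @ A_new - c_new
print(round(z_minus_c, 9))            # 0.5 > 0: the new column would enter
```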

EXAMPLE: Consider the LP

MAX Z = [-3  3  2  -2  -1  4  0  0] x

st  [ 1  -1   1  -1  -4   2  -1   0 ]       [ 4 ]
    [-3   3   1  -1  -2   0   0   1 ]  x =  [ 6 ]
    [ 0   0  -1   1   0   1   0   0 ]       [ 1 ]
    [ 1  -1   1  -1  -1   0   0   0 ]       [ 0 ]

x ≥ 0. At the optimal iteration, we have Z* = 32 with B = {2, 5, 6, 3}; c_B = [3 -1 4 2]; x_B = [2 2 5 4]^T;

A_B = [A_2 A_5 A_6 A_3] =            (A_B)^{-1} =
[-1  -4   2   1 ]                    [-0.5  0.5  1  1 ]
[ 3  -2   0   1 ]                    [-2    1    4  5 ]
[ 0   0   1  -1 ]                    [-2.5  1.5  6  7 ]
[-1  -1   0   1 ]                    [-2.5  1.5  5  7 ]

The optimal simplex multiplier vector (dual solution) is

π = c_B(A_B)^{-1} = [-29/2  19/2  33  40]

1. Objective Coefficient Ranging

Case 1: k ∉ B, e.g., k = 1 (x_1 is not basic). For a MAX problem, the basis is unchanged as long as z_k − c_k′ ≥ 0, i.e., c_k′ ≤ z_k = πA_k. Here k = 1 and

z_1 = [-29/2  19/2  33  40] · [1  -3  0  1]^T = -3,

i.e., the basis is unchanged as long as −∞ < c_1′ ≤ −3.

Thus, if we rewrite c_1′ as c_1′ = c_1 + δ_1 = −3 + δ_1, then

Max. allowable increase = 0 and Max. allowable decrease = ∞.

Case 2: k ∈ B, e.g., k = 3 (x_3 is basic). For a MAX problem, the basis is unchanged as long as z̄_j − c_j ≥ 0 for all nonbasic j. Noting that x_3 is the 4th basic variable (p = 4), c̄_B = c_B + [0 0 0 δ_k] and π̄ = c̄_B(A_B)^{-1}, so that

z̄_j − c_j = (z_j − c_j) + δ_k y_{4j},

where y_{4j} is the 4th entry of the updated version of column A_j (= y_j = (A_B)^{-1}A_j). Since k = 3, the basis thus remains optimal as long as, for all nonbasic j ≠ 3:

δ_3 ≥ −(z_j − c_j)/y_{4j}  for y_{4j} > 0,
δ_3 ≤ −(z_j − c_j)/y_{4j}  for y_{4j} < 0,

which leads to

maximum_{j: y_{4j} > 0} {−(z_j − c_j)/y_{4j}} ≤ δ_3 ≤ minimum_{j: y_{4j} < 0} {−(z_j − c_j)/y_{4j}}.

We therefore first need z_j − c_j and the entries y_{4j} for the nonbasic j. These are given below (p = 4):

j           1      4      7      8
z_j − c_j   0      0      29/2   19/2
y_{4j}      0     -1      2.5    1.5

maximum_{j: y_{4j} > 0} {−(z_j − c_j)/y_{4j}} = max{−(29/2)/2.5, −(19/2)/1.5} = −29/5
minimum_{j: y_{4j} < 0} {−(z_j − c_j)/y_{4j}} = −0/(−1) = 0

⟹ the basis is unchanged as long as −29/5 ≤ δ_3 ≤ 0, i.e.,

Max. allowable increase = 0 and Max. allowable decrease = 29/5.

2. RHS Ranging: The basis is unchanged as long as

maximum_{j: γ^i_j > 0} {−(x_B)_j/γ^i_j} ≤ δ_i ≤ minimum_{j: γ^i_j < 0} {−(x_B)_j/γ^i_j}.

Recall that γ^i is the i-th column of (A_B)^{-1}. For example, consider i = 1, and suppose we have a change of δ_1 in the RHS, i.e., b_1′ = b_1 + δ_1 = 4 + δ_1; γ^1 = [-0.5 -2 -2.5 -2.5]^T. We have

maximum = −∞, since γ^1_j < 0 for every j;
minimum = min{−2/(−0.5), −2/(−2), −5/(−2.5), −4/(−2.5)} = min{4, 1, 2, 1.6} = 1

⟹ the basis is unchanged as long as −∞ < δ_1 ≤ 1, i.e.,

Max. allowable increase = 1 and Max. allowable decrease = ∞.

SIMULTANEOUS CHANGES IN PARAMETERS: THE 100% RULE

Consider the case where MORE THAN ONE element of c or b is changed simultaneously. Under what conditions does the optimal basis remain unchanged? Unfortunately, it is NOT possible to state that the basis remains unchanged whenever each individual change is within its INDIVIDUAL limit. However, the 100% Rule provides a conservative bound.

1. OBJECTIVE COEFFICIENTS

CASE 1: All coefficients that are changed correspond to variables that are NOT in the optimal basis. Since π = c_B(A_B)^{-1} is unaffected, each change is independent of the others, and thus the basis is unchanged as long as each change is within its INDIVIDUAL bounds.

CASE 2: At least one of the changed coefficients corresponds to a basic variable. Suppose we let

I_j = maximum INDIVIDUAL INCREASE possible in c_j,
D_j = maximum INDIVIDUAL DECREASE possible in c_j,

for the basis to remain unchanged; these values are as computed in the previous section.

Let us define

r_j = δ_j / I_j  if δ_j > 0, and r_j = −δ_j / D_j  if δ_j < 0

(r_j = 0 if δ_j = 0), i.e., r_j is the fraction of the maximum individual change that takes place in the coefficient of x_j. Then the 100% Rule states that if the changes δ_j are such that

∑_j r_j ≤ 1,

then the basis will remain unchanged with the new set of cost coefficients.

2. RIGHT HAND SIDE

In a similar fashion, let us define

I_j = maximum INDIVIDUAL INCREASE possible in b_j,
D_j = maximum INDIVIDUAL DECREASE possible in b_j,

for the basis to remain unchanged; these values, once again, are as computed in the previous section. If we define

r_j = δ_j / I_j  if δ_j > 0, and r_j = −δ_j / D_j  if δ_j < 0,

the fraction of the maximum individual change that takes place in b_j, then the 100% Rule states that if ∑_j r_j ≤ 1, the basis will remain unchanged with the new set of RHS values.

NOTE: In both cases, if ∑_j r_j exceeds 1, the basis MAY OR MAY NOT change.
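The bookkeeping of the rule is a one-liner per coefficient. A minimal sketch with hypothetical limits and changes (chosen so the total comes out to 1.25, like the RHS case worked in the notes' example):

```python
def rule_100(delta, inc_limit, dec_limit):
    """Sum of the fractions r_j of the allowable individual changes used."""
    total = 0.0
    for d, I, D in zip(delta, inc_limit, dec_limit):
        if d > 0:
            total += d / I
        elif d < 0:
            total += -d / D
    return total

inf = float("inf")
r = rule_100(delta=(0.5, -1.5, 0.0, 2.0),
             inc_limit=(1.0, 2.0, 3.0, inf),
             dec_limit=(2.0, 2.0, 1.0, 4.0))
print(r)   # 0.5 + 0.75 + 0 + 0 = 1.25 > 1: no guarantee from the rule
```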

Consider our example once again. Suppose the current cost vector is changed from c = [-3 3 2 -2 -1 4 0 0] to c′ = [-5 1 1 -2 -1 8 6 2]. The r_j values are

r_1 = 2/∞ = 0; r_2 = 2/19; r_3 = 1/5.8; r_4 = 0; r_5 = 0; r_6 = 4/∞ = 0; r_7 = 6/14.5; r_8 = 2/9.5,

so that ∑_j r_j yields a value of 0.902. Since this value is less than 1, the 100% Rule states that the same basis remains optimal, with the basic variables having the same values (although, of course, the value of the objective function will be different).

Next, consider the RHS vector b = [4 6 1 0]^T, and suppose this is changed to b′ = [4.5 4.5 1 2]^T. Then

r_1 = 0.5/1 = 0.5; r_2 = 1.5/2 = 0.75; r_3 = 0; r_4 = 2/∞ = 0,

which yields ∑_i r_i = 1.25. Since this value exceeds 1, the basis is no longer guaranteed by the 100% Rule to remain optimal.

PARAMETRIC PROGRAMMING

Used to investigate how sensitive the optimal solution is to continuous and simultaneous changes in the RHS vector (e.g., resources) or in the cost parameters (e.g., profit margins). Usually we represent the cost vector c or the RHS vector b as parameterized functions c(θ) or b(θ) of some parameter θ (such as time, interest rate, some physical dimension, etc.). Note that b(θ) or c(θ) need NOT be linear. Thus, if we have constraints corresponding to three different resources, the functions might look like the ones shown below.

[Figure: three curves b_1(θ), b_2(θ), b_3(θ) plotted against θ, with b_i on the vertical axis.]

The general idea is to find x* at θ = 0, and find the range of values of θ (say (0, θ_1]) in which the optimal basis is the same. Thus for θ > θ_1 the current basis becomes suboptimal or infeasible. We then reoptimize and find the new optimal basis along with the range of values (say (θ_1, θ_2]) in which it is valid, etc., until we reach a point beyond which the basis never changes any more or always stays infeasible.

A. OBJECTIVE FUNCTION: c = c(θ) = [c_1(θ) c_2(θ) ... c_n(θ)]

Suppose that B_i is the optimal basis at θ = θ_i, with corresponding basis matrix A_Bi. We want to find the value of θ at which this basis is no longer optimal. Let x_Bi = (A_Bi)^{-1} b be the optimal solution at θ_i, and c_B(θ) the corresponding cost vector. Once again, as θ changes, x_Bi — and hence feasibility — is unaffected. However (assuming minimization), the solution stays OPTIMAL only if, for all variables,

z_j − c_j = c_B(θ)(A_B)^{-1} A_j − c_j(θ) ≤ 0   (≥ 0 if maximizing).

Consider our example. At the optimal iteration, for θ = 0:

B = {2, 5, 6, 3}; x_B = [2 2 5 4]^T, with A_B and (A_B)^{-1} as given earlier.

Suppose that the cost vector we used earlier, namely c = [-3 3 2 -2 -1 4 0 0], is parameterized as

c(θ) = [-3-2θ, 3-2θ, 2-4θ, -2-4θ, -1-5θ, 4-2θ, 0, 0]

Thus c_B(θ) = [3-2θ, -1-5θ, 4-2θ, 2-4θ], and the optimal simplex multiplier vector π(θ) = c_B(θ)(A_B)^{-1} is given by

π(θ) = [-14.5+26θ, 9.5-15θ, 33-54θ, 40-69θ]

We now find z_j = π(θ)A_j and z_j − c_j for each j:

j    c_j        z_j          z_j − c_j
1    -3-2θ      -3+2θ        4θ
2    3-2θ       3-2θ         0
3    2-4θ       2-4θ         0
4    -2-4θ      -2+4θ        8θ
5    -1-5θ      -1-5θ        0
6    4-2θ       4-2θ         0
7    0          14.5-26θ     14.5-26θ
8    0          9.5-15θ      9.5-15θ

Thus the basis remains unchanged as long as z_j − c_j ≥ 0 for all j:

4θ ≥ 0, 8θ ≥ 0, 14.5 − 26θ ≥ 0, 9.5 − 15θ ≥ 0 ⟹ 0 ≤ θ ≤ 0.55769.

Thus θ_1 = 0.55769, and when θ exceeds this value, x_7 has a negative reduced cost and therefore enters the basis. We can then reoptimize the problem and find a new basis, and once again find θ_2 so that for θ from 0.55769 to θ_2 this NEW basis stays optimal, etc.

B. RIGHT HAND SIDE

In an analogous fashion, the current basis remains optimal as long as x_Bi = (A_Bi)^{-1} b(θ) ≥ 0. Given θ_i, we can then find θ_{i+1} up to which the current basic solution stays feasible. In our example, suppose b(θ) = [4+2θ, 6+θ, 1-2θ, θ]^T. Then

x_B(θ) = (A_B)^{-1} b(θ) = [2 − 1.5θ, 2 − 6θ, 5 − 8.5θ, 4 − 6.5θ]^T

Thus the basis is unchanged as long as (A_B)^{-1} b(θ) ≥ 0, i.e., 0 ≤ θ ≤ 1/3, after which x_5 becomes negative and we need to use the dual simplex method to re-attain feasibility.
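The breakpoint θ_1 is found by intersecting each affine reduced cost with zero. A small sketch using the reduced costs 4θ, 8θ, 14.5−26θ, 9.5−15θ derived above:

```python
def theta_max(reduced_costs):
    """Each entry is (a, b) for z_j - c_j = a + b*theta; the basis stays
    optimal while a + b*theta >= 0 and theta >= 0 (MAX convention here)."""
    hi = float("inf")
    for a, b in reduced_costs:
        if b < 0:                     # only decreasing entries can bind
            hi = min(hi, -a / b)
    return hi

t1 = theta_max([(0, 4), (0, 8), (14.5, -26), (9.5, -15)])
print(round(t1, 5))   # 0.55769  (binding constraint: 14.5 - 26*theta >= 0)
```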

DUAL SIMPLEX METHOD

The (primal) simplex method solves

Min cx, st Ax = b, x ≥ 0.

It starts with a (primal) feasible basis B, i.e., x_B = (A_B)^{-1} b ≥ 0, and while maintaining complementary slackness, works towards satisfying the primal optimality condition, which is the same as DUAL FEASIBILITY:

[c_B(A_B)^{-1}]A_j ≤ c_j ⟺ πA_j − c_j ≤ 0 ⟺ πA_j ≤ c_j for all j.

The DUAL SIMPLEX method does the exact opposite. It starts with a DUAL FEASIBLE basis satisfying the dual constraints A^T π^T ≤ c^T, i.e., πA ≤ c. This is equivalent (as seen above) to the PRIMAL OPTIMALITY conditions, namely [c_B(A_B)^{-1}]A_j ≤ c_j. While maintaining complementary slackness, the method then pivots to attain DUAL OPTIMALITY, which is the same as PRIMAL FEASIBILITY, namely (A_B)^{-1} b ≥ 0.

Thus, the PRIMAL SIMPLEX starts feasible but suboptimal, and finishes up feasible and optimal, through a sequence of primal feasible, suboptimal points. The DUAL SIMPLEX starts infeasible and "superoptimal", and ends feasible and optimal, through a sequence of primal infeasible but superoptimal points.

EXAMPLE: Minimize Z = 2x_1 + 3x_2 + 4x_3

st  x_1 + 2x_2 + x_3 ≥ 3    →   x_1 + 2x_2 + x_3 − x_4 = 3
    2x_1 − x_2 + 3x_3 ≥ 4   →   2x_1 − x_2 + 3x_3 − x_5 = 4;   all x_j ≥ 0.

After introducing surplus variables x_4 and x_5, consider the basis B = {4, 5}. We get the BASIC but INFEASIBLE solution

A_B = (A_B)^{-1} = [-1  0; 0  -1];
x_B = [x_4 x_5]^T = (A_B)^{-1} b = [-3  -4]^T;
π = c_B(A_B)^{-1} = [0  0];
z_j − c_j = πA_j − c_j = [-2  -3  -4] for j = 1, 2, 3.

Notice that ALL reduced costs are nonpositive, i.e., the optimality criterion is met for a minimization. We thus have a solution that is superoptimal but infeasible. How do we pivot so that we MAINTAIN the satisfaction of the optimality criterion, yet reduce the infeasibility?

The leaving variable may be arbitrarily selected as any basic variable that is negative in value (say, the MOST negative one). Let us say that this corresponds to the s-th basic variable (i.e., x_{B_s} = b̄_s < 0).

Refer to problem (P1) from when we developed the primal simplex method. The constraints there stated that, for a given basis B,

x_B + ∑_{j∈N} y_j x_j = b̄,  where y_j = (A_B)^{-1}A_j and b̄ = (A_B)^{-1}b.

Looking at row s of the above system, constraint no. s can be written in terms of the current basis B as

x_{B_s} + ∑_{j∈N} y_{sj} x_j = b̄_s < 0.

Suppose we wish to bring variable x_k, k ∈ N, into the basis to replace x_{B_s} while keeping the rest of N unchanged, and consider

x_{B_s} = b̄_s − y_{sk} x_k.

First, it is clear that if we want to remove x_{B_s} from the basis by raising it from its current (negative) value of b̄_s to 0, we must increase its value. Since we also plan to increase the entering variable x_k from its current (nonbasic) value of 0, it follows that we must pick a variable x_k for which y_{sk} < 0. The maximum allowable increase in the value of x_k would be b̄_s / y_{sk}, at which point x_{B_s} = 0 and it exits the basis.

In addition, we must also choose x_k in such a way that the primal optimality (dual feasibility) conditions continue to be satisfied when we bring it into the basis, i.e., we want the new reduced costs of the nonbasic variables (say, (z_j − c_j)′) to remain nonpositive. These new values are given by

(z_j − c_j)′ = (z_j − c_j) − (y_{sj}/y_{sk})(z_k − c_k),

and so we want (z_j − c_j) − (y_{sj}/y_{sk})(z_k − c_k) ≤ 0 (if j = k, then of course this is zero). Thus the smallest ratio (z_j − c_j)/y_{sj} with y_{sj} < 0 determines which reduced cost first goes to zero. This ratio thus determines our entering variable via

k = argmin_j { (z_j − c_j)/y_{sj} : y_{sj} < 0 }.

We now have a new basis with x_k replacing x_{B_s} at position s in the basis, and we continue the process. Note that if y_{sj} ≥ 0 for all j, then the dual is unbounded and the primal is infeasible.

Back to our example:

x_B = [x_4 x_5]^T = [-3  -4]^T; π = [0  0]; z_j − c_j = [-2  -3  -4] for j = 1, 2, 3.

Let us choose s = 2 (corresponding to x_5, the most negative) as the leaving variable. Then the updated columns for j ∈ N are given by y_j = (A_B)^{-1}A_j:

y_1 = [-1  -2]^T,  y_2 = [-2  1]^T,  y_3 = [-1  -3]^T.

Then min_{j: y_{2j} < 0} {(z_j − c_j)/y_{2j}} = min(-2/-2, —, -4/-3) = 1, corresponding to the first member of N (x_k = x_1). So our new basis will be B = {4, 1} and N = {5, 2, 3}, with

A_B = [-1  1; 0  2],  (A_B)^{-1} = [-1  0.5; 0  0.5],
x_B = [x_4 x_1]^T = (A_B)^{-1} b = [-1  2]^T,  Z = c_B x_B = 4.

We now recompute π = c_B(A_B)^{-1} = [0  2] (A_B)^{-1} = [0  1], and

z_5 − c_5 = [0  1]·[0  -1]^T − 0 = -1,
z_2 − c_2 = [0  1]·[2  -1]^T − 3 = -4,
z_3 − c_3 = [0  1]·[1  3]^T − 4 = -1.

Let us choose s = 1 (corresponding to x_4 = -1) as the leaving variable; this is the only option we have. Then the updated columns for j ∈ N are given by y_j = (A_B)^{-1}A_j:

y_5 = [-0.5  -0.5]^T,  y_2 = [-2.5  -0.5]^T,  y_3 = [0.5  1.5]^T.

Then min_{j: y_{1j} < 0} {(z_j − c_j)/y_{1j}} = min(-1/-0.5, -4/-2.5, —) = 1.6, corresponding to the second member of N (x_k = x_2). So our new basis will be B = {2, 1} and N = {5, 4, 3}, with

A_B = [2  1; -1  2],  (A_B)^{-1} = [0.4  -0.2; 0.2  0.4],
x_B = [x_2 x_1]^T = (A_B)^{-1} b = [0.4  2.2]^T,  Z = c_B x_B = 5.6,
π = c_B(A_B)^{-1} = [1.6  0.2].

At this point all of our variables are nonnegative and we have preserved the optimality conditions (CHECK AND VERIFY). Therefore this is the optimal solution to the original LP. Note that for this particular instance we did not need artificial variables, etc., and were able to solve the problem in 2 iterations!
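The iterations above can be scripted. Below is a compact, dense-matrix sketch of the dual simplex method (my own implementation of the rules in this section, not code from the notes); run on the example, it reproduces the two pivots and the optimum Z = 5.6.

```python
import numpy as np

def dual_simplex(A, b, c, basis, max_iter=50):
    """Dual simplex for min c x, st A x = b, x >= 0, starting from a
    dual-feasible basis (all z_j - c_j <= 0). Dense and unoptimized."""
    m, n = A.shape
    B = list(basis)
    for _ in range(max_iter):
        AB_inv = np.linalg.inv(A[:, B])
        x_B = AB_inv @ b
        if np.all(x_B >= -1e-9):                  # primal feasible: done
            x = np.zeros(n)
            x[B] = x_B
            return x, c @ x
        s = int(np.argmin(x_B))                   # most negative leaves
        pi = c[B] @ AB_inv
        N = [j for j in range(n) if j not in B]
        row = AB_inv[s] @ A[:, N]                 # y_{sj} for j in N
        zc = pi @ A[:, N] - c[N]                  # z_j - c_j (all <= 0)
        ratios = [zc[t] / row[t] if row[t] < -1e-9 else np.inf
                  for t in range(len(N))]
        t = int(np.argmin(ratios))
        if ratios[t] == np.inf:
            raise ValueError("primal infeasible (dual unbounded)")
        B[s] = N[t]                               # x_k enters at position s
    raise RuntimeError("iteration limit reached")

# The example: min 2x1 + 3x2 + 4x3 with surplus variables x4, x5
A = np.array([[1.0, 2.0, 1.0, -1.0, 0.0],
              [2.0, -1.0, 3.0, 0.0, -1.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0, 4.0, 0.0, 0.0])

x, z = dual_simplex(A, b, c, basis=[3, 4])        # start with B = {4, 5}
print(np.round(x[:3], 6), round(z, 6))            # [2.2 0.4 0. ] 5.6
```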

QUESTION: Why is this called the DUAL SIMPLEX method? Consider the primal-dual pair for our example:

(P) Minimize Z = 2x_1 + 3x_2 + 4x_3
st  x_1 + 2x_2 + x_3 ≥ 3    (x_1 + 2x_2 + x_3 − x_4 = 3)
    2x_1 − x_2 + 3x_3 ≥ 4   (2x_1 − x_2 + 3x_3 − x_5 = 4);   all x_j ≥ 0

(D) Maximize W = 3π_1 + 4π_2
st  π_1 + 2π_2 ≤ 2    (π_1 + 2π_2 + π_3 = 2)
    2π_1 − π_2 ≤ 3    (2π_1 − π_2 + π_4 = 3)
    π_1 + 3π_2 ≤ 4    (π_1 + 3π_2 + π_5 = 4);   all π_i ≥ 0

The correspondence between π and x may be summarized as

π_1 ↔ x_4,  π_2 ↔ x_5,  x_1 ↔ π_3,  x_2 ↔ π_4,  x_3 ↔ π_5

(the decision variable in one problem corresponds to the slack/surplus in the other).

At Iteration 1, we had the basic, but infeasible, solution x_B = [x_4 x_5]^T = [-3 -4]^T with Z = 0. Corresponding to this, the simplex multiplier vector was π = [0 0]. Plugging this into the dual we have π_3 = 2, π_4 = 3, π_5 = 4. This corresponds to a BASIC FEASIBLE solution IN THE DUAL, namely [π_3 π_4 π_5]^T = [2 3 4]^T, with W = 0.

At Iteration 2, we had another basic infeasible solution x_B = [x_4 x_1]^T = [-1 2]^T with Z = 4. Corresponding to this, the simplex multiplier vector was π = [0 1]. Plugging this into the dual we have π_3 = 0, π_4 = 4, π_5 = 1. This corresponds to the BASIC FEASIBLE solution IN THE DUAL (π_1, π_2, π_3, π_4, π_5) = (0, 1, 0, 4, 1), with W = 4.

Finally, at Iteration 3, we had the basic feasible solution x_B = [x_2 x_1]^T = [0.4 2.2]^T with Z = 5.6. Corresponding to this, the simplex multiplier vector is π = [1.6 0.2]. Plugging this into the dual we have π_3 = 0, π_4 = 0, π_5 = 1.8. This corresponds to the BASIC FEASIBLE solution IN THE DUAL (π_1, π_2, π_5) = (1.6, 0.2, 1.8), with W = 5.6.

Notice that the dual simplex method generates a sequence of improving BASIC FEASIBLE SOLUTIONS IN THE DUAL! Hence the name DUAL SIMPLEX METHOD. At the optimum point we have an optimal BFS for the primal and an optimal BFS for the dual, both yielding the same value for the dual and primal objectives.

In general, corresponding to any basic (not necessarily feasible) solution to one problem, there exists a complementary basic solution to the other (given by the current simplex multiplier vector π = c_B(A_B)^{-1}). Furthermore, both of these complementary solutions yield the same value for their respective objectives.

In the PRIMAL simplex method we move through a sequence of improving BFSs in the primal. The complementary basic solutions in the dual are all infeasible (in the dual) until the last (optimal) one in the sequence. Thus the primal seeks optimality while the dual seeks feasibility.

In the DUAL simplex method we move through a sequence of improving BFSs in the DUAL. The complementary basic solutions in the primal are all infeasible (in the primal) until the last (optimal) one in the sequence. Thus the dual seeks optimality while the primal seeks feasibility.

If the primal has n variables and m constraints, and we define x_{n+1}, x_{n+2}, ..., x_{n+m} as the slacks/surpluses in the primal and π_{m+1}, π_{m+2}, ..., π_{m+n} as the slacks/surpluses in the dual, then complementary slackness is satisfied at each iteration in both methods. That is, for the pair of complementary basic solutions x and π:

x_{n+i} π_i = 0 for i = 1, 2, ..., m, and π_{m+j} x_j = 0 for j = 1, 2, ..., n.

THE PRIMAL-DUAL METHOD

We now briefly mention the Primal-Dual method, which is similar to the Dual Simplex method in that it starts with a dual feasible solution and tries to find a complementary primal solution that is feasible (in the primal). The main difference is that the dual feasible solutions NEED NOT BE BASIC.

Suppose at the current dual feasible solution π we let the set J index all DUAL constraints that are active. Thus J = {j : πA_j = c_j}, and J ⊆ {1, 2, ..., n}. Then complementary slackness tells us that the only primal variables that can be positive are those that correspond to active dual constraints, i.e., those with indices in J. We now try to attain primal feasibility using ONLY the x_j with j ∈ J, by solving the following RESTRICTED PRIMAL problem (with objective value ζ):

Min ζ = ∑_{i=1}^m A_i
st  ∑_{j∈J} a_ij x_j + A_i = b_i,  i = 1, 2, ..., m
    x_j ≥ 0 (j ∈ J),  A_i ≥ 0

Note that this is like a Phase 1 problem, where A_i is the artificial variable corresponding to Constraint i.

If the above problem has an optimal value of ζ = 0, then all the A_i values are equal to zero, and we have a primal feasible vector x, with x_j = 0 for j ∉ J and x_j obtained from above for j ∈ J. Furthermore, for this vector and the dual feasible vector π, complementary slackness holds: if j ∈ J, then dual constraint j is tight, so that π_{m+j} = 0; if j ∉ J, then x_j = 0. So in either case, x_j π_{m+j} = 0. Then by the complementary slackness theorem, x and π are optimal in (P) and (D). STOP.

Now, if the optimal value of the RESTRICTED PRIMAL is greater than 0, then primal feasibility is not attainable with the current set J, and we therefore need a new dual feasible solution that will admit a new variable into the set J, in such a way that ζ is reduced.

Consider now the restricted primal problem. Let ω = [ω_1 ω_2 ... ω_m] be the simplex multiplier vector corresponding to the optimal iteration. Since we are at its optimal iteration, all reduced costs are nonpositive: for the artificial A_i (with cost 1), ω_i − 1 ≤ 0, i.e.,

ω_i ≤ 1 for all i, and ωA_j ≤ 0 for all j ∈ J (since x_j has cost 0 in the restricted primal).

Now, to reduce the value of ζ any further would require a variable x_j for which ωA_j > 0. This would give it a negative reduced cost (0 − ωA_j < 0), and we could enter it into the basis and reduce ζ. So we need to find such a variable x_j from j ∉ J and FORCE it into the set J, so that it can be introduced into the restricted primal problem.

In order to accomplish this, we need to modify the dual vector π so that with the modified vector (a) all dual constraints that were previously active remain active, and (b) a new constraint as defined in the previous paragraph becomes activated, so that the corresponding x_j can enter the set J.

Suppose the new vector is π′ = π + θω, where θ is a POSITIVE constant. Then the j-th dual constraint is

π′A_j ≤ c_j ⟺ (π + θω)A_j = πA_j + θωA_j ≤ c_j.

For j ∈ J: we know πA_j = c_j and ωA_j ≤ 0, so the constraint is automatically satisfied.

For j ∉ J: we want θωA_j ≤ c_j − πA_j. We know πA_j < c_j (since π is dual feasible), so c_j − πA_j > 0. If ωA_j ≤ 0, then Constraint j is thus satisfied automatically (and remains inactive). If ωA_j > 0, then the constraint becomes active when θ = (c_j − πA_j)/(ωA_j).

Thus, to activate at least one constraint we merely need to define a new dual vector π′ = π + θω, where

θ = minimum_{j ∉ J: ωA_j > 0} { (c_j − πA_j)/(ωA_j) }.

NOTE: If we cannot find a j ∉ J such that ωA_j > 0, then the primal must be infeasible, so we can stop.

Consider our earlier example again:

(P) Minimize Z = 2x_1 + 3x_2 + 4x_3
st  x_1 + 2x_2 + x_3 ≥ 3    (x_1 + 2x_2 + x_3 − x_4 = 3)
    2x_1 − x_2 + 3x_3 ≥ 4   (2x_1 − x_2 + 3x_3 − x_5 = 4);   all x_j ≥ 0

(D) Maximize W = 3π_1 + 4π_2
st  π_1 + 2π_2 ≤ 2, 2π_1 − π_2 ≤ 3, π_1 + 3π_2 ≤ 4;   all π_i ≥ 0

Consider the solution π_1 = 1.5, π_2 = 0. Only the second dual constraint is active (2(1.5) − 0 = 3), so J = {2}. The restricted primal is

Minimize ζ = A_1 + A_2
st  2x_2 + A_1 = 3
   −x_2 + A_2 = 4
    x_2, A_1, A_2 ≥ 0

The optimal solution is A_1 = 0, A_2 = 5.5, x_2 = 1.5, with objective ζ = 5.5 and simplex multiplier vector ω = [0.5 1]. Since ζ > 0, we need a new vector π′ feasible in the dual via π′ = π + θω, where θ is the minimum of (c_j − πA_j)/(ωA_j) over all j ∉ J such that ωA_j > 0. Here we have

ωA_1 = [0.5 1]·[1 2]^T = 2.5 (> 0),  ωA_3 = [0.5 1]·[1 3]^T = 3.5 (> 0),

and so θ = min{0.5/2.5, 2.5/3.5} = 0.2,

π′ = [1.5 0] + 0.2·[0.5 1] = [1.6 0.2].

Thus the new J = {1, 2}, and the new restricted primal is

Minimize ζ = A_1 + A_2
st  x_1 + 2x_2 + A_1 = 3
    2x_1 − x_2 + A_2 = 4
    x_1, x_2, A_1, A_2 ≥ 0

The optimal solution is A_1 = 0, A_2 = 0, x_1 = 2.2, x_2 = 0.4, with ζ = 0. So this must be the optimum solution to (P)! STOP.
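The dual-vector update at the heart of the method is just the ratio computation for θ. A sketch reproducing the step above with numpy (data taken from the example; ω supplied directly rather than recomputed from the restricted primal):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [2.0, -1.0, 3.0]])     # columns A_1, A_2, A_3
c = np.array([2.0, 3.0, 4.0])

pi = np.array([1.5, 0.0])            # dual feasible; constraint 2 active
omega = np.array([0.5, 1.0])         # multipliers of the restricted primal

J = {j for j in range(3) if abs(pi @ A[:, j] - c[j]) < 1e-9}   # {1} (0-based)
ratios = [(c[j] - pi @ A[:, j]) / (omega @ A[:, j])
          for j in range(3) if j not in J and omega @ A[:, j] > 0]
theta = min(ratios)
pi_new = pi + theta * omega
print(theta, np.round(pi_new, 6))    # 0.2 [1.6 0.2]
```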