
Chapter 5. DUALITY

5.1 The Dual Problems

Every linear programming problem has associated with it another linear programming problem, and the two problems have such a close relationship that whenever one problem is solved, the other is solved as well. The original LPP is called the primal problem and the associated LPP is called the dual problem. Together they are called a dual pair (primal + dual), in the sense that the dual of the dual is again the primal.

Example 5.1. (The Diet Problem) How can a dietician design the most economical diet that satisfies the basic daily nutritional requirements for good health? For simplicity, we assume that there are only two foods F_1 and F_2 and that the daily nutrients required are N_1, N_2 and N_3. The unit costs of the foods and their nutrient contents, together with the daily requirement of each nutrient, are given in the following table.

            F_1    F_2    Daily Requirement
   Cost     120     80
   N_1        1      1           10
   N_2        2      4           24
   N_3        3      6            3

Let x_j, j = 1, 2, be the number of units of F_j that one should eat in order to minimize the cost and yet fulfill the daily nutritional requirement. Thus the problem is to select the x_j subject to the nutritional constraints

   min x_0 = 120 x_1 + 80 x_2
   s.t.    x_1 +   x_2 ≥ 10
         2 x_1 + 4 x_2 ≥ 24
         3 x_1 + 6 x_2 ≥  3

and the non-negativity constraints x_j ≥ 0, j = 1, 2. In matrix form, we have

   min x_0 = c^T x   s.t.  A x ≥ b,  x ≥ 0,

where

   c = [120, 80]^T,   x = [x_1, x_2]^T,   b = [10, 24, 3]^T,   A = [ 1  1 ]
                                                                   [ 2  4 ]
                                                                   [ 3  6 ].

Now let us look at the same problem from a pharmaceutical company's point of view. How can a pharmaceutical company determine the price of each unit of nutrient pill so as to maximize revenue, if a synthetic diet made up of nutrient pills of the various pure nutrients is adopted? Thus we have three types of nutrient pills P_1, P_2 and P_3, and we assume that each unit of P_i contains one unit of the nutrient N_i. Let u_i be the unit price of P_i. The problem is to maximize the total revenue u_0 from selling such a synthetic diet, i.e.

   max u_0 = 10 u_1 + 24 u_2 + 3 u_3

subject to the constraints that the cost of a unit of synthetic food j made up of the P_i is no greater than the unit market price of F_j:

   u_1 + 2 u_2 + 3 u_3 ≤ 120
   u_1 + 4 u_2 + 6 u_3 ≤  80
   u_1, u_2, u_3 ≥ 0.

In matrix form, the problem is

   max u_0 = b^T u   s.t.  A^T u ≤ c,  u ≥ 0.

We say that the two problems form a dual pair of linear programming problems, and we will see that the solution of one leads to the solution of the other.

Definition 5.1. Let x and c be column n-vectors, b and u be column m-vectors, and A be an m-by-n matrix. The primal and the dual problems are defined as follows:

   Primal:  max c^T x            Dual:  min b^T u
            s.t. A x ≤ b                s.t. A^T u ≥ c
                 x ≥ 0                       u ≥ 0

Calling one problem the primal and the other the dual is completely arbitrary, for we have the following theorem.

Theorem 5.1. The dual of the dual is the primal.

Proof. Transforming the dual into canonical form, we have

   max u_0 = −b^T u   s.t.  −A^T u ≤ −c,  u ≥ 0.

Taking the dual of this problem, we have

   min x_0 = −c^T x   s.t.  −A x ≥ −b,  x ≥ 0,

which is the same as the primal problem.

To obtain the dual of an LP problem in standard form

   max x_0 = c^T x   s.t.  A x = b,  x ≥ 0,

we can first change it into canonical form:

   max x_0 = c^T x   s.t.  A x ≤ b,  −A x ≤ −b,  x ≥ 0.

Then its dual is given by

   min u_0 = b^T u_1 − b^T u_2   s.t.  A^T u_1 − A^T u_2 ≥ c,  u_1, u_2 ≥ 0.

Letting u = u_1 − u_2, we finally have

   min u_0 = b^T u   s.t.  A^T u ≥ c,  u free.

The following is the general rule for the relationship between a dual pair:

   max Σ_{j=1}^n c_j x_j                         min Σ_{i=1}^m u_i b_i
   Σ_{j=1}^n a_ij x_j ≤ b_i  (i = 1, ..., k)      u_i ≥ 0                        (i = 1, ..., k)
   Σ_{j=1}^n a_ij x_j = b_i  (i = k+1, ..., m)    u_i free                       (i = k+1, ..., m)
   x_j ≥ 0   (j = 1, ..., l)                      Σ_{i=1}^m u_i a_ij ≥ c_j       (j = 1, ..., l)
   x_j free  (j = l+1, ..., n)                    Σ_{i=1}^m u_i a_ij = c_j       (j = l+1, ..., n)

We observe from the above the following correspondence:
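As a quick numerical companion to Definition 5.1 (added here as an illustration, not part of the original notes), the following sketch solves the diet problem of Example 5.1 and its dual with SciPy's linprog and checks that the two optimal values coincide. The SciPy call and the specific data, which follow the reconstruction of Example 5.1 above, are assumptions of this sketch.

```python
# Sketch: solve the diet problem (primal) and its dual with SciPy.
# linprog minimizes, so the max-form dual is handled by negating its objective.
import numpy as np
from scipy.optimize import linprog

c = np.array([120.0, 80.0])          # unit costs of F1, F2
A = np.array([[1.0, 1.0],
              [2.0, 4.0],
              [3.0, 6.0]])           # nutrient content per unit of food
b = np.array([10.0, 24.0, 3.0])      # daily requirements

# Primal: min c^T x  s.t.  A x >= b, x >= 0   (linprog wants A_ub x <= b_ub)
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2, method="highs")

# Dual:   max b^T u  s.t.  A^T u <= c, u >= 0
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 3, method="highs")

print("primal optimum:", primal.fun)     # minimal diet cost
print("dual optimum:  ", -dual.fun)      # maximal pill revenue, same value
assert abs(primal.fun - (-dual.fun)) < 1e-6
```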

   Primal (max program)                     Dual (min program)
   c_j : n objective function coefficients  n right-hand sides
   b_i : m right-hand sides                 m objective function coefficients
   u_i : k  (≤) constraints                 k  non-negative variables
   u_i : m − k  (=) constraints             m − k  free variables
   x_j : l  non-negative variables          l  (≥) constraints
   x_j : n − l  free variables              n − l  (=) constraints

Example 5.2. Let the original (primal) problem be given by

   max 2 x_1 + 4 x_2 + 3 x_3
   s.t. 2 x_1 + 2 x_2 +   x_3 ≤ 4
          x_1 + 2 x_2 + 2 x_3 ≤ 6
        x_1, x_2, x_3 ≥ 0.

Before transforming it to the dual problem, we first have to standardize the primal problem:

   max 2 x_1 + 4 x_2 + 3 x_3 + 0 x_4 + 0 x_5
   s.t. 2 x_1 + 2 x_2 +   x_3 + x_4        = 4
          x_1 + 2 x_2 + 2 x_3        + x_5 = 6
        x_1, x_2, x_3, x_4, x_5 ≥ 0.

The dual of a standardized primal is easily obtained by transposing the equations. For primals that are maximization problems, the duals are minimization problems with ≥ signs. All dual variables are assumed to be unrestricted first:

   min 4 u_1 + 6 u_2
   s.t. 2 u_1 +   u_2 ≥ 2
        2 u_1 + 2 u_2 ≥ 4
          u_1 + 2 u_2 ≥ 3
          u_1 ≥ 0
                u_2 ≥ 0
        u_1, u_2 free.

After simplification, we get our final dual problem:

   min 4 u_1 + 6 u_2
   s.t. 2 u_1 +   u_2 ≥ 2
        2 u_1 + 2 u_2 ≥ 4
          u_1 + 2 u_2 ≥ 3
        u_1, u_2 ≥ 0.
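The correspondence table above can be applied mechanically. The sketch below (an added illustration, restricted for simplicity to a max-form primal with ≤/= constraints and ≥ 0/free variables) builds the dual ingredients from the primal ones; the function name and the data layout are made up for this example, and the data are those of Example 5.2 as reconstructed above.

```python
# Sketch: dual of a max-form LP, following the primal-dual correspondence table.
# Constraint senses are "<=" or "=", variable signs are ">=0" or "free".
def dual_of_max(c, A, b, con_sense, var_sign):
    m, n = len(b), len(c)
    dual = {
        "objective": ("min", b),   # the b_i become the dual objective coefficients
        "var_sign": [">=0" if s == "<=" else "free" for s in con_sense],
        "constraints": [],         # one dual constraint per primal variable
    }
    for j in range(n):
        col = [A[i][j] for i in range(m)]
        sense = ">=" if var_sign[j] == ">=0" else "="
        dual["constraints"].append((col, sense, c[j]))
    return dual

# Example 5.2: all constraints <=, all variables >= 0.
d = dual_of_max(c=[2, 4, 3],
                A=[[2, 2, 1], [1, 2, 2]],
                b=[4, 6],
                con_sense=["<=", "<="],
                var_sign=[">=0"] * 3)
for col, sense, rhs in d["constraints"]:
    print(col, sense, rhs)         # coefficients of (u1, u2), sense, right-hand side
```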

Example 5.3. Let us consider a primal given by

   min 5 x_1 + 6 x_2
   s.t.   x_1 + 2 x_2 = 5
         −x_1 + 5 x_2 ≥ 3
        4 x_1 + 7 x_2 ≤ 8
        x_1 free,  x_2 ≥ 0.

The standardized primal (with x_1 = x_1' − x_1'', surplus variable x_3 and slack variable x_4) is given by

   min 5 x_1' − 5 x_1'' + 6 x_2 + 0 x_3 + 0 x_4
   s.t.   x_1' −   x_1'' + 2 x_2              = 5
         −x_1' +   x_1'' + 5 x_2 − x_3        = 3
        4 x_1' − 4 x_1'' + 7 x_2        + x_4 = 8
        x_1', x_1'', x_2, x_3, x_4 ≥ 0.

Since the primal is a minimization problem, the dual problem will be a maximization problem with ≤ signs. The dual variables are assumed to be free first:

   max 5 u_1 + 3 u_2 + 8 u_3
   s.t.   u_1 −   u_2 + 4 u_3 ≤  5
         −u_1 +   u_2 − 4 u_3 ≤ −5
        2 u_1 + 5 u_2 + 7 u_3 ≤  6
        u_2 ≥ 0,  u_3 ≤ 0,  u_1, u_2, u_3 free.

The first two inequality constraints combine to give an equality constraint:

   max 5 u_1 + 3 u_2 + 8 u_3
   s.t.   u_1 −   u_2 + 4 u_3 = 5
        2 u_1 + 5 u_2 + 7 u_3 ≤ 6
        u_2 ≥ 0,  u_3 ≤ 0,  u_1, u_2, u_3 free.

By replacing the free variable u_1 by u_1' − u_1'' and the non-positive variable u_3 by −u_3', we finally arrive at

   max 5 u_1' − 5 u_1'' + 3 u_2 − 8 u_3'
   s.t.   u_1' −   u_1'' −   u_2 − 4 u_3' = 5
        2 u_1' − 2 u_1'' + 5 u_2 − 7 u_3' ≤ 6
        u_1', u_1'', u_2, u_3' ≥ 0,

which is the dual problem of the original primal problem.

Example 5.4. (Transportation Problem) Suppose that there are m sources (origins) that can provide materials to n destinations that require the materials. The following is called the costs and requirements table for the transportation problem.

                         Destination
              1       2      ...      n     Supply
          1  c_11    c_12    ...    c_1n     s_1
   Origin 2  c_21    c_22    ...    c_2n     s_2
          .   .       .               .       .
          m  c_m1    c_m2    ...    c_mn     s_m
   Demand    d_1     d_2     ...    d_n

where c_ij is the unit transportation cost from origin i to destination j, s_i is the supply available at origin i and d_j is the demand required at destination j. We assume that total supply equals total demand, i.e.

   Σ_{i=1}^m s_i = Σ_{j=1}^n d_j.

The problem is to decide the amount x_ij to be shipped from i to j so as to minimize the total transportation cost while meeting all demands. That is,

   min Σ_{i=1}^m Σ_{j=1}^n c_ij x_ij
   s.t. Σ_{j=1}^n x_ij = s_i   (i = 1, 2, ..., m)
        Σ_{i=1}^m x_ij = d_j   (j = 1, 2, ..., n)
        x_ij ≥ 0               (i = 1, ..., m ; j = 1, ..., n).

The dual is then given by:

   max Σ_{i=1}^m s_i u_i + Σ_{j=1}^n d_j v_j
   s.t. u_i + v_j ≤ c_ij   (i = 1, ..., m ; j = 1, ..., n)
        u_i, v_j free.

5.2 Duality Theorems

We first give the relationship between the objective values of the primal and of the dual.

Theorem 5.2 (Weak Duality Theorem). If x is a feasible solution (not necessarily basic) to the primal and u is a feasible solution (not necessarily basic) to the dual, then c^T x ≤ b^T u.

Proof. Since x is a feasible solution to the primal, we have A x ≤ b. As u ≥ 0, we have

   u^T A x ≤ u^T b = b^T u.     (5.1)

Similarly, since A^T u ≥ c and x ≥ 0, we have

   x^T A^T u ≥ x^T c.

Taking the transpose and combining with (5.1), we get c^T x ≤ u^T A x ≤ b^T u.
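Theorem 5.2 can be spot-checked numerically: every feasible primal objective value lies below every feasible dual objective value. The following sketch (an added illustration) samples random points, keeps the feasible ones and compares the two sides, using the data of Example 5.2 as reconstructed above; the sampling ranges and tolerances are arbitrary assumptions.

```python
# Sketch: Monte Carlo check of weak duality (Theorem 5.2) on Example 5.2.
# Every feasible x satisfies c^T x <= b^T u for every feasible u.
import numpy as np

rng = np.random.default_rng(0)
c = np.array([2.0, 4.0, 3.0])
A = np.array([[2.0, 2.0, 1.0], [1.0, 2.0, 2.0]])
b = np.array([4.0, 6.0])

xs = rng.uniform(0, 4, size=(2000, 3))
us = rng.uniform(0, 4, size=(2000, 2))
feas_x = xs[(xs @ A.T <= b + 1e-12).all(axis=1)]   # A x <= b, x >= 0
feas_u = us[(us @ A >= c - 1e-12).all(axis=1)]     # A^T u >= c, u >= 0

print("max primal value sampled:", (feas_x @ c).max())
print("min dual value sampled:  ", (feas_u @ b).min())
assert (feas_x @ c).max() <= (feas_u @ b).min() + 1e-9
```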

As an immediate corollary, we have

Theorem 5.3. If x_0 and u_0 are feasible solutions to the primal and the dual respectively and if c^T x_0 = b^T u_0, then x_0 and u_0 are optimal solutions to the primal and the dual respectively.

Proof. For any feasible solution x to the primal, by Theorem 5.2 we have

   c^T x ≤ b^T u_0 = c^T x_0.

Thus x_0 is an optimal solution to the primal. Similarly, if u is any feasible solution to the dual, then

   b^T u ≥ c^T x_0 = b^T u_0.

Thus u_0 is an optimal solution to the dual.

The converse of this theorem is the strong duality theorem.

Theorem 5.4 (The Strong Duality Theorem). A feasible solution x_0 to the primal is optimal if and only if there exists a feasible solution u_0 to the dual such that

   c^T x_0 = b^T u_0.     (5.2)

In particular, u_0 is an optimal solution to the dual.

Proof. The "if" part is Theorem 5.3. Let us now prove the "only if" part. Let the primal be

   max z = c^T x   s.t.  A x ≤ b,  x ≥ 0.

Standardizing it, we have

   max z = c^T x + c_s^T x_s   s.t.  A x + x_s = b,  x, x_s ≥ 0,

where x_s is the vector of slack variables and c_s = 0. Suppose that x_0 is an optimal solution to this problem with basis matrix B. Then

   z_j − c_j ≥ 0,   j = 1, 2, ..., n, n+1, ..., n+m.

Since for 1 ≤ j ≤ n we have z_j = c_B^T y_j = c_B^T (B^{−1} a_j), this reads, in vector form,

   A^T (B^{−1})^T c_B ≥ c.

This shows that u_0 ≡ (B^{−1})^T c_B is a solution to A^T u ≥ c. It remains to show that u_0 ≥ 0 and that it satisfies (5.2). We first prove that u_0 ≥ 0. Since z_{n+j} − c_{n+j} ≥ 0 for j = 1, 2, ..., m, we have

   a_{n+j}^T (B^{−1})^T c_B ≥ c_{n+j},   j = 1, 2, ..., m.

Since the x_{n+j} are slack variables, the corresponding columns in A are just the jth unit vectors e_j and the corresponding cost coefficients are c_{n+j} = 0. Thus

   e_j^T (B^{−1})^T c_B ≥ c_{n+j} = 0,   j = 1, 2, ..., m.

Hence u_0 = (B^{−1})^T c_B ≥ 0. Finally, since

   b^T u_0 = b^T (B^{−1})^T c_B = c_B^T B^{−1} b = c^T x_0,

we see that u_0 satisfies (5.2), and by Theorem 5.3 it is an optimal solution to the dual problem.

This theorem also gives us an explicit form of the optimal solution to the dual problem.

Theorem 5.5. If B is the basis matrix for the primal corresponding to an optimal solution and c_B contains the prices of the variables in the basis, then an optimal solution to the dual is given by (B^{−1})^T c_B; i.e., the entries in the x_0 row under the columns corresponding to the slack variables give the values of the dual structural variables. Moreover, the entries in the x_0 row under the columns for the structural variables give the optimal values of the dual surplus variables.

Proof. We first prove that u_B ≡ (B^{−1})^T c_B is given by the entries in the x_0 row under the columns corresponding to the slack variables. In fact, for the slack variables we have

   z_{n+j} − c_{n+j} = z_{n+j} = e_j^T (B^{−1})^T c_B = u_{Bj}.

Next we prove that the entries in the x_0 row under the columns for the structural variables give the optimal values of the dual surplus variables. Since

   z_j = c_B^T y_j = c_B^T (B^{−1} a_j) = a_j^T (B^{−1})^T c_B = a_j^T u_B,   j = 1, ..., n,

and A^T u_B − u_Bs = c, where u_Bs is the vector of dual surplus variables, we have

   z − c = A^T u_B − c = u_Bs.

Example 5.5. Let the primal problem be

   max x_0 = 4 x_1 + 13 x_2
   s.t.    x_1           ≤  6
                   x_2   ≤  8
           x_1 +   x_2   ≤  7
          −x_1 + 3 x_2   ≤  5
         2 x_1           ≤ 12
         x_1, x_2 ≥ 0.

Standardizing the problem (adding slack variables x_3, ..., x_7), we have

     x_1           + x_3                          =  6
             x_2         + x_4                    =  8
     x_1 +   x_2               + x_5              =  7
    −x_1 + 3 x_2                     + x_6        =  5
   2 x_1                                    + x_7 = 12.

The optimal tableau is given by

          x_1   x_2   x_3   x_4    x_5    x_6   x_7  |   b
   x_3     0     0     1     0    −3/4    1/4    0   |   2
   x_1     1     0     0     0     3/4   −1/4    0   |   4
   x_4     0     0     0     1    −1/4   −1/4    0   |   5
   x_2     0     1     0     0     1/4    1/4    0   |   3
   x_7     0     0     0     0    −3/2    1/2    1   |   4
   x_0     0     0     0     0    25/4    9/4    0   |  55

Thus the optimal solution is [x_1, x_2] = [4, 3] with [x_3, x_4, x_5, x_6, x_7] = [2, 5, 0, 0, 4]. From the x_0 row, we see that the optimal solution to the dual is given by

   [u_1, u_2, u_3, u_4, u_5, u_6, u_7] = [0, 0, 25/4, 9/4, 0, 0, 0].

Let us verify this by considering the dual. The dual of the primal is given by

   min u_0 = 6 u_1 + 8 u_2 + 7 u_3 + 5 u_4 + 12 u_5
   s.t.  u_1 + u_3 −  u_4 + 2 u_5 ≥  4
         u_2 + u_3 + 3 u_4        ≥ 13
         u_i ≥ 0,  i = 1, 2, 3, 4, 5.

Changing the minimization problem to a maximization problem and using the simplex method (or the dual simplex method to be introduced in 5.4), we obtain the optimal tableau for the dual:

           u_1    u_2   u_3   u_4    u_5    u_6    u_7  |   b
   u_3     3/4    1/4    1     0     3/2   −3/4   −1/4  |  25/4
   u_4    −1/4    1/4    0     1    −1/2    1/4   −1/4  |   9/4
   u_0      2      5     0     0      4      4      3   |   55

Thus the optimal solution for the dual is

   [u_1, u_2, u_3, u_4, u_5] = [0, 0, 25/4, 9/4, 0]

with optimal surplus variables [u_6, u_7] = [0, 0]. Notice that the optimal solution to the primal is given by the reduced cost coefficients under the surplus columns u_6 and u_7, i.e. [x_1, x_2] = [4, 3], and the optimal values of the primal slack variables are given by the reduced cost coefficients under u_1, ..., u_5, i.e. [x_3, x_4, x_5, x_6, x_7] = [2, 5, 0, 0, 4].

5.3 The Existence Theorem and The Complementary Slackness

Theorem 5.6 (Existence Theorem). (i) An LPP has a finite optimal solution if and only if both it and its dual have feasible solutions.

(ii) If the primal has an unbounded maximum, then the dual has no feasible solution.

(iii) If the dual has no feasible solution but the primal has feasible solutions, then the primal has an unbounded maximum.

Proof. (i) This follows from Theorem 5.4. (ii) Proof by contradiction, using Theorem 5.2. (iii) This follows from Theorem 5.4.

We summarize the results in the following table.

                            Primal is feasible                 Primal is not feasible
   Dual is feasible         both optimal solutions exist       Dual has unbounded solutions
   Dual is not feasible     Primal has unbounded solutions     possible

Theorem 5.7 (Complementary Slackness). Given any pair of optimal solutions to an LP problem and its dual, then

(i) for each i, i = 1, 2, ..., m, the product of the ith primal slack variable and the ith dual variable is zero, and

(ii) for each j, j = 1, 2, ..., n, the product of the jth primal variable and the jth dual surplus variable is zero.

Proof. Let the primal problem be

   max z = c^T x   s.t.  A x ≤ b,  x ≥ 0.

Standardizing it, we have

   max z = c^T x   s.t.  A x + x_s = b,  x, x_s ≥ 0,     (5.3)

where x_s are the slack variables. Standardizing the dual, we have

   min u_0 = b^T u   s.t.  A^T u − u_s = c,  u, u_s ≥ 0,     (5.4)

where u_s are the surplus variables. Given any feasible primal solution [x, x_s] and any column m-vector u, (5.3) implies that

   u^T A x + u^T x_s = u^T b = b^T u.     (5.5)

Given any feasible dual solution [u, u_s] and any column n-vector x, (5.4) implies that

   x^T A^T u − x^T u_s = x^T c = c^T x.     (5.6)

Since the cost coefficients of all slack and surplus variables are zero, we see that if [x_0, x_0s] and [u_0, u_0s] are optimal solutions to the primal and the dual problems, then c^T x_0 = b^T u_0. Hence by (5.5) and (5.6), we have

   u_0^T x_0s + x_0^T u_0s = 0.

Using the fact that u_0, x_0s, x_0, u_0s ≥ 0, we finally have u_0^T x_0s = 0 = x_0^T u_0s.

We remark that the converse of Theorem 5.7 is also true.

Theorem 5.8. Let [x_0, x_0s] be a feasible solution to the primal and [u_0, u_0s] be a feasible solution to the dual. Suppose that

(i) for each i, i = 1, 2, ..., m, the product of the ith primal slack variable and the ith dual variable is zero, and

(ii) for each j, j = 1, 2, ..., n, the product of the jth primal variable and the jth dual surplus variable is zero.

Then [x_0, x_0s] and [u_0, u_0s] are optimal solutions to the primal and the dual respectively.

Proof. By assumption, u_0^T x_0s + x_0^T u_0s = 0. Hence

   u_0^T x_0s = −x_0^T u_0s = −u_0s^T x_0.

Adding the term u_0^T A x_0 to both sides, we have

   u_0^T (A x_0 + x_0s) = (u_0^T A − u_0s^T) x_0.

Since [x_0, x_0s] and [u_0, u_0s] are feasible solutions to the primal and the dual, this gives

   u_0^T b = c^T x_0.

Thus by Theorem 5.3, both solutions are optimal solutions.

To see why we have the complementary slackness, suppose that the jth surplus variable of the dual problem is positive. Then by Theorem 5.5, the reduced cost coefficient of the jth structural variable of the primal problem is negative (because it is equal to the negation of the jth dual surplus variable). Hence the jth primal structural variable must be equal to zero at the optimum; for if not, we could set it to zero and thus increase the objective value.

Example 5.6. (Dual Prices) Let the primal be given by

   max 2 x_1 + 4 x_2 + 3 x_3
   s.t. 2 x_1 + 2 x_2 +   x_3 ≤ 4
          x_1 + 2 x_2 + 2 x_3 ≤ 6
        x_1, x_2, x_3 ≥ 0.

Its dual is

   min 4 u_1 + 6 u_2
   s.t. 2 u_1 +   u_2 ≥ 2
        2 u_1 + 2 u_2 ≥ 4
          u_1 + 2 u_2 ≥ 3
        u_1, u_2 ≥ 0.

Initial tableau:

          x_1   x_2   x_3   x_4   x_5  |  b
   x_4     2     2     1     1     0   |  4
   x_5     1     2     2     0     1   |  6
   x_0    −2    −4    −3     0     0   |  0

Optimal tableau:

          x_1   x_2   x_3   x_4   x_5   |  b
   x_3    −1     0     1    −1     1    |  2
   x_2    3/2    1     0     1    −1/2  |  1
   x_0     1     0     0     1     1    | 10

Thus the optimal primal solution is x = [0, 1, 2, 0, 0] and, by the duality theorem, the optimal dual solution is u = [u_1, u_2] = [1, 1] with dual surplus variables [u_3, u_4, u_5] = [1, 0, 0]. Let us check the complementary slackness for these two solutions:

   u_1 > 0  ⇒  x_4 = 0:  2 x_1 + 2 x_2 + x_3 = 4,  i.e.  2(0) + 2(1) + 2 = 4
   u_2 > 0  ⇒  x_5 = 0:  x_1 + 2 x_2 + 2 x_3 = 6,  i.e.  0 + 2(1) + 2(2) = 6
   x_1 = 0  ⇒  u_3 ≥ 0:  2 u_1 + u_2 ≥ 2,  indeed  2(1) + 1 = 3 > 2
   x_2 > 0  ⇒  u_4 = 0:  2 u_1 + 2 u_2 = 4,  i.e.  2(1) + 2(1) = 4
   x_3 > 0  ⇒  u_5 = 0:  u_1 + 2 u_2 = 3,  i.e.  1 + 2(1) = 3

5.4 Dual Simplex Method

In the usual simplex method, which will be called the primal method for distinction, we start with a primal BFS x, maintain primal feasibility {x_i0 ≥ 0, i = 1, ..., m} and strive for non-positivity of the reduced cost coefficients c_j − z_j (which is equivalent to {x_0j ≥ 0, j = 1, ..., n}). However, by Theorem 5.5, the entries in the x_0 row give the values of the dual variables at the optimum. Thus the non-negativity of the x_0j is equivalent to the feasibility of the dual variables. In the dual method, we start with a dual BFS u, maintain dual feasibility {u_j0 ≥ 0, j = 1, ..., n} (which is equivalent to {x_0j ≥ 0}) and strive for non-negativity of {u_0i, i = 1, ..., m} (which is equivalent to primal feasibility {x_i0 ≥ 0}). Since at any iteration both the primal and the dual solutions have the same objective value, by the duality theorem we see that if both solutions are feasible, then we have reached optimality.
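To make the pivot rule concrete before stating it formally, here is a minimal sketch of a single dual simplex pivot on a tableau stored as a NumPy array (last row = x_0 row, last column = b). It mirrors the ratio test of the algorithm given next and uses the initial tableau of Example 5.7 below as test data; the code is an added illustration, not part of the original notes.

```python
# Sketch: one dual simplex pivot on a tableau T whose last row is the x0 row
# and whose last column is b; mirrors steps 1-3 of the algorithm stated below.
import numpy as np

def dual_simplex_pivot(T):
    T = T.astype(float).copy()
    b = T[:-1, -1]
    r = int(np.argmin(b))                      # step 1: most negative basic value
    if b[r] >= 0:
        return T, "optimal"
    row = T[r, :-1]
    neg = np.where(row < 0)[0]
    if neg.size == 0:                          # step 2: no eligible column
        return T, "dual unbounded (primal infeasible)"
    ratios = T[-1, neg] / (-row[neg])          # y_0j / |y_rj| over y_rj < 0
    s = int(neg[np.argmin(ratios)])
    T[r, :] /= T[r, s]                         # step 3: pivot at (r, s)
    for i in range(T.shape[0]):
        if i != r:
            T[i, :] -= T[i, s] * T[r, :]
    return T, f"pivoted at row {r}, column {s}"

# Initial tableau of Example 5.7 (columns x1..x5, b; last row = x0 row).
T0 = np.array([[-1, -2, -3, 1, 0, -5],
               [-2, -2, -1, 0, 1, -6],
               [ 3,  4,  5, 0, 0,  0]], dtype=float)
T1, msg = dual_simplex_pivot(T0)
print(msg)
print(T1)       # matches the "after one iteration" tableau of Example 5.7
```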

Algorithm for the dual simplex method.

1. Given a dual BFS x_B: if x_B ≥ 0, then the current solution is optimal; otherwise select an index r such that the component x_{Br} of x_B is negative.

2. If y_rj ≥ 0 for all j = 1, 2, ..., n, then the dual is unbounded (the primal is infeasible); otherwise determine an index s such that

   y_0s / |y_rs| = min_j { y_0j / |y_rj| : y_rj < 0 }.

3. Pivot at the element y_rs and return to step 1.

Example 5.7. Consider the problem:

   min 3 x_1 + 4 x_2 + 5 x_3
   s.t.   x_1 + 2 x_2 + 3 x_3 ≥ 5
        2 x_1 + 2 x_2 +   x_3 ≥ 6
        x_1, x_2, x_3 ≥ 0.

In canonical form, it is

   max −3 x_1 − 4 x_2 − 5 x_3
   s.t.  −x_1 − 2 x_2 − 3 x_3 ≤ −5
       −2 x_1 − 2 x_2 −   x_3 ≤ −6
        x_1, x_2, x_3 ≥ 0.

The initial tableau is

           x_1   x_2   x_3   x_4   x_5  |   b
   x_4     −1    −2    −3     1     0   |  −5
   x_5     −2    −2    −1     0     1   |  −6
   x_0      3     4     5     0     0   |   0
   ratios  3/2    2     5

After one iteration:

           x_1   x_2   x_3    x_4   x_5   |   b
   x_4      0    −1   −5/2     1   −1/2   |  −2
   x_1      1     1    1/2     0   −1/2   |   3
   x_0      0     1    7/2     0    3/2   |  −9
   ratios         1    7/5           3

Optimal tableau:

          x_1   x_2   x_3   x_4   x_5  |   b
   x_2     0     1    5/2   −1    1/2  |   2
   x_1     1     0    −2     1    −1   |   1
   x_0     0     0     1     1     1   | −11

Since both the primal and the dual solutions are feasible, we have reached the optimal solution. The primal optimal solution is given by x = [1, 2, 0], the dual optimal solution is u = [1, 1], and the optimal objective value is 11, since the original problem is a minimization problem.

5.5 Post-Optimality or Sensitivity Analysis

Given an LP problem, suppose that we have found the optimal feasible solution by the simplex (or dual simplex) method. Post-optimality or sensitivity analysis is the study of how changes in the original LP problem would affect the feasibility and optimality of the current optimal solution. Before we analyze the method, we first recall the following criteria for determining the optimal primal solutions.

   Primal feasibility:  A x = b  ⟺  B x_B + N x_N = b.     (5.7)
   Primal optimality:   z^T − c^T = c_B^T B^{−1} A − c^T ≥ 0.     (5.8)

In the following we consider changes in the original problem that affect only one of these criteria, for in these cases we can obtain the new optimal solution without redoing the whole simplex method for the new LP.

(1) Changes in the resource vector b. From (5.7) and (5.8), we see that changes in b will affect the feasibility but not the optimality of the current optimal solution. Thus if the current optimal solution satisfies the old constraints with the new right-hand sides, then it will be the new optimal solution. By duality theory, the changes in b will affect the optimality but not the feasibility of the dual optimal solution. In fact, the cost vector for the dual problem is given by b.

(2) Changes in the cost/profit vector c. By duality theory (or from (5.7) and (5.8) again), we see that changes in the cost vector c will affect only the optimality of the primal optimal solution and the feasibility of the dual optimal solution. Thus if the current optimal solution satisfies the criterion that the new x_0 row is non-negative, then it will be the optimal solution for the new LP.

(3) Changes in the technology matrix A. If the changes in A occur at the basic variables, then B will be changed. From (5.7) and (5.8), we see that both the feasibility and the optimality of the current optimal solution may be violated. In that case, we have to redo the whole problem. However, if the changes in A are restricted to columns of nonbasic variables (i.e. N in (5.7)), then only dual feasibility (or equivalently primal optimality) will be affected, because x_N = 0.
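Cases (1) and (2) can be carried out with a few matrix operations once B^{−1} is available from the optimal tableau. The sketch below (an added illustration) redoes these checks for Example 5.6, whose optimal basis is {x_3, x_2}; the modified vectors b̂ and ĉ are hypothetical values chosen only for this sketch.

```python
# Sketch: post-optimality checks (cases (1) and (2) above) on Example 5.6,
# using the optimal basis {x3, x2} read off its final tableau.
import numpy as np

A = np.array([[2.0, 2.0, 1.0], [1.0, 2.0, 2.0]])   # structural columns a1, a2, a3
c = np.array([2.0, 4.0, 3.0])
B = A[:, [2, 1]]                                   # basis columns (a3, a2)
cB = c[[2, 1]]
B_inv = np.linalg.inv(B)

# (1) change in b: feasibility requires B^{-1} b_hat >= 0
b_hat = np.array([5.0, 6.0])                       # hypothetical new resources
print("new basic solution:", B_inv @ b_hat)        # basis stays optimal if >= 0

# (2) change in c: optimality requires z_j - c_j = cB^T B^{-1} a_j - c_j >= 0
c_hat = np.array([3.0, 4.0, 3.0])                  # hypothetical new profits
cB_hat = c_hat[[2, 1]]
reduced = cB_hat @ B_inv @ A - c_hat
print("z - c for structural columns:", reduced)    # basis stays optimal if all >= 0
```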

(4) Addition of a new primal variable / dual constraint (a new column a_j together with its cost coefficient c_j). This case is essentially the same as considering simultaneously a change in an objective function coefficient and in the corresponding technological coefficients of a nonbasic variable. (One can assume that the a_j and the c_j were originally there with values equal to zero.) Consequently, the addition of a new variable can only affect the optimality of the problem. This means that the new variable will enter the solution if, and only if, it improves the objective function value. Otherwise the new variable just becomes another nonbasic variable (= 0).

(5) Addition of a new primal constraint / dual variable (a new row a_i together with its right-hand side b_i). A new constraint can affect the feasibility of the current optimal solution only if it is active, i.e. if it is not redundant with respect to the current optimal solution. Consequently, the first step is to check whether the new constraint is satisfied by the current optimal solution. If it is satisfied, the new constraint is redundant and the optimal solution remains unchanged. Otherwise, the new constraint must be added to the system and the dual simplex method is used to clear the primal infeasibility (dual optimality is kept throughout).

Example 5.8. Consider an LPP given by an initial simplex tableau with structural variables x_1, x_2, x_3 and slack variables x_4, x_5, and suppose the simplex method terminates with an optimal tableau in which x_4 and x_1 are the basic variables, with basis matrix B and basic cost vector c_B.

(1) Changes in the resource vector b. Suppose the resource vector is changed to a new vector b̂. The new basic solution is

   x̂_B = B^{−1} b̂,

with new objective value x̂_0 = c_B^T x̂_B. If some component of x̂_B is negative, the current basis is no longer feasible. We then need to apply the dual simplex method to restore primal feasibility, starting from the old optimal tableau with its b column replaced by [x̂_B ; x̂_0]. The new starting tableau is the old optimal tableau with only this column changed.

(2) Changes in the cost/profit vector c. Suppose the cost vector is changed to a new vector ĉ. The new x_0 row is given by

   ẑ^T − ĉ^T = ĉ_B^T B^{−1} A − ĉ^T.

(Recall that B^{−1}A is just the body of the last tableau.) If this row is non-negative, primal optimality still holds and the primal optimal solution is unchanged. Looking at the dual, however, the new dual variables are

   û = (B^{−1})^T ĉ_B,

so the dual optimal solution changes even though the primal one does not.

(3) Changes in the technology coefficients a_ij. In the optimal tableau of this example, x_1 and x_4 are basic. Thus we can only change the entries of the nonbasic columns a_2, a_3 and a_5 without redoing the whole problem. Suppose the column a_2 is changed to a new column â_2. Primal feasibility is not affected, but we need to calculate the new reduced cost coefficient for x_2,

   ẑ_2 − c_2 = c_B^T B^{−1} â_2 − c_2.

If ẑ_2 − c_2 ≥ 0, the current basis remains optimal, with x_B and x_0 unchanged. If instead ẑ_2 − c_2 < 0, optimality is lost. In this case, we need to replace the column under x_2 in the (previously optimal) tableau by

   ŷ_2 = B^{−1} â_2

and pivot in the x_2 column once (to have x_2 become basic) to restore optimality, which yields the new optimal x_B and the new objective value x_0.

Example 5.9. (Adding extra constraints) Consider an LPP whose starting simplex tableau is dual feasible (the x_0 row is non-negative) but primal infeasible (some entries of the b column are negative). Using the dual simplex method, we arrive at an optimal tableau.

Since in that tableau both the primal and the dual solutions are feasible, we have reached the optimal solutions; say the basic variables there are x_2 and x_4, with optimal primal solution x and optimal dual solution u.

Suppose now that a new constraint involving x_1 and x_2 is added, and that the current optimal solution violates it, so x is no longer feasible. Adding a slack variable x_5 to the new constraint, we append this constraint as an extra row of the optimal tableau. Notice that after the row is added, the column of the basic variable x_2 is in general no longer a unit vector, so the array is not yet a simplex tableau. Using a pivot operation on the x_2 column, we change it back to a unit vector and obtain a proper simplex tableau. We then note that the primal has become infeasible again (the new row has a negative right-hand side) while dual feasibility is retained, so the dual simplex method can be applied once more, and it finally arrives at the new optimal tableau.

The dual simplex iterations terminate with the new optimal primal solution x and the new optimal dual solution u. This idea of treating additional constraints can sometimes be exploited to reduce the overall computational effort of solving an LP. Since the computational difficulty depends far more heavily on the number of constraints than on the number of variables, it may be possible first to relax the constraints which one suspects are not binding. The relaxed problem is then solved with a smaller number of constraints. After the optimal solution of the relaxed problem is obtained, the deleted constraints are checked for feasibility in the spirit of such sensitivity analysis. It is common to refer to these constraints as secondary constraints.
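The following is a minimal sketch of this relax-and-check idea, using the (reconstructed) data of Example 5.5 and an arbitrary guess of which constraints are primary; the row selection, tolerance and SciPy call are assumptions of the sketch, not part of the notes.

```python
# Sketch of the "secondary constraints" idea: solve a relaxed LP without the
# suspected non-binding rows, then verify the deleted rows afterwards.
import numpy as np
from scipy.optimize import linprog

c = np.array([-4.0, -13.0])                    # maximize 4 x1 + 13 x2
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [-1.0, 3.0],
              [2.0, 0.0]])
b = np.array([6.0, 8.0, 7.0, 5.0, 12.0])

primary = [2, 3]                               # rows we suspect are binding
res = linprog(c, A_ub=A[primary], b_ub=b[primary],
              bounds=[(0, None)] * 2, method="highs")

violated = [i for i in range(len(b))
            if i not in primary and A[i] @ res.x > b[i] + 1e-9]
print("relaxed optimum:", res.x, "violated secondary rows:", violated)
# If some secondary constraint is violated, add it back as an extra tableau row
# and clear the infeasibility with the dual simplex method, as in Example 5.9.
```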