Farkas Lemma, Dual Simplex and Sensitivity Analysis

Summer 2011 Optimization I — Lecture 10
Farkas Lemma, Dual Simplex and Sensitivity Analysis

1 Farkas Lemma

Theorem 1. Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then exactly one of the following two alternatives is true:

(i) there exists $x \ge 0$ such that $Ax = b$;
(ii) there exists $y$ such that $y^T A \ge 0^T$ and $y^T b < 0$.

Proof. Suppose (i) is true. Then (ii) cannot be true, because if $Ax = b$ with $x \ge 0$ and $y^T A \ge 0^T$, then $y^T b = y^T A x \ge 0$.

Now suppose (i) is not true. To show that (ii) must then be true, we consider the optimization problem

    max  $0^T x$
    s.t. $Ax = b$, $x \ge 0$,

and its dual

    min  $b^T y$
    s.t. $A^T y \ge 0$.

If (i) is not true, the primal problem is infeasible. So by strong duality, the dual problem must be either infeasible or unbounded. It cannot be infeasible, because $y = 0$ is feasible for the dual. So the dual optimum is $-\infty$, and hence there exists some $y$ such that $A^T y \ge 0$ and $b^T y < 0$.

2 Dual Simplex Method

Consider an LP in standard form, min $c^T x$ subject to $Ax = b$, $x \ge 0$, and its dual, max $y^T b$ subject to $y^T A \le c^T$. Recall from last week that the (primal) Simplex Method maintains a pair of solutions, one for the primal and one for the dual, that satisfy complementary slackness. If $x$ is a basic solution, let $B$ be the basis matrix formed by the columns of the basic variables $x_{B(1)}, \ldots, x_{B(m)}$. Then the associated dual solution is $y^T = c_B^T B^{-1}$. (Note: this is a basic solution to the dual: a basic solution to $\{y \in \mathbb{R}^m : A^T y \le c\}$ must have $m$ linearly independent constraints that hold at equality. Indeed, the solution $y$ we define satisfies the constraints $B^T y \le c_B$ at equality, and the rows of $B^T$ (i.e., the columns of $B$) are linearly independent.)

Today, we see the dual simplex method, which also maintains a pair of basic solutions, but now the dual solution is feasible, and we work toward making the primal solution feasible. We use the tableau implementation from the primal simplex method:

    $-c_B^T B^{-1} b$ | $\bar c^T = c^T - c_B^T B^{-1} A$
    ------------------+----------------------------------
    $B^{-1} b$        | $B^{-1} A$

That is, row 0 contains $-c_B^T x_B$ followed by the reduced costs $\bar c_1, \ldots, \bar c_n$, and rows $1, \ldots, m$ contain the values $x_{B(1)}, \ldots, x_{B(m)}$ of the associated solution $x_B = B^{-1} b$ followed by the matrix $B^{-1} A$.
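The bookkeeping behind this tableau can be checked in a few lines. The following is a minimal pure-Python sketch with made-up data (the LP, the basis, and the helper `solve2` are all illustrative, not part of the notes): it computes the associated dual solution $y^T = c_B^T B^{-1}$, the reduced costs, and the top-left entry.

```python
from fractions import Fraction as F

# A small illustrative standard-form LP (numbers made up for this sketch):
# min c^T x  s.t.  Ax = b, x >= 0, with x1 and x2 basic.
A = [[F(3), F(2), F(1), F(0)],
     [F(5), F(3), F(0), F(1)]]
b = [F(10), F(16)]
c = [F(-5), F(-1), F(12), F(0)]
basis = [0, 1]

def solve2(M, rhs):
    """Solve the 2x2 system M z = rhs by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

B = [[row[j] for j in basis] for row in A]
xB = solve2(B, b)                                    # x_B = B^{-1} b
y = solve2([[B[0][0], B[1][0]], [B[0][1], B[1][1]]],
           [c[j] for j in basis])                    # B^T y = c_B

# Row 0: reduced costs c^T - y^T A (zero on basic columns) and -c_B^T x_B.
cbar = [c[j] - sum(y[i] * A[i][j] for i in range(2)) for j in range(4)]
top_left = -sum(c[j] * xj for j, xj in zip(basis, xB))

assert all(cbar[j] == 0 for j in basis)
assert (xB, cbar, top_left) == ([2, 2], [0, 0, 2, 7], 12)
```

Note that the reduced costs of the basic variables come out as 0, as complementary slackness requires.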

Now, we don't require $x$ to be nonnegative, but we do require $\bar c \ge 0$ (which is the same as requiring that the dual solution given by $y^T = c_B^T B^{-1}$ is dual feasible). If there is a basic variable $x_{B(l)} < 0$, we want to increase it to 0 (make it leave the basis). If all basic variables are $\ge 0$, then we are done, since we have found a feasible primal and a feasible dual solution that obey complementary slackness.

Denote the $l$-th row of the tableau by $(x_{B(l)}, v_1, \ldots, v_n)$. Then we let the following variable enter the basis: $j$ such that $v_j < 0$ and

    $\dfrac{\bar c_j}{|v_j|} = \min_{i : v_i < 0} \dfrac{\bar c_i}{|v_i|}$.

If such a $j$ exists, then we do a pivot as before, i.e., $j$ must become the $l$-th basic variable, so we do elementary row operations to transform the $j$-th column of the tableau into the unit vector $e_l$. Note that this means we add $\bar c_j / |v_j|$ times row $l$ to row 0, so the $i$-th entry in row 0 becomes

    $\bar c_i + \dfrac{\bar c_j}{|v_j|} v_i$,

and this is $\ge 0$ by the choice of $j$ — we thus maintain dual feasibility.

Suppose that there is no $j$ such that $v_j < 0$. Then we can conclude that the primal problem is infeasible (and hence the dual problem has optimum $+\infty$): let $g^T$ be the $l$-th row of $B^{-1}$. Then the $l$-th row of the tableau is equal to $(g^T b, g^T A)$. We know that $x_{B(l)} = g^T b < 0$, and we know that $(v_1, \ldots, v_n) = g^T A \ge 0$. Then, by Farkas' Lemma (applied with $y = g$), there exists no $x \ge 0$ such that $Ax = b$.

Example:

     0 |  2   6  10   0   0
     2 | -2   4   1   1   0
    -1 |  4  -2  -3   0   1

Since the second basic variable (corresponding to $x_5$) is less than 0, we want to let $x_5$ leave the basis. We check which variable should enter the basis by computing $\min_{i : v_i < 0} \bar c_i / |v_i| = \min\{6/2, 10/3\} = 3$, so the corresponding variable ($x_2$) enters the basis. We pivot on element $(2, 2)$ and obtain the following new tableau:

     -3 | 14   0    1   0    3
      0 |  6   0   -5   1    2
    1/2 | -2   1  3/2   0 -1/2

So when should we use the dual simplex method instead of the primal simplex method?
- It is often easier to find an initial feasible dual solution (for example, if $c \ge 0$ then $y = 0$ is a feasible dual solution), so if we use the dual simplex method, we don't have to do Phase I of the Simplex Method to find an initial bfs.
- In practice, the dual simplex method is often faster (see a lecture by Bixby at ADFOCS '03: http://www.mpiinf.mpg.de/conferences/adfocs-03/slides/bixby 2.pdf).
- If we need to re-solve an LP because some of our input data changed and the previous dual solution remains feasible, e.g., if we change some $b_i$ or add a new constraint.
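The dual simplex pivot from the numerical example above can be replayed with exact rational arithmetic. This is a small sketch, not an implementation from the notes; the tableau layout (right-hand side in column 0, columns 1 to 5 for $x_1, \ldots, x_5$) is an assumption of the sketch.

```python
from fractions import Fraction as F

# The tableau from the example: x4, x5 basic, x5 = -1 < 0 must leave.
T = [[F(0), F(2), F(6), F(10), F(0), F(0)],
     [F(2), F(-2), F(4), F(1), F(1), F(0)],
     [F(-1), F(4), F(-2), F(-3), F(0), F(1)]]

l = 2  # row whose basic variable is negative

# Ratio test: among columns with v_j < 0, minimize c_j / |v_j|.
ratio, j = min((T[0][k] / -T[l][k], k) for k in range(1, 6) if T[l][k] < 0)
assert (ratio, j) == (3, 2)  # x2 enters the basis

# Elementary row operations turning column j into the unit vector e_l.
piv = T[l][j]
T[l] = [t / piv for t in T[l]]
for r in range(3):
    if r != l:
        factor = T[r][j]
        T[r] = [T[r][k] - factor * T[l][k] for k in range(6)]

assert T == [[F(-3), F(14), F(0), F(1), F(0), F(3)],
             [F(0), F(6), F(0), F(-5), F(1), F(2)],
             [F(1, 2), F(-2), F(1), F(3, 2), F(0), F(-1, 2)]]
```

The final assertion reproduces the new tableau from the example, and row 0 indeed stays non-negative, i.e., dual feasibility is maintained.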

3 Sensitivity Analysis

We consider a primal and dual pair in standard form, i.e.,

    min  $c^T x$          max  $y^T b$
    s.t. $Ax = b$         s.t. $A^T y \le c$
         $x \ge 0$

Suppose we have solved this problem to optimality, that the optimal basis is $B$, the optimal primal solution is $x_B = B^{-1} b$, $x_j = 0$ for $j \ne B(1), \ldots, B(m)$, and the optimal dual solution is $y^T = c_B^T B^{-1}$.

3.1 Changes in b

Suppose we change $b$ to $b + \delta e_i$, i.e., one entry is changed by $\delta$ (which may be positive or negative). The primal solution associated with the current basis has $x_B = B^{-1}(b + \delta e_i)$. Note that the dual solution remains feasible, and hence the reduced costs remain non-negative. So we only need to check whether $B^{-1}(b + \delta e_i)$ is still feasible, i.e., whether it is $\ge 0$.

Suppose it is still feasible. Then we have found that the optimal value has changed by

    $c_B^T B^{-1}(b + \delta e_i) - c_B^T B^{-1} b = \delta\, c_B^T B^{-1} e_i = \delta\, y^T e_i = \delta y_i$.

This is the reason why we sometimes call the dual variables the marginal costs or shadow prices of the requirements: the $i$-th dual variable tells us by how much the optimal value changes if we increase the $i$-th requirement by one unit. If $B^{-1}(b + \delta e_i)$ is not feasible, then we can use the Dual Simplex Method from the current basis to find an optimal solution for the new problem.

3.2 Changes in c

Suppose we change $c$ to $c + \delta e_j$, i.e., one entry is changed by $\delta$ (which may be positive or negative). The primal solution associated with the current basis remains feasible, but it may no longer be optimal, i.e., the reduced cost vector $c^T - c_B^T B^{-1} A$ may no longer be non-negative. We consider two cases:

- Suppose $j$ is a non-basic variable. Then the only reduced cost that changes is $\bar c_j \to \bar c_j + \delta$.
- Suppose $j$ is a basic variable. Then $c_B$ changes, so all reduced costs are affected.

If the reduced costs are still non-negative, then our current solution is optimal. In the first case ($j$ is non-basic) the objective value does not change; in the second case ($j$ is basic) it changes by $\delta x_j$.
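The shadow-price identity in Section 3.1 can be checked numerically. In this pure-Python sketch, the basis matrix $B$, basic costs $c_B$, right-hand side $b$, the perturbation $\delta$, and the helper `solve2` are all made up for illustration.

```python
from fractions import Fraction as F

# Illustrative data for a 2x2 optimal basis.
B  = [[F(3), F(2)], [F(5), F(3)]]
cB = [F(-5), F(-1)]
b  = [F(10), F(16)]

def solve2(M, rhs):
    """Solve the 2x2 system M z = rhs by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

y = solve2([[B[0][0], B[1][0]], [B[0][1], B[1][1]]], cB)  # B^T y = c_B

delta = F(1, 2)                        # perturb b_1 by delta
xB_old = solve2(B, b)
xB_new = solve2(B, [b[0] + delta, b[1]])
assert min(xB_new) >= 0                # still feasible: same basis stays optimal

obj_old = sum(ci * xi for ci, xi in zip(cB, xB_old))
obj_new = sum(ci * xi for ci, xi in zip(cB, xB_new))
assert obj_new - obj_old == delta * y[0]   # change equals delta * y_1
```

If `delta` is made too large, the feasibility assertion fails, which is exactly the case where the notes say to switch to the dual simplex method.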
If some reduced cost becomes negative, we can use the Primal Simplex Method to compute the new optimal solution starting from the current basis.

3.3 Adding a variable

Suppose we add a variable $x_{n+1}$, so our new problem becomes min $c^T x + c_{n+1} x_{n+1}$ subject to $Ax + A_{n+1} x_{n+1} = b$, $x \ge 0$, $x_{n+1} \ge 0$. We can just check whether our current basis is still optimal, i.e., whether the reduced cost of $x_{n+1}$ is non-negative. We compute $\bar c_{n+1} = c_{n+1} - c_B^T B^{-1} A_{n+1}$. If it is non-negative, then our current solution is optimal; otherwise, we use the Primal Simplex Method to find an optimal solution starting from the current basis.
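Pricing out a new column as in Section 3.3 is a one-line computation once the dual solution is known. In this sketch the dual solution `y`, the candidate column `A_new`, and its cost `c_new` are all hypothetical numbers, not data from the notes.

```python
from fractions import Fraction as F

# Hypothetical data: dual solution y^T = c_B^T B^{-1}, new column, new cost.
y     = [F(10), F(-7)]
A_new = [F(1), F(1)]
c_new = F(4)

# Reduced cost of the new variable: c_{n+1} - c_B^T B^{-1} A_{n+1} = c_{n+1} - y^T A_{n+1}.
cbar_new = c_new - sum(yi * ai for yi, ai in zip(y, A_new))
assert cbar_new == 1   # non-negative, so the current basis stays optimal
```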

3.4 Changes in a column of A

If the column that changes, say $A_j$, is non-basic, then $B$ does not change, so the primal solution is still feasible. The reduced cost of $j$ changes and may become negative; in that case, we can use the primal simplex method to find an optimal solution starting from the current solution. If the column that changes is basic, the whole tableau changes; in fact, the current basis may not be a basis anymore. You can still say something if the change is small (and the current basis is still a basis), but the arguments become more complicated, so we omit the details.

3.5 Adding a constraint

Here, we have to consider two cases: either an inequality or an equality constraint is added.

Suppose the new constraint is $a_{m+1}^T x \ge b_{m+1}$. If the current optimal solution satisfies this constraint, it is optimal for the new problem. Otherwise, we add a slack variable $x_{n+1} \ge 0$ and rewrite the constraint as $a_{m+1}^T x - x_{n+1} = b_{m+1}$. The new constraint matrix is

    $\begin{pmatrix} A & 0 \\ a_{m+1}^T & -1 \end{pmatrix}$,

its right-hand-side vector is $\begin{pmatrix} b \\ b_{m+1} \end{pmatrix}$, and the new objective coefficients are $(c^T, 0)$.

We can make the new slack variable basic, since the column we added is linearly independent of our current basis. Note that the associated basic solution will have $x_{n+1} < 0$; otherwise, the optimal solution to the original problem was feasible for the new problem. So, we need to use the dual simplex method. What does the tableau that we start with look like? The new basis matrix is

    $\begin{pmatrix} B & 0 \\ \bar a^T & -1 \end{pmatrix}$,

where $\bar a^T$ is obtained by keeping only those entries of $a_{m+1}^T$ that correspond to the basic variables. Its inverse is

    $\begin{pmatrix} B^{-1} & 0 \\ \bar a^T B^{-1} & -1 \end{pmatrix}$.

(You can verify that the product of the new basis matrix and its inverse is indeed the $(m+1) \times (m+1)$ identity matrix.) Using this information, we can determine the new tableau, and use the dual simplex method to find the new optimum. The reduced costs become

    $(c^T, 0) - (c_B^T, 0) \begin{pmatrix} B^{-1} & 0 \\ \bar a^T B^{-1} & -1 \end{pmatrix} \begin{pmatrix} A & 0 \\ a_{m+1}^T & -1 \end{pmatrix} = (c^T - c_B^T B^{-1} A,\ 0)$.
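The "you can verify" step for the inverse of the new basis matrix can be done mechanically. This sketch uses an illustrative $2 \times 2$ basis $B$ (with its inverse entered by hand) and a made-up constraint row; it multiplies out the two $3 \times 3$ block matrices and checks the result against the identity.

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Illustrative basis B with its known inverse, and a made-up constraint row.
B    = [[F(3), F(2)], [F(5), F(3)]]
Binv = [[F(-3), F(2)], [F(5), F(-3)]]
abar = [F(1), F(1)]   # entries of a_{m+1} on the basic variables

aBinv = [sum(abar[i] * Binv[i][j] for i in range(2)) for j in range(2)]

Bnew     = [B[0] + [F(0)], B[1] + [F(0)], abar + [F(-1)]]
Bnew_inv = [Binv[0] + [F(0)], Binv[1] + [F(0)], aBinv + [F(-1)]]

identity = [[F(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(Bnew, Bnew_inv) == identity
```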
Rows $1, \ldots, m+1$, columns $1, \ldots, n+1$ of the tableau contain

    $\begin{pmatrix} B^{-1} & 0 \\ \bar a^T B^{-1} & -1 \end{pmatrix} \begin{pmatrix} A & 0 \\ a_{m+1}^T & -1 \end{pmatrix} = \begin{pmatrix} B^{-1} A & 0 \\ \bar a^T B^{-1} A - a_{m+1}^T & 1 \end{pmatrix}$.

(Note: we thus only need to compute the last row, since the other rows stay the same.)

Now, suppose we add a new equality constraint $a_{m+1}^T x = b_{m+1}$. If the current optimal solution satisfies this constraint, it is optimal for the new problem. Otherwise, suppose without loss of generality that our current solution $x$ has $a_{m+1}^T x > b_{m+1}$. We add an artificial variable $x_{n+1} \ge 0$ and rewrite the constraint as $a_{m+1}^T x - x_{n+1} = b_{m+1}$. Note that we want $x_{n+1}$ to be zero in an optimal solution, since otherwise the solution is not feasible for the original equality constraint. Hence,

we give $x_{n+1}$ a large objective coefficient $M$. The new constraint matrix and right-hand side are thus the same as in the previous case,

    $\begin{pmatrix} A & 0 \\ a_{m+1}^T & -1 \end{pmatrix}$ and $\begin{pmatrix} b \\ b_{m+1} \end{pmatrix}$,

and the new objective coefficients are $(c^T, M)$. Note that if we set the variables $x_1, \ldots, x_n$ according to our current solution, we will have $x_{n+1} = a_{m+1}^T x - b_{m+1} > 0$, hence this gives us a primal feasible solution. By the same arguments as in the previous case, this is a basic solution. This solution is not optimal (i.e., the corresponding dual solution is not feasible) if we choose $M$ to be large enough. Hence, we can continue with the primal simplex method from this current (primal feasible) solution.

We now give the details on how to construct the tableau. The inverse basis matrix is again

    $\begin{pmatrix} B^{-1} & 0 \\ \bar a^T B^{-1} & -1 \end{pmatrix}$,

and rows $1, \ldots, m+1$, columns $1, \ldots, n+1$ of the tableau contain

    $\begin{pmatrix} B^{-1} & 0 \\ \bar a^T B^{-1} & -1 \end{pmatrix} \begin{pmatrix} A & 0 \\ a_{m+1}^T & -1 \end{pmatrix} = \begin{pmatrix} B^{-1} A & 0 \\ \bar a^T B^{-1} A - a_{m+1}^T & 1 \end{pmatrix}$.

(Note: we thus only need to compute the last row, since the other rows stay the same.) The new reduced costs are given by

    $(c^T, M) - (c_B^T, M) \begin{pmatrix} B^{-1} & 0 \\ \bar a^T B^{-1} & -1 \end{pmatrix} \begin{pmatrix} A & 0 \\ a_{m+1}^T & -1 \end{pmatrix} = (c^T - c_B^T B^{-1} A - M(\bar a^T B^{-1} A - a_{m+1}^T),\ 0)$.

Example: Consider

    min  $-5x_1 - x_2 + 12x_3$
    s.t. $3x_1 + 2x_2 + x_3 = 10$
         $5x_1 + 3x_2 + x_4 = 16$
         $x_1, x_2, x_3, x_4 \ge 0$.

The optimal tableau is

    12 | 1  0  2  7
     2 | 1  0 -3  2
     2 | 0  1  5 -3

(with the columns of $x_1$ and $x_2$ forming the identity), and the basis matrix is $B = \begin{pmatrix} 3 & 2 \\ 5 & 3 \end{pmatrix}$.

We add the constraint $x_1 + x_2 = 3$, which is violated by the optimal solution $(2, 2, 0, 0)$: it has $x_1 + x_2 = 4 > 3$. We form the new problem

    min  $-5x_1 - x_2 + 12x_3 + Mx_5$
    s.t. $3x_1 + 2x_2 + x_3 = 10$
         $5x_1 + 3x_2 + x_4 = 16$
         $x_1 + x_2 - x_5 = 3$
         $x_1, x_2, x_3, x_4, x_5 \ge 0$.

We want to obtain the tableau associated with the solution $(2, 2, 0, 0, 1)$. The new inverse basis matrix is

    $\begin{pmatrix} B^{-1} & 0 \\ \bar a^T B^{-1} & -1 \end{pmatrix} = \begin{pmatrix} -3 & 2 & 0 \\ 5 & -3 & 0 \\ 2 & -1 & -1 \end{pmatrix}$,

and we compute

    $\bar a^T B^{-1} A - a_{m+1}^T = (0,\ 0,\ 2,\ -1)$

to obtain entries $1, \ldots, n$ of the new row added to the tableau; the new reduced costs are obtained from the previous ones by subtracting $M(\bar a^T B^{-1} A - a_{m+1}^T) = M \cdot (0,\ 0,\ 2,\ -1)$. So our new tableau is

    12-M | 0  0  2-2M  7+M  0
       2 | 1  0   -3    2   0
       2 | 0  1    5   -3   0
       1 | 0  0    2   -1   1

Since $2 - 2M < 0$ for $M$ large enough, this basic solution is primal feasible but not dual feasible, and the primal simplex method continues from here by letting $x_3$ enter the basis.
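The entries of the appended row can be recomputed directly from the formula $\bar a^T B^{-1} A - a_{m+1}^T$; the sketch below spells that product out for the data of this example ($B^{-1}$ is entered by hand).

```python
from fractions import Fraction as F

# Data of the example: B^{-1} for B = [[3,2],[5,3]], the constraint matrix A,
# the new constraint row a (coefficients of x1 + x2), and its restriction
# abar to the basic variables x1, x2.
Binv = [[F(-3), F(2)], [F(5), F(-3)]]
A    = [[F(3), F(2), F(1), F(0)],
        [F(5), F(3), F(0), F(1)]]
a    = [F(1), F(1), F(0), F(0)]
abar = [F(1), F(1)]

aBinv = [sum(abar[i] * Binv[i][j] for i in range(2)) for j in range(2)]   # abar^T B^{-1}
row = [sum(aBinv[i] * A[i][j] for i in range(2)) - a[j] for j in range(4)]
assert row == [F(0), F(0), F(2), F(-1)]
```

The assertion reproduces the entries $(0, 0, 2, -1)$ computed above; appending the $+1$ for the artificial variable gives the last tableau row.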