Developing an Algorithm for LP: Preamble to Section 3 (Simplex Method)


Moving from BFS to BFS

We consider an LP given in standard form and let x_0 be a BFS. Let B_1, B_2, ..., B_m be the columns of A corresponding to the basis B for x_0. Then B = (B_1, ..., B_m) is an m × m invertible basis matrix. For A_j ∉ B (x_j a nonbasic variable) there exists y_j such that

    A_j = B y_j    (1)

or

    A_j = Σ_{i=1}^m B_i y_{ij}    (2)

since B spans R^m and therefore A_j ∈ R^m is expressible as a linear combination of columns from B. Let y_0 = (y_{10}, ..., y_{m0})^T be the values of the basic variables (the x_j such that A_j ∈ B). Then

    B y_0 = b    (3)

or

    Σ_{i=1}^m B_i y_{i0} = b    (4)

with y_{i0} ≥ 0. Consider (4) − ε (2) for some scalar ε ≥ 0:

    Σ_{i=1}^m (y_{i0} − ε y_{ij}) B_i + ε A_j = b    (5)

Suppose x_0 is nondegenerate; then all y_{i0} > 0. As ε increases from zero, we move from the BFS x_0 to feasible solutions with m + 1 strictly positive components. ε can increase until some component of y_0 becomes zero. This happens at the value

    ε_0 = min_{i : y_{ij} > 0} y_{i0} / y_{ij}    (6)
        = y_{p0} / y_{pj}, say.    (7)

Example (contd.) The BFS corresponding to the basic variables {x_1, x_3, x_6, x_7} is x_0, with x_7 = 4 (the remaining numerical values of this example did not survive transcription), and B = (A_1, A_3, A_6, A_7). The nonbasic column A_5 may be written A_5 = y_1 A_1 + y_2 A_3 + y_3 A_6 + y_4 A_7, and (5) becomes

    (y_{10} − ε y_1) A_1 + (y_{20} − ε y_2) A_3 + (y_{30} − ε y_3) A_6 + (4 − ε y_4) A_7 + ε A_5 = b

The family of feasible points moves from the vertex x_0 to the new BFS x_1 as ε increases from 0 to the maximum value ε_0 given by (6). The new set of basic variables is {x_1, x_3, x_5, x_7}. Thus x_5 joins the basis and x_6 leaves the basis.

Notes.
1. Where there is a tie in the minimization operation (6) the new BFS is degenerate.
2. If y_{ij} ≤ 0 for all i, then ε can be increased indefinitely and the feasible region F is unbounded.
3. If x_0 is degenerate (some y_{i0} = 0) and the corresponding y_{ij} > 0, then ε_0 = 0 and x_j joins the basis at zero level. In this case the new BFS represents the same vertex as x_0 in R^n, but corresponds to a different basis.
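The ratio test (6) and Notes 1-3 map directly onto a short computation. The following is a minimal sketch (the function and variable names are mine, not the notes'): it returns the step length ε_0 and the index p of the leaving row, and signals unboundedness (Note 2) when no y_{ij} is positive.

```python
import numpy as np

def ratio_test(y0, yj):
    """Minimum ratio rule (6): step length eps0 and index p of the leaving row.

    y0 : current values of the basic variables (y_i0 >= 0)
    yj : representation y_j of the entering column A_j in the current basis
    Returns (eps0, p), or (None, None) if eps can grow without bound
    (all y_ij <= 0, Note 2: the feasible region is unbounded).
    """
    y0 = np.asarray(y0, dtype=float)
    yj = np.asarray(yj, dtype=float)
    rows = np.where(yj > 1e-12)[0]      # only rows with y_ij > 0 compete
    if rows.size == 0:
        return None, None               # unbounded direction
    ratios = y0[rows] / yj[rows]
    p = int(rows[np.argmin(ratios)])    # a tie here means the new BFS is degenerate (Note 1)
    return float(ratios.min()), p

# Example: y0 = (2, 5, 1), entering column y_j = (1, 2, 0)  ->  eps0 = 2, p = 0
print(ratio_test([2, 5, 1], [1, 2, 0]))
```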

But B = fb i g m i= are a basis, therefore linearly independent. Hence all coe cients of B i are zero. In particular d p y pj = 0 hence d p = 0 (as y pj > 0 by construction) : This means that all fd i g are zero in (8). Hence B 0 is linearly independent. Choosing a pro table column A j The cost of a bfs x 0 = (y 0 ; 0) with basis matrix B is z 0 = c T By 0 (9) where c T B = (c B; :::; c Bm ) are the costs of the basic variables x B : The net change in cost of a solution corresponding to a unit increase in the variable x j is c j m X i= y ij c Bi = c j z j ; say NB. z j denotes the scalar product of c B with y j and is very important in later explanations of the Simplex tableau. The quantity c j z j = c j is known as the relative cost or the reduced cost of variable x j (at this vertex). Theorem (Cost improvement) At a BFS x 0 ; a pivot step in which x j enters the basis at value 0 changes the cost by an amount 0 c j = 0 (c j z j ) (0) where z j are the components of z T = c T BB A () Proof. The previous theorem establishes that the new BFS, x ; after pivoting is so the new cost is z 0 0 = y 0 i0 = ( y i0 0 y ij i 6= p 0 i = p (y i0 0 y ij ) c Bi + 0 c j i= i6=p = z 0 y p0 c Bp 0 z j + 0 y pj c Bp + 0 c j = z 0 + 0 (c j z j ) noting that 0 y pj = y p0 ; thus proving (0) :

Theorem (Cost improvement). At a BFS x_0, a pivot step in which x_j enters the basis at value ε_0 changes the cost by an amount

    ε_0 c̄_j = ε_0 (c_j − z_j)    (10)

where the z_j are the components of

    z^T = c_B^T B^{-1} A    (11)

Proof. The previous theorem establishes that the new BFS x_1 after pivoting has

    y'_{i0} = y_{i0} − ε_0 y_{ij}   (i ≠ p),    y'_{p0} = ε_0,

so the new cost is

    z'_0 = Σ_{i=1, i≠p}^m (y_{i0} − ε_0 y_{ij}) c_{Bi} + ε_0 c_j
         = z_0 − y_{p0} c_{Bp} − ε_0 z_j + ε_0 y_{pj} c_{Bp} + ε_0 c_j
         = z_0 + ε_0 (c_j − z_j),

noting that ε_0 y_{pj} = y_{p0}, thus proving (10). Since the y_{ij} are defined through A_j = B y_j we have y_j = B^{-1} A_j and hence

    z_j = Σ_{i=1}^m c_{Bi} y_{ij} = c_B^T y_j = c_B^T B^{-1} A_j

for each j, thus proving (11).

Theorem (Optimality criterion). If c̄ = c − z ≥ 0 then x_0 is optimal.

Proof. Let y be any feasible vector, not necessarily basic, such that Ay = b and y ≥ 0. Given that c − z ≥ 0 and y ≥ 0, their scalar product is also nonnegative:

    (c − z)^T y = (c^T − z^T) y ≥ 0

Therefore

    c^T y ≥ z^T y = c_B^T B^{-1} A y = c_B^T B^{-1} b = c_B^T y_0 = c^T x_0

so x_0 is optimal.

3. The Simplex Algorithm

The fundamental theorem of LP assures us that we can find an optimum of an LP in standard form by searching the BFSs of the constraint set

    Ax = b    (12)

which are precisely the vertices (extreme points) of the feasible region F. The simplex method proceeds from one BFS to another, ensuring that the objective function decreases monotonically (for minimization) until a minimum is reached.

3.1 Diagonal representation

Given a basic solution to (12) we suppose, for convenience of notation, that the basic variables are x_B = (x_1, ..., x_m)^T and the nonbasic variables are x_N = (x_{m+1}, ..., x_n)^T. Let B denote the m × m basis matrix (containing the basic columns of A) and N the matrix of nonbasic columns of A. Then (12) may be written

    (B  N) (x_B ; x_N) = b,   i.e.   B x_B + N x_N = b.

Premultiplying this system by B^{-1} gives the equivalent system

    I_m x_B + B^{-1} N x_N = B^{-1} b    (13)

where I_m is the m × m identity matrix, or simply

    x_B + Y x_N = y_0    (14)

where Y = B^{-1} N is an m × (n − m) matrix and y_0 gives the values of the basic variables x_B at this BFS. (Setting x_N = 0 gives x_B = y_0.) A typical column of Y will be y_j. The system of equations (13) or (14) is a representation of the original system (12) diagonalized with respect to the basic variables. Such a diagonalization may be achieved by a sequence of elementary row operations in a process known as Gauss-Jordan pivoting. When m = n such a process gives a unique solution y_0 to a system of linear equations, assuming that A is a full-rank square matrix.
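Equation (14) is easy to reproduce numerically. The sketch below (names are my own, not the notes') forms Y = B^{-1}N and y_0 = B^{-1}b for a given choice of basic columns:

```python
import numpy as np

def diagonal_form(A, b, basis):
    """Return (Y, y0) in the representation x_B + Y x_N = y_0 of (14).

    basis lists the columns of A forming B; the remaining columns form N.
    Assumes B is nonsingular, i.e. that the basis is valid.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    B, N = A[:, basis], A[:, nonbasic]
    Y = np.linalg.solve(B, N)       # Y  = B^{-1} N
    y0 = np.linalg.solve(B, b)      # y0 = B^{-1} b, the values of x_B when x_N = 0
    return Y, y0

# Illustration on an assumed 2 x 4 system with a slack basis (columns 2, 3).
A = np.array([[2.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
print(diagonal_form(A, b, basis=[2, 3]))
```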

3.2 Tableau iterations

Suppose that, initially, the variables are labelled so that {x_1, x_2, ..., x_m} are basic and {x_{m+1}, x_{m+2}, ..., x_n} are non-basic. The "tableau representation" of (14) is defined to be the partitioned matrix

    [I_m | Y | y_0] =
        [ 1 0 ... 0 | y_{1,m+1}  y_{1,m+2}  ...  y_{1,n} | y_{10} ]
        [ 0 1 ... 0 | y_{2,m+1}  y_{2,m+2}  ...  y_{2,n} | y_{20} ]
        [   ...     |              ...                   |  ...   ]
        [ 0 0 ... 1 | y_{m,m+1}     ...         y_{m,n}  | y_{m0} ]    (15)

The columns of the identity matrix correspond to a particular choice of basic variables, and different choices of the basic variables lead to alternative BFSs. Note that in a diagonal representation the basic variables correspond to columns of the identity matrix I_m. Suppose some basic variable x_p leaves the basis and some non-basic variable x_q enters the basis. Provided y_{pq} ≠ 0, the transformed tableau can be obtained by the following row operations on (15), where R_i denotes row i of the tableau:

    R'_p = R_p / y_{pq}
    R'_i = R_i − y_{iq} R'_p    (i ≠ p)

y_{pq} is known as the pivot element. Element by element we have

    y'_{pj} = y_{pj} / y_{pq}                     (all j)
    y'_{ij} = y_{ij} − y_{iq} y_{pj} / y_{pq}     (all j, i ≠ p)
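The row operations above are exactly one Gauss-Jordan pivot. A minimal sketch, assuming the tableau is held as a dense array (names are my own):

```python
import numpy as np

def pivot(T, p, q):
    """One Gauss-Jordan pivot of a full tableau T on the element T[p, q].

    Implements the row operations above:
        R'_p = R_p / y_pq,   R'_i = R_i - y_iq * R'_p  (i != p).
    Returns a new array; T itself is left unchanged.
    """
    T = np.asarray(T, dtype=float).copy()
    T[p, :] /= T[p, q]                        # R'_p = R_p / y_pq
    for i in range(T.shape[0]):
        if i != p:
            T[i, :] -= T[i, q] * T[p, :]      # R'_i = R_i - y_iq R'_p
    return T

# Make column 2 basic in row 0 of a small assumed tableau [I | Y | y0].
T0 = np.array([[1.0, 0.0, 2.0, 1.0, 6.0],
               [0.0, 1.0, 1.0, 3.0, 9.0]])
print(pivot(T0, p=0, q=2))
```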

Example (non-simplex). Consider a system already in diagonal form with respect to x_1, x_2, x_3, with x_4, x_5, x_6 nonbasic (the numerical coefficients and right-hand sides of this example did not survive transcription). Find a basic solution with basic variables x_4, x_5, x_6. Notice that at this stage we have no objective function, nor do we insist on feasibility.

Tableau T0 has the identity columns under x_1, x_2, x_3. The first column of the tableau indicates the basic variables, and the right-hand-side column contains their current values y_0, i.e. the values of x_1, x_2, x_3. One of these values is negative, so this basic solution would not be feasible for an LP.

Exchange x_1 and x_4 (pivot in the x_4 column on row 1). In Tableau T1 the basic variables are now, in row order, x_4, x_2 = 7 and x_3 = 4.

Exchange x_2 and x_5 (pivot in the x_5 column on row 2), giving Tableau T2 with basic variables x_4, x_5 and x_3.

Exchange x_3 and x_6 (pivot in the x_6 column on row 3), giving Tableau T3 with basic variables x_4 = 4, x_5 and x_6. The columns of I_3 visible in T3 now stand under x_4, x_5, x_6 and correspond to these basic variables. In place of the identity matrix originally under x_1, x_2, x_3 in T0 we now have the columns of B^{-1}, where B is the matrix that stood under x_4, x_5, x_6 in T0, since B^{-1} N = B^{-1} when N = I_3.

This example shows how we can obtain a new system in diagonal form with respect to a desired basis by a sequence of pairwise exchanges of a basic and a nonbasic variable. In the simplex algorithm our target basis is not given explicitly but is the one defining the optimal BFS.
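The example's pairwise exchanges can be mimicked numerically by pivoting, column by column, until the identity sits under the target basis. Since the example's own numbers did not survive transcription, the data below are invented purely for illustration:

```python
import numpy as np

def rediagonalize(T, target_cols):
    """Pivot a tableau [A | b] until the columns in target_cols carry I_m.

    target_cols[i] is the column that should become the i-th unit vector,
    i.e. the desired basic variable for row i.  As in the example, this is
    purely algebraic: no feasibility or objective considerations.
    """
    T = np.asarray(T, dtype=float).copy()
    for i, q in enumerate(target_cols):
        T[i, :] /= T[i, q]
        for r in range(T.shape[0]):
            if r != i:
                T[r, :] -= T[r, q] * T[i, :]
    return T

# Assumed system, diagonal w.r.t. {x1, x2, x3}; re-diagonalize w.r.t. {x4, x5, x6}.
T = np.array([[1.0, 0.0, 0.0,  2.0,  1.0,  1.0,  3.0],
              [0.0, 1.0, 0.0,  1.0,  2.0, -1.0,  1.0],
              [0.0, 0.0, 1.0, -1.0,  1.0,  2.0, -4.0]])
T3 = rediagonalize(T, target_cols=[3, 4, 5])
print(np.round(T3, 3))   # columns 3-5 now hold I_3; columns 0-2 hold B^{-1}
```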

We need to consider two additional aspects:
1. Keep the right-hand-side vector y_0 nonnegative, so that each basic solution is feasible (a BFS).
2. Improve the objective function z at each iteration.

3.3 Maintaining feasibility

The outgoing variable x_p, given a choice of incoming nonbasic variable x_q (j = q), is given by the minimum ratio rule:

    y_{p0} / y_{pq} = min_{i : y_{iq} > 0} y_{i0} / y_{iq}    (16)

3.4 Improving the objective function

We add a new row to the tableau (row 0, the z-row or "bottom row") representing the equation z = c^T x in diagonal form with respect to the current basis. We can show that the z-row then contains the coefficients z_j − c_j as defined earlier:

    z = c^T x = c_B^T x_B + c_N^T x_N
      = c_B^T (y_0 − Y x_N) + c_N^T x_N                       from (14)
      = z_0 − Σ_{j=m+1}^n c_B^T y_j x_j + Σ_{j=m+1}^n c_j x_j
      = z_0 − Σ_{j=m+1}^n (z_j − c_j) x_j                     (17)

or, in a form consistent with equation (14),

    z + Σ_{j=m+1}^n (z_j − c_j) x_j = z_0.

From (10) the per-unit decrease in the OF (objective function) z due to introducing the variable x_j is z_j − c_j. The criterion

    z_q − c_q = max_{j = m+1, ..., n} { z_j − c_j }    (18)

therefore picks the nonbasic variable which gives the largest rate of decrease (Dantzig's Rule). As long as z_q − c_q > 0, a pivot will decrease the OF. The optimality criterion (minimization) is

    z_j − c_j ≤ 0,   j = m+1, ..., n.    (19)

The corresponding rule for maximization problems is to replace max by min in (18) and to seek all z_j − c_j ≥ 0 at optimality.
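Dantzig's rule (18) and the minimum ratio rule (16) together select the pivot. Below is a sketch for a minimization tableau whose bottom row stores the z_j − c_j and whose last column stores y_0; this storage layout and the names are assumptions of the sketch, not something fixed by the notes.

```python
import numpy as np

def choose_pivot(T):
    """Pivot choice for a minimization tableau with the z-row stored last.

    Layout (an assumption of this sketch): T[:-1, :-1] is the body,
    T[:-1, -1] is y_0 and T[-1, :-1] holds the values z_j - c_j.
    Returns (p, q); (None, q) if column q reveals an unbounded problem;
    (None, None) if the optimality criterion (19) already holds.
    """
    T = np.asarray(T, dtype=float)
    zc = T[-1, :-1]
    if np.all(zc <= 1e-12):
        return None, None                         # all z_j - c_j <= 0: optimal
    q = int(np.argmax(zc))                        # Dantzig's rule (18)
    col, rhs = T[:-1, q], T[:-1, -1]
    rows = np.where(col > 1e-12)[0]
    if rows.size == 0:
        return None, q                            # objective unbounded below
    p = int(rows[np.argmin(rhs[rows] / col[rows])])   # minimum ratio rule (16)
    return p, q

# Assumed tableau: two constraint rows, then the z-row (z_j - c_j | z_0).
T = np.array([[1.0, 2.0, 1.0, 0.0, 4.0],
              [3.0, 1.0, 0.0, 1.0, 6.0],
              [2.0, 1.0, 0.0, 0.0, 0.0]])
print(choose_pivot(T))   # -> (1, 0): x_1 enters, the row-1 basic variable leaves
```

For a maximization problem one would instead pick the most negative z_j − c_j and stop when all entries are nonnegative.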

Example. Maximize

    z(x) = c_1 x_1 + 4 x_2 + c_3 x_3

subject to three "≤" constraints in x_1, x_2, x_3 ≥ 0. (Most of the numerical data of this example was lost in transcription: what survives is that c_2 = 4, that the coefficient of x_1 in the second constraint is 4, and that the third constraint has x_2-coefficient 4 and right-hand side 8.) Introduce slack variables s_1, s_2, s_3 ≥ 0 and rewrite constraint 1, for example, as its left-hand side plus s_1 equal to its right-hand side. In the following sequence of tableaux the pivot element is boxed in the original notes.

The initial tableau T0, with columns s_1, s_2, s_3, x_1, x_2, x_3 and the right-hand-side column, is for the BFS with s_1, s_2, s_3 basic (s_3 = 8) and x_1 = x_2 = x_3 = 0. Its z-row represents the equation z = c_1 x_1 + 4 x_2 + c_3 x_3 with z_0 = 0. Alternatively the scalar-product formula z_j = c_B^T y_j may be used: in the case of x_1, z_1 − c_1 = (0, 0, 0) · y_1 − c_1 = −c_1.

Dantzig's rule, modified for the maximization problem, picks the most negative z_q − c_q, which fixes our choice of pivot column: we add x_1 to the basic variables. (Pivoting in any nonbasic variable x_j with a strictly negative value of z_j − c_j would, however, also lead to an increase in z.) The minimum ratio rule compares the three ratios y_{i0}/y_{i1}, the second of which has denominator 4 and the third numerator 8; the minimum occurs in row 1, therefore s_1 leaves the basis. A pivot step, represented by row operations on rows 0 to 3, leads to the new BFS shown in tableau T1, whose basic variables are (in row order) x_1, s_2 and s_3, with x_2 = x_3 = s_1 = 0 and a strictly larger objective value z_0 (monotonic increase is guaranteed). Notice that the change in z-value is ε_0 (c_q − z_q), and that c_B^T y_0 = z_0, as a check.

Dantzig's rule applied to T1 shows that it is profitable to include x_3 in the basis, and the minimum ratio rule (taken over the two rows with positive entries in the x_3 column) shows that s_3 leaves the basis. Pivoting again gives tableau T2 with basic variables x_1, s_2 and x_3.

Tableau T2 has all z_j − c_j ≥ 0 (i.e. c − z ≤ 0) and so satisfies the optimality criterion for a maximization. The optimal LP solution in the problem's original variables therefore has x_2 = 0 while x_1 and x_3 are basic and positive (the optimal values themselves were lost in transcription). The optimal value at this vertex is z_0 = c_B^T y_0, and the change in z-value at the final pivot is again ε_0 (c_q − z_q). We may apply the scalar-product formula to verify all the z_j − c_j values: e.g. for the variable x_2 we obtain z_2 − c_2 = (c_1, 0, c_3) · y_2 − 4, which checks the z-row entries obtained by pivoting.
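A complete tableau iteration for a maximization problem in "≤" form can be assembled from these pieces. The sketch below follows the notes' convention (bottom row holds z_j − c_j, pivot on the most negative entry); it is not the lecture's example, whose coefficients were lost, and the data at the end are invented purely to exercise the routine.

```python
import numpy as np

def simplex_max(A, b, c, tol=1e-9):
    """Tableau simplex for: maximize c^T x  s.t.  A x <= b, x >= 0, b >= 0.

    Builds the slack tableau, keeps z_j - c_j in the bottom row (initially
    -c_j), and pivots on the most negative entry until all z_j - c_j >= 0.
    Returns (x, z).
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = -np.asarray(c, dtype=float)
    basis = list(range(n, n + m))                  # the slacks start basic
    while True:
        q = int(np.argmin(T[-1, :-1]))
        if T[-1, q] >= -tol:                       # optimality: all z_j - c_j >= 0
            break
        rows = np.where(T[:m, q] > tol)[0]
        if rows.size == 0:
            raise ValueError("objective unbounded")
        p = int(rows[np.argmin(T[rows, -1] / T[rows, q])])   # minimum ratio rule
        T[p, :] /= T[p, q]
        for i in range(m + 1):
            if i != p:
                T[i, :] -= T[i, q] * T[p, :]
        basis[p] = q
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# Invented data (NOT the lecture's example, whose coefficients were lost).
A = np.array([[1.0, 1.0, 1.0],
              [4.0, 1.0, 2.0],
              [3.0, 4.0, 2.0]])
b = np.array([5.0, 11.0, 8.0])
c = np.array([3.0, 4.0, 5.0])
print(simplex_max(A, b, c))   # -> (array([0., 0., 4.]), 20.0)
```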

3.5 Reduced tableau iterations

A faster but equivalent representation omits the columns (of I_m) corresponding to the basic variables x_B. At each pivot we exchange the labels of the pivot column and pivot row and apply a new rule for transforming the pivot column:

    y'_{pq} = 1 / y_{pq}
    y'_{iq} = − y_{iq} / y_{pq}    (i ≠ p)

The remaining tableau elements transform in the same way as in the full tableau:

    y'_{pj} = y_{pj} / y_{pq}                     (j ≠ q)
    y'_{ij} = y_{ij} − y_{iq} y_{pj} / y_{pq}     (i ≠ p, j ≠ q)

For the previous example, the reduced tableau iterations are as follows. The reduced T0 has columns x_1, x_2, x_3 and rows s_1, s_2, s_3 and z. At the first pivot the pivot element is replaced by its reciprocal, the rest of the pivot column is divided by the pivot element and changed in sign, and the other tableau elements transform as before, giving the reduced T1 with columns labelled by the nonbasic variables s_1, x_2, x_3 and rows x_1, s_2, s_3 and z. One more iteration gives the reduced form of the optimal tableau obtained before (columns s_1, x_2, s_3; rows x_1, s_2, x_3 and z), i.e. the earlier optimal tableau with the columns corresponding to the basic variables omitted.
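The reduced-tableau transformation rules can be written as a single array update. A minimal sketch (names are my own), with an invented 2 x 3 reduced tableau as a check:

```python
import numpy as np

def reduced_pivot(T, p, q):
    """One pivot of a reduced tableau (basic-variable columns omitted).

    Rules of Section 3.5:  pivot element -> its reciprocal; rest of the
    pivot column -> divided by the pivot element and changed in sign;
    rest of the pivot row -> divided by the pivot element; every other
    element y_ij -> y_ij - y_iq * y_pj / y_pq.  The row and column labels
    p and q are exchanged afterwards (not shown here).
    """
    T = np.asarray(T, dtype=float)
    piv = T[p, q]
    R = T - np.outer(T[:, q], T[p, :]) / piv      # generic elements
    R[p, :] = T[p, :] / piv                       # pivot row
    R[:, q] = -T[:, q] / piv                      # pivot column, sign changed
    R[p, q] = 1.0 / piv                           # pivot element
    return R

# Check on an invented 2 x 3 reduced tableau, pivoting on T[0, 1].
T = np.array([[2.0, 4.0, 8.0],
              [1.0, 1.0, 3.0]])
print(reduced_pivot(T, 0, 1))
```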

3.6 Obtaining an initial tableau (artificial variables)

How do we obtain a starting tableau which is diagonalized with respect to the basic variables? If the constraints happen to be of the form Ax ≤ b with b ≥ 0 then, as we have seen, an initial basis of slack variables is available. For the general case, suppose the constraints are in standard form Ax = b with b ≥ 0. We may use a two-phase method to obtain an initial BFS. To the i-th constraint we add the term +R_i, representing an artificial variable R_i to which we attach a unit cost. The augmented tableau (without the z-row) is then

    [ a_{11} a_{12} ... a_{1n}  1 0 ... 0 | b_1 ]
    [ a_{21} a_{22} ... a_{2n}  0 1 ... 0 | b_2 ]
    [            ...                      | ... ]
    [ a_{m1}   ...      a_{mn}  0 0 ... 1 | b_m ]    (20)

which represents the BFS R = b. In Phase I we minimize the cost function ξ = Σ_{i=1}^m R_i subject to (20). There are three possible outcomes.

Case 1. We obtain min ξ = 0 and all the R_i are driven out of the basis. We now have a BFS for the original problem.

Case 2. We obtain min ξ > 0. There is no feasible solution to the original problem. (Otherwise a BFS of the original problem would also be a BFS of (20) with value ξ = 0.)

Case 3. We obtain min ξ = 0 but some artificial variables remain in the basis at value zero. In this case we may continue with non-simplex pivoting to drive all artificial variables out of the basis, i.e. reducing to Case 1.

Assuming Case 1 applies, we delete all artificial variables and proceed in Phase II to solve the original problem. Either we need to have carried a z-row for the original problem through the Phase I pivots, or we recompute the z_j − c_j using the scalar-product formula.
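Setting up Phase I is mechanical: append one artificial column per row and cost the artificials at one. The sketch below (names and example data are assumptions) only builds the Phase I data; a tableau simplex minimization over these data would then implement Phase I itself.

```python
import numpy as np

def phase_one_setup(A, b):
    """Build the Phase I data for constraints A x = b, x >= 0, with b >= 0.

    One artificial variable R_i (unit cost) is attached to each row, giving
    the augmented system (20); Phase I minimizes xi = R_1 + ... + R_m.
    Returns (A1, c1, basis): the augmented matrix [A | I], the Phase I costs
    and the starting basis consisting of the artificial columns (R = b).
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    A1 = np.hstack([A, np.eye(m)])                  # artificial columns on the right
    c1 = np.concatenate([np.zeros(n), np.ones(m)])  # original variables cost 0, R_i cost 1
    basis = list(range(n, n + m))                   # initial BFS: R = b
    # Outcomes after minimizing xi:
    #   xi = 0, no R_i basic          -> Case 1: BFS of the original problem
    #   xi > 0                        -> Case 2: original problem infeasible
    #   xi = 0, some R_i basic at 0   -> Case 3: pivot the remaining R_i out
    return A1, c1, basis

# Hypothetical 2 x 3 instance.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([4.0, 5.0])
print(phase_one_setup(A, b))
```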

Example. Minimize z = 4x_1 + c_2 x_2 subject to an equality constraint in x_1 and x_2 with right-hand side 3, the constraint 4x_1 + a x_2 ≥ 6, the constraint with right-hand side 4 of "≤" form, and x_1, x_2 ≥ 0 (the cost c_2 and the remaining coefficients were lost in transcription). Insert surplus and slack variables s_1, s_2 in constraints 2 and 3, then add artificial variables R_1, R_2 to constraints 1 and 2, so that the second constraint becomes 4x_1 + a x_2 − s_1 + R_2 = 6 and the third gains +s_2 on its left-hand side.

An initial basis is given by R_1 = 3, R_2 = 6, s_2 = 4, x_1 = x_2 = s_1 = 0. It is unnecessary to include an artificial variable R_3 in this example, because constraint 3 has the form a_i^T x ≤ b_i, which allows s_2 to be the third basic variable. The c_B column of the Phase I tableau contains the unit costs of R_i (i = 1, 2) and zero for s_2; the basic values y_0 are initially set to b. The initial Phase I objective value is ξ = R_1 + R_2 = 9.

Phase I. The tableau carries the Phase I costs c_j = 0 for x_1, x_2, s_1; its rows are R_1, R_2 (with c_B = 1) and s_2 (with c_B = 0), and its bottom row is (7, 4, −1 | 9). It is easy to verify that this bottom row represents

    ξ + 7 x_1 + 4 x_2 − s_1 = 9,

the equation giving ξ = R_1 + R_2 in terms of the non-basic variables. Two iterations result in an optimal tableau for Phase I with min ξ = 0 (Case 1): in the first, x_1 enters the basis and R_1 leaves; in the second, x_2 enters and R_2 leaves, giving the basis {x_1, x_2, s_2}.

Phase II. Delete the columns corresponding to R_1, R_2 and recompute the new bottom row from z = 4x_1 + c_2 x_2. The reduced Phase II tableau then has the single nonbasic column s_1, and a further pivot reduces z and yields the optimal tableau. (The numerical entries of the Phase I and Phase II tableaux did not survive transcription.)

3.7 Alternative rules for pivot selection

The most common rule, and the easiest to implement, for selecting the pivot column is (for minimization) by the most negative reduced cost c̄_j < 0. We may regard c̄_j as the derivative of the cost with respect to distance in the space of the nonbasic variables, so choosing the most negative c̄_j is a form of steepest-descent policy. The total increment in cost as a result of one pivot is, however, ε_0 c̄_j, where ε_0 is determined by the minimum ratio rule. Another rule for choosing the pivot column is to choose the column giving the largest decrease in cost; this may be termed the greatest-increment rule. A unit increase in x_j increments the entire solution vector x by the amount

    Δx_k = 1          if k = j
    Δx_k = −y_{ij}    if k = B(i), i = 1, ..., m
    Δx_k = 0          otherwise

where k = B(i) indicates that x_k is the i-th basic variable. The derivative (rate of change) of the cost with respect to Euclidean distance in the space of all variables is therefore

    c̄_j / √(1 + Σ_{i=1}^m y_{ij}²)

Use of this criterion leads to a pivot selection rule known as the all-variable gradient selection rule. No selection rule has been conclusively shown to be superior.

3.8 Cycling

Since there are only finitely many BFSs, the simplex algorithm either terminates in a finite number of iterations or it must cycle, i.e. loop repeatedly through the same sequence of BFSs of the same value. Cycling is only possible if the problem is degenerate (otherwise z decreases strictly monotonically). Cycling occurs rarely, but can be prevented by Bland's rule: choose the pivot column by

    q = min { j : z_j − c_j > 0 }

and the pivot row by

    p = min { i : y_{iq} > 0  and  y_{i0}/y_{iq} = min_{k : y_{kq} > 0} y_{k0}/y_{kq} },

i.e. of all valid pivot columns, choose the one with the lowest index, and of all tied valid pivot rows, choose the one with the lowest index. (Proof omitted.)
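Bland's rule is a one-line change to the pivot selection. A sketch under the minimization convention of these notes (z_j − c_j > 0 marks an eligible column), with the same assumed tableau layout as in the earlier sketches:

```python
import numpy as np

def blands_rule(T, tol=1e-9):
    """Bland's anti-cycling pivot choice, minimization convention of these notes.

    T[-1, :-1] holds z_j - c_j and T[:-1, -1] holds y_0 (layout as before).
    Column: the lowest index j with z_j - c_j > 0.
    Row: among the rows attaining the minimum ratio, the lowest index,
    as stated in the rule above.  Returns (p, q), or (None, None) at optimality.
    """
    T = np.asarray(T, dtype=float)
    zc = T[-1, :-1]
    eligible = np.where(zc > tol)[0]
    if eligible.size == 0:
        return None, None                     # optimality criterion (19)
    q = int(eligible[0])                      # lowest eligible column index
    col, rhs = T[:-1, q], T[:-1, -1]
    rows = np.where(col > tol)[0]             # assumes the problem is not unbounded here
    p = int(rows[np.argmin(rhs[rows] / col[rows])])   # np.argmin keeps the first (lowest) tie
    return p, q
```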