Part 1. The Review of Linear Programming

In the name of God
Part 1. The Review of Linear Programming 1.2
Spring 2010
Instructor: Dr. Masoud Yaghini

Outline
Introduction
Basic Feasible Solutions
Key to the Simplex Method
Algebra of the Simplex Method
Termination: Optimality and Unboundedness
The Simplex Algorithm
The Simplex Method in Tableau Format
References

Introduction

The simplex method is an algebraic procedure for solving linear programming problems; its underlying concepts are geometric. It was developed by George Dantzig in 1947 and has proved to be a remarkably efficient method that is used routinely to solve huge problems on today's computers.

Basic Feasible Solutions

Basic Feasible Solutions
Consider the system Ax = b and x ≥ 0, where A is an m × n matrix and b is an m-vector. Suppose that rank(A, b) = rank(A) = m. Let A = [B, N], where B is an m × m invertible matrix and N is an m × (n − m) matrix. Consider the solution to the equations Ax = b given by x_B = B⁻¹b and x_N = 0.

Basic Feasible Solutions
This solution is called a basic solution.
If x_B ≥ 0, then x is called a basic feasible solution.
B is the basic matrix (or simply the basis), and N is the nonbasic matrix.
x_B are the basic variables and x_N are the nonbasic variables.
If x_B > 0, then x is called a nondegenerate basic feasible solution.
If at least one component of x_B is zero, then x is called a degenerate basic feasible solution.
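These definitions suggest a direct numerical check: try every m-column subset of A as a candidate basis B, keep the invertible ones, and accept those with x_B = B⁻¹b ≥ 0. The sketch below does exactly that; the function name and the sample system are illustrative assumptions (the transcript omits the lecture's own inequalities, so the data shown is a guess consistent with the counts reported in the example that follows).

```python
from itertools import combinations

import numpy as np

def enumerate_basic_feasible_solutions(A, b, tol=1e-10):
    """Enumerate all basic feasible solutions of Ax = b, x >= 0."""
    m, n = A.shape
    solutions = []
    for cols in combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < tol:
            continue                    # columns linearly dependent: not a basis
        x_B = np.linalg.solve(B, b)     # basic variables x_B = B^{-1} b
        if np.all(x_B >= -tol):         # feasibility: x_B >= 0 (nonbasic vars are 0)
            x = np.zeros(n)
            x[list(cols)] = x_B
            solutions.append(x)
    return solutions

# Illustrative system (an assumption): x1 + x2 <= 6, x2 <= 3, slacks x3, x4.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])
bfs = enumerate_basic_feasible_solutions(A, b)
```

For this data the enumeration finds four basic feasible solutions, with one candidate basis rejected as infeasible and one rejected as singular, mirroring the counts in the example below.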

Example Consider the polyhedral set defined by the following inequalities

Example
By introducing the slack variables x_3 and x_4, the problem is put in the following standard form. Note that the constraint matrix is:

Example
The possible basic solutions are:

Example
We have four basic feasible solutions. These basic feasible solutions, in the (x_1, x_2) space, give rise to the following four points, which are precisely the extreme points of the feasible region.

Basic Feasible Solutions
In this example, the possible number of basic feasible solutions is bounded by the number of ways of extracting two columns out of four columns to form the basis. Therefore the number of basic feasible solutions is less than or equal to C(4, 2) = 6. Out of these six possibilities, one point violates the nonnegativity of B⁻¹b. Furthermore, a_1 and a_3 could not have been used to form a basis, since a_1 and a_3 are linearly dependent, and hence the matrix [a_1, a_3] does not qualify as a basis.

Basic Feasible Solutions
In general, the number of basic feasible solutions is less than or equal to the number of ways of choosing m basis columns from the n columns of A:

C(n, m) = n! / (m! (n − m)!)

Example: Degenerate Basic Feasible Solutions Consider the following system of inequalities: The third restriction is redundant.

Example: Degenerate Basic Feasible Solutions
After adding the slack variables, we get the system in standard form. Note that the constraint matrix is:

Example: Degenerate Basic Feasible Solutions
Let us consider the basic feasible solution with B = [a_1, a_2, a_3]. Note that this basic feasible solution is degenerate, since the basic variable x_3 = 0.

Example: Degenerate Basic Feasible Solutions
Now consider the basic feasible solution with B = [a_1, a_2, a_4]. This basic feasible solution gives rise to the same point.

Example: Degenerate Basic Feasible Solutions
It can also be checked that the basic feasible solution with basis B = [a_1, a_2, a_5] gives the same point. All three of the foregoing basic feasible solutions, with different bases, are represented by the single extreme point (x_1, x_2, x_3, x_4, x_5) = (3, 3, 0, 0, 0). Each of the three basic feasible solutions is degenerate, since each contains a basic variable at level zero.
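The degeneracy just described is easy to test numerically: solve B x_B = b for each candidate basis and look for basic variables at level zero. In the sketch below, the inequalities x_1 + x_2 ≤ 6, x_2 ≤ 3, x_1 + 2x_2 ≤ 9 (the third redundant) are an assumption standing in for the system the transcript omits.

```python
import numpy as np

def basic_solution(A, b, basis):
    """Solve B x_B = b for the basis given as a list of column indices."""
    return np.linalg.solve(A[:, basis], b)

# Assumed data: x1 + x2 <= 6, x2 <= 3, x1 + 2x2 <= 9, slacks x3, x4, x5.
A = np.array([[1.0, 1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([6.0, 3.0, 9.0])

x_B = basic_solution(A, b, [0, 1, 2])      # basis B = [a1, a2, a3]
x_B_alt = basic_solution(A, b, [0, 1, 3])  # basis B = [a1, a2, a4]
degenerate = bool(np.any(np.isclose(x_B, 0.0)))  # basic variable at level zero
```

Both bases place (x_1, x_2) at the same extreme point, each with one basic variable at zero, matching the discussion above.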

Key to the Simplex Method

Key to the Simplex Method
Since the number of basic feasible solutions is bounded by C(n, m), one may think of simply listing all basic feasible solutions and picking the one with the minimal objective value. This is not satisfactory, for a number of reasons.

Key to the Simplex Method
The reasons:
Firstly, the number of basic feasible solutions is large, even for moderate values of m and n.
Secondly, this approach does not tell us whether the problem has an unbounded solution, which may occur if the feasible region is unbounded.
Lastly, if the feasible region is empty, we shall discover this only after all possible ways of extracting m columns out of the n columns of the matrix A fail to produce a basic feasible solution, either because B does not have an inverse or else because B⁻¹b violates the nonnegativity condition x_B ≥ 0.
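The first reason is easy to quantify: the bound C(n, m) on the number of candidate bases explodes even for modest problem sizes. A quick check:

```python
from math import comb

small = comb(4, 2)    # the 2 x 4 example above: at most 6 candidate bases
large = comb(50, 25)  # a 25 x 50 system: on the order of 1e14 candidate bases
```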

Key to the Simplex Method
The key to the simplex method lies in recognizing the optimality of a given basic feasible solution (extreme point solution) based on local considerations, without having to (globally) enumerate all basic feasible solutions.

Key to the Simplex Method
Consider the linear programming problem: Minimize c x subject to Ax = b, x ≥ 0, where A is an m × n matrix. Suppose that we have a basic feasible solution (B⁻¹b, 0) whose objective value z_0 is given by z_0 = c_B B⁻¹b.

Key to the Simplex Method
Let x_B and x_N denote the sets of basic and nonbasic variables for the given basis. Then feasibility requires that x_B ≥ 0, x_N ≥ 0, and that b = Ax = B x_B + N x_N. Multiplying the last equation by B⁻¹ and rearranging the terms:

x_B = B⁻¹b − Σ_{j ∈ R} y_j x_j = b̄ − Σ_{j ∈ R} y_j x_j

where y_j = B⁻¹ a_j, b̄ = B⁻¹b, and R is the current set of the indices of the nonbasic variables.

Key to the Simplex Method
Letting z denote the objective function at x, we get

z = c_B x_B + c_N x_N = z_0 − Σ_{j ∈ R} (z_j − c_j) x_j

where z_j = c_B B⁻¹ a_j for each nonbasic variable.

Key to the Simplex Method
The LP problem may be rewritten as:

Minimize z = z_0 − Σ_{j ∈ R} (z_j − c_j) x_j
subject to Σ_{j ∈ R} y_j x_j + x_B = b̄
x_j ≥ 0 for j ∈ R, and x_B ≥ 0

Key to the Simplex Method
Since we are to minimize z, whenever z_j − c_j > 0 it would be to our advantage to increase x_j (from its current level of zero). If (z_j − c_j) ≤ 0 for all j ∈ R, then the current feasible solution is optimal: z ≥ z_0 for any feasible solution, while for the current (basic) feasible solution z = z_0, since x_j = 0 for all j ∈ R.
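The optimality test above is just a reduced-cost computation. A minimal sketch follows; the helper name and the sample data are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def reduced_costs(A, c, basis, nonbasis):
    """z_j - c_j = c_B B^{-1} a_j - c_j for each nonbasic index j."""
    B = A[:, basis]
    w = np.linalg.solve(B.T, c[basis])   # simplex multipliers w = c_B B^{-1}
    return w @ A[:, nonbasis] - c[nonbasis]

# Illustrative problem: min -x1 - 3x2 over x1 + x2 + x3 = 6, x2 + x4 = 3,
# x >= 0, priced at the all-slack basis [a3, a4].
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
c = np.array([-1.0, -3.0, 0.0, 0.0])
zc = reduced_costs(A, c, basis=[2, 3], nonbasis=[0, 1])
```

At the all-slack basis both reduced costs are positive, so either x_1 or x_2 is eligible to enter and improve the objective.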

Algebra of the Simplex Method

Algebra of the Simplex Method
If (z_j − c_j) ≤ 0 for all j ∈ R, then x_j = 0 for j ∈ R and x_B = b̄ is optimal for the LP. If z_k − c_k > 0, and it is the most positive of all z_j − c_j, it would be to our benefit to increase x_k as much as possible. As x_k is increased, the current basic variables must be modified according to x_B = b̄ − y_k x_k.

Algebra of the If y ik < 0, then x Bi increases as x k increases and so x Bi continues to be nonnegative. If y ik = 0, then x Bi is not changed as x k increases and so x Bi continues to be nonnegative. If y ik > 0, then x Bi will decrease as x k increases. ik Bi k In order to satisfy nonnegativity, x k is increased until the first point at which a basic variable x Br drops to zero. It is then clear that the first basic variable dropping to zero corresponds to the minimum of /y ik for y ik > 0. b i

Algebra of the Simplex Method
More precisely:

b̄_r / y_rk = minimum of { b̄_i / y_ik : y_ik > 0, 1 ≤ i ≤ m }

In the absence of degeneracy, b̄_r > 0 and x_k = b̄_r / y_rk > 0. As x_k increases from level 0 to b̄_r / y_rk, a new feasible solution is obtained.
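The minimum-ratio rule can be sketched directly (a hypothetical helper, not from the text): among rows with y_ik > 0, pick the index attaining the smallest b̄_i / y_ik; if no component of y_k is positive, there is no blocking variable.

```python
def min_ratio_test(b_bar, y_k, tol=1e-10):
    """Return the row index r of the blocking variable, i.e. the argmin of
    b_bar[i] / y_k[i] over rows with y_k[i] > 0. Returns None when no
    component of y_k is positive (an unbounded direction)."""
    ratios = [(b_bar[i] / y_k[i], i) for i in range(len(y_k)) if y_k[i] > tol]
    return min(ratios)[1] if ratios else None
```

For instance, min_ratio_test([6.0, 3.0], [1.0, 1.0]) picks row 1, where the ratio 3 is smallest.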

Algebra of the Simplex Method
Substituting x_k = b̄_r / y_rk into x_Bi = b̄_i − y_ik x_k gives the new solution: x_Bi = b̄_i − (y_ik / y_rk) b̄_r for i = 1, ..., m, x_k = b̄_r / y_rk, and all other nonbasic variables remain zero.

Algebra of the Simplex Method
The columns corresponding to the positive variables of the new solution are a_B1, a_B2, ..., a_B(r−1), a_k, a_B(r+1), ..., a_Bm. If a_B1, a_B2, ..., a_Bm are linearly independent, and if a_k replaces a_Br, then the new columns are linearly independent if and only if y_rk ≠ 0.

Example
Introduce the slack variables x_3 and x_4 to put the problem in standard form. This leads to the following constraint matrix A:

Example
Consider the basic feasible solution corresponding to B = [a_1, a_2]. In other words, x_1 and x_2 are the basic variables, while x_3 and x_4 are the nonbasic variables. First compute B⁻¹, and hence x_B = B⁻¹b.

Example
The required representation of the problem is:

Example
Since z_3 − c_3 > 0, the objective improves by increasing x_3. The modified solution is given by x_B = b̄ − y_3 x_3. The maximum value of x_3 is 2 (any larger value of x_3 will force x_1 to be negative). Therefore the new basic feasible solution is obtained with x_3 entering the basis and x_1 leaving the basis. Note that the new point has an objective value equal to 1, which is an improvement over the previous objective value of 3. The improvement is precisely (z_3 − c_3) x_3 = 2.

Example
The figure shows the feasible region of the problem in both the original (x_1, x_2) space and the current (x_3, x_4) space.

Interpretation of Entering the Basis
Recall that

z_k = c_B B⁻¹ a_k = c_B y_k = Σ_{i=1}^{m} c_Bi y_ik

where c_Bi is the cost of the i-th basic variable. If x_k is raised from zero level, while the other nonbasic variables are kept at zero level, then the basic variables x_B1, x_B2, ..., x_Bm must be modified according to x_Bi = b̄_i − y_ik x_k.

Interpretation of Entering the Basis
If x_k is increased by 1 unit:
if y_ik > 0, the i-th basic variable will be decreased by y_ik;
if y_ik < 0, the i-th basic variable will be increased by |y_ik|;
if y_ik = 0, the i-th basic variable will not be changed.
Thus z_k = Σ_i c_Bi y_ik is the cost saving that results from the modification of the basic variables when x_k is increased by 1 unit, while c_k is the cost of increasing x_k itself by 1 unit. Hence z_k − c_k is the saving minus the cost of increasing x_k by 1 unit.

Interpretation of Entering the Basis
If z_k − c_k > 0, it will be to our advantage to increase x_k: for each unit of x_k, the cost will be reduced by an amount z_k − c_k, and hence it will be to our advantage to increase x_k as much as possible.
If z_k − c_k < 0, then by increasing x_k the net saving is negative, and this action will result in a larger cost. So this action is prohibited.
If z_k − c_k = 0, then increasing x_k will lead to a different solution with the same cost.

Interpretation of Leaving the Basis
Suppose that we decide to increase a nonbasic variable x_k with a positive z_k − c_k; the larger the value of x_k, the smaller the objective z. As x_k is increased, the basic variables are modified according to x_B = b̄ − y_k x_k. If the vector y_k has any positive component(s), then the corresponding basic variable(s) decrease as x_k is increased.

Interpretation of Leaving the Basis
Therefore the nonbasic variable x_k cannot be increased indefinitely, because otherwise the nonnegativity of the basic variables would be violated. The first basic variable x_Br that drops to zero is called the blocking variable, because it blocks further increase of x_k. Thus x_k enters the basis and x_Br leaves the basis.

Termination: Optimality and Unboundedness

Termination: Optimality and Unboundedness
Different cases of termination for the simplex method:
Termination with an Optimal Solution
Unique and Alternative Optimal Solutions
Unboundedness

Termination with an Optimal Solution
Suppose that x* is a basic feasible solution with basis B, and let z* denote the objective value of x*. Suppose z_j − c_j < 0 for all nonbasic variables. In this case no nonbasic variable is eligible for entering the basis. From the equation z = z* − Σ_{j ∈ R} (z_j − c_j) x_j, it follows that x* is the unique optimal basic feasible solution.

Unique and Alternative Optimal Solutions
Consider the case where z_j − c_j ≤ 0 for all nonbasic variables, but z_k − c_k = 0 for at least one nonbasic variable x_k. If x_k is increased, we get (in the absence of degeneracy) points that are different from x* but have the same objective value. If x_k is increased until it is blocked by a basic variable, we get an alternative optimal basic feasible solution. The process of increasing x_k from level zero until it is blocked generates an infinite number of alternative optimal solutions.

Unboundedness
Suppose that we have a basic feasible solution of the system Ax = b, x ≥ 0, with objective value z_0. Let us consider the case when we find a nonbasic variable x_k with z_k − c_k > 0 and y_k ≤ 0. This variable is eligible to enter the basis, since increasing it will improve the objective function. Because y_k ≤ 0, the basic variables x_B = b̄ − y_k x_k remain nonnegative no matter how large x_k becomes. It is therefore to our benefit to increase x_k indefinitely, which makes z = z_0 − (z_k − c_k) x_k go to −∞.

The Simplex Algorithm

The Simplex Algorithm
Given a basic feasible solution, we can either improve it if z_k − c_k > 0 for some nonbasic variable x_k, or stop with an optimal point if z_j − c_j ≤ 0 for all nonbasic variables. If z_k − c_k > 0 and the vector y_k contains at least one positive component, then the increase in x_k will be blocked by one of the current basic variables, which drops to zero and leaves the basis. On the other hand, if z_k − c_k > 0 and y_k ≤ 0, then x_k can be increased indefinitely, and the optimal objective value is unbounded (−∞).

The Simplex Algorithm
We now give a summary of the simplex method for solving the linear programming problem: Minimize c x subject to Ax = b, x ≥ 0, where A is an m × n matrix with rank m.
Initialization Step of the Simplex Algorithm: Choose a starting basic feasible solution with basis B.

The Simplex Algorithm: Main Step
Step 1: Solve the system B x_B = b, with unique solution x_B = b̄ = B⁻¹b. Let x_N = 0 and z = c_B x_B.

The Simplex Algorithm: Main Step
Step 2: Solve the system w B = c_B, with unique solution w = c_B B⁻¹. The vector w is the vector of simplex multipliers. Calculate z_j − c_j = w a_j − c_j for all nonbasic variables, and let

z_k − c_k = maximum of z_j − c_j over j ∈ R

where R is the current set of indices associated with the nonbasic variables. This is known as the pricing operation. If z_k − c_k ≤ 0, then stop with the current basic feasible solution as an optimal solution. Otherwise go to Step 3 with x_k as the entering variable.

The Simplex Algorithm: Main Step
Step 3: Solve the system B y_k = a_k, with unique solution y_k = B⁻¹ a_k. If y_k ≤ 0, then stop with the conclusion that the optimal objective value is unbounded. Otherwise go to Step 4.

The Simplex Algorithm: Main Step
Step 4: Let x_k enter the basis and let the blocking variable x_Br leave the basis, where the index r is determined by the following minimum ratio test:

b̄_r / y_rk = minimum of { b̄_i / y_ik : y_ik > 0, 1 ≤ i ≤ m }

Update the basis B (a_k replaces a_Br) and the index set R, and go to Step 1.
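Steps 1 through 4 can be assembled into a compact revised-simplex sketch. This is an illustrative implementation under simplifying assumptions (a feasible starting basis is supplied, rank(A) = m, and there is no anti-cycling rule, so termination is only guaranteed on nondegenerate problems); the sample data at the end is likewise an assumption, not from the lecture.

```python
import numpy as np

def simplex(A, b, c, basis, tol=1e-9):
    """Revised-simplex sketch for min c@x s.t. Ax = b, x >= 0,
    starting from a given feasible basis (list of m column indices)."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)             # Step 1: B x_B = b
        w = np.linalg.solve(B.T, c[basis])      # Step 2: w B = c_B (pricing)
        nonbasis = [j for j in range(n) if j not in basis]
        zc = np.array([w @ A[:, j] - c[j] for j in nonbasis])
        k_idx = int(np.argmax(zc))
        if zc[k_idx] <= tol:                    # all z_j - c_j <= 0: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, float(c @ x)
        k = nonbasis[k_idx]
        y_k = np.linalg.solve(B, A[:, k])       # Step 3: B y_k = a_k
        if np.all(y_k <= tol):
            return None, -np.inf                # unbounded below
        # Step 4: minimum ratio test picks the leaving variable x_Br.
        ratios = [(x_B[i] / y_k[i], i) for i in range(m) if y_k[i] > tol]
        r = min(ratios)[1]
        basis[r] = k                            # a_k replaces a_Br

# Illustrative run: min -x1 - 3x2 over x1 + x2 <= 6, x2 <= 3 (slacks x3, x4).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])
c = np.array([-1.0, -3.0, 0.0, 0.0])
x_opt, z_opt = simplex(A, b, c, [2, 3])
```

On this small problem the loop reaches the optimal basis [a_1, a_2], i.e. (x_1, x_2) = (3, 3).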

Modification for a Maximization Problem
A maximization problem can be transformed into a minimization problem by multiplying the objective coefficients by −1. A maximization problem can also be handled directly as follows: let z_k − c_k instead be the minimum of z_j − c_j over nonbasic j; the stopping criterion is then z_k − c_k ≥ 0. Otherwise, the steps are as above.

The Simplex Method in Tableau Format

The Simplex Method in Tableau Format
Suppose that we have a starting basic feasible solution x with basis B. The linear programming problem can be represented as follows:

Minimize z
subject to z − c_B x_B − c_N x_N = 0 (1)
B x_B + N x_N = b (2)
x_B, x_N ≥ 0

The Simplex Method in Tableau Format
From Equation (2) we have:

x_B + B⁻¹N x_N = B⁻¹b (3)

Multiplying (3) by c_B and adding to Equation (1), we get:

z + 0 x_B + (c_B B⁻¹N − c_N) x_N = c_B B⁻¹b (4)

Currently x_N = 0, and from Equations (3) and (4) we get x_B = B⁻¹b and z = c_B B⁻¹b. Also, from (3) and (4) we can conveniently represent the current basic feasible solution in the following tableau.

The Simplex Method in Tableau Format
Here we think of z as a (basic) variable to be minimized. The objective row will be referred to as row 0, and the remaining rows are rows 1 through m. The right-hand-side column (RHS) gives the values of the basic variables (including the objective function). The basic variables are identified in the far-left column. The cost row gives us c_B B⁻¹N − c_N, which consists of the z_j − c_j for the nonbasic variables. So row 0 tells us whether we are at an optimal solution (that is, whether each z_j − c_j ≤ 0), and, if not, which nonbasic variable to increase.

Pivoting
If x_k enters the basis and x_Br leaves the basis, then pivoting on y_rk can be stated as follows:
1. Divide row r by y_rk.
2. For i = 1, 2, ..., m with i ≠ r, update the i-th row by adding to it −y_ik times the new r-th row.
3. Update row 0 by adding to it c_k − z_k times the new r-th row.
The two tableaux represent the situation immediately before and after pivoting.

Before and after pivoting
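The three pivoting rules collapse into one elementary row operation per row, since for row 0 the entry in column k is exactly z_k − c_k. A sketch on a hypothetical tableau (the data is illustrative, not from the text):

```python
import numpy as np

def pivot(T, r, k):
    """Pivot tableau T in place on constraint row r (1..m) and column k.
    Row 0 is the objective row holding the z_j - c_j values."""
    T[r] = T[r] / T[r, k]                 # rule 1: divide row r by y_rk
    for i in range(T.shape[0]):
        if i != r:
            T[i] = T[i] - T[i, k] * T[r]  # rules 2 and 3 in one stroke:
    return T                              # subtract y_ik (or z_k - c_k) times new row r

# Hypothetical tableau (columns x1..x4, then RHS) for min -x1 - 3x2
# s.t. x1 + x2 + x3 = 6, x2 + x4 = 3, at the all-slack basis:
T = np.array([[1.0, 3.0, 0.0, 0.0, 0.0],   # row 0: z_j - c_j | objective
              [1.0, 1.0, 1.0, 0.0, 6.0],   # x3 row
              [0.0, 1.0, 0.0, 1.0, 3.0]])  # x4 row
pivot(T, r=2, k=1)                          # x2 enters, x4 leaves
```

After the pivot, column x_2 becomes a unit column, and row 0 shows the updated z_j − c_j values and objective.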

The Simplex Method in Tableau Format
Let us examine the implications of the pivoting operation:
1. The variable x_k entered the basis and x_Br left the basis. This is illustrated in the far-left column of the tableau by replacing x_Br with x_k. For the purpose of the following iteration, the new x_Br is now x_k.
2. The right-hand side of the tableau gives the current values of the basic variables. The nonbasic variables are kept at zero.
3. Pivoting results in a new tableau that gives the new B⁻¹N under the nonbasic variables, an updated set of z_j − c_j for the new nonbasic variables, and the values of the new basic variables and objective function.

Example: The Simplex Method in Tableau Format
Introduce the nonnegative slack variables x_4, x_5, and x_6.

The Simplex Method in Tableau Format
Since b ≥ 0, we can choose our initial basis as B = [a_4, a_5, a_6] = I_3, and we indeed have B⁻¹b = b ≥ 0. This gives the following initial tableau.

The Simplex Method in Tableau Format
This is the optimal tableau, since z_j − c_j ≤ 0 for all nonbasic variables. The optimal solution is given by:

The Simplex Method in Tableau Format
Note that the current optimal basis consists of the columns a_1, a_5, and a_3, namely:

References

References M.S. Bazaraa, J.J. Jarvis, H.D. Sherali, Linear Programming and Network Flows, Wiley, 1990. (Chapter 3)

The End