Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method


The Simplex Method
Yinyu Ye, Department of Management Science and Engineering, Stanford University, Stanford, CA 94305, U.S.A. http://www.stanford.edu/~yyye (LY, Chapters 2.3-2.5, 3.1-3.4)

Geometry of Linear Programming (LP)

Consider the following LP problem:

maximize    x_1 + 2x_2
subject to  x_1 ≤ 1
            x_2 ≤ 1
            x_1 + x_2 ≤ 1.5
            x_1, x_2 ≥ 0.

LP Geometry depicted in two-variable space

[Figure 1: Feasible region with objective contours. Each corner point has a normal direction cone spanned by the active constraint normals a_1, ..., a_5; if the direction of c is contained in the normal cone of a corner point, then that point is optimal.]

Solution (decision, point): any specification of values for all decision variables, regardless of whether it is a desirable or even allowable choice.
Feasible Solution: a solution for which all the constraints are satisfied.
Feasible Region (constraint set, feasible set): the collection of all feasible solutions.
Interior, Boundary, and Face.
Extreme Point or Corner Point or Vertex.
Objective Function Contour: iso-profit (or iso-cost) line.
Optimal Solution: a feasible solution that has the most favorable objective value.
Optimal Objective Value: the value of the objective function evaluated at an optimal solution.
Active Constraint: binding constraint.

Definition of Face and Extreme Points

Let P be a polyhedron in R^n. Then F is a face of P if and only if there is a vector b for which F is the set of points attaining max{b^T y : y ∈ P}, provided this maximum is finite. A polyhedron has only finitely many faces; each face is a nonempty polyhedron.

A vector y ∈ P is an extreme point or a vertex of P if y is not a convex combination of two distinct feasible points.

Theory of LP

All LP problems fall into one of the following three classes:

- Problem is infeasible: the feasible region is empty.
- Problem is unbounded: the feasible region is unbounded in the optimizing direction.
- Problem is feasible and bounded: there exists an optimal solution or optimizer. There may be a unique optimizer or multiple optimizers. All optimizers lie on a face of the feasible region. There is always at least one corner (extreme) optimizer if the face has a corner.

If a corner point is not worse than all its adjacent or neighboring corners, then it is optimal.

History of the Simplex Method

George B. Dantzig's Simplex Method for LP stands as one of the most significant algorithmic achievements of the 20th century. It is now over 50 years old and still going strong. The basic idea of the simplex method is to confine the search to corner points of the feasible region (of which there are only finitely many) in a most intelligent way. In contrast, interior-point methods move in the interior of the feasible region, hoping to bypass many corner points on the boundary of the region. The key for the simplex method is to make computers see corner points; the key for interior-point methods is to stay in the interior of the feasible region.

From Geometry to Algebra

- How to make a computer recognize a corner point?
- How to make a computer tell that two corners are neighboring?
- How to make a computer terminate and declare optimality?
- How to make a computer identify a better neighboring corner?

LP Standard Form

minimize    c_1 x_1 + c_2 x_2 + ... + c_n x_n
subject to  a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1,
            a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2,
            ...
            a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m,
            x_j ≥ 0, j = 1, 2, ..., n.

Equivalently,

(LP)  minimize c^T x  subject to  Ax = b, x ≥ 0.

(LD)  maximize b^T y  subject to  A^T y + s = c, s ≥ 0.

Reduction to the Standard Form

- Eliminating free variables: substitute with the difference of two nonnegative variables, x = x⁺ − x⁻ where x⁺, x⁻ ≥ 0.
- Eliminating inequalities: add slack or surplus variables,
  a^T x ≤ b  ⇒  a^T x + s = b, s ≥ 0;
  a^T x ≥ b  ⇒  a^T x − s = b, s ≥ 0.
- Eliminating upper bounds: move them to constraints, x ≤ 3 ⇒ x + s = 3, s ≥ 0.
- Eliminating nonzero lower bounds: shift the decision variables, x ≥ 3 ⇒ x := x − 3.
- Change max c^T x to min −c^T x.
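The slack-variable step above is mechanical enough to sketch in code. The function name and the numpy representation below are illustrative, not from the notes:

```python
import numpy as np

def to_standard_form(c, A_ub, b_ub):
    """Convert  min c^T x  s.t.  A_ub x <= b_ub, x >= 0
    into standard form  min c'^T x'  s.t.  A x' = b, x' >= 0
    by appending one slack variable per inequality."""
    m, n = A_ub.shape
    A = np.hstack([A_ub, np.eye(m)])          # slack columns form an identity block
    c_std = np.concatenate([c, np.zeros(m)])  # slacks carry zero cost
    return c_std, A, b_ub

# The lecture's example: minimize -x1 - 2x2
# s.t. x1 <= 1, x2 <= 1, x1 + x2 <= 1.5, x >= 0
c, A, b = to_standard_form(np.array([-1.0, -2.0]),
                           np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
                           np.array([1.0, 1.0, 1.5]))
```

The resulting (c, A, b) is exactly the five-variable standard form used on the next slide.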

An LP Example in Standard Form

minimize    −x_1 − 2x_2
subject to  x_1 + x_3 = 1
            x_2 + x_4 = 1
            x_1 + x_2 + x_5 = 1.5
            x_1, x_2, x_3, x_4, x_5 ≥ 0.

Basic and Basic Feasible Solution (BFS)

In the LP standard form, suppose we select m linearly independent columns of A, denoted by the index set B, and solve A_B x_B = b for the m-vector x_B. By setting the variables x_N of x corresponding to the remaining columns of A equal to zero, we obtain a solution x of Ax = b. Then x is said to be a basic solution to (LP) with respect to the basis A_B. The components of x_B are called basic variables and those of x_N are called nonbasic variables. Two basic solutions are adjacent if they differ by exactly one basic (or nonbasic) variable.

If a basic solution satisfies x_B ≥ 0, then x is called a basic feasible solution (BFS), and it is an extreme point of the feasible region. If one or more components of x_B have value zero, x is said to be degenerate.
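Computing a basic solution is just solving A_B x_B = b and padding the nonbasic entries with zeros. A small numpy sketch (0-based column indices; the function name is my own):

```python
import numpy as np

def basic_solution(A, b, basis):
    """Solve A_B x_B = b for the chosen basis and set nonbasic entries to 0.
    Returns the full n-vector x; it is a BFS iff x >= 0."""
    n = A.shape[1]
    x = np.zeros(n)
    x[basis] = np.linalg.solve(A[:, basis], b)
    return x

# constraint data of the lecture example
A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])
x = basic_solution(A, b, [2, 3, 4])   # basis {3, 4, 5} in the notes' 1-based labels
```

For basis {3, 4, 5} this gives x = (0, 0, 1, 1, 1.5), the starting BFS used later.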

Geometry vs Algebra

Theorem 1 Consider the polyhedron in the standard LP form. Then a basic feasible solution and a corner point are equivalent; the former is algebraic and the latter is geometric.

In the LP example:

Basis      3,4,5  1,4,5  3,4,1  3,2,5  3,4,2  1,2,3  1,2,4  1,2,5
x_1, x_2   0,0    1,0    1.5,0  0,1    0,1.5  .5,1   1,.5   1,1
Feasible?  Yes    Yes    No     Yes    No     Yes    Yes    No

Neighboring Basic Solutions: two basic solutions are neighboring or adjacent if they differ by exactly one basic (or nonbasic) variable.

A basic feasible solution is optimal if no better neighboring BFS exists, that is, the direction to each neighboring BFS is non-improving.
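The table above can be reproduced by brute force: enumerate every choice of 3 columns, skip the linearly dependent ones, and keep the basic solutions with x_B ≥ 0. A numpy sketch (illustrative, 0-based indices):

```python
import numpy as np
from itertools import combinations

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])

corners = []  # feasible basic solutions, recorded as (x1, x2)
for B in combinations(range(5), 3):
    AB = A[:, B]
    if abs(np.linalg.det(AB)) < 1e-12:
        continue                        # columns not linearly independent
    xB = np.linalg.solve(AB, b)
    if np.all(xB >= -1e-12):            # basic feasible solution
        x = np.zeros(5)
        x[list(B)] = xB
        corners.append((round(float(x[0]), 2), round(float(x[1]), 2)))

print(sorted(set(corners)))             # distinct corner points
```

The five distinct corners found match the "Yes" columns of the table: (0,0), (1,0), (0,1), (0.5,1), and (1,0.5).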

Theorem 2 (The Fundamental Theorem of LP in Algebraic Form) Given (LP) and (LD) where A has full row rank m:
i) if there is a feasible solution, there is a basic feasible solution (Carathéodory's theorem);
ii) if there is an optimal solution, there is an optimal basic solution.
(Text p. 21.)

The simplex method proceeds from one BFS (a corner point of the feasible region) to an adjacent or neighboring one, in such a way as to continuously improve the value of the objective function. We now prove ii) on the next slide.

Rounding to an Optimal Basic Feasible Solution for LP

1. Start with any optimal solution x and let x^0 = x after removing all variables with zero value, so x^0 > 0. Let A^0 be the remaining constraint matrix corresponding to x^0, so that A^0 x^0 = b. If the columns of A^0 are linearly independent, then x^0 is already a BFS (if the number of positive entries is less than m, fill in additional independent columns to complete a basis, so that the BFS is degenerate). Otherwise, let k = 0 and go to the next step.

2. Find any vector d ≠ 0 such that A^k d = 0, and let x^{k+1} := x^k + αd, where α is chosen so that x^{k+1} ≥ 0 and at least one entry of x^{k+1} equals 0.

3. Remove all zero entries from x^{k+1} and let A^{k+1} be the remaining constraint matrix corresponding to x^{k+1}, so that A^{k+1} x^{k+1} = b.

4. If the columns of A^{k+1} are linearly independent, stop with the BFS x^{k+1}; otherwise set k := k + 1 and return to Step 2.

The final output is a BFS, and it is optimal because it remains complementary to any optimal dual vector that is complementary to the initial x.
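Steps 1-4 can be sketched as a purification routine. The version below is my own naming and implementation choice (it takes the last right-singular vector of A^k as the null-space direction d), and it only demonstrates the feasibility mechanics, not the complementarity argument:

```python
import numpy as np

def round_to_bfs(A, x, tol=1e-9):
    """Purify a feasible x (Ax = b, x >= 0) into a basic feasible solution:
    while the columns of the positive support are dependent, move along a
    null-space direction until another variable hits zero (Step 2 above)."""
    x = np.array(x, dtype=float)
    while True:
        S = np.flatnonzero(x > tol)           # positive support of x
        Ak = A[:, S]
        if np.linalg.matrix_rank(Ak) == len(S):
            return x                          # independent columns: a BFS
        d = np.linalg.svd(Ak)[2][-1]          # Ak d = 0, d != 0
        if not np.any(d > tol):
            d = -d                            # make sure some entry decreases
        alpha = np.min(x[S][d > tol] / d[d > tol])
        x[S] = x[S] - alpha * d               # stays feasible; one entry -> 0
        x[np.abs(x) < tol] = 0.0

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])
x0 = np.array([0.5, 0.5, 0.5, 0.5, 0.5])      # feasible but not basic
x_bfs = round_to_bfs(A, x0)
```

Each pass strictly shrinks the positive support, so the loop terminates with at most m = 3 positive entries.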

Optimality Test

Consider the BFS (x_1, x_2, x_3, x_4, x_5) = (0, 0, 1, 1, 1.5) with basic variables {3, 4, 5}. It is NOT optimal for the problem

minimize    −x_1 − 2x_2
subject to  x_1 + x_3 = 1
            x_2 + x_4 = 1
            x_1 + x_2 + x_5 = 1.5
            x_1, x_2, x_3, x_4, x_5 ≥ 0;

but it would be optimal for the problem below:

minimize    x_1 + 2x_2
subject to  x_1 + x_3 = 1
            x_2 + x_4 = 1
            x_1 + x_2 + x_5 = 1.5
            x_1, x_2, x_3, x_4, x_5 ≥ 0.

At a BFS, when the objective coefficients of all the basic variables are zero and those of all the non-basic variables are non-negative, the BFS is optimal if the objective is to be minimized.

LP Canonical Form

An LP in the standard form is said to be in canonical form at a BFS if:

- The objective coefficients of all the basic variables are zero, that is, the objective function contains only non-basic variables; and
- The constraint matrix for the basic variables forms an identity matrix (after a permutation if necessary).

If the LP is in canonical form, it is easy to see whether or not the current BFS is optimal. Is there a way to transform the original LP problem into an equivalent LP in canonical form? That is, is there an algorithm under which the feasible set and the optimal solution set remain the same as in the original LP?

Transforming to Canonical Form

Suppose the basis of a BFS is A_B and the rest is A_N. One can transform the equality constraint to

A_B^{-1} A x = A_B^{-1} b   (so that x_B = A_B^{-1} b − A_B^{-1} A_N x_N).

That is, we express x_B in terms of x_N, the degree-of-freedom variables. Then the objective function becomes

c^T x = c_B^T x_B + c_N^T x_N
      = c_B^T A_B^{-1} b − c_B^T A_B^{-1} A_N x_N + c_N^T x_N
      = c_B^T A_B^{-1} b + (c_N^T − c_B^T A_B^{-1} A_N) x_N.

The transformed problem is then in canonical form with respect to the basis A_B, and it is equivalent to the original LP: its feasible set and optimal solution set do not change.

Equivalent LP

By ignoring the constant term in the objective function, we have an equivalent intermediate LP problem in canonical form:

minimize    r^T x
subject to  Ā x = b̄, x ≥ 0,

where

r_B := 0,  r_N := c_N − A_N^T (A_B^{-1})^T c_B,  Ā := A_B^{-1} A,  b̄ := A_B^{-1} b.

Vector r is the so-called reduced cost coefficient vector or reduced gradient vector.

Optimality Test Revisited

The reduced gradient vector r ∈ R^n can be written as

r = c − Ā^T c_B = c − A^T (A_B^{-1})^T c_B = c − A^T y,

where y = (A_B^{-1})^T c_B is called the shadow price or dual vector.

Theorem 3 If r_N ≥ 0 (equivalently r ≥ 0) at a BFS with basic variable set B, then the BFS x is a primal optimal basic solution and y is a dual optimal basic solution, both with A_B being an optimal basis.

The proof follows simply from primal feasibility, dual feasibility, and complementarity with the dual slack vector s = r.

In the LP example, let the basic variable set be B = {1, 2, 3}, so that

A_B = [ 1  0  1 ]        A_B^{-1} = [ 0  −1   1 ]
      [ 0  1  0 ]                   [ 0   1   0 ]
      [ 1  1  0 ]                   [ 1   1  −1 ]

y^T = (0, −1, −1)  and  r^T = (0, 0, 0, 1, 1).

The corresponding optimal corner solution is (x_1, x_2) = (0.5, 1) in the original problem.
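These numbers are easy to verify numerically. A sketch with numpy (0-based indices; the data is the example's standard form):

```python
import numpy as np

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])
c = np.array([-1., -2., 0., 0., 0.])

B = [0, 1, 2]                 # basic set {1, 2, 3} in 0-based indexing
AB_inv = np.linalg.inv(A[:, B])
y = AB_inv.T @ c[B]           # shadow prices y = (A_B^{-1})^T c_B
r = c - A.T @ y               # reduced cost vector
x_B = AB_inv @ b              # basic variable values
```

This reproduces y = (0, −1, −1), r = (0, 0, 0, 1, 1) ≥ 0, and the optimal basic values x_1 = 0.5, x_2 = 1, x_3 = 0.5.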

Canonical Tableau

The intermediate canonical-form data can be organized in a tableau:

Basis indices |  r^T  |  −c_B^T b̄
              |   Ā   |     b̄

The upper-right corner quantity represents the negative of the objective value at the current basic solution. With the initial basic variable set B = {3, 4, 5} for the example, the canonical tableau is

B |  −1  −2   0   0   0 |   0
3 |   1   0   1   0   0 |   1
4 |   0   1   0   1   0 |   1
5 |   1   1   0   0   1 | 3/2

Moving to a Better Neighboring Corner

As we saw in the previous slides, the BFS given by the canonical form is optimal if the reduced-cost vector r ≥ 0. What to do if it is not optimal? An effort can be made to find a better neighboring basic feasible solution: that is, a new BFS that differs from the current one by exactly one new basic variable, where the reduced cost coefficient of the entering variable is negative.

Changing Basis for the LP Example

With the initial basic variable set B = {3, 4, 5}:

B |  −1  −2   0   0   0 |   0
3 |   1   0   1   0   0 |   1
4 |   0   1   0   1   0 |   1
5 |   1   1   0   0   1 | 3/2

Select the entering basic variable x_1 with r_1 = −1 < 0 and consider

(x_3, x_4, x_5) = (1, 1, 3/2) − (1, 0, 1) x_1.

How much can we increase x_1 while the current basic variables remain feasible (non-negative)? This leads to the definition of the minimum ratio test (MRT).

Minimum Ratio Test (MRT)

Select the entering variable x_e with reduced cost r_e < 0. If column Ā_{.e} ≤ 0, then the problem is unbounded. Otherwise, the minimum ratio test is

θ := min { b̄_i / ā_ie : ā_ie > 0 }.
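The test translates directly to code. A small sketch (it also returns the row attaining the minimum, which identifies the out-going variable; the names are illustrative):

```python
import numpy as np

def min_ratio_test(b_bar, col, tol=1e-12):
    """MRT: theta = min{ b_bar[i]/col[i] : col[i] > 0 }.
    Returns (theta, row index o); (inf, None) signals an unbounded LP."""
    pos = col > tol
    if not pos.any():
        return np.inf, None               # column <= 0: unbounded direction
    ratios = np.where(pos, b_bar / np.where(pos, col, 1.0), np.inf)
    o = int(np.argmin(ratios))
    return float(ratios[o]), o

# example: entering x2 at the initial basis {3, 4, 5}
theta, o = min_ratio_test(np.array([1., 1., 1.5]), np.array([0., 1., 1.]))
```

For entering variable x_2, the ratios are (∞, 1, 3/2), so θ = 1 and the x_4 row (index 1) leaves.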

How Much Can the New Basic Variable Be Increased?

What is θ? It is the largest amount that x_e can be increased before one (or more) of the current basic variables x_i decreases to zero. For the moment, assume that the minimum ratio in the MRT is attained by exactly one basic variable index, o. Then x_o is called the out-going basic variable:

x_o = b̄_o − ā_oe θ = 0,
x_i = b̄_i − ā_ie θ > 0,  i ≠ o.

That is, x_e replaces x_o in the basic variable set.

Tie Breaking

If the MRT does not result in a single index, but rather in a set of two or more indices attaining the minimum ratio, we select one of them as the out-going variable arbitrarily. Again, when x_e reaches θ, we have generated a corner point of the feasible region that is adjacent to the one associated with the index set B. The new basic variable set associated with this new corner point replaces o by e.

But the new BFS is degenerate: one of its basic variables has value 0. Pretend that this value is ϵ > 0, arbitrarily small, and continue the transformation process to the new canonical form.

The Simplex Algorithm

0. Initialize with a minimization problem in feasible canonical form with respect to a basic index set B. Let N denote the complementary index set.

1. Test for termination: first find r_e = min_{j ∈ N} {r_j}. If r_e ≥ 0, stop; the solution is optimal. Otherwise, determine whether the column Ā_{.e} contains a positive entry. If not, the objective function is unbounded below; terminate. Otherwise, let x_e be the entering basic variable.

2. Determine the out-going variable: execute the MRT to determine the out-going variable x_o.

3. Update the basis: update B and A_B, transform the problem to canonical form, and return to Step 1.

Transform to a New Canonical Form: Pivoting

The passage from one canonical form to the next can be carried out by an algebraic process called pivoting. In a nutshell, a pivot on the nonzero (in our case, positive) pivot element ā_oe amounts to
(a) expressing x_e in terms of all the non-basic variables;
(b) replacing x_e in all other formulas by using this expression.

In terms of Ā, this has the effect of making the column of x_e look like a column of an identity matrix, with 1 in the o-th row and zeros elsewhere.

Pivoting: Gauss-Jordan Elimination Process

Here is what happens to the current tableau:

1. All the entries in row o are divided by the pivot element ā_oe (this produces a 1 in the column of x_e).

2. For all i ≠ o, the other entries are modified according to the rule

ā_ij := ā_ij − (ā_oj / ā_oe) ā_ie.

(When j = e, the new entry is just 0.) The right-hand side and the objective-function row are modified in the same way.
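The two elimination steps are a few lines of numpy. A sketch (my own layout: the objective row is row 0 of the tableau and the right-hand side is the last column):

```python
import numpy as np

def pivot(T, o, e):
    """Gauss-Jordan pivot on tableau T about row o and column e.
    Row 0 is the objective row; the last column is the right-hand side."""
    T = np.array(T, dtype=float)
    T[o] = T[o] / T[o, e]                  # step 1: make the pivot element 1
    for i in range(T.shape[0]):
        if i != o:
            T[i] = T[i] - T[i, e] * T[o]   # step 2: zero out column e elsewhere
    return T

# initial tableau of the example (objective row first, rhs last)
T0 = np.array([[-1., -2., 0., 0., 0., 0. ],
               [ 1.,  0., 1., 0., 0., 1. ],
               [ 0.,  1., 0., 1., 0., 1. ],
               [ 1.,  1., 0., 0., 1., 1.5]])
T1 = pivot(T0, o=2, e=1)                   # x2 enters, x4 leaves
```

The result T1 matches the next slide's tableau: objective row (−1, 0, 0, 2, 0 | 2) and bottom row (1, 0, 0, −1, 1 | 1/2).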

The Initial Canonical Form of the Example

Choose e = 2, and the MRT decides o = 4 with θ = 1:

B |  −1  −2   0   0   0 |   0 | MRT
3 |   1   0   1   0   0 |   1 |
4 |   0   1   0   1   0 |   1 |  1
5 |   1   1   0   0   1 | 3/2 | 3/2

If necessary, divide the pivoting row by the pivot value to make the pivot element equal to 1.

Transform to a New Canonical Form by Pivoting

Objective row := Objective row + 2 × Pivot row:

B |  −1   0   0   2   0 |   2
3 |   1   0   1   0   0 |   1
2 |   0   1   0   1   0 |   1
5 |   1   1   0   0   1 | 3/2

Bottom row := Bottom row − Pivot row:

B |  −1   0   0   2   0 |   2
3 |   1   0   1   0   0 |   1
2 |   0   1   0   1   0 |   1
5 |   1   0   0  −1   1 | 1/2

This is a new canonical form!

Let's Keep Going

Choose e = 1, and the MRT decides o = 5 with θ = 1/2:

B |  −1   0   0   2   0 |   2 | MRT
3 |   1   0   1   0   0 |   1 |  1
2 |   0   1   0   1   0 |   1 |
5 |   1   0   0  −1   1 | 1/2 | 1/2

Transform to a New Canonical Form by Pivoting

Objective row := Objective row + Pivot row:

B |   0   0   0   1   1 | 5/2
3 |   1   0   1   0   0 |   1
2 |   0   1   0   1   0 |   1
1 |   1   0   0  −1   1 | 1/2

Second row := Second row − Pivot row:

B |   0   0   0   1   1 | 5/2
3 |   0   0   1   1  −1 | 1/2
2 |   0   1   0   1   0 |   1
1 |   1   0   0  −1   1 | 1/2

Stop! It's optimal :)
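The whole walk through the example can be reproduced end to end. Below is a compact revised-simplex sketch (Dantzig's most-negative-reduced-cost entering rule; it assumes a feasible starting basis is given and ignores degenerate cycling; names and structure are my own, not from the notes):

```python
import numpy as np

def simplex(A, b, c, basis):
    """Bare-bones simplex for  min c^T x  s.t.  Ax = b, x >= 0,
    starting from a feasible basis (list of column indices)."""
    m, n = A.shape
    basis = list(basis)
    while True:
        AB_inv = np.linalg.inv(A[:, basis])
        b_bar = AB_inv @ b                     # current basic values
        y = AB_inv.T @ c[basis]                # shadow prices
        r = c - A.T @ y                        # reduced costs
        e = int(np.argmin(r))
        if r[e] >= -1e-9:                      # r >= 0: optimal (Theorem 3)
            x = np.zeros(n)
            x[basis] = b_bar
            return x, float(c @ x)
        col = AB_inv @ A[:, e]                 # entering column of A-bar
        if not np.any(col > 1e-9):
            raise ValueError("LP is unbounded")
        ratios = np.where(col > 1e-9, b_bar / col, np.inf)
        basis[int(np.argmin(ratios))] = e      # MRT: swap out-going for entering

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])
c = np.array([-1., -2., 0., 0., 0.])
x, z = simplex(A, b, c, [2, 3, 4])             # start at basis {3, 4, 5}
```

The run visits the same bases as the tableaus above, {3,4,5} → {2,3,5} → {1,2,3}, and ends at x = (0.5, 1, 0.5, 0, 0) with objective value −2.5.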

Detailed Canonical Tableau for Production

If the original LP is the production problem

maximize p^T x  subject to  Rx ≤ b (> 0), x ≥ 0,

the initial canonical tableau for minimization would be

Basis indices |  −p^T   0 |  0
              |   R     I |  b

and the intermediate canonical tableau would be

Basis indices |  r^T   y^T      | −c_B^T b̄
              |   Ā   A_B^{-1}  |    b̄

where the slack-column block of Ā equals A_B^{-1}, so the dual information can be read off the slack columns.