Operations Research Lecture 2: Linear Programming Simplex Method


Notes taken by Kaiquan Xu @ Business School, Nanjing University, Mar 10th, 2016

1 Geometry of LP

1.1 Graphical Representation and Solution

Example 1. Consider the problem

    min  -x_1 - x_2
    s.t. x_1 + 2x_2 <= 3
         2x_1 + x_2 <= 3
         x_1, x_2 >= 0

The feasible set is the shaded region in Figure 1.

[Figure 1: the feasible region, together with the level lines -x_1 - x_2 = z for z = 0 and z = -2; the latter touches the region only at the corner (1, 1).]

For any given scalar z, the line -x_1 - x_2 = z describes all points whose cost c^T x equals z. Moving this line in the direction of decreasing z as far as possible without leaving the feasible region, z = -2 is the minimum value we can reach, and x* = (1, 1) is an optimal solution.

Example 2. Consider the feasible set in R^2 defined by

    -x_1 + x_2 <= 1
    x_1, x_2 >= 0

which is shown in Figure 2.

[Figure 2: the unbounded feasible region defined by -x_1 + x_2 <= 1, x_1, x_2 >= 0.]

a) For the cost vector c = (1, 1), x = (0, 0) is the unique optimal solution.
b) For the cost vector c = (1, 0), there are multiple optimal solutions x = (0, x_2) with 0 <= x_2 <= 1, and the set of optimal solutions is bounded.
c) For the cost vector c = (0, 1), there are multiple optimal solutions x = (x_1, 0) with x_1 >= 0, and the set of optimal solutions is unbounded.
d) For the cost vector c = (-1, -1), there is no optimal solution, and the optimal cost is -infinity.
e) If an additional constraint x_1 + x_2 <= -2 is imposed, there are no feasible solutions.

In these examples, it seems that we can always find an optimal solution among the corners of the feasible set :)

1.2 Some Concepts

The standard form of LP is

    min  sum_{j=1..n} c_j x_j                          (1a)
    s.t. sum_{j=1..n} a_{ij} x_j = b_i,  i = 1,...,m   (1b)
         x_j >= 0,  j = 1,...,n                        (1c)

In matrix form,

    min  c^T x     (2a)
    s.t. Ax = b    (2b)
         x >= 0    (2c)

where A is the m x n matrix with entries a_{ij} and columns A_1, ..., A_n, i.e. A = (A_1, A_2, ..., A_n), and b = (b_1, ..., b_m)^T, c = (c_1, ..., c_n)^T, x = (x_1, ..., x_n)^T.

For the standard form of LP, without loss of generality, we assume that the m rows of the matrix A are linearly independent. Then we can find m linearly independent columns A_B(1), ..., A_B(m). If we let x_i = 0 for

i ∉ {B(1), ..., B(m)}, we get the equation sum_{i=1..m} A_B(i) x_B(i) = b, which has a unique solution x_B.

Definition 3. x_B together with x_i = 0 (i ∉ {B(1), ..., B(m)}) composes a solution of Ax = b, which is called a basic solution; the variables x_B(1), ..., x_B(m) are called basic variables, and the remaining variables are called nonbasic variables. B(1), ..., B(m) are called the basic indices, and B = [A_B(1) A_B(2) ... A_B(m)] is called the basis matrix. If the basic solution is also nonnegative, then it is feasible and is called a basic feasible solution.

Definition 4. A set S ⊆ R^n is convex if for any x, y ∈ S and any λ ∈ [0, 1], we have λx + (1-λ)y ∈ S.

Definition 5. A polyhedron is a set of the form {x ∈ R^n | Ax >= b}, where A is an m x n matrix and b is a vector in R^m. A polyhedron is a convex set.

Definition 6. Let P be a polyhedron. A vector x ∈ P is an extreme point of P if we cannot find two vectors y, z ∈ P (y != x, z != x) and a scalar λ ∈ [0, 1] such that x = λy + (1-λ)z.

The feasible set of an LP is a polyhedron.

1.3 Theorems on Optimality of LP

Theorem 7. Let P be a nonempty polyhedron and let x ∈ P. Then the following are equivalent:
a) x is an extreme point;
b) x is a basic feasible solution.
Proof: (skipped)

Theorem 8. Consider the LP problem of minimizing c^T x over a polyhedron P. Suppose that P has at least one extreme point and that there exists an optimal solution. Then, there exists an optimal solution which is an extreme point of P.
Proof: (skipped)

Theorem 9. Consider the LP problem of minimizing c^T x over a polyhedron P. Suppose that P has at least one extreme point. Then, either the optimal cost is equal to -infinity, or there exists an extreme point which is optimal.
Proof: (skipped)

Corollary 10. Consider the LP problem of minimizing c^T x over a nonempty polyhedron. Then, either the optimal cost is equal to -infinity or there exists an optimal solution.
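Theorems 7-9 suggest a (very inefficient) brute-force algorithm: enumerate all basic feasible solutions and pick the cheapest. A minimal sketch, with A, b, c chosen purely for illustration (they are not taken from the notes), assuming numpy is available:

```python
# Brute-force LP: enumerate basic solutions (Definition 3), keep the
# feasible ones, and take the cheapest -- justified by Theorems 7-9.
from itertools import combinations
import numpy as np

A = np.array([[1.0, 2, 2, 1, 0],
              [2.0, 1, 2, 0, 1]])
b = np.array([4.0, 4.0])
c = np.array([-3.0, -2, -4, 0, 0])
m, n = A.shape

best_x, best_cost = None, np.inf
for cols in combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-9:        # columns not linearly independent
        continue
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)   # the basic solution for this basis
    if np.all(x >= -1e-9) and c @ x < best_cost:   # basic feasible solution
        best_x, best_cost = x, c @ x

print(best_x, best_cost)   # best extreme point and its cost
```

Since there are at most C(n, m) bases, this terminates, but the count explodes combinatorially; the simplex method below visits only a tiny fraction of these bases.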

2 Optimality Conditions

Definition 11. Let x be an element of a polyhedron P. A vector d ∈ R^n is said to be a feasible direction at x if there exists a positive scalar θ for which x + θd ∈ P.

We want to move from a basic feasible solution (extreme point) to another, along the edges of the feasible set, in a cost-reducing direction. We adopt the following way of selecting a feasible direction: select a nonbasic variable x_j and set d_j = 1, d_i = 0 for every nonbasic index i other than j. Let d_B denote the components of the direction d (the jth basic direction) that correspond to the basic variables. Then we have

    A(x + θd) = b  =>  Ad = 0  =>  B d_B + sum_{i ∈ N} A_i d_i = 0  =>  B d_B + A_j = 0  =>  d_B = -B^{-1} A_j

If we move along the jth basic direction, the cost changes by

    c^T d = c_B^T d_B + c_j = c_j - c_B^T B^{-1} A_j =: c̄_j

c̄_j is called the reduced cost of the jth variable.

Theorem 12. Consider a basic feasible solution x associated with a basis matrix B, and let c̄ be the corresponding vector of reduced costs.
a) If c̄ >= 0, then x is optimal.
b) If x is optimal and nondegenerate, then c̄ >= 0.

Proof: a) Assume c̄ >= 0 and let y be an arbitrary feasible solution, so that Ax = Ay = b. Let d = y - x; then Ad = 0, i.e.

    B d_B + sum_{i ∈ N} A_i d_i = 0  =>  d_B = -sum_{i ∈ N} B^{-1} A_i d_i

and therefore

    c^T d = c_B^T d_B + sum_{i ∈ N} c_i d_i = sum_{i ∈ N} (c_i - c_B^T B^{-1} A_i) d_i = sum_{i ∈ N} c̄_i d_i

Since y is feasible, y_i >= 0 for every nonbasic index i, while x_i = 0; thus d_i >= 0 and c̄_i d_i >= 0. Hence c^T(y - x) >= 0, that is, x is optimal.
b) Suppose that x is a nondegenerate basic feasible solution and that c̄_j < 0 for some j. Then the jth basic direction is a feasible direction of cost decrease. By moving in this direction, we obtain feasible solutions whose cost is less than that of x, so x is not optimal.

Given a basic feasible solution x and the reduced costs c̄_j of all nonbasic variables: if all of them are nonnegative, Theorem 12 shows that we have an optimal solution, and we stop.
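The reduced costs and basic directions defined above come down to a few lines of linear algebra. As a concrete instance, this sketch (numpy assumed) uses the data of Example 13 from Section 3, starting from the slack basis:

```python
# Compute the jth basic direction d (d_B = -B^{-1} A_j), the reduced
# cost of x_j, and the largest step theta keeping x + theta*d >= 0.
import numpy as np

A = np.array([[1.0, 2, 2, 1, 0, 0],
              [2.0, 1, 2, 0, 1, 0],
              [2.0, 2, 1, 0, 0, 1]])
b = np.array([20.0, 20, 20])
c = np.array([-10.0, -12, -12, 0, 0, 0])

basic = [3, 4, 5]                  # slack basis: x_4, x_5, x_6
B = A[:, basic]
x = np.zeros(A.shape[1])
x[basic] = np.linalg.solve(B, b)   # basic feasible solution (0,0,0,20,20,20)

j = 0                              # candidate entering variable x_1
d = np.zeros(A.shape[1])
d[j] = 1.0
d[basic] = -np.linalg.solve(B, A[:, j])              # d_B = -B^{-1} A_j
reduced_cost = c[j] - c[basic] @ np.linalg.solve(B, A[:, j])
print(reduced_cost)                # -10.0: moving along d reduces the cost

neg = d < 0
theta = np.min(-x[neg] / d[neg])   # largest feasible step along d
print(theta)                       # 10.0
```

The negative reduced cost of x_1 and the step length 10 match the first pivot of the tableau walkthrough in Section 3.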
If, on the other hand, some reduced cost c̄_j is negative, we select the jth basic direction d as the cost-decreasing direction and move x -> x + θd: the nonbasic variable x_j becomes positive (x_j enters the basis) and all other nonbasic variables remain at zero. When moving in the direction d, we need to make sure that x + θd stays feasible, i.e. that A(x + θd) = b (which always holds, since Ad = 0) and that x + θd >= 0:
1) If d >= 0, then x + θd >= 0 holds for all θ >= 0, and the cost decreases without bound: the optimal cost is -infinity.
2) If d_i < 0 for some i, then θ* = min_{i : d_i < 0} ( -x_i / d_i ).

3 Tableau Method

3.1 Method Development

For convenience, define u = -d_B = B^{-1} A_j, and let l be such that θ* = x_B(l) / u_l. Form a new basis by replacing A_B(l) with A_j. If y is the new basic feasible solution, the values of the new basic variables are y_j = θ*

and y_B(i) = x_B(i) - θ* u_i for i != l.

We need an efficient method for updating the matrix B^{-1} each time we effect a change of basis. Let B = [A_B(1) ... A_B(m)] be the basis matrix at the beginning of an iteration, and let B̄ = [A_B(1) ... A_B(l-1)  A_j  A_B(l+1) ... A_B(m)] be the basis matrix at the beginning of the next iteration. We expect B^{-1} to contain information that can be exploited in the computation of B̄^{-1}.

Given a matrix, the operation of adding a constant multiple of one row to the same or to another row is called an elementary row operation. Performing an elementary row operation on a matrix C is equivalent to forming the matrix QC, where Q is a suitably constructed invertible square matrix.

[Figure: example of an elementary row operation]

Since B^{-1} B = I, we see that B^{-1} A_B(i) is the ith unit vector e_i, and hence

    B^{-1} B̄ = [e_1 ... e_{l-1}  u  e_{l+1} ... e_m]

where u = B^{-1} A_j. Execute the sequence of elementary row operations that turns the column u into the unit vector e_l; this is equivalent to left-multiplying B^{-1} B̄ by a certain invertible matrix Q. We then have Q B^{-1} B̄ = I, which yields Q B^{-1} = B̄^{-1}. The last equation shows that if we apply the same sequence of row operations to the matrix B^{-1} (equivalently, left-multiply it by Q), we obtain B̄^{-1}. All it takes to generate B̄^{-1} is to start with B^{-1} and apply this sequence of elementary row operations.

In practice, instead of maintaining and updating the matrix B^{-1}, we maintain and update the m x (n+1) matrix

    B^{-1} [b | A]

This matrix is called the simplex tableau. The column u = B^{-1} A_j corresponding to the variable that enters the basis is called the pivot column. If the lth basic variable exits the basis, the lth row of the tableau is called the pivot row. The element belonging to both the pivot row and the pivot column is called the pivot element. The rows of the tableau provide us with the coefficients of the equality constraints B^{-1} b = B^{-1} A x. At the end of each iteration, we need to update the tableau B^{-1} [b | A] and compute B̄^{-1} [b | A].
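The claim that the same row operations take B^{-1} to B̄^{-1} can be checked numerically; a sketch with randomly generated data (numpy assumed, and the exiting index chosen as the largest pivot entry for numerical stability):

```python
# Verify: applying to B^{-1} the row operations that turn the pivot
# column u = B^{-1} A_j into a unit vector e_l yields Bbar^{-1}, where
# Bbar is B with its lth column replaced by A_j.
import numpy as np

rng = np.random.default_rng(0)
m = 4
B = rng.normal(size=(m, m))        # current basis matrix (invertible w.h.p.)
Aj = rng.normal(size=m)            # entering column

Binv = np.linalg.inv(B)
u = Binv @ Aj                      # pivot column
l = int(np.argmax(np.abs(u)))      # exiting index (largest |u_l| for stability)

T = Binv.copy()
T[l] /= u[l]                       # scale the pivot row
for i in range(m):
    if i != l:
        T[i] -= u[i] * T[l]        # zero out the rest of the pivot column

Bbar = B.copy()
Bbar[:, l] = Aj                    # new basis matrix
print(np.allclose(T, np.linalg.inv(Bbar)))   # True
```

The same row operations, applied to the whole m x (n+1) tableau rather than to B^{-1} alone, give exactly the tableau update described next.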
This can be accomplished by left-multiplying the simplex tableau by a matrix Q satisfying Q B^{-1} = B̄^{-1}. As explained earlier, this is the same as performing those elementary row operations that turn B^{-1} into B̄^{-1}; that is, we add to each row a multiple of the pivot row so as to set all entries of the pivot column to zero, with the exception of the pivot element, which is set to one.

It is customary to augment the simplex tableau with a top row, the zeroth row. The entry at the top left corner contains the value -c_B^T x_B, and the rest of the zeroth row is the row vector of reduced costs c̄^T = c^T - c_B^T B^{-1} A. The structure of the tableau

is shown in Table 1. The rule for updating the zeroth row turns out to be identical to the rule used for the other rows of the tableau (the verification is skipped here).

Table 1: The simplex tableau.

    -c_B^T x_B | c̄_1 ... c̄_n
    -----------+-------------------------
     x_B(1)    |
       ...     | B^{-1}A_1 ... B^{-1}A_n
     x_B(m)    |

3.2 General Process

An iteration of the full tableau implementation:
1. A typical iteration starts with the tableau (Table 1) associated with a basis matrix B and the corresponding basic feasible solution x.
2. Examine the reduced costs in the zeroth row of the tableau. If they are all nonnegative, the current basic feasible solution is optimal and the algorithm terminates; else, choose some j for which c̄_j < 0.
3. Consider the vector u = B^{-1} A_j, which is the jth column of the tableau. If no component of u is positive, the optimal cost is -infinity and the algorithm terminates.
4. For each i for which u_i is positive, compute the ratio x_B(i) / u_i. Let l be the index of a row that corresponds to the smallest ratio. The column A_B(l) exits the basis and the column A_j enters the basis.
5. Add to each row of the tableau a constant multiple of the lth row (the pivot row) so that u_l (the pivot element) becomes one and all other entries of the pivot column become zero.

Example 13. Consider the problem

    min  -10x_1 - 12x_2 - 12x_3
    s.t. x_1 + 2x_2 + 2x_3 <= 20
         2x_1 + x_2 + 2x_3 <= 20
         2x_1 + 2x_2 + x_3 <= 20
         x_1, x_2, x_3 >= 0

The feasible set of this problem is shown in Figure 3. We transform the problem into standard form:

    min  -10x_1 - 12x_2 - 12x_3
    s.t. x_1 + 2x_2 + 2x_3 + x_4 = 20
         2x_1 + x_2 + 2x_3 + x_5 = 20
         2x_1 + 2x_2 + x_3 + x_6 = 20
         x_1, ..., x_6 >= 0
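Before walking through the tableaus by hand, the five-step iteration above can be sketched as a small Python implementation (the notes give no code; this is a minimal sketch that assumes a starting feasible basis is supplied and uses no anti-cycling rule). Its pivoting choices — first negative reduced cost, first smallest ratio — may visit the corners in a different order than the walkthrough, but on this problem it reaches the same optimum:

```python
# A minimal full-tableau simplex loop (steps 1-5 above).
# Assumes the supplied initial basis is feasible; no Bland's rule.
import numpy as np

def simplex_tableau(c, A, b, basic):
    m, n = A.shape
    basic = list(basic)
    B = A[:, basic]
    # body: B^{-1}[b | A]; zeroth row: [-c_B^T x_B | reduced costs]
    body = np.linalg.solve(B, np.hstack([b[:, None], A]))
    cB = c[basic]
    zeroth = np.concatenate([[-cB @ body[:, 0]], c - cB @ body[:, 1:]])
    T = np.vstack([zeroth, body])
    while True:
        # step 2: pick a column with negative reduced cost
        j = next((k for k in range(1, n + 1) if T[0, k] < -1e-9), None)
        if j is None:                      # all reduced costs >= 0: optimal
            x = np.zeros(n)
            x[basic] = T[1:, 0]
            return x, -T[0, 0]
        u = T[1:, j]                       # step 3: pivot column
        if np.all(u <= 1e-9):
            raise ValueError("optimal cost is -infinity")
        # step 4: ratio test over rows with u_i > 0
        rows = [i for i in range(m) if u[i] > 1e-9]
        l = min(rows, key=lambda i: T[i + 1, 0] / u[i])
        basic[l] = j - 1
        # step 5: row operations around the pivot element
        T[l + 1] /= T[l + 1, j]
        for i in range(m + 1):
            if i != l + 1:
                T[i] -= T[i, j] * T[l + 1]

# Example 13 in standard form, starting from the slack basis x_4, x_5, x_6:
A = np.array([[1.0, 2, 2, 1, 0, 0],
              [2.0, 1, 2, 0, 1, 0],
              [2.0, 2, 1, 0, 0, 1]])
b = np.array([20.0, 20, 20])
c = np.array([-10.0, -12, -12, 0, 0, 0])
x, cost = simplex_tableau(c, A, b, [3, 4, 5])
print(x, cost)   # x = (4, 4, 4, 0, 0, 0), cost = -136
```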

[Figure 3: the feasible set in (x_1, x_2, x_3)-space, with corners A = (0, 0, 0), B = (0, 0, 10), C = (0, 10, 0), D = (10, 0, 0), and E = (4, 4, 4).]

It is obvious that x = (0, 0, 0, 20, 20, 20)^T is a basic feasible solution (the point A = (0, 0, 0)); that is, x_4, x_5, x_6 serve as the basic variables, and the corresponding basis matrix B is the 3 x 3 identity matrix. Then c_B = 0, therefore c_B^T B^{-1} = 0 and c̄ = c.

               x_1  x_2  x_3  x_4  x_5  x_6
      0        -10  -12  -12   0    0    0
    x_4 = 20    1    2    2    1    0    0
    x_5 = 20    2    1    2    0    1    0
    x_6 = 20    2    2    1    0    0    1

The reduced cost of x_1 is negative, and we let that variable enter the basis. The pivot column is u = (1, 2, 2)^T. Calculating the ratios x_B(i)/u_i for i = 1, 2, 3, the smallest ratio corresponds to both i = 2 and i = 3; we select x_5 to exit the basis. Then x_4, x_1, x_6 are the new basic variables, and we perform row operations to get the new tableau:

               x_1  x_2  x_3  x_4  x_5  x_6
     100        0   -7   -2    0    5    0
    x_4 = 10    0   1.5   1    1  -0.5   0
    x_1 = 10    1   0.5   1    0   0.5   0
    x_6 = 0     0    1   -1    0   -1    1

The corresponding basic feasible solution is x = (10, 0, 0, 10, 0, 0)^T; we have moved to point D = (10, 0, 0) in Figure 3. Both x_2 and x_3 have negative reduced costs; we choose x_3 as the variable to enter the basis. The pivot column is u = (1, 1, -1)^T, so we only calculate the ratios x_B(i)/u_i for i = 1, 2. We select x_4 to exit the basis and execute the row operations:

               x_1  x_2  x_3  x_4  x_5  x_6
     120        0   -4    0    2    4    0
    x_3 = 10    0   1.5   1    1  -0.5   0
    x_1 = 0     1   -1    0   -1    1    0
    x_6 = 10    0   2.5   0    1  -1.5   1

We have moved to point B = (0, 0, 10) in Figure 3. We bring x_2 into the basis, and x_6 exits; the resulting tableau is:

               x_1  x_2  x_3  x_4  x_5  x_6
     136        0    0    0   3.6  1.6  1.6
    x_3 = 4     0    0    1   0.4  0.4 -0.6
    x_1 = 4     1    0    0  -0.6  0.4  0.4
    x_2 = 4     0    1    0   0.4 -0.6  0.4

We have moved to point E = (4, 4, 4) in Figure 3. Now all reduced costs are nonnegative, and we have found the optimal solution x* = (4, 4, 4, 0, 0, 0)^T, with optimal cost -136.

3.3 Finding an Initial Basic Feasible Solution

Sometimes this is straightforward. For example, for a problem involving constraints of the form Ax <= b with b >= 0, we can introduce nonnegative slack variables s and rewrite the constraints in the form Ax + s = b. Then the vector (x, s) defined by x = 0 and s = b serves as an initial basic feasible solution. Otherwise, we can use the following method.

The big-M method

Example 14. Consider the following LP problem:

    min  x_1 + x_2 + x_3
    s.t. x_1 + 2x_2 + 3x_3 = 3
         -x_1 + 2x_2 + 6x_3 = 2
         4x_2 + 9x_3 = 5
         3x_3 + x_4 = 1
         x_1, ..., x_4 >= 0

When using the big-M method, we generate the following auxiliary problem:

    min  x_1 + x_2 + x_3 + M x_5 + M x_6 + M x_7
    s.t. x_1 + 2x_2 + 3x_3 + x_5 = 3
         -x_1 + 2x_2 + 6x_3 + x_6 = 2
         4x_2 + 9x_3 + x_7 = 5
         3x_3 + x_4 = 1
         x_1, ..., x_7 >= 0

A basic feasible solution of the auxiliary problem is obtained by letting (x_5, x_6, x_7, x_4) = b = (3, 2, 5, 1). The corresponding basis matrix is the identity matrix, and c_B = (M, M, M, 0)^T.

                x_1    x_2     x_3    x_4   x_5  x_6  x_7
    -10M         1    -8M+1  -18M+1    0     0    0    0
    x_5 = 3      1      2      3       0     1    0    0
    x_6 = 2     -1      2      6       0     0    1    0
    x_7 = 5      0      4      9       0     0    0    1
    x_4 = 1      0      0      3       1     0    0    0

The reduced cost of x_3 is negative when M is large enough. Bring x_3 into the basis and have x_4 exit:

                x_1    x_2    x_3    x_4     x_5  x_6  x_7
    -4M-1/3      1    -8M+1    0    6M-1/3    0    0    0
    x_5 = 2      1      2      0     -1       1    0    0
    x_6 = 0     -1      2      0     -2       0    1    0
    x_7 = 2      0      4      0     -3       0    0    1
    x_3 = 1/3    0      0      1     1/3      0    0    0

The reduced cost of x_2 is negative when M is large enough. Bring x_2 into the basis; x_6 exits:

                 x_1     x_2  x_3    x_4     x_5   x_6    x_7
    -4M-1/3    -4M+3/2    0    0   -2M+2/3    0   4M-1/2   0
    x_5 = 2       2       0    0      1       1     -1     0
    x_2 = 0     -1/2      1    0     -1       0     1/2    0
    x_7 = 2       2       0    0      1       0     -2     1
    x_3 = 1/3     0       0    1     1/3      0      0     0

Next, have x_1 enter and x_5 exit the basis:

                x_1  x_2  x_3   x_4    x_5     x_6    x_7
    -11/6        0    0    0   -1/12  2M-3/4  2M+1/4   0
    x_1 = 1      1    0    0    1/2    1/2    -1/2     0
    x_2 = 1/2    0    1    0   -3/4    1/4     1/4     0
    x_7 = 0      0    0    0     0     -1      -1      1
    x_3 = 1/3    0    0    1    1/3     0       0      0

Bring x_4 into the basis; x_3 exits:

                x_1  x_2   x_3   x_4    x_5     x_6    x_7
    -7/4         0    0    1/4    0   2M-3/4  2M+1/4   0
    x_1 = 1/2    1    0   -3/2    0    1/2     -1/2    0
    x_2 = 5/4    0    1    9/4    0    1/4      1/4    0
    x_7 = 0      0    0     0     0     -1      -1     1
    x_4 = 1      0    0     3     1     0       0      0

Now all reduced costs are nonnegative when M is large enough, and we obtain the optimal solution (x_1, x_2, x_3, x_4)^T = (1/2, 5/4, 0, 1)^T, with cost 7/4.
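The big-M computation can be cross-checked by handing Example 14's original equality-constrained problem to an off-the-shelf solver; this sketch assumes scipy is available (note that the third equality constraint is redundant — it is the sum of the first two — which the solver's presolve handles):

```python
# Cross-check Example 14: solve the original problem (without the
# artificial variables x_5, x_6, x_7) directly.
from scipy.optimize import linprog

c = [1, 1, 1, 0]                   # min x_1 + x_2 + x_3
A_eq = [[1, 2, 3, 0],
        [-1, 2, 6, 0],
        [0, 4, 9, 0],
        [0, 0, 3, 1]]
b_eq = [3, 2, 5, 1]
res = linprog(c, A_eq=A_eq, b_eq=b_eq)   # default bounds are x >= 0

print(res.x)    # expected (1/2, 5/4, 0, 1)
print(res.fun)  # expected 7/4
```

The solver's answer matches the final big-M tableau, confirming that all artificial variables were driven to zero.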

4 References

1. Dimitris Bertsimas, John N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.
2. Michael C. Ferris, Olvi L. Mangasarian, Stephen J. Wright. Linear Programming with MATLAB. SIAM, 2007.