The Simplex Algorithm


18.433 Combinatorial Optimization    The Simplex Algorithm    October 6, 8    Lecturer: Santosh Vempala

We proved the following:

Lemma 1 (Farkas). Let A ∈ R^(m×n) and b ∈ R^m. Exactly one of the following conditions is true:

1. There exists x ∈ R^n such that Ax = b and x ≥ 0.
2. There exists y ∈ R^m such that b^T y < 0 and A^T y ≥ 0.

Consider a linear problem in standard form and its dual:

(P)  opt(P) = max c^T x        (D)  opt(D) = min b^T y
     s.t.  Ax = b                   s.t.  A^T y ≥ c
           x ≥ 0

We say that a problem is bounded if its optimum value is finite. We proved the weak duality theorem:

Theorem 2 (Weak duality). If (P) and (D) are both feasible, then opt(P) ≤ opt(D), and both optima are finite. In particular, (P) is bounded.

The following corollary is an immediate consequence:

Corollary 3. If (D) is feasible, then (P) is bounded or infeasible. If (P) is feasible, then (D) is bounded or infeasible.

1 Strong duality

The previous corollary says that some combinations of being feasible (F), bounded (B), unbounded (U) and infeasible (I) for the primal and the dual are not possible. For example, primal feasible with dual feasible and unbounded is impossible by the corollary. The strong duality theorem will tell us exactly which combinations are possible and, additionally, it will prove that in the case where the primal is feasible and bounded we have opt(P) = opt(D), which is the most important conclusion.
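Weak duality is easy to check numerically: any primal feasible point and any dual feasible point bracket the common optimum, and a pair where the two objective values meet certifies optimality of both. A minimal sketch in Python with exact rational arithmetic; the tiny instance and the helper names `primal_value`/`dual_value` are our own illustration, not taken from these notes.

```python
from fractions import Fraction as F

# Primal (P): max 3x1 + 2x2  s.t.  x1 + x2 = 4,  x >= 0
# Dual   (D): min 4y         s.t.  y >= 3,  y >= 2
A = [[F(1), F(1)]]
b = [F(4)]
c = [F(3), F(2)]

def primal_value(x):
    # feasibility check: Ax = b and x >= 0
    assert all(sum(Ai[j] * x[j] for j in range(len(x))) == bi
               for Ai, bi in zip(A, b))
    assert all(xj >= 0 for xj in x)
    return sum(cj * xj for cj, xj in zip(c, x))

def dual_value(y):
    # feasibility check: A^T y >= c
    assert all(sum(A[i][j] * y[i] for i in range(len(y))) >= c[j]
               for j in range(len(c)))
    return sum(bi * yi for bi, yi in zip(b, y))

# weak duality: any feasible pair brackets the optimum, c^T x <= b^T y
assert primal_value([F(1), F(3)]) <= dual_value([F(5)])
# at x = (4, 0) and y = 3 the values meet, certifying both are optimal
assert primal_value([F(4), F(0)]) == dual_value([F(3)]) == 12
```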

Theorem 4 (Strong duality). For a linear program (P) and its dual (D) there are only the following possibilities:

1. (P) bounded and feasible, (D) bounded and feasible. In this case opt(P) = opt(D).
2. (P) infeasible, (D) unbounded and feasible.
3. (P) unbounded and feasible, (D) infeasible.
4. (P) infeasible, (D) infeasible.

Proof. A priori, there are 9 possible combinations. Weak duality already ruled out three of them: (P) bounded feasible with (D) unbounded feasible, (P) unbounded feasible with (D) bounded feasible, and (P) unbounded feasible with (D) unbounded feasible. We will eliminate the case (P) bounded feasible with (D) infeasible; then duality eliminates (P) infeasible with (D) bounded feasible. Only the 4 claimed cases survive.

Assume that (P) is bounded and feasible, and let z > opt(P). Apply the Farkas lemma to

A_0 = ( A  )        b_0 = ( b )
      ( c^T ),            ( z ),

that is, A with c^T appended as an extra row, and b with z appended as an extra entry. We know that c^T x < z for all feasible x, that is, all x satisfying Ax = b and x ≥ 0. In other words, no x ≥ 0 satisfies A_0 x = (Ax, c^T x) = (b, z) = b_0, i.e. condition (1) in the Farkas lemma is not satisfied; thus condition (2) in the lemma is true: there exist y ∈ R^m and α ∈ R such that

b^T y + zα < 0   and   A^T y + αc ≥ 0.

We will now see that α < 0. Else, let x* ∈ R^n be primal optimal, that is, a primal feasible point such that c^T x* = opt(P). Multiplying the second condition on the left by x*^T ≥ 0, the conditions that y and α satisfy imply

x*^T A^T y + α c^T x* ≥ 0,   i.e.   b^T y + α opt(P) ≥ 0

(using Ax* = b). If α ≥ 0 then, since z > opt(P), we get b^T y + αz ≥ b^T y + α opt(P) ≥ 0, which contradicts the first condition. Thus α < 0, and y_0 = -y/α satisfies b^T y_0 < z and A^T y_0 ≥ c. That is, the dual is feasible (and bounded, by weak duality). Moreover, z > opt(P) was arbitrary, i.e. for any z > max_{x ≥ 0, Ax = b} c^T x

there exists y_0 dual feasible such that b^T y_0 < z. The fact that z is arbitrary then implies

min_{y : A^T y ≥ c} b^T y  ≤  max_{x ≥ 0, Ax = b} c^T x,

and combining this with weak duality,

max_{x ≥ 0, Ax = b} c^T x  =  min_{y : A^T y ≥ c} b^T y.

2 Linear programs

Recall our standard form for a linear program:

maximize z = c^T x, subject to Ax ≤ b, x ≥ 0.    (1)

Let us concentrate on a concrete example, the program

maximize z = 5x_1 + 4x_2 + 3x_3,
subject to 2x_1 + 3x_2 +  x_3 ≤ 5,
           4x_1 +  x_2 + 2x_3 ≤ 11,    (2)
           3x_1 + 4x_2 + 2x_3 ≤ 8,
           x_1, x_2, x_3 ≥ 0.

3 The simplex algorithm

3.1 Insert slack variables

To solve the linear program above using the simplex algorithm, we first convert the inequality constraints to equality constraints by introducing slack variables. Each inequality, Σ_{j=1}^n a_ij x_j ≤ b_i, is replaced by the equality, Σ_{j=1}^n a_ij x_j + x_{n+i} = b_i, and an additional constraint that the slack variable is non-negative, x_{n+i} ≥ 0. In terms of our specific example, adding slack variables gives the linear program:

maximize z = 5x_1 + 4x_2 + 3x_3,
subject to 2x_1 + 3x_2 +  x_3 + x_4 = 5,
           4x_1 +  x_2 + 2x_3 + x_5 = 11,    (3)
           3x_1 + 4x_2 + 2x_3 + x_6 = 8,
           x_1, x_2, x_3, x_4, x_5, x_6 ≥ 0.
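The slack-variable conversion is purely mechanical, so it is easy to code. A small Python sketch with exact rational arithmetic, applied to the example above; the helper name `add_slacks` is ours, not standard.

```python
from fractions import Fraction as F

def add_slacks(A, b, c):
    """Convert max c^T x, Ax <= b, x >= 0 into equality form by
    appending one non-negative slack variable per constraint."""
    m = len(A)
    # append the i-th column of the identity to row i
    A_eq = [row + [F(int(i == k)) for k in range(m)]
            for i, row in enumerate(A)]
    # slacks have objective coefficient 0
    return A_eq, b, c + [F(0)] * m

A = [[F(2), F(3), F(1)], [F(4), F(1), F(2)], [F(3), F(4), F(2)]]
b = [F(5), F(11), F(8)]
c = [F(5), F(4), F(3)]

A_eq, b_eq, c_eq = add_slacks(A, b, c)
# the result is exactly the constraint matrix of the equality form
assert A_eq == [[2, 3, 1, 1, 0, 0], [4, 1, 2, 0, 1, 0], [3, 4, 2, 0, 0, 1]]
assert c_eq == [5, 4, 3, 0, 0, 0]
```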

3.2 Increase the objective function value through a pivot

Let us focus on our specific example. By isolating the slack variables in the equalities, we see that

x_4 = 5  - 2x_1 - 3x_2 -  x_3
x_5 = 11 - 4x_1 -  x_2 - 2x_3    (4)
x_6 = 8  - 3x_1 - 4x_2 - 2x_3

This suggests one feasible solution,

x_1 = x_2 = x_3 = 0,  x_4 = 5,  x_5 = 11,  x_6 = 8.    (5)

For this solution our objective function is

z = 5x_1 + 4x_2 + 3x_3 = 0,    (6)

which seems low. How can we do better? Perhaps we should attempt to increase x_1. If we increase x_1 and hold x_2 and x_3 at zero, we can calculate the required values of x_4, x_5 and x_6 from (4). For example,

x_1 = 1, x_2 = x_3 = 0:  z = 5,   x_4 = 3,   x_5 = 7,   x_6 = 5,
x_1 = 2, x_2 = x_3 = 0:  z = 10,  x_4 = 1,   x_5 = 3,   x_6 = 2,    (7)
x_1 = 3, x_2 = x_3 = 0:  z = 15,  x_4 = -2,  x_5 = -1,  x_6 = -1.

Increasing x_1 improves the objective function value, but we cannot push x_1 too far or the slack variables become negative, as is the case when x_1 = 3. We can calculate the values of x_1 that preserve non-negativity of the slack variables from (4):

x_4 ≥ 0  ⟺  x_1 ≤ 5/2,
x_5 ≥ 0  ⟺  x_1 ≤ 11/4,    (8)
x_6 ≥ 0  ⟺  x_1 ≤ 8/3,

so the most we can increase x_1 is to x_1 = 5/2. Let's rewrite the equality constraints (4) to reflect our decision that x_1 = 5/2 and x_4 = 0:

x_1 = 5/2 - (3/2)x_2 - (1/2)x_3 - (1/2)x_4,
x_5 = 11 - 4(5/2 - (3/2)x_2 - (1/2)x_3 - (1/2)x_4) - x_2 - 2x_3 = 1 + 5x_2 + 2x_4,    (9)
x_6 = 8 - 3(5/2 - (3/2)x_2 - (1/2)x_3 - (1/2)x_4) - 4x_2 - 2x_3 = 1/2 + (1/2)x_2 - (1/2)x_3 + (3/2)x_4,

and our objective function from (3) may be written

z = 25/2 - (7/2)x_2 + (1/2)x_3 - (5/2)x_4.

This sequence of operations is called a pivot, for reasons which will make more sense when we write everything in tableau form.

3.3 Repeat

We have improved our objective function value from 0 to 25/2 by pivoting on x_1. Let's try to do it again. Increasing x_2 will hurt our objective value, as will increasing x_4. The only thing we can increase is x_3:

x_1 ≥ 0  ⟺  x_3 ≤ 5,
x_5 ≥ 0  puts no constraint on x_3,    (10)
x_6 ≥ 0  ⟺  x_3 ≤ 1,

so we'll increase x_3 as far as we can, to 1. Rewriting the equality constraints to reflect our choice of x_3 = 1 and x_6 = 0 yields

x_3 = 1 + x_2 + 3x_4 - 2x_6,
x_5 = 1 + 5x_2 + 2x_4,    (11)
x_1 = 2 - 2x_2 - 2x_4 + x_6,

and objective function

z = 13 - 3x_2 - x_4 - x_6.

When the objective function is written in this form, it is clear that increasing any of the variables x_2, x_4 or x_6 will decrease the objective function value. Therefore the value is at its maximum, 13, when x_2 = x_4 = x_6 = 0.

4 High level description of the simplex algorithm

In a linear program of the form

max c^T x   s.t.   Ax = b,   x ≥ 0,

with m equality constraints and n variables, if one sets n - m of the variables to 0 and lets the equality constraints determine the values of the rest, and these values are non-negative, then

that point is feasible and is a vertex (the proof is left as an exercise); it is called a basic feasible solution. The variables that were set to 0 are called the non-basic variables; the rest are the basic variables. The set of basic variables is also called the basis. Note that if the value of a variable is non-zero then it is basic, but the converse is not true. When, in a basic feasible solution, a basic variable is zero, that solution (or basis) is said to be degenerate. The geometric meaning of this is that each basis has an associated vertex, but a vertex can be associated with several bases (in the degenerate case).

If we denote by B the indices of the basic variables and by N the indices of the non-basic variables, the program is in canonical form if b ≥ 0 and the program is in the form:

max  c_N^T x_N + c_B^T x_B
s.t. A_N x_N + I x_B = b,
     x_N, x_B ≥ 0.

From a basic feasible solution (a vertex) and the problem in canonical form, the simplex algorithm chooses a non-basic variable that has a positive reduced cost, that is, a variable that, if increased, would increase the objective function. Then it increases the value of that variable as much as possible, without violating the non-negativity of the basic variables. That variable is made basic; (at least) one of the old basic variables becomes 0, and one becomes non-basic. The sequence of operations called a pivot in the previous section goes from the canonical form with respect to the old basis to the canonical form with respect to the new basis.

5 The simplex algorithm again (with better notation)

5.1 Simplex tableau notation

If we were going to program a computer to perform the manipulations we outlined in the previous sections, it would be convenient to represent the coefficients in matrix form. When expressed as a matrix, or simplex tableau, the linear program in (3) looks like

      x_1  x_2  x_3  x_4  x_5  x_6 |
       2    3    1    1    0    0  |   5
       4    1    2    0    1    0  |  11    (12)
       3    4    2    0    0    1  |   8
  z:   5    4    3    0    0    0  |   0

The first three rows are the constraints of (3); the last row records the objective z = 5x_1 + 4x_2 + 3x_3, and under the convention used below its right-hand entry is the negative of the current objective value.
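A basic solution is obtained by fixing the non-basic variables to 0 and solving the remaining square linear system for the basic ones. A Python sketch over the rationals, applied to the equality form (3) of our example; the helpers `solve` and `basic_solution` are ours, and basis indices are 0-based, so the all-slacks basis is `[3, 4, 5]`.

```python
from fractions import Fraction as F

def solve(M, rhs):
    """Solve the square system M x = rhs by Gauss-Jordan elimination."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]            # bring pivot into place
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# equality form (3): columns x1..x6, one row per constraint
A = [[F(v) for v in row] for row in
     [[2, 3, 1, 1, 0, 0], [4, 1, 2, 0, 1, 0], [3, 4, 2, 0, 0, 1]]]
b = [F(5), F(11), F(8)]

def basic_solution(basis):
    """Set the non-basic variables to 0 and solve for the basic ones."""
    cols = [[row[j] for j in basis] for row in A]
    x = [F(0)] * 6
    for j, v in zip(basis, solve(cols, b)):
        x[j] = v
    return x

assert basic_solution([3, 4, 5]) == [0, 0, 0, 5, 11, 8]  # initial BFS
assert basic_solution([0, 2, 4]) == [2, 0, 1, 0, 1, 0]   # optimal BFS
```

The second basis reproduces the optimum found in Section 3.3: x_1 = 2, x_3 = 1, with slack x_5 = 1.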

5.2 Tableau algorithm

In terms of the simplex tableau, the algorithm for solving the problem above is:

1. Look at the last row of the tableau. Find a column in this row with a positive entry. This is the pivot column. Here we take the x_1 column, with entry 5:

      x_1  x_2  x_3  x_4  x_5  x_6 |
       2    3    1    1    0    0  |   5
       4    1    2    0    1    0  |  11    (13)
       3    4    2    0    0    1  |   8
  z:  [5]   4    3    0    0    0  |   0

2. Among the rows whose entry r in the pivot column is positive, find the row that has the smallest ratio s/r, where s is the row's entry in the last column. This is the pivot row. Here the ratios are 5/2, 11/4 and 8/3, so the first row wins:

      x_1  x_2  x_3  x_4  x_5  x_6 |
      [2]   3    1    1    0    0  |   5
       4    1    2    0    1    0  |  11    (14)
       3    4    2    0    0    1  |   8
  z:   5    4    3    0    0    0  |   0

3. Divide every entry of the pivot row by its entry in the pivot column:

      x_1  x_2  x_3  x_4  x_5  x_6 |
       1   3/2  1/2  1/2   0    0  |  5/2
       4    1    2    0    1    0  |  11    (15)
       3    4    2    0    0    1  |   8
  z:   5    4    3    0    0    0  |   0

4. From every other row, subtract the multiple of the pivot row that makes that row's entry in the pivot column zero:

      x_1  x_2  x_3  x_4  x_5  x_6 |
       1   3/2  1/2  1/2   0    0  |   5/2
       0   -5    0   -2    1    0  |   1     (16)
       0  -1/2  1/2 -3/2   0    1  |   1/2
  z:   0  -7/2  1/2 -5/2   0    0  | -25/2

5. Repeat steps one to four. If step one finds no pivot column, we've reached the optimum. If step two finds no positive entry to pivot on, then z is unbounded.
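Steps 1 to 4 translate almost line for line into code. A sketch in Python with exact rational arithmetic, following the convention above that the last row holds the objective coefficients and that its right-hand entry is the negative of the current objective value; the function name `simplex` and the in-place tableau representation are our own choices.

```python
from fractions import Fraction as F

def simplex(T):
    """Tableau simplex, steps 1-4 of Section 5.2, modifying T in place.

    T has one row per constraint plus a last row for the objective;
    the last column is the right-hand side.  Returns the optimal
    value, or None if the problem is unbounded."""
    while True:
        last = T[-1]
        # step 1: pivot column = a positive entry in the last row
        col = next((j for j in range(len(last) - 1) if last[j] > 0), None)
        if col is None:
            return -T[-1][-1]          # optimum: rhs of the z row is -z
        # step 2: pivot row = smallest ratio s/r over positive entries r
        rows = [i for i in range(len(T) - 1) if T[i][col] > 0]
        if not rows:
            return None                # step 2 fails: z is unbounded
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        # step 3: scale the pivot row so the pivot entry becomes 1
        T[row] = [v / T[row][col] for v in T[row]]
        # step 4: eliminate the pivot column from every other row
        for i in range(len(T)):
            if i != row and T[i][col] != 0:
                T[i] = [a - T[i][col] * p for a, p in zip(T[i], T[row])]

# tableau (12) for the example: columns x1..x6, then the rhs
T = [[F(v) for v in row] for row in [
    [2, 3, 1, 1, 0, 0, 5],
    [4, 1, 2, 0, 1, 0, 11],
    [3, 4, 2, 0, 0, 1, 8],
    [5, 4, 3, 0, 0, 0, 0],   # objective row: z = 5x1 + 4x2 + 3x3
]]
assert simplex(T) == 13      # the optimum found by hand in Section 3.3
```

After the run, the first and third rows of `T` carry the basic values x_1 = 2 and x_3 = 1, matching the hand computation.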

5.3 Caveats

5.3.1 Cycling

If you have bad luck in step one, the algorithm could enter a cycle. This is pretty unlikely. If you use a consistent rule for deciding which of the positive entries determines the pivot column, like choosing the positive entry with the smallest index (Bland, Robert G., A combinatorial abstraction of linear programming, J. Combinatorial Theory Ser. B 23 (1977), no. 1, 33-57), it is impossible to cycle.

5.3.2 Initial feasible solution

According to the description of the simplex algorithm that we saw, the algorithm can be applied directly only to systems in canonical form. As in our examples, if the system starts in (or can be transformed to) the form

max c^T x   s.t.   Ax ≤ b,   x ≥ 0,    (17)

with b ≥ 0, then the introduction of slack variables will leave the system in canonical form. The only non-trivial situation occurs when the system is in the form (17), but b has some negative components. In this case, one adds an auxiliary variable x_0, subtracted from every constraint, and considers the problem

min x_0   s.t.   Ax - x_0 1 ≤ b,   x ≥ 0,  x_0 ≥ 0,    (18)

where 1 denotes the all-ones vector. After adding slack variables, one pivots on the variable x_0 and the row associated with the minimum entry of b. This will leave the system in canonical form. Then one solves the problem (18) with the simplex algorithm.

Consider now the situation when the algorithm finishes. If x_0 > 0, then the original problem (17) is infeasible. Else, x_0 = 0. In this case, if x_0 is basic, then we perform one more pivot operation to make it non-basic. Now that x_0 = 0 and is non-basic, we stop considering that column in the tableau and replace the reduced costs by the cost function of (17). This is precisely a canonical form of the problem (17), so we can apply simplex to it.
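The phase-I construction above can be sketched mechanically: subtract x_0 from each row, add slacks, then pivot on the x_0 column and the row with the most negative right-hand side. A Python sketch; the helper name `phase_one_tableau` and the small data set are illustrative choices of ours, not an instance from these notes.

```python
from fractions import Fraction as F

def phase_one_tableau(A, b):
    """Build the constraint rows of the auxiliary problem (18):
    a column of -1's for x0, the original columns, slack columns,
    and the rhs; then do the first pivot on the x0 column and the
    row of the most negative b_i."""
    m = len(A)
    T = [[F(-1)] + row + [F(int(i == k)) for k in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    r = min(range(m), key=lambda i: T[i][-1])      # most negative rhs
    T[r] = [v / T[r][0] for v in T[r]]             # pivot entry is -1
    for i in range(m):
        if i != r:
            T[i] = [a - T[i][0] * p for a, p in zip(T[i], T[r])]
    return T

# illustrative data: b has a negative component, so (17) is not canonical
A = [[F(1), F(1)], [F(1), F(-1)]]
b = [F(-2), F(3)]
T = phase_one_tableau(A, b)

# after the single pivot, every right-hand side is non-negative again
assert all(row[-1] >= 0 for row in T)
```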

Example:

max   x_1 - 3x_2
s.t. -x_1 -  x_2 ≤ -3,
      x_1 -  x_2 ≤ -1,
      x_1, x_2 ≥ 0.

The initial tableau after adding the auxiliary variable x_0 and the slack variables x_3, x_4 is

      x_0  x_1  x_2  x_3  x_4 |
      -1   -1   -1    1    0  |  -3
      -1    1   -1    0    1  |  -1
  z:  -1    0    0    0    0  |   0

(the objective row is that of min x_0, written for maximization of -x_0). After pivoting with respect to the first row and the first column we get

      x_0  x_1  x_2  x_3  x_4 |
       1    1    1   -1    0  |   3
       0    2    0   -1    1  |   2
  z:   0    1    1   -1    0  |   3

This tableau is in canonical form.

6 A proof of strong duality, based on the simplex algorithm

The strong duality theorem states:

max  Σ_{j=1}^n c_j x_j                min  Σ_{i=1}^m b_i y_i
s.t. Σ_{j=1}^n a_ij x_j ≤ b_i    =    s.t. Σ_{i=1}^m a_ij y_i ≥ c_j    (19)
     x_j ≥ 0                               y_i ≥ 0

We proved it already using the Farkas lemma. Now we will prove it again using ideas borrowed from the simplex algorithm. Weak duality proved

Σ_{j=1}^n c_j x_j ≤ Σ_{i=1}^m b_i y_i    (20)

for any feasible values of the x_j's and y_i's, so to prove strong duality it is sufficient to exhibit feasible x_j's and y_i's where the two sums are equal.

Given a linear program,

maximize Σ_{j=1}^n c_j x_j,  subject to  Σ_{j=1}^n a_ij x_j ≤ b_i,  x_j ≥ 0,    (21)

let x*_1, ..., x*_n be a feasible solution that maximizes the objective function, and let z* = Σ_{j=1}^n c_j x*_j be the value of the maximum. We introduce slack variables,

x_{n+i} = b_i - Σ_{j=1}^n a_ij x_j,    (22)

to convert the standard-form linear program to the equality form we use in the simplex algorithm. Because z* is the maximum feasible value of the objective function, when the simplex algorithm terminates we can write the value of the objective function for any feasible solution as

z = z* + Σ_{k=1}^{n+m} c̄_k x_k    (23)

for some c̄_k ≤ 0 (the reduced costs). Let

y*_i = -c̄_{n+i}.    (24)

We will show that z* = Σ_{i=1}^m b_i y*_i and that Σ_{i=1}^m a_ij y*_i ≥ c_j. Since y*_i ≥ 0 by construction, this will prove that a feasible dual solution is equal to a feasible primal solution.

Begin with z. We break up the sum

z = Σ_{j=1}^n c_j x_j = z* + Σ_{k=1}^{n+m} c̄_k x_k    (25)

into

z = z* + Σ_{j=1}^n c̄_j x_j + Σ_{k=n+1}^{n+m} c̄_k x_k.    (26)

Substituting (24) to remove the c̄'s in the second sum gives

z = z* + Σ_{j=1}^n c̄_j x_j - Σ_{i=1}^m y*_i x_{n+i}.    (27)

Substituting (22) to remove the slack variables gives

z = z* + Σ_{j=1}^n c̄_j x_j - Σ_{i=1}^m y*_i (b_i - Σ_{j=1}^n a_ij x_j).    (28)

Regrouping, reversing the order of the double sum, and regrouping some more, we have

z = (z* - Σ_{i=1}^m b_i y*_i) + Σ_{j=1}^n (c̄_j + Σ_{i=1}^m a_ij y*_i) x_j.    (29)

Note that all the previous manipulations are just rewritings of the objective function z = Σ_{j=1}^n c_j x_j, valid for any x ∈ R^n. In particular, they hold for x_j = 0 for all j, where z = 0. Therefore,

z* = Σ_{i=1}^m b_i y*_i,    (30)

which finishes half of the proof. To see that the y*_i are a feasible dual solution, we plug the value of z* from (30) into (29) and set this equal to the original formula (25):

z = Σ_{j=1}^n (c̄_j + Σ_{i=1}^m a_ij y*_i) x_j = Σ_{j=1}^n c_j x_j.    (31)

These sums can be equal for every x ∈ R^n only if the coefficients of the x_j's are all equal:

c_j = c̄_j + Σ_{i=1}^m a_ij y*_i.    (32)

But each c̄_j ≤ 0, so

Σ_{i=1}^m a_ij y*_i ≥ c_j.    (33)

This completes the proof of strong duality.
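The construction can be checked on the worked example from Section 3. There the final form of the objective was z = 13 - 3x_2 - x_4 - x_6, so the reduced costs of the slacks x_4, x_5, x_6 are -1, 0, -1, and the recipe y*_i = -c̄_{n+i} gives y* = (1, 0, 1). A quick numerical verification in Python:

```python
from fractions import Fraction as F

# data of the example (2): max c^T x, Ax <= b, x >= 0
A = [[F(2), F(3), F(1)], [F(4), F(1), F(2)], [F(3), F(4), F(2)]]
b = [F(5), F(11), F(8)]
c = [F(5), F(4), F(3)]

# dual candidate read off the final objective row z = 13 - 3x2 - x4 - x6:
# the slack reduced costs are (-1, 0, -1), so y* = (1, 0, 1)
y = [F(1), F(0), F(1)]

# dual feasibility (33): sum_i a_ij y_i >= c_j for every column j
assert all(sum(A[i][j] * y[i] for i in range(3)) >= c[j] for j in range(3))

# strong duality (30): b^T y equals the primal optimum z* = 13
assert sum(bi * yi for bi, yi in zip(b, y)) == 13
```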