ORF 522: Linear Programming and Convex Analysis
The Simplex Method
Marco Cuturi (Princeton)

Reminder: Basic Feasible Solutions, Extreme Points, Optima

Some important theorems from last time, for LPs in standard form:
(i) existence of a feasible solution implies existence of a basic feasible solution;
(ii) basic feasible solutions correspond to the extreme points of the feasible region;
(iii) the optimum of an LP, when attained, occurs at an extreme point of the feasible region.
Extreme points of a canonical form and of the corresponding standard form are equivalent.

Today

- The simplex algorithm, starting from an initial basic feasible solution.
- How to check for optimality.
- How to check for unboundedness of the feasible set and/or of the objective over that feasible set.

Golden slide: always remember

- A Linear Program is a program with linear constraints and a linear objective.
- Equivalent formulations for LPs: canonical (inequalities) and standard (equalities) form. Both have convex feasible sets that are bounded from below (componentwise, since $x \geq 0$).
- The simplex algorithm to solve LPs works on the standard form.
- In standard form, the optimum occurs at an extreme point of the feasible polyhedron. All extreme points are basic feasible solutions, that is, of the type $x_I = B_I^{-1} b$ on a subset $I$ of coordinates, zero elsewhere.
- Looking for an optimum? We only need to check extreme points/BFS.
- Looking for an optimum? There exists a basis $I^\star$ which realizes that optimum.

Improving a Basic Feasible Solution

Improving a BFS

Remember that a standard form LP is

$$\text{maximize } c^T x \quad \text{subject to } Ax = b,\ x \geq 0.$$

Given $I = (i_1, \dots, i_m)$ and the basis $B_I = [a_{i_1}, a_{i_2}, \dots, a_{i_m}]$, suppose we have a basic feasible solution with $x_I = B_I^{-1} b$, that is, an extreme point of the feasible polyhedron.

- We know that the optimum is reached on some optimal index set $I^\star$.
- There is a finite number of families $\{I \mid B_I \text{ is invertible},\ x_I \text{ is feasible}\}$.
- How can we find a family $I'$ such that $x_{I'}$ is still feasible and $c_{I'}^T x_{I'} > c_I^T x_I$?

The simplex algorithm provides an answer, in which one index of $I$ is replaced by a new integer taken from $O = [1, \dots, n] \setminus I$. Note that we only consider methods that change one index at a time.

The simplex does three things

Given a BFS with index set $I$, the simplex shows how to select a new basis $\hat I$ by changing one index of $I$ (an index goes out, an index goes in):

- it checks how to select an improved basic solution, by telling us which index to bring in;
- it checks how to select an improved basic feasible solution linked to $\hat I$, by telling us which index to remove.

In practice, given a BFS with index set $I$, the three steps of the simplex are:

1. Look for an index whose entry would improve the objective.
2. Check that we can improve and obtain a valid basis $\hat I$ by incorporating that index, i.e., that there is at least one index we can remove.
3. "Basic & improved objective" accomplished, ensure now that $x_{\hat I}$ is feasible by choosing the index we remove.

Initial Setting

Let $I = (i_1, \dots, i_m)$, the basis $B_I = [a_{i_1}, a_{i_2}, \dots, a_{i_m}]$, and suppose we have a basic feasible solution $x_I = B_I^{-1} b$.

The column vectors of $B_I$ are linearly independent and can thus be used as a basis of $\mathbb{R}^m$. Thus there exists $Y \in \mathbb{R}^{m \times n}$ such that $A = B_I Y$, namely $Y = B_I^{-1} A$: the coordinates of all column vectors of $A$ in the basis $B_I$,

$$\underbrace{[a_1\ a_2\ \cdots\ a_n]}_{m \times n} = \underbrace{[a_{i_1}\ a_{i_2}\ \cdots\ a_{i_m}]}_{m \times m}\ \underbrace{[y_1\ y_2\ \cdots\ y_n]}_{m \times n},$$

or, column by column, $a_j = \sum_{k=1}^m y_{k,j}\, a_{i_k}$. We write $y_j = [y_{1,j}, \dots, y_{m,j}]^T$, so that $a_j = B_I y_j$. Hence $y_j = B_I^{-1} a_j$, and $B_I^{-1}$ is a change-of-coordinates matrix from the canonical basis to the basis formed by the columns of $B_I$.
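To make this change of coordinates concrete, here is a minimal numpy sketch (the matrix A, vector b, and index set I below are hypothetical placeholders, not data from the lecture): it builds B_I from the chosen columns, recovers the basic solution, and checks that Y = B_I^{-1} A indeed expresses every column of A in the basis B_I.

```python
import numpy as np

# Hypothetical data (not from the lecture): A is m x n with linearly independent rows.
A = np.array([[1.0, 1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0, 1.0]])
b = np.array([3.0, 2.0])
I = [0, 1]                       # 0-based indices of the basic columns

B = A[:, I]                      # B_I = [a_{i_1}, ..., a_{i_m}]
x_I = np.linalg.solve(B, b)      # basic solution: x_I = B_I^{-1} b
Y = np.linalg.solve(B, A)        # Y = B_I^{-1} A, i.e. a_j = B_I y_j for every j

# Sanity checks: B_I Y = A, and the columns of Y indexed by I form the identity.
assert np.allclose(B @ Y, A)
assert np.allclose(Y[:, I], np.eye(len(I)))
```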

Change an element of the basis and still have a basic solution

Change an index in $I$? Everything depends on

$$Y = [y_1\ y_2\ \cdots\ y_n] \in \mathbb{R}^{m \times n}.$$

Claim: if $y_{r,e} \neq 0$ for two indices $r \leq m$ and $e \leq n$ with $e \notin I$ ($r$ for "remove", $e$ for "enter"), one can substitute the $e$-th column of $A$, $a_e$, for the $r$-th column of $B_I$, $a_{i_r}$. That is, we can select the basis $\hat I = (I \setminus i_r) \cup \{e\}$ and we are sure that $B_{\hat I}$ is invertible and $x_{\hat I}$ is a basic solution.

Basic solution: proof

If $y_{r,e} \neq 0$,

$$a_e = y_{r,e}\, a_{i_r} + \sum_{k \neq r} y_{k,e}\, a_{i_k} \quad\Longrightarrow\quad a_{i_r} = \frac{1}{y_{r,e}}\, a_e - \sum_{k \neq r} \frac{y_{k,e}}{y_{r,e}}\, a_{i_k}.$$

Thus

$$B_I x_I = \sum_{k=1}^m x_{i_k} a_{i_k} = x_{i_r} a_{i_r} + \sum_{k \neq r} x_{i_k} a_{i_k} = b$$

is replaced by

$$\frac{x_{i_r}}{y_{r,e}}\, a_e + \sum_{k \neq r} \Big( x_{i_k} - x_{i_r} \frac{y_{k,e}}{y_{r,e}} \Big) a_{i_k} = b,$$

and we have a new solution $\hat x$ with $\hat I = (i_1, \dots, i_{r-1}, e, i_{r+1}, \dots, i_m)$ and

$$\hat x_{i_k} = x_{i_k} - x_{i_r} \frac{y_{k,e}}{y_{r,e}} \ \text{ for } 1 \leq k \leq m\ (k \neq r), \qquad \hat x_e = \frac{x_{i_r}}{y_{r,e}}.$$

Note that $\hat x_{i_r} = 0$, so we still have a basic solution.
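These two update formulas are easy to turn into code. A minimal sketch (my own helper, not course code), with x_I the vector (x_{i_1}, ..., x_{i_m}), y_e the e-th column of Y, and r the 0-based position of the leaving index:

```python
import numpy as np

def pivot_solution(x_I, y_e, r):
    """Apply the update from the proof above:
        xhat_{i_k} = x_{i_k} - x_{i_r} * y_{k,e} / y_{r,e}   for k != r,
        xhat_e     = x_{i_r} / y_{r,e}.
    Returns the basic values of the new basis, with the entering value stored
    at position r (where the leaving variable, now 0, used to sit)."""
    theta = x_I[r] / y_e[r]       # value taken by the entering variable
    x_hat = x_I - theta * y_e     # x_{i_k} - x_{i_r} y_{k,e} / y_{r,e}; position r becomes 0
    x_hat[r] = theta              # overwrite position r with xhat_e
    return x_hat

# Hypothetical numbers: x_I = (1, 1), y_e = (-2/3, 2/3), leaving position r = 1
# gives (2, 1.5): the entering variable takes value 1.5, the leaving one drops to 0.
print(pivot_solution(np.array([1.0, 1.0]), np.array([-2/3, 2/3]), 1))
```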

Basic & better: restriction on e

The objective value $z = c_I^T x_I$ becomes $\hat z = c_{\hat I}^T \hat x_{\hat I}$, with $\hat c_{i_k} = c_{i_k}$ for $k \neq r$ and $\hat c_e = c_e$. Thus

$$\hat z = c_{\hat I}^T \hat x_{\hat I} = \sum_{k \neq r} c_{i_k} \hat x_{i_k} + c_e \hat x_e = \sum_{k \neq r} c_{i_k} \Big( x_{i_k} - x_{i_r} \frac{y_{k,e}}{y_{r,e}} \Big) + c_e \frac{x_{i_r}}{y_{r,e}}$$
$$= \sum_k c_{i_k} x_{i_k} - \frac{x_{i_r}}{y_{r,e}} \sum_k c_{i_k} y_{k,e} + c_e \frac{x_{i_r}}{y_{r,e}} = z - \frac{x_{i_r}}{y_{r,e}}\, c_I^T y_e + c_e \frac{x_{i_r}}{y_{r,e}} = z + \frac{x_{i_r}}{y_{r,e}} (c_e - z_e),$$

where $z_e = c_I^T y_e = c_I^T B_I^{-1} a_e$.

$\hat z > z$ if $y_{r,e} > 0$ and $c_e - z_e > 0$; hence we choose a column $e$ such that $c_e - z_e > 0$ and there exists $y_{i,e} > 0$.

Important remark: if $x_I$ is non-degenerate, $x_{i_r} > 0$ and hence $\hat z > z$ strictly. Much better than $\hat z \geq z$, as strict improvement implies convergence.
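In code, the restriction on e amounts to computing the vector of reduced costs and picking a column with a positive one. A sketch (the "largest reduced cost" tie-break is Dantzig's classical rule, one possible choice among many, not something prescribed by the slide):

```python
import numpy as np

def reduced_costs(c, c_I, Y):
    """Vector of reduced costs c_j - z_j, with z_j = c_I^T y_j = c_I^T B_I^{-1} a_j."""
    return c - Y.T @ c_I

def choose_entering(c, c_I, Y, tol=1e-9):
    """Return an index e with c_e - z_e > 0, or None if none exists (optimality).
    Basic columns have reduced cost 0, so they are never returned."""
    rc = reduced_costs(c, c_I, Y)
    e = int(np.argmax(rc))        # Dantzig's rule: most positive reduced cost
    return e if rc[e] > tol else None
```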

Basic & better & feasible: restriction on r

We require $\hat x_i \geq 0$ for all $i$. In particular, for the basic variables we need

$$\hat x_{i_k} = x_{i_k} - x_{i_r} \frac{y_{k,e}}{y_{r,e}} \geq 0 \ \text{ for } 1 \leq k \leq m\ (k \neq r), \qquad \hat x_e = \frac{x_{i_r}}{y_{r,e}} \geq 0.$$

Let $r$ be chosen such that

$$\frac{x_{i_r}}{y_{r,e}} = \min_{k = 1, \dots, m} \left\{ \frac{x_{i_k}}{y_{k,e}} \ :\ y_{k,e} > 0 \right\}.$$
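The restriction on r is the classical minimum ratio test. A sketch (again my own helper; it returns None when no entry of y_e is positive, the case handled by Theorem 3 below):

```python
import numpy as np

def choose_leaving(x_I, y_e, tol=1e-12):
    """Position r attaining min{ x_{i_k} / y_{k,e} : y_{k,e} > 0 } (minimum ratio test),
    or None if y_e has no positive entry."""
    pos = y_e > tol                          # only rows with y_{k,e} > 0 are eligible
    if not np.any(pos):
        return None
    ratios = np.full(y_e.shape, np.inf)
    ratios[pos] = x_I[pos] / y_e[pos]        # candidate step lengths
    return int(np.argmin(ratios))
```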

From one basic feasible solution to a better one

Theorem 1. Let $x$ be a basic feasible solution (BFS) of an LP, with index set $I$ and objective value $z$. If there exists $e \notin I$, $1 \leq e \leq n$, such that
(i) the reduced cost coefficient $c_e - z_e > 0$,
(ii) at least one coordinate of $y_e$ is positive, i.e., there is $i$ such that $y_{i,e} > 0$,
then it is possible to obtain a new BFS by replacing one index of $I$ by $e$, and the new objective value $\hat z$ satisfies $\hat z \geq z$, strictly if $x_I$ is non-degenerate.
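Putting the three ingredients together, one full pivot in the spirit of Theorem 1 might look as follows. This is a didactic sketch for the maximization form used on these slides, not a reference implementation: it recomputes B_I^{-1} from scratch at every call, exactly the inefficiency criticized at the end of the deck.

```python
import numpy as np

def simplex_step(A, b, c, I):
    """One pivot of the primal simplex for  max c^T x  s.t.  Ax = b, x >= 0.

    I lists the basic column indices (0-based); the associated BFS is assumed
    feasible. Returns ("optimal", x), ("unbounded", None) or ("pivot", new_I)."""
    B = A[:, I]
    x_I = np.linalg.solve(B, b)              # current BFS values, x_I >= 0
    Y = np.linalg.solve(B, A)                # Y = B_I^{-1} A
    rc = c - Y.T @ c[I]                      # reduced costs c_j - z_j
    if np.all(rc <= 1e-9):                   # Theorem 2: certificate of optimality
        x = np.zeros(A.shape[1])
        x[I] = x_I
        return "optimal", x
    e = int(np.argmax(rc))                   # entering column: c_e - z_e > 0
    y_e = Y[:, e]
    if np.all(y_e <= 1e-12):                 # Theorem 3: objective unbounded above
        return "unbounded", None
    ratios = np.where(y_e > 1e-12, x_I / np.where(y_e > 1e-12, y_e, 1.0), np.inf)
    r = int(np.argmin(ratios))               # minimum ratio test: leaving position
    new_I = list(I)
    new_I[r] = e                             # swap: i_r goes out, e comes in
    return "pivot", new_I
```

Iterating simplex_step until it returns "optimal" or "unbounded" walks from BFS to BFS exactly as described above; under non-degeneracy the objective strictly increases at each pivot, so the walk terminates.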

From one basic feasible solution to a better one

Remark: the coefficients $c_e - z_e$ are called reduced cost coefficients.

Remark: the condition $e \notin I$ is redundant. If $e \in I$, that is $i_k = e$ for some $k$, then $c_e - z_e = 0$. Indeed,

$$c_e - z_e = c_e - c_I^T B_I^{-1} a_e = c_e - c_I^T e_k = c_e - c_e = 0,$$

where $e_k$ is the $k$-th canonical vector of $\mathbb{R}^m$. Indeed, if $B x = a$ and $a$ is the $k$-th column of $B$, then necessarily $x = e_k$.

Remember: if $k \in I$ then necessarily the reduced cost $c_k - z_k$ is $0$.

Testing for Optimality

Optimality: $c_i - z_i \leq 0$ for all $i$

Theorem 2. Let $x^\star$ be a basic feasible solution (BFS) of an LP, with index set $I$ and objective value $z^\star$. If $c_i - z_i \leq 0$ for all $1 \leq i \leq n$, then $x^\star$ is optimal.

Proof idea: the conditions $c_i - z_i \leq 0$ allow us to write that $\sum_i c_i x_i$ is smaller than $\sum_i z_i x_i$ for every feasible $x$. Moreover, $z_i$ integrates information about the basis $I$, and we show that $\sum_i z_i x_i$ is equal to $c^T x^\star$ for every feasible $x$; thus every $c^T x$ is smaller than $c^T x^\star$.
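Theorem 2 gives a certificate that is cheap to verify. A small sketch of the check (for the maximization form of these slides; the function name is mine):

```python
import numpy as np

def certifies_optimality(A, c, I, tol=1e-9):
    """Check Theorem 2's sufficient condition for the basis indexed by I:
    every reduced cost c_j - z_j = c_j - c_I^T B_I^{-1} a_j must be <= 0."""
    Y = np.linalg.solve(A[:, I], A)          # Y = B_I^{-1} A
    return bool(np.all(c - Y.T @ c[I] <= tol))
```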

Proof

For any feasible solution $x$ we have $\sum_{k=1}^n c_k x_k \leq \sum_{k=1}^n z_k x_k$. Yet,

$$\sum_{k=1}^n z_k x_k = \sum_{k=1}^n (c_I^T y_k)\, x_k = \sum_{k=1}^n \sum_{j=1}^m c_{i_j} y_{j,k} x_k = \sum_{j=1}^m c_{i_j} \Big( \sum_{k=1}^n y_{j,k} x_k \Big).$$

The terms $u_j := \sum_{k=1}^n y_{j,k} x_k$ are actually equal to $x^\star_{i_j}$. Indeed, remember that $\sum_{j=1}^m x^\star_{i_j} a_{i_j} = b$ and that, since $x$ is feasible, $\sum_{k=1}^n x_k a_k = b$. Yet,

$$\sum_{k=1}^n x_k a_k = \sum_{k=1}^n x_k (B_I y_k) = \sum_{k=1}^n \sum_{j=1}^m y_{j,k} x_k\, a_{i_j} = \sum_{j=1}^m \Big( \sum_{k=1}^n y_{j,k} x_k \Big) a_{i_j} = \sum_{j=1}^m u_j a_{i_j} = b.$$

Since the columns $a_{i_j}$ are linearly independent, the decomposition of $b$ on them is unique, so $u_j = x^\star_{i_j}$. Hence

$$\sum_{k=1}^n c_k x_k \leq \sum_{k=1}^n z_k x_k = \sum_{j=1}^m c_{i_j} x^\star_{i_j} = z^\star,$$

and we have found a maximum of $c^T x$ with basis $I$.

Testing for Boundedness

(Un)boundedness

Sometimes programs are trivially unbounded:

$$\text{maximize } \mathbf{1}^T x \quad \text{subject to } x \geq 0.$$

Here both the feasible set and the objective on that feasible set are unbounded.

- If the feasible set is bounded, the objective is bounded.
- If the feasible set is unbounded, the optimum might be bounded or unbounded: no implication.

Two different issues. Can we check quickly?

(Un)boundedness of the feasible set and/or of the objective

Theorem 3. Consider an LP in standard form and a basic feasible index set $I$. If there exists an index $e \notin I$ such that $y_e \leq 0$, then the feasible region is unbounded. If, moreover, the reduced cost of that $e$ satisfies $c_e - z_e > 0$, then there exist feasible solutions with at most $m+1$ nonzero variables and arbitrarily large objective value.

Proof sketch: take advantage of $y_e \leq 0$ to modify the BFS decomposition $b = \sum_j x_{i_j} a_{i_j}$ into $b = \sum_j x_{i_j} a_{i_j} - \theta a_e + \theta a_e$, which yields a new, nonbasic feasible solution using $a_e$. This solution can be made arbitrarily large. If, for that $e$, $c_e > z_e$, then it is easy to prove that we can reach an arbitrarily high objective.

(Un)boundedness of the feasible set and/or of the objective

Proof. Let $I$ be a basic feasible index set and $x_I$ the corresponding BFS. Remember that for any index, $e$ in particular, $a_e = B_I y_e = \sum_{j=1}^m y_{j,e} a_{i_j}$. Let's play with $a_e$:

$$b = \sum_{j=1}^m x_{i_j} a_{i_j} - \theta a_e + \theta a_e = \sum_{j=1}^m \big( x_{i_j} - \theta y_{j,e} \big) a_{i_j} + \theta a_e.$$

Since $y_{j,e} \leq 0$, the coefficients $x_{i_j} - \theta y_{j,e}$ stay nonnegative for every $\theta \geq 0$: we have a nonbasic feasible solution with at most $m+1$ nonzero variables. $\theta$ can be set arbitrarily large, and the corresponding point (equal to $x_{i_j} - \theta y_{j,e}$ on $I$, to $\theta$ on coordinate $e$, and to $0$ elsewhere) is feasible for all $\theta \geq 0$: hence unboundedness of the feasible set.

If moreover $c_e > z_e$ then, writing $\hat z$ for the objective of the point above,

$$\hat z = \sum_{j=1}^m (x_{i_j} - \theta y_{j,e}) c_{i_j} + \theta c_e = \sum_{j=1}^m x_{i_j} c_{i_j} - \theta \sum_{j=1}^m y_{j,e} c_{i_j} + \theta c_e = c_I^T x_I - \theta c_I^T y_e + \theta c_e = z - \theta z_e + \theta c_e = z + \theta (c_e - z_e),$$

which grows to $+\infty$ as $\theta \to \infty$.
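The proof is constructive: from a column e with y_e <= 0 we can write down the whole feasible ray explicitly. A sketch (a hypothetical helper of my own; it assumes the basis I is feasible and that y_e <= 0 actually holds):

```python
import numpy as np

def unbounded_ray(A, b, I, e):
    """Return theta -> x(theta), the feasible points built in the proof of Theorem 3:
    x(theta) equals x_I - theta * y_e on the basic coordinates, theta on coordinate e,
    and 0 elsewhere, so it has at most m + 1 nonzero entries and A x(theta) = b."""
    B = A[:, I]
    x_I = np.linalg.solve(B, b)
    y_e = np.linalg.solve(B, A[:, e])
    assert np.all(y_e <= 1e-12), "this construction needs y_e <= 0"

    def x(theta):
        point = np.zeros(A.shape[1])
        point[I] = x_I - theta * y_e     # stays >= 0 for every theta >= 0 since y_e <= 0
        point[e] = theta                 # the extra (m+1)-th nonzero coordinate
        return point                     # A @ point = b - theta*a_e + theta*a_e = b

    return x
```

Along this ray the objective is c^T x(theta) = z + theta (c_e - z_e), so it tends to +infinity whenever c_e - z_e > 0, exactly as in the last lines of the proof.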

A simple example

An example

Let's consider the following example:

$$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 0 & 0 & 1 \end{bmatrix}, \qquad c = \begin{bmatrix} 2 \\ 5 \\ 6 \\ 8 \end{bmatrix}, \qquad b = \begin{bmatrix} 5 \\ 2 \end{bmatrix}.$$

Let us choose the starting index set $I = (1, 4)$. Then

$$B_I = \begin{bmatrix} 1 & 4 \\ 1 & 1 \end{bmatrix},$$

and we check easily that $x_I = B_I^{-1} b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, which is feasible (lucky here), with objective $z = c_I^T x_I = \begin{bmatrix} 2 & 8 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 10$.
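A quick numerical double-check of this starting point, using the data as reconstructed above (a sketch; treat the numbers as my reading of the slide):

```python
import numpy as np

# Example data as read off the slide above.
A = np.array([[1., 2., 3., 4.],
              [1., 0., 0., 1.]])
c = np.array([2., 5., 6., 8.])
b = np.array([5., 2.])

I = [0, 3]                             # starting index set (1, 4), 0-based
B = A[:, I]                            # [[1, 4], [1, 1]]
x_I = np.linalg.solve(B, b)            # -> [1., 1.], feasible
z = c[I] @ x_I                         # -> 10.0
print(x_I, z)
```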

An example: 4 out, 2 in

Here $B_I^{-1} = \frac{1}{3}\begin{bmatrix} -1 & 4 \\ 1 & -1 \end{bmatrix}$, and the $y_{i,j}$ are given by

$$B_I^{-1} A = \begin{bmatrix} 1 & -2/3 & -1 & 0 \\ 0 & 2/3 & 1 & 1 \end{bmatrix},$$

namely

$$y_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad y_2 = \begin{bmatrix} -2/3 \\ 2/3 \end{bmatrix}, \quad y_3 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \quad y_4 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

Hence $z_2 = \begin{bmatrix} 2 & 8 \end{bmatrix}\begin{bmatrix} -2/3 \\ 2/3 \end{bmatrix} = 4$ and $z_3 = \begin{bmatrix} 2 & 8 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \end{bmatrix} = 6$. Because $I = (1, 4)$, we know $c_1 - z_1 = c_4 - z_4 = 0$.

We have $c_2 - z_2 = 1$ and $c_3 - z_3 = 0$, so there is only one choice for $e$, namely $2$. We look at $y_2$ and see that $y_{2,2}$ is its only positive entry. Hence we remove the second index of $I$, namely $i_2 = 4$.

Now $I = (1, 2)$ and $B_I = \begin{bmatrix} 1 & 2 \\ 1 & 0 \end{bmatrix}$. The corresponding basic solution is $x_I = \begin{bmatrix} 2 \\ 3/2 \end{bmatrix}$, feasible as expected. The objective is now $\hat z = \begin{bmatrix} 2 & 5 \end{bmatrix}\begin{bmatrix} 2 \\ 3/2 \end{bmatrix} = 11.5 > z$: better, as expected.
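The same pivot, checked numerically (still with the data as reconstructed above):

```python
import numpy as np

A = np.array([[1., 2., 3., 4.], [1., 0., 0., 1.]])
c = np.array([2., 5., 6., 8.])
b = np.array([5., 2.])
I = [0, 3]                                  # basis (1, 4)

Y = np.linalg.solve(A[:, I], A)             # [[1, -2/3, -1, 0], [0, 2/3, 1, 1]]
rc = c - Y.T @ c[I]                         # reduced costs: [0, 1, 0, 0]
e = int(np.argmax(rc))                      # e = 1 (0-based), i.e. column 2 enters
x_I = np.linalg.solve(A[:, I], b)
pos = Y[:, e] > 0
ratios = np.where(pos, x_I / np.where(pos, Y[:, e], 1.0), np.inf)
r = int(np.argmin(ratios))                  # r = 1: the second basic index, 4, leaves
I[r] = e                                    # new basis (1, 2)
x_I = np.linalg.solve(A[:, I], b)           # -> [2., 1.5]
print(I, x_I, c[I] @ x_I)                   # objective 11.5 > 10
```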

An example: that's it

Since $B_I^{-1} = \frac{1}{2}\begin{bmatrix} 0 & 2 \\ 1 & -1 \end{bmatrix}$, the new coefficients $y_{i,j}$ are given by

$$B_I^{-1} A = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 3/2 & 3/2 \end{bmatrix},$$

namely

$$y_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad y_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad y_3 = \begin{bmatrix} 0 \\ 3/2 \end{bmatrix}, \quad y_4 = \begin{bmatrix} 1 \\ 3/2 \end{bmatrix}.$$

Now $c_3 - z_3 = 6 - \begin{bmatrix} 2 & 5 \end{bmatrix}\begin{bmatrix} 0 \\ 3/2 \end{bmatrix} = -1.5$ and $c_4 - z_4 = 8 - \begin{bmatrix} 2 & 5 \end{bmatrix}\begin{bmatrix} 1 \\ 3/2 \end{bmatrix} = -1.5$. Since all $c_j - z_j \leq 0$, the index set $(1, 2)$ is optimal. The solution is

$$x^\star = \begin{bmatrix} 2 \\ 3/2 \\ 0 \\ 0 \end{bmatrix}.$$
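And the final optimality certificate, again on the reconstructed data:

```python
import numpy as np

A = np.array([[1., 2., 3., 4.], [1., 0., 0., 1.]])
c = np.array([2., 5., 6., 8.])
b = np.array([5., 2.])
I = [0, 1]                                  # basis (1, 2)

Y = np.linalg.solve(A[:, I], A)             # [[1, 0, 0, 1], [0, 1, 1.5, 1.5]]
rc = c - Y.T @ c[I]                         # -> [0, 0, -1.5, -1.5], all <= 0: optimal
x = np.zeros(4)
x[I] = np.linalg.solve(A[:, I], b)          # x* = [2, 1.5, 0, 0]
print(rc, x, c @ x)                         # optimal value 11.5
```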

Nice algorithm but...

Issues with the previous example

Clean mathematically, but very heavy notation-wise. Worse: lots of redundant computations. We only change one column from $B_I$ to $B_{\hat I}$, yet at each iteration we recompute from scratch:

- the inverse $B_I^{-1}$,
- the $y_i$'s, that is, the matrix $Y = B_I^{-1} A$,
- the $z_i$'s, which can be found through $c_I^T Y = c_I^T B_I^{-1} A$, and the reduced costs.

Plus, we assumed we had an initial feasible solution immediately... what if we don't? Imagine someone solves the problem $(c, A, b)$ before us and finds $x^\star$ as the optimal solution, with $c^T x^\star = z^\star$. They hand the problem back to us after adding the constraint $c^T x \geq z^\star$. Finding an initial feasible solution of that new problem is as hard as finding the optimal solution itself!
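For reference, the "recompute B_I^{-1} from scratch" objection is exactly what a rank-one (product-form) update removes: when only column r of B changes, the new inverse follows from the old one in O(m^2) operations via an elementary eta matrix. A sketch of that idea (my own illustration, anticipating the rank-one updates mentioned on the next slide):

```python
import numpy as np

def update_inverse(B_inv, a_e, r):
    """Inverse of the matrix obtained from B by replacing its column r with a_e,
    computed from B_inv in O(m^2).

    With y_e = B^{-1} a_e, the new basis matrix is B_hat = B F, where F is the
    identity with column r replaced by y_e; hence B_hat^{-1} = F^{-1} B^{-1},
    and F^{-1} is an elementary ("eta") matrix."""
    y_e = B_inv @ a_e
    E = np.eye(len(y_e))
    E[:, r] = -y_e / y_e[r]            # eta column: -y_{k,e} / y_{r,e} for k != r ...
    E[r, r] = 1.0 / y_e[r]             # ... and 1 / y_{r,e} on the diagonal
    return E @ B_inv                   # in practice E is applied without being formed

# Check on the pivot of the example above: replace column r = 1 of B_(1,4) by a_2.
B_old = np.array([[1., 4.], [1., 1.]])
a_2 = np.array([2., 0.])
B_new = np.array([[1., 2.], [1., 0.]])
assert np.allclose(update_inverse(np.linalg.inv(B_old), a_2, 1), np.linalg.inv(B_new))
```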

Next time

For all these reasons, we look for a methodology that is compact (fewer redundant variables and notations) and computationally fast (rank-one updates): the tableau and dictionary methods, which walk through the simplex step by step. We will also study how to find an initial BFS and address additional issues.

YET: the simplex is not just a dictionary or a tableau method. The latter are tools. The simplex algorithm is 100% algebraic and combinatorial; the truth is that it is just an optimization tool in disguise.