DM545 Linear and Integer Programming. Lecture 7: Revised Simplex Method. Marco Chiarandini. Department of Mathematics & Computer Science, University of Southern Denmark.


Outline
1. Revised Simplex Method
2. Efficiency Issues

Motivation

Complexity of a single pivot operation in the standard simplex:
- entering variable: O(n)
- leaving variable: O(m)
- updating the tableau: O(mn)

Problems with this:
- Time: we carry out operations that are not actually needed.
- Space: we need to store the whole tableau, O(mn) floating point numbers.

Most problems have sparse matrices (many zeros). Sparse matrices are typically handled efficiently, but the standard simplex suffers from the fill-in effect: sparsity is lost. In addition, floating point errors accumulate over the iterations.

Outline
1. Revised Simplex Method
2. Efficiency Issues

There are several ways to address the pitfalls listed on the previous slide; they require a matrix description of the simplex.

max  sum_{j=1..n} c_j x_j
s.t. sum_{j=1..n} a_ij x_j <= b_i,   i = 1..m
     x_j >= 0,                       j = 1..n

After adding slack variables this becomes, in matrix form,

max { c^T x : Ax = b, x >= 0 }

with A in R^{m x (n+m)}, c in R^{n+m}, b in R^m, x in R^{n+m}.

At each iteration the simplex moves from one basic feasible solution to another. For each basic feasible solution:
B = {1, ..., m} is the set of basic indices and N = {m+1, ..., m+n} the set of nonbasic indices;
A_B = [a_1 ... a_m] is the basis matrix and A_N = [a_{m+1} ... a_{m+n}];
x_N = 0 and x_B >= 0.

In matrix/tableau form:

[ A_N    A_B    0 | b ]
[ c_N^T  c_B^T  1 | 0 ]

Ax = A_N x_N + A_B x_B = b, hence A_B x_B = b - A_N x_N.

Theorem: in a basic feasible solution the basis matrix A_B is non-singular, and

x_B = A_B^{-1} b - A_B^{-1} A_N x_N.

For the objective function:

z = c^T x = c_B^T x_B + c_N^T x_N

Substituting for x_B from above:

z = c_B^T (A_B^{-1} b - A_B^{-1} A_N x_N) + c_N^T x_N
  = c_B^T A_B^{-1} b + (c_N^T - c_B^T A_B^{-1} A_N) x_N

Collecting together:

x_B = A_B^{-1} b - A_B^{-1} A_N x_N
z   = c_B^T A_B^{-1} b + (c_N^T - c_B^T A_B^{-1} A_N) x_N

In tableau form, for a basic feasible solution corresponding to B we have:

[ A_B^{-1} A_N                 I  0 | A_B^{-1} b        ]
[ c_N^T - c_B^T A_B^{-1} A_N   0  1 | -c_B^T A_B^{-1} b ]

We do not need to compute all elements of Ā = A_B^{-1} A_N.

Example

max  x1 + x2
    -x1 + x2 <= 1
     x1      <= 3
          x2 <= 2
     x1, x2 >= 0

In equational standard form:

max  x1 + x2
    -x1 + x2 + x3 = 1
     x1      + x4 = 3
          x2 + x5 = 2
     x1, x2, x3, x4, x5 >= 0

Initial tableau:

  x1  x2  x3  x4  x5 |  z |  b
  -1   1   1   0   0 |  0 |  1
   1   0   0   1   0 |  0 |  3
   0   1   0   0   1 |  0 |  2
   1   1   0   0   0 |  1 |  0

After two iterations:

  x1  x2  x3  x4  x5 |  z |  b
   1   0  -1   0   1 |  0 |  1
   0   1   0   0   1 |  0 |  2
   0   0   1   1  -1 |  0 |  2
   0   0   1   0  -2 |  1 | -3

Basic variables: x1, x2, x4. Non-basic: x3, x5. From the initial data:

A_B = [-1 1 0; 1 0 1; 0 1 0],   A_N = [1 0; 0 0; 0 1],   x_B = (x1, x2, x4)^T,   x_N = (x3, x5)^T,
c_B^T = [1 1 0],   c_N^T = [0 0].
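To make the matrix bookkeeping concrete, here is a minimal numpy sketch (my own illustration, not part of the lecture; numpy and the variable names are assumptions) that recovers the basic feasible solution and the objective value of the tableau after two iterations from the formulas x_B = A_B^{-1} b and z = c_B^T x_B.

```python
import numpy as np

# Example LP in equational form (columns: x1, x2, x3, x4, x5)
A = np.array([[-1., 1., 1., 0., 0.],
              [ 1., 0., 0., 1., 0.],
              [ 0., 1., 0., 0., 1.]])
b = np.array([1., 3., 2.])
c = np.array([1., 1., 0., 0., 0.])

basis, nonbasis = [0, 1, 3], [2, 4]      # basic x1, x2, x4; nonbasic x3, x5 (0-based)
A_B, A_N = A[:, basis], A[:, nonbasis]

x_B = np.linalg.solve(A_B, b)            # x_B = A_B^{-1} b
z = c[basis] @ x_B                       # z = c_B^T A_B^{-1} b

print(x_B)                               # [1. 2. 2.]  ->  x1 = 1, x2 = 2, x4 = 2
print(z)                                 # 3.0, the objective value of this BFS
```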

Entering variable: in the standard simplex we read it off the tableau; in the revised simplex we need to compute c_N^T - c_B^T A_B^{-1} A_N:

1. find y^T = c_B^T A_B^{-1} (by solving y^T A_B = c_B^T; the latter can be done more efficiently than computing A_B^{-1})
2. calculate c_N^T - y^T A_N

Step 1: solve y^T A_B = c_B^T:

[y1 y2 y3] [-1 1 0; 1 0 1; 0 1 0] = [1 1 0]
y^T = c_B^T A_B^{-1} = [1 1 0] [-1 0 1; 0 0 1; 1 1 -1] = [-1 0 2]

Step 2: compute the reduced costs c_N^T - y^T A_N:

[0 0] - [-1 0 2] [1 0; 0 0; 0 1] = [1 -2]

(Note that they can also be computed individually: check whether c_j - y^T a_j > 0.)

Let's take the first improving variable we encounter: x3 enters.
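A small numpy sketch of Steps 1 and 2 on the same data (illustrative only, not the lecture's code; note that y^T A_B = c_B^T is solved here as the transposed system A_B^T y = c_B):

```python
import numpy as np

A = np.array([[-1., 1., 1., 0., 0.],
              [ 1., 0., 0., 1., 0.],
              [ 0., 1., 0., 0., 1.]])
c = np.array([1., 1., 0., 0., 0.])
basis, nonbasis = [0, 1, 3], [2, 4]
A_B, A_N = A[:, basis], A[:, nonbasis]

# Step 1: y^T A_B = c_B^T  <=>  A_B^T y = c_B
y = np.linalg.solve(A_B.T, c[basis])
print(y)                                 # [-1.  0.  2.]

# Step 2: reduced costs c_N^T - y^T A_N of the nonbasic variables
reduced = c[nonbasis] - y @ A_N
print(reduced)                           # [ 1. -2.]  ->  only x3 improves the objective

entering = [j for j, r in zip(nonbasis, reduced) if r > 0][0]
print(entering)                          # 2, i.e. x3 enters
```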

Leaving variable: we increase the entering variable by the largest feasible amount θ.

R1: x1 - x3 + x5 = 1    =>   x1 = 1 + x3 >= 0      (no limit)
R2: x2 + 0 x3 + x5 = 2  =>   x2 = 2 >= 0           (no limit)
R3: x3 + x4 - x5 = 2    =>   x4 = 2 - x3 >= 0      (x3 <= 2)

In general, x_B = A_B^{-1} b - A_B^{-1} A_N x_N, i.e., x_B = x_B^* - d θ, where d is the column of A_B^{-1} A_N that corresponds to the entering variable, i.e., d = A_B^{-1} a with a the entering column.

3. Find θ such that x_B stays non-negative: find d = A_B^{-1} a (by solving A_B d = a).

Step 3:

d = A_B^{-1} a = [-1 0 1; 0 0 1; 1 1 -1] (1, 0, 0)^T = (-1, 0, 1)^T

x_B = (1, 2, 2)^T - (-1, 0, 1)^T θ >= 0   =>   θ = 2 and x4 leaves.
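The direction d and the ratio test for θ can be sketched in the same way (illustrative code, not from the slides; the tolerance 1e-12 is an arbitrary guard against dividing by a numerically zero d_i):

```python
import numpy as np

A = np.array([[-1., 1., 1., 0., 0.],
              [ 1., 0., 0., 1., 0.],
              [ 0., 1., 0., 0., 1.]])
b = np.array([1., 3., 2.])
basis = [0, 1, 3]                        # x1, x2, x4
A_B = A[:, basis]
x_B = np.linalg.solve(A_B, b)            # [1. 2. 2.]

# Step 3: direction d = A_B^{-1} a for the entering column a = a_3
d = np.linalg.solve(A_B, A[:, 2])
print(d)                                 # [-1.  0.  1.]

# Ratio test: largest theta with x_B - theta*d >= 0; only rows with d_i > 0 restrict theta
# (if no d_i > 0 the LP is unbounded)
ratios = [x_B[i] / d[i] if d[i] > 1e-12 else np.inf for i in range(len(d))]
theta = min(ratios)
leave_pos = ratios.index(theta)
print(theta, basis[leave_pos])           # 2.0 3  ->  theta = 2 and x4 (column 3) leaves
```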

So far we have done the same computations as before, but now we save the work of pivoting: the update of A_B is done simply by replacing the leaving column with the entering column:

x_B = (x1, x2, x3)^T = (1 - d1 θ, 2 - d2 θ, θ)^T = (3, 2, 2)^T,    A_B = [-1 1 1; 1 0 0; 0 1 0]

There are many implementations, depending on how y^T A_B = c_B^T and A_B d = a are solved; so far we have solved them from scratch at every iteration. Compared with the standard simplex:
- many operations are saved, especially if there are many variables;
- there are special ways of storing and accessing the matrix A in memory;
- we have better control over numerical issues, since A_B^{-1} can be recomputed.
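A sketch of this update on the same data (illustrative; indices are 0-based, so variable x4 is column index 3 and x3 is column index 2):

```python
import numpy as np

A = np.array([[-1., 1., 1., 0., 0.],
              [ 1., 0., 0., 1., 0.],
              [ 0., 1., 0., 0., 1.]])
b = np.array([1., 3., 2.])

basis = [0, 1, 3]                        # x1, x2, x4
A_B = A[:, basis]
x_B = np.linalg.solve(A_B, b)            # [1. 2. 2.]
d = np.linalg.solve(A_B, A[:, 2])        # [-1. 0. 1.], direction for entering x3
theta, leave_pos, entering = 2.0, 2, 2   # from the ratio test: x4 (basis position 2) leaves

# Move along the edge, then swap the leaving column for the entering one
x_B = x_B - theta * d                    # [3. 2. 0.]
x_B[leave_pos] = theta                   # the entering variable takes the value theta
basis[leave_pos] = entering              # basis becomes {x1, x2, x3}
A_B = A[:, basis]

print(x_B)                               # [3. 2. 2.]
print(A_B)                               # [[-1. 1. 1.] [ 1. 0. 0.] [ 0. 1. 0.]]
```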

Outline
1. Revised Simplex Method
2. Efficiency Issues

Solving the two Systems of Equations

A_B x_B = b is solved without computing A_B^{-1} explicitly (costly and likely to introduce numerical inaccuracy). Recall how the inverse is computed.

For a 2x2 matrix A = [a b; c d] the inverse is

A^{-1} = 1/(ad - bc) [d -b; -c a].

For a 3x3 matrix A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] the inverse is the transposed matrix of cofactors divided by the determinant:

A^{-1} = (1/|A|) [ +|a22 a23; a32 a33|  -|a21 a23; a31 a33|  +|a21 a22; a31 a32| ;
                   -|a12 a13; a32 a33|  +|a11 a13; a31 a33|  -|a11 a12; a31 a32| ;
                   +|a12 a13; a22 a23|  -|a11 a13; a21 a23|  +|a11 a12; a21 a22| ]^T
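As a small illustration of the point (mine, not from the slides): the explicit cofactor/adjugate inverse of the example basis matrix gives the same x_B as np.linalg.solve, but forming the inverse is more work and more sensitive to rounding, which is why the systems are solved directly.

```python
import numpy as np

def inverse_3x3(M):
    """Explicit 3x3 inverse via the cofactor/adjugate formula above (illustration only)."""
    cof = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            m = np.delete(np.delete(M, i, axis=0), j, axis=1)     # 2x2 minor
            cof[i, j] = (-1) ** (i + j) * (m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0])
    det = M[0, :] @ cof[0, :]            # Laplace expansion along the first row
    return cof.T / det                   # A^{-1} = adj(A) / det(A)

A_B = np.array([[-1., 1., 0.],
                [ 1., 0., 1.],
                [ 0., 1., 0.]])
b = np.array([1., 3., 2.])

print(inverse_3x3(A_B) @ b)              # [1. 2. 2.]
print(np.linalg.solve(A_B, b))           # [1. 2. 2.]  -- same result, no explicit inverse
```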

Eta Factorization of the Basis

Let B := A_B. At the kth iteration, B_k differs from B_{k-1} in only one column, say column p. That column is the column a that entered the basis, i.e., the a of the system B_{k-1} d = a solved at step 3. Hence

B_k = B_{k-1} E_k

where E_k is an eta matrix: it differs from the identity matrix in only one column (column p, which equals d). In the example:

[-1 1 1; 1 0 0; 0 1 0] = [-1 1 0; 1 0 1; 0 1 0] [1 0 -1; 0 1 0; 0 0 1]

No matter how we solve y^T B_{k-1} = c_B^T and B_{k-1} d = a, their update always relies on B_k = B_{k-1} E_k with E_k available. Moreover, when the initial basis consists of the slack variables, B_0 = I and B_1 = E_1, B_2 = E_1 E_2, ..., hence

B_k = E_1 E_2 ... E_k      (eta factorization)

y^T B_k = c_B^T is then ((((y^T E_1) E_2) E_3) ...) E_k = c_B^T, solved one factor at a time (example for k = 4):
u^T E_4 = c_B^T,  v^T E_3 = u^T,  w^T E_2 = v^T,  y^T E_1 = w^T

B_k d = a is (E_1 (E_2 (... (E_k d)))) = a, solved as (for k = 4):
E_1 u = a,  E_2 v = u,  E_3 w = v,  E_4 d = w
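A sketch of the eta update and of sweeping through the eta factors (the helper names eta, ftran and btran are mine; using a dense np.linalg.solve on each factor is only for brevity, since a system with an eta matrix can in fact be solved in O(m), see the compact storage on the next slide):

```python
import numpy as np

def eta(p, col, m):
    """Eta matrix: identity of order m with column p replaced by `col`."""
    E = np.eye(m)
    E[:, p] = col
    return E

# The pivot of the example: column p = 2 of the basis (x4) is replaced by a_3,
# and d = (-1, 0, 1) solves B_prev d = a_3.
B_prev = np.array([[-1., 1., 0.],
                   [ 1., 0., 1.],
                   [ 0., 1., 0.]])
E_k = eta(2, np.array([-1., 0., 1.]), 3)
print(B_prev @ E_k)                      # [[-1. 1. 1.] [ 1. 0. 0.] [ 0. 1. 0.]] = B_next

# With B_k = E_1 E_2 ... E_k, both systems are solved one eta factor at a time:
def ftran(etas, a):                      # B_k d = a:  E_1 u = a, E_2 v = u, ..., E_k d = w
    for E in etas:
        a = np.linalg.solve(E, a)
    return a

def btran(etas, c):                      # y^T B_k = c^T:  u^T E_k = c^T, ..., y^T E_1 = w^T
    for E in reversed(etas):
        c = np.linalg.solve(E.T, c)
    return c

# Sanity check on two eta factors: the sweep agrees with a direct solve on B = E_1 E_2
E_1, E_2 = eta(0, np.array([2., 1., 0.]), 3), E_k
B = E_1 @ E_2
rhs = np.array([1., 0., 0.])
print(np.allclose(ftran([E_1, E_2], rhs), np.linalg.solve(B, rhs)))      # True
print(np.allclose(btran([E_1, E_2], rhs), np.linalg.solve(B.T, rhs)))    # True
```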

Solving y^T B_k = c_B^T is also called backward transformation (BTRAN).
Solving B_k d = a is also called forward transformation (FTRAN).
The E_i matrices can be stored by storing only their special column and its position.
If the columns are sparse, they can be stored in compact mode, i.e., only the nonzero values and their indices.
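A possible compact representation, sketched under the assumption that only the pivot position and the nonzero entries of the eta column are kept (class and method names are my own):

```python
import numpy as np

class Eta:
    """Compact eta matrix: pivot position p plus the nonzeros of its special column."""
    def __init__(self, p, nonzeros):
        self.p = p                       # index of the special column
        self.nz = dict(nonzeros)         # row index -> value; must contain an entry for row p

    def ftran(self, x):
        """Solve E z = x for z, overwriting x; touches only the stored nonzeros."""
        xp = x[self.p] / self.nz[self.p]
        for i, v in self.nz.items():
            if i != self.p:
                x[i] -= v * xp
        x[self.p] = xp
        return x

    def btran(self, y):
        """Solve E^T z = y for z, overwriting y; touches only the stored nonzeros."""
        s = sum(v * y[i] for i, v in self.nz.items() if i != self.p)
        y[self.p] = (y[self.p] - s) / self.nz[self.p]
        return y

# The eta factor of the example pivot: column 2 with entries -1 (row 0) and 1 (row 2)
E = Eta(p=2, nonzeros={0: -1.0, 2: 1.0})
dense = np.array([[1., 0., -1.],
                  [0., 1.,  0.],
                  [0., 0.,  1.]])
rhs = np.array([1., 3., 2.])
print(E.ftran(rhs.copy()), np.linalg.solve(dense, rhs))        # both [3. 3. 2.]
print(E.btran(rhs.copy()), np.linalg.solve(dense.T, rhs))      # both [1. 3. 3.]
```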

More on LP

The tableau method is numerically unstable: computational errors may accumulate. The revised method has a natural control mechanism: we can recompute A_B^{-1} at any time.
Commercial and freeware solvers differ in the way the systems y^T A_B = c_B^T and A_B d = a are solved.

Efficient Implementations

- Dual simplex with steepest-edge pricing (largest increase).
- Linear algebra: dynamic LU factorization using Markowitz threshold pivoting (Suhl and Suhl, 1990); sparse linear systems: typically these systems take as input a vector with a very small number of nonzero entries and output a vector with only a few additional nonzeros.
- Presolve, i.e., problem reductions: removal of redundant constraints, fixed variables, and other extraneous model elements.
- Dealing with degeneracy, stalling (long sequences of degenerate pivots), and cycling: bound shifting (Paula Harris, 1974).
- Hybrid pricing (variable selection): start with partial pricing, then switch to devex (approximate steepest edge, Harris, 1974).

A model that might have taken a year to solve 10 years ago can now be solved in less than 30 seconds (Bixby, 2002).