ORIE 6300 Mathematical Programming I                                 October 9, 2014
Lecture 14
Lecturer: David P. Williamson                                  Scribe: Calvin Wylie

1 Simplex Issues: Number of Pivots

Question: How many pivots does the simplex algorithm need to take to find an optimal solution?

Answer: If we use a pivot rule that prevents cycling, like Bland's rule, we know that the simplex algorithm never revisits a basis. Therefore the number of pivots is bounded by the number of bases, which is no larger than (n choose m). In practice, however, the simplex algorithm seems to perform O(m) pivots. Still, for most known pivot rules, examples have been devised on which 2^n - 1 pivots are taken. This line of work was started by Klee and Minty (1970).

Idea: Consider the n-dimensional cube 0 ≤ x_i ≤ 1, i = 1, 2, ..., n. Every n-dimensional cube has a Hamiltonian path that starts at x = 0 and visits every vertex exactly once. (The original notes include figures of such paths on the square and the 3-dimensional cube, starting from the origin.) The idea now is to perturb the costs and the constraints slightly so that the simplex algorithm follows the Hamiltonian path; thus it performs 2^n - 1 pivots. Observe, however, that in such a case only one pivot is necessary to reach the optimal solution!

Most pivoting rules used in practice have been shown to take an exponential number of pivots in the worst case via examples similar to the ones given by Klee and Minty (often called Klee-Minty cubes). Until recently, two exceptions to this were Zadeh's rule and randomized pivoting rules. Zadeh's rule chooses the entering variable that has entered the basis the least often so far; it was shown by Friedmann (IPCO 2011) that this rule can also take an exponential number of pivots to reach the optimal solution.
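The Hamiltonian-path claim can be checked with a tiny computation. The following Python sketch (an illustration added here, not part of the original notes; the function name gray_code_path is made up) builds the reflected Gray code ordering of the vertices of the n-cube: it starts at x = 0, visits all 2^n vertices exactly once, and each of the 2^n - 1 steps changes a single coordinate, i.e., moves along an edge of the cube.

    # Reflected Gray code: a Hamiltonian path on the vertices of the n-cube,
    # starting at the all-zeros vertex and changing one coordinate per step.
    def gray_code_path(n):
        path = [[0] * n]
        for i in range(n):
            # Reflect the path built so far and set coordinate i to 1 in the copy.
            for vertex in reversed(path.copy()):
                path.append(vertex[:i] + [1] + vertex[i + 1:])
        return path

    if __name__ == "__main__":
        n = 3
        path = gray_code_path(n)
        assert len(path) == 2 ** n                            # every vertex appears
        assert len(set(map(tuple, path))) == 2 ** n           # ... exactly once
        for u, v in zip(path, path[1:]):                      # 2^n - 1 edge moves
            assert sum(a != b for a, b in zip(u, v)) == 1
        print(path)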

Randomized pivoting rules choose the entering variable randomly; certain classes of such rules have also been shown to be exponential by Friedmann, Hansen, and Zwick (STOC 2011).

Suppose now that P is a bounded polyhedron and we have two vertices of P, x and y. Let d(x, y) be the least number of nondegenerate pivots needed to go from x to y. Let

    D(P) = max { d(x, y) : x, y vertices of P }

be the diameter of P, and let

    Δ(n, m) = max { D(P) : P ∈ S },

where S is the set of all bounded polyhedra in R^n with m constraints. Note that Δ(n, m) gives a lower bound on the number of pivots we might need for a bounded polyhedron with n variables and m constraints (assuming we start at the worst-case x and the optimal solution is at the worst-case y). The following was conjectured by Hirsch in 1957.

Claim 1 (Hirsch Conjecture): Δ(n, m) ≤ m - n.

The conjecture is now known to be false. Santos (2011) showed that Δ(43, 86) > 43, and in general Δ(n, m) ≥ (21/20)(m - n) for n and m sufficiently large. Kalai and Kleitman (1992) showed the bound Δ(n, m) ≤ m^{2 + log_2 n}, and the best upper bound known is Δ(n, m) ≤ (m - n)^{log_2 n} (Todd 2014).

2 Types of Simplex Method

2.1 Revised Simplex Method

This is an implementation of the simplex algorithm in which we maintain a basis B and compute all other information from it during each iteration. It is what we have called the simplex method throughout the semester thus far.

2.2 Standard Simplex Method

In this implementation of the simplex algorithm, we maintain a full tableau throughout the algorithm:

    -b^T (A_B^T)^{-1} c_B  |  c - A^T (A_B^T)^{-1} c_B
    -----------------------+----------------------------
         A_B^{-1} b        |         A_B^{-1} A

The top left-hand corner has the negative of the value of the objective function: recall that c̄^T x = c^T x - b^T y = c^T x - b^T (A_B^T)^{-1} c_B, where y = (A_B^T)^{-1} c_B. But c̄^T x = 0, since c̄_B = 0 and x_N = 0. Therefore c^T x = b^T (A_B^T)^{-1} c_B. The bottom left-hand corner has the values of the basic variables, A_B^{-1} b. The top right-hand corner has the reduced cost vector c̄ = c - A^T (A_B^T)^{-1} c_B. The bottom right-hand corner has a modified constraint matrix A_B^{-1} A with the property that the submatrix consisting of the columns indexed by B is the identity I.
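As a concrete illustration of these four blocks, here is a short numpy sketch (added for illustration, not part of the original notes; the helper name full_tableau and the small example LP at the end are made up) that assembles the tableau from A, b, c, and a basis B.

    import numpy as np

    def full_tableau(A, b, c, basis):
        """Return the four blocks of the full simplex tableau for the given basis:
        -c_B^T A_B^{-1} b (negative objective value), the reduced costs
        c - A^T (A_B^T)^{-1} c_B, the basic values A_B^{-1} b, and A_B^{-1} A."""
        A_B = A[:, basis]
        x_B = np.linalg.solve(A_B, b)             # A_B^{-1} b
        y = np.linalg.solve(A_B.T, c[basis])      # y = (A_B^T)^{-1} c_B
        reduced_costs = c - A.T @ y
        neg_obj = -c[basis] @ x_B
        return neg_obj, reduced_costs, x_B, np.linalg.solve(A_B, A)

    # Made-up 2x4 standard-form LP; the slack columns (indices 2 and 3) form the basis.
    A = np.array([[1.0,  1.0, 1.0, 0.0],
                  [1.0, -1.0, 0.0, 1.0]])
    b = np.array([4.0, 2.0])
    c = np.array([-1.0, 2.0, 0.0, 0.0])
    print(full_tableau(A, b, c, basis=[2, 3]))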

Example of the standard Simplex Method. Consider the LP

    min  -x_1 + 2x_2 - x_3
    s.t.  x_1 + x_4 = 4
          x_2 + x_5 = 4
          x_1 + x_2 + x_6 = 6
         -x_1 + 2x_3 + x_7 = 4
          x_1, ..., x_7 ≥ 0.

Starting from the basis of slack variables B = {4, 5, 6, 7}, the tableau is (the first column holds the negative objective value and the basic variable values; the remaining columns correspond to x_1, ..., x_7):

          0 | -1   2  -1   0   0   0   0
    x_4 = 4 |  1   0   0   1   0   0   0
    x_5 = 4 |  0   1   0   0   1   0   0
    x_6 = 6 |  1   1   0   0   0   1   0
    x_7 = 4 | -1   0   2   0   0   0   1

During this iteration, x_1 enters the basis and x_4 leaves the basis. To perform the update, take the row defining the pivot and do Gaussian elimination to make the new basis submatrix the identity; this row is also used to eliminate x_1's entry from the reduced cost row.
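The pivot just described can be checked numerically. Here is a small numpy sketch (my own check, not from the original notes; the array layout with the value column first is an assumption of the sketch): it runs the ratio test on the x_1 column, which selects the x_4 row, and then performs the elimination, reproducing the next tableau shown below.

    import numpy as np

    # First tableau of the example: column 0 holds the values, columns 1-7 hold x_1..x_7.
    T = np.array([[0., -1.,  2., -1.,  0., 0., 0., 0.],   # reduced cost row
                  [4.,  1.,  0.,  0.,  1., 0., 0., 0.],   # x_4 = 4
                  [4.,  0.,  1.,  0.,  0., 1., 0., 0.],   # x_5 = 4
                  [6.,  1.,  1.,  0.,  0., 0., 1., 0.],   # x_6 = 6
                  [4., -1.,  0.,  2.,  0., 0., 0., 1.]])  # x_7 = 4

    def pivot(T, row, col):
        """One Gaussian elimination step: make column `col` a unit vector with a 1 in `row`."""
        T = T.copy()
        T[row] /= T[row, col]
        for r in range(T.shape[0]):
            if r != row:
                T[r] -= T[r, col] * T[row]
        return T

    entering = 1                                   # x_1 has negative reduced cost
    ratios = [T[r, 0] / T[r, entering] if T[r, entering] > 0 else np.inf
              for r in range(1, T.shape[0])]
    leaving_row = 1 + int(np.argmin(ratios))       # ratio test selects the x_4 row
    print(pivot(T, leaving_row, entering))         # matches the next tableau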

This gives the new tableau:

          4 |  0   2  -1   1   0   0   0
    x_1 = 4 |  1   0   0   1   0   0   0
    x_5 = 4 |  0   1   0   0   1   0   0
    x_6 = 2 |  0   1   0  -1   0   1   0
    x_7 = 8 |  0   0   2   1   0   0   1

During this iteration, x_3 enters the basis and x_7 leaves, giving:

          8 |  0   2   0   1.5   0   0   0.5
    x_1 = 4 |  1   0   0   1     0   0   0
    x_5 = 4 |  0   1   0   0     1   0   0
    x_6 = 2 |  0   1   0  -1     0   1   0
    x_3 = 4 |  0   0   1   0.5   0   0   0.5

We have found an optimal solution, since the reduced cost vector has nonnegative entries. The optimal value of the LP is -8 (the top-left entry of the tableau is its negative) and the optimal solution is x = (4, 0, 4, 0, 4, 2, 0).

2.3 Capacitated Simplex

We consider an LP of the following form:

    min  c^T x
    s.t. Ax = b
         l ≤ x ≤ u.

Question: How do we solve such an LP?

Idea 1: Put it into standard form. Let z = x - l and b' = b - Al. The new LP is

    min  c^T z
    s.t. Az = b'
         0 ≤ z ≤ u - l.

Now we can convert z ≤ u - l to equality constraints by adding slack variables. Thus the number of constraints in the constraint matrix and the number of variables are each increased by n. Since we typically have n >> m, this alteration could cause the complexity of the algorithm to increase significantly.

Idea 2: Modify the simplex method! We will maintain three sets of variables: B (basic), L, and U, where j ∈ L means that x_j = l_j and j ∈ U means that x_j = u_j. If we take the dual of the original LP, we get

    max  b^T y - u^T v + l^T w
    s.t. A^T y - v + w = c
         v, w ≥ 0.

Notice that for any y there exists a dual feasible solution, namely (componentwise)

    v = max(0, A^T y - c),    w = max(0, c - A^T y).

We can set y = (A_B^T)^{-1} c_B as usual and compute reduced costs c̄ = c - A^T y. Let's now consider a solution x to the primal:

    A_B x_B + A_L x_L + A_U x_U = b
    A_B x_B = b - A_L x_L - A_U x_U
    x_B = A_B^{-1} b - A_B^{-1} A_L x_L - A_B^{-1} A_U x_U.

If x satisfies the above equation and l_B ≤ x_B ≤ u_B, then it is primal feasible. Furthermore, it is optimal if it is primal feasible, c̄_j ≥ 0 for all j ∈ L, and c̄_j ≤ 0 for all j ∈ U. This follows since if c̄_j ≥ 0 then A_j^T y ≤ c_j, so we may take v_j = 0 and w_j = c_j - A_j^T y ≥ 0; complementary slackness is then obeyed, since it requires x_j = l_j only when w_j > 0, and indeed x_j = l_j for j ∈ L. If c̄_j ≤ 0 then A_j^T y ≥ c_j, so we may take w_j = 0 and v_j = A_j^T y - c_j ≥ 0; complementary slackness is then obeyed, since it requires x_j = u_j only when v_j > 0, and indeed x_j = u_j for j ∈ U.

Suppose these optimality conditions did not hold, i.e., either c̄_j < 0 for some j ∈ L or c̄_j > 0 for some j ∈ U. This would motivate increasing or decreasing x_j, respectively. The rest of the details of the capacitated simplex method will be given as a homework problem.
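To make the feasibility and optimality check above concrete, here is a small numpy sketch (my own illustration, not part of the lecture or of the homework; the function name capacitated_check and the one-constraint example are made up). Given a partition (B, L, U), it fixes the nonbasic variables at their bounds, computes y = (A_B^T)^{-1} c_B, the reduced costs c̄ = c - A^T y, and x_B = A_B^{-1}(b - A_L x_L - A_U x_U), and then tests l_B ≤ x_B ≤ u_B, c̄_j ≥ 0 for j ∈ L, and c̄_j ≤ 0 for j ∈ U.

    import numpy as np

    def capacitated_check(A, b, c, l, u, B, L, U):
        """Check primal feasibility and optimality of the partition (B, L, U)
        for  min c^T x  s.t.  Ax = b,  l <= x <= u."""
        x = np.zeros(len(c))
        x[L] = l[L]                                   # variables in L sit at their lower bounds
        x[U] = u[U]                                   # variables in U sit at their upper bounds
        y = np.linalg.solve(A[:, B].T, c[B])          # y = (A_B^T)^{-1} c_B
        cbar = c - A.T @ y                            # reduced costs
        x[B] = np.linalg.solve(A[:, B], b - A[:, L] @ x[L] - A[:, U] @ x[U])
        feasible = np.all(l[B] <= x[B]) and np.all(x[B] <= u[B])
        optimal = feasible and np.all(cbar[L] >= 0) and np.all(cbar[U] <= 0)
        return x, cbar, feasible, optimal

    # Made-up data: one equality constraint, two bounded variables, x_2 held at its lower bound.
    A = np.array([[1.0, 1.0]])
    b = np.array([3.0])
    c = np.array([1.0, 2.0])
    l = np.array([0.0, 0.0])
    u = np.array([5.0, 5.0])
    print(capacitated_check(A, b, c, l, u, B=[0], L=[1], U=[]))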
