
CO350 Linear Programming Chapter 6: The Simplex Method 8th June 2005

Minimization Problem (§6.5)

We can solve a minimization problem by transforming it into a maximization problem. Another way is to change the selection rule for the entering variable. Since we want to minimize z, we would now choose a variable x_k whose reduced cost c̄_k is negative, so that increasing the nonbasic variable x_k decreases the objective value. We then stop with an optimal solution only when c̄_j ≥ 0 for all j ∈ N.

Final remark: we can always transform the LP into a maximization problem and use the simplex method as usual.
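The two stopping/selection tests can be summarized in a few lines of code. The following is a minimal sketch, not part of the course notes; the function name and data layout are illustrative. Here reduced_costs maps each nonbasic index j to its current reduced cost c̄_j.

def entering_candidates(reduced_costs, minimize=False):
    """Return the nonbasic indices that are allowed to enter the basis.

    Maximization: any j with reduced cost > 0 improves z when x_j increases.
    Minimization: any j with reduced cost < 0 improves z when x_j increases.
    An empty list means the current basic solution is already optimal.
    """
    if minimize:
        return [j for j, c in reduced_costs.items() if c < 0]
    return [j for j, c in reduced_costs.items() if c > 0]


# Reduced costs c̄_1 = 2, c̄_2 = 3 (the tableau used on the slides below).
print(entering_candidates({1: 2, 2: 3}))                  # [1, 2]: keep pivoting (max)
print(entering_candidates({1: 2, 2: 3}, minimize=True))   # []: stop, optimal (min)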

Choice Rules (§6.6)

In the simplex method, we need to make two choices at each step: the entering and the leaving variable. When choosing the entering variable, there may be more than one reduced cost c̄_j > 0. When choosing the leaving variable, there may be more than one ratio b̄_i/ā_ik that attains the minimum ratio. We may pick the entering and leaving variables arbitrarily in the event of multiple choices. On the other hand, we may devise choice rules for selecting them. We leave the discussion of choice rules for the leaving variable to a later chapter.

Rules for entering variables

We discuss four commonly used choice rules for entering variables. We shall use the following tableau (*) for illustration.

z − 2x₁ − 3x₂ = 0
x₁ + 2x₂ + x₃ = 2
x₁ − x₂ + x₄ = 3

Largest coefficient rule. This rule was first suggested by George Dantzig (the inventor of the simplex method), so it is also known as Dantzig's rule. The rule states: pick the nonbasic variable with the largest reduced cost; break ties arbitrarily. E.g., in (*), since c̄₂ = 3 > 2 = c̄₁, we choose x₂ to enter according to this rule. This rule is well-motivated and widely used, since it picks a variable that gives the largest increase in objective value per unit of increase in the variable.
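As a small illustration (a sketch, not the notes' own notation), Dantzig's rule on tableau (*) can be written as:

def largest_coefficient_rule(reduced_costs):
    """Dantzig's rule: pick the nonbasic j with the largest reduced cost c̄_j > 0.

    Returns None if no reduced cost is positive (the tableau is optimal).
    Ties are broken by the smallest subscript, an arbitrary choice.
    """
    candidates = [j for j, c in reduced_costs.items() if c > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda j: (-reduced_costs[j], j))


# Reduced costs from (*): c̄_1 = 2, c̄_2 = 3.
print(largest_coefficient_rule({1: 2, 2: 3}))   # 2, i.e. x_2 enters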

Smallest subscript rule. This rule assumes that the variables are arranged in a fixed order before starting the simplex method. When the variables are labelled x₁, x₂, ..., x_n, we order them in ascending order of their subscripts (hence the name of the rule). The rule states: pick the nonbasic variable with the least subscript among those with positive reduced cost. E.g., in (*), both x₁ and x₂ have positive reduced costs. Since x₁ has a smaller subscript than x₂, we choose x₁ to enter according to this rule. This rule is not as well-motivated as Dantzig's rule, but it is unambiguous: there is no need to break ties.
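In the same illustrative style as above, the smallest subscript rule is simply:

def smallest_subscript_rule(reduced_costs):
    """Smallest subscript rule: among nonbasic j with c̄_j > 0, pick the smallest j."""
    candidates = [j for j, c in reduced_costs.items() if c > 0]
    return min(candidates) if candidates else None


print(smallest_subscript_rule({1: 2, 2: 3}))   # 1, i.e. x_1 enters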

Largest improvement rule. This rule has the same motivation as Dantzig's rule: it aims for a large increase in objective value. We need to compute the amount of possible increase t_j for each potential entering variable x_j. The rule states: pick the nonbasic variable with the largest c̄_j t_j among those with positive reduced cost. E.g., in (*), both x₁ and x₂ have positive reduced costs. We can increase x₁ to t₁ = min{2/1, 3/1} = 2, and x₂ to t₂ = min{2/2} = 1 (the second row imposes no limit, since ā₂₂ = −1 < 0). Since c̄₁t₁ = 2 · 2 > 3 · 1 = c̄₂t₂, we choose x₁ to enter according to this rule. This rule picks the variable that results in the largest overall increase in objective value.
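The following sketch (illustrative names and data layout, assuming the columns and right-hand sides of tableau (*)) combines the ratio test with the largest improvement rule:

def max_increase(column, b_bar):
    """Ratio test: the largest step t allowed by rows with ā_ik > 0 (inf if none)."""
    ratios = [b / a for a, b in zip(column, b_bar) if a > 0]
    return min(ratios) if ratios else float("inf")


def largest_improvement_rule(reduced_costs, columns, b_bar):
    """Pick the nonbasic j with c̄_j > 0 that maximizes the total gain c̄_j * t_j."""
    best_j, best_gain = None, 0.0
    for j, c in reduced_costs.items():
        if c > 0:
            gain = c * max_increase(columns[j], b_bar)
            if gain > best_gain:
                best_j, best_gain = j, gain
    return best_j


reduced_costs = {1: 2, 2: 3}
columns = {1: [1, 1], 2: [2, -1]}   # constraint columns of x_1 and x_2 in (*)
b_bar = [2, 3]
print(largest_improvement_rule(reduced_costs, columns, b_bar))   # 1, i.e. x_1 enters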

Steepest edge rule. This rule also aims for a large increase in objective value; however, it does so geometrically. We compute, for each potential entering variable, the increase in objective value per unit distance moved. For each c̄_k > 0, increasing x_k from 0 to t changes the value of each basic variable x_i from b̄_i to b̄_i − ā_ik t and leaves the other nonbasic variables unchanged. So the solution has moved a distance of

√( Σ_{i∈B} (b̄_i − ā_ik t − b̄_i)² + (t − 0)² ) = √( Σ_{i∈B} ā_ik² t² + t² ).

Also, the objective value changes from v̄ to v̄ + c̄_k t. So the increase in objective value per unit distance is

( (v̄ + c̄_k t) − v̄ ) / √( Σ_{i∈B} ā_ik² t² + t² ) = c̄_k t / ( t √( Σ_{i∈B} ā_ik² + 1 ) ) = c̄_k / √( Σ_{i∈B} ā_ik² + 1 ).

Steepest edge rule (continued). Recall that the increase in objective value per unit distance is c̄_k / √( Σ_{i∈B} ā_ik² + 1 ). The rule states: pick the nonbasic variable x_k with the largest value of c̄_k / √( Σ_{i∈B} ā_ik² + 1 ) among those with positive reduced cost. E.g., in (*), both x₁ and x₂ have positive reduced costs. For x₁, we have c̄₁ / √( Σ_{i∈B} ā_i1² + 1 ) = 2 / √(1² + 1² + 1) = 2/√3. For x₂, we have c̄₂ / √( Σ_{i∈B} ā_i2² + 1 ) = 3 / √(2² + (−1)² + 1) = 3/√6. Since 3/√6 > 2/√3, we choose x₂ to enter.
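A short sketch of this computation on tableau (*), in the same illustrative style as the earlier snippets:

from math import sqrt


def steepest_edge_rule(reduced_costs, columns):
    """Pick the nonbasic j with c̄_j > 0 maximizing c̄_j / sqrt(sum_i ā_ij^2 + 1)."""
    best_j, best_score = None, 0.0
    for j, c in reduced_costs.items():
        if c > 0:
            score = c / sqrt(sum(a * a for a in columns[j]) + 1)
            if score > best_score:
                best_j, best_score = j, score
    return best_j


reduced_costs = {1: 2, 2: 3}
columns = {1: [1, 1], 2: [2, -1]}
print(steepest_edge_rule(reduced_costs, columns))
# scores: 2/sqrt(3) ≈ 1.15 for x_1, 3/sqrt(6) ≈ 1.22 for x_2, so x_2 enters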

The Simplex Method and Duality (§6.7)

Suppose that at the end of the simplex method we have an optimal solution x* determined by a basis B, with the corresponding tableau

z − Σ_{j∈N} c̄_j x_j = v̄
x_i + Σ_{j∈N} ā_ij x_j = b̄_i   (i ∈ B).

From this tableau we can give a proof of optimality of x*. Recall that we may also prove the optimality of x* using the C.S. Theorem: we only need to exhibit a ŷ that is feasible for the dual LP and satisfies the C.S. conditions with x*. It turns out that the simplex method implicitly computes this ŷ at the same time!

The simplex method uses elementary row operations to move from the initial tableau to the final optimal tableau. So the z-row in the final tableau must be obtained by taking a linear combination of the equations of Ax = b and adding it to the equation z − cᵀx = 0. Suppose that this linear combination is

ŷ₁ · (eqn. 1) + ŷ₂ · (eqn. 2) + ... + ŷ_m · (eqn. m).

If we let ŷ = [ŷ₁, ŷ₂, ..., ŷ_m]ᵀ, then we can write the linear combination as ŷᵀAx = ŷᵀb. In other words, the z-row [ z − Σ_{j∈N} c̄_j x_j = v̄ ] is actually

z − cᵀx + ŷᵀAx = ŷᵀb,   i.e.   z − (c − Aᵀŷ)ᵀx = bᵀŷ.

Comparing this expression with the z-row, we conclude that

c_i − A_iᵀŷ = 0   (i ∈ B),
c_j − A_jᵀŷ = c̄_j   (j ∈ N).

Since x*_i > 0 implies i ∈ B, which in turn implies A_iᵀŷ = c_i, ŷ satisfies the C.S. conditions with x*. Since c̄_j ≤ 0 for all j ∈ N in the optimal tableau, ŷ satisfies the dual constraints A_jᵀŷ ≥ c_j (j = 1, 2, ..., n). By the C.S. Theorem, x* is optimal for the primal and ŷ is optimal for the dual.
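This argument can be checked numerically. The following is a hedged sketch (not from the notes) applied to the LP behind tableau (*), namely max 2x₁ + 3x₂ subject to x₁ + 2x₂ + x₃ = 2, x₁ − x₂ + x₄ = 3, x ≥ 0; the optimal basis of this LP is assumed here to be B = {1, 4}. We compute ŷ by solving A_Bᵀ ŷ = c_B, then verify dual feasibility and complementary slackness.

import numpy as np

# Standard-form data of the example LP (columns of A correspond to x_1..x_4).
A = np.array([[1.0,  2.0, 1.0, 0.0],
              [1.0, -1.0, 0.0, 1.0]])
b = np.array([2.0, 3.0])
c = np.array([2.0, 3.0, 0.0, 0.0])

basis = [0, 3]                                  # basis B = {1, 4} (0-based indices here)
A_B = A[:, basis]

x_star = np.zeros(4)
x_star[basis] = np.linalg.solve(A_B, b)         # x* = (2, 0, 0, 1)
y_hat = np.linalg.solve(A_B.T, c[basis])        # ŷ from A_B^T ŷ = c_B, i.e. ŷ = (2, 0)

print("x* =", x_star, " ŷ =", y_hat)
print("dual feasible:", np.all(A.T @ y_hat >= c - 1e-9))               # A_j^T ŷ >= c_j for all j
print("complementary:", np.allclose(x_star * (A.T @ y_hat - c), 0.0))  # x*_j (A_j^T ŷ - c_j) = 0
print("objectives equal:", np.isclose(c @ x_star, b @ y_hat))          # c^T x* = b^T ŷ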