TIM 206 Lecture 3: The Simplex Method


Kevin Ross. Scribe: Shane Brennan (2006). September 29, 2011

1 Basic Feasible Solutions

The system Ax = b typically contains more columns (variables) than rows (constraints). A basic solution is a solution that uses only a set of linearly independent columns of A.

Problem: A is not invertible. If A were n x n and of full rank we could solve x = A^{-1} b, but in general A is rectangular. (A matrix has full rank when rank(A) = min(number of rows, number of columns).)

Solution: One method is to use a matrix decomposition. A better method is to pick some linearly independent columns of A and set the OTHER variables to 0.

2 Solutions, Extreme Points, and Bases

Key Fact: If a linear program has an optimal solution, then it has an optimal extreme point solution.

Basic Feasible Solution (Corner Point Feasible): The vector x is an extreme point of the solution space iff it is a BFS of Ax = b, x >= 0.

If A is of full rank then there is at least one basis B of A, i.e. a set of m linearly independent columns of A. B gives us a basic solution. If this solution is feasible then it is called a basic feasible solution (BFS) or corner point feasible (CPF) solution.

3 The Optimal Basis Theorem

The proof of the Optimal Basis Theorem requires the Representation Theorem.

Representation Theorem: Consider the set S = {x : Ax = b, x >= 0}. Let V = {V_1, ..., V_k} be the extreme points of S. If S is nonempty, then V is nonempty and every feasible point x in S can be written as

    x = d + sum_{i=1}^{k} alpha_i V_i

where sum_i alpha_i = 1, alpha_i >= 0 for all i, and d satisfies Ad = 0, d >= 0.

A simple explanation of this theorem is that every point in the solution space is a convex combination of the corners of that space, plus an unbounded direction d. We discard the unbounded cases (taking d = 0) for simplification in the following proof.
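As a concrete sketch of picking a basis, one can select linearly independent columns of A, set the remaining variables to zero, and solve for the basic variables. All numbers below are invented for illustration, not taken from the lecture's example:

```python
import numpy as np

# Hypothetical system with more variables (4) than constraints (2):
#   x1 + x2 + x3       = 4
#   x1      + 2x3 + x4 = 6
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 2.0, 1.0]])
b = np.array([4.0, 6.0])

# Choose two linearly independent columns (a basis); the others stay at 0.
basis = [0, 3]                  # columns of x1 and x4
B = A[:, basis]                 # 2x2 basis matrix
x_B = np.linalg.solve(B, b)     # solve B x_B = b for the basic variables
x = np.zeros(A.shape[1])
x[basis] = x_B                  # nonbasic variables remain 0

# The basic solution is a BFS iff every entry is nonnegative.
print(x, "feasible:", bool(np.all(x >= 0)))
```

A different choice of basis columns can give a basic solution with negative entries, which is basic but not feasible.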

3.1 Proof of the Optimal Basis Theorem

Say x is a finite optimal solution of a linear program. The Representation Theorem (in the bounded case d = 0) says

    x = sum_{i=1}^{k} alpha_i V_i

so that

    c^T x = c^T sum_{i=1}^{k} alpha_i V_i = sum_{i=1}^{k} alpha_i c^T V_i.

It is known that sum_i alpha_i = 1 and that alpha_i >= 0. Let V* = argmax_i c^T V_i. Then

    c^T x = sum_i alpha_i c^T V_i <= sum_i alpha_i c^T V* = c^T V*.

Therefore c^T x <= c^T V*; since x is optimal, c^T x = c^T V*, so the extreme point V* is also optimal.

Optimal Basis Theorem: If a linear program has a finite optimal solution, then it has an optimal basic feasible solution.

The intuition behind the simplex method is that it checks corner points, getting a better solution at each iteration. A simplified outline of the simplex method:

1. Find a starting solution.
2. Test for optimality. If optimal, then stop.
3. Perform one iteration to a new CPF (BFS) solution. Return to step 2.

4 Simplex Method: Basis Change

1. One basic variable is replaced by another. This process essentially chooses which variables are active and then changes the active constraints.
2. The optimality test identifies a nonbasic variable to enter the basis. The entering basic variable is increased until one of the other basic variables becomes zero. The variable that reaches zero is identified using the minimum ratio test; that variable departs the basis.

Note that the number of corner points is at most (n choose m), the number of ways to choose m basic columns from the n columns of A.
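The minimum ratio test in step 2 can be sketched as follows; the entering-column coefficients and right-hand sides are hypothetical numbers chosen for illustration:

```python
import numpy as np

# One basis change: given the entering variable's column coefficients in each
# constraint row, and the current values of the basic variables (the RHS).
col = np.array([1.0, 3.0, -2.0])   # hypothetical entering column
rhs = np.array([7.0, 12.0, 5.0])   # hypothetical basic variable values

# Minimum ratio test: the entering variable can increase until the first basic
# variable hits zero; rows with nonpositive coefficients impose no limit.
ratios = np.where(col > 0, rhs / np.where(col > 0, col, 1.0), np.inf)
leaving_row = int(np.argmin(ratios))
print(leaving_row, ratios[leaving_row])   # row 1 leaves: 12/3 = 4
```

The basic variable of the winning row departs the basis, and the entering variable takes the value of the minimum ratio.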

5 Simplex Example

The following examples are from the OR Tutor CD. This is one iteration of the simplex method via the algebraic method.

Algebraic Simplex Method - Introduction

To demonstrate the simplex method, consider the linear programming model for Leo Coco's problem presented in the demo Graphical Method. That demo describes how to find the optimal solution graphically; the optimal solution is (x_1, x_2) = (0, 7). We now describe how the simplex method (an algebraic procedure) obtains this solution algebraically.

Algebraic Simplex Method - Formulation

The Simplex Formulation: To solve this model, the simplex method needs a system of equations instead of inequalities for the functional constraints. The demo Interpretation of Slack Variables describes how this system of equations is obtained by introducing nonnegative slack variables x_3 and x_4. The resulting equivalent form of the model is the augmented form, and the simplex method begins by focusing on equations (1) and (2) of that system.

Algebraic Simplex Method - Initial Solution

The Initial Solution: Consider the initial system of equations. Equations (1) and (2) include two more variables than equations, so two of the variables (the nonbasic variables) can be arbitrarily assigned a value of zero in order to obtain a specific solution (the basic solution) for the other two variables (the basic variables). This basic solution is feasible if the value of each basic variable is nonnegative.

The best of the basic feasible solutions is known to be an optimal solution, so the simplex method finds a sequence of better and better basic feasible solutions until it finds the best one.

To begin the simplex method, choose the slack variables to be the basic variables, so x_1 and x_2 are the nonbasic variables set equal to zero. The values of x_3 and x_4 can now be obtained from the system of equations; the resulting basic feasible solution has x_1 = x_2 = 0 with x_3 and x_4 equal to the right-hand sides. Is this solution optimal?

Algebraic Simplex Method - Checking Optimality

Checking for Optimality: To test whether this solution is optimal, we rewrite equation (0) to express Z in terms of the nonbasic variables. Since both x_1 and x_2 have positive coefficients there, Z can be increased by increasing either one of these variables. Therefore, the current basic feasible solution is not optimal, so we need to perform an iteration of the simplex method to obtain a better basic feasible solution. This begins by choosing the entering basic variable (the nonbasic variable chosen to become a basic variable for the next basic feasible solution).

Algebraic Simplex Method - Entering Basic Variable

Selecting an Entering Basic Variable: The entering basic variable is x_1. Why? Again rewrite equation (0) to express Z in terms of the nonbasic variables. The value of the entering basic variable will be increased from 0. Since x_1 has the largest positive coefficient, increasing x_1 will increase Z at the fastest rate, so select x_1. This selection rule tends to minimize the number of iterations needed to reach an optimal solution. You'll see later that this particular problem is an exception where the rule does not minimize the number of iterations.

Algebraic Simplex Method - Leaving Basic Variable

Selecting a Leaving Basic Variable: Choose the basic variable that reaches zero first as the entering basic variable x_1 is increased. Here the basic variable of equation (1) reaches zero first, so it is the leaving basic variable. What if we instead increased x_1 until the basic variable of equation (2) reached zero? The basic variable of equation (1) would by then be negative, resulting in an infeasible solution. Therefore, the basic variable of equation (2) cannot be the leaving basic variable.

Algebraic Simplex Method - Gaussian Elimination

Scaling the Pivot Row: In order to determine the new basic feasible solution, we need to convert the system of equations into proper form by Gaussian elimination. The coefficient of the entering basic variable x_1 in the equation of the leaving basic variable (equation (1)) must be 1. The current value of this coefficient is already 1, so nothing needs to be done to this equation.

Eliminating x_1 from the Other Equations: Next we need to obtain a coefficient of zero for the entering basic variable x_1 in every other equation (equations (0) and (2)). The coefficient of x_1 in equation (0) is -20, so to obtain a coefficient of 0 we add 20 times equation (1) to equation (0). The coefficient of x_1 in equation (2) is 3, so to obtain a coefficient of 0 we subtract 3 times equation (1) from equation (2).
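These two row operations can be reproduced with a generic pivot loop. The tableau below is hypothetical except for the entering-column coefficients (-20, 1, 3) quoted above:

```python
import numpy as np

# Hypothetical tableau; column 0 (the entering variable) carries the
# coefficients -20 (row 0), 1 (row 1, the pivot row), and 3 (row 2).
tableau = np.array([[-20.0, -15.0, 0.0, 0.0,  0.0],   # row 0: Z-row
                    [  1.0,   1.0, 1.0, 0.0,  3.0],   # row 1: pivot row
                    [  3.0,   2.0, 0.0, 1.0, 12.0]])  # row 2

pivot_row, pivot_col = 1, 0
tableau[pivot_row] /= tableau[pivot_row, pivot_col]   # scale pivot to 1 (already 1)
for r in range(tableau.shape[0]):
    if r != pivot_row:
        # row 0: add 20 times row 1;  row 2: subtract 3 times row 1
        tableau[r] -= tableau[r, pivot_col] * tableau[pivot_row]

print(tableau[:, pivot_col])   # entering column becomes [0, 1, 0]
```

After the loop the entering variable appears in exactly one equation with coefficient 1, which is the "proper form from Gaussian elimination" the text refers to.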

Algebraic Simplex Method - Checking Optimality

Checking for Optimality: The new basic feasible solution produced by these row operations yields a larger value of Z. This ends iteration 1. Is the current solution optimal? No. Why? Rewriting equation (0), x_2 still has a positive coefficient, so increasing x_2 from zero will increase Z. So the current basic feasible solution is not optimal: x_2 enters as the new basic variable and x_3 leaves.

6 Proof That It Works

If there is an optimal point, there is an optimal basic feasible solution. There are a finite number of basic feasible solutions. Each iteration of the simplex method moves from one basic feasible solution to a better basic feasible solution, so (barring degeneracy) no solution repeats and the method terminates at an optimal BFS.

Here is the same process using a simplex tableau:

Tabular Simplex Method - Introduction

To demonstrate the simplex method in tabular form, consider the same linear programming model used to demonstrate the simplex method in algebraic form (see the demo The Simplex Method - Algebraic Form), which yielded the optimal solution (x_1, x_2) = (0, 7).

Tabular Simplex Method - Initial Tableau

Using the Tableau: After introducing the slack variables x_3 and x_4, the initial tableau is formed. Choose the slack variables to be the basic variables, so x_1 and x_2 are the nonbasic variables set to zero. The values of x_3 and x_4 can now be read from the right-hand side column of the simplex tableau, giving the initial basic feasible solution.
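Setting up such a tableau mechanically might look like this. The objective and constraint numbers are invented for illustration; only the structure (negated objective coefficients in row 0, identity block for the slacks) follows the text:

```python
import numpy as np

# Hypothetical two-variable max problem: max Z = 20 x1 + 15 x2
# subject to two <= constraints (all numbers invented).
c = np.array([20.0, 15.0])
A = np.array([[1.0, 1.0],
              [3.0, 2.0]])
b = np.array([3.0, 12.0])

m, n = A.shape
tableau = np.zeros((m + 1, n + m + 1))
tableau[0, :n] = -c                  # row 0 stores Z - 20 x1 - 15 x2 = 0
tableau[1:, :n] = A                  # constraint rows
tableau[1:, n:n + m] = np.eye(m)     # slack variables x3, x4
tableau[1:, -1] = b                  # right-hand side column

# Initial BFS: slacks basic, x1 = x2 = 0, so x3, x4 read off the RHS column.
print(tableau[1:, -1])
```

Because the slack columns form an identity matrix, the initial basis matrix is B = I and the starting BFS is immediate.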

Tabular Simplex Method - Entering Basic Variable

Selecting an Entering Basic Variable: The entering basic variable is x_1. Why? It is the variable with the largest (in absolute value) negative coefficient in row 0 (the equation (0) row).

Tabular Simplex Method - Leaving Basic Variable

Selecting a Leaving Basic Variable: The leaving basic variable is the basic variable of row 1. Why? Apply the minimum ratio test: for each row with a positive coefficient in the entering column, divide the right-hand side by that coefficient, and choose the row with the smallest ratio.

Tabular Simplex Method - Gaussian Elimination

Scaling the Pivot Row: Although it is not needed in this case (the pivot number is already 1), the pivot row is normally divided by the pivot number.

Eliminating the Entering Variable from the Other Rows: Add 20 times row 1 to row 0, and subtract 3 times row 1 from row 2. Why? To obtain a new value of zero for the coefficient of the entering basic variable in every other row of the simplex tableau.

Tabular Simplex Method - Checking Optimality

Checking for Optimality: This ends iteration 1. Is the current basic feasible solution optimal? No. Why? Because row 0 still includes at least one negative coefficient.

7 Testing for Optimality

Suppose we have a basic feasible solution with the LP written in partitioned form:

    maximize  c_B^T x_B + c_N^T x_N
    s.t.      B x_B + N x_N = b
              x_B, x_N >= 0

Algebraically manipulating the program, we can solve for x_B:

    B x_B + N x_N = b
    x_B = B^{-1} (b - N x_N)

Then rearranging c^T x:

    c^T x = c_B^T x_B + c_N^T x_N
          = c_B^T B^{-1} (b - N x_N) + c_N^T x_N
          = c_B^T B^{-1} b - (c_B^T B^{-1} N - c_N^T) x_N
          = z_0 - sum_{j in N} (z_j - c_j) x_j

where

    z_0 = c_B^T B^{-1} b,    z_j = c_B^T B^{-1} A_j for j in N.

We can now see that

    max c^T x = z_0 - sum_{j in N} (z_j - c_j) x_j,

so to test optimality we can check whether z_j - c_j >= 0 for all j in N. Note that if the feasible region is unbounded in the direction of the entering variable, every entry of the entering column is zero or negative, so the ratio test gives no limit on the increase.

For a given problem there may be no obvious feasible solution, yet we need one to begin the simplex algorithm. To find an initial solution (a basic feasible solution) we want to find ANY solution to Ax = b (the solution does not need to be optimal). To do this, we choose a new set of artificial variables y and

    minimize sum_{k=1}^{m} y_k    (equivalently, maximize -sum_{k=1}^{m} y_k)

subject to Ax + Ī y = b, where Ī is diagonal with a +1 in row i wherever b_i >= 0, and a -1 otherwise. For example, if b = (2, 3, 5, 4)^T, then every b_i >= 0 and Ī is the 4x4 identity matrix.
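The optimality test z_j - c_j >= 0 can be computed directly. The LP data below is a made-up standard-form maximization (not the lecture's example), with the slack variables forming the initial basis:

```python
import numpy as np

# Reduced costs z_j - c_j = c_B^T B^{-1} A_j - c_j for a hypothetical
# standard-form max LP (numbers invented for illustration).
c = np.array([3.0, 2.0, 0.0, 0.0])           # objective, including slacks
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basis = [2, 3]                               # slack basis: x3, x4
B_inv = np.linalg.inv(A[:, basis])
c_B = c[basis]

z = c_B @ B_inv @ A                          # z_j = c_B^T B^{-1} A_j for each column
reduced = z - c                              # z_j - c_j
# Optimal iff z_j - c_j >= 0 for all nonbasic j; here x1 and x2 fail the test.
print(reduced)
```

With the all-slack basis, c_B = 0, so the reduced costs are just -c and any variable with a positive objective coefficient is eligible to enter.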

2. Phase 2: The objective of this phase is to find an optimal solution for the real problem. Since the artificial variables are not part of the real problem, these variables can now be dropped. Starting from the BF solution obtained at the end of phase 1, use the simplex method to solve the real problem.

The phase 1 problem is

    min sum_i y_i                (1)
    s.t. [A Ī](x, y) = b         (2)
         x, y >= 0               (3)

where Ī is the identity matrix, but with -1 wherever b_i < 0. Then we have a starting solution with x = 0, y_i = |b_i|. Solving this LP will give a solution with y = 0, as long as the initial problem is feasible. The corresponding x values form a starting solution to the original LP.

8.2 The Big-M Approach

The big-M approach uses the same intuition, but solves the problem in one stage rather than two phases:

    max c^T x - M sum_i y_i      (4)
    s.t. [A Ī](x, y) = b         (5)
         x, y >= 0               (6)

where M is a very large positive constant. Again use the starting solution x = 0, y_i = |b_i|, and solve. If the initial problem is feasible, the solution will be found with y = 0, and x will satisfy the original constraints.
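Constructing the phase 1 data for a small hypothetical system (numbers invented) and checking the obvious starting solution:

```python
import numpy as np

# Hypothetical system Ax = b with one negative right-hand side entry.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, -2.0])

m, n = A.shape
I_bar = np.diag(np.where(b >= 0, 1.0, -1.0))          # +1 where b_i >= 0, else -1
A_phase1 = np.hstack([A, I_bar])                      # constraint matrix [A  Ī]
c_phase1 = np.concatenate([np.zeros(n), np.ones(m)])  # phase 1: min sum of y

# Obvious starting BFS: x = 0, y_i = |b_i| (nonnegative by construction).
x0 = np.concatenate([np.zeros(n), np.abs(b)])
print(A_phase1 @ x0)   # equals b, so the starting point is feasible
```

The sign pattern of Ī is what lets y_i = |b_i| satisfy the constraints even when b_i is negative; running the simplex method on (c_phase1, A_phase1, b) from x0 would then drive the artificial variables to zero whenever the original system is feasible.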