IE 5531: Engineering Optimization I


IE 5531: Engineering Optimization I. Lecture 3: Linear Programming, Continued. Prof. John Gunnar Carlsson, September 15, 2010.

Pop quiz: Write the region above in the form Ax ≤ b. For points A, B, and C, give a vector c such that c^T x is minimized at that point (or explain why none exists).

Administrivia: Lecture slides 1, 2, 3 posted at http://www.tc.umn.edu/~jcarlsso/syllabus.html. PS 1 posted this evening. Xi Chen's office hours: Tuesdays 10:00-12:00, ME 1124, Table B.

Today: Linear Programming, continued. Topics: linearization; mathematical preliminaries; the simplex method.

Recap: A linear program (LP) is a mathematical optimization problem in which the objective function and all constraint functions are linear:

minimize 2x_1 − x_2 + 4x_3
s.t.  x_1 + x_2 + x_4 ≤ 2
      3x_2 − x_3 = 5
      x_3 + x_4 ≥ 3
      x_1 ≥ 0
      x_3 ≤ 0

2-dimensional LPs: If x ∈ R^2, it is easy to solve a linear program. Consider the problem

minimize −x_1 − x_2
s.t.  x_1 + 2x_2 ≤ 3
      2x_1 + x_2 ≤ 3
      x_1, x_2 ≥ 0

How do we solve this?
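Since a 2-D LP optimum (when one exists) occurs at a corner, the graphical method can be mimicked numerically: intersect pairs of constraint boundaries, discard infeasible points, and compare objective values. A minimal sketch in Python for the example above; the helper names are illustrative, not from the course:

```python
from itertools import combinations

# All constraints written as a·x <= b (two structural constraints plus x1, x2 >= 0).
A = [(1.0, 2.0), (2.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [3.0, 3.0, 0.0, 0.0]
c = (-1.0, -1.0)  # objective: minimize -x1 - x2

def intersect(i, j):
    """Solve the 2x2 system a_i·x = b_i, a_j·x = b_j by Cramer's rule."""
    (a11, a12), (a21, a22) = A[i], A[j]
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None  # parallel boundary lines, no unique intersection
    x1 = (b[i] * a22 - a12 * b[j]) / det
    x2 = (a11 * b[j] - b[i] * a21) / det
    return (x1, x2)

def feasible(x):
    return all(ai[0] * x[0] + ai[1] * x[1] <= bi + 1e-9 for ai, bi in zip(A, b))

# Candidate corners = feasible intersections of constraint boundaries.
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]
best = min(vertices, key=lambda x: c[0] * x[0] + c[1] * x[1])
print(best)  # the optimum is the corner (1, 1), with objective value -2
```

This brute-force enumeration is exponential in general; the simplex method later in the lecture visits corners selectively instead.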

The graphical method: Draw half-spaces corresponding to the constraints.


The graphical method: Draw level sets of the objective function (they're lines orthogonal to c).

Active constraints: We say a constraint a^T x ≤ b is active at a point x if a^T x = b. In the previous example we had two active constraints: x_1 + 2x_2 = 3 and 2x_1 + x_2 = 3, while x_1, x_2 > 0.

Facts about LP: All LP problems fall into one of three classes:
- Infeasible: the feasible region is empty.
- Unbounded: the objective function can be decreased without bound over the feasible region.
- Feasible and bounded: there exists an optimal solution x*.
In the last case, there may be a unique optimal solution or multiple optimal solutions; all optimal solutions lie on a face of the feasible region; there is always at least one corner optimizer if the face has a corner; and if a corner point is not worse than its neighboring corners, then it is optimal.

Linearizing a problem: LP can also be used to model certain non-linear problems. A convex function is a function f(·): R^n → R satisfying f(λx + (1−λ)y) ≤ λf(x) + (1−λ)f(y) for all x, y ∈ R^n and λ ∈ [0,1] (bowl-shaped). A concave function is a function f(·): R^n → R satisfying f(λx + (1−λ)y) ≥ λf(x) + (1−λ)f(y) for all x, y ∈ R^n and λ ∈ [0,1] (hill-shaped). We claim that any piecewise linear convex function can be minimized by solving an LP.

Linearizing a problem: Consider the function f(·) defined by f(x) = max_{i=1,...,m} { c_i^T x + d_i }. It is easy to prove that this function is convex. We can solve the problem minimize f(x) s.t. Ax ≥ b by solving an LP.

Linearizing a problem: The LP is

minimize z
s.t.  c_i^T x + d_i ≤ z,  i ∈ {1,...,m}
      Ax ≥ b

Absolute values: Problems involving absolute values can be handled as well; consider

minimize Σ_{i=1}^n c_i |x_i|
s.t.  Ax ≥ b

Absolute values: The LP is

minimize Σ_{i=1}^n c_i z_i
s.t.  x_i ≤ z_i,   i ∈ {1,...,n}
      −x_i ≤ z_i,  i ∈ {1,...,n}
      Ax ≥ b
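The lifted variables z_i act as upper bounds on |x_i|, so minimizing Σ c_i z_i drives each z_i down to |x_i|; note this reformulation is exact only when each c_i ≥ 0. A quick numerical check of that dominance-and-attainment argument, with illustrative nonnegative costs:

```python
import random

c = [2.0, 3.0]          # illustrative costs; the reformulation requires c_i >= 0

def f(x):               # original objective: sum_i c_i |x_i|
    return sum(ci * abs(xi) for ci, xi in zip(c, x))

def g(x, z):            # lifted LP objective: sum_i c_i z_i
    return sum(ci * zi for ci, zi in zip(c, z))

random.seed(1)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in c]
    # z = |x| is feasible (z_i >= x_i and z_i >= -x_i) and matches f exactly...
    assert abs(g(x, [abs(v) for v in x]) - f(x)) < 1e-9
    # ...while any other feasible z can only increase the objective.
    z = [abs(v) + random.uniform(0, 3) for v in x]
    assert g(x, z) >= f(x) - 1e-9
print("lifted objective dominates and attains sum_i c_i |x_i|")
```

So at any optimum of the lifted LP, z_i = |x_i| holds componentwise and the two problems have the same value.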


Data fitting: Consider the unconstrained problem of minimizing the largest residual:

minimize max_i |b_i − a_i^T x|

where a_i and b_i are given, for i ∈ {1,...,m}. The LP is

minimize z
s.t.  b_i − a_i^T x ≤ z,      i ∈ {1,...,m}
      −(b_i − a_i^T x) ≤ z,   i ∈ {1,...,m}

Note that we can impose additional linear constraints on x, say Cx ≤ d. We could even impose something like Σ_{i=1}^n |x_i| ≤ q!

Polyhedra. Definition: A polyhedron is a set that can be described in the form {x ∈ R^n : Ax ≤ b}, where A is an m × n matrix and b ∈ R^m. By the equivalence of linear programs, we know that a set of the form {x ∈ R^n : Ax = b, x ≥ 0} is also a polyhedron.

Boundedness. Definition: A set S ⊆ R^n is bounded if there exists a constant K such that S is contained in a ball of radius K. Note: a linear program can be bounded but have an unbounded feasible set! However, if a linear program has a bounded feasible set, it must be bounded.

Hyperplanes and half-spaces: Let a ∈ R^n be nonzero and let b be a scalar. Definition: The set {x : a^T x = b} is called a hyperplane. Definition: The set {x : a^T x ≤ b} is called a half-space.

Convex sets. Definition: A set S ⊆ R^n is convex if, for any x, y ∈ S and any λ ∈ [0,1], we have λx + (1−λ)y ∈ S. Intuitively, this means that the line segment between two points in the set must also lie in the set.

Facts about convex sets: The intersection of convex sets is convex. Every polyhedron is convex. The sub-level set of a convex function is convex (converse?).

Linear independence: We say a set of vectors x_1, ..., x_k is linearly dependent if there exist real numbers a_1, ..., a_k, not all zero, such that a_1 x_1 + ... + a_k x_k = 0. If no such real numbers exist, we say that x_1, ..., x_k are linearly independent. If x_1, ..., x_n ∈ R^n are linearly independent, then the matrix (x_1, ..., x_n) is invertible.

Real functions. Weierstrass theorem: a continuous function f(x) defined on a compact (closed and bounded) region S ⊆ R^n has a minimizer in S.

Gradient: The gradient of a function f(x): R^n → R is the vector ∇f = (∂f/∂x_1, ..., ∂f/∂x_n)^T. The gradient vector always points in the direction in which the function increases fastest. The gradient of a linear function c^T x is c.

Extreme points. Definition: Let P be a polyhedron. A point x ∈ P is said to be an extreme point of P if we cannot find two vectors y, z ∈ P, both different from x, and a scalar λ ∈ [0,1], such that x = λy + (1−λ)z. In other words, x does not lie on the line segment between two other points in P. Definition: Let P be a polyhedron. A point x ∈ P is said to be a vertex of P if there exists some c such that c^T x < c^T y for all y ∈ P not equal to x. The vector c is said to define a supporting hyperplane to P at x. Vertices and extreme points are the same thing!

Algorithmic interpretation of extreme points: We gave two geometric interpretations of vertices/extreme points. However, this does not suggest how a computer might find an extreme point. How can a computer recognize a vertex? How can we make a computer tell that two corners are neighboring? How can we make a computer terminate and declare optimality? How can we recognize vertices/extreme points directly from the polyhedron {x : Ax = b, x ≥ 0}?

Basic feasible solution: Consider a polyhedron defined by {x : Ax = b, x ≥ 0}, where A is an m × n matrix and b ∈ R^m. What describes the extreme points? Select m linearly independent columns of A, denoted by the index set B, and solve A_B x_B = b. Then set all other variables x_N to 0. If all entries of x_B are ≥ 0, then x is called a basic feasible solution (BFS). A basic feasible solution is the same thing as a corner or extreme point; this is an algebraic description rather than a geometric one.

Example: Consider the polyhedron {x : Ax = b, x ≥ 0}, where

A = ( 5 6 7 5 ; 4 6 9 8 ),  b = ( 17 ; 16 )

If we take B = {1,2}, then we solve ( 5 6 ; 4 6 ) x_B = ( 17 ; 16 ) and find that x_B = (1; 2). Thus the BFS is x = (1; 2; 0; 0). However, if we take B = {2,3}, then we solve ( 6 7 ; 6 9 ) x_B = ( 17 ; 16 ) and find that x_B = (3.42; −0.5), which has a negative entry and so is not a BFS.

Example: We can enumerate all of the basic solutions of the polyhedron {x : Ax = b, x ≥ 0}, where A = ( 5 6 7 5 ; 4 6 9 8 ) and b = ( 17 ; 16 ), by choosing all subsets B:

B     {1,2}   {1,3}         {1,4}       {2,3}         {2,4}          {3,4}
x_B   (1; 2)  (2.41; 0.70)  (2.8; 0.6)  (3.42; −0.5)  (3.11; −0.33)  (5.09; −3.73)
BFS?  Y       Y             Y           N             N              N
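The enumeration above is easy to reproduce: for each pair of columns B, solve the 2×2 system A_B x_B = b and test nonnegativity. A sketch (Cramer's rule; 0-based column indices, so the slide's basis {1,2} is (0,1) here):

```python
from itertools import combinations

# Polyhedron {x : Ax = b, x >= 0} from the slide's example.
A = [[5.0, 6.0, 7.0, 5.0],
     [4.0, 6.0, 9.0, 8.0]]
b = [17.0, 16.0]

def basic_solution(B):
    """Solve A_B x_B = b for a 2-column basis B via Cramer's rule."""
    i, j = B
    a11, a12 = A[0][i], A[0][j]
    a21, a22 = A[1][i], A[1][j]
    det = a11 * a22 - a12 * a21
    xi = (b[0] * a22 - a12 * b[1]) / det
    xj = (a11 * b[1] - b[0] * a21) / det
    return xi, xj

for B in combinations(range(4), 2):
    xB = basic_solution(B)
    tag = "BFS" if min(xB) >= 0 else "not a BFS (negative entry)"
    print({k + 1 for k in B}, [round(v, 2) for v in xB], tag)
```

Running this reproduces the table: bases {1,2}, {1,3}, {1,4} give basic feasible solutions, while {2,3}, {2,4}, {3,4} give basic solutions with a negative component.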

The simplex method: One way to solve a linear program is simply to write out all of the BFSs, although that would clearly be slow. A better strategy is to start at a BFS and move to a better neighboring BFS if one is available. If no better neighboring BFS exists, we're done! How do we identify a neighboring BFS?

Neighboring basic solutions: Two basic solutions are neighboring, or adjacent, if they differ by exactly one basic (equivalently, one nonbasic) variable. A basic feasible solution is optimal if no better neighboring feasible solution exists. How do we check whether this is true?


Optimality test: Consider the BFS (0, 0, 1, 1, 1.5); is this optimal for the problem

minimize −x_1 − 2x_2
s.t.  x_1 + x_3 = 1
      x_2 + x_4 = 1
      x_1 + x_2 + x_5 = 1.5
      x_1, x_2, x_3, x_4, x_5 ≥ 0

No, it isn't; the basic set is {3,4,5}; if we increase x_1 while decreasing x_3 and x_5, the objective function decreases. Thus a better basic set has 1 in it, and we should remove 3 or 5 (we don't know which one yet).


Optimality test: Consider the BFS (0, 0, 1, 1, 1.5); is this optimal for the problem

minimize x_1 + 2x_2
s.t.  x_1 + x_3 = 1
      x_2 + x_4 = 1
      x_1 + x_2 + x_5 = 1.5
      x_1, x_2, x_3, x_4, x_5 ≥ 0

Yes, it is; our basic set is {3,4,5} and the objective value is 0. If we bring any nonbasic index into the basis, the objective function becomes positive.

LP canonical form: A standard-form LP is said to be in canonical form at a basic feasible solution if: the objective coefficients of all the basic variables are zero; and the constraint matrix for the basic variables forms an identity matrix (after a permutation if necessary). If the LP is in canonical form, then it is easy to tell whether the current BFS is optimal. Can we always transform an LP problem into an equivalent LP in canonical form?

Transforming to canonical form: Consider the constraint Ax = b and suppose that A = (A_1, A_2) and x = (y; z). It is therefore the case that Ax = A_1 y + A_2 z = b.

Transforming to canonical form: If we let y = x_B and z = x_N, then we find that

A_B x_B + A_N x_N = b
A_B x_B = b − A_N x_N
x_B = A_B^{-1} (b − A_N x_N)

(ignore the fact that x_N = 0 for now). The objective function is

c^T (x_B; x_N) = c_B^T x_B + c_N^T x_N
              = c_B^T (A_B^{-1} b − A_B^{-1} A_N x_N) + c_N^T x_N
              = c_B^T A_B^{-1} b + (c_N^T − c_B^T A_B^{-1} A_N) x_N

An equivalent LP: We can ignore the constant term c_B^T A_B^{-1} b, which doesn't contribute to the optimization. The alternative LP is

minimize r^T x
s.t.  Ā x = b̄,  x ≥ 0

where r_B = 0, r_N = c_N − A_N^T (A_B^{-1})^T c_B, Ā = A_B^{-1} A, and b̄ = A_B^{-1} b.

Optimality testing: Note that

c − Ā^T c_B = c − A^T (A_B^{-1})^T c_B
            = c − (A_B, A_N)^T (A_B^{-1})^T c_B
            = ( c_B − A_B^T (A_B^{-1})^T c_B ; c_N − A_N^T (A_B^{-1})^T c_B )
            = ( 0 ; c_N − A_N^T (A_B^{-1})^T c_B )
            = r

Optimality test: The vector r = c − Ā^T c_B = c − A^T (A_B^{-1})^T c_B is called the reduced cost (coefficient) vector. We often write y = (A_B^{-1})^T c_B, so that r = c − A^T y. Note that if r_N ≥ 0 (equivalently r ≥ 0) at a BFS with basic variable set B, then the BFS is an optimal basic solution and A_B is an optimal basis.

Example: Consider the example

minimize x_1 + 2x_2 + 3x_3 − x_4
s.t.  x_1 + x_3 = 1
      x_2 + x_4 = 1
      x_1 + x_2 + x_5 = 1.5
      x_1, x_2, x_3, x_4, x_5 ≥ 0

We set B = {1,2,3}, so that x = (0.5, 1, 0.5, 0, 0), with

A_B = ( 1 0 1 ; 0 1 0 ; 1 1 0 ),  A_B^{-1} = ( 0 −1 1 ; 0 1 0 ; 1 1 −1 )

and therefore r_N = c_N − A_N^T (A_B^{-1})^T c_B = (−5; 2); since r_4 < 0, this is not optimal.
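The reduced-cost computation can be checked mechanically: solve A_B^T y = c_B for the simplex multipliers y, then form r_N = c_N − A_N^T y. A sketch for the example above, with a hand-rolled 3×3 solver (the helper names are illustrative):

```python
def solve3(M, rhs):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting."""
    M = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]          # scale pivot row
        for r in range(3):
            if r != c:
                f = M[r][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[r][3] for r in range(3)]

# Basis B = {1,2,3}: rows of A_B^T are the columns x1, x2, x3 of A.
A_B_T = [[1.0, 0.0, 1.0],
         [0.0, 1.0, 1.0],
         [1.0, 0.0, 0.0]]
c_B = [1.0, 2.0, 3.0]
y = solve3(A_B_T, c_B)                 # simplex multipliers y = (A_B^-1)^T c_B

A_N = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # columns of x4, x5
c_N = [-1.0, 0.0]
r_N = [c_N[j] - sum(A_N[i][j] * y[i] for i in range(3)) for j in range(2)]
print(y, r_N)  # y = (3, 4, -2), r_N = (-5, 2): r_4 < 0, so the BFS is not optimal
```

Because r_4 = −5 < 0, bringing x_4 into the basis would improve the objective, confirming the slide's conclusion.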


Simplex tableau: While performing the simplex algorithm, we maintain a simplex tableau that organizes the intermediate canonical-form data:

                [ r^T | −c_B^T b̄ ]
basis indices B [ Ā   |  b̄       ]

What does the upper-right corner represent? Since b̄ = A_B^{-1} b, we have c_B^T b̄ = c_B^T A_B^{-1} b = c_B^T x_B, so the corner holds the negative of the objective function value.

Simplex tableau example: The problem

minimize −x_1 − 2x_2
s.t.  x_1 + x_3 = 1
      x_2 + x_4 = 1
      x_1 + x_2 + x_5 = 1.5
      x_1, x_2, x_3, x_4, x_5 ≥ 0

has the following tableau for B = {3,4,5}:

B | −1 −2  0  0  0 |   0
3 |  1   0  1  0  0 |   1
4 |  0   1  0  1  0 |   1
5 |  1   1  0  0  1 | 1.5

Finding a better neighboring point: If one of the entries of r is negative, our basic set is not optimal. We make an effort to find a better neighboring basic solution (one that differs from the current basic solution in exactly one basic variable), choosing an entering variable whose reduced cost coefficient is negative.

Changing basis: With B = {3,4,5}, the tableau is

B | −1 −2  0  0  0 |   0
3 |  1   0  1  0  0 |   1
4 |  0   1  0  1  0 |   1
5 |  1   1  0  0  1 | 1.5

Try inserting variable x_1 into the basic set; the constraints say

(1; 0; 1) x_1 + (0; 1; 1) x_2 + (1; 0; 0) x_3 + (0; 1; 0) x_4 + (0; 0; 1) x_5 = (1; 1; 1.5)

and since x_2 = 0, this gives

(x_3; x_4; x_5) = (1; 1; 1.5) − (1; 0; 1) x_1

Minimum ratio test: The question is: how much can we increase x_e while the current basic variables remain feasible (non-negative)? This is easy to figure out with the minimum ratio test (MRT):
1. Select the entering variable x_e with reduced cost r_e < 0.
2. If Ā_e ≤ 0, then the problem is unbounded.
3. The MRT: θ = min { b̄_i / Ā_ie : Ā_ie > 0 }.
What does θ represent?

Minimum ratio test: θ represents the largest amount by which x_e can be increased before one (or more) of the current basic variables x_i becomes zero (and leaves the feasible set). Suppose that the minimum ratio is attained by one unique basic variable index o. Then x_e is the entering basic variable and x_o is the outgoing basic variable:

x_o = b̄_o − ā_oe θ = 0
x_i = b̄_i − ā_ie θ > 0  for i ≠ o

Thus the new basic set contains x_e and drops x_o.
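The MRT itself is nearly a one-liner. A sketch applied to the entering column Ā_1 = (1, 0, 1) and b̄ = (1, 1, 1.5) from the tableau above (0-based row indices, so row 0 is the row of basic variable x_3):

```python
def min_ratio_test(col, b_bar):
    """theta = min { b_bar[i] / col[i] : col[i] > 0 }.
    Returns (theta, argmin row) or None if no positive entry (unbounded LP)."""
    ratios = [(b_bar[i] / col[i], i) for i in range(len(col)) if col[i] > 0]
    return min(ratios) if ratios else None

# Entering x1: column (1, 0, 1), right-hand side (1, 1, 1.5).
theta, row = min_ratio_test([1.0, 0.0, 1.0], [1.0, 1.0, 1.5])
print(theta, row)  # theta = 1.0, attained in row 0: basic variable x3 leaves
```

With θ = 1 attained in the x_3 row, x_1 enters and x_3 leaves, matching the slide's earlier observation that 3 or 5 had to go.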

Tie breaking: If the MRT does not give a single index, but instead a set of more than one, we choose one of these arbitrarily. We say that the new basic feasible solution is degenerate because some of the basic variables x_B just happen to be 0. We'll deal with this later; for now, we can pretend that the degenerate zeros are actually some ε > 0 and continue.

The simplex algorithm: Initialize the simplex algorithm with a feasible basic set B, so that x_B ≥ 0. Let N be the remaining indices. Write the simplex tableau.
1. Test for termination. Find r_e = min_{j ∈ N} {r_j}. If r_e ≥ 0, the solution is optimal. Otherwise, determine whether the column Ā_e contains a positive entry. If not, the objective function is unbounded below. Otherwise, let x_e be the entering basic variable.
2. Determine the outgoing variable. Use the MRT to determine the outgoing variable x_o.
3. Update the basic set. Update B and A_B and transform the problem to canonical form. Return to step 1.
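The three steps can be assembled into a compact tableau implementation. A sketch (not the course's reference code) that solves the running example min −x_1 − 2x_2 from the slides, entering on the most negative reduced cost:

```python
def simplex(c, A, b, basis):
    """Tableau simplex for min c^T x s.t. Ax = b, x >= 0,
    starting from a feasible basis (a list of column indices, modified in place)."""
    m, n = len(A), len(c)
    T = [row[:] + [b[i]] for i, row in enumerate(A)]   # tableau rows [A | b]
    # Put the starting basis in canonical form (identity on basic columns).
    for i, j in enumerate(basis):
        T[i] = [v / T[i][j] for v in T[i]]
        for k in range(m):
            if k != i and abs(T[k][j]) > 1e-12:
                f = T[k][j]
                T[k] = [a - f * p for a, p in zip(T[k], T[i])]
    while True:
        # Step 1: reduced costs r_j = c_j - c_B^T A_bar_j; test for termination.
        r = [c[j] - sum(c[basis[i]] * T[i][j] for i in range(m)) for j in range(n)]
        e = min(range(n), key=lambda j: r[j])          # most negative entering column
        if r[e] >= -1e-9:                              # optimal: recover x and value
            x = [0.0] * n
            for i, j in enumerate(basis):
                x[j] = T[i][-1]
            return x, sum(ci * xi for ci, xi in zip(c, x))
        # Step 2: minimum ratio test for the outgoing row.
        ratios = [(T[i][-1] / T[i][e], i) for i in range(m) if T[i][e] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded below")
        _, o = min(ratios)
        # Step 3: pivot to restore canonical form with the new basis.
        basis[o] = e
        T[o] = [v / T[o][e] for v in T[o]]
        for k in range(m):
            if k != o:
                f = T[k][e]
                T[k] = [a - f * p for a, p in zip(T[k], T[o])]

# The slides' running example: min -x1 - 2x2, with slacks x3, x4, x5 as the basis.
c = [-1.0, -2.0, 0.0, 0.0, 0.0]
A = [[1.0, 0.0, 1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 1.0, 0.0],
     [1.0, 1.0, 0.0, 0.0, 1.0]]
b = [1.0, 1.0, 1.5]
x, obj = simplex(c, A, b, basis=[2, 3, 4])
print(x, obj)  # optimum x = (0.5, 1, 0.5, 0, 0), objective value -2.5
```

Two pivots suffice here: x_2 enters and x_4 leaves, then x_1 enters and x_5 leaves, ending at the BFS (0.5, 1, 0.5, 0, 0) with value −2.5. The most-negative rule can cycle on degenerate problems; the tie-breaking slide above hints at the remedies covered later.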

Expanded simplex tableau:

                [ c^T − c_B^T A_B^{-1} A | −c_B^T A_B^{-1} b ]
basis indices B [ A_B^{-1} A             |  A_B^{-1} b       ]