Linear Programming. Murti V. Salapaka, Electrical Engineering Department, University of Minnesota, Twin Cities.


murtis@umn.edu. September 4, 2012.

Linear Programming

The standard Linear Programming (SLP) problem:

  minimize over x ∈ R^n:   c^T x = c_1 x_1 + c_2 x_2 + ... + c_n x_n

  subject to (Ax = b):

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

    x_i ≥ 0 for all i = 1, ..., n.

Define the feasible set of the SLP as

  Λ := { x ∈ R^n : Ax = b, x ≥ 0 }.

The SLP is given by: minimize { c^T x : x ∈ Λ }.

Theorem 1. Consider the following problems:

  µ = min{ c^T z : A_1 z ≤ b_1, A_2 z = b_2 and z ≥ 0 }   (1)

and

  ν = min{ c^T x : Ax = b, x ≥ 0 }   (2)

where (with slight abuse of notation) the data of (2) are

  c = [ c ; 0 ],   A = [ A_1  I ; A_2  0 ]   and   b = [ b_1 ; b_2 ],

i.e. the c of (2) stacks the c of (1) over a zero block, so the slack variables carry zero cost.

Then µ = ν. Moreover, if the optimal solution of (1) is z^o, then an optimal solution x^o of (2) is given by

  x^o = [ z^o ; y^o ]

where y^o ≥ 0, and vice versa.

Proof: Note that as x^o is an optimal solution of (2), it follows that ν = c^T x^o, Ax^o = b and x^o ≥ 0. Partition x^o appropriately as

  x^o = [ z^1 ; y^o ],

where z^1 has the same dimension as the c of (1). Then it follows that

  z^1 ≥ 0,   A_1 z^1 + y^o = b_1,   y^o ≥ 0   and   A_2 z^1 = b_2.

This implies that z^1 ≥ 0, A_1 z^1 ≤ b_1 and A_2 z^1 = b_2, and thus z^1 is a feasible element for the optimization problem of (1). Thus it follows that

  ν = c^T x^o = c^T z^1 ≥ min{ c^T z : A_1 z ≤ b_1, A_2 z = b_2 and z ≥ 0 } = µ.

Conversely, note that as z^o is an optimal solution of (1), it follows that µ = c^T z^o, A_1 z^o ≤ b_1, A_2 z^o = b_2 and z^o ≥ 0. Define y^1 := b_1 − A_1 z^o ≥ 0 and

  x^1 = [ z^o ; y^1 ].

Then it follows that x^1 ≥ 0 and Ax^1 = b, and thus x^1 is a feasible element for the optimization problem of (2). Thus it follows that

  µ = c^T z^o = c^T x^1 ≥ min{ c^T x : Ax = b and x ≥ 0 } = ν.

Together with ν ≥ µ obtained above, this proves µ = ν. Also we have shown that if the optimal solution of (1) is z^o, then an optimal solution x^o of (2) is given by

  x^o = [ z^o ; y^o ]

where y^o ≥ 0, and vice versa.
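The equivalence in Theorem 1 is easy to check numerically: appending slack variables to turn A_1 z ≤ b_1 into equalities leaves the optimal value unchanged. Below is a small sketch using scipy.optimize.linprog (not part of the notes) on example data chosen here for illustration, with no A_2 block:

```python
import numpy as np
from scipy.optimize import linprog

# Example data (chosen for illustration): min c^T z s.t. A1 z <= b1, z >= 0
c = np.array([-1.0, -2.0])
A1 = np.array([[1.0, 1.0],
               [1.0, 3.0]])
b1 = np.array([4.0, 6.0])

# Problem (1): inequality form
res1 = linprog(c, A_ub=A1, b_ub=b1, bounds=[(0, None)] * 2)

# Problem (2): standard form, slack variables y >= 0 with zero cost
A = np.hstack([A1, np.eye(2)])              # A = [A1  I]
c_aug = np.concatenate([c, np.zeros(2)])    # c = [c ; 0]
res2 = linprog(c_aug, A_eq=A, b_eq=b1, bounds=[(0, None)] * 4)
```

Both calls report the same optimal value (here −5), illustrating µ = ν.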

Theorem 2. Consider the following problems:

  µ = min{ [ c_1^T  c_2^T ] [ z ; y ] : A_1 z + A_2 y = b, z ≥ 0 }   (3)

(the variable y is free) and

  ν = min{ c^T x : Ax = b, x ≥ 0 }   (4)

where

  c = [ c_1 ; c_2 ; −c_2 ]   and   A = [ A_1  A_2  −A_2 ].

Then µ = ν. Moreover, if the optimal solution of (3) is (z^o, y^o), then an optimal solution x^o of (4) is given by

  x^o = [ z^o ; u^o ; v^o ],

where y^o = u^o − v^o with u^o, v^o ≥ 0, and vice versa.

Proof: Note that as x^o is an optimal solution of (4), it follows that ν = c^T x^o, Ax^o = b and x^o ≥ 0. Partition x^o appropriately as

  x^o = [ z^1 ; u^o ; v^o ],

where z^1, u^o and v^o have the same dimensions as c_1, c_2 and c_2 respectively. Then it follows that

  z^1 ≥ 0,   A_1 z^1 + A_2 (u^o − v^o) = b,   u^o, v^o ≥ 0.

Let y^1 := u^o − v^o. This implies that

  z^1 ≥ 0,   A_1 z^1 + A_2 y^1 = b,

and thus (z^1, y^1) is a feasible element for the optimization problem of (3). Thus it follows that

  ν = c^T x^o
    = [ c_1^T  c_2^T  −c_2^T ] [ z^1 ; u^o ; v^o ]
    = c_1^T z^1 + c_2^T (u^o − v^o)
    = [ c_1^T  c_2^T ] [ z^1 ; y^1 ]
    ≥ min{ [ c_1^T  c_2^T ] [ z ; y ] : A_1 z + A_2 y = b, z ≥ 0 }
    = µ.

Conversely, note that as (z^o, y^o) is an optimal solution of (3), it follows that

  µ = c_1^T z^o + c_2^T y^o,   A_1 z^o + A_2 y^o = b,   z^o ≥ 0.

Define u^1 and v^1 to satisfy

  u^1(i) := y^o(i) if y^o(i) ≥ 0,   u^1(i) := 0 if y^o(i) < 0,
  v^1(i) := 0 if y^o(i) ≥ 0,   v^1(i) := −y^o(i) if y^o(i) < 0,

for all i = 1, ..., n_y, n_y being the dimension of y. Note that u^1 ≥ 0, v^1 ≥ 0 and y^o = u^1 − v^1. Therefore it follows that

  A_1 z^o + A_2 u^1 − A_2 v^1 = A_1 z^o + A_2 y^o = b.

Thus

  x^1 := [ z^o ; u^1 ; v^1 ] ≥ 0

is a feasible solution for (4). Thus it follows that

  µ = c_1^T z^o + c_2^T y^o = c_1^T z^o + c_2^T u^1 − c_2^T v^1 = c^T x^1 ≥ min{ c^T x : Ax = b and x ≥ 0 } = ν.

Together with ν ≥ µ obtained above, this proves µ = ν. Also we have shown that if the optimal solution of (3) is (z^o, y^o), then an optimal solution x^o of (4) is given by

  x^o = [ z^o ; u^o ; v^o ],

where y^o = u^o − v^o with u^o, v^o ≥ 0, and vice versa.
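Theorem 2 can likewise be checked numerically: splitting a free variable as y = u − v with u, v ≥ 0 leaves the optimal value unchanged. A sketch with scipy.optimize.linprog (not part of the notes), on example data chosen for illustration with one constrained variable z and one free variable y:

```python
import numpy as np
from scipy.optimize import linprog

# Example data (chosen for illustration):
# min c1*z + c2*y  s.t.  A1 z + A2 y = b,  z >= 0,  y free
c1, c2 = np.array([2.0]), np.array([1.0])
A1 = np.array([[1.0], [0.0]])
A2 = np.array([[1.0], [1.0]])
b = np.array([3.0, 1.0])

# Problem (3): y kept as a free variable
res3 = linprog(np.concatenate([c1, c2]),
               A_eq=np.hstack([A1, A2]), b_eq=b,
               bounds=[(0, None), (None, None)])

# Problem (4): split y = u - v with u, v >= 0, so c = [c1; c2; -c2]
c = np.concatenate([c1, c2, -c2])
A = np.hstack([A1, A2, -A2])        # A = [A1  A2  -A2]
res4 = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
```

Both problems report the same optimal value, illustrating µ = ν.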

Feasible solution and optimal solution

Definition 1. Consider the Standard Linear Programming (SLP) problem

  µ = min{ c^T x : Ax = b, x ≥ 0, x ∈ R^n },

where A is an m × n matrix. Any x ∈ R^n that satisfies Ax = b, x ≥ 0 is a feasible solution. If x^o is such that

  µ = c^T x^o,   Ax^o = b   and   x^o ≥ 0,

then x^o is an optimal solution.

Basic solution, basic variables and non-basic variables

Definition 2. Consider the Standard Linear Programming (SLP) problem

  min{ c^T x : Ax = b, x ≥ 0, x ∈ R^n },

where A is an m × n matrix with Rank(A) = m. Suppose

  x = ( x_1, x_2, ..., x_n )^T

is such that only the m elements { x_{k_1}, x_{k_2}, ..., x_{k_m} } can be nonzero, with

  ( x_{k_1}, x_{k_2}, ..., x_{k_m} )^T = B^{-1} b,   B = [ a_{k_1}  a_{k_2}  ...  a_{k_m} ].

Then x is a basic solution of the SLP. The variables { x_{k_1}, x_{k_2}, ..., x_{k_m} } are called the basic variables associated with the matrix B. The variables x_i with i ∉ { k_1, k_2, ..., k_m } are called the non-basic variables. Note that B is a matrix formed by m linearly independent columns of A. A basic solution depends only on A and b, and not on c.

A non-basic variable is set to zero in a basic solution. A basic variable can be zero in a basic solution. There are only finitely many basic solutions associated with A ∈ R^{m×n} and b ∈ R^m.

Definition 3. A basic solution is said to be degenerate if any of the basic variables is zero.

Definition 4. x is said to be a basic feasible solution if x is basic and is feasible.

Definition 5. x is said to be a basic optimal solution if x is basic and is optimal.
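Since there are only finitely many basic solutions, they can all be enumerated by trying every choice of m independent columns of A. A sketch on example data chosen for illustration (the names A, b, basic_solutions and bfs are mine):

```python
import itertools
import numpy as np

# Example data (chosen for illustration): m = 2, n = 4, Rank(A) = 2
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
m, n = A.shape

basic_solutions = []
for cols in itertools.combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:       # need m independent columns
        continue
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)   # basic variables = B^{-1} b
    basic_solutions.append(x)

# Basic feasible solutions are the basic solutions with x >= 0
bfs = [x for x in basic_solutions if (x >= -1e-9).all()]
```

For this A and b every choice of two columns is independent, so there are C(4, 2) = 6 basic solutions, of which 4 are feasible.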

The Fundamental Theorem of Linear Programming

Theorem 3. Consider the optimization problem

  min{ c^T x : Ax = b, x ≥ 0, x ∈ R^n },

where A ∈ R^{m×n} has rank m. Then:

1. If there exists a feasible solution, then there exists a basic feasible solution.
2. If there is an optimal solution, then there is a basic optimal solution.

The SLP has only finitely many basic solutions, so the Fundamental Theorem of Linear Programming asserts that an LP can be solved in a finite number of steps.

Proof of (1): Let x ∈ R^n be a feasible solution, and suppose only p elements of the vector x are nonzero. Without loss of generality assume these variables to be x_1, ..., x_p. Thus x_{p+1} = x_{p+2} = ... = x_n = 0. Note also that Ax = b and x ≥ 0. Let

  A = [ a_1  a_2  ...  a_n ],

where a_i denotes the i-th column of A. Then, as Ax = b, we have

  Σ_{i=1}^{n} a_i x_i = b,   and hence   Σ_{i=1}^{p} a_i x_i = b.

Case 1: Suppose a_1, ..., a_p form an independent set of vectors. Then p ≤ m, as Rank(A) = m. One can add columns a_{i_{p+1}}, ..., a_{i_m} such that

  B = [ a_1  ...  a_p  a_{i_{p+1}}  a_{i_{p+2}}  ...  a_{i_m} ]

has independent columns and thus is invertible. It is evident that x ≥ 0, Ax = b and the nonzero variables are basic variables associated with the matrix B above. Thus it follows that x is a basic feasible solution.

Case 2: Suppose the columns a_1, ..., a_p form a dependent set. Then there exist real numbers y_1, ..., y_p, with at least one element strictly positive, such that

  Σ_{i=1}^{p} y_i a_i = 0.

Let

  y = ( y_1, ..., y_p, 0, ..., 0 )^T ∈ R^n.

It is evident that Ay = 0. Let

  ε := min{ x_i / y_i : y_i > 0, i = 1, ..., p } > 0

and

  z = x − εy.

Then

  z ≥ 0,   Az = A(x − εy) = Ax − εAy = Ax = b,

and z has at most p − 1 nonzero elements. Thus z is a feasible solution with at most p − 1 nonzero elements. This process can be continued to a stage where the nonzero elements of a feasible element are associated with independent columns, and then we revert to Case 1. This proves the first part of the theorem.
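The Case 2 reduction in the proof is constructive: keep removing a direction y supported on the nonzero entries of x until the supporting columns are independent. A sketch (the helper name reduce_to_bfs is mine; example data chosen for illustration):

```python
import numpy as np

def reduce_to_bfs(A, b, x, tol=1e-9):
    """Reduce a feasible solution of Ax = b, x >= 0 to a basic feasible
    solution by repeating the Case 2 step z = x - eps*y of the proof."""
    x = np.asarray(x, dtype=float).copy()
    while True:
        support = np.flatnonzero(x > tol)
        cols = A[:, support]
        # Case 1: supporting columns independent -> already basic feasible
        if np.linalg.matrix_rank(cols) == len(support):
            return x
        # Case 2: pick y in the null space of the supporting columns
        y_s = np.linalg.svd(cols)[2][-1]
        if not (y_s > tol).any():           # ensure a strictly positive entry
            y_s = -y_s
        y = np.zeros_like(x)
        y[support] = y_s
        pos = y > tol
        eps = (x[pos] / y[pos]).min()       # eps = min x_i / y_i over y_i > 0
        x -= eps * y                        # drives some component to zero
        x[np.abs(x) < tol] = 0.0

# Example data (chosen for illustration): x0 is feasible with 4 nonzeros
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
x0 = np.array([1.5, 0.5, 2.0, 3.0])
z = reduce_to_bfs(A, b, x0)
```

Each pass zeroes at least one component of x while preserving Ax = b and x ≥ 0, so the loop terminates with a basic feasible solution.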

Proof of (2): Suppose x̄ is such that

  µ = c^T x̄,   A x̄ = b   and   x̄ ≥ 0,

that is, x̄ is an optimal solution. Suppose only p elements of the vector x̄ are nonzero. Without loss of generality assume these variables to be x̄_1, ..., x̄_p. Thus x̄_{p+1} = x̄_{p+2} = ... = x̄_n = 0. Note also that A x̄ = b and x̄ ≥ 0. Let

  A = [ a_1  a_2  ...  a_n ],

where a_i denotes the i-th column of A. Then, as A x̄ = b, we have

  Σ_{i=1}^{n} a_i x̄_i = b,   and hence   Σ_{i=1}^{p} a_i x̄_i = b.

Case 1: Suppose a_1, ..., a_p form an independent set of vectors. Then p ≤ m, as Rank(A) = m. Using results from linear algebra, one can add columns a_{i_{p+1}}, ..., a_{i_m} such that

  B = [ a_1  ...  a_p  a_{i_{p+1}}  a_{i_{p+2}}  ...  a_{i_m} ]

has independent columns and thus is invertible. It is evident that x̄ ≥ 0, A x̄ = b and the nonzero variables are basic variables associated with the matrix B above. Thus it follows that x̄ is a basic feasible solution. As x̄ is an optimal solution too, x̄ is a basic optimal solution.

Case 2: Suppose the columns a_1, ..., a_p form a dependent set. Then there exist real numbers y_1, ..., y_p, with at least one element strictly positive, such that

  Σ_{i=1}^{p} y_i a_i = 0.

Let

  y = ( y_1, ..., y_p, 0, ..., 0 )^T ∈ R^n.

It is evident that Ay = 0. Let

  δ := min{ x̄_i / |y_i| : y_i ≠ 0, i = 1, ..., p } > 0.

Let ε be any real number such that |ε| < δ. Then

  x̄ − εy ≥ 0   and   A( x̄ − εy ) = b.

Thus x̄ − εy is a feasible solution. Suppose c^T y ≠ 0. Then we can choose ε such that 0 < |ε| < δ and sgn(ε) = sgn(c^T y). Then

  c^T ( x̄ − εy ) = c^T x̄ − ε c^T y < c^T x̄.

As ( x̄ − εy ) is a feasible solution, x̄ cannot be an optimal solution. This is a contradiction, and thus c^T y = 0. Now let

  ε := min{ x̄_i / y_i : y_i > 0, i = 1, ..., p } > 0

and

  z = x̄ − εy.

Then z is a feasible solution with

  c^T z = c^T ( x̄ − εy ) = c^T x̄ − ε c^T y = c^T x̄ = µ,

and thus z is an optimal solution. Also, z has at most p − 1 nonzero elements. This process can be continued to a stage where the only nonzero terms in the optimal solution are associated with independent columns of A. This proves (2).

Definition 6 [Convex sets]. A subset Ω of a vector space X is said to be convex if for any two elements c_1 and c_2 in Ω and for any real number λ with 0 < λ < 1, the element λc_1 + (1 − λ)c_2 ∈ Ω. The empty set {} is assumed to be convex.

Theorem 4. Let Λ_α, α ∈ S, be an arbitrary collection of convex sets. Then

  ∩_{α ∈ S} Λ_α

is a convex set.

Theorem 5. Suppose K and G are convex subsets of a vector space X. Then

  K + G := { x ∈ X : x = x_K + x_G, x_K ∈ K and x_G ∈ G }

is convex.

Definition 7. Let S be an arbitrary subset of a vector space X. Then the convex hull of S is the smallest convex set containing S and is denoted by co(S). Note that

  co(S) = ∩ Λ_α,

where the intersection is taken over all convex sets Λ_α that contain S.

Definition 8 [Convex combination]. A vector of the form Σ_{k=1}^{n} λ_k x_k, where Σ_{k=1}^{n} λ_k = 1 and λ_k ≥ 0 for all k = 1, ..., n, is a convex combination of the vectors x_1, ..., x_n.

Definition 9 [Cones]. A subset C of a vector space X is a cone if for every non-negative α in R and c in C, αc ∈ C. A subset C of a vector space is a convex cone if C is convex and is also a cone.

Definition 10 [Positive cones]. A convex cone P in a vector space X is a positive convex cone if a relation ≥ is defined on X based on P such that for elements x and y in X, x ≥ y if x − y ∈ P. We write x > 0 if x ∈ int(P). Similarly, x ≤ y if x − y ∈ −P =: N, and x < 0 if x ∈ int(N).

Example 1. Consider the real number system R. The set P := { x : x is nonnegative } defines a cone in R. It also induces a relation on R where for any two elements x and y in R, x ≥ y if and only if x − y ∈ P. The convex cone P with the relation ≥ defines a positive cone on R.

Definition 11 [Convex maps]. Let X be a vector space and Z be a vector space with positive cone P. A mapping G : X → Z is convex if

  G( tx + (1 − t)y ) ≤ t G(x) + (1 − t) G(y)

for all x, y in X and t with 0 ≤ t ≤ 1, and is strictly convex if

  G( tx + (1 − t)y ) < t G(x) + (1 − t) G(y)

for all x ≠ y in X and t with 0 < t < 1.

Definition 12 [Extreme points]. Let C be a convex set. Then a ∈ C is said to be an extreme point of the set C if for any x, y ∈ C and 0 < λ < 1,

  λx + (1 − λ)y = a   implies that   x = y = a.

Note that the feasible set of an SLP is given by

  Λ = { x ∈ R^n : Ax = b, x ≥ 0 }.

Clearly, if x and y ∈ Λ, then it follows that

  A( λx + (1 − λ)y ) = λAx + (1 − λ)Ay = λb + (1 − λ)b = b   and   ( λx + (1 − λ)y ) ≥ 0.

Thus ( λx + (1 − λ)y ) ∈ Λ if x and y ∈ Λ, and hence Λ is convex.

Equivalence of extreme points and basic solutions

Theorem 6. Let A be an m × n matrix with Rank(A) = m, and let Λ = { x ∈ R^n : Ax = b, x ≥ 0 }. Then a vector x is an extreme point of Λ if and only if x is a basic feasible solution.

Proof: Suppose x is a basic feasible solution. Assume without loss of generality that the basic variables are the first m elements of x, given by x_i, i = 1, ..., m. Also let

  B = [ a_1  a_2  ...  a_m ].

Then it follows that

  x_1 a_1 + x_2 a_2 + ... + x_m a_m = b,

or in other words,

  ( x_1, ..., x_m )^T = B^{-1} b.

Now suppose 0 < λ < 1 and y, z ∈ Λ are such that

  λy + (1 − λ)z = x.

Thus it follows that λ y_i + (1 − λ) z_i = x_i for all i = 1, ..., n. Note that as x_i = 0 for all i = m+1, ..., n, λ > 0, (1 − λ) > 0 and y, z ≥ 0, it follows that

  y_i = z_i = 0 for all i = m+1, ..., n.

Note that as y and z ∈ Λ, Ay = Az = b, and thus

  y_1 a_1 + ... + y_m a_m = z_1 a_1 + ... + z_m a_m = b.

Thus

  ( y_1, ..., y_m )^T = ( z_1, ..., z_m )^T = B^{-1} b = ( x_1, ..., x_m )^T,

and hence y = z = x. Thus we have shown that every basic feasible solution is an extreme point.

Conversely, suppose x is an extreme point of the set Λ. Without loss of generality assume that x_1, ..., x_p are the only nonzero elements of x. Suppose a_1, ..., a_p form a dependent set. Then there exist scalars y_1, ..., y_p, not all zero, such that

  y_1 a_1 + ... + y_p a_p = 0.

Let

  y = ( y_1, ..., y_p, 0, ..., 0 )^T ∈ R^n

and note that

  ε := min{ x_i / |y_i| : i ∈ {1, ..., p} and y_i ≠ 0 } > 0.

Then

  x − εy ≥ 0,   x + εy ≥ 0,   A( x − εy ) = b   and   A( x + εy ) = b.

Also,

  x = (1/2)( x − εy ) + (1/2)( x + εy ).

Clearly

  x ≠ x − εy   and   x ≠ x + εy,

as y ≠ 0 and ε ≠ 0. Thus we have written x as a non-trivial convex combination of ( x − εy ) and ( x + εy ), so x is not an extreme point. This is a contradiction, and therefore a_1, ..., a_p are independent. Thus x is a basic feasible solution.

The following corollaries follow easily from Theorem 3 and Theorem 6.

Corollary 1. Suppose Rank(A) = m, where A is an m × n matrix. The feasible set of the SLP, Λ = { x ∈ R^n : Ax = b and x ≥ 0 }, is nonempty if and only if there exists an extreme point of Λ.

Corollary 2. Suppose Rank(A) = m, where A is an m × n matrix. Let the feasible set of the SLP be Λ = { x ∈ R^n : Ax = b and x ≥ 0 }. Then an optimal solution to the SLP exists if and only if an optimal solution to the SLP that is also an extreme point of Λ exists.

Which basic variable becomes non-basic

Consider the SLP with

  Λ = { x ∈ R^n : Ax = b, x ≥ 0 }

as the feasible set. Let us assume that the first m columns of A are independent, and that the basic solution x associated with the first m columns is feasible, i.e. x ≥ 0. Thus

  x_1 a_1 + x_2 a_2 + ... + x_m a_m = b,   x_{m+1} = x_{m+2} = ... = 0,   and x_i > 0 for all i = 1, ..., m.

The columns a_1, a_2, ..., a_m are the basic columns. Suppose it is determined that a non-basic column a_q should become a basic column. Then we have to determine which column has to leave the basic set. Note that a_1, a_2, ..., a_m form a basis for R^m. Therefore we can write

  a_q = y_{1q} a_1 + ... + y_{mq} a_m.

Suppose the variable associated with a_q is increased from 0 to ε > 0 while keeping all other non-basic variables 0. Then, to maintain feasibility, the coefficients x̃_i of the initial basic set a_1, ..., a_m must satisfy

  x̃_1 a_1 + x̃_2 a_2 + ... + x̃_m a_m + ε a_q = b,

where x̃_i, i = 1, ..., m, have to be determined. Clearly

  x̃_1 a_1 + x̃_2 a_2 + ... + x̃_m a_m = b − ε a_q = Σ_{i=1}^{m} x_i a_i − ε Σ_{i=1}^{m} y_{iq} a_i = Σ_{i=1}^{m} ( x_i − ε y_{iq} ) a_i.

This implies that

  Σ_{i=1}^{m} [ x̃_i − ( x_i − ε y_{iq} ) ] a_i = 0.

Thus x̃_i = x_i − ε y_{iq} for all i = 1, ..., m, and we have

  Σ_{i=1}^{m} ( x_i − ε y_{iq} ) a_i + ε a_q = b.

Thus we have a solution to the equation Az = b given by

  x̃ = ( x_1 − ε y_{1q}, ..., x_m − ε y_{mq}, 0, ..., 0, ε, 0, ..., 0 )^T,

where the ε is the q-th element.

Feasibility of x̃

Note that A x̃ = b, where

  x̃ = ( x_1 − ε y_{1q}, ..., x_m − ε y_{mq}, 0, ..., 0, ε, 0, ..., 0 )^T.

To be feasible, x̃ ≥ 0; this may be satisfied by an appropriate choice of ε. Indeed, there are two possible cases for the feasibility of x̃.

Case 1: y_{iq} ≤ 0 for all i = 1, ..., m. In this case x̃ is feasible (that is, x̃ ∈ Λ) for any ε > 0. Also we can conclude that Λ is an unbounded set.

Case 2: There exists at least one i_0 ∈ {1, 2, ..., m} such that y_{i_0 q} > 0. In this case x̃ is feasible for any 0 ≤ ε ≤ ε_M, with

  ε_M = min_{i=1,...,m} { x_i / y_{iq} : y_{iq} > 0 } ≥ 0.
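The bound ε_M of Case 2 is the classical minimum-ratio test. A sketch (the helper name ratio_test is mine):

```python
import numpy as np

def ratio_test(x_B, y_q):
    """Largest step eps_M keeping x_B - eps*y_q >= 0, and the index p of a
    basic variable attaining it.  Returns (inf, None) in Case 1
    (y_q <= 0 componentwise: the feasible set is unbounded)."""
    pos = y_q > 0
    if not pos.any():
        return np.inf, None                 # Case 1
    ratios = np.full(len(y_q), np.inf)
    ratios[pos] = x_B[pos] / y_q[pos]       # x_i / y_iq over y_iq > 0
    p = int(np.argmin(ratios))
    return ratios[p], p                     # Case 2: eps_M and minimizing index

eps_M, p = ratio_test(np.array([4.0, 6.0]), np.array([1.0, 3.0]))
eps_unb, p_unb = ratio_test(np.array([4.0, 6.0]), np.array([-1.0, 0.0]))
```

In the first call ε_M = min(4/1, 6/3) = 2 and p picks the second basic variable; the second call hits Case 1.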

Making x̃ a basic feasible solution with a_q basic

Recall that

  x̃ = ( x_1 − ε y_{1q}, ..., x_m − ε y_{mq}, 0, ..., 0, ε, 0, ..., 0 )^T.

Case 1: If y_{iq} ≤ 0 for all i = 1, ..., m, then it is not possible to make x_i − ε y_{iq} = 0 for any ε if x_i > 0. If the initial basic feasible solution was degenerate with x_k = 0 for some k ∈ {1, ..., m}, then by choosing ε = 0 one can swap the roles of a_k and a_q; however, in this case the solution x̃ = x.

Case 2: There exists at least one i_0 ∈ {1, 2, ..., m} such that y_{i_0 q} > 0. In this case let ε = ε_M, with

  ε_M = min_{i=1,...,m} { x_i / y_{iq} : y_{iq} > 0 } ≥ 0.

Let p be a minimizing index above. Then a_p can be replaced with a_q in the basic column set. Note again that if x_p = 0, then the new solution x̃ = x.

Boundedness and non-degeneracy assumption

  x̃ = ( x_1 − ε y_{1q}, ..., x_m − ε y_{mq}, 0, ..., 0, ε, 0, ..., 0 )^T.

If we assume that Λ is bounded and that x is a non-degenerate basic feasible solution, the following steps can be performed to determine which basic column leaves the basic column set to allow a_q to enter:

1. Compute

  ε_M = min_{i=1,...,m} { x_i / y_{iq} : y_{iq} > 0 } > 0,

with

  p = arg min_{i=1,...,m} { x_i / y_{iq} : y_{iq} > 0 }.

2. Let

  x̃ = ( x_1 − ε_M y_{1q}, ..., x_m − ε_M y_{mq}, 0, ..., 0, ε_M, 0, ..., 0 )^T.

Note that x̃ ≥ 0 and its p-th element is zero. The only possible nonzero elements belong to the set {1, ..., p−1, p+1, ..., m} ∪ {q}.

Effect on the cost of changing a basic feasible solution

Suppose x_bfs is a basic feasible solution. We will also assume that

  A = [ e_1  ...  e_m  y_{m+1}  ...  y_n ]   and   b = y_{n+1}.

In this case we will have

  x_bfs = [ y_{n+1} ; 0_{n−m} ].

The cost associated with this basic feasible solution is

  z_bfs = c^T x_bfs = c_B^T y_{n+1},

where c_B = ( c_1, ..., c_m )^T. Suppose x is another feasible solution. How does the cost c^T x compare with c^T x_bfs? As x is feasible, Ax = b = y_{n+1}. Thus

  x_1 e_1 + ... + x_m e_m + x_{m+1} y_{m+1} + ... + x_n y_n = y_{n+1}.

Thus

  Σ_{i=1}^{m} x_i e_i = y_{n+1} − Σ_{i=m+1}^{n} x_i y_i,

and applying c_B^T to both sides,

  c_B^T ( Σ_{i=1}^{m} x_i e_i ) = c_B^T y_{n+1} − c_B^T ( Σ_{i=m+1}^{n} x_i y_i ).

Defining z_i := c_B^T y_i, this reads

  Σ_{i=1}^{m} x_i c_i = z_bfs − Σ_{i=m+1}^{n} x_i z_i.

Adding Σ_{i=m+1}^{n} x_i c_i to both sides,

  Σ_{i=1}^{n} x_i c_i = z_bfs − Σ_{i=m+1}^{n} x_i z_i + Σ_{i=m+1}^{n} x_i c_i,

that is,

  c^T x = z_bfs + Σ_{i=m+1}^{n} x_i ( c_i − z_i ),

so that

  c^T x − c^T x_bfs = Σ_{i=m+1}^{n} x_i ( c_i − z_i )   (the difference in cost).

Thus, if c_i − z_i ≥ 0 for all i = m+1, ..., n, then x_bfs is optimal.
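This optimality test is easy to compute: with z_i = c_B^T y_i, the quantities c_i − z_i are the reduced costs. A sketch on example data chosen for illustration, with A in the assumed canonical form [I Y]:

```python
import numpy as np

# Example data (chosen for illustration): A = [I  Y], first m = 2 columns basic
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
c = np.array([1.0, 1.0, -1.0, -2.0])
m = 2

c_B = c[:m]
z = c_B @ A                  # z_i = c_B^T y_i, one entry per column of A
reduced_costs = c - z        # c_i - z_i; zero on the basic columns
is_optimal = (reduced_costs >= 0).all()
```

Here the reduced costs are (0, 0, −3, −6), so this basic feasible solution is not optimal and the cost can still be decreased.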

The Simplex Method

Consider the following SLP:

  Minimize c_1 x_1 + c_2 x_2 + ... + c_n x_n
  subject to:
    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m
    x_1, x_2, ..., x_n ≥ 0.

Let us assume that the matrix A has rank m. We will assume that A is such that a_i = e_i for i = 1, ..., m and a_i = y_i for i = m+1, ..., n. We will denote the vector b by y_{n+1}. Thus we have

  min [ c_1  c_2  ...  c_m  c_{m+1}  ...  c_n ] x

subject to

  [ 1 0 ... 0  y_{1,m+1} ... y_{1n} ]       [ y_{1,n+1} ]
  [ 0 1 ... 0  y_{2,m+1} ... y_{2n} ]  x =  [ y_{2,n+1} ]
  [ ............................... ]       [ ...       ]
  [ 0 0 ... 1  y_{m,m+1} ... y_{mn} ]       [ y_{m,n+1} ]

  x = ( x_1, x_2, ..., x_n )^T ≥ 0.

Let us assume that y_{1,n+1}, ..., y_{m,n+1} ≥ 0. Then for the above table a basic feasible solution is given by

  x_1 = y_{1,n+1}, ..., x_m = y_{m,n+1},   x_{m+1} = 0, ..., x_n = 0.

Suppose it is decided that x_q will enter the basic set.

Consider the table:

  [ 1 0 ... 0  y_{1,m+1} ... y_{1n} | y_{1,n+1} ]
  [ 0 1 ... 0  y_{2,m+1} ... y_{2n} | y_{2,n+1} ]
  [ ................................|.......... ]
  [ 0 0 ... 1  y_{m,m+1} ... y_{mn} | y_{m,n+1} ]
  [ c_1 c_2 ... c_m  c_{m+1} ... c_n | 0        ]

Suppose we perform the following operation:

  [ row (m+1) ] ← [ row (m+1) ] − c_1 [ row 1 ] − c_2 [ row 2 ] − ... − c_m [ row m ].

Then we have the table:

  [ 1 0 ... 0  y_{1,m+1} ... y_{1n} | y_{1,n+1} ]
  [ 0 1 ... 0  y_{2,m+1} ... y_{2n} | y_{2,n+1} ]
  [ ................................|.......... ]
  [ 0 0 ... 1  y_{m,m+1} ... y_{mn} | y_{m,n+1} ]
  [ 0 0 ... 0  c_{m+1} − Σ_{i=1}^{m} c_i y_{i,m+1}  ...  c_n − Σ_{i=1}^{m} c_i y_{in} | −Σ_{i=1}^{m} c_i y_{i,n+1} ]

Note that the entries of the last row are

  r_i = c_i − c_B^T y_i for all i = 1, ..., n,   and   r_{n+1} = −Σ_{i=1}^{m} c_i y_{i,n+1} = −z_bfs.

Note that if any other feasible solution x has cost c^T x = z, then

  z − z_bfs = Σ_{i=m+1}^{n} x_i r_i.   (5)

Determining the variable that will enter: First determine

  r_q = min{ r_i : i = 1, ..., n }.

If r_q ≥ 0, then the current bfs is the optimal solution, as the cost of any other feasible solution is greater than or equal to z_bfs (see Equation (5)). If r_q < 0, then q is chosen as the variable to become basic.

Determining the basic variable that will leave the basic set:

Note that the bfs is given by

  x_bfs = [ y_{n+1} ; 0 ].

Also note that

  y_j = y_{1j} e_1 + y_{2j} e_2 + ... + y_{mj} e_m.

Suppose q enters the basic set, and denote the new solution by x̃. Then, as A x̃ = b, it follows that

  x̃_1 e_1 + x̃_2 e_2 + ... + x̃_m e_m + ε y_q = y_{n+1}.

Thus

  Σ_{i=1}^{m} ( x̃_i + ε y_{iq} − x_i ) e_i = 0.

That is,

  x̃_i = x_i − ε y_{iq} for all i = 1, ..., m,   and   x̃_q = ε.

To be feasible, x̃ ≥ 0, where

  x̃ = ( y_{1,n+1} − ε y_{1q}, ..., y_{m,n+1} − ε y_{mq}, 0, ..., 0, ε, 0, ..., 0 )^T.

If y_{iq} ≤ 0 for all i = 1, ..., m, then any ε > 0 does not violate feasibility and

the feasible set is unbounded. Note that in this case, as r_q < 0, the change in cost from making the non-basic variable q take the nonzero value ε is, from Equation (5),

  z − z_bfs = r_q ε.

As ε ≥ 0 can be arbitrarily large without violating feasibility, we conclude that the SLP has no solution and the minimum value is −∞. Thus, if

  y_{iq} ≤ 0 for all i = 1, ..., m,

then one can stop the simplex algorithm and conclude that the optimal value is −∞.

If y_{i_0 q} > 0 for some i_0 ∈ {1, ..., m}, then let

  p = arg min_{i=1,...,m} { y_{i,n+1} / y_{iq} : y_{iq} > 0 }   and   ε = min_{i=1,...,m} { y_{i,n+1} / y_{iq} : y_{iq} > 0 }.

Then p is the basic variable that will leave the basic set, to be replaced by q.

If the initial bfs is not degenerate, then y_{i,n+1} > 0 for all i = 1, ..., m. Thus ε > 0, and from Equation (5), as r_q < 0 and ε > 0, the new cost z < z_bfs. Thus, if all bfs are non-degenerate, then every time the simplex table is updated the cost strictly decreases, and the same solution cannot be visited twice. Thus there is no cycling in the iterations.

Update the table by the following operations:

  y_{ij} ← y_{ij} − ( y_{iq} / y_{pq} ) y_{pj}   if i ≠ p,
  y_{pj} ← y_{pj} / y_{pq}.

In other words,

  [ row i ] ← [ row i ] − ( y_{iq} / y_{pq} ) [ row p ]   if i ≠ p,
  [ row p ] ← ( 1 / y_{pq} ) [ row p ].

Note that with this operation the columns associated with the new basic set {1, 2, ..., p−1, p+1, ..., m} ∪ {q} are again unit vectors; in particular, column q becomes e_p.

The Simplex Algorithm

(Step 1) Find an index q such that r_q = min{ r_j : j = 1, ..., n }. If r_q ≥ 0, STOP: the current basic feasible solution is the optimal solution.

(Step 2) Let r_q be the solution in Step 1 with r_q < 0.
  1. If y_{iq} ≤ 0 for all i = 1, ..., m, then STOP: there is no optimal solution and the optimal value is −∞.
  2. If there exists i_0 such that y_{i_0 q} > 0, then let

    ε = min{ y_{i,n+1} / y_{iq} : y_{iq} > 0 }

  and let

    p = arg min{ y_{i,n+1} / y_{iq} : y_{iq} > 0 }.

(Step 3) Update the table by the following operations:

  y_{ij} ← y_{ij} − ( y_{iq} / y_{pq} ) y_{pj}   if i ≠ p,
  y_{pj} ← y_{pj} / y_{pq}.

A new basic feasible solution is obtained.

(Step 4) Update the relative cost vector to ensure that all the relative costs with respect to the basic variables are zero. Return to Step 1.

Theorem 7. The simplex algorithm will yield an optimal basic feasible solution in a finite number of steps if the SLP has an optimal solution and all basic feasible solutions are non-degenerate.

Proof: Follows from the fact that there are finitely many basic feasible solutions and that the simplex algorithm at each iteration yields a new basic feasible solution whose cost is strictly smaller than that of the previous iteration (if all basic feasible solutions are non-degenerate there is no cycling).
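The four steps above can be sketched as a tableau implementation. The code below assumes, as the notes do, that A is already in canonical form (first m columns an identity) and b ≥ 0; it is an illustration, not production code, and the example data at the end are mine:

```python
import numpy as np

def simplex_tableau(A, b, c, tol=1e-9):
    """Tableau simplex for min c^T x s.t. Ax = b, x >= 0, assuming A is in
    canonical form (first m columns an identity) and b >= 0.
    Returns (x, optimal value); raises ValueError when the value is -inf."""
    m, n = A.shape
    T = np.zeros((m + 1, n + 1))      # last row holds r_1, ..., r_n, r_{n+1}
    T[:m, :n], T[:m, n], T[m, :n] = A, b, c
    T[m] -= c[:m] @ T[:m]             # zero the costs of the basic columns
    basis = list(range(m))
    while True:
        q = int(np.argmin(T[m, :n]))  # Step 1: entering index
        if T[m, q] >= -tol:
            x = np.zeros(n)
            x[basis] = T[:m, n]
            return x, -T[m, n]        # r_{n+1} = -z_bfs
        col = T[:m, q]
        if (col <= tol).all():        # Step 2.1: unbounded below
            raise ValueError("optimal value is -infinity")
        pos = col > tol
        ratios = np.full(m, np.inf)
        ratios[pos] = T[:m, n][pos] / col[pos]
        p = int(np.argmin(ratios))    # Step 2.2: leaving row (ratio test)
        T[p] /= T[p, q]               # Steps 3-4: pivot on (p, q)
        for i in range(m + 1):
            if i != p:
                T[i] -= T[i, q] * T[p]
        basis[p] = q

# Example data (chosen for illustration)
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([0.0, 0.0, -1.0, -2.0])
x_opt, z_opt = simplex_tableau(A, b, c)
```

On this example the algorithm pivots twice and stops with x = (0, 0, 3, 1) and optimal value −5.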

Revised Simplex: Matrix Method

Let the SLP be given by

  µ = min{ c^T x : Ax = b, x ≥ 0 },

where A ∈ R^{m×n} and b ∈ R^m. Suppose the first m columns of A are independent. Let

  B := [ a_1  a_2  ...  a_m ]   and   D := [ a_{m+1}  a_{m+2}  ...  a_n ].

With this definition we have A = [ B  D ].

Partition any feasible x according to

  x = [ x_B ; x_D ].

Let the basic feasible solution associated with B be given by

  x̄ = [ x̄_B ; x̄_D ],   where x̄_B = B^{-1} b and x̄_D = 0.

If x = [ x_B ; x_D ] is feasible, then from Ax = b it follows that

  [ B  D ] [ x_B ; x_D ] = b.

Thus

  B x_B + D x_D = b,

and hence

  x_B = B^{-1} b − B^{-1} D x_D = x̄_B − B^{-1} D x_D.

The cost associated with this feasible solution is

  z = c^T x = [ c_B^T  c_D^T ] [ x_B ; x_D ]
    = c_B^T x_B + c_D^T x_D
    = c_B^T ( B^{-1} b − B^{-1} D x_D ) + c_D^T x_D
    = c_B^T x̄_B + ( c_D^T − c_B^T B^{-1} D ) x_D,

so that

  z − z_bfs = ( c_D^T − c_B^T B^{-1} D ) x_D = r_D^T x_D,

where r_D^T = c_D^T − c_B^T B^{-1} D. Thus the following algorithm can be followed.

(Step 1) Compute r_D^T = c_D^T − c_B^T B^{-1} D. If r_D ≥ 0, STOP: the current solution is optimal.

(Step 2) Let q be the index of the most negative element of r_D; a_q will enter the basic set.

(Step 3) Let y_q = B^{-1} a_q, the coordinate vector of a_q in the basis given by B.

(Step 4) If y_{iq} ≤ 0 for all i = 1, ..., m, STOP: the SLP has no solution and the optimal value is −∞. Else calculate

  p = arg min_i { y_{n+1}(i) / y_{iq} : y_{iq} > 0 },

where y_{n+1} = B^{-1} b. Replace the vector a_p in B by a_q, and go to Step 1.
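The matrix-form steps can be sketched directly, applying B^{-1} through linear solves rather than storing an inverse. The helper name revised_simplex, the starting-basis argument and the example data are mine; this assumes an initial feasible basis is known:

```python
import numpy as np

def revised_simplex(A, b, c, basis, tol=1e-9):
    """Revised (matrix-form) simplex for min c^T x s.t. Ax = b, x >= 0,
    started from a known feasible basis (list of m column indices)."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)              # current bfs: B^{-1} b
        lam = np.linalg.solve(B.T, c[basis])     # lam^T = c_B^T B^{-1}
        nonbasic = [j for j in range(n) if j not in basis]
        r_D = np.array([c[j] - lam @ A[:, j] for j in nonbasic])
        if (r_D >= -tol).all():                  # Step 1: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, c @ x
        q = nonbasic[int(np.argmin(r_D))]        # Step 2: entering column
        y_q = np.linalg.solve(B, A[:, q])        # Step 3: y_q = B^{-1} a_q
        if (y_q <= tol).all():                   # Step 4: unbounded below
            raise ValueError("optimal value is -infinity")
        pos = y_q > tol
        ratios = np.full(m, np.inf)
        ratios[pos] = x_B[pos] / y_q[pos]
        p = int(np.argmin(ratios))
        basis[p] = q                             # replace a_p by a_q

# Example data (chosen for illustration); columns 0, 1 form a feasible basis
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([0.0, 0.0, -1.0, -2.0])
x_opt, z_opt = revised_simplex(A, b, c, basis=[0, 1])
```

This reaches the same optimum as the tableau form; in larger codes the solves would be replaced by a maintained factorization of B.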