THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I


LN/MATH2901/CKC/MS/2008-09

Definition (Linear Programming)

A linear programming (LP) problem is characterized by linear functions of the unknowns, called the decision variables. It calls for optimizing (maximizing or minimizing) a linear function of the decision variables, called the objective function, subject to a set of linear equalities and/or inequalities called the constraints.

Example (Simplified Oil Blending Problem)

The capacity of the blending tank is 100 tons of oil, but at present it contains only 20 tons of an oil which costs $6.50/ton. The selling price of oil is $6/ton. If we decide to blend the oil, the mixture must satisfy two properties: (a) viscosity ≤ 32 units, and (b) S-content ≤ 3%. The oil in the tank has a viscosity of 24 units and an S-content of 2.5%. There are three types of oil available to mix with the oil in the tank: heavy oil (H), light oil (L) and cutter stock (C).

         Viscosity   S-content   Cost/ton
    H       40          4%        $4.00
    L       36          2.5%      $4.50
    C       24          2%        $7.00

The inequalities:

(capacity)    H + L + C + 20 ≤ 100,  i.e.  H + L + C ≤ 80                 (1)

(viscosity)   (40H + 36L + 24C + 24*20)/(H + L + C + 20) ≤ 32,
              i.e.  8H + 4L - 8C ≤ 160                                    (2)

(S-content)   (4H + 2.5L + 2C + 2.5*20)/(H + L + C + 20) ≤ 3,
              i.e.  H - 0.5L - C ≤ 10                                     (3)

Any triple of non-negative values (H, L, C) satisfying (1), (2) and (3) is called a feasible solution.

The objective function: we want the feasible solution which yields the maximum profit P, where

    P = 6(H + L + C + 20) - (4H + 4.5L + 7C + 6.5*20) = 2H + 1.5L - C - 10.

Dropping the constant -10, we thus have a linear programming problem:

    Max  P = 2H + 1.5L - C
    subject to   H +    L +  C ≤ 80
                8H +   4L - 8C ≤ 160
                 H - 0.5L -  C ≤ 10
                 H ≥ 0,  L ≥ 0,  C ≥ 0

A solution (H*, L*, C*) to the LP is called an optimal solution. For this problem we have H* = 0, L* = 66 2/3, C* = 13 1/3, giving the maximum profit P* = $76 2/3 (with the constant -10 restored).

The general LP problem can be stated as follows:

    Max (or Min)  x_0 = c_1 x_1 + c_2 x_2 + ... + c_n x_n
    subject to    a_11 x_1 + a_12 x_2 + ... + a_1n x_n  (≤, =, ≥)  b_1
                  a_21 x_1 + a_22 x_2 + ... + a_2n x_n  (≤, =, ≥)  b_2
                  ...
                  a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n  (≤, =, ≥)  b_m
                  x_1 ≥ 0, x_2 ≥ 0, ..., x_n ≥ 0

or, compactly,

    Max (or Min)  x_0 = Σ_{j=1}^n c_j x_j
    subject to    Σ_{j=1}^n a_ij x_j  (≤, =, ≥)  b_i,   i = 1, 2, ..., m
                  x_j ≥ 0,  j = 1, 2, ..., n.
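As a sanity check, the stated optimal blend can be verified against the constraints and the profit function with exact arithmetic (a sketch in Python; it assumes the formulation above, and the variable names are this sketch's own):

```python
from fractions import Fraction as F

# Stated optimal solution of the blending LP: (H, L, C) = (0, 66 2/3, 13 1/3).
H, L, C = F(0), F(200, 3), F(40, 3)

# Constraints (1)-(3) from the formulation above.
assert H + L + C <= 80                 # capacity
assert 8*H + 4*L - 8*C <= 160          # viscosity (binding at the optimum)
assert H - F(1, 2)*L - C <= 10         # S-content

# Profit P = 2H + 1.5L - C - 10 (the constant -10 restored).
P = 2*H + F(3, 2)*L - C - 10
print(P)   # 230/3, i.e. $76 2/3
```

Note that the capacity and viscosity constraints both hold with equality at this point, consistent with the optimum sitting at a vertex of the feasible region.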

The canonical form of an LP

    Max  x_0 = Σ_{j=1}^n c_j x_j
    subject to  Σ_{j=1}^n a_ij x_j ≤ b_i,   i = 1, 2, ..., m
                x_j ≥ 0,  j = 1, 2, ..., n

Characteristics:
1. All decision variables ≥ 0.
2. All constraints are of (≤) type.
3. The objective function is of max type.

Note that any LP can be put into the canonical form:

1. Min program, i.e. Min x_0 = Σ_{j=1}^n c_j x_j. This is equivalent to
   Max g_0 = -x_0 = Σ_{j=1}^n (-c_j) x_j.

2. (≥) type constraint, i.e. Σ_j a_ij x_j ≥ b_i. This is equivalent to
   Σ_j (-a_ij) x_j ≤ -b_i.

3. (=) type constraint, i.e. Σ_j a_ij x_j = b_i. This is equivalent to the pair
   Σ_j a_ij x_j ≤ b_i  and  Σ_j a_ij x_j ≥ b_i,
   i.e.  Σ_j a_ij x_j ≤ b_i  and  Σ_j (-a_ij) x_j ≤ -b_i.

4. Free variables, i.e. x_j unrestricted in sign. Let x_j = x_j^+ - x_j^-, where x_j^+ ≥ 0 and x_j^- ≥ 0. Substituting x_j^+ - x_j^- for x_j everywhere in the LP, the problem is then expressed in the (n + 1) non-negative variables x_1, x_2, ..., x_{j-1}, x_j^+, x_j^-, x_{j+1}, ..., x_n.

Further, if, in the canonical form of an LP, we have b_i ≥ 0 (i = 1, 2, ..., m), then we have what we shall call a feasible canonical form.

The standard form of an LP

    Max (or Min)  x_0 = Σ_{j=1}^n c_j x_j
    subject to  Σ_{j=1}^n a_ij x_j = b_i  (b_i ≥ 0),   i = 1, 2, ..., m
                x_j ≥ 0,  j = 1, 2, ..., n

Characteristics:
1. All decision variables ≥ 0.
2. All constraints are equations.
3. The rhs element (b_i) of each constraint equation is ≥ 0.
4. The objective function is of the max or min type.

Note that constraints of the inequality type can be changed to equations by the use of slack variables or surplus variables:

(a) Σ_j a_ij x_j ≤ b_i can be expressed as Σ_j a_ij x_j + s_i = b_i, where s_i ≥ 0 is a slack variable.

(b) Σ_j a_ij x_j ≥ b_i can be expressed as Σ_j a_ij x_j - t_i = b_i, where t_i ≥ 0 is a surplus variable.

Exercise: Verify that an LP in standard form can be put into its canonical form, and vice versa.

A useful way of presenting the information of the standard form in preparation for solution is the LP tableau.

Example (LP tableau for feasible canonical form)

    Max {x_0 = c^T x | Ax ≤ b (b ≥ 0), x ≥ 0},

where x ∈ R^n, c ∈ R^n, b ∈ R^m, A ∈ R^{m×n}. Putting into standard form yields

    Max  x_0 = Σ_{j=1}^n c_j x_j
    subject to  Σ_{j=1}^n a_ij x_j + s_i = b_i  (b_i ≥ 0),   i = 1, 2, ..., m
                x_j ≥ 0, j = 1, 2, ..., n;   s_i ≥ 0, i = 1, 2, ..., m.

[In matrix form: Max {x_0 = c^T x | Ax + s = b (b ≥ 0), x ≥ 0, s ≥ 0}, where x ∈ R^n, c ∈ R^n, b ∈ R^m, A ∈ R^{m×n}, s ∈ R^m.]

This can then be presented as an LP tableau:

    obj ftn |  decision variables  |  slack variables  | rhs
    value   |                      |                   | constant
    ----------------------------------------------------------
      x_0   | x_1   x_2  ...  x_n  | s_1  s_2 ... s_m  |  b
    ----------------------------------------------------------
       0    | a_11  a_12 ... a_1n  |  1    0  ...  0   | b_1   \
       0    | a_21  a_22 ... a_2n  |  0    1  ...  0   | b_2    | constraint
       :    |  :     :        :    |  :    :       :   |  :     | equations
       0    | a_m1  a_m2 ... a_mn  |  0    0  ...  1   | b_m   /
       1    | -c_1  -c_2 ... -c_n  |  0    0  ...  0   |  0    -- objective function equation

The objective function equation is obtained by writing x_0 + Σ_j (-c_j) x_j = 0. The LP tableau assumes that all x_j and s_i are ≥ 0. The tableau has (m + 1) equations (rows) and, not counting the x_0 column, (m + n + 1) variables (columns).

Example (A Simple Graphical Example)

    Max  x_0 = x_1 + x_2          (0)
    subject to  2x_1 +  x_2 ≤ 4   (1)
                 x_1 + 2x_2 ≤ 6   (2)
                 x_1, x_2 ≥ 0

(0)  For various values of x_0, x_1 + x_2 = x_0 is a family of lines with slope dx_2/dx_1 = -1. The optimal solution is x* = (x_1*, x_2*) = (2/3, 8/3) with x_0* = 10/3.

(0') If the objective function is instead (1/4)x_1 + x_2, then for various values of x_0, (1/4)x_1 + x_2 = x_0 is a family of lines with slope -1/4. The optimal solution is x* = (0, 3) with x_0* = 3.

(0'') If the objective function is instead x_1 + (1/4)x_2, then for various values of x_0, x_1 + (1/4)x_2 = x_0 is a family of lines with slope -4. The optimal solution is x* = (2, 0) with x_0* = 2.

Intuitively, it is clear that the optimal solution is at a corner point (i.e. a vertex of the solution space).

Exercise: Construct the LP tableau for the example above.
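Since the optimum lies at a corner point, each of the three objectives above can be checked by evaluating it at the four vertices of the feasible region. A quick sketch (Python, exact Fraction arithmetic; the helper best_vertex is this sketch's own, not from the notes):

```python
from fractions import Fraction as F

# Vertices of {2x1 + x2 <= 4, x1 + 2x2 <= 6, x1, x2 >= 0}.
vertices = [(F(0), F(0)), (F(2), F(0)), (F(0), F(3)), (F(2, 3), F(8, 3))]

def best_vertex(c1, c2):
    """Maximize c1*x1 + c2*x2 over the finitely many corner points."""
    return max(vertices, key=lambda v: c1 * v[0] + c2 * v[1])

print(best_vertex(F(1), F(1)))       # optimum (2/3, 8/3), x0 = 10/3
print(best_vertex(F(1, 4), F(1)))    # optimum (0, 3),     x0 = 3
print(best_vertex(F(1), F(1, 4)))    # optimum (2, 0),     x0 = 2
```

Tilting the objective changes which vertex wins, but the winner is always a vertex, which is the observation the simplex method exploits.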

Consider the linear system of equalities

    Ax = b,                                                    (1)

where x ∈ R^n, b ∈ R^m and A ∈ R^{m×n}. Assume A is of full rank m (< n). Suppose that from the n columns of A we select a subset B of m linearly independent columns. (For notational simplicity, assume these are the last m columns of A.) We rewrite (1), using A = (N, B), as

    (N, B) (x_N, x_B)^T = b.

Here N is the submatrix of A consisting of its first n - m columns, and (x_N, x_B) is a partition of x into n - m and m elements, respectively, to correspond to the dimensions of N and B. Hence

    N x_N + B x_B = b.                                         (2)

Now if we put x_N = 0, i.e. x = (x_N, x_B) = (0, x_B), then (2) becomes

    B x_B = b,  which we can solve uniquely for  x_B = B^{-1} b.    (3)

Conclusion: x = (0, x_B) is a solution to the system of equalities (1) under this particular selection of the basis B of A. (B is, in fact, a basis of the vector space spanned by the columns of A.)

Definition. Let B be any non-singular m × m submatrix (i.e. a basis) of A in (1). If all the n - m components of x not associated with the columns of B (i.e. x_N) are set to zero, the solution to the resulting set of equations as given in (3) is said to be a basic solution to (1) wrt the basis B. The components of x associated with the columns of B (i.e. x_B) are called the basic variables, while those associated with N (i.e. x_N) are called non-basic variables.

Definition. If one or more of the basic variables (x_B) in a basic solution x = (0, x_B) has value zero, that solution is said to be a degenerate basic solution. Otherwise, it is said to be non-degenerate.

Now consider adding the non-negativity constraints, i.e.

    Ax = b,  x ≥ 0.                                            (4)

Definition. A vector x ∈ R^n satisfying (4) is said to be a feasible solution for these constraints. A feasible solution to (4) that is also a basic solution is said to be a basic feasible solution (BFS); if this solution is non-degenerate then it is called a non-degenerate basic feasible solution (NBFS), otherwise it is a degenerate basic feasible solution (DBFS).

[We shall be mostly concerned with non-degenerate basic feasible solutions. Hence we shall frequently write only BFS for NBFS, and specify degeneracy as the exception.]

Example (Old Example Revisited)

    2x_1 +  x_2 ≤ 4
     x_1 + 2x_2 ≤ 6
     x_1, x_2 ≥ 0

Adding slack variables to convert the inequalities to equalities (hence standard form) gives

    2x_1 +  x_2 + x_3       = 4      (5.1)
     x_1 + 2x_2       + x_4 = 6      (5.2)    (5)
     x_1, x_2, x_3, x_4 ≥ 0          (5.3)

Here

    A = [ 2  1  1  0 ]   and   b = ( 4 ).
        [ 1  2  0  1 ]             ( 6 )

(a) If the basis B is chosen to be the last 2 columns, i.e. B = I, then

    N = [ 2  1 ],   x_N = ( x_1 ),   x_B = ( x_3 ).
        [ 1  2 ]          ( x_2 )          ( x_4 )

And N x_N + B x_B = b becomes

    [ 2  1 ] ( x_1 )  +  [ 1  0 ] ( x_3 )  =  ( 4 ).
    [ 1  2 ] ( x_2 )     [ 0  1 ] ( x_4 )     ( 6 )

Putting the non-basic variables x_1 = x_2 = 0 gives

    [ 1  0 ] ( x_3 )  =  ( 4 ),   i.e.   ( x_3 ) = ( 4 )  (= b).
    [ 0  1 ] ( x_4 )     ( 6 )          ( x_4 )   ( 6 )

Hence x = (0, 0, 4, 6)^T is an NBFS (or simply BFS).

(b) If we subtract 2 × (5.1) from (5.2), we get

     2x_1 + x_2 +  x_3       =  4
    -3x_1       - 2x_3 + x_4 = -2

Now if we select the current column 2 and column 4 as B (which is an identity matrix), we get

    x_1 = x_3 = 0 (non-basic variables),   x_2 = 4,  x_4 = -2 (basic variables).

Hence x = (0, 4, 0, -2)^T is a basic solution to (5.1) and (5.2), but it is not a feasible solution to (5) (i.e. not a BFS) because x_4 < 0 violates (5.3).

(c) If we subtract 1/2 × (5.2) from (5.1), we get

    (3/2)x_1 + x_3 - (1/2)x_4 = 1
     x_1 + 2x_2 + x_4 = 6

Dividing the second equation by 2 gives

    (3/2)x_1        + x_3 - (1/2)x_4 = 1      (6.1)
    (1/2)x_1 + x_2        + (1/2)x_4 = 3      (6.2)    (6)

Selecting B to be the 3rd and 2nd columns gives

    [ 3/2  -1/2 ] ( x_1 )  +  [ 1  0 ] ( x_3 )  =  ( 1 ).
    [ 1/2   1/2 ] ( x_4 )     [ 0  1 ] ( x_2 )     ( 3 )

Putting x_1 = x_4 = 0 (non-basic) gives x_3 = 1 and x_2 = 3 (basic).

Hence x = (0, 3, 1, 0)^T is another NBFS to (5).

(d) Continuing from (c), if we subtract 1/3 × (6.1) from (6.2), we get

    (3/2)x_1 + x_3 - (1/2)x_4 = 1
    x_2 - (1/3)x_3 + (2/3)x_4 = 8/3

Multiplying the first equation by 2/3 gives

    x_1 + (2/3)x_3 - (1/3)x_4 = 2/3
    x_2 - (1/3)x_3 + (2/3)x_4 = 8/3

Selecting the basis to be the first two columns yields

    [  2/3  -1/3 ] ( x_3 )  +  [ 1  0 ] ( x_1 )  =  ( 2/3 ).
    [ -1/3   2/3 ] ( x_4 )     [ 0  1 ] ( x_2 )     ( 8/3 )

Putting x_3 = x_4 = 0 gives x_1 = 2/3 and x_2 = 8/3.

Hence x = (2/3, 8/3, 0, 0)^T is another NBFS. (Note that this x is also optimal, hence an optimal NBFS, for the objective function Max x_1 + x_2.)

[Figure: the feasible region of Max x_0 = x_1 + x_2 subject to 2x_1 + x_2 ≤ 4, x_1 + 2x_2 ≤ 6, x_1, x_2 ≥ 0, with the optimal vertex at (2/3, 8/3).]

The Fundamental Theorem of Linear Programming (Source: Luenberger)

In this section, through the fundamental theorem of linear programming, we establish the primary importance of basic feasible solutions in solving linear programming problems. The method of proof of the theorem is in many respects as important as the result itself, since it represents the beginning of the development of the simplex method. The theorem

itself shows that it is necessary only to consider basic feasible solutions when seeking an optimal solution to a linear program, because the optimal value is always achieved at such a solution.

Corresponding to a linear program in standard form

    Min  c^T x
    subject to  Ax = b,  x ≥ 0,                                (11)

a feasible solution to the constraints that achieves the minimum value of the objective function subject to those constraints is said to be an optimal feasible solution. If this solution is basic, it is an optimal basic feasible solution.

Fundamental theorem of linear programming. Given a linear program in standard form (11) where A is an m × n matrix of rank m:

 i) if there is a feasible solution, there is a basic feasible solution;
ii) if there is an optimal feasible solution, there is an optimal basic feasible solution.

Proof of (i). Denote the columns of A by a_1, a_2, ..., a_n. Suppose x = (x_1, x_2, ..., x_n) is a feasible solution. Then, in terms of the columns of A, this solution satisfies

    x_1 a_1 + x_2 a_2 + ... + x_n a_n = b.

Assume that exactly p of the variables x_i are greater than zero, and for convenience, that they are the first p variables. Thus

    x_1 a_1 + x_2 a_2 + ... + x_p a_p = b.                     (12)

There are now two cases, corresponding to whether the set a_1, a_2, ..., a_p is linearly independent or linearly dependent.

Case 1: Assume a_1, a_2, ..., a_p are linearly independent. Then clearly p ≤ m. If p = m, the solution is basic and the proof is complete. If p < m, then, since A has rank m, m - p vectors can be found from the remaining n - p vectors so that the resulting set of m vectors is linearly independent. Assigning the value zero to the corresponding m - p variables yields a (degenerate) basic feasible solution.

Case 2: Assume a_1, a_2, ..., a_p are linearly dependent. Then there is a non-trivial linear combination of these vectors that is zero. Thus there are constants y_1, y_2, ..., y_p, at least one of which can be assumed to be positive, such that

    y_1 a_1 + y_2 a_2 + ... + y_p a_p = 0.                     (13)

Multiplying this equation by a scalar ε and subtracting it from (12), we obtain

    (x_1 - εy_1)a_1 + (x_2 - εy_2)a_2 + ... + (x_p - εy_p)a_p = b.    (14)

This equation holds for every ε, and for each ε the components x_i - εy_i correspond to a solution of the linear equations, although they may violate x_i - εy_i ≥ 0. Denoting y = (y_1, y_2, ..., y_p, 0, 0, ..., 0), we see that for any ε,

    x - εy                                                     (15)

is a solution to the equalities. For ε = 0, this reduces to the original feasible solution. As ε is increased from zero, the various components increase, decrease, or remain constant, depending upon whether the corresponding y_i is negative, positive, or zero. Since we assume at least one y_i is positive, at least one component will decrease as ε is increased. We increase ε to the first point where one or more components become zero. Specifically, set

    ε = min{ x_i / y_i : y_i > 0 }.

For this value of ε, the solution given by (15) is feasible and has at most p - 1 positive variables. Repeating this process as necessary, we can eliminate positive variables until we have a feasible solution with corresponding columns that are linearly independent, at which point Case 1 applies.

Proof of (ii). Let x = (x_1, x_2, ..., x_n) be an optimal feasible solution and, as in the proof of (i) above, suppose there are exactly p positive variables x_1, x_2, ..., x_p. Again there are two cases; Case 1, corresponding to linear independence, is exactly the same as before.

Case 2 also goes exactly as before, but it must be shown that for any ε the solution (15) is optimal. To show this, note that the value of the solution x - εy is

    c^T x - ε c^T y.                                           (16)

For ε sufficiently small in magnitude, x - εy is a feasible solution for positive or negative values of ε. Thus we conclude that c^T y = 0. For if c^T y ≠ 0, an ε of small magnitude and proper sign could be determined so as to render (16) smaller than c^T x while maintaining feasibility. This would violate the assumption of optimality of x, and hence we must have c^T y = 0. Having established that the
new feasible solution with fewer positive components is also optimal, the remainder of the proof may be completed exactly as in part (i).

This theorem reduces the task of solving a linear programming problem to that of searching over basic feasible solutions. For a problem having n variables and m constraints, the number of basic solutions is at most

    (n choose m) = n! / (m! (n - m)!).

Relation to convexity

Definition.
(1) A set C in R^n is said to be convex if, for all x_1, x_2 ∈ C and 0 ≤ λ ≤ 1, the point λx_1 + (1 - λ)x_2 ∈ C.
(2) A point x ∈ C is said to be an extreme point (vertex, corner point) of C if there are no two distinct points x_1, x_2 ∈ C such that x = λx_1 + (1 - λ)x_2 for some 0 < λ < 1.

Theorem. The set of all feasible solutions to an LP problem is a convex set.

Proof. Suppose x_1 and x_2 are two feasible solutions. Then

    Ax_1 = b, x_1 ≥ 0   and   Ax_2 = b, x_2 ≥ 0.

For 0 ≤ λ ≤ 1, let x = λx_1 + (1 - λ)x_2 be any convex combination of x_1 and x_2. Then
(i)  x ≥ 0, since λx_1 ≥ 0 and (1 - λ)x_2 ≥ 0, and
(ii) Ax = A[λx_1 + (1 - λ)x_2] = λAx_1 + (1 - λ)Ax_2 = λb + (1 - λ)b = b.

Theorem. Let A be an m × n matrix and b an m-vector. Let K be the convex polytope consisting of all n-vectors satisfying

    Ax = b,  x ≥ 0.                                            (1)

A vector x is an extreme point of K iff x is a basic feasible solution to (1).

(N.B. Definition: a convex polytope is the intersection of a finite number of closed half-spaces.)

Proof. Assume x = (x_1, x_2, ..., x_m, 0, 0, ..., 0)^T is a BFS to (1). Then

    x_1 a_1 + x_2 a_2 + ... + x_m a_m = b,

where a_i is the i-th column of A, i = 1, 2, ..., m, and {a_i} are independent.

Suppose x could be expressed as a convex combination of two distinct points y, z ∈ K, say x = λy + (1 - λ)z for some 0 < λ < 1. Since x ≥ 0, y ≥ 0, z ≥ 0 and 0 < λ < 1, we have y_j = z_j = 0 for j = m + 1, m + 2, ..., n. Hence

    y_1 a_1 + y_2 a_2 + ... + y_m a_m = b
    z_1 a_1 + z_2 a_2 + ... + z_m a_m = b

so that

    (y_1 - z_1)a_1 + (y_2 - z_2)a_2 + ... + (y_m - z_m)a_m = 0.

Since {a_i} are independent, z_j = y_j = x_j for all j, so y and z are not distinct. Therefore x is an extreme point of K.

Conversely, assume x is an extreme point of K. Let us assume that the non-zero components of x are the first k components. Then

    x_1 a_1 + x_2 a_2 + ... + x_k a_k = b,  with x_i > 0 (i = 1, 2, ..., k).

In order for x to be basic, we must have {a_i} independent (hence also k ≤ m). Now suppose {a_i} were dependent. Then there is a non-trivial linear combination

    y_1 a_1 + y_2 a_2 + ... + y_k a_k = 0.

Define the n-vector y = (y_1, y_2, ..., y_k, 0, 0, ..., 0)^T. Since x_i > 0 for i = 1, 2, ..., k, it is possible to select some ε > 0 such that x + εy ≥ 0 and x - εy ≥ 0. Also A(x + εy) = b and A(x - εy) = b. We then have

    x = (1/2)(x + εy) + (1/2)(x - εy),

which expresses x as a convex combination of two distinct points in K, so x is not an extreme point (!). Hence {a_i} are linearly independent and x is a BFS.

Corollary 1. If there is a finite optimal solution to an LP problem, there is a finite optimal solution which is an extreme point of the constraint set.

Proof. Finite optimal solution ⇒ finite optimal BFS ⇒ extreme point (optimal).

Corollary 2. The constraint set (i.e. the convex polytope K) possesses at most a finite number of extreme points.

Proof. There are at most (n choose m) BFS, each of which corresponds to an extreme point of K.
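Corollary 2 can be illustrated numerically on the constraint set used in the example that follows (x_1 + (8/3)x_2 + x_3 = 4, x_1 + x_2 + x_4 = 2, 2x_1 + x_5 = 3 after slacks are added). The sketch below (pure Python; the helper solve3 is this sketch's own) enumerates all (5 choose 3) = 10 column triples: 9 give a nonsingular basis, and exactly 5 of the resulting basic solutions are feasible, i.e. 5 extreme points:

```python
from fractions import Fraction as F
from itertools import combinations

A = [[F(1), F(8, 3), F(1), F(0), F(0)],
     [F(1), F(1),    F(0), F(1), F(0)],
     [F(2), F(0),    F(0), F(0), F(1)]]
b = [F(4), F(2), F(3)]

def solve3(M, rhs):
    """Solve a 3x3 system by Gauss-Jordan elimination; None if singular."""
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(3):
        piv = next((r for r in range(col, 3) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(3):
            if r != col and M[r][col] != 0:
                M[r] = [v - M[r][col] * w for v, w in zip(M[r], M[col])]
    return [M[r][3] for r in range(3)]

basic, feasible = 0, 0
for cols in combinations(range(5), 3):
    sol = solve3([[A[r][c] for c in cols] for r in range(3)], b)
    if sol is None:
        continue                 # the chosen columns are dependent: no basis
    basic += 1
    if min(sol) >= 0:
        feasible += 1
print(basic, feasible)   # 9 5
```

So at most C(5, 3) = 10 basic solutions exist, 9 bases are nonsingular here, and the 5 BFS correspond to the extreme points of the polytope.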

Proposition. A linear objective function c^T x achieves its optimum over a convex polyhedron (a bounded convex polytope) K at an extreme point of K.

Proof. Let x_1, x_2, ..., x_k be the extreme points of K. Then any point x ∈ K can be expressed in the form

    x = λ_1 x_1 + λ_2 x_2 + ... + λ_k x_k,
    where λ_i ≥ 0 (i = 1, 2, ..., k) and Σ_{i=1}^k λ_i = 1.      (*)

Then

    c^T x = λ_1 c^T x_1 + λ_2 c^T x_2 + ... + λ_k c^T x_k.

Let x_0 = Max_{i=1,2,...,k} c^T x_i. Then from (*),

    c^T x ≤ (λ_1 + λ_2 + ... + λ_k) x_0 = x_0.

Hence the optimum of c^T x over K is equal to x_0, achieved at some extreme point of K.

Example. Consider the constraint set in R^2 defined by

    x_1 + (8/3)x_2 ≤ 4      (1)
    x_1 + x_2 ≤ 2           (2)
    2x_1 ≤ 3                (3)
    x_1, x_2 ≥ 0            (4)

Adding slack variables x_3, x_4 and x_5 to convert it into standard form gives

     x_1 + (8/3)x_2 + x_3             = 4      (1)
     x_1 +      x_2       + x_4       = 2      (2)
    2x_1                        + x_5 = 3      (3)
     x_1, x_2, x_3, x_4, x_5 ≥ 0               (4)

A basic solution is obtained by setting any 2 variables of x_1, x_2, x_3, x_4, x_5 to zero and solving for the remaining three. For example, for extreme point a:

 (i) set x_1 = 0, x_3 = 0 (2 binding constraints);
(ii) solve  (8/3)x_2 = 4,  x_2 + x_4 = 2,  x_5 = 3,

giving (0, 3/2, 0, 1/2, 3), which corresponds to extreme point a of the convex polyhedron K defined by (1), (2), (3), (4). The extreme points arise as follows:

    extreme point |  a    b    c    d    e
    set to zero   | x_1  x_3  x_4  x_2  x_1
                  | x_3  x_4  x_5  x_5  x_2

Note: there is a maximum total of (5 choose 3) = (5 choose 2) = 10 basic solutions, and here 9 of the choices give a solvable system.

Simplex Method (Adjacent Extreme Point Method) for an LP in feasible canonical form

The idea of the Simplex method is to proceed from one BFS (i.e. extreme point) of the feasible region of an LP problem expressed in tableau form to another BFS, in such a way as to continually increase (or decrease) the value of the objective function until optimality is reached. The simplex method moves from one extreme point to a neighbouring extreme point. For the following LP in feasible canonical form (i.e. its rhs vector b ≥ 0):

    Max {x_0 = c^T x | Ax ≤ b (b ≥ 0), x ≥ 0}

its LP tableau is

          x_1   x_2  ...  x_s  ...  x_n | s_1  s_2  ...  s_r  ...  s_m |  b
    -------------------------------------------------------------------------
    s_1 | a_11  a_12 ...  a_1s ... a_1n |  1    0   ...   0   ...   0  | b_1
    s_2 | a_21  a_22 ...  a_2s ... a_2n |  0    1   ...   0   ...   0  | b_2
     :  |  :                            |                              |  :
    s_r | a_r1  a_r2 ...  a_rs ... a_rn |  0    0   ...   1   ...   0  | b_r
     :  |  :                            |                              |  :
    s_m | a_m1  a_m2 ...  a_ms ... a_mn |  0    0   ...   0   ...   1  | b_m
    -------------------------------------------------------------------------
        | -c_1  -c_2 ... -c_s  ... -c_n |  0    0   ...   0   ...   0  |  0

Since all b_i ≥ 0, we can read off directly from the tableau a starting BFS

    (0, 0, ..., 0, b_1, b_2, ..., b_m)^T.

Note that this corresponds to the origin of the n-dimensional subspace (the solution space) of R^n (i.e. all structural variables x_j are set to zero). The set B of basic variables is {s_1, s_2, ..., s_r, ..., s_m}, and we say that each variable s_r ∈ B is in the basis B. The set N of non-basic variables is {x_1, x_2, ..., x_s, ..., x_n}, and we say that any x_s ∈ N is not in the basis B.

Consider now replacing s_r ∈ B by x_s ∈ N. We say that s_r is to leave the basis and x_s is to enter the basis. Consequently, after this operation s_r becomes non-basic (∈ N) and x_s becomes basic (∈ B). This of course amounts to a different selection of columns of the matrix A, giving a different basis B. We shall achieve this change of basis by a pivot operation (or simply a pivot). The pivot operation is designed to maintain an identity matrix as the basis in the tableau at all times.

Pivot Operation (wrt element a_rs > 0)

Definition.
(a) a_rs > 0 is called the pivot element.
(b) Row r is called the pivot row.
(c) Column s is called the pivot column.

Rules.
(a) In the pivot row, a_rj ← a_rj / a_rs for all j.
(b) In the pivot column, a_rs ← 1 and a_is ← 0 for i ≠ r.
(c) For all other elements, a_ij ← a_ij - a_rj a_is / a_rs.

Graphically, for rows i, r and columns j, s,

          j      s                             j                    s
    i | a_ij   a_is      becomes     i | a_ij - a_rj a_is / a_rs    0
    r | a_rj   a_rs                  r | a_rj / a_rs                1

or, simply,

    | a   b      becomes     | a - bc/d    0
    | c   d                  | c/d         1

Exercise: Verify that this pivot operation is simply Gaussian elimination such that variable x_s is eliminated from all but the r-th of the m + 1 equations, and in the r-th equation the coefficient of x_s is equal to 1.

Example (Pivot operation and feasibility)

     x_1 +  x_2 - x_3 + x_4             = 5
    2x_1 - 3x_2 + x_3       + x_5       = 3
    -x_1 + 2x_2 - x_3             + x_6 = 1

          x_1  x_2  x_3 | x_4  x_5  x_6 |  b          [ 1  0  0 ]
    x_4 |  1    1   -1  |  1    0    0  |  5      B = [ 0  1  0 ]
    x_5 |  2   -3    1  |  0    1    0  |  3          [ 0  0  1 ]
    x_6 | -1    2   -1  |  0    0    1  |  1

Basic solution (0, 0, 0, 5, 3, 1)^T: feasible.

Pivoting on row 1, column 1:

          x_1  x_2  x_3 | x_4  x_5  x_6 |  b          [  1  0  0 ]
    x_1 |  1    1   -1  |  1    0    0  |  5      B = [  2  1  0 ]
    x_5 |  0   -5    3  | -2    1    0  | -7          [ -1  0  1 ]
    x_6 |  0    3   -2  |  1    0    1  |  6

Basic solution (5, 0, 0, 0, -7, 6)^T: infeasible.

Pivoting on row 2, column 2:

          x_1  x_2  x_3  |  x_4   x_5   x_6 |   b          [  1   1  0 ]
    x_1 |  1    0  -2/5  |  3/5   1/5    0  | 18/5     B = [  2  -3  0 ]
    x_2 |  0    1  -3/5  |  2/5  -1/5    0  |  7/5         [ -1   2  1 ]
    x_6 |  0    0  -1/5  | -1/5   3/5    1  |  9/5

Basic solution (18/5, 7/5, 0, 0, 0, 9/5)^T: feasible.

Pivoting on row 3, column 3:

          x_1  x_2  x_3 | x_4  x_5  x_6 |  b          [  1   1  -1 ]
    x_1 |  1    0    0  |  1   -1   -2  |  0      B = [  2  -3   1 ]
    x_2 |  0    1    0  |  1   -2   -3  | -4          [ -1   2  -1 ]
    x_3 |  0    0    1  |  1   -3   -5  | -9

Basic solution (0, -4, -9, 0, 0, 0)^T: infeasible (and degenerate, since the basic variable x_1 = 0).

Exercise: Let y_j denote the current tableau column under variable x_j. For each of the four tableaus above, calculate the matrix product B [y_4, y_5, y_6]. Can you explain the results and generalize?

Pivoting Criterion (Feasibility Condition)

For a given selection of pivot column (say, with entering variable x_s), the pivot row (i.e. the leaving basic variable, say x_r) must be selected as the basic variable corresponding to the smallest positive ratio of the current rhs values to the current (positive) constraint coefficients of the entering non-basic variable x_s. Graphically,

          x_s     b      ratio
        | y_1s   y_10   y_10/y_1s      To determine row r:
        | y_2s   y_20   y_20/y_2s
        |  :      :        :           y_r0/y_rs = Min_i { y_i0/y_is : y_is > 0 }
        | y_is   y_i0   y_i0/y_is
        |  :      :        :
        | y_ms   y_m0   y_m0/y_ms

To see why this works, note that the tableau represents

    x_i + Σ_{j∈N} y_ij x_j = y_i0 (≥ 0),   i = 1, 2, ..., m,

or

    x_i = y_i0 - Σ_{j∈N} y_ij x_j ≥ 0,   i = 1, 2, ..., m.

To increase the value of a non-basic variable x_s from zero to positive while maintaining feasibility requires

    y_is x_s ≤ y_i0  (i = 1, 2, ..., m),  i.e.  x_s ≤ y_i0/y_is whenever y_is > 0.

Hence we should select row r such that

    x_s = y_r0/y_rs = Min_i { y_i0/y_is : y_is > 0 }.
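The pivot rules and the ratio test can be checked mechanically. A small sketch in Python (exact Fraction arithmetic; the helper pivot is this sketch's own), replaying the example above: the first pivot deliberately ignores the minimum-ratio rule (for column 1 the ratios are 5/1 and 3/2, so the rule would pick row 2) and produces an infeasible basic solution; the second pivot restores feasibility:

```python
from fractions import Fraction as F

def pivot(T, r, s):
    """Apply the pivot rules to tableau T (a list of rows) at element T[r][s]."""
    prow = [v / T[r][s] for v in T[r]]                    # rule (a)
    return [prow if i == r else                           # rules (b) and (c)
            [v - row[s] * w for v, w in zip(row, prow)]
            for i, row in enumerate(T)]

# Rows [x1 x2 x3 x4 x5 x6 | b] of the first tableau in the example.
T = [[F(1), F(1), F(-1), F(1), F(0), F(0), F(5)],
     [F(2), F(-3), F(1), F(0), F(1), F(0), F(3)],
     [F(-1), F(2), F(-1), F(0), F(0), F(1), F(1)]]

T = pivot(T, 0, 0)   # x1 enters, x4 leaves: b column becomes 5, -7, 6 (infeasible)
T = pivot(T, 1, 1)   # x2 enters, x5 leaves: b column becomes 18/5, 7/5, 9/5 (feasible)
print([row[-1] for row in T])
```

This reproduces the second and third tableaus of the example exactly, which is the point of the ratio test: pivoting keeps us at a basic solution, but only the minimum-ratio choice keeps it feasible.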

Following this pivoting criterion, the result is a new BFS with x_r replaced by x_s as a basic variable. That is, x_s is increased from zero to x_s = y_r0/y_rs, while the new value of x_r is

    x_r = y_r0 - Σ_{j∈N} y_rj x_j = y_r0 - y_rs x_s = 0.

Exercise: Verify that pivoting means replacing the column a_r (of the original matrix A) that is in B by the column a_s (of A) that is currently not in B. Hence pivoting is also called a change of basis.

Optimality Condition (for a max program)

The objective function row (i.e. the x_0-equation) of the tableau, written in terms of the non-basic variables x_j ∈ N, reads

    x_0 = y_00 - Σ_{j∈N} y_0j x_j,

where y_00 is the current objective function value associated with the current BFS in the tableau (the (m+1, n+1)-th entry). The entering variable x_s ∈ N can be selected as any non-basic variable having a negative coefficient y_0s (such as the first negative y_0s, or the most negative y_0s). If all coefficients y_0j are non-negative, the objective function cannot be increased by making any non-basic variable positive (i.e. basic); hence an optimal solution has been reached.

Summary of Computation Procedure (for a feasible canonical form LP)

Once the initial tableau has been constructed, the Simplex procedure calls for the successive iteration of the following steps.

1. Test the coefficients of the objective function row to determine whether an optimal solution has been reached, i.e. whether the optimality condition that all coefficients in that row are non-negative is satisfied.
2. If not, select a (currently non-basic) variable x_s to enter the basis (e.g. the first negative coefficient or the most negative).
3. Then determine the (currently basic) variable x_r to leave the basis using the feasibility condition, i.e. select x_r where y_r0/y_rs = Min_i { y_i0/y_is : y_is > 0 }.
4. Perform a pivot operation with pivot row corresponding to x_r and pivot column corresponding to x_s. Return to 1.

Exercise: In step 3, if all y_is ≤ 0, verify that the LP has an unbounded objective function value, i.e. x_0 can
tend to +∞.

Example (Simplex Method for feasible canonical form)

    Max  x_0 = 3x_1 + x_2 + 3x_3
    subject to  2x_1 +  x_2 +  x_3 ≤ 2
                 x_1 + 2x_2 + 3x_3 ≤ 5
                2x_1 + 2x_2 +  x_3 ≤ 6
                 x_1, x_2, x_3 ≥ 0

Initial tableau (entering x_2):

          x_1  x_2  x_3  x_4  x_5  x_6 |  b  | ratio
    x_4 |  2    1    1    1    0    0  |  2  | 2/1 = 2    <- pivot row
    x_5 |  1    2    3    0    1    0  |  5  | 5/2 = 2.5
    x_6 |  2    2    1    0    0    1  |  6  | 6/2 = 3
        | -3   -1   -3    0    0    0  |  0  |

Current BFS x = (0, 0, 0, 2, 5, 6)^T, x_0 = 0.

Second tableau (entering x_3):

          x_1  x_2  x_3  x_4  x_5  x_6 |  b  | ratio
    x_2 |  2    1    1    1    0    0  |  2  | 2/1 = 2
    x_5 | -3    0    1   -2    1    0  |  1  | 1/1 = 1    <- pivot row
    x_6 | -2    0   -1   -2    0    1  |  2  |
        | -1    0   -2    1    0    0  |  2  |

Current BFS x = (0, 2, 0, 0, 1, 2)^T, x_0 = 2.

Third tableau (entering x_1):

          x_1  x_2  x_3  x_4  x_5  x_6 |  b  | ratio
    x_2 |  5    1    0    3   -1    0  |  1  | 1/5        <- pivot row
    x_3 | -3    0    1   -2    1    0  |  1  |
    x_6 | -5    0    0   -4    1    1  |  3  |
        | -7    0    0   -3    2    0  |  4  |

Current BFS x = (0, 1, 1, 0, 0, 3)^T, x_0 = 4.

Optimal tableau:

          x_1  x_2  x_3   x_4   x_5  x_6 |  b
    x_1 |  1   1/5   0    3/5  -1/5   0  | 1/5
    x_3 |  0   3/5   1   -1/5   2/5   0  | 8/5
    x_6 |  0    1    0    -1     0    1  |  4
        |  0   7/5   0    6/5   3/5   0  | 27/5

All objective-row coefficients are non-negative, so this tableau is optimal.

Optimal BFS x = (1/5, 0, 8/5, 0, 0, 4)^T, x_0 = 27/5.
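The whole procedure in the summary above fits in a few dozen lines. A sketch in Python (exact Fraction arithmetic; the function name simplex_max and the most-negative-coefficient entering rule are this sketch's choices, so the intermediate tableaus can differ from the ones worked above, but the optimum agrees):

```python
from fractions import Fraction as F

def simplex_max(c, A, b):
    """Simplex method for Max c^T x s.t. Ax <= b (b >= 0), x >= 0.

    Slack variables supply the starting basis; the entering column is the
    most negative objective-row coefficient, and the leaving row is chosen
    by the minimum-ratio (feasibility) test.
    """
    m, n = len(A), len(c)
    # Tableau: constraint rows [A | I | b], then objective row [-c | 0 | 0].
    T = [[F(v) for v in A[i]] + [F(i == j) for j in range(m)] + [F(b[i])]
         for i in range(m)]
    T.append([F(-v) for v in c] + [F(0)] * (m + 1))
    basis = list(range(n, n + m))                     # slacks basic initially
    while True:
        s = min(range(n + m), key=lambda j: T[m][j])  # entering column
        if T[m][s] >= 0:
            break                                     # optimality condition
        rows = [i for i in range(m) if T[i][s] > 0]
        if not rows:
            raise ValueError("unbounded objective")
        r = min(rows, key=lambda i: T[i][-1] / T[i][s])   # min-ratio test
        basis[r] = s
        prow = [v / T[r][s] for v in T[r]]            # pivot operation
        T = [prow if i == r else [v - row[s] * w for v, w in zip(row, prow)]
             for i, row in enumerate(T)]
    x = [F(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[m][-1]

x, z = simplex_max([3, 1, 3], [[2, 1, 1], [1, 2, 3], [2, 2, 1]], [2, 5, 6])
print(x, z)   # optimal x = (1/5, 0, 8/5), x_0 = 27/5
```

With the most-negative rule this instance reaches the optimum in two pivots instead of three, illustrating that the entering-variable rule affects the path but not the optimal BFS here.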

Extreme point sequence: {x_4, x_5, x_6} → {x_2, x_5, x_6} → {x_2, x_3, x_6} → {x_1, x_3, x_6}

Exercise: Apply the Simplex Method again, but using the first-negative-coefficient rule to select a pivot column.

Simplex Method for an LP in Standard Form (Artificial Variables Techniques)

Consider an LP in standard form: Max{ x_0 = c^T x : Ax = b (b ≥ 0), x ≥ 0 }. There is no obvious initial starting basis B such that B = I_m. For notational simplicity, assume that we pick B as the last m (linearly independent) columns of A. We then have for the augmented system:

    N x_N + B x_B = b
    x_0 − c_N^T x_N − c_B^T x_B = 0

Multiplying the first equation by B^{-1} yields

    B^{-1} N x_N + x_B = B^{-1} b   (or x_B = B^{-1} b − B^{-1} N x_N)
    x_0 − c_N^T x_N − c_B^T (B^{-1} b − B^{-1} N x_N) = 0,

i.e.

    B^{-1} N x_N + x_B = B^{-1} b
    x_0 − (c_N^T − c_B^T B^{-1} N) x_N = c_B^T B^{-1} b.

Denoting z_N^T ≡ c_B^T B^{-1} N (an (n − m) row vector) gives

    B^{-1} N x_N + x_B = B^{-1} b
    x_0 − (c_N^T − z_N^T) x_N = c_B^T B^{-1} b,

which is called the general representation of an LP in standard form with respect to the basis B. Its simplex tableau is then

          x_N                 x_B    b
    x_B   B^{-1} N            I      B^{-1} b
    x_0   −(c_N^T − z_N^T)    0      c_B^T B^{-1} b

Definition: The coefficients r_j ≡ c_j − z_j (where z_N^T = (z_j)^T = c_B^T B^{-1} N) are called the reduced cost coefficients with respect to the basis B.

Remark:
(a) The current BFS is optimal when r_j = c_j − z_j ≤ 0 for all j, for a Max program;
(b) The current BFS is optimal when r_j = c_j − z_j ≥ 0 for all j, for a Min program;
because x_0 = c_B^T B^{-1} b + Σ_{j∈N} (c_j − z_j) x_j = c_B^T B^{-1} b + Σ_{j∈N} r_j x_j.

Example (The Big-M method) [Ref Taha-Chapter 3] Max x 0 = x 1 + x 2 subject to 2x 1 + x 2 4 x 1 + 2x 2 = 6 x 1, x 2 0 Putting into standard form, the augmented system is: 2x 1 + x 2 x 3 = 4 x 1 + 2x 2 = 6 x 0 x 1 x 2 = 0 Introducing artificial variables x 4 and x 5 yields, 2x 1 + x 2 x 3 + x 4 = 4 x 1 + 2x 2 + x 5 = 6 x 0 x 1 x 2 + Mx 4 + Mx 5 = 0 Calculating reduced cost coefficients r j = c j z j gives c B = ( ) M M r 1 = c 1 ( M, M)a 1 = 1 + 3M ; r 2 = c 2 ( M, M)a 2 = 1 + 3M r 3 = c 3 ( M, M)a 3 = M ; r 4 = r 5 = 0 Objective function value = c T B B 1 b = ( M, M)b = 10M x 1 x 2 x 3 x 4 x 5 b x 4 2 1 1 1 0 4 x 5 1 2 0 0 1 6 x 0 (1 + 3M) (1 + 3M) +M 0 0 10M (Note: An artificial variable can be dropped from consideration once it becomes non-basic) x 1 x 2 x 3 x 5 b x 1 x 2 x 3 b x 1 1 1/2 1/2 0 2 x 5 0 3/2 1/2 1 4 x 0 0 (1 + 3M)/2 (1 + M)/2 0 2 4M x 1 1 0 2/3 2/3 x 2 0 1 1/3 8/3 x 0 0 0 1/3 10/3 At this point all artificial variables are dropped from the problem, and x = (2/3, 8/3, 0) T is an initial BFS x 1 x 2 x 3 b x 1 1 2 0 6 x 3 0 3 1 8 x 0 0 1 0 6 Optimal solution x = (6, 0, 8) T, with x 0 = 6 22
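As a cross-check of the Big-M result (my own sketch, not part of the notes): the standard-form system here has only two equations, so we can enumerate every basic solution directly and keep the best feasible one, confirming the optimum x = (6, 0, 8) with value 6 found above.

```python
from fractions import Fraction
from itertools import combinations

# Standard-form data of the Big-M example:
#   2x1 + x2 - x3 = 4,  x1 + 2x2 = 6,  x >= 0,  max x1 + x2.
A = [[2, 1, -1], [1, 2, 0]]
b = [4, 6]
c = [1, 1, 0]

def solve2(M, rhs):
    """Solve a 2x2 rational system M z = rhs; return None if singular."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if det == 0:
        return None
    z1 = Fraction(rhs[0] * M[1][1] - M[0][1] * rhs[1], det)
    z2 = Fraction(M[0][0] * rhs[1] - rhs[0] * M[1][0], det)
    return [z1, z2]

best = None
for cols in combinations(range(3), 2):         # choose a candidate basis
    B = [[A[i][j] for j in cols] for i in range(2)]
    z = solve2(B, b)
    if z is None or min(z) < 0:
        continue                               # singular or infeasible basis
    x = [Fraction(0)] * 3
    for j, v in zip(cols, z):
        x[j] = v
    val = sum(c[j] * x[j] for j in range(3))
    if best is None or val > best[0]:
        best = (val, x)

print(best)   # the optimal basic feasible solution
```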

Example (The Two-Phase method) [cf Example of Big-M method] Max x 0 = x 1 + x 2 subject to 2x 1 + x 2 4 x 1 + 2x 2 = 6 x 1, x 2 0 Putting into standard form, the augmented system is: 2x 1 + x 2 x 3 = 4 x 1 + 2x 2 = 6 x 0 x 1 x 2 = 0 Introducing artificial variables x 4 and x 5 yields the (min-program) Artificial Problem as: 2x 1 + x 2 x 3 + x 4 = 4 x 1 + 2x 2 + x 5 = 6 x 0 x 4 x 5 = 0 Calculating reduced cost coefficients r j = c j z j gives c B = ( ) 1 1 r 1 = 0 (1, 1)a 1 = 3 ; r 2 = 0 (1, 1)a 2 = 3 r 3 = 0 (1, 1)a 3 = 1 ; r 4 = r 5 = 0 Objective function value = c T B B 1 b = (1, 1)b = 10 x 1 x 2 x 3 x 4 x 5 b x 4 2 1 1 1 0 4 x 5 1 2 0 0 1 6 x 0 3 3 1 0 0 10 (Note: An artificial variable can be dropped from consideration once it becomes non-basic) x 1 x 2 x 3 x 5 b x 1 x 2 x 3 b x 1 1 1/2 1/2 0 2 x 5 0 3/2 1/2 1 4 x 0 0 3/2 1/2 0 4 x 1 1 0 2/3 2/3 x 2 0 1 1/3 8/3 x 0 0 0 0 0 Phase I computation completes with objective function value = 0 and x = (2/3, 8/3, 0) T an initial BFS Phase II begins with calculating reduced cost coefficients to restore the original objective function, followed by pivot operation(s) to optimality x 1 x 2 x 3 b x 1 x 2 x 3 b x 1 1 0 2/3 2/3 x 2 0 1 1/3 8/3 x 0 0 0 1/3 10/3 x 1 1 2 0 6 x 3 0 3 1 8 x 0 0 1 0 6 Phase II computation is complete, giving optimal solution x = (6, 0, 8) T, with x 0 = 6 23
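Phase I above can be sketched in code. The helper below (my own illustration; `simplex_min_eq` is an assumed name) runs the min-simplex from the artificial basis and confirms that the artificial objective is driven to zero, recovering the starting BFS (2/3, 8/3, 0):

```python
from fractions import Fraction

def simplex_min_eq(A, b, c, basis):
    """Minimize c^T x s.t. Ax = b, x >= 0, starting from the given basis
    (column indices whose submatrix of A is an identity)."""
    m, n = len(A), len(A[0])
    T = [[Fraction(v) for v in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    # Put the objective row into canonical form: start with c, then
    # zero out the basic columns (this computes the reduced costs).
    z = [Fraction(c[j]) for j in range(n)] + [Fraction(0)]
    for i, bi in enumerate(basis):
        f = z[bi]
        z = [z[j] - f * T[i][j] for j in range(n + 1)]
    while True:
        s = min(range(n), key=lambda j: z[j])
        if z[s] >= 0:                      # all reduced costs >= 0: optimal
            break
        ratios = [(T[i][-1] / T[i][s], i) for i in range(m) if T[i][s] > 0]
        if not ratios:
            raise ValueError("unbounded")
        _, r = min(ratios)
        p = T[r][s]
        T[r] = [v / p for v in T[r]]
        for i in range(m):
            if i != r:
                f = T[i][s]
                T[i] = [T[i][j] - f * T[r][j] for j in range(n + 1)]
        f = z[s]
        z = [z[j] - f * T[r][j] for j in range(n + 1)]
        basis[r] = s
    x = [Fraction(0)] * n
    for i in range(m):
        x[basis[i]] = T[i][-1]
    return -z[-1], x                       # z[-1] holds minus the value

# Phase I: columns are (x1, x2, x3, x4, x5), with x4, x5 artificial.
A = [[2, 1, -1, 1, 0], [1, 2, 0, 0, 1]]
b = [4, 6]
cost = [0, 0, 0, 1, 1]                     # minimize the sum of artificials
val, x = simplex_min_eq(A, b, cost, [3, 4])
print(val, x[:3])                          # value 0: a BFS of the original LP
```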

The Two-phase Method Phase I: (Search for a starting BFS) Introduce artificial variables to give a starting basis as an identity matrix Replace the original objective function by the sum of all artificial variables thus introduced The Simplex tableau of this derived (artificial) problem is then put into Canonical form (by calculating reduced cost coefficients) Apply the Simplex procedure to obtain a minimum optimal solution The minimum objective function value can be either (a) zero (ie all artificial variables equal zero) implying a BFS for the original problem; or (b) positive (ie at least one artificial variable basic and positive) implying no feasible solutions exist for the original problem In case of (a), proceed to Phase II In case (b), stop Phase II: (Conclude with an optimal BFS) Use the solution obtained at the end of Phase I as a starting BFS while restoring the original objective function Again, this Simplex tableau is put into Canonical form Apply the Simplex procedure to obtain an optimal solution A Complete Example (using the Two-phase Method) Min 2x 1 + 4x 2 + 7x 3 + x 4 + 5x 5 Subject to x 1 + x 2 + 2x 3 + x 4 + 2x 5 = 7 x 1 + 2x 2 + 3x 3 + x 4 + x 5 = 6 x 1 + x 2 + x 3 + 2x 4 + x 5 = 4 x 1 free, x 2 0, x 3 0, x 4 0, x 5 0 Since x 1 is free, it can be eliminated by solving for x 1 in terms of the other variables from the 1 st equation and substituting everywhere else This can be done nicely using our pivot operation on the following simplex tableau: x 1 x 2 x 3 x 4 x 5 b 1 1 2 1 2 7 1 2 3 1 1 6 1 1 1 2 1 4 2 4 7 1 5 0 Initial tableau 24

We select any non-zero element in the first column as our pivot element this will eliminate x 1 from all other rows:- x 1 x 2 x 3 x 4 x 5 b 1 1 2 1 2 7 0 1 1 0 1 1 0 0 1 1 1 3 ( ) 0 2 3 1 1 +14 Equivalent Problem Saving the first row ( ) for future reference only, we carry on only the sub-tableau with the first row and the first column deleted There is no obvious basic feasible solution, so we use the two-phase method: After making b 0, we introduce artificial variables y 1 0 c B = ( ) 1 1 0 0 0 0 1 1 0 Initial tableau for phase I Transforming the last row to give a tableau in canonical form, we get x 2 x 3 x 4 x 5 y 1 y 2 b 1 1 0 1 1 0 1 0 1 1 1 0 1 3 1 0 1 2 0 0 4 and y 2 0 to give the artificial problem:- x 2 x 3 x 4 x 5 y 1 y 2 b 1 1 0 1 1 0 1 0 1 1 1 0 1 3 First tableau phase I which is in canonical form We carry out the pivot operations with the indicated pivot elements:- x 2 x 3 x 4 x 5 y 1 y 2 b 1 1 0 1 1 0 1 1 2 1 0 1 1 2 1 2 1 0 2 0 2 Second tableau phase I 25

x 2 x 3 x 4 x 5 y 1 y 2 b 0 1 1 1 0 1 3 1 2 1 0 1 1 2 0 0 0 0 1 1 0 Final tableau phase I At the end of phase I, we go back to the equivalent reduced problem (ie discarding the artificial variables y 1, y 2 ):- x 2 x 3 x 4 x 5 b 0 1 1 1 3 1 2 1 0 2 c B = ( ) 1 2 0 2 2 0 21 2 3 1 1 14 Initial tableau phase II Pivoting as shown gives x 2 x 3 x 4 x 5 b 1/2 0 1/2 1 2 1/2 1 1/2 0 1 1 0 1 0 19 Final tableau phase II The solution x 3 = 1, x 5 = 2 can be inserted in the expression ( ) for x 1 giving x 1 = 7 + 2(1) + 2(2) = 1 Thus the final solution is x 1 = 1, x 2 = 0, x 3 = 1, x 4 = 0, x 5 = 2, with x 0 = 19 26
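The final answer can be verified directly by substitution (a quick sanity check, not part of the notes):

```python
# Verify the two-phase solution above: substitute x = (1, 0, 1, 0, 2)
# back into the original equality constraints and the objective.
A = [[1, 1, 2, 1, 2],
     [1, 2, 3, 1, 1],
     [1, 1, 1, 2, 1]]
b = [7, 6, 4]
c = [2, 4, 7, 1, 5]
x = [1, 0, 1, 0, 2]

residuals = [sum(A[i][j] * x[j] for j in range(5)) - b[i] for i in range(3)]
value = sum(c[j] * x[j] for j in range(5))
print(residuals, value)   # [0, 0, 0] 19
```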

Various possible cases when applying the Simplex Method (1) Degeneracy [Ref HA Taha Chapter 3] x 1 x 2 x 3 x 4 x 5 b x 3 4 3 1 0 0 12 x 4 4 1 0 1 0 8 x 5 4 1 0 0 1 8 x 0 2 1 0 0 0 0 ( ) x 1 x 2 x 3 x 4 x 5 b x 3 0 4 1 0 1 4 x 4 0 2 0 1 1 0 x 1 1 1/4 0 0 1/4 2 0 3/2 0 0 1/2 4 Degenerate Vertex {x 4 = 0 and basic} x 1 x 2 x 3 x 4 x 5 b x 3 0 0 1 2 1 4 x 2 0 1 0 1/2 1/2 0 x 1 1 0 0 1/8 1/8 2 0 0 0 3/4 1/4 4 ( ) x 1 x 2 x 3 x 4 x 5 b x 3 0 2 1 1 0 4 x 1 1 1/4 0 1/4 0 2 x 5 0 2 0 1 1 0 0 1/2 0 1/2 0 4 Degenerate Vertex {x 5 = 0 and basic} x 1 x 2 x 3 x 4 x 5 b x 2 0 1 1/2 1/2 0 2 x 1 1 0 1/8 3/8 0 3/2 x 5 0 0 1 2 1 4 0 0 1/4 1/4 0 5 Degenerate Vertex {x 4 = 0 and basic} x 1 x 2 x 3 x 4 x 5 b x 5 0 0 1 2 1 4 x 2 0 1 1/2 1/2 0 2 x 1 1 0 1/8 3/8 0 3/2 0 0 1/4 1/4 0 5 Degenerate Vertex: V is represented by : {x 2 = 0, x 4 = 0}, {x 4 = 0, x 5 = 0}, {x 2 = 0, x 5 = 0} Exercise: Try pivotting in variable x 2 from the very beginning Do you see any degeneracy? Why? 27

Example of Degenracy and Cycling (Beale) Maximize x 0 = 20x 1 + 1/2x 2 6x 3 + 3/4x 4 subject to x 1 2 8x 1 x 2 + 9x 3 + 1/4x 4 16 12x 1 1/2x 2 + 3x 3 + 1/2x 4 24 x 2 1 x 1 0, x 2 0, x 3 0, x 4 0 x 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 b x 5 1 0 0 0 1 0 0 0 2 x 6 8 1 9 1/4 0 1 0 0 16 x 7 12 1/2 3 1/2 0 0 1 0 24 (T0) x 8 0 1 0 0 0 0 0 1 1 x 0 20 1/2 6 3/4 0 0 0 0 0 x 1 1 0 0 0 1 0 0 0 2 x 6 0 1 9 1/4 8 1 0 0 0 x 7 0 1/2 3 1/2 12 0 1 0 0 (T1) x 8 0 1 0 0 0 0 0 1 1 x 0 0 1/2 6 3/4 20 0 0 0 40 x 1 1 0 0 0 1 0 0 0 2 x 4 0 4 36 1 32 4 0 0 0 x 7 0 3/2 15 0 4 2 1 0 0 (T2) x 8 0 1 0 0 0 0 0 1 1 x 0 0 7/2 33 0 4 3 0 0 40 x 1 1 3/8 15/4 0 0 1/2 1/4 0 2 x 4 0 8 84 1 0 12 8 0 0 x 5 0 3/8 15/4 0 1 1/2 1/4 0 0 (T3) x 8 0 1 0 0 0 0 0 1 1 x 0 0 2 18 0 0 1 1 0 40 28

x 1 1 0 3/16 3/64 0 1/16 1/8 0 2 x 2 0 1 21/2 1/8 0 3/2 1 0 0 x 5 0 0 3/16 3/64 1 1/16 1/8 0 0 (T4) x 8 0 0 21/2 1/8 0 3/2 1 1 1 x 0 0 0 3 1/4 0 2 3 0 40 x 1 1 0 0 0 1 0 0 0 2 x 2 0 1 0 5/2 56 2 6 0 0 x 3 0 0 1 1/4 16/3 1/3 2/3 0 0 (T5) x 8 0 0 0 5/2 56 2 6 1 1 x 0 0 0 0 1/2 16 1 1 0 40 x 1 1 0 0 0 1 0 0 0 2 x 6 0 1/2 0 5/4 28 1 3 0 0 x 3 0 1/6 1 1/6 4 0 1/3 0 0 (T6) x 8 0 1 0 0 0 0 0 1 1 x 0 0 1/2 0 7/4 44 0 2 0 40 x 1 1 0 0 0 1 0 0 0 2 x 6 0 1 9 1/4 8 1 0 0 0 x 7 0 1/2 3 (1/2) 12 0 1 0 0 (T7) x 8 0 1 0 0 0 0 0 1 1 x 0 0 1/2 6 3/4 20 0 0 0 40 Hence a cycle (of period 6) is detected as T1 = T7 To break the cycle, bring in x 4 and remove x 7 Then the next iteration yields the (non-degenerate) optimal solution x 1 = 2, x 2 = 1, x 3 = 0, x 4 = 1, x 5 = 0, x 6 = 3/4, x 7 = 0, x 8 = 0, with x 0 = 4125 When an LP is degenerate, ie its feasible region (the convex polytope) possesses degenerate vertices, cycling may occur as follows: Suppose the current basis is B and such that this basis B yields a degenerate BFS Since moving from a degenerate vertex (BFS) to another degenerate vertex does not affect (ie increase or decrease) the objective function value It is then possible for the Simplex procedure to start from the current (degenerate) basis B, and after some number p of iterations, to return to B with no change in the objective function value as long as all vertices in-between are degenerate This means 29

that a further p-iterations will again bring us back to this same basis B The process is then said to be cycling In our example, starting from basis B = (a 1, a 6, a 7, a 8 ), we move to (a 1, a 4, a 7, a 8 ), to (a 1, a 4, a 5, a 8 ), to (a 1, a 2, a 5, a 8 ), to (a 1, a 2, a 3, a 8 ) to (a 1, a 6, a 3, a 8 ) and finally back to (a 1, a 6, a 7, a 8 ) in six iterations, or a cycle of period p = 6 To get out of cycling, one way is to try a different pivot element (Degeneracy guarantees the existence of more than one feasible pivot element, ie tie-ratios exist) This is done as indicated in our example above Another way in terms of computer implementation is by perturbation of data For our example, this may be done by changing b = (2, 16, 24, 1) T in T0 to (200001, 16000001, 240000001, 100000001) T Yet another is by using the concept of lexicographic order of vectors (cf GB Dantzig, Linear Programming and Extensions ) The best of all, however, is Bland s smallest index rule as described in Mathematics of Operations Research, Vol 2, No 2 (1977) (2) Unbounded Solutions (2a) Unbounded optimal solution (ie x 0 ) Max x 0 = 2x 1 + x 2 subject to x 1 x 2 10 (1) 2x 1 x 2 40 (2) x 1, x 2 0 x 1 x 2 x 3 x 4 b x 3 1 1 1 0 10 x 4 2 1 0 1 40 x 0 2 1 0 0 0 No positive ratio exists in x 2 column Hence x 2 can be increased without bound while maintaining feasibility (Why?) 30
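Returning to the cycling discussion: Bland's smallest-index rule is simple to implement. The sketch below (my own illustration, not from the notes) applies it to Beale's example; the entering variable is the lowest-indexed column with a negative objective-row coefficient, and ratio ties are broken by the lowest basic-variable index, so the method provably cannot cycle.

```python
from fractions import Fraction

def simplex_bland_max(A, b, c):
    """Maximize c^T x s.t. Ax <= b, x >= 0, using Bland's smallest-index
    rule for both the entering and the leaving variable."""
    m, n = len(A), len(A[0])
    T = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == k)) for k in range(m)]
         + [Fraction(b[i])] for i in range(m)]
    obj = [Fraction(-c[j]) for j in range(n)] + [Fraction(0)] * (m + 1)
    basis = list(range(n, n + m))
    while True:
        # Entering: the smallest index with a negative objective coefficient.
        s = next((j for j in range(n + m) if obj[j] < 0), None)
        if s is None:
            break
        # Leaving: minimum ratio, ties broken by smallest basic index.
        cand = [(T[i][-1] / T[i][s], basis[i], i) for i in range(m) if T[i][s] > 0]
        if not cand:
            raise ValueError("unbounded")
        _, _, r = min(cand)
        p = T[r][s]
        T[r] = [v / p for v in T[r]]
        for i in range(m):
            if i != r:
                f = T[i][s]
                T[i] = [T[i][j] - f * T[r][j] for j in range(n + m + 1)]
        f = obj[s]
        obj = [obj[j] - f * T[r][j] for j in range(n + m + 1)]
        basis[r] = s
    return obj[-1]

# Beale's example, with the coefficients of tableau T0 above.
A = [[1, 0, 0, 0],
     [8, -1, 9, Fraction(1, 4)],
     [12, Fraction(-1, 2), 3, Fraction(1, 2)],
     [0, 1, 0, 0]]
b = [2, 16, 24, 1]
c = [20, Fraction(1, 2), -6, Fraction(3, 4)]
val = simplex_bland_max(A, b, c)
print(val)   # 165/4, i.e. x0 = 41.25, with no cycling
```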

(2b) Unbounded feasible region but bounded optimal solution Max x 0 = 6x 1 2x 2 subject to 2x 1 x 2 2 (1) x 1 4 (2) x 1, x 2 0 2 1 1 0 2 1 0 0 1 4 6 2 0 0 0 1 1/2 1/2 0 1 0 1/2 1/2 1 3 0 1 3 0 6 1 0 0 1 4 0 1 1 2 6 0 0 2 2 12 Any (x 1, x 2 ) = (1, k) for k being any positive number is a feasible solution Optimal tableau (3) Infinite number of Optimal Solutions Max x 0 = 4x 1 + 14x 2 subject to 2x 1 + 7x 2 21 7x 1 + 2x 2 21 x 1, x 2 0 31

2 7 1 0 21 7 2 0 1 21 4 14 0 0 0 2/7 1 1/7 0 3 45/7 0 2/7 1 15 0 0 2 0 42 0 1 7/45 2/45 7/3 1 0 2/45 7/45 7/3 0 0 2 0 42 Zero reduced cost coefficients for non-basic variables at optimality indicate alternative optimal solutions, since if we pivot in those columns, x 0 value remains the same after a change of basis for a different BFS Notice that Simplex Method yields only the extreme point optimal (BFS) solutions More generally, the set of alternative optimal solutions is given by the convex combination of optimal extreme point solutions Suppose x 1, x 2,, x p are extreme point optimal solutions, then x = p λ k = 1 is also an optimal solution k=1 p k=1 λ k x k, where 0 λ k 1 and (4) Non-existence of feasible solutions In terms of the methods of artificial variable techniques, the solution at optimality could include one or more artificial variables at a positive level (ie as a non-zero basic variable) In such a case the corresponding constraint is violated and the artificial variable cannot be driven out of the basis The feasible region is thus seen to be empty (Can this ever happen to an LP that can be put into feasible canonical form?) More Compact Simplex Tableau (Jordan interchange) Consider an LP tableau such as the following: x 1 x 2 x 3 x 4 x 5 x 6 b x 5 1 1 0 1 1 0 1 x 6 0 1 1 1 0 1 3 x 0 1 0 1 2 0 0 4 32

Notice that the same amount of information is contained in the more compact tableau with the basic columns omitted: x 1 x 2 x 3 x 4 b x 5 1 1 0 1 1 x 6 0 0 1 1 3 x 0 1 0 1 2 4 To carry out a pivot operation, say to have x 4 replaced by x 5, we note that in the resulting compact tableau, we should have columns x 1, x 2, x 3 and x 5 only since these are then the non-basic columns after this pivot operation x 1 x 2 x 3 x 4 x 5 x 6 b x 4 1 1 0 1 1 0 1 x 6 1 2 1 0 1 1 2 x 0 1 2 1 0 2 0 2 Full tableau x 1 x 2 x 3 x 5 b x 4 1 1 0 1 1 x 6 1 2 1 1 2 x 0 1 2 1 2 2 Compact tableau For the full tableau we use the pivot rule as usual, that is a c b d 1 b/a 0 d bc/a In particular for column x 5 (the newly formed non-basic column), we have a 1 c 0 1 1/a 0 c/a x 4 x 5 x 4 x 5 col col col col 33

For the compact tableau we use the same rule, except that we also replace (in position) the x_4 column by the x_5 column. Therefore, in the compact scheme, pivot-and-replacement becomes

    [ a   b ]        [  1/a        b/a     ]
    [ c   d ]   →    [ −c/a     d − bc/a   ]

The Revised Simplex Method (Simplex Method in Explicit Inverse Form; or Simplex Method in Matrix Form)

Consider the general representation of an LP with respect to the basis B:

    B^{-1} N x_N + I x_B = B^{-1} b
    x_0 − (c_N^T − c_B^T B^{-1} N) x_N = c_B^T B^{-1} b

Observe that at any time during the application of the Simplex procedure, knowledge of B^{-1} is sufficient to read off a BFS, i.e. x_B = B^{-1} b, x_N = 0 and x_0 = c_B^T B^{-1} b. Hence the idea behind the Revised Simplex Method is as follows: instead of carrying out the computation on the entire simplex tableau, we keep only the current basis inverse B^{-1} (and the original data A, b and c), and compute only what is needed for each iteration.

Step 0: Given the current basis inverse B^{-1}, read off the current BFS x_B = B^{-1} b (the column y_0 in the tableau).

Step 1: Calculate the reduced cost coefficients r_N^T = c_N^T − c_B^T B^{-1} N. (This is best done by first calculating λ ≡ c_B^T B^{-1} (∈ R^m) and then r_N^T = c_N^T − λN.) If r_N ≥ 0 (for a Min program) or r_N ≤ 0 (for a Max program), the current BFS is optimal.

Step 2: Select a column a_s from among those non-basic columns with r_s < 0 (for a Min program) or r_s > 0 (for a Max program), and calculate y_s = B^{-1} a_s, which is the current column associated with the variable x_s in terms of the current basis B.

Step 3: Calculate the ratios y_{i0}/y_{is}, for those i with y_{is} > 0, to determine the column a_r which is to leave the basis.

Step 4: Update B^{-1} (i.e. replace column a_r by column a_s in B and obtain the new inverse) and the current BFS x_B = B^{-1} b. Return to Step 1.
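The steps above can be sketched as follows (my own illustration, assuming exact rational arithmetic and an initial identity basis; the name `revised_simplex_max` is not from the notes). Only B^{-1} is stored and updated:

```python
from fractions import Fraction

def revised_simplex_max(A, b, c, basis):
    """Maximize c^T x s.t. Ax = b, x >= 0, starting from a basis whose
    columns of A form an identity; only B^{-1} is kept and updated."""
    m, n = len(A), len(A[0])
    A = [[Fraction(v) for v in row] for row in A]
    b = [Fraction(v) for v in b]
    Binv = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    while True:
        xB = [sum(Binv[i][k] * b[k] for k in range(m)) for i in range(m)]
        # Step 1: lambda = c_B^T B^{-1}, then reduced costs r_j = c_j - lambda a_j.
        lam = [sum(c[basis[k]] * Binv[k][i] for k in range(m)) for i in range(m)]
        s, rs = None, 0
        for j in range(n):
            if j in basis:
                continue
            r = c[j] - sum(lam[i] * A[i][j] for i in range(m))
            if r > rs:                     # max program: enter with r_s > 0
                s, rs = j, r
        if s is None:                      # all r_j <= 0: optimal
            value = sum(lam[i] * b[i] for i in range(m))
            x = [Fraction(0)] * n
            for k in range(m):
                x[basis[k]] = xB[k]
            return value, x
        # Step 2: current column y_s = B^{-1} a_s.
        ys = [sum(Binv[i][k] * A[k][s] for k in range(m)) for i in range(m)]
        # Step 3: minimum-ratio test.
        ratios = [(xB[i] / ys[i], i) for i in range(m) if ys[i] > 0]
        _, r = min(ratios)
        # Step 4: update B^{-1} by pivoting on column y_s.
        p = ys[r]
        Binv[r] = [v / p for v in Binv[r]]
        for i in range(m):
            if i != r:
                Binv[i] = [Binv[i][k] - ys[i] * Binv[r][k] for k in range(m)]
        basis[r] = s

# Same data as the numerical example: c = (3, 1, 3, 0, 0, 0), basis (x4, x5, x6).
A = [[2, 1, 1, 1, 0, 0], [1, 2, 3, 0, 1, 0], [2, 2, 1, 0, 0, 1]]
val, x = revised_simplex_max(A, [2, 5, 6], [3, 1, 3, 0, 0, 0], [3, 4, 5])
print(val, x)   # optimal value 27/5 at x = (1/5, 0, 8/5, 0, 0, 4)
```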

Numerical Example on Revised Simplex Method To maximize c T x, where c = (3, 1, 3, 0, 0, 0) T with the table of coefficients a 1 a 2 a 3 a 4 a 5 a 6 b Initial Basis 2 1 1 1 0 0 2 1 2 3 0 1 0 5 B = B 1 = I 3 2 2 1 0 0 1 6 (1) Basic variables B 1 x 4 1 0 0 x B 2 x 5 0 1 0 5 x 6 0 0 1 6 λ = c T B B 1 = (0, 0, 0)I = (0, 0, 0) rn T = ct N λn = (3, 1, 3) (0, 0, 0)N = (3, 1, 3) > 0 Bring a 2 into the basis, with y 2 = B 1 a 2 = Ia 2 = a 2 y 2 1 2 2 (2) Basic variables x 2 x 5 x 6 B 1 1 0 0 2 1 0 2 0 1 λ = c T B B 1 = (1, 0, 0)B 1 = (1, 0, 0) rn T = ct N λn = (c 1, c 3, c 4 ) λ[a 1, a 3, a 4 ] = (3, 3, 0) (1, 0, 0) 2 1 1 1 3 0 2 1 0 = (1, 2, 1) 0 Bring a 3 into the basis, with y 3 = 1 0 0 2 1 0 1 3 = 1 1 2 0 1 1 1 (3) Basic variables x 2 x 3 x 6 B 1 3 1 0 2 1 0 4 1 1 λ = c T B B 1 = (1, 3, 0)B 1 = ( 3, 2, 0) rn T = ct N λn = (c 1, c 4, c 5 ) λ[a 1, a 4, a 5 ] = (3, 0, 0) ( 3, 2, 0) 2 1 0 1 0 1 2 0 0 = (7, 3, 2) 0 Bring a 1 into the basis, with y 1 = 3 1 0 2 1 0 = 5 3 4 1 1 5 x B 2 1 2 x B 1 1 3 y 3 1 1 1 y 1 5 3 5 35

(4) Basic variables x 1 x 3 x 6 B 1 3/5 1/5 0 1/5 2/5 0 1 0 1 λ = c T B B 1 = (3, 3, 0)B 1 = (6/5, 3/5, 0) rn T = ct N λn = (c 2, c 4, c 5 ) λ[a 2, a 4, a 5 ] = (1, 0, 0) (6/5, 3/5, 0) 1 1 0 2 0 1 2 0 0 = ( 7/5, 6/5, 3/5) < 0 Optimal solution x = (1/5, 0, 8/5, 0, 0, 4) T, with value x 0(= c T B x B ) = ct B B 1 b = λb = (6/5, 3/5, 0) (2, 5, 6) = 27/5 x B 1/5 8/5 4 Duality of Linear Programming Every LP has associated with it another LP, called its dual and that the two problems have such a close relationship that whenever one problem is solved, the other is solved as well They are called the dual pair (primal + dual) in the sense that the dual of the dual will again be the primal Primal Dual x : col n-vector Max c T x Min yb c : col n-vector subject to Ax b subject to ya c T b : col m-vector x 0 y 0 y : row m-vector A is m n (NB Calling which one primal and the other one dual is completely arbitrary) We observe from the above the following correspondence: P Max Program D Min Program c j : n obj ftn coeff n rhs b i : m rhs m obj ftn coeff y i : m( ) constraints m non-neg variables x j : n non-neg variables n( ) constraints Definition This pair of dual programs is called the symmetric form of the dual pair 36
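The symmetry of the dual pair can be seen directly in code. A minimal sketch (my own illustration, with an assumed helper name): in symmetric form, taking the dual amounts to transposing A and swapping the roles of b and c, so applying the transformation twice returns the original data, i.e. the dual of the dual is the primal.

```python
def dual_of_max(A, b, c):
    """Given the symmetric-form max program Max{c^T x : Ax <= b, x >= 0},
    return the data of its dual Min{yb : yA >= c^T, y >= 0}:
    transpose the matrix and swap b with c."""
    m, n = len(A), len(A[0])
    At = [[A[i][j] for i in range(m)] for j in range(n)]
    return At, c, b

A = [[2, 2, 1], [1, 2, 2]]
b = [4, 6]
c = [1, 4, 3]
At, bd, cd = dual_of_max(A, b, c)       # the dual's data
Att, bdd, cdd = dual_of_max(At, bd, cd) # the dual of the dual
print(Att == A and bdd == b and cdd == c)   # True
```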

The Diet Problem (I)

Q: How can a dietician design the most economical diet that satisfies the basic daily nutritional requirements for good health?

We have the following information: available at the market are n different types of food; the unit cost of food j is c_j (j = 1, 2, …, n). There are m basic nutritional ingredients (nutrients), and each individual requires daily at least b_i units of nutrient i (i = 1, 2, …, m). Each unit of food j contains a_ij units of nutrient i.

Denoting by x_j (our decision variable) the number of units of food j to include in a diet, the problem is to select the x_j's so as to minimize the total cost x_0 of the diet, i.e.

    Min x_0 = Σ_{j=1}^{n} c_j x_j

subject to the nutritional constraints

    Σ_{j=1}^{n} a_ij x_j ≥ b_i    (i = 1, 2, …, m)

and the non-negativity constraints x_j ≥ 0 (j = 1, 2, …, n). That is, (I) becomes Min{ x_0 = c^T x : Ax ≥ b, x ≥ 0 }.

The Diet Problem (II)

Q: How can a pharmaceutical company determine the price of each unit of nutrient pill so as to maximize revenue, if a synthetic diet made up of nutrient pills of various pure nutrients is adopted?

Denoting by y_i the unit price of nutrient pill i, the problem is to maximize the total revenue y_0 from selling such a synthetic diet, i.e.

    Max y_0 = Σ_{i=1}^{m} y_i b_i

subject to the constraints that the cost of a unit of synthetic food j made up of nutrient pills be no greater than the unit market price of food j:

    Σ_{i=1}^{m} y_i a_ij ≤ c_j    (j = 1, 2, …, n)

and y_i ≥ 0 (i = 1, 2, …, m). That is, (II) becomes Max{ y_0 = yb : yA ≤ c^T, y ≥ 0 }.

Hence (I) and (II) form a dual pair of LP, and the solution to one should lead to the solution of the other Now consider an LP in standard form: Max{c T x Ax = b (b 0), x 0} Converting to canonical form gives Max{c T x Ax b, Ax b, x 0} Using a dual vector partitioned as (u, v), the dual is Min{ub vb ua va c, u, v 0} Setting λ u v gives Min{λb λa c T, λ unrestricted in sign (free)} And we have the unsymmetric form of a dual pair: (Primal) Max{c T x Ax = b, x 0} and (Dual) Min {λb λa c T, λ free} Comparing this with the symmetric form, we have the conclusion that while inequality constraints correspond to non-negative dual variables, equality constraints correspond to free (unrestricted) dual variables Max subject to c j x j General rule of the relationship between a dual pair m Min y i b i a ij x j b i (i = 1, 2,, k) subject to y i 0 (i = 1, 2,, k) a ij x j = b i (i = k + 1,, m) y i free (i = k + 1,, m) x j 0 (j = 1, 2,, l) x j free (j = l + 1,, n) i=1 m y i a ij c j (j = 1, 2,, l) m y i a ij = c j (j = l + 1,, n) i=1 i=1 Example (The Transportation Problem - TP) The following is called the costs and requirements table for a TP Sink (destination) c 11 c 12 c 1n Supply s 1 Source c 21 c 22 c 2n c m1 c m2 c mn Demand d 1 d 2 d n (Assume m s i = n d j ) 38 s 2 s m i=1

c ij unit transportation cost from source i to sink j s i supply available from source i d j demand required for sink j The problem is to decide the amount x ij to be shipped from i to j so as to minimize the total transportation cost while meeting all demands That is m Min c ij x ij subject to i=1 x ij = s i (i = 1, 2,, m) m x ij = d j (j = 1, 2,, n) i=1 The dual is then given by (Exercise): Max x ij 0 (i = 1, 2,, m ; j = 1, 2,, m) m s i u i + d j v j i=1 subject to u i + v j c ij (i = 1, 2,, m ; j = 1, 2,, n) u i, v j free The Duality Theory of Linear Programming Theorem 1 (Weak Duality) If x and y are feasible solutions to the dual pair such that x is for the max program and y is for the min program, then c T x yb Proof Using the symmetric form, we get c T x yax yb, since x, y 0 and feasible Corollary If x and y are feasible to the dual pair and c T x = yb, then x and y are both optimal Theorem 2 (Strong Duality) If either of a dual pair of LP s has a finite optimum, so does the other and the two objective function values are equal If either has an unbounded objective function value, the other has no finite feasible solution 39

Proof: Consider the unsymmetric form of a dual pair: (P) Max{ c^T x : Ax = b, x ≥ 0 } and (D) Min{ λb : λA ≥ c^T, λ free }. Suppose x* is a finite optimal solution to P with corresponding basis B. Then the reduced cost coefficients satisfy r^T = c^T − c_B^T B^{-1} A ≤ 0. Let λ* ≡ c_B^T B^{-1}. Then c^T − λ*A ≤ 0, i.e. λ* is feasible for D. Also c^T x* = c_B^T B^{-1} b = λ*b. Hence λ* is optimal for D. Next, for any feasible y to D, c^T x ≤ yb. Now if c^T x → +∞ (unbounded for the max program), then yb → +∞ as well; that is, there cannot exist a finite feasible solution to D.

Corollary: The vector λ* = c_B^T B^{-1} is an optimal solution to the dual.

Theorem 3 (Complementary Slackness): If x and y are feasible solutions to the dual pair, then x and y are optimal if and only if

    y_i ( Σ_{j=1}^{n} a_ij x_j − b_i ) = 0    (i = 1, 2, …, m)

and

    x_j ( Σ_{i=1}^{m} y_i a_ij − c_j ) = 0    (j = 1, 2, …, n).

(In matrix form: y(b − Ax) = 0 and (yA − c^T)x = 0.)

Proof: y(b − Ax) = (yA − c^T)x = 0 if and only if c^T x = yAx = yb.

Upshot: For optimal non-degenerate solutions x* (y*) to the primal (dual), a variable x_j* > 0 (y_i* > 0) implies that the corresponding j-th dual (i-th primal) constraint is tight (or "binding"), i.e. Σ_{i=1}^{m} y_i* a_ij = c_j (Σ_{j=1}^{n} a_ij x_j* = b_i).

Example on Dual Prices

(P) Max x_1 + 4x_2 + 3x_3
    subject to 2x_1 + 2x_2 + x_3 ≤ 4
               x_1 + 2x_2 + 2x_3 ≤ 6
               x_1, x_2, x_3 ≥ 0

(D) Min 4y_1 + 6y_2
    subject to 2y_1 + y_2 ≥ 1
               2y_1 + 2y_2 ≥ 4
               y_1 + 2y_2 ≥ 3
               y_1, y_2 ≥ 0

Initial tableau x 1 x 2 x 3 x 4 x 5 b x 4 2 2 1 1 0 4 x 5 1 2 2 0 1 6 1 4 3 0 0 0 Optimal tableau x 1 x 2 x 3 x 4 x 5 b x 2 3/2 1 0 1 1/2 1 x 3 1 0 1 1 1 2 2 0 0 1 1 10 By duality theory, the optimal dual variables are y = c T B B 1 opt, where B opt = [a 2, a 3 ] Hence y = c T B B 1 opt = c T B B 1 opti 0 = c T B B 1 opt[a 4, a 5 ] (c 4, c 5 ) = (r 4, r 5 ) = (1, 1) That is, the optimal solution to D is obtained directly from the (optimal) objective function row of the final optimal tableau for P under the columns where the identity matrix appeared in the initial tableau (Exercise: What about when compact form is used?) Checking for complementary slackness for x = (0, 1, 2) T and y = (1, 1) gives: y1 > 0 2x 1 + 2x 2 + x 3 = 4 ie 2(0) + 2(1) + 2 = 4 y2 > 0 x 1 + 2x 2 + 2x 3 = 6 ie 0 + 2(1) + 2(2) = 6 x 1 = 0 2y1 + y2 1 ie 2(1) + 1 = 3 1 x 2 > 0 2y1 + 2y2 = 4 ie 2(1) + 2(1) = 4 x 3 > 0 y1 + 2y2 = 3 ie 1 + 2(1) = 3 (Exercise: Solve D using Simplex method and read off the primal optimal solution Which one of P and D is easier to solve?) Dual Simplex Method 1 Given a dual feasible basic solution x B If x B 0, then the current solution is optimal; otherwise select an index r such that the component x r (of x B ) < 0 2 If all y rj 0 (j = 1, 2,, n), then the dual is unbounded; otherwise determine an index s such that [ ] y os yoj = Min yrj < 0 y rs j y rj 3 Pivot at element y rs and return to step 1 41
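The complementary slackness conditions for the dual-price example can be verified directly (a quick sanity check, not part of the notes):

```python
# Check complementary slackness for x* = (0, 1, 2), y* = (1, 1).
A = [[2, 2, 1], [1, 2, 2]]
b = [4, 6]
c = [1, 4, 3]
x = [0, 1, 2]
y = [1, 1]

slack_primal = [b[i] - sum(A[i][j] * x[j] for j in range(3)) for i in range(2)]
slack_dual = [sum(y[i] * A[i][j] for i in range(2)) - c[j] for j in range(3)]

# y_i (b - Ax)_i = 0 and x_j (yA - c)_j = 0 for every i and j:
print([y[i] * slack_primal[i] for i in range(2)])   # [0, 0]
print([x[j] * slack_dual[j] for j in range(3)])     # [0, 0, 0]
```

Both products vanish componentwise, and indeed c^T x* = 10 = y*b, so both solutions are optimal by the corollary to weak duality.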