THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I


LN/MATH2901/CKC/MS/

THE UNIVERSITY OF HONG KONG
DEPARTMENT OF MATHEMATICS
Operations Research I

Definition (Linear Programming)
A linear programming (LP) problem is characterized by linear functions of the unknowns, called the decision variables. It calls for optimizing (maximizing or minimizing) a linear function of the decision variables, called the objective function, subject to a set of linear equalities and/or inequalities called the constraints.

Example (Simplified Oil Blending Problem)
The capacity of the blending tank is 100 tons of oil, but at present it contains only 20 tons of an oil, which costs $6.50/ton. The selling price of oil is $6/ton. There are two properties required of the mixture if we decide to blend the oil: (a) viscosity at most 32 units, and (b) S-content at most 3%. The oil in the tank has a viscosity of 24 units and an S-content of 2.5%. There are three types of oil available to mix with the oil in the tank: heavy oil (H), light oil (L) and cutter stock (C).

        Viscosity   S-content   Cost/ton
   H       40          4%         $4
   L       36          2.5%       $4.50
   C       24          2%         $7

Writing H, L, C for the tons of each oil added, the inequalities are:

  (capacity)    H + L + C + 20 ≤ 100,  i.e.  H + L + C ≤ 80                      (1)

  (viscosity)   (40H + 36L + 24C + 24(20)) / (H + L + C + 20) ≤ 32,
                i.e.  8H + 4L − 8C ≤ 160                                          (2)

  (S-content)   (4H + 2.5L + 2C + 2.5(20)) / (H + L + C + 20) ≤ 3,
                i.e.  H − 0.5L − C ≤ 10                                           (3)

Any triple of non-negative values (H, L, C) satisfying (1), (2) and (3) is called a feasible solution.

The objective function. We seek the feasible solution which yields the maximum profit P:

  P = 6(H + L + C + 20) − (4H + 4.5L + 7C + 6.5(20)) = 2H + 1.5L − C − 10.

Dropping the constant −10, we thus have a linear programming problem:

  Max P′ = 2H + 1.5L − C
  subject to  H + L + C ≤ 80
              8H + 4L − 8C ≤ 160
              H − 0.5L − C ≤ 10
              H ≥ 0, L ≥ 0, C ≥ 0

A solution (H*, L*, C*) to the LP is called an optimal solution. For this problem, we have H* = 0, L* = , C* = , P* = $ .

The general LP problem can be stated as follows:

  Max (or Min) x0 = c1 x1 + c2 x2 + ... + cn xn
  subject to
    a11 x1 + a12 x2 + ... + a1n xn (≤, =, ≥) b1
    a21 x1 + a22 x2 + ... + a2n xn (≤, =, ≥) b2
    ...
    am1 x1 + am2 x2 + ... + amn xn (≤, =, ≥) bm
    x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0

or, compactly,

  Max (or Min) x0 = Σ_j cj xj
  subject to  Σ_j aij xj (≤, =, ≥) bi,  i = 1, 2, ..., m
              xj ≥ 0,  j = 1, 2, ..., n
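As a quick numerical check of the blending model, one can verify a candidate blend against the three constraints and evaluate its profit. The notes give no code, so the helper names `is_feasible` and `profit` below are our own; this is a small sketch, not part of the original.

```python
def is_feasible(H, L, C):
    """Check constraints (1)-(3) of the blending LP plus non-negativity."""
    return (H >= 0 and L >= 0 and C >= 0
            and H + L + C <= 80
            and 8*H + 4*L - 8*C <= 160
            and H - 0.5*L - C <= 10)

def profit(H, L, C):
    """P = 6(H+L+C+20) - (4H + 4.5L + 7C + 6.5*20) = 2H + 1.5L - C - 10."""
    return 6*(H + L + C + 20) - (4*H + 4.5*L + 7*C + 6.5*20)

print(is_feasible(10, 20, 10))   # True
print(profit(10, 20, 10))        # 2*10 + 1.5*20 - 10 - 10 = 30.0
```

For instance, the blend (H, L, C) = (10, 20, 10) satisfies all three constraints and yields a profit of $30.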

The canonical form of an LP

  Max x0 = Σ_j cj xj
  subject to  Σ_j aij xj ≤ bi,  i = 1, 2, ..., m
              xj ≥ 0,  j = 1, 2, ..., n

Characteristics:
1. All decision variables ≥ 0.
2. All constraints of (≤) type.
3. Objective function is of max type.

Note that any LP can be put into the canonical form:
1. Min program, i.e. Min x0 = Σ_j cj xj. This is equivalent to Max g0 = −x0 = Σ_j (−cj) xj.
2. (≥) type constraint, i.e. Σ_j aij xj ≥ bi. This is equivalent to Σ_j (−aij) xj ≤ −bi.
3. (=) type constraint, i.e. Σ_j aij xj = bi. This is equivalent to Σ_j aij xj ≤ bi and Σ_j aij xj ≥ bi, or Σ_j aij xj ≤ bi and Σ_j (−aij) xj ≤ −bi.
4. Free variables, i.e. xj is unrestricted in sign. Let xj = xj⁺ − xj⁻, where xj⁺ ≥ 0 and xj⁻ ≥ 0. Substituting xj⁺ − xj⁻ for xj everywhere in the LP, the problem is then expressed in the (n + 1) non-negative variables x1, x2, ..., x_{j−1}, xj⁺, xj⁻, x_{j+1}, ..., xn.

Further, if, in the canonical form of an LP, we have bi ≥ 0 (i = 1, 2, ..., m), then we have what we shall call a feasible canonical form.

The standard form of an LP

  Max (or Min) x0 = Σ_j cj xj
  subject to  Σ_j aij xj = bi (bi ≥ 0),  i = 1, 2, ..., m
              xj ≥ 0,  j = 1, 2, ..., n

Characteristics:
1. All decision variables ≥ 0.
2. All constraints are equations.
3. The rhs element (bi) of each constraint equation is ≥ 0.
4. The objective function is of the max or min type.

Note that constraints of the inequality type can be changed to equations by the use of slack variables or surplus variables:
(a) Σ_j aij xj ≤ bi can be expressed as Σ_j aij xj + si = bi, where si ≥ 0 is a slack variable.
(b) Σ_j aij xj ≥ bi can be expressed as Σ_j aij xj − ti = bi, where ti ≥ 0 is a surplus variable.

Exercise: Verify that an LP in standard form can be put into its canonical form and vice versa.

A useful way of presenting the information of the standard form in preparation for solution is the LP tableau.

Example (LP tableau for feasible canonical form)
Max {x0 = cᵀx | Ax ≤ b (b ≥ 0), x ≥ 0}, where x ∈ ℝⁿ, c ∈ ℝⁿ, b ∈ ℝᵐ, A ∈ ℝᵐˣⁿ. Putting into standard form yields

  Max x0 = Σ_j cj xj
  subject to  Σ_j aij xj + si = bi (bi ≥ 0),  i = 1, 2, ..., m
              xj ≥ 0, j = 1, 2, ..., n;  si ≥ 0, i = 1, 2, ..., m

[Max {x0 = cᵀx | Ax + s = b (b ≥ 0), x ≥ 0, s ≥ 0}, where s ∈ ℝᵐ.] This can then be presented as an LP tableau:

   x0 | x1  x2  ... xn  | s1  s2  ... sm |  b
  ----+-----------------+----------------+----
    0 | a11 a12 ... a1n |  1   0  ...  0 | b1
    0 | a21 a22 ... a2n |  0   1  ...  0 | b2    } constraint equations
  ... |                 |                |
    0 | am1 am2 ... amn |  0   0  ...  1 | bm
    1 | −c1 −c2 ... −cn |  0   0  ...  0 |  0    } objective function equation

The x0 column carries the objective function value, the next two blocks the decision variables and the slack variables, and the last column the rhs constants. The objective function equation is obtained by considering x0 + Σ_j (−cj) xj = 0. The LP tableau assumes that all xj and si are ≥ 0. The tableau has (m + 1) equations (rows) and, not counting the x0 column, (m + n + 1) variables (columns).

Example (A Simple Graphical Example)

  Max x0 = x1 + x2            (0)
  subject to  2x1 + x2 ≤ 4    (1)
              x1 + 2x2 ≤ 6    (2)
              x1, x2 ≥ 0

(0)  For various values of x0, x1 + x2 = x0 is a family of lines with slope dx2/dx1 = −1. The optimal solution is x* = (x1*, x2*) = (2/3, 8/3) and x0* = 10/3.
(0′) If the objective function is of the form (1/4)x1 + x2, then for various values of x0, (1/4)x1 + x2 = x0 is a family of lines with slope −1/4. The optimal solution is x* = (0, 3) and x0* = 3.
(0″) If the objective function is of the form x1 + (1/4)x2, then for various values of x0, x1 + (1/4)x2 = x0 is a family of lines with slope −4. The optimal solution is x* = (2, 0) and x0* = 2.

Intuitively, it is clear that the optimal solution is at a corner point (i.e. a vertex of the solution space).

Exercise: Construct the LP tableau for the example above.
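The corner-point intuition can be checked by brute force: intersect the boundary lines pairwise, keep the feasible intersections, and evaluate the objective at each. The following Python sketch does this for the example above (the helper names are ours, not from the notes):

```python
from itertools import combinations

# Each boundary line is stored as (a, b, c), meaning a*x1 + b*x2 = c.
# The four lines: the two constraints and the two coordinate axes.
lines = [(2, 1, 4), (1, 2, 6), (1, 0, 0), (0, 1, 0)]

def intersect(l1, l2):
    """Intersection point of two lines, or None if they are parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    """Check the constraints (1), (2) and non-negativity, with a tolerance."""
    x1, x2 = p
    return (x1 >= -1e-9 and x2 >= -1e-9
            and 2 * x1 + x2 <= 4 + 1e-9 and x1 + 2 * x2 <= 6 + 1e-9)

corners = [p for l1, l2 in combinations(lines, 2)
           if (p := intersect(l1, l2)) is not None and feasible(p)]
best = max(corners, key=lambda p: p[0] + p[1])   # objective x1 + x2
```

With the objective x1 + x2 this recovers the corner (2/3, 8/3); swapping in the other two objectives of the example recovers (0, 3) and (2, 0) respectively.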

Consider the linear system of equalities

  Ax = b,                                                   (1)

where x ∈ ℝⁿ, b ∈ ℝᵐ and A ∈ ℝᵐˣⁿ. Assume A is of full rank m (< n). Suppose that from the n columns of A, we select a subset B of m linearly independent columns. (For notational simplicity, assume these are the last m columns of A.) We rewrite (1), using A = (N, B), as

  (N, B) (x_N, x_B)ᵀ = b.

Here N is the submatrix of A consisting of its first n − m columns, and (x_N, x_B)ᵀ is a partition of x into n − m and m elements, respectively for x_N and x_B, to correspond to the dimensions of N and B. Hence

  N x_N + B x_B = b.                                        (2)

Now if we put x_N = 0, i.e. x = (x_N, x_B)ᵀ = (0, x_B)ᵀ, then (2) becomes B x_B = b, which we can solve uniquely for

  x_B = B⁻¹ b.                                              (3)

Conclusion: x = (0, x_B)ᵀ is a solution to the system of equalities (1) under this particular selection of the basis B of A. (B is, in fact, a basis of the vector space spanned by the columns of A.)

Definition. Let B be any non-singular m × m submatrix (i.e. a basis) of A in (1). Then if all the n − m components of x not associated with the columns of B (i.e. x_N) are set to zero, the solution to the resulting set of equations as given in (3) is said to be a basic solution to (1) wrt the basis B. The components of x associated with the columns of B (i.e. x_B) are called the basic variables, while those associated with N (i.e. x_N) are called non-basic variables.

Definition. If one or more of the basic variables (x_B) in a basic solution x = (0, x_B)ᵀ has value zero, that solution is said to be a degenerate basic solution. Otherwise, it is said to be non-degenerate.

Now consider adding the non-negativity constraints, i.e.

  Ax = b,  x ≥ 0.                                           (4)

Definition. A vector x ∈ ℝⁿ satisfying (4) is said to be a feasible solution for these constraints. A feasible solution to (4) that is also a basic solution is said to be a basic feasible solution (BFS); if this solution is non-degenerate then it is called a non-degenerate basic feasible solution (NBFS), otherwise it is a degenerate basic feasible solution (DBFS). [We shall be mostly concerned with non-degenerate basic feasible solutions. Hence frequently we shall write only BFS for NBFS, and specify degeneracy as the exception.]

Example (Old Example Revisited)

  2x1 + x2 ≤ 4
  x1 + 2x2 ≤ 6
  x1, x2 ≥ 0

Adding slack variables to get it into equalities (hence standard form) gives

  2x1 + x2 + x3 = 4        (5.1)
  x1 + 2x2 + x4 = 6        (5.2)      (5)
  x1, x2, x3, x4 ≥ 0       (5.3)

Here A = [2 1 1 0; 1 2 0 1] and b = (4, 6)ᵀ.

(a) If the basis B is chosen to be the last 2 columns, i.e. B = I, then N = [2 1; 1 2], x_N = (x1, x2)ᵀ and x_B = (x3, x4)ᵀ.

And N x_N + B x_B = b becomes

  [2 1; 1 2] (x1, x2)ᵀ + [1 0; 0 1] (x3, x4)ᵀ = (4, 6)ᵀ.

Putting the non-basic variables x1 = x2 = 0 gives (x3, x4)ᵀ = (4, 6)ᵀ (= b). Hence x = (0, 0, 4, 6)ᵀ is a NBFS (or simply BFS).

(b) If we subtract 2 × (5.1) from (5.2), we get

  2x1 + x2 + x3 = 4
  −3x1 − 2x3 + x4 = −2

Now if we select the current column 2 and column 4 as B (which is an identity matrix), we get x1 = x3 = 0 (non-basic variables) and x2 = 4, x4 = −2 (basic variables). Hence x = (0, 4, 0, −2)ᵀ is a basic solution to (5.1) and (5.2); it is not a feasible solution to (5) (i.e. not a BFS), because x4 < 0 violates (5.3).

(c) If we subtract 1/2 × (5.2) from (5.1), we get

  (3/2)x1 + x3 − (1/2)x4 = 1
  x1 + 2x2 + x4 = 6

Dividing the 2nd equation above by 2 gives

  (3/2)x1 + x3 − (1/2)x4 = 1      (6.1)      (6)
  (1/2)x1 + x2 + (1/2)x4 = 3      (6.2)

Selecting B to be the 3rd and 2nd columns gives

  [3/2 −1/2; 1/2 1/2] (x1, x4)ᵀ + [1 0; 0 1] (x3, x2)ᵀ = (1, 3)ᵀ.

Putting x1 = x4 = 0 (non-basic) gives x3 = 1 and x2 = 3 (basic). Hence x = (0, 3, 1, 0)ᵀ is another NBFS to (5).
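The case analysis above can be carried out mechanically: choose 2 of the 4 columns of A as a basis, solve the resulting 2 × 2 system, and flag whether the basic solution is feasible. A Python sketch with exact `Fraction` arithmetic (Cramer's rule is simply our implementation choice here):

```python
from fractions import Fraction as F
from itertools import combinations

A = [[F(2), F(1), F(1), F(0)],
     [F(1), F(2), F(0), F(1)]]
b = [F(4), F(6)]

def solve2(B, rhs):
    """Solve a 2x2 system B y = rhs by Cramer's rule; None if B is singular."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    if det == 0:
        return None
    return [(rhs[0] * B[1][1] - rhs[1] * B[0][1]) / det,
            (B[0][0] * rhs[1] - B[1][0] * rhs[0]) / det]

basic = {}
for cols in combinations(range(4), 2):       # choose basis columns
    B = [[A[i][j] for j in cols] for i in range(2)]
    sol = solve2(B, b)
    if sol is None:
        continue
    x = [F(0)] * 4
    for j, v in zip(cols, sol):
        x[j] = v
    basic[cols] = (x, all(v >= 0 for v in x))   # (basic solution, feasible?)
```

Cases (a) and (b) above correspond to the keys (2, 3) and (1, 3); of the six basic solutions, four are feasible, matching the four vertices of the feasible region.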

(d) Further, in (c), if we subtract 1/3 × (6.1) from (6.2), we get

  (3/2)x1 + x3 − (1/2)x4 = 1
  x2 − (1/3)x3 + (2/3)x4 = 8/3

Multiplying the 1st equation above by 2/3 gives

  x1 + (2/3)x3 − (1/3)x4 = 2/3
  x2 − (1/3)x3 + (2/3)x4 = 8/3

Selecting the basis to be the 1st two columns yields

  [2/3 −1/3; −1/3 2/3] (x3, x4)ᵀ + [1 0; 0 1] (x1, x2)ᵀ = (2/3, 8/3)ᵀ.

Putting x3 = x4 = 0 gives x1 = 2/3 and x2 = 8/3. Hence x = (2/3, 8/3, 0, 0)ᵀ is another NBFS. (Note that this x is also optimal, hence an optimal NBFS, for the objective function Max x1 + x2.)

Graphically,

  Max x0 = x1 + x2
  subject to  2x1 + x2 ≤ 4
              x1 + 2x2 ≤ 6
              x1, x2 ≥ 0

The Fundamental Theorem of Linear Programming (Source: Luenberger)

In this section, through the fundamental theorem of linear programming, we establish the primary importance of basic feasible solutions in solving linear programming problems. The method of proof of the theorem is in many respects as important as the result itself, since it represents the beginning of the development of the simplex method. The theorem

itself shows that it is necessary only to consider basic feasible solutions when seeking an optimal solution to a linear program, because the optimal value is always achieved at such a solution.

Corresponding to a linear program in standard form

  Min cᵀx  subject to  Ax = b, x ≥ 0,                       (11)

a feasible solution to the constraints that achieves the minimum value of the objective function subject to those constraints is said to be an optimal feasible solution. If this solution is basic, it is an optimal basic feasible solution.

Fundamental theorem of linear programming. Given a linear program in standard form (11) where A is an m × n matrix of rank m:
i) if there is a feasible solution, there is a basic feasible solution;
ii) if there is an optimal feasible solution, there is an optimal basic feasible solution.

Proof of (i). Denote the columns of A by a1, a2, ..., an. Suppose x = (x1, x2, ..., xn) is a feasible solution. Then, in terms of the columns of A, this solution satisfies

  x1 a1 + x2 a2 + ... + xn an = b.

Assume that exactly p of the variables xi are greater than zero, and for convenience, that they are the first p variables. Thus

  x1 a1 + x2 a2 + ... + xp ap = b.                          (12)

There are now two cases, corresponding to whether the set a1, a2, ..., ap is linearly independent or linearly dependent.

Case 1: Assume a1, a2, ..., ap are linearly independent. Then clearly p ≤ m. If p = m, the solution is basic and the proof is complete. If p < m, then, since A has rank m, m − p vectors can be found from the remaining n − p vectors so that the resulting set of m vectors is linearly independent. Assigning the value zero to the corresponding m − p variables yields a (degenerate) basic feasible solution.

Case 2: Assume a1, a2, ..., ap are linearly dependent. Then there is a non-trivial linear combination of these vectors that is zero. Thus there are constants y1, y2, ..., yp, at least one of which can be assumed to be positive, such that

  y1 a1 + y2 a2 + ... + yp ap = 0.                          (13)

Multiplying this equation by a scalar ε and subtracting it from (12), we obtain

  (x1 − εy1)a1 + (x2 − εy2)a2 + ... + (xp − εyp)ap = b.     (14)

This equation holds for every ε, and for each ε the components xi − εyi correspond to a solution of the linear equations, although they may violate xi − εyi ≥ 0. Denoting y = (y1, y2, ..., yp, 0, 0, ..., 0), we see that for any ε

  x − εy                                                    (15)

is a solution to the equalities. For ε = 0, this reduces to the original feasible solution. As ε is increased from zero, the various components increase, decrease, or remain constant, depending upon whether the corresponding yi is negative, positive, or zero. Since we assume at least one yi is positive, at least one component will decrease as ε is increased. Increase ε to the first point where one or more components become zero. Specifically, set

  ε = min{xi / yi : yi > 0}.

For this value of ε the solution given by (15) is feasible and has at most p − 1 positive variables. Repeating this process if necessary, we can eliminate positive variables until we have a feasible solution with corresponding columns that are linearly independent, at which point Case 1 applies.

Proof of (ii). Let x = (x1, x2, ..., xn) be an optimal feasible solution and, as in the proof of (i) above, suppose there are exactly p positive variables x1, x2, ..., xp. Again there are two cases; and Case 1, corresponding to linear independence, is exactly the same as before. Case 2 also goes exactly the same as before, but it must be shown that for any ε the solution (15) is optimal. To show this, note that the value of the solution x − εy is

  cᵀx − ε cᵀy.                                              (16)

For ε sufficiently small in magnitude, x − εy is a feasible solution for positive or negative values of ε. Thus we conclude that cᵀy = 0. For, if cᵀy ≠ 0, an ε of small magnitude and proper sign could be determined so as to render (16) smaller than cᵀx while maintaining feasibility. This would violate the assumption of optimality of x, and hence we must have cᵀy = 0. Having established that the new feasible solution with fewer positive components is also optimal, the remainder of the proof may be completed exactly as in part (i).

This theorem reduces the task of solving a linear programming problem to that of searching over basic feasible solutions. For a problem having n variables and m constraints, the number of basic solutions is at most

  C(n, m) = n! / (m! (n − m)!).

Relation to convexity

Definition.
(1) A set C in ℝⁿ is said to be convex if, for all x1, x2 ∈ C and 0 ≤ λ ≤ 1, the point λx1 + (1 − λ)x2 ∈ C.
(2) A point x ∈ C is said to be an extreme point (vertex, corner point) of C if there are no two distinct points x1, x2 ∈ C such that x = λx1 + (1 − λ)x2 for some 0 < λ < 1.

Theorem. The set of all feasible solutions to an LP problem is a convex set.
Proof. Suppose x1 and x2 are two feasible solutions. Then

  Ax1 = b, x1 ≥ 0   and   Ax2 = b, x2 ≥ 0.

For 0 ≤ λ ≤ 1, let x = λx1 + (1 − λ)x2 be any convex combination of x1 and x2. Then
(i) x ≥ 0, since λx1 ≥ 0 and (1 − λ)x2 ≥ 0, and
(ii) Ax = A[λx1 + (1 − λ)x2] = λAx1 + (1 − λ)Ax2 = λb + (1 − λ)b = b.

Theorem. Let A be an m × n matrix and b an m-vector. Let K be the convex polytope consisting of all n-vectors satisfying

  Ax = b,  x ≥ 0.                                           (1)

A vector x is an extreme point of K iff x is a basic feasible solution to (1). (NB Def: A convex polytope is the intersection of a finite no. of closed half spaces.)

Proof. Assume x = (x1, x2, ..., xm, 0, 0, ..., 0)ᵀ is a BFS to (1). Then

  x1 a1 + x2 a2 + ... + xm am = b,

where ai is the i-th column of A, i = 1, 2, ..., m, and {ai} are independent.
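The bound C(n, m) on the number of bases (and hence, by Corollary 2, on the number of extreme points) is easy to evaluate with Python's standard library:

```python
from math import comb

# C(n, m) bounds the number of basic solutions, hence of extreme points.
print(comb(4, 2))   # 6 possible bases for the revisited example (n = 4, m = 2)
print(comb(5, 2))   # 10, the bound quoted for a 5-variable, 3-constraint system
```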

Suppose x could be expressed as a convex combination of two distinct points y, z ∈ K, say x = λy + (1 − λ)z, for some 0 < λ < 1. Since x ≥ 0, y ≥ 0, z ≥ 0 and 0 < λ < 1, we have yj = zj = 0 for j = m + 1, m + 2, ..., n. Hence

  y1 a1 + y2 a2 + ... + ym am = b
  z1 a1 + z2 a2 + ... + zm am = b

⇒ (y1 − z1)a1 + (y2 − z2)a2 + ... + (ym − zm)am = 0.

Hence {ai} independent ⇒ zj = yj = xj ∀j ⇒ x is an extreme point of K.

Conversely, assume x is an extreme point of K. Let's assume that the non-zero components of x are the first k components. Then

  x1 a1 + x2 a2 + ... + xk ak = b,  with xi > 0 (i = 1, 2, ..., k).

In order for x to be basic, we must have {ai} independent (hence also k ≤ m). Now suppose {ai} is dependent. Then there is a non-trivial linear combination of {ai} such that

  y1 a1 + y2 a2 + ... + yk ak = 0.

Define an n-vector y = (y1, y2, ..., yk, 0, 0, ..., 0)ᵀ. Since xi > 0, i = 1, 2, ..., k, it is possible to select some ε > 0 such that x + εy ≥ 0 and x − εy ≥ 0. Also A(x + εy) = b and A(x − εy) = b. We then have

  x = (1/2)(x + εy) + (1/2)(x − εy),

which expresses x as a convex combination of two distinct points in K ⇒ x is not an extreme point (!). Hence {ai} are linearly independent ⇒ x is a BFS.

Corollary 1. If there is a finite optimal solution to an LP problem, there is a finite optimal solution which is an extreme point of the constraint set.
Proof. Finite optimal solution ⇒ finite optimal BFS ⇒ extreme point (optimal).

Corollary 2. The constraint set (i.e. the convex polytope K) possesses at most a finite no. of extreme points.
Proof. There are at most C(n, m) BFS, each of which corresponds to an extreme point of K.

Proposition. A linear objective function cᵀx achieves its optimum over a convex polyhedron (a bounded convex polytope) K at an extreme point of K.

Proof. Let x1, x2, ..., xk be the extreme points of K. Then any point x ∈ K can be expressed in the form

  x = λ1 x1 + λ2 x2 + ... + λk xk,  where λi ≥ 0 (i = 1, 2, ..., k) and λ1 + λ2 + ... + λk = 1.   (*)

Then cᵀx = λ1 cᵀx1 + λ2 cᵀx2 + ... + λk cᵀxk. Let x0* = Max_{i=1,2,...,k} cᵀxi; then from (*)

  cᵀx ≤ (λ1 + λ2 + ... + λk) x0* = x0*.

Hence the optimum of cᵀx over K is equal to x0*, achieved at some extreme point of K.

Example. Consider the constraint set in ℝ² defined as

  x1 + (8/3)x2 ≤ 4     (1)
  x1 + x2 ≤ 2          (2)
  2x1 ≤ 3              (3)
  x1, x2 ≥ 0           (4)

Adding slack variables x3, x4 and x5 to convert it into standard form gives

  x1 + (8/3)x2 + x3 = 4      (1)
  x1 + x2 + x4 = 2           (2)
  2x1 + x5 = 3               (3)
  x1, x2, x3, x4, x5 ≥ 0     (4)

A basic solution ∈ {a, b, c, d, e} is obtained by setting any 2 variables of x1, x2, x3, x4, x5 to zero and solving for the remaining three. For example, extreme point a:
(i) Set x1 = 0, x3 = 0 (2 binding constraints).
(ii) Solve
      (8/3)x2 = 4
      x2 + x4 = 2
      x5 = 3
giving (0, 3/2, 0, 1/2, 3), which corresponds to extreme point a of the convex polyhedron K defined by (1), (2), (3), (4).

  extreme point |  a    b    c    d    e
  set to zero   | x1   x3   x4   x2   x1
                | x3   x4   x5   x5   x2

Note: There is a maximum total of C(5, 3) = C(5, 2) = 10, and here we have 9.

Simplex Method (Adjacent Extreme Point Method) for an LP in feasible canonical form

The idea of the Simplex method is to proceed from one BFS (i.e. extreme point) of the feasible region of an LP problem, expressed in tableau form, to another BFS, in such a way as to continually increase (or decrease) the value of the objective function until optimality is reached. The simplex method moves from one extreme point to a neighbouring extreme point. For the following LP in feasible canonical form (i.e. its rhs vector b ≥ 0):

  Max {x0 = cᵀx | Ax ≤ b (b ≥ 0), x ≥ 0}

its LP tableau is

        | x1  x2  ... xs  ... xn  | s1  s2  ... sr  ... sm |  b
  ------+-------------------------+------------------------+----
   s1   | a11 a12 ... a1s ... a1n |  1   0  ...  0  ...  0 | b1
   s2   | a21 a22 ... a2s ... a2n |  0   1  ...  0  ...  0 | b2
   ...  |                         |                        |
   sr   | ar1 ar2 ... ars ... arn |  0   0  ...  1  ...  0 | br
   ...  |                         |                        |
   sm   | am1 am2 ... ams ... amn |  0   0  ...  0  ...  1 | bm
   x0   | −c1 −c2 ... −cs ... −cn |  0   0  ...  0  ...  0 |  0

Since all bi ≥ 0, we can read off directly from the tableau a starting BFS (0, 0, ..., 0, b1, b2, ..., bm)ᵀ. Note that this corresponds to the origin of the n-dimensional subspace (the solution space) of ℝⁿ (i.e. all structural variables xj are set to zero). The set B of basic variables is {s1, s2, ..., sr, ..., sm}, and we say that each variable sr ∈ B is in the basis B. The set N of non-basic variables is {x1, x2, ..., xs, ..., xn}, and we say that any xs ∈ N is not in the basis B.

Consider now replacing sr ∈ B by xs ∈ N. We say that sr is to leave the basis and xs is to enter the basis. Consequently, after this operation, sr becomes non-basic (∈ N) and xs becomes basic (∈ B). This of course amounts to a different basis B (a different selection of columns of the matrix A). We shall achieve this change of basis by a pivot operation (or simply called a pivot). This pivot operation is designed to maintain an identity matrix as the basis in the tableau at all times.

Pivot Operation (wrt element ars > 0)

Definition.
(a) ars > 0 is called the pivot element.
(b) Row r is called the pivot row.
(c) Column s is called the pivot column.

Rules.
(a) In the pivot row, arj ← arj / ars ∀j.
(b) In the pivot column, ars ← 1 and ais ← 0 for i ≠ r.
(c) For all other elements, aij ← aij − arj ais / ars.

Graphically, for a row i ≠ r and a column j ≠ s,

  [ aij  ais ]            [ aij − arj ais/ars   0 ]
  [ arj  ars ]  becomes   [ arj/ars             1 ]

Or, simply,

  [ a  b ]            [ a − bc/d  0 ]
  [ c  d ]  becomes   [ c/d       1 ]

Exercise: Verify that this pivot operation is simply Gaussian elimination such that variable xs is eliminated from all but the r-th of the m + 1 equations, and in the r-th equation the coefficient of xs is equal to 1.

Example (Pivot operation and feasibility)

  x1 + x2 − x3 + x4 = 5
  2x1 − 3x2 + x3 + x5 = 3
  −x1 + 2x2 − x3 + x6 = 1

      | x1  x2  x3  x4  x5  x6 |  b
  x4  |  1   1  −1   1   0   0 |  5
  x5  |  2  −3   1   0   1   0 |  3
  x6  | −1   2  −1   0   0   1 |  1

Basic solution is (0, 0, 0, 5, 3, 1)ᵀ: feasible.

      | x1  x2  x3  x4  x5  x6 |  b
  x1  |  1   1  −1   1   0   0 |  5
  x5  |  0  −5   3  −2   1   0 | −7
  x6  |  0   3  −2   1   0   1 |  6

Basic solution is (5, 0, 0, 0, −7, 6)ᵀ: infeasible.

      | x1  x2   x3    x4    x5   x6 |   b
  x1  |  1   0  −2/5   3/5   1/5   0 | 18/5
  x2  |  0   1  −3/5   2/5  −1/5   0 |  7/5
  x6  |  0   0  −1/5  −1/5   3/5   1 |  9/5

Basic solution is (18/5, 7/5, 0, 0, 0, 9/5)ᵀ: feasible.
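Rules (a)-(c) translate directly into a few lines of Python with exact `Fraction` arithmetic (`pivot` is our own helper name; the signs of the example system follow our reading of the equations above). The first pivot on element (1, 1) reproduces the second tableau:

```python
from fractions import Fraction as F

def pivot(T, r, s):
    """Return a new tableau obtained by pivoting on element T[r][s]."""
    p = T[r][s]
    new = [row[:] for row in T]
    new[r] = [a / p for a in T[r]]                       # rule (a)
    for i in range(len(T)):
        if i != r:                                       # rules (b) and (c)
            new[i] = [T[i][j] - T[r][j] * T[i][s] / p for j in range(len(T[0]))]
    return new

# columns x1..x6, b; rows for the basic variables x4, x5, x6
T0 = [[F(1), F(1), F(-1), F(1), F(0), F(0), F(5)],
      [F(2), F(-3), F(1), F(0), F(1), F(0), F(3)],
      [F(-1), F(2), F(-1), F(0), F(0), F(1), F(1)]]
T1 = pivot(T0, 0, 0)     # bring x1 into the basis in place of x4
```

After the pivot, the rhs column reads (5, −7, 6), i.e. the infeasible basic solution of the second tableau.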

      | x1  x2  x3  x4  x5  x6 |  b
  x1  |  1   0   0   1  −1  −2 |  0
  x2  |  0   1   0   1  −2  −3 | −4
  x3  |  0   0   1   1  −3  −5 | −9

Basic solution is (0, −4, −9, 0, 0, 0)ᵀ: infeasible.

Exercise: Let yj denote the current tableau column under variable xj. For each of the four tableaus above, calculate the matrix product B[y4, y5, y6], where B is the matrix of the original columns of A corresponding to the current basic variables. Can you explain the results and generalize?

Pivoting Criterion (Feasibility Condition). For a given selection of pivot column (say, with entering variable xs), the pivot row (i.e. the leaving basic variable, say xr) must be selected as the basic variable corresponding to the smallest positive ratio of the values of the current rhs to the current (positive) constraint coefficients of the entering non-basic variable xs. Graphically,

   xs  |  b   | ratio
  y1s  | y10  | y10/y1s
  y2s  | y20  | y20/y2s       To determine row r:
  ...  | ...  | ...             yr0/yrs = Min_i { yi0/yis : yis > 0 }
  yis  | yi0  | yi0/yis
  ...  | ...  | ...
  yms  | ym0  | ym0/yms

To see why this works, note that the tableau is

  xi + Σ_{j∈N} yij xj = yi0 (≥ 0),  i = 1, 2, ..., m,

or

  xi = yi0 − Σ_{j∈N} yij xj ≥ 0,  i = 1, 2, ..., m.

To increase the value of a non-basic variable xs from zero to positive and maintain feasibility needs yis xs ≤ yi0 (i = 1, 2, ..., m), i.e. xs ≤ yi0/yis for those i with yis > 0. Hence we should select row r such that

  xs = yr0/yrs = Min_i { yi0/yis : yis > 0 }.

Following this pivoting criterion, we obtain a new BFS with xr replaced by xs as a basic variable. That is, xs is increased from zero to xs = yr0/yrs, while the new

  xr = yr0 − Σ_{j∈N} yrj xj = yr0 − yrs xs = 0.

Exercise: Verify that pivoting means replacing the column ar (of the original matrix A) that is in B by the column as (of A) that is currently not in B. Hence pivoting is also called a change of basis.

Optimality Condition (for a max program). Consider the objective function row (i.e. the x0-equation) in terms of the non-basic variables xj ∈ N in the tableau,

  x0 = y00 − Σ_{j∈N} y0j xj,

where y00 is the current objective function value associated with the current BFS in the tableau (the (m + 1, n + 1)-th entry). The entering variable xs ∈ N can be selected as a non-basic variable having a negative coefficient y0s (such as the first negative y0s or the most negative y0s). If all coefficients y0j are non-negative, the objective function cannot be increased by making any non-basic variable positive (i.e. basic); hence an optimal solution has been reached.

Summary of Computation Procedure (for feasible canonical form LP)

Once the initial tableau has been constructed, the Simplex procedure calls for the successive iteration of the following steps:
1. Test the coefficients of the objective function row to determine whether an optimal solution has been reached, i.e. whether the optimality condition that all coefficients in that row are non-negative is satisfied.
2. If not, select a (currently non-basic) variable xs to enter the basis (e.g. the 1st negative coefficient or the most negative).
3. Then determine the (currently basic) variable xr to leave the basis using the feasibility condition, i.e. select xr where yr0/yrs = Min_i { yi0/yis : yis > 0 }.
4. Perform a pivot operation with pivot row corresponding to xr and pivot column corresponding to xs. Return to 1.

Exercise: In step 3, if all yis ≤ 0, verify that the LP has an unbounded objective function value, i.e. x0 can tend to ∞.
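The four-step procedure can be sketched as one compact routine. The following is our own minimal implementation (exact `Fraction` arithmetic, most-negative-coefficient rule in step 2), not a robust production solver:

```python
from fractions import Fraction as F

def simplex_max(c, A, b):
    """Simplex for Max c.x s.t. Ax <= b (b >= 0), x >= 0.
    Returns (optimal value, x) or None if unbounded."""
    m, n = len(A), len(c)
    # tableau rows [A | I | b]; last row is the objective equation [-c | 0 | 0]
    T = [[F(A[i][j]) for j in range(n)]
         + [F(1) if k == i else F(0) for k in range(m)] + [F(b[i])]
         for i in range(m)]
    T.append([F(-cj) for cj in c] + [F(0)] * m + [F(0)])
    basis = list(range(n, n + m))                    # slacks start in the basis
    while True:
        s = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][s] >= 0:                            # step 1: optimality reached
            x = [F(0)] * (n + m)
            for i, bi in enumerate(basis):
                x[bi] = T[i][-1]
            return T[-1][-1], x[:n]
        rows = [i for i in range(m) if T[i][s] > 0]  # step 3: ratio test
        if not rows:
            return None                              # unbounded objective
        r = min(rows, key=lambda i: T[i][-1] / T[i][s])
        p = T[r][s]                                  # step 4: pivot on (r, s)
        T[r] = [a / p for a in T[r]]
        for i in range(m + 1):
            if i != r:
                f = T[i][s]
                T[i] = [T[i][j] - f * T[r][j] for j in range(n + m + 1)]
        basis[r] = s

val, x = simplex_max([3, 1, 3], [[2, 1, 1], [1, 2, 3], [2, 2, 1]], [2, 5, 6])
# val == Fraction(27, 5), x == [1/5, 0, 8/5]
```

Run on the worked example of the next page, it reaches the same optimum x0 = 27/5 at x = (1/5, 0, 8/5), though possibly via a different pivot sequence.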

Example (Simplex Method for feasible canonical form)

  Max x0 = 3x1 + x2 + 3x3
  subject to  2x1 + x2 + x3 ≤ 2
              x1 + 2x2 + 3x3 ≤ 5
              2x1 + 2x2 + x3 ≤ 6
              x1, x2, x3 ≥ 0

(Initial Tableau)
      | x1  x2  x3  x4  x5  x6 | b | ratio
  x4  |  2   1   1   1   0   0 | 2 | 2/1 = 2
  x5  |  1   2   3   0   1   0 | 5 | 5/2 = 2.5
  x6  |  2   2   1   0   0   1 | 6 | 6/2 = 3
      | −3  −1  −3   0   0   0 | 0 |

Current BFS x = (0, 0, 0, 2, 5, 6)ᵀ, x0 = 0.

      | x1  x2  x3  x4  x5  x6 | b | ratio
  x2  |  2   1   1   1   0   0 | 2 | 2/1 = 2
  x5  | −3   0   1  −2   1   0 | 1 | 1/1 = 1
  x6  | −2   0  −1  −2   0   1 | 2 |
      | −1   0  −2   1   0   0 | 2 |

Current BFS x = (0, 2, 0, 0, 1, 2)ᵀ, x0 = 2.

      | x1  x2  x3  x4  x5  x6 | b | ratio
  x2  |  5   1   0   3  −1   0 | 1 | 1/5
  x3  | −3   0   1  −2   1   0 | 1 |
  x6  | −5   0   0  −4   1   1 | 3 |
      | −7   0   0  −3   2   0 | 4 |

Current BFS x = (0, 1, 1, 0, 0, 3)ᵀ, x0 = 4.

(Optimal Tableau)
      | x1  x2   x3   x4    x5   x6 |  b
  x1  |  1  1/5   0   3/5  −1/5   0 | 1/5
  x3  |  0  3/5   1  −1/5   2/5   0 | 8/5
  x6  |  0   1    0  −1     0     1 |  4
      |  0  7/5   0   6/5   3/5   0 | 27/5

Optimal BFS x = (1/5, 0, 8/5, 0, 0, 4)ᵀ, x0 = 27/5.

Extreme point sequence: {x4, x5, x6} → {x2, x5, x6} → {x2, x3, x6} → {x1, x3, x6}.

Exercise: Apply the Simplex Method again, but using the first negative coefficient rule to select a pivot column.

Simplex Method for an LP in Standard Form (Artificial Variables Techniques)

Consider an LP in standard form: Max {x0 = cᵀx | Ax = b (b ≥ 0), x ≥ 0}. There is no obvious initial starting basis B such that B = I_m. For notational simplicity, assume that we pick B as the last m (linearly independent) columns of A. We then have, for the augmented system:

  N x_N + B x_B = b
  x0 − c_Nᵀ x_N − c_Bᵀ x_B = 0

Multiplying the constraints by B⁻¹ yields

  B⁻¹N x_N + x_B = B⁻¹b   (or x_B = B⁻¹b − B⁻¹N x_N)
  x0 − c_Nᵀ x_N − c_Bᵀ(B⁻¹b − B⁻¹N x_N) = 0

i.e.

  B⁻¹N x_N + x_B = B⁻¹b
  x0 − (c_Nᵀ − c_Bᵀ B⁻¹N) x_N = c_Bᵀ B⁻¹b.

Denoting z_Nᵀ ≡ c_Bᵀ B⁻¹N (an (n − m) row vector) gives

  B⁻¹N x_N + x_B = B⁻¹b
  x0 − (c_Nᵀ − z_Nᵀ) x_N = c_Bᵀ B⁻¹b,

which is called the general representation of an LP in standard form wrt the basis B. Its simplex tableau is then

       |      x_N        | x_B |      b
  x_B  |  B⁻¹N           |  I  |  B⁻¹b
  x0   | −(c_Nᵀ − z_Nᵀ)  |  0  |  c_Bᵀ B⁻¹b

Definition. The coefficients rj ≡ cj − zj (where z_Nᵀ = (zj)ᵀ = c_Bᵀ B⁻¹N) are called the reduced cost coefficients wrt the basis B.

Remark.
(a) The current BFS is optimal when rj = cj − zj ≤ 0 ∀j for a Max program.
(b) The current BFS is optimal when rj = cj − zj ≥ 0 ∀j for a Min program.
This is because

  x0 = c_Bᵀ B⁻¹b + Σ_{j∈N} (cj − zj) xj = c_Bᵀ B⁻¹b + Σ_{j∈N} rj xj.
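The reduced cost rj = cj − c_Bᵀ B⁻¹ aj can be computed without forming B⁻¹ explicitly, by solving B y = aj first. A small Python sketch (the helper names `solve` and `reduced_cost` are ours); it is checked below against the optimal basis {x1, x3, x6} of the canonical-form example:

```python
from fractions import Fraction as F

def solve(B, rhs):
    """Gauss-Jordan elimination with Fractions (B square, non-singular)."""
    n = len(B)
    M = [row[:] + [rhs[i]] for i, row in enumerate(B)]
    for col in range(n):
        piv = next(i for i in range(col, n) if M[i][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [a / M[col][col] for a in M[col]]
        for i in range(n):
            if i != col and M[i][col] != 0:
                f = M[i][col]
                M[i] = [M[i][j] - f * M[col][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

def reduced_cost(c, A, basis, j):
    """r_j = c_j - c_B^T B^{-1} a_j, via y = B^{-1} a_j."""
    m = len(A)
    B = [[F(A[i][k]) for k in basis] for i in range(m)]
    y = solve(B, [F(A[i][j]) for i in range(m)])
    return F(c[j]) - sum(F(c[k]) * yk for k, yk in zip(basis, y))

# Standard form of the earlier example (columns x1..x3 and slacks x4..x6):
A = [[2, 1, 1, 1, 0, 0],
     [1, 2, 3, 0, 1, 0],
     [2, 2, 1, 0, 0, 1]]
c = [3, 1, 3, 0, 0, 0]
print(reduced_cost(c, A, [0, 2, 5], 1))   # -7/5, as in the optimal tableau row
```

All non-basic reduced costs at that basis are negative (−7/5, −6/5, −3/5), confirming optimality for the max program by Remark (a).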

Example (The Big-M method) [Ref: Taha, Chapter 3]

  Max x0 = x1 + x2
  subject to  2x1 + x2 ≥ 4
              x1 + 2x2 = 6
              x1, x2 ≥ 0

Putting into standard form, the augmented system is:

  2x1 + x2 − x3 = 4
  x1 + 2x2 = 6
  x0 − x1 − x2 = 0

Introducing artificial variables x4 and x5 yields

  2x1 + x2 − x3 + x4 = 4
  x1 + 2x2 + x5 = 6
  x0 − x1 − x2 + Mx4 + Mx5 = 0

Calculating the reduced cost coefficients rj = cj − zj with c_B = (−M, −M)ᵀ gives

  r1 = c1 − (−M, −M)a1 = 1 + 3M ;  r2 = c2 − (−M, −M)a2 = 1 + 3M
  r3 = c3 − (−M, −M)a3 = −M ;      r4 = r5 = 0

Objective function value = c_Bᵀ B⁻¹b = (−M, −M)b = −10M.

      |   x1       x2       x3   x4  x5 |   b
  x4  |    2        1       −1    1   0 |   4
  x5  |    1        2        0    0   1 |   6
  x0  | −(1+3M)  −(1+3M)     M    0   0 | −10M

(Note: An artificial variable can be dropped from consideration once it becomes non-basic.)

      | x1     x2         x3      x5 |    b
  x1  |  1     1/2       −1/2      0 |    2
  x5  |  0     3/2        1/2      1 |    4
  x0  |  0  −(1+3M)/2  −(1+M)/2    0 | 2 − 4M

      | x1  x2   x3  |  b
  x1  |  1   0  −2/3 | 2/3
  x2  |  0   1   1/3 | 8/3
  x0  |  0   0  −1/3 | 10/3

At this point all artificial variables have been dropped from the problem, and x = (2/3, 8/3, 0)ᵀ is an initial BFS.

      | x1  x2  x3 | b
  x1  |  1   2   0 | 6
  x3  |  0   3   1 | 8
  x0  |  0   1   0 | 6

Optimal solution x = (6, 0, 8)ᵀ, with x0 = 6.
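Keeping M symbolic is awkward in code, but an exact, sufficiently large `Fraction` reproduces the same starting reduced costs. A sketch (the choice M = 10⁶ is ours, and the dictionaries are our own bookkeeping; since the starting basis (x4, x5) is the identity, zj = c_Bᵀ aj directly):

```python
from fractions import Fraction as F

M = F(10**6)                        # any sufficiently large penalty
c = {1: 1, 2: 1, 3: 0, 4: -M, 5: -M}
a = {1: (2, 1), 2: (1, 2), 3: (-1, 0), 4: (1, 0), 5: (0, 1)}
cB = (-M, -M)                       # costs of the basic artificials x4, x5
r = {j: c[j] - (cB[0] * a[j][0] + cB[1] * a[j][1]) for j in c}
print(r[1], r[3])                   # 3000001 -1000000, i.e. 1 + 3M and -M
```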

Example (The Two-Phase method) [cf. Example of the Big-M method]

  Max x0 = x1 + x2
  subject to  2x1 + x2 ≥ 4
              x1 + 2x2 = 6
              x1, x2 ≥ 0

Putting into standard form, the augmented system is:

  2x1 + x2 − x3 = 4
  x1 + 2x2 = 6
  x0 − x1 − x2 = 0

Introducing artificial variables x4 and x5 yields the (min-program) Artificial Problem:

  2x1 + x2 − x3 + x4 = 4
  x1 + 2x2 + x5 = 6
  Min w = x4 + x5

Calculating the reduced cost coefficients rj = cj − zj with c_B = (1, 1)ᵀ gives

  r1 = 0 − (1, 1)a1 = −3 ;  r2 = 0 − (1, 1)a2 = −3
  r3 = 0 − (1, 1)a3 = 1 ;   r4 = r5 = 0

Objective function value = c_Bᵀ B⁻¹b = (1, 1)b = 10.

      | x1  x2  x3  x4  x5 |  b
  x4  |  2   1  −1   1   0 |  4
  x5  |  1   2   0   0   1 |  6
  w   | −3  −3   1   0   0 | 10

(Note: An artificial variable can be dropped from consideration once it becomes non-basic.)

      | x1   x2    x3   x5 |  b
  x1  |  1   1/2  −1/2   0 |  2
  x5  |  0   3/2   1/2   1 |  4
  w   |  0  −3/2  −1/2   0 |  4

      | x1  x2   x3  |  b
  x1  |  1   0  −2/3 | 2/3
  x2  |  0   1   1/3 | 8/3
  w   |  0   0    0  |  0

Phase I computation completes with objective function value 0 and x = (2/3, 8/3, 0)ᵀ an initial BFS. Phase II begins with calculating reduced cost coefficients to restore the original objective function, followed by pivot operation(s) to optimality.

      | x1  x2   x3  |  b            | x1  x2  x3 | b
  x1  |  1   0  −2/3 | 2/3       x1 |  1   2   0 | 6
  x2  |  0   1   1/3 | 8/3   →   x3 |  0   3   1 | 8
  x0  |  0   0  −1/3 | 10/3      x0 |  0   1   0 | 6

Phase II computation is complete, giving optimal solution x = (6, 0, 8)ᵀ, with x0 = 6.
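Both phases can be driven by one tableau routine. The following Python sketch (`tableau_simplex` is our own name, and this is a minimal sketch rather than a robust solver) replays the worked example: phase I maximizes −x4 − x5 from the artificial basis, phase II restores Max x1 + x2 on the original columns.

```python
from fractions import Fraction as F

def tableau_simplex(A, b, c, basis):
    """Max-program simplex on Ax = b, x >= 0, from a given starting basis.
    Mutates `basis`; returns the optimal x, or None if unbounded."""
    m, n = len(A), len(A[0])
    T = [[F(v) for v in row] + [F(b[i])] for i, row in enumerate(A)]
    for i, bi in enumerate(basis):           # put basis columns in identity form
        T[i] = [v / T[i][bi] for v in T[i]]
        for k in range(m):
            if k != i and T[k][bi] != 0:
                f = T[k][bi]
                T[k] = [T[k][j] - f * T[i][j] for j in range(n + 1)]
    while True:
        r = [F(c[j]) - sum(F(c[basis[i]]) * T[i][j] for i in range(m))
             for j in range(n)]              # reduced costs r_j = c_j - z_j
        s = max(range(n), key=lambda j: r[j])
        if r[s] <= 0:                        # optimal: all r_j <= 0 (max program)
            x = [F(0)] * n
            for i, bi in enumerate(basis):
                x[bi] = T[i][-1]
            return x
        rows = [i for i in range(m) if T[i][s] > 0]
        if not rows:
            return None                      # unbounded
        q = min(rows, key=lambda i: T[i][-1] / T[i][s])
        T[q] = [v / T[q][s] for v in T[q]]   # pivot on (q, s)
        for k in range(m):
            if k != q and T[k][s] != 0:
                f = T[k][s]
                T[k] = [T[k][j] - f * T[q][j] for j in range(n + 1)]
        basis[q] = s

# Phase I: Min x4 + x5, i.e. Max -x4 - x5, from the artificial basis {x4, x5}.
A = [[2, 1, -1, 1, 0],
     [1, 2,  0, 0, 1]]
b = [4, 6]
basis = [3, 4]
xI = tableau_simplex(A, b, [0, 0, 0, -1, -1], basis)
assert xI[3] == 0 and xI[4] == 0        # artificials driven to zero: BFS found

# Phase II: restore Max x1 + x2, keeping only the original columns x1..x3.
xII = tableau_simplex([row[:3] for row in A], b, [1, 1, 0], basis)
print(xII)    # [Fraction(6, 1), Fraction(0, 1), Fraction(8, 1)]
```

Phase I ends at the BFS (2/3, 8/3, 0) and phase II at the optimum (6, 0, 8), exactly as in the tableaus above.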

The Two-Phase Method

Phase I (Search for a starting BFS): Introduce artificial variables to give a starting basis as an identity matrix. Replace the original objective function by the sum of all artificial variables thus introduced. The Simplex tableau of this derived (artificial) problem is then put into canonical form (by calculating reduced cost coefficients). Apply the Simplex procedure to obtain a minimum optimal solution. The minimum objective function value can be either
(a) zero (i.e. all artificial variables equal zero), implying a BFS for the original problem; or
(b) positive (i.e. at least one artificial variable basic and positive), implying that no feasible solutions exist for the original problem.
In case (a), proceed to Phase II. In case (b), stop.

Phase II (Conclude with an optimal BFS): Use the solution obtained at the end of Phase I as a starting BFS while restoring the original objective function. Again, this Simplex tableau is put into canonical form. Apply the Simplex procedure to obtain an optimal solution.

A Complete Example (using the Two-Phase Method)

  Min 2x1 + 4x2 + 7x3 + x4 + 5x5
  subject to  x1 + x2 + 2x3 + x4 + 2x5 = 7
              x1 + 2x2 + 3x3 + x4 + x5 = 6
              x1 + x2 + x3 + 2x4 + x5 = 4
              x1 free;  x2 ≥ 0, x3 ≥ 0, x4 ≥ 0, x5 ≥ 0

Since x1 is free, it can be eliminated by solving for x1 in terms of the other variables from the 1st equation and substituting everywhere else. This can be done nicely using our pivot operation on the following simplex tableau:

  x1  x2  x3  x4  x5 |  b
   1   1   2   1   2 |  7
   1   2   3   1   1 |  6        Initial tableau
   1   1   1   2   1 |  4

We select any non-zero element in the first column as our pivot element; this will eliminate x1 from all other rows:

  x1  x2  x3  x4  x5 |  b
   1   1   2   1   2 |  7   (*)
   0   1   1   0  −1 | −1        Equivalent problem
   0   0  −1   1  −1 | −3

Saving the first row (*) for future reference only, we carry on with only the sub-tableau with the first row and the first column deleted. There is no obvious basic feasible solution, so we use the two-phase method. After making b ≥ 0, we introduce artificial variables y1 ≥ 0 and y2 ≥ 0 to give the artificial problem:

  x2  x3  x4  x5  y1  y2 |  b
  −1  −1   0   1   1   0 |  1
   0   1  −1   1   0   1 |  3        Initial tableau for phase I
   0   0   0   0   1   1 |  0

with c_B = (1, 1)ᵀ. Transforming the last row to give a tableau in canonical form, we get

  x2  x3  x4  x5  y1  y2 |  b
  −1  −1   0   1   1   0 |  1
   0   1  −1   1   0   1 |  3        First tableau phase I
   1   0   1  −2   0   0 | −4

which is in canonical form. We carry out the pivot operations with the indicated pivot elements (first, x5 enters and y1 leaves):

  x2  x3  x4  x5  y1  y2 |  b
  −1  −1   0   1   1   0 |  1
   1   2  −1   0  −1   1 |  2        Second tableau phase I
  −1  −2   1   0   2   0 | −2

   x2    x3   x4   x5   y1    y2  |  b
  −1/2    0  −1/2   1   1/2   1/2 |  2
   1/2    1  −1/2   0  −1/2   1/2 |  1        Final tableau phase I
    0     0    0    0    1     1  |  0

At the end of phase I, we go back to the equivalent reduced problem (i.e. discarding the artificial variables y1, y2):

   x2    x3   x4   x5 |  b
  −1/2    0  −1/2   1 |  2
   1/2    1  −1/2   0 |  1        Initial tableau phase II
    2     3   −1    1 |  0

with c_B = (1, 3)ᵀ, the original costs of the basic variables x5 and x3. Transforming the cost row to canonical form gives

   x2    x3   x4   x5 |  b
  −1/2    0  −1/2   1 |  2
   1/2    1  −1/2   0 |  1        Final tableau phase II
    1     0    1    0 |

All reduced cost coefficients are non-negative, so this tableau is already optimal for the min program. The solution x3 = 1, x5 = 2 can be inserted in the expression (*) for x1, giving

  x1 = 7 − x2 − 2x3 − x4 − 2x5 = 7 − 2(1) − 2(2) = 1.

Thus the final solution is x1 = 1, x2 = 0, x3 = 1, x4 = 0, x5 = 2, with x0 = 19.

Various possible cases when applying the Simplex Method

(1) Degeneracy [Ref: H.A. Taha, Chapter 3]

[Sequence of simplex tableaus omitted. Successive pivots pass through degenerate vertices: first a tableau with x4 = 0 and basic (a degenerate vertex), then one with x5 = 0 and basic, then again x4 = 0 and basic. The degenerate vertex V is represented by three different bases, corresponding to the zero pairs {x2 = 0, x4 = 0}, {x4 = 0, x5 = 0} and {x2 = 0, x5 = 0}.]

Exercise: Try pivoting in variable x2 from the very beginning. Do you see any degeneracy? Why?

Example of Degeneracy and Cycling (Beale)

    Maximize x_0 = 20x_1 + (1/2)x_2 - 6x_3 + (3/4)x_4
    subject to    x_1                              <= 2
                 8x_1 -      x_2 + 9x_3 + (1/4)x_4 <= 16
                12x_1 - (1/2)x_2 + 3x_3 + (1/2)x_4 <= 24
                              x_2                  <= 1
                 x_1 >= 0, x_2 >= 0, x_3 >= 0, x_4 >= 0

With slack variables x_5, x_6, x_7, x_8, the initial tableau T0 has basis {x_5, x_6, x_7, x_8} and b = (2, 16, 24, 1)^T. Pivoting x_1 in (note the three-way tie in the ratio test: 2/1 = 16/8 = 24/12 = 2) gives T1 with basis {x_1, x_6, x_7, x_8}; further pivots give tableaus T2 and T3. (The numerical entries of T0 through T3 were not recovered in this transcription.)

Tableaus T4, T5, T6 and T7 follow (numerical entries not recovered in this transcription). Hence a cycle (of period 6) is detected, as T1 = T7. To break the cycle, bring in x_4 and remove x_7 (pivoting on the indicated element 1/2 in T7). Then the next iteration yields the (non-degenerate) optimal solution

    x_1 = 2, x_2 = 1, x_3 = 0, x_4 = 1, x_5 = 0, x_6 = 3/4, x_7 = 0, x_8 = 0, with x_0 = 41.25.

When an LP is degenerate, i.e. its feasible region (the convex polytope) possesses degenerate vertices, cycling may occur as follows. Suppose the current basis B yields a degenerate BFS. Since moving from one degenerate vertex (BFS) to another does not change (i.e. increase or decrease) the objective function value, it is possible for the Simplex procedure to start from the current (degenerate) basis B and, after some number p of iterations, to return to B with no change in the objective function value, as long as all vertices in between are degenerate. This means

that a further p iterations will again bring us back to this same basis B. The process is then said to be cycling. In our example, starting from basis B = (a_1, a_6, a_7, a_8), we move to (a_1, a_4, a_7, a_8), to (a_1, a_4, a_5, a_8), to (a_1, a_2, a_5, a_8), to (a_1, a_2, a_3, a_8), to (a_1, a_6, a_3, a_8), and finally back to (a_1, a_6, a_7, a_8), in six iterations, i.e. a cycle of period p = 6.

To get out of cycling, one way is to try a different pivot element. (Degeneracy guarantees the existence of more than one feasible pivot element, i.e. tied ratios exist.) This is done as indicated in our example above. Another way, in terms of computer implementation, is by perturbation of the data; for our example this may be done by changing b = (2, 16, 24, 1)^T in T0 to a slightly perturbed vector such as (2.00001, ...)^T (the remaining perturbed entries were not recovered in this transcription). Yet another way uses the concept of lexicographic order of vectors (cf. G.B. Dantzig, Linear Programming and Extensions). The best of all, however, is Bland's smallest-index rule, as described in Mathematics of Operations Research, Vol. 2, No. 2 (1977).

(2) Unbounded Solutions

(2a) Unbounded optimal solution (i.e. x_0 -> infinity)

    Max x_0 = 2x_1 + x_2
    subject to   x_1 - x_2 <= 10    (1)
                2x_1 - x_2 <= 40    (2)
                x_1, x_2 >= 0

    x_1  x_2  x_3  x_4 | b
    [tableau entries not recovered]

No positive ratio exists in the x_2 column. Hence x_2 can be increased without bound while maintaining feasibility. (Why?)
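Bland's smallest-index rule can be demonstrated on Beale's example itself. The sketch below is a plain tableau simplex for max-form problems with <= constraints; the problem data are as reconstructed above from the stated optimal solution (coefficients that were garbled in transcription are assumed to carry the usual minus signs of Beale's example). With Bland's rule for both the entering and the leaving variable, the method provably cannot cycle, and here it reaches x_0 = 41.25.

```python
from fractions import Fraction as F

def simplex_bland(c, A, b):
    """Maximize c.x  s.t.  Ax <= b, x >= 0, using Bland's smallest-index rule,
    which prevents cycling even on degenerate problems."""
    m, n = len(A), len(c)
    # Tableau rows: [structural cols | slack cols | rhs]; last row holds -c.
    T = [[F(A[i][j]) for j in range(n)] + [F(int(i == k)) for k in range(m)]
         + [F(b[i])] for i in range(m)]
    T.append([-F(v) for v in c] + [F(0)] * m + [F(0)])
    basis = list(range(n, n + m))
    while True:
        # Entering: the smallest index with a negative reduced cost (Bland).
        s = next((j for j in range(n + m) if T[m][j] < 0), None)
        if s is None:
            break                                   # optimal
        # Leaving: min ratio, ties broken by smallest basic index (Bland).
        r = min((i for i in range(m) if T[i][s] > 0),
                key=lambda i: (T[i][-1] / T[i][s], basis[i]), default=None)
        if r is None:
            raise ValueError("unbounded")
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [vi - f * vr for vi, vr in zip(T[i], T[r])]
        basis[r] = s
    x = [F(0)] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[m][-1]                              # optimal x, optimal value

# Beale's cycling example, maximisation form as in the notes:
c = [F(20), F(1, 2), F(-6), F(3, 4)]
A = [[1, 0, 0, 0],
     [8, -1, 9, F(1, 4)],
     [12, F(-1, 2), 3, F(1, 2)],
     [0, 1, 0, 0]]
b = [2, 16, 24, 1]
x, z = simplex_bland(c, A, b)
```

Note the three-way ratio tie at the very first pivot (2/1 = 16/8 = 24/12), which is where degeneracy, and hence the danger of cycling under other pivot rules, enters.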

(2b) Unbounded feasible region but bounded optimal solution

    Max x_0 = 6x_1 - 2x_2
    subject to  2x_1 - x_2 <= 2    (1)
                 x_1       <= 4    (2)
                x_1, x_2 >= 0

    [initial and optimal tableaus not recovered in this transcription]

Any (x_1, x_2) = (1, k), for k any positive number, is a feasible solution, so the feasible region is unbounded; the optimal tableau nevertheless exhibits a finite optimum.

(3) Infinite number of Optimal Solutions

    Max x_0 = 4x_1 + 14x_2
    subject to  2x_1 + 7x_2 <= 21
                7x_1 + 2x_2 <= 21
                x_1, x_2 >= 0
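Example (2a) above can be checked numerically. The answer to the "(Why?)" is that the x_2 column of the tableau has no positive entry, so the ratio test produces no blocking row: increasing x_2 never uses up any constraint. The sketch below verifies both facts for (2a)'s data.

```python
def feasible(x1, x2):
    """Constraints of example (2a): x1 - x2 <= 10 and 2x1 - x2 <= 40."""
    return x1 - x2 <= 10 and 2 * x1 - x2 <= 40

# In the initial tableau the x2 column is (-1, -1): no positive entry,
# so no row blocks x2 and it may grow without bound.
column_x2 = [-1, -1]
no_blocking_row = all(a <= 0 for a in column_x2)

# March along the feasible ray (x1, x2) = (0, t): the objective 2*x1 + x2 = t
# grows without bound while every point on the ray stays feasible.
values = [2 * 0 + t for t in (0, 10, 100, 1000) if feasible(0, t)]
```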

    [simplex tableaus not recovered in this transcription]

Zero reduced-cost coefficients for non-basic variables at optimality indicate alternative optimal solutions, since if we pivot in those columns, the x_0 value remains the same after a change of basis to a different BFS. Notice that the Simplex Method yields only the extreme-point optimal (BFS) solutions. More generally, the set of alternative optimal solutions is given by the convex combinations of optimal extreme-point solutions: if x^1, x^2, ..., x^p are extreme-point optimal solutions, then

    x = Σ_{k=1}^p λ_k x^k,  where 0 <= λ_k <= 1 and Σ_{k=1}^p λ_k = 1,

is also an optimal solution.

(4) Non-existence of feasible solutions

In terms of the artificial-variable techniques, the solution at optimality could include one or more artificial variables at a positive level (i.e. as a non-zero basic variable). In such a case the corresponding constraint is violated and the artificial variable cannot be driven out of the basis. The feasible region is thus seen to be empty. (Can this ever happen to an LP that can be put into feasible canonical form?)

More Compact Simplex Tableau (Jordan interchange)

Consider an LP tableau such as the following:

    x_1  x_2  x_3  x_4  x_5  x_6 | b
    [tableau entries not recovered]
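The convex-combination characterisation can be verified on example (3) above. Its two optimal extreme points are (0, 3) and (7/3, 7/3), both with value 42 (the objective 4x_1 + 14x_2 is parallel to the first constraint); every convex combination of them is feasible with the same value.

```python
from fractions import Fraction as F

def objective(x):
    return 4 * x[0] + 14 * x[1]

def feasible(x):
    return (2 * x[0] + 7 * x[1] <= 21 and 7 * x[0] + 2 * x[1] <= 21
            and x[0] >= 0 and x[1] >= 0)

# The two optimal extreme points of example (3):
v1 = (F(0), F(3))
v2 = (F(7, 3), F(7, 3))

# Every convex combination lam*v1 + (1-lam)*v2 is feasible with value 42.
combos = [tuple(lam * a + (1 - lam) * c for a, c in zip(v1, v2))
          for lam in (F(0), F(1, 4), F(1, 2), F(3, 4), F(1))]
all_optimal = all(feasible(x) and objective(x) == 42 for x in combos)
```

Linearity of the objective and convexity of the feasible region are what make the check succeed for every lambda, not just the sampled ones.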

Notice that the same amount of information is contained in the more compact tableau with the basic columns omitted:

    x_1  x_2  x_3  x_4 | b
    [tableau entries not recovered]

To carry out a pivot operation, say to have x_4 replaced by x_5, we note that the resulting compact tableau should have columns x_1, x_2, x_3 and x_5 only, since these are then the non-basic columns after this pivot operation.

    Full tableau:     x_1  x_2  x_3  x_4  x_5  x_6 | b
    Compact tableau:  x_1  x_2  x_3  x_5 | b
    [tableau entries not recovered]

For the full tableau we use the pivot rule as usual, that is, pivoting on a in

    a  b          1       b/a
    c  d   --->   0    d - cb/a

In particular, for the x_5 column (the entering column) and the x_4 column (the old basic column), we have

    x_5 col  x_4 col          x_5 col  x_4 col
       a        1      --->      1       1/a
       c        0                0      -c/a
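The compact-tableau pivot is the classical Jordan exchange, which swaps the roles of one basic and one non-basic variable in place. A convenient way to sanity-check the rule (pivot a -> 1/a, pivot row -> b/a, pivot column -> -c/a, rest -> d - cb/a) is that exchanging twice at the same position must restore the original tableau, since the second swap undoes the first. A minimal sketch:

```python
from fractions import Fraction as F

def jordan_exchange(T, r, s):
    """Compact-tableau pivot at (r, s): swap the non-basic variable of column s
    with the basic variable of row r.  Rules: pivot a -> 1/a, pivot row -> b/a,
    pivot column -> -c/a, remaining entries d -> d - c*b/a."""
    a = F(T[r][s])
    U = [[F(v) for v in row] for row in T]
    U[r][s] = 1 / a
    for j in range(len(T[0])):
        if j != s:
            U[r][j] = T[r][j] / a                    # pivot row
    for i in range(len(T)):
        if i != r:
            U[i][s] = -F(T[i][s]) / a                # pivot column
            for j in range(len(T[0])):
                if j != s:
                    U[i][j] = T[i][j] - F(T[i][s]) * T[r][j] / a
    return U

T = [[F(2), F(1)],
     [F(3), F(5)]]
# Exchanging twice at the same position is the identity.
back = jordan_exchange(jordan_exchange(T, 0, 0), 0, 0)
```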

For the compact tableau we use the same rule, except that we also do a replacement (in position) of the x_4 column by the x_5 column. Therefore in the compact scheme, pivot-and-replacement becomes

    a  b          1/a      b/a
    c  d   --->  -c/a   d - cb/a

The Revised Simplex Method
(Simplex Method in Explicit Inverse Form; or Simplex Method in Matrix Form)

Consider the general representation of an LP with respect to the basis B:

    B^{-1} N x_N + I x_B = B^{-1} b
    x_0 - (c_N^T - c_B^T B^{-1} N) x_N = c_B^T B^{-1} b

Observe that at any time during the application of the Simplex procedure, the knowledge of B^{-1} is sufficient to read off a BFS, i.e. x_B = B^{-1} b, x_N = 0, and x_0 = c_B^T B^{-1} b. Hence the idea behind the Revised Simplex Method is as follows: instead of carrying out the computation on the entire simplex tableau, we keep only the current basis inverse B^{-1} (and the original data A, b and c) and compute only what we need for each iteration.

Step 0. Given the current basis inverse B^{-1}, read off the current BFS x_B = B^{-1} b (the column y_0 in tableau notation).

Step 1. Calculate the reduced cost coefficients r_N^T = c_N^T - c_B^T B^{-1} N. (This is best done by first calculating λ := c_B^T B^{-1}, a row vector in R^m, and then r_N^T = c_N^T - λN.) If r_N >= 0 (for a Min program) or r_N <= 0 (for a Max program), the current BFS is optimal.

Step 2. Select a column a_s from among the non-basic columns with r_s < 0 (for a Min program) or r_s > 0 (for a Max program) and calculate y_s = B^{-1} a_s, which is the current column associated with the variable x_s in terms of the current basis B.

Step 3. Calculate the ratios y_{i0}/y_{is}, for those i with y_{is} > 0, to determine the column a_r which is to leave the basis.

Step 4. Update B^{-1} (i.e. replace column a_r by column a_s in B and obtain the inverse of the new basis) and the current BFS x_B = B^{-1} b. Return to Step 1.

Numerical Example on the Revised Simplex Method

To maximize c^T x, where c = (3, 1, 3, 0, 0, 0)^T, with the table of coefficients

    a_1  a_2  a_3  a_4  a_5  a_6 |  b
     2    1    1    1    0    0  |  2
     1    2    3    0    1    0  |  5
     2    2    1    0    0    1  |  6

Initial basis B = B^{-1} = I.

(1) Basic variables x_4, x_5, x_6; B^{-1} = I; x_B = (2, 5, 6)^T.
    λ = c_B^T B^{-1} = (0, 0, 0) I = (0, 0, 0)
    r_N^T = c_N^T - λN = (3, 1, 3) - (0, 0, 0) N = (3, 1, 3) > 0
    Bring a_2 into the basis, with y_2 = B^{-1} a_2 = I a_2 = a_2 = (1, 2, 2)^T.

(2) Basic variables x_2, x_5, x_6;
    B^{-1} = [ 1  0  0 ; -2  1  0 ; -2  0  1 ]
    λ = c_B^T B^{-1} = (1, 0, 0) B^{-1} = (1, 0, 0)
    r_N^T = c_N^T - λN = (c_1, c_3, c_4) - λ[a_1, a_3, a_4] = (3, 3, 0) - (2, 1, 1) = (1, 2, -1)
    Bring a_3 into the basis, with y_3 = B^{-1} a_3 = (1, 1, -1)^T.

(3) Basic variables x_2, x_3, x_6;
    B^{-1} = [ 3  -1  0 ; -2  1  0 ; -4  1  1 ]
    λ = c_B^T B^{-1} = (1, 3, 0) B^{-1} = (-3, 2, 0)
    r_N^T = c_N^T - λN = (c_1, c_4, c_5) - λ[a_1, a_4, a_5] = (3, 0, 0) - (-4, -3, 2) = (7, 3, -2)
    Bring a_1 into the basis, with y_1 = B^{-1} a_1 = (5, -3, -5)^T.

(4) Basic variables x_1, x_3, x_6;
    B^{-1} = [ 3/5  -1/5  0 ; -1/5  2/5  0 ; -1  0  1 ]
    λ = c_B^T B^{-1} = (3, 3, 0) B^{-1} = (6/5, 3/5, 0)
    r_N^T = c_N^T - λN = (c_2, c_4, c_5) - λ[a_2, a_4, a_5] = (1, 0, 0) - (12/5, 6/5, 3/5) = (-7/5, -6/5, -3/5) < 0

Optimal solution x* = (1/5, 0, 8/5, 0, 0, 4)^T, with value

    x_0* = c_B^T x_B = c_B^T B^{-1} b = λb = (6/5, 3/5, 0) (2, 5, 6)^T = 27/5.

Duality of Linear Programming

Every LP has associated with it another LP, called its dual, and the two problems have such a close relationship that whenever one problem is solved, the other is solved as well. They are called a dual pair (primal + dual) in the sense that the dual of the dual is again the primal.

    Primal                      Dual
    Max c^T x                   Min yb
    subject to Ax <= b          subject to yA >= c^T
               x >= 0                      y >= 0

where x is a column n-vector, c is a column n-vector, b is a column m-vector, y is a row m-vector, and A is m x n. (N.B. Calling one of them primal and the other dual is completely arbitrary.)

We observe from the above the following correspondence:

    P (Max Program)                          D (Min Program)
    c_j : n objective function coefficients  n right-hand sides
    b_i : m right-hand sides                 m objective function coefficients
    y_i : m (<=) constraints                 m non-negative variables
    x_j : n non-negative variables           n (>=) constraints

Definition. This pair of dual programs is called the symmetric form of the dual pair.
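The revised simplex iterations of the numerical example above can be sketched in code. The coefficient table in the notes lost its entries in transcription, so the A and b below are reconstructed so as to match every surviving value (the stated λ's, reduced costs, and the optimum x* = (1/5, 0, 8/5, 0, 0, 4), x_0* = 27/5); treat them as an assumption. The sketch keeps only B^{-1}, exactly as Steps 0-4 prescribe; its entering choice (first improving column) differs from the notes' choice of a_2, but it reaches the same, unique, optimum.

```python
from fractions import Fraction as F

def mat_vec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

def revised_simplex_max(c, A, b, basis):
    """Revised simplex for  max c.x, Ax = b, x >= 0,  keeping only B^{-1}."""
    m, n = len(A), len(c)
    Binv = [[F(int(i == j)) for j in range(m)] for i in range(m)]  # initial B = I
    while True:
        lam = [sum(F(c[basis[i]]) * Binv[i][k] for i in range(m)) for k in range(m)]
        # Reduced costs of non-basic columns; optimal when none is positive (Max).
        rn = {j: F(c[j]) - sum(lam[k] * A[k][j] for k in range(m))
              for j in range(n) if j not in basis}
        s = next((j for j, red in rn.items() if red > 0), None)
        if s is None:
            break
        ys = mat_vec(Binv, [F(A[k][s]) for k in range(m)])   # y_s = B^{-1} a_s
        xB = mat_vec(Binv, [F(v) for v in b])                # x_B = B^{-1} b
        r = min((i for i in range(m) if ys[i] > 0), key=lambda i: xB[i] / ys[i])
        piv = ys[r]                                          # update B^{-1} by pivot
        Binv[r] = [v / piv for v in Binv[r]]
        for i in range(m):
            if i != r:
                Binv[i] = [vi - ys[i] * vr for vi, vr in zip(Binv[i], Binv[r])]
        basis[r] = s
    x = [F(0)] * n
    for j, v in zip(basis, mat_vec(Binv, [F(v) for v in b])):
        x[j] = v
    return x, sum(F(c[j]) * x[j] for j in range(n))

# Data reconstructed to be consistent with the example's surviving values:
c = [3, 1, 3, 0, 0, 0]
A = [[2, 1, 1, 1, 0, 0],
     [1, 2, 3, 0, 1, 0],
     [2, 2, 1, 0, 0, 1]]
b = [2, 5, 6]
x, z = revised_simplex_max(c, A, b, [3, 4, 5])   # start from the slack basis
```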

The Diet Problem (I)

Q: How can a dietician design the most economical diet that satisfies the basic daily nutritional requirements for good health?

We have the following information. Available at the market are n different types of food; the unit cost of food j is c_j (j = 1, 2, ..., n). There are m basic nutritional ingredients (nutrients); each individual requires daily at least b_i units of nutrient i (i = 1, 2, ..., m), and each unit of food j contains a_ij units of nutrient i.

Denoting by x_j (our decision variable) the number of units of food j to include in a diet, the problem is to select the x_j's so as to minimize the total cost x_0 of a diet, i.e.

    Min x_0 = Σ_j c_j x_j

subject to the nutritional constraints

    Σ_j a_ij x_j >= b_i   (i = 1, 2, ..., m)

and the non-negativity constraints

    x_j >= 0   (j = 1, 2, ..., n).

That is, (I) becomes Min { x_0 = c^T x : Ax >= b, x >= 0 }.

The Diet Problem (II)

Q: How can a pharmaceutical company determine the price of each unit of nutrient pill so as to maximize revenue, if a synthetic diet made up of nutrient pills of various pure nutrients is adopted?

Denoting by y_i the unit price of nutrient pill i, the problem is to maximize the total revenue y_0 from selling such a synthetic diet, i.e.

    Max y_0 = Σ_{i=1}^m y_i b_i

subject to the constraints that the cost of a unit of synthetic food j made up of nutrient pills is no greater than the unit market price of food j:

    Σ_{i=1}^m y_i a_ij <= c_j   (j = 1, 2, ..., n)

and

    y_i >= 0   (i = 1, 2, ..., m).

That is, (II) becomes Max { y_0 = yb : yA <= c^T, y >= 0 }.
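The two diet problems can be played off against each other numerically. The sketch below invents a tiny instance (two foods, two nutrients; all numbers hypothetical) and samples feasible diets x and feasible price vectors y on a half-integer grid: every revenue yb comes out at most every diet cost c^T x, and the best values of the two problems meet, anticipating the duality theorems below.

```python
from itertools import product
from fractions import Fraction as F

# Hypothetical diet instance (all numbers invented):
#   (I)  min 3x1 + 2x2   s.t.  x1 + x2 >= 4,  x1 + 3x2 >= 6,  x >= 0
#   (II) max 4y1 + 6y2   s.t.  y1 + y2 <= 3,  y1 + 3y2 <= 2,  y >= 0
c, b = [3, 2], [4, 6]
A = [[1, 1], [1, 3]]

def diet_feasible(x):
    return all(sum(A[i][j] * x[j] for j in range(2)) >= b[i] for i in range(2))

def price_feasible(y):
    return all(sum(y[i] * A[i][j] for i in range(2)) <= c[j] for j in range(2))

grid = [F(k, 2) for k in range(13)]   # 0, 1/2, ..., 6 (all non-negative)
diets = [x for x in product(grid, repeat=2) if diet_feasible(x)]
prices = [y for y in product(grid, repeat=2) if price_feasible(y)]

# Weak duality: every pill revenue y.b is at most every diet cost c.x.
weak_holds = all(sum(yi * bi for yi, bi in zip(y, b)) <=
                 sum(ci * xi for ci, xi in zip(c, x))
                 for y in prices for x in diets)
cheapest_diet = min(sum(ci * xi for ci, xi in zip(c, x)) for x in diets)
best_revenue = max(sum(yi * bi for yi, bi in zip(y, b)) for y in prices)
```

On this instance both optima are attained on the grid, at x* = (0, 4) and y* = (2, 0), each with value 8.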

Hence (I) and (II) form a dual pair of LPs, and the solution to one should lead to the solution of the other.

Now consider an LP in standard form: Max { c^T x : Ax = b (b >= 0), x >= 0 }. Converting to canonical form gives Max { c^T x : Ax <= b, -Ax <= -b, x >= 0 }. Using a dual vector partitioned as (u, v), the dual is Min { ub - vb : uA - vA >= c^T, u, v >= 0 }. Setting λ := u - v gives Min { λb : λA >= c^T, λ unrestricted in sign (free) }. And we have the unsymmetric form of a dual pair:

    (Primal) Max { c^T x : Ax = b, x >= 0 }   and   (Dual) Min { λb : λA >= c^T, λ free }.

Comparing this with the symmetric form, we conclude that while inequality constraints correspond to non-negative dual variables, equality constraints correspond to free (unrestricted) dual variables.

General rule of the relationship between a dual pair:

    Max Σ_j c_j x_j                             Min Σ_{i=1}^m y_i b_i
    subject to                                  subject to
    Σ_j a_ij x_j <= b_i  (i = 1, ..., k)        y_i >= 0              (i = 1, ..., k)
    Σ_j a_ij x_j  = b_i  (i = k+1, ..., m)      y_i free              (i = k+1, ..., m)
    x_j >= 0  (j = 1, ..., l)                   Σ_i y_i a_ij >= c_j   (j = 1, ..., l)
    x_j free  (j = l+1, ..., n)                 Σ_i y_i a_ij  = c_j   (j = l+1, ..., n)

Example (The Transportation Problem, TP)

The following is the costs-and-requirements table for a TP:

                     Sink (destination)
              c_11   c_12   ...   c_1n  | Supply s_1
    Source    c_21   c_22   ...   c_2n  |        s_2
              ...                       |        ...
              c_m1   c_m2   ...   c_mn  |        s_m
    Demand    d_1    d_2    ...   d_n
    (Assume Σ_{i=1}^m s_i = Σ_{j=1}^n d_j.)

Here c_ij is the unit transportation cost from source i to sink j, s_i the supply available at source i, and d_j the demand required at sink j. The problem is to decide the amounts x_ij to be shipped from i to j so as to minimize the total transportation cost while meeting all demands. That is,

    Min Σ_{i=1}^m Σ_{j=1}^n c_ij x_ij
    subject to  Σ_{j=1}^n x_ij = s_i   (i = 1, 2, ..., m)
                Σ_{i=1}^m x_ij = d_j   (j = 1, 2, ..., n)
                x_ij >= 0   (i = 1, 2, ..., m; j = 1, 2, ..., n)

The dual is then given by (Exercise):

    Max Σ_{i=1}^m s_i u_i + Σ_{j=1}^n d_j v_j
    subject to  u_i + v_j <= c_ij   (i = 1, 2, ..., m; j = 1, 2, ..., n)
                u_i, v_j free

The Duality Theory of Linear Programming

Theorem 1 (Weak Duality). If x and y are feasible solutions to the dual pair, with x for the max program and y for the min program, then c^T x <= yb.

Proof. Using the symmetric form, we get c^T x <= yAx <= yb, since x and y are non-negative and feasible.

Corollary. If x and y are feasible to the dual pair and c^T x = yb, then x and y are both optimal.

Theorem 2 (Strong Duality). If either of a dual pair of LPs has a finite optimum, so does the other, and the two objective function values are equal. If either has an unbounded objective function value, the other has no feasible solution.
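Weak duality already gives a practical optimality certificate for the transportation problem: any dual-feasible (u, v) whose objective equals the cost of a feasible shipping plan proves that plan optimal. The sketch below invents a tiny balanced TP (all numbers hypothetical), brute-forces the integral shipping plans (TP vertices are integral for integral supplies and demands), and exhibits such a certificate.

```python
from itertools import product

# Hypothetical balanced TP: 2 sources (supplies 3, 2), 2 sinks (demands 4, 1).
supply, demand = [3, 2], [4, 1]
cost = [[2, 5], [4, 1]]

def total_cost(x):
    return sum(cost[i][j] * x[i][j] for i in range(2) for j in range(2))

# Brute-force the integral shipment plans meeting all row/column sums exactly.
plans = []
for x11, x12, x21, x22 in product(range(5), repeat=4):
    x = [[x11, x12], [x21, x22]]
    if (all(sum(x[i]) == supply[i] for i in range(2)) and
            all(x[0][j] + x[1][j] == demand[j] for j in range(2))):
        plans.append(x)
best = min(total_cost(x) for x in plans)

# A dual-feasible pair (u_i + v_j <= c_ij, u and v free) certifying optimality:
u, v = [0, 2], [2, -1]
dual_feasible = all(u[i] + v[j] <= cost[i][j] for i in range(2) for j in range(2))
dual_value = (sum(s * ui for s, ui in zip(supply, u))
              + sum(d * vj for d, vj in zip(demand, v)))
```

Note the negative dual price v_2 = -1: since the TP constraints are equalities, the dual variables are free, in line with the general rule above.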

Proof. Consider the unsymmetric form of a dual pair: (P) Max { c^T x : Ax = b, x >= 0 } and (D) Min { λb : λA >= c^T, λ free }. Suppose x* is a finite optimal solution to P with corresponding basis B. Then the reduced cost coefficients satisfy r^T = c^T - c_B^T B^{-1} A <= 0. Let λ* := c_B^T B^{-1}. So c^T - λ*A <= 0, i.e. λ* is feasible for D. Also c^T x* = c_B^T B^{-1} b = λ*b. Hence λ* is optimal for D.

Next, for any feasible y to D, c^T x <= yb for all feasible x. Now if c^T x -> infinity (unbounded for the max program), then yb -> infinity as well; that is, there cannot exist a feasible solution to D.

Corollary. The vector λ* = c_B^T B^{-1} is an optimal solution to the dual.

Theorem 3 (Complementary Slackness). If x and y are feasible solutions to the dual pair, then x and y are optimal if and only if

    y_i ( Σ_j a_ij x_j - b_i ) = 0   (i = 1, 2, ..., m)

and

    x_j ( Σ_{i=1}^m a_ij y_i - c_j ) = 0   (j = 1, 2, ..., n).

(In matrix form: y(b - Ax) = 0 and (yA - c^T)x = 0.)

Proof. y(b - Ax) = (yA - c^T)x = 0 if and only if c^T x = yAx = yb.

Upshot: in optimal non-degenerate solutions x* (y*) to the primal (dual), a variable x_j* > 0 (y_i* > 0) implies that the corresponding j-th dual (i-th primal) constraint is tight (or "binding"), i.e. Σ_{i=1}^m y_i* a_ij = c_j (respectively Σ_{j=1}^n a_ij x_j* = b_i).

Example on Dual Prices

    (P) Max x_1 + 4x_2 + 3x_3
        subject to  2x_1 + 2x_2 + x_3 <= 4
                     x_1 + 2x_2 + 2x_3 <= 6
                    x_1, x_2, x_3 >= 0

    (D) Min 4y_1 + 6y_2
        subject to  2y_1 +  y_2 >= 1
                    2y_1 + 2y_2 >= 4
                     y_1 + 2y_2 >= 3
                    y_1, y_2 >= 0
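Theorem 3 can be checked mechanically on the dual-prices example. The sketch below takes the optimal pair reported in the notes for this example, x* = (0, 1, 2) and y* = (1, 1), computes all primal and dual slacks, and verifies both complementary slackness conditions together with equality of the two objective values (which, by the Corollary to Theorem 1, certifies optimality of both).

```python
# Data of (P): max x1 + 4x2 + 3x3  s.t. 2x1 + 2x2 + x3 <= 4, x1 + 2x2 + 2x3 <= 6.
c, b = [1, 4, 3], [4, 6]
A = [[2, 2, 1], [1, 2, 2]]
x_star, y_star = [0, 1, 2], [1, 1]

# Slack in each primal constraint (b_i - sum_j a_ij x_j) and in each dual
# constraint (sum_i y_i a_ij - c_j).
primal_slacks = [b[i] - sum(A[i][j] * x_star[j] for j in range(3)) for i in range(2)]
dual_slacks = [sum(y_star[i] * A[i][j] for i in range(2)) - c[j] for j in range(3)]

# Complementary slackness: positive variable <=> tight partner constraint.
cs_primal = all(y_star[i] * primal_slacks[i] == 0 for i in range(2))
cs_dual = all(x_star[j] * dual_slacks[j] == 0 for j in range(3))
objectives_equal = (sum(ci * xi for ci, xi in zip(c, x_star))
                    == sum(yi * bi for yi, bi in zip(y_star, b)))
```

Here x_1* = 0 is the only zero variable, and indeed its dual constraint is the only slack one (2y_1* + y_2* = 3 > 1), matching the "upshot" above.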

Initial tableau:

         x_1  x_2  x_3  x_4  x_5 |  b
    x_4   2    2    1    1    0  |  4
    x_5   1    2    2    0    1  |  6
         -1   -4   -3    0    0  |  0

Optimal tableau:

         x_1  x_2  x_3  x_4  x_5 |  b
    x_2  3/2   1    0    1  -1/2 |  1
    x_3  -1    0    1   -1    1  |  2
          2    0    0    1    1  | 10

By duality theory, the optimal dual variables are y* = c_B^T B_opt^{-1}, where B_opt = [a_2, a_3]. Hence

    y* = c_B^T B_opt^{-1} = c_B^T B_opt^{-1} I = c_B^T B_opt^{-1} [a_4, a_5] = (r_4, r_5) + (c_4, c_5) = (1, 1).

That is, the optimal solution to D is obtained directly from the (optimal) objective-function row of the final optimal tableau for P, under the columns where the identity matrix appeared in the initial tableau. (Exercise: What about when the compact form is used?)

Checking complementary slackness for x* = (0, 1, 2)^T and y* = (1, 1) gives:

    y_1* > 0  =>  2x_1* + 2x_2* + x_3* = 4,   i.e. 2(0) + 2(1) + 2 = 4
    y_2* > 0  =>  x_1* + 2x_2* + 2x_3* = 6,   i.e. 0 + 2(1) + 2(2) = 6
    x_1* = 0, and its dual constraint is slack: 2y_1* + y_2* = 2(1) + 1 = 3 >= 1
    x_2* > 0  =>  2y_1* + 2y_2* = 4,   i.e. 2(1) + 2(1) = 4
    x_3* > 0  =>  y_1* + 2y_2* = 3,   i.e. 1 + 2(1) = 3

(Exercise: Solve D using the Simplex method and read off the primal optimal solution. Which one of P and D is easier to solve?)

Dual Simplex Method

1. Given a dual feasible basic solution x_B: if x_B >= 0, then the current solution is optimal; otherwise select an index r such that the component x_r (of x_B) < 0.

2. If all y_rj >= 0 (j = 1, 2, ..., n), then the dual is unbounded (the primal is infeasible); otherwise determine an index s such that

    y_0s / y_rs = Min_j { y_0j / y_rj : y_rj < 0 }.

3. Pivot at the element y_rs and return to Step 1.
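The dual simplex steps can be sketched on a small hypothetical min program (not taken from the notes). The sketch starts from the all-slack basis, which is dual feasible (cost row non-negative) but primal infeasible (negative right-hand sides); the ratio test is written as minimising cost_j / (-y_rj) over y_rj < 0, which is the same selection as Step 2 above expressed so as to keep the cost row non-negative in this sign convention.

```python
from fractions import Fraction as F

def dual_simplex_min(T, cost):
    """Dual simplex for a min program given a dual-feasible tableau:
    cost row >= 0, rhs possibly negative.  Rows of T: [coeffs | rhs]."""
    T = [[F(v) for v in row] for row in T]
    cost = [F(v) for v in cost]
    while min(row[-1] for row in T) < 0:
        r = min(range(len(T)), key=lambda i: T[i][-1])      # most negative rhs
        cand = [j for j in range(len(cost) - 1) if T[r][j] < 0]
        if not cand:
            raise ValueError("dual unbounded: primal has no feasible solution")
        s = min(cand, key=lambda j: cost[j] / -T[r][j])     # dual ratio test
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]
        for i in range(len(T)):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [vi - f * vr for vi, vr in zip(T[i], T[r])]
        f = cost[s]
        cost = [ci - f * tr for ci, tr in zip(cost, T[r])]
    return T, cost

# min x1 + 2x2  s.t.  x1 + x2 >= 2,  x1 >= 1  (hypothetical data; rows negated
# into <= form so the all-slack start is dual feasible but primal infeasible):
T = [[-1, -1, 1, 0, -2],
     [-1,  0, 0, 1, -1]]
cost = [1, 2, 0, 0, 0]     # last entry tracks the negated objective value
T_opt, cost_opt = dual_simplex_min(T, cost)
```

One pivot (row 1, column x_1) restores primal feasibility here; the final right-hand sides (2, 1) give x_1 = 2 with objective value 2, and the cost row stays non-negative throughout, which is exactly the invariant the dual ratio test protects.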

F 1 F 2 Daily Requirement Cost N N N

F 1 F 2 Daily Requirement Cost N N N Chapter 5 DUALITY 5. The Dual Problems Every linear programming problem has associated with it another linear programming problem and that the two problems have such a close relationship that whenever

More information

Chap6 Duality Theory and Sensitivity Analysis

Chap6 Duality Theory and Sensitivity Analysis Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we

More information

1 Review Session. 1.1 Lecture 2

1 Review Session. 1.1 Lecture 2 1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions

More information

OPERATIONS RESEARCH. Linear Programming Problem

OPERATIONS RESEARCH. Linear Programming Problem OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com MODULE - 2: Simplex Method for

More information

Simplex Algorithm Using Canonical Tableaus

Simplex Algorithm Using Canonical Tableaus 41 Simplex Algorithm Using Canonical Tableaus Consider LP in standard form: Min z = cx + α subject to Ax = b where A m n has rank m and α is a constant In tableau form we record it as below Original Tableau

More information

Duality Theory, Optimality Conditions

Duality Theory, Optimality Conditions 5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear

More information

UNIT-4 Chapter6 Linear Programming

UNIT-4 Chapter6 Linear Programming UNIT-4 Chapter6 Linear Programming Linear Programming 6.1 Introduction Operations Research is a scientific approach to problem solving for executive management. It came into existence in England during

More information

TIM 206 Lecture 3: The Simplex Method

TIM 206 Lecture 3: The Simplex Method TIM 206 Lecture 3: The Simplex Method Kevin Ross. Scribe: Shane Brennan (2006) September 29, 2011 1 Basic Feasible Solutions Have equation Ax = b contain more columns (variables) than rows (constraints),

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG630, The simplex method; degeneracy; unbounded solutions; infeasibility; starting solutions; duality; interpretation Ann-Brith Strömberg 2012 03 16 Summary of the simplex method Optimality condition:

More information

Special cases of linear programming

Special cases of linear programming Special cases of linear programming Infeasible solution Multiple solution (infinitely many solution) Unbounded solution Degenerated solution Notes on the Simplex tableau 1. The intersection of any basic

More information

Duality in LPP Every LPP called the primal is associated with another LPP called dual. Either of the problems is primal with the other one as dual. The optimal solution of either problem reveals the information

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

Chapter 5 Linear Programming (LP)

Chapter 5 Linear Programming (LP) Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

4.6 Linear Programming duality

4.6 Linear Programming duality 4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP Different spaces and objective functions but in general same optimal

More information

Math 273a: Optimization The Simplex method

Math 273a: Optimization The Simplex method Math 273a: Optimization The Simplex method Instructor: Wotao Yin Department of Mathematics, UCLA Fall 2015 material taken from the textbook Chong-Zak, 4th Ed. Overview: idea and approach If a standard-form

More information

Lecture 10: Linear programming duality and sensitivity 0-0

Lecture 10: Linear programming duality and sensitivity 0-0 Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to

More information

MATH 4211/6211 Optimization Linear Programming

MATH 4211/6211 Optimization Linear Programming MATH 4211/6211 Optimization Linear Programming Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 The standard form of a Linear

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta Chapter 4 Linear Programming: The Simplex Method An Overview of the Simplex Method Standard Form Tableau Form Setting Up the Initial Simplex Tableau Improving the Solution Calculating the Next Tableau

More information

Gauss-Jordan Elimination for Solving Linear Equations Example: 1. Solve the following equations: (3)

Gauss-Jordan Elimination for Solving Linear Equations Example: 1. Solve the following equations: (3) The Simple Method Gauss-Jordan Elimination for Solving Linear Equations Eample: Gauss-Jordan Elimination Solve the following equations: + + + + = 4 = = () () () - In the first step of the procedure, we

More information

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming

CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150

More information

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod Contents 4 The Simplex Method for Solving LPs 149 4.1 Transformations to be Carried Out On an LP Model Before Applying the Simplex Method On It... 151 4.2 Definitions of Various Types of Basic Vectors

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

3. Duality: What is duality? Why does it matter? Sensitivity through duality.

3. Duality: What is duality? Why does it matter? Sensitivity through duality. 1 Overview of lecture (10/5/10) 1. Review Simplex Method 2. Sensitivity Analysis: How does solution change as parameters change? How much is the optimal solution effected by changing A, b, or c? How much

More information

Dr. S. Bourazza Math-473 Jazan University Department of Mathematics

Dr. S. Bourazza Math-473 Jazan University Department of Mathematics Dr. Said Bourazza Department of Mathematics Jazan University 1 P a g e Contents: Chapter 0: Modelization 3 Chapter1: Graphical Methods 7 Chapter2: Simplex method 13 Chapter3: Duality 36 Chapter4: Transportation

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

OPRE 6201 : 3. Special Cases

OPRE 6201 : 3. Special Cases OPRE 6201 : 3. Special Cases 1 Initialization: The Big-M Formulation Consider the linear program: Minimize 4x 1 +x 2 3x 1 +x 2 = 3 (1) 4x 1 +3x 2 6 (2) x 1 +2x 2 3 (3) x 1, x 2 0. Notice that there are

More information

Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems

Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D}

More information

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method)

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method) Moving from BFS to BFS Developing an Algorithm for LP Preamble to Section (Simplex Method) We consider LP given in standard form and let x 0 be a BFS. Let B ; B ; :::; B m be the columns of A corresponding

More information

1 Overview. 2 Extreme Points. AM 221: Advanced Optimization Spring 2016

1 Overview. 2 Extreme Points. AM 221: Advanced Optimization Spring 2016 AM 22: Advanced Optimization Spring 206 Prof. Yaron Singer Lecture 7 February 7th Overview In the previous lectures we saw applications of duality to game theory and later to learning theory. In this lecture

More information

Review Solutions, Exam 2, Operations Research

Review Solutions, Exam 2, Operations Research Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To

More information

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur END3033 Operations Research I Sensitivity Analysis & Duality to accompany Operations Research: Applications and Algorithms Fatih Cavdur Introduction Consider the following problem where x 1 and x 2 corresponds

More information

LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm

LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Linear programming Linear programming. Optimize a linear function subject to linear inequalities. (P) max c j x j n j= n s. t. a ij x j = b i i m j= x j 0 j n (P) max c T x s. t. Ax = b Lecture slides

More information

TRANSPORTATION PROBLEMS

TRANSPORTATION PROBLEMS Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations

More information

Linear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming

Linear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming Linear Programming Linear Programming Lecture Linear programming. Optimize a linear function subject to linear inequalities. (P) max " c j x j n j= n s. t. " a ij x j = b i # i # m j= x j 0 # j # n (P)

More information

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method...

3 Development of the Simplex Method Constructing Basic Solution Optimality Conditions The Simplex Method... Contents Introduction to Linear Programming Problem. 2. General Linear Programming problems.............. 2.2 Formulation of LP problems.................... 8.3 Compact form and Standard form of a general

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Linear Programming Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The standard Linear Programming (LP) Problem Graphical method of solving LP problem

More information

Lecture slides by Kevin Wayne

Lecture slides by Kevin Wayne LINEAR PROGRAMMING I a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM Linear programming

More information

Linear Programming and the Simplex method

Linear Programming and the Simplex method Linear Programming and the Simplex method Harald Enzinger, Michael Rath Signal Processing and Speech Communication Laboratory Jan 9, 2012 Harald Enzinger, Michael Rath Jan 9, 2012 page 1/37 Outline Introduction

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation

More information

Lecture 6 Simplex method for linear programming

Lecture 6 Simplex method for linear programming Lecture 6 Simplex method for linear programming Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University,

More information

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization

More information

4.5 Simplex method. min z = c T x s.v. Ax = b. LP in standard form

4.5 Simplex method. min z = c T x s.v. Ax = b. LP in standard form 4.5 Simplex method min z = c T x s.v. Ax = b x 0 LP in standard form Examine a sequence of basic feasible solutions with non increasing objective function value until an optimal solution is reached or

More information

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities Linear Programming Murti V Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities murtis@umnedu September 4, 2012 Linear Programming 1 The standard Linear Programming (SLP) problem:

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,

More information

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define

More information

9.1 Linear Programs in canonical form

9.1 Linear Programs in canonical form 9.1 Linear Programs in canonical form LP in standard form: max (LP) s.t. where b i R, i = 1,..., m z = j c jx j j a ijx j b i i = 1,..., m x j 0 j = 1,..., n But the Simplex method works only on systems

More information

Lectures 6, 7 and part of 8

Lectures 6, 7 and part of 8 Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,

More information

AM 121: Intro to Optimization

AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

More information

Introduction. Very efficient solution procedure: simplex method.

Introduction. Very efficient solution procedure: simplex method. LINEAR PROGRAMMING Introduction Development of linear programming was among the most important scientific advances of mid 20th cent. Most common type of applications: allocate limited resources to competing

More information

AM 121: Intro to Optimization Models and Methods

AM 121: Intro to Optimization Models and Methods AM 121: Intro to Optimization Models and Methods Fall 2017 Lecture 2: Intro to LP, Linear algebra review. Yiling Chen SEAS Lecture 2: Lesson Plan What is an LP? Graphical and algebraic correspondence Problems

More information

Introduction to Mathematical Programming

Introduction to Mathematical Programming Introduction to Mathematical Programming Ming Zhong Lecture 22 October 22, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 16 Table of Contents 1 The Simplex Method, Part II Ming Zhong (JHU) AMS Fall 2018 2 /

More information

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table

More information

Optimization (168) Lecture 7-8-9

Optimization (168) Lecture 7-8-9 Optimization (168) Lecture 7-8-9 Jesús De Loera UC Davis, Mathematics Wednesday, April 2, 2012 1 DEGENERACY IN THE SIMPLEX METHOD 2 DEGENERACY z =2x 1 x 2 + 8x 3 x 4 =1 2x 3 x 5 =3 2x 1 + 4x 2 6x 3 x 6

More information

Numerical Optimization

Numerical Optimization Linear Programming Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on min x s.t. Transportation Problem ij c ijx ij 3 j=1 x ij a i, i = 1, 2 2 i=1 x ij

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis Ann-Brith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with

More information

CO 602/CM 740: Fundamentals of Optimization Problem Set 4

CO 602/CM 740: Fundamentals of Optimization Problem Set 4 CO 602/CM 740: Fundamentals of Optimization Problem Set 4 H. Wolkowicz Fall 2014. Handed out: Wednesday 2014-Oct-15. Due: Wednesday 2014-Oct-22 in class before lecture starts. Contents 1 Unique Optimum

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Formulation of the Dual Problem Primal-Dual Relationship Economic Interpretation

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

II. Analysis of Linear Programming Solutions

II. Analysis of Linear Programming Solutions Optimization Methods Draft of August 26, 2005 II. Analysis of Linear Programming Solutions Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston, Illinois

More information

3.7 Cutting plane methods

3.7 Cutting plane methods 3.7 Cutting plane methods Generic ILP problem min{ c t x : x X = {x Z n + : Ax b} } with m n matrix A and n 1 vector b of rationals. According to Meyer s theorem: There exists an ideal formulation: conv(x

More information

MATH 445/545 Test 1 Spring 2016

MATH 445/545 Test 1 Spring 2016 MATH 445/545 Test Spring 06 Note the problems are separated into two sections a set for all students and an additional set for those taking the course at the 545 level. Please read and follow all of these

More information

Slack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0

Slack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Simplex Method Slack Variable Max Z= 3x 1 + 4x 2 + 5X 3 Subject to: X 1 + X 2 + X 3 20 3x 1 + 4x 2 + X 3 15 2X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Standard Form Max Z= 3x 1 +4x 2 +5X 3 + 0S 1 + 0S 2

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 5: The Simplex method, continued Prof. John Gunnar Carlsson September 22, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 22, 2010

More information

4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b

4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b 4.5 Simplex method LP in standard form: min z = c T x s.t. Ax = b x 0 George Dantzig (1914-2005) Examine a sequence of basic feasible solutions with non increasing objective function values until an optimal

More information

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5,

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5, Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method Reading: Sections 2.6.4, 3.5, 10.2 10.5 1 Summary of the Phase I/Phase II Simplex Method We write a typical simplex tableau as z x 1 x

More information

Linear Programming Inverse Projection Theory Chapter 3

Linear Programming Inverse Projection Theory Chapter 3 1 Linear Programming Inverse Projection Theory Chapter 3 University of Chicago Booth School of Business Kipp Martin September 26, 2017 2 Where We Are Headed We want to solve problems with special structure!

More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

Fundamentals of Operations Research. Prof. G. Srinivasan. Indian Institute of Technology Madras. Lecture No. # 15

Fundamentals of Operations Research. Prof. G. Srinivasan. Indian Institute of Technology Madras. Lecture No. # 15 Fundamentals of Operations Research Prof. G. Srinivasan Indian Institute of Technology Madras Lecture No. # 15 Transportation Problem - Other Issues Assignment Problem - Introduction In the last lecture

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

MATHEMATICAL PROGRAMMING I

MATHEMATICAL PROGRAMMING I MATHEMATICAL PROGRAMMING I Books There is no single course text, but there are many useful books, some more mathematical, others written at a more applied level. A selection is as follows: Bazaraa, Jarvis

More information

February 17, Simplex Method Continued

February 17, Simplex Method Continued 15.053 February 17, 2005 Simplex Method Continued 1 Today s Lecture Review of the simplex algorithm. Formalizing the approach Alternative Optimal Solutions Obtaining an initial bfs Is the simplex algorithm

More information

Brief summary of linear programming and duality: Consider the linear program in standard form. (P ) min z = cx. x 0. (D) max yb. z = c B x B + c N x N

Brief summary of linear programming and duality: Consider the linear program in standard form. (P ) min z = cx. x 0. (D) max yb. z = c B x B + c N x N Brief summary of linear programming and duality: Consider the linear program in standard form (P ) min z = cx s.t. Ax = b x 0 where A R m n, c R 1 n, x R n 1, b R m 1,and its dual (D) max yb s.t. ya c.

More information

Linear Programming. (Com S 477/577 Notes) Yan-Bin Jia. Nov 28, 2017

Linear Programming. (Com S 477/577 Notes) Yan-Bin Jia. Nov 28, 2017 Linear Programming (Com S 4/ Notes) Yan-Bin Jia Nov 8, Introduction Many problems can be formulated as maximizing or minimizing an objective in the form of a linear function given a set of linear constraints

More information

Linear programming. Starch Proteins Vitamins Cost ($/kg) G G Nutrient content and cost per kg of food.

Linear programming. Starch Proteins Vitamins Cost ($/kg) G G Nutrient content and cost per kg of food. 18.310 lecture notes September 2, 2013 Linear programming Lecturer: Michel Goemans 1 Basics Linear Programming deals with the problem of optimizing a linear objective function subject to linear equality

More information

Linear Programming. Chapter Introduction

Linear Programming. Chapter Introduction Chapter 3 Linear Programming Linear programs (LP) play an important role in the theory and practice of optimization problems. Many COPs can directly be formulated as LPs. Furthermore, LPs are invaluable

More information

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1) Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3

More information

AM 121 Introduction to Optimization: Models and Methods Example Questions for Midterm 1

AM 121 Introduction to Optimization: Models and Methods Example Questions for Midterm 1 AM 121 Introduction to Optimization: Models and Methods Example Questions for Midterm 1 Prof. Yiling Chen Fall 2018 Here are some practice questions to help to prepare for the midterm. The midterm will

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG631,Linear and integer optimization with applications The simplex method: degeneracy; unbounded solutions; starting solutions; infeasibility; alternative optimal solutions Ann-Brith Strömberg

More information

IE 400: Principles of Engineering Management. Simplex Method Continued

IE 400: Principles of Engineering Management. Simplex Method Continued IE 400: Principles of Engineering Management Simplex Method Continued 1 Agenda Simplex for min problems Alternative optimal solutions Unboundedness Degeneracy Big M method Two phase method 2 Simplex for

More information

CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017

CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 Linear Function f: R n R is linear if it can be written as f x = a T x for some a R n Example: f x 1, x 2 =

More information

"SYMMETRIC" PRIMAL-DUAL PAIR

SYMMETRIC PRIMAL-DUAL PAIR "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax

More information

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010 Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts

More information

Linear programs Optimization Geoff Gordon Ryan Tibshirani

Linear programs Optimization Geoff Gordon Ryan Tibshirani Linear programs 10-725 Optimization Geoff Gordon Ryan Tibshirani Review: LPs LPs: m constraints, n vars A: R m n b: R m c: R n x: R n ineq form [min or max] c T x s.t. Ax b m n std form [min or max] c

More information

Chapter 3, Operations Research (OR)

Chapter 3, Operations Research (OR) Chapter 3, Operations Research (OR) Kent Andersen February 7, 2007 1 Linear Programs (continued) In the last chapter, we introduced the general form of a linear program, which we denote (P) Minimize Z

More information

Introduction to Linear and Combinatorial Optimization (ADM I)

Introduction to Linear and Combinatorial Optimization (ADM I) Introduction to Linear and Combinatorial Optimization (ADM I) Rolf Möhring based on the 20011/12 course by Martin Skutella TU Berlin WS 2013/14 1 General Remarks new flavor of ADM I introduce linear and

More information

AM 121: Intro to Optimization Models and Methods Fall 2018

AM 121: Intro to Optimization Models and Methods Fall 2018 AM 121: Intro to Optimization Models and Methods Fall 2018 Lecture 5: The Simplex Method Yiling Chen Harvard SEAS Lesson Plan This lecture: Moving towards an algorithm for solving LPs Tableau. Adjacent

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

MAT 2009: Operations Research and Optimization 2010/2011. John F. Rayman

MAT 2009: Operations Research and Optimization 2010/2011. John F. Rayman MAT 29: Operations Research and Optimization 21/211 John F. Rayman Department of Mathematics University of Surrey Introduction The assessment for the this module is based on a class test counting for 1%

More information

ECE 307 Techniques for Engineering Decisions

ECE 307 Techniques for Engineering Decisions ECE 7 Techniques for Engineering Decisions Introduction to the Simple Algorithm George Gross Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign ECE 7 5 9 George

More information

The Simplex Algorithm and Goal Programming

The Simplex Algorithm and Goal Programming The Simplex Algorithm and Goal Programming In Chapter 3, we saw how to solve two-variable linear programming problems graphically. Unfortunately, most real-life LPs have many variables, so a method is

More information