"SYMMETRIC" PRIMAL-DUAL PAIR


1 "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax = b st A T y c T x y unrestricted IN GENERAL, PRIMAL Objective: Min cx DUAL Objective: Max y T b Variable: x j Constraint: (A j ) T y c j Variable: x j unrestricted (A j ) T y = c j Constraint: (A i )x b i Variable: y i (A i )x = b i Variable: y i unrestricted Coefficient matrix: A RHS Vector: b Cost Vector: c Coefficient matrix: A T Cost Vector: b RHS Vector: c 1

WEAK DUALITY THEOREM: Consider the primal-dual pair:

(P) Minimize cx st Ax ≥ b, x ≥ 0
(D) Maximize y^T b st A^T y ≤ c^T, y ≥ 0

Suppose x is primal feasible and y is dual feasible. Then cx ≥ y^T b.

Proof: x is feasible in (P), so Ax ≥ b and x ≥ 0. Similarly, y is feasible in (D), so A^T y ≤ c^T and y ≥ 0. Since Ax ≥ b and y ≥ 0, we have y^T Ax ≥ y^T b. (1) Also, A^T y ≤ c^T and x ≥ 0, so x^T(A^T y) ≤ x^T c^T, i.e., y^T Ax ≤ cx. (2) From (1) and (2), cx ≥ y^T Ax ≥ y^T b.

Some corollaries (x* and y* are optimum vectors):
1) The primal objective for any primal-feasible x is ≥ (y*)^T b.
2) The dual objective for any dual-feasible y is ≤ cx*.
3) If x and y are feasible in (P) and (D) respectively, and cx = y^T b, then x and y are optimal in (P) and (D).
4) If (P) is feasible and unbounded, then (D) is infeasible.
5) If (D) is feasible and unbounded, then (P) is infeasible.
6) If one of the problems is infeasible, then the other problem is (1) infeasible, OR (2) feasible, but with an unbounded objective function.
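A minimal numeric check of the weak duality chain cx ≥ y^T Ax ≥ y^T b, with hand-picked (non-optimal) feasible points for the same made-up data as above:

import numpy as np

A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([3.0, 3.0])
c = np.array([2.0, 2.0])

x = np.array([2.0, 2.0])   # Ax = (6,6) >= b and x >= 0: primal feasible
y = np.array([0.5, 0.5])   # A^T y = (1.5,1.5) <= c and y >= 0: dual feasible

assert c @ x >= y @ A @ x >= y @ b   # 8 >= 6 >= 3
print(c @ x, y @ b)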

STRONG DUALITY THEOREM

Consider the primal-dual pair

(P) Minimize cx st Ax ≥ b, x ≥ 0
(D) Maximize y^T b st A^T y ≤ c^T, y ≥ 0

If either one of (P) or (D) has an optimal solution, then so does the other, and their optimal values are equal, i.e., cx* = (y*)^T b.

PROOF: Assume without loss of generality that the primal has an optimal solution x* and that it is in standard form, so that the dual variables are unrestricted in sign. We know that the optimal solution x* is one of the basic feasible solutions for (P), that x_B = (A_B)^{-1}b, and that the optimality condition holds:

z_j - c_j = c_B(A_B)^{-1}A_j - c_j ≤ 0 for all j, i.e., c_B(A_B)^{-1}A ≤ c.

Let y* = (c_B(A_B)^{-1})^T (i.e., y* is the optimal simplex multiplier vector). Then

(y*)^T A = c_B(A_B)^{-1}A ≤ c.

Thus ((y*)^T A)^T = A^T y* ≤ c^T, so y* is feasible in (D). Moreover, the value of the primal objective at x* is

cx* = c_B x_B = c_B(A_B)^{-1}b,

while the dual objective at y* is

(y*)^T b = c_B(A_B)^{-1}b.

So the objectives of (P) and (D) have the same value at x* and y* respectively. Then Corollary 3 implies they must be optimal in (P) and (D) as well.
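The construction in the proof is easy to carry out numerically: given an optimal basis, solve for y* = (c_B(A_B)^{-1})^T and check dual feasibility and the matching objectives. A sketch using the standard-form LP that appears later in these notes as the dual simplex example (its optimal basis is B = {x_2, x_1}):

import numpy as np

A = np.array([[1.0, 2.0, 1.0, -1.0, 0.0],
              [2.0, -1.0, 3.0, 0.0, -1.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0, 4.0, 0.0, 0.0])

basis = [1, 0]                              # B = {x_2, x_1} (0-indexed)
A_B = A[:, basis]
x_B = np.linalg.solve(A_B, b)               # (A_B)^{-1} b = (0.4, 2.2) >= 0
y_star = np.linalg.solve(A_B.T, c[basis])   # (c_B (A_B)^{-1})^T = (1.6, 0.2)

assert np.all(A.T @ y_star <= c + 1e-9)     # y* is dual feasible
print(c[basis] @ x_B, y_star @ b)           # both 5.6: equal objectives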

LEMMA OF FARKAS

FARKAS' LEMMA: Let A in R^{m x n}, b in R^m, x in R^n, y in R^m. Then the following statements are equivalent to each other:

A. y^T A ≤ 0 implies y^T b ≤ 0.
B. The system Ax = b, x ≥ 0 is feasible.

PROOF: Consider the primal-dual pair (note that this merely uses 0^T for c...):

(P) Minimize 0^T x st Ax = b, x ≥ 0
(D) Maximize y^T b st A^T y ≤ 0, y unrestricted

If statement A is true, then for any y that is feasible in (D), y^T b ≤ 0. Thus the optimal value of (D) can never exceed 0. Since the vector y = 0 is feasible and yields a value of 0 for the objective, it MUST therefore be optimal for (D). This implies that (D) is feasible and bounded; therefore it has an optimal solution, and by the strong duality theorem (P) is also feasible, i.e., Ax = b, x ≥ 0 is feasible. This proves that A implies B.

Now consider the converse, i.e., assume that Ax = b is satisfied for some x ≥ 0. If A^T y ≤ 0 for some arbitrary y, then x^T A^T y ≤ 0 (since x ≥ 0), i.e., (Ax)^T y = b^T y ≤ 0. This proves that B implies A.
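Statement B can be tested computationally with a Phase-1 LP that minimizes the total artificial infeasibility of Ax = b, x ≥ 0; a zero optimum certifies feasibility. A sketch with made-up data:

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([3.0, 3.0])
m, n = A.shape

# min (sum of artificials a)  st  Ax + Ia = b, x >= 0, a >= 0
res = linprog(np.r_[np.zeros(n), np.ones(m)],
              A_eq=np.c_[A, np.eye(m)], b_eq=b,
              bounds=[(0, None)] * (n + m))
print("Ax = b, x >= 0 feasible:", abs(res.fun) < 1e-9)   # True here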

Graphical Interpretation of Farkas' Lemma

Suppose H_j is the hyperplane through the origin that is orthogonal to the column A_j. Then

y^T A ≤ 0 if, and only if, y lies in the intersection of the H_j^-,

where H_j^- is the closed halfspace on the side of H_j that does not contain A_j.

[Figure: hyperplanes H_1, H_2, H_3 orthogonal to the columns A_1, A_2, A_3; the vector b lies inside the cone generated by the columns, while b' lies outside it.]

The statement "y^T A ≤ 0 implies y^T b ≤ 0" simply implies that the intersection of the H_j^- is contained in H_b^-.

Note that b makes an angle ≥ 90° with all y in the intersection of the H_j^-, and can be expressed as a nonnegative linear combination of the columns of A. On the other hand, b' does not make an angle ≥ 90° with all y in the intersection of the H_j^-, and cannot be expressed as a nonnegative linear combination of the columns of A.

GORDAN'S THEOREM OF THE ALTERNATIVE

This is an alternative version/variation of Farkas' Lemma... Exactly one of the following systems is feasible:

I. Ax = b, x ≥ 0
II. A^T y ≤ 0, b^T y > 0

PROOF: Suppose that x̄ satisfies I and ȳ satisfies II. Then, since Ax̄ = b,

ȳ^T A x̄ = ȳ^T b > 0. (1)

Similarly, since A^T ȳ ≤ 0 and x̄ ≥ 0,

x̄^T (A^T ȳ) = ȳ^T (A x̄) ≤ 0. (2)

Obviously, (1) and (2) contradict each other; hence Systems I and II cannot both be feasible.

Now suppose that both I and II are infeasible. Then the infeasibility of II implies that y^T b ≤ 0 for all y such that A^T y ≤ 0. Then by Farkas' Lemma, I has to be feasible, which contradicts our assumption. Hence both I and II cannot be infeasible.

COMPLEMENTARY SLACKNESS THEOREM

Vectors x̄ and ȳ which are feasible in (P) and (D) respectively are optimal in (P) and (D) if, and only if, whenever a constraint in one problem is inactive, the corresponding variable in the other problem is zero, and whenever a variable in one problem is nonzero, the corresponding constraint in the other problem is active.

PROOF: Introducing surpluses and slacks, we have

(P) Minimize cx st Ax - u = b, x, u ≥ 0
(D) Maximize y^T b st A^T y + v = c^T, y, v ≥ 0

Consider the vectors [x̄ ū] and [ȳ v̄] feasible in (P) and (D) respectively. Then

cx̄ - ȳ^T b = [A^T ȳ + v̄]^T x̄ - ȳ^T [Ax̄ - ū] = ȳ^T A x̄ + v̄^T x̄ - ȳ^T A x̄ + ȳ^T ū = v̄^T x̄ + ȳ^T ū. (1)

First, suppose that x̄ and ȳ are also optimal. Then cx̄ = ȳ^T b, so from (1) v̄^T x̄ + ȳ^T ū = 0, i.e., Σ_j v̄_j x̄_j + Σ_i ȳ_i ū_i = 0. Since every variable in every term in the above summation is restricted to be nonnegative, each and every term HAS to be equal to zero. Thus

either x̄_j = 0 or v̄_j = 0 (or both = 0) for all j;
either ȳ_i = 0 or ū_i = 0 (or both = 0) for all i.

This proves the first part of the theorem. Conversely, suppose the above conditions hold, so that v̄^T x̄ + ȳ^T ū = 0. From (1), we then have cx̄ = ȳ^T b, and since x̄ and ȳ are feasible in their respective problems, they must also be optimal. This proves the second part of the theorem.

KARUSH-KUHN-TUCKER LP OPTIMALITY CONDITIONS

Given a linear program in standard form, x is an optimal solution to the problem if, and only if, there exist vectors y and v such that

1. Ax = b, x ≥ 0 (primal feasibility)
2. A^T y + v = c^T, v ≥ 0 (dual feasibility)
3. v^T x = 0 (complementary slackness)

Note that in this case, y is an optimal solution to the dual (follows from the complementary slackness theorem).

The (primal) simplex method we have seen thus far maintains (1) and (3) and seeks to attain (2). We will later look at the dual simplex method, which maintains (2) and (3) and seeks to attain (1).
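The three conditions translate directly into a numerical optimality certificate. A sketch, reusing the standard-form example from the code above (the tolerances are arbitrary small numbers):

import numpy as np

A = np.array([[1.0, 2.0, 1.0, -1.0, 0.0],
              [2.0, -1.0, 3.0, 0.0, -1.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0, 4.0, 0.0, 0.0])

x = np.array([2.2, 0.4, 0.0, 0.0, 0.0])   # candidate primal solution
y = np.array([1.6, 0.2])                  # candidate dual solution
v = c - A.T @ y                           # dual slacks

optimal = (np.allclose(A @ x, b) and np.all(x >= 0)   # 1. primal feasibility
           and np.all(v >= -1e-9)                     # 2. dual feasibility
           and abs(v @ x) < 1e-9)                     # 3. compl. slackness
print("x is optimal:", optimal)                       # True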

A. OBJECTIVE FUNCTION SENSITIVITY ANALYSIS

Consider the LP: Minimize cx, st Ax = b, x ≥ 0. Suppose the coefficient of x_k in the objective function changes from c_k to c_k' = c_k + δ_k. Under what restrictions on δ_k will the optimal basis remain unchanged?

CASE 1: x_k is NOT in the optimal basis, i.e., k in N. Then x_k will remain nonbasic as long as

z_k - c_k' = πA_k - (c_k + δ_k) ≤ 0,

where π = c_B(A_B)^{-1} is the optimal simplex multiplier vector (which is unaffected by the change in c_k, because k is not in B, and hence c_k does not form part of c_B). Let πA_k = c_B(A_B)^{-1}A_k = z_k. Then the basis is unchanged as long as z_k ≤ c_k + δ_k = c_k'. In other words, the range of values for c_k' for which the basis is unchanged is z_k ≤ c_k' < ∞.

CASE 2: x_k is IN the optimal basis, i.e., k in B. This case is a little more complicated. Suppose that x_k is the p-th basic variable. For B to remain unchanged we need

z_j' - c_j = π'A_j - c_j ≤ 0 for all nonbasic j,

where π' is the (modified) simplex multiplier vector given by π' = c_B'(A_B)^{-1}, with

c_B' = c_B + [0 ... δ_k ... 0]    (δ_k in the p-th entry).

Then

z_j' - c_j = c_B(A_B)^{-1}A_j - c_j + [0 ... δ_k ... 0](A_B)^{-1}A_j = (z_j - c_j) + δ_k y_pj,

where y_pj is the p-th entry of the updated version of column A_j (= y_j = (A_B)^{-1}A_j). Thus the basis remains optimal as long as, for all nonbasic j:

δ_k ≤ -(z_j - c_j)/y_pj for y_pj > 0
δ_k ≥ -(z_j - c_j)/y_pj for y_pj < 0

This leads to

maximum_{j: y_pj < 0} {-(z_j - c_j)/y_pj} ≤ δ_k ≤ minimum_{j: y_pj > 0} {-(z_j - c_j)/y_pj}.
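The two bounds can be computed mechanically. A minimal sketch in numpy, assuming a minimization LP in standard form with the data and optimal basis supplied by the caller (here: the standard-form example used elsewhere in these notes):

import numpy as np

def cost_range_basic(A, c, basis, k):
    """Allowable (decrease, increase) bounds on delta_k for basic variable k."""
    B_inv = np.linalg.inv(A[:, basis])
    p = basis.index(k)                        # x_k is the p-th basic variable
    lo, hi = -np.inf, np.inf
    for j in range(A.shape[1]):
        if j in basis:
            continue
        y_j = B_inv @ A[:, j]                 # updated column (A_B)^{-1} A_j
        zc = c[basis] @ y_j - c[j]            # z_j - c_j  (<= 0 at optimality)
        if y_j[p] > 1e-12:                    # delta_k <= -(z_j - c_j)/y_pj
            hi = min(hi, -zc / y_j[p])
        elif y_j[p] < -1e-12:                 # delta_k >= -(z_j - c_j)/y_pj
            lo = max(lo, -zc / y_j[p])
    return lo, hi

A = np.array([[1.0, 2.0, 1.0, -1.0, 0.0],
              [2.0, -1.0, 3.0, 0.0, -1.0]])
c = np.array([2.0, 3.0, 4.0, 0.0, 0.0])
print(cost_range_basic(A, c, [1, 0], 0))      # x_1: about (-0.5, 1.286)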

B. RIGHT HAND SIDE

Once again, consider the LP: Minimize cx, st Ax = b, x ≥ 0. Suppose the RHS of the i-th constraint changes from b_i to b_i' = b_i + δ_i. Under what restrictions on δ_i will the optimal basis remain unchanged?

Note that the optimality conditions are NOT affected, since z_k - c_k = c_B(A_B)^{-1}A_k - c_k does not involve the vector b in any way and is therefore unaffected for all k. Feasibility of the current basis may be affected though, since x_B = (A_B)^{-1}b changes. In what range of values for b_i' does the current basis remain feasible?

We have

x_B' = (A_B)^{-1}b' = (A_B)^{-1}(b + δ_i e_i) = (A_B)^{-1}b + γ_i δ_i = x_B + γ_i δ_i,

where γ_i is the i-th column of (A_B)^{-1} and e_i is the i-th unit vector. For feasibility of the current basis, we require

x_B' = x_B + γ_i δ_i ≥ 0, i.e., x_Bj + γ_ji δ_i ≥ 0 for j = 1, 2, ..., m:

δ_i ≥ -x_Bj/γ_ji for γ_ji > 0
δ_i ≤ -x_Bj/γ_ji for γ_ji < 0

maximum_{j: γ_ji > 0} {-x_Bj/γ_ji} ≤ δ_i ≤ minimum_{j: γ_ji < 0} {-x_Bj/γ_ji}.
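The same recipe in code; again a sketch for a standard-form LP with assumed inputs:

import numpy as np

def rhs_range(A, b, basis, i):
    """Allowable (decrease, increase) bounds on delta_i for feasibility."""
    B_inv = np.linalg.inv(A[:, basis])
    x_B = B_inv @ b
    gamma = B_inv[:, i]                   # i-th column of (A_B)^{-1}
    lo, hi = -np.inf, np.inf
    for x_Bj, g in zip(x_B, gamma):
        if g > 1e-12:                     # delta_i >= -x_Bj / gamma_ji
            lo = max(lo, -x_Bj / g)
        elif g < -1e-12:                  # delta_i <= -x_Bj / gamma_ji
            hi = min(hi, -x_Bj / g)
    return lo, hi

A = np.array([[1.0, 2.0, 1.0, -1.0, 0.0],
              [2.0, -1.0, 3.0, 0.0, -1.0]])
b = np.array([3.0, 4.0])
print(rhs_range(A, b, [1, 0], 0))         # b_1: (-1.0, inf)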

C. ADDING A NEW VARIABLE

Suppose we add a new variable x_{n+1} with cost c_{n+1} and column A_{n+1}. Without resolving the problem, we can determine whether x_{n+1} will be attractive to enter the basis. First we find

z_{n+1} - c_{n+1} = c_B(A_B)^{-1}A_{n+1} - c_{n+1}.

If this quantity is nonpositive (for a minimization), then x_{n+1} = 0 at the optimum and the current optimum solution is unchanged. On the other hand, if z_{n+1} - c_{n+1} > 0, then x_{n+1} is introduced into the basis and we continue until we find the new optimal solution.

D. ADDING A NEW CONSTRAINT

If the current optimal solution is feasible in the new constraint, then the optimum solution is unchanged. On the other hand, if the current optimum is infeasible in the new constraint, then the feasible region with the new constraint cuts out the current optimum solution (and other parts of the original feasible region). A new solution may be found from the current basis by using the dual simplex method; we will study this later on...
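Pricing a candidate column only needs the current simplex multipliers π. A sketch; the new column A_{n+1} and cost c_{n+1} below are hypothetical values added to the standard-form example used earlier:

import numpy as np

A = np.array([[1.0, 2.0, 1.0, -1.0, 0.0],
              [2.0, -1.0, 3.0, 0.0, -1.0]])
c = np.array([2.0, 3.0, 4.0, 0.0, 0.0])
basis = [1, 0]
pi = np.linalg.solve(A[:, basis].T, c[basis])    # multipliers (1.6, 0.2)

A_new, c_new = np.array([1.0, 1.0]), 2.5         # hypothetical new column/cost
reduced = pi @ A_new - c_new                     # z_{n+1} - c_{n+1} = -0.7
print("new variable attractive:", reduced > 0)   # False: optimum unchanged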

EXAMPLE: Consider the LP

MAX Z = [ ... ]x
st [ ... ]x = [ ... ], x ≥ 0.

At the optimal iteration, we have Z* = 32 with B = {2,5,6,3}:

c_B = [ ... ];  x_B = [ ... ];  A_B = [ ... ];  (A_B)^{-1} = [ ... ].

The optimal simplex multiplier vector (dual solution) is

π = c_B(A_B)^{-1} = [-29/2  19/2  33  4].

1. Objective Coefficient Ranging

Case 1: k not in B, e.g., k = 1 (x_1 is not basic). For a Max problem, the basis is unchanged as long as z_k - c_k' ≥ 0. Here k = 1 and z_1 = πA_1 = [-29/2 19/2 33 4]A_1 = -3, i.e., the basis is unchanged as long as (-3) - c_1' ≥ 0, i.e., -∞ < c_1' ≤ -3.

Thus, if we rewrite c_1' as c_1' = c_1 + δ_1 = -3 + δ_1, then

Max. allowable increase = 0 and Max. allowable decrease = ∞.

Case 2: k in B, e.g., k = 3 (x_3 is basic). For a MAX problem, the basis is unchanged as long as z_j' - c_j ≥ 0 for all nonbasic j. Noting that x_3 is the 4th basic variable (p = 4),

c_B' = c_B + [0  0  0  δ_k]    (δ_k in the p-th entry)

and π' = c_B'(A_B)^{-1}, so that

z_j' - c_j = (z_j - c_j) + δ_k y_4j,

where y_4j is the 4th entry of the updated version of column A_j (y_j = (A_B)^{-1}A_j). Since k = 3, the basis thus remains optimal as long as, for all j ≠ 3:

δ_3 ≥ -(z_j - c_j)/y_4j for y_4j > 0
δ_3 ≤ -(z_j - c_j)/y_4j for y_4j < 0

i.e.,

maximum_{j: y_4j > 0} {-(z_j - c_j)/y_4j} ≤ δ_3 ≤ minimum_{j: y_4j < 0} {-(z_j - c_j)/y_4j}.

We therefore first need y_j = (A_B)^{-1}A_j and z_j - c_j for all j ≠ 3 (in particular, z_7 - c_7 = 29/2 and z_8 - c_8 = 19/2), together with the 4th (p-th) entry of each y_j.

[Table: z_j - c_j and y_j for each nonbasic j; row p = 4 of each y_j is used below.]

maximum_{j: y_4j > 0} {-(z_j - c_j)/y_4j} = -29/5
minimum_{j: y_4j < 0} {-(z_j - c_j)/y_4j} = 1

so the basis is unchanged as long as -29/5 ≤ δ_3 ≤ 1, i.e.,

Max. allowable increase = 1 and Max. allowable decrease = 29/5.

2. RHS Ranging: The basis is unchanged as long as

maximum_{j: γ_ji > 0} {-x_Bj/γ_ji} ≤ δ_i ≤ minimum_{j: γ_ji < 0} {-x_Bj/γ_ji}.

Recall that γ_i is the i-th column of (A_B)^{-1}. For example, consider i = 1, and suppose we have a change of δ_1 in the RHS, i.e., b_1' = b_1 + δ_1 = 4 + δ_1, with γ_1 = [ ... ]. We have

maximum_{j: γ_j1 > 0} {-x_Bj/γ_j1} = -∞ (no binding lower bound)
minimum_{j: γ_j1 < 0} {-x_Bj/γ_j1} = minimum {-2/-2, -5/-2.5, -4/-2.5} = 1

so the basis is unchanged as long as -∞ < δ_1 ≤ 1, i.e.,

Max. allowable increase = 1 and Max. allowable decrease = ∞.

SIMULTANEOUS CHANGES IN PARAMETERS: THE 100% RULE

Consider the case where MORE THAN ONE element of c or b is changed simultaneously. Under what conditions does the optimal basis remain unchanged? Unfortunately, it is NOT possible to state that the basis remains unchanged if each change is within its INDIVIDUAL limit. However, the use of the 100% Rule provides us with a conservative bound.

1. OBJECTIVE COEFFICIENTS

CASE 1: All coefficients that are changed correspond to variables that are NOT in the optimal basis. Since π = c_B(A_B)^{-1} is unaffected, each change is independent of the others, and thus the basis is unchanged as long as each change is within its INDIVIDUAL bounds.

CASE 2: At least one of the coefficients that are changed corresponds to a basic variable. Suppose we let

I_j = maximum INDIVIDUAL INCREASE possible in c_j,
D_j = maximum INDIVIDUAL DECREASE possible in c_j,

for the basis to remain unchanged; these values are as computed in the previous section.

Let us define

r_j = δ_j/I_j if δ_j > 0
r_j = -δ_j/D_j if δ_j < 0

to be the fraction of the maximum individual change that can take place in the coefficient for x_j (r_j = 0 if δ_j = 0). Then the 100% Rule states that if the changes δ_j are such that Σ_j r_j ≤ 1, then the basis will remain unchanged with the new set of cost coefficients.

2. RIGHT HAND SIDE

In a similar fashion, let us define

I_j = maximum INDIVIDUAL INCREASE possible in b_j,
D_j = maximum INDIVIDUAL DECREASE possible in b_j,

for the basis to remain unchanged; these values, once again, are as computed in the previous section. If we define

r_j = δ_j/I_j if δ_j > 0
r_j = -δ_j/D_j if δ_j < 0

to be the fraction of the maximum individual change that can take place in b_j, then the 100% Rule states that if the changes δ_j are such that Σ_j r_j ≤ 1, then the basis will remain unchanged with the new set of RHS values.

NOTE: In both cases, if Σ_j r_j exceeds 1, the basis MAY OR MAY NOT change.

Consider our example once again. Suppose the current cost vector is changed from c = [ ... ] to c' = [ ... ]. The r_j values are

r_1 = 2/∞ = 0; r_2 = 2/19; r_3 = 1/5.8; r_4 = 0; r_5 = 0; r_6 = 4/∞ = 0; r_7 = 6/14.5; r_8 = 2/9.5,

so that Σ_j r_j yields a value of 0.92. Since this value is less than 1, the 100% Rule states that the same basis remains optimal with the basic variables having the same values (although, of course, the value of the objective function will be different).

Next, consider the RHS vector b = [4 6 1 0]^T, and suppose this is changed to b' = [ ... ]^T. Then

r_1 = 0.5/1 = 0.5; r_2 = 1.5/2 = 0.75; r_3 = 0; r_4 = 2/∞ = 0,

which yields Σ_i r_i = 1.25. Since this value exceeds 1, by the 100% Rule the basis is no longer guaranteed to be optimal.
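The rule itself is a one-liner. A sketch; the usage line reproduces the RHS example above, with the unused decrease limits D_j entered as ∞ placeholders (an assumption, since the notes only report the increase limits):

import numpy as np

def hundred_percent_rule(delta, I, D):
    """True if the sum of the fractions r_j is at most 1 (basis guaranteed)."""
    r = np.where(delta >= 0,
                 delta / I,        # fraction of allowable increase used
                 -delta / D)       # fraction of allowable decrease used
    return r.sum() <= 1.0

# RHS example: r = 0.5 + 0.75 + 0 + 0 = 1.25 > 1, so no guarantee
print(hundred_percent_rule(np.array([0.5, 1.5, 0.0, 2.0]),
                           np.array([1.0, 2.0, np.inf, np.inf]),
                           np.array([np.inf] * 4)))       # False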

PARAMETRIC PROGRAMMING

Used to investigate how sensitive the optimal solution is to continuous and simultaneous changes in the RHS vector (e.g., resources) or in the cost parameters (e.g., profit margins). Usually we represent the cost vector c or the RHS vector b as parameterized functions c(θ) or b(θ) of some parameter θ (such as time, interest rate, some physical dimension, etc.). Note that b(θ) or c(θ) need NOT be linear. Thus, if we have constraints corresponding to three different resources, the functions may look like the ones shown below:

[Figure: curves b_1(θ), b_2(θ), b_3(θ) plotted against θ on a common b_i axis.]

The general idea is to find x* at θ = 0, and find the range of values for θ (say (0, θ_1]) in which the optimal basis is the same. Thus for θ > θ_1 the current basis becomes suboptimal or infeasible. We now reoptimize and find the new optimal basis along with the range of values (say (θ_1, θ_2]) in which it is valid, etc., until we reach a point beyond which the basis never changes any more or always stays infeasible.

A. OBJECTIVE FUNCTION

c = c(θ) = [c_1(θ) c_2(θ) ... c_n(θ)]

Suppose that B_i is the optimal basis at θ = θ_i, with corresponding basis matrix A_Bi. We want to find the value of θ at which this basis is no longer optimal. Let x_Bi = (A_Bi)^{-1}b be the optimal solution at θ_i and c_B(θ) be the corresponding cost vector. Once again, as θ changes, x_Bi, and hence feasibility, is unaffected. However (assuming minimization), the solution stays OPTIMAL only if, for all variables,

z_j - c_j = c_B(θ)(A_B)^{-1}A_j - c_j ≤ 0 (≥ 0 if maximizing).

Consider our example. At the optimal iteration, for θ = 0:

B = {2,5,6,3};  x_B = [ ... ];  A_B = [ ... ];  (A_B)^{-1} = [ ... ].

Suppose that the cost vector we used earlier, namely c = [ ... ], is parameterized as

c(θ) = [-3-2θ  3-2θ  2-4θ  -2-4θ  -1-5θ  4-2θ  0  0].

Thus, c_B(θ) = [3-2θ  -1-5θ  4-2θ  2-4θ], and the optimal simplex multiplier vector π(θ) = c_B(θ)(A_B)^{-1} is given by

π(θ) = [-29/2+[...]θ  19/2+[...]θ  33-54θ  4-69θ].

We now find z_j = π(θ)A_j and z_j - c_j for each j:

j    c_j        z_j          z_j - c_j
1    -3-2θ      -3+2θ        4θ
2    3-2θ       3-2θ         0
3    2-4θ       2-4θ         0
4    -2-4θ      -2+4θ        8θ
5    -1-5θ      -1-5θ        0
6    4-2θ       4-2θ         0
7    0          29/2-26θ     29/2-26θ
8    0          19/2-17θ     19/2-17θ

Thus the basis remains unchanged as long as z_j - c_j ≥ 0 for all j:

4θ ≥ 0, 8θ ≥ 0, 29/2 - 26θ ≥ 0, 19/2 - 17θ ≥ 0, i.e., 0 ≤ θ ≤ 0.55769.

Thus θ_1 = 0.55769, and when θ exceeds this value, x_7 has a negative reduced cost and therefore enters the basis. We can then reoptimize the problem and find a new basis, and once again find θ_2 such that this NEW basis stays optimal for θ from θ_1 to θ_2, etc., etc.

B. RIGHT HAND SIDE

In an analogous fashion, the current basis remains optimal as long as x_Bi = (A_Bi)^{-1}b(θ) ≥ 0. Given θ_i, we can then find the θ_(i+1) up to which the current basic solution stays feasible. In our example, suppose

b(θ) = [4+2θ  6+θ  1-2θ  0+θ];  (A_B)^{-1}b(θ) = [ ... ].

Thus the basis is unchanged as long as (A_B)^{-1}b(θ) ≥ 0, i.e., 0 ≤ θ ≤ 1/3, after which x_5 becomes negative and we need to use the dual simplex method to re-attain feasibility.
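A brute-force companion to the algebra above is to re-solve the LP on a grid of θ values and watch where the optimal support (the set of positive variables) changes. A sketch using the standard-form example from the dual simplex section below, with a made-up linear b(θ); the support change it reports corresponds to the breakpoint where (A_B)^{-1}b(θ) first leaves the nonnegative orthant:

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 1.0, -1.0, 0.0],
              [2.0, -1.0, 3.0, 0.0, -1.0]])
c = np.array([2.0, 3.0, 4.0, 0.0, 0.0])
b = lambda t: np.array([3.0 + t, 4.0 - 4.0 * t])   # assumed b(theta)

prev = None
for t in np.linspace(0.0, 2.0, 21):
    res = linprog(c, A_eq=A, b_eq=b(t), bounds=[(0, None)] * 5)
    support = tuple(np.flatnonzero(res.x > 1e-8))
    if support != prev:
        print(f"theta = {t:.1f}: support {support}")   # change near theta = 1.6
        prev = support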

DUAL SIMPLEX METHOD

The (primal) simplex method solves

Min cx, st Ax = b, x ≥ 0.

It starts with a (primal) feasible basis B, i.e., x_B = (A_B)^{-1}b ≥ 0, and while maintaining complementary slackness, works towards satisfying the primal optimality condition, which is the same as DUAL FEASIBILITY:

[c_B(A_B)^{-1}]A_j ≤ c_j, i.e., πA_j - c_j ≤ 0, i.e., πA_j ≤ c_j for all j.

The DUAL SIMPLEX method does the exact opposite. It starts with a DUAL FEASIBLE basis satisfying the dual constraints A^T π^T ≤ c^T, i.e., πA ≤ c. This is equivalent (as seen above) to the PRIMAL OPTIMALITY conditions, namely [c_B(A_B)^{-1}]A_j ≤ c_j. While maintaining complementary slackness, the method then pivots to attain DUAL OPTIMALITY, which is the same as PRIMAL FEASIBILITY, namely (A_B)^{-1}b ≥ 0.

Thus, the PRIMAL SIMPLEX starts feasible but suboptimal, and finishes up feasible and optimal, through a sequence of primal feasible, suboptimal points. The DUAL SIMPLEX starts infeasible and "superoptimal", and ends feasible and optimal, through a sequence of primal infeasible but superoptimal points.

EXAMPLE:

Minimize Z = 2x_1 + 3x_2 + 4x_3
st  x_1 + 2x_2 + x_3 ≥ 3        x_1 + 2x_2 + x_3 - x_4 = 3
    2x_1 - x_2 + 3x_3 ≥ 4       2x_1 - x_2 + 3x_3 - x_5 = 4
all x_j ≥ 0.

After introducing surplus variables x_4 and x_5, consider the basis B = {4,5}. We get the BASIC but INFEASIBLE solution

A_B = [-1 0; 0 -1];  (A_B)^{-1} = [-1 0; 0 -1];
x_B = (x_4, x_5) = (A_B)^{-1}b = (-3, -4);
π = c_B(A_B)^{-1} = [0 0];  z_j - c_j = πA_j - c_j = (-2, -3, -4) for j = 1, 2, 3.

Notice that ALL reduced costs are nonpositive, i.e., the optimality criterion is met for a minimization. We thus have a solution that is superoptimal but infeasible. How do we pivot so that we MAINTAIN the satisfaction of the optimality criterion, yet reduce the infeasibility?

The leaving variable may be arbitrarily selected corresponding to any basic variable that is negative in value (say, the MOST negative one). Let us say that this corresponds to the s-th basic variable (i.e., x_Bs = b̄_s < 0).

Refer to problem (P1) from when we developed the primal simplex method. Its constraints stated that, for a given basis B,

x_B + Σ_{j in N} y_j x_j = b̄, where y_j = (A_B)^{-1}A_j and b̄ = (A_B)^{-1}b.

Looking at row s of the above system, constraint no. s can be written in terms of the current basis B as

x_Bs + Σ_{j in N} y_sj x_j = b̄_s < 0.

Suppose we wish to bring variable k in N into the basis to replace x_Bs while keeping the rest of N unchanged, and consider

x_Bs = b̄_s - y_sk x_k.

First, it is clear that if we want to remove x_Bs from the basis at value 0, we must increase it from its current (negative) value of b̄_s. Since we also plan to increase the entering variable x_k from its current (nonbasic) value of 0, it follows that we must pick a variable x_k for which y_sk < 0. The maximum allowable increase in the value of x_k would be b̄_s / y_sk, at which point x_Bs = 0 and exits the basis.

In addition, we must also choose x_k in such a way that the primal optimality (dual feasibility) conditions continue to be satisfied when we bring it into the basis, i.e., we want the new reduced costs for the nonbasic variables (say, z_j' - c_j) to remain nonpositive. These new values are given by

z_j' - c_j = (z_j - c_j) - (y_sj/y_sk)(z_k - c_k),

and so we want (z_j - c_j) - (y_sj/y_sk)(z_k - c_k) ≤ 0 (if j = k, then of course this is zero). Thus the smallest ratio (z_j - c_j)/y_sj with y_sj < 0 determines which reduced cost first goes to zero. This ratio thus determines our entering variable via

k = argmin { (z_j - c_j)/y_sj : j in N, y_sj < 0 }.

We now have a new basis with x_k replacing x_Bs at position s in the basis, and we continue the process. Note that if y_sj ≥ 0 for all j, then the dual is unbounded and the primal is infeasible.

Back to our example:

x_B = (x_4, x_5) = (-3, -4);  π = [0 0];  z_j - c_j = (-2, -3, -4) for j = 1, 2, 3.

Let us choose s = 2 (corresponding to x_5) as the leaving variable. Then the updated columns for j in N are given by y_j = (A_B)^{-1}A_j:

y_1 = (-1, -2),  y_2 = (-2, 1),  y_3 = (-1, -3).

Then

min { (z_j - c_j)/y_2j : y_2j < 0 } = min(-2/-2, -, -4/-3) = 1,

corresponding to the first member of N (x_k = x_1). So our new basis will be B = {4,1} and N = {5,2,3}, with

A_B = [-1 1; 0 2];  (A_B)^{-1} = [-1 0.5; 0 0.5];
x_B = (x_4, x_1) = (A_B)^{-1}b = (-1, 2);  Z = c_B x_B = 4.

We now recompute π = c_B(A_B)^{-1} = [0 2](A_B)^{-1} = [0 1], and

z_5 - c_5 = [0 1](0, -1)^T - 0 = -1
z_2 - c_2 = [0 1](2, -1)^T - 3 = -4
z_3 - c_3 = [0 1](1, 3)^T - 4 = -1

Let us choose s = 1 (corresponding to x_4) as the leaving variable; this is the only option we have. Then the updated columns for j in N are given by y_j = (A_B)^{-1}A_j:

y_5 = (-0.5, -0.5),  y_2 = (-2.5, -0.5),  y_3 = (0.5, 1.5).

Then

min { (z_j - c_j)/y_1j : y_1j < 0 } = min(-1/-0.5, -4/-2.5, -) = 1.6,

corresponding to the second member of N (x_k = x_2). So our new basis will be B = {2,1} and N = {5,4,3}, with

A_B = [2 1; -1 2];  (A_B)^{-1} = [0.4 -0.2; 0.2 0.4];
x_B = (x_2, x_1) = (A_B)^{-1}b = (0.4, 2.2);  Z = c_B x_B = 5.6.

At this point all of our variables are nonnegative and we have preserved the optimality conditions (CHECK AND VERIFY). Therefore this is the optimal solution to the original LP. Note that for this particular instance we did not need artificial variables, etc., and were able to solve the problem in 2 iterations!
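The whole iteration above fits in a few lines of numpy. A sketch of one dual simplex pivot (leaving variable = the most negative basic variable, entering variable chosen by the ratio test derived earlier), run on the example's data:

import numpy as np

def dual_simplex_pivot(A, b, c, basis):
    """One dual simplex pivot; returns the new basis, or None when primal feasible."""
    B_inv = np.linalg.inv(A[:, basis])
    x_B = B_inv @ b
    if np.all(x_B >= -1e-9):
        return None                            # primal feasible: optimal
    s = int(np.argmin(x_B))                    # leaving: most negative basic
    best, k = np.inf, None
    for j in range(A.shape[1]):
        if j in basis:
            continue
        y_j = B_inv @ A[:, j]
        if y_j[s] < -1e-12:                    # only columns with y_sj < 0
            ratio = (c[basis] @ y_j - c[j]) / y_j[s]
            if ratio < best:
                best, k = ratio, j
    if k is None:
        raise ValueError("dual unbounded, so the primal is infeasible")
    new_basis = basis.copy()
    new_basis[s] = k
    return new_basis

A = np.array([[1.0, 2.0, 1.0, -1.0, 0.0],
              [2.0, -1.0, 3.0, 0.0, -1.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0, 4.0, 0.0, 0.0])

basis = [3, 4]                                 # start with B = {x_4, x_5}
while (nxt := dual_simplex_pivot(A, b, c, basis)) is not None:
    basis = nxt
print(basis)                                   # [1, 0], i.e., B = {x_2, x_1}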

QUESTION: Why is this called the DUAL SIMPLEX method? Consider the primal-dual pair for our example:

Minimize Z = 2x_1 + 3x_2 + 4x_3
st  x_1 + 2x_2 + x_3 ≥ 3        x_1 + 2x_2 + x_3 - x_4 = 3
    2x_1 - x_2 + 3x_3 ≥ 4       2x_1 - x_2 + 3x_3 - x_5 = 4
all x_j ≥ 0.

Maximize W = 3π_1 + 4π_2
st  π_1 + 2π_2 ≤ 2        π_1 + 2π_2 + π_3 = 2
    2π_1 - π_2 ≤ 3        2π_1 - π_2 + π_4 = 3
    π_1 + 3π_2 ≤ 4        π_1 + 3π_2 + π_5 = 4
all π_i ≥ 0.

The correspondence of π and x may be summarized as

π_1 with x_4;  π_2 with x_5;  x_1 with π_3;  x_2 with π_4;  x_3 with π_5

(the decision variable in one problem corresponds to the slack/surplus in the other).

At Iteration 1, we had the basic, but infeasible, solution x_B = (x_4, x_5) = (-3, -4) with Z = 0. Corresponding to this, the simplex multiplier vector was π = [0 0]. Plugging this into the dual we have π_3 = 2, π_4 = 3, π_5 = 4. This corresponds to a BASIC FEASIBLE solution IN THE DUAL, namely (π_1, ..., π_5) = (0, 0, 2, 3, 4), with W = 0.

At Iteration 2, we had another basic infeasible solution x_B = (x_4, x_1) = (-1, 2) with Z = 4. Corresponding to this, the simplex multiplier vector was π = [0 1]. Plugging this into the dual we have π_3 = 0, π_4 = 4, π_5 = 1. This corresponds to the BASIC FEASIBLE solution IN THE DUAL, namely (0, 1, 0, 4, 1), with W = 4.

Finally, at Iteration 3, we had the basic feasible solution x_B = (x_2, x_1) = (0.4, 2.2), with Z = 5.6. Corresponding to this, the simplex multiplier vector is π = [1.6 0.2]. Plugging this into the dual we have π_3 = 0, π_4 = 0, π_5 = 1.8. This corresponds to the BASIC FEASIBLE solution IN THE DUAL, namely (1.6, 0.2, 0, 0, 1.8), with W = 5.6.

Notice that the dual simplex method generates a sequence of improving BASIC FEASIBLE SOLUTIONS IN THE DUAL! Hence the name DUAL SIMPLEX METHOD. At the optimum point we have an optimal BFS for the primal and an optimal BFS for the dual, both yielding the same value for the dual and primal objectives.

In general, corresponding to any basic (not necessarily feasible) solution to one problem, there exists a complementary basic solution to the other (given by the current simplex multiplier vector π = c_B(A_B)^{-1}). Furthermore, both these complementary solutions yield the same value for their respective objectives.

In the PRIMAL simplex method we move through a sequence of improving BFSs in the primal. The complementary basic solutions in the dual are all infeasible (in the dual) until the last (optimal) one in the sequence. Thus the primal seeks optimality while the dual seeks feasibility.

In the DUAL simplex method we move through a sequence of improving BFSs in the DUAL. The complementary basic solutions in the primal are all infeasible (in the primal) until the last (optimal) one in the sequence. Thus the dual seeks optimality while the primal seeks feasibility.

If the primal has n variables and m constraints, and we define x_{n+1}, x_{n+2}, ..., x_{n+m} as the slacks/surpluses in the primal, and π_{m+1}, π_{m+2}, ..., π_{m+n} as the slacks/surpluses in the dual, then complementary slackness is satisfied at each iteration in both methods. That is, for the pair of complementary basic solutions x and π,

x_{n+i} π_i = 0 for i = 1, 2, ..., m, and π_{m+j} x_j = 0 for j = 1, 2, ..., n.

THE PRIMAL-DUAL METHOD

We now briefly mention the Primal-Dual method, which is similar to the dual simplex method in that it starts with a dual feasible solution and tries to find a complementary primal solution that is feasible (in the primal). The main difference is that the dual feasible solutions NEED NOT BE BASIC.

Suppose at the current dual feasible solution π we let the set J index all DUAL constraints that are active. Thus J = {j: πA_j = c_j}, and J is a subset of {1, 2, ..., n}. Then complementary slackness tells us that the only primal variables that can be positive are those that correspond to active dual constraints, i.e., those with indices in J. We now try to attain primal feasibility using ONLY the x_j with j in J, by solving the following RESTRICTED PRIMAL problem:

Min ξ = Σ_i A_i
st Σ_{j in J} a_ij x_j + A_i = b_i, i = 1, 2, ..., m
x_j ≥ 0 (j in J), A_i ≥ 0.

Note that this is like a Phase 1 problem where A_i is the artificial variable corresponding to Constraint i.

If the above problem has an optimal value of ξ = 0, then all the A_i values are equal to zero, and we have a primal feasible vector x, with x_j = 0 for j not in J and x_j obtained from above for j in J. Furthermore, for this vector and the dual feasible vector π, complementary slackness holds: if j is in J, then dual constraint j is tight, so that π_{m+j} = 0; if j is not in J, then x_j = 0. So in either case, x_j π_{m+j} = 0. Then by the complementary slackness theorem, x and π are optimal in (P) and (D). STOP.

Now, if the optimal value of the RESTRICTED PRIMAL is greater than 0, then primal feasibility is not attainable with the current set J, and we therefore need a new dual feasible solution that will admit a new variable to set J, in such a way that ξ is reduced.

Consider now the restricted primal problem. Let ω = [ω_1 ω_2 ... ω_m] be the simplex multiplier vector corresponding to its optimal iteration. Since we are at its optimal iteration, all reduced costs are nonpositive:

for the artificials A_i: ω_i - 1 ≤ 0, i.e., ω_i ≤ 1 for all i;
for the x_j: ωA_j - 0 ≤ 0, i.e., ωA_j ≤ 0 for all j in J.

Now, to reduce the value of ξ any further would require a variable x_j for which ωA_j > 0. This would give it a negative reduced cost, and we could enter it into the basis and reduce ξ. So we need to find such a variable x_j from j not in J and FORCE it into set J, so that it can be introduced into the restricted primal problem.

In order to accomplish this, we need to modify the dual vector π so that, with the modified vector, (a) all dual constraints that were previously active remain active, and (b) a new constraint as defined in the previous paragraph becomes activated, so that the corresponding x_j can enter the set J.

Suppose the new vector is π' = π + θω, where θ is a POSITIVE constant. Then the j-th dual constraint is

π'A_j ≤ c_j, i.e., (π + θω)A_j = πA_j + θωA_j ≤ c_j.

For j in J: we know πA_j = c_j and ωA_j ≤ 0, so the constraint is automatically satisfied.

For j not in J: we want θωA_j ≤ c_j - πA_j. We know πA_j < c_j (since π is dual feasible), so c_j - πA_j > 0. If ωA_j ≤ 0, then constraint j is satisfied automatically (and remains inactive). If ωA_j > 0, then the constraint remains satisfied as long as θ ≤ (c_j - πA_j)/(ωA_j), and becomes active when θ = (c_j - πA_j)/(ωA_j).

Thus, to activate at least one new constraint, we merely need to define a new dual vector π' = π + θω, where

θ = minimum { (c_j - πA_j)/(ωA_j) : j not in J, ωA_j > 0 }.

NOTE: If we cannot find a j not in J such that ωA_j > 0, then the primal must be infeasible, so we can stop.

Consider our earlier example again:

Minimize Z = 2x_1 + 3x_2 + 4x_3
st  x_1 + 2x_2 + x_3 ≥ 3        x_1 + 2x_2 + x_3 - x_4 = 3
    2x_1 - x_2 + 3x_3 ≥ 4       2x_1 - x_2 + 3x_3 - x_5 = 4
all x_j ≥ 0.

Maximize W = 3π_1 + 4π_2
st  π_1 + 2π_2 ≤ 2        π_1 + 2π_2 + π_3 = 2
    2π_1 - π_2 ≤ 3        2π_1 - π_2 + π_4 = 3
    π_1 + 3π_2 ≤ 4        π_1 + 3π_2 + π_5 = 4
all π_i ≥ 0.

Consider the solution π_1 = 1.5, π_2 = 0. The first dual constraint gives 1.5 < 2 (inactive), the second gives 3 = 3 (active), and the third gives 1.5 < 4 (inactive); thus J = {2}. The restricted primal is

Min ξ = A_1 + A_2
st 2x_2 + A_1 = 3
   -x_2 + A_2 = 4
x_2, A_1, A_2 ≥ 0.

The optimal solution is A_1 = 0, A_2 = 5.5, x_2 = 1.5, with objective ξ = 5.5 and simplex multiplier vector ω = [0.5 1]. Since ξ > 0, we need to get a new vector π' feasible in the dual via π' = π + θω, where θ is the minimum of (c_j - πA_j)/(ωA_j) over all j not in J such that ωA_j > 0. Here we have

ωA_1 = [0.5 1](1, 2)^T = 2.5 (> 0),  ωA_3 = [0.5 1](1, 3)^T = 3.5 (> 0),

and so

θ = min {0.5/2.5, 2.5/3.5} = 0.2,  π' = π + 0.2ω = [1.6 0.2].

Thus the new J = {1,2}, and the new restricted primal is

Min ξ = A_1 + A_2
st x_1 + 2x_2 + A_1 = 3
   2x_1 - x_2 + A_2 = 4
x_1, x_2, A_1, A_2 ≥ 0.

The optimal solution is A_1 = 0, A_2 = 0, x_1 = 2.2, x_2 = 0.4 with ξ = 0. So this must be the optimum solution to (P)! STOP.
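The step-size computation is mechanical. A sketch that reproduces θ = 0.2 and π' = [1.6 0.2] from the example (the columns of A are those of x_1, x_2, x_3):

import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [2.0, -1.0, 3.0]])
c = np.array([2.0, 3.0, 4.0])
pi = np.array([1.5, 0.0])          # current dual feasible (non-basic) solution
omega = np.array([0.5, 1.0])       # restricted primal's optimal multipliers

slack = c - pi @ A                 # c_j - pi A_j: zero exactly on J
rates = omega @ A                  # omega A_j
theta = min(slack[j] / rates[j] for j in range(3)
            if slack[j] > 1e-9 and rates[j] > 1e-9)
print(theta, pi + theta * omega)   # 0.2 [1.6 0.2]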
