END3033 Operations Research I: Sensitivity Analysis & Duality (to accompany Operations Research: Applications and Algorithms), Fatih Cavdur

1 END3033 Operations Research I Sensitivity Analysis & Duality to accompany Operations Research: Applications and Algorithms Fatih Cavdur

2 Introduction Consider the following problem, where x1 and x2 correspond to the numbers of product 1 (soldier) and product 2 (train) produced per week, and C1, C2 and C3 correspond to the constraints on resource 1 (finishing), resource 2 (carpentry) and resource 3 (demand), respectively:
max z = 3x1 + 2x2
s.t. 2x1 + x2 ≤ 100 (C1 = Finishing)
x1 + x2 ≤ 80 (C2 = Carpentry)
x1 ≤ 40 (C3 = Demand)
x1, x2 ≥ 0
The optimal solution to the problem is z = 180; x1 = 20, x2 = 60, where B = {x1, x2, s3} and N = {s1, s2}.
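
A minimal Python sketch of this LP, assuming scipy is available (linprog minimizes, so the max objective is negated); it should reproduce x1 = 20, x2 = 60, z = 180. The script and its names are illustrative, not part of the lecture material.

import numpy as np
from scipy.optimize import linprog

c = [-3, -2]                     # maximize 3x1 + 2x2  ->  minimize -3x1 - 2x2
A_ub = [[2, 1],                  # finishing: 2x1 + x2 <= 100
        [1, 1],                  # carpentry:  x1 + x2 <= 80
        [1, 0]]                  # demand:     x1      <= 40
b_ub = [100, 80, 40]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)           # expected: [20. 60.] 180.0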

3 Introduction How would the optimal solution to this problem change when we change the parameters of the problem (objective function coefficients, right-hand side terms, etc.)? That's sensitivity analysis.

4 Change in an Objective Function Coefficient Currently we earn $3 when we produce a soldier and $2 when we produce a train. If we increase the profit of producing a soldier sufficiently, would it still be optimal to produce 20 soldiers and 60 trains? We currently have x1 = 20, x2 = 60 and z = 3x1 + 2x2 = 180. Assume that we fix all other parameters and let the coefficient of x1 be c1, the contribution to profit of each soldier. We can then write
z = c1 x1 + 2x2, so x2 = -(c1/2) x1 + z/2, and the slope of the isoprofit line is m = -c1/2.

5 Change in an Objective Function Coefficient We see that if we fix all other parameters and change the coefficient of x1, we change the slope of the objective function, which makes the isoprofit line (z-line) flatter or steeper. Looking at the figure, if the isoprofit line becomes flatter than the carpentry constraint, what will be the new optimal point of the problem? Obviously, instead of point B, we will have a new optimal solution at point A! What about if we make the objective function steeper than the finishing constraint?

6 Change in an Objective Function Coefficient

7 Change in an Objective Function Coefficient Now, we can write
z = c1 x1 + 2x2, so x2 = -(c1/2) x1 + z/2 and m_o = -c1/2
2x1 + x2 = 100, so x2 = -2x1 + 100 and m_C1 = -2
x1 + x2 = 80, so x2 = -x1 + 80 and m_C2 = -1
(1) The objective function will be flatter than the carpentry constraint if -c1/2 > -1, i.e., c1 < 2.
(2) Similarly, the objective function will be steeper than the finishing constraint if -c1/2 < -2, i.e., c1 > 4.

8 Change in an Objective Function Coefficient Cases (1) and (2) cause the current optimal point to change from point B to point A and from point B to point C, respectively, where we will have a new set of basic and non-basic variables. In other words, the current optimal solution (point B) remains optimal if
-2 ≤ -c1/2 ≤ -1, that is, 2 ≤ c1 ≤ 4.

9 Change in Right-Hand Side Term Currently, 100 units of resource 1, 80 units of resource 2 and 40 units of resource 3 are available, respectively. Can we increase or decrease the # of available units for these constraints? Will it change the optimal solution? In other words, for what values of a right-hand side will the current solution remain optimal? Consider the right-hand side of the first constraint, and let it be b1. Note that we fix all the other parameters.

10 Change in Right-Hand Side Term Currently, what are the constraints binding at point B (the optimal solution)? They are the first 2 constraints, finishing and carpentry constraints. What happens if we change b 1 (currently, it is 100)? These changes will shift the finishing constraint parallel to its current position. If you are asked to solve this problem graphically, how would you determine the coordinates of the optimal point B? We note that the optimal point occurs where the first 2 constraints intersect. How will changing one of these right-hand sides change the optimal solution?

11 Change in Right-Hand Side Term

12 Change in Right-Hand Side Term We can thus state the following rule: as long as the intersection of the first 2 constraints remains feasible, the current optimal solution remains optimal. Where does the intersection of the first 2 constraints occur if we increase b1 beyond 120? What about if we decrease it below 80? Note that the current optimal solution remains optimal if 80 ≤ b1 ≤ 120.

13 Shadow Prices When we change a parameter such as an objective coefficient or a right-hand side, what changes in the model even if the current optimal solution remains optimal? Consider the right-hand side of the first constraint, and let Δ be the amount of change on the right-hand side of the first constraint (b1). We then have
2x1 + x2 = 100 + Δ
x1 + x2 = 80
which gives x1 = 20 + Δ, x2 = 60 - Δ, and therefore
z = 3x1 + 2x2 = 180 + Δ
What does it mean?
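
A small numpy check of this calculation (illustrative, assuming numpy is available), treating the two binding constraints as a linear system parameterized by Δ:

import numpy as np

A = np.array([[2.0, 1.0],        # finishing (binding)
              [1.0, 1.0]])       # carpentry (binding)
for delta in (0.0, 1.0, 5.0):
    x1, x2 = np.linalg.solve(A, [100.0 + delta, 80.0])
    z = 3 * x1 + 2 * x2
    print(delta, x1, x2, z)      # x1 = 20 + Δ, x2 = 60 - Δ, z = 180 + Δ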

14 Shadow Prices We define the shadow price of a constraint as the amount by which the objective value is improved (increase in a max, decrease in a min) when the right-hand side of the constraint is increased by 1 unit. What is the shadow price of the first constraint in the example? What about the second? What is different about the third constraint?

15 Shadow Prices In general, we can write, for constraint i of a max problem,
new optimal z = old optimal z + (shadow price of constraint i) × Δ_i
and for a min problem,
new optimal z = old optimal z - (shadow price of constraint i) × Δ_i
where Δ_i is the increase in the right-hand side of constraint i.

16 Some Definitions Assume that we have an LP with n variables and m constraints in standard form, where we want max z (or min z) such that
z = c1 x1 + c2 x2 + ... + cn xn
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm
x1, x2, ..., xn ≥ 0

17 Some Definitions For instance, consider the Dakota example without the x2 ≤ 5 constraint:
max z = 60x1 + 30x2 + 20x3 + 0s1 + 0s2 + 0s3
8x1 + 6x2 + x3 + s1 = 48
4x1 + 2x2 + 1.5x3 + s2 = 20
2x1 + 1.5x2 + 0.5x3 + s3 = 8
x1, x2, x3, s1, s2, s3 ≥ 0

18 Some Definitions The optimal solution for Dakota:
z + 5x2 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
x1, x2, x3, s1, s2, s3 ≥ 0

19 Some Definitions For such an LP with an optimal solution, we define B and N as the sets of basic and non-basic variables, and let x_B and x_N be the m × 1 and (n - m) × 1 vectors of basic and non-basic variables, respectively. For this solution, we have x_B = (s1, x3, x1)^T and x_N = (x2, s2, s3)^T.

20 Some Definitions We define c_B and c_N as the 1 × m and 1 × (n - m) vectors of objective function coefficients of the basic and non-basic variables, respectively. For our example, we have c_B = (0, 20, 60) and c_N = (30, 0, 0).

21 Some Definitions Let B be the m × m matrix of constraint coefficients of the basic variables in the initial BFS. For our example (columns s1, x3, x1),
B = [1 1 8; 0 1.5 4; 0 0.5 2]
Let N be the m × (n - m) matrix of constraint coefficients of the non-basic variables in the initial BFS (columns x2, s2, s3).
N = [6 0 0; 2 1 0; 1.5 0 1]

22 Some Definitions Let a_j be the m × 1 vector of constraint coefficients of variable x_j in the initial BFS.
a_2 = (6, 2, 1.5)^T and a_s2 = (0, 1, 0)^T
We define b as the m × 1 vector of right-hand side terms.
b = (48, 20, 8)^T

23 Some Definitions Our model can be written as
max z = c_B x_B + c_N x_N
s.t. B x_B + N x_N = b
x_B, x_N ≥ 0

24 Some Definitions Using this representation, we can write, for the Dakota example,
z = (0, 20, 60)(s1, x3, x1)^T + (30, 0, 0)(x2, s2, s3)^T
[1 1 8; 0 1.5 4; 0 0.5 2](s1, x3, x1)^T + [6 0 0; 2 1 0; 1.5 0 1](x2, s2, s3)^T = (48, 20, 8)^T
s1, x3, x1, x2, s2, s3 ≥ 0

25 Some Definitions Starting from
B x_B + N x_N = b
and multiplying the expression by B⁻¹, we obtain
B⁻¹B x_B + B⁻¹N x_N = B⁻¹b, that is, x_B + B⁻¹N x_N = B⁻¹b

26 Some Definitions For our example, we have
B = [1 1 8; 0 1.5 4; 0 0.5 2] and B⁻¹ = [1 2 -8; 0 2 -4; 0 -0.5 1.5]
We can then write
(s1, x3, x1)^T + [-2 2 -8; -2 2 -4; 1.25 -0.5 1.5](x2, s2, s3)^T = (24, 8, 2)^T
where the matrix in the middle is B⁻¹N.

27 Some Definitions For the objective function, we can write
z = c_B x_B + c_N x_N, i.e., z - c_B x_B - c_N x_N = 0
Multiplying B x_B + N x_N = b by c_B B⁻¹ gives
c_B B⁻¹B x_B + c_B B⁻¹N x_N = c_B B⁻¹b, i.e., c_B x_B + c_B B⁻¹N x_N = c_B B⁻¹b
Adding the two expressions, we obtain
z + (c_B B⁻¹N - c_N) x_N = c_B B⁻¹b
By letting c̄_j be the coefficient of x_j in the objective row,
c̄_j = c_B B⁻¹ a_j - c_j

28 Some Definitions For our example, we have c_B = (0, 20, 60) and B⁻¹ = [1 2 -8; 0 2 -4; 0 -0.5 1.5], so c_B B⁻¹ = (0, 10, 10). We can then write
c̄_2 = c_B B⁻¹ a_2 - c_2 = (0, 10, 10)(6, 2, 1.5)^T - 30 = 35 - 30 = 5

29 Some Definitions
c̄_s2 = c_B B⁻¹ a_s2 - c_s2 = (0, 10, 10)(0, 1, 0)^T - 0 = 10
c̄_s3 = c_B B⁻¹ a_s3 - c_s3 = (0, 10, 10)(0, 0, 1)^T - 0 = 10

30 Some Definitions The optimal objective function value is
z = c_B B⁻¹ b = (0, 10, 10)(48, 20, 8)^T = 280
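
The matrix computations above can be reproduced with a few lines of numpy; the sketch below is illustrative (the array layout follows the basis order s1, x3, x1) and is not part of the original slides.

import numpy as np

B   = np.array([[1, 1.0, 8], [0, 1.5, 4], [0, 0.5, 2]])   # columns s1, x3, x1
c_B = np.array([0.0, 20, 60])
b   = np.array([48.0, 20, 8])
a2, a_s2, a_s3 = np.array([6.0, 2, 1.5]), np.array([0.0, 1, 0]), np.array([0.0, 0, 1])

B_inv = np.linalg.inv(B)
print(B_inv)                       # [[1, 2, -8], [0, 2, -4], [0, -0.5, 1.5]]
print(B_inv @ b)                   # basic values (s1, x3, x1) = (24, 8, 2)
y = c_B @ B_inv
print(y)                           # (0, 10, 10)
print(y @ a2 - 30, y @ a_s2, y @ a_s3)   # row-0 coefficients: 5, 10, 10
print(c_B @ B_inv @ b)             # optimal z = 280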

31 Some Definitions Example: Compute the solution of the following LP when B = {x2, s2}.
max z = x1 + 4x2
x1 + 2x2 ≤ 6
2x1 + x2 ≤ 8
x1, x2 ≥ 0
The standard form of the model is
max z = x1 + 4x2
x1 + 2x2 + s1 = 6
2x1 + x2 + s2 = 8
x1, x2, s1, s2 ≥ 0

32 Some Definitions Example: We start with
B = [2 0; 1 1] and B⁻¹ = [0.5 0; -0.5 1]
a_1 = (1, 2)^T, so B⁻¹a_1 = (0.5, 1.5)^T
a_s1 = (1, 0)^T, so B⁻¹a_s1 = (0.5, -0.5)^T
The right-hand side is B⁻¹b = B⁻¹(6, 8)^T = (3, 5)^T, so x2 = 3 and s2 = 5. With c_B = (4, 0) we have c_B B⁻¹ = (2, 0), and
c̄_1 = c_B B⁻¹ a_1 - c_1 = 2 - 1 = 1
c̄_s1 = c_B B⁻¹ a_s1 - c_s1 = 2 - 0 = 2
Finally, the objective function value is z = c_B B⁻¹ b = 12.

33 Sensitivity Analysis
- Changing the objective function coefficient of a non-basic variable
- Changing the objective function coefficient of a basic variable
- Changing the right-hand side of an equation
- Changing the column of a non-basic variable
- Adding a new variable (or activity)
- Adding a new constraint

34 Sensitivity Analysis In this section, we will consider the following example:
max z = 60x1 + 30x2 + 20x3 + 0s1 + 0s2 + 0s3
8x1 + 6x2 + x3 + s1 = 48
4x1 + 2x2 + 1.5x3 + s2 = 20
2x1 + 1.5x2 + 0.5x3 + s3 = 8
x1, x2, x3, s1, s2, s3 ≥ 0
The optimal solution for Dakota:
z + 5x2 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
x1, x2, x3, s1, s2, s3 ≥ 0

35 Sensitivity Analysis Objective Function Coefficient of a Non-Basic Variable Currently, we have c2 = 30. For what values of c2 will the current optimal solution remain optimal? When we change c2, what will change in the solution? For a max problem like this, the current solution remains optimal as long as c̄_2 ≥ 0. Let c2 = 30 + Δ. Since c_B B⁻¹ = (0, 10, 10), we have
c̄_2 = (0, 10, 10)(6, 2, 1.5)^T - (30 + Δ) = 5 - Δ
The current solution is therefore still optimal if
c̄_2 ≥ 0, i.e., 5 - Δ ≥ 0, i.e., Δ ≤ 5, i.e., c2 ≤ 35

36 Sensitivity Analysis Objective Function Coefficient of a Non-Basic Variable If, for instance, c2 = 40, what will happen? We then have
c̄_2 = 35 - 40 = -5
meaning the current solution is not optimal anymore. We now have a sub-optimal tableau and perform the simplex as follows:
z - 5x2 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
After pivoting x2 into the basis (in the last row):
z + 4x1 + 8s2 + 16s3 = 288
1.6x1 + s1 + 1.2s2 - 5.6s3 = 27.2
1.6x1 + x3 + 1.2s2 - 1.6s3 = 11.2
0.8x1 + x2 - 0.4s2 + 1.2s3 = 1.6
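
As a sanity check on the pivot above, re-solving the Dakota LP directly with c2 = 40 (a hedged scipy sketch, not from the slides) should return the same new optimum:

from scipy.optimize import linprog

A_ub = [[8, 6, 1], [4, 2, 1.5], [2, 1.5, 0.5]]
b_ub = [48, 20, 8]
res = linprog([-60, -40, -20], A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, -res.fun)             # expected: [0, 1.6, 11.2], 288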

37 Sensitivity Analysis Objective Function Coefficient of a Basic Variable Consider the objective function coefficient of x1. Currently we have c1 = 60. We let c1 = 60 + Δ, and then
c_B B⁻¹ = (0, 20, 60 + Δ)[1 2 -8; 0 2 -4; 0 -0.5 1.5] = (0, 10 - 0.5Δ, 10 + 1.5Δ)

38 Sensitivity Analysis Objective Function Coefficient of a Basic Variable Now, we compute the new objective row:
c̄_2 = c_B B⁻¹ a_2 - c_2 = (0, 10 - 0.5Δ, 10 + 1.5Δ)(6, 2, 1.5)^T - 30 = 5 + 1.25Δ
c̄_s2 = c_B B⁻¹ a_s2 - c_s2 = 10 - 0.5Δ
c̄_s3 = c_B B⁻¹ a_s3 - c_s3 = 10 + 1.5Δ

39 Sensitivity Analysis Objective Function Coefficient of a Basic Variable We thus have
5 + 1.25Δ ≥ 0, 10 - 0.5Δ ≥ 0, 10 + 1.5Δ ≥ 0, which gives -4 ≤ Δ ≤ 20 (i.e., 56 ≤ c1 ≤ 80)
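
A tiny check of this range (plain Python, illustrative only), evaluating the three row-0 coefficients at and just outside the endpoints of Δ:

for delta in (-4.1, -4, 0, 20, 20.1):
    coeffs = (5 + 1.25 * delta, 10 - 0.5 * delta, 10 + 1.5 * delta)
    print(delta, coeffs, all(c >= 0 for c in coeffs))   # True only for -4 <= Δ <= 20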

40 Sensitivity Analysis Objective Function Coefficient of a Basic Variable If c1 = 100, by proceeding similarly, we obtain the sub-optimal tableau
z + 55x2 - 10s2 + 70s3 = 360
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
and obtain the following optimal solution after another iteration (s2 enters, x3 leaves):
z + 45x2 + 5x3 + 50s3 = 400
-x3 + s1 - 4s3 = 16
-x2 + 0.5x3 + s2 - 2s3 = 4
x1 + 0.75x2 + 0.25x3 + 0.5s3 = 4

41 Sensitivity Analysis Reduced Cost This analysis allows us to define a concept called reduced cost. The reduced cost of a non-basic variable is the maximum amount by which the variable's objective function coefficient can be increased before the current basis becomes sub-optimal.

42 Sensitivity Analysis Changing the Right-Hand Side of an Equation We can say that as long as the right-hand side of each constraint in the optimal tableau remains non-negative, the current solution remains feasible and optimal.

43 Sensitivity Analysis Changing the Right-Hand Side of an Equation If, for instance, we change b2 to b2 + Δ, we have
b̄ = B⁻¹b = [1 2 -8; 0 2 -4; 0 -0.5 1.5](48, 20 + Δ, 8)^T = (24 + 2Δ, 8 + 2Δ, 2 - 0.5Δ)^T
We thus need
24 + 2Δ ≥ 0, 8 + 2Δ ≥ 0, 2 - 0.5Δ ≥ 0, which gives -4 ≤ Δ ≤ 4, i.e., 16 ≤ b2 ≤ 24
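
A short numpy sketch (illustrative) that scans a few values of b2 and checks whether B⁻¹b stays non-negative, confirming the range 16 ≤ b2 ≤ 24:

import numpy as np

B_inv = np.array([[1, 2, -8], [0, 2, -4], [0, -0.5, 1.5]])
for b2 in (15, 16, 20, 24, 25):
    rhs = B_inv @ np.array([48.0, b2, 8.0])
    print(b2, rhs, bool((rhs >= 0).all()))   # feasible only for 16 <= b2 <= 24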

44 Sensitivity Analysis Changing the Right-Hand Side of an Equation What about the variables and the objective function? If, for instance, b2 = 22, we have
(s1, x3, x1)^T = B⁻¹b = (28, 12, 1)^T
and
z = c_B B⁻¹b = (0, 10, 10)(48, 22, 8)^T = 300

45 Sensitivity Analysis Changing the Right-Hand Side of an Equation If b2 = 30, the current solution is no longer optimal (the basis becomes infeasible). We then have
(s1, x3, x1)^T = B⁻¹b = (44, 28, -3)^T
and
z = c_B B⁻¹b = (0, 10, 10)(48, 30, 8)^T = 380

46 Sensitivity Analysis Changing the Right-Hand Side of an Equation
z + 5x2 + 10s2 + 10s3 = 380
-2x2 + s1 + 2s2 - 8s3 = 44
-2x2 + x3 + 2s2 - 4s3 = 28
x1 + 1.25x2 - 0.5s2 + 1.5s3 = -3
What to do now? We will come back to this later!!!

47 Sensitivity Analysis Changing the Column of a Non-Basic Variable When the column of a non-basic variable changes, we need to check whether the variable's objective-row coefficient still satisfies the optimality condition. To do so, we price out that variable, that is, we compute
c̄_j = c_B B⁻¹ a_j - c_j

48 Sensitivity Analysis Changing the Column of a Non-Basic Variable If, for instance, we change the column of the non-basic variable x2 from c2 = 30 to c2 = 43 and from a_2 = (6, 2, 1.5)^T to a_2 = (5, 2, 2)^T, then
c̄_2 = c_B B⁻¹ a_2 - c_2 = (0, 10, 10)(5, 2, 2)^T - 43 = 40 - 43 = -3 < 0
which means the current solution is no longer optimal, and we must re-optimize from the resulting sub-optimal tableau.

49 Sensitivity Analysis Adding a New Variable or Activity Assume that we have a new product that can be sold for $15 and uses 1 board foot of lumber, 1 finishing hour and 1 carpentry hour. That is, c4 = 15 and a_4 = (1, 1, 1)^T. To find out whether the current solution remains optimal, we price out the new variable:
c̄_4 = c_B B⁻¹ a_4 - c_4 = (0, 10, 10)(1, 1, 1)^T - 15 = 20 - 15 = 5 ≥ 0
so the current solution remains optimal.
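
Pricing out the new activity is a one-line computation; the numpy fragment below (illustrative, not from the slides) reproduces c̄_4 = 5:

import numpy as np

y = np.array([0.0, 10.0, 10.0])    # c_B B⁻¹ from the optimal Dakota basis
a4, c4 = np.array([1.0, 1.0, 1.0]), 15.0
print(y @ a4 - c4)                 # 5.0 >= 0, so the current basis stays optimal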

50 Sensitivity Analysis Summary We can summarize our discussion using the following table:
Change | Effect | Optimality Condition
Changing c_j (non-basic) | Compute c̄_j | c̄_j ≥ 0
Changing c_j (basic) | Compute the new Row 0 | all row-0 coefficients ≥ 0
Changing b_i | Compute the new RHS, b̄ = B⁻¹b | b̄ ≥ 0
Changing a_j (non-basic) | Compute B⁻¹a_j and c̄_j | c̄_j ≥ 0
New variable x_j | Compute B⁻¹a_j and c̄_j | c̄_j ≥ 0

51 Sensitivity Analysis Multiple Parameter Changes (100% Rule)-Objective Function Case 1: All variables whose objective function coefficients are changed have nonzero reduced costs in the optimal row 0. Case 2: At least one variable whose objective function coefficient is changed has a reduced cost of zero.

52 Sensitivity Analysis Multiple Parameter Changes (100% Rule)-Objective Function In Case 1, the current basis remains optimal if and only if the objective function coefficient for each variable remains within the allowable range. If the current basis remains optimal, then both the values of the decision variables and objective function remain unchanged. If the objective function coefficient for any variable is outside its allowable range, then the current basis is no longer optimal.

53 Sensitivity Analysis Multiple Parameter Changes (100% Rule)-Objective Function In Case 2, we can often show that the current basis remains optimal by using the 100% Rule. Let
c_j = original objective function coefficient of x_j
Δc_j = change in c_j
i_j = maximum allowable increase in c_j
d_j = maximum allowable decrease in c_j

54 Sensitivity Analysis Multiple Parameter Changes (100% Rule)-Objective Function We then define the ratio r_j as
r_j = Δc_j / i_j if Δc_j ≥ 0, and r_j = -Δc_j / d_j if Δc_j ≤ 0
If Σ_j r_j ≤ 1, the current solution remains optimal; if Σ_j r_j > 1, the current solution might or might not be optimal.

55 Sensitivity Analysis Multiple Parameter Changes (100% Rule)-RHS Case 1: All constraints whose right-hand sides are being modified are nonbinding constraints. Case 2: At least one of the constraints whose right-hand side is being modified is a binding constraint.

56 Sensitivity Analysis Multiple Parameter Changes (100% Rule)-RHS In Case 1, the current basis remains optimal if and only if the right-hand side of each modified constraint remains within its allowable range. If the current basis remains optimal, then both the values of the decision variables and the objective function value remain unchanged. If the right-hand side of any constraint is outside its allowable range, then the current basis is no longer optimal.

57 Sensitivity Analysis Multiple Parameter Changes (100% Rule)-RHS In Case 2, we can often show that the current basis remains optimal by using the 100% Rule. Let
b_j = original right-hand side of constraint j
Δb_j = change in b_j
i_j = maximum allowable increase in b_j
d_j = maximum allowable decrease in b_j

58 Sensitivity Analysis Multiple Parameter Changes (100% Rule)-RHS We then define the ratio r_j as
r_j = Δb_j / i_j if Δb_j ≥ 0, and r_j = -Δb_j / d_j if Δb_j ≤ 0
If Σ_j r_j ≤ 1, the current basis remains optimal; if Σ_j r_j > 1, the current basis might or might not remain optimal.

59 Sensitivity Analysis Multiple Parameter Changes (100% Rule) Example for Case 1: Consider the following example (Diet Problem):
min z = 50x1 + 20x2 + 30x3 + 80x4
400x1 + 200x2 + 150x3 + 500x4 ≥ 500
3x1 + 2x2 ≥ 6
2x1 + 2x2 + 4x3 + 4x4 ≥ 10
2x1 + 4x2 + x3 + 5x4 ≥ 8
x1, x2, x3, x4 ≥ 0

60 Sensitivity Analysis Multiple Parameter Changes (100% Rule) We have the following allowable ranges for the corresponding parameters:
c1 = 50: -27.5 ≤ Δ < ∞ (i.e., c1 ≥ 22.5)
c2 = 20: -5 ≤ Δ ≤ 18.33
c3 = 30: -30 ≤ Δ ≤ 10
c4 = 80: -50 ≤ Δ < ∞ (i.e., c4 ≥ 30)
If we change the parameters to c1 = 60 and c4 = 50, since both variables have non-zero reduced costs, we only check the ranges: 22.5 ≤ c1 = 60 < ∞ and 30 ≤ c4 = 50 < ∞, so the current solution remains optimal.

61 Sensitivity Analysis Multiple Parameter Changes (100% Rule) Another Example for Case 1: If we instead change the parameters to c1 = 40 and c4 = 25, both variables again have non-zero reduced costs, so we check the ranges: c1 = 40 is within 22.5 ≤ c1 < ∞, but c4 = 25 falls outside 30 ≤ c4 < ∞. The current solution is therefore no longer optimal. The problem must be solved again.

62 Sensitivity Analysis Multiple Parameter Changes (100% Rule) Example for Case 2: Consider the following example (Dakota Problem):
max z = 60x1 + 30x2 + 20x3 + 0s1 + 0s2 + 0s3
8x1 + 6x2 + x3 + s1 = 48
4x1 + 2x2 + 1.5x3 + s2 = 20
2x1 + 1.5x2 + 0.5x3 + s3 = 8
x1, x2, x3, s1, s2, s3 ≥ 0

63 Sensitivity Analysis Multiple Parameter Changes (100% Rule) The optimal solution for Dakota:
z + 5x2 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
x1, x2, x3, s1, s2, s3 ≥ 0

64 Sensitivity Analysis Multiple Parameter Changes (100% Rule) We have the following allowable ranges for the corresponding parameters:
c1 = 60: -4 ≤ Δ ≤ 20
c2 = 30: -∞ < Δ ≤ 5
c3 = 20: -5 ≤ Δ ≤ 2.5

65 Sensitivity Analysis Multiple Parameter Changes (100% Rule) If we change the parameters to c1 = 70 and c3 = 18, then
r1 = Δc1 / i1 = 10/20 = 0.5
r2 = 0
r3 = -Δc3 / d3 = 2/5 = 0.4
Σ_j r_j = 0.9 ≤ 1, so the current solution remains optimal.
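
A small helper that encodes the 100% rule ratio and applies it to this example could look like the sketch below; the function name and structure are my own, not from the text.

def ratio(delta, allowable_increase, allowable_decrease):
    # r_j = Δc_j / i_j for increases, -Δc_j / d_j for decreases
    if delta >= 0:
        return delta / allowable_increase
    return -delta / allowable_decrease

r1 = ratio(70 - 60, 20, 4)        # c1: 60 -> 70, allowable increase 20
r3 = ratio(18 - 20, 2.5, 5)       # c3: 20 -> 18, allowable decrease 5
print(r1, r3, r1 + r3 <= 1)       # 0.5, 0.4, True -> current basis stays optimal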

66 Duality Consider the following max problem (normal max problem):
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
x1, x2, ..., xn ≥ 0

67 Duality The dual of the problem is given as follows (normal min problem):
min w = b1y1 + b2y2 + ... + bmym
a11y1 + a21y2 + ... + am1ym ≥ c1
a12y1 + a22y2 + ... + am2ym ≥ c2
...
a1ny1 + a2ny2 + ... + amnym ≥ cn
y1, y2, ..., ym ≥ 0

68 Duality Consider the Dakota Example:
max z = 60x1 + 30x2 + 20x3
8x1 + 6x2 + x3 ≤ 48
4x1 + 2x2 + 1.5x3 ≤ 20
2x1 + 1.5x2 + 0.5x3 ≤ 8
x1, x2, x3 ≥ 0

69 Duality The dual of the problem is then
min w = 48y1 + 20y2 + 8y3
8y1 + 4y2 + 2y3 ≥ 60
6y1 + 2y2 + 1.5y3 ≥ 30
y1 + 1.5y2 + 0.5y3 ≥ 20
y1, y2, y3 ≥ 0
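
A hedged scipy sketch that solves this dual directly (the ≥ rows are negated into ≤ form for linprog); it should return y = (0, 10, 10) with w = 280, matching the shadow prices found earlier.

from scipy.optimize import linprog

c    = [48, 20, 8]
A_ge = [[8, 4, 2], [6, 2, 1.5], [1, 1.5, 0.5]]
b_ge = [60, 30, 20]
A_ub = [[-a for a in row] for row in A_ge]   # multiply >= rows by -1
b_ub = [-v for v in b_ge]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, res.fun)              # expected: [0, 10, 10], 280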

70 Duality Consider the Diet Example:
min w = 50y1 + 20y2 + 30y3 + 80y4
400y1 + 200y2 + 150y3 + 500y4 ≥ 500
3y1 + 2y2 ≥ 6
2y1 + 2y2 + 4y3 + 4y4 ≥ 10
2y1 + 4y2 + y3 + 5y4 ≥ 8
y1, y2, y3, y4 ≥ 0

71 Duality The dual of the problem is then
max z = 500x1 + 6x2 + 10x3 + 8x4
400x1 + 3x2 + 2x3 + 2x4 ≤ 50
200x1 + 2x2 + 2x3 + 4x4 ≤ 20
150x1 + 4x3 + x4 ≤ 30
500x1 + 4x3 + 5x4 ≤ 80
x1, x2, x3, x4 ≥ 0

72 Duality If we have a non-normal LP, we can consider the following 2 alternatives: Transform the non-normal LP to a normal one and write the dual of the LP. Directly write the dual of the non-normal LP.

73 Duality Example of a Non-Normal LP:
max z = 2x1 + x2
x1 + x2 = 2
2x1 - x2 ≥ 3
x1 - x2 ≤ 1
x1 ≥ 0, x2: urs

74 Duality Another Example of a Non-Normal LP:
min w = 2y1 + 4y2 + 6y3
y1 + 2y2 + y3 ≥ 2
y1 - y3 ≥ 1
y2 + y3 = 1
2y1 + y2 ≤ 3
y1: urs, y2, y3 ≥ 0

75 Duality Transform the non-normal LP into a normal one and write the dual of the LP:
- Convert each constraint that points the wrong way for the normal form (a ≥ constraint in a max problem, a ≤ constraint in a min problem) by multiplying it by -1.
- Convert each equation into two inequalities (one ≤ and one ≥), then bring both into the normal direction.
- Represent each unrestricted variable as the difference of two non-negative variables.
- Write the dual of the transformed (normal) LP.

76 Duality Directly write the dual of the non-normal max (min) LP:
- For each constraint and variable already in normal form, write the corresponding dual variable and constraint as before.
- For each ≥ (≤) type constraint, the corresponding dual variable is non-positive.
- For each equality constraint, the corresponding dual variable is unrestricted in sign.
- For each non-positive variable, the corresponding dual constraint is a ≤ (≥) type constraint.
- For each unrestricted variable, the corresponding dual constraint is an equality.

77 Duality Consider the Dakota Example:
max z = 60x1 + 30x2 + 20x3
8x1 + 6x2 + x3 ≤ 48
4x1 + 2x2 + 1.5x3 ≤ 20
2x1 + 1.5x2 + 0.5x3 ≤ 8
x1, x2, x3 ≥ 0

78 Duality The dual of the problem is then
min w = 48y1 + 20y2 + 8y3
8y1 + 4y2 + 2y3 ≥ 60
6y1 + 2y2 + 1.5y3 ≥ 30
y1 + 1.5y2 + 0.5y3 ≥ 20
y1, y2, y3 ≥ 0

79 Duality Note that the first constraint is associated with desks, the second with tables and the third with chairs. Also, note that y 1, y 2 and y 3 are associated with lumber, finishing hours and carpentry hours, respectively. Suppose that someone wants to purchase all of Dakota s resources. She must determine the price she is willing to pay for a unit of each resource. We thus define y 1 = price for 1 board ft. of lumber y 2 = price for 1 finishing hour y 3 = price for 1 carpentry hour

80 Duality Consider the Diet Example:
min w = 50y1 + 20y2 + 30y3 + 80y4
400y1 + 200y2 + 150y3 + 500y4 ≥ 500
3y1 + 2y2 ≥ 6
2y1 + 2y2 + 4y3 + 4y4 ≥ 10
2y1 + 4y2 + y3 + 5y4 ≥ 8
y1, y2, y3, y4 ≥ 0

81 Duality The dual of the problem is then
max z = 500x1 + 6x2 + 10x3 + 8x4
400x1 + 3x2 + 2x3 + 2x4 ≤ 50
200x1 + 2x2 + 2x3 + 4x4 ≤ 20
150x1 + 4x3 + x4 ≤ 30
500x1 + 4x3 + 5x4 ≤ 80
x1, x2, x3, x4 ≥ 0

82 Duality Now suppose that there is a salesperson who sells calories, chocolate, sugar and fat and wants to ensure that a dieter will meet all the daily requirements by purchasing calories, chocolate, sugar and fat from her, while maximizing her profit. We must then determine
x1 = price per calorie to charge the dieter
x2 = price per ounce of chocolate to charge the dieter
x3 = price per ounce of sugar to charge the dieter
x4 = price per ounce of fat to charge the dieter

83 Duality Consider the following max problem (normal max problem):
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
x1, x2, ..., xn ≥ 0

84 Duality The dual of the problem is given as follows (normal min problem):
min w = b1y1 + b2y2 + ... + bmym
a11y1 + a21y2 + ... + am1ym ≥ c1
a12y1 + a22y2 + ... + am2ym ≥ c2
...
a1ny1 + a2ny2 + ... + amnym ≥ cn
y1, y2, ..., ym ≥ 0

85 Duality Lemma (Weak Duality): Let x^T = (x1, x2, ..., xn) and y = (y1, y2, ..., ym) be any feasible solutions to the primal and the dual, respectively. We can then write z ≤ w.

86 Duality Proof: Multiply the i-th primal constraint by y_i ≥ 0:
y_i (a_i1 x1 + ... + a_in xn) ≤ b_i y_i, i = 1, ..., m
Summing these expressions over all primal constraints,
Σ_i Σ_j y_i a_ij x_j ≤ Σ_i b_i y_i

87 Duality Similarly, multiply the j-th dual constraint by x_j ≥ 0:
x_j (a_1j y1 + ... + a_mj ym) ≥ c_j x_j, j = 1, ..., n
Summing these expressions over all dual constraints,
Σ_i Σ_j y_i a_ij x_j ≥ Σ_j c_j x_j

88 Duality Combining the above expressions, we obtain
z = Σ_j c_j x_j ≤ Σ_i Σ_j y_i a_ij x_j ≤ Σ_i b_i y_i = w

89 Duality For the Dakota Example, we see that (x1, x2, x3) = (1, 1, 1) is a primal feasible solution with z = 110. Weak Duality implies that any dual feasible solution (y1, y2, y3) must satisfy
48y1 + 20y2 + 8y3 ≥ 110
Check that!

90 Duality

91 Duality Lemma: Let x^T = (x1, x2, ..., xn) and y = (y1, y2, ..., ym) be any feasible solutions to the primal and the dual, respectively. If cx = yb, then x is optimal for the primal and y is optimal for the dual.

92 Duality Lemma: If the primal is unbounded, then, the dual is infeasible. Lemma: If the dual is unbounded, then, the primal is infeasible.

93 Duality Theorem (Dual Theorem): If B is an optimal basis for the primal, then c_B B⁻¹ is an optimal solution to the dual, and z = w.

94 Duality The optimal solution for Dakota:
z + 5x2 + 0s1 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
x1, x2, x3, s1, s2, s3 ≥ 0

95 Duality
z = c_B B⁻¹ b = (0, 10, 10)(48, 20, 8)^T = 280
and
c_B B⁻¹ = (0, 10, 10) = (y1, y2, y3)

96 Duality We can find the optimal dual solution from the optimal primal tableau as follows:
y_i = c̄_s_i, if constraint i is of ≤ type
y_i = -c̄_e_i, if constraint i is of ≥ type
y_i = c̄_a_i - M, if constraint i is of = type (max problem)
y_i = c̄_a_i + M, if constraint i is of = type (min problem)
where c̄_s_i, c̄_e_i and c̄_a_i denote the row-0 coefficients of the slack, excess and artificial variables of constraint i in the optimal tableau.

97 Duality Example:
max z = 3x1 + 2x2 + 5x3
x1 + 3x2 + 2x3 ≤ ... (≤ type, slack s1)
x2 - x3 ≥ 5 (≥ type, excess e2 and artificial a2)
2x1 + x2 - 5x3 = 10 (= type, artificial a3)
x1, x2, x3 ≥ 0

98 Duality The optimal tableau for this example has the columns z, x1, x2, x3, s1, e2, a2, a3 and RHS; in row 0, the artificial-variable columns a2 and a3 carry M terms.

99 Duality Since the 1st constraint is of ≤ type, y1 = c̄_s1 = 51. Since the 2nd constraint is of ≥ type, y2 = -c̄_e2 = 58. Since the 3rd constraint is of = type, y3 = c̄_a3 - M.

100 Duality Shadow Prices Definition (Shadow Price): The shadow price of the ith constraint is the amount by which the optimal z value is improved if b i is increased by 1 assuming that the current basis remains optimal.

101 Duality Shadow Prices We can use the Dual Theorem to find the shadow prices. Consider the Dakota Example, and let us find the shadow price of the second constraint. We have
c_B B⁻¹ = (y1, y2, y3) = (0, 10, 10)
From the Dual Theorem, increasing the right-hand side of the second constraint by 1, from 20 to 21, changes the objective as follows:
b2 = 20: z1 = 48y1 + 20y2 + 8y3
b2 = 21: z2 = 48y1 + 21y2 + 8y3
z2 - z1 = y2 = 10
which shows that the shadow price of the i-th constraint of a max problem is the value of the i-th dual variable.

103 Duality Sensitivity Analysis Our proof of the Dual Theorem demonstrated the following result: assuming that a set of basic variables B is feasible, B is optimal if and only if the associated dual solution (c_B B⁻¹) is dual feasible. This result can be used as an alternative way of doing the following types of sensitivity analysis:
- Changing the objective coefficient of a non-basic variable
- Changing the column of a non-basic variable
- Adding a new variable (activity)

104 Duality Sensitivity Analysis Consider the Dakota Example:
max z = 60x1 + 30x2 + 20x3
8x1 + 6x2 + x3 ≤ 48
4x1 + 2x2 + 1.5x3 ≤ 20
2x1 + 1.5x2 + 0.5x3 ≤ 8
x1, x2, x3 ≥ 0
The optimal solution of the primal is
z = 280; x1 = 2, x2 = 0, x3 = 8, s1 = 24, s2 = 0, s3 = 0

105 Duality Sensitivity Analysis The dual of the problem is
min w = 48y1 + 20y2 + 8y3
8y1 + 4y2 + 2y3 ≥ 60
6y1 + 2y2 + 1.5y3 ≥ 30
y1 + 1.5y2 + 0.5y3 ≥ 20
y1, y2, y3 ≥ 0
The optimal solution of the dual is
w = 280; y1 = 0, y2 = 10, y3 = 10

106 Duality Sensitivity Analysis Assume that we want to change the objective coefficient of x2. Note that the dual constraint associated with x2 is
6y1 + 2y2 + 1.5y3 ≥ c2
and since y1 = 0, y2 = 10, y3 = 10 gives
6y1 + 2y2 + 1.5y3 = 35
the dual solution remains feasible as long as c2 ≤ 35. Hence, we see that as long as c2 ≤ 35, the current solution remains optimal.

107 Duality Sensitivity Analysis Look at the other examples, where we change the column of a non-basic variable and where we add a new variable (activity), in the same way.

108 Complementary Slackness The Theorem of Complementary Slackness is an important result that relates the optimal primal and dual solutions. To state this theorem, we assume that we have the following primal and dual problems:

109 Complementary Slackness Consider the following max problem (normal max problem):
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
x1, x2, ..., xn ≥ 0

110 Complementary Slackness The dual of the problem is given as follows (normal min problem):
min w = b1y1 + b2y2 + ... + bmym
a11y1 + a21y2 + ... + am1ym ≥ c1
a12y1 + a22y2 + ... + am2ym ≥ c2
...
a1ny1 + a2ny2 + ... + amnym ≥ cn
y1, y2, ..., ym ≥ 0
Moreover, let s1, s2, ..., sm and e1, e2, ..., en be the slack and excess variables for the primal and the dual, respectively.

111 Complementary Slackness We can state the Theorem of Complementary Slackness as follows. Theorem (Complementary Slackness): Let x^T = (x1, ..., xn) and y = (y1, ..., ym) be feasible primal and dual solutions, respectively. Then x is primal optimal and y is dual optimal if and only if
s_i y_i = 0, i = 1, ..., m
e_j x_j = 0, j = 1, ..., n

112 Complementary Slackness The Theorem of Complementary Slackness implies that if a constraint in either the primal or the dual is non-binding (s_i > 0 or e_j > 0), then the corresponding variable of the other problem must equal zero.

113 Complementary Slackness Consider the Dakota Example and its dual:
max z = 60x1 + 30x2 + 20x3
8x1 + 6x2 + x3 ≤ 48
4x1 + 2x2 + 1.5x3 ≤ 20
2x1 + 1.5x2 + 0.5x3 ≤ 8
x1, x2, x3 ≥ 0

114 Complementary Slackness The dual of the problem is
min w = 48y1 + 20y2 + 8y3
8y1 + 4y2 + 2y3 ≥ 60
6y1 + 2y2 + 1.5y3 ≥ 30
y1 + 1.5y2 + 0.5y3 ≥ 20
y1, y2, y3 ≥ 0

115 Complementary Slackness The optimal solution of the dual is
w = 280; y1 = 0, y2 = 10, y3 = 10, e1 = 0, e2 = 5, e3 = 0
The optimal solution of the primal is
z = 280; x1 = 2, x2 = 0, x3 = 8, s1 = 24, s2 = 0, s3 = 0
Now, we note that
s1 y1 = s2 y2 = s3 y3 = 0
e1 x1 = e2 x2 = e3 x3 = 0
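
These products can be verified mechanically; the numpy sketch below (illustrative, not from the slides) computes the primal slacks and dual excesses and checks that every product is zero.

import numpy as np

A = np.array([[8, 6, 1.0], [4, 2, 1.5], [2, 1.5, 0.5]])
b = np.array([48.0, 20, 8])
c = np.array([60.0, 30, 20])
x = np.array([2.0, 0, 8])          # primal optimum
y = np.array([0.0, 10, 10])        # dual optimum

s = b - A @ x                      # primal slacks:  (24, 0, 0)
e = A.T @ y - c                    # dual excesses:  (0, 5, 0)
print(s * y, e * x)                # both products are all zeros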

116 Complementary Slackness Can we use the Theorem of Complementary Slackness to solve LPs? Think about it!

117 The Dual Simplex Method Step 1: If the RHS of each constraint is non-negative, then, stop. The optimal solution is found. Otherwise go to Step 2.

118 The Dual Simplex Method Step 2: Choose the basic variable with the most negative value as the leaving variable; its row is the pivot row. To determine the entering variable, compute the ratio
|c̄_j / a_j| for each variable x_j with a negative coefficient a_j in the pivot row,
choose the variable with the smallest ratio, and then perform the row operations to obtain the new solution.

119 The Dual Simplex Method Step 3: If there is any negative RHS corresponding to a constraint in which all coefficients are non-negative, then, the LP has no feasible solution. Otherwise return to Step 1.

120 The Dual Simplex Method We can typically use the dual simplex method in the following cases:
- Finding the new optimal solution after a constraint is added.
- Finding the new optimal solution after changing a right-hand side term.
- Solving a normal min problem.

121 The Dual Simplex Method When we add a new constraint, we can have the following cases:
- The current optimal solution satisfies the new constraint.
- The current optimal solution does not satisfy the new constraint, but the LP still has a feasible solution.
- The current optimal solution does not satisfy the new constraint, and the LP no longer has a feasible solution.

122 The Dual Simplex Method The optimal solution for Dakota:
z + 5x2 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
If we add the constraint x1 + x2 + x3 ≤ 11, we see that the current optimal solution z = 280; x1 = 2, x2 = 0, x3 = 8 satisfies this constraint.

123 The Dual Simplex Method As an example of the second case, consider adding the constraint x2 ≥ 1. We can proceed as follows with the dual simplex algorithm:
z + 5x2 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
-x2 + e4 = -1

124 The Dual Simplex Method After one dual simplex pivot (x2 enters, e4 leaves):
z + 10s2 + 10s3 + 5e4 = 275
s1 + 2s2 - 8s3 - 2e4 = 26
x3 + 2s2 - 4s3 - 2e4 = 10
x1 - 0.5s2 + 1.5s3 + 1.25e4 = 0.75
x2 - e4 = 1
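
As a check on this dual simplex result, re-solving the enlarged LP from scratch (a scipy sketch, with x2 ≥ 1 rewritten as -x2 ≤ -1) should give the same optimum:

from scipy.optimize import linprog

A_ub = [[8, 6, 1], [4, 2, 1.5], [2, 1.5, 0.5], [0, -1, 0]]
b_ub = [48, 20, 8, -1]
res = linprog([-60, -30, -20], A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, -res.fun)             # expected: [0.75, 1, 10], 275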

125 The Dual Simplex Method As an example of Case 3, assume that we add the new constraint x1 + x2 ≥ 12. We can then write
z + 5x2 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
-x1 - x2 + e4 = -12

126 The Dual Simplex Method After eliminating the basic variable x1 from the new row (by adding the x1 row to it):
z + 5x2 + 10s2 + 10s3 = 280
-2x2 + s1 + 2s2 - 8s3 = 24
-2x2 + x3 + 2s2 - 4s3 = 8
x1 + 1.25x2 - 0.5s2 + 1.5s3 = 2
0.25x2 - 0.5s2 + 1.5s3 + e4 = -10
A dual simplex pivot (s2 enters, e4 leaves) then gives
z + 10x2 + 40s3 + 20e4 = 80
-x2 + s1 - 2s3 + 4e4 = -16
-x2 + x3 + 2s3 + 4e4 = -32
x1 + x2 - e4 = 12
-0.5x2 + s2 - 3s3 - 2e4 = 20

127 The Dual Simplex Method One more dual simplex pivot (x2 enters, x3 leaves) gives
z + 10x3 + 60s3 + 60e4 = -240
-x3 + s1 - 4s3 = 16
x2 - x3 - 2s3 - 4e4 = 32
x1 + x3 + 2s3 + 3e4 = -20
-0.5x3 + s2 - 4s3 - 4e4 = 36
The x1 row has a negative right-hand side while all of its coefficients are non-negative: no feasible solution!

128 Data Envelopment Analysis (DEA) Often we wonder whether a university, hospital, restaurant, or other business is operating efficiently. The Data Envelopment Analysis (DEA) method can be used to answer this question. To illustrate how DEA works, let's consider three hospitals. To simplify matters, we assume that each hospital converts two inputs into three different outputs.

129 Data Envelopment Analysis (DEA) The two inputs used by each hospital are
Input 1 = capital (measured by the number of hospital beds)
Input 2 = labor (measured in thousands of labor hours)
The outputs produced by each hospital are
Output 1 = hundreds of patient-days for patients under age 14
Output 2 = hundreds of patient-days for patients between 14 and 65
Output 3 = hundreds of patient-days for patients over 65

130 Data Envelopment Analysis (DEA)
Hospital | Input 1 | Input 2 | Output 1 | Output 2 | Output 3
1 | 5 | 14 | 9 | 4 | 16
2 | 8 | 15 | 5 | 7 | 10
3 | 7 | 12 | 4 | 9 | 13
To determine the efficiency of a hospital, we can let t_r and w_s be the value of one unit of output r and the cost of one unit of input s. The efficiency of hospital i is then defined as the value/cost ratio.

131 Data Envelopment Analysis (DEA) For our example,
e1 = (9t1 + 4t2 + 16t3) / (5w1 + 14w2)
e2 = (5t1 + 7t2 + 10t3) / (8w1 + 15w2)
e3 = (4t1 + 9t2 + 13t3) / (7w1 + 12w2)

132 Data Envelopment Analysis (DEA) We also use the following assumptions:
- No hospital can be more than 100% efficient.
- An efficiency value of 1 means the corresponding hospital is efficient.
- We can scale the output values.
- Each input cost and output value must be strictly positive.

133 Data Envelopment Analysis (DEA) For Hospital 1,
max z1 = 9t1 + 4t2 + 16t3
-9t1 - 4t2 - 16t3 + 5w1 + 14w2 ≥ 0
-5t1 - 7t2 - 10t3 + 8w1 + 15w2 ≥ 0
-4t1 - 9t2 - 13t3 + 7w1 + 12w2 ≥ 0
5w1 + 14w2 = 1
t1, t2, t3, w1, w2 ≥ 0.0001

134 Data Envelopment Analysis (DEA) For Hospital 2,
max z2 = 5t1 + 7t2 + 10t3
-9t1 - 4t2 - 16t3 + 5w1 + 14w2 ≥ 0
-5t1 - 7t2 - 10t3 + 8w1 + 15w2 ≥ 0
-4t1 - 9t2 - 13t3 + 7w1 + 12w2 ≥ 0
8w1 + 15w2 = 1
t1, t2, t3, w1, w2 ≥ 0.0001

135 Data Envelopment Analysis (DEA) For Hospital 3,
max z3 = 4t1 + 9t2 + 13t3
-9t1 - 4t2 - 16t3 + 5w1 + 14w2 ≥ 0
-5t1 - 7t2 - 10t3 + 8w1 + 15w2 ≥ 0
-4t1 - 9t2 - 13t3 + 7w1 + 12w2 ≥ 0
7w1 + 12w2 = 1
t1, t2, t3, w1, w2 ≥ 0.0001

136 Data Envelopment Analysis (DEA) By solving these 3 LPs, we find that z1 = 1, z2 = 0.773 and z3 = 1 (verify it), meaning that the efficiencies of the hospitals are 100%, 77.3% and 100%, respectively. Interpret the solutions!
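
A scipy sketch (illustrative; the array layout and the 1e-4 lower bound mirror the formulation above) that solves the three DEA LPs in a loop:

import numpy as np
from scipy.optimize import linprog

outputs = np.array([[9, 4, 16], [5, 7, 10], [4, 9, 13]], dtype=float)
inputs  = np.array([[5, 14], [8, 15], [7, 12]], dtype=float)

# value of hospital j's outputs minus cost of its inputs must be <= 0
A_ub = np.hstack([outputs, -inputs])
b_ub = np.zeros(3)

for i in range(3):
    c = -np.concatenate([outputs[i], [0, 0]])        # maximize hospital i's output value
    A_eq = [np.concatenate([[0, 0, 0], inputs[i]])]  # cost of hospital i's inputs = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1],
                  bounds=[(1e-4, None)] * 5, method="highs")
    print(f"hospital {i + 1}: efficiency = {-res.fun:.3f}")   # per the slides: about 1, 0.773, 1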

137 To be continued


More information

56:171 Operations Research Fall 1998

56:171 Operations Research Fall 1998 56:171 Operations Research Fall 1998 Quiz Solutions D.L.Bricker Dept of Mechanical & Industrial Engineering University of Iowa 56:171 Operations Research Quiz

More information

Linear Programming: The Simplex Method

Linear Programming: The Simplex Method 7206 CH09 GGS /0/05 :5 PM Page 09 9 C H A P T E R Linear Programming: The Simplex Method TEACHING SUGGESTIONS Teaching Suggestion 9.: Meaning of Slack Variables. Slack variables have an important physical

More information

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,

More information

Example Problem. Linear Program (standard form) CSCI5654 (Linear Programming, Fall 2013) Lecture-7. Duality

Example Problem. Linear Program (standard form) CSCI5654 (Linear Programming, Fall 2013) Lecture-7. Duality CSCI5654 (Linear Programming, Fall 013) Lecture-7 Duality Lecture 7 Slide# 1 Lecture 7 Slide# Linear Program (standard form) Example Problem maximize c 1 x 1 + + c n x n s.t. a j1 x 1 + + a jn x n b j

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information

Fundamentals of Operations Research. Prof. G. Srinivasan. Indian Institute of Technology Madras. Lecture No. # 15

Fundamentals of Operations Research. Prof. G. Srinivasan. Indian Institute of Technology Madras. Lecture No. # 15 Fundamentals of Operations Research Prof. G. Srinivasan Indian Institute of Technology Madras Lecture No. # 15 Transportation Problem - Other Issues Assignment Problem - Introduction In the last lecture

More information

The Strong Duality Theorem 1

The Strong Duality Theorem 1 1/39 The Strong Duality Theorem 1 Adrian Vetta 1 This presentation is based upon the book Linear Programming by Vasek Chvatal 2/39 Part I Weak Duality 3/39 Primal and Dual Recall we have a primal linear

More information

Simplex Method for LP (II)

Simplex Method for LP (II) Simplex Method for LP (II) Xiaoxi Li Wuhan University Sept. 27, 2017 (week 4) Operations Research (Li, X.) Simplex Method for LP (II) Sept. 27, 2017 (week 4) 1 / 31 Organization of this lecture Contents:

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

Dr. Maddah ENMG 500 Engineering Management I 10/21/07

Dr. Maddah ENMG 500 Engineering Management I 10/21/07 Dr. Maddah ENMG 500 Engineering Management I 10/21/07 Computational Procedure of the Simplex Method The optimal solution of a general LP problem is obtained in the following steps: Step 1. Express the

More information

6.2: The Simplex Method: Maximization (with problem constraints of the form )

6.2: The Simplex Method: Maximization (with problem constraints of the form ) 6.2: The Simplex Method: Maximization (with problem constraints of the form ) 6.2.1 The graphical method works well for solving optimization problems with only two decision variables and relatively few

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1

More information

AM 121: Intro to Optimization

AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

More information

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20.

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20. Extra Problems for Chapter 3. Linear Programming Methods 20. (Big-M Method) An alternative to the two-phase method of finding an initial basic feasible solution by minimizing the sum of the artificial

More information

Linear Programming. H. R. Alvarez A., Ph. D. 1

Linear Programming. H. R. Alvarez A., Ph. D. 1 Linear Programming H. R. Alvarez A., Ph. D. 1 Introduction It is a mathematical technique that allows the selection of the best course of action defining a program of feasible actions. The objective of

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

Notes: Deterministic Models in Operations Research

Notes: Deterministic Models in Operations Research Notes: Deterministic Models in Operations Research J.C. Chrispell Department of Mathematics Indiana University of Pennsylvania Indiana, PA, 15705, USA E-mail: john.chrispell@iup.edu http://www.math.iup.edu/~jchrispe

More information

Linear and Combinatorial Optimization

Linear and Combinatorial Optimization Linear and Combinatorial Optimization The dual of an LP-problem. Connections between primal and dual. Duality theorems and complementary slack. Philipp Birken (Ctr. for the Math. Sc.) Lecture 3: Duality

More information

LINEAR PROGRAMMING. Introduction

LINEAR PROGRAMMING. Introduction LINEAR PROGRAMMING Introduction Development of linear programming was among the most important scientific advances of mid-20th cent. Most common type of applications: allocate limited resources to competing

More information

MATH3017: Mathematical Programming

MATH3017: Mathematical Programming MATH3017: Mathematical Programming Nothing happens in the universe that does not have a sense of either certain maximum or minimum Leonhard Euler, Swiss Mathematician and Physicist, 1707 1783 About the

More information

MS-E2140. Lecture 1. (course book chapters )

MS-E2140. Lecture 1. (course book chapters ) Linear Programming MS-E2140 Motivations and background Lecture 1 (course book chapters 1.1-1.4) Linear programming problems and examples Problem manipulations and standard form problems Graphical representation

More information

Lecture 5. x 1,x 2,x 3 0 (1)

Lecture 5. x 1,x 2,x 3 0 (1) Computational Intractability Revised 2011/6/6 Lecture 5 Professor: David Avis Scribe:Ma Jiangbo, Atsuki Nagao 1 Duality The purpose of this lecture is to introduce duality, which is an important concept

More information